diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/html/header.html b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/html/header.html new file mode 100644 index 0000000000000000000000000000000000000000..c308b2d88943b130bf5333e6a2d859d3aed4f48d --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/html/header.html @@ -0,0 +1,149 @@ + + + + + + ZooKeeper: Because Coordinating Distributed Systems is a Zoo + + + + + + + + + + +
+
+ Apache > ZooKeeper +
+
+ + + +
+
+
+
+
+ +
+
+   +
+ +
diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/images/state_dia.dia b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/images/state_dia.dia
new file mode 100644
index 0000000000000000000000000000000000000000..4a58a00854e382b4096a87e1c7d18137cf009898
Binary files /dev/null and b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/images/state_dia.dia differ
diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/javaExample.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/javaExample.md
new file mode 100644
index 0000000000000000000000000000000000000000..a94b083d241d155135054c26085c9c1ea5071ef5
--- /dev/null
+++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/javaExample.md
@@ -0,0 +1,628 @@
+
+
+# ZooKeeper Java Example
+
+* [A Simple Watch Client](#ch_Introduction)
+    * [Requirements](#sc_requirements)
+    * [Program Design](#sc_design)
+* [The Executor Class](#sc_executor)
+* [The DataMonitor Class](#sc_DataMonitor)
+* [Complete Source Listings](#sc_completeSourceCode)
+
+
+
+## A Simple Watch Client
+
+To introduce you to the ZooKeeper Java API, we develop here a very simple
+watch client. This ZooKeeper client watches a znode for changes
+and responds by starting or stopping a program.
+
+
+
+### Requirements
+
+The client has four requirements:
+
+* It takes as parameters:
+    * the address of the ZooKeeper service
+    * the name of a znode - the one to be watched
+    * the name of a file to write the output to
+    * an executable with arguments.
+* It fetches the data associated with the znode and starts the executable.
+* If the znode changes, the client re-fetches the contents and restarts the executable.
+* If the znode disappears, the client kills the executable.
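The last three requirements amount to a start/stop lifecycle for the child process. The sketch below illustrates that lifecycle in isolation, with no ZooKeeper involved — the helper name `onData` is invented for this illustration, it assumes a Unix-like system, and `sleep` stands in for the user-supplied executable:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChildLifecycle {
    static Process child;

    // Invoked with the latest znode data, or null once the znode is gone.
    static void onData(byte[] data, Path outputFile, String... exec)
            throws IOException, InterruptedException {
        if (child != null) {        // stop any previously started instance
            child.destroy();
            child.waitFor();
            child = null;
        }
        if (data != null) {
            Files.write(outputFile, data);             // requirement: save the data to a file
            child = new ProcessBuilder(exec).start();  // requirement: (re)start the executable
        }
    }

    public static void main(String[] args) throws Exception {
        Path out = Files.createTempFile("znode", ".dat");
        onData("v1".getBytes(), out, "sleep", "60");   // znode created: child starts
        System.out.println("running=" + child.isAlive());
        onData(null, out, "sleep", "60");              // znode deleted: child is killed
        System.out.println("running=" + (child != null));
    }
}
```

In the real client these calls are driven by watch events rather than invoked directly, as the following sections show.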
+
+
+
+### Program Design
+
+Conventionally, ZooKeeper applications are broken into two units, one which maintains the connection,
+and the other which monitors data. In this application, the class called the **Executor**
+maintains the ZooKeeper connection, and the class called the **DataMonitor** monitors the data
+in the ZooKeeper tree. The Executor also contains the main thread and the execution logic.
+It is responsible for what little user interaction there is, as well as for interacting with the
+executable program you pass in as an argument, which the sample (per the requirements) shuts
+down and restarts according to the state of the znode.
+
+
+
+## The Executor Class
+
+The Executor object is the primary container of the sample application. It contains
+both the **ZooKeeper** object and the **DataMonitor** object, as described above in
+[Program Design](#sc_design).
+
+
+    // from the Executor class...
+
+    public static void main(String[] args) {
+        if (args.length < 4) {
+            System.err
+                    .println("USAGE: Executor hostPort znode filename program [args ...]");
+            System.exit(2);
+        }
+        String hostPort = args[0];
+        String znode = args[1];
+        String filename = args[2];
+        String exec[] = new String[args.length - 3];
+        System.arraycopy(args, 3, exec, 0, exec.length);
+        try {
+            new Executor(hostPort, znode, filename, exec).run();
+        } catch (Exception e) {
+            e.printStackTrace();
+        }
+    }
+
+    public Executor(String hostPort, String znode, String filename,
+            String exec[]) throws KeeperException, IOException {
+        this.filename = filename;
+        this.exec = exec;
+        zk = new ZooKeeper(hostPort, 3000, this);
+        dm = new DataMonitor(zk, znode, null, this);
+    }
+
+    public void run() {
+        try {
+            synchronized (this) {
+                while (!dm.dead) {
+                    wait();
+                }
+            }
+        } catch (InterruptedException e) {
+        }
+    }
+
+
+Recall that the Executor's job is to start and stop the executable whose name you pass in on the command line.
+It does this in response to events fired by the ZooKeeper object.
As you can see in the code above, the Executor passes
+a reference to itself as the Watcher argument in the ZooKeeper constructor. It also passes a reference to itself
+as the DataMonitorListener argument to the DataMonitor constructor. Per the Executor's definition, it implements both these
+interfaces:
+
+    public class Executor implements Watcher, Runnable, DataMonitor.DataMonitorListener {
+    ...
+
+
+The **Watcher** interface is defined by the ZooKeeper Java API.
+ZooKeeper uses it to communicate back to its container. It supports only one method, `process()`,
+which ZooKeeper uses to communicate generic events that the main thread would be interested in,
+such as the state of the ZooKeeper connection or the ZooKeeper session. The Executor in this example simply
+forwards those events down to the DataMonitor to decide what to do with them. It does this simply to illustrate
+the point that, by convention, the Executor or some Executor-like object "owns" the ZooKeeper connection, but it is
+free to delegate the events to other objects. It also uses this as the default channel on which
+to fire watch events. (More on this later.)
+
+
+    public void process(WatchedEvent event) {
+        dm.process(event);
+    }
+
+
+The **DataMonitorListener**
+interface, on the other hand, is not part of the ZooKeeper API. It is a completely custom interface,
+designed for this sample application. The DataMonitor object uses it to communicate back to its container, which
+is also the Executor object. The DataMonitorListener interface looks like this:
+
+
+    public interface DataMonitorListener {
+        /**
+         * The existence status of the node has changed.
+         */
+        void exists(byte data[]);
+
+        /**
+         * The ZooKeeper session is no longer valid.
+         *
+         * @param rc
+         *            the ZooKeeper reason code
+         */
+        void closing(int rc);
+    }
+
+
+This interface is defined in the DataMonitor class and implemented in the Executor class.
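The division of labor between the generic Watcher channel and the custom listener channel can be illustrated without a running ZooKeeper ensemble. The sketch below uses invented names (`Listener`, `Monitor`, `ListenerDemo`) and hand-fired calls in place of real watch notifications; the `rc` value is an arbitrary code chosen for the illustration:

```java
// A custom listener interface, analogous to DataMonitorListener.
interface Listener {
    void exists(byte[] data);   // data channel: znode state changed
    void closing(int rc);       // session channel: connection is gone for good
}

// Stands in for the monitoring object that owns the callbacks.
class Monitor {
    private final Listener listener;
    Monitor(Listener listener) { this.listener = listener; }
    void simulateDataChange(byte[] d) { listener.exists(d); }  // as if a watch fired
    void simulateExpiry() { listener.closing(-112); }          // as if the session expired
}

// The container implements the listener, just as Executor does.
public class ListenerDemo implements Listener {
    public void exists(byte[] data) {
        System.out.println("exists: " + (data == null ? "gone" : new String(data)));
    }
    public void closing(int rc) {
        System.out.println("closing rc=" + rc);
    }
    public static void main(String[] args) {
        Monitor m = new Monitor(new ListenerDemo());
        m.simulateDataChange("v1".getBytes());
        m.simulateDataChange(null);
        m.simulateExpiry();
    }
}
```

The point of the indirection is that the monitor never needs to know what "start the child" or "shut down" means; it only reports state changes through the interface.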
When `Executor.exists()` is invoked, the Executor decides whether to start up or shut down per the requirements.
+Recall that the requirements say to kill the executable when the znode ceases to _exist_.
+
+When `Executor.closing()` is invoked, the Executor decides whether or not to shut itself down
+in response to the ZooKeeper connection permanently disappearing.
+
+As you might have guessed, DataMonitor is the object that invokes
+these methods, in response to changes in ZooKeeper's state.
+
+Here are the Executor's implementations of
+`DataMonitorListener.exists()` and `DataMonitorListener.closing()`:
+
+
+    public void exists(byte[] data) {
+        if (data == null) {
+            if (child != null) {
+                System.out.println("Killing process");
+                child.destroy();
+                try {
+                    child.waitFor();
+                } catch (InterruptedException e) {
+                }
+            }
+            child = null;
+        } else {
+            if (child != null) {
+                System.out.println("Stopping child");
+                child.destroy();
+                try {
+                    child.waitFor();
+                } catch (InterruptedException e) {
+                    e.printStackTrace();
+                }
+            }
+            try {
+                FileOutputStream fos = new FileOutputStream(filename);
+                fos.write(data);
+                fos.close();
+            } catch (IOException e) {
+                e.printStackTrace();
+            }
+            try {
+                System.out.println("Starting child");
+                child = Runtime.getRuntime().exec(exec);
+                new StreamWriter(child.getInputStream(), System.out);
+                new StreamWriter(child.getErrorStream(), System.err);
+            } catch (IOException e) {
+                e.printStackTrace();
+            }
+        }
+    }
+
+    public void closing(int rc) {
+        synchronized (this) {
+            notifyAll();
+        }
+    }
+
+
+
+
+## The DataMonitor Class
+
+The DataMonitor class has the meat of the ZooKeeper logic. It is mostly
+asynchronous and event driven. DataMonitor kicks things off in the constructor with:
+
+
+    public DataMonitor(ZooKeeper zk, String znode, Watcher chainedWatcher,
+            DataMonitorListener listener) {
+        this.zk = zk;
+        this.znode = znode;
+        this.chainedWatcher = chainedWatcher;
+        this.listener = listener;
+
+        // Get things started by checking if the node exists.
We are going
+        // to be completely event driven
+        zk.exists(znode, true, this, null);
+    }
+
+
+The call to `ZooKeeper.exists()` checks for the existence of the znode,
+sets a watch, and passes a reference to itself (`this`)
+as the completion callback object. In this sense, it kicks things off, since the
+real processing happens when the watch is triggered.
+
+###### Note
+
+>Don't confuse the completion callback with the watch callback. The `ZooKeeper.exists()`
+completion callback, which happens to be the method `StatCallback.processResult()` implemented
+in the DataMonitor object, is invoked when the asynchronous _setting of the watch_ operation
+(by `ZooKeeper.exists()`) completes on the server.
+
+>The triggering of the watch, on the other hand, sends an event to the _Executor_ object, since
+the Executor registered as the Watcher of the ZooKeeper object.
+
+>As an aside, you might note that the DataMonitor could also register itself as the Watcher
+for this particular watch event. This is new to ZooKeeper 3.0.0 (the support of multiple Watchers). In this
+example, however, DataMonitor does not register as the Watcher.
+
+When the `ZooKeeper.exists()` operation completes on the server, the ZooKeeper API invokes this completion callback on
+the client:
+
+
+    public void processResult(int rc, String path, Object ctx, Stat stat) {
+        boolean exists;
+        switch (rc) {
+        case Code.Ok:
+            exists = true;
+            break;
+        case Code.NoNode:
+            exists = false;
+            break;
+        case Code.SessionExpired:
+        case Code.NoAuth:
+            dead = true;
+            listener.closing(rc);
+            return;
+        default:
+            // Retry errors
+            zk.exists(znode, true, this, null);
+            return;
+        }
+
+        byte b[] = null;
+        if (exists) {
+            try {
+                b = zk.getData(znode, false, null);
+            } catch (KeeperException e) {
+                // We don't need to worry about recovering now.
The watch
+                // callbacks will kick off any exception handling
+                e.printStackTrace();
+            } catch (InterruptedException e) {
+                return;
+            }
+        }
+        if ((b == null && b != prevData)
+                || (b != null && !Arrays.equals(prevData, b))) {
+            listener.exists(b);
+            prevData = b;
+        }
+    }
+
+
+The code first checks the error codes for znode existence, fatal errors, and
+recoverable errors. If the file (or znode) exists, it gets the data from the znode, and
+then invokes the exists() callback of the Executor if the state has changed. Note that
+it doesn't have to do any exception handling for the getData call because it
+has watches pending for anything that could cause an error: if the node is deleted
+before it calls `ZooKeeper.getData()`, the watch event set by
+the `ZooKeeper.exists()` triggers a callback;
+if there is a communication error, a connection watch event fires when
+the connection comes back up.
+
+Finally, notice how DataMonitor processes watch events:
+
+
+    public void process(WatchedEvent event) {
+        String path = event.getPath();
+        if (event.getType() == Event.EventType.None) {
+            // We are being told that the state of the
+            // connection has changed
+            switch (event.getState()) {
+            case SyncConnected:
+                // In this particular example we don't need to do anything
+                // here - watches are automatically re-registered with
+                // server and any watches triggered while the client was
+                // disconnected will be delivered (in order of course)
+                break;
+            case Expired:
+                // It's all over
+                dead = true;
+                listener.closing(KeeperException.Code.SessionExpired);
+                break;
+            }
+        } else {
+            if (path != null && path.equals(znode)) {
+                // Something has changed on the node, let's find out
+                zk.exists(znode, true, this, null);
+            }
+        }
+        if (chainedWatcher != null) {
+            chainedWatcher.process(event);
+        }
+    }
+
+
+If the client-side ZooKeeper libraries can re-establish the
+communication channel (SyncConnected event) to ZooKeeper before
+session expiration (Expired event), all of the session's
watches will
+automatically be re-established with the server (auto-reset of watches
+is new in ZooKeeper 3.0.0). See [ZooKeeper Watches](zookeeperProgrammers.html#ch_zkWatches)
+in the programmer guide for more on this. A bit lower down in this
+function, when DataMonitor gets an event for a znode, it calls `ZooKeeper.exists()` to find out what has changed.
+
+
+
+## Complete Source Listings
+
+### Executor.java
+
+
+    /**
+     * A simple example program to use DataMonitor to start and
+     * stop executables based on a znode. The program watches the
+     * specified znode and saves the data that corresponds to the
+     * znode in the filesystem. It also starts the specified program
+     * with the specified arguments when the znode exists and kills
+     * the program if the znode goes away.
+     */
+    import java.io.FileOutputStream;
+    import java.io.IOException;
+    import java.io.InputStream;
+    import java.io.OutputStream;
+
+    import org.apache.zookeeper.KeeperException;
+    import org.apache.zookeeper.WatchedEvent;
+    import org.apache.zookeeper.Watcher;
+    import org.apache.zookeeper.ZooKeeper;
+
+    public class Executor
+        implements Watcher, Runnable, DataMonitor.DataMonitorListener
+    {
+        String znode;
+        DataMonitor dm;
+        ZooKeeper zk;
+        String filename;
+        String exec[];
+        Process child;
+
+        public Executor(String hostPort, String znode, String filename,
+                String exec[]) throws KeeperException, IOException {
+            this.filename = filename;
+            this.exec = exec;
+            zk = new ZooKeeper(hostPort, 3000, this);
+            dm = new DataMonitor(zk, znode, null, this);
+        }
+
+        /**
+         * @param args
+         */
+        public static void main(String[] args) {
+            if (args.length < 4) {
+                System.err
+                        .println("USAGE: Executor hostPort znode filename program [args ...]");
+                System.exit(2);
+            }
+            String hostPort = args[0];
+            String znode = args[1];
+            String filename = args[2];
+            String exec[] = new String[args.length - 3];
+            System.arraycopy(args, 3, exec, 0, exec.length);
+            try {
+                new Executor(hostPort, znode, filename,
exec).run();
+            } catch (Exception e) {
+                e.printStackTrace();
+            }
+        }
+
+        /***************************************************************************
+         * We do not process any events ourselves, we just need to forward them on.
+         *
+         * @see org.apache.zookeeper.Watcher#process(org.apache.zookeeper.proto.WatcherEvent)
+         */
+        public void process(WatchedEvent event) {
+            dm.process(event);
+        }
+
+        public void run() {
+            try {
+                synchronized (this) {
+                    while (!dm.dead) {
+                        wait();
+                    }
+                }
+            } catch (InterruptedException e) {
+            }
+        }
+
+        public void closing(int rc) {
+            synchronized (this) {
+                notifyAll();
+            }
+        }
+
+        static class StreamWriter extends Thread {
+            OutputStream os;
+
+            InputStream is;
+
+            StreamWriter(InputStream is, OutputStream os) {
+                this.is = is;
+                this.os = os;
+                start();
+            }
+
+            public void run() {
+                byte b[] = new byte[80];
+                int rc;
+                try {
+                    while ((rc = is.read(b)) > 0) {
+                        os.write(b, 0, rc);
+                    }
+                } catch (IOException e) {
+                }
+            }
+        }
+
+        public void exists(byte[] data) {
+            if (data == null) {
+                if (child != null) {
+                    System.out.println("Killing process");
+                    child.destroy();
+                    try {
+                        child.waitFor();
+                    } catch (InterruptedException e) {
+                    }
+                }
+                child = null;
+            } else {
+                if (child != null) {
+                    System.out.println("Stopping child");
+                    child.destroy();
+                    try {
+                        child.waitFor();
+                    } catch (InterruptedException e) {
+                        e.printStackTrace();
+                    }
+                }
+                try {
+                    FileOutputStream fos = new FileOutputStream(filename);
+                    fos.write(data);
+                    fos.close();
+                } catch (IOException e) {
+                    e.printStackTrace();
+                }
+                try {
+                    System.out.println("Starting child");
+                    child = Runtime.getRuntime().exec(exec);
+                    new StreamWriter(child.getInputStream(), System.out);
+                    new StreamWriter(child.getErrorStream(), System.err);
+                } catch (IOException e) {
+                    e.printStackTrace();
+                }
+            }
+        }
+    }
+
+
+### DataMonitor.java
+
+
+    /**
+     * A simple class that monitors the data and existence of a ZooKeeper
+     * node. It uses asynchronous ZooKeeper APIs.
+ */ + import java.util.Arrays; + + import org.apache.zookeeper.KeeperException; + import org.apache.zookeeper.WatchedEvent; + import org.apache.zookeeper.Watcher; + import org.apache.zookeeper.ZooKeeper; + import org.apache.zookeeper.AsyncCallback.StatCallback; + import org.apache.zookeeper.KeeperException.Code; + import org.apache.zookeeper.data.Stat; + + public class DataMonitor implements Watcher, StatCallback { + + ZooKeeper zk; + String znode; + Watcher chainedWatcher; + boolean dead; + DataMonitorListener listener; + byte prevData[]; + + public DataMonitor(ZooKeeper zk, String znode, Watcher chainedWatcher, + DataMonitorListener listener) { + this.zk = zk; + this.znode = znode; + this.chainedWatcher = chainedWatcher; + this.listener = listener; + // Get things started by checking if the node exists. We are going + // to be completely event driven + zk.exists(znode, true, this, null); + } + + /** + * Other classes use the DataMonitor by implementing this method + */ + public interface DataMonitorListener { + /** + * The existence status of the node has changed. + */ + void exists(byte data[]); + + /** + * The ZooKeeper session is no longer valid. 
+ * + * @param rc + * the ZooKeeper reason code + */ + void closing(int rc); + } + + public void process(WatchedEvent event) { + String path = event.getPath(); + if (event.getType() == Event.EventType.None) { + // We are are being told that the state of the + // connection has changed + switch (event.getState()) { + case SyncConnected: + // In this particular example we don't need to do anything + // here - watches are automatically re-registered with + // server and any watches triggered while the client was + // disconnected will be delivered (in order of course) + break; + case Expired: + // It's all over + dead = true; + listener.closing(KeeperException.Code.SessionExpired); + break; + } + } else { + if (path != null && path.equals(znode)) { + // Something has changed on the node, let's find out + zk.exists(znode, true, this, null); + } + } + if (chainedWatcher != null) { + chainedWatcher.process(event); + } + } + + public void processResult(int rc, String path, Object ctx, Stat stat) { + boolean exists; + switch (rc) { + case Code.Ok: + exists = true; + break; + case Code.NoNode: + exists = false; + break; + case Code.SessionExpired: + case Code.NoAuth: + dead = true; + listener.closing(rc); + return; + default: + // Retry errors + zk.exists(znode, true, this, null); + return; + } + + byte b[] = null; + if (exists) { + try { + b = zk.getData(znode, false, null); + } catch (KeeperException e) { + // We don't need to worry about recovering now. 
The watch + // callbacks will kick off any exception handling + e.printStackTrace(); + } catch (InterruptedException e) { + return; + } + } + if ((b == null && b != prevData) + || (b != null && !Arrays.equals(prevData, b))) { + listener.exists(b); + prevData = b; + } + } + } + diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/releasenotes.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/releasenotes.md new file mode 100644 index 0000000000000000000000000000000000000000..a27b6ffdd12640e449ede8d2c90417e96eef28a8 --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/releasenotes.md @@ -0,0 +1,267 @@ + + +# ZooKeeper 3.0.0 Release Notes + +* [Migration Instructions when Upgrading to 3.0.0](#migration) + * [Migrating Client Code](#migration_code) + * [Watch Management](#Watch+Management) + * [Java API](#Java+API) + * [C API](#C+API) + * [Migrating Server Data](#migration_data) + * [Migrating Server Configuration](#migration_config) +* [Changes Since ZooKeeper 2.2.1](#changes) + +These release notes include new developer and user facing incompatibilities, features, and major improvements. + +* [Migration Instructions](#migration) +* [Changes](#changes) + + +## Migration Instructions when Upgrading to 3.0.0 +
+
+*You should only have to read this section if you are upgrading from a previous version of ZooKeeper to version 3.0.0; otherwise, skip down to [changes](#changes).*
+
+A small number of changes in this release are not backward compatible, affecting ZooKeeper client user code and server instance data. The following instructions provide details on how to migrate code and data from version 2.2.1 to version 3.0.0.
+
+Note: ZooKeeper increments the major version number (major.minor.fix) when backward incompatible changes are made to the source base. As part of the migration from SourceForge we changed the package structure (com.yahoo.zookeeper.* to org.apache.zookeeper.*) and felt it was a good time to incorporate some changes that we had been withholding. As a result the following will be required when migrating from version 2.2.1 to version 3.0.0 of ZooKeeper.
+
+* [Migrating Client Code](#migration_code)
+* [Migrating Server Data](#migration_data)
+* [Migrating Server Configuration](#migration_config)
+
+
+### Migrating Client Code
+
+The underlying client-server protocol has changed in version 3.0.0
+of ZooKeeper. As a result, clients must be upgraded along with
+serving clusters to ensure proper operation of the system (old
+pre-3.0.0 clients are not guaranteed to operate against upgraded
+3.0.0 servers, and vice-versa).
+
+
+#### Watch Management
+
+In previous releases of ZooKeeper any watches registered by clients were lost if the client lost a connection to a ZooKeeper server.
+This meant that developers had to track the watches they were interested in and reregister them if a session disconnect event was received.
+In this release the client library tracks the watches that a client has registered and reregisters them when a connection is made to a new server.
+Applications that still manually reregister interest should continue working properly as long as they are able to handle unsolicited watches.
For example, an old application may register a watch for /foo and /goo, lose the connection, and reregister only /goo.
+As long as the application is able to receive a notification for /foo (and probably ignore it), it does not need to be changed.
+One caveat to the watch management: it is possible to miss an event for the creation and deletion of a znode if the client is watching for creation and both the create and the delete happen while the client is disconnected from ZooKeeper.
+
+This release also allows clients to specify call-specific watch functions.
+This gives the developer the ability to modularize logic in different watch functions rather than cramming everything into the watch function attached to the ZooKeeper handle.
+Call-specific watch functions receive all session events for as long as they are active, but only receive the watch callbacks for which they are registered.
+
+
+#### Java API
+
+1. The Java package structure has changed from **com.yahoo.zookeeper*** to **org.apache.zookeeper***. This will probably affect all of your Java code which makes use of ZooKeeper APIs (typically import statements)
+1. A number of constants used in the client ZooKeeper API were re-specified using enums (rather than ints). See [ZOOKEEPER-7](https://issues.apache.org/jira/browse/ZOOKEEPER-7), [ZOOKEEPER-132](https://issues.apache.org/jira/browse/ZOOKEEPER-132) and [ZOOKEEPER-139](https://issues.apache.org/jira/browse/ZOOKEEPER-139) for full details
+1. [ZOOKEEPER-18](https://issues.apache.org/jira/browse/ZOOKEEPER-18) removed KeeperStateChanged; use KeeperStateDisconnected instead
+
+Also see [the current Java API](http://zookeeper.apache.org/docs/current/apidocs/zookeeper-server/index.html)
+
+
+#### C API
+
+1.
A number of constants used in the client ZooKeeper API were renamed in order to reduce namespace collisions; see [ZOOKEEPER-6](https://issues.apache.org/jira/browse/ZOOKEEPER-6) for full details
+
+
+### Migrating Server Data
+The following issues resulted in changes to the on-disk data format (the snapshot and transaction log files contained within the ZK data directory) and require a migration utility to be run.
+
+* [ZOOKEEPER-27 Unique DB identifiers for servers and clients](https://issues.apache.org/jira/browse/ZOOKEEPER-27)
+* [ZOOKEEPER-32 CRCs for ZooKeeper data](https://issues.apache.org/jira/browse/ZOOKEEPER-32)
+* [ZOOKEEPER-33 Better ACL management](https://issues.apache.org/jira/browse/ZOOKEEPER-33)
+* [ZOOKEEPER-38 headers (version+) in log/snap files](https://issues.apache.org/jira/browse/ZOOKEEPER-38)
+
+**The following must be run once, and only once, when upgrading the ZooKeeper server instances to version 3.0.0.**
+
+###### Note
+> The `<dataLogDir>` and `<dataDir>` directories referenced below are specified by the *dataLogDir*
+ and *dataDir* settings in your ZooKeeper config file, respectively. *dataLogDir* defaults to
+ the value of *dataDir* if not specified explicitly in the ZooKeeper server config file (in which
+ case provide the same directory for both parameters to the upgrade utility).
+
+1. Shut down the ZooKeeper server cluster.
+1. Backup your `<dataLogDir>` and `<dataDir>` directories
+1. Run upgrade using
+    * `bin/zkServer.sh upgrade <dataLogDir> <dataDir>`
+
+    or
+
+    * `java -classpath pathtolog4j:pathtozookeeper.jar UpgradeMain <dataLogDir> <dataDir>`
+
+    where `<dataLogDir>` is the directory where all transaction logs (log.*) are stored and `<dataDir>` is the directory where all the snapshots (snapshot.*) are stored.
+1. Restart the cluster.
+
+If you have any failure during the upgrade procedure, keep reading to learn how to sanitize your database.
+
+This is how upgrade works in ZooKeeper; it will help you troubleshoot in case you have problems while upgrading:
+
+1.
Upgrade moves the files from `<dataDir>` and `<dataLogDir>` to `<dataDir>/version-1` and `<dataLogDir>/version-1` respectively (the version-1 sub-directory is created by the upgrade utility).
+1. Upgrade creates new version sub-directories `<dataDir>/version-2` and `<dataLogDir>/version-2`
+1. Upgrade reads the old database from `<dataDir>/version-1` and `<dataLogDir>/version-1` into memory and creates a new upgraded snapshot.
+1. Upgrade writes the new database in `<dataDir>/version-2`.
+
+Troubleshooting:
+
+
+1. In case you start ZooKeeper 3.0 on a 2.0 database without upgrading, the servers will start up with an empty database.
+ This is because the servers assume that `<dataDir>/version-2` and `<dataLogDir>/version-2` will have the database to start with. Since these will be empty
+ when no upgrade has been run, the servers will start with an empty database. In such a case, shut down the ZooKeeper servers, remove the version-2 directories (remember,
+ this will lead to loss of updates made after you started 3.0)
+ and then start the upgrade procedure.
+1. If the upgrade fails while trying to rename files into the version-1 directory, you should try to move all the files under `<dataDir>/version-1`
+ and `<dataLogDir>/version-1` to `<dataDir>` and `<dataLogDir>` respectively. Then try upgrade again.
+1. If you do not wish to run with ZooKeeper 3.0 and prefer to run with ZooKeeper 2.0 and have already upgraded, you can run ZooKeeper 2 with
+ the `<dataDir>` and `<dataLogDir>` directories changed to `<dataDir>/version-1` and `<dataLogDir>/version-1`. Remember that you will lose all the updates that you made after the upgrade.
+
+
+### Migrating Server Configuration
+
+There is a significant change to the ZooKeeper server configuration file.
+
+The default election algorithm, specified by the *electionAlg* configuration attribute, has
+changed from a default of *0* to a default of *3*. See the
+[Cluster Options](zookeeperAdmin.html#sc_clusterOptions) section of the administrator's guide, specifically
+the *electionAlg* and *server.X* properties.
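Concretely, the change shows up in each `server.X` entry of `zoo.cfg`; the hostname and ports below are illustrative, not defaults:

```
# 2.2.1-era entry: quorum port only (electionAlg defaulted to 0)
server.1=zoo1:2888

# 3.0.0 entry: quorum port plus a leader election port (electionAlg defaults to 3)
server.1=zoo1:2888:3888
```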
You will either need to explicitly set *electionAlg* to its previous default value
+of *0* or change your *server.X* options to include the leader election port.
+
+
+
+## Changes Since ZooKeeper 2.2.1
+
+Version 2.2.1 code, documentation, binaries, etc. are still accessible on [SourceForge](http://sourceforge.net/projects/zookeeper)
+
+| Issue | Notes |
+|-------|-------|
+|[ZOOKEEPER-43](https://issues.apache.org/jira/browse/ZOOKEEPER-43)|Server side of auto reset watches.|
+|[ZOOKEEPER-132](https://issues.apache.org/jira/browse/ZOOKEEPER-132)|Create Enum to replace CreateFlag in ZooKeeper.create method|
+|[ZOOKEEPER-139](https://issues.apache.org/jira/browse/ZOOKEEPER-139)|Create Enums for WatcherEvent's KeeperState and EventType|
+|[ZOOKEEPER-18](https://issues.apache.org/jira/browse/ZOOKEEPER-18)|keeper state inconsistency|
+|[ZOOKEEPER-38](https://issues.apache.org/jira/browse/ZOOKEEPER-38)|headers in log/snap files|
+|[ZOOKEEPER-8](https://issues.apache.org/jira/browse/ZOOKEEPER-8)|Stat enhanced to include num of children and size|
+|[ZOOKEEPER-6](https://issues.apache.org/jira/browse/ZOOKEEPER-6)|List of problem identifiers in zookeeper.h|
+|[ZOOKEEPER-7](https://issues.apache.org/jira/browse/ZOOKEEPER-7)|Use enums rather than ints for types and state|
+|[ZOOKEEPER-27](https://issues.apache.org/jira/browse/ZOOKEEPER-27)|Unique DB identifiers for servers and clients|
+|[ZOOKEEPER-32](https://issues.apache.org/jira/browse/ZOOKEEPER-32)|CRCs for ZooKeeper data|
+|[ZOOKEEPER-33](https://issues.apache.org/jira/browse/ZOOKEEPER-33)|Better ACL management|
+|[ZOOKEEPER-203](https://issues.apache.org/jira/browse/ZOOKEEPER-203)|fix datadir typo in releasenotes|
+|[ZOOKEEPER-145](https://issues.apache.org/jira/browse/ZOOKEEPER-145)|write detailed release notes for users migrating from 2.x to 3.0|
+|[ZOOKEEPER-23](https://issues.apache.org/jira/browse/ZOOKEEPER-23)|Auto reset of watches on reconnect|
+|[ZOOKEEPER-191](https://issues.apache.org/jira/browse/ZOOKEEPER-191)|forrest docs for upgrade.| +|[ZOOKEEPER-201](https://issues.apache.org/jira/browse/ZOOKEEPER-201)|validate magic number when reading snapshot and transaction logs| +|[ZOOKEEPER-200](https://issues.apache.org/jira/browse/ZOOKEEPER-200)|the magic number for snapshot and log must be different| +|[ZOOKEEPER-199](https://issues.apache.org/jira/browse/ZOOKEEPER-199)|fix log messages in persistence code| +|[ZOOKEEPER-197](https://issues.apache.org/jira/browse/ZOOKEEPER-197)|create checksums for snapshots| +|[ZOOKEEPER-198](https://issues.apache.org/jira/browse/ZOOKEEPER-198)|apache license header missing from FollowerSyncRequest.java| +|[ZOOKEEPER-5](https://issues.apache.org/jira/browse/ZOOKEEPER-5)|Upgrade Feature in Zookeeper server.| +|[ZOOKEEPER-194](https://issues.apache.org/jira/browse/ZOOKEEPER-194)|Fix terminology in zookeeperAdmin.xml| +|[ZOOKEEPER-151](https://issues.apache.org/jira/browse/ZOOKEEPER-151)|Document change to server configuration| +|[ZOOKEEPER-193](https://issues.apache.org/jira/browse/ZOOKEEPER-193)|update java example doc to compile with latest zookeeper| +|[ZOOKEEPER-187](https://issues.apache.org/jira/browse/ZOOKEEPER-187)|CreateMode api docs missing| +|[ZOOKEEPER-186](https://issues.apache.org/jira/browse/ZOOKEEPER-186)|add new "releasenotes.xml" to forrest documentation| +|[ZOOKEEPER-190](https://issues.apache.org/jira/browse/ZOOKEEPER-190)|Reorg links to docs and navs to docs into related sections| +|[ZOOKEEPER-189](https://issues.apache.org/jira/browse/ZOOKEEPER-189)|forrest build not validated xml of input documents| +|[ZOOKEEPER-188](https://issues.apache.org/jira/browse/ZOOKEEPER-188)|Check that election port is present for all servers| +|[ZOOKEEPER-185](https://issues.apache.org/jira/browse/ZOOKEEPER-185)|Improved version of FLETest| +|[ZOOKEEPER-184](https://issues.apache.org/jira/browse/ZOOKEEPER-184)|tests: An explicit include directive is needed for the usage of 
memcpy functions| +|[ZOOKEEPER-183](https://issues.apache.org/jira/browse/ZOOKEEPER-183)|Array subscript is above array bounds in od_completion, src/cli.c.| +|[ZOOKEEPER-182](https://issues.apache.org/jira/browse/ZOOKEEPER-182)|zookeeper_init accepts empty host-port string and returns valid pointer to zhandle_t.| +|[ZOOKEEPER-17](https://issues.apache.org/jira/browse/ZOOKEEPER-17)|zookeeper_init doc needs clarification| +|[ZOOKEEPER-181](https://issues.apache.org/jira/browse/ZOOKEEPER-181)|Some Source Forge Documents did not get moved over: javaExample, zookeeperTutorial, zookeeperInternals| +|[ZOOKEEPER-180](https://issues.apache.org/jira/browse/ZOOKEEPER-180)|Placeholder sections needed in document for new topics that the umbrella jira discusses| +|[ZOOKEEPER-179](https://issues.apache.org/jira/browse/ZOOKEEPER-179)|Programmer's Guide "Basic Operations" section is missing content| +|[ZOOKEEPER-178](https://issues.apache.org/jira/browse/ZOOKEEPER-178)|FLE test.| +|[ZOOKEEPER-159](https://issues.apache.org/jira/browse/ZOOKEEPER-159)|Cover two corner cases of leader election| +|[ZOOKEEPER-156](https://issues.apache.org/jira/browse/ZOOKEEPER-156)|update programmer guide with acl details from old wiki page| +|[ZOOKEEPER-154](https://issues.apache.org/jira/browse/ZOOKEEPER-154)|reliability graph diagram in overview doc needs context| +|[ZOOKEEPER-157](https://issues.apache.org/jira/browse/ZOOKEEPER-157)|Peer can't find existing leader| +|[ZOOKEEPER-155](https://issues.apache.org/jira/browse/ZOOKEEPER-155)|improve "the zookeeper project" section of overview doc| +|[ZOOKEEPER-140](https://issues.apache.org/jira/browse/ZOOKEEPER-140)|Deadlock in QuorumCnxManager| +|[ZOOKEEPER-147](https://issues.apache.org/jira/browse/ZOOKEEPER-147)|This is version of the documents with most of the [tbd...] 
scrubbed out| +|[ZOOKEEPER-150](https://issues.apache.org/jira/browse/ZOOKEEPER-150)|zookeeper build broken| +|[ZOOKEEPER-136](https://issues.apache.org/jira/browse/ZOOKEEPER-136)|sync causes hang in all followers of quorum.| +|[ZOOKEEPER-134](https://issues.apache.org/jira/browse/ZOOKEEPER-134)|findbugs cleanup| +|[ZOOKEEPER-133](https://issues.apache.org/jira/browse/ZOOKEEPER-133)|hudson tests failing intermittently| +|[ZOOKEEPER-144](https://issues.apache.org/jira/browse/ZOOKEEPER-144)|add tostring support for watcher event, and enums for event type/state| +|[ZOOKEEPER-21](https://issues.apache.org/jira/browse/ZOOKEEPER-21)|Improve zk ctor/watcher| +|[ZOOKEEPER-142](https://issues.apache.org/jira/browse/ZOOKEEPER-142)|Provide Javadoc as to the maximum size of the data byte array that may be stored within a znode| +|[ZOOKEEPER-93](https://issues.apache.org/jira/browse/ZOOKEEPER-93)|Create Documentation for Zookeeper| +|[ZOOKEEPER-117](https://issues.apache.org/jira/browse/ZOOKEEPER-117)|threading issues in Leader election| +|[ZOOKEEPER-137](https://issues.apache.org/jira/browse/ZOOKEEPER-137)|client watcher objects can lose events| +|[ZOOKEEPER-131](https://issues.apache.org/jira/browse/ZOOKEEPER-131)|Old leader election can elect a dead leader over and over again| +|[ZOOKEEPER-130](https://issues.apache.org/jira/browse/ZOOKEEPER-130)|update build.xml to support apache release process| +|[ZOOKEEPER-118](https://issues.apache.org/jira/browse/ZOOKEEPER-118)|findbugs flagged switch statement in followerrequestprocessor.run| +|[ZOOKEEPER-115](https://issues.apache.org/jira/browse/ZOOKEEPER-115)|Potential NPE in QuorumCnxManager| +|[ZOOKEEPER-114](https://issues.apache.org/jira/browse/ZOOKEEPER-114)|cleanup ugly event messages in zookeeper client| +|[ZOOKEEPER-112](https://issues.apache.org/jira/browse/ZOOKEEPER-112)|src/java/main ZooKeeper.java has test code embedded into it.| +|[ZOOKEEPER-39](https://issues.apache.org/jira/browse/ZOOKEEPER-39)|Use Watcher objects 
rather than boolean on read operations.| +|[ZOOKEEPER-97](https://issues.apache.org/jira/browse/ZOOKEEPER-97)|supports optional output directory in code generator.| +|[ZOOKEEPER-101](https://issues.apache.org/jira/browse/ZOOKEEPER-101)|Integrate ZooKeeper with "violations" feature on hudson| +|[ZOOKEEPER-105](https://issues.apache.org/jira/browse/ZOOKEEPER-105)|Catch Zookeeper exceptions and print on the stderr.| +|[ZOOKEEPER-42](https://issues.apache.org/jira/browse/ZOOKEEPER-42)|Change Leader Election to fast tcp.| +|[ZOOKEEPER-48](https://issues.apache.org/jira/browse/ZOOKEEPER-48)|auth_id now handled correctly when no auth ids present| +|[ZOOKEEPER-44](https://issues.apache.org/jira/browse/ZOOKEEPER-44)|Create sequence flag children with prefixes of 0's so that they can be lexicographically sorted.| +|[ZOOKEEPER-108](https://issues.apache.org/jira/browse/ZOOKEEPER-108)|Fix sync operation reordering on a Quorum.| +|[ZOOKEEPER-25](https://issues.apache.org/jira/browse/ZOOKEEPER-25)|Fuse module for Zookeeper.| +|[ZOOKEEPER-58](https://issues.apache.org/jira/browse/ZOOKEEPER-58)|Race condition on ClientCnxn.java| +|[ZOOKEEPER-56](https://issues.apache.org/jira/browse/ZOOKEEPER-56)|Add clover support to build.xml.| +|[ZOOKEEPER-75](https://issues.apache.org/jira/browse/ZOOKEEPER-75)|register the ZooKeeper mailing lists with nabble.com| +|[ZOOKEEPER-54](https://issues.apache.org/jira/browse/ZOOKEEPER-54)|remove sleeps in the tests.| +|[ZOOKEEPER-55](https://issues.apache.org/jira/browse/ZOOKEEPER-55)|build.xml fails to retrieve a release number from SVN and the ant target "dist" fails| +|[ZOOKEEPER-89](https://issues.apache.org/jira/browse/ZOOKEEPER-89)|invoke WhenOwnerListener.whenNotOwner when the ZK connection fails| +|[ZOOKEEPER-90](https://issues.apache.org/jira/browse/ZOOKEEPER-90)|invoke WhenOwnerListener.whenNotOwner when the ZK session expires and the znode is the leader| +|[ZOOKEEPER-82](https://issues.apache.org/jira/browse/ZOOKEEPER-82)|Make the 
ZooKeeperServer more DI friendly.| +|[ZOOKEEPER-110](https://issues.apache.org/jira/browse/ZOOKEEPER-110)|Build script relies on svnant, which is not compatible with subversion 1.5 working copies| +|[ZOOKEEPER-111](https://issues.apache.org/jira/browse/ZOOKEEPER-111)|Significant cleanup of existing tests.| +|[ZOOKEEPER-122](https://issues.apache.org/jira/browse/ZOOKEEPER-122)|Fix NPE in jute's Utils.toCSVString.| +|[ZOOKEEPER-123](https://issues.apache.org/jira/browse/ZOOKEEPER-123)|Fix the wrong class is specified for the logger.| +|[ZOOKEEPER-2](https://issues.apache.org/jira/browse/ZOOKEEPER-2)|Fix synchronization issues in QuorumPeer and FastLeader election.| +|[ZOOKEEPER-125](https://issues.apache.org/jira/browse/ZOOKEEPER-125)|Remove unwanted class declaration in FastLeaderElection.| +|[ZOOKEEPER-61](https://issues.apache.org/jira/browse/ZOOKEEPER-61)|Address in client/server test cases.| +|[ZOOKEEPER-75](https://issues.apache.org/jira/browse/ZOOKEEPER-75)|cleanup the library directory| +|[ZOOKEEPER-109](https://issues.apache.org/jira/browse/ZOOKEEPER-109)|cleanup of NPE and Resource issue nits found by static analysis| +|[ZOOKEEPER-76](https://issues.apache.org/jira/browse/ZOOKEEPER-76)|Commit 677109 removed the cobertura library, but not the build targets.| +|[ZOOKEEPER-63](https://issues.apache.org/jira/browse/ZOOKEEPER-63)|Race condition in client close| +|[ZOOKEEPER-70](https://issues.apache.org/jira/browse/ZOOKEEPER-70)|Add skeleton forrest doc structure for ZooKeeper| +|[ZOOKEEPER-79](https://issues.apache.org/jira/browse/ZOOKEEPER-79)|Document jacob's leader election on the wiki recipes page| +|[ZOOKEEPER-73](https://issues.apache.org/jira/browse/ZOOKEEPER-73)|Move ZK wiki from SourceForge to Apache| +|[ZOOKEEPER-72](https://issues.apache.org/jira/browse/ZOOKEEPER-72)|Initial creation/setup of ZooKeeper ASF site.| +|[ZOOKEEPER-71](https://issues.apache.org/jira/browse/ZOOKEEPER-71)|Determine what to do re ZooKeeper Changelog| 
+|[ZOOKEEPER-68](https://issues.apache.org/jira/browse/ZOOKEEPER-68)|parseACLs in ZooKeeper.java fails to parse elements of ACL, should be lastIndexOf rather than IndexOf| +|[ZOOKEEPER-130](https://issues.apache.org/jira/browse/ZOOKEEPER-130)|update build.xml to support apache release process.| +|[ZOOKEEPER-131](https://issues.apache.org/jira/browse/ZOOKEEPER-131)|Fix Old leader election can elect a dead leader over and over again.| +|[ZOOKEEPER-137](https://issues.apache.org/jira/browse/ZOOKEEPER-137)|client watcher objects can lose events| +|[ZOOKEEPER-117](https://issues.apache.org/jira/browse/ZOOKEEPER-117)|threading issues in Leader election| +|[ZOOKEEPER-128](https://issues.apache.org/jira/browse/ZOOKEEPER-128)|test coverage on async client operations needs to be improved| +|[ZOOKEEPER-127](https://issues.apache.org/jira/browse/ZOOKEEPER-127)|Use of non-standard election ports in config breaks services| +|[ZOOKEEPER-53](https://issues.apache.org/jira/browse/ZOOKEEPER-53)|tests failing on solaris.| +|[ZOOKEEPER-172](https://issues.apache.org/jira/browse/ZOOKEEPER-172)|FLE Test| +|[ZOOKEEPER-41](https://issues.apache.org/jira/browse/ZOOKEEPER-41)|Sample startup script| +|[ZOOKEEPER-33](https://issues.apache.org/jira/browse/ZOOKEEPER-33)|Better ACL management| +|[ZOOKEEPER-49](https://issues.apache.org/jira/browse/ZOOKEEPER-49)|SetACL does not work| +|[ZOOKEEPER-20](https://issues.apache.org/jira/browse/ZOOKEEPER-20)|Child watches are not triggered when the node is deleted| +|[ZOOKEEPER-15](https://issues.apache.org/jira/browse/ZOOKEEPER-15)|handle failure better in build.xml:test| +|[ZOOKEEPER-11](https://issues.apache.org/jira/browse/ZOOKEEPER-11)|ArrayList is used instead of List| +|[ZOOKEEPER-45](https://issues.apache.org/jira/browse/ZOOKEEPER-45)|Restructure the SVN repository after initial import | +|[ZOOKEEPER-1](https://issues.apache.org/jira/browse/ZOOKEEPER-1)|Initial ZooKeeper code contribution from Yahoo!| diff --git 
a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/basic.css b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/basic.css new file mode 100644 index 0000000000000000000000000000000000000000..01c383da891a8c2102732a2db4758fd00badaff4 --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/basic.css @@ -0,0 +1,167 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. 
+*/ +/** + * General + */ + +img { border: 0; } + +#content table { + border: 0; + width: 100%; +} +/*Hack to get IE to render the table at 100%*/ +* html #content table { margin-left: -3px; } + +#content th, +#content td { + margin: 0; + padding: 0; + vertical-align: top; +} + +.clearboth { + clear: both; +} + +.note, .warning, .fixme { + clear:right; + border: solid black 1px; + margin: 1em 3em; +} + +.note .label { + background: #369; + color: white; + font-weight: bold; + padding: 5px 10px; +} +.note .content { + background: #F0F0FF; + color: black; + line-height: 120%; + font-size: 90%; + padding: 5px 10px; +} +.warning .label { + background: #C00; + color: white; + font-weight: bold; + padding: 5px 10px; +} +.warning .content { + background: #FFF0F0; + color: black; + line-height: 120%; + font-size: 90%; + padding: 5px 10px; +} +.fixme .label { + background: #C6C600; + color: black; + font-weight: bold; + padding: 5px 10px; +} +.fixme .content { + padding: 5px 10px; +} + +/** + * Typography + */ + +body { + font-family: verdana, "Trebuchet MS", arial, helvetica, sans-serif; + font-size: 100%; +} + +#content { + font-family: Georgia, Palatino, Times, serif; + font-size: 95%; +} +#tabs { + font-size: 70%; +} +#menu { + font-size: 80%; +} +#footer { + font-size: 70%; +} + +h1, h2, h3, h4, h5, h6 { + font-family: "Trebuchet MS", verdana, arial, helvetica, sans-serif; + font-weight: bold; + margin-top: 1em; + margin-bottom: .5em; +} + +h1 { + margin-top: 0; + margin-bottom: 1em; + font-size: 1.4em; +} +#content h1 { + font-size: 160%; + margin-bottom: .5em; +} +#menu h1 { + margin: 0; + padding: 10px; + background: #336699; + color: white; +} +h2 { font-size: 120%; } +h3 { font-size: 100%; } +h4 { font-size: 90%; } +h5 { font-size: 80%; } +h6 { font-size: 75%; } + +p { + line-height: 120%; + text-align: left; + margin-top: .5em; + margin-bottom: 1em; +} + +#content li, +#content th, +#content td, +#content li ul, +#content li ol{ + margin-top: .5em; + 
margin-bottom: .5em; +} + + +#content li li, +#minitoc-area li{ + margin-top: 0em; + margin-bottom: 0em; +} + +#content .attribution { + text-align: right; + font-style: italic; + font-size: 85%; + margin-top: 1em; +} + +.codefrag { + font-family: "Courier New", Courier, monospace; + font-size: 110%; +} \ No newline at end of file diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/getBlank.js b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/getBlank.js new file mode 100644 index 0000000000000000000000000000000000000000..d9978c0b3e6d6f184928133a1cf6e4340f35c930 --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/getBlank.js @@ -0,0 +1,40 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. +*/ +/** + * getBlank script - when included in a html file and called from a form text field, will set the value of this field to "" + * if the text value is still the standard value. + * getPrompt script - when included in a html file and called from a form text field, will set the value of this field to the prompt + * if the text value is empty. 
+ * + * Typical usage: + * + * + */ + diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/print.css b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/print.css new file mode 100644 index 0000000000000000000000000000000000000000..aaa99319acdf30b5b6ce2ec9b6967c6ad704a27b --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/print.css @@ -0,0 +1,54 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. 
+*/ +body { + font-family: Georgia, Palatino, serif; + font-size: 12pt; + background: white; +} + +#tabs, +#menu, +#content .toc { + display: none; +} + +#content { + width: auto; + padding: 0; + float: none !important; + color: black; + background: inherit; +} + +a:link, a:visited { + color: #336699; + background: inherit; + text-decoration: underline; +} + +#top .logo { + padding: 0; + margin: 0 0 2em 0; +} + +#footer { + margin-top: 4em; +} + +acronym { + border: 0; +} \ No newline at end of file diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/profile.css b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/profile.css new file mode 100644 index 0000000000000000000000000000000000000000..190e74f32ac81c2e71fb7ac7dfae8b352f70e816 --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/profile.css @@ -0,0 +1,159 @@ + + +/* ==================== aural ============================ */ + +@media aural { + h1, h2, h3, h4, h5, h6 { voice-family: paul, male; stress: 20; richness: 90 } + h1 { pitch: x-low; pitch-range: 90 } + h2 { pitch: x-low; pitch-range: 80 } + h3 { pitch: low; pitch-range: 70 } + h4 { pitch: medium; pitch-range: 60 } + h5 { pitch: medium; pitch-range: 50 } + h6 { pitch: medium; pitch-range: 40 } + li, dt, dd { pitch: medium; richness: 60 } + dt { stress: 80 } + pre, code, tt { pitch: medium; pitch-range: 0; stress: 0; richness: 80 } + em { pitch: medium; pitch-range: 60; stress: 60; richness: 50 } + strong { pitch: medium; pitch-range: 60; stress: 90; richness: 90 } + dfn { pitch: high; pitch-range: 60; stress: 60 } + s, strike { richness: 0 } + i { pitch: medium; pitch-range: 60; stress: 60; richness: 50 } + b { pitch: medium; pitch-range: 60; stress: 90; richness: 90 } + u { richness: 0 } + + :link { voice-family: harry, male } + :visited { voice-family: betty, female } + :active { voice-family: 
betty, female; pitch-range: 80; pitch: x-high } +} + +#top { background-color: #FFFFFF;} + +#top .header .current { background-color: #4C6C8F;} +#top .header .current a:link { color: #ffffff; } +#top .header .current a:visited { color: #ffffff; } +#top .header .current a:hover { color: #ffffff; } + +#tabs li { background-color: #E5E4D9 ;} +#tabs li a:link { color: #000000; } +#tabs li a:visited { color: #000000; } +#tabs li a:hover { color: #000000; } + +#level2tabs a.selected { background-color: #4C6C8F ;} +#level2tabs a:link { color: #ffffff; } +#level2tabs a:visited { color: #ffffff; } +#level2tabs a:hover { color: #ffffff; } + +#level2tabs { background-color: #E5E4D9;} +#level2tabs a.unselected:link { color: #000000; } +#level2tabs a.unselected:visited { color: #000000; } +#level2tabs a.unselected:hover { color: #000000; } + +.heading { background-color: #E5E4D9;} + +.boxed { background-color: #E5E4D9;} +.underlined_5 {border-bottom: solid 5px #E5E4D9;} +.underlined_10 {border-bottom: solid 10px #E5E4D9;} +table caption { +background-color: #E5E4D9; +color: #000000; +} + +#feedback { +color: #FFFFFF; +background: #4C6C8F; +text-align: center; +} +#feedback #feedbackto { +color: #FFFFFF; +} + +#publishedStrip { +color: #FFFFFF; +background: #4C6C8F; +} + +#publishedStrip { +color: #000000; +background: #E5E4D9; +} + +#menu a.selected { background-color: #CFDCED; + border-color: #999999; + color: #000000;} +#menu a.selected:visited { color: #000000;} + +#menu { border-color: #999999;} +#menu .menupageitemgroup { border-color: #999999;} + +#menu { background-color: #4C6C8F;} +#menu { color: #ffffff;} +#menu a:link { color: #ffffff;} +#menu a:visited { color: #ffffff;} +#menu a:hover { +background-color: #4C6C8F; +color: #ffffff;} + +#menu h1 { +color: #000000; +background-color: #cfdced; +} + +#top .searchbox { +background-color: #E5E4D9 ; +color: #000000; +} + +#menu .menupageitemgroup { +background-color: #E5E4D9; +} +#menu .menupageitem { +color: #000000; +} 
+#menu .menupageitem a:link { color: #000000;} +#menu .menupageitem a:visited { color: #000000;} +#menu .menupageitem a:hover { +background-color: #E5E4D9; +color: #000000; +} + +body{ +background-color: #ffffff; +color: #000000; +} +a:link { color:#0000ff} +a:visited { color:#009999} +a:hover { color:#6587ff} + + +.ForrestTable { background-color: #ccc;} + +.ForrestTable td { background-color: #ffffff;} + +.highlight { background-color: #ffff00;} + +.fixme { border-color: #c60;} + +.note { border-color: #069;} + +.warning { border-color: #900;} + +#footer { background-color: #E5E4D9;} +/* extra-css */ + + p.quote { + margin-left: 2em; + padding: .5em; + background-color: #f0f0f0; + font-family: monospace; + } + + pre { + margin-left: 0em; + padding: 0.5em; + background-color: #f0f0f0; + font-family: monospace; + } + + + + \ No newline at end of file diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/prototype.js b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/prototype.js new file mode 100644 index 0000000000000000000000000000000000000000..cc89dafcd6ae69cf81adfea016a0f1a8d8341c58 --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/prototype.js @@ -0,0 +1,7588 @@ +/* Prototype JavaScript framework, version 1.7.3 + * (c) 2005-2010 Sam Stephenson + * + * Prototype is freely distributable under the terms of an MIT-style license. 
+ * For details, see the Prototype web site: http://www.prototypejs.org/ + * + *--------------------------------------------------------------------------*/ + +var Prototype = { + + Version: '1.7.3', + + Browser: (function(){ + var ua = navigator.userAgent; + var isOpera = Object.prototype.toString.call(window.opera) == '[object Opera]'; + return { + IE: !!window.attachEvent && !isOpera, + Opera: isOpera, + WebKit: ua.indexOf('AppleWebKit/') > -1, + Gecko: ua.indexOf('Gecko') > -1 && ua.indexOf('KHTML') === -1, + MobileSafari: /Apple.*Mobile/.test(ua) + } + })(), + + BrowserFeatures: { + XPath: !!document.evaluate, + + SelectorsAPI: !!document.querySelector, + + ElementExtensions: (function() { + var constructor = window.Element || window.HTMLElement; + return !!(constructor && constructor.prototype); + })(), + SpecificElementExtensions: (function() { + if (typeof window.HTMLDivElement !== 'undefined') + return true; + + var div = document.createElement('div'), + form = document.createElement('form'), + isSupported = false; + + if (div['__proto__'] && (div['__proto__'] !== form['__proto__'])) { + isSupported = true; + } + + div = form = null; + + return isSupported; + })() + }, + + ScriptFragment: '<script[^>]*>([\\S\\s]*?)<\/script\\s*>', + JSONFilter: /^\/\*-secure-([\s\S]*)\*\/\s*$/, + + emptyFunction: function() { }, + + K: function(x) { return x } +}; + +if (Prototype.Browser.MobileSafari) + Prototype.BrowserFeatures.SpecificElementExtensions = false; +/* Based on Alex Arnell's inheritance implementation. 
*/ + +var Class = (function() { + + var IS_DONTENUM_BUGGY = (function(){ + for (var p in { toString: 1 }) { + if (p === 'toString') return false; + } + return true; + })(); + + function subclass() {}; + function create() { + var parent = null, properties = $A(arguments); + if (Object.isFunction(properties[0])) + parent = properties.shift(); + + function klass() { + this.initialize.apply(this, arguments); + } + + Object.extend(klass, Class.Methods); + klass.superclass = parent; + klass.subclasses = []; + + if (parent) { + subclass.prototype = parent.prototype; + klass.prototype = new subclass; + parent.subclasses.push(klass); + } + + for (var i = 0, length = properties.length; i < length; i++) + klass.addMethods(properties[i]); + + if (!klass.prototype.initialize) + klass.prototype.initialize = Prototype.emptyFunction; + + klass.prototype.constructor = klass; + return klass; + } + + function addMethods(source) { + var ancestor = this.superclass && this.superclass.prototype, + properties = Object.keys(source); + + if (IS_DONTENUM_BUGGY) { + if (source.toString != Object.prototype.toString) + properties.push("toString"); + if (source.valueOf != Object.prototype.valueOf) + properties.push("valueOf"); + } + + for (var i = 0, length = properties.length; i < length; i++) { + var property = properties[i], value = source[property]; + if (ancestor && Object.isFunction(value) && + value.argumentNames()[0] == "$super") { + var method = value; + value = (function(m) { + return function() { return ancestor[m].apply(this, arguments); }; + })(property).wrap(method); + + value.valueOf = (function(method) { + return function() { return method.valueOf.call(method); }; + })(method); + + value.toString = (function(method) { + return function() { return method.toString.call(method); }; + })(method); + } + this.prototype[property] = value; + } + + return this; + } + + return { + create: create, + Methods: { + addMethods: addMethods + } + }; +})(); +(function() { + + var _toString = 
Object.prototype.toString, + _hasOwnProperty = Object.prototype.hasOwnProperty, + NULL_TYPE = 'Null', + UNDEFINED_TYPE = 'Undefined', + BOOLEAN_TYPE = 'Boolean', + NUMBER_TYPE = 'Number', + STRING_TYPE = 'String', + OBJECT_TYPE = 'Object', + FUNCTION_CLASS = '[object Function]', + BOOLEAN_CLASS = '[object Boolean]', + NUMBER_CLASS = '[object Number]', + STRING_CLASS = '[object String]', + ARRAY_CLASS = '[object Array]', + DATE_CLASS = '[object Date]', + NATIVE_JSON_STRINGIFY_SUPPORT = window.JSON && + typeof JSON.stringify === 'function' && + JSON.stringify(0) === '0' && + typeof JSON.stringify(Prototype.K) === 'undefined'; + + + + var DONT_ENUMS = ['toString', 'toLocaleString', 'valueOf', + 'hasOwnProperty', 'isPrototypeOf', 'propertyIsEnumerable', 'constructor']; + + var IS_DONTENUM_BUGGY = (function(){ + for (var p in { toString: 1 }) { + if (p === 'toString') return false; + } + return true; + })(); + + function Type(o) { + switch(o) { + case null: return NULL_TYPE; + case (void 0): return UNDEFINED_TYPE; + } + var type = typeof o; + switch(type) { + case 'boolean': return BOOLEAN_TYPE; + case 'number': return NUMBER_TYPE; + case 'string': return STRING_TYPE; + } + return OBJECT_TYPE; + } + + function extend(destination, source) { + for (var property in source) + destination[property] = source[property]; + return destination; + } + + function inspect(object) { + try { + if (isUndefined(object)) return 'undefined'; + if (object === null) return 'null'; + return object.inspect ? 
object.inspect() : String(object); + } catch (e) { + if (e instanceof RangeError) return '...'; + throw e; + } + } + + function toJSON(value) { + return Str('', { '': value }, []); + } + + function Str(key, holder, stack) { + var value = holder[key]; + if (Type(value) === OBJECT_TYPE && typeof value.toJSON === 'function') { + value = value.toJSON(key); + } + + var _class = _toString.call(value); + + switch (_class) { + case NUMBER_CLASS: + case BOOLEAN_CLASS: + case STRING_CLASS: + value = value.valueOf(); + } + + switch (value) { + case null: return 'null'; + case true: return 'true'; + case false: return 'false'; + } + + var type = typeof value; + switch (type) { + case 'string': + return value.inspect(true); + case 'number': + return isFinite(value) ? String(value) : 'null'; + case 'object': + + for (var i = 0, length = stack.length; i < length; i++) { + if (stack[i] === value) { + throw new TypeError("Cyclic reference to '" + value + "' in object"); + } + } + stack.push(value); + + var partial = []; + if (_class === ARRAY_CLASS) { + for (var i = 0, length = value.length; i < length; i++) { + var str = Str(i, value, stack); + partial.push(typeof str === 'undefined' ? 'null' : str); + } + partial = '[' + partial.join(',') + ']'; + } else { + var keys = Object.keys(value); + for (var i = 0, length = keys.length; i < length; i++) { + var key = keys[i], str = Str(key, value, stack); + if (typeof str !== "undefined") { + partial.push(key.inspect(true)+ ':' + str); + } + } + partial = '{' + partial.join(',') + '}'; + } + stack.pop(); + return partial; + } + } + + function stringify(object) { + return JSON.stringify(object); + } + + function toQueryString(object) { + return $H(object).toQueryString(); + } + + function toHTML(object) { + return object && object.toHTML ? 
object.toHTML() : String.interpret(object); + } + + function keys(object) { + if (Type(object) !== OBJECT_TYPE) { throw new TypeError(); } + var results = []; + for (var property in object) { + if (_hasOwnProperty.call(object, property)) + results.push(property); + } + + if (IS_DONTENUM_BUGGY) { + for (var i = 0; property = DONT_ENUMS[i]; i++) { + if (_hasOwnProperty.call(object, property)) + results.push(property); + } + } + + return results; + } + + function values(object) { + var results = []; + for (var property in object) + results.push(object[property]); + return results; + } + + function clone(object) { + return extend({ }, object); + } + + function isElement(object) { + return !!(object && object.nodeType == 1); + } + + function isArray(object) { + return _toString.call(object) === ARRAY_CLASS; + } + + var hasNativeIsArray = (typeof Array.isArray == 'function') + && Array.isArray([]) && !Array.isArray({}); + + if (hasNativeIsArray) { + isArray = Array.isArray; + } + + function isHash(object) { + return object instanceof Hash; + } + + function isFunction(object) { + return _toString.call(object) === FUNCTION_CLASS; + } + + function isString(object) { + return _toString.call(object) === STRING_CLASS; + } + + function isNumber(object) { + return _toString.call(object) === NUMBER_CLASS; + } + + function isDate(object) { + return _toString.call(object) === DATE_CLASS; + } + + function isUndefined(object) { + return typeof object === "undefined"; + } + + extend(Object, { + extend: extend, + inspect: inspect, + toJSON: NATIVE_JSON_STRINGIFY_SUPPORT ? 
stringify : toJSON, + toQueryString: toQueryString, + toHTML: toHTML, + keys: Object.keys || keys, + values: values, + clone: clone, + isElement: isElement, + isArray: isArray, + isHash: isHash, + isFunction: isFunction, + isString: isString, + isNumber: isNumber, + isDate: isDate, + isUndefined: isUndefined + }); +})(); +Object.extend(Function.prototype, (function() { + var slice = Array.prototype.slice; + + function update(array, args) { + var arrayLength = array.length, length = args.length; + while (length--) array[arrayLength + length] = args[length]; + return array; + } + + function merge(array, args) { + array = slice.call(array, 0); + return update(array, args); + } + + function argumentNames() { + var names = this.toString().match(/^[\s\(]*function[^(]*\(([^)]*)\)/)[1] + .replace(/\/\/.*?[\r\n]|\/\*(?:.|[\r\n])*?\*\//g, '') + .replace(/\s+/g, '').split(','); + return names.length == 1 && !names[0] ? [] : names; + } + + + function bind(context) { + if (arguments.length < 2 && Object.isUndefined(arguments[0])) + return this; + + if (!Object.isFunction(this)) + throw new TypeError("The object is not callable."); + + var nop = function() {}; + var __method = this, args = slice.call(arguments, 1); + + var bound = function() { + var a = merge(args, arguments); + var c = this instanceof bound ? 
this : context; + return __method.apply(c, a); + }; + + nop.prototype = this.prototype; + bound.prototype = new nop(); + + return bound; + } + + function bindAsEventListener(context) { + var __method = this, args = slice.call(arguments, 1); + return function(event) { + var a = update([event || window.event], args); + return __method.apply(context, a); + } + } + + function curry() { + if (!arguments.length) return this; + var __method = this, args = slice.call(arguments, 0); + return function() { + var a = merge(args, arguments); + return __method.apply(this, a); + } + } + + function delay(timeout) { + var __method = this, args = slice.call(arguments, 1); + timeout = timeout * 1000; + return window.setTimeout(function() { + return __method.apply(__method, args); + }, timeout); + } + + function defer() { + var args = update([0.01], arguments); + return this.delay.apply(this, args); + } + + function wrap(wrapper) { + var __method = this; + return function() { + var a = update([__method.bind(this)], arguments); + return wrapper.apply(this, a); + } + } + + function methodize() { + if (this._methodized) return this._methodized; + var __method = this; + return this._methodized = function() { + var a = update([this], arguments); + return __method.apply(null, a); + }; + } + + var extensions = { + argumentNames: argumentNames, + bindAsEventListener: bindAsEventListener, + curry: curry, + delay: delay, + defer: defer, + wrap: wrap, + methodize: methodize + }; + + if (!Function.prototype.bind) + extensions.bind = bind; + + return extensions; +})()); + + + +(function(proto) { + + + function toISOString() { + return this.getUTCFullYear() + '-' + + (this.getUTCMonth() + 1).toPaddedString(2) + '-' + + this.getUTCDate().toPaddedString(2) + 'T' + + this.getUTCHours().toPaddedString(2) + ':' + + this.getUTCMinutes().toPaddedString(2) + ':' + + this.getUTCSeconds().toPaddedString(2) + 'Z'; + } + + + function toJSON() { + return this.toISOString(); + } + + if (!proto.toISOString) 
proto.toISOString = toISOString; + if (!proto.toJSON) proto.toJSON = toJSON; + +})(Date.prototype); + + +RegExp.prototype.match = RegExp.prototype.test; + +RegExp.escape = function(str) { + return String(str).replace(/([.*+?^=!:${}()|[\]\/\\])/g, '\\$1'); +}; +var PeriodicalExecuter = Class.create({ + initialize: function(callback, frequency) { + this.callback = callback; + this.frequency = frequency; + this.currentlyExecuting = false; + + this.registerCallback(); + }, + + registerCallback: function() { + this.timer = setInterval(this.onTimerEvent.bind(this), this.frequency * 1000); + }, + + execute: function() { + this.callback(this); + }, + + stop: function() { + if (!this.timer) return; + clearInterval(this.timer); + this.timer = null; + }, + + onTimerEvent: function() { + if (!this.currentlyExecuting) { + try { + this.currentlyExecuting = true; + this.execute(); + this.currentlyExecuting = false; + } catch(e) { + this.currentlyExecuting = false; + throw e; + } + } + } +}); +Object.extend(String, { + interpret: function(value) { + return value == null ? 
'' : String(value); + }, + specialChar: { + '\b': '\\b', + '\t': '\\t', + '\n': '\\n', + '\f': '\\f', + '\r': '\\r', + '\\': '\\\\' + } +}); + +Object.extend(String.prototype, (function() { + var NATIVE_JSON_PARSE_SUPPORT = window.JSON && + typeof JSON.parse === 'function' && + JSON.parse('{"test": true}').test; + + function prepareReplacement(replacement) { + if (Object.isFunction(replacement)) return replacement; + var template = new Template(replacement); + return function(match) { return template.evaluate(match) }; + } + + function isNonEmptyRegExp(regexp) { + return regexp.source && regexp.source !== '(?:)'; + } + + + function gsub(pattern, replacement) { + var result = '', source = this, match; + replacement = prepareReplacement(replacement); + + if (Object.isString(pattern)) + pattern = RegExp.escape(pattern); + + if (!(pattern.length || isNonEmptyRegExp(pattern))) { + replacement = replacement(''); + return replacement + source.split('').join(replacement) + replacement; + } + + while (source.length > 0) { + match = source.match(pattern) + if (match && match[0].length > 0) { + result += source.slice(0, match.index); + result += String.interpret(replacement(match)); + source = source.slice(match.index + match[0].length); + } else { + result += source, source = ''; + } + } + return result; + } + + function sub(pattern, replacement, count) { + replacement = prepareReplacement(replacement); + count = Object.isUndefined(count) ? 1 : count; + + return this.gsub(pattern, function(match) { + if (--count < 0) return match[0]; + return replacement(match); + }); + } + + function scan(pattern, iterator) { + this.gsub(pattern, iterator); + return String(this); + } + + function truncate(length, truncation) { + length = length || 30; + truncation = Object.isUndefined(truncation) ? '...' : truncation; + return this.length > length ? 
+      this.slice(0, length - truncation.length) + truncation : String(this);
+  }
+
+  function strip() {
+    return this.replace(/^\s+/, '').replace(/\s+$/, '');
+  }
+
+  function stripTags() {
+    return this.replace(/<\w+(\s+("[^"]*"|'[^']*'|[^>])+)?(\/)?>|<\/\w+>/gi, '');
+  }
+
+  function stripScripts() {
+    return this.replace(new RegExp(Prototype.ScriptFragment, 'img'), '');
+  }
+
+  function extractScripts() {
+    var matchAll = new RegExp(Prototype.ScriptFragment, 'img'),
+        matchOne = new RegExp(Prototype.ScriptFragment, 'im');
+    return (this.match(matchAll) || []).map(function(scriptTag) {
+      return (scriptTag.match(matchOne) || ['', ''])[1];
+    });
+  }
+
+  function evalScripts() {
+    return this.extractScripts().map(function(script) { return eval(script); });
+  }
+
+  function escapeHTML() {
+    return this.replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;');
+  }
+
+  function unescapeHTML() {
+    return this.stripTags().replace(/&lt;/g,'<').replace(/&gt;/g,'>').replace(/&amp;/g,'&');
+  }
+
+
+  function toQueryParams(separator) {
+    var match = this.strip().match(/([^?#]*)(#.*)?$/);
+    if (!match) return { };
+
+    return match[1].split(separator || '&').inject({ }, function(hash, pair) {
+      if ((pair = pair.split('='))[0]) {
+        var key = decodeURIComponent(pair.shift()),
+            value = pair.length > 1 ? pair.join('=') : pair[0];
+
+        if (value != undefined) {
+          value = value.gsub('+', ' ');
+          value = decodeURIComponent(value);
+        }
+
+        if (key in hash) {
+          if (!Object.isArray(hash[key])) hash[key] = [hash[key]];
+          hash[key].push(value);
+        }
+        else hash[key] = value;
+      }
+      return hash;
+    });
+  }
+
+  function toArray() {
+    return this.split('');
+  }
+
+  function succ() {
+    return this.slice(0, this.length - 1) +
+      String.fromCharCode(this.charCodeAt(this.length - 1) + 1);
+  }
+
+  function times(count) {
+    return count < 1 ? '' : new Array(count + 1).join(this);
+  }
+
+  function camelize() {
+    return this.replace(/-+(.)?/g, function(match, chr) {
+      return chr ?
chr.toUpperCase() : ''; + }); + } + + function capitalize() { + return this.charAt(0).toUpperCase() + this.substring(1).toLowerCase(); + } + + function underscore() { + return this.replace(/::/g, '/') + .replace(/([A-Z]+)([A-Z][a-z])/g, '$1_$2') + .replace(/([a-z\d])([A-Z])/g, '$1_$2') + .replace(/-/g, '_') + .toLowerCase(); + } + + function dasherize() { + return this.replace(/_/g, '-'); + } + + function inspect(useDoubleQuotes) { + var escapedString = this.replace(/[\x00-\x1f\\]/g, function(character) { + if (character in String.specialChar) { + return String.specialChar[character]; + } + return '\\u00' + character.charCodeAt().toPaddedString(2, 16); + }); + if (useDoubleQuotes) return '"' + escapedString.replace(/"/g, '\\"') + '"'; + return "'" + escapedString.replace(/'/g, '\\\'') + "'"; + } + + function unfilterJSON(filter) { + return this.replace(filter || Prototype.JSONFilter, '$1'); + } + + function isJSON() { + var str = this; + if (str.blank()) return false; + str = str.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, '@'); + str = str.replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']'); + str = str.replace(/(?:^|:|,)(?:\s*\[)+/g, ''); + return (/^[\],:{}\s]*$/).test(str); + } + + function evalJSON(sanitize) { + var json = this.unfilterJSON(), + cx = /[\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff\u0000]/g; + if (cx.test(json)) { + json = json.replace(cx, function (a) { + return '\\u' + ('0000' + a.charCodeAt(0).toString(16)).slice(-4); + }); + } + try { + if (!sanitize || json.isJSON()) return eval('(' + json + ')'); + } catch (e) { } + throw new SyntaxError('Badly formed JSON string: ' + this.inspect()); + } + + function parseJSON() { + var json = this.unfilterJSON(); + return JSON.parse(json); + } + + function include(pattern) { + return this.indexOf(pattern) > -1; + } + + function startsWith(pattern, position) { + position = Object.isNumber(position) ? 
position : 0; + return this.lastIndexOf(pattern, position) === position; + } + + function endsWith(pattern, position) { + pattern = String(pattern); + position = Object.isNumber(position) ? position : this.length; + if (position < 0) position = 0; + if (position > this.length) position = this.length; + var d = position - pattern.length; + return d >= 0 && this.indexOf(pattern, d) === d; + } + + function empty() { + return this == ''; + } + + function blank() { + return /^\s*$/.test(this); + } + + function interpolate(object, pattern) { + return new Template(this, pattern).evaluate(object); + } + + return { + gsub: gsub, + sub: sub, + scan: scan, + truncate: truncate, + strip: String.prototype.trim || strip, + stripTags: stripTags, + stripScripts: stripScripts, + extractScripts: extractScripts, + evalScripts: evalScripts, + escapeHTML: escapeHTML, + unescapeHTML: unescapeHTML, + toQueryParams: toQueryParams, + parseQuery: toQueryParams, + toArray: toArray, + succ: succ, + times: times, + camelize: camelize, + capitalize: capitalize, + underscore: underscore, + dasherize: dasherize, + inspect: inspect, + unfilterJSON: unfilterJSON, + isJSON: isJSON, + evalJSON: NATIVE_JSON_PARSE_SUPPORT ? 
parseJSON : evalJSON, + include: include, + startsWith: String.prototype.startsWith || startsWith, + endsWith: String.prototype.endsWith || endsWith, + empty: empty, + blank: blank, + interpolate: interpolate + }; +})()); + +var Template = Class.create({ + initialize: function(template, pattern) { + this.template = template.toString(); + this.pattern = pattern || Template.Pattern; + }, + + evaluate: function(object) { + if (object && Object.isFunction(object.toTemplateReplacements)) + object = object.toTemplateReplacements(); + + return this.template.gsub(this.pattern, function(match) { + if (object == null) return (match[1] + ''); + + var before = match[1] || ''; + if (before == '\\') return match[2]; + + var ctx = object, expr = match[3], + pattern = /^([^.[]+|\[((?:.*?[^\\])?)\])(\.|\[|$)/; + + match = pattern.exec(expr); + if (match == null) return before; + + while (match != null) { + var comp = match[1].startsWith('[') ? match[2].replace(/\\\\]/g, ']') : match[1]; + ctx = ctx[comp]; + if (null == ctx || '' == match[3]) break; + expr = expr.substring('[' == match[3] ? 
match[1].length : match[0].length); + match = pattern.exec(expr); + } + + return before + String.interpret(ctx); + }); + } +}); +Template.Pattern = /(^|.|\r|\n)(#\{(.*?)\})/; + +var $break = { }; + +var Enumerable = (function() { + function each(iterator, context) { + try { + this._each(iterator, context); + } catch (e) { + if (e != $break) throw e; + } + return this; + } + + function eachSlice(number, iterator, context) { + var index = -number, slices = [], array = this.toArray(); + if (number < 1) return array; + while ((index += number) < array.length) + slices.push(array.slice(index, index+number)); + return slices.collect(iterator, context); + } + + function all(iterator, context) { + iterator = iterator || Prototype.K; + var result = true; + this.each(function(value, index) { + result = result && !!iterator.call(context, value, index, this); + if (!result) throw $break; + }, this); + return result; + } + + function any(iterator, context) { + iterator = iterator || Prototype.K; + var result = false; + this.each(function(value, index) { + if (result = !!iterator.call(context, value, index, this)) + throw $break; + }, this); + return result; + } + + function collect(iterator, context) { + iterator = iterator || Prototype.K; + var results = []; + this.each(function(value, index) { + results.push(iterator.call(context, value, index, this)); + }, this); + return results; + } + + function detect(iterator, context) { + var result; + this.each(function(value, index) { + if (iterator.call(context, value, index, this)) { + result = value; + throw $break; + } + }, this); + return result; + } + + function findAll(iterator, context) { + var results = []; + this.each(function(value, index) { + if (iterator.call(context, value, index, this)) + results.push(value); + }, this); + return results; + } + + function grep(filter, iterator, context) { + iterator = iterator || Prototype.K; + var results = []; + + if (Object.isString(filter)) + filter = new 
RegExp(RegExp.escape(filter)); + + this.each(function(value, index) { + if (filter.match(value)) + results.push(iterator.call(context, value, index, this)); + }, this); + return results; + } + + function include(object) { + if (Object.isFunction(this.indexOf) && this.indexOf(object) != -1) + return true; + + var found = false; + this.each(function(value) { + if (value == object) { + found = true; + throw $break; + } + }); + return found; + } + + function inGroupsOf(number, fillWith) { + fillWith = Object.isUndefined(fillWith) ? null : fillWith; + return this.eachSlice(number, function(slice) { + while(slice.length < number) slice.push(fillWith); + return slice; + }); + } + + function inject(memo, iterator, context) { + this.each(function(value, index) { + memo = iterator.call(context, memo, value, index, this); + }, this); + return memo; + } + + function invoke(method) { + var args = $A(arguments).slice(1); + return this.map(function(value) { + return value[method].apply(value, args); + }); + } + + function max(iterator, context) { + iterator = iterator || Prototype.K; + var result; + this.each(function(value, index) { + value = iterator.call(context, value, index, this); + if (result == null || value >= result) + result = value; + }, this); + return result; + } + + function min(iterator, context) { + iterator = iterator || Prototype.K; + var result; + this.each(function(value, index) { + value = iterator.call(context, value, index, this); + if (result == null || value < result) + result = value; + }, this); + return result; + } + + function partition(iterator, context) { + iterator = iterator || Prototype.K; + var trues = [], falses = []; + this.each(function(value, index) { + (iterator.call(context, value, index, this) ? 
+        trues : falses).push(value);
+    }, this);
+    return [trues, falses];
+  }
+
+  function pluck(property) {
+    var results = [];
+    this.each(function(value) {
+      results.push(value[property]);
+    });
+    return results;
+  }
+
+  function reject(iterator, context) {
+    var results = [];
+    this.each(function(value, index) {
+      if (!iterator.call(context, value, index, this))
+        results.push(value);
+    }, this);
+    return results;
+  }
+
+  function sortBy(iterator, context) {
+    return this.map(function(value, index) {
+      return {
+        value: value,
+        criteria: iterator.call(context, value, index, this)
+      };
+    }, this).sort(function(left, right) {
+      var a = left.criteria, b = right.criteria;
+      return a < b ? -1 : a > b ? 1 : 0;
+    }).pluck('value');
+  }
+
+  function toArray() {
+    return this.map();
+  }
+
+  function zip() {
+    var iterator = Prototype.K, args = $A(arguments);
+    if (Object.isFunction(args.last()))
+      iterator = args.pop();
+
+    var collections = [this].concat(args).map($A);
+    return this.map(function(value, index) {
+      return iterator(collections.pluck(index));
+    });
+  }
+
+  function size() {
+    return this.toArray().length;
+  }
+
+  function inspect() {
+    return '#<Enumerable:' + this.toArray().inspect() + '>';
+  }
+
+  return {
+    each: each,
+    eachSlice: eachSlice,
+    all: all,
+    every: all,
+    any: any,
+    some: any,
+    collect: collect,
+    map: collect,
+    detect: detect,
+    findAll: findAll,
+    select: findAll,
+    filter: findAll,
+    grep: grep,
+    include: include,
+    member: include,
+    inGroupsOf: inGroupsOf,
+    inject: inject,
+    invoke: invoke,
+    max: max,
+    min: min,
+    partition: partition,
+    pluck: pluck,
+    reject: reject,
+    sortBy: sortBy,
+    toArray: toArray,
+    entries: toArray,
+    zip: zip,
+    size: size,
+    inspect: inspect,
+    find: detect
+  };
+})();
+
+function $A(iterable) {
+  if (!iterable) return [];
+  if ('toArray' in Object(iterable)) return iterable.toArray();
+  var length = iterable.length || 0, results = new Array(length);
+  while (length--) results[length] = iterable[length];
+  return results;
+}
+
+function $w(string) { + if (!Object.isString(string)) return []; + string = string.strip(); + return string ? string.split(/\s+/) : []; +} + +Array.from = $A; + + +(function() { + var arrayProto = Array.prototype, + slice = arrayProto.slice, + _each = arrayProto.forEach; // use native browser JS 1.6 implementation if available + + function each(iterator, context) { + for (var i = 0, length = this.length >>> 0; i < length; i++) { + if (i in this) iterator.call(context, this[i], i, this); + } + } + if (!_each) _each = each; + + function clear() { + this.length = 0; + return this; + } + + function first() { + return this[0]; + } + + function last() { + return this[this.length - 1]; + } + + function compact() { + return this.select(function(value) { + return value != null; + }); + } + + function flatten() { + return this.inject([], function(array, value) { + if (Object.isArray(value)) + return array.concat(value.flatten()); + array.push(value); + return array; + }); + } + + function without() { + var values = slice.call(arguments, 0); + return this.select(function(value) { + return !values.include(value); + }); + } + + function reverse(inline) { + return (inline === false ? this.toArray() : this)._reverse(); + } + + function uniq(sorted) { + return this.inject([], function(array, value, index) { + if (0 == index || (sorted ? 
array.last() != value : !array.include(value))) + array.push(value); + return array; + }); + } + + function intersect(array) { + return this.uniq().findAll(function(item) { + return array.indexOf(item) !== -1; + }); + } + + + function clone() { + return slice.call(this, 0); + } + + function size() { + return this.length; + } + + function inspect() { + return '[' + this.map(Object.inspect).join(', ') + ']'; + } + + function indexOf(item, i) { + if (this == null) throw new TypeError(); + + var array = Object(this), length = array.length >>> 0; + if (length === 0) return -1; + + i = Number(i); + if (isNaN(i)) { + i = 0; + } else if (i !== 0 && isFinite(i)) { + i = (i > 0 ? 1 : -1) * Math.floor(Math.abs(i)); + } + + if (i > length) return -1; + + var k = i >= 0 ? i : Math.max(length - Math.abs(i), 0); + for (; k < length; k++) + if (k in array && array[k] === item) return k; + return -1; + } + + + function lastIndexOf(item, i) { + if (this == null) throw new TypeError(); + + var array = Object(this), length = array.length >>> 0; + if (length === 0) return -1; + + if (!Object.isUndefined(i)) { + i = Number(i); + if (isNaN(i)) { + i = 0; + } else if (i !== 0 && isFinite(i)) { + i = (i > 0 ? 1 : -1) * Math.floor(Math.abs(i)); + } + } else { + i = length; + } + + var k = i >= 0 ? 
Math.min(i, length - 1) : + length - Math.abs(i); + + for (; k >= 0; k--) + if (k in array && array[k] === item) return k; + return -1; + } + + function concat(_) { + var array = [], items = slice.call(arguments, 0), item, n = 0; + items.unshift(this); + for (var i = 0, length = items.length; i < length; i++) { + item = items[i]; + if (Object.isArray(item) && !('callee' in item)) { + for (var j = 0, arrayLength = item.length; j < arrayLength; j++) { + if (j in item) array[n] = item[j]; + n++; + } + } else { + array[n++] = item; + } + } + array.length = n; + return array; + } + + + function wrapNative(method) { + return function() { + if (arguments.length === 0) { + return method.call(this, Prototype.K); + } else if (arguments[0] === undefined) { + var args = slice.call(arguments, 1); + args.unshift(Prototype.K); + return method.apply(this, args); + } else { + return method.apply(this, arguments); + } + }; + } + + + function map(iterator) { + if (this == null) throw new TypeError(); + iterator = iterator || Prototype.K; + + var object = Object(this); + var results = [], context = arguments[1], n = 0; + + for (var i = 0, length = object.length >>> 0; i < length; i++) { + if (i in object) { + results[n] = iterator.call(context, object[i], i, object); + } + n++; + } + results.length = n; + return results; + } + + if (arrayProto.map) { + map = wrapNative(Array.prototype.map); + } + + function filter(iterator) { + if (this == null || !Object.isFunction(iterator)) + throw new TypeError(); + + var object = Object(this); + var results = [], context = arguments[1], value; + + for (var i = 0, length = object.length >>> 0; i < length; i++) { + if (i in object) { + value = object[i]; + if (iterator.call(context, value, i, object)) { + results.push(value); + } + } + } + return results; + } + + if (arrayProto.filter) { + filter = Array.prototype.filter; + } + + function some(iterator) { + if (this == null) throw new TypeError(); + iterator = iterator || Prototype.K; + var context 
= arguments[1]; + + var object = Object(this); + for (var i = 0, length = object.length >>> 0; i < length; i++) { + if (i in object && iterator.call(context, object[i], i, object)) { + return true; + } + } + + return false; + } + + if (arrayProto.some) { + some = wrapNative(Array.prototype.some); + } + + function every(iterator) { + if (this == null) throw new TypeError(); + iterator = iterator || Prototype.K; + var context = arguments[1]; + + var object = Object(this); + for (var i = 0, length = object.length >>> 0; i < length; i++) { + if (i in object && !iterator.call(context, object[i], i, object)) { + return false; + } + } + + return true; + } + + if (arrayProto.every) { + every = wrapNative(Array.prototype.every); + } + + + Object.extend(arrayProto, Enumerable); + + if (arrayProto.entries === Enumerable.entries) { + delete arrayProto.entries; + } + + if (!arrayProto._reverse) + arrayProto._reverse = arrayProto.reverse; + + Object.extend(arrayProto, { + _each: _each, + + map: map, + collect: map, + select: filter, + filter: filter, + findAll: filter, + some: some, + any: some, + every: every, + all: every, + + clear: clear, + first: first, + last: last, + compact: compact, + flatten: flatten, + without: without, + reverse: reverse, + uniq: uniq, + intersect: intersect, + clone: clone, + toArray: clone, + size: size, + inspect: inspect + }); + + var CONCAT_ARGUMENTS_BUGGY = (function() { + return [].concat(arguments)[0][0] !== 1; + })(1,2); + + if (CONCAT_ARGUMENTS_BUGGY) arrayProto.concat = concat; + + if (!arrayProto.indexOf) arrayProto.indexOf = indexOf; + if (!arrayProto.lastIndexOf) arrayProto.lastIndexOf = lastIndexOf; +})(); +function $H(object) { + return new Hash(object); +}; + +var Hash = Class.create(Enumerable, (function() { + function initialize(object) { + this._object = Object.isHash(object) ? 
object.toObject() : Object.clone(object);
+  }
+
+
+  function _each(iterator, context) {
+    var i = 0;
+    for (var key in this._object) {
+      var value = this._object[key], pair = [key, value];
+      pair.key = key;
+      pair.value = value;
+      iterator.call(context, pair, i);
+      i++;
+    }
+  }
+
+  function set(key, value) {
+    return this._object[key] = value;
+  }
+
+  function get(key) {
+    if (this._object[key] !== Object.prototype[key])
+      return this._object[key];
+  }
+
+  function unset(key) {
+    var value = this._object[key];
+    delete this._object[key];
+    return value;
+  }
+
+  function toObject() {
+    return Object.clone(this._object);
+  }
+
+
+  function keys() {
+    return this.pluck('key');
+  }
+
+  function values() {
+    return this.pluck('value');
+  }
+
+  function index(value) {
+    var match = this.detect(function(pair) {
+      return pair.value === value;
+    });
+    return match && match.key;
+  }
+
+  function merge(object) {
+    return this.clone().update(object);
+  }
+
+  function update(object) {
+    return new Hash(object).inject(this, function(result, pair) {
+      result.set(pair.key, pair.value);
+      return result;
+    });
+  }
+
+  function toQueryPair(key, value) {
+    if (Object.isUndefined(value)) return key;
+
+    value = String.interpret(value);
+
+    value = value.gsub(/(\r)?\n/, '\r\n');
+    value = encodeURIComponent(value);
+    value = value.gsub(/%20/, '+');
+    return key + '=' + value;
+  }
+
+  function toQueryString() {
+    return this.inject([], function(results, pair) {
+      var key = encodeURIComponent(pair.key), values = pair.value;
+
+      if (values && typeof values == 'object') {
+        if (Object.isArray(values)) {
+          var queryValues = [];
+          for (var i = 0, len = values.length, value; i < len; i++) {
+            value = values[i];
+            queryValues.push(toQueryPair(key, value));
+          }
+          return results.concat(queryValues);
+        }
+      } else results.push(toQueryPair(key, values));
+      return results;
+    }).join('&');
+  }
+
+  function inspect() {
+    return '#<Hash:{' + this.map(function(pair) {
+      return pair.map(Object.inspect).join(': ');
+    }).join(', ') + '}>';
+  }
+
+  function clone() {
+    return new Hash(this);
+  }
+
+  return {
+ initialize: initialize, + _each: _each, + set: set, + get: get, + unset: unset, + toObject: toObject, + toTemplateReplacements: toObject, + keys: keys, + values: values, + index: index, + merge: merge, + update: update, + toQueryString: toQueryString, + inspect: inspect, + toJSON: toObject, + clone: clone + }; +})()); + +Hash.from = $H; +Object.extend(Number.prototype, (function() { + function toColorPart() { + return this.toPaddedString(2, 16); + } + + function succ() { + return this + 1; + } + + function times(iterator, context) { + $R(0, this, true).each(iterator, context); + return this; + } + + function toPaddedString(length, radix) { + var string = this.toString(radix || 10); + return '0'.times(length - string.length) + string; + } + + function abs() { + return Math.abs(this); + } + + function round() { + return Math.round(this); + } + + function ceil() { + return Math.ceil(this); + } + + function floor() { + return Math.floor(this); + } + + return { + toColorPart: toColorPart, + succ: succ, + times: times, + toPaddedString: toPaddedString, + abs: abs, + round: round, + ceil: ceil, + floor: floor + }; +})()); + +function $R(start, end, exclusive) { + return new ObjectRange(start, end, exclusive); +} + +var ObjectRange = Class.create(Enumerable, (function() { + function initialize(start, end, exclusive) { + this.start = start; + this.end = end; + this.exclusive = exclusive; + } + + function _each(iterator, context) { + var value = this.start, i; + for (i = 0; this.include(value); i++) { + iterator.call(context, value, i); + value = value.succ(); + } + } + + function include(value) { + if (value < this.start) + return false; + if (this.exclusive) + return value < this.end; + return value <= this.end; + } + + return { + initialize: initialize, + _each: _each, + include: include + }; +})()); + + + +var Abstract = { }; + + +var Try = { + these: function() { + var returnValue; + + for (var i = 0, length = arguments.length; i < length; i++) { + var lambda = 
arguments[i]; + try { + returnValue = lambda(); + break; + } catch (e) { } + } + + return returnValue; + } +}; + +var Ajax = { + getTransport: function() { + return Try.these( + function() {return new XMLHttpRequest()}, + function() {return new ActiveXObject('Msxml2.XMLHTTP')}, + function() {return new ActiveXObject('Microsoft.XMLHTTP')} + ) || false; + }, + + activeRequestCount: 0 +}; + +Ajax.Responders = { + responders: [], + + _each: function(iterator, context) { + this.responders._each(iterator, context); + }, + + register: function(responder) { + if (!this.include(responder)) + this.responders.push(responder); + }, + + unregister: function(responder) { + this.responders = this.responders.without(responder); + }, + + dispatch: function(callback, request, transport, json) { + this.each(function(responder) { + if (Object.isFunction(responder[callback])) { + try { + responder[callback].apply(responder, [request, transport, json]); + } catch (e) { } + } + }); + } +}; + +Object.extend(Ajax.Responders, Enumerable); + +Ajax.Responders.register({ + onCreate: function() { Ajax.activeRequestCount++ }, + onComplete: function() { Ajax.activeRequestCount-- } +}); +Ajax.Base = Class.create({ + initialize: function(options) { + this.options = { + method: 'post', + asynchronous: true, + contentType: 'application/x-www-form-urlencoded', + encoding: 'UTF-8', + parameters: '', + evalJSON: true, + evalJS: true + }; + Object.extend(this.options, options || { }); + + this.options.method = this.options.method.toLowerCase(); + + if (Object.isHash(this.options.parameters)) + this.options.parameters = this.options.parameters.toObject(); + } +}); +Ajax.Request = Class.create(Ajax.Base, { + _complete: false, + + initialize: function($super, url, options) { + $super(options); + this.transport = Ajax.getTransport(); + this.request(url); + }, + + request: function(url) { + this.url = url; + this.method = this.options.method; + var params = Object.isString(this.options.parameters) ? 
+ this.options.parameters : + Object.toQueryString(this.options.parameters); + + if (!['get', 'post'].include(this.method)) { + params += (params ? '&' : '') + "_method=" + this.method; + this.method = 'post'; + } + + if (params && this.method === 'get') { + this.url += (this.url.include('?') ? '&' : '?') + params; + } + + this.parameters = params.toQueryParams(); + + try { + var response = new Ajax.Response(this); + if (this.options.onCreate) this.options.onCreate(response); + Ajax.Responders.dispatch('onCreate', this, response); + + this.transport.open(this.method.toUpperCase(), this.url, + this.options.asynchronous); + + if (this.options.asynchronous) this.respondToReadyState.bind(this).defer(1); + + this.transport.onreadystatechange = this.onStateChange.bind(this); + this.setRequestHeaders(); + + this.body = this.method == 'post' ? (this.options.postBody || params) : null; + this.transport.send(this.body); + + /* Force Firefox to handle ready state 4 for synchronous requests */ + if (!this.options.asynchronous && this.transport.overrideMimeType) + this.onStateChange(); + + } + catch (e) { + this.dispatchException(e); + } + }, + + onStateChange: function() { + var readyState = this.transport.readyState; + if (readyState > 1 && !((readyState == 4) && this._complete)) + this.respondToReadyState(this.transport.readyState); + }, + + setRequestHeaders: function() { + var headers = { + 'X-Requested-With': 'XMLHttpRequest', + 'X-Prototype-Version': Prototype.Version, + 'Accept': 'text/javascript, text/html, application/xml, text/xml, */*' + }; + + if (this.method == 'post') { + headers['Content-type'] = this.options.contentType + + (this.options.encoding ? '; charset=' + this.options.encoding : ''); + + /* Force "Connection: close" for older Mozilla browsers to work + * around a bug where XMLHttpRequest sends an incorrect + * Content-length header. See Mozilla Bugzilla #246651. 
+ */ + if (this.transport.overrideMimeType && + (navigator.userAgent.match(/Gecko\/(\d{4})/) || [0,2005])[1] < 2005) + headers['Connection'] = 'close'; + } + + if (typeof this.options.requestHeaders == 'object') { + var extras = this.options.requestHeaders; + + if (Object.isFunction(extras.push)) + for (var i = 0, length = extras.length; i < length; i += 2) + headers[extras[i]] = extras[i+1]; + else + $H(extras).each(function(pair) { headers[pair.key] = pair.value }); + } + + for (var name in headers) + if (headers[name] != null) + this.transport.setRequestHeader(name, headers[name]); + }, + + success: function() { + var status = this.getStatus(); + return !status || (status >= 200 && status < 300) || status == 304; + }, + + getStatus: function() { + try { + if (this.transport.status === 1223) return 204; + return this.transport.status || 0; + } catch (e) { return 0 } + }, + + respondToReadyState: function(readyState) { + var state = Ajax.Request.Events[readyState], response = new Ajax.Response(this); + + if (state == 'Complete') { + try { + this._complete = true; + (this.options['on' + response.status] + || this.options['on' + (this.success() ? 
'Success' : 'Failure')] + || Prototype.emptyFunction)(response, response.headerJSON); + } catch (e) { + this.dispatchException(e); + } + + var contentType = response.getHeader('Content-type'); + if (this.options.evalJS == 'force' + || (this.options.evalJS && this.isSameOrigin() && contentType + && contentType.match(/^\s*(text|application)\/(x-)?(java|ecma)script(;.*)?\s*$/i))) + this.evalResponse(); + } + + try { + (this.options['on' + state] || Prototype.emptyFunction)(response, response.headerJSON); + Ajax.Responders.dispatch('on' + state, this, response, response.headerJSON); + } catch (e) { + this.dispatchException(e); + } + + if (state == 'Complete') { + this.transport.onreadystatechange = Prototype.emptyFunction; + } + }, + + isSameOrigin: function() { + var m = this.url.match(/^\s*https?:\/\/[^\/]*/); + return !m || (m[0] == '#{protocol}//#{domain}#{port}'.interpolate({ + protocol: location.protocol, + domain: document.domain, + port: location.port ? ':' + location.port : '' + })); + }, + + getHeader: function(name) { + try { + return this.transport.getResponseHeader(name) || null; + } catch (e) { return null; } + }, + + evalResponse: function() { + try { + return eval((this.transport.responseText || '').unfilterJSON()); + } catch (e) { + this.dispatchException(e); + } + }, + + dispatchException: function(exception) { + (this.options.onException || Prototype.emptyFunction)(this, exception); + Ajax.Responders.dispatch('onException', this, exception); + } +}); + +Ajax.Request.Events = + ['Uninitialized', 'Loading', 'Loaded', 'Interactive', 'Complete']; + + + + + + + + +Ajax.Response = Class.create({ + initialize: function(request){ + this.request = request; + var transport = this.transport = request.transport, + readyState = this.readyState = transport.readyState; + + if ((readyState > 2 && !Prototype.Browser.IE) || readyState == 4) { + this.status = this.getStatus(); + this.statusText = this.getStatusText(); + this.responseText = 
String.interpret(transport.responseText); + this.headerJSON = this._getHeaderJSON(); + } + + if (readyState == 4) { + var xml = transport.responseXML; + this.responseXML = Object.isUndefined(xml) ? null : xml; + this.responseJSON = this._getResponseJSON(); + } + }, + + status: 0, + + statusText: '', + + getStatus: Ajax.Request.prototype.getStatus, + + getStatusText: function() { + try { + return this.transport.statusText || ''; + } catch (e) { return '' } + }, + + getHeader: Ajax.Request.prototype.getHeader, + + getAllHeaders: function() { + try { + return this.getAllResponseHeaders(); + } catch (e) { return null } + }, + + getResponseHeader: function(name) { + return this.transport.getResponseHeader(name); + }, + + getAllResponseHeaders: function() { + return this.transport.getAllResponseHeaders(); + }, + + _getHeaderJSON: function() { + var json = this.getHeader('X-JSON'); + if (!json) return null; + + try { + json = decodeURIComponent(escape(json)); + } catch(e) { + } + + try { + return json.evalJSON(this.request.options.sanitizeJSON || + !this.request.isSameOrigin()); + } catch (e) { + this.request.dispatchException(e); + } + }, + + _getResponseJSON: function() { + var options = this.request.options; + if (!options.evalJSON || (options.evalJSON != 'force' && + !(this.getHeader('Content-type') || '').include('application/json')) || + this.responseText.blank()) + return null; + try { + return this.responseText.evalJSON(options.sanitizeJSON || + !this.request.isSameOrigin()); + } catch (e) { + this.request.dispatchException(e); + } + } +}); + +Ajax.Updater = Class.create(Ajax.Request, { + initialize: function($super, container, url, options) { + this.container = { + success: (container.success || container), + failure: (container.failure || (container.success ? 
null : container)) + }; + + options = Object.clone(options); + var onComplete = options.onComplete; + options.onComplete = (function(response, json) { + this.updateContent(response.responseText); + if (Object.isFunction(onComplete)) onComplete(response, json); + }).bind(this); + + $super(url, options); + }, + + updateContent: function(responseText) { + var receiver = this.container[this.success() ? 'success' : 'failure'], + options = this.options; + + if (!options.evalScripts) responseText = responseText.stripScripts(); + + if (receiver = $(receiver)) { + if (options.insertion) { + if (Object.isString(options.insertion)) { + var insertion = { }; insertion[options.insertion] = responseText; + receiver.insert(insertion); + } + else options.insertion(receiver, responseText); + } + else receiver.update(responseText); + } + } +}); + +Ajax.PeriodicalUpdater = Class.create(Ajax.Base, { + initialize: function($super, container, url, options) { + $super(options); + this.onComplete = this.options.onComplete; + + this.frequency = (this.options.frequency || 2); + this.decay = (this.options.decay || 1); + + this.updater = { }; + this.container = container; + this.url = url; + + this.start(); + }, + + start: function() { + this.options.onComplete = this.updateComplete.bind(this); + this.onTimerEvent(); + }, + + stop: function() { + this.updater.options.onComplete = undefined; + clearTimeout(this.timer); + (this.onComplete || Prototype.emptyFunction).apply(this, arguments); + }, + + updateComplete: function(response) { + if (this.options.decay) { + this.decay = (response.responseText == this.lastText ? 
this.decay * this.options.decay : 1);

      this.lastText = response.responseText;
    }
    this.timer = this.onTimerEvent.bind(this).delay(this.decay * this.frequency);
  },

  onTimerEvent: function() {
    this.updater = new Ajax.Updater(this.container, this.url, this.options);
  }
});

(function(GLOBAL) {

  var UNDEFINED;
  var SLICE = Array.prototype.slice;

  var DIV = document.createElement('div');

  function $(element) {
    if (arguments.length > 1) {
      for (var i = 0, elements = [], length = arguments.length; i < length; i++)
        elements.push($(arguments[i]));
      return elements;
    }

    if (Object.isString(element))
      element = document.getElementById(element);
    return Element.extend(element);
  }

  GLOBAL.$ = $;

  if (!GLOBAL.Node) GLOBAL.Node = {};

  if (!GLOBAL.Node.ELEMENT_NODE) {
    Object.extend(GLOBAL.Node, {
      ELEMENT_NODE: 1,
      ATTRIBUTE_NODE: 2,
      TEXT_NODE: 3,
      CDATA_SECTION_NODE: 4,
      ENTITY_REFERENCE_NODE: 5,
      ENTITY_NODE: 6,
      PROCESSING_INSTRUCTION_NODE: 7,
      COMMENT_NODE: 8,
      DOCUMENT_NODE: 9,
      DOCUMENT_TYPE_NODE: 10,
      DOCUMENT_FRAGMENT_NODE: 11,
      NOTATION_NODE: 12
    });
  }

  var ELEMENT_CACHE = {};

  function shouldUseCreationCache(tagName, attributes) {
    if (tagName === 'select') return false;
    if ('type' in attributes) return false;
    return true;
  }

  var HAS_EXTENDED_CREATE_ELEMENT_SYNTAX = (function(){
    try {
      var el = document.createElement('<input name="x">');
      return el.tagName.toLowerCase() === 'input' && el.name === 'x';
    }
    catch(err) {
      return false;
    }
  })();

  var oldElement = GLOBAL.Element;
  function Element(tagName, attributes) {
    attributes = attributes || {};
    tagName = tagName.toLowerCase();

    if (HAS_EXTENDED_CREATE_ELEMENT_SYNTAX && attributes.name) {
      tagName = '<' + tagName + ' name="' + attributes.name + '">';
      delete attributes.name;
      return Element.writeAttribute(document.createElement(tagName), attributes);
    }

    if (!ELEMENT_CACHE[tagName])
      ELEMENT_CACHE[tagName] =
Element.extend(document.createElement(tagName)); + + var node = shouldUseCreationCache(tagName, attributes) ? + ELEMENT_CACHE[tagName].cloneNode(false) : document.createElement(tagName); + + return Element.writeAttribute(node, attributes); + } + + GLOBAL.Element = Element; + + Object.extend(GLOBAL.Element, oldElement || {}); + if (oldElement) GLOBAL.Element.prototype = oldElement.prototype; + + Element.Methods = { ByTag: {}, Simulated: {} }; + + var methods = {}; + + var INSPECT_ATTRIBUTES = { id: 'id', className: 'class' }; + function inspect(element) { + element = $(element); + var result = '<' + element.tagName.toLowerCase(); + + var attribute, value; + for (var property in INSPECT_ATTRIBUTES) { + attribute = INSPECT_ATTRIBUTES[property]; + value = (element[property] || '').toString(); + if (value) result += ' ' + attribute + '=' + value.inspect(true); + } + + return result + '>'; + } + + methods.inspect = inspect; + + + function visible(element) { + return $(element).getStyle('display') !== 'none'; + } + + function toggle(element, bool) { + element = $(element); + if (typeof bool !== 'boolean') + bool = !Element.visible(element); + Element[bool ? 
'show' : 'hide'](element);

    return element;
  }

  function hide(element) {
    element = $(element);
    element.style.display = 'none';
    return element;
  }

  function show(element) {
    element = $(element);
    element.style.display = '';
    return element;
  }

  Object.extend(methods, {
    visible: visible,
    toggle: toggle,
    hide: hide,
    show: show
  });

  function remove(element) {
    element = $(element);
    element.parentNode.removeChild(element);
    return element;
  }

  var SELECT_ELEMENT_INNERHTML_BUGGY = (function(){
    var el = document.createElement("select"),
        isBuggy = true;
    el.innerHTML = "<select><option value=\"test\">test</option></select>";
    if (el.options && el.options[0]) {
      isBuggy = el.options[0].nodeName.toUpperCase() !== "OPTION";
    }
    el = null;
    return isBuggy;
  })();

  var TABLE_ELEMENT_INNERHTML_BUGGY = (function(){
    try {
      var el = document.createElement("table");
      if (el && el.tBodies) {
        el.innerHTML = "<table><tbody><tr><td>test</td></tr></tbody></table>";
        var isBuggy = typeof el.tBodies[0] == "undefined";
        el = null;
        return isBuggy;
      }
    } catch (e) {
      return true;
    }
  })();

  var LINK_ELEMENT_INNERHTML_BUGGY = (function() {
    try {
      var el = document.createElement('div');
      el.innerHTML = "<link>";
      var isBuggy = (el.childNodes.length === 0);
      el = null;
      return isBuggy;
    } catch(e) {
      return true;
    }
  })();

  var ANY_INNERHTML_BUGGY = SELECT_ELEMENT_INNERHTML_BUGGY ||
    TABLE_ELEMENT_INNERHTML_BUGGY || LINK_ELEMENT_INNERHTML_BUGGY;

  var SCRIPT_ELEMENT_REJECTS_TEXTNODE_APPENDING = (function () {
    var s = document.createElement("script"),
        isBuggy = false;
    try {
      s.appendChild(document.createTextNode(""));
      isBuggy = !s.firstChild ||
        s.firstChild && s.firstChild.nodeType !== 3;
    } catch (e) {
      isBuggy = true;
    }
    s = null;
    return isBuggy;
  })();

  function update(element, content) {
    element = $(element);

    var descendants = element.getElementsByTagName('*'),
        i = descendants.length;
    while (i--) purgeElement(descendants[i]);

    if (content && content.toElement)
content = content.toElement();

    if (Object.isElement(content))
      return element.update().insert(content);

    content = Object.toHTML(content);
    var tagName = element.tagName.toUpperCase();

    if (tagName === 'SCRIPT' && SCRIPT_ELEMENT_REJECTS_TEXTNODE_APPENDING) {
      element.text = content;
      return element;
    }

    if (ANY_INNERHTML_BUGGY) {
      if (tagName in INSERTION_TRANSLATIONS.tags) {
        while (element.firstChild)
          element.removeChild(element.firstChild);

        var nodes = getContentFromAnonymousElement(tagName, content.stripScripts());
        for (var i = 0, node; node = nodes[i]; i++)
          element.appendChild(node);

      } else if (LINK_ELEMENT_INNERHTML_BUGGY && Object.isString(content) && content.indexOf('<link') > -1) {
        while (element.firstChild)
          element.removeChild(element.firstChild);

        var nodes = getContentFromAnonymousElement(tagName,
         content.stripScripts(), true);

        for (var i = 0, node; node = nodes[i]; i++)
          element.appendChild(node);
      } else {
        element.innerHTML = content.stripScripts();
      }
    } else {
      element.innerHTML = content.stripScripts();
    }

    content.evalScripts.bind(content).defer();
    return element;
  }

  function replace(element, content) {
    element = $(element);

    if (content && content.toElement) {
      content = content.toElement();
    } else if (!Object.isElement(content)) {
      content = Object.toHTML(content);
      var range = element.ownerDocument.createRange();
      range.selectNode(element);
      content.evalScripts.bind(content).defer();
      content = range.createContextualFragment(content.stripScripts());
    }

    element.parentNode.replaceChild(content, element);
    return element;
  }

  var INSERTION_TRANSLATIONS = {
    before: function(element, node) {
      element.parentNode.insertBefore(node, element);
    },
    top: function(element, node) {
      element.insertBefore(node, element.firstChild);
    },
    bottom: function(element, node) {
      element.appendChild(node);
    },
    after: function(element, node) {
element.parentNode.insertBefore(node, element.nextSibling);
    },

    tags: {
      TABLE:  ['<table>',                '</table>',                   1],
      TBODY:  ['<table><tbody>',         '</tbody></table>',           2],
      TR:     ['<table><tbody><tr>',     '</tr></tbody></table>',      3],
      TD:     ['<table><tbody><tr><td>', '</td></tr></tbody></table>', 4],
      SELECT: ['<select>',               '</select>',                  1]
    }
  };

  var tags = INSERTION_TRANSLATIONS.tags;

  Object.extend(tags, {
    THEAD: tags.TBODY,
    TFOOT: tags.TBODY,
    TH:    tags.TD
  });

  function replace_IE(element, content) {
    element = $(element);
    if (content && content.toElement)
      content = content.toElement();
    if (Object.isElement(content)) {
      element.parentNode.replaceChild(content, element);
      return element;
    }

    content = Object.toHTML(content);
    var parent = element.parentNode, tagName = parent.tagName.toUpperCase();

    if (tagName in INSERTION_TRANSLATIONS.tags) {
      var nextSibling = Element.next(element);
      var fragments = getContentFromAnonymousElement(
       tagName, content.stripScripts());

      parent.removeChild(element);

      var iterator;
      if (nextSibling)
        iterator = function(node) { parent.insertBefore(node, nextSibling) };
      else
        iterator = function(node) { parent.appendChild(node); }

      fragments.each(iterator);
    } else {
      element.outerHTML = content.stripScripts();
    }

    content.evalScripts.bind(content).defer();
    return element;
  }

  if ('outerHTML' in document.documentElement)
    replace = replace_IE;

  function isContent(content) {
    if (Object.isUndefined(content) || content === null) return false;

    if (Object.isString(content) || Object.isNumber(content)) return true;
    if (Object.isElement(content)) return true;
    if (content.toElement || content.toHTML) return true;

    return false;
  }

  function insertContentAt(element, content, position) {
    position = position.toLowerCase();
    var method = INSERTION_TRANSLATIONS[position];

    if (content && content.toElement) content = content.toElement();
    if (Object.isElement(content)) {
      method(element, content);
      return element;
    }

    content = Object.toHTML(content);
    var tagName = ((position === 'before' || position === 'after') ?
element.parentNode : element).tagName.toUpperCase();

    var childNodes = getContentFromAnonymousElement(tagName, content.stripScripts());

    if (position === 'top' || position === 'after') childNodes.reverse();

    for (var i = 0, node; node = childNodes[i]; i++)
      method(element, node);

    content.evalScripts.bind(content).defer();
  }

  function insert(element, insertions) {
    element = $(element);

    if (isContent(insertions))
      insertions = { bottom: insertions };

    for (var position in insertions)
      insertContentAt(element, insertions[position], position);

    return element;
  }

  function wrap(element, wrapper, attributes) {
    element = $(element);

    if (Object.isElement(wrapper)) {
      $(wrapper).writeAttribute(attributes || {});
    } else if (Object.isString(wrapper)) {
      wrapper = new Element(wrapper, attributes);
    } else {
      wrapper = new Element('div', wrapper);
    }

    if (element.parentNode)
      element.parentNode.replaceChild(wrapper, element);

    wrapper.appendChild(element);

    return wrapper;
  }

  function cleanWhitespace(element) {
    element = $(element);
    var node = element.firstChild;

    while (node) {
      var nextNode = node.nextSibling;
      if (node.nodeType === Node.TEXT_NODE && !/\S/.test(node.nodeValue))
        element.removeChild(node);
      node = nextNode;
    }
    return element;
  }

  function empty(element) {
    return $(element).innerHTML.blank();
  }

  function getContentFromAnonymousElement(tagName, html, force) {
    var t = INSERTION_TRANSLATIONS.tags[tagName], div = DIV;

    var workaround = !!t;
    if (!workaround && force) {
      workaround = true;
      t = ['', '', 0];
    }

    if (workaround) {
      div.innerHTML = '&nbsp;' + t[0] + html + t[1];
      div.removeChild(div.firstChild);
      for (var i = t[2]; i--; )
        div = div.firstChild;
    } else {
      div.innerHTML = html;
    }

    return $A(div.childNodes);
  }

  function clone(element, deep) {
    if (!(element = $(element))) return;
    var clone = element.cloneNode(deep);
    if
(!HAS_UNIQUE_ID_PROPERTY) { + clone._prototypeUID = UNDEFINED; + if (deep) { + var descendants = Element.select(clone, '*'), + i = descendants.length; + while (i--) + descendants[i]._prototypeUID = UNDEFINED; + } + } + return Element.extend(clone); + } + + function purgeElement(element) { + var uid = getUniqueElementID(element); + if (uid) { + Element.stopObserving(element); + if (!HAS_UNIQUE_ID_PROPERTY) + element._prototypeUID = UNDEFINED; + delete Element.Storage[uid]; + } + } + + function purgeCollection(elements) { + var i = elements.length; + while (i--) + purgeElement(elements[i]); + } + + function purgeCollection_IE(elements) { + var i = elements.length, element, uid; + while (i--) { + element = elements[i]; + uid = getUniqueElementID(element); + delete Element.Storage[uid]; + delete Event.cache[uid]; + } + } + + if (HAS_UNIQUE_ID_PROPERTY) { + purgeCollection = purgeCollection_IE; + } + + + function purge(element) { + if (!(element = $(element))) return; + purgeElement(element); + + var descendants = element.getElementsByTagName('*'), + i = descendants.length; + + while (i--) purgeElement(descendants[i]); + + return null; + } + + Object.extend(methods, { + remove: remove, + update: update, + replace: replace, + insert: insert, + wrap: wrap, + cleanWhitespace: cleanWhitespace, + empty: empty, + clone: clone, + purge: purge + }); + + + + function recursivelyCollect(element, property, maximumLength) { + element = $(element); + maximumLength = maximumLength || -1; + var elements = []; + + while (element = element[property]) { + if (element.nodeType === Node.ELEMENT_NODE) + elements.push(Element.extend(element)); + + if (elements.length === maximumLength) break; + } + + return elements; + } + + + function ancestors(element) { + return recursivelyCollect(element, 'parentNode'); + } + + function descendants(element) { + return Element.select(element, '*'); + } + + function firstDescendant(element) { + element = $(element).firstChild; + while (element && 
element.nodeType !== Node.ELEMENT_NODE) + element = element.nextSibling; + + return $(element); + } + + function immediateDescendants(element) { + var results = [], child = $(element).firstChild; + + while (child) { + if (child.nodeType === Node.ELEMENT_NODE) + results.push(Element.extend(child)); + + child = child.nextSibling; + } + + return results; + } + + function previousSiblings(element) { + return recursivelyCollect(element, 'previousSibling'); + } + + function nextSiblings(element) { + return recursivelyCollect(element, 'nextSibling'); + } + + function siblings(element) { + element = $(element); + var previous = previousSiblings(element), + next = nextSiblings(element); + return previous.reverse().concat(next); + } + + function match(element, selector) { + element = $(element); + + if (Object.isString(selector)) + return Prototype.Selector.match(element, selector); + + return selector.match(element); + } + + + function _recursivelyFind(element, property, expression, index) { + element = $(element), expression = expression || 0, index = index || 0; + if (Object.isNumber(expression)) { + index = expression, expression = null; + } + + while (element = element[property]) { + if (element.nodeType !== 1) continue; + if (expression && !Prototype.Selector.match(element, expression)) + continue; + if (--index >= 0) continue; + + return Element.extend(element); + } + } + + + function up(element, expression, index) { + element = $(element); + + if (arguments.length === 1) return $(element.parentNode); + return _recursivelyFind(element, 'parentNode', expression, index); + } + + function down(element, expression, index) { + if (arguments.length === 1) return firstDescendant(element); + element = $(element), expression = expression || 0, index = index || 0; + + if (Object.isNumber(expression)) + index = expression, expression = '*'; + + var node = Prototype.Selector.select(expression, element)[index]; + return Element.extend(node); + } + + function previous(element, 
expression, index) { + return _recursivelyFind(element, 'previousSibling', expression, index); + } + + function next(element, expression, index) { + return _recursivelyFind(element, 'nextSibling', expression, index); + } + + function select(element) { + element = $(element); + var expressions = SLICE.call(arguments, 1).join(', '); + return Prototype.Selector.select(expressions, element); + } + + function adjacent(element) { + element = $(element); + var expressions = SLICE.call(arguments, 1).join(', '); + var siblings = Element.siblings(element), results = []; + for (var i = 0, sibling; sibling = siblings[i]; i++) { + if (Prototype.Selector.match(sibling, expressions)) + results.push(sibling); + } + + return results; + } + + function descendantOf_DOM(element, ancestor) { + element = $(element), ancestor = $(ancestor); + if (!element || !ancestor) return false; + while (element = element.parentNode) + if (element === ancestor) return true; + return false; + } + + function descendantOf_contains(element, ancestor) { + element = $(element), ancestor = $(ancestor); + if (!element || !ancestor) return false; + if (!ancestor.contains) return descendantOf_DOM(element, ancestor); + return ancestor.contains(element) && ancestor !== element; + } + + function descendantOf_compareDocumentPosition(element, ancestor) { + element = $(element), ancestor = $(ancestor); + if (!element || !ancestor) return false; + return (element.compareDocumentPosition(ancestor) & 8) === 8; + } + + var descendantOf; + if (DIV.compareDocumentPosition) { + descendantOf = descendantOf_compareDocumentPosition; + } else if (DIV.contains) { + descendantOf = descendantOf_contains; + } else { + descendantOf = descendantOf_DOM; + } + + + Object.extend(methods, { + recursivelyCollect: recursivelyCollect, + ancestors: ancestors, + descendants: descendants, + firstDescendant: firstDescendant, + immediateDescendants: immediateDescendants, + previousSiblings: previousSiblings, + nextSiblings: nextSiblings, + 
siblings: siblings, + match: match, + up: up, + down: down, + previous: previous, + next: next, + select: select, + adjacent: adjacent, + descendantOf: descendantOf, + + getElementsBySelector: select, + + childElements: immediateDescendants + }); + + + var idCounter = 1; + function identify(element) { + element = $(element); + var id = Element.readAttribute(element, 'id'); + if (id) return id; + + do { id = 'anonymous_element_' + idCounter++ } while ($(id)); + + Element.writeAttribute(element, 'id', id); + return id; + } + + + function readAttribute(element, name) { + return $(element).getAttribute(name); + } + + function readAttribute_IE(element, name) { + element = $(element); + + var table = ATTRIBUTE_TRANSLATIONS.read; + if (table.values[name]) + return table.values[name](element, name); + + if (table.names[name]) name = table.names[name]; + + if (name.include(':')) { + if (!element.attributes || !element.attributes[name]) return null; + return element.attributes[name].value; + } + + return element.getAttribute(name); + } + + function readAttribute_Opera(element, name) { + if (name === 'title') return element.title; + return element.getAttribute(name); + } + + var PROBLEMATIC_ATTRIBUTE_READING = (function() { + DIV.setAttribute('onclick', []); + var value = DIV.getAttribute('onclick'); + var isFunction = Object.isArray(value); + DIV.removeAttribute('onclick'); + return isFunction; + })(); + + if (PROBLEMATIC_ATTRIBUTE_READING) { + readAttribute = readAttribute_IE; + } else if (Prototype.Browser.Opera) { + readAttribute = readAttribute_Opera; + } + + + function writeAttribute(element, name, value) { + element = $(element); + var attributes = {}, table = ATTRIBUTE_TRANSLATIONS.write; + + if (typeof name === 'object') { + attributes = name; + } else { + attributes[name] = Object.isUndefined(value) ? 
true : value;
    }

    for (var attr in attributes) {
      name = table.names[attr] || attr;
      value = attributes[attr];
      if (table.values[attr]) {
        value = table.values[attr](element, value);
        if (Object.isUndefined(value)) continue;
      }
      if (value === false || value === null)
        element.removeAttribute(name);
      else if (value === true)
        element.setAttribute(name, name);
      else element.setAttribute(name, value);
    }

    return element;
  }

  var PROBLEMATIC_HAS_ATTRIBUTE_WITH_CHECKBOXES = (function () {
    if (!HAS_EXTENDED_CREATE_ELEMENT_SYNTAX) {
      return false;
    }
    var checkbox = document.createElement('<input type="checkbox">');
    checkbox.checked = true;
    var node = checkbox.getAttributeNode('checked');
    return !node || !node.specified;
  })();

  function hasAttribute(element, attribute) {
    attribute = ATTRIBUTE_TRANSLATIONS.has[attribute] || attribute;
    var node = $(element).getAttributeNode(attribute);
    return !!(node && node.specified);
  }

  function hasAttribute_IE(element, attribute) {
    if (attribute === 'checked') {
      return element.checked;
    }
    return hasAttribute(element, attribute);
  }

  GLOBAL.Element.Methods.Simulated.hasAttribute =
    PROBLEMATIC_HAS_ATTRIBUTE_WITH_CHECKBOXES ?
    hasAttribute_IE : hasAttribute;

  function classNames(element) {
    return new Element.ClassNames(element);
  }

  var regExpCache = {};
  function getRegExpForClassName(className) {
    if (regExpCache[className]) return regExpCache[className];

    var re = new RegExp("(^|\\s+)" + className + "(\\s+|$)");
    regExpCache[className] = re;
    return re;
  }

  function hasClassName(element, className) {
    if (!(element = $(element))) return;

    var elementClassName = element.className;

    if (elementClassName.length === 0) return false;
    if (elementClassName === className) return true;

    return getRegExpForClassName(className).test(elementClassName);
  }

  function addClassName(element, className) {
    if (!(element = $(element))) return;

    if (!hasClassName(element, className))
      element.className += (element.className ? ' ' : '') + className;

    return element;
  }

  function removeClassName(element, className) {
    if (!(element = $(element))) return;

    element.className = element.className.replace(
     getRegExpForClassName(className), ' ').strip();

    return element;
  }

  function toggleClassName(element, className, bool) {
    if (!(element = $(element))) return;

    if (Object.isUndefined(bool))
      bool = !hasClassName(element, className);

    var method = Element[bool ?
'addClassName' : 'removeClassName']; + return method(element, className); + } + + var ATTRIBUTE_TRANSLATIONS = {}; + + var classProp = 'className', forProp = 'for'; + + DIV.setAttribute(classProp, 'x'); + if (DIV.className !== 'x') { + DIV.setAttribute('class', 'x'); + if (DIV.className === 'x') + classProp = 'class'; + } + + var LABEL = document.createElement('label'); + LABEL.setAttribute(forProp, 'x'); + if (LABEL.htmlFor !== 'x') { + LABEL.setAttribute('htmlFor', 'x'); + if (LABEL.htmlFor === 'x') + forProp = 'htmlFor'; + } + LABEL = null; + + function _getAttr(element, attribute) { + return element.getAttribute(attribute); + } + + function _getAttr2(element, attribute) { + return element.getAttribute(attribute, 2); + } + + function _getAttrNode(element, attribute) { + var node = element.getAttributeNode(attribute); + return node ? node.value : ''; + } + + function _getFlag(element, attribute) { + return $(element).hasAttribute(attribute) ? attribute : null; + } + + DIV.onclick = Prototype.emptyFunction; + var onclickValue = DIV.getAttribute('onclick'); + + var _getEv; + + if (String(onclickValue).indexOf('{') > -1) { + _getEv = function(element, attribute) { + var value = element.getAttribute(attribute); + if (!value) return null; + value = value.toString(); + value = value.split('{')[1]; + value = value.split('}')[0]; + return value.strip(); + }; + } + else if (onclickValue === '') { + _getEv = function(element, attribute) { + var value = element.getAttribute(attribute); + if (!value) return null; + return value.strip(); + }; + } + + ATTRIBUTE_TRANSLATIONS.read = { + names: { + 'class': classProp, + 'className': classProp, + 'for': forProp, + 'htmlFor': forProp + }, + + values: { + style: function(element) { + return element.style.cssText.toLowerCase(); + }, + title: function(element) { + return element.title; + } + } + }; + + ATTRIBUTE_TRANSLATIONS.write = { + names: { + className: 'class', + htmlFor: 'for', + cellpadding: 'cellPadding', + cellspacing: 
'cellSpacing' + }, + + values: { + checked: function(element, value) { + value = !!value; + element.checked = value; + return value ? 'checked' : null; + }, + + style: function(element, value) { + element.style.cssText = value ? value : ''; + } + } + }; + + ATTRIBUTE_TRANSLATIONS.has = { names: {} }; + + Object.extend(ATTRIBUTE_TRANSLATIONS.write.names, + ATTRIBUTE_TRANSLATIONS.read.names); + + var CAMEL_CASED_ATTRIBUTE_NAMES = $w('colSpan rowSpan vAlign dateTime ' + + 'accessKey tabIndex encType maxLength readOnly longDesc frameBorder'); + + for (var i = 0, attr; attr = CAMEL_CASED_ATTRIBUTE_NAMES[i]; i++) { + ATTRIBUTE_TRANSLATIONS.write.names[attr.toLowerCase()] = attr; + ATTRIBUTE_TRANSLATIONS.has.names[attr.toLowerCase()] = attr; + } + + Object.extend(ATTRIBUTE_TRANSLATIONS.read.values, { + href: _getAttr2, + src: _getAttr2, + type: _getAttr, + action: _getAttrNode, + disabled: _getFlag, + checked: _getFlag, + readonly: _getFlag, + multiple: _getFlag, + onload: _getEv, + onunload: _getEv, + onclick: _getEv, + ondblclick: _getEv, + onmousedown: _getEv, + onmouseup: _getEv, + onmouseover: _getEv, + onmousemove: _getEv, + onmouseout: _getEv, + onfocus: _getEv, + onblur: _getEv, + onkeypress: _getEv, + onkeydown: _getEv, + onkeyup: _getEv, + onsubmit: _getEv, + onreset: _getEv, + onselect: _getEv, + onchange: _getEv + }); + + + Object.extend(methods, { + identify: identify, + readAttribute: readAttribute, + writeAttribute: writeAttribute, + classNames: classNames, + hasClassName: hasClassName, + addClassName: addClassName, + removeClassName: removeClassName, + toggleClassName: toggleClassName + }); + + + function normalizeStyleName(style) { + if (style === 'float' || style === 'styleFloat') + return 'cssFloat'; + return style.camelize(); + } + + function normalizeStyleName_IE(style) { + if (style === 'float' || style === 'cssFloat') + return 'styleFloat'; + return style.camelize(); + } + + function setStyle(element, styles) { + element = $(element); + var 
elementStyle = element.style, match; + + if (Object.isString(styles)) { + elementStyle.cssText += ';' + styles; + if (styles.include('opacity')) { + var opacity = styles.match(/opacity:\s*(\d?\.?\d*)/)[1]; + Element.setOpacity(element, opacity); + } + return element; + } + + for (var property in styles) { + if (property === 'opacity') { + Element.setOpacity(element, styles[property]); + } else { + var value = styles[property]; + if (property === 'float' || property === 'cssFloat') { + property = Object.isUndefined(elementStyle.styleFloat) ? + 'cssFloat' : 'styleFloat'; + } + elementStyle[property] = value; + } + } + + return element; + } + + + function getStyle(element, style) { + element = $(element); + style = normalizeStyleName(style); + + var value = element.style[style]; + if (!value || value === 'auto') { + var css = document.defaultView.getComputedStyle(element, null); + value = css ? css[style] : null; + } + + if (style === 'opacity') return value ? parseFloat(value) : 1.0; + return value === 'auto' ? null : value; + } + + function getStyle_Opera(element, style) { + switch (style) { + case 'height': case 'width': + if (!Element.visible(element)) return null; + + var dim = parseInt(getStyle(element, style), 10); + + if (dim !== element['offset' + style.capitalize()]) + return dim + 'px'; + + return Element.measure(element, style); + + default: return getStyle(element, style); + } + } + + function getStyle_IE(element, style) { + element = $(element); + style = normalizeStyleName_IE(style); + + var value = element.style[style]; + if (!value && element.currentStyle) { + value = element.currentStyle[style]; + } + + if (style === 'opacity') { + if (!STANDARD_CSS_OPACITY_SUPPORTED) + return getOpacity_IE(element); + else return value ? 
parseFloat(value) : 1.0; + } + + if (value === 'auto') { + if ((style === 'width' || style === 'height') && Element.visible(element)) + return Element.measure(element, style) + 'px'; + return null; + } + + return value; + } + + function stripAlphaFromFilter_IE(filter) { + return (filter || '').replace(/alpha\([^\)]*\)/gi, ''); + } + + function hasLayout_IE(element) { + if (!element.currentStyle || !element.currentStyle.hasLayout) + element.style.zoom = 1; + return element; + } + + var STANDARD_CSS_OPACITY_SUPPORTED = (function() { + DIV.style.cssText = "opacity:.55"; + return /^0.55/.test(DIV.style.opacity); + })(); + + function setOpacity(element, value) { + element = $(element); + if (value == 1 || value === '') value = ''; + else if (value < 0.00001) value = 0; + element.style.opacity = value; + return element; + } + + function setOpacity_IE(element, value) { + if (STANDARD_CSS_OPACITY_SUPPORTED) + return setOpacity(element, value); + + element = hasLayout_IE($(element)); + var filter = Element.getStyle(element, 'filter'), + style = element.style; + + if (value == 1 || value === '') { + filter = stripAlphaFromFilter_IE(filter); + if (filter) style.filter = filter; + else style.removeAttribute('filter'); + return element; + } + + if (value < 0.00001) value = 0; + + style.filter = stripAlphaFromFilter_IE(filter) + + ' alpha(opacity=' + (value * 100) + ')'; + + return element; + } + + + function getOpacity(element) { + return Element.getStyle(element, 'opacity'); + } + + function getOpacity_IE(element) { + if (STANDARD_CSS_OPACITY_SUPPORTED) + return getOpacity(element); + + var filter = Element.getStyle(element, 'filter'); + if (filter.length === 0) return 1.0; + var match = (filter || '').match(/alpha\(opacity=(.*)\)/i); + if (match && match[1]) return parseFloat(match[1]) / 100; + return 1.0; + } + + + Object.extend(methods, { + setStyle: setStyle, + getStyle: getStyle, + setOpacity: setOpacity, + getOpacity: getOpacity + }); + + if ('styleFloat' in DIV.style) { 
    methods.getStyle = getStyle_IE;
    methods.setOpacity = setOpacity_IE;
    methods.getOpacity = getOpacity_IE;
  }

  var UID = 0;

  GLOBAL.Element.Storage = { UID: 1 };

  function getUniqueElementID(element) {
    if (element === window) return 0;

    if (typeof element._prototypeUID === 'undefined')
      element._prototypeUID = Element.Storage.UID++;
    return element._prototypeUID;
  }

  function getUniqueElementID_IE(element) {
    if (element === window) return 0;
    if (element == document) return 1;
    return element.uniqueID;
  }

  var HAS_UNIQUE_ID_PROPERTY = ('uniqueID' in DIV);
  if (HAS_UNIQUE_ID_PROPERTY)
    getUniqueElementID = getUniqueElementID_IE;

  function getStorage(element) {
    if (!(element = $(element))) return;

    var uid = getUniqueElementID(element);

    if (!Element.Storage[uid])
      Element.Storage[uid] = $H();

    return Element.Storage[uid];
  }

  function store(element, key, value) {
    if (!(element = $(element))) return;
    var storage = getStorage(element);
    if (arguments.length === 2) {
      storage.update(key);
    } else {
      storage.set(key, value);
    }
    return element;
  }

  function retrieve(element, key, defaultValue) {
    if (!(element = $(element))) return;
    var storage = getStorage(element), value = storage.get(key);

    if (Object.isUndefined(value)) {
      storage.set(key, defaultValue);
      value = defaultValue;
    }

    return value;
  }

  Object.extend(methods, {
    getStorage: getStorage,
    store: store,
    retrieve: retrieve
  });

  var Methods = {}, ByTag = Element.Methods.ByTag,
      F = Prototype.BrowserFeatures;

  if (!F.ElementExtensions && ('__proto__' in DIV)) {
    GLOBAL.HTMLElement = {};
    GLOBAL.HTMLElement.prototype = DIV['__proto__'];
    F.ElementExtensions = true;
  }

  function checkElementPrototypeDeficiency(tagName) {
    if (typeof window.Element === 'undefined') return false;
    if (!HAS_EXTENDED_CREATE_ELEMENT_SYNTAX) return false;
    var proto = window.Element.prototype;
    if (proto) {
      var id = '_' +
(Math.random() + '').slice(2), + el = document.createElement(tagName); + proto[id] = 'x'; + var isBuggy = (el[id] !== 'x'); + delete proto[id]; + el = null; + return isBuggy; + } + + return false; + } + + var HTMLOBJECTELEMENT_PROTOTYPE_BUGGY = + checkElementPrototypeDeficiency('object'); + + function extendElementWith(element, methods) { + for (var property in methods) { + var value = methods[property]; + if (Object.isFunction(value) && !(property in element)) + element[property] = value.methodize(); + } + } + + var EXTENDED = {}; + function elementIsExtended(element) { + var uid = getUniqueElementID(element); + return (uid in EXTENDED); + } + + function extend(element) { + if (!element || elementIsExtended(element)) return element; + if (element.nodeType !== Node.ELEMENT_NODE || element == window) + return element; + + var methods = Object.clone(Methods), + tagName = element.tagName.toUpperCase(); + + if (ByTag[tagName]) Object.extend(methods, ByTag[tagName]); + + extendElementWith(element, methods); + EXTENDED[getUniqueElementID(element)] = true; + return element; + } + + function extend_IE8(element) { + if (!element || elementIsExtended(element)) return element; + + var t = element.tagName; + if (t && (/^(?:object|applet|embed)$/i.test(t))) { + extendElementWith(element, Element.Methods); + extendElementWith(element, Element.Methods.Simulated); + extendElementWith(element, Element.Methods.ByTag[t.toUpperCase()]); + } + + return element; + } + + if (F.SpecificElementExtensions) { + extend = HTMLOBJECTELEMENT_PROTOTYPE_BUGGY ? 
extend_IE8 : Prototype.K; + } + + function addMethodsToTagName(tagName, methods) { + tagName = tagName.toUpperCase(); + if (!ByTag[tagName]) ByTag[tagName] = {}; + Object.extend(ByTag[tagName], methods); + } + + function mergeMethods(destination, methods, onlyIfAbsent) { + if (Object.isUndefined(onlyIfAbsent)) onlyIfAbsent = false; + for (var property in methods) { + var value = methods[property]; + if (!Object.isFunction(value)) continue; + if (!onlyIfAbsent || !(property in destination)) + destination[property] = value.methodize(); + } + } + + function findDOMClass(tagName) { + var klass; + var trans = { + "OPTGROUP": "OptGroup", "TEXTAREA": "TextArea", "P": "Paragraph", + "FIELDSET": "FieldSet", "UL": "UList", "OL": "OList", "DL": "DList", + "DIR": "Directory", "H1": "Heading", "H2": "Heading", "H3": "Heading", + "H4": "Heading", "H5": "Heading", "H6": "Heading", "Q": "Quote", + "INS": "Mod", "DEL": "Mod", "A": "Anchor", "IMG": "Image", "CAPTION": + "TableCaption", "COL": "TableCol", "COLGROUP": "TableCol", "THEAD": + "TableSection", "TFOOT": "TableSection", "TBODY": "TableSection", "TR": + "TableRow", "TH": "TableCell", "TD": "TableCell", "FRAMESET": + "FrameSet", "IFRAME": "IFrame" + }; + if (trans[tagName]) klass = 'HTML' + trans[tagName] + 'Element'; + if (window[klass]) return window[klass]; + klass = 'HTML' + tagName + 'Element'; + if (window[klass]) return window[klass]; + klass = 'HTML' + tagName.capitalize() + 'Element'; + if (window[klass]) return window[klass]; + + var element = document.createElement(tagName), + proto = element['__proto__'] || element.constructor.prototype; + + element = null; + return proto; + } + + function addMethods(methods) { + if (arguments.length === 0) addFormMethods(); + + if (arguments.length === 2) { + var tagName = methods; + methods = arguments[1]; + } + + if (!tagName) { + Object.extend(Element.Methods, methods || {}); + } else { + if (Object.isArray(tagName)) { + for (var i = 0, tag; tag = tagName[i]; i++) + 
addMethodsToTagName(tag, methods); + } else { + addMethodsToTagName(tagName, methods); + } + } + + var ELEMENT_PROTOTYPE = window.HTMLElement ? HTMLElement.prototype : + Element.prototype; + + if (F.ElementExtensions) { + mergeMethods(ELEMENT_PROTOTYPE, Element.Methods); + mergeMethods(ELEMENT_PROTOTYPE, Element.Methods.Simulated, true); + } + + if (F.SpecificElementExtensions) { + for (var tag in Element.Methods.ByTag) { + var klass = findDOMClass(tag); + if (Object.isUndefined(klass)) continue; + mergeMethods(klass.prototype, ByTag[tag]); + } + } + + Object.extend(Element, Element.Methods); + Object.extend(Element, Element.Methods.Simulated); + delete Element.ByTag; + delete Element.Simulated; + + Element.extend.refresh(); + + ELEMENT_CACHE = {}; + } + + Object.extend(GLOBAL.Element, { + extend: extend, + addMethods: addMethods + }); + + if (extend === Prototype.K) { + GLOBAL.Element.extend.refresh = Prototype.emptyFunction; + } else { + GLOBAL.Element.extend.refresh = function() { + if (Prototype.BrowserFeatures.ElementExtensions) return; + Object.extend(Methods, Element.Methods); + Object.extend(Methods, Element.Methods.Simulated); + + EXTENDED = {}; + }; + } + + function addFormMethods() { + Object.extend(Form, Form.Methods); + Object.extend(Form.Element, Form.Element.Methods); + Object.extend(Element.Methods.ByTag, { + "FORM": Object.clone(Form.Methods), + "INPUT": Object.clone(Form.Element.Methods), + "SELECT": Object.clone(Form.Element.Methods), + "TEXTAREA": Object.clone(Form.Element.Methods), + "BUTTON": Object.clone(Form.Element.Methods) + }); + } + + Element.addMethods(methods); + + function destroyCache_IE() { + DIV = null; + ELEMENT_CACHE = null; + } + + if (window.attachEvent) + window.attachEvent('onunload', destroyCache_IE); + +})(this); +(function() { + + function toDecimal(pctString) { + var match = pctString.match(/^(\d+)%?$/i); + if (!match) return null; + return (Number(match[1]) / 100); + } + + function getRawStyle(element, style) { + element 
= $(element); + + var value = element.style[style]; + if (!value || value === 'auto') { + var css = document.defaultView.getComputedStyle(element, null); + value = css ? css[style] : null; + } + + if (style === 'opacity') return value ? parseFloat(value) : 1.0; + return value === 'auto' ? null : value; + } + + function getRawStyle_IE(element, style) { + var value = element.style[style]; + if (!value && element.currentStyle) { + value = element.currentStyle[style]; + } + return value; + } + + function getContentWidth(element, context) { + var boxWidth = element.offsetWidth; + + var bl = getPixelValue(element, 'borderLeftWidth', context) || 0; + var br = getPixelValue(element, 'borderRightWidth', context) || 0; + var pl = getPixelValue(element, 'paddingLeft', context) || 0; + var pr = getPixelValue(element, 'paddingRight', context) || 0; + + return boxWidth - bl - br - pl - pr; + } + + if (!Object.isUndefined(document.documentElement.currentStyle) && !Prototype.Browser.Opera) { + getRawStyle = getRawStyle_IE; + } + + + function getPixelValue(value, property, context) { + var element = null; + if (Object.isElement(value)) { + element = value; + value = getRawStyle(element, property); + } + + if (value === null || Object.isUndefined(value)) { + return null; + } + + if ((/^(?:-)?\d+(\.\d+)?(px)?$/i).test(value)) { + return window.parseFloat(value); + } + + var isPercentage = value.include('%'), isViewport = (context === document.viewport); + + if (/\d/.test(value) && element && element.runtimeStyle && !(isPercentage && isViewport)) { + var style = element.style.left, rStyle = element.runtimeStyle.left; + element.runtimeStyle.left = element.currentStyle.left; + element.style.left = value || 0; + value = element.style.pixelLeft; + element.style.left = style; + element.runtimeStyle.left = rStyle; + + return value; + } + + if (element && isPercentage) { + context = context || element.parentNode; + var decimal = toDecimal(value), whole = null; + + var isHorizontal = 
property.include('left') || property.include('right') || + property.include('width'); + + var isVertical = property.include('top') || property.include('bottom') || + property.include('height'); + + if (context === document.viewport) { + if (isHorizontal) { + whole = document.viewport.getWidth(); + } else if (isVertical) { + whole = document.viewport.getHeight(); + } + } else { + if (isHorizontal) { + whole = $(context).measure('width'); + } else if (isVertical) { + whole = $(context).measure('height'); + } + } + + return (whole === null) ? 0 : whole * decimal; + } + + return 0; + } + + function toCSSPixels(number) { + if (Object.isString(number) && number.endsWith('px')) + return number; + return number + 'px'; + } + + function isDisplayed(element) { + while (element && element.parentNode) { + var display = element.getStyle('display'); + if (display === 'none') { + return false; + } + element = $(element.parentNode); + } + return true; + } + + var hasLayout = Prototype.K; + if ('currentStyle' in document.documentElement) { + hasLayout = function(element) { + if (!element.currentStyle.hasLayout) { + element.style.zoom = 1; + } + return element; + }; + } + + function cssNameFor(key) { + if (key.include('border')) key = key + '-width'; + return key.camelize(); + } + + Element.Layout = Class.create(Hash, { + initialize: function($super, element, preCompute) { + $super(); + this.element = $(element); + + Element.Layout.PROPERTIES.each( function(property) { + this._set(property, null); + }, this); + + if (preCompute) { + this._preComputing = true; + this._begin(); + Element.Layout.PROPERTIES.each( this._compute, this ); + this._end(); + this._preComputing = false; + } + }, + + _set: function(property, value) { + return Hash.prototype.set.call(this, property, value); + }, + + set: function(property, value) { + throw "Properties of Element.Layout are read-only."; + }, + + get: function($super, property) { + var value = $super(property); + return value === null ? 
this._compute(property) : value; + }, + + _begin: function() { + if (this._isPrepared()) return; + + var element = this.element; + if (isDisplayed(element)) { + this._setPrepared(true); + return; + } + + + var originalStyles = { + position: element.style.position || '', + width: element.style.width || '', + visibility: element.style.visibility || '', + display: element.style.display || '' + }; + + element.store('prototype_original_styles', originalStyles); + + var position = getRawStyle(element, 'position'), width = element.offsetWidth; + + if (width === 0 || width === null) { + element.style.display = 'block'; + width = element.offsetWidth; + } + + var context = (position === 'fixed') ? document.viewport : + element.parentNode; + + var tempStyles = { + visibility: 'hidden', + display: 'block' + }; + + if (position !== 'fixed') tempStyles.position = 'absolute'; + + element.setStyle(tempStyles); + + var positionedWidth = element.offsetWidth, newWidth; + if (width && (positionedWidth === width)) { + newWidth = getContentWidth(element, context); + } else if (position === 'absolute' || position === 'fixed') { + newWidth = getContentWidth(element, context); + } else { + var parent = element.parentNode, pLayout = $(parent).getLayout(); + + newWidth = pLayout.get('width') - + this.get('margin-left') - + this.get('border-left') - + this.get('padding-left') - + this.get('padding-right') - + this.get('border-right') - + this.get('margin-right'); + } + + element.setStyle({ width: newWidth + 'px' }); + + this._setPrepared(true); + }, + + _end: function() { + var element = this.element; + var originalStyles = element.retrieve('prototype_original_styles'); + element.store('prototype_original_styles', null); + element.setStyle(originalStyles); + this._setPrepared(false); + }, + + _compute: function(property) { + var COMPUTATIONS = Element.Layout.COMPUTATIONS; + if (!(property in COMPUTATIONS)) { + throw "Property not found."; + } + + return this._set(property, 
COMPUTATIONS[property].call(this, this.element));
+ },
+
+ _isPrepared: function() {
+ return this.element.retrieve('prototype_element_layout_prepared', false);
+ },
+
+ _setPrepared: function(bool) {
+ return this.element.store('prototype_element_layout_prepared', bool);
+ },
+
+ toObject: function() {
+ var args = $A(arguments);
+ var keys = (args.length === 0) ? Element.Layout.PROPERTIES :
+ args.join(' ').split(' ');
+ var obj = {};
+ keys.each( function(key) {
+ if (!Element.Layout.PROPERTIES.include(key)) return;
+ var value = this.get(key);
+ if (value != null) obj[key] = value;
+ }, this);
+ return obj;
+ },
+
+ toHash: function() {
+ var obj = this.toObject.apply(this, arguments);
+ return new Hash(obj);
+ },
+
+ toCSS: function() {
+ var args = $A(arguments);
+ var keys = (args.length === 0) ? Element.Layout.PROPERTIES :
+ args.join(' ').split(' ');
+ var css = {};
+
+ keys.each( function(key) {
+ if (!Element.Layout.PROPERTIES.include(key)) return;
+ if (Element.Layout.COMPOSITE_PROPERTIES.include(key)) return;
+
+ var value = this.get(key);
+ if (value != null) css[cssNameFor(key)] = value + 'px';
+ }, this);
+ return css;
+ },
+
+ inspect: function() {
+ return "#<Element.Layout>";
+ }
+ });
+
+ Object.extend(Element.Layout, {
+ PROPERTIES: $w('height width top left right bottom border-left border-right border-top border-bottom padding-left padding-right padding-top padding-bottom margin-top margin-bottom margin-left margin-right padding-box-width padding-box-height border-box-width border-box-height margin-box-width margin-box-height'),
+
+ COMPOSITE_PROPERTIES: $w('padding-box-width padding-box-height margin-box-width margin-box-height border-box-width border-box-height'),
+
+ COMPUTATIONS: {
+ 'height': function(element) {
+ if (!this._preComputing) this._begin();
+
+ var bHeight = this.get('border-box-height');
+ if (bHeight <= 0) {
+ if (!this._preComputing) this._end();
+ return 0;
+ }
+
+ var bTop = this.get('border-top'),
+ bBottom = this.get('border-bottom'); 
+ + var pTop = this.get('padding-top'), + pBottom = this.get('padding-bottom'); + + if (!this._preComputing) this._end(); + + return bHeight - bTop - bBottom - pTop - pBottom; + }, + + 'width': function(element) { + if (!this._preComputing) this._begin(); + + var bWidth = this.get('border-box-width'); + if (bWidth <= 0) { + if (!this._preComputing) this._end(); + return 0; + } + + var bLeft = this.get('border-left'), + bRight = this.get('border-right'); + + var pLeft = this.get('padding-left'), + pRight = this.get('padding-right'); + + if (!this._preComputing) this._end(); + return bWidth - bLeft - bRight - pLeft - pRight; + }, + + 'padding-box-height': function(element) { + var height = this.get('height'), + pTop = this.get('padding-top'), + pBottom = this.get('padding-bottom'); + + return height + pTop + pBottom; + }, + + 'padding-box-width': function(element) { + var width = this.get('width'), + pLeft = this.get('padding-left'), + pRight = this.get('padding-right'); + + return width + pLeft + pRight; + }, + + 'border-box-height': function(element) { + if (!this._preComputing) this._begin(); + var height = element.offsetHeight; + if (!this._preComputing) this._end(); + return height; + }, + + 'border-box-width': function(element) { + if (!this._preComputing) this._begin(); + var width = element.offsetWidth; + if (!this._preComputing) this._end(); + return width; + }, + + 'margin-box-height': function(element) { + var bHeight = this.get('border-box-height'), + mTop = this.get('margin-top'), + mBottom = this.get('margin-bottom'); + + if (bHeight <= 0) return 0; + + return bHeight + mTop + mBottom; + }, + + 'margin-box-width': function(element) { + var bWidth = this.get('border-box-width'), + mLeft = this.get('margin-left'), + mRight = this.get('margin-right'); + + if (bWidth <= 0) return 0; + + return bWidth + mLeft + mRight; + }, + + 'top': function(element) { + var offset = element.positionedOffset(); + return offset.top; + }, + + 'bottom': function(element) { + 
var offset = element.positionedOffset(), + parent = element.getOffsetParent(), + pHeight = parent.measure('height'); + + var mHeight = this.get('border-box-height'); + + return pHeight - mHeight - offset.top; + }, + + 'left': function(element) { + var offset = element.positionedOffset(); + return offset.left; + }, + + 'right': function(element) { + var offset = element.positionedOffset(), + parent = element.getOffsetParent(), + pWidth = parent.measure('width'); + + var mWidth = this.get('border-box-width'); + + return pWidth - mWidth - offset.left; + }, + + 'padding-top': function(element) { + return getPixelValue(element, 'paddingTop'); + }, + + 'padding-bottom': function(element) { + return getPixelValue(element, 'paddingBottom'); + }, + + 'padding-left': function(element) { + return getPixelValue(element, 'paddingLeft'); + }, + + 'padding-right': function(element) { + return getPixelValue(element, 'paddingRight'); + }, + + 'border-top': function(element) { + return getPixelValue(element, 'borderTopWidth'); + }, + + 'border-bottom': function(element) { + return getPixelValue(element, 'borderBottomWidth'); + }, + + 'border-left': function(element) { + return getPixelValue(element, 'borderLeftWidth'); + }, + + 'border-right': function(element) { + return getPixelValue(element, 'borderRightWidth'); + }, + + 'margin-top': function(element) { + return getPixelValue(element, 'marginTop'); + }, + + 'margin-bottom': function(element) { + return getPixelValue(element, 'marginBottom'); + }, + + 'margin-left': function(element) { + return getPixelValue(element, 'marginLeft'); + }, + + 'margin-right': function(element) { + return getPixelValue(element, 'marginRight'); + } + } + }); + + if ('getBoundingClientRect' in document.documentElement) { + Object.extend(Element.Layout.COMPUTATIONS, { + 'right': function(element) { + var parent = hasLayout(element.getOffsetParent()); + var rect = element.getBoundingClientRect(), + pRect = parent.getBoundingClientRect(); + + return 
(pRect.right - rect.right).round();
+ },
+
+ 'bottom': function(element) {
+ var parent = hasLayout(element.getOffsetParent());
+ var rect = element.getBoundingClientRect(),
+ pRect = parent.getBoundingClientRect();
+
+ return (pRect.bottom - rect.bottom).round();
+ }
+ });
+ }
+
+ Element.Offset = Class.create({
+ initialize: function(left, top) {
+ this.left = left.round();
+ this.top = top.round();
+
+ this[0] = this.left;
+ this[1] = this.top;
+ },
+
+ relativeTo: function(offset) {
+ return new Element.Offset(
+ this.left - offset.left,
+ this.top - offset.top
+ );
+ },
+
+ inspect: function() {
+ return "#<Element.Offset left: #{left} top: #{top}>".interpolate(this);
+ },
+
+ toString: function() {
+ return "[#{left}, #{top}]".interpolate(this);
+ },
+
+ toArray: function() {
+ return [this.left, this.top];
+ }
+ });
+
+ function getLayout(element, preCompute) {
+ return new Element.Layout(element, preCompute);
+ }
+
+ function measure(element, property) {
+ return $(element).getLayout().get(property);
+ }
+
+ function getHeight(element) {
+ return Element.getDimensions(element).height;
+ }
+
+ function getWidth(element) {
+ return Element.getDimensions(element).width;
+ }
+
+ function getDimensions(element) {
+ element = $(element);
+ var display = Element.getStyle(element, 'display');
+
+ if (display && display !== 'none') {
+ return { width: element.offsetWidth, height: element.offsetHeight };
+ }
+
+ var style = element.style;
+ var originalStyles = {
+ visibility: style.visibility,
+ position: style.position,
+ display: style.display
+ };
+
+ var newStyles = {
+ visibility: 'hidden',
+ display: 'block'
+ };
+
+ if (originalStyles.position !== 'fixed')
+ newStyles.position = 'absolute';
+
+ Element.setStyle(element, newStyles);
+
+ var dimensions = {
+ width: element.offsetWidth,
+ height: element.offsetHeight
+ };
+
+ Element.setStyle(element, originalStyles);
+
+ return dimensions;
+ }
+
+ function getOffsetParent(element) {
+ element = $(element);
+
+ function selfOrBody(element) {
+ return 
isHtml(element) ? $(document.body) : $(element); + } + + if (isDocument(element) || isDetached(element) || isBody(element) || isHtml(element)) + return $(document.body); + + var isInline = (Element.getStyle(element, 'display') === 'inline'); + if (!isInline && element.offsetParent) return selfOrBody(element.offsetParent); + + while ((element = element.parentNode) && element !== document.body) { + if (Element.getStyle(element, 'position') !== 'static') { + return selfOrBody(element); + } + } + + return $(document.body); + } + + + function cumulativeOffset(element) { + element = $(element); + var valueT = 0, valueL = 0; + if (element.parentNode) { + do { + valueT += element.offsetTop || 0; + valueL += element.offsetLeft || 0; + element = element.offsetParent; + } while (element); + } + return new Element.Offset(valueL, valueT); + } + + function positionedOffset(element) { + element = $(element); + + var layout = element.getLayout(); + + var valueT = 0, valueL = 0; + do { + valueT += element.offsetTop || 0; + valueL += element.offsetLeft || 0; + element = element.offsetParent; + if (element) { + if (isBody(element)) break; + var p = Element.getStyle(element, 'position'); + if (p !== 'static') break; + } + } while (element); + + valueL -= layout.get('margin-left'); + valueT -= layout.get('margin-top'); + + return new Element.Offset(valueL, valueT); + } + + function cumulativeScrollOffset(element) { + var valueT = 0, valueL = 0; + do { + if (element === document.body) { + var bodyScrollNode = document.documentElement || document.body.parentNode || document.body; + valueT += !Object.isUndefined(window.pageYOffset) ? window.pageYOffset : bodyScrollNode.scrollTop || 0; + valueL += !Object.isUndefined(window.pageXOffset) ? 
window.pageXOffset : bodyScrollNode.scrollLeft || 0; + break; + } else { + valueT += element.scrollTop || 0; + valueL += element.scrollLeft || 0; + element = element.parentNode; + } + } while (element); + return new Element.Offset(valueL, valueT); + } + + function viewportOffset(forElement) { + var valueT = 0, valueL = 0, docBody = document.body; + + forElement = $(forElement); + var element = forElement; + do { + valueT += element.offsetTop || 0; + valueL += element.offsetLeft || 0; + if (element.offsetParent == docBody && + Element.getStyle(element, 'position') == 'absolute') break; + } while (element = element.offsetParent); + + element = forElement; + do { + if (element != docBody) { + valueT -= element.scrollTop || 0; + valueL -= element.scrollLeft || 0; + } + } while (element = element.parentNode); + return new Element.Offset(valueL, valueT); + } + + function absolutize(element) { + element = $(element); + + if (Element.getStyle(element, 'position') === 'absolute') { + return element; + } + + var offsetParent = getOffsetParent(element); + var eOffset = element.viewportOffset(), + pOffset = offsetParent.viewportOffset(); + + var offset = eOffset.relativeTo(pOffset); + var layout = element.getLayout(); + + element.store('prototype_absolutize_original_styles', { + position: element.getStyle('position'), + left: element.getStyle('left'), + top: element.getStyle('top'), + width: element.getStyle('width'), + height: element.getStyle('height') + }); + + element.setStyle({ + position: 'absolute', + top: offset.top + 'px', + left: offset.left + 'px', + width: layout.get('width') + 'px', + height: layout.get('height') + 'px' + }); + + return element; + } + + function relativize(element) { + element = $(element); + if (Element.getStyle(element, 'position') === 'relative') { + return element; + } + + var originalStyles = + element.retrieve('prototype_absolutize_original_styles'); + + if (originalStyles) element.setStyle(originalStyles); + return element; + } + + + 
function scrollTo(element) { + element = $(element); + var pos = Element.cumulativeOffset(element); + window.scrollTo(pos.left, pos.top); + return element; + } + + + function makePositioned(element) { + element = $(element); + var position = Element.getStyle(element, 'position'), styles = {}; + if (position === 'static' || !position) { + styles.position = 'relative'; + if (Prototype.Browser.Opera) { + styles.top = 0; + styles.left = 0; + } + Element.setStyle(element, styles); + Element.store(element, 'prototype_made_positioned', true); + } + return element; + } + + function undoPositioned(element) { + element = $(element); + var storage = Element.getStorage(element), + madePositioned = storage.get('prototype_made_positioned'); + + if (madePositioned) { + storage.unset('prototype_made_positioned'); + Element.setStyle(element, { + position: '', + top: '', + bottom: '', + left: '', + right: '' + }); + } + return element; + } + + function makeClipping(element) { + element = $(element); + + var storage = Element.getStorage(element), + madeClipping = storage.get('prototype_made_clipping'); + + if (Object.isUndefined(madeClipping)) { + var overflow = Element.getStyle(element, 'overflow'); + storage.set('prototype_made_clipping', overflow); + if (overflow !== 'hidden') + element.style.overflow = 'hidden'; + } + + return element; + } + + function undoClipping(element) { + element = $(element); + var storage = Element.getStorage(element), + overflow = storage.get('prototype_made_clipping'); + + if (!Object.isUndefined(overflow)) { + storage.unset('prototype_made_clipping'); + element.style.overflow = overflow || ''; + } + + return element; + } + + function clonePosition(element, source, options) { + options = Object.extend({ + setLeft: true, + setTop: true, + setWidth: true, + setHeight: true, + offsetTop: 0, + offsetLeft: 0 + }, options || {}); + + var docEl = document.documentElement; + + source = $(source); + element = $(element); + var p, delta, layout, styles = {}; + + 
if (options.setLeft || options.setTop) { + p = Element.viewportOffset(source); + delta = [0, 0]; + if (Element.getStyle(element, 'position') === 'absolute') { + var parent = Element.getOffsetParent(element); + if (parent !== document.body) delta = Element.viewportOffset(parent); + } + } + + function pageScrollXY() { + var x = 0, y = 0; + if (Object.isNumber(window.pageXOffset)) { + x = window.pageXOffset; + y = window.pageYOffset; + } else if (document.body && (document.body.scrollLeft || document.body.scrollTop)) { + x = document.body.scrollLeft; + y = document.body.scrollTop; + } else if (docEl && (docEl.scrollLeft || docEl.scrollTop)) { + x = docEl.scrollLeft; + y = docEl.scrollTop; + } + return { x: x, y: y }; + } + + var pageXY = pageScrollXY(); + + + if (options.setWidth || options.setHeight) { + layout = Element.getLayout(source); + } + + if (options.setLeft) + styles.left = (p[0] + pageXY.x - delta[0] + options.offsetLeft) + 'px'; + if (options.setTop) + styles.top = (p[1] + pageXY.y - delta[1] + options.offsetTop) + 'px'; + + var currentLayout = element.getLayout(); + + if (options.setWidth) { + styles.width = layout.get('width') + 'px'; + } + if (options.setHeight) { + styles.height = layout.get('height') + 'px'; + } + + return Element.setStyle(element, styles); + } + + + if (Prototype.Browser.IE) { + getOffsetParent = getOffsetParent.wrap( + function(proceed, element) { + element = $(element); + + if (isDocument(element) || isDetached(element) || isBody(element) || isHtml(element)) + return $(document.body); + + var position = element.getStyle('position'); + if (position !== 'static') return proceed(element); + + element.setStyle({ position: 'relative' }); + var value = proceed(element); + element.setStyle({ position: position }); + return value; + } + ); + + positionedOffset = positionedOffset.wrap(function(proceed, element) { + element = $(element); + if (!element.parentNode) return new Element.Offset(0, 0); + var position = 
element.getStyle('position'); + if (position !== 'static') return proceed(element); + + var offsetParent = element.getOffsetParent(); + if (offsetParent && offsetParent.getStyle('position') === 'fixed') + hasLayout(offsetParent); + + element.setStyle({ position: 'relative' }); + var value = proceed(element); + element.setStyle({ position: position }); + return value; + }); + } else if (Prototype.Browser.Webkit) { + cumulativeOffset = function(element) { + element = $(element); + var valueT = 0, valueL = 0; + do { + valueT += element.offsetTop || 0; + valueL += element.offsetLeft || 0; + if (element.offsetParent == document.body) { + if (Element.getStyle(element, 'position') == 'absolute') break; + } + + element = element.offsetParent; + } while (element); + + return new Element.Offset(valueL, valueT); + }; + } + + + Element.addMethods({ + getLayout: getLayout, + measure: measure, + getWidth: getWidth, + getHeight: getHeight, + getDimensions: getDimensions, + getOffsetParent: getOffsetParent, + cumulativeOffset: cumulativeOffset, + positionedOffset: positionedOffset, + cumulativeScrollOffset: cumulativeScrollOffset, + viewportOffset: viewportOffset, + absolutize: absolutize, + relativize: relativize, + scrollTo: scrollTo, + makePositioned: makePositioned, + undoPositioned: undoPositioned, + makeClipping: makeClipping, + undoClipping: undoClipping, + clonePosition: clonePosition + }); + + function isBody(element) { + return element.nodeName.toUpperCase() === 'BODY'; + } + + function isHtml(element) { + return element.nodeName.toUpperCase() === 'HTML'; + } + + function isDocument(element) { + return element.nodeType === Node.DOCUMENT_NODE; + } + + function isDetached(element) { + return element !== document.body && + !Element.descendantOf(element, document.body); + } + + if ('getBoundingClientRect' in document.documentElement) { + Element.addMethods({ + viewportOffset: function(element) { + element = $(element); + if (isDetached(element)) return new Element.Offset(0, 
0); + + var rect = element.getBoundingClientRect(), + docEl = document.documentElement; + return new Element.Offset(rect.left - docEl.clientLeft, + rect.top - docEl.clientTop); + } + }); + } + + +})(); + +(function() { + + var IS_OLD_OPERA = Prototype.Browser.Opera && + (window.parseFloat(window.opera.version()) < 9.5); + var ROOT = null; + function getRootElement() { + if (ROOT) return ROOT; + ROOT = IS_OLD_OPERA ? document.body : document.documentElement; + return ROOT; + } + + function getDimensions() { + return { width: this.getWidth(), height: this.getHeight() }; + } + + function getWidth() { + return getRootElement().clientWidth; + } + + function getHeight() { + return getRootElement().clientHeight; + } + + function getScrollOffsets() { + var x = window.pageXOffset || document.documentElement.scrollLeft || + document.body.scrollLeft; + var y = window.pageYOffset || document.documentElement.scrollTop || + document.body.scrollTop; + + return new Element.Offset(x, y); + } + + document.viewport = { + getDimensions: getDimensions, + getWidth: getWidth, + getHeight: getHeight, + getScrollOffsets: getScrollOffsets + }; + +})(); +window.$$ = function() { + var expression = $A(arguments).join(', '); + return Prototype.Selector.select(expression, document); +}; + +Prototype.Selector = (function() { + + function select() { + throw new Error('Method "Prototype.Selector.select" must be defined.'); + } + + function match() { + throw new Error('Method "Prototype.Selector.match" must be defined.'); + } + + function find(elements, expression, index) { + index = index || 0; + var match = Prototype.Selector.match, length = elements.length, matchIndex = 0, i; + + for (i = 0; i < length; i++) { + if (match(elements[i], expression) && index == matchIndex++) { + return Element.extend(elements[i]); + } + } + } + + function extendElements(elements) { + for (var i = 0, length = elements.length; i < length; i++) { + Element.extend(elements[i]); + } + return elements; + } + + + var K = 
Prototype.K; + + return { + select: select, + match: match, + find: find, + extendElements: (Element.extend === K) ? K : extendElements, + extendElement: Element.extend + }; +})(); +Prototype._original_property = window.Sizzle; + +;(function () { + function fakeDefine(fn) { + Prototype._actual_sizzle = fn(); + } + fakeDefine.amd = true; + + if (typeof define !== 'undefined' && define.amd) { + Prototype._original_define = define; + Prototype._actual_sizzle = null; + window.define = fakeDefine; + } +})(); + +/*! + * Sizzle CSS Selector Engine v1.10.18 + * http://sizzlejs.com/ + * + * Copyright 2013 jQuery Foundation, Inc. and other contributors + * Released under the MIT license + * http://jquery.org/license + * + * Date: 2014-02-05 + */ +(function( window ) { + +var i, + support, + Expr, + getText, + isXML, + compile, + select, + outermostContext, + sortInput, + hasDuplicate, + + setDocument, + document, + docElem, + documentIsHTML, + rbuggyQSA, + rbuggyMatches, + matches, + contains, + + expando = "sizzle" + -(new Date()), + preferredDoc = window.document, + dirruns = 0, + done = 0, + classCache = createCache(), + tokenCache = createCache(), + compilerCache = createCache(), + sortOrder = function( a, b ) { + if ( a === b ) { + hasDuplicate = true; + } + return 0; + }, + + strundefined = typeof undefined, + MAX_NEGATIVE = 1 << 31, + + hasOwn = ({}).hasOwnProperty, + arr = [], + pop = arr.pop, + push_native = arr.push, + push = arr.push, + slice = arr.slice, + indexOf = arr.indexOf || function( elem ) { + var i = 0, + len = this.length; + for ( ; i < len; i++ ) { + if ( this[i] === elem ) { + return i; + } + } + return -1; + }, + + booleans = "checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped", + + + whitespace = "[\\x20\\t\\r\\n\\f]", + characterEncoding = "(?:\\\\.|[\\w-]|[^\\x00-\\xa0])+", + + identifier = characterEncoding.replace( "w", "w#" ), + + attributes = "\\[" + whitespace + "*(" + 
characterEncoding + ")" + whitespace + + "*(?:([*^$|!~]?=)" + whitespace + "*(?:(['\"])((?:\\\\.|[^\\\\])*?)\\3|(" + identifier + ")|)|)" + whitespace + "*\\]", + + pseudos = ":(" + characterEncoding + ")(?:\\(((['\"])((?:\\\\.|[^\\\\])*?)\\3|((?:\\\\.|[^\\\\()[\\]]|" + attributes.replace( 3, 8 ) + ")*)|.*)\\)|)", + + rtrim = new RegExp( "^" + whitespace + "+|((?:^|[^\\\\])(?:\\\\.)*)" + whitespace + "+$", "g" ), + + rcomma = new RegExp( "^" + whitespace + "*," + whitespace + "*" ), + rcombinators = new RegExp( "^" + whitespace + "*([>+~]|" + whitespace + ")" + whitespace + "*" ), + + rattributeQuotes = new RegExp( "=" + whitespace + "*([^\\]'\"]*?)" + whitespace + "*\\]", "g" ), + + rpseudo = new RegExp( pseudos ), + ridentifier = new RegExp( "^" + identifier + "$" ), + + matchExpr = { + "ID": new RegExp( "^#(" + characterEncoding + ")" ), + "CLASS": new RegExp( "^\\.(" + characterEncoding + ")" ), + "TAG": new RegExp( "^(" + characterEncoding.replace( "w", "w*" ) + ")" ), + "ATTR": new RegExp( "^" + attributes ), + "PSEUDO": new RegExp( "^" + pseudos ), + "CHILD": new RegExp( "^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\(" + whitespace + + "*(even|odd|(([+-]|)(\\d*)n|)" + whitespace + "*(?:([+-]|)" + whitespace + + "*(\\d+)|))" + whitespace + "*\\)|)", "i" ), + "bool": new RegExp( "^(?:" + booleans + ")$", "i" ), + "needsContext": new RegExp( "^" + whitespace + "*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\(" + + whitespace + "*((?:-\\d)?\\d*)" + whitespace + "*\\)|)(?=[^-]|$)", "i" ) + }, + + rinputs = /^(?:input|select|textarea|button)$/i, + rheader = /^h\d$/i, + + rnative = /^[^{]+\{\s*\[native \w/, + + rquickExpr = /^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/, + + rsibling = /[+~]/, + rescape = /'|\\/g, + + runescape = new RegExp( "\\\\([\\da-f]{1,6}" + whitespace + "?|(" + whitespace + ")|.)", "ig" ), + funescape = function( _, escaped, escapedWhitespace ) { + var high = "0x" + escaped - 0x10000; + return high !== high || escapedWhitespace ? 
+ escaped : + high < 0 ? + String.fromCharCode( high + 0x10000 ) : + String.fromCharCode( high >> 10 | 0xD800, high & 0x3FF | 0xDC00 ); + }; + +try { + push.apply( + (arr = slice.call( preferredDoc.childNodes )), + preferredDoc.childNodes + ); + arr[ preferredDoc.childNodes.length ].nodeType; +} catch ( e ) { + push = { apply: arr.length ? + + function( target, els ) { + push_native.apply( target, slice.call(els) ); + } : + + function( target, els ) { + var j = target.length, + i = 0; + while ( (target[j++] = els[i++]) ) {} + target.length = j - 1; + } + }; +} + +function Sizzle( selector, context, results, seed ) { + var match, elem, m, nodeType, + i, groups, old, nid, newContext, newSelector; + + if ( ( context ? context.ownerDocument || context : preferredDoc ) !== document ) { + setDocument( context ); + } + + context = context || document; + results = results || []; + + if ( !selector || typeof selector !== "string" ) { + return results; + } + + if ( (nodeType = context.nodeType) !== 1 && nodeType !== 9 ) { + return []; + } + + if ( documentIsHTML && !seed ) { + + if ( (match = rquickExpr.exec( selector )) ) { + if ( (m = match[1]) ) { + if ( nodeType === 9 ) { + elem = context.getElementById( m ); + if ( elem && elem.parentNode ) { + if ( elem.id === m ) { + results.push( elem ); + return results; + } + } else { + return results; + } + } else { + if ( context.ownerDocument && (elem = context.ownerDocument.getElementById( m )) && + contains( context, elem ) && elem.id === m ) { + results.push( elem ); + return results; + } + } + + } else if ( match[2] ) { + push.apply( results, context.getElementsByTagName( selector ) ); + return results; + + } else if ( (m = match[3]) && support.getElementsByClassName && context.getElementsByClassName ) { + push.apply( results, context.getElementsByClassName( m ) ); + return results; + } + } + + if ( support.qsa && (!rbuggyQSA || !rbuggyQSA.test( selector )) ) { + nid = old = expando; + newContext = context; + newSelector = 
nodeType === 9 && selector; + + if ( nodeType === 1 && context.nodeName.toLowerCase() !== "object" ) { + groups = tokenize( selector ); + + if ( (old = context.getAttribute("id")) ) { + nid = old.replace( rescape, "\\$&" ); + } else { + context.setAttribute( "id", nid ); + } + nid = "[id='" + nid + "'] "; + + i = groups.length; + while ( i-- ) { + groups[i] = nid + toSelector( groups[i] ); + } + newContext = rsibling.test( selector ) && testContext( context.parentNode ) || context; + newSelector = groups.join(","); + } + + if ( newSelector ) { + try { + push.apply( results, + newContext.querySelectorAll( newSelector ) + ); + return results; + } catch(qsaError) { + } finally { + if ( !old ) { + context.removeAttribute("id"); + } + } + } + } + } + + return select( selector.replace( rtrim, "$1" ), context, results, seed ); +} + +/** + * Create key-value caches of limited size + * @returns {Function(string, Object)} Returns the Object data after storing it on itself with + * property name the (space-suffixed) string and (if the cache is larger than Expr.cacheLength) + * deleting the oldest entry + */ +function createCache() { + var keys = []; + + function cache( key, value ) { + if ( keys.push( key + " " ) > Expr.cacheLength ) { + delete cache[ keys.shift() ]; + } + return (cache[ key + " " ] = value); + } + return cache; +} + +/** + * Mark a function for special use by Sizzle + * @param {Function} fn The function to mark + */ +function markFunction( fn ) { + fn[ expando ] = true; + return fn; +} + +/** + * Support testing using an element + * @param {Function} fn Passed the created div and expects a boolean result + */ +function assert( fn ) { + var div = document.createElement("div"); + + try { + return !!fn( div ); + } catch (e) { + return false; + } finally { + if ( div.parentNode ) { + div.parentNode.removeChild( div ); + } + div = null; + } +} + +/** + * Adds the same handler for all of the specified attrs + * @param {String} attrs Pipe-separated list of 
attributes + * @param {Function} handler The method that will be applied + */ +function addHandle( attrs, handler ) { + var arr = attrs.split("|"), + i = attrs.length; + + while ( i-- ) { + Expr.attrHandle[ arr[i] ] = handler; + } +} + +/** + * Checks document order of two siblings + * @param {Element} a + * @param {Element} b + * @returns {Number} Returns less than 0 if a precedes b, greater than 0 if a follows b + */ +function siblingCheck( a, b ) { + var cur = b && a, + diff = cur && a.nodeType === 1 && b.nodeType === 1 && + ( ~b.sourceIndex || MAX_NEGATIVE ) - + ( ~a.sourceIndex || MAX_NEGATIVE ); + + if ( diff ) { + return diff; + } + + if ( cur ) { + while ( (cur = cur.nextSibling) ) { + if ( cur === b ) { + return -1; + } + } + } + + return a ? 1 : -1; +} + +/** + * Returns a function to use in pseudos for input types + * @param {String} type + */ +function createInputPseudo( type ) { + return function( elem ) { + var name = elem.nodeName.toLowerCase(); + return name === "input" && elem.type === type; + }; +} + +/** + * Returns a function to use in pseudos for buttons + * @param {String} type + */ +function createButtonPseudo( type ) { + return function( elem ) { + var name = elem.nodeName.toLowerCase(); + return (name === "input" || name === "button") && elem.type === type; + }; +} + +/** + * Returns a function to use in pseudos for positionals + * @param {Function} fn + */ +function createPositionalPseudo( fn ) { + return markFunction(function( argument ) { + argument = +argument; + return markFunction(function( seed, matches ) { + var j, + matchIndexes = fn( [], seed.length, argument ), + i = matchIndexes.length; + + while ( i-- ) { + if ( seed[ (j = matchIndexes[i]) ] ) { + seed[j] = !(matches[j] = seed[j]); + } + } + }); + }); +} + +/** + * Checks a node for validity as a Sizzle context + * @param {Element|Object=} context + * @returns {Element|Object|Boolean} The input node if acceptable, otherwise a falsy value + */ +function testContext( context ) { 
+ return context && typeof context.getElementsByTagName !== strundefined && context;
+}
+
+support = Sizzle.support = {};
+
+/**
+ * Detects XML nodes
+ * @param {Element|Object} elem An element or a document
+ * @returns {Boolean} True iff elem is a non-HTML XML node
+ */
+isXML = Sizzle.isXML = function( elem ) {
+ var documentElement = elem && (elem.ownerDocument || elem).documentElement;
+ return documentElement ? documentElement.nodeName !== "HTML" : false;
+};
+
+/**
+ * Sets document-related variables once based on the current document
+ * @param {Element|Object} [doc] An element or document object to use to set the document
+ * @returns {Object} Returns the current document
+ */
+setDocument = Sizzle.setDocument = function( node ) {
+ var hasCompare,
+ doc = node ? node.ownerDocument || node : preferredDoc,
+ parent = doc.defaultView;
+
+ if ( doc === document || doc.nodeType !== 9 || !doc.documentElement ) {
+ return document;
+ }
+
+ document = doc;
+ docElem = doc.documentElement;
+
+ documentIsHTML = !isXML( doc );
+
+ if ( parent && parent !== parent.top ) {
+ if ( parent.addEventListener ) {
+ parent.addEventListener( "unload", function() {
+ setDocument();
+ }, false );
+ } else if ( parent.attachEvent ) {
+ parent.attachEvent( "onunload", function() {
+ setDocument();
+ });
+ }
+ }
+
+ /* Attributes
+ ---------------------------------------------------------------------- */
+
+ support.attributes = assert(function( div ) {
+ div.className = "i";
+ return !div.getAttribute("className");
+ });
+
+ /* getElement(s)By*
+ ---------------------------------------------------------------------- */
+
+ support.getElementsByTagName = assert(function( div ) {
+ div.appendChild( doc.createComment("") );
+ return !div.getElementsByTagName("*").length;
+ });
+
+ support.getElementsByClassName = rnative.test( doc.getElementsByClassName ) && assert(function( div ) {
+ div.innerHTML = "<div class='a'></div><div class='a i'></div>";
+
+ div.firstChild.className = "i";
+ return div.getElementsByClassName("i").length === 2;
+ });
+
+ support.getById = assert(function( div ) {
+ docElem.appendChild( div ).id = expando;
+ return !doc.getElementsByName || !doc.getElementsByName( expando ).length;
+ });
+
+ if ( support.getById ) {
+ Expr.find["ID"] = function( id, context ) {
+ if ( typeof context.getElementById !== strundefined && documentIsHTML ) {
+ var m = context.getElementById( id );
+ return m && m.parentNode ? [m] : [];
+ }
+ };
+ Expr.filter["ID"] = function( id ) {
+ var attrId = id.replace( runescape, funescape );
+ return function( elem ) {
+ return elem.getAttribute("id") === attrId;
+ };
+ };
+ } else {
+ delete Expr.find["ID"];
+
+ Expr.filter["ID"] = function( id ) {
+ var attrId = id.replace( runescape, funescape );
+ return function( elem ) {
+ var node = typeof elem.getAttributeNode !== strundefined && elem.getAttributeNode("id");
+ return node && node.value === attrId;
+ };
+ };
+ }
+
+ Expr.find["TAG"] = support.getElementsByTagName ?
+ function( tag, context ) {
+ if ( typeof context.getElementsByTagName !== strundefined ) {
+ return context.getElementsByTagName( tag );
+ }
+ } :
+ function( tag, context ) {
+ var elem,
+ tmp = [],
+ i = 0,
+ results = context.getElementsByTagName( tag );
+
+ if ( tag === "*" ) {
+ while ( (elem = results[i++]) ) {
+ if ( elem.nodeType === 1 ) {
+ tmp.push( elem );
+ }
+ }
+
+ return tmp;
+ }
+ return results;
+ };
+
+ Expr.find["CLASS"] = support.getElementsByClassName && function( className, context ) {
+ if ( typeof context.getElementsByClassName !== strundefined && documentIsHTML ) {
+ return context.getElementsByClassName( className );
+ }
+ };
+
+ /* QSA/matchesSelector
+ ---------------------------------------------------------------------- */
+
+
+ rbuggyMatches = [];
+
+ rbuggyQSA = [];
+
+ if ( (support.qsa = rnative.test( doc.querySelectorAll )) ) {
+ assert(function( div ) {
+ div.innerHTML = "<select t=''><option selected=''></option></select>";
+
+ if ( div.querySelectorAll("[t^='']").length ) {
+ rbuggyQSA.push( "[*^$]=" + whitespace + "*(?:''|\"\")" );
+ }
+
+ if ( !div.querySelectorAll("[selected]").length ) {
+ rbuggyQSA.push( "\\[" + whitespace + "*(?:value|" + booleans + ")" );
+ }
+
+ if ( !div.querySelectorAll(":checked").length ) {
+ rbuggyQSA.push(":checked");
+ }
+ });
+
+ assert(function( div ) {
+ var input = doc.createElement("input");
+ input.setAttribute( "type", "hidden" );
+ div.appendChild( input ).setAttribute( "name", "D" );
+
+ if ( div.querySelectorAll("[name=d]").length ) {
+ rbuggyQSA.push( "name" + whitespace + "*[*^$|!~]?=" );
+ }
+
+ if ( !div.querySelectorAll(":enabled").length ) {
+ rbuggyQSA.push( ":enabled", ":disabled" );
+ }
+
+ div.querySelectorAll("*,:x");
+ rbuggyQSA.push(",.*:");
+ });
+ }
+
+ if ( (support.matchesSelector = rnative.test( (matches = docElem.webkitMatchesSelector ||
+ docElem.mozMatchesSelector ||
+ docElem.oMatchesSelector ||
+ docElem.msMatchesSelector) )) ) {
+
+ assert(function( div ) {
+ support.disconnectedMatch = matches.call( div, "div"
); + + matches.call( div, "[s!='']:x" ); + rbuggyMatches.push( "!=", pseudos ); + }); + } + + rbuggyQSA = rbuggyQSA.length && new RegExp( rbuggyQSA.join("|") ); + rbuggyMatches = rbuggyMatches.length && new RegExp( rbuggyMatches.join("|") ); + + /* Contains + ---------------------------------------------------------------------- */ + hasCompare = rnative.test( docElem.compareDocumentPosition ); + + contains = hasCompare || rnative.test( docElem.contains ) ? + function( a, b ) { + var adown = a.nodeType === 9 ? a.documentElement : a, + bup = b && b.parentNode; + return a === bup || !!( bup && bup.nodeType === 1 && ( + adown.contains ? + adown.contains( bup ) : + a.compareDocumentPosition && a.compareDocumentPosition( bup ) & 16 + )); + } : + function( a, b ) { + if ( b ) { + while ( (b = b.parentNode) ) { + if ( b === a ) { + return true; + } + } + } + return false; + }; + + /* Sorting + ---------------------------------------------------------------------- */ + + sortOrder = hasCompare ? + function( a, b ) { + + if ( a === b ) { + hasDuplicate = true; + return 0; + } + + var compare = !a.compareDocumentPosition - !b.compareDocumentPosition; + if ( compare ) { + return compare; + } + + compare = ( a.ownerDocument || a ) === ( b.ownerDocument || b ) ? + a.compareDocumentPosition( b ) : + + 1; + + if ( compare & 1 || + (!support.sortDetached && b.compareDocumentPosition( a ) === compare) ) { + + if ( a === doc || a.ownerDocument === preferredDoc && contains(preferredDoc, a) ) { + return -1; + } + if ( b === doc || b.ownerDocument === preferredDoc && contains(preferredDoc, b) ) { + return 1; + } + + return sortInput ? + ( indexOf.call( sortInput, a ) - indexOf.call( sortInput, b ) ) : + 0; + } + + return compare & 4 ? -1 : 1; + } : + function( a, b ) { + if ( a === b ) { + hasDuplicate = true; + return 0; + } + + var cur, + i = 0, + aup = a.parentNode, + bup = b.parentNode, + ap = [ a ], + bp = [ b ]; + + if ( !aup || !bup ) { + return a === doc ? -1 : + b === doc ? 
1 : + aup ? -1 : + bup ? 1 : + sortInput ? + ( indexOf.call( sortInput, a ) - indexOf.call( sortInput, b ) ) : + 0; + + } else if ( aup === bup ) { + return siblingCheck( a, b ); + } + + cur = a; + while ( (cur = cur.parentNode) ) { + ap.unshift( cur ); + } + cur = b; + while ( (cur = cur.parentNode) ) { + bp.unshift( cur ); + } + + while ( ap[i] === bp[i] ) { + i++; + } + + return i ? + siblingCheck( ap[i], bp[i] ) : + + ap[i] === preferredDoc ? -1 : + bp[i] === preferredDoc ? 1 : + 0; + }; + + return doc; +}; + +Sizzle.matches = function( expr, elements ) { + return Sizzle( expr, null, null, elements ); +}; + +Sizzle.matchesSelector = function( elem, expr ) { + if ( ( elem.ownerDocument || elem ) !== document ) { + setDocument( elem ); + } + + expr = expr.replace( rattributeQuotes, "='$1']" ); + + if ( support.matchesSelector && documentIsHTML && + ( !rbuggyMatches || !rbuggyMatches.test( expr ) ) && + ( !rbuggyQSA || !rbuggyQSA.test( expr ) ) ) { + + try { + var ret = matches.call( elem, expr ); + + if ( ret || support.disconnectedMatch || + elem.document && elem.document.nodeType !== 11 ) { + return ret; + } + } catch(e) {} + } + + return Sizzle( expr, document, null, [elem] ).length > 0; +}; + +Sizzle.contains = function( context, elem ) { + if ( ( context.ownerDocument || context ) !== document ) { + setDocument( context ); + } + return contains( context, elem ); +}; + +Sizzle.attr = function( elem, name ) { + if ( ( elem.ownerDocument || elem ) !== document ) { + setDocument( elem ); + } + + var fn = Expr.attrHandle[ name.toLowerCase() ], + val = fn && hasOwn.call( Expr.attrHandle, name.toLowerCase() ) ? + fn( elem, name, !documentIsHTML ) : + undefined; + + return val !== undefined ? + val : + support.attributes || !documentIsHTML ? + elem.getAttribute( name ) : + (val = elem.getAttributeNode(name)) && val.specified ? 
+ val.value : + null; +}; + +Sizzle.error = function( msg ) { + throw new Error( "Syntax error, unrecognized expression: " + msg ); +}; + +/** + * Document sorting and removing duplicates + * @param {ArrayLike} results + */ +Sizzle.uniqueSort = function( results ) { + var elem, + duplicates = [], + j = 0, + i = 0; + + hasDuplicate = !support.detectDuplicates; + sortInput = !support.sortStable && results.slice( 0 ); + results.sort( sortOrder ); + + if ( hasDuplicate ) { + while ( (elem = results[i++]) ) { + if ( elem === results[ i ] ) { + j = duplicates.push( i ); + } + } + while ( j-- ) { + results.splice( duplicates[ j ], 1 ); + } + } + + sortInput = null; + + return results; +}; + +/** + * Utility function for retrieving the text value of an array of DOM nodes + * @param {Array|Element} elem + */ +getText = Sizzle.getText = function( elem ) { + var node, + ret = "", + i = 0, + nodeType = elem.nodeType; + + if ( !nodeType ) { + while ( (node = elem[i++]) ) { + ret += getText( node ); + } + } else if ( nodeType === 1 || nodeType === 9 || nodeType === 11 ) { + if ( typeof elem.textContent === "string" ) { + return elem.textContent; + } else { + for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { + ret += getText( elem ); + } + } + } else if ( nodeType === 3 || nodeType === 4 ) { + return elem.nodeValue; + } + + return ret; +}; + +Expr = Sizzle.selectors = { + + cacheLength: 50, + + createPseudo: markFunction, + + match: matchExpr, + + attrHandle: {}, + + find: {}, + + relative: { + ">": { dir: "parentNode", first: true }, + " ": { dir: "parentNode" }, + "+": { dir: "previousSibling", first: true }, + "~": { dir: "previousSibling" } + }, + + preFilter: { + "ATTR": function( match ) { + match[1] = match[1].replace( runescape, funescape ); + + match[3] = ( match[4] || match[5] || "" ).replace( runescape, funescape ); + + if ( match[2] === "~=" ) { + match[3] = " " + match[3] + " "; + } + + return match.slice( 0, 4 ); + }, + + "CHILD": function( match ) { + 
/* matches from matchExpr["CHILD"] + 1 type (only|nth|...) + 2 what (child|of-type) + 3 argument (even|odd|\d*|\d*n([+-]\d+)?|...) + 4 xn-component of xn+y argument ([+-]?\d*n|) + 5 sign of xn-component + 6 x of xn-component + 7 sign of y-component + 8 y of y-component + */ + match[1] = match[1].toLowerCase(); + + if ( match[1].slice( 0, 3 ) === "nth" ) { + if ( !match[3] ) { + Sizzle.error( match[0] ); + } + + match[4] = +( match[4] ? match[5] + (match[6] || 1) : 2 * ( match[3] === "even" || match[3] === "odd" ) ); + match[5] = +( ( match[7] + match[8] ) || match[3] === "odd" ); + + } else if ( match[3] ) { + Sizzle.error( match[0] ); + } + + return match; + }, + + "PSEUDO": function( match ) { + var excess, + unquoted = !match[5] && match[2]; + + if ( matchExpr["CHILD"].test( match[0] ) ) { + return null; + } + + if ( match[3] && match[4] !== undefined ) { + match[2] = match[4]; + + } else if ( unquoted && rpseudo.test( unquoted ) && + (excess = tokenize( unquoted, true )) && + (excess = unquoted.indexOf( ")", unquoted.length - excess ) - unquoted.length) ) { + + match[0] = match[0].slice( 0, excess ); + match[2] = unquoted.slice( 0, excess ); + } + + return match.slice( 0, 3 ); + } + }, + + filter: { + + "TAG": function( nodeNameSelector ) { + var nodeName = nodeNameSelector.replace( runescape, funescape ).toLowerCase(); + return nodeNameSelector === "*" ? 
+ function() { return true; } : + function( elem ) { + return elem.nodeName && elem.nodeName.toLowerCase() === nodeName; + }; + }, + + "CLASS": function( className ) { + var pattern = classCache[ className + " " ]; + + return pattern || + (pattern = new RegExp( "(^|" + whitespace + ")" + className + "(" + whitespace + "|$)" )) && + classCache( className, function( elem ) { + return pattern.test( typeof elem.className === "string" && elem.className || typeof elem.getAttribute !== strundefined && elem.getAttribute("class") || "" ); + }); + }, + + "ATTR": function( name, operator, check ) { + return function( elem ) { + var result = Sizzle.attr( elem, name ); + + if ( result == null ) { + return operator === "!="; + } + if ( !operator ) { + return true; + } + + result += ""; + + return operator === "=" ? result === check : + operator === "!=" ? result !== check : + operator === "^=" ? check && result.indexOf( check ) === 0 : + operator === "*=" ? check && result.indexOf( check ) > -1 : + operator === "$=" ? check && result.slice( -check.length ) === check : + operator === "~=" ? ( " " + result + " " ).indexOf( check ) > -1 : + operator === "|=" ? result === check || result.slice( 0, check.length + 1 ) === check + "-" : + false; + }; + }, + + "CHILD": function( type, what, argument, first, last ) { + var simple = type.slice( 0, 3 ) !== "nth", + forward = type.slice( -4 ) !== "last", + ofType = what === "of-type"; + + return first === 1 && last === 0 ? + + function( elem ) { + return !!elem.parentNode; + } : + + function( elem, context, xml ) { + var cache, outerCache, node, diff, nodeIndex, start, + dir = simple !== forward ? "nextSibling" : "previousSibling", + parent = elem.parentNode, + name = ofType && elem.nodeName.toLowerCase(), + useCache = !xml && !ofType; + + if ( parent ) { + + if ( simple ) { + while ( dir ) { + node = elem; + while ( (node = node[ dir ]) ) { + if ( ofType ? 
node.nodeName.toLowerCase() === name : node.nodeType === 1 ) { + return false; + } + } + start = dir = type === "only" && !start && "nextSibling"; + } + return true; + } + + start = [ forward ? parent.firstChild : parent.lastChild ]; + + if ( forward && useCache ) { + outerCache = parent[ expando ] || (parent[ expando ] = {}); + cache = outerCache[ type ] || []; + nodeIndex = cache[0] === dirruns && cache[1]; + diff = cache[0] === dirruns && cache[2]; + node = nodeIndex && parent.childNodes[ nodeIndex ]; + + while ( (node = ++nodeIndex && node && node[ dir ] || + + (diff = nodeIndex = 0) || start.pop()) ) { + + if ( node.nodeType === 1 && ++diff && node === elem ) { + outerCache[ type ] = [ dirruns, nodeIndex, diff ]; + break; + } + } + + } else if ( useCache && (cache = (elem[ expando ] || (elem[ expando ] = {}))[ type ]) && cache[0] === dirruns ) { + diff = cache[1]; + + } else { + while ( (node = ++nodeIndex && node && node[ dir ] || + (diff = nodeIndex = 0) || start.pop()) ) { + + if ( ( ofType ? node.nodeName.toLowerCase() === name : node.nodeType === 1 ) && ++diff ) { + if ( useCache ) { + (node[ expando ] || (node[ expando ] = {}))[ type ] = [ dirruns, diff ]; + } + + if ( node === elem ) { + break; + } + } + } + } + + diff -= last; + return diff === first || ( diff % first === 0 && diff / first >= 0 ); + } + }; + }, + + "PSEUDO": function( pseudo, argument ) { + var args, + fn = Expr.pseudos[ pseudo ] || Expr.setFilters[ pseudo.toLowerCase() ] || + Sizzle.error( "unsupported pseudo: " + pseudo ); + + if ( fn[ expando ] ) { + return fn( argument ); + } + + if ( fn.length > 1 ) { + args = [ pseudo, pseudo, "", argument ]; + return Expr.setFilters.hasOwnProperty( pseudo.toLowerCase() ) ? 
+ markFunction(function( seed, matches ) { + var idx, + matched = fn( seed, argument ), + i = matched.length; + while ( i-- ) { + idx = indexOf.call( seed, matched[i] ); + seed[ idx ] = !( matches[ idx ] = matched[i] ); + } + }) : + function( elem ) { + return fn( elem, 0, args ); + }; + } + + return fn; + } + }, + + pseudos: { + "not": markFunction(function( selector ) { + var input = [], + results = [], + matcher = compile( selector.replace( rtrim, "$1" ) ); + + return matcher[ expando ] ? + markFunction(function( seed, matches, context, xml ) { + var elem, + unmatched = matcher( seed, null, xml, [] ), + i = seed.length; + + while ( i-- ) { + if ( (elem = unmatched[i]) ) { + seed[i] = !(matches[i] = elem); + } + } + }) : + function( elem, context, xml ) { + input[0] = elem; + matcher( input, null, xml, results ); + return !results.pop(); + }; + }), + + "has": markFunction(function( selector ) { + return function( elem ) { + return Sizzle( selector, elem ).length > 0; + }; + }), + + "contains": markFunction(function( text ) { + return function( elem ) { + return ( elem.textContent || elem.innerText || getText( elem ) ).indexOf( text ) > -1; + }; + }), + + "lang": markFunction( function( lang ) { + if ( !ridentifier.test(lang || "") ) { + Sizzle.error( "unsupported lang: " + lang ); + } + lang = lang.replace( runescape, funescape ).toLowerCase(); + return function( elem ) { + var elemLang; + do { + if ( (elemLang = documentIsHTML ? 
+ elem.lang : + elem.getAttribute("xml:lang") || elem.getAttribute("lang")) ) { + + elemLang = elemLang.toLowerCase(); + return elemLang === lang || elemLang.indexOf( lang + "-" ) === 0; + } + } while ( (elem = elem.parentNode) && elem.nodeType === 1 ); + return false; + }; + }), + + "target": function( elem ) { + var hash = window.location && window.location.hash; + return hash && hash.slice( 1 ) === elem.id; + }, + + "root": function( elem ) { + return elem === docElem; + }, + + "focus": function( elem ) { + return elem === document.activeElement && (!document.hasFocus || document.hasFocus()) && !!(elem.type || elem.href || ~elem.tabIndex); + }, + + "enabled": function( elem ) { + return elem.disabled === false; + }, + + "disabled": function( elem ) { + return elem.disabled === true; + }, + + "checked": function( elem ) { + var nodeName = elem.nodeName.toLowerCase(); + return (nodeName === "input" && !!elem.checked) || (nodeName === "option" && !!elem.selected); + }, + + "selected": function( elem ) { + if ( elem.parentNode ) { + elem.parentNode.selectedIndex; + } + + return elem.selected === true; + }, + + "empty": function( elem ) { + for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { + if ( elem.nodeType < 6 ) { + return false; + } + } + return true; + }, + + "parent": function( elem ) { + return !Expr.pseudos["empty"]( elem ); + }, + + "header": function( elem ) { + return rheader.test( elem.nodeName ); + }, + + "input": function( elem ) { + return rinputs.test( elem.nodeName ); + }, + + "button": function( elem ) { + var name = elem.nodeName.toLowerCase(); + return name === "input" && elem.type === "button" || name === "button"; + }, + + "text": function( elem ) { + var attr; + return elem.nodeName.toLowerCase() === "input" && + elem.type === "text" && + + ( (attr = elem.getAttribute("type")) == null || attr.toLowerCase() === "text" ); + }, + + "first": createPositionalPseudo(function() { + return [ 0 ]; + }), + + "last": 
createPositionalPseudo(function( matchIndexes, length ) { + return [ length - 1 ]; + }), + + "eq": createPositionalPseudo(function( matchIndexes, length, argument ) { + return [ argument < 0 ? argument + length : argument ]; + }), + + "even": createPositionalPseudo(function( matchIndexes, length ) { + var i = 0; + for ( ; i < length; i += 2 ) { + matchIndexes.push( i ); + } + return matchIndexes; + }), + + "odd": createPositionalPseudo(function( matchIndexes, length ) { + var i = 1; + for ( ; i < length; i += 2 ) { + matchIndexes.push( i ); + } + return matchIndexes; + }), + + "lt": createPositionalPseudo(function( matchIndexes, length, argument ) { + var i = argument < 0 ? argument + length : argument; + for ( ; --i >= 0; ) { + matchIndexes.push( i ); + } + return matchIndexes; + }), + + "gt": createPositionalPseudo(function( matchIndexes, length, argument ) { + var i = argument < 0 ? argument + length : argument; + for ( ; ++i < length; ) { + matchIndexes.push( i ); + } + return matchIndexes; + }) + } +}; + +Expr.pseudos["nth"] = Expr.pseudos["eq"]; + +for ( i in { radio: true, checkbox: true, file: true, password: true, image: true } ) { + Expr.pseudos[ i ] = createInputPseudo( i ); +} +for ( i in { submit: true, reset: true } ) { + Expr.pseudos[ i ] = createButtonPseudo( i ); +} + +function setFilters() {} +setFilters.prototype = Expr.filters = Expr.pseudos; +Expr.setFilters = new setFilters(); + +function tokenize( selector, parseOnly ) { + var matched, match, tokens, type, + soFar, groups, preFilters, + cached = tokenCache[ selector + " " ]; + + if ( cached ) { + return parseOnly ? 
0 : cached.slice( 0 ); + } + + soFar = selector; + groups = []; + preFilters = Expr.preFilter; + + while ( soFar ) { + + if ( !matched || (match = rcomma.exec( soFar )) ) { + if ( match ) { + soFar = soFar.slice( match[0].length ) || soFar; + } + groups.push( (tokens = []) ); + } + + matched = false; + + if ( (match = rcombinators.exec( soFar )) ) { + matched = match.shift(); + tokens.push({ + value: matched, + type: match[0].replace( rtrim, " " ) + }); + soFar = soFar.slice( matched.length ); + } + + for ( type in Expr.filter ) { + if ( (match = matchExpr[ type ].exec( soFar )) && (!preFilters[ type ] || + (match = preFilters[ type ]( match ))) ) { + matched = match.shift(); + tokens.push({ + value: matched, + type: type, + matches: match + }); + soFar = soFar.slice( matched.length ); + } + } + + if ( !matched ) { + break; + } + } + + return parseOnly ? + soFar.length : + soFar ? + Sizzle.error( selector ) : + tokenCache( selector, groups ).slice( 0 ); +} + +function toSelector( tokens ) { + var i = 0, + len = tokens.length, + selector = ""; + for ( ; i < len; i++ ) { + selector += tokens[i].value; + } + return selector; +} + +function addCombinator( matcher, combinator, base ) { + var dir = combinator.dir, + checkNonElements = base && dir === "parentNode", + doneName = done++; + + return combinator.first ? 
+ function( elem, context, xml ) { + while ( (elem = elem[ dir ]) ) { + if ( elem.nodeType === 1 || checkNonElements ) { + return matcher( elem, context, xml ); + } + } + } : + + function( elem, context, xml ) { + var oldCache, outerCache, + newCache = [ dirruns, doneName ]; + + if ( xml ) { + while ( (elem = elem[ dir ]) ) { + if ( elem.nodeType === 1 || checkNonElements ) { + if ( matcher( elem, context, xml ) ) { + return true; + } + } + } + } else { + while ( (elem = elem[ dir ]) ) { + if ( elem.nodeType === 1 || checkNonElements ) { + outerCache = elem[ expando ] || (elem[ expando ] = {}); + if ( (oldCache = outerCache[ dir ]) && + oldCache[ 0 ] === dirruns && oldCache[ 1 ] === doneName ) { + + return (newCache[ 2 ] = oldCache[ 2 ]); + } else { + outerCache[ dir ] = newCache; + + if ( (newCache[ 2 ] = matcher( elem, context, xml )) ) { + return true; + } + } + } + } + } + }; +} + +function elementMatcher( matchers ) { + return matchers.length > 1 ? + function( elem, context, xml ) { + var i = matchers.length; + while ( i-- ) { + if ( !matchers[i]( elem, context, xml ) ) { + return false; + } + } + return true; + } : + matchers[0]; +} + +function multipleContexts( selector, contexts, results ) { + var i = 0, + len = contexts.length; + for ( ; i < len; i++ ) { + Sizzle( selector, contexts[i], results ); + } + return results; +} + +function condense( unmatched, map, filter, context, xml ) { + var elem, + newUnmatched = [], + i = 0, + len = unmatched.length, + mapped = map != null; + + for ( ; i < len; i++ ) { + if ( (elem = unmatched[i]) ) { + if ( !filter || filter( elem, context, xml ) ) { + newUnmatched.push( elem ); + if ( mapped ) { + map.push( i ); + } + } + } + } + + return newUnmatched; +} + +function setMatcher( preFilter, selector, matcher, postFilter, postFinder, postSelector ) { + if ( postFilter && !postFilter[ expando ] ) { + postFilter = setMatcher( postFilter ); + } + if ( postFinder && !postFinder[ expando ] ) { + postFinder = setMatcher( 
postFinder, postSelector ); + } + return markFunction(function( seed, results, context, xml ) { + var temp, i, elem, + preMap = [], + postMap = [], + preexisting = results.length, + + elems = seed || multipleContexts( selector || "*", context.nodeType ? [ context ] : context, [] ), + + matcherIn = preFilter && ( seed || !selector ) ? + condense( elems, preMap, preFilter, context, xml ) : + elems, + + matcherOut = matcher ? + postFinder || ( seed ? preFilter : preexisting || postFilter ) ? + + [] : + + results : + matcherIn; + + if ( matcher ) { + matcher( matcherIn, matcherOut, context, xml ); + } + + if ( postFilter ) { + temp = condense( matcherOut, postMap ); + postFilter( temp, [], context, xml ); + + i = temp.length; + while ( i-- ) { + if ( (elem = temp[i]) ) { + matcherOut[ postMap[i] ] = !(matcherIn[ postMap[i] ] = elem); + } + } + } + + if ( seed ) { + if ( postFinder || preFilter ) { + if ( postFinder ) { + temp = []; + i = matcherOut.length; + while ( i-- ) { + if ( (elem = matcherOut[i]) ) { + temp.push( (matcherIn[i] = elem) ); + } + } + postFinder( null, (matcherOut = []), temp, xml ); + } + + i = matcherOut.length; + while ( i-- ) { + if ( (elem = matcherOut[i]) && + (temp = postFinder ? indexOf.call( seed, elem ) : preMap[i]) > -1 ) { + + seed[temp] = !(results[temp] = elem); + } + } + } + + } else { + matcherOut = condense( + matcherOut === results ? + matcherOut.splice( preexisting, matcherOut.length ) : + matcherOut + ); + if ( postFinder ) { + postFinder( null, results, matcherOut, xml ); + } else { + push.apply( results, matcherOut ); + } + } + }); +} + +function matcherFromTokens( tokens ) { + var checkContext, matcher, j, + len = tokens.length, + leadingRelative = Expr.relative[ tokens[0].type ], + implicitRelative = leadingRelative || Expr.relative[" "], + i = leadingRelative ? 
1 : 0, + + matchContext = addCombinator( function( elem ) { + return elem === checkContext; + }, implicitRelative, true ), + matchAnyContext = addCombinator( function( elem ) { + return indexOf.call( checkContext, elem ) > -1; + }, implicitRelative, true ), + matchers = [ function( elem, context, xml ) { + return ( !leadingRelative && ( xml || context !== outermostContext ) ) || ( + (checkContext = context).nodeType ? + matchContext( elem, context, xml ) : + matchAnyContext( elem, context, xml ) ); + } ]; + + for ( ; i < len; i++ ) { + if ( (matcher = Expr.relative[ tokens[i].type ]) ) { + matchers = [ addCombinator(elementMatcher( matchers ), matcher) ]; + } else { + matcher = Expr.filter[ tokens[i].type ].apply( null, tokens[i].matches ); + + if ( matcher[ expando ] ) { + j = ++i; + for ( ; j < len; j++ ) { + if ( Expr.relative[ tokens[j].type ] ) { + break; + } + } + return setMatcher( + i > 1 && elementMatcher( matchers ), + i > 1 && toSelector( + tokens.slice( 0, i - 1 ).concat({ value: tokens[ i - 2 ].type === " " ? "*" : "" }) + ).replace( rtrim, "$1" ), + matcher, + i < j && matcherFromTokens( tokens.slice( i, j ) ), + j < len && matcherFromTokens( (tokens = tokens.slice( j )) ), + j < len && toSelector( tokens ) + ); + } + matchers.push( matcher ); + } + } + + return elementMatcher( matchers ); +} + +function matcherFromGroupMatchers( elementMatchers, setMatchers ) { + var bySet = setMatchers.length > 0, + byElement = elementMatchers.length > 0, + superMatcher = function( seed, context, xml, results, outermost ) { + var elem, j, matcher, + matchedCount = 0, + i = "0", + unmatched = seed && [], + setMatched = [], + contextBackup = outermostContext, + elems = seed || byElement && Expr.find["TAG"]( "*", outermost ), + dirrunsUnique = (dirruns += contextBackup == null ? 
1 : Math.random() || 0.1), + len = elems.length; + + if ( outermost ) { + outermostContext = context !== document && context; + } + + for ( ; i !== len && (elem = elems[i]) != null; i++ ) { + if ( byElement && elem ) { + j = 0; + while ( (matcher = elementMatchers[j++]) ) { + if ( matcher( elem, context, xml ) ) { + results.push( elem ); + break; + } + } + if ( outermost ) { + dirruns = dirrunsUnique; + } + } + + if ( bySet ) { + if ( (elem = !matcher && elem) ) { + matchedCount--; + } + + if ( seed ) { + unmatched.push( elem ); + } + } + } + + matchedCount += i; + if ( bySet && i !== matchedCount ) { + j = 0; + while ( (matcher = setMatchers[j++]) ) { + matcher( unmatched, setMatched, context, xml ); + } + + if ( seed ) { + if ( matchedCount > 0 ) { + while ( i-- ) { + if ( !(unmatched[i] || setMatched[i]) ) { + setMatched[i] = pop.call( results ); + } + } + } + + setMatched = condense( setMatched ); + } + + push.apply( results, setMatched ); + + if ( outermost && !seed && setMatched.length > 0 && + ( matchedCount + setMatchers.length ) > 1 ) { + + Sizzle.uniqueSort( results ); + } + } + + if ( outermost ) { + dirruns = dirrunsUnique; + outermostContext = contextBackup; + } + + return unmatched; + }; + + return bySet ? 
+ markFunction( superMatcher ) : + superMatcher; +} + +compile = Sizzle.compile = function( selector, match /* Internal Use Only */ ) { + var i, + setMatchers = [], + elementMatchers = [], + cached = compilerCache[ selector + " " ]; + + if ( !cached ) { + if ( !match ) { + match = tokenize( selector ); + } + i = match.length; + while ( i-- ) { + cached = matcherFromTokens( match[i] ); + if ( cached[ expando ] ) { + setMatchers.push( cached ); + } else { + elementMatchers.push( cached ); + } + } + + cached = compilerCache( selector, matcherFromGroupMatchers( elementMatchers, setMatchers ) ); + + cached.selector = selector; + } + return cached; +}; + +/** + * A low-level selection function that works with Sizzle's compiled + * selector functions + * @param {String|Function} selector A selector or a pre-compiled + * selector function built with Sizzle.compile + * @param {Element} context + * @param {Array} [results] + * @param {Array} [seed] A set of elements to match against + */ +select = Sizzle.select = function( selector, context, results, seed ) { + var i, tokens, token, type, find, + compiled = typeof selector === "function" && selector, + match = !seed && tokenize( (selector = compiled.selector || selector) ); + + results = results || []; + + if ( match.length === 1 ) { + + tokens = match[0] = match[0].slice( 0 ); + if ( tokens.length > 2 && (token = tokens[0]).type === "ID" && + support.getById && context.nodeType === 9 && documentIsHTML && + Expr.relative[ tokens[1].type ] ) { + + context = ( Expr.find["ID"]( token.matches[0].replace(runescape, funescape), context ) || [] )[0]; + if ( !context ) { + return results; + + } else if ( compiled ) { + context = context.parentNode; + } + + selector = selector.slice( tokens.shift().value.length ); + } + + i = matchExpr["needsContext"].test( selector ) ? 
0 : tokens.length;
+ while ( i-- ) {
+ token = tokens[i];
+
+ if ( Expr.relative[ (type = token.type) ] ) {
+ break;
+ }
+ if ( (find = Expr.find[ type ]) ) {
+ if ( (seed = find(
+ token.matches[0].replace( runescape, funescape ),
+ rsibling.test( tokens[0].type ) && testContext( context.parentNode ) || context
+ )) ) {
+
+ tokens.splice( i, 1 );
+ selector = seed.length && toSelector( tokens );
+ if ( !selector ) {
+ push.apply( results, seed );
+ return results;
+ }
+
+ break;
+ }
+ }
+ }
+ }
+
+ ( compiled || compile( selector, match ) )(
+ seed,
+ context,
+ !documentIsHTML,
+ results,
+ rsibling.test( selector ) && testContext( context.parentNode ) || context
+ );
+ return results;
+};
+
+
+support.sortStable = expando.split("").sort( sortOrder ).join("") === expando;
+
+support.detectDuplicates = !!hasDuplicate;
+
+setDocument();
+
+support.sortDetached = assert(function( div1 ) {
+ return div1.compareDocumentPosition( document.createElement("div") ) & 1;
+});
+
+// Support: IE<8
+// Prevent attribute/property "interpolation"
+if ( !assert(function( div ) {
+ div.innerHTML = "<a href='#'></a>";
+ return div.firstChild.getAttribute("href") === "#" ;
+}) ) {
+ addHandle( "type|href|height|width", function( elem, name, isXML ) {
+ if ( !isXML ) {
+ return elem.getAttribute( name, name.toLowerCase() === "type" ? 1 : 2 );
+ }
+ });
+}
+
+// Support: IE<9
+// Use defaultValue in place of getAttribute("value")
+if ( !support.attributes || !assert(function( div ) {
+ div.innerHTML = "<input/>";
+ div.firstChild.setAttribute( "value", "" );
+ return div.firstChild.getAttribute( "value" ) === "";
+}) ) {
+ addHandle( "value", function( elem, name, isXML ) {
+ if ( !isXML && elem.nodeName.toLowerCase() === "input" ) {
+ return elem.defaultValue;
+ }
+ });
+}
+
+if ( !assert(function( div ) {
+ return div.getAttribute("disabled") == null;
+}) ) {
+ addHandle( booleans, function( elem, name, isXML ) {
+ var val;
+ if ( !isXML ) {
+ return elem[ name ] === true ? name.toLowerCase() :
+ (val = elem.getAttributeNode( name )) && val.specified ?
+ val.value : + null; + } + }); +} + +if ( typeof define === "function" && define.amd ) { + define(function() { return Sizzle; }); +} else if ( typeof module !== "undefined" && module.exports ) { + module.exports = Sizzle; +} else { + window.Sizzle = Sizzle; +} + +})( window ); + +;(function() { + if (typeof Sizzle !== 'undefined') { + return; + } + + if (typeof define !== 'undefined' && define.amd) { + window.Sizzle = Prototype._actual_sizzle; + window.define = Prototype._original_define; + delete Prototype._actual_sizzle; + delete Prototype._original_define; + } else if (typeof module !== 'undefined' && module.exports) { + window.Sizzle = module.exports; + module.exports = {}; + } +})(); + +;(function(engine) { + var extendElements = Prototype.Selector.extendElements; + + function select(selector, scope) { + return extendElements(engine(selector, scope || document)); + } + + function match(element, selector) { + return engine.matches(selector, [element]).length == 1; + } + + Prototype.Selector.engine = engine; + Prototype.Selector.select = select; + Prototype.Selector.match = match; +})(Sizzle); + +window.Sizzle = Prototype._original_property; +delete Prototype._original_property; + +var Form = { + reset: function(form) { + form = $(form); + form.reset(); + return form; + }, + + serializeElements: function(elements, options) { + if (typeof options != 'object') options = { hash: !!options }; + else if (Object.isUndefined(options.hash)) options.hash = true; + var key, value, submitted = false, submit = options.submit, accumulator, initial; + + if (options.hash) { + initial = {}; + accumulator = function(result, key, value) { + if (key in result) { + if (!Object.isArray(result[key])) result[key] = [result[key]]; + result[key] = result[key].concat(value); + } else result[key] = value; + return result; + }; + } else { + initial = ''; + accumulator = function(result, key, values) { + if (!Object.isArray(values)) {values = [values];} + if (!values.length) {return 
result;} + var encodedKey = encodeURIComponent(key).gsub(/%20/, '+'); + return result + (result ? "&" : "") + values.map(function (value) { + value = value.gsub(/(\r)?\n/, '\r\n'); + value = encodeURIComponent(value); + value = value.gsub(/%20/, '+'); + return encodedKey + "=" + value; + }).join("&"); + }; + } + + return elements.inject(initial, function(result, element) { + if (!element.disabled && element.name) { + key = element.name; value = $(element).getValue(); + if (value != null && element.type != 'file' && (element.type != 'submit' || (!submitted && + submit !== false && (!submit || key == submit) && (submitted = true)))) { + result = accumulator(result, key, value); + } + } + return result; + }); + } +}; + +Form.Methods = { + serialize: function(form, options) { + return Form.serializeElements(Form.getElements(form), options); + }, + + + getElements: function(form) { + var elements = $(form).getElementsByTagName('*'); + var element, results = [], serializers = Form.Element.Serializers; + + for (var i = 0; element = elements[i]; i++) { + if (serializers[element.tagName.toLowerCase()]) + results.push(Element.extend(element)); + } + return results; + }, + + getInputs: function(form, typeName, name) { + form = $(form); + var inputs = form.getElementsByTagName('input'); + + if (!typeName && !name) return $A(inputs).map(Element.extend); + + for (var i = 0, matchingInputs = [], length = inputs.length; i < length; i++) { + var input = inputs[i]; + if ((typeName && input.type != typeName) || (name && input.name != name)) + continue; + matchingInputs.push(Element.extend(input)); + } + + return matchingInputs; + }, + + disable: function(form) { + form = $(form); + Form.getElements(form).invoke('disable'); + return form; + }, + + enable: function(form) { + form = $(form); + Form.getElements(form).invoke('enable'); + return form; + }, + + findFirstElement: function(form) { + var elements = $(form).getElements().findAll(function(element) { + return 'hidden' != 
element.type && !element.disabled; + }); + var firstByIndex = elements.findAll(function(element) { + return element.hasAttribute('tabIndex') && element.tabIndex >= 0; + }).sortBy(function(element) { return element.tabIndex }).first(); + + return firstByIndex ? firstByIndex : elements.find(function(element) { + return /^(?:input|select|textarea)$/i.test(element.tagName); + }); + }, + + focusFirstElement: function(form) { + form = $(form); + var element = form.findFirstElement(); + if (element) element.activate(); + return form; + }, + + request: function(form, options) { + form = $(form), options = Object.clone(options || { }); + + var params = options.parameters, action = form.readAttribute('action') || ''; + if (action.blank()) action = window.location.href; + options.parameters = form.serialize(true); + + if (params) { + if (Object.isString(params)) params = params.toQueryParams(); + Object.extend(options.parameters, params); + } + + if (form.hasAttribute('method') && !options.method) + options.method = form.method; + + return new Ajax.Request(action, options); + } +}; + +/*--------------------------------------------------------------------------*/ + + +Form.Element = { + focus: function(element) { + $(element).focus(); + return element; + }, + + select: function(element) { + $(element).select(); + return element; + } +}; + +Form.Element.Methods = { + + serialize: function(element) { + element = $(element); + if (!element.disabled && element.name) { + var value = element.getValue(); + if (value != undefined) { + var pair = { }; + pair[element.name] = value; + return Object.toQueryString(pair); + } + } + return ''; + }, + + getValue: function(element) { + element = $(element); + var method = element.tagName.toLowerCase(); + return Form.Element.Serializers[method](element); + }, + + setValue: function(element, value) { + element = $(element); + var method = element.tagName.toLowerCase(); + Form.Element.Serializers[method](element, value); + return element; + }, + 
+ clear: function(element) { + $(element).value = ''; + return element; + }, + + present: function(element) { + return $(element).value != ''; + }, + + activate: function(element) { + element = $(element); + try { + element.focus(); + if (element.select && (element.tagName.toLowerCase() != 'input' || + !(/^(?:button|reset|submit)$/i.test(element.type)))) + element.select(); + } catch (e) { } + return element; + }, + + disable: function(element) { + element = $(element); + element.disabled = true; + return element; + }, + + enable: function(element) { + element = $(element); + element.disabled = false; + return element; + } +}; + +/*--------------------------------------------------------------------------*/ + +var Field = Form.Element; + +var $F = Form.Element.Methods.getValue; + +/*--------------------------------------------------------------------------*/ + +Form.Element.Serializers = (function() { + function input(element, value) { + switch (element.type.toLowerCase()) { + case 'checkbox': + case 'radio': + return inputSelector(element, value); + default: + return valueSelector(element, value); + } + } + + function inputSelector(element, value) { + if (Object.isUndefined(value)) + return element.checked ? element.value : null; + else element.checked = !!value; + } + + function valueSelector(element, value) { + if (Object.isUndefined(value)) return element.value; + else element.value = value; + } + + function select(element, value) { + if (Object.isUndefined(value)) + return (element.type === 'select-one' ? selectOne : selectMany)(element); + + var opt, currentValue, single = !Object.isArray(value); + for (var i = 0, length = element.length; i < length; i++) { + opt = element.options[i]; + currentValue = this.optionValue(opt); + if (single) { + if (currentValue == value) { + opt.selected = true; + return; + } + } + else opt.selected = value.include(currentValue); + } + } + + function selectOne(element) { + var index = element.selectedIndex; + return index >= 0 ? 
optionValue(element.options[index]) : null; + } + + function selectMany(element) { + var values, length = element.length; + if (!length) return null; + + for (var i = 0, values = []; i < length; i++) { + var opt = element.options[i]; + if (opt.selected) values.push(optionValue(opt)); + } + return values; + } + + function optionValue(opt) { + return Element.hasAttribute(opt, 'value') ? opt.value : opt.text; + } + + return { + input: input, + inputSelector: inputSelector, + textarea: valueSelector, + select: select, + selectOne: selectOne, + selectMany: selectMany, + optionValue: optionValue, + button: valueSelector + }; +})(); + +/*--------------------------------------------------------------------------*/ + + +Abstract.TimedObserver = Class.create(PeriodicalExecuter, { + initialize: function($super, element, frequency, callback) { + $super(callback, frequency); + this.element = $(element); + this.lastValue = this.getValue(); + }, + + execute: function() { + var value = this.getValue(); + if (Object.isString(this.lastValue) && Object.isString(value) ? 
+ this.lastValue != value : String(this.lastValue) != String(value)) { + this.callback(this.element, value); + this.lastValue = value; + } + } +}); + +Form.Element.Observer = Class.create(Abstract.TimedObserver, { + getValue: function() { + return Form.Element.getValue(this.element); + } +}); + +Form.Observer = Class.create(Abstract.TimedObserver, { + getValue: function() { + return Form.serialize(this.element); + } +}); + +/*--------------------------------------------------------------------------*/ + +Abstract.EventObserver = Class.create({ + initialize: function(element, callback) { + this.element = $(element); + this.callback = callback; + + this.lastValue = this.getValue(); + if (this.element.tagName.toLowerCase() == 'form') + this.registerFormCallbacks(); + else + this.registerCallback(this.element); + }, + + onElementEvent: function() { + var value = this.getValue(); + if (this.lastValue != value) { + this.callback(this.element, value); + this.lastValue = value; + } + }, + + registerFormCallbacks: function() { + Form.getElements(this.element).each(this.registerCallback, this); + }, + + registerCallback: function(element) { + if (element.type) { + switch (element.type.toLowerCase()) { + case 'checkbox': + case 'radio': + Event.observe(element, 'click', this.onElementEvent.bind(this)); + break; + default: + Event.observe(element, 'change', this.onElementEvent.bind(this)); + break; + } + } + } +}); + +Form.Element.EventObserver = Class.create(Abstract.EventObserver, { + getValue: function() { + return Form.Element.getValue(this.element); + } +}); + +Form.EventObserver = Class.create(Abstract.EventObserver, { + getValue: function() { + return Form.serialize(this.element); + } +}); +(function(GLOBAL) { + var DIV = document.createElement('div'); + var docEl = document.documentElement; + var MOUSEENTER_MOUSELEAVE_EVENTS_SUPPORTED = 'onmouseenter' in docEl + && 'onmouseleave' in docEl; + + var Event = { + KEY_BACKSPACE: 8, + KEY_TAB: 9, + KEY_RETURN: 13, + KEY_ESC: 
27, + KEY_LEFT: 37, + KEY_UP: 38, + KEY_RIGHT: 39, + KEY_DOWN: 40, + KEY_DELETE: 46, + KEY_HOME: 36, + KEY_END: 35, + KEY_PAGEUP: 33, + KEY_PAGEDOWN: 34, + KEY_INSERT: 45 + }; + + + var isIELegacyEvent = function(event) { return false; }; + + if (window.attachEvent) { + if (window.addEventListener) { + isIELegacyEvent = function(event) { + return !(event instanceof window.Event); + }; + } else { + isIELegacyEvent = function(event) { return true; }; + } + } + + var _isButton; + + function _isButtonForDOMEvents(event, code) { + return event.which ? (event.which === code + 1) : (event.button === code); + } + + var legacyButtonMap = { 0: 1, 1: 4, 2: 2 }; + function _isButtonForLegacyEvents(event, code) { + return event.button === legacyButtonMap[code]; + } + + function _isButtonForWebKit(event, code) { + switch (code) { + case 0: return event.which == 1 && !event.metaKey; + case 1: return event.which == 2 || (event.which == 1 && event.metaKey); + case 2: return event.which == 3; + default: return false; + } + } + + if (window.attachEvent) { + if (!window.addEventListener) { + _isButton = _isButtonForLegacyEvents; + } else { + _isButton = function(event, code) { + return isIELegacyEvent(event) ? 
_isButtonForLegacyEvents(event, code) : + _isButtonForDOMEvents(event, code); + } + } + } else if (Prototype.Browser.WebKit) { + _isButton = _isButtonForWebKit; + } else { + _isButton = _isButtonForDOMEvents; + } + + function isLeftClick(event) { return _isButton(event, 0) } + + function isMiddleClick(event) { return _isButton(event, 1) } + + function isRightClick(event) { return _isButton(event, 2) } + + function element(event) { + return Element.extend(_element(event)); + } + + function _element(event) { + event = Event.extend(event); + + var node = event.target, type = event.type, + currentTarget = event.currentTarget; + + if (currentTarget && currentTarget.tagName) { + if (type === 'load' || type === 'error' || + (type === 'click' && currentTarget.tagName.toLowerCase() === 'input' + && currentTarget.type === 'radio')) + node = currentTarget; + } + + return node.nodeType == Node.TEXT_NODE ? node.parentNode : node; + } + + function findElement(event, expression) { + var element = _element(event), selector = Prototype.Selector; + if (!expression) return Element.extend(element); + while (element) { + if (Object.isElement(element) && selector.match(element, expression)) + return Element.extend(element); + element = element.parentNode; + } + } + + function pointer(event) { + return { x: pointerX(event), y: pointerY(event) }; + } + + function pointerX(event) { + var docElement = document.documentElement, + body = document.body || { scrollLeft: 0 }; + + return event.pageX || (event.clientX + + (docElement.scrollLeft || body.scrollLeft) - + (docElement.clientLeft || 0)); + } + + function pointerY(event) { + var docElement = document.documentElement, + body = document.body || { scrollTop: 0 }; + + return event.pageY || (event.clientY + + (docElement.scrollTop || body.scrollTop) - + (docElement.clientTop || 0)); + } + + + function stop(event) { + Event.extend(event); + event.preventDefault(); + event.stopPropagation(); + + event.stopped = true; + } + + + Event.Methods = { 
+ isLeftClick: isLeftClick, + isMiddleClick: isMiddleClick, + isRightClick: isRightClick, + + element: element, + findElement: findElement, + + pointer: pointer, + pointerX: pointerX, + pointerY: pointerY, + + stop: stop + }; + + var methods = Object.keys(Event.Methods).inject({ }, function(m, name) { + m[name] = Event.Methods[name].methodize(); + return m; + }); + + if (window.attachEvent) { + function _relatedTarget(event) { + var element; + switch (event.type) { + case 'mouseover': + case 'mouseenter': + element = event.fromElement; + break; + case 'mouseout': + case 'mouseleave': + element = event.toElement; + break; + default: + return null; + } + return Element.extend(element); + } + + var additionalMethods = { + stopPropagation: function() { this.cancelBubble = true }, + preventDefault: function() { this.returnValue = false }, + inspect: function() { return '[object Event]' } + }; + + Event.extend = function(event, element) { + if (!event) return false; + + if (!isIELegacyEvent(event)) return event; + + if (event._extendedByPrototype) return event; + event._extendedByPrototype = Prototype.emptyFunction; + + var pointer = Event.pointer(event); + + Object.extend(event, { + target: event.srcElement || element, + relatedTarget: _relatedTarget(event), + pageX: pointer.x, + pageY: pointer.y + }); + + Object.extend(event, methods); + Object.extend(event, additionalMethods); + + return event; + }; + } else { + Event.extend = Prototype.K; + } + + if (window.addEventListener) { + Event.prototype = window.Event.prototype || document.createEvent('HTMLEvents').__proto__; + Object.extend(Event.prototype, methods); + } + + var EVENT_TRANSLATIONS = { + mouseenter: 'mouseover', + mouseleave: 'mouseout' + }; + + function getDOMEventName(eventName) { + return EVENT_TRANSLATIONS[eventName] || eventName; + } + + if (MOUSEENTER_MOUSELEAVE_EVENTS_SUPPORTED) + getDOMEventName = Prototype.K; + + function getUniqueElementID(element) { + if (element === window) return 0; + + if 
(typeof element._prototypeUID === 'undefined') + element._prototypeUID = Element.Storage.UID++; + return element._prototypeUID; + } + + function getUniqueElementID_IE(element) { + if (element === window) return 0; + if (element == document) return 1; + return element.uniqueID; + } + + if ('uniqueID' in DIV) + getUniqueElementID = getUniqueElementID_IE; + + function isCustomEvent(eventName) { + return eventName.include(':'); + } + + Event._isCustomEvent = isCustomEvent; + + function getOrCreateRegistryFor(element, uid) { + var CACHE = GLOBAL.Event.cache; + if (Object.isUndefined(uid)) + uid = getUniqueElementID(element); + if (!CACHE[uid]) CACHE[uid] = { element: element }; + return CACHE[uid]; + } + + function destroyRegistryForElement(element, uid) { + if (Object.isUndefined(uid)) + uid = getUniqueElementID(element); + delete GLOBAL.Event.cache[uid]; + } + + + function register(element, eventName, handler) { + var registry = getOrCreateRegistryFor(element); + if (!registry[eventName]) registry[eventName] = []; + var entries = registry[eventName]; + + var i = entries.length; + while (i--) + if (entries[i].handler === handler) return null; + + var uid = getUniqueElementID(element); + var responder = GLOBAL.Event._createResponder(uid, eventName, handler); + var entry = { + responder: responder, + handler: handler + }; + + entries.push(entry); + return entry; + } + + function unregister(element, eventName, handler) { + var registry = getOrCreateRegistryFor(element); + var entries = registry[eventName] || []; + + var i = entries.length, entry; + while (i--) { + if (entries[i].handler === handler) { + entry = entries[i]; + break; + } + } + + if (entry) { + var index = entries.indexOf(entry); + entries.splice(index, 1); + } + + if (entries.length === 0) { + delete registry[eventName]; + if (Object.keys(registry).length === 1 && ('element' in registry)) + destroyRegistryForElement(element); + } + + return entry; + } + + + function observe(element, eventName, handler) { + 
element = $(element); + var entry = register(element, eventName, handler); + + if (entry === null) return element; + + var responder = entry.responder; + if (isCustomEvent(eventName)) + observeCustomEvent(element, eventName, responder); + else + observeStandardEvent(element, eventName, responder); + + return element; + } + + function observeStandardEvent(element, eventName, responder) { + var actualEventName = getDOMEventName(eventName); + if (element.addEventListener) { + element.addEventListener(actualEventName, responder, false); + } else { + element.attachEvent('on' + actualEventName, responder); + } + } + + function observeCustomEvent(element, eventName, responder) { + if (element.addEventListener) { + element.addEventListener('dataavailable', responder, false); + } else { + element.attachEvent('ondataavailable', responder); + element.attachEvent('onlosecapture', responder); + } + } + + function stopObserving(element, eventName, handler) { + element = $(element); + var handlerGiven = !Object.isUndefined(handler), + eventNameGiven = !Object.isUndefined(eventName); + + if (!eventNameGiven && !handlerGiven) { + stopObservingElement(element); + return element; + } + + if (!handlerGiven) { + stopObservingEventName(element, eventName); + return element; + } + + var entry = unregister(element, eventName, handler); + + if (!entry) return element; + removeEvent(element, eventName, entry.responder); + return element; + } + + function stopObservingStandardEvent(element, eventName, responder) { + var actualEventName = getDOMEventName(eventName); + if (element.removeEventListener) { + element.removeEventListener(actualEventName, responder, false); + } else { + element.detachEvent('on' + actualEventName, responder); + } + } + + function stopObservingCustomEvent(element, eventName, responder) { + if (element.removeEventListener) { + element.removeEventListener('dataavailable', responder, false); + } else { + element.detachEvent('ondataavailable', responder); + 
element.detachEvent('onlosecapture', responder); + } + } + + + + function stopObservingElement(element) { + var uid = getUniqueElementID(element), registry = GLOBAL.Event.cache[uid]; + if (!registry) return; + + destroyRegistryForElement(element, uid); + + var entries, i; + for (var eventName in registry) { + if (eventName === 'element') continue; + + entries = registry[eventName]; + i = entries.length; + while (i--) + removeEvent(element, eventName, entries[i].responder); + } + } + + function stopObservingEventName(element, eventName) { + var registry = getOrCreateRegistryFor(element); + var entries = registry[eventName]; + if (entries) { + delete registry[eventName]; + } + + entries = entries || []; + + var i = entries.length; + while (i--) + removeEvent(element, eventName, entries[i].responder); + + for (var name in registry) { + if (name === 'element') continue; + return; // There is another registered event + } + + destroyRegistryForElement(element); + } + + + function removeEvent(element, eventName, handler) { + if (isCustomEvent(eventName)) + stopObservingCustomEvent(element, eventName, handler); + else + stopObservingStandardEvent(element, eventName, handler); + } + + + + function getFireTarget(element) { + if (element !== document) return element; + if (document.createEvent && !element.dispatchEvent) + return document.documentElement; + return element; + } + + function fire(element, eventName, memo, bubble) { + element = getFireTarget($(element)); + if (Object.isUndefined(bubble)) bubble = true; + memo = memo || {}; + + var event = fireEvent(element, eventName, memo, bubble); + return Event.extend(event); + } + + function fireEvent_DOM(element, eventName, memo, bubble) { + var event = document.createEvent('HTMLEvents'); + event.initEvent('dataavailable', bubble, true); + + event.eventName = eventName; + event.memo = memo; + + element.dispatchEvent(event); + return event; + } + + function fireEvent_IE(element, eventName, memo, bubble) { + var event = 
document.createEventObject(); + event.eventType = bubble ? 'ondataavailable' : 'onlosecapture'; + + event.eventName = eventName; + event.memo = memo; + + element.fireEvent(event.eventType, event); + return event; + } + + var fireEvent = document.createEvent ? fireEvent_DOM : fireEvent_IE; + + + + Event.Handler = Class.create({ + initialize: function(element, eventName, selector, callback) { + this.element = $(element); + this.eventName = eventName; + this.selector = selector; + this.callback = callback; + this.handler = this.handleEvent.bind(this); + }, + + + start: function() { + Event.observe(this.element, this.eventName, this.handler); + return this; + }, + + stop: function() { + Event.stopObserving(this.element, this.eventName, this.handler); + return this; + }, + + handleEvent: function(event) { + var element = Event.findElement(event, this.selector); + if (element) this.callback.call(this.element, event, element); + } + }); + + function on(element, eventName, selector, callback) { + element = $(element); + if (Object.isFunction(selector) && Object.isUndefined(callback)) { + callback = selector, selector = null; + } + + return new Event.Handler(element, eventName, selector, callback).start(); + } + + Object.extend(Event, Event.Methods); + + Object.extend(Event, { + fire: fire, + observe: observe, + stopObserving: stopObserving, + on: on + }); + + Element.addMethods({ + fire: fire, + + observe: observe, + + stopObserving: stopObserving, + + on: on + }); + + Object.extend(document, { + fire: fire.methodize(), + + observe: observe.methodize(), + + stopObserving: stopObserving.methodize(), + + on: on.methodize(), + + loaded: false + }); + + if (GLOBAL.Event) Object.extend(window.Event, Event); + else GLOBAL.Event = Event; + + GLOBAL.Event.cache = {}; + + function destroyCache_IE() { + GLOBAL.Event.cache = null; + } + + if (window.attachEvent) + window.attachEvent('onunload', destroyCache_IE); + + DIV = null; + docEl = null; +})(this); + +(function(GLOBAL) { + /* 
Code for creating leak-free event responders is based on work by + John-David Dalton. */ + + var docEl = document.documentElement; + var MOUSEENTER_MOUSELEAVE_EVENTS_SUPPORTED = 'onmouseenter' in docEl + && 'onmouseleave' in docEl; + + function isSimulatedMouseEnterLeaveEvent(eventName) { + return !MOUSEENTER_MOUSELEAVE_EVENTS_SUPPORTED && + (eventName === 'mouseenter' || eventName === 'mouseleave'); + } + + function createResponder(uid, eventName, handler) { + if (Event._isCustomEvent(eventName)) + return createResponderForCustomEvent(uid, eventName, handler); + if (isSimulatedMouseEnterLeaveEvent(eventName)) + return createMouseEnterLeaveResponder(uid, eventName, handler); + + return function(event) { + if (!Event.cache) return; + + var element = Event.cache[uid].element; + Event.extend(event, element); + handler.call(element, event); + }; + } + + function createResponderForCustomEvent(uid, eventName, handler) { + return function(event) { + var cache = Event.cache[uid]; + var element = cache && cache.element; + + if (Object.isUndefined(event.eventName)) + return false; + + if (event.eventName !== eventName) + return false; + + Event.extend(event, element); + handler.call(element, event); + }; + } + + function createMouseEnterLeaveResponder(uid, eventName, handler) { + return function(event) { + var element = Event.cache[uid].element; + + Event.extend(event, element); + var parent = event.relatedTarget; + + while (parent && parent !== element) { + try { parent = parent.parentNode; } + catch(e) { parent = element; } + } + + if (parent === element) return; + handler.call(element, event); + } + } + + GLOBAL.Event._createResponder = createResponder; + docEl = null; +})(this); + +(function(GLOBAL) { + /* Support for the DOMContentLoaded event is based on work by Dan Webb, + Matthias Miller, Dean Edwards, John Resig, and Diego Perini. 
*/ + + var TIMER; + + function fireContentLoadedEvent() { + if (document.loaded) return; + if (TIMER) window.clearTimeout(TIMER); + document.loaded = true; + document.fire('dom:loaded'); + } + + function checkReadyState() { + if (document.readyState === 'complete') { + document.detachEvent('onreadystatechange', checkReadyState); + fireContentLoadedEvent(); + } + } + + function pollDoScroll() { + try { + document.documentElement.doScroll('left'); + } catch (e) { + TIMER = pollDoScroll.defer(); + return; + } + + fireContentLoadedEvent(); + } + + + if (document.readyState === 'complete') { + fireContentLoadedEvent(); + return; + } + + if (document.addEventListener) { + document.addEventListener('DOMContentLoaded', fireContentLoadedEvent, false); + } else { + document.attachEvent('onreadystatechange', checkReadyState); + if (window == top) TIMER = pollDoScroll.defer(); + } + + Event.observe(window, 'load', fireContentLoadedEvent); +})(this); + + +Element.addMethods(); +/*------------------------------- DEPRECATED -------------------------------*/ + +Hash.toQueryString = Object.toQueryString; + +var Toggle = { display: Element.toggle }; + +Element.addMethods({ + childOf: Element.Methods.descendantOf +}); + +var Insertion = { + Before: function(element, content) { + return Element.insert(element, {before:content}); + }, + + Top: function(element, content) { + return Element.insert(element, {top:content}); + }, + + Bottom: function(element, content) { + return Element.insert(element, {bottom:content}); + }, + + After: function(element, content) { + return Element.insert(element, {after:content}); + } +}; + +var $continue = new Error('"throw $continue" is deprecated, use "return" instead'); + +var Position = { + includeScrollOffsets: false, + + prepare: function() { + this.deltaX = window.pageXOffset + || document.documentElement.scrollLeft + || document.body.scrollLeft + || 0; + this.deltaY = window.pageYOffset + || document.documentElement.scrollTop + || 
document.body.scrollTop + || 0; + }, + + within: function(element, x, y) { + if (this.includeScrollOffsets) + return this.withinIncludingScrolloffsets(element, x, y); + this.xcomp = x; + this.ycomp = y; + this.offset = Element.cumulativeOffset(element); + + return (y >= this.offset[1] && + y < this.offset[1] + element.offsetHeight && + x >= this.offset[0] && + x < this.offset[0] + element.offsetWidth); + }, + + withinIncludingScrolloffsets: function(element, x, y) { + var offsetcache = Element.cumulativeScrollOffset(element); + + this.xcomp = x + offsetcache[0] - this.deltaX; + this.ycomp = y + offsetcache[1] - this.deltaY; + this.offset = Element.cumulativeOffset(element); + + return (this.ycomp >= this.offset[1] && + this.ycomp < this.offset[1] + element.offsetHeight && + this.xcomp >= this.offset[0] && + this.xcomp < this.offset[0] + element.offsetWidth); + }, + + overlap: function(mode, element) { + if (!mode) return 0; + if (mode == 'vertical') + return ((this.offset[1] + element.offsetHeight) - this.ycomp) / + element.offsetHeight; + if (mode == 'horizontal') + return ((this.offset[0] + element.offsetWidth) - this.xcomp) / + element.offsetWidth; + }, + + + cumulativeOffset: Element.Methods.cumulativeOffset, + + positionedOffset: Element.Methods.positionedOffset, + + absolutize: function(element) { + Position.prepare(); + return Element.absolutize(element); + }, + + relativize: function(element) { + Position.prepare(); + return Element.relativize(element); + }, + + realOffset: Element.Methods.cumulativeScrollOffset, + + offsetParent: Element.Methods.getOffsetParent, + + page: Element.Methods.viewportOffset, + + clone: function(source, target, options) { + options = options || { }; + return Element.clonePosition(target, source, options); + } +}; + +/*--------------------------------------------------------------------------*/ + +if (!document.getElementsByClassName) document.getElementsByClassName = function(instanceMethods){ + function iter(name) { + return 
name.blank() ? null : "[contains(concat(' ', @class, ' '), ' " + name + " ')]"; + } + + instanceMethods.getElementsByClassName = Prototype.BrowserFeatures.XPath ? + function(element, className) { + className = className.toString().strip(); + var cond = /\s/.test(className) ? $w(className).map(iter).join('') : iter(className); + return cond ? document._getElementsByXPath('.//*' + cond, element) : []; + } : function(element, className) { + className = className.toString().strip(); + var elements = [], classNames = (/\s/.test(className) ? $w(className) : null); + if (!classNames && !className) return elements; + + var nodes = $(element).getElementsByTagName('*'); + className = ' ' + className + ' '; + + for (var i = 0, child, cn; child = nodes[i]; i++) { + if (child.className && (cn = ' ' + child.className + ' ') && (cn.include(className) || + (classNames && classNames.all(function(name) { + return !name.toString().blank() && cn.include(' ' + name + ' '); + })))) + elements.push(Element.extend(child)); + } + return elements; + }; + + return function(className, parentElement) { + return $(parentElement || document.body).getElementsByClassName(className); + }; +}(Element.Methods); + +/*--------------------------------------------------------------------------*/ + +Element.ClassNames = Class.create(); +Element.ClassNames.prototype = { + initialize: function(element) { + this.element = $(element); + }, + + _each: function(iterator, context) { + this.element.className.split(/\s+/).select(function(name) { + return name.length > 0; + })._each(iterator, context); + }, + + set: function(className) { + this.element.className = className; + }, + + add: function(classNameToAdd) { + if (this.include(classNameToAdd)) return; + this.set($A(this).concat(classNameToAdd).join(' ')); + }, + + remove: function(classNameToRemove) { + if (!this.include(classNameToRemove)) return; + this.set($A(this).without(classNameToRemove).join(' ')); + }, + + toString: function() { + return 
$A(this).join(' '); + } +}; + +Object.extend(Element.ClassNames.prototype, Enumerable); + +/*--------------------------------------------------------------------------*/ + +(function() { + window.Selector = Class.create({ + initialize: function(expression) { + this.expression = expression.strip(); + }, + + findElements: function(rootElement) { + return Prototype.Selector.select(this.expression, rootElement); + }, + + match: function(element) { + return Prototype.Selector.match(element, this.expression); + }, + + toString: function() { + return this.expression; + }, + + inspect: function() { + return "#"; + } + }); + + Object.extend(Selector, { + matchElements: function(elements, expression) { + var match = Prototype.Selector.match, + results = []; + + for (var i = 0, length = elements.length; i < length; i++) { + var element = elements[i]; + if (match(element, expression)) { + results.push(Element.extend(element)); + } + } + return results; + }, + + findElement: function(elements, expression, index) { + index = index || 0; + var matchIndex = 0, element; + for (var i = 0, length = elements.length; i < length; i++) { + element = elements[i]; + if (Prototype.Selector.match(element, expression) && index === matchIndex++) { + return Element.extend(element); + } + } + }, + + findChildElements: function(element, expressions) { + var selector = expressions.toArray().join(', '); + return Prototype.Selector.select(selector, element || document); + } + }); +})(); diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAdmin.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAdmin.md new file mode 100644 index 0000000000000000000000000000000000000000..5f42bea59b89889e45e88b249416e05f9ee6648b --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAdmin.md @@ -0,0 +1,2956 @@ + + +# ZooKeeper Administrator's Guide + 
+### A Guide to Deployment and Administration + +* [Deployment](#ch_deployment) + * [System Requirements](#sc_systemReq) + * [Supported Platforms](#sc_supportedPlatforms) + * [Required Software](#sc_requiredSoftware) + * [Clustered (Multi-Server) Setup](#sc_zkMulitServerSetup) + * [Single Server and Developer Setup](#sc_singleAndDevSetup) +* [Administration](#ch_administration) + * [Designing a ZooKeeper Deployment](#sc_designing) + * [Cross Machine Requirements](#sc_CrossMachineRequirements) + * [Single Machine Requirements](#Single+Machine+Requirements) + * [Provisioning](#sc_provisioning) + * [Things to Consider: ZooKeeper Strengths and Limitations](#sc_strengthsAndLimitations) + * [Administering](#sc_administering) + * [Maintenance](#sc_maintenance) + * [Ongoing Data Directory Cleanup](#Ongoing+Data+Directory+Cleanup) + * [Debug Log Cleanup (logback)](#Debug+Log+Cleanup+Logback) + * [Supervision](#sc_supervision) + * [Monitoring](#sc_monitoring) + * [Logging](#sc_logging) + * [Troubleshooting](#sc_troubleshooting) + * [Configuration Parameters](#sc_configuration) + * [Minimum Configuration](#sc_minimumConfiguration) + * [Advanced Configuration](#sc_advancedConfiguration) + * [Cluster Options](#sc_clusterOptions) + * [Encryption, Authentication, Authorization Options](#sc_authOptions) + * [Experimental Options/Features](#Experimental+Options%2FFeatures) + * [Unsafe Options](#Unsafe+Options) + * [Disabling data directory autocreation](#Disabling+data+directory+autocreation) + * [Enabling db existence validation](#sc_db_existence_validation) + * [Performance Tuning Options](#sc_performance_options) + * [AdminServer configuration](#sc_adminserver_config) + * [Communication using the Netty framework](#Communication+using+the+Netty+framework) + * [Quorum TLS](#Quorum+TLS) + * [Upgrading existing non-TLS cluster with no downtime](#Upgrading+existing+nonTLS+cluster) + * [ZooKeeper Commands](#sc_zkCommands) + * [The Four Letter Words](#sc_4lw) + * [The 
AdminServer](#sc_adminserver)
+    * [Data File Management](#sc_dataFileManagement)
+        * [The Data Directory](#The+Data+Directory)
+        * [The Log Directory](#The+Log+Directory)
+        * [File Management](#sc_filemanagement)
+        * [Recovery - TxnLogToolkit](#Recovery+-+TxnLogToolkit)
+    * [Things to Avoid](#sc_commonProblems)
+    * [Best Practices](#sc_bestPractices)
+
+<a name="ch_deployment"></a>
+
+## Deployment
+
+This section contains information about deploying ZooKeeper and
+covers these topics:
+
+* [System Requirements](#sc_systemReq)
+* [Clustered (Multi-Server) Setup](#sc_zkMulitServerSetup)
+* [Single Server and Developer Setup](#sc_singleAndDevSetup)
+
+The first two sections assume you are interested in installing
+ZooKeeper in a production environment such as a datacenter. The final
+section covers situations in which you are setting up ZooKeeper on a
+limited basis - for evaluation, testing, or development - but not in a
+production environment.
+
+<a name="sc_systemReq"></a>
+
+### System Requirements
+
+<a name="sc_supportedPlatforms"></a>
+
+#### Supported Platforms
+
+ZooKeeper consists of multiple components. Some components are
+supported broadly, and other components are supported only on a smaller
+set of platforms.
+
+* **Client** is the Java client
+  library, used by applications to connect to a ZooKeeper ensemble.
+* **Server** is the Java server
+  that runs on the ZooKeeper ensemble nodes.
+* **Native Client** is a client
+  implemented in C, similar to the Java client, used by applications
+  to connect to a ZooKeeper ensemble.
+* **Contrib** refers to multiple
+  optional add-on components.
+
+The following matrix describes the level of support committed for
+running each component on different operating system platforms.

+
+##### Support Matrix
+
+| Operating System | Client | Server | Native Client | Contrib |
+|------------------|--------|--------|---------------|---------|
+| GNU/Linux | Development and Production | Development and Production | Development and Production | Development and Production |
+| Solaris | Development and Production | Development and Production | Not Supported | Not Supported |
+| FreeBSD | Development and Production | Development and Production | Not Supported | Not Supported |
+| Windows | Development and Production | Development and Production | Not Supported | Not Supported |
+| Mac OS X | Development Only | Development Only | Not Supported | Not Supported |
+
+For any operating system not explicitly mentioned as supported in
+the matrix, components may or may not work. The ZooKeeper community
+will fix obvious bugs that are reported for other platforms, but there
+is no full support.
+
+<a name="sc_requiredSoftware"></a>
+
+#### Required Software
+
+ZooKeeper runs in Java, release 1.8 or greater
+(JDK 8 LTS, JDK 11 LTS, JDK 12 - Java 9 and 10 are not supported).
+It runs as an _ensemble_ of ZooKeeper servers. Three
+ZooKeeper servers is the minimum recommended size for an
+ensemble, and we also recommend that they run on separate
+machines. At Yahoo!, ZooKeeper is usually deployed on
+dedicated RHEL boxes, with dual-core processors, 2GB of RAM,
+and 80GB IDE hard drives.
+
+<a name="sc_zkMulitServerSetup"></a>
+
+### Clustered (Multi-Server) Setup
+
+For reliable ZooKeeper service, you should deploy ZooKeeper in a
+cluster known as an _ensemble_. As long as a majority
+of the ensemble are up, the service will be available. Because ZooKeeper
+requires a majority, it is best to use an
+odd number of machines. For example, with four machines ZooKeeper can
+only handle the failure of a single machine; if two machines fail, the
+remaining two machines do not constitute a majority. However, with five
+machines ZooKeeper can handle the failure of two machines.
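The majority arithmetic above can be stated compactly: a majority of n servers is floor(n/2) + 1, so an ensemble of n servers tolerates (n - 1) / 2 failures in integer math. The class below is a hypothetical helper, not part of ZooKeeper, that just illustrates the rule:

```java
// Hypothetical helper (not shipped with ZooKeeper) illustrating the quorum rule.
public class QuorumMath {
    // A majority of n is floor(n/2) + 1, so the service stays available as long
    // as at most n - (floor(n/2) + 1) = (n - 1) / 2 servers fail.
    public static int toleratedFailures(int ensembleSize) {
        if (ensembleSize < 1) throw new IllegalArgumentException("need at least one server");
        return (ensembleSize - 1) / 2;
    }

    public static void main(String[] args) {
        for (int n = 3; n <= 6; n++) {
            System.out.println(n + " servers tolerate " + toleratedFailures(n) + " failure(s)");
        }
    }
}
```

Note how 4 servers tolerate no more failures than 3, and 6 no more than 5, which is why odd ensemble sizes are preferred.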

+
+###### Note
+>As mentioned in the
+[ZooKeeper Getting Started Guide](zookeeperStarted.html)
+, a minimum of three servers is required for a fault-tolerant
+clustered setup, and it is strongly recommended that you have an
+odd number of servers.
+
+>Usually three servers is more than enough for a production
+install, but for maximum reliability during maintenance, you may
+wish to install five servers. With three servers, if you perform
+maintenance on one of them, you are vulnerable to a failure on one
+of the other two servers during that maintenance. If you have five
+of them running, you can take one down for maintenance, and know
+that you're still OK if one of the other four suddenly fails.
+
+>Your redundancy considerations should include all aspects of
+your environment. If you have three ZooKeeper servers, but their
+network cables are all plugged into the same network switch, then
+the failure of that switch will take down your entire ensemble.
+
+Here are the steps to set up a server that will be part of an
+ensemble. These steps should be performed on every host in the
+ensemble:
+
+1. Install the Java JDK. You can use the native packaging system
+   for your system, or download the JDK from:
+   [http://java.sun.com/javase/downloads/index.jsp](http://java.sun.com/javase/downloads/index.jsp)
+
+2. Set the Java heap size. This is very important to avoid
+   swapping, which will seriously degrade ZooKeeper performance. To
+   determine the correct value, use load tests, and make sure you are
+   well below the usage limit that would cause you to swap. Be
+   conservative - use a maximum heap size of 3GB for a 4GB
+   machine.
+
+3. Install the ZooKeeper Server Package. It can be downloaded
+   from:
+   [http://zookeeper.apache.org/releases.html](http://zookeeper.apache.org/releases.html)
+
+4. Create a configuration file. This file can be called anything.

+   Use the following settings as a starting point:
+
+        tickTime=2000
+        dataDir=/var/lib/zookeeper/
+        clientPort=2181
+        initLimit=5
+        syncLimit=2
+        server.1=zoo1:2888:3888
+        server.2=zoo2:2888:3888
+        server.3=zoo3:2888:3888
+
+   You can find the meanings of these and other configuration
+   settings in the section [Configuration Parameters](#sc_configuration). A word,
+   though, about a few of them here:
+   Every machine that is part of the ZooKeeper ensemble should know
+   about every other machine in the ensemble. You accomplish this with
+   the series of lines of the form **server.id=host:port:port**.
+   (The parameters **host** and **port** are straightforward: for each server
+   you first specify a quorum port and then a dedicated port for ZooKeeper leader
+   election.) Since ZooKeeper 3.6.0 you can also [specify multiple addresses](#id_multi_address)
+   for each ZooKeeper server instance (this can increase availability when multiple physical
+   network interfaces can be used in parallel in the cluster).
+   You assign the
+   server id to each machine by creating a file named
+   *myid*, one for each server, which resides in
+   that server's data directory, as specified by the configuration file
+   parameter **dataDir**.
+
+5. The myid file
+   consists of a single line containing only the text of that machine's
+   id. So *myid* of server 1 would contain the text
+   "1" and nothing else. The id must be unique within the
+   ensemble and should have a value between 1 and 255.
+   **IMPORTANT:** if you enable extended features such
+   as TTL Nodes (see below) the id must be between 1
+   and 254 due to internal limitations.
+
+6. Create an initialization marker file *initialize*
+   in the same directory as *myid*. This file indicates
+   that an empty data directory is expected. When present, an empty database
+   is created and the marker file deleted.
When not present, an empty data
+   directory will mean this peer will not have voting rights and it will not
+   populate the data directory until it communicates with an active leader.
+   The intended use is to create this file only when bringing up a new
+   ensemble.
+
+7. If your configuration file is set up, you can start a
+   ZooKeeper server:
+
+        $ java -cp zookeeper.jar:lib/*:conf org.apache.zookeeper.server.quorum.QuorumPeerMain zoo.conf
+
+   QuorumPeerMain starts a ZooKeeper server;
+   [JMX](http://java.sun.com/javase/technologies/core/mntr-mgmt/javamanagement/)
+   management beans are also registered, which allows
+   management through a JMX management console.
+   The [ZooKeeper JMX
+   document](zookeeperJMX.html) contains details on managing ZooKeeper with JMX.
+   See the script _bin/zkServer.sh_,
+   which is included in the release, for an example
+   of starting server instances.
+8. Test your deployment by connecting to the hosts:
+   In Java, you can run the following command to execute
+   simple operations:
+
+        $ bin/zkCli.sh -server 127.0.0.1:2181
+
+<a name="sc_singleAndDevSetup"></a>
+
+### Single Server and Developer Setup
+
+If you want to set up ZooKeeper for development purposes, you will
+probably want to set up a single server instance of ZooKeeper, and then
+install either the Java or C client-side libraries and bindings on your
+development machine.
+
+The steps to set up a single server instance are similar
+to the above, except the configuration file is simpler. You can find the
+complete instructions in the [Installing and
+Running ZooKeeper in Single Server Mode](zookeeperStarted.html#sc_InstallingSingleMode) section of the [ZooKeeper Getting Started
+Guide](zookeeperStarted.html).
+
+For information on installing the client side libraries, refer to
+the [Bindings](zookeeperProgrammers.html#ch_bindings)
+section of the [ZooKeeper
+Programmer's Guide](zookeeperProgrammers.html).
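The *myid* rules from the clustered setup steps above (each server id must be unique within the ensemble, and must fall between 1 and 255, or between 1 and 254 when extended features such as TTL nodes are enabled) can be expressed as a small check. The class below is a hypothetical helper for illustration, not code shipped with ZooKeeper:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical validator (not part of ZooKeeper) for the myid rules above.
public class MyidCheck {
    // Valid range is 1..255, or 1..254 when extended features (e.g. TTL nodes)
    // are enabled, per the internal limitation noted in the setup steps.
    public static boolean validId(int id, boolean extendedFeatures) {
        int max = extendedFeatures ? 254 : 255;
        return id >= 1 && id <= max;
    }

    // Every myid in the ensemble must be unique.
    public static boolean allUnique(int[] ids) {
        Set<Integer> seen = new HashSet<>();
        for (int id : ids) {
            if (!seen.add(id)) return false; // duplicate id found
        }
        return true;
    }
}
```

A deployment script could run such a check before distributing *myid* files to the ensemble hosts.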

+
+<a name="ch_administration"></a>
+
+## Administration
+
+This section contains information about running and maintaining
+ZooKeeper and covers these topics:
+
+* [Designing a ZooKeeper Deployment](#sc_designing)
+* [Provisioning](#sc_provisioning)
+* [Things to Consider: ZooKeeper Strengths and Limitations](#sc_strengthsAndLimitations)
+* [Administering](#sc_administering)
+* [Maintenance](#sc_maintenance)
+* [Supervision](#sc_supervision)
+* [Monitoring](#sc_monitoring)
+* [Logging](#sc_logging)
+* [Troubleshooting](#sc_troubleshooting)
+* [Configuration Parameters](#sc_configuration)
+* [ZooKeeper Commands](#sc_zkCommands)
+* [Data File Management](#sc_dataFileManagement)
+* [Things to Avoid](#sc_commonProblems)
+* [Best Practices](#sc_bestPractices)
+
+<a name="sc_designing"></a>
+
+### Designing a ZooKeeper Deployment
+
+The reliability of ZooKeeper rests on two basic assumptions.
+
+1. Only a minority of servers in a deployment
+   will fail. _Failure_ in this context
+   means a machine crash, or some error in the network that
+   partitions a server off from the majority.
+1. Deployed machines operate correctly. To
+   operate correctly means to execute code correctly, to have
+   clocks that work properly, and to have storage and network
+   components that perform consistently.
+
+The sections below contain considerations for ZooKeeper
+administrators to maximize the probability of these assumptions
+holding true. Some of these are cross-machine considerations,
+and others are things you should consider for each and every
+machine in your deployment.
+
+<a name="sc_CrossMachineRequirements"></a>
+
+#### Cross Machine Requirements
+
+For the ZooKeeper service to be active, there must be a
+majority of non-failing machines that can communicate with
+each other. For a ZooKeeper ensemble with N servers,
+if N is odd, the ensemble is able to tolerate up to N/2
+server failures without losing any znode data;
+if N is even, the ensemble is able to tolerate up to N/2-1
+server failures.

+
+For example, if we have a ZooKeeper ensemble with 3 servers,
+the ensemble is able to tolerate up to 1 (3/2) server failure.
+If we have a ZooKeeper ensemble with 5 servers,
+the ensemble is able to tolerate up to 2 (5/2) server failures.
+If the ZooKeeper ensemble has 6 servers, the ensemble
+is still only able to tolerate up to 2 (6/2-1) server failures
+without losing data, and this also prevents the "split brain" issue.
+
+A ZooKeeper ensemble usually has an odd number of servers.
+This is because with an even number of servers,
+the capacity for failure tolerance is the same as that of
+the ensemble with one less server
+(2 failures for both a 5-node ensemble and a 6-node ensemble),
+but the ensemble has to maintain extra connections and
+data transfers for one more server.
+
+To achieve the highest probability of tolerating a failure
+you should try to make machine failures independent. For
+example, if most of the machines share the same switch,
+failure of that switch could cause a correlated failure and
+bring down the service. The same holds true of shared power
+circuits, cooling systems, etc.
+
+<a name="Single+Machine+Requirements"></a>
+
+#### Single Machine Requirements
+
+If ZooKeeper has to contend with other applications for
+access to resources like storage media, CPU, network, or
+memory, its performance will suffer markedly. ZooKeeper has
+strong durability guarantees, which means it uses storage
+media to log changes before the operation responsible for the
+change is allowed to complete. You should be aware of this
+dependency then, and take great care if you want to ensure
+that ZooKeeper operations aren’t held up by your media. Here
+are some things you can do to minimize that sort of
+degradation:
+
+* ZooKeeper's transaction log must be on a dedicated
+  device. (A dedicated partition is not enough.) ZooKeeper
+  writes the log sequentially, without seeking. Sharing your
+  log device with other processes can cause seeks and
+  contention, which in turn can cause multi-second
+  delays.

+* Do not put ZooKeeper in a situation that can cause a
+  swap. In order for ZooKeeper to function with any sort of
+  timeliness, it simply cannot be allowed to swap.
+  Therefore, make certain that the maximum heap size given
+  to ZooKeeper is not bigger than the amount of real memory
+  available to ZooKeeper. For more on this, see
+  [Things to Avoid](#sc_commonProblems)
+  below.
+
+<a name="sc_provisioning"></a>
+
+### Provisioning
+
+<a name="sc_strengthsAndLimitations"></a>
+
+### Things to Consider: ZooKeeper Strengths and Limitations
+
+<a name="sc_administering"></a>
+
+### Administering
+
+<a name="sc_maintenance"></a>
+
+### Maintenance
+
+Little long-term maintenance is required for a ZooKeeper
+cluster; however, you must be aware of the following:
+
+<a name="Ongoing+Data+Directory+Cleanup"></a>
+
+#### Ongoing Data Directory Cleanup
+
+The ZooKeeper [Data
+Directory](#var_datadir) contains files which are a persistent copy
+of the znodes stored by a particular serving ensemble. These
+are the snapshot and transactional log files. As changes are
+made to the znodes these changes are appended to a
+transaction log. Occasionally, when a log grows large, a
+snapshot of the current state of all znodes will be written
+to the filesystem and a new transaction log file is created
+for future transactions. During snapshotting, ZooKeeper may
+continue appending incoming transactions to the old log file.
+Therefore, some transactions which are newer than a snapshot
+may be found in the last transaction log preceding the
+snapshot.
+
+A ZooKeeper server **will not remove
+old snapshots and log files** when using the default
+configuration (see autopurge below); this is the
+responsibility of the operator. Every serving environment is
+different and therefore the requirements for managing these
+files may differ from install to install (backup for example).
+
+The PurgeTxnLog utility implements a simple retention
+policy that administrators can use. The [API docs](index.html) contain details on
+calling conventions (arguments, etc...).
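The retention idea behind PurgeTxnLog can be sketched in a few lines. The class below is an illustration only, not PurgeTxnLog itself; it assumes ZooKeeper's snapshot naming convention of `snapshot.<zxid in hex>` and merely selects the files that fall outside the newest *count*:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of a keep-newest-count retention policy (not PurgeTxnLog).
public class RetentionSketch {
    // Parse the zxid from a name like "snapshot.1a2b" (hex suffix, assumed naming).
    static long zxidOf(String snapshotName) {
        return Long.parseLong(snapshotName.substring("snapshot.".length()), 16);
    }

    // Return the snapshots that fall outside the newest <count> entries;
    // these (and their corresponding logs) would be candidates for deletion.
    public static List<String> purgeable(List<String> snapshots, int count) {
        List<String> sorted = new ArrayList<>(snapshots);
        Comparator<String> byZxid = Comparator.comparingLong(RetentionSketch::zxidOf);
        sorted.sort(byZxid.reversed()); // newest first
        return new ArrayList<>(sorted.subList(Math.min(count, sorted.size()), sorted.size()));
    }
}
```

A real cleanup must also retain any transaction log that overlaps the oldest kept snapshot, which is why using the shipped PurgeTxnLog tool is preferable to rolling your own.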

+
+In the following example the last *count* snapshots and
+their corresponding logs are retained and the others are
+deleted. The value of *count* should typically be
+greater than 3 (although not required, this provides 3 backups
+in the unlikely event a recent log has become corrupted). This
+can be run as a cron job on the ZooKeeper server machines to
+clean up the logs daily.
+
+    CLASSPATH='lib/*:conf' java org.apache.zookeeper.server.PurgeTxnLog <dataDir> <snapDir> -n <count>
+
+Automatic purging of the snapshots and corresponding
+transaction logs was introduced in version 3.4.0 and can be
+enabled via the configuration parameters **autopurge.snapRetainCount** and **autopurge.purgeInterval**. For more on
+this, see [Advanced Configuration](#sc_advancedConfiguration)
+below.
+
+<a name="Debug+Log+Cleanup+Logback"></a>
+
+#### Debug Log Cleanup (logback)
+
+See the section on [logging](#sc_logging) in this document. It is
+expected that you will set up a rolling file appender using the
+built-in logback feature. The sample configuration file in the
+release tar's `conf/logback.xml` provides an example of
+this.
+
+<a name="sc_supervision"></a>
+
+### Supervision
+
+You will want to have a supervisory process that manages
+each of your ZooKeeper server processes (JVM). The ZK server is
+designed to be "fail fast" meaning that it will shut down
+(process exit) if an error occurs that it cannot recover
+from. As a ZooKeeper serving cluster is highly reliable, this
+means that while the server may go down the cluster as a whole
+is still active and serving requests. Additionally, as the
+cluster is "self healing" the failed server once restarted will
+automatically rejoin the ensemble w/o any manual
+interaction.

+
+Having a supervisory process such as [daemontools](http://cr.yp.to/daemontools.html) or
+[SMF](http://en.wikipedia.org/wiki/Service\_Management\_Facility)
+(other supervisory options are also available; it's
+up to you which one you would like to use, these are just two
+examples) managing your ZooKeeper server ensures that if the
+process does exit abnormally it will automatically be restarted
+and will quickly rejoin the cluster.
+
+It is also recommended to configure the ZooKeeper server process to
+terminate and dump its heap if an **OutOfMemoryError** occurs. This is achieved
+by launching the JVM with the following arguments on Linux and Windows
+respectively. The *zkServer.sh* and
+*zkServer.cmd* scripts that ship with ZooKeeper set
+these options.
+
+    -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError='kill -9 %p'
+
+    "-XX:+HeapDumpOnOutOfMemoryError" "-XX:OnOutOfMemoryError=cmd /c taskkill /pid %%%%p /t /f"
+
+<a name="sc_monitoring"></a>
+
+### Monitoring
+
+The ZooKeeper service can be monitored in one of three primary ways:
+
+* the command port through the use of [4 letter words](#sc_zkCommands)
+* with [JMX](zookeeperJMX.html)
+* using the [`zkServer.sh status` command](zookeeperTools.html#zkServer)
+
+<a name="sc_logging"></a>
+
+### Logging
+
+ZooKeeper uses **[SLF4J](http://www.slf4j.org)**
+version 1.7 as its logging infrastructure. By default ZooKeeper is shipped with
+**[LOGBack](http://logback.qos.ch/)** as the logging backend, but you can use
+any other supported logging framework of your choice.
+
+The ZooKeeper default *logback.xml*
+file resides in the *conf* directory. Logback requires that
+*logback.xml* either be in the working directory
+(the directory from which ZooKeeper is run) or be accessible from the classpath.
+
+For more information about SLF4J, see
+[its manual](http://www.slf4j.org/manual.html).
+
+For more information about Logback, see
+[the Logback website](http://logback.qos.ch/).

+
+<a name="sc_troubleshooting"></a>
+
+### Troubleshooting
+
+* *Server not coming up because of file corruption* :
+  A server might not be able to read its database and fail to come up because of
+  some file corruption in the transaction logs of the ZooKeeper server. You will
+  see an IOException while loading the ZooKeeper database. In such a case,
+  make sure all the other servers in your ensemble are up and working. Use the "stat"
+  command on the command port to see if they are in good health. After you have verified that
+  all the other servers of the ensemble are up, you can go ahead and clean the database
+  of the corrupt server. Delete all the files in datadir/version-2 and datalogdir/version-2/.
+  Restart the server.
+
+<a name="sc_configuration"></a>
+
+### Configuration Parameters
+
+ZooKeeper's behavior is governed by the ZooKeeper configuration
+file. This file is designed so that the exact same file can be used by
+all the servers that make up a ZooKeeper ensemble, assuming the disk
+layouts are the same. If servers use different configuration files, care
+must be taken to ensure that the list of servers in all of the different
+configuration files match.
+
+###### Note
+>In 3.5.0 and later, some of these parameters should be placed in
+a dynamic configuration file. If they are placed in the static
+configuration file, ZooKeeper will automatically move them over to the
+dynamic configuration file. See [Dynamic Reconfiguration](zookeeperReconfig.html) for more information.
+
+<a name="sc_minimumConfiguration"></a>
+
+#### Minimum Configuration
+
+Here are the minimum configuration keywords that must be defined
+in the configuration file:
+
+* *clientPort* :
+  the port to listen for client connections; that is, the
+  port that clients attempt to connect to.
+
+* *secureClientPort* :
+  the port to listen on for secure client connections using SSL.
+  **clientPort** specifies
+  the port for plaintext connections while **secureClientPort** specifies the port for SSL
+  connections. Specifying both enables mixed-mode while omitting
+  either will disable that mode.
+    Note that the SSL feature is enabled only when
+    **zookeeper.serverCnxnFactory** and **zookeeper.clientCnxnSocket**
+    are configured to use their Netty implementations.
+
+* *observerMasterPort* :
+    the port to listen for observer connections; that is, the
+    port that observers attempt to connect to.
+    If the property is set, the server will host observer connections
+    when in follower mode, in addition to when in leader mode, and will correspondingly
+    attempt to connect to any voting peer when in observer mode.
+
+* *dataDir* :
+    the location where ZooKeeper will store the in-memory
+    database snapshots and, unless specified otherwise, the
+    transaction log of updates to the database.
+    ###### Note
+    >Be careful where you put the transaction log. A
+    dedicated transaction log device is key to consistent good
+    performance. Putting the log on a busy device will adversely
+    affect performance.
+
+* *tickTime* :
+    the length of a single tick, which is the basic time unit
+    used by ZooKeeper, as measured in milliseconds. It is used to
+    regulate heartbeats and timeouts. For example, the minimum
+    session timeout will be two ticks.
+
+
+
+#### Advanced Configuration
+
+The configuration settings in this section are optional. You can
+use them to further fine-tune the behaviour of your ZooKeeper servers.
+Some can also be set using Java system properties, generally of the
+form _zookeeper.keyword_. The exact system
+property, when available, is noted below.
+
+* *dataLogDir* :
+    (No Java system property)
+    This option will direct the machine to write the
+    transaction log to the **dataLogDir** rather than the **dataDir**. This allows a dedicated log
+    device to be used, and helps avoid competition between logging
+    and snapshots.
+    ###### Note
+    >Having a dedicated log device has a large impact on
+    throughput and stable latencies. It is highly recommended to dedicate a log device, set
+    **dataLogDir** to point to a directory on that device, and then make sure to point
+    **dataDir** to a directory _not_ residing on that device.
+
+* *globalOutstandingLimit* :
+    (Java system property: **zookeeper.globalOutstandingLimit**)
+    Clients can submit requests faster than ZooKeeper can
+    process them, especially if there are a lot of clients. To
+    prevent ZooKeeper from running out of memory due to queued
+    requests, ZooKeeper will throttle clients so that there are no
+    more than globalOutstandingLimit outstanding requests across the
+    entire ensemble, divided equally among the servers. The default limit is
+    1,000; for example, with 3 members each of them will have an
+    individual limit of 1000 / 2 = 500.
+
+* *preAllocSize* :
+    (Java system property: **zookeeper.preAllocSize**)
+    To avoid seeks ZooKeeper allocates space in the
+    transaction log file in blocks of preAllocSize kilobytes. The
+    default block size is 64MB. One reason for changing the size of
+    the blocks is to reduce the block size if snapshots are taken
+    more often. (Also, see **snapCount** and **snapSizeLimitInKb**).
+
+* *snapCount* :
+    (Java system property: **zookeeper.snapCount**)
+    ZooKeeper records its transactions using snapshots and
+    a transaction log (think write-ahead log). The number of
+    transactions recorded in the transaction log before a snapshot
+    can be taken (and the transaction log rolled) is determined
+    by snapCount. In order to prevent all of the machines in the quorum
+    from taking a snapshot at the same time, each ZooKeeper server
+    will take a snapshot when the number of transactions in the transaction log
+    reaches a runtime generated random value in the \[snapCount/2+1, snapCount]
+    range. The default snapCount is 100,000.
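+
+As a sketch, a *zoo.cfg* fragment applying the advice above (the paths
+here are hypothetical and the values illustrative; adjust them to your
+own disk layout and write rate):
+
+    # transaction log on a dedicated device, snapshots elsewhere
+    dataDir=/data/zookeeper
+    dataLogDir=/datalog/zookeeper
+    # snapshot roughly twice as often as the default
+    snapCount=50000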
+
+* *commitLogCount* :
+    (Java system property: **zookeeper.commitLogCount**)
+    ZooKeeper maintains an in-memory list of the last committed requests for fast synchronization with
+    followers when the followers are not too far behind. This improves sync performance when your
+    snapshots are large (>100,000). The default value is 500, which is the recommended minimum.
+
+* *snapSizeLimitInKb* :
+    (Java system property: **zookeeper.snapSizeLimitInKb**)
+    ZooKeeper records its transactions using snapshots and
+    a transaction log (think write-ahead log). The total size in bytes allowed
+    in the set of transactions recorded in the transaction log before a snapshot
+    can be taken (and the transaction log rolled) is determined
+    by snapSize. In order to prevent all of the machines in the quorum
+    from taking a snapshot at the same time, each ZooKeeper server
+    will take a snapshot when the size in bytes of the set of transactions in the
+    transaction log reaches a runtime generated random value in the \[snapSize/2+1, snapSize]
+    range. Each file system has a minimum standard file size, and in order
+    for this feature to function correctly, the number chosen must be larger
+    than that value. The default snapSizeLimitInKb is 4,194,304 (4GB).
+    A non-positive value will disable the feature.
+
+* *txnLogSizeLimitInKb* :
+    (Java system property: **zookeeper.txnLogSizeLimitInKb**)
+    The ZooKeeper transaction log file can also be controlled more
+    directly using txnLogSizeLimitInKb. Larger txn logs can lead to
+    slower follower syncs when the sync is done using the transaction log.
+    This is because the leader has to scan through the appropriate log
+    file on disk to find the transaction to start the sync from.
+    This feature is turned off by default, in which case snapCount and snapSizeLimitInKb are the
+    only values that limit transaction log size. When enabled,
+    ZooKeeper will roll the log when any of the limits is hit.
+    Please note that the actual log size can exceed this value by the size
+    of the serialized transaction. On the other hand, if this value is
+    set too close to (or smaller than) **preAllocSize**,
+    it can cause ZooKeeper to roll the log for every transaction. While
+    this is not a correctness issue, it may cause severely degraded
+    performance. To avoid this and to get the most out of this feature, it is
+    recommended to set the value to N * **preAllocSize**
+    where N >= 2.
+
+* *maxCnxns* :
+    (Java system property: **zookeeper.maxCnxns**)
+    Limits the total number of concurrent connections that can be made to a
+    ZooKeeper server (per client port of each server). This is used to prevent certain
+    classes of DoS attacks. The default is 0, which entirely removes
+    the limit on the total number of concurrent connections. Accounting for the
+    number of connections for the serverCnxnFactory and a secureServerCnxnFactory is done
+    separately, so a peer is allowed to host up to 2*maxCnxns provided they are of appropriate types.
+
+* *maxClientCnxns* :
+    (No Java system property)
+    Limits the number of concurrent connections (at the socket
+    level) that a single client, identified by IP address, may make
+    to a single member of the ZooKeeper ensemble. This is used to
+    prevent certain classes of DoS attacks, including file
+    descriptor exhaustion. The default is 60. Setting this to 0
+    entirely removes the limit on concurrent connections.
+
+* *clientPortAddress* :
+    **New in 3.3.0:** the
+    address (ipv4, ipv6 or hostname) to listen for client
+    connections; that is, the address that clients attempt
+    to connect to. This is optional; by default we bind in
+    such a way that any connection to the **clientPort** for any
+    address/interface/nic on the server will be
+    accepted.
+
+* *minSessionTimeout* :
+    (No Java system property)
+    **New in 3.3.0:** the
+    minimum session timeout in milliseconds that the server
+    will allow the client to negotiate. Defaults to 2 times
+    the **tickTime**.
+
+* *maxSessionTimeout* :
+    (No Java system property)
+    **New in 3.3.0:** the
+    maximum session timeout in milliseconds that the server
+    will allow the client to negotiate. Defaults to 20 times
+    the **tickTime**.
+
+* *fsync.warningthresholdms* :
+    (Java system property: **zookeeper.fsync.warningthresholdms**)
+    **New in 3.3.4:** A
+    warning message will be output to the log whenever an
+    fsync in the Transactional Log (WAL) takes longer than
+    this value. The value is specified in milliseconds and
+    defaults to 1000. This value can only be set as a
+    system property.
+
+* *maxResponseCacheSize* :
+    (Java system property: **zookeeper.maxResponseCacheSize**)
+    When set to a positive integer, it determines the size
+    of the cache that stores the serialized form of recently
+    read records. This helps save the serialization cost on
+    popular znodes. The metrics **response_packet_cache_hits**
+    and **response_packet_cache_misses** can be used to tune
+    this value for a given workload. The feature is turned on
+    by default with a value of 400; set it to 0 or a negative
+    integer to turn the feature off.
+
+* *maxGetChildrenResponseCacheSize* :
+    (Java system property: **zookeeper.maxGetChildrenResponseCacheSize**)
+    **New in 3.6.0:**
+    Similar to **maxResponseCacheSize**, but applies to get children
+    requests. The metrics **response_packet_get_children_cache_hits**
+    and **response_packet_get_children_cache_misses** can be used to tune
+    this value for a given workload. The feature is turned on
+    by default with a value of 400; set it to 0 or a negative
+    integer to turn the feature off.
+
+* *autopurge.snapRetainCount* :
+    (No Java system property)
+    **New in 3.4.0:**
+    When enabled, the ZooKeeper auto purge feature retains
+    the **autopurge.snapRetainCount** most
+    recent snapshots and the corresponding transaction logs in the
+    **dataDir** and **dataLogDir** respectively and deletes the rest.
+    Defaults to 3. Minimum value is 3.
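+
+As a sketch, these session and purge settings combine like this in
+*zoo.cfg* (the values are illustrative, not recommendations; they mirror
+the defaults described above):
+
+    # sessions may negotiate timeouts between 2 and 20 ticks
+    tickTime=2000
+    minSessionTimeout=4000
+    maxSessionTimeout=40000
+    # keep the 3 newest snapshots, purge the rest once a day
+    autopurge.snapRetainCount=3
+    autopurge.purgeInterval=24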
+
+* *autopurge.purgeInterval* :
+    (No Java system property)
+    **New in 3.4.0:** The
+    time interval in hours at which the purge task is
+    triggered. Set to a positive integer (1 and above)
+    to enable auto purging. Defaults to 0.
+    **Suffix support added in 3.10.0:** The interval is specified as an integer with an optional suffix to indicate the time unit.
+    Supported suffixes are: `ms` for milliseconds, `s` for seconds, `m` for minutes, `h` for hours, and `d` for days.
+    For example, "10m" represents 10 minutes, and "5h" represents 5 hours.
+    If no suffix is provided, the default unit is hours.
+
+* *syncEnabled* :
+    (Java system property: **zookeeper.observer.syncEnabled**)
+    **New in 3.4.6, 3.5.0:**
+    The observers now log transactions and write snapshots to disk
+    by default, like the participants. This reduces the recovery time
+    of the observers on restart. Set to "false" to disable this
+    feature. Default is "true".
+
+* *extendedTypesEnabled* :
+    (Java system property only: **zookeeper.extendedTypesEnabled**)
+    **New in 3.5.4, 3.6.0:** Set to `true` to enable
+    extended features such as the creation of [TTL Nodes](zookeeperProgrammers.html#TTL+Nodes).
+    They are disabled by default. IMPORTANT: when enabled, server IDs must
+    be less than 255 due to internal limitations.
+
+* *emulate353TTLNodes* :
+    (Java system property only: **zookeeper.emulate353TTLNodes**)
+    **New in 3.5.4, 3.6.0:** Due to [ZOOKEEPER-2901](https://issues.apache.org/jira/browse/ZOOKEEPER-2901), TTL nodes
+    created in version 3.5.3 are not supported in 3.5.4/3.6.0. However, a workaround is provided via the
+    zookeeper.emulate353TTLNodes system property. If you used TTL nodes in ZooKeeper 3.5.3 and need to maintain
+    compatibility, set **zookeeper.emulate353TTLNodes** to `true` in addition to
+    **zookeeper.extendedTypesEnabled**. NOTE: due to the bug, server IDs
+    must be 127 or less. Additionally, the maximum supported TTL value is `1099511627775`, which is smaller
+    than what was allowed in 3.5.3 (`1152921504606846975`).
+
+* *watchManagerName* :
+    (Java system property only: **zookeeper.watchManagerName**)
+    **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179).
+    The new watcher manager WatchManagerOptimized was added to optimize the memory overhead in heavy watch use cases. This
+    config is used to define which watcher manager is to be used. Currently, only WatchManager and
+    WatchManagerOptimized are supported.
+
+* *watcherCleanThreadsNum* :
+    (Java system property only: **zookeeper.watcherCleanThreadsNum**)
+    **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179).
+    The new watcher manager WatchManagerOptimized cleans up dead watchers lazily; this config decides how
+    many threads are used in the WatcherCleaner. More threads usually mean higher clean-up throughput. The
+    default value is 2, which is good enough even for heavy and continuous session closing/recreating cases.
+
+* *watcherCleanThreshold* :
+    (Java system property only: **zookeeper.watcherCleanThreshold**)
+    **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179).
+    The new watcher manager WatchManagerOptimized cleans up dead watchers lazily; the clean-up process is relatively
+    heavy, and batch processing reduces the cost and improves performance. This setting decides
+    the batch size. The default is 1000; there is no need to change it unless there is a memory or
+    clean-up speed issue.
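+
+As a sketch, since these are system properties, opting into the
+optimized watcher manager would be done via JVM flags (the
+fully-qualified class name is an assumption based on the class names
+above, and the values are just the documented defaults):
+
+    -Dzookeeper.watchManagerName=org.apache.zookeeper.server.watch.WatchManagerOptimized
+    -Dzookeeper.watcherCleanThreadsNum=2
+    -Dzookeeper.watcherCleanThreshold=1000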
+
+* *watcherCleanIntervalInSeconds* :
+    (Java system property only: **zookeeper.watcherCleanIntervalInSeconds**)
+    **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179).
+    The new watcher manager WatchManagerOptimized cleans up dead watchers lazily; the clean-up process is relatively
+    heavy, and batch processing reduces the cost and improves performance. Besides watcherCleanThreshold,
+    this setting is used to clean up dead watchers after a certain time even if the number of dead watchers has not
+    reached watcherCleanThreshold, so that dead watchers are not left around for too long. The default setting
+    is 10 minutes, which usually does not need to be changed.
+
+* *maxInProcessingDeadWatchers* :
+    (Java system property only: **zookeeper.maxInProcessingDeadWatchers**)
+    **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179).
+    This controls how large a backlog the WatcherCleaner can have. When it reaches this number, adding
+    dead watchers to the WatcherCleaner is slowed down, which in turn slows down adding and closing
+    watchers, so that an OOM issue can be avoided. By default there is no limit; you can set it to a value like
+    watcherCleanThreshold * 1000.
+
+* *bitHashCacheSize* :
+    (Java system property only: **zookeeper.bitHashCacheSize**)
+    **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179).
+    This setting decides the HashSet cache size in the BitHashSet implementation. Without the HashSet,
+    O(N) time is needed to get the elements, where N is the number of bits in elementBits. But the size needs to be
+    kept small to make sure it doesn't cost too much memory; there is a trade-off between memory
+    and time complexity. The default value is 10, which is a relatively reasonable cache size.
+
+* *fastleader.minNotificationInterval* :
+    (Java system property: **zookeeper.fastleader.minNotificationInterval**)
+    Lower bound for the length of time between two consecutive notification
+    checks during leader election. This interval determines how long a
+    peer waits to check the set of election votes and affects how
+    quickly an election can resolve. The interval follows a backoff
+    strategy from the configured minimum (this) to the configured maximum
+    (fastleader.maxNotificationInterval) for long elections.
+
+* *fastleader.maxNotificationInterval* :
+    (Java system property: **zookeeper.fastleader.maxNotificationInterval**)
+    Upper bound for the length of time between two consecutive notification
+    checks during leader election. This interval determines how long a
+    peer waits to check the set of election votes and affects how
+    quickly an election can resolve. The interval follows a backoff
+    strategy from the configured minimum (fastleader.minNotificationInterval)
+    to the configured maximum (this) for long elections.
+
+* *connectionMaxTokens* :
+    (Java system property: **zookeeper.connection_throttle_tokens**)
+    **New in 3.6.0:**
+    This is one of the parameters to tune the server-side connection throttler,
+    which is a token-based rate limiting mechanism with optional probabilistic
+    dropping.
+    This parameter defines the maximum number of tokens in the token bucket.
+    When set to 0, throttling is disabled. Default is 0.
+
+* *connectionTokenFillTime* :
+    (Java system property: **zookeeper.connection_throttle_fill_time**)
+    **New in 3.6.0:**
+    This is one of the parameters to tune the server-side connection throttler,
+    which is a token-based rate limiting mechanism with optional probabilistic
+    dropping.
+    This parameter defines the interval in milliseconds at which the token bucket is re-filled with
+    *connectionTokenFillCount* tokens. Default is 1.
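+
+As a sketch, a bucket refilled with 10 tokens every 100 ms admits a
+sustained average of 100 new connections per second, with bursts of up
+to *connectionMaxTokens*. In *zoo.cfg* (illustrative values only):
+
+    connectionMaxTokens=200
+    connectionTokenFillTime=100
+    connectionTokenFillCount=10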
+
+* *connectionTokenFillCount* :
+    (Java system property: **zookeeper.connection_throttle_fill_count**)
+    **New in 3.6.0:**
+    This is one of the parameters to tune the server-side connection throttler,
+    which is a token-based rate limiting mechanism with optional probabilistic
+    dropping.
+    This parameter defines the number of tokens to add to the token bucket every
+    *connectionTokenFillTime* milliseconds. Default is 1.
+
+* *connectionFreezeTime* :
+    (Java system property: **zookeeper.connection_throttle_freeze_time**)
+    **New in 3.6.0:**
+    This is one of the parameters to tune the server-side connection throttler,
+    which is a token-based rate limiting mechanism with optional probabilistic
+    dropping.
+    This parameter defines the interval in milliseconds at which the dropping
+    probability is adjusted. When set to -1, probabilistic dropping is disabled.
+    Default is -1.
+
+* *connectionDropIncrease* :
+    (Java system property: **zookeeper.connection_throttle_drop_increase**)
+    **New in 3.6.0:**
+    This is one of the parameters to tune the server-side connection throttler,
+    which is a token-based rate limiting mechanism with optional probabilistic
+    dropping.
+    This parameter defines the amount by which the dropping probability is increased. The throttler
+    checks every *connectionFreezeTime* milliseconds, and if the token bucket is
+    empty, the dropping probability is increased by *connectionDropIncrease*.
+    The default is 0.02.
+
+* *connectionDropDecrease* :
+    (Java system property: **zookeeper.connection_throttle_drop_decrease**)
+    **New in 3.6.0:**
+    This is one of the parameters to tune the server-side connection throttler,
+    which is a token-based rate limiting mechanism with optional probabilistic
+    dropping.
+    This parameter defines the amount by which the dropping probability is decreased. The throttler
+    checks every *connectionFreezeTime* milliseconds, and if the token bucket has
+    more tokens than a threshold, the dropping probability is decreased by
+    *connectionDropDecrease*. The threshold is *connectionMaxTokens* \*
+    *connectionDecreaseRatio*. The default is 0.002.
+
+* *connectionDecreaseRatio* :
+    (Java system property: **zookeeper.connection_throttle_decrease_ratio**)
+    **New in 3.6.0:**
+    This is one of the parameters to tune the server-side connection throttler,
+    which is a token-based rate limiting mechanism with optional probabilistic
+    dropping. This parameter defines the threshold for decreasing the dropping
+    probability. The default is 0.
+
+* *zookeeper.connection_throttle_weight_enabled* :
+    (Java system property only)
+    **New in 3.6.0:**
+    Whether to consider connection weights when throttling. This is only useful when the connection throttle is enabled, that is, when connectionMaxTokens is larger than 0. The default is false.
+
+* *zookeeper.connection_throttle_global_session_weight* :
+    (Java system property only)
+    **New in 3.6.0:**
+    The weight of a global session. It is the number of tokens required for a global session request to get through the connection throttler. It has to be a positive integer no smaller than the weight of a local session. The default is 3.
+
+* *zookeeper.connection_throttle_local_session_weight* :
+    (Java system property only)
+    **New in 3.6.0:**
+    The weight of a local session. It is the number of tokens required for a local session request to get through the connection throttler. It has to be a positive integer no larger than the weight of a global session or a renew session. The default is 1.
+
+* *zookeeper.connection_throttle_renew_session_weight* :
+    (Java system property only)
+    **New in 3.6.0:**
+    The weight of renewing a session. It is also the number of tokens required for a reconnect request to get through the throttler. It has to be a positive integer no smaller than the weight of a local session. The default is 2.
+
+
+* *clientPortListenBacklog* :
+    (No Java system property)
+    **New in 3.4.14, 3.5.5, 3.6.0:**
+    The socket backlog length for the ZooKeeper server socket. This controls
+    the number of requests that will be queued server-side to be processed
+    by the ZooKeeper server. Connections that exceed this length will receive
+    a network timeout (30s), which may cause ZooKeeper session expiry issues.
+    By default, this value is unset (`-1`), which, on Linux, uses a backlog of
+    `50`. If set, this value must be a positive number.
+
+* *serverCnxnFactory* :
+    (Java system property: **zookeeper.serverCnxnFactory**)
+    Specifies the ServerCnxnFactory implementation.
+    This should be set to `NettyServerCnxnFactory` in order to use TLS-based server communication.
+    Default is `NIOServerCnxnFactory`.
+
+* *flushDelay* :
+    (Java system property: **zookeeper.flushDelay**)
+    Time in milliseconds to delay the flush of the commit log.
+    Does not affect the limit defined by *maxBatchSize*.
+    Disabled by default (with value 0). Ensembles with high write rates
+    may see throughput improved with a value of 10-20 ms.
+
+* *maxWriteQueuePollTime* :
+    (Java system property: **zookeeper.maxWriteQueuePollTime**)
+    If *flushDelay* is enabled, this determines the amount of time in milliseconds
+    to wait before flushing when no new requests are being queued.
+    Set to *flushDelay*/3 by default (implicitly disabled by default).
+
+* *maxBatchSize* :
+    (Java system property: **zookeeper.maxBatchSize**)
+    The number of transactions allowed in the server before a flush of the
+    commit log is triggered.
+    Does not affect the limit defined by *flushDelay*.
+    Default is 1000.
+
+* *enforceQuota* :
+    (Java system property: **zookeeper.enforceQuota**)
+    **New in 3.7.0:**
+    Enforces the quota check. When enabled and the client exceeds the total bytes or children count hard quota under a znode, the server will reject the request and reply to the client with a `QuotaExceededException`.
+    The default value is false. See the [quota feature](http://zookeeper.apache.org/doc/current/zookeeperQuotas.html) for more details.
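+
+As a sketch, the write-path settings above combine like this in
+*zoo.cfg* (illustrative values: batch commits for up to 10 ms or 1000
+transactions, whichever limit is hit first):
+
+    flushDelay=10
+    maxBatchSize=1000
+    enforceQuota=true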
+
+* *requestThrottleLimit* :
+    (Java system property: **zookeeper.request_throttle_max_requests**)
+    **New in 3.6.0:**
+    The total number of outstanding requests allowed before the RequestThrottler starts stalling. When set to 0, throttling is disabled. The default is 0.
+
+* *requestThrottleStallTime* :
+    (Java system property: **zookeeper.request_throttle_stall_time**)
+    **New in 3.6.0:**
+    The maximum time (in milliseconds) for which a thread may wait to be notified that it may proceed processing a request. The default is 100.
+
+* *requestThrottleDropStale* :
+    (Java system property: **zookeeper.request_throttle_drop_stale**)
+    **New in 3.6.0:**
+    When enabled, the throttler will drop stale requests rather than issue them to the request pipeline. A stale request is a request sent by a connection that is now closed, and/or a request that will have a request latency higher than the sessionTimeout. The default is true.
+
+* *requestStaleLatencyCheck* :
+    (Java system property: **zookeeper.request_stale_latency_check**)
+    **New in 3.6.0:**
+    When enabled, a request is considered stale if the request latency is higher than its associated session timeout. Disabled by default.
+
+* *requestStaleConnectionCheck* :
+    (Java system property: **zookeeper.request_stale_connection_check**)
+    **New in 3.6.0:**
+    When enabled, a request is considered stale if the request's connection has closed. Enabled by default.
+
+* *zookeeper.request_throttler.shutdownTimeout* :
+    (Java system property only)
+    **New in 3.6.0:**
+    The time (in milliseconds) the RequestThrottler waits for the request queue to drain during shutdown before it shuts down forcefully. The default is 10000.
+
+* *advancedFlowControlEnabled* :
+    (Java system property: **zookeeper.netty.advancedFlowControl.enabled**)
+    Uses accurate flow control in Netty, based on the status of the ZooKeeper
+    pipeline, to avoid direct buffer OOM. It will disable AUTO_READ in
+    Netty.
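+
+As a sketch, enabling the request throttler described above in
+*zoo.cfg* might look like this (illustrative values: stall once 1000
+requests are outstanding, and drop requests whose latency already
+exceeds their session timeout):
+
+    requestThrottleLimit=1000
+    requestThrottleStallTime=100
+    requestThrottleDropStale=true
+    requestStaleLatencyCheck=true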
+
+* *enableEagerACLCheck* :
+    (Java system property only: **zookeeper.enableEagerACLCheck**)
+    When set to "true", enables an eager ACL check on write requests on each local
+    server before sending the requests to the quorum. Default is "false".
+
+* *maxConcurrentSnapSyncs* :
+    (Java system property: **zookeeper.leader.maxConcurrentSnapSyncs**)
+    The maximum number of snap syncs a leader or a follower can serve at the same
+    time. The default is 10.
+
+* *maxConcurrentDiffSyncs* :
+    (Java system property: **zookeeper.leader.maxConcurrentDiffSyncs**)
+    The maximum number of diff syncs a leader or a follower can serve at the same
+    time. The default is 100.
+
+* *digest.enabled* :
+    (Java system property only: **zookeeper.digest.enabled**)
+    **New in 3.6.0:**
+    The digest feature was added to detect data inconsistency inside
+    ZooKeeper when loading the database from disk, catching up with and
+    following the leader. It does an incremental hash check of the DataTree based on
+    the AdHash paper mentioned in
+
+    https://cseweb.ucsd.edu/~daniele/papers/IncHash.pdf
+
+    The idea is simple: the hash value of the DataTree is updated incrementally
+    based on the changes to the set of data. When the leader is preparing a txn,
+    it pre-calculates the hash of the tree based on the changes about to happen, with the
+    formula:
+
+    current_hash = current_hash + hash(new node data) - hash(old node data)
+
+    If it is creating a new node, hash(old node data) will be 0, and if it is a
+    delete node op, hash(new node data) will be 0.
+
+    This hash is associated with each txn to represent the expected hash value
+    after applying the txn to the data tree, and it is sent to followers with the
+    original proposals. A learner compares the actual hash value with the one in
+    the txn after applying the txn to the data tree, and reports a mismatch if they
+    are not the same.
+
+    These digest values are also persisted with each txn and snapshot on disk,
+    so when servers restart and load data from disk, they compare and see if
+    there is a hash mismatch, which helps detect data loss on disk.
+
+    For the actual hash function, CRC is used internally. It is not a collisionless
+    hash function, but it is more efficient than a collisionless hash, and the
+    collision probability is extremely small, which already meets the needs here.
+
+    This feature is backward and forward compatible, so it can safely be rolling
+    upgraded, downgraded, enabled, and later disabled without any compatibility
+    issue. Here are the scenarios that have been covered and tested:
+
+    1. When the leader runs the new code while a follower runs the old code, the digest will
+    be appended to the end of each txn; the follower will only read the header and txn data,
+    and the digest value in the txn will be ignored. It won't affect how the follower reads
+    and processes the next txn.
+    2. When the leader runs the old code while a follower runs the new code, the digest won't
+    be sent with the txn; when the follower tries to read the digest, it will hit an EOF, which
+    is caught and handled gracefully, with the digest value set to null.
+    3. When loading an old snapshot with the new code, an IOException is thrown when trying to
+    read the non-existent digest value; the exception is caught and the digest is
+    set to null, which means the digest won't be compared when loading this snapshot. This
+    is expected to happen during a rolling upgrade.
+    4. When loading a new snapshot with the old code, loading finishes successfully after
+    deserializing the data tree; the digest value at the end of the snapshot file is ignored.
+    5. The scenarios of a rolling restart with flag changes are similar to the 1st and 2nd
+    scenarios discussed above: if the leader has the feature enabled but a follower does not,
+    the digest value will be ignored and the follower won't compare the digest during runtime;
+    if the leader has it disabled but a follower has it enabled, the follower will get an EOF
+    exception, which is handled gracefully.
+
+    Note: the current digest calculation excludes nodes under /zookeeper
+    due to the potential inconsistency in the /zookeeper/quota stat node;
+    they can be included after that issue is fixed.
+
+    By default, this feature is enabled; set it to "false" to disable it.
+
+* *snapshot.compression.method* :
+    (Java system property: **zookeeper.snapshot.compression.method**)
+    **New in 3.6.0:**
+    This property controls whether or not ZooKeeper should compress snapshots
+    before storing them on disk (see [ZOOKEEPER-3179](https://issues.apache.org/jira/browse/ZOOKEEPER-3179)).
+    Possible values are:
+    - "": Disabled (no snapshot compression). This is the default behavior.
+    - "gz": See [gzip compression](https://en.wikipedia.org/wiki/Gzip).
+    - "snappy": See [Snappy compression](https://en.wikipedia.org/wiki/Snappy_(compression)).
+
+* *snapshot.trust.empty* :
+    (Java system property: **zookeeper.snapshot.trust.empty**)
+    **New in 3.5.6:**
+    This property controls whether or not ZooKeeper should treat missing
+    snapshot files as a fatal state that can't be recovered from.
+    Set to true to allow ZooKeeper servers to recover without snapshot
+    files. This should only be set when upgrading from old versions of
+    ZooKeeper (3.4.x, pre 3.5.3) where ZooKeeper might have only transaction
+    log files and no snapshot files. If the value is set
+    during an upgrade, we recommend setting it back to false after upgrading
+    and restarting the ZooKeeper process so ZooKeeper can continue its normal data
+    consistency check during the recovery process.
+    Default value is false.
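+
+As a sketch, compressed snapshots can be turned on in *zoo.cfg* like
+this (make sure every server in the ensemble runs a version that
+understands the chosen method before enabling it):
+
+    snapshot.compression.method=gz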
+
+* *audit.enable* :
+    (Java system property: **zookeeper.audit.enable**)
+    **New in 3.6.0:**
+    By default, audit logs are disabled. Set to "true" to enable them. Default value is "false".
+    See the [ZooKeeper audit logs](zookeeperAuditLogs.html) for more information.
+
+* *audit.impl.class* :
+    (Java system property: **zookeeper.audit.impl.class**)
+    **New in 3.6.0:**
+    Class implementing the audit logger. By default the logback-based audit logger
+    org.apache.zookeeper.audit.Slf4jAuditLogger is used.
+    See the [ZooKeeper audit logs](zookeeperAuditLogs.html) for more information.
+
+* *largeRequestMaxBytes* :
+    (Java system property: **zookeeper.largeRequestMaxBytes**)
+    **New in 3.6.0:**
+    The maximum number of bytes of all in-flight large requests. The connection will be closed if an incoming large request causes the limit to be exceeded. The default is 100 * 1024 * 1024.
+
+* *largeRequestThreshold* :
+    (Java system property: **zookeeper.largeRequestThreshold**)
+    **New in 3.6.0:**
+    The size threshold after which a request is considered a large request. If it is -1, then all requests are considered small, effectively turning off large request throttling. The default is -1.
+
+* *outstandingHandshake.limit*
+    (Java system property only: **zookeeper.netty.server.outstandingHandshake.limit**)
+    The maximum number of in-flight TLS handshakes ZooKeeper will allow;
+    connections exceeding this limit will be rejected before starting the handshake.
+    This setting doesn't limit the maximum TLS concurrency, but helps avoid a herd
+    effect due to TLS handshake timeouts when there are too many in-flight TLS
+    handshakes. Setting it to something like 250 is good enough to avoid the herd effect.
+
+* *netty.server.earlyDropSecureConnectionHandshakes*
+    (Java system property: **zookeeper.netty.server.earlyDropSecureConnectionHandshakes**)
+    If the ZooKeeper server is not fully started, drop TCP connections before performing the TLS handshake.
    This is useful in order to prevent flooding the server with many concurrent TLS handshakes after a restart.
    Please note that if you enable this flag the server won't answer 'ruok' commands until it is fully started.

    The behaviour of dropping the connection was introduced in ZooKeeper 3.7 and it was not possible to disable it.
    Since 3.7.1 and 3.8.0 this feature is disabled by default.

* *throttledOpWaitTime*
    (Java system property: **zookeeper.throttled_op_wait_time**)
    The time in the RequestThrottler queue beyond which a request will be marked as throttled.
    A throttled request will not be processed other than being fed down the pipeline of the server it belongs
    to, in order to preserve the order of all requests.
    The FinalProcessor will issue an error response (new error code: ZTHROTTLEDOP) for these undigested requests.
    The intent is for the clients not to retry them immediately.
    When set to 0, no requests will be throttled. The default is 0.

* *learner.closeSocketAsync*
    (Java system property: **zookeeper.learner.closeSocketAsync**)
    (Java system property: **learner.closeSocketAsync**) (Added for backward compatibility)
    **New in 3.7.0:**
    When enabled, a learner will close the quorum socket asynchronously. This is useful for TLS connections where closing a socket might take a long time, block the shutdown process, potentially delay a new leader election, and leave the quorum unavailable. Closing the socket asynchronously avoids blocking the shutdown process despite the long socket closing time, and a new leader election can be started while the socket is being closed.
    The default is false.

* *leader.closeSocketAsync*
    (Java system property: **zookeeper.leader.closeSocketAsync**)
    (Java system property: **leader.closeSocketAsync**) (Added for backward compatibility)
    **New in 3.7.0:**
    When enabled, the leader will close a quorum socket asynchronously.
This is useful for TLS connections where closing a socket might take a long time. If disconnecting a follower is initiated in ping() because of a failed SyncLimitCheck, then the long socket closing time will block the sending of pings to other followers. Without receiving pings, the other followers will not send session information to the leader, which causes sessions to expire. Setting this flag to true ensures that pings will be sent regularly.
    The default is false.

* *learner.asyncSending*
    (Java system property: **zookeeper.learner.asyncSending**)
    (Java system property: **learner.asyncSending**) (Added for backward compatibility)
    **New in 3.7.0:**
    Sending and receiving packets in the Learner used to be done synchronously in a critical section. An untimely network issue could cause the followers to hang (see [ZOOKEEPER-3575](https://issues.apache.org/jira/browse/ZOOKEEPER-3575) and [ZOOKEEPER-4074](https://issues.apache.org/jira/browse/ZOOKEEPER-4074)). The new design moves sending packets in the Learner to a separate thread and sends the packets asynchronously. The new design is enabled with this parameter (learner.asyncSending).
    The default is false.

* *forward_learner_requests_to_commit_processor_disabled*
    (Java system property: **zookeeper.forward_learner_requests_to_commit_processor_disabled**)
    When this property is set, the requests from learners won't be enqueued to
    the CommitProcessor queue, which helps save resources and GC time on the
    leader.

    The default value is false.

* *serializeLastProcessedZxid.enabled*
    (Java system property: **zookeeper.serializeLastProcessedZxid.enabled**)
    **New in 3.9.0:**
    If enabled, ZooKeeper serializes the lastProcessedZxid when taking a snapshot and deserializes it
    when restoring. Defaults to true. Needs to be enabled for performing snapshot and restore
    via admin server commands, as there is no snapshot file name to extract the lastProcessedZxid from.

    This feature is backward and forward compatible.
Here are the different scenarios:

    1. Snapshot triggered by the server internally
       a. When loading an old snapshot with new code, an EOFException will be thrown when trying to
          read the non-existent lastProcessedZxid value, and the exception will be caught.
          The lastProcessedZxid will be set using the snapshot file name.

       b. When loading a new snapshot with old code, loading will finish successfully after deserializing the
          digest value; the lastProcessedZxid at the end of the snapshot file will be ignored.
          The lastProcessedZxid will be set using the snapshot file name.

    2. Sync up between leader and follower
       The lastProcessedZxid will not be serialized by the leader nor deserialized by the follower
       in either new or old code. It will be set to the lastProcessedZxid sent from the leader
       via QuorumPacket.

    3. Snapshot triggered via admin server APIs
       The feature flag needs to be enabled for the snapshot command to work.



#### Cluster Options

The options in this section are designed for use with an ensemble
of servers -- that is, when deploying clusters of servers.

* *electionAlg* :
    (No Java system property)
    Election implementation to use. A value of "1" corresponds to the
    non-authenticated UDP-based version of fast leader election, "2"
    corresponds to the authenticated UDP-based version of fast
    leader election, and "3" corresponds to the TCP-based version of
    fast leader election. Algorithm 3 was made the default in 3.2.0;
    prior versions (3.0.0 and 3.1.0) used algorithms 1 and 2 as well.
    ###### Note
    >The implementations of leader election 1 and 2 were
    **deprecated** in 3.4.0. Since 3.6.0 only FastLeaderElection is available;
    in case of upgrade you have to shut down all of your servers and
    restart them with electionAlg=3 (or by removing the line from the configuration file).
    >

* *maxTimeToWaitForEpoch* :
    (Java system property: **zookeeper.leader.maxTimeToWaitForEpoch**)
    **New in 3.6.0:**
    The maximum time to wait for epoch packets from voters when activating
    the leader. If the leader received a LOOKING notification from one of
    its voters, and it hasn't received epoch packets from a majority
    within maxTimeToWaitForEpoch, then it will go to LOOKING and
    elect a leader again.
    This can be tuned to reduce quorum or server unavailability
    time; it can be set to be much smaller than initLimit * tickTime.
    In a cross-datacenter environment, it can be set to something
    like 2s.

* *initLimit* :
    (No Java system property)
    Amount of time, in ticks (see [tickTime](#id_tickTime)), to allow followers to
    connect and sync to a leader. Increase this value as needed, if
    the amount of data managed by ZooKeeper is large.

* *connectToLearnerMasterLimit* :
    (Java system property: zookeeper.**connectToLearnerMasterLimit**)
    Amount of time, in ticks (see [tickTime](#id_tickTime)), to allow followers to
    connect to the leader after leader election. Defaults to the value of initLimit.
    Use when initLimit is high so connecting to the learner master doesn't result in a higher timeout.

* *leaderServes* :
    (Java system property: zookeeper.**leaderServes**)
    Leader accepts client connections. Default value is "yes".
    The leader machine coordinates updates. For higher update
    throughput at the slight expense of read throughput the leader
    can be configured to not accept clients and focus on
    coordination. The default for this option is yes, which means
    that a leader will accept client connections.
    ###### Note
    >Turning on leader selection is highly recommended when
    you have more than three ZooKeeper servers in an ensemble.

* *server.x=[hostname]:nnnnn[:nnnnn] etc* :
    (No Java system property)
    Servers making up the ZooKeeper ensemble.
When the server
    starts up, it determines which server it is by looking for the
    file *myid* in the data directory. That file
    contains the server number, in ASCII, and it should match
    **x** in **server.x** on the left-hand side of this
    setting.
    The list of servers that make up ZooKeeper servers that is
    used by the clients must match the list of ZooKeeper servers
    that each ZooKeeper server has.
    There are two port numbers **nnnnn**.
    The first is used by followers to connect to the leader, and the second is for
    leader election. If you want to test multiple servers on a single machine, then
    different ports can be used for each server.

    Since ZooKeeper 3.6.0 it is possible to specify **multiple addresses** for each
    ZooKeeper server (see [ZOOKEEPER-3188](https://issues.apache.org/jira/projects/ZOOKEEPER/issues/ZOOKEEPER-3188)).
    To enable this feature, you must set the *multiAddress.enabled* configuration property
    to *true*. This helps to increase availability and adds network-level
    resiliency to ZooKeeper. When multiple physical network interfaces are used
    for the servers, ZooKeeper is able to bind on all interfaces and switch at runtime
    to a working interface in case of a network error. The different addresses can be specified
    in the config using a pipe ('|') character. A valid configuration using multiple addresses looks like:

        server.1=zoo1-net1:2888:3888|zoo1-net2:2889:3889
        server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889
        server.3=zoo3-net1:2888:3888|zoo3-net2:2889:3889


    ###### Note
    >By enabling this feature, the Quorum protocol (ZooKeeper Server-Server protocol) will change.
    The users will not notice this and when anyone starts a ZooKeeper cluster with the new config,
    everything will work normally. However, it's not possible to enable this feature and specify
    multiple addresses during a rolling upgrade if the old ZooKeeper cluster didn't support the
    *multiAddress* feature (and the new Quorum protocol).
If you need this feature but you
    also need to perform a rolling upgrade from a ZooKeeper cluster older than *3.6.0*, then you
    first need to do the rolling upgrade without enabling the MultiAddress feature and later perform
    a separate rolling restart with the new configuration, where **multiAddress.enabled** is set
    to **true** and multiple addresses are provided.

* *syncLimit* :
    (No Java system property)
    Amount of time, in ticks (see [tickTime](#id_tickTime)), to allow followers to sync
    with ZooKeeper. If followers fall too far behind a leader, they
    will be dropped.

* *group.x=nnnnn[:nnnnn]* :
    (No Java system property)
    Enables a hierarchical quorum construction. "x" is a group identifier
    and the numbers following the "=" sign correspond to server identifiers.
    The right-hand side of the assignment is a colon-separated list of server
    identifiers. Note that groups must be disjoint and the union of all groups
    must be the ZooKeeper ensemble.
    You will find an example [here](zookeeperHierarchicalQuorums.html).

* *weight.x=nnnnn* :
    (No Java system property)
    Used along with "group", it assigns a weight to a server when
    forming quorums. Such a value corresponds to the weight of a server
    when voting. There are a few parts of ZooKeeper that require voting,
    such as leader election and the atomic broadcast protocol. By default
    the weight of a server is 1. If the configuration defines groups, but not
    weights, then a value of 1 will be assigned to all servers.
    You will find an example [here](zookeeperHierarchicalQuorums.html).

* *cnxTimeout* :
    (Java system property: zookeeper.**cnxTimeout**)
    Sets the timeout value for opening connections for leader election notifications.
    Only applicable if you are using electionAlg 3.
    ###### Note
    >Default value is 5 seconds.
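As an illustration of the *group.x* and *weight.x* options above, a six-server ensemble split into three disjoint groups with equal voting weights could be configured like this (the server ids and grouping are examples):

    group.1=1:2
    group.2=3:4
    group.3=5:6
    weight.1=1
    weight.2=1
    weight.3=1
    weight.4=1
    weight.5=1
    weight.6=1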

* *quorumCnxnTimeoutMs* :
    (Java system property: zookeeper.**quorumCnxnTimeoutMs**)
    Sets the read timeout value for the connections for leader election notifications.
    Only applicable if you are using electionAlg 3.
    ###### Note
    >Default value is -1, which will then use syncLimit * tickTime as the timeout.

* *standaloneEnabled* :
    (No Java system property)
    **New in 3.5.0:**
    When set to false, a single server can be started in replicated
    mode, a lone participant can run with observers, and a cluster
    can reconfigure down to one node, and up from one node. The
    default is true for backwards compatibility. It can be set
    using QuorumPeerConfig's setStandaloneEnabled method or by
    adding "standaloneEnabled=false" or "standaloneEnabled=true"
    to a server's config file.

* *reconfigEnabled* :
    (No Java system property)
    **New in 3.5.3:**
    This controls the enabling or disabling of the
    [Dynamic Reconfiguration](zookeeperReconfig.html) feature. When the feature
    is enabled, users can perform reconfigure operations through
    the ZooKeeper client API or through ZooKeeper command line tools,
    assuming users are authorized to perform such operations.
    When the feature is disabled, no user, including the super user,
    can perform a reconfiguration. Any attempt to reconfigure will return an error.
    The **"reconfigEnabled"** option can be set as
    **"reconfigEnabled=false"** or
    **"reconfigEnabled=true"**
    in a server's config file, or using QuorumPeerConfig's
    setReconfigEnabled method. The default value is false.
    If present, the value should be consistent across every server in
    the entire ensemble. Setting the value as true on some servers and false
    on other servers will cause inconsistent behavior depending on which server
    is elected as leader. If the leader has a setting of
    **"reconfigEnabled=true"**, then the ensemble
    will have the reconfig feature enabled.
If the leader has a setting of
    **"reconfigEnabled=false"**, then the ensemble
    will have the reconfig feature disabled. It is thus recommended to have a consistent
    value for **"reconfigEnabled"** across servers
    in the ensemble.

* *4lw.commands.whitelist* :
    (Java system property: **zookeeper.4lw.commands.whitelist**)
    **New in 3.5.3:**
    A comma-separated list of [Four Letter Words](#sc_4lw)
    commands that the user wants to use. A valid Four Letter Words
    command must be put in this list, else the ZooKeeper server will
    not enable the command.
    By default the whitelist only contains the "srvr" command,
    which zkServer.sh uses. The rest of the four-letter word commands are disabled
    by default: attempting to use them will yield the response
    ".... is not executed because it is not in the whitelist."
    Here's an example of a configuration that enables the stat, ruok, conf, and isro
    commands while disabling the rest of the Four Letter Words commands:

        4lw.commands.whitelist=stat, ruok, conf, isro


If you really need to enable all four-letter word commands by default, you can use
the asterisk option so you don't have to include every command one by one in the list.
As an example, this will enable all four-letter word commands:


    4lw.commands.whitelist=*


* *tcpKeepAlive* :
    (Java system property: **zookeeper.tcpKeepAlive**)
    **New in 3.5.4:**
    Setting this to true sets the TCP keepAlive flag on the
    sockets used by quorum members to perform elections.
    This will allow for connections between quorum members to
    remain up when there is network infrastructure that may
    otherwise break them. Some NATs and firewalls may terminate
    or lose state for long-running or idle connections.
    Enabling this option relies on OS level settings to work
    properly; check your operating system's options regarding TCP
    keepalive for more information. Defaults to
    **false**.
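At the socket level, *tcpKeepAlive* amounts to setting the standard `SO_KEEPALIVE` option on the election sockets. The effect can be sketched in Python for illustration (this mimics the socket-level behavior; it is not ZooKeeper's actual code, and probe frequency remains an OS-level setting):

```python
import socket

def make_keepalive_socket():
    """Create a TCP socket with OS-level keepalive probes enabled --
    the socket-level effect of ZooKeeper's tcpKeepAlive=true."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    return sock

s = make_keepalive_socket()
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # True
s.close()
```

How often probes are sent, and after how many failures a connection is declared dead, is controlled by the operating system (e.g. `tcp_keepalive_time` on Linux), which is why the documentation points you to OS settings.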

* *clientTcpKeepAlive* :
    (Java system property: **zookeeper.clientTcpKeepAlive**)
    **New in 3.6.1:**
    Setting this to true sets the TCP keepAlive flag on the
    client sockets. Some broken network infrastructure may lose
    the FIN packet that is sent from a closing client. These never
    closed client sockets cause an OS resource leak. Enabling this
    option terminates these zombie sockets via an idle check.
    Enabling this option relies on OS level settings to work
    properly; check your operating system's options regarding TCP
    keepalive for more information. Defaults to **false**. Please
    note the distinction between it and **tcpKeepAlive**. It is
    applied to the client sockets while **tcpKeepAlive** is for
    the sockets used by quorum members. Currently this option is
    only available when the default `NIOServerCnxnFactory` is used.

* *electionPortBindRetry* :
    (Java system property only: **zookeeper.electionPortBindRetry**)
    Sets the maximum retry count for when the ZooKeeper server fails to bind
    the leader election port. Such errors can be temporary and recoverable,
    such as the DNS issue described in [ZOOKEEPER-3320](https://issues.apache.org/jira/projects/ZOOKEEPER/issues/ZOOKEEPER-3320),
    or non-retryable, such as the port already being in use.
    In case of transient errors, this property can improve the availability
    of the ZooKeeper server and help it to recover on its own.
    Default value is 3. In container environments, especially in Kubernetes,
    this value should be increased or set to 0 (infinite retry) to overcome issues
    related to DNS name resolution.


* *observer.reconnectDelayMs* :
    (Java system property: **zookeeper.observer.reconnectDelayMs**)
    When an observer loses its connection with the leader, it waits for the
    specified value before trying to reconnect with the leader so that
    the entire observer fleet won't try to run leader election and reconnect
    to the leader at once.
    Defaults to 0 ms.
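In a Kubernetes deployment, for example, the two resiliency knobs above might be combined in zoo.cfg like this (the values are illustrative, not recommendations):

    electionPortBindRetry=0
    observer.reconnectDelayMs=500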

* *observer.election.DelayMs* :
    (Java system property: **zookeeper.observer.election.DelayMs**)
    Delays the observer's participation in a leader election upon disconnect
    so as to prevent unexpected additional load on the voting peers during
    the process. Defaults to 200 ms.

* *localSessionsEnabled* and *localSessionsUpgradingEnabled* :
    **New in 3.5:**
    Possible values are true or false. Their default values are false.
    Turn on the local session feature by setting *localSessionsEnabled=true*. Turning on
    *localSessionsUpgradingEnabled* can upgrade a local session to a global session automatically as required (e.g. when creating ephemeral nodes),
    which only matters when *localSessionsEnabled* is enabled.



#### Encryption, Authentication, Authorization Options

The options in this section allow control over the
encryption/authentication/authorization performed by the service.

Besides this page, you can also find useful information about client side configuration in the
[Programmers Guide](zookeeperProgrammers.html#sc_java_client_configuration).
The ZooKeeper Wiki also has useful pages about [ZooKeeper SSL support](https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide)
and [SASL authentication for ZooKeeper](https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+and+SASL).

* *DigestAuthenticationProvider.enabled* :
    (Java system property: **zookeeper.DigestAuthenticationProvider.enabled**)
    **New in 3.7:**
    Determines whether the `digest` authentication provider is
    enabled.
The default value is **true** for backwards
    compatibility, but it may be a good idea to disable this provider
    if not used, as it can result in misleading entries appearing in
    audit logs
    (see [ZOOKEEPER-3979](https://issues.apache.org/jira/browse/ZOOKEEPER-3979)).

* *DigestAuthenticationProvider.superDigest* :
    (Java system property: **zookeeper.DigestAuthenticationProvider.superDigest**)
    By default this feature is **disabled**.
    **New in 3.2:**
    Enables a ZooKeeper ensemble administrator to access the
    znode hierarchy as a "super" user. In particular no ACL
    checking occurs for a user authenticated as
    super.
    org.apache.zookeeper.server.auth.DigestAuthenticationProvider
    can be used to generate the superDigest; call it with
    one parameter of `super:<password>`. Provide the
    generated `super:<data>` as the system property value
    when starting each server of the ensemble.
    When authenticating to a ZooKeeper server (from a
    ZooKeeper client) pass a scheme of "digest" and authdata
    of `super:<password>`. Note that digest auth passes
    the authdata in plaintext to the server; it would be
    prudent to use this authentication method only on
    localhost (not over the network) or over an encrypted
    connection.

* *DigestAuthenticationProvider.digestAlg* :
    (Java system property: **zookeeper.DigestAuthenticationProvider.digestAlg**)
    **New in 3.7.0:**
    Sets the ACL digest algorithm. The default value is `SHA1`, which will be deprecated in the future for security reasons.
    Set this property to the same value on all the servers.

    - How to support more algorithms?
    - modify the `java.security` configuration file under `$JAVA_HOME/jre/lib/security/java.security` by specifying:
      `security.provider.<n>=<provider class name>`.

      ```
      For example:
      set zookeeper.DigestAuthenticationProvider.digestAlg=RipeMD160
      security.provider.3=org.bouncycastle.jce.provider.BouncyCastleProvider
      ```

    - copy the jar file to `$JAVA_HOME/jre/lib/ext/`.

    ```
    For example:
    copy bcprov-jdk18on-1.60.jar to $JAVA_HOME/jre/lib/ext/
    ```

    - How to migrate from one digest algorithm to another?
      - 1. Regenerate `superDigest` when migrating to the new algorithm.
      - 2. `SetAcl` for any znode which already had a digest auth entry using the old algorithm.

* *IPAuthenticationProvider.usexforwardedfor* :
    (Java system property: **zookeeper.IPAuthenticationProvider.usexforwardedfor**)
    **New in 3.9.3:**
    IPAuthenticationProvider uses the client IP address to authenticate the user. By
    default it reads the **Host** HTTP header to detect the client IP address. In some
    proxy configurations the proxy server adds the **X-Forwarded-For** header to
    the request in order to provide the IP address of the original client request.
    By enabling the **usexforwardedfor** ZooKeeper setting, **X-Forwarded-For** will be preferred
    over the standard **Host** header.
    Default value is **false**.

* *X509AuthenticationProvider.superUser* :
    (Java system property: **zookeeper.X509AuthenticationProvider.superUser**)
    The SSL-backed way to enable a ZooKeeper ensemble
    administrator to access the znode hierarchy as a "super" user.
    When this parameter is set to an X500 principal name, only an
    authenticated client with that principal will be able to bypass
    ACL checking and have full privileges to all znodes.

* *zookeeper.superUser* :
    (Java system property: **zookeeper.superUser**)
    Similar to **zookeeper.X509AuthenticationProvider.superUser**
    but generic for SASL based logins. It stores the name of
    a user that can access the znode hierarchy as a "super" user.
    You can specify multiple SASL super users using the
    **zookeeper.superUser.[suffix]** notation, e.g.:
    `zookeeper.superUser.1=...`.

* *ssl.authProvider* :
    (Java system property: **zookeeper.ssl.authProvider**)
    Specifies a subclass of **org.apache.zookeeper.auth.X509AuthenticationProvider**
    to use for secure client authentication.
This is useful in
    certificate key infrastructures that do not use JKS. It may be
    necessary to extend **javax.net.ssl.X509KeyManager** and **javax.net.ssl.X509TrustManager**
    to get the desired behavior from the SSL stack. To configure the
    ZooKeeper server to use the custom provider for authentication,
    choose a scheme name for the custom AuthenticationProvider and
    set the property **zookeeper.authProvider.[scheme]** to the fully-qualified class name of the custom
    implementation. This will load the provider into the ProviderRegistry.
    Then set this property **zookeeper.ssl.authProvider=[scheme]** and that provider
    will be used for secure authentication.

* *zookeeper.ensembleAuthName* :
    (Java system property only: **zookeeper.ensembleAuthName**)
    **New in 3.6.0:**
    Specifies a list of comma-separated valid names/aliases of an ensemble. A client
    can provide the ensemble name it intends to connect to as the credential for the scheme "ensemble". The EnsembleAuthenticationProvider will check the credential against
    the list of names/aliases of the ensemble that receives the connection request.
    If the credential is not in the list, the connection request will be refused.
    This prevents a client from accidentally connecting to the wrong ensemble.

* *sessionRequireClientSASLAuth* :
    (Java system property: **zookeeper.sessionRequireClientSASLAuth**)
    **New in 3.6.0:**
    When set to **true**, the ZooKeeper server will only accept connections and requests from clients
    that have authenticated with the server via SASL. Clients that are not configured with SASL
    authentication, or that are configured with SASL but fail authentication (i.e. with invalid credentials),
    will not be able to establish a session with the server. A typed error code (-124) will be delivered
    in such a case, and both the Java and C clients will close the session with the server thereafter,
    without further attempts at retrying to reconnect.

    This configuration is shorthand for **enforce.auth.enabled=true** and **enforce.auth.scheme=sasl**.

    By default, this feature is disabled. Users who would like to opt in can enable the feature
    by setting **sessionRequireClientSASLAuth** to **true**.

    This feature overrules the zookeeper.allowSaslFailedClients option, so even if the server is
    configured to allow clients that fail SASL authentication to log in, the client will not be able to
    establish a session with the server if this feature is enabled.

* *enforce.auth.enabled* :
    (Java system property : **zookeeper.enforce.auth.enabled**)
    **New in 3.7.0:**
    When set to **true**, the ZooKeeper server will only accept connections and requests from clients
    that have authenticated with the server via a configured auth scheme. Authentication schemes
    can be configured using the property enforce.auth.schemes. Clients that are not
    configured with any of the auth schemes configured at the server, or that are configured but fail authentication (i.e. with invalid credentials),
    will not be able to establish a session with the server. A typed error code (-124) will be delivered
    in such a case, and both the Java and C clients will close the session with the server thereafter,
    without further attempts at retrying to reconnect.

    By default, this feature is disabled. Users who would like to opt in can enable the feature
    by setting **enforce.auth.enabled** to **true**.

    When **enforce.auth.enabled=true** and **enforce.auth.schemes=sasl**, the
    zookeeper.allowSaslFailedClients configuration is overruled. So even if the server is
    configured to allow clients that fail SASL authentication to log in, the client will not be able to
    establish a session with the server if this feature is enabled with sasl as the authentication scheme.

* *enforce.auth.schemes* :
    (Java system property : **zookeeper.enforce.auth.schemes**)
    **New in 3.7.0:**
    Comma separated list of authentication schemes.
Clients must be authenticated with at least one of the
    authentication schemes before doing any zookeeper operations.
    This property is used only when **enforce.auth.enabled** is set to **true**.

* *sslQuorum* :
    (Java system property: **zookeeper.sslQuorum**)
    **New in 3.5.5:**
    Enables encrypted quorum communication. Default is `false`. When enabling this feature, please also consider enabling *leader.closeSocketAsync*
    and *learner.closeSocketAsync* to avoid issues associated with the potentially long socket closing time when shutting down an SSL connection.

* *ssl.keyStore.location* and *ssl.keyStore.password* and *ssl.quorum.keyStore.location* and *ssl.quorum.keyStore.password* :
    (Java system properties: **zookeeper.ssl.keyStore.location** and **zookeeper.ssl.keyStore.password** and **zookeeper.ssl.quorum.keyStore.location** and **zookeeper.ssl.quorum.keyStore.password**)
    **New in 3.5.5:**
    Specifies the file path to a Java keystore containing the local
    credentials to be used for client and quorum TLS connections, and the
    password to unlock the file.

* *ssl.keyStore.passwordPath* and *ssl.quorum.keyStore.passwordPath* :
    (Java system properties: **zookeeper.ssl.keyStore.passwordPath** and **zookeeper.ssl.quorum.keyStore.passwordPath**)
    **New in 3.8.0:**
    Specifies the file path that contains the keystore password. Reading the password from a file takes precedence over
    the explicit password property.

* *ssl.keyStore.type* and *ssl.quorum.keyStore.type* :
    (Java system properties: **zookeeper.ssl.keyStore.type** and **zookeeper.ssl.quorum.keyStore.type**)
    **New in 3.5.5:**
    Specifies the file format of client and quorum keystores. Values: JKS, PEM, PKCS12 or null (detect by filename).
    Default: null.
    **New in 3.5.10, 3.6.3, 3.7.0:**
    The format BCFKS was added.
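Putting the quorum keystore settings above together, a zoo.cfg sketch might look like the following (the paths are placeholders, not real defaults):

    sslQuorum=true
    ssl.quorum.keyStore.location=/path/to/quorum_keystore.jks
    ssl.quorum.keyStore.passwordPath=/path/to/quorum_keystore_password.txt
    ssl.quorum.keyStore.type=JKS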

* *ssl.trustStore.location* and *ssl.trustStore.password* and *ssl.quorum.trustStore.location* and *ssl.quorum.trustStore.password* :
    (Java system properties: **zookeeper.ssl.trustStore.location** and **zookeeper.ssl.trustStore.password** and **zookeeper.ssl.quorum.trustStore.location** and **zookeeper.ssl.quorum.trustStore.password**)
    **New in 3.5.5:**
    Specifies the file path to a Java truststore containing the remote
    credentials to be used for client and quorum TLS connections, and the
    password to unlock the file.

* *ssl.trustStore.passwordPath* and *ssl.quorum.trustStore.passwordPath* :
    (Java system properties: **zookeeper.ssl.trustStore.passwordPath** and **zookeeper.ssl.quorum.trustStore.passwordPath**)
    **New in 3.8.0:**
    Specifies the file path that contains the truststore password. Reading the password from a file takes precedence over
    the explicit password property.

* *ssl.trustStore.type* and *ssl.quorum.trustStore.type* :
    (Java system properties: **zookeeper.ssl.trustStore.type** and **zookeeper.ssl.quorum.trustStore.type**)
    **New in 3.5.5:**
    Specifies the file format of client and quorum trustStores. Values: JKS, PEM, PKCS12 or null (detect by filename).
    Default: null.
    **New in 3.5.10, 3.6.3, 3.7.0:**
    The format BCFKS was added.

* *ssl.protocol* and *ssl.quorum.protocol* :
    (Java system properties: **zookeeper.ssl.protocol** and **zookeeper.ssl.quorum.protocol**)
    **New in 3.5.5:**
    Specifies the protocol to be used in client and quorum TLS negotiation.
    Default: TLSv1.3 or TLSv1.2, depending on the Java runtime version being used.

* *ssl.enabledProtocols* and *ssl.quorum.enabledProtocols* :
    (Java system properties: **zookeeper.ssl.enabledProtocols** and **zookeeper.ssl.quorum.enabledProtocols**)
    **New in 3.5.5:**
    Specifies the enabled protocols in client and quorum TLS negotiation.
    Default: TLSv1.3 and TLSv1.2 if the value of the `protocol` property is TLSv1.3; TLSv1.2 if `protocol` is TLSv1.2.
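For example, to pin both client and quorum TLS negotiation to TLSv1.2 only (an illustrative choice, not a recommendation):

    ssl.protocol=TLSv1.2
    ssl.enabledProtocols=TLSv1.2
    ssl.quorum.protocol=TLSv1.2
    ssl.quorum.enabledProtocols=TLSv1.2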

* *ssl.ciphersuites* and *ssl.quorum.ciphersuites* :
    (Java system properties: **zookeeper.ssl.ciphersuites** and **zookeeper.ssl.quorum.ciphersuites**)
    **New in 3.5.5:**
    Specifies the enabled cipher suites to be used in client and quorum TLS negotiation.
    Default: Enabled cipher suites depend on the Java runtime version being used.

* *ssl.context.supplier.class* and *ssl.quorum.context.supplier.class* :
    (Java system properties: **zookeeper.ssl.context.supplier.class** and **zookeeper.ssl.quorum.context.supplier.class**)
    **New in 3.5.5:**
    Specifies the class to be used for creating the SSL context in client and quorum SSL communication.
    This allows you to use a custom SSL context and implement the following scenarios:
    1. Use a hardware keystore, loaded in using PKCS11 or something similar.
    2. You don't have access to the software keystore, but can retrieve an already-constructed SSLContext from your container.
    Default: null

* *ssl.hostnameVerification* and *ssl.quorum.hostnameVerification* :
    (Java system properties: **zookeeper.ssl.hostnameVerification** and **zookeeper.ssl.quorum.hostnameVerification**)
    **New in 3.5.5:**
    Specifies whether hostname verification is enabled in the client and quorum TLS negotiation process.
    Disabling it is only recommended for testing purposes.
    Default: true

* *ssl.clientHostnameVerification* and *ssl.quorum.clientHostnameVerification* :
    (Java system properties: **zookeeper.ssl.clientHostnameVerification** and **zookeeper.ssl.quorum.clientHostnameVerification**)
    **New in 3.9.4:**
    Specifies whether verification of the client's hostname is enabled in the client and quorum TLS negotiation process.
    This option requires the corresponding *hostnameVerification* option to be `true`, or it will be ignored.
+    Default: true for quorum, false for clients
+
+* *ssl.crl* and *ssl.quorum.crl* :
+    (Java system properties: **zookeeper.ssl.crl** and **zookeeper.ssl.quorum.crl**)
+    **New in 3.5.5:**
+    Specifies whether Certificate Revocation List (CRL) checking is enabled in the client and quorum TLS protocols.
+    Default: false
+
+* *ssl.ocsp* and *ssl.quorum.ocsp* :
+    (Java system properties: **zookeeper.ssl.ocsp** and **zookeeper.ssl.quorum.ocsp**)
+    **New in 3.5.5:**
+    Specifies whether the Online Certificate Status Protocol is enabled in the client and quorum TLS protocols.
+    Default: false
+
+* *ssl.clientAuth* and *ssl.quorum.clientAuth* :
+    (Java system properties: **zookeeper.ssl.clientAuth** and **zookeeper.ssl.quorum.clientAuth**)
+    **Added in 3.5.5, but broken until 3.5.7:**
+    Specifies options to authenticate SSL connections from clients. Valid values are:
+
+    * "none": server will not request client authentication
+    * "want": server will "request" client authentication
+    * "need": server will "require" client authentication
+
+    Default: "need"
+
+* *ssl.handshakeDetectionTimeoutMillis* and *ssl.quorum.handshakeDetectionTimeoutMillis* :
+    (Java system properties: **zookeeper.ssl.handshakeDetectionTimeoutMillis** and **zookeeper.ssl.quorum.handshakeDetectionTimeoutMillis**)
+    **New in 3.5.5:**
+    TBD
+
+* *ssl.sslProvider* :
+    (Java system property: **zookeeper.ssl.sslProvider**)
+    **New in 3.9.0:**
+    Allows selecting the SSL provider for client-server communication when TLS is enabled. The Netty-tcnative native library
+    was added to ZooKeeper in version 3.9.0, which allows the use of native SSL libraries like OpenSSL on supported
+    platforms. See the available options in the Netty-tcnative documentation. Default value is "JDK".
+
+* *sslQuorumReloadCertFiles* :
+    (No Java system property)
+    **New in 3.5.5, 3.6.0:**
+    Allows Quorum SSL keyStore and trustStore reloading when the certificates on the filesystem change without having to restart the ZK process. Default: false
+
+* *client.certReload* :
+    (Java system property: **zookeeper.client.certReload**)
+    **New in 3.7.2, 3.8.1, 3.9.0:**
+    Allows client SSL keyStore and trustStore reloading when the certificates on the filesystem change without having to restart the ZK process. Default: false
+
+* *client.portUnification*:
+    (Java system property: **zookeeper.client.portUnification**)
+    Specifies that the client port should accept SSL connections
+    (using the same configuration as the secure client port).
+    Default: false
+
+* *authProvider*:
+    (Java system property: **zookeeper.authProvider**)
+    You can specify multiple authentication provider classes for ZooKeeper.
+    Usually you use this parameter to specify the SASL authentication provider,
+    like: `authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider`
+
+* *kerberos.removeHostFromPrincipal*
+    (Java system property: **zookeeper.kerberos.removeHostFromPrincipal**)
+    You can instruct ZooKeeper to remove the host from the client principal name during authentication.
+    (e.g. the zk/myhost@EXAMPLE.COM client principal will be authenticated in ZooKeeper as zk@EXAMPLE.COM)
+    Default: false
+
+* *kerberos.removeRealmFromPrincipal*
+    (Java system property: **zookeeper.kerberos.removeRealmFromPrincipal**)
+    You can instruct ZooKeeper to remove the realm from the client principal name during authentication.
+    (e.g. the zk/myhost@EXAMPLE.COM client principal will be authenticated in ZooKeeper as zk/myhost)
+    Default: false
+
+* *kerberos.canonicalizeHostNames*
+    (Java system property: **zookeeper.kerberos.canonicalizeHostNames**)
+    **New in 3.7.0:**
+    Instructs ZooKeeper to canonicalize server host names extracted from *server.x* lines.
+    This allows using e.g. `CNAME` records to reference servers in configuration files, while still enabling SASL Kerberos authentication between quorum members.
+    It is essentially the quorum equivalent of the *zookeeper.sasl.client.canonicalize.hostname* property for clients.
+    The default value is **false** for backwards compatibility.
+
+* *multiAddress.enabled* :
+    (Java system property: **zookeeper.multiAddress.enabled**)
+    **New in 3.6.0:**
+    Since ZooKeeper 3.6.0 you can also [specify multiple addresses](#id_multi_address)
+    for each ZooKeeper server instance (this can increase availability when multiple physical
+    network interfaces can be used in parallel in the cluster). Setting this parameter to
+    **true** will enable this feature. Please note that you cannot enable this feature
+    during a rolling upgrade if the version of the old ZooKeeper cluster is prior to 3.6.0.
+    The default value is **false**.
+
+* *multiAddress.reachabilityCheckTimeoutMs* :
+    (Java system property: **zookeeper.multiAddress.reachabilityCheckTimeoutMs**)
+    **New in 3.6.0:**
+    Since ZooKeeper 3.6.0 you can also [specify multiple addresses](#id_multi_address)
+    for each ZooKeeper server instance (this can increase availability when multiple physical
+    network interfaces can be used in parallel in the cluster). ZooKeeper will perform ICMP ECHO requests
+    or try to establish a TCP connection on port 7 (Echo) of the destination host in order to find
+    the reachable addresses. This happens only if you provide multiple addresses in the configuration.
+    In this property you can set the timeout in milliseconds for the reachability check. The check happens
+    in parallel for the different addresses, so the timeout you set here is the maximum time taken to check
+    the reachability of all addresses.
+    The default value is **1000**.
+
+    This parameter has no effect, unless you enable the MultiAddress feature by setting *multiAddress.enabled=true*.
+
+* *fips-mode* :
+    (Java system property: **zookeeper.fips-mode**)
+    **New in 3.8.2:**
+    Enables FIPS compatibility mode in ZooKeeper. If enabled, the following things will be changed in order to comply
+    with FIPS requirements:
+    * The custom trust manager (`ZKTrustManager`) that is used for hostname verification will be disabled. As a consequence,
+    hostname verification is not available in the Quorum protocol, but can still be set in client-server communication.
+    * The DIGEST-MD5 SASL auth mechanism will be disabled in Quorum and ZooKeeper SASL clients. Only GSSAPI (Kerberos)
+    can be used.
+
+    Default: **true** (3.9.0+), **false** (3.8.x)
+
+
+
+#### Experimental Options/Features
+
+New features that are currently considered experimental.
+
+* *Read Only Mode Server* :
+    (Java system property: **readonlymode.enabled**)
+    **New in 3.4.0:**
+    Setting this value to true enables Read Only Mode server
+    support (disabled by default).
+    *localSessionsEnabled* has to be activated to serve clients.
+    A downgrade of existing connections is currently not supported.
+    Read Only Mode allows client sessions which requested read-only support to connect to the
+    server even when the server might be partitioned from
+    the quorum. In this mode read-only clients can still read
+    values from the ZK service, but will be unable to write
+    values and see changes from other clients. See
+    ZOOKEEPER-784 for more details.
+
+* *zookeeper.follower.skipLearnerRequestToNextProcessor* :
+    (Java system property: **zookeeper.follower.skipLearnerRequestToNextProcessor**)
+    When your cluster has observers which are connected to an ObserverMaster, turning on this flag might help
+    you reduce some memory pressure on the ObserverMaster. If your cluster doesn't have any observers, or
+    they are not connected to an ObserverMaster, or your observers don't issue many writes, then using this flag
+    won't help you.
+    Currently the change is guarded behind the flag to help us gain more confidence around the memory gains.
+    In the long run, we might want to remove this flag and make its behavior the default codepath.
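+
+The Read Only Mode setup described above can be sketched as follows; the
+values are illustrative and combine the system property with the required
+config entry:
+
+```
+# JVM flag on the server command line (Java system property)
+-Dreadonlymode.enabled=true
+
+# zoo.cfg - local sessions must be enabled to serve read-only clients
+localSessionsEnabled=true
+```
+
+On the client side, a session opts in to read-only support via the
+read-only flag of the `ZooKeeper` constructor (`canBeReadOnly`); sessions
+that did not request it are not served by a partitioned server.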
+
+
+
+#### Unsafe Options
+
+The following options can be useful, but be careful when you use
+them. The risk of each is explained along with the explanation of what
+the variable does.
+
+* *forceSync* :
+    (Java system property: **zookeeper.forceSync**)
+    Requires updates to be synced to the media of the transaction
+    log before finishing processing the update. If this option is
+    set to no, ZooKeeper will not require updates to be synced to
+    the media.
+
+* *jute.maxbuffer* :
+    (Java system property: **jute.maxbuffer**)
+    - This option can only be set as a Java system property.
+    There is no zookeeper prefix on it. It specifies the maximum
+    size of the data that can be stored in a znode. The unit is: byte. The default is
+    0xfffff (1048575) bytes, or just under 1M.
+    - If this option is changed, the system property must be set on all servers and clients, otherwise
+    problems will arise.
+    - If *jute.maxbuffer* on the client side is greater than on the server side and the client writes data
+    exceeding the server-side *jute.maxbuffer*, the server side will get **java.io.IOException: Len error**.
+    - If *jute.maxbuffer* on the client side is less than on the server side and the client reads data
+    exceeding the client-side *jute.maxbuffer*, the client side will get **java.io.IOException: Unreasonable length**
+    or **Packet len is out of range!**
+    - This is really a sanity check. ZooKeeper is designed to store data on the order of kilobytes in size.
+    In a production environment, increasing this property beyond the default value is not recommended for the following reasons:
+    - Large znodes cause unwarranted latency spikes and worsen throughput.
+    - Large znodes make the synchronization time between leader and followers unpredictable and non-convergent (sometimes timing out), making the quorum unstable.
+
+* *jute.maxbuffer.extrasize*:
+    (Java system property: **zookeeper.jute.maxbuffer.extrasize**)
+    **New in 3.5.7:**
+    While processing client requests the ZooKeeper server adds some additional information into
+    the requests before persisting them as a transaction. Previously this additional information size
+    was fixed at 1024 bytes. For many scenarios, especially scenarios where the jute.maxbuffer value
+    is more than 1 MB and the request type is multi, this fixed size was insufficient.
+    To handle all scenarios the additional information size was increased from 1024 bytes
+    to the jute.maxbuffer size, and it was also made configurable through jute.maxbuffer.extrasize.
+    Generally this property does not need to be configured, as the default value is the most optimal value.
+
+* *skipACL* :
+    (Java system property: **zookeeper.skipACL**)
+    Skips ACL checks. This results in a boost in throughput,
+    but opens up full access to the data tree to everyone.
+
+* *quorumListenOnAllIPs* :
+    When set to true the ZooKeeper server will listen
+    for connections from its peers on all available IP addresses,
+    and not only the address configured in the server list of the
+    configuration file. It affects the connections handling the
+    ZAB protocol and the Fast Leader Election protocol. Default
+    value is **false**.
+
+* *multiAddress.reachabilityCheckEnabled* :
+    (Java system property: **zookeeper.multiAddress.reachabilityCheckEnabled**)
+    **New in 3.6.0:**
+    Since ZooKeeper 3.6.0 you can also [specify multiple addresses](#id_multi_address)
+    for each ZooKeeper server instance (this can increase availability when multiple physical
+    network interfaces can be used in parallel in the cluster). ZooKeeper will perform ICMP ECHO requests
+    or try to establish a TCP connection on port 7 (Echo) of the destination host in order to find
+    the reachable addresses. This happens only if you provide multiple addresses in the configuration.
+    The reachability check can fail if you hit some ICMP rate limitation (e.g. on macOS) when you try to
+    start a large (e.g. 11+) ensemble members cluster on a single machine for testing.
+
+    Default value is **true**. By setting this parameter to 'false' you can disable the reachability checks.
+    Please note that disabling the reachability check will cause the cluster not to be able to reconfigure
+    itself properly during network problems, so disabling it is advised only for testing.
+
+    This parameter has no effect, unless you enable the MultiAddress feature by setting *multiAddress.enabled=true*.
+
+
+
+#### Disabling data directory autocreation
+
+**New in 3.5:** The default
+behavior of a ZooKeeper server is to automatically create the
+data directory (specified in the configuration file) when
+started if that directory does not already exist. This can be
+inconvenient and even dangerous in some cases. Take the case
+where a configuration change is made to a running server,
+wherein the **dataDir** parameter
+is accidentally changed. When the ZooKeeper server is
+restarted it will create this non-existent directory and begin
+serving - with an empty znode namespace. This scenario can
+result in an effective "split brain" situation (i.e. data in
+both the new invalid directory and the original valid data
+store). As such it would be good to have an option to turn off
+this autocreate behavior. In general for production
+environments this should be done; unfortunately, however, the
+default legacy behavior cannot be changed at this point and
+therefore this must be done on a case by case basis. This is
+left to users and to packagers of ZooKeeper distributions.
+
+When running **zkServer.sh** autocreate can be disabled
+by setting the environment variable **ZOO_DATADIR_AUTOCREATE_DISABLE** to 1.
+When running ZooKeeper servers directly from class files this
+can be accomplished by setting **zookeeper.datadir.autocreate=false** on
+the java command line, i.e. **-Dzookeeper.datadir.autocreate=false**
+
+When this feature is disabled, and the ZooKeeper server
+determines that the required directories do not exist, it will
+generate an error and refuse to start.
+
+A new script **zkServer-initialize.sh** is provided to
+support this new feature. If autocreate is disabled it is
+necessary for the user to first install ZooKeeper, then create
+the data directory (and potentially txnlog directory), and
+then start the server. Otherwise, as mentioned in the previous
+paragraph, the server will not start. Running **zkServer-initialize.sh** will create the
+required directories, and optionally set up the myid file
+(optional command line parameter). This script can be used
+even if the autocreate feature itself is not used, and will
+likely be of use to users as this (setup, including creation
+of the myid file) has been an issue for users in the past.
+Note that this script only ensures the data directories exist;
+it does not create a config file, but rather requires a config
+file to be available in order to execute.
+
+
+
+#### Enabling db existence validation
+
+**New in 3.6.0:** The default
+behavior of a ZooKeeper server on startup when no data tree
+is found is to set zxid to zero and join the quorum as a
+voting member. This can be dangerous if some event (e.g. a
+rogue 'rm -rf') has removed the data directory while the
+server was down, since this server may help elect a leader
+that is missing transactions. Enabling db existence validation
+will change the behavior on startup when no data tree is
+found: the server joins the ensemble as a non-voting participant
+until it is able to sync with the leader and acquire an up-to-date
+version of the ensemble data. To indicate an empty data tree is
+expected (ensemble creation), the user should place a file
+'initialize' in the same directory as 'myid'. This file will
+be detected and deleted by the server on startup.
+
+Initialization validation can be enabled when running
+ZooKeeper servers directly from class files by setting
+**zookeeper.db.autocreate=false**
+on the java command line, i.e.
+**-Dzookeeper.db.autocreate=false**.
+Running **zkServer-initialize.sh**
+will create the required initialization file.
+
+
+
+#### Performance Tuning Options
+
+**New in 3.5.0:** Several subsystems have been reworked
+to improve read throughput. This includes multi-threading of the NIO communication subsystem and
+request processing pipeline (Commit Processor). NIO is the default client/server communication
+subsystem. Its threading model comprises 1 acceptor thread, 1-N selector threads and 0-M
+socket I/O worker threads. In the request processing pipeline the system can be configured
+to process multiple read requests at once while maintaining the same consistency guarantee
+(same-session read-after-write). The Commit Processor threading model comprises 1 main
+thread and 0-N worker threads.
+
+The default values are aimed at maximizing read throughput on a dedicated ZooKeeper machine.
+Both subsystems need a sufficient number of threads to achieve peak read throughput.
+
+* *zookeeper.nio.numSelectorThreads* :
+    (Java system property only: **zookeeper.nio.numSelectorThreads**)
+    **New in 3.5.0:**
+    Number of NIO selector threads. At least 1 selector thread is required.
+    It is recommended to use more than one selector for large numbers
+    of client connections. The default value is sqrt( number of cpu cores / 2 ).
+
+* *zookeeper.nio.numWorkerThreads* :
+    (Java system property only: **zookeeper.nio.numWorkerThreads**)
+    **New in 3.5.0:**
+    Number of NIO worker threads. If configured with 0 worker threads, the selector threads
+    do the socket I/O directly. The default value is 2 times the number of cpu cores.
+
+* *zookeeper.commitProcessor.numWorkerThreads* :
+    (Java system property only: **zookeeper.commitProcessor.numWorkerThreads**)
+    **New in 3.5.0:**
+    Number of Commit Processor worker threads. If configured with 0 worker threads, the main thread
+    will process the request directly. The default value is the number of cpu cores.
+
+* *zookeeper.commitProcessor.maxReadBatchSize* :
+    (Java system property only: **zookeeper.commitProcessor.maxReadBatchSize**)
+    Max number of reads to process from queuedRequests before switching to processing commits.
+    If the value is < 0 (default), we switch whenever we have a local write and pending commits.
+    A high read batch size will delay commit processing, causing stale data to be served.
+    If reads are known to arrive in fixed-size batches then matching that batch size with
+    the value of this property can smooth queue performance. Since reads are handled in parallel,
+    one recommendation is to set this property to match *zookeeper.commitProcessor.numWorkerThreads*
+    (default is the number of cpu cores) or lower.
+
+* *zookeeper.commitProcessor.maxCommitBatchSize* :
+    (Java system property only: **zookeeper.commitProcessor.maxCommitBatchSize**)
+    Max number of commits to process before processing reads. We will try to process as many
+    remote/local commits as we can until we reach this count. A high commit batch size will delay
+    reads while processing more commits. A low commit batch size will favor reads.
+    It is recommended to only set this property when an ensemble is serving a workload with a high
+    commit rate. If writes are known to arrive in a set number of batches then matching that
+    batch size with the value of this property can smooth queue performance. A generic
+    approach would be to set this value to equal the ensemble size so that with the processing
+    of each batch the current server will probabilistically handle a write related to one of
+    its direct clients.
+    Default is "1". Negative and zero values are not supported.
+
+* *znode.container.checkIntervalMs* :
+    (Java system property only)
+    **New in 3.6.0:** The
+    time interval in milliseconds for each check of candidate container
+    and ttl nodes. Default is "60000".
+
+* *znode.container.maxPerMinute* :
+    (Java system property only)
+    **New in 3.6.0:** The
+    maximum number of container and ttl nodes that can be deleted per
+    minute. This prevents herding during container deletion.
+    Default is "10000".
+
+* *znode.container.maxNeverUsedIntervalMs* :
+    (Java system property only)
+    **New in 3.6.0:** The
+    maximum interval in milliseconds that a container that has never had
+    any children is retained. Should be long enough for your client to
+    create the container, do any needed work and then create children.
+    Default is "0" which is used to indicate that containers
+    that have never had any children are never deleted.
+
+
+
+#### Debug Observability Configurations
+
+**New in 3.6.0:** The following options are introduced to make zookeeper easier to debug.
+
+* *zookeeper.messageTracker.BufferSize* :
+    (Java system property only)
+    Controls the maximum number of messages stored in **MessageTracker**. The value should be a positive
+    integer. The default value is 10. **MessageTracker** was introduced in **3.6.0** to record the
+    last set of messages between a server (follower or observer) and a leader, when a server
+    disconnects from the leader. This set of messages will then be dumped to zookeeper's log file,
+    and will help reconstruct the state of the servers at the time of the disconnection,
+    which is useful for debugging purposes.
+
+* *zookeeper.messageTracker.Enabled* :
+    (Java system property only)
+    When set to "true", will enable **MessageTracker** to track and record messages. Default value
+    is "false".
+
+
+
+#### AdminServer configuration
+
+**New in 3.9.0:** The following
+options are used to configure the [AdminServer](#sc_adminserver).
+
+* *admin.rateLimiterIntervalInMS* :
+    (Java system property: **zookeeper.admin.rateLimiterIntervalInMS**)
+    The time interval for rate limiting admin commands to protect the server.
+    Defaults to 5 minutes.
+
+* *admin.snapshot.enabled* :
+    (Java system property: **zookeeper.admin.snapshot.enabled**)
+    The flag for enabling the snapshot command. Defaults to true.
+
+
+* *admin.restore.enabled* :
+    (Java system property: **zookeeper.admin.restore.enabled**)
+    The flag for enabling the restore command. Defaults to true.
+
+
+* *admin.needClientAuth* :
+    (Java system property: **zookeeper.admin.needClientAuth**)
+    The flag to control whether client auth is needed. Using x509 auth requires true.
+    Defaults to false.
+
+**New in 3.7.1:** The following
+options are used to configure the [AdminServer](#sc_adminserver).
+
+* *admin.forceHttps* :
+    (Java system property: **zookeeper.admin.forceHttps**)
+    Force AdminServer to use SSL, thus allowing only HTTPS traffic.
+    Defaults to disabled.
+    Overwrites **admin.portUnification** settings.
+
+**New in 3.6.0:** The following
+options are used to configure the [AdminServer](#sc_adminserver).
+
+* *admin.portUnification* :
+    (Java system property: **zookeeper.admin.portUnification**)
+    Enable the admin port to accept both HTTP and HTTPS traffic.
+    Defaults to disabled.
+
+**New in 3.5.0:** The following
+options are used to configure the [AdminServer](#sc_adminserver).
+
+* *admin.enableServer* :
+    (Java system property: **zookeeper.admin.enableServer**)
+    Set to "false" to disable the AdminServer. By default the
+    AdminServer is enabled.
+
+* *admin.serverAddress* :
+    (Java system property: **zookeeper.admin.serverAddress**)
+    The address the embedded Jetty server listens on. Defaults to 0.0.0.0.
+
+* *admin.serverPort* :
+    (Java system property: **zookeeper.admin.serverPort**)
+    The port the embedded Jetty server listens on. Defaults to 8080.
+
+* *admin.idleTimeout* :
+    (Java system property: **zookeeper.admin.idleTimeout**)
+    Set the maximum idle time in milliseconds that a connection can wait
+    before sending or receiving data. Defaults to 30000 ms.
+
+* *admin.commandURL* :
+    (Java system property: **zookeeper.admin.commandURL**)
+    The URL for listing and issuing commands relative to the
+    root URL. Defaults to "/commands".
+
+### Metrics Providers
+
+**New in 3.6.0:** The following options are used to configure metrics.
+
+By default the ZooKeeper server exposes useful metrics using the [AdminServer](#sc_adminserver)
+and the [Four Letter Words](#sc_4lw) interface.
+
+Since 3.6.0 you can configure a different metrics provider that exports metrics
+to your favourite system.
+
+Since 3.6.0 the ZooKeeper binary package bundles an integration with [Prometheus.io](https://prometheus.io).
+
+* *metricsProvider.className* :
+    Set to "org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider" to
+    enable the Prometheus.io exporter.
+
+* *metricsProvider.httpHost* :
+    **New in 3.8.0:** The Prometheus.io exporter will start a Jetty server and listen on this address; default is "0.0.0.0".
+
+* *metricsProvider.httpPort* :
+    The Prometheus.io exporter will start a Jetty server and bind to this port; it defaults to 7000.
+    The Prometheus endpoint will be http://hostname:httpPort/metrics.
+
+* *metricsProvider.exportJvmInfo* :
+    If this property is set to **true** Prometheus.io will export useful metrics about the JVM.
+    The default is true.
+
+* *metricsProvider.numWorkerThreads* :
+    **New in 3.7.1:**
+    Number of worker threads for reporting Prometheus summary metrics.
+    Default value is 1.
+    If the number is less than 1, the main thread will be used.
+
+* *metricsProvider.maxQueueSize* :
+    **New in 3.7.1:**
+    The max queue size for the Prometheus summary metrics reporting task.
+    Default value is 1000000.
+
+* *metricsProvider.workerShutdownTimeoutMs* :
+    **New in 3.7.1:**
+    The timeout in ms for Prometheus worker threads shutdown.
+    Default value is 1000ms.
+
+
+
+### Communication using the Netty framework
+
+[Netty](http://netty.io)
+is an NIO based client/server communication framework; it
+simplifies (over NIO being used directly) many of the
+complexities of network level communication for java
+applications. Additionally the Netty framework has built
+in support for encryption (SSL) and authentication
+(certificates). These are optional features and can be
+turned on or off individually.
+
+In versions 3.5+, a ZooKeeper server can use Netty
+instead of NIO (default option) by setting the Java system
+property **zookeeper.serverCnxnFactory**
+to **org.apache.zookeeper.server.NettyServerCnxnFactory**;
+for the client, set **zookeeper.clientCnxnSocket**
+to **org.apache.zookeeper.ClientCnxnSocketNetty**.
+
+
+
+#### Quorum TLS
+
+*New in 3.5.5*
+
+Based on the Netty framework, ZooKeeper ensembles can be set up
+to use TLS encryption in their communication channels. This section
+describes how to set up encryption on the quorum communication.
+
+Please note that Quorum TLS encapsulates securing both the leader election
+and quorum communication protocols.
+
+1. Create an SSL keystore (JKS) to store local credentials
+
+One keystore should be created for each ZK instance.
+
+In this example we generate a self-signed certificate and store it
+together with the private key in `keystore.jks`. This is suitable for
+testing purposes, but you probably need an official certificate to sign
+your keys in a production environment.
+
+Please note that the alias (`-alias`) and the distinguished name (`-dname`)
+must match the hostname of the machine that it is associated with; otherwise
+hostname verification won't work.
+
+```
+keytool -genkeypair -alias $(hostname -f) -keyalg RSA -keysize 2048 -dname "cn=$(hostname -f)" -keypass password -keystore keystore.jks -storepass password
+```
+
+2. Extract the signed public key (certificate) from the keystore
+
+*This step might only be necessary for self-signed certificates.*
+
+```
+keytool -exportcert -alias $(hostname -f) -keystore keystore.jks -file $(hostname -f).cer -rfc
+```
+
+3. Create an SSL truststore (JKS) containing the certificates of all ZooKeeper instances
+
+The same truststore (storing all accepted certs) should be shared among all
+participants of the ensemble. You need to use different aliases to store
+multiple certificates in the same truststore. The names of the aliases don't matter.
+
+```
+keytool -importcert -alias [host1..3] -file [host1..3].cer -keystore truststore.jks -storepass password
+```
+
+4. You need to use `NettyServerCnxnFactory` as the serverCnxnFactory, because SSL is not supported by NIO.
+Add the following configuration settings to your `zoo.cfg` config file:
+
+```
+sslQuorum=true
+serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
+ssl.quorum.keyStore.location=/path/to/keystore.jks
+ssl.quorum.keyStore.password=password
+ssl.quorum.trustStore.location=/path/to/truststore.jks
+ssl.quorum.trustStore.password=password
+```
+
+5. Verify in the logs that your ensemble is running on TLS:
+
+```
+INFO [main:QuorumPeer@1789] - Using TLS encrypted quorum communication
+INFO [main:QuorumPeer@1797] - Port unification disabled
+...
+INFO [QuorumPeerListener:QuorumCnxManager$Listener@877] - Creating TLS-only quorum server socket
+```
+
+
+
+#### Upgrading existing non-TLS cluster with no downtime
+
+*New in 3.5.5*
+
+Here are the steps needed to upgrade an already running ZooKeeper ensemble
+to TLS without downtime by taking advantage of the port unification functionality.
+
+1. Create the necessary keystores and truststores for all ZK participants as described in the previous section
+
+2. Add the following config settings and restart the first node
+
+```
+sslQuorum=false
+portUnification=true
+serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
+ssl.quorum.keyStore.location=/path/to/keystore.jks
+ssl.quorum.keyStore.password=password
+ssl.quorum.trustStore.location=/path/to/truststore.jks
+ssl.quorum.trustStore.password=password
+```
+
+Note that TLS is not yet enabled, but we turn on port unification.
+
+3. Repeat step #2 on the remaining nodes. Verify that you see the following entries in the logs:
+
+```
+INFO [main:QuorumPeer@1791] - Using insecure (non-TLS) quorum communication
+INFO [main:QuorumPeer@1797] - Port unification enabled
+...
+INFO [QuorumPeerListener:QuorumCnxManager$Listener@874] - Creating TLS-enabled quorum server socket
+```
+
+You should also double-check after each node restart that the quorum becomes healthy again.
+
+4. Enable Quorum TLS on each node and do a rolling restart:
+
+```
+sslQuorum=true
+portUnification=true
+```
+
+5. Once you have verified that your entire ensemble is running on TLS, you can disable port unification
+and do another rolling restart.
+
+```
+sslQuorum=true
+portUnification=false
+```
+
+
+
+
+### ZooKeeper Commands
+
+
+
+#### The Four Letter Words
+
+ZooKeeper responds to a small set of commands. Each command is
+composed of four letters. You issue the commands to ZooKeeper via telnet
+or nc, at the client port.
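+
+As a minimal sketch (the port and command list are illustrative), a four
+letter word is issued by writing it to the client port, and since 3.5.3 the
+command must first be whitelisted in `zoo.cfg`:
+
+```
+# zoo.cfg - only whitelisted four letter words are answered
+4lw.commands.whitelist=srvr,stat,ruok,isro
+```
+
+With that in place, `echo ruok | nc localhost 2181` answers `imok` when the
+server process is up.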
+
+Three of the more interesting commands: "stat" gives some
+general information about the server and connected clients,
+while "srvr" and "cons" give extended details on server and
+connections respectively.
+
+**New in 3.5.3:**
+Four Letter Words need to be explicitly whitelisted before use.
+Please refer to **4lw.commands.whitelist**
+described in the [cluster configuration section](#sc_clusterOptions) for details.
+Moving forward, Four Letter Words will be deprecated; please use the
+[AdminServer](#sc_adminserver) instead.
+
+* *conf* :
+    **New in 3.3.0:** Print
+    details about serving configuration.
+
+* *cons* :
+    **New in 3.3.0:** List
+    full connection/session details for all clients connected
+    to this server. Includes information on numbers of packets
+    received/sent, session id, operation latencies, last
+    operation performed, etc...
+
+* *crst* :
+    **New in 3.3.0:** Reset
+    connection/session statistics for all connections.
+
+* *dump* :
+    Lists the outstanding sessions and ephemeral nodes.
+
+* *envi* :
+    Print details about the serving environment.
+
+* *ruok* :
+    Tests if the server is running in a non-error state.
+    When the whitelist enables ruok, the server will respond with `imok`
+    if it is running, otherwise it will not respond at all.
+    When ruok is disabled, the server responds with:
+    "ruok is not executed because it is not in the whitelist."
+    A response of "imok" does not necessarily indicate that the
+    server has joined the quorum, just that the server process is active
+    and bound to the specified client port. Use "stat" for details on
+    state with respect to the quorum, and for client connection information.
+
+* *srst* :
+    Reset server statistics.
+
+* *srvr* :
+    **New in 3.3.0:** Lists
+    full details for the server.
+
+* *stat* :
+    Lists brief details for the server and connected
+    clients.
+
+* *wchs* :
+    **New in 3.3.0:** Lists
+    brief information on watches for the server.
+
+* *wchc* :
+    **New in 3.3.0:** Lists
+    detailed information on watches for the server, by
+    session. This outputs a list of sessions (connections)
+    with associated watches (paths). Note, depending on the
+    number of watches this operation may be expensive (i.e.
+    impact server performance), use it carefully.
+
+* *dirs* :
+    **New in 3.5.1:**
+    Shows the total size of snapshot and log files in bytes
+
+* *wchp* :
+    **New in 3.3.0:** Lists
+    detailed information on watches for the server, by path.
+    This outputs a list of paths (znodes) with associated
+    sessions. Note, depending on the number of watches this
+    operation may be expensive (i.e. impact server performance),
+    use it carefully.
+
+* *mntr* :
+    **New in 3.4.0:** Outputs a list
+    of variables that could be used for monitoring the health of the cluster.
+
+
+    $ echo mntr | nc localhost 2185
+    zk_version  3.4.0
+    zk_avg_latency  0.7561 - accurate to four decimal places
+    zk_max_latency  0
+    zk_min_latency  0
+    zk_packets_received 70
+    zk_packets_sent 69
+    zk_outstanding_requests 0
+    zk_server_state leader
+    zk_znode_count  4
+    zk_watch_count  0
+    zk_ephemerals_count 0
+    zk_approximate_data_size    27
+    zk_learners 4 - only exposed by the Leader
+    zk_synced_followers 4 - only exposed by the Leader
+    zk_pending_syncs    0 - only exposed by the Leader
+    zk_open_file_descriptor_count 23 - only available on Unix platforms
+    zk_max_file_descriptor_count 1024 - only available on Unix platforms
+
+
+The output is compatible with the Java properties format and the content
+may change over time (new keys added). Your scripts should expect changes.
+ATTENTION: Some of the keys are platform specific and some of the keys are only exported by the Leader.
+The output contains multiple lines with the following format:
+
+
+    key \t value
+
+
+* *isro* :
+    **New in 3.4.0:** Tests if
+    server is running in read-only mode. The server will respond with
+    "ro" if in read-only mode or "rw" if not in read-only mode. 
+ +* *hash* : + **New in 3.6.0:** + Return the latest history of the tree digest associated with zxid. + +* *gtmk* : + Gets the current trace mask as a 64-bit signed long value in + decimal format. See `stmk` for an explanation of + the possible values. + +* *stmk* : + Sets the current trace mask. The trace mask is 64 bits, + where each bit enables or disables a specific category of trace + logging on the server. Logback must be configured to enable + `TRACE` level first in order to see trace logging + messages. The bits of the trace mask correspond to the following + trace logging categories. + + | Trace Mask Bit Values | | + |-----------------------|---------------------| + | 0b0000000000 | Unused, reserved for future use. | + | 0b0000000010 | Logs client requests, excluding ping requests. | + | 0b0000000100 | Unused, reserved for future use. | + | 0b0000001000 | Logs client ping requests. | + | 0b0000010000 | Logs packets received from the quorum peer that is the current leader, excluding ping requests. | + | 0b0000100000 | Logs addition, removal and validation of client sessions. | + | 0b0001000000 | Logs delivery of watch events to client sessions. | + | 0b0010000000 | Logs ping packets received from the quorum peer that is the current leader. | + | 0b0100000000 | Unused, reserved for future use. | + | 0b1000000000 | Unused, reserved for future use. | + + All remaining bits in the 64-bit value are unused and + reserved for future use. Multiple trace logging categories are + specified by calculating the bitwise OR of the documented values. + The default trace mask is 0b0100110010. Thus, by default, trace + logging includes client requests, packets received from the + leader and sessions. + To set a different trace mask, send a request containing the + `stmk` four-letter word followed by the trace + mask represented as a 64-bit signed long value. 
This example uses
+the Perl `pack` function to construct a trace
+mask that enables all trace logging categories described above and
+convert it to a 64-bit signed long value with big-endian byte
+order. The result is appended to `stmk` and sent
+to the server using netcat. The server responds with the new
+trace mask in decimal format.
+
+
+    $ perl -e "print 'stmk', pack('q>', 0b0011111010)" | nc localhost 2181
+    250
+
+
+Here's an example of the **ruok**
+command:
+
+
+    $ echo ruok | nc 127.0.0.1 5111
+    imok
+
+
+
+
+#### The AdminServer
+
+**New in 3.5.0:** The AdminServer is
+an embedded Jetty server that provides an HTTP interface to the four-letter
+word commands. By default, the server is started on port 8080,
+and commands are issued by going to the URL "/commands/\[command name]",
+e.g., http://localhost:8080/commands/stat. The command response is
+returned as JSON. Unlike the original protocol, commands are not
+restricted to four-letter names, and commands can have multiple names;
+for instance, "stmk" can also be referred to as "set_trace_mask". To
+view a list of all available commands, point a browser to the URL
+/commands (e.g., http://localhost:8080/commands). See the [AdminServer configuration options](#sc_adminserver_config)
+for how to change the port and URLs.
+
+The AdminServer is enabled by default, but can be disabled by either:
+
+* Setting the zookeeper.admin.enableServer system
+  property to false.
+* Removing Jetty from the classpath. (This option is
+  useful if you would like to override ZooKeeper's jetty
+  dependency.)
+
+Note that the TCP four-letter word interface is still available if
+the AdminServer is disabled.
+
+##### Configuring AdminServer for SSL/TLS
+- Generate the **keystore.jks** and **truststore.jks** files as described in the [Quorum TLS](http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#Quorum+TLS) section. 
+
+- Add the following configuration settings to the `zoo.cfg` config file:
+
+```
+admin.portUnification=true
+ssl.quorum.keyStore.location=/path/to/keystore.jks
+ssl.quorum.keyStore.password=password
+ssl.quorum.trustStore.location=/path/to/truststore.jks
+ssl.quorum.trustStore.password=password
+```
+- Verify that the following entries in the logs can be seen:
+
+```
+2019-08-03 15:44:55,213 [myid:] - INFO [main:JettyAdminServer@123] - Successfully loaded private key from /data/software/cert/keystore.jks
+2019-08-03 15:44:55,213 [myid:] - INFO [main:JettyAdminServer@124] - Successfully loaded certificate authority from /data/software/cert/truststore.jks
+
+2019-08-03 15:44:55,403 [myid:] - INFO [main:JettyAdminServer@170] - Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands
+```
+
+Available commands include:
+
+* *connection_stat_reset/crst*:
+    Reset all client connection statistics.
+    No new fields returned.
+
+* *configuration/conf/config* :
+    Print basic details about serving configuration, e.g.
+    client port, absolute path to data directory.
+
+* *connections/cons* :
+    Information on client connections to server.
+    Note, depending on the number of client connections this operation may be expensive
+    (i.e. impact server performance).
+    Returns "connections", a list of connection info objects.
+
+* *hash*:
+    Txn digests in the historical digest list.
+    One is recorded every 128 transactions.
+    Returns "digests", a list of transaction digest objects.
+
+* *dirs* :
+    Information on logfile directory and snapshot directory
+    size in bytes.
+    Returns "datadir_size" and "logdir_size".
+
+* *dump* :
+    Information on session expirations and ephemerals.
+    Note, depending on the number of global sessions and ephemerals
+    this operation may be expensive (i.e. impact server performance).
+    Returns "expiry_time_to_session_ids" and "session_id_to_ephemeral_paths" as maps.
+
+* *environment/env/envi* :
+    All defined environment variables. 
+ Returns each as its own field. + +* *get_trace_mask/gtmk* : + The current trace mask. Read-only version of *set_trace_mask*. + See the description of the four letter command *stmk* for + more details. + Returns "tracemask". + +* *initial_configuration/icfg* : + Print the text of the configuration file used to start the peer. + Returns "initial_configuration". + +* *is_read_only/isro* : + A true/false if this server is in read-only mode. + Returns "read_only". + +* *last_snapshot/lsnp* : + Information of the last snapshot that zookeeper server has finished saving to disk. + If called during the initial time period between the server starting up + and the server finishing saving its first snapshot, the command returns the + information of the snapshot read when starting up the server. + Returns "zxid" and "timestamp", the latter using a time unit of seconds. + +* *leader/lead* : + If the ensemble is configured in quorum mode then emits the current leader + status of the peer and the current leader location. + Returns "is_leader", "leader_id", and "leader_ip". + +* *monitor/mntr* : + Emits a wide variety of useful info for monitoring. + Includes performance stats, information about internal queues, and + summaries of the data tree (among other things). + Returns each as its own field. + +* *observer_connection_stat_reset/orst* : + Reset all observer connection statistics. Companion command to *observers*. + No new fields returned. + +* *restore/rest* : + Restore database from snapshot input stream on the current server. + Returns the following data in response payload: + "last_zxid": String + Note: this API is rate-limited (once every 5 mins by default) to protect the server + from being over-loaded. + +* *ruok* : + No-op command, check if the server is running. + A response does not necessarily indicate that the + server has joined the quorum, just that the admin server + is active and bound to the specified port. + No new fields returned. 
+
+* *set_trace_mask/stmk* :
+    Sets the trace mask (as such, it requires a parameter).
+    Write version of *get_trace_mask*.
+    See the description of the four letter command *stmk* for
+    more details.
+    Returns "tracemask".
+
+* *server_stats/srvr* :
+    Server information.
+    Returns multiple fields giving a brief overview of server state.
+
+* *snapshot/snap* :
+    Takes a snapshot of the current server in the datadir and streams out the data.
+    Optional query parameter:
+    "streaming": Boolean (defaults to true if the parameter is not present)
+    Returns the following via Http headers:
+    "last_zxid": String
+    "snapshot_size": String
+    Note: this API is rate-limited (once every 5 mins by default) to protect the server
+    from being overloaded.
+
+* *stats/stat* :
+    Same as *server_stats* but also returns the "connections" field (see *connections*
+    for details).
+    Note, depending on the number of client connections this operation may be expensive
+    (i.e. impact server performance).
+
+* *stat_reset/srst* :
+    Resets server statistics. This is a subset of the information returned
+    by *server_stats* and *stats*.
+    No new fields returned.
+
+* *observers/obsr* :
+    Information on observer connections to server.
+    Always available on a Leader, available on a Follower if it's
+    acting as a learner master.
+    Returns "synced_observers" (int) and "observers" (list of per-observer properties).
+
+* *system_properties/sysp* :
+    All defined system properties.
+    Returns each as its own field.
+
+* *voting_view* :
+    Provides the current voting members in the ensemble.
+    Returns "current_config" as a map.
+
+* *watches/wchc* :
+    Watch information aggregated by session.
+    Note, depending on the number of watches this operation may be expensive
+    (i.e. impact server performance).
+    Returns "session_id_to_watched_paths" as a map.
+
+* *watches_by_path/wchp* :
+    Watch information aggregated by path.
+    Note, depending on the number of watches this operation may be expensive
+    (i.e. 
impact server performance).
+    Returns "path_to_session_ids" as a map.
+
+* *watch_summary/wchs* :
+    Summarized watch information.
+    Returns "num_total_watches", "num_paths", and "num_connections".
+
+* *zabstate* :
+    The current phase of the Zab protocol that the peer is running and whether it is a
+    voting member.
+    Peers can be in one of these phases: ELECTION, DISCOVERY, SYNCHRONIZATION, BROADCAST.
+    Returns fields "voting" and "zabstate".
+
+
+
+
+### Data File Management
+
+ZooKeeper stores its data in a data directory and its transaction
+log in a transaction log directory. By default these two directories are
+the same. The server can (and should) be configured to store the
+transaction log files in a separate directory from the data files.
+Throughput increases and latency decreases when transaction logs reside
+on a dedicated log device.
+
+
+
+#### The Data Directory
+
+This directory has two or three files in it:
+
+* *myid* - contains a single integer in
+  human readable ASCII text that represents the server id.
+* *initialize* - presence indicates that the absence of a
+  data tree is expected. Cleaned up once the data tree is created.
+* *snapshot.* - holds the fuzzy
+  snapshot of a data tree.
+
+Each ZooKeeper server has a unique id. This id is used in two
+places: the *myid* file and the configuration file.
+The *myid* file identifies the server that
+corresponds to the given data directory. The configuration file lists
+the contact information for each server identified by its server id.
+When a ZooKeeper server instance starts, it reads its id from the
+*myid* file and then, using that id, reads from the
+configuration file, looking up the port on which it should
+listen.
+
+The *snapshot* files stored in the data
+directory are fuzzy snapshots in the sense that during the time the
+ZooKeeper server is taking the snapshot, updates are occurring to the
+data tree. 
The suffix of the *snapshot* file names +is the _zxid_, the ZooKeeper transaction id, of the +last committed transaction at the start of the snapshot. Thus, the +snapshot includes a subset of the updates to the data tree that +occurred while the snapshot was in process. The snapshot, then, may +not correspond to any data tree that actually existed, and for this +reason we refer to it as a fuzzy snapshot. Still, ZooKeeper can +recover using this snapshot because it takes advantage of the +idempotent nature of its updates. By replaying the transaction log +against fuzzy snapshots ZooKeeper gets the state of the system at the +end of the log. + + + +#### The Log Directory + +The Log Directory contains the ZooKeeper transaction logs. +Before any update takes place, ZooKeeper ensures that the transaction +that represents the update is written to non-volatile storage. A new +log file is started when the number of transactions written to the +current log file reaches a (variable) threshold. The threshold is +computed using the same parameter which influences the frequency of +snapshotting (see snapCount and snapSizeLimitInKb above). The log file's +suffix is the first zxid written to that log. + + + +#### File Management + +The format of snapshot and log files does not change between +standalone ZooKeeper servers and different configurations of +replicated ZooKeeper servers. Therefore, you can pull these files from +a running replicated ZooKeeper server to a development machine with a +stand-alone ZooKeeper server for troubleshooting. + +Using older log and snapshot files, you can look at the previous +state of ZooKeeper servers and even restore that state. + +The ZooKeeper server creates snapshot and log files, but +never deletes them. The retention policy of the data and log +files is implemented outside of the ZooKeeper server. The +server itself only needs the latest complete fuzzy snapshot, all log +files following it, and the last log file preceding it. 
The latter
+requirement is necessary to include updates which happened after this
+snapshot was started but went into the existing log file at that time.
+This is possible because snapshotting and rolling over of logs
+proceed somewhat independently in ZooKeeper. See the
+[maintenance](#sc_maintenance) section in
+this document for more details on setting a retention policy
+and maintenance of ZooKeeper storage.
+
+###### Note
+>The data stored in these files is not encrypted. In the case of
+storing sensitive data in ZooKeeper, necessary measures need to be
+taken to prevent unauthorized access. Such measures are external to
+ZooKeeper (e.g., control access to the files) and depend on the
+individual settings in which it is being deployed.
+
+
+
+#### Recovery - TxnLogToolkit
+More details can be found in the [TxnLogToolkit documentation](http://zookeeper.apache.org/doc/current/zookeeperTools.html#zkTxnLogToolkit).
+
+
+
+### Things to Avoid
+
+Here are some common problems you can avoid by configuring
+ZooKeeper correctly:
+
+* *inconsistent lists of servers* :
+    The list of ZooKeeper servers used by the clients must match
+    the list of ZooKeeper servers that each ZooKeeper server has.
+    Things work okay if the client list is a subset of the real list,
+    but things will really act strange if clients have a list of
+    ZooKeeper servers that are in different ZooKeeper clusters. Also,
+    the server lists in each ZooKeeper server configuration file
+    should be consistent with one another.
+
+* *incorrect placement of transaction log* :
+    The most performance-critical part of ZooKeeper is the
+    transaction log. ZooKeeper syncs transactions to media before it
+    returns a response. A dedicated transaction log device is key to
+    consistent good performance. Putting the log on a busy device will
+    adversely affect performance. 
If you only have one storage device,
+    increase the snapCount so that snapshot files are generated less often;
+    this does not eliminate the problem, but it makes more resources available
+    for the transaction log.
+
+* *incorrect Java heap size* :
+    You should take special care to set your Java max heap size
+    correctly. In particular, you should not create a situation in
+    which ZooKeeper swaps to disk. The disk is death to ZooKeeper.
+    Everything is ordered, so if processing one request swaps to
+    disk, all other queued requests will probably do the same.
+    DON'T SWAP.
+    Be conservative in your estimates: if you have 4G of RAM, do
+    not set the Java max heap size to 6G or even 4G. For example, it
+    is more likely you would use a 3G heap for a 4G machine, as the
+    operating system and the cache also need memory. The best and only
+    recommended practice for estimating the heap size your system needs
+    is to run load tests, and then make sure you are well below the
+    usage limit that would cause the system to swap.
+
+* *Publicly accessible deployment* :
+    A ZooKeeper ensemble is expected to operate in a trusted computing environment.
+    It is thus recommended to deploy ZooKeeper behind a firewall.
+
+
+
+### Best Practices
+
+For best results, take note of the following list of good
+ZooKeeper practices:
+
+For multi-tenant installations see the [section](zookeeperProgrammers.html#ch_zkSessions)
+detailing ZooKeeper "chroot" support; this can be very useful
+when deploying many applications/services interfacing to a
+single ZooKeeper cluster. 
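The chroot is nothing more than an absolute znode path appended to the connection string after the last host:port pair; every client operation is then interpreted relative to that subtree. A small sketch of building per-tenant connection strings (the ensemble hosts and tenant paths here are hypothetical):

```python
def chroot_connect_string(ensemble, chroot):
    """Append a ZooKeeper chroot suffix to a "host:port,host:port" ensemble list.

    The chroot must be an absolute znode path; it is appended exactly once,
    after the final host:port pair, never per host.
    """
    if not chroot.startswith("/"):
        raise ValueError("chroot must be an absolute znode path")
    return ensemble + chroot

# Hypothetical three-node ensemble shared by several applications.
ensemble = "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"
print(chroot_connect_string(ensemble, "/apps/billing"))
# zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/apps/billing
```

Each application connects with its own chroot suffix and sees only its own subtree, which keeps tenants from treading on each other's znodes.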
diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperCLI.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperCLI.md
new file mode 100644
index 0000000000000000000000000000000000000000..7096aa0cc8980729a06461725cd21c3a91658f09
--- /dev/null
+++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperCLI.md
@@ -0,0 +1,573 @@
+
+# ZooKeeper-cli: the ZooKeeper command line interface
+
+## Pre-requisites
+Enter the ZooKeeper CLI:
+
+```bash
+# connect to the localhost with the default port:2181
+bin/zkCli.sh
+# connect to the remote host with timeout:3s
+bin/zkCli.sh -timeout 3000 -server remoteIP:2181
+# connect to the remote host with the -waitforconnection option to wait for connection success before executing commands
+bin/zkCli.sh -waitforconnection -timeout 3000 -server remoteIP:2181
+# connect with a custom client configuration properties file
+bin/zkCli.sh -client-configuration /path/to/client.properties
+```
+## help
+Show help for ZooKeeper commands
+
+```bash
+[zkshell: 1] help
+# a sample one
+[zkshell: 2] h
+ZooKeeper -server host:port cmd args
+	addauth scheme auth
+	close
+	config [-c] [-w] [-s]
+	connect host:port
+	create [-s] [-e] [-c] [-t ttl] path [data] [acl]
+	delete [-v version] path
+	deleteall path
+	delquota [-n|-b|-N|-B] path
+	get [-s] [-w] path
+	getAcl [-s] path
+	getAllChildrenNumber path
+	getEphemerals path
+	history
+	listquota path
+	ls [-s] [-w] [-R] path
+	printwatches on|off
+	quit
+	reconfig [-s] [-v version] [[-file path] | [-members serverID=host:port1:port2;port3[,...]*]] | [-add serverId=host:port1:port2;port3[,...]]* [-remove serverId[,...]*]
+	redo cmdno
+	removewatches path [-c|-d|-a] [-l]
+	set [-s] [-v version] path data
+	setAcl [-s] [-v version] [-R] path acl
+	setquota -n|-b|-N|-B val path
+	stat [-w] path
+	sync path
+	version
+```
+
+## addauth
+Add an authorized 
user for ACL + +```bash +[zkshell: 9] getAcl /acl_digest_test + Insufficient permission : /acl_digest_test +[zkshell: 10] addauth digest user1:12345 +[zkshell: 11] getAcl /acl_digest_test + 'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE= + : cdrwa +# add a super user +# Notice:set zookeeper.DigestAuthenticationProvider +# e.g. zookeeper.DigestAuthenticationProvider.superDigest=zookeeper:qW/HnTfCSoQpB5G8LgkwT3IbiFc= +[zkshell: 12] addauth digest zookeeper:admin +``` + +## close +Close this client/session. + +```bash +[zkshell: 0] close + 2019-03-09 06:42:22,178 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@528] - EventThread shut down for session: 0x10007ab7c550006 + 2019-03-09 06:42:22,179 [myid:] - INFO [main:ZooKeeper@1346] - Session: 0x10007ab7c550006 closed +``` + +## config +Showing the config of quorum membership + +```bash +[zkshell: 17] config + server.1=[2001:db8:1:0:0:242:ac11:2]:2888:3888:participant + server.2=[2001:db8:1:0:0:242:ac11:2]:12888:13888:participant + server.3=[2001:db8:1:0:0:242:ac11:2]:22888:23888:participant + version=0 +``` +## connect +Connect a ZooKeeper server. + +```bash +[zkshell: 4] connect + 2019-03-09 06:43:33,179 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@986] - Socket connection established, initiating session, client: /127.0.0.1:35144, server: localhost/127.0.0.1:2181 + 2019-03-09 06:43:33,189 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1421] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10007ab7c550007, negotiated timeout = 30000 + connect "localhost:2181,localhost:2182,localhost:2183" + +# connect a remote server +[zkshell: 5] connect remoteIP:2181 +``` +## create +Create a znode. 
+
+```bash
+# create a persistent node
+[zkshell: 7] create /persistent_node
+    Created /persistent_node
+
+# create an ephemeral node
+[zkshell: 8] create -e /ephemeral_node mydata
+    Created /ephemeral_node
+
+# create a persistent-sequential node
+[zkshell: 9] create -s /persistent_sequential_node mydata
+    Created /persistent_sequential_node0000000176
+
+# create an ephemeral-sequential node
+[zkshell: 10] create -s -e /ephemeral_sequential_node mydata
+    Created /ephemeral_sequential_node0000000174
+
+# create a node with an ACL (digest scheme)
+[zkshell: 11] create /zk-node-create-schema mydata digest:user1:+owfoSBn/am19roBPzR1/MfCblE=:crwad
+    Created /zk-node-create-schema
+[zkshell: 12] addauth digest user1:12345
+[zkshell: 13] getAcl /zk-node-create-schema
+    'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE=
+    : cdrwa
+
+# create a container node. When the last child of a container is deleted, the container is deleted as well
+[zkshell: 14] create -c /container_node mydata
+    Created /container_node
+[zkshell: 15] create -c /container_node/child_1 mydata
+    Created /container_node/child_1
+[zkshell: 16] create -c /container_node/child_2 mydata
+    Created /container_node/child_2
+[zkshell: 17] delete /container_node/child_1
+[zkshell: 18] delete /container_node/child_2
+[zkshell: 19] get /container_node
+    org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /container_node
+
+# create a TTL node. 
+# set zookeeper.extendedTypesEnabled=true +# Otherwise:KeeperErrorCode = Unimplemented for /ttl_node +[zkshell: 20] create -t 3000 /ttl_node mydata + Created /ttl_node +# after 3s later +[zkshell: 21] get /ttl_node + org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /ttl_node +``` +## delete +Delete a node with a specific path + +```bash +[zkshell: 2] delete /config/topics/test +[zkshell: 3] ls /config/topics/test + Node does not exist: /config/topics/test +``` + +## deleteall +Delete all nodes under a specific path + +```bash +zkshell: 1] ls /config + [changes, clients, topics] +[zkshell: 2] deleteall /config +[zkshell: 3] ls /config + Node does not exist: /config +``` + +## delquota +Delete the quota under a path + +```bash +[zkshell: 1] delquota /quota_test +[zkshell: 2] listquota /quota_test + absolute path is /zookeeper/quota/quota_test/zookeeper_limits + quota for /quota_test does not exist. +[zkshell: 3] delquota -n /c1 +[zkshell: 4] delquota -N /c2 +[zkshell: 5] delquota -b /c3 +[zkshell: 6] delquota -B /c4 + +``` +## get +Get the data of the specific path + +```bash +[zkshell: 10] get /latest_producer_id_block + {"version":1,"broker":0,"block_start":"0","block_end":"999"} + +# -s to show the stat +[zkshell: 11] get -s /latest_producer_id_block + {"version":1,"broker":0,"block_start":"0","block_end":"999"} + cZxid = 0x90000009a + ctime = Sat Jul 28 08:14:09 UTC 2018 + mZxid = 0x9000000a2 + mtime = Sat Jul 28 08:14:12 UTC 2018 + pZxid = 0x90000009a + cversion = 0 + dataVersion = 1 + aclVersion = 0 + ephemeralOwner = 0x0 + dataLength = 60 + numChildren = 0 + +# -w to set a watch on the data change, Notice: turn on the printwatches +[zkshell: 12] get -w /latest_producer_id_block + {"version":1,"broker":0,"block_start":"0","block_end":"999"} +[zkshell: 13] set /latest_producer_id_block mydata + WATCHER:: + WatchedEvent state:SyncConnected type:NodeDataChanged path:/latest_producer_id_block +``` + +## getAcl +Get the ACL 
permission of one path + +```bash +[zkshell: 4] create /acl_test mydata ip:127.0.0.1:crwda + Created /acl_test +[zkshell: 5] getAcl /acl_test + 'ip,'127.0.0.1 + : cdrwa + [zkshell: 6] getAcl /testwatch + 'world,'anyone + : cdrwa +``` +## getAllChildrenNumber +Get all numbers of children nodes under a specific path + +```bash +[zkshell: 1] getAllChildrenNumber / + 73779 +[zkshell: 2] getAllChildrenNumber /ZooKeeper + 2 +[zkshell: 3] getAllChildrenNumber /ZooKeeper/quota + 0 +``` +## getEphemerals +Get all the ephemeral nodes created by this session + +```bash +[zkshell: 1] create -e /test-get-ephemerals "ephemeral node" + Created /test-get-ephemerals +[zkshell: 2] getEphemerals + [/test-get-ephemerals] +[zkshell: 3] getEphemerals / + [/test-get-ephemerals] +[zkshell: 4] create -e /test-get-ephemerals-1 "ephemeral node" + Created /test-get-ephemerals-1 +[zkshell: 5] getEphemerals /test-get-ephemerals + test-get-ephemerals test-get-ephemerals-1 +[zkshell: 6] getEphemerals /test-get-ephemerals + [/test-get-ephemerals-1, /test-get-ephemerals] +[zkshell: 7] getEphemerals /test-get-ephemerals-1 + [/test-get-ephemerals-1] +``` + +## history +Showing the history about the recent 11 commands that you have executed + +```bash +[zkshell: 7] history + 0 - close + 1 - close + 2 - ls / + 3 - ls / + 4 - connect + 5 - ls / + 6 - ll + 7 - history +``` + +## listquota +Listing the quota of one path + +```bash +[zkshell: 1] listquota /c1 + absolute path is /zookeeper/quota/c1/zookeeper_limits + Output quota for /c1 count=-1,bytes=-1=;byteHardLimit=-1;countHardLimit=2 + Output stat for /c1 count=4,bytes=0 +``` + +## ls +Listing the child nodes of one path + +```bash +[zkshell: 36] ls /quota_test + [child_1, child_2, child_3] + +# -s to show the stat +[zkshell: 37] ls -s /quota_test + [child_1, child_2, child_3] + cZxid = 0x110000002d + ctime = Thu Mar 07 11:19:07 UTC 2019 + mZxid = 0x110000002d + mtime = Thu Mar 07 11:19:07 UTC 2019 + pZxid = 0x1100000033 + cversion = 3 + dataVersion = 
0
+    aclVersion = 0
+    ephemeralOwner = 0x0
+    dataLength = 0
+    numChildren = 3
+
+# -R to show the child nodes recursively
+[zkshell: 38] ls -R /quota_test
+    /quota_test
+    /quota_test/child_1
+    /quota_test/child_2
+    /quota_test/child_3
+
+# -w to set a watch on child changes. Notice: turn on the printwatches
+[zkshell: 39] ls -w /brokers
+    [ids, seqid, topics]
+[zkshell: 40] delete /brokers/ids
+    WATCHER::
+    WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers
+```
+
+## printwatches
+A switch to turn the printing of watches on or off.
+
+```bash
+[zkshell: 0] printwatches
+    printwatches is on
+[zkshell: 1] printwatches off
+[zkshell: 2] printwatches
+    printwatches is off
+[zkshell: 3] printwatches on
+[zkshell: 4] printwatches
+    printwatches is on
+```
+
+## quit
+Quit the CLI.
+
+```bash
+[zkshell: 1] quit
+```
+
+## reconfig
+Change the membership of the ensemble at runtime.
+
+Before using this command, read the details in the [Dynamic Reconfiguration](zookeeperReconfig.html) documentation about the reconfig feature, especially the "Security" part.
+
+Pre-requisites:
+
+1. set reconfigEnabled=true in the zoo.cfg
+
+2. add a super user or use skipACL; otherwise you will get "Insufficient permission", e.g. 
addauth digest zookeeper:admin + +```bash +# Change follower 2 to an observer and change its port from 2182 to 12182 +# Add observer 5 to the ensemble +# Remove Observer 4 from the ensemble +[zkshell: 1] reconfig --add 2=localhost:2781:2786:observer;12182 --add 5=localhost:2781:2786:observer;2185 -remove 4 + Committed new configuration: + server.1=localhost:2780:2785:participant;0.0.0.0:2181 + server.2=localhost:2781:2786:observer;0.0.0.0:12182 + server.3=localhost:2782:2787:participant;0.0.0.0:2183 + server.5=localhost:2784:2789:observer;0.0.0.0:2185 + version=1c00000002 + +# -members to appoint the membership +[zkshell: 2] reconfig -members server.1=localhost:2780:2785:participant;0.0.0.0:2181,server.2=localhost:2781:2786:observer;0.0.0.0:12182,server.3=localhost:2782:2787:participant;0.0.0.0:12183 + Committed new configuration: + server.1=localhost:2780:2785:participant;0.0.0.0:2181 + server.2=localhost:2781:2786:observer;0.0.0.0:12182 + server.3=localhost:2782:2787:participant;0.0.0.0:12183 + version=f9fe0000000c + +# Change the current config to the one in the myNewConfig.txt +# But only if current config version is 2100000010 +[zkshell: 3] reconfig -file /data/software/zookeeper/zookeeper-test/conf/myNewConfig.txt -v 2100000010 + Committed new configuration: + server.1=localhost:2780:2785:participant;0.0.0.0:2181 + server.2=localhost:2781:2786:observer;0.0.0.0:12182 + server.3=localhost:2782:2787:participant;0.0.0.0:2183 + server.5=localhost:2784:2789:observer;0.0.0.0:2185 + version=220000000c +``` + +## redo +Redo the cmd with the index from history. + +```bash +[zkshell: 4] history + 0 - ls / + 1 - get /consumers + 2 - get /hbase + 3 - ls /hbase + 4 - history +[zkshell: 5] redo 3 + [backup-masters, draining, flush-table-proc, hbaseid, master-maintenance, meta-region-server, namespace, online-snapshot, replication, rs, running, splitWAL, switch, table, table-lock] +``` + +## removewatches +Remove the watches under a node. 
+
+```bash
+[zkshell: 1] get -w /brokers
+    null
+[zkshell: 2] removewatches /brokers
+    WATCHER::
+    WatchedEvent state:SyncConnected type:DataWatchRemoved path:/brokers
+
+```
+
+## set
+Set/update the data on a path.
+
+```bash
+[zkshell: 50] set /brokers myNewData
+
+# -s to show the stat of this node
+[zkshell: 51] set -s /quota_test mydata_for_quota_test
+    cZxid = 0x110000002d
+    ctime = Thu Mar 07 11:19:07 UTC 2019
+    mZxid = 0x1100000038
+    mtime = Thu Mar 07 11:42:41 UTC 2019
+    pZxid = 0x1100000033
+    cversion = 3
+    dataVersion = 2
+    aclVersion = 0
+    ephemeralOwner = 0x0
+    dataLength = 21
+    numChildren = 3
+
+# -v to set the data with CAS; the version can be found in dataVersion using stat
+[zkshell: 52] set -v 0 /brokers myNewData
+[zkshell: 53] set -v 0 /brokers myNewData
+    version No is not valid : /brokers
+```
+
+## setAcl
+Set the ACL permission for one node.
+
+```bash
+[zkshell: 28] addauth digest user1:12345
+[zkshell: 30] setAcl /acl_auth_test auth:user1:12345:crwad
+[zkshell: 31] getAcl /acl_auth_test
+    'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE=
+    : cdrwa
+
+# -R to set the ACL recursively
+[zkshell: 32] ls /acl_auth_test
+    [child_1, child_2]
+[zkshell: 33] getAcl /acl_auth_test/child_2
+    'world,'anyone
+    : cdrwa
+[zkshell: 34] setAcl -R /acl_auth_test auth:user1:12345:crwad
+[zkshell: 35] getAcl /acl_auth_test/child_2
+    'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE=
+    : cdrwa
+
+# -v to set the ACL with the ACL version, which can be found in aclVersion using stat
+[zkshell: 36] stat /acl_auth_test
+    cZxid = 0xf9fc0000001c
+    ctime = Tue Mar 26 16:50:58 CST 2019
+    mZxid = 0xf9fc0000001c
+    mtime = Tue Mar 26 16:50:58 CST 2019
+    pZxid = 0xf9fc0000001f
+    cversion = 2
+    dataVersion = 0
+    aclVersion = 3
+    ephemeralOwner = 0x0
+    dataLength = 0
+    numChildren = 2
+[zkshell: 37] setAcl -v 3 /acl_auth_test auth:user1:12345:crwad
+```
+
+## setquota
+Set the quota on one path. 
+
+```bash
+# -n to limit the number of nodes under the path (including the path itself)
+[zkshell: 18] setquota -n 2 /quota_test
+[zkshell: 19] create /quota_test/child_1
+  Created /quota_test/child_1
+[zkshell: 20] create /quota_test/child_2
+  Created /quota_test/child_2
+[zkshell: 21] create /quota_test/child_3
+  Created /quota_test/child_3
+# Note: -n is a soft quota; exceeding it is not an error, it only logs a warning
+  2019-03-07 11:22:36,680 [myid:1] - WARN [SyncThread:0:DataTree@374] - Quota exceeded: /quota_test count=3 limit=2
+  2019-03-07 11:22:41,861 [myid:1] - WARN [SyncThread:0:DataTree@374] - Quota exceeded: /quota_test count=4 limit=2
+
+# -b to limit the data length (in bytes) of the path
+[zkshell: 22] setquota -b 5 /brokers
+[zkshell: 23] set /brokers "I_love_zookeeper"
+# Note: -b is also a soft quota; exceeding it only logs a warning
+  WARN [CommitProcWorkThread-7:DataTree@379] - Quota exceeded: /brokers bytes=4206 limit=5
+
+# -N to set a hard count quota: exceeding it fails the operation
+[zkshell: 3] create /c1
+Created /c1
+[zkshell: 4] setquota -N 2 /c1
+[zkshell: 5] listquota /c1
+absolute path is /zookeeper/quota/c1/zookeeper_limits
+Output quota for /c1 count=-1,bytes=-1;byteHardLimit=-1;countHardLimit=2
+Output stat for /c1 count=2,bytes=0
+[zkshell: 6] create /c1/ch-3
+Count Quota has exceeded : /c1/ch-3
+
+# -B to set a hard byte quota: exceeding it fails the operation
+[zkshell: 3] create /c2
+[zkshell: 4] setquota -B 4 /c2
+[zkshell: 5] set /c2 "foo"
+[zkshell: 6] set /c2 "foo-bar"
+Bytes Quota has exceeded : /c2
+[zkshell: 7] get /c2
+foo
+```
+
+## stat
+Show the stat (metadata) of a node.
+
+```bash
+[zkshell: 1] stat /hbase
+  cZxid = 0x4000013d9
+  ctime = Wed Jun 27 20:13:07 CST 2018
+  mZxid = 0x4000013d9
+  mtime = Wed Jun 27 20:13:07 CST 2018
+  pZxid = 0x500000001
+  cversion = 17
+  dataVersion = 0
+  aclVersion = 0
+  ephemeralOwner = 0x0
+  dataLength = 0
+  numChildren = 15
+```
+
+## sync
+Flush the channel between the client's server and the leader so that the client's view of the given path is up to date (asynchronous).
+
+```bash
+[zkshell: 14] sync /
+[zkshell: 15] Sync is OK
+```
+
+## version
+Show the version of the ZooKeeper client/CLI.
+
+```bash
+[zkshell: 1] version
+ZooKeeper CLI version: 3.6.0-SNAPSHOT-29f9b2c1c0e832081f94d59a6b88709c5f1bb3ca, built on 05/30/2019 09:26 GMT
+```
+
+## whoami
+List the authentication information added to the current session.
+
+```bash
+[zkshell: 1] whoami
+Auth scheme: User
+ip: 127.0.0.1
+[zkshell: 2] addauth digest user1:12345
+[zkshell: 3] whoami
+Auth scheme: User
+ip: 127.0.0.1
+digest: user1
+```
\ No newline at end of file
diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md
new file mode 100644
index 0000000000000000000000000000000000000000..180f1273c1972468a54d869f1e52f20b55693c88
--- /dev/null
+++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md
@@ -0,0 +1,269 @@
+
+
+# ZooKeeper Monitor Guide
+
+* [New Metrics System](#Metrics-System)
+    * [Metrics](#Metrics)
+    * [Prometheus](#Prometheus)
+    * [Alerting with Prometheus](#Alerting)
+    * [Grafana](#Grafana)
+    * [InfluxDB](#influxdb)
+
+* [JMX](#JMX)
+
+* [Four letter words](#four-letter-words)
+
+
+
+## New Metrics System
+The `New Metrics System`, available since 3.6.0, provides a rich set of metrics
+for monitoring ZooKeeper in areas such as: znode, network, disk, quorum,
+leader election, client, security, failures, watch/session, requestProcessor,
and so forth.
+
+
+
+### Metrics
+All the metrics are defined in `ServerMetrics.java`.
+
+
+
+
+### Prerequisites
+- Enable the `Prometheus MetricsProvider` by setting the following in `zoo.cfg`:
+  ```conf
+  metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
+  ```
+
+- The port for Prometheus metrics can be configured using:
+  ```conf
+  metricsProvider.httpPort=7000 # Default port is 7000
+  ```
+
+#### Enabling HTTPS for Prometheus Metrics
+
+ZooKeeper also supports SSL for Prometheus metrics, which provides secure data transmission. To enable this, configure an HTTPS port and set up SSL certificates as follows:
+
+- Define the HTTPS port:
+  ```conf
+  metricsProvider.httpsPort=4443
+  ```
+
+- Configure the SSL key store (holds the server’s private key and certificates):
+  ```conf
+  metricsProvider.ssl.keyStore.location=/path/to/keystore.jks
+  metricsProvider.ssl.keyStore.password=your_keystore_password
+  metricsProvider.ssl.keyStore.type=jks # Default is JKS
+  ```
+
+- Configure the SSL trust store (used to verify client certificates):
+  ```conf
+  metricsProvider.ssl.trustStore.location=/path/to/truststore.jks
+  metricsProvider.ssl.trustStore.password=your_truststore_password
+  metricsProvider.ssl.trustStore.type=jks # Default is JKS
+  ```
+
+- **Note**: You can enable both HTTP and HTTPS simultaneously by defining both ports:
+  ```conf
+  metricsProvider.httpPort=7000
+  metricsProvider.httpsPort=4443
+  ```
+### Prometheus
+- Running a [Prometheus](https://prometheus.io/) monitoring service is the easiest way to ingest and record ZooKeeper's metrics.
+
+- Install Prometheus:
+  Download the latest release from the official [download page](https://prometheus.io/download/).
+
+- Set Prometheus's scraper to target the ZooKeeper cluster endpoints:
+
+  ```bash
+  cat > /tmp/test-zk.yaml <<EOF
+  global:
+    scrape_interval: 10s
+
+  scrape_configs:
+    - job_name: test-zk
+      static_configs:
+        - targets: ['localhost:7000']
+  EOF
+  ```
+
+- Start a Prometheus server with that configuration:
+
+  ```bash
+  nohup /tmp/prometheus \
+      --config.file /tmp/test-zk.yaml \
+      --web.listen-address ":9090" \
+      --storage.tsdb.path "/tmp/test-zk.data" >> /tmp/test-zk.log 2>&1 &
+  ```
+
+- Now Prometheus will scrape zk metrics every 10 seconds.
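What Prometheus scrapes from the endpoint is plain text, one sample per line (for example, a line like `znode_count 5`). As a rough illustration only — this is not ZooKeeper code, the sample line is made up, and real output also contains `# HELP`/`# TYPE` comments and labeled metrics — such a line splits into a name and a value:

```java
public class MetricLine {
    // Split a minimal "name value" exposition line. Comment lines and
    // labeled metrics are deliberately not handled in this sketch.
    public static String name(String line) { return line.split("\\s+")[0]; }

    public static double value(String line) {
        return Double.parseDouble(line.split("\\s+")[1]);
    }

    public static void main(String[] args) {
        String sample = "znode_count 5";  // hypothetical sample line
        System.out.println(name(sample) + " = " + value(sample));
    }
}
```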
+
+
+### Alerting with Prometheus
+- We recommend that you read the [Prometheus Official Alerting Page](https://prometheus.io/docs/practices/alerting/) to learn
+  the principles of alerting.
+
+- We recommend [Prometheus Alertmanager](https://www.prometheus.io/docs/alerting/latest/alertmanager/), which can
+  deliver alerts by email or instant message (via webhook) in a convenient way.
+
+- The alerting example below lists metrics that deserve special attention. Note: it is for reference only;
+  adjust the thresholds to your actual workload and resource environment.
+
+
+    use ./promtool check rules rules/zk.yml to check the correctness of the config file
+    cat rules/zk.yml
+
+    groups:
+    - name: zk-alert-example
+      rules:
+      - alert: ZooKeeper server is down
+        expr: up == 0
+        for: 1m
+        labels:
+          severity: critical
+        annotations:
+          summary: "Instance {{ $labels.instance }} ZooKeeper server is down"
+          description: "{{ $labels.instance }} of job {{$labels.job}} ZooKeeper server is down: [{{ $value }}]."
+
+      - alert: create too many znodes
+        expr: znode_count > 1000000
+        for: 1m
+        labels:
+          severity: warning
+        annotations:
+          summary: "Instance {{ $labels.instance }} create too many znodes"
+          description: "{{ $labels.instance }} of job {{$labels.job}} create too many znodes: [{{ $value }}]."
+
+      - alert: create too many connections
+        expr: num_alive_connections > 50 # suppose we use the default maxClientCnxns: 60
+        for: 1m
+        labels:
+          severity: warning
+        annotations:
+          summary: "Instance {{ $labels.instance }} create too many connections"
+          description: "{{ $labels.instance }} of job {{$labels.job}} create too many connections: [{{ $value }}]."
+ + - alert: znode total occupied memory is too big + expr: approximate_data_size /1024 /1024 > 1 * 1024 # more than 1024 MB(1 GB) + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} znode total occupied memory is too big" + description: "{{ $labels.instance }} of job {{$labels.job}} znode total occupied memory is too big: [{{ $value }}] MB." + + - alert: set too many watch + expr: watch_count > 10000 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} set too many watch" + description: "{{ $labels.instance }} of job {{$labels.job}} set too many watch: [{{ $value }}]." + + - alert: a leader election happens + expr: increase(election_time_count[5m]) > 0 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} a leader election happens" + description: "{{ $labels.instance }} of job {{$labels.job}} a leader election happens: [{{ $value }}]." + + - alert: open too many files + expr: open_file_descriptor_count > 300 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} open too many files" + description: "{{ $labels.instance }} of job {{$labels.job}} open too many files: [{{ $value }}]." + + - alert: fsync time is too long + expr: rate(fsynctime_sum[1m]) > 100 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} fsync time is too long" + description: "{{ $labels.instance }} of job {{$labels.job}} fsync time is too long: [{{ $value }}]." + + - alert: take snapshot time is too long + expr: rate(snapshottime_sum[5m]) > 100 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} take snapshot time is too long" + description: "{{ $labels.instance }} of job {{$labels.job}} take snapshot time is too long: [{{ $value }}]." 
+
+      - alert: avg latency is too high
+        expr: avg_latency > 100
+        for: 1m
+        labels:
+          severity: warning
+        annotations:
+          summary: "Instance {{ $labels.instance }} avg latency is too high"
+          description: "{{ $labels.instance }} of job {{$labels.job}} avg latency is too high: [{{ $value }}]."
+
+      - alert: JvmMemoryFillingUp
+        expr: jvm_memory_bytes_used / jvm_memory_bytes_max{area="heap"} > 0.8
+        for: 5m
+        labels:
+          severity: warning
+        annotations:
+          summary: "JVM memory filling up (instance {{ $labels.instance }})"
+          description: "JVM memory is filling up (> 80%)\n labels: {{ $labels }} value = {{ $value }}\n"
+
+
+
+
+### Grafana
+- Grafana has built-in Prometheus support; just add a Prometheus data source:
+
+  ```bash
+  Name: test-zk
+  Type: Prometheus
+  Url: http://localhost:9090
+  Access: proxy
+  ```
+- Then download and import the default ZooKeeper dashboard [template](https://grafana.com/grafana/dashboards/10465) and customize it.
+- Users who have improvements to contribute can request a Grafana dashboard account by writing an email to **dev@zookeeper.apache.org**.
+
+
+
+### InfluxDB
+
+InfluxDB is an open source time series database that is often used to store metrics
+from ZooKeeper. You can [download](https://portal.influxdata.com/downloads/) the
+open source version or create a [free](https://cloud2.influxdata.com/signup)
+account on InfluxDB Cloud. In either case, configure the [Apache ZooKeeper
+Telegraf plugin](https://www.influxdata.com/integration/apache-zookeeper/) to
+start collecting and storing metrics from your ZooKeeper clusters into your
+InfluxDB instance. There is also an [Apache ZooKeeper InfluxDB
+template](https://www.influxdata.com/influxdb-templates/zookeeper-monitor/) that
+includes the Telegraf configurations and a dashboard to get you set up right
+away.
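On the Telegraf side, a minimal input stanza might look like the following sketch (the server address is an assumption; see the plugin page above for the full set of options):

```toml
# Poll ZooKeeper for metrics; ":2181" means localhost port 2181
[[inputs.zookeeper]]
  servers = [":2181"]
```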
+
+
+## JMX
+More details can be found [here](http://zookeeper.apache.org/doc/current/zookeeperJMX.html).
+
+
+## Four letter words
+More details can be found [here](http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkCommands).
diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperOver.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperOver.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c60a3de7e2f81968208692660716bfb2a9c4d61
--- /dev/null
+++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperOver.md
@@ -0,0 +1,336 @@
+
+
+# ZooKeeper
+
+* [ZooKeeper: A Distributed Coordination Service for Distributed Applications](#ch_DesignOverview)
+    * [Design Goals](#sc_designGoals)
+    * [Data model and the hierarchical namespace](#sc_dataModelNameSpace)
+    * [Nodes and ephemeral nodes](#Nodes+and+ephemeral+nodes)
+    * [Conditional updates and watches](#Conditional+updates+and+watches)
+    * [Guarantees](#Guarantees)
+    * [Simple API](#Simple+API)
+    * [Implementation](#Implementation)
+    * [Uses](#Uses)
+    * [Performance](#Performance)
+    * [Reliability](#Reliability)
+    * [The ZooKeeper Project](#The+ZooKeeper+Project)
+
+
+
+## ZooKeeper: A Distributed Coordination Service for Distributed Applications
+
+ZooKeeper is a distributed, open-source coordination service for
+distributed applications. It exposes a simple set of primitives that
+distributed applications can build upon to implement higher-level services
+for synchronization, configuration maintenance, and groups and naming. It
+is designed to be easy to program to, and uses a data model styled after
+the familiar directory tree structure of file systems. It runs in Java and
+has bindings for both Java and C.
+
+Coordination services are notoriously hard to get right.
They are
+especially prone to errors such as race conditions and deadlock. The
+motivation behind ZooKeeper is to relieve distributed applications of the
+responsibility of implementing coordination services from scratch.
+
+
+
+### Design Goals
+
+**ZooKeeper is simple.** ZooKeeper
+allows distributed processes to coordinate with each other through a
+shared hierarchical namespace which is organized similarly to a standard
+file system. The namespace consists of data registers - called znodes,
+in ZooKeeper parlance - and these are similar to files and directories.
+Unlike a typical file system, which is designed for storage, ZooKeeper
+data is kept in-memory, which means ZooKeeper can achieve high
+throughput and low latency numbers.
+
+The ZooKeeper implementation puts a premium on high performance,
+highly available, strictly ordered access. The performance aspects of
+ZooKeeper mean it can be used in large, distributed systems. The
+reliability aspects keep it from being a single point of failure. The
+strict ordering means that sophisticated synchronization primitives can
+be implemented at the client.
+
+**ZooKeeper is replicated.** Like the
+distributed processes it coordinates, ZooKeeper itself is intended to be
+replicated over a set of hosts called an ensemble.
+
+![ZooKeeper Service](images/zkservice.jpg)
+
+The servers that make up the ZooKeeper service must all know about
+each other. They maintain an in-memory image of state, along with
+transaction logs and snapshots in a persistent store. As long as a
+majority of the servers are available, the ZooKeeper service will be
+available.
+
+Clients connect to a single ZooKeeper server. The client maintains
+a TCP connection through which it sends requests, gets responses, gets
+watch events, and sends heartbeats. If the TCP connection to the server
+breaks, the client will connect to a different server.
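The failover behavior above can be pictured with a simplified sketch. This is an illustration only, not the real client code: the client is handed the list of servers from its connect string and, on connection loss, simply moves on to the next one, wrapping around. (The real client library also shuffles the list and handles timeouts.)

```java
import java.util.Arrays;
import java.util.List;

// Illustrative only: a round-robin selector over the servers in a
// connect string such as "zk1:2181,zk2:2181,zk3:2181". On connection
// loss the client simply tries the next server in the list.
public class ServerSelector {
    private final List<String> servers;
    private int next = 0;

    public ServerSelector(String connectString) {
        this.servers = Arrays.asList(connectString.split(","));
    }

    // Called on the initial connect and after every connection loss.
    public String nextServer() {
        String server = servers.get(next);
        next = (next + 1) % servers.size();
        return server;
    }

    public static void main(String[] args) {
        ServerSelector sel = new ServerSelector("zk1:2181,zk2:2181,zk3:2181");
        System.out.println(sel.nextServer());  // first connection attempt
        System.out.println(sel.nextServer());  // after a connection break
    }
}
```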
+ +**ZooKeeper is ordered.** ZooKeeper +stamps each update with a number that reflects the order of all +ZooKeeper transactions. Subsequent operations can use the order to +implement higher-level abstractions, such as synchronization +primitives. + +**ZooKeeper is fast.** It is +especially fast in "read-dominant" workloads. ZooKeeper applications run +on thousands of machines, and it performs best where reads are more +common than writes, at ratios of around 10:1. + + + +### Data model and the hierarchical namespace + +The namespace provided by ZooKeeper is much like that of a +standard file system. A name is a sequence of path elements separated by +a slash (/). Every node in ZooKeeper's namespace is identified by a +path. + +#### ZooKeeper's Hierarchical Namespace + +![ZooKeeper's Hierarchical Namespace](images/zknamespace.jpg) + + + +### Nodes and ephemeral nodes + +Unlike standard file systems, each node in a ZooKeeper +namespace can have data associated with it as well as children. It is +like having a file-system that allows a file to also be a directory. +(ZooKeeper was designed to store coordination data: status information, +configuration, location information, etc., so the data stored at each +node is usually small, in the byte to kilobyte range.) We use the term +_znode_ to make it clear that we are talking about +ZooKeeper data nodes. + +Znodes maintain a stat structure that includes version numbers for +data changes, ACL changes, and timestamps, to allow cache validations +and coordinated updates. Each time a znode's data changes, the version +number increases. For instance, whenever a client retrieves data it also +receives the version of the data. + +The data stored at each znode in a namespace is read and written +atomically. Reads get all the data bytes associated with a znode and a +write replaces all the data. Each node has an Access Control List (ACL) +that restricts who can do what. + +ZooKeeper also has the notion of ephemeral nodes. 
These znodes
+exist as long as the session that created the znode is active. When the
+session ends the znode is deleted.
+
+
+
+### Conditional updates and watches
+
+ZooKeeper supports the concept of _watches_.
+Clients can set a watch on a znode. A watch will be triggered and
+removed when the znode changes. When a watch is triggered, the client
+receives a packet saying that the znode has changed. If the
+connection between the client and one of the ZooKeeper servers is
+broken, the client will receive a local notification.
+
+**New in 3.6.0:** Clients can also set
+permanent, recursive watches on a znode that are not removed when triggered
+and that trigger for changes on the registered znode as well as any children
+znodes recursively.
+
+
+
+### Guarantees
+
+ZooKeeper is very fast and very simple. Since its goal, though, is
+to be a basis for the construction of more complicated services, such as
+synchronization, it provides a set of guarantees. These are:
+
+* Sequential Consistency - Updates from a client will be applied
+  in the order that they were sent.
+* Atomicity - Updates either succeed or fail. No partial
+  results.
+* Single System Image - A client will see the same view of the
+  service regardless of the server that it connects to. i.e., a
+  client will never see an older view of the system even if the
+  client fails over to a different server with the same session.
+* Reliability - Once an update has been applied, it will persist
+  from that time forward until a client overwrites the update.
+* Timeliness - The client's view of the system is guaranteed to
+  be up-to-date within a certain time bound.
+
+
+
+### Simple API
+
+One of the design goals of ZooKeeper is providing a very simple
+programming interface.
As a result, it supports only these +operations: + +* *create* : + creates a node at a location in the tree + +* *delete* : + deletes a node + +* *exists* : + tests if a node exists at a location + +* *get data* : + reads the data from a node + +* *set data* : + writes data to a node + +* *get children* : + retrieves a list of children of a node + +* *sync* : + waits for data to be propagated + + + +### Implementation + +[ZooKeeper Components](#zkComponents) shows the high-level components +of the ZooKeeper service. With the exception of the request processor, +each of +the servers that make up the ZooKeeper service replicates its own copy +of each of the components. + + + +![ZooKeeper Components](images/zkcomponents.jpg) + +The replicated database is an in-memory database containing the +entire data tree. Updates are logged to disk for recoverability, and +writes are serialized to disk before they are applied to the in-memory +database. + +Every ZooKeeper server services clients. Clients connect to +exactly one server to submit requests. Read requests are serviced from +the local replica of each server database. Requests that change the +state of the service, write requests, are processed by an agreement +protocol. + +As part of the agreement protocol all write requests from clients +are forwarded to a single server, called the +_leader_. The rest of the ZooKeeper servers, called +_followers_, receive message proposals from the +leader and agree upon message delivery. The messaging layer takes care +of replacing leaders on failures and syncing followers with +leaders. + +ZooKeeper uses a custom atomic messaging protocol. Since the +messaging layer is atomic, ZooKeeper can guarantee that the local +replicas never diverge. When the leader receives a write request, it +calculates what the state of the system is when the write is to be +applied and transforms this into a transaction that captures this new +state. 
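The read/write split described above can be illustrated with a toy model. This is not the actual server code — the agreement protocol and message broadcast are elided entirely — but it shows the routing idea: reads are answered from the local replica of whichever server the client is connected to, while writes always go through the leader so the replicas stay in sync.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of request routing in a replicated service. Each server
// answers reads from its own in-memory replica; writes are forwarded
// to the leader, which applies them to every replica (the real
// agreement/broadcast machinery is elided).
public class QuorumModel {
    static class Server {
        final Map<String, String> replica = new HashMap<>();
        String read(String path) { return replica.get(path); }  // local read
    }

    private final Server[] servers;

    public QuorumModel(int n) {
        servers = new Server[n];
        for (int i = 0; i < n; i++) servers[i] = new Server();
    }

    // A write received by any server is forwarded to the leader; after
    // agreement (elided here) it is applied to every replica.
    public void write(String path, String data) {
        for (Server s : servers) s.replica.put(path, data);
    }

    // A read is served by whichever server the client is connected to.
    public String readFrom(int serverIndex, String path) {
        return servers[serverIndex].read(path);
    }

    public static void main(String[] args) {
        QuorumModel q = new QuorumModel(3);
        q.write("/config", "v1");
        System.out.println(q.readFrom(2, "/config"));
    }
}
```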
+
+
+
+### Uses
+
+The programming interface to ZooKeeper is deliberately simple.
+With it, however, you can implement higher-order operations, such as
+synchronization primitives, group membership, ownership, etc.
+
+
+
+### Performance
+
+ZooKeeper is designed to be highly performant. But is it? Results
+from the ZooKeeper development team at Yahoo! Research indicate
+that it is. (See [ZooKeeper Throughput as the Read-Write Ratio Varies](#zkPerfRW).) It is especially high
+performance in applications where reads outnumber writes, since writes
+involve synchronizing the state of all servers. (Reads outnumbering
+writes is typically the case for a coordination service.)
+
+
+
+![ZooKeeper Throughput as the Read-Write Ratio Varies](images/zkperfRW-3.2.jpg)
+
+The [ZooKeeper Throughput as the Read-Write Ratio Varies](#zkPerfRW) is a throughput
+graph of ZooKeeper release 3.2 running on servers with dual 2GHz
+Xeon and two SATA 15K RPM drives. One drive was used as a
+dedicated ZooKeeper log device. The snapshots were written to
+the OS drive. Write requests were 1K writes and the reads were
+1K reads. "Servers" indicate the size of the ZooKeeper
+ensemble, the number of servers that make up the
+service. Approximately 30 other servers were used to simulate
+the clients. The ZooKeeper ensemble was configured such that
+leaders do not allow connections from clients.
+
+######Note
+>In version 3.2 r/w performance improved by ~2x compared to
+ the [previous 3.1 release](http://zookeeper.apache.org/docs/r3.1.1/zookeeperOver.html#Performance).
+
+Benchmarks also indicate that it is reliable.
+[Reliability in the Presence of Errors](#zkPerfReliability) shows how a deployment responds to
+various failures. The events marked in the figure are the following:
+
+1. Failure and recovery of a follower
+1. Failure and recovery of a different follower
+1. Failure of the leader
+1. Failure and recovery of two followers
+1.
Failure of another leader
+
+
+
+### Reliability
+
+To show the behavior of the system over time as
+failures are injected, we ran a ZooKeeper service made up of
+7 machines. We ran the same saturation benchmark as before,
+but this time we kept the write percentage at a constant
+30%, which is a conservative ratio of our expected
+workloads.
+
+
+
+![Reliability in the Presence of Errors](images/zkperfreliability.jpg)
+
+There are a few important observations from this graph. First, if
+followers fail and recover quickly, then ZooKeeper is able to sustain a
+high throughput despite the failure. Second, and perhaps more important,
+the leader election algorithm allows for the system to recover fast
+enough to prevent throughput from dropping substantially. In our
+observations, ZooKeeper takes less than 200ms to elect a new leader.
+Third, as followers recover, ZooKeeper is able to raise throughput
+again once they start processing requests.
+
+
+
+### The ZooKeeper Project
+
+ZooKeeper has been
+[successfully used](https://cwiki.apache.org/confluence/display/ZOOKEEPER/PoweredBy)
+in many industrial applications. It is used at Yahoo! as the
+coordination and failure recovery service for Yahoo! Message
+Broker, which is a highly scalable publish-subscribe system
+managing thousands of topics for replication and data
+delivery. It is used by the Fetching Service for the Yahoo!
+crawler, where it also manages failure recovery. A number of
+Yahoo! advertising systems also use ZooKeeper to implement
+reliable services.
+
+All users and developers are encouraged to join the
+community and contribute their expertise. See the
+[ZooKeeper Project on Apache](http://zookeeper.apache.org/)
+for more information.
+ + diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperProgrammers.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperProgrammers.md new file mode 100644 index 0000000000000000000000000000000000000000..79aa0123a6143dd494db30c893b091bb7a15fe8e --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperProgrammers.md @@ -0,0 +1,1642 @@ + + +# ZooKeeper Programmer's Guide + +### Developing Distributed Applications that use ZooKeeper + +* [Introduction](#_introduction) +* [The ZooKeeper Data Model](#ch_zkDataModel) + * [ZNodes](#sc_zkDataModel_znodes) + * [Watches](#sc_zkDataMode_watches) + * [Data Access](#Data+Access) + * [Ephemeral Nodes](#Ephemeral+Nodes) + * [Sequence Nodes -- Unique Naming](#Sequence+Nodes+--+Unique+Naming) + * [Container Nodes](#Container+Nodes) + * [TTL Nodes](#TTL+Nodes) + * [Time in ZooKeeper](#sc_timeInZk) + * [ZooKeeper Stat Structure](#sc_zkStatStructure) +* [ZooKeeper Sessions](#ch_zkSessions) +* [ZooKeeper Watches](#ch_zkWatches) + * [Semantics of Watches](#sc_WatchSemantics) + * [Persistent, Recursive Watches](#sc_WatchPersistentRecursive) + * [Remove Watches](#sc_WatchRemoval) + * [What ZooKeeper Guarantees about Watches](#sc_WatchGuarantees) + * [Things to Remember about Watches](#sc_WatchRememberThese) +* [ZooKeeper access control using ACLs](#sc_ZooKeeperAccessControl) + * [ACL Permissions](#sc_ACLPermissions) + * [Builtin ACL Schemes](#sc_BuiltinACLSchemes) + * [ZooKeeper C client API](#ZooKeeper+C+client+API) +* [Pluggable ZooKeeper authentication](#sc_ZooKeeperPluggableAuthentication) +* [Consistency Guarantees](#ch_zkGuarantees) +* [Bindings](#ch_bindings) + * [Java Binding](#Java+Binding) + * [Client Configuration Parameters](#sc_java_client_configuration) + * [C Binding](#C+Binding) + * [Installation](#Installation) + * [Building Your Own C 
Client](#Building+Your+Own+C+Client)
+* [Building Blocks: A Guide to ZooKeeper Operations](#ch_guideToZkOperations)
+    * [Handling Errors](#sc_errorsZk)
+    * [Connecting to ZooKeeper](#sc_connectingToZk)
+* [Gotchas: Common Problems and Troubleshooting](#ch_gotchas)
+
+
+
+## Introduction
+
+This document is a guide for developers wishing to create
+distributed applications that take advantage of ZooKeeper's coordination
+services. It contains conceptual and practical information.
+
+The first four sections of this guide present higher-level
+discussions of various ZooKeeper concepts. These are necessary both for an
+understanding of how ZooKeeper works as well as how to work with it. It does
+not contain source code, but it does assume a familiarity with the
+problems associated with distributed computing. The sections in this first
+group are:
+
+* [The ZooKeeper Data Model](#ch_zkDataModel)
+* [ZooKeeper Sessions](#ch_zkSessions)
+* [ZooKeeper Watches](#ch_zkWatches)
+* [Consistency Guarantees](#ch_zkGuarantees)
+
+The next three sections provide practical programming
+information. These are:
+
+* [Building Blocks: A Guide to ZooKeeper Operations](#ch_guideToZkOperations)
+* [Bindings](#ch_bindings)
+* [Gotchas: Common Problems and Troubleshooting](#ch_gotchas)
+
+The guide concludes with an [appendix](#apx_linksToOtherInfo) containing links to other
+useful, ZooKeeper-related information.
+
+Most of the information in this document is written to be accessible as
+stand-alone reference material. However, before starting your first
+ZooKeeper application, you should probably at least read the chapters on
+the [ZooKeeper Data Model](#ch_zkDataModel) and [ZooKeeper Basic Operations](#ch_guideToZkOperations).
+
+
+
+## The ZooKeeper Data Model
+
+ZooKeeper has a hierarchical namespace, much like a distributed file
+system. The only difference is that each node in the namespace can have
+data associated with it as well as children.
It is like having a file
+system that allows a file to also be a directory. Paths to nodes are
+always expressed as canonical, absolute, slash-separated paths; there are
+no relative references. Any Unicode character can be used in a path subject
+to the following constraints:
+
+* The null character (\\u0000) cannot be part of a path name. (This
+  causes problems with the C binding.)
+* The following characters can't be used because they don't
+  display well, or render in confusing ways: \\u0001 - \\u001F and \\u007F
+  - \\u009F.
+* The following characters are not allowed: \\uD800 - \\uF8FF,
+  \\uFFF0 - \\uFFFF.
+* The "." character can be used as part of another name, but "."
+  and ".." cannot alone be used to indicate a node along a path,
+  because ZooKeeper doesn't use relative paths. The following would be
+  invalid: "/a/b/./c" or "/a/b/../c".
+* The token "zookeeper" is reserved.
+
+
+
+### ZNodes
+
+Every node in a ZooKeeper tree is referred to as a
+_znode_. Znodes maintain a stat structure that
+includes version numbers for data changes and ACL changes. The stat
+structure also has timestamps. The version number, together with the
+timestamp, allows ZooKeeper to validate the cache and to coordinate
+updates. Each time a znode's data changes, the version number increases.
+For instance, whenever a client retrieves data, it also receives the
+version of the data. And when a client performs an update or a delete,
+it must supply the version of the data of the znode it is changing. If
+the version it supplies doesn't match the actual version of the data,
+the update will fail. (This behavior can be overridden.)
+
+######Note
+
+>In distributed application engineering, the word
+_node_ can refer to a generic host machine, a
+server, a member of an ensemble, a client process, etc. In the ZooKeeper
+documentation, _znodes_ refer to the data nodes.
+_Servers_ refers to machines that make up the
+ZooKeeper service; _quorum peers_ refer to the
+servers that make up an ensemble; client refers to any host or process
+which uses a ZooKeeper service.
+
+Znodes are the main entity that a programmer accesses. They have
+several characteristics that are worth mentioning here.
+
+
+
+#### Watches
+
+Clients can set watches on znodes. Changes to that znode trigger
+the watch and then clear the watch. When a watch triggers, ZooKeeper
+sends the client a notification. More information about watches can be
+found in the section
+[ZooKeeper Watches](#ch_zkWatches).
+
+
+
+#### Data Access
+
+The data stored at each znode in a namespace is read and written
+atomically. Reads get all the data bytes associated with a znode and a
+write replaces all the data. Each node has an Access Control List
+(ACL) that restricts who can do what.
+
+ZooKeeper was not designed to be a general database or large
+object store. Instead, it manages coordination data. This data can
+come in the form of configuration, status information, rendezvous, etc.
+A common property of the various forms of coordination data is that
+they are relatively small: measured in kilobytes.
+The ZooKeeper client and the server implementations have sanity checks
+to ensure that znodes have less than 1M of data, but the data should
+be much less than that on average. Operating on relatively large data
+sizes will cause some operations to take much more time than others and
+will affect the latencies of some operations because of the extra time
+needed to move more data over the network and onto storage media. If
+large data storage is needed, the usual pattern of dealing with such
+data is to store it on a bulk storage system, such as NFS or HDFS, and
+store pointers to the storage locations in ZooKeeper.
+
+
+
+#### Ephemeral Nodes
+
+ZooKeeper also has the notion of ephemeral nodes. These znodes
+exist as long as the session that created the znode is active.
When
+the session ends the znode is deleted. Because of this behavior
+ephemeral znodes are not allowed to have children. The list of ephemerals
+for the session can be retrieved using the **getEphemerals()** API.
+
+##### getEphemerals()
+Retrieves the list of ephemeral nodes created by the session for the
+given path. If the path is empty, it will list all the ephemeral nodes
+for the session.
+**Use Case** - Suppose the ephemeral nodes for a session are created
+sequentially, so their names are not known in advance, and the list
+needs to be collected to check for duplicate data entries. In that
+case, the getEphemerals() API can be used to get the list of nodes for
+the session. This might be a typical use case for service discovery.
+
+
+
+#### Sequence Nodes -- Unique Naming
+
+When creating a znode you can also request that
+ZooKeeper append a monotonically increasing counter to the end
+of the path. This counter is unique to the parent znode. The
+counter has a format of %010d -- that is 10 digits with 0
+(zero) padding (the counter is formatted in this way to
+simplify sorting), i.e. "0000000001". See
+[Queue
+Recipe](recipes.html#sc_recipes_Queues) for an example use of this feature. Note: the
+counter used to store the next sequence number is a signed int
+(4 bytes) maintained by the parent node; the counter will
+overflow when incremented beyond 2147483647 (resulting in the
+name "-2147483648").
+
+
+
+#### Container Nodes
+
+**Added in 3.5.3**
+
+ZooKeeper has the notion of container znodes. Container znodes are
+special purpose znodes useful for recipes such as leader election,
+locks, etc. When the last child of a container is deleted, the
+container becomes a candidate to be deleted by the server at some
+point in the future.
+
+Given this property, you should be prepared to get
+KeeperException.NoNodeException when creating children inside of
+container znodes. i.e.
when creating child znodes inside of container znodes,
always check for KeeperException.NoNodeException and recreate the container
znode when it occurs.



#### TTL Nodes

**Added in 3.5.3**

When creating PERSISTENT or PERSISTENT_SEQUENTIAL znodes,
you can optionally set a TTL in milliseconds for the znode. If the znode
is not modified within the TTL and has no children, it will become a candidate
to be deleted by the server at some point in the future.

Note: TTL Nodes must be enabled via a System property, as they
are disabled by default. See the [Administrator's Guide](zookeeperAdmin.html#sc_configuration) for
details. If you attempt to create TTL Nodes without the
proper System property set, the server will throw
KeeperException.UnimplementedException.



### Time in ZooKeeper

ZooKeeper tracks time in multiple ways:

* **Zxid**
    Every change to the ZooKeeper state receives a stamp in the
    form of a _zxid_ (ZooKeeper Transaction Id).
    This exposes the total ordering of all changes to ZooKeeper. Each
    change will have a unique zxid, and if zxid1 is smaller than zxid2
    then zxid1 happened before zxid2.
* **Version numbers**
    Every change to a node will cause an increase to one of the
    version numbers of that node. The three version numbers are version
    (number of changes to the data of a znode), cversion (number of
    changes to the children of a znode), and aversion (number of changes
    to the ACL of a znode).
* **Ticks**
    When using multi-server ZooKeeper, servers use ticks to define
    timing of events such as status uploads, session timeouts,
    connection timeouts between peers, etc. The tick time is only
    indirectly exposed through the minimum session timeout (2 times the
    tick time); if a client requests a session timeout less than the
    minimum session timeout, the server will tell the client that the
    session timeout is actually the minimum session timeout.
+* **Real time** + ZooKeeper doesn't use real time, or clock time, at all except + to put timestamps into the stat structure on znode creation and + znode modification. + + + +### ZooKeeper Stat Structure + +The Stat structure for each znode in ZooKeeper is made up of the +following fields: + +* **czxid** + The zxid of the change that caused this znode to be + created. +* **mzxid** + The zxid of the change that last modified this znode. +* **pzxid** + The zxid of the change that last modified children of this znode. +* **ctime** + The time in milliseconds from epoch when this znode was + created. +* **mtime** + The time in milliseconds from epoch when this znode was last + modified. +* **version** + The number of changes to the data of this znode. +* **cversion** + The number of changes to the children of this znode. +* **aversion** + The number of changes to the ACL of this znode. +* **ephemeralOwner** + The session id of the owner of this znode if the znode is an + ephemeral node. If it is not an ephemeral node, it will be + zero. +* **dataLength** + The length of the data field of this znode. +* **numChildren** + The number of children of this znode. + + + +## ZooKeeper Sessions + +A ZooKeeper client establishes a session with the ZooKeeper +service by creating a handle to the service using a language +binding. Once created, the handle starts off in the CONNECTING state +and the client library tries to connect to one of the servers that +make up the ZooKeeper service at which point it switches to the +CONNECTED state. During normal operation the client handle will be in one of these +two states. If an unrecoverable error occurs, such as session +expiration or authentication failure, or if the application explicitly +closes the handle, the handle will move to the CLOSED state. 
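
The handle lifecycle just described can be sketched as a tiny state machine. This is illustrative Python pseudocode, not the real client library; the state names follow the text above, and the `Handle` class is purely a made-up teaching device:

```python
# Toy model of the client handle lifecycle described above: a handle starts
# in CONNECTING, moves to CONNECTED when a server is reached, may fall back
# to CONNECTING on disconnect, and ends in CLOSED (session expiration,
# authentication failure, or explicit close). Illustrative only.

ALLOWED = {
    "CONNECTING": {"CONNECTED", "CLOSED"},
    "CONNECTED": {"CONNECTING", "CLOSED"},
    "CLOSED": set(),                 # terminal: no further transitions
}

class Handle:
    def __init__(self):
        self.state = "CONNECTING"    # handles start off in CONNECTING

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

h = Handle()
h.transition("CONNECTED")    # the client library reached a server
h.transition("CONNECTING")   # connection dropped; the library reconnects
h.transition("CLOSED")       # unrecoverable error or explicit close
```

During normal operation the handle only oscillates between the first two states; CLOSED is terminal, which is why a new handle must be created after session expiration.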

The following figure shows the possible state transitions of a
ZooKeeper client:

![State transitions](images/state_dia.jpg)

To create a client session the application code must provide
a connection string containing a comma-separated list of host:port pairs,
each corresponding to a ZooKeeper server (e.g. "127.0.0.1:4545" or
"127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"). The ZooKeeper
client library will pick an arbitrary server and try to connect to
it. If this connection fails, or if the client becomes
disconnected from the server for any reason, the client will
automatically try the next server in the list, until a connection
is (re-)established.

**Added in 3.2.0**: An
optional "chroot" suffix may also be appended to the connection
string. This will run the client commands while interpreting all
paths relative to this root (similar to the unix chroot
command). If used, the example would look like:
"127.0.0.1:4545/app/a" or
"127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002/app/a", where the
client would be rooted at "/app/a" and all paths would be relative
to this root - i.e. getting/setting/etc. of "/foo/bar" would result
in operations being run on "/app/a/foo/bar" (from the server's
perspective). This feature is particularly useful in multi-tenant
environments where each user of a particular ZooKeeper service
could be rooted differently. This makes re-use much simpler, as
each user can code their application as if it were rooted at
"/", while the actual location (say /app/a) could be determined at
deployment time.

When a client gets a handle to the ZooKeeper service,
ZooKeeper creates a ZooKeeper session, represented as a 64-bit
number, that it assigns to the client. If the client connects to a
different ZooKeeper server, it will send the session id as a part
of the connection handshake.
As a security measure, the server
creates a password for the session id that any ZooKeeper server
can validate. The password is sent to the client with the session
id when the client establishes the session. The client sends this
password with the session id whenever it reestablishes the session
with a new server.

One of the parameters to the ZooKeeper client library call
to create a ZooKeeper session is the session timeout in
milliseconds. The client sends a requested timeout, and the server
responds with the timeout that it can give the client. The current
implementation requires that the timeout be a minimum of 2 times
the tickTime (as set in the server configuration) and a maximum of
20 times the tickTime. The ZooKeeper client API allows access to
the negotiated timeout.

When a client (session) becomes partitioned from the ZK
serving cluster it will begin searching the list of servers that
were specified during session creation. Eventually, when
connectivity between the client and at least one of the servers is
re-established, the session will either again transition to the
"connected" state (if reconnected within the session timeout
value) or it will transition to the "expired" state (if
reconnected after the session timeout). It is not advisable to
create a new session object (a new ZooKeeper instance in Java, or a new
zookeeper handle in the C binding) on disconnection. The ZK client library
will handle reconnect for you. In particular we have heuristics
built into the client library to handle things like "herd effect",
etc... Only create a new session when you are notified of session
expiration (mandatory).

Session expiration is managed by the ZooKeeper cluster
itself, not by the client. When the ZK client establishes a
session with the cluster it provides a "timeout" value detailed
above. This value is used by the cluster to determine when the
client's session expires.
Expiration happens when the cluster
does not hear from the client within the specified session timeout
period (i.e. no heartbeat). At session expiration the cluster will
delete any/all ephemeral nodes owned by that session and
immediately notify any/all connected clients of the change (anyone
watching those znodes). At this point the client of the expired
session is still disconnected from the cluster; it will not be
notified of the session expiration until/unless it is able to
re-establish a connection to the cluster. The client will stay in
the disconnected state until the TCP connection is re-established with
the cluster, at which point the watcher of the expired session
will receive the "session expired" notification.

Example state transitions for an expired session as seen by
the expired session's watcher:

1. 'connected' : session is established and client
   is communicating with cluster (client/server communication is
   operating properly)
1. .... client is partitioned from the
   cluster
1. 'disconnected' : client has lost connectivity
   with the cluster
1. .... time elapses, after 'timeout' period the
   cluster expires the session, nothing is seen by client as it is
   disconnected from cluster
1. .... time elapses, the client regains network
   level connectivity with the cluster
1. 'expired' : eventually the client reconnects to
   the cluster, it is then notified of the
   expiration

Another parameter to the ZooKeeper session establishment
call is the default watcher. Watchers are notified when any state
change occurs in the client. For example, if the client loses
connectivity to the server the client will be notified, or if the
client's session expires, etc... This watcher should consider the
initial state to be disconnected (i.e. before any state change
events are sent to the watcher by the client lib). In the case of
a new connection, the first event sent to the watcher is typically
the session connection event.
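
The recommended reaction to these events can be mimicked with a small sketch. This is plain Python, not the real client API; the state names SyncConnected/Disconnected/Expired mirror the Java `KeeperState` names, and the `App` class is a hypothetical example:

```python
# Hypothetical application reacting to session-state events in its default
# watcher: treat Disconnected as "safe mode" (the library will reconnect),
# and only create a brand-new session once Expired is delivered.

class App:
    def __init__(self):
        self.safe_mode = True        # initial state is effectively disconnected
        self.sessions_created = 1    # the original session

    def default_watcher(self, state):
        if state == "SyncConnected":
            self.safe_mode = False   # resume normal operation
        elif state == "Disconnected":
            self.safe_mode = True    # act conservatively; do NOT recreate yet
        elif state == "Expired":
            self.safe_mode = True
            self.sessions_created += 1   # only now start a new session

app = App()
for state in ["SyncConnected", "Disconnected", "Expired"]:
    app.default_watcher(state)

print(app.sessions_created)   # 2 -- one new session, created only on expiry
```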

The session is kept alive by requests sent by the client. If
the session is idle for a period of time that would time out the
session, the client will send a PING request to keep the session
alive. This PING request not only allows the ZooKeeper server to
know that the client is still active, but it also allows the
client to verify that its connection to the ZooKeeper server is
still active. The timing of the PING is conservative enough to
ensure reasonable time to detect a dead connection and reconnect
to a new server.

Once a connection to the server is successfully established
(connected), there are basically two cases where the client lib generates
connection loss (the result code in the C binding, an exception in Java -- see
the API documentation for binding-specific details), namely when a synchronous or
asynchronous operation is performed and one of the following holds:

1. The application calls an operation on a session that is no
   longer alive/valid
1. The ZooKeeper client disconnects from a server when there
   are pending operations to that server, i.e., there is a pending asynchronous call.

**Added in 3.2.0 -- SessionMovedException**. There is an internal
exception that is generally not seen by clients called the SessionMovedException.
This exception occurs because a request was received on a connection for a session
which has been reestablished on a different server. The normal cause of this error is
a client that sends a request to a server, but the network packet gets delayed, so
the client times out and connects to a new server. When the delayed packet arrives at
the first server, the old server detects that the session has moved, and closes the
client connection. Clients normally do not see this error since they do not read
from those old connections. (Old connections are usually closed.)
One situation in which this
condition can be seen is when two clients try to reestablish the same connection using
a saved session id and password. One of the clients will reestablish the connection
and the second client will be disconnected (causing the pair to attempt to re-establish
its connection/session indefinitely).

**Updating the list of servers**. We allow a client to
update the connection string by providing a new comma-separated list of host:port pairs,
each corresponding to a ZooKeeper server. The function invokes a probabilistic load-balancing
algorithm which may cause the client to disconnect from its current host, with the goal
of achieving an expected uniform number of connections per server in the new list.
In case the current host to which the client is connected is not in the new list,
this call will always cause the connection to be dropped. Otherwise, the decision
is based on whether the number of servers has increased or decreased and by how much.

For example, if the previous connection string contained 3 hosts and now the list contains
these 3 hosts and 2 more hosts, 40% of clients connected to each of the 3 hosts will
move to one of the new hosts in order to balance the load. The algorithm will cause the client
to drop its connection to the current host to which it is connected with probability 0.4, and in this
case cause the client to connect to one of the 2 new hosts, chosen at random.

Another example: suppose we have 5 hosts and now update the list to remove 2 of the hosts;
the clients connected to the 3 remaining hosts will stay connected, whereas all clients connected
to the 2 removed hosts will need to move to one of the 3 hosts, chosen at random. If the connection
is dropped, the client moves to a special mode where it chooses a new server to connect to using the
probabilistic algorithm, and not just round robin.
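
The arithmetic behind these two examples can be checked with a short sketch. This is a simplified model of the probabilistic algorithm described above, not the client library's actual code, and the function name is ours:

```python
# Simplified model of the rebalancing arithmetic described above. When the
# new server list contains all the old servers plus extras, each client
# connected to a surviving server moves with probability 1 - old/new, so
# that the expected number of connections per server ends up uniform.

def move_probability(n_old, n_new):
    """Probability that a client on a *surviving* server migrates."""
    if n_new <= n_old:
        return 0.0      # shrinking list: surviving hosts keep their clients
    return 1.0 - n_old / n_new

# First example: 3 hosts grow to 5 (the same 3 plus 2 new ones).
print(move_probability(3, 5))   # 0.4 -- i.e. 40% of clients move

# Second example: 5 hosts shrink to 3. Clients on the 3 surviving hosts
# stay put; clients on the 2 removed hosts must reconnect regardless.
print(move_probability(5, 3))   # 0.0
```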

In the first example, each client decides to disconnect with probability 0.4, but once the decision is
made, it will try to connect to a random new server, and only if it cannot connect to any of the new
servers will it try to connect to the old ones. After finding a server, or after trying all servers in the
new list and failing to connect, the client moves back to the normal mode of operation where it picks
an arbitrary server from the connectString and attempts to connect to it. If that fails, it will continue
trying different random servers in round robin. (See above for the algorithm used to initially choose a server.)

**Local session**. Added in 3.5.0, mainly implemented by [ZOOKEEPER-1147](https://issues.apache.org/jira/browse/ZOOKEEPER-1147).

- Background: The creation and closing of sessions are costly in ZooKeeper because they need quorum confirmations;
  they become the bottleneck of a ZooKeeper ensemble when it needs to handle thousands of client connections.
  So after 3.5.0, we introduced a new type of session: the local session, which doesn't have the full
  functionality of a normal (global) session. This feature is available by turning on *localSessionsEnabled*.

When *localSessionsUpgradingEnabled* is disabled:

- Local sessions cannot create ephemeral nodes

- Once a local session is lost, users cannot re-establish it using the session-id/password; the session and its watches are gone for good.
  Note: losing the TCP connection does not necessarily imply that the session is lost. If the connection can be re-established with the same zk server
  before the session timeout, then the client can continue (it simply cannot move to another server).

- When a local session connects, the session info is only maintained on the zookeeper server that it is connected to. The leader is not aware of the creation of such a session and
  there is no state written to disk.

- The pings, expiration and other session state maintenance are handled by the server to which the session is currently connected.

When *localSessionsUpgradingEnabled* is enabled:

- A local session can be upgraded to a global session automatically.

- When a new session is created it is saved locally in a wrapped *LocalSessionTracker*. It can subsequently be upgraded
  to a global session as required (e.g. to create ephemeral nodes). If an upgrade is requested, the session is removed from the local
  collections while keeping the same session ID.

- Currently, only one operation, *create ephemeral node*, needs a session upgrade from local to global.
  The reason is that the creation of an ephemeral node depends heavily on a global session. If a local session could create ephemeral
  nodes without upgrading to a global session, it would cause data inconsistency between different nodes.
  The leader also needs to know about the lifespan of a session in order to clean up ephemeral nodes on close/expiry.
  This requires a global session, as a local session is tied to its particular server.

- A session can be both a local and a global session during upgrade, but the upgrade operation cannot be called concurrently by two threads.

- *ZooKeeperServer* (standalone) uses *SessionTrackerImpl*; *LeaderZookeeper* uses *LeaderSessionTracker*, which holds
  *SessionTrackerImpl* (global) and *LocalSessionTracker* (if enabled); *FollowerZooKeeperServer* and *ObserverZooKeeperServer*
  use *LearnerSessionTracker*, which holds *LocalSessionTracker*.

  The UML graph of the session-related classes:

  ```
  +----------------+      +--------------------+       +---------------------+
  |                | ---> | SessionTrackerImpl | ----> | LocalSessionTracker |
  | SessionTracker |      +--------------------+       +---------------------+
  |                |
  |                |      +---------------------------+      +----------------------+
  +----------------+ ---> | UpgradeableSessionTracker | ---> | LeaderSessionTracker |
                          |                           |      +----------------------+
                          +---------------------------+
                                       |
                                       v
                          +-----------------------+
                          | LearnerSessionTracker |
                          +-----------------------+
  ```

- Q&A
  - *What's the reason for having the config option to disable local session upgrade?*
    - In a large deployment that wants to handle a very large number of clients, clients connecting via the observers
      are supposed to use local sessions only. So this is more of a safeguard against someone accidentally creating lots of ephemeral nodes and global sessions.

  - *When is the session created?*
    - In the current implementation, the server tries to create a local session when processing a *ConnectRequest* and when a
      *createSession* request reaches *FinalRequestProcessor*.

  - *What happens if the createSession is sent at server A, and the client disconnects and moves to some other server B,
    which ends up sending it again, and then disconnects and connects back to server A?*
    - When a client reconnects to B, its sessionId won’t exist in B’s local session tracker, so B will send a validation packet.
      If the createSession issued by A is committed before the validation packet arrives, the client will be able to connect.
      Otherwise, the client will get session expired because the quorum doesn’t know about this session yet.
      If the client also tries to connect back to A again, the session has already been removed from the local session tracker,
      so A will need to send a validation packet to the leader.
The outcome should be the same as with B, depending on the timing of the request.



## ZooKeeper Watches

All of the read operations in ZooKeeper - **getData()**, **getChildren()**, and **exists()** - have the option of setting a watch as a
side effect. Here is ZooKeeper's definition of a watch: a watch event is a
one-time trigger, sent to the client that set the watch, which occurs when
the data for which the watch was set changes. There are three key points
to consider in this definition of a watch:

* **One-time trigger**
  One watch event will be sent to the client when the data has changed.
  For example, if a client does a getData("/znode1", true) and later the
  data for /znode1 is changed or deleted, the client will get a watch
  event for /znode1. If /znode1 changes again, no watch event will be
  sent unless the client has done another read that sets a new
  watch.
* **Sent to the client**
  This implies that an event is on the way to the client, but may
  not reach the client before the successful return code to the change
  operation reaches the client that initiated the change. Watches are
  sent asynchronously to watchers. ZooKeeper provides an ordering
  guarantee: a client will never see a change for which it has set a
  watch until it first sees the watch event. Network delays or other
  factors may cause different clients to see watches and return codes
  from updates at different times. The key point is that everything seen
  by the different clients will have a consistent order.
* **The data for which the watch was set**
  This refers to the different ways a node can change. It
  helps to think of ZooKeeper as maintaining two lists of
  watches: data watches and child watches. getData() and
  exists() set data watches. getChildren() sets child
  watches. Alternatively, it may help to think of watches being
  set according to the kind of data returned.
getData() and + exists() return information about the data of the node, + whereas getChildren() returns a list of children. Thus, + setData() will trigger data watches for the znode being set + (assuming the set is successful). A successful create() will + trigger a data watch for the znode being created and a child + watch for the parent znode. A successful delete() will trigger + both a data watch and a child watch (since there can be no + more children) for a znode being deleted as well as a child + watch for the parent znode. + +Watches are maintained locally at the ZooKeeper server to which the +client is connected. This allows watches to be lightweight to set, +maintain, and dispatch. When a client connects to a new server, the watch +will be triggered for any session events. Watches will not be received +while disconnected from a server. When a client reconnects, any previously +registered watches will be reregistered and triggered if needed. In +general this all occurs transparently. There is one case where a watch +may be missed: a watch for the existence of a znode not yet created will +be missed if the znode is created and deleted while disconnected. + +**New in 3.6.0:** Clients can also set +permanent, recursive watches on a znode that are not removed when triggered +and that trigger for changes on the registered znode as well as any children +znodes recursively. + + + +### Semantics of Watches + +We can set watches with the three calls that read the state of +ZooKeeper: exists, getData, and getChildren. The following list details +the events that a watch can trigger and the calls that enable them: + +* **Created event:** + Enabled with a call to exists. +* **Deleted event:** + Enabled with a call to exists, getData, and getChildren. +* **Changed event:** + Enabled with a call to exists and getData. +* **Child event:** + Enabled with a call to getChildren. 
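
The two watch lists and the one-time-trigger rule can be illustrated with a toy in-memory model. This is plain Python, not the real server; the function names only loosely mirror the API:

```python
# Toy model of the server-side watch bookkeeping described above:
# getData()/exists() register data watches, getChildren() registers child
# watches, and every watch is cleared as it is triggered (one-time).

data_watches, child_watches, fired = {}, {}, []

def get_data(path, watcher):
    data_watches.setdefault(path, []).append(watcher)

def get_children(path, watcher):
    child_watches.setdefault(path, []).append(watcher)

def trigger(table, path, event):
    for w in table.pop(path, []):        # one-time: remove while delivering
        fired.append((w, path, event))

def set_data(path):                      # a successful setData()
    trigger(data_watches, path, "NodeDataChanged")

def create(path, parent):                # a successful create()
    trigger(data_watches, path, "NodeCreated")             # exists() watches
    trigger(child_watches, parent, "NodeChildrenChanged")  # parent child watch

get_data("/znode1", "w1")
set_data("/znode1")   # w1 fires once
set_data("/znode1")   # no watch left: nothing fires
print(fired)          # [('w1', '/znode1', 'NodeDataChanged')]
```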



### Persistent, Recursive Watches

**New in 3.6.0:** There is now a variation on the standard
watch described above whereby you can set a watch that does not get removed when triggered.
Additionally, these watches trigger the event types *NodeCreated*, *NodeDeleted*, and *NodeDataChanged*
and, optionally, recursively for all znodes starting at the znode that the watch is registered for. Note
that *NodeChildrenChanged* events are not triggered for persistent recursive watches, as they would be redundant.

Persistent watches are set using the method *addWatch()*. The triggering semantics and guarantees
(other than one-time triggering) are the same as for standard watches. The only exception regarding events is that
recursive persistent watchers never trigger child changed events, as they are redundant.
Persistent watches are removed using *removeWatches()* with watcher type *WatcherType.Any*.



### Remove Watches

We can remove the watches registered on a znode with a call to
removeWatches. Also, a ZooKeeper client can remove watches locally even
if there is no server connection, by setting the local flag to true. The
following list details the events which will be triggered after a
successful watch removal.

* **Child Remove event:**
  Watcher which was added with a call to getChildren.
* **Data Remove event:**
  Watcher which was added with a call to exists or getData.
* **Persistent Remove event:**
  Watcher which was added with a call to add a persistent watch.



### What ZooKeeper Guarantees about Watches

With regard to watches, ZooKeeper maintains these
guarantees:

* Watches are ordered with respect to other events, other
  watches, and asynchronous replies. The ZooKeeper client library
  ensures that everything is dispatched in order.

* A client will see a watch event for a znode it is watching
  before seeing the new data that corresponds to that znode.

* The order of watch events from ZooKeeper corresponds to the
  order of the updates as seen by the ZooKeeper service.



### Things to Remember about Watches

* Standard watches are one-time triggers; if you get a watch event and
  you want to get notified of future changes, you must set another
  watch.

* Because standard watches are one-time triggers and there is latency
  between getting the event and sending a new request to get a watch,
  you cannot reliably see every change that happens to a node in
  ZooKeeper. Be prepared to handle the case where the znode changes
  multiple times between getting the event and setting the watch
  again. (You may not care, but at least realize it may
  happen.)

* A watch object, or function/context pair, will only be
  triggered once for a given notification. For example, if the same
  watch object is registered for an exists and a getData call for the
  same file and that file is then deleted, the watch object would
  only be invoked once, with the deletion notification for the file.

* When you disconnect from a server (for example, when the
  server fails), you will not get any watches until the connection
  is reestablished. For this reason session events are sent to all
  outstanding watch handlers. Use session events to go into a safe
  mode: you will not be receiving events while disconnected, so your
  process should act conservatively in that mode.



## ZooKeeper access control using ACLs

ZooKeeper uses ACLs to control access to its znodes (the
data nodes of a ZooKeeper data tree). The ACL implementation is
quite similar to UNIX file access permissions: it employs
permission bits to allow/disallow various operations against a
node, and the scope to which the bits apply. Unlike standard UNIX
permissions, a ZooKeeper node is not limited by the three standard
scopes for user (owner of the file), group, and world
(other). ZooKeeper does not have a notion of an owner of a
znode.
Instead, an ACL specifies sets of ids and permissions that
are associated with those ids.

Note also that an ACL pertains only to a specific znode. In
particular it does not apply to children. For example, if
_/app_ is only readable by ip:172.16.16.1 and
_/app/status_ is world readable, anyone will
be able to read _/app/status_; ACLs are not
recursive.

ZooKeeper supports pluggable authentication schemes. Ids are
specified using the form _scheme:expression_,
where _scheme_ is the authentication scheme
that the id corresponds to. The set of valid expressions is defined
by the scheme. For example, _ip:172.16.16.1_ is
an id for a host with the address _172.16.16.1_
using the _ip_ scheme, whereas _digest:bob:password_
is an id for the user with the name of _bob_ using
the _digest_ scheme.

When a client connects to ZooKeeper and authenticates
itself, ZooKeeper associates all the ids that correspond to the
client with the client's connection. These ids are checked against
the ACLs of znodes when a client tries to access a node. ACLs are
made up of pairs of _(scheme:expression,
perms)_. The format of
the _expression_ is specific to the scheme. For
example, the pair _(ip:19.22.0.0/16, READ)_
gives the _READ_ permission to any clients with
an IP address that starts with 19.22.



### ACL Permissions

ZooKeeper supports the following permissions:

* **CREATE**: you can create a child node
* **READ**: you can get data from a node and list its children
* **WRITE**: you can set data for a node
* **DELETE**: you can delete a child node
* **ADMIN**: you can set permissions

The _CREATE_
and _DELETE_ permissions have been broken out
of the _WRITE_ permission for finer grained
access controls. The cases for _CREATE_
and _DELETE_ are the following:

You want A to be able to do a set on a ZooKeeper node, but
not be able to _CREATE_
or _DELETE_ children.

_CREATE_
without _DELETE_: clients create requests by
creating ZooKeeper nodes in a parent directory. You want all
clients to be able to add, but only the request processor can
delete. (This is kind of like the APPEND permission for
files.)

Also, the _ADMIN_ permission is there
since ZooKeeper doesn’t have a notion of a file owner. In some
sense the _ADMIN_ permission designates the
entity as the owner. ZooKeeper doesn’t support the LOOKUP
permission (execute permission bit on directories to allow you
to LOOKUP even though you can't list the directory). Everyone
implicitly has LOOKUP permission. This allows you to stat a
node, but nothing more. (The problem is, if you want to call
zoo_exists() on a node that doesn't exist, there is no
permission to check.)

The _ADMIN_ permission also has a special role in terms of ACLs:
in order to retrieve the ACLs of a znode, a user has to have _READ_ or _ADMIN_
permission; but without _ADMIN_ permission, digest hash values will be
masked out.

As of versions **3.9.2 / 3.8.4 / 3.7.3**, the exists() call verifies ACLs
on nodes that exist, and the client must have READ permission; otherwise an
'Insufficient permission' error will be raised.



#### Builtin ACL Schemes

ZooKeeper has the following built-in schemes:

* **world** has a
  single id, _anyone_, that represents
  anyone.
* **auth** is a special
  scheme which ignores any provided expression and instead uses the current user,
  credentials, and scheme. Any expression (whether _user_ like with SASL
  authentication or _user:password_ like with DIGEST authentication) provided is ignored
  by the ZooKeeper server when persisting the ACL. However, the expression must still be
  provided in the ACL because the ACL must match the form _scheme:expression:perms_.
  This scheme is provided as a convenience, as it is a common use-case for
  a user to create a znode and then restrict access to that znode to only that user.

  If there is no authenticated user, setting an ACL with the auth scheme will fail.
* **digest** uses
  a _username:password_ string to generate a
  SHA1 hash which is then used as an ACL ID
  identity. Authentication is done by sending
  the _username:password_ in clear text. When
  used in the ACL, the expression will be
  the _username_ followed by a colon and the base64-encoded
  _SHA1_ digest of _username:password_.
* **ip** uses the
  client host IP as an ACL ID identity. The ACL expression is of
  the form _addr/bits_ where the most
  significant _bits_
  of _addr_ are matched against the most
  significant _bits_ of the client host
  IP.
* **x509** uses the client
  X500 Principal as an ACL ID identity. The ACL expression is the exact
  X500 Principal name of a client. When using the secure port, clients
  are automatically authenticated and their auth info for the x509 scheme
  is set.



#### ZooKeeper C client API

The following constants are provided by the ZooKeeper C
library:

* _const_ _int_ ZOO_PERM_READ; //can read node’s value and list its children
* _const_ _int_ ZOO_PERM_WRITE; //can set the node’s value
* _const_ _int_ ZOO_PERM_CREATE; //can create children
* _const_ _int_ ZOO_PERM_DELETE; //can delete children
* _const_ _int_ ZOO_PERM_ADMIN; //can execute set_acl()
* _const_ _int_ ZOO_PERM_ALL; //all of the above flags OR’d together

The following are the standard ACL IDs:

* _struct_ Id ZOO_ANYONE_ID_UNSAFE; //(‘world’,’anyone’)
* _struct_ Id ZOO_AUTH_IDS; //(‘auth’,’’)

The ZOO_AUTH_IDS empty identity string should be interpreted as “the identity of the creator”.
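
The ZOO_PERM_* values above are bit flags, and ZOO_PERM_ALL is simply their bitwise OR. A quick sketch of the arithmetic (Python; the bit values READ=1, WRITE=2, CREATE=4, DELETE=8, ADMIN=16 follow the conventional layout, but zookeeper.h remains the authoritative source):

```python
# Permission bits as used in ACLs; ALL is the OR of the individual flags.
# Bit values are assumed to match the conventional ZooKeeper layout.
READ, WRITE, CREATE, DELETE, ADMIN = 1 << 0, 1 << 1, 1 << 2, 1 << 3, 1 << 4
ALL = READ | WRITE | CREATE | DELETE | ADMIN

def can(perms, bit):
    """Check whether a permission mask includes a given bit."""
    return perms & bit != 0

print(ALL)                        # 31
print(can(READ | WRITE, WRITE))   # True
print(can(READ | WRITE, DELETE))  # False -- DELETE was broken out of WRITE
```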

The ZooKeeper client comes with three standard ACLs:

* _struct_ ACL_vector ZOO_OPEN_ACL_UNSAFE; //(ZOO_PERM_ALL,ZOO_ANYONE_ID_UNSAFE)
* _struct_ ACL_vector ZOO_READ_ACL_UNSAFE; //(ZOO_PERM_READ, ZOO_ANYONE_ID_UNSAFE)
* _struct_ ACL_vector ZOO_CREATOR_ALL_ACL; //(ZOO_PERM_ALL,ZOO_AUTH_IDS)

ZOO_OPEN_ACL_UNSAFE is a completely open, free-for-all
ACL: any application can execute any operation on the node and
can create, list and delete its children.
ZOO_READ_ACL_UNSAFE grants read-only access to any
application. ZOO_CREATOR_ALL_ACL grants all permissions to the
creator of the node. The creator must have been authenticated by
the server (for example, using the “_digest_”
scheme) before it can create nodes with this ACL.

The following ZooKeeper operations deal with ACLs:

* _int_ _zoo_add_auth_
  (zhandle_t \*zh, _const_ _char_*
  scheme, _const_ _char_*
  cert, _int_ certLen, void_completion_t
  completion, _const_ _void_
  \*data);

The application uses the zoo_add_auth function to
authenticate itself to the server. The function can be called
multiple times if the application wants to authenticate using
different schemes and/or identities.

* _int_ _zoo_create_
  (zhandle_t \*zh, _const_ _char_
  \*path, _const_ _char_
  \*value, _int_
  valuelen, _const_ _struct_
  ACL_vector \*acl, _int_
  flags, _char_
  \*realpath, _int_
  max_realpath_len);

The zoo_create(...) operation creates a new node. The acl
parameter is a list of ACLs associated with the node. The parent
node must have the CREATE permission bit set.

* _int_ _zoo_get_acl_
  (zhandle_t \*zh, _const_ _char_
  \*path, _struct_ ACL_vector
  \*acl, _struct_ Stat \*stat);

This operation returns a node’s ACL info. The node must have READ or ADMIN
permission set. Without ADMIN permission, the digest hash values will be masked out.
+
+* _int_ _zoo_set_acl_
+  (zhandle_t \*zh, _const_ _char_
+  \*path, _int_
+  version, _const_ _struct_
+  ACL_vector \*acl);
+
+This function replaces the node’s ACL list with a new one. The
+node must have the ADMIN permission set.
+
+Here is sample code that uses the above APIs to
+authenticate with the “_foo_” scheme
+and create an ephemeral node “/xyz” with create-only
+permissions.
+
+######Note
+>This is a very simple example which is intended to show
+how to interact with ZooKeeper ACLs
+specifically. See *.../trunk/zookeeper-client/zookeeper-client-c/src/cli.c*
+for an example of a C client implementation.
+
+
+    #include <stdio.h>
+    #include <string.h>
+    #include <errno.h>
+
+    #include "zookeeper.h"
+
+    static zhandle_t *zh;
+
+    /**
+     * In this example this method gets the cert for your
+     * environment -- you must provide it.
+     */
+    char *foo_get_cert_once(char *id) { return 0; }
+
+    /** Watcher function -- empty for this example, not something you should
+     * do in real code. */
+    void watcher(zhandle_t *zzh, int type, int state, const char *path,
+                 void *watcherCtx) {}
+
+    int main(int argc, char *argv[]) {
+        char buffer[512];
+        char p[2048];
+        char *cert = 0;
+        char appId[64];
+
+        strcpy(appId, "example.foo_test");
+        cert = foo_get_cert_once(appId);
+        if (cert != 0) {
+            fprintf(stderr,
+                    "Certificate for appid [%s] is [%s]\n", appId, cert);
+            strncpy(p, cert, sizeof(p)-1);
+            free(cert);
+        } else {
+            fprintf(stderr, "Certificate for appid [%s] not found\n", appId);
+            strcpy(p, "dummy");
+        }
+
+        zoo_set_debug_level(ZOO_LOG_LEVEL_DEBUG);
+
+        zh = zookeeper_init("localhost:3181", watcher, 10000, 0, 0, 0);
+        if (!zh) {
+            return errno;
+        }
+        if (zoo_add_auth(zh, "foo", p, strlen(p), 0, 0) != ZOK)
+            return 2;
+
+        struct ACL CREATE_ONLY_ACL[] = {{ZOO_PERM_CREATE, ZOO_AUTH_IDS}};
+        struct ACL_vector CREATE_ONLY = {1, CREATE_ONLY_ACL};
+        int rc = zoo_create(zh, "/xyz", "value", 5, &CREATE_ONLY, ZOO_EPHEMERAL,
+                            buffer, sizeof(buffer)-1);
+
+        /* this operation will fail with a ZNOAUTH error,
+           since the ACL above grants CREATE only */
+        int buflen=
sizeof(buffer);
+        struct Stat stat;
+        rc = zoo_get(zh, "/xyz", 0, buffer, &buflen, &stat);
+        if (rc) {
+            fprintf(stderr, "Error %d at line %d\n", rc, __LINE__);
+        }
+
+        zookeeper_close(zh);
+        return 0;
+    }
+
+
+
+## Pluggable ZooKeeper authentication
+
+ZooKeeper runs in a variety of different environments with
+various different authentication schemes, so it has a completely
+pluggable authentication framework. Even the built-in authentication
+schemes use the pluggable authentication framework.
+
+To understand how the authentication framework works, first you must
+understand the two main authentication operations. The framework
+first must authenticate the client. This is usually done as soon as
+the client connects to a server and consists of validating information
+sent from or gathered about a client and associating it with the connection.
+The second operation handled by the framework is finding the entries in an
+ACL that correspond to the client. ACL entries are <_idspec,
+permissions_> pairs. The _idspec_ may be
+a simple string match against the authentication information associated
+with the connection or it may be an expression that is evaluated against that
+information. It is up to the implementation of the authentication plugin
+to do the match. Here is the interface that an authentication plugin must
+implement:
+
+
+    public interface AuthenticationProvider {
+        String getScheme();
+        KeeperException.Code handleAuthentication(ServerCnxn cnxn, byte authData[]);
+        boolean isValid(String id);
+        boolean matches(String id, String aclExpr);
+        boolean isAuthenticated();
+    }
+
+
+The first method _getScheme_ returns the string
+that identifies the plugin. Because we support multiple methods of authentication,
+an authentication credential or an _idspec_ will always be
+prefixed with _scheme:_. The ZooKeeper server uses the scheme
+returned by the authentication plugin to determine which ids the scheme
+applies to.
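As a self-contained sketch of a plugin shaped like this interface, consider a hypothetical "whitelist" scheme in which the client sends a user name and ACL expressions are comma-separated user names. The `Code` enum below is a stand-in for `KeeperException.Code` so the example compiles on its own; a real plugin would implement the actual `org.apache.zookeeper.server.auth.AuthenticationProvider` interface:

```java
import java.util.Set;

// Hypothetical "whitelist" scheme, for illustration only.
public class WhitelistAuthSketch {
    // Stand-in for KeeperException.Code.
    enum Code { OK, AUTHFAILED }

    private final Set<String> knownUsers = Set.of("alice", "bob");

    public String getScheme() { return "whitelist"; }

    // A real plugin also receives the ServerCnxn and would associate the
    // id with the connection, e.g. via an Id(getScheme(), user).
    public Code handleAuthentication(byte[] authData) {
        String user = new String(authData);
        return knownUsers.contains(user) ? Code.OK : Code.AUTHFAILED;
    }

    // An id is well formed if it is a non-empty user name.
    public boolean isValid(String id) { return !id.isEmpty(); }

    // ACL expressions for this scheme are comma-separated user names.
    public boolean matches(String id, String aclExpr) {
        for (String allowed : aclExpr.split(",")) {
            if (allowed.equals(id)) return true;
        }
        return false;
    }

    // User names are reasonable ids to copy into "auth" ACL entries.
    public boolean isAuthenticated() { return true; }

    public static void main(String[] args) {
        WhitelistAuthSketch p = new WhitelistAuthSketch();
        System.out.println(p.matches("alice", "alice,bob")); // true
    }
}
```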
+ +_handleAuthentication_ is called when a client +sends authentication information to be associated with a connection. The +client specifies the scheme to which the information corresponds. The +ZooKeeper server passes the information to the authentication plugin whose +_getScheme_ matches the scheme passed by the client. The +implementor of _handleAuthentication_ will usually return +an error if it determines that the information is bad, or it will associate information +with the connection using _cnxn.getAuthInfo().add(new Id(getScheme(), data))_. + +The authentication plugin is involved in both setting and using ACLs. When an +ACL is set for a znode, the ZooKeeper server will pass the id part of the entry to +the _isValid(String id)_ method. It is up to the plugin to verify +that the id has a correct form. For example, _ip:172.16.0.0/16_ +is a valid id, but _ip:host.com_ is not. If the new ACL includes +an "auth" entry, _isAuthenticated_ is used to see if the +authentication information for this scheme that is associated with the connection +should be added to the ACL. Some schemes +should not be included in auth. For example, the IP address of the client is not +considered as an id that should be added to the ACL if auth is specified. + +ZooKeeper invokes _matches(String id, String aclExpr)_ when checking an ACL. It +needs to match authentication information of the client against the relevant ACL +entries. To find the entries which apply to the client, the ZooKeeper server will +find the scheme of each entry and if there is authentication information +from that client for that scheme, _matches(String id, String aclExpr)_ +will be called with _id_ set to the authentication information +that was previously added to the connection by _handleAuthentication_ and +_aclExpr_ set to the id of the ACL entry. The authentication plugin +uses its own logic and matching scheme to determine if _id_ is included +in _aclExpr_. 
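The _matches(String id, String aclExpr)_ contract can be illustrated with matching logic in the spirit of the built-in _ip_ scheme, where an expression like _172.16.0.0/16_ matches any client whose most significant 16 address bits agree. This standalone sketch handles IPv4 only and is not the actual `IPAuthenticationProvider` code:

```java
public class IpAclMatch {
    // Parse a dotted-quad IPv4 address into a 32-bit value.
    static long toIpv4(String addr) {
        String[] o = addr.split("\\.");
        return (Long.parseLong(o[0]) << 24) | (Long.parseLong(o[1]) << 16)
                | (Long.parseLong(o[2]) << 8) | Long.parseLong(o[3]);
    }

    // "addr/bits" matches when the top `bits` bits of the client address
    // equal the top `bits` bits of `addr`; a bare address means /32.
    static boolean matches(String clientIp, String aclExpr) {
        String[] parts = aclExpr.split("/");
        int bits = parts.length == 2 ? Integer.parseInt(parts[1]) : 32;
        long mask = bits == 0 ? 0 : (-1L << (32 - bits)) & 0xFFFFFFFFL;
        return (toIpv4(clientIp) & mask) == (toIpv4(parts[0]) & mask);
    }

    public static void main(String[] args) {
        System.out.println(matches("172.16.5.9", "172.16.0.0/16")); // true
        System.out.println(matches("10.0.0.1", "172.16.0.0/16"));   // false
    }
}
```

Here _id_ would be the client address recorded at connection time and _aclExpr_ the id stored in the ACL entry; the plugin alone decides what "matches" means for its scheme.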
+
+There are two built-in authentication plugins: _ip_ and
+_digest_. Additional plugins can be added using system properties. At
+startup the ZooKeeper server will look for system properties that start with
+"zookeeper.authProvider." and interpret the value of those properties as the class name
+of an authentication plugin. These properties can be set using
+_-Dzookeeper.authProvider.X=com.f.MyAuth_ or by adding entries such as
+the following in the server configuration file:
+
+
+    authProvider.1=com.f.MyAuth
+    authProvider.2=com.f.MyAuth2
+
+
+Care should be taken to ensure that the suffix on the property is unique. If there are
+duplicates such as _-Dzookeeper.authProvider.X=com.f.MyAuth -Dzookeeper.authProvider.X=com.f.MyAuth2_,
+only one will be used. Also, all servers must have the same plugins defined; otherwise clients using
+the authentication schemes provided by the plugins will have problems connecting to some servers.
+
+**Added in 3.6.0**: An alternate abstraction is available for pluggable
+authentication. It provides additional arguments.
+
+
+    public abstract class ServerAuthenticationProvider implements AuthenticationProvider {
+        public abstract KeeperException.Code handleAuthentication(ServerObjs serverObjs, byte authData[]);
+        public abstract boolean matches(ServerObjs serverObjs, MatchValues matchValues);
+    }
+
+
+Instead of implementing AuthenticationProvider you extend ServerAuthenticationProvider. Your handleAuthentication()
+and matches() methods will then receive the additional parameters (via ServerObjs and MatchValues).
+
+* **ZooKeeperServer**
+  The ZooKeeperServer instance
+* **ServerCnxn**
+  The current connection
+* **path**
+  The ZNode path being operated on (or null if not used)
+* **perm**
+  The operation value or 0
+* **setAcls**
+  When the setAcl() method is being operated on, the list of ACLs that are being set
+
+
+
+## Consistency Guarantees
+
+ZooKeeper is a high-performance, scalable service.
Both read and
+write operations are designed to be fast, though reads are faster than
+writes. The reason for this is that in the case of reads, ZooKeeper can
+serve older data, which in turn is due to ZooKeeper's consistency
+guarantees:
+
+* *Sequential Consistency* :
+    Updates from a client will be applied in the order that they
+    were sent.
+
+* *Atomicity* :
+    Updates either succeed or fail -- there are no partial
+    results.
+
+* *Single System Image* :
+    A client will see the same view of the service regardless of
+    the server that it connects to. i.e., a client will never see an
+    older view of the system even if the client fails over to a
+    different server with the same session.
+
+* *Reliability* :
+    Once an update has been applied, it will persist from that
+    time forward until a client overwrites the update. This guarantee
+    has two corollaries:
+    1. If a client gets a successful return code, the update will
+       have been applied. On some failures (communication errors,
+       timeouts, etc) the client will not know if the update has
+       applied or not. We take steps to minimize the failures, but the
+       guarantee is only present with successful return codes.
+       (This is called the _monotonicity condition_ in Paxos.)
+    1. Any updates that are seen by the client, through a read
+       request or successful update, will never be rolled back when
+       recovering from server failures.
+
+* *Timeliness* :
+    The client's view of the system is guaranteed to be up-to-date
+    within a certain time bound (on the order of tens of seconds).
+    Either system changes will be seen by a client within this bound, or
+    the client will detect a service outage.
+
+Using these consistency guarantees it is easy to build higher level
+functions such as leader election, barriers, queues, and read/write
+revocable locks solely at the ZooKeeper client (no additions needed to
+ZooKeeper). See [Recipes and Solutions](recipes.html)
+for more details.
+
+######Note
+
+>Sometimes developers mistakenly assume one other guarantee that
+ZooKeeper does _not_ in fact make. This is:
+> * *Simultaneously Consistent Cross-Client Views* :
+    ZooKeeper does not guarantee that at every instant in
+    time, two different clients will have identical views of
+    ZooKeeper data. Due to factors like network delays, one client
+    may perform an update before another client gets notified of the
+    change. Consider the scenario of two clients, A and B. If client
+    A sets the value of a znode /a from 0 to 1, then tells client B
+    to read /a, client B may read the old value of 0, depending on
+    which server it is connected to. If it
+    is important that Client A and Client B read the same value,
+    Client B should call the **sync()** method from the ZooKeeper API
+    before it performs its read.
+    So, ZooKeeper by itself doesn't guarantee that changes occur
+    synchronously across all servers, but ZooKeeper
+    primitives can be used to construct higher level functions that
+    provide useful client synchronization. (For more information,
+    see the [ZooKeeper Recipes](recipes.html).)
+
+
+
+## Bindings
+
+The ZooKeeper client libraries come in two languages: Java and C.
+The following sections describe these.
+
+
+
+### Java Binding
+
+There are two packages that make up the ZooKeeper Java binding:
+**org.apache.zookeeper** and **org.apache.zookeeper.data**. The rest of the
+packages that make up ZooKeeper are used internally or are part of the
+server implementation. The **org.apache.zookeeper.data** package is made up of
+generated classes that are used simply as containers.
+
+The main class used by a ZooKeeper Java client is the **ZooKeeper** class. Its two constructors differ only
+by an optional session id and password. ZooKeeper supports session
+recovery across instances of a process.
A Java program may save its +session id and password to stable storage, restart, and recover the +session that was used by the earlier instance of the program. + +When a ZooKeeper object is created, two threads are created as +well: an IO thread and an event thread. All IO happens on the IO thread +(using Java NIO). All event callbacks happen on the event thread. +Session maintenance such as reconnecting to ZooKeeper servers and +maintaining heartbeat is done on the IO thread. Responses for +synchronous methods are also processed in the IO thread. All responses +to asynchronous methods and watch events are processed on the event +thread. There are a few things to notice that result from this +design: + +* All completions for asynchronous calls and watcher callbacks + will be made in order, one at a time. The caller can do any + processing they wish, but no other callbacks will be processed + during that time. +* Callbacks do not block the processing of the IO thread or the + processing of the synchronous calls. +* Synchronous calls may not return in the correct order. For + example, assume a client does the following processing: issues an + asynchronous read of node **/a** with + _watch_ set to true, and then in the completion + callback of the read it does a synchronous read of **/a**. (Maybe not good practice, but not illegal + either, and it makes for a simple example.) + Note that if there is a change to **/a** between the asynchronous read and the + synchronous read, the client library will receive the watch event + saying **/a** changed before the + response for the synchronous read, but because of the completion + callback blocking the event queue, the synchronous read will + return with the new value of **/a** + before the watch event is processed. + +Finally, the rules associated with shutdown are straightforward: +once a ZooKeeper object is closed or receives a fatal event +(SESSION_EXPIRED and AUTH_FAILED), the ZooKeeper object becomes invalid. 
+
+On a close, the two threads shut down and any further access to the ZooKeeper
+handle is undefined behavior and should be avoided.
+
+
+
+#### Client Configuration Parameters
+
+The following list contains configuration properties for the Java client. You can set any
+of these properties using Java system properties. For server properties, please check the
+[Server configuration section of the Admin Guide](zookeeperAdmin.html#sc_configuration).
+The ZooKeeper Wiki also has useful pages about
+[ZooKeeper SSL support](https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide),
+and [SASL authentication for ZooKeeper](https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+and+SASL).
+
+
+* *zookeeper.sasl.client* :
+    Set the value to **false** to disable
+    SASL authentication. Default is **true**.
+
+* *zookeeper.sasl.clientconfig* :
+    Specifies the context key in the JAAS login file. Default is "Client".
+
+* *zookeeper.server.principal* :
+    Specifies the server principal to be used by the client for authentication while connecting to the ZooKeeper
+    server, when Kerberos authentication is enabled. If this configuration is provided, then
+    the ZooKeeper client will NOT USE any of the following parameters to determine the server principal:
+    zookeeper.sasl.client.username, zookeeper.sasl.client.canonicalize.hostname, zookeeper.server.realm
+    Note: this config parameter only works for ZooKeeper 3.5.7+ and 3.6.0+.
+
+* *zookeeper.sasl.client.username* :
+    Traditionally, a principal is divided into three parts: the primary, the instance, and the realm.
+    The format of a typical Kerberos V5 principal is primary/instance@REALM.
+    zookeeper.sasl.client.username specifies the primary part of the server principal. Default
+    is "zookeeper". The instance part is derived from the server IP.
Finally, the server's principal is
+    username/IP@realm, where username is the value of zookeeper.sasl.client.username, IP is
+    the server IP, and realm is the value of zookeeper.server.realm.
+
+* *zookeeper.sasl.client.canonicalize.hostname* :
+    If the zookeeper.server.principal parameter is not provided, the ZooKeeper client will try to
+    determine the 'instance' (host) part of the ZooKeeper server principal. First it takes the hostname provided
+    in the ZooKeeper server connection string. Then it tries to 'canonicalize' the address by getting
+    the fully qualified domain name belonging to the address. You can disable this 'canonicalization'
+    by setting: zookeeper.sasl.client.canonicalize.hostname=false
+
+* *zookeeper.server.realm* :
+    Realm part of the server principal. By default it is the client principal realm.
+
+* *zookeeper.disableAutoWatchReset* :
+    This switch controls whether automatic watch resetting is enabled. Clients automatically
+    reset watches during session reconnect by default; this option allows the client to turn off
+    this behavior by setting zookeeper.disableAutoWatchReset to **true**.
+
+* *zookeeper.client.secure* :
+    **New in 3.5.5:**
+    If you want to connect to the server's secure client port, you need to set this property to
+    **true**
+    on the client. This will connect to the server using SSL with the specified credentials. Note that
+    it requires the Netty client.
+
+* *zookeeper.clientCnxnSocket* :
+    Specifies which ClientCnxnSocket to be used. Possible values are
+    **org.apache.zookeeper.ClientCnxnSocketNIO**
+    and
+    **org.apache.zookeeper.ClientCnxnSocketNetty**
+    . Default is
+    **org.apache.zookeeper.ClientCnxnSocketNIO**
+    . If you want to connect to the server's secure client port, you need to set this property to
+    **org.apache.zookeeper.ClientCnxnSocketNetty**
+    on the client.
+
+* *zookeeper.ssl.keyStore.location and zookeeper.ssl.keyStore.password* :
+    **New in 3.5.5:**
+    Specifies the file path to a JKS containing the local credentials to be used for SSL connections,
+    and the password to unlock the file.
+
+* *zookeeper.ssl.keyStore.passwordPath* :
+    **New in 3.8.0:**
+    Specifies the path to a file which contains the keystore password.
+
+* *zookeeper.ssl.trustStore.location and zookeeper.ssl.trustStore.password* :
+    **New in 3.5.5:**
+    Specifies the file path to a JKS containing the remote credentials to be used for SSL connections,
+    and the password to unlock the file.
+
+* *zookeeper.ssl.trustStore.passwordPath* :
+    **New in 3.8.0:**
+    Specifies the path to a file which contains the truststore password.
+
+* *zookeeper.ssl.keyStore.type* and *zookeeper.ssl.trustStore.type*:
+    **New in 3.5.5:**
+    Specifies the file format of the key/trust store files used to establish a TLS connection to the ZooKeeper server.
+    Values: JKS, PEM, PKCS12 or null (detect by filename). Default: null.
+    **New in 3.6.3, 3.7.0:**
+    The format BCFKS was added.
+
+* *jute.maxbuffer* :
+    On the client side, this specifies the maximum size of incoming data from the server. The default is 0xfffff (1,048,575) bytes,
+    or just under 1M. This is really a sanity check. The ZooKeeper server is designed to store and send
+    data on the order of kilobytes. If the incoming data length exceeds this value, an IOException
+    is raised. The client-side value should match the server-side value (setting **System.setProperty("jute.maxbuffer", "xxxx")** on the client side will work);
+    otherwise problems will arise.
+
+* *zookeeper.kinit* :
+    Specifies the path to the kinit binary. Default is "/usr/bin/kinit".
+
+
+
+### C Binding
+
+The C binding has a single-threaded and a multi-threaded library.
+The multi-threaded library is easiest to use and is most similar to the
+Java API. This library will create an IO thread and an event dispatch
+thread for handling connection maintenance and callbacks.
The
+single-threaded library allows ZooKeeper to be used in event-driven
+applications by exposing the event loop used in the multi-threaded
+library.
+
+The package includes two shared libraries: zookeeper_st and
+zookeeper_mt. The former only provides the asynchronous APIs and
+callbacks for integrating into the application's event loop. The only
+reason this library exists is to support platforms where a
+_pthread_ library is not available or is unstable
+(e.g. FreeBSD 4.x). In all other cases, application developers should
+link with zookeeper_mt, as it includes support for both the Sync and Async
+APIs.
+
+
+
+#### Installation
+
+If you're building the client from a check-out from the Apache
+repository, follow the steps outlined below. If you're building from a
+project source package downloaded from Apache, skip to step **3**.
+
+1. Run `mvn compile` in the zookeeper-jute directory (*.../trunk/zookeeper-jute*).
+   This will create a directory named "generated" under
+   *.../trunk/zookeeper-client/zookeeper-client-c*.
+1. Change directory to *.../trunk/zookeeper-client/zookeeper-client-c*
+   and run `autoreconf -if` to bootstrap **autoconf**, **automake** and **libtool**. Make sure you have **autoconf version 2.59** or greater installed.
+   Skip to step **4**.
+1. If you are building from a project source package,
+   unzip/untar the source tarball and cd to the
+   *zookeeper-x.x.x/zookeeper-client/zookeeper-client-c* directory.
+1. Run `./configure` to
+   generate the makefile. Here are some of the options the **configure** utility supports that can be
+   useful in this step:
+   * `--enable-debug`
+     Enables optimization and enables debug info compiler
+     options. (Disabled by default.)
+   * `--without-syncapi`
+     Disables Sync API support; the zookeeper_mt library won't be
+     built. (Enabled by default.)
+   * `--disable-static`
+     Do not build static libraries. (Enabled by
+     default.)
+   * `--disable-shared`
+     Do not build shared libraries. (Enabled by
+     default.)
+
+######Note
+>See INSTALL for general information about running **configure**.
+1. Run `make` or `make install` to build the libraries and install them.
+1. To generate doxygen documentation for the ZooKeeper API, run
+   `make doxygen-doc`. All documentation will be
+   placed in a new subfolder named docs. By default, this command
+   only generates HTML. For information on other document formats,
+   run `./configure --help`.
+
+
+
+#### Building Your Own C Client
+
+In order to be able to use the ZooKeeper C API in your application
+you have to remember to
+
+1. Include the ZooKeeper header: `#include <zookeeper/zookeeper.h>`
+1. If you are building a multithreaded client, compile with the
+   `-DTHREADED` compiler flag to enable the multi-threaded version of
+   the library, and then link against the
+   _zookeeper_mt_ library. If you are building a
+   single-threaded client, do not compile with `-DTHREADED`, and be
+   sure to link against the _zookeeper_st_ library.
+
+######Note
+>See *.../trunk/zookeeper-client/zookeeper-client-c/src/cli.c*
+for an example of a C client implementation.
+
+
+
+## Building Blocks: A Guide to ZooKeeper Operations
+
+This section surveys all the operations a developer can perform
+against a ZooKeeper server. It is lower level information than the earlier
+concepts chapters in this manual, but higher level than the ZooKeeper API
+Reference. It covers these topics:
+
+* [Connecting to ZooKeeper](#sc_connectingToZk)
+
+
+
+### Handling Errors
+
+Both the Java and C client bindings may report errors. The Java client binding does so by throwing KeeperException; calling code() on the exception returns the specific error code. The C client binding returns an error code as defined in the enum ZOO_ERRORS. API callbacks indicate the result code for both language bindings. See the API documentation (javadoc for Java, doxygen for C) for full details on the possible errors and their meaning.
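The Java-side pattern of branching on the specific error code can be sketched as follows. The `StubKeeperException` class here is a stand-in so the example is self-contained (the real `org.apache.zookeeper.KeeperException` requires the ZooKeeper jar), but the listed codes (`NONODE`, `NOAUTH`, `CONNECTIONLOSS`) are among the real `KeeperException.Code` values:

```java
public class KeeperErrorSketch {
    // Stand-in for org.apache.zookeeper.KeeperException, which carries a
    // Code enum that callers retrieve via code().
    static class StubKeeperException extends Exception {
        enum Code { NONODE, NOAUTH, CONNECTIONLOSS }
        private final Code code;
        StubKeeperException(Code code) { this.code = code; }
        Code code() { return code; }
    }

    // Typical client-side handling: branch on the specific error code.
    static String describe(StubKeeperException e) {
        switch (e.code()) {
            case NONODE:         return "znode does not exist";
            case NOAUTH:         return "not authorized by the znode ACL";
            case CONNECTIONLOSS: return "connection lost; the request may or may not have been applied";
            default:             return "unexpected: " + e.code();
        }
    }

    public static void main(String[] args) {
        StubKeeperException e = new StubKeeperException(StubKeeperException.Code.NONODE);
        System.out.println(describe(e));
    }
}
```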
+
+
+
+### Connecting to ZooKeeper
+
+Before we begin, you will have to set up a running ZooKeeper server so that we can start developing the client. For the C client bindings, we will be using the multithreaded library (zookeeper_mt) with a simple example written in C. To establish a connection with the ZooKeeper server, we use the C API _zookeeper_init_ with the following signature:
+
+    int zookeeper_init(const char *host, watcher_fn fn, int recv_timeout, const clientid_t *clientid, void *context, int flags);
+
+* *host* :
+    Connection string to the ZooKeeper server in host:port format. If there are multiple servers, separate the host:port pairs with commas. Eg: "127.0.0.1:2181,127.0.0.1:3001,127.0.0.1:3002"
+
+* *fn* :
+    Watcher function to process events when a notification is triggered.
+
+* *recv_timeout* :
+    Session expiration time in milliseconds.
+
+* *clientid* :
+    We can specify 0 for a new session. If a session has already been established previously, we could provide that client ID and it would reconnect to that previous session.
+
+* *context* :
+    Context object that can be associated with the zhandle_t handle. If it is not used, we can set it to 0.
+
+* *flags* :
+    Reserved for future use; for now, set it to 0.
+
+We will demonstrate a client that outputs "Connected to ZooKeeper" after a successful connection, or an error message otherwise. Let's call the following code _zkClient.cc_ :
+
+
+    #include <stdio.h>
+    #include <errno.h>
+    #include <zookeeper/zookeeper.h>
+    using namespace std;
+
+    // Keeping track of the connection state
+    static int connected = 0;
+    static int expired   = 0;
+
+    // zkHandler handles the connection with ZooKeeper
+    static zhandle_t *zkHandler;
+
+    // The watcher function processes events
+    void watcher(zhandle_t *zkH, int type, int state, const char *path, void *watcherCtx)
+    {
+        if (type == ZOO_SESSION_EVENT) {
+
+            // state refers to the state of the ZooKeeper connection.
+            // To keep it simple, we demonstrate these 3: ZOO_EXPIRED_SESSION_STATE, ZOO_CONNECTED_STATE, ZOO_NOTCONNECTED_STATE
+            // If you are using ACLs, you should also be aware of the authentication failure state - ZOO_AUTH_FAILED_STATE
+            if (state == ZOO_CONNECTED_STATE) {
+                connected = 1;
+            } else if (state == ZOO_NOTCONNECTED_STATE) {
+                connected = 0;
+            } else if (state == ZOO_EXPIRED_SESSION_STATE) {
+                expired = 1;
+                connected = 0;
+                zookeeper_close(zkH);
+            }
+        }
+    }
+
+    int main() {
+        zoo_set_debug_level(ZOO_LOG_LEVEL_DEBUG);
+
+        // zookeeper_init returns the handle upon a successful connection, NULL otherwise
+        zkHandler = zookeeper_init("localhost:2181", watcher, 10000, 0, 0, 0);
+
+        if (!zkHandler) {
+            return errno;
+        } else {
+            printf("Connected to ZooKeeper.\n");
+        }
+
+        // Close the ZooKeeper connection
+        zookeeper_close(zkHandler);
+
+        return 0;
+    }
+
+
+Compile the code with the multithreaded library mentioned before.
+
+`> g++ -Iinclude/ zkClient.cc -lzookeeper_mt -o Client`
+
+Run the client.
+
+`> ./Client`
+
+From the output, you should see "Connected to ZooKeeper" along with ZooKeeper's DEBUG messages if the connection is successful.
+
+
+
+## Gotchas: Common Problems and Troubleshooting
+
+So now you know ZooKeeper. It's fast, simple, your application
+works, but wait ... something's wrong. Here are some pitfalls that
+ZooKeeper users fall into:
+
+1. If you are using watches, you must look for the connected watch
+   event. When a ZooKeeper client disconnects from a server, you will
+   not receive notification of changes until reconnected. If you are
+   watching for a znode to come into existence, you will miss the event
+   if the znode is created and deleted while you are disconnected.
+1. You must test ZooKeeper server failures. The ZooKeeper service
+   can survive failures as long as a majority of servers are active. The
+   question to ask is: can your application handle it? In the real world
+   a client's connection to ZooKeeper can break.
(ZooKeeper server
+   failures and network partitions are common reasons for connection
+   loss.) The ZooKeeper client library takes care of recovering your
+   connection and letting you know what happened, but you must make sure
+   that you recover your state and any outstanding requests that failed.
+   Find out if you got it right in the test lab, not in production - test
+   with a ZooKeeper service made up of several servers and subject
+   them to reboots.
+1. The list of ZooKeeper servers used by the client must match the
+   list of ZooKeeper servers that each ZooKeeper server has. Things can
+   work, although not optimally, if the client list is a subset of the
+   real list of ZooKeeper servers, but not if the client lists ZooKeeper
+   servers not in the ZooKeeper cluster.
+1. Be careful where you put that transaction log. The most
+   performance-critical part of ZooKeeper is the transaction log.
+   ZooKeeper must sync transactions to media before it returns a
+   response. A dedicated transaction log device is key to consistent good
+   performance. Putting the log on a busy device will adversely affect
+   performance. If you only have one storage device, put trace files on
+   NFS and increase the snapshotCount; it doesn't eliminate the problem,
+   but it can mitigate it.
+1. Set your Java max heap size correctly. It is very important to
+   _avoid swapping._ Going to disk unnecessarily will
+   almost certainly degrade your performance unacceptably. Remember, in
+   ZooKeeper, everything is ordered, so if one request hits the disk, all
+   other queued requests hit the disk.
+   To avoid swapping, try to set the heapsize to the amount of
+   physical memory you have, minus the amount needed by the OS and cache.
+   The best way to determine an optimal heap size for your configurations
+   is to _run load tests_. If for some reason you
+   can't, be conservative in your estimates and choose a number well
+   below the limit that would cause your machine to swap.
For example, on
+   a 4G machine, a 3G heap is a conservative estimate to start
+   with.
+
+## Links to Other Information
+
+Outside the formal documentation, there are several other sources of
+information for ZooKeeper developers.
+
+* *[API Reference](https://zookeeper.apache.org/doc/current/apidocs/zookeeper-server/index.html)* :
+    The complete reference to the ZooKeeper API
+
+* *[ZooKeeper Talk at the Hadoop Summit 2008](https://www.youtube.com/watch?v=rXI9xiesUV8)* :
+    A video introduction to ZooKeeper, by Benjamin Reed of Yahoo!
+    Research
+
+* *[Barrier and Queue Tutorial](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Tutorial)* :
+    The excellent Java tutorial by Flavio Junqueira, implementing
+    simple barriers and producer-consumer queues using ZooKeeper.
+
+* *[ZooKeeper - A Reliable, Scalable Distributed Coordination System](https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeperArticles)* :
+    An article by Todd Hoff (07/15/2008)
+
+* *[ZooKeeper Recipes](recipes.html)* :
+    Pseudo-code-level discussion of the implementation of various
+    synchronization solutions with ZooKeeper: Event Handles, Queues,
+    Locks, and Two-phase Commits.
+
diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperQuotas.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperQuotas.md
new file mode 100644
index 0000000000000000000000000000000000000000..72864c37e4f5b07ebe060ea78c1a6883b966305b
--- /dev/null
+++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperQuotas.md
@@ -0,0 +1,85 @@
+
+
+# ZooKeeper Quotas Guide
+
+### A Guide to Deployment and Administration
+
+* [Quotas](#zookeeper_quotas)
+    * [Setting Quotas](#Setting+Quotas)
+    * [Listing Quotas](#Listing+Quotas)
+    * [Deleting Quotas](#Deleting+Quotas)
+
+
+
+## Quotas
+
+ZooKeeper has both namespace and bytes quotas. You can use the ZooKeeperMain class to set up quotas.
+
+ZooKeeper prints _WARN_ messages if users exceed the quota assigned to them. The messages
+are printed in the ZooKeeper log.
+
+Note: the `namespace` quota is a count quota, which limits the number of children
+under the path (including the path itself).
+
+    $ bin/zkCli.sh -server host:port
+
+The above command gives you a command line in which the quota commands are available.
+
+
+
+### Setting Quotas
+
+- You can use `setquota` to set a quota on a ZooKeeper node. Quotas can be set with
+`-n` (for namespace/count) or `-b` (for bytes/data length).
+
+- ZooKeeper quotas are stored in ZooKeeper itself under **/zookeeper/quota**. To prevent other people from
+changing the quotas, users can set the ACL on **/zookeeper/quota** so that only admins are able to read and write to it.
+
+- If no quota exists at the specified path, `setquota` creates one; otherwise it updates the existing quota.
+
+- The scope of a quota is all the nodes under the specified path (including the path itself).
+
+- To simplify quota accounting in a directory hierarchy, only one quota can be set on any complete tree path (from root to leaf node).
+If you set a quota on a path whose parent or child node already has a quota, `setquota` rejects
+the request and reports the conflicting parent or child path; users can then adjust the quota allocations (delete or move the quota up or down)
+according to their specific circumstances.
+
+- Combined with chroot, quotas provide better isolation between different applications. For example:
+
+    ```bash
+    # Chroot is:
+    192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/apps/app1
+    setquota -n 100000 /apps/app1
+    ```
+
+- Users cannot set quotas on paths under **/zookeeper/quota**.
+
+- Quotas come in soft and hard variants. A soft quota only logs a warning when the quota is exceeded, but a hard quota
+also throws a `QuotaExceededException`.
When both a soft and a hard quota are set on the same path, the hard quota takes priority.



### Listing Quotas

You can use _listquota_ to list a quota on a ZooKeeper node.



### Deleting Quotas

You can use _delquota_ to delete a quota on a ZooKeeper node.

diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperReconfig.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperReconfig.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b3e3dad799fe578578671f01387c5ec4329d1ad
--- /dev/null
+++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperReconfig.md
@@ -0,0 +1,908 @@

# ZooKeeper Dynamic Reconfiguration

* [Overview](#ch_reconfig_intro)
* [Changes to Configuration Format](#ch_reconfig_format)
    * [Specifying the client port](#sc_reconfig_clientport)
    * [Specifying multiple server addresses](#sc_multiaddress)
    * [The standaloneEnabled flag](#sc_reconfig_standaloneEnabled)
    * [The reconfigEnabled flag](#sc_reconfig_reconfigEnabled)
    * [Dynamic configuration file](#sc_reconfig_file)
    * [Backward compatibility](#sc_reconfig_backward)
* [Upgrading to 3.5.0](#ch_reconfig_upgrade)
* [Dynamic Reconfiguration of the ZooKeeper Ensemble](#ch_reconfig_dyn)
    * [API](#ch_reconfig_api)
    * [Security](#sc_reconfig_access_control)
    * [Retrieving the current dynamic configuration](#sc_reconfig_retrieving)
    * [Modifying the current dynamic configuration](#sc_reconfig_modifying)
        * [General](#sc_reconfig_general)
        * [Incremental mode](#sc_reconfig_incremental)
        * [Non-incremental mode](#sc_reconfig_nonincremental)
        * [Conditional reconfig](#sc_reconfig_conditional)
        * [Error conditions](#sc_reconfig_errors)
        * [Additional comments](#sc_reconfig_additional)
* [Rebalancing Client Connections](#ch_reconfig_rebalancing)



## Overview

Prior to the 3.5.0 release, the membership and
all other configuration
parameters of ZooKeeper were static - loaded during boot and immutable at
runtime. Operators resorted to "rolling restarts" - a manually intensive
and error-prone method of changing the configuration that has caused data
loss and inconsistency in production.

Starting with 3.5.0, "rolling restarts" are no longer needed!
ZooKeeper comes with full support for automated configuration changes: the
set of ZooKeeper servers, their roles (participant / observer), all ports,
and even the quorum system can be changed dynamically, without service
interruption and while maintaining data consistency. Reconfigurations are
performed immediately, just like other operations in ZooKeeper. Multiple
changes can be made using a single reconfiguration command. The dynamic
reconfiguration functionality does not limit operation concurrency, does
not require client operations to be stopped during reconfigurations, has a
very simple interface for administrators, and adds no complexity to other
client operations.

New client-side features allow clients to find out about configuration
changes and to update the connection string (the list of servers and their
client ports) stored in their ZooKeeper handle. A probabilistic algorithm
is used to rebalance clients across the servers of the new configuration while
keeping the extent of client migrations proportional to the change in
ensemble membership.

This document provides the administrator manual for reconfiguration.
For a detailed description of the reconfiguration algorithms, performance
measurements, and more, please see our paper:

* *Shraer, A., Reed, B., Malkhi, D., Junqueira, F. Dynamic
Reconfiguration of Primary/Backup Clusters.
In _USENIX Annual
Technical Conference (ATC)_ (2012), 425-437* :
    Links: [paper (pdf)](https://www.usenix.org/system/files/conference/atc12/atc12-final74.pdf), [slides (pdf)](https://www.usenix.org/sites/default/files/conference/protected-files/shraer\_atc12\_slides.pdf), [video](https://www.usenix.org/conference/atc12/technical-sessions/presentation/shraer), [hadoop summit slides](http://www.slideshare.net/Hadoop\_Summit/dynamic-reconfiguration-of-zookeeper)

**Note:** Starting with 3.5.3, the dynamic reconfiguration
feature is disabled by default, and has to be explicitly turned on via the
[reconfigEnabled](zookeeperAdmin.html#sc_advancedConfiguration) configuration option.



## Changes to Configuration Format



### Specifying the client port

The client port of a server is the port on which the server accepts plaintext (non-TLS) client connection requests,
and the secure client port is the port on which the server accepts TLS client connection requests.

Starting with 3.5.0 the
_clientPort_ and _clientPortAddress_ configuration parameters should no longer be used in zoo.cfg.

Starting with 3.10.0 the
_secureClientPort_ and _secureClientPortAddress_ configuration parameters should no longer be used in zoo.cfg.

Instead, this information is now part of the server keyword specification, which
becomes as follows:

    server.<positive id> = <address1>:<quorum port>:<election port>[:role];[[<client port address>:]<client port>][;[<secure client port address>:]<secure client port>]

- [New in ZK 3.10.0] The client port specification is optional and is to the right of the
  first semicolon. The secure client port specification is also optional and is to the right
  of the second semicolon. However, the client port and secure client port specifications
  cannot both be omitted; at least one of them must be present. If the user intends to omit the client
  port specification and provide only a secure client port specification (TLS-only server), a second
  semicolon should still be specified to indicate an empty client port specification (see the last
  example below).
In either spec, the port address is optional, and if not specified it defaults
  to "0.0.0.0".
- As usual, the role is also optional; it can be _participant_ or _observer_ (_participant_ by default).

Examples of legal server statements:

    server.5 = 125.23.63.23:1234:1235;1236 (non-TLS server)
    server.5 = 125.23.63.23:1234:1235;1236;1237 (non-TLS + TLS server)
    server.5 = 125.23.63.23:1234:1235;;1237 (TLS-only server)
    server.5 = 125.23.63.23:1234:1235:participant;1236 (non-TLS server)
    server.5 = 125.23.63.23:1234:1235:observer;1236 (non-TLS server)
    server.5 = 125.23.63.23:1234:1235;125.23.63.24:1236 (non-TLS server)
    server.5 = 125.23.63.23:1234:1235:participant;125.23.63.23:1236 (non-TLS server)
    server.5 = 125.23.63.23:1234:1235:participant;125.23.63.23:1236;125.23.63.23:1237 (non-TLS + TLS server)
    server.5 = 125.23.63.23:1234:1235:participant;;125.23.63.23:1237 (TLS-only server)



### Specifying multiple server addresses

Since ZooKeeper 3.6.0 it is possible to specify multiple addresses for each
ZooKeeper server (see [ZOOKEEPER-3188](https://issues.apache.org/jira/projects/ZOOKEEPER/issues/ZOOKEEPER-3188)).
This helps to increase availability and adds network-level
resiliency to ZooKeeper. When multiple physical network interfaces are used
for the servers, ZooKeeper is able to bind on all interfaces and to switch
at runtime to a working interface in case of a network error. The different addresses can be
specified in the config using a pipe ('|') character.

Examples of valid configurations using multiple addresses:

    server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889;2188
    server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889|zoo2-net3:2890:3890;2188
    server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889;zoo2-net1:2188
    server.2=zoo2-net1:2888:3888:observer|zoo2-net2:2889:3889:observer;2188



### The _standaloneEnabled_ flag

Prior to 3.5.0, one could run ZooKeeper in Standalone mode or in a
Distributed mode.
These are separate implementation stacks, and
switching between them at runtime is not possible. By default (for
backward compatibility) _standaloneEnabled_ is set to
_true_. The consequence of using this default is that
if started with a single server the ensemble will not be allowed to
grow, and if started with more than one server it will not be allowed to
shrink to contain fewer than two participants.

Setting the flag to _false_ instructs the system
to run the Distributed software stack even if there is only a single
participant in the ensemble. To achieve this the (static) configuration
file should contain:

    standaloneEnabled=false

With this setting it is possible to start a ZooKeeper ensemble
containing a single participant and to dynamically grow it by adding
more servers. Similarly, it is possible to shrink an ensemble so that
just a single participant remains, by removing servers.

Since running the Distributed mode allows more flexibility, we
recommend setting the flag to _false_. We expect that
the legacy Standalone mode will be deprecated in the future.



### The _reconfigEnabled_ flag

Starting with 3.5.0 and prior to 3.5.3, there was no way to disable
the dynamic reconfiguration feature. We offer the option of
disabling the reconfiguration feature because, with reconfiguration enabled,
there is a security concern that a malicious actor can make arbitrary changes
to the configuration of a ZooKeeper ensemble, including adding a compromised
server to the ensemble. We prefer to leave it to the user's
discretion to decide whether to enable the feature and to make sure that the appropriate security
measures are in place.
So in 3.5.3 the [reconfigEnabled](zookeeperAdmin.html#sc_advancedConfiguration) configuration option was introduced
such that the reconfiguration feature can be completely disabled and any attempt
to reconfigure a cluster through the reconfig API, with or without authentication,
will fail by default, unless **reconfigEnabled** is set to
**true**.

To set the option to true, the configuration file (zoo.cfg) should contain:

    reconfigEnabled=true



### Dynamic configuration file

Starting with 3.5.0 we distinguish between dynamic
configuration parameters, which can be changed during runtime, and
static configuration parameters, which are read from a configuration
file when a server boots and don't change during its execution. For now,
the following configuration keywords are considered part of the dynamic
configuration: _server_, _group_
and _weight_.

Dynamic configuration parameters are stored in a separate file on
the server (which we call the dynamic configuration file). This file is
linked from the static config file using the new
_dynamicConfigFile_ keyword.

**Example**

#### zoo_replicated1.cfg


    tickTime=2000
    dataDir=/zookeeper/data/zookeeper1
    initLimit=5
    syncLimit=2
    dynamicConfigFile=/zookeeper/conf/zoo_replicated1.cfg.dynamic


#### zoo_replicated1.cfg.dynamic


    server.1=125.23.63.23:2780:2783:participant;2791
    server.2=125.23.63.24:2781:2784:participant;2792
    server.3=125.23.63.25:2782:2785:participant;2793


When the ensemble configuration changes, the static configuration
parameters remain the same. The dynamic parameters are pushed by
ZooKeeper and overwrite the dynamic configuration files on all servers.
Thus, the dynamic configuration files on the different servers are
usually identical (they can only differ momentarily when a
reconfiguration is in progress, or if a new configuration hasn't
propagated yet to some of the servers).
Once created, the dynamic
configuration file should not be manually altered. Changes are made only
through the new reconfiguration commands outlined below. Note that
changing the config of an offline cluster could result in an
inconsistency with respect to configuration information stored in the
ZooKeeper log (and the special configuration znode, populated from the
log) and is therefore highly discouraged.

**Example 2**

Users may prefer to initially specify a single configuration file.
The following is thus also legal:

#### zoo_replicated1.cfg


    tickTime=2000
    dataDir=/zookeeper/data/zookeeper1
    initLimit=5
    syncLimit=2
    clientPort=2791
    server.1=125.23.63.23:2780:2783:participant;2791
    server.2=125.23.63.24:2781:2784:participant;2792
    server.3=125.23.63.25:2782:2785:participant;2793


The configuration files on each server will be automatically split
into dynamic and static files, if they are not already in this format.
So the configuration file above will be automatically transformed into
the two files in Example 1. Note that the clientPort and
clientPortAddress lines (if specified) will be automatically removed
during this process, if they are redundant (as in the example above).
The original static configuration file is backed up (in a .bak
file).



### Backward compatibility

We still support the old configuration format. For example, the
following configuration file is acceptable (but not recommended):

#### zoo_replicated1.cfg

    tickTime=2000
    dataDir=/zookeeper/data/zookeeper1
    initLimit=5
    syncLimit=2
    clientPort=2791
    server.1=125.23.63.23:2780:2783:participant
    server.2=125.23.63.24:2781:2784:participant
    server.3=125.23.63.25:2782:2785:participant


During boot, a dynamic configuration file is created and contains
the dynamic part of the configuration as explained earlier. In this
case, however, the line "clientPort=2791" will remain in the static
configuration file of server 1 since it is not redundant -- it was not
specified as part of the "server.1=..."
using the format explained in
the section [Changes to Configuration Format](#ch_reconfig_format). If a reconfiguration
is invoked that sets the client port of server 1, we remove
"clientPort=2791" from the static configuration file (the dynamic file
now contains this information as part of the specification of server
1).



## Upgrading to 3.5.0

Upgrading a running ZooKeeper ensemble to 3.5.0 should be done only
after upgrading your ensemble to the 3.4.6 release. Note that this is only
necessary for rolling upgrades (if you're fine with shutting down the
system completely, you don't have to go through 3.4.6). If you attempt a
rolling upgrade without going through 3.4.6 (for example from 3.4.5), you
may get the following error:

    2013-01-30 11:32:10,663 [myid:2] - INFO [localhost/127.0.0.1:2784:QuorumCnxManager$Listener@498] - Received connection request /127.0.0.1:60876
    2013-01-30 11:32:10,663 [myid:2] - WARN [localhost/127.0.0.1:2784:QuorumCnxManager@349] - Invalid server id: -65536

During a rolling upgrade, each server is taken down in turn and
rebooted with the new 3.5.0 binaries. Before starting the server with
3.5.0 binaries, we highly recommend updating the configuration file so
that all server statements "server.x=..." contain client ports (see the
section [Specifying the client port](#sc_reconfig_clientport)). As explained earlier
you may leave the configuration in a single file, as well as leave the
clientPort/clientPortAddress statements (although if you specify client
ports in the new format, these statements are now redundant).



## Dynamic Reconfiguration of the ZooKeeper Ensemble

The ZooKeeper Java and C APIs were extended with getConfig and reconfig
commands that facilitate reconfiguration. Both commands have a synchronous
(blocking) variant and an asynchronous one.
We demonstrate these commands
here using the Java CLI, but note that you can similarly use the C CLI or
invoke the commands directly from a program just like any other ZooKeeper
command.



### API

There are two sets of APIs for both the Java and C clients.

* ***Reconfiguration API*** :
    The reconfiguration API is used to reconfigure the ZooKeeper cluster.
    Starting with 3.5.3, the reconfiguration Java APIs were moved from the ZooKeeper class
    into the ZooKeeperAdmin class, and use of this API requires ACL setup and user
    authentication (see [Security](#sc_reconfig_access_control) for more information).

* ***Get Configuration API*** :
    The get configuration APIs are used to retrieve ZooKeeper cluster configuration information
    stored in the /zookeeper/config znode. Use of this API does not require specific setup or authentication,
    because /zookeeper/config is readable by any user.



### Security

Prior to **3.5.3**, there was no enforced security mechanism
over reconfig, so any ZooKeeper client that could connect to a ZooKeeper ensemble
had the ability to change the state of the ZooKeeper cluster via reconfig.
It was thus possible for a malicious client to change the membership of an ensemble,
e.g., add a compromised server or remove legitimate servers.
Such changes can amount to serious security vulnerabilities.

To address this security concern, we introduced access control over reconfig
starting from **3.5.3** such that only a specific set of users
can use reconfig commands or APIs, and these users need to be configured explicitly. In addition,
the ZooKeeper cluster must be set up with authentication enabled so ZooKeeper clients can be authenticated.

We also provide an escape hatch for users who operate and interact with a ZooKeeper ensemble in a secured
environment (i.e., behind a company firewall).
For those users who want to use the reconfiguration feature but
don't want the overhead of configuring an explicit list of authorized users for reconfig access checks,
they can set ["skipACL"](zookeeperAdmin.html#sc_authOptions) to "yes", which will
skip the ACL check and allow any user to reconfigure the cluster.

Overall, ZooKeeper provides flexible configuration options for the reconfiguration feature
that allow users to choose based on their security requirements.
We leave it to the user's discretion to ensure that appropriate security measures are in place.

* ***Access Control*** :
    The dynamic configuration is stored in a special znode
    ZooDefs.CONFIG_NODE = /zookeeper/config. By default this node is read-only
    for all users, except the super user and users that are explicitly configured for write
    access.
    Clients that need to use reconfig commands or the reconfig API should be configured as users
    that have write access to CONFIG_NODE. By default, only the super user has full control, including
    write access to CONFIG_NODE. Additional users can be granted write access through the super user
    by setting an ACL that has write permission associated with the specified user.
    A few examples of how to set up ACLs and use the reconfiguration API with authentication can be found in
    ReconfigExceptionTest.java and TestReconfigServer.cc.

* ***Authentication*** :
    Authentication of users is orthogonal to the access control and is delegated to
    the existing authentication mechanisms supported by ZooKeeper's pluggable authentication schemes.
    See [ZooKeeper and SASL](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL) for more details on this topic.

* ***Disable ACL check*** :
    ZooKeeper supports the ["skipACL"](zookeeperAdmin.html#sc_authOptions) option such that the ACL
    check will be completely skipped if skipACL is set to "yes". In such cases any unauthenticated
    user can use the reconfig API.
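Granting a specific user write access to CONFIG_NODE with the digest scheme uses an ACL id of the form `user:BASE64(SHA1(user:password))`. As an illustration only (a minimal sketch mirroring what ZooKeeper's `DigestAuthenticationProvider` produces; the user name and password here are hypothetical, and in practice you would use that class or zkCli's `addauth`/`setAcl` commands), such a digest id can be computed with standard Java:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

class DigestAclId {
    // Builds a digest-scheme ACL id "user:BASE64(SHA1(user:password))",
    // the id format used in ACLs that grant write access to /zookeeper/config.
    // Illustrative sketch only; ZooKeeper's DigestAuthenticationProvider
    // provides the real implementation.
    static String digestAclId(String user, String password) throws NoSuchAlgorithmException {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] hash = sha1.digest((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return user + ":" + Base64.getEncoder().encodeToString(hash);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Id to embed in an ACL granting reconfig access to the hypothetical user "alice".
        System.out.println(digestAclId("alice", "alicepass"));
    }
}
```

The resulting id would then be referenced in an ACL on CONFIG_NODE that grants write permission to that user.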
### Retrieving the current dynamic configuration

The dynamic configuration is stored in a special znode
ZooDefs.CONFIG_NODE = /zookeeper/config. The new
`config` CLI command reads this znode (currently it is
simply a wrapper to `get /zookeeper/config`). As with
normal reads, to retrieve the latest committed value you should do a
`sync` first.

    [zk: 127.0.0.1:2791(CONNECTED) 3] config
    server.1=localhost:2780:2783:participant;localhost:2791
    server.2=localhost:2781:2784:participant;localhost:2792
    server.3=localhost:2782:2785:participant;localhost:2793
    version=400000003

Notice the last line of the output. This is the configuration
version. The version equals the zxid of the reconfiguration command
which created this configuration. The version of the first established
configuration equals the zxid of the NEWLEADER message sent by the
first successfully established leader. When a configuration is written
to a dynamic configuration file, the version automatically becomes part
of the filename and the static configuration file is updated with the
path to the new dynamic configuration file. Configuration files
corresponding to earlier versions are retained for backup
purposes.

At boot time the version (if it exists) is extracted from the
filename. The version should never be altered manually by users or the
system administrator. It is used by the system to know which
configuration is most up-to-date. Manipulating it manually can result in
data loss and inconsistency.

Just like a `get` command, the
`config` CLI command accepts the _-w_
flag for setting a watch on the znode, and the _-s_ flag for
displaying the Stats of the znode. It additionally accepts a new flag
_-c_ which outputs only the version and the client
connection string corresponding to the current configuration.
For
example, for the configuration above we would get:

    [zk: 127.0.0.1:2791(CONNECTED) 17] config -c
    400000003 localhost:2791,localhost:2793,localhost:2792

Note that when using the API directly, this command is called
`getConfig`.

As with any read command, it returns the configuration known to the
follower to which your client is connected, which may be slightly
out-of-date. One can use the `sync` command for
stronger guarantees. For example, using the Java API:

    zk.sync(ZooDefs.CONFIG_NODE, void_callback, context);
    zk.getConfig(watcher, callback, context);

Note: in 3.5.0 it doesn't really matter which path is passed to the
`sync()` command as all the server's state is brought
up to date with the leader (so one could use a different path instead of
ZooDefs.CONFIG_NODE). However, this may change in the future.



### Modifying the current dynamic configuration

Modifying the configuration is done through the
`reconfig` command. There are two modes of
reconfiguration: incremental and non-incremental (bulk). The
non-incremental mode simply specifies the new dynamic configuration of the
system. The incremental mode specifies changes to the current configuration.
The `reconfig` command returns the new
configuration.

A few examples are in: *ReconfigTest.java*,
*ReconfigRecoveryTest.java* and
*TestReconfigServer.cc*.



#### General

**Removing servers:** Any server can
be removed, including the leader (although removing the leader will
result in a short unavailability, see Figures 6 and 8 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters)). The server will not be shut down automatically.
Instead, it becomes a "non-voting follower". This is somewhat similar
to an observer in that its votes don't count towards the Quorum of
votes necessary to commit operations.
However, unlike a non-voting +follower, an observer doesn't actually see any operation proposals and +does not ACK them. Thus a non-voting follower has a more significant +negative effect on system throughput compared to an observer. +Non-voting follower mode should only be used as a temporary mode, +before shutting the server down, or adding it as a follower or as an +observer to the ensemble. We do not shut the server down automatically +for two main reasons. The first reason is that we do not want all the +clients connected to this server to be immediately disconnected, +causing a flood of connection requests to other servers. Instead, it +is better if each client decides when to migrate independently. The +second reason is that removing a server may sometimes (rarely) be +necessary in order to change it from "observer" to "participant" (this +is explained in the section [Additional comments](#sc_reconfig_additional)). + +Note that the new configuration should have some minimal number of +participants in order to be considered legal. If the proposed change +would leave the cluster with less than 2 participants and standalone +mode is enabled (standaloneEnabled=true, see the section [The _standaloneEnabled_ flag](#sc_reconfig_standaloneEnabled)), the reconfig will not be +processed (BadArgumentsException). If standalone mode is disabled +(standaloneEnabled=false) then it's legal to remain with 1 or more +participants. + +**Adding servers:** Before a +reconfiguration is invoked, the administrator must make sure that a +quorum (majority) of participants from the new configuration are +already connected and synced with the current leader. To achieve this +we need to connect a new joining server to the leader before it is +officially part of the ensemble. 
This is done by starting the joining +server using an initial list of servers which is technically not a +legal configuration of the system but (a) contains the joiner, and (b) +gives sufficient information to the joiner in order for it to find and +connect to the current leader. We list a few different options of +doing this safely. + +1. Initial configuration of joiners is comprised of servers in + the last committed configuration and one or more joiners, where + **joiners are listed as observers.** + For example, if servers D and E are added at the same time to (A, + B, C) and server C is being removed, the initial configuration of + D could be (A, B, C, D) or (A, B, C, D, E), where D and E are + listed as observers. Similarly, the configuration of E could be + (A, B, C, E) or (A, B, C, D, E), where D and E are listed as + observers. **Note that listing the joiners as + observers will not actually make them observers - it will only + prevent them from accidentally forming a quorum with other + joiners.** Instead, they will contact the servers in the + current configuration and adopt the last committed configuration + (A, B, C), where the joiners are absent. Configuration files of + joiners are backed up and replaced automatically as this happens. + After connecting to the current leader, joiners become non-voting + followers until the system is reconfigured and they are added to + the ensemble (as participant or observer, as appropriate). +1. Initial configuration of each joiner is comprised of servers + in the last committed configuration + **the + joiner itself, listed as a participant.** For example, to + add a new server D to a configuration consisting of servers (A, B, + C), the administrator can start D using an initial configuration + file consisting of servers (A, B, C, D). If both D and E are added + at the same time to (A, B, C), the initial configuration of D + could be (A, B, C, D) and the configuration of E could be (A, B, + C, E). 
Similarly, if D is added and C is removed at the same time,
    the initial configuration of D could be (A, B, C, D). Never list
    more than one joiner as participant in the initial configuration
    (see warning below).
1. Whether listing the joiner as an observer or as participant,
    it is also fine not to list all the current configuration servers,
    as long as the current leader is in the list. For example, when
    adding D we could start D with a configuration file consisting of
    just (A, D) if A is the current leader. However, this is more
    fragile since if A fails before D officially joins the ensemble, D
    doesn’t know anyone else and therefore the administrator will have
    to intervene and restart D with another server list.

###### Note
>##### Warning

>Never specify more than one joining server in the same initial
configuration as participants. Currently, the joining servers don’t
know that they are joining an existing ensemble; if multiple joiners
are listed as participants they may form an independent quorum,
creating a split-brain situation such as processing operations
independently from your main ensemble. It is OK to list multiple
joiners as observers in an initial config.

If the configuration of existing servers changes or they become unavailable
before the joiner succeeds in connecting and learning about configuration changes, the
joiner may need to be restarted with an updated configuration file in order to be
able to connect.

Finally, note that once connected to the leader, a joiner adopts
the last committed configuration, in which it is absent (the initial
config of the joiner is backed up before being rewritten). If the
joiner restarts in this state, it will not be able to boot since it is
absent from its configuration file. In order to start it you’ll once
again have to specify an initial configuration.
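The warning above can be checked mechanically before starting joiners. Below is a small sketch (a hypothetical helper, not part of ZooKeeper) that rejects a proposed initial configuration in which more than one joiner is listed as participant; it assumes single-address server lines in the format described earlier:

```java
import java.util.List;
import java.util.Set;

class JoinerConfigCheck {
    // Counts the joiners that appear as participants in a proposed initial
    // configuration and rejects more than one, per the warning above.
    // Assumes single-address lines "server.<id>=<host>:<port>:<port>[:role];...";
    // a missing role defaults to participant. Illustrative sketch only.
    static void checkInitialConfig(List<String> serverLines, Set<String> joinerIds) {
        int joinersAsParticipants = 0;
        for (String line : serverLines) {
            String id = line.substring("server.".length(), line.indexOf('='));
            if (!joinerIds.contains(id)) {
                continue; // only joiners are of interest here
            }
            String quorumPart = line.substring(line.indexOf('=') + 1).split(";")[0];
            String[] fields = quorumPart.split(":");
            String role = fields.length > 3 ? fields[3] : "participant";
            if (role.equals("participant")) {
                joinersAsParticipants++;
            }
        }
        if (joinersAsParticipants > 1) {
            throw new IllegalArgumentException(
                    "more than one joiner listed as participant - risk of split brain");
        }
    }
}
```

Listing D and E as observers next to (A, B, C) passes this check; listing both as participants does not.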
+ +**Modifying server parameters:** One +can modify any of the ports of a server, or its role +(participant/observer) by adding it to the ensemble with different +parameters. This works in both the incremental and the bulk +reconfiguration modes. It is not necessary to remove the server and +then add it back; just specify the new parameters as if the server is +not yet in the system. The server will detect the configuration change +and perform the necessary adjustments. See an example in the section +[Incremental mode](#sc_reconfig_incremental) and an exception to this +rule in the section [Additional comments](#sc_reconfig_additional). + +It is also possible to change the Quorum System used by the +ensemble (for example, change the Majority Quorum System to a +Hierarchical Quorum System on the fly). This, however, is only allowed +using the bulk (non-incremental) reconfiguration mode. In general, +incremental reconfiguration only works with the Majority Quorum +System. Bulk reconfiguration works with both Hierarchical and Majority +Quorum Systems. + +**Performance Impact:** There is +practically no performance impact when removing a follower, since it +is not being automatically shut down (the effect of removal is that +the server's votes are no longer being counted). When adding a server, +there is no leader change and no noticeable performance disruption. +For details and graphs please see Figures 6, 7 and 8 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters). + +The most significant disruption will happen when a leader change +is caused, in one of the following cases: + +1. Leader is removed from the ensemble. +1. Leader's role is changed from participant to observer. +1. The port used by the leader to send transactions to others + (quorum port) is modified. + +In these cases we perform a leader hand-off where the old leader +nominates a new leader. 
The resulting unavailability is usually
shorter than when a leader crashes since detecting leader failure is
unnecessary and electing a new leader can usually be avoided during a
hand-off (see Figures 6 and 8 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters)).

When the client port of a server is modified, the server does not drop
existing client connections. New connections to the server will have
to use the new client port.

**Progress guarantees:** Up to the
invocation of the reconfig operation, a quorum of the old
configuration is required to be available and connected for ZooKeeper
to be able to make progress. Once reconfig is invoked, a quorum of
both the old and of the new configurations must be available. The
final transition happens once (a) the new configuration is activated,
and (b) all operations scheduled before the new configuration is
activated by the leader are committed. Once (a) and (b) happen, only a
quorum of the new configuration is required. Note, however, that
neither (a) nor (b) is visible to a client. Specifically, when a
reconfiguration operation commits, it only means that an activation
message was sent out by the leader. It does not necessarily mean that
a quorum of the new configuration got this message (which is required
in order to activate it) or that (b) has happened. If one wants to
make sure that both (a) and (b) have already occurred (for example, in
order to know that it is safe to shut down old servers that were
removed), one can simply invoke an update
(`set-data`, or some other quorum operation, but not
a `sync`) and wait for it to commit. An alternative
way to achieve this would have been to introduce another round in the
reconfiguration protocol (which, for simplicity and compatibility with
Zab, we decided to avoid).
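The progress guarantee above can be illustrated with a toy model (illustrative code only, not part of ZooKeeper): while a reconfiguration is in flight, majorities of both the old and the new participant sets must be available; after activation, only a majority of the new set is needed:

```java
import java.util.Set;

class QuorumProgress {
    // True if a majority of 'members' is contained in 'available'.
    static boolean hasMajority(Set<String> members, Set<String> available) {
        long up = members.stream().filter(available::contains).count();
        return up > members.size() / 2;
    }

    // Toy model of the progress guarantee: between the invocation of reconfig
    // and the activation of the new configuration, quorums of both the old and
    // the new configurations are required.
    static boolean canMakeProgressDuringReconfig(
            Set<String> oldCfg, Set<String> newCfg, Set<String> available) {
        return hasMajority(oldCfg, available) && hasMajority(newCfg, available);
    }
}
```

For example, when growing (A, B, C) to (A, B, C, D, E): with only A and B up there is a majority of the old configuration (2 of 3) but not of the new one (2 of 5), so no progress can be made mid-reconfiguration.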
+
+<a name="sc_reconfig_incremental"></a>
+
+#### Incremental mode
+
+The incremental mode allows adding servers to and removing servers from the
+current configuration. Multiple changes are allowed. For
+example:
+
+    > reconfig -remove 3 -add server.5=125.23.63.23:1234:1235;1236
+
+Both the add and the remove options take a list of comma-separated
+arguments (no spaces):
+
+    > reconfig -remove 3,4 -add server.5=localhost:2111:2112;2113,6=localhost:2114:2115:observer;2116
+
+The format of the server statement is exactly the same as
+described in the section [Specifying the client port](#sc_reconfig_clientport) and
+includes the client port. Notice that here instead of "server.5=" you
+can just say "5=". In the example above, if server 5 is already in the
+system but has different ports or is not an observer, it is updated and,
+once the configuration commits, becomes an observer and starts
+using these new ports. This is an easy way to turn participants into
+observers and vice versa, or to change any of their ports, without
+rebooting the server.
+
+ZooKeeper supports two types of Quorum Systems – the simple
+Majority system (where the leader commits operations after receiving
+ACKs from a majority of voters) and a more complex Hierarchical
+system, where votes of different servers have different weights and
+servers are divided into voting groups. Currently, incremental
+reconfiguration is allowed only if the last proposed configuration
+known to the leader uses a Majority Quorum System
+(BadArgumentsException is thrown otherwise).
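To make the server-statement layout concrete, here is a hypothetical parsing sketch. `ServerSpec` is written for illustration and is not part of the ZooKeeper API; it assumes a bare client port after the `;`, although the full syntax also allows a client address before the port:

```java
public final class ServerSpec {
    public final int id;
    public final String host;
    public final int quorumPort;
    public final int electionPort;
    public final String role;       // "participant" (default) or "observer"
    public final int clientPort;

    private ServerSpec(int id, String host, int quorumPort,
                       int electionPort, String role, int clientPort) {
        this.id = id;
        this.host = host;
        this.quorumPort = quorumPort;
        this.electionPort = electionPort;
        this.role = role;
        this.clientPort = clientPort;
    }

    // Parses "server.5=host:2111:2112;2113" or the short form
    // "5=host:2114:2115:observer;2116".
    public static ServerSpec parse(String spec) {
        String[] kv = spec.split("=", 2);
        String key = kv[0].startsWith("server.") ? kv[0].substring("server.".length()) : kv[0];
        String[] addrAndClient = kv[1].split(";", 2);   // client port follows ';'
        String[] parts = addrAndClient[0].split(":");
        String role = parts.length > 3 ? parts[3] : "participant";
        return new ServerSpec(Integer.parseInt(key), parts[0],
                Integer.parseInt(parts[1]), Integer.parseInt(parts[2]),
                role, Integer.parseInt(addrAndClient[1]));
    }
}
```

Parsing "server.5=localhost:2111:2112;2113" yields id 5, quorum port 2111, election port 2112 and client port 2113, with the role defaulting to participant.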
+
+Incremental mode - examples using the Java API:
+
+    List<String> leavingServers = new ArrayList<String>();
+    leavingServers.add("1");
+    leavingServers.add("2");
+    byte[] config = zk.reconfig(null, leavingServers, null, -1, new Stat());
+
+    List<String> leavingServers = new ArrayList<String>();
+    List<String> joiningServers = new ArrayList<String>();
+    leavingServers.add("1");
+    joiningServers.add("server.4=localhost:1234:1235;1236");
+    byte[] config = zk.reconfig(joiningServers, leavingServers, null, -1, new Stat());
+
+    String configStr = new String(config);
+    System.out.println(configStr);
+
+There is also an asynchronous API, and an API accepting comma
+separated Strings instead of List. See
+src/java/main/org/apache/zookeeper/ZooKeeper.java.
+
+<a name="sc_reconfig_nonincremental"></a>
+
+#### Non-incremental mode
+
+The second mode of reconfiguration is non-incremental, whereby a
+client gives a complete specification of the new dynamic system
+configuration. The new configuration can either be given in place or
+read from a file:
+
+    > reconfig -file newconfig.cfg
+
+//newconfig.cfg is a dynamic config file, see [Dynamic configuration file](#sc_reconfig_file)
+
+    > reconfig -members server.1=125.23.63.23:2780:2783:participant;2791,server.2=125.23.63.24:2781:2784:participant;2792,server.3=125.23.63.25:2782:2785:participant;2793
+
+The new configuration may use a different Quorum System. For
+example, you may specify a Hierarchical Quorum System even if the
+current ensemble uses a Majority Quorum System.
+
+Bulk mode - example using the Java API:
+
+    List<String> newMembers = new ArrayList<String>();
+    newMembers.add("server.1=1111:1234:1235;1236");
+    newMembers.add("server.2=1112:1237:1238;1239");
+    newMembers.add("server.3=1114:1240:1241:observer;1242");
+
+    byte[] config = zk.reconfig(null, null, newMembers, -1, new Stat());
+
+    String configStr = new String(config);
+    System.out.println(configStr);
+
+There is also an asynchronous API, and an API accepting a comma
+separated String containing the new members instead of a
+List. 
See
+src/java/main/org/apache/zookeeper/ZooKeeper.java.
+
+<a name="sc_reconfig_conditional"></a>
+
+#### Conditional reconfig
+
+Sometimes (especially in non-incremental mode) a new proposed
+configuration depends on what the client "believes" to be the current
+configuration, and should be applied only to that configuration.
+Specifically, the `reconfig` succeeds only if the
+last configuration at the leader has the specified version.
+
+    > reconfig -file <filename> -v <version>
+
+In the previously listed Java examples, instead of -1 one could
+specify a configuration version to condition the
+reconfiguration.
+
+<a name="sc_reconfig_errors"></a>
+
+#### Error conditions
+
+In addition to normal ZooKeeper error conditions, a
+reconfiguration may fail for the following reasons:
+
+1. another reconfig is currently in progress
+   (ReconfigInProgress)
+1. the proposed change would leave the cluster with fewer than 2
+   participants when standalone mode is enabled; if standalone
+   mode is disabled, it is legal to remain with 1 or more
+   participants (BadArgumentsException)
+1. no quorum of the new configuration was connected and
+   up-to-date with the leader when the reconfiguration processing
+   began (NewConfigNoQuorum)
+1. `-v x` was specified, but the version
+   `y` of the latest configuration is not
+   `x` (BadVersionException)
+1. an incremental reconfiguration was requested but the last
+   configuration at the leader uses a Quorum System which is
+   different from the Majority system (BadArgumentsException)
+1. syntax error (BadArgumentsException)
+1. I/O exception when reading the configuration from a file
+   (BadArgumentsException)
+
+Most of these are illustrated by test-cases in
+*ReconfigFailureCases.java*.
+
+<a name="sc_reconfig_additional"></a>
+
+#### Additional comments
+
+**Liveness:** To better understand
+the difference between incremental and non-incremental
+reconfiguration, suppose that client C1 adds server D to the system
+while a different client C2 adds server E. 
With the non-incremental
+mode, each client would first invoke `config` to find
+out the current configuration, and then locally create a new list of
+servers by adding its own suggested server. The new configuration can
+then be submitted using the non-incremental
+`reconfig` command. After both reconfigurations
+complete, only one of E or D will be added (not both), depending on
+which client's request arrives second at the leader, overwriting the
+previous configuration. The other client can repeat the process until
+its change takes effect. This method guarantees system-wide progress
+(i.e., for one of the clients), but does not ensure that every client
+succeeds. To have more control, C2 may request to execute the
+reconfiguration only if the version of the current configuration
+hasn't changed, as explained in the section [Conditional reconfig](#sc_reconfig_conditional). In this way it may avoid blindly
+overwriting the configuration of C1 if C1's configuration reached the
+leader first.
+
+With incremental reconfiguration, both changes will take effect, as
+they are simply applied by the leader one after the other to the
+current configuration, whatever that is (assuming that the second
+reconfig request reaches the leader after it sends a commit message
+for the first reconfig request -- currently the leader will refuse to
+propose a reconfiguration if another one is already pending). Since
+both clients are guaranteed to make progress, this method guarantees
+stronger liveness. In practice, multiple concurrent reconfigurations
+are probably rare. Non-incremental reconfiguration is currently the
+only way to dynamically change the Quorum System. Incremental
+reconfiguration is currently only allowed with the Majority Quorum
+System.
+
+**Changing an observer into a
+follower:** Clearly, changing a server that participates in
+voting into an observer may fail if error (2) occurs, i.e., if fewer
+than the minimal allowed number of participants would remain. 
However,
+converting an observer into a participant may sometimes fail for a
+more subtle reason: Suppose, for example, that the current
+configuration is (A, B, C, D), where A is the leader, B and C are
+followers and D is an observer. In addition, suppose that B has
+crashed. If a reconfiguration is submitted where D is said to become a
+follower, it will fail with error (3) since in this configuration, a
+majority of voters in the new configuration (any 3 voters) must be
+connected and up-to-date with the leader. An observer cannot
+acknowledge the history prefix sent during reconfiguration, and
+therefore it does not count towards these 3 required servers and the
+reconfiguration will be aborted. If this happens, a client can
+achieve the same task with two reconfig commands: first invoke a
+reconfig to remove D from the configuration, and then invoke a second
+command to add it back as a participant (follower). During the
+intermediate state, D is a non-voting follower and can ACK the state
+transfer performed during the second reconfig command.
+
+<a name="ch_reconfig_rebalancing"></a>
+
+## Rebalancing Client Connections
+
+When a ZooKeeper cluster is started, if each client is given the same
+connection string (list of servers), the client will randomly choose a
+server in the list to connect to, which makes the expected number of
+client connections per server the same for each of the servers. We
+implemented a method that preserves this property when the set of servers
+changes through reconfiguration. See Sections 4 and 5.1 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters).
+
+In order for the method to work, all clients must subscribe to
+configuration changes (by setting a watch on /zookeeper/config either
+directly or through the `getConfig` API command). 
When +the watch is triggered, the client should read the new configuration by +invoking `sync` and `getConfig` and if +the configuration is indeed new invoke the +`updateServerList` API command. To avoid mass client +migration at the same time, it is better to have each client sleep a +random short period of time before invoking +`updateServerList`. + +A few examples can be found in: +*StaticHostProviderTest.java* and +*TestReconfig.cc* + +Example (this is not a recipe, but a simplified example just to +explain the general idea): + + public void process(WatchedEvent event) { + synchronized (this) { + if (event.getType() == EventType.None) { + connected = (event.getState() == KeeperState.SyncConnected); + notifyAll(); + } else if (event.getPath()!=null && event.getPath().equals(ZooDefs.CONFIG_NODE)) { + // in prod code never block the event thread! + zk.sync(ZooDefs.CONFIG_NODE, this, null); + zk.getConfig(this, this, null); + } + } + } + + public void processResult(int rc, String path, Object ctx, byte[] data, Stat stat) { + if (path!=null && path.equals(ZooDefs.CONFIG_NODE)) { + String config[] = ConfigUtils.getClientConfigStr(new String(data)).split(" "); // similar to config -c + long version = Long.parseLong(config[0], 16); + if (this.configVersion == null){ + this.configVersion = version; + } else if (version > this.configVersion) { + hostList = config[1]; + try { + // the following command is not blocking but may cause the client to close the socket and + // migrate to a different server. 
In practice it's better to wait a short period of time, chosen + // randomly, so that different clients migrate at different times + zk.updateServerList(hostList); + } catch (IOException e) { + System.err.println("Error updating server list"); + e.printStackTrace(); + } + this.configVersion = version; + } + } + } diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperStarted.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperStarted.md new file mode 100644 index 0000000000000000000000000000000000000000..a33e83c33b4dfe8d7f48d45c58f0ac1231915d94 --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperStarted.md @@ -0,0 +1,373 @@ + + +# ZooKeeper Getting Started Guide + +* [Getting Started: Coordinating Distributed Applications with ZooKeeper](#getting-started-coordinating-distributed-applications-with-zooKeeper) + * [Pre-requisites](#sc_Prerequisites) + * [Download](#sc_Download) + * [Standalone Operation](#sc_InstallingSingleMode) + * [Managing ZooKeeper Storage](#sc_FileManagement) + * [Connecting to ZooKeeper](#sc_ConnectingToZooKeeper) + * [Programming to ZooKeeper](#sc_ProgrammingToZooKeeper) + * [Running Replicated ZooKeeper](#sc_RunningReplicatedZooKeeper) + * [Other Optimizations](#other-optimizations) + + + +## Getting Started: Coordinating Distributed Applications with ZooKeeper + +This document contains information to get you started quickly with +ZooKeeper. It is aimed primarily at developers hoping to try it out, and +contains simple installation instructions for a single ZooKeeper server, a +few commands to verify that it is running, and a simple programming +example. Finally, as a convenience, there are a few sections regarding +more complicated installations, for example running replicated +deployments, and optimizing the transaction log. 
However, for the complete
+instructions for commercial deployments, please refer to the [ZooKeeper
+Administrator's Guide](zookeeperAdmin.html).
+
+<a name="sc_Prerequisites"></a>
+
+### Pre-requisites
+
+See [System Requirements](zookeeperAdmin.html#sc_systemReq) in the Admin guide.
+
+<a name="sc_Download"></a>
+
+### Download
+
+To get a ZooKeeper distribution, download a recent
+[stable](http://zookeeper.apache.org/releases.html) release from one of the Apache Download
+Mirrors.
+
+<a name="sc_InstallingSingleMode"></a>
+
+### Standalone Operation
+
+Setting up a ZooKeeper server in standalone mode is
+straightforward. The server is contained in a single JAR file,
+so installation consists of creating a configuration.
+
+Once you've downloaded a stable ZooKeeper release, unpack
+it and cd to the root.
+
+To start ZooKeeper you need a configuration file. Here is a sample;
+create it in **conf/zoo.cfg**:
+
+
+    tickTime=2000
+    dataDir=/var/lib/zookeeper
+    clientPort=2181
+
+
+This file can be called anything, but for the sake of this
+discussion call
+it **conf/zoo.cfg**. Change the
+value of **dataDir** to specify an
+existing (empty to start with) directory. Here are the meanings
+for each of the fields:
+
+* ***tickTime*** :
+  the basic time unit in milliseconds used by ZooKeeper. It is
+  used to do heartbeats, and the minimum session timeout will be
+  twice the tickTime.
+
+* ***dataDir*** :
+  the location to store the in-memory database snapshots and,
+  unless specified otherwise, the transaction log of updates to the
+  database.
+
+* ***clientPort*** :
+  the port to listen on for client connections
+
+Now that you have created the configuration file, you can start
+ZooKeeper:
+
+
+    bin/zkServer.sh start
+
+
+ZooKeeper logs messages using _logback_ -- more detail
+available in the
+[Logging](zookeeperProgrammers.html#Logging)
+section of the Programmer's Guide. You will see log messages
+coming to the console (default) and/or a log file depending on
+the logback configuration.
+
+The steps outlined here run ZooKeeper in standalone mode. 
There is
+no replication, so if the ZooKeeper process fails, the service will go down.
+This is fine for most development situations, but to run ZooKeeper in
+replicated mode, please see [Running Replicated
+ZooKeeper](#sc_RunningReplicatedZooKeeper).
+
+<a name="sc_FileManagement"></a>
+
+### Managing ZooKeeper Storage
+
+For long running production systems ZooKeeper storage must
+be managed externally (dataDir and logs). See the section on
+[maintenance](zookeeperAdmin.html#sc_maintenance) for
+more details.
+
+<a name="sc_ConnectingToZooKeeper"></a>
+
+### Connecting to ZooKeeper
+
+
+    $ bin/zkCli.sh -server 127.0.0.1:2181
+
+
+This lets you perform simple, file-like operations.
+
+Once you have connected, you should see something like:
+
+
+    Connecting to localhost:2181
+    ...
+    Welcome to ZooKeeper!
+    JLine support is enabled
+    [zkshell: 0]
+
+From the shell, type `help` to get a listing of commands that can be executed from the client, as in:
+
+
+    [zkshell: 0] help
+    ZooKeeper -server host:port cmd args
+        addauth scheme auth
+        close
+        config [-c] [-w] [-s]
+        connect host:port
+        create [-s] [-e] [-c] [-t ttl] path [data] [acl]
+        delete [-v version] path
+        deleteall path
+        delquota [-n|-b] path
+        get [-s] [-w] path
+        getAcl [-s] path
+        getAllChildrenNumber path
+        getEphemerals path
+        history
+        listquota path
+        ls [-s] [-w] [-R] path
+        printwatches on|off
+        quit
+        reconfig [-s] [-v version] [[-file path] | [-members serverID=host:port1:port2;port3[,...]*]] | [-add serverId=host:port1:port2;port3[,...]]* [-remove serverId[,...]*]
+        redo cmdno
+        removewatches path [-c|-d|-a] [-l]
+        set [-s] [-v version] path data
+        setAcl [-s] [-v version] [-R] path acl
+        setquota -n|-b val path
+        stat [-w] path
+        sync path
+
+
+From here, you can try a few simple commands to get a feel for the command line interface. First, start by issuing the list command, as
+in `ls`, yielding:
+
+
+    [zkshell: 8] ls /
+    [zookeeper]
+
+
+Next, create a new znode by running `create /zk_test my_data`. 
This creates a new znode and associates the string "my_data" with the node.
+You should see:
+
+
+    [zkshell: 9] create /zk_test my_data
+    Created /zk_test
+
+
+Issue another `ls /` command to see what the directory looks like:
+
+
+    [zkshell: 11] ls /
+    [zookeeper, zk_test]
+
+
+Notice that the zk_test directory has now been created.
+
+Next, verify that the data was associated with the znode by running the `get` command, as in:
+
+
+    [zkshell: 12] get /zk_test
+    my_data
+    cZxid = 5
+    ctime = Fri Jun 05 13:57:06 PDT 2009
+    mZxid = 5
+    mtime = Fri Jun 05 13:57:06 PDT 2009
+    pZxid = 5
+    cversion = 0
+    dataVersion = 0
+    aclVersion = 0
+    ephemeralOwner = 0
+    dataLength = 7
+    numChildren = 0
+
+
+We can change the data associated with zk_test by issuing the `set` command, as in:
+
+
+    [zkshell: 14] set /zk_test junk
+    cZxid = 5
+    ctime = Fri Jun 05 13:57:06 PDT 2009
+    mZxid = 6
+    mtime = Fri Jun 05 14:01:52 PDT 2009
+    pZxid = 5
+    cversion = 0
+    dataVersion = 1
+    aclVersion = 0
+    ephemeralOwner = 0
+    dataLength = 4
+    numChildren = 0
+    [zkshell: 15] get /zk_test
+    junk
+    cZxid = 5
+    ctime = Fri Jun 05 13:57:06 PDT 2009
+    mZxid = 6
+    mtime = Fri Jun 05 14:01:52 PDT 2009
+    pZxid = 5
+    cversion = 0
+    dataVersion = 1
+    aclVersion = 0
+    ephemeralOwner = 0
+    dataLength = 4
+    numChildren = 0
+
+
+(Notice that we did a `get` after setting the data and it did, indeed, change.)
+
+Finally, let's `delete` the node by issuing:
+
+
+    [zkshell: 16] delete /zk_test
+    [zkshell: 17] ls /
+    [zookeeper]
+    [zkshell: 18]
+
+
+That's it for now. To explore more, see the [ZooKeeper CLI](zookeeperCLI.html).
+
+<a name="sc_ProgrammingToZooKeeper"></a>
+
+### Programming to ZooKeeper
+
+ZooKeeper has Java bindings and C bindings. They are
+functionally equivalent. The C bindings exist in two variants: single
+threaded and multi-threaded. These differ only in how the messaging loop
+is done. 
See the [Programming
+Examples in the ZooKeeper Programmer's Guide](zookeeperProgrammers.html#ch_programStructureWithExample) for
+sample code using the different APIs.
+
+<a name="sc_RunningReplicatedZooKeeper"></a>
+
+### Running Replicated ZooKeeper
+
+Running ZooKeeper in standalone mode is convenient for evaluation,
+some development, and testing. But in production, you should run
+ZooKeeper in replicated mode. A replicated group of servers in the same
+application is called a _quorum_, and in replicated
+mode, all servers in the quorum have copies of the same configuration
+file.
+
+######Note
+>For replicated mode, a minimum of three servers is required,
+and it is strongly recommended that you have an odd number of
+servers. If you only have two servers, then you are in a
+situation where if one of them fails, there are not enough
+machines to form a majority quorum. Two servers are inherently
+**less** stable than a single server, because there are two single
+points of failure.
+
+The required
+**conf/zoo.cfg**
+file for replicated mode is similar to the one used in standalone
+mode, but with a few differences. Here is an example:
+
+    tickTime=2000
+    dataDir=/var/lib/zookeeper
+    clientPort=2181
+    initLimit=5
+    syncLimit=2
+    server.1=zoo1:2888:3888
+    server.2=zoo2:2888:3888
+    server.3=zoo3:2888:3888
+
+The new entry, **initLimit**, is a
+timeout ZooKeeper uses to limit the length of time the ZooKeeper
+servers in quorum have to connect to a leader. The entry **syncLimit** limits how far out of date a server can
+be from a leader.
+
+With both of these timeouts, you specify the unit of time using
+**tickTime**. In this example, the timeout
+for initLimit is 5 ticks at 2000 milliseconds a tick, or 10
+seconds.
+
+The entries of the form _server.X_ list the
+servers that make up the ZooKeeper service. When the server starts up,
+it knows which server it is by looking for the file
+_myid_ in the data directory. That file
+contains the server number, in ASCII.
+
+Finally, note the two port numbers after each server
+name: "2888" and "3888". Peers use the former port to connect
+to other peers. Such a connection is necessary so that peers
+can communicate, for example, to agree upon the order of
+updates. More specifically, a ZooKeeper server uses this port
+to connect followers to the leader. When a new leader arises, a
+follower opens a TCP connection to the leader using this
+port. Because the default leader election also uses TCP, we
+currently require another port for leader election. This is the
+second port in the server entry.
+
+######Note
+>If you want to test multiple servers on a single
+machine, specify the servername
+as _localhost_ with unique quorum &
+leader election ports (i.e. 2888:3888, 2889:3889, 2890:3890 in
+the example above) for each server.X in that server's config
+file. Of course separate _dataDir_s and
+distinct _clientPort_s are also necessary
+(in the above replicated example, running on a
+single _localhost_, you would still have
+three config files).
+
+>Please be aware that setting up multiple servers on a single
+machine will not create any redundancy. If something were to
+happen which caused the machine to die, all of the ZooKeeper
+servers would be offline. Full redundancy requires that each
+server have its own machine. It must be a completely separate
+physical server. Multiple virtual machines on the same physical
+host are still vulnerable to the complete failure of that host.
+
+>If you have multiple network interfaces in your ZooKeeper machines,
+you can also instruct ZooKeeper to bind on all of your interfaces and
+automatically switch to a healthy interface in case of a network failure.
+For details, see the [Configuration Parameters](zookeeperAdmin.html#id_multi_address).
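Following the note above, the per-server entries for a single-machine test ensemble can be generated mechanically. This is a hypothetical helper sketch; the port numbering mirrors the note's 2888:3888, 2889:3889, 2890:3890 example:

```java
import java.util.ArrayList;
import java.util.List;

public final class LocalEnsemble {
    // Builds server.X entries for an n-server test ensemble on localhost,
    // giving each server its own quorum and leader election ports
    // (2888:3888 for server.1, 2889:3889 for server.2, and so on).
    public static List<String> serverEntries(int n) {
        List<String> entries = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            entries.add("server." + i + "=localhost:" + (2887 + i) + ":" + (3887 + i));
        }
        return entries;
    }
}
```

Remember that each of the n config files would also need its own dataDir (with a matching myid file) and a distinct clientPort.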
+ + + +### Other Optimizations + +There are a couple of other configuration parameters that can +greatly increase performance: + +* To get low latencies on updates it is important to + have a dedicated transaction log directory. By default + transaction logs are put in the same directory as the data + snapshots and _myid_ file. The dataLogDir + parameters indicates a different directory to use for the + transaction logs. + diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTools.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTools.md new file mode 100644 index 0000000000000000000000000000000000000000..d4abe3854677bcf515dcc4a9483a9fedc503d776 --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTools.md @@ -0,0 +1,698 @@ + + +# A series of tools for ZooKeeper + +* [Scripts](#Scripts) + * [zkServer.sh](#zkServer) + * [zkCli.sh](#zkCli) + * [zkEnv.sh](#zkEnv) + * [zkCleanup.sh](#zkCleanup) + * [zkTxnLogToolkit.sh](#zkTxnLogToolkit) + * [zkSnapShotToolkit.sh](#zkSnapShotToolkit) + * [zkSnapshotRecursiveSummaryToolkit.sh](#zkSnapshotRecursiveSummaryToolkit) + * [zkSnapshotComparer.sh](#zkSnapshotComparer) + +* [Benchmark](#Benchmark) + * [YCSB](#YCSB) + * [zk-smoketest](#zk-smoketest) + +* [Testing](#Testing) + * [Fault Injection Framework](#fault-injection) + * [Byteman](#Byteman) + * [Jepsen Test](#jepsen-test) + + + +## Scripts + + + +### zkServer.sh +A command for the operations for the ZooKeeper server. 
+ +```bash +Usage: ./zkServer.sh {start|start-foreground|stop|version|restart|status|upgrade|print-cmd} +# start the server +./zkServer.sh start + +# start the server in the foreground for debugging +./zkServer.sh start-foreground + +# stop the server +./zkServer.sh stop + +# restart the server +./zkServer.sh restart + +# show the status,mode,role of the server +./zkServer.sh status +JMX enabled by default +Using config: /data/software/zookeeper/conf/zoo.cfg +Mode: standalone + +# Deprecated +./zkServer.sh upgrade + +# print the parameters of the start-up +./zkServer.sh print-cmd + +# show the version of the ZooKeeper server +./zkServer.sh version +Apache ZooKeeper, version 3.6.0-SNAPSHOT 06/11/2019 05:39 GMT + +``` + +The `status` command establishes a client connection to the server to execute diagnostic commands. +When the ZooKeeper cluster is started in client SSL only mode (by omitting the clientPort +from the zoo.cfg), then additional SSL related configuration has to be provided before using +the `./zkServer.sh status` command to find out if the ZooKeeper server is running. An example: + + CLIENT_JVMFLAGS="-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.ssl.trustStore.location=/tmp/clienttrust.jks -Dzookeeper.ssl.trustStore.password=password -Dzookeeper.ssl.keyStore.location=/tmp/client.jks -Dzookeeper.ssl.keyStore.password=password -Dzookeeper.client.secure=true" ./zkServer.sh status + + + + +### zkCli.sh +Look at the [ZooKeeperCLI](zookeeperCLI.html) + + + +### zkEnv.sh +The environment setting for the ZooKeeper server + +```bash +# the setting of log property +ZOO_LOG_DIR: the directory to store the logs +``` + + + +### zkCleanup.sh +Clean up the old snapshots and transaction logs. 
+ +```bash +Usage: + * args dataLogDir [snapDir] -n count + * dataLogDir -- path to the txn log directory + * snapDir -- path to the snapshot directory + * count -- the number of old snaps/logs you want to keep, value should be greater than or equal to 3 +# Keep the latest 5 logs and snapshots +./zkCleanup.sh -n 5 +``` + + + +### zkTxnLogToolkit.sh +TxnLogToolkit is a command line tool shipped with ZooKeeper which +is capable of recovering transaction log entries with broken CRC. + +Running it without any command line parameters or with the `-h,--help` argument, it outputs the following help page: + + $ bin/zkTxnLogToolkit.sh + usage: TxnLogToolkit [-dhrv] txn_log_file_name + -d,--dump Dump mode. Dump all entries of the log file. (this is the default) + -h,--help Print help message + -r,--recover Recovery mode. Re-calculate CRC for broken entries. + -v,--verbose Be verbose in recovery mode: print all entries, not just fixed ones. + -y,--yes Non-interactive mode: repair all CRC errors without asking + +The default behaviour is safe: it dumps the entries of the given +transaction log file to the screen: (same as using `-d,--dump` parameter) + + $ bin/zkTxnLogToolkit.sh log.100000001 + ZooKeeper Transactional Log File with dbid 0 txnlog format version 2 + 4/5/18 2:15:58 PM CEST session 0x16295bafcc40000 cxid 0x0 zxid 0x100000001 createSession 30000 + CRC ERROR - 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null + 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null + 4/5/18 2:16:12 PM CEST session 0x26295bafcc90000 cxid 0x0 zxid 0x100000003 createSession 30000 + 4/5/18 2:17:34 PM CEST session 0x26295bafcc90000 cxid 0x0 zxid 0x200000001 closeSession null + 4/5/18 2:17:34 PM CEST session 0x16295bd23720000 cxid 0x0 zxid 0x200000002 createSession 30000 + 4/5/18 2:18:02 PM CEST session 0x16295bd23720000 cxid 0x2 zxid 0x200000003 create '/andor,#626262,v{s{31,s{'world,'anyone}}},F,1 + EOF 
reached after 6 txns.
+
+There's a CRC error in the 2nd entry of the above transaction log file. In **dump**
+mode, the toolkit only prints this information to the screen without touching the original file. In
+**recovery** mode (`-r,--recover` flag) the original file still remains
+untouched and all transactions are copied over to a new txn log file with a ".fixed" suffix. The tool
+recalculates the CRC values and writes the recalculated value whenever it doesn't match the original entry's.
+By default, the tool works interactively: it asks for confirmation whenever a CRC error is encountered.
+
+    $ bin/zkTxnLogToolkit.sh -r log.100000001
+    ZooKeeper Transactional Log File with dbid 0 txnlog format version 2
+    CRC ERROR - 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
+    Would you like to fix it (Yes/No/Abort) ?
+
+Answering **Yes** means the newly calculated CRC value will be written
+to the new file. **No** means that the original CRC value will be copied over.
+**Abort** aborts the entire operation and exits.
+(In this case the ".fixed" file is not deleted and is left in a half-complete state: it contains only entries which
+have already been processed, or only the header if the operation was aborted at the first entry.)
+
+    $ bin/zkTxnLogToolkit.sh -r log.100000001
+    ZooKeeper Transactional Log File with dbid 0 txnlog format version 2
+    CRC ERROR - 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
+    Would you like to fix it (Yes/No/Abort) ? y
+    EOF reached after 6 txns.
+    Recovery file log.100000001.fixed has been written with 1 fixed CRC error(s)
+
+The default behaviour of recovery is to be silent: only entries with CRC errors get printed to the screen.
+One can turn on verbose mode with the `-v,--verbose` parameter to see all records.
+Interactive mode can be turned off with the `-y,--yes` parameter. In this case all CRC errors will be fixed
+in the new transaction file.
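The core of recovery mode -- recompute the checksum over an entry's bytes and compare it with the stored value -- can be sketched with `java.util.zip`. This is an illustrative sketch, not the toolkit's actual code; ZooKeeper's txn log uses an Adler32 checksum, but treat that detail as an assumption here:

```java
import java.util.zip.Adler32;

public final class TxnChecksum {
    // Recomputes the checksum over the serialized entry bytes.
    public static long checksum(byte[] entry) {
        Adler32 sum = new Adler32();
        sum.update(entry, 0, entry.length);
        return sum.getValue();
    }

    // Mirrors what recovery mode does per entry: keep the stored value
    // when it matches, otherwise propose the recomputed value for the
    // ".fixed" output file.
    public static long repair(byte[] entry, long storedCrc) {
        long actual = checksum(entry);
        return actual == storedCrc ? storedCrc : actual;
    }
}
```

Note that a checksum mismatch cannot tell you whether the stored value or the payload was corrupted; the toolkit's fix simply makes them consistent again, which is why it writes to a separate ".fixed" file instead of touching the original.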
+
+<a name="zkSnapShotToolkit"></a>
+
+### zkSnapShotToolkit.sh
+Dump a snapshot file to stdout, showing the detailed information of each zk-node.
+
+```bash
+# help
+./zkSnapShotToolkit.sh
+/usr/bin/java
+USAGE: SnapshotFormatter [-d|-json] snapshot_file
+       -d dump the data for each znode
+       -json dump znode info in json format
+
+# show each zk-node's info without data content
+./zkSnapShotToolkit.sh /data/zkdata/version-2/snapshot.fa01000186d
+/zk-latencies_4/session_946
+  cZxid = 0x00000f0003110b
+  ctime = Wed Sep 19 21:58:22 CST 2018
+  mZxid = 0x00000f0003110b
+  mtime = Wed Sep 19 21:58:22 CST 2018
+  pZxid = 0x00000f0003110b
+  cversion = 0
+  dataVersion = 0
+  aclVersion = 0
+  ephemeralOwner = 0x00000000000000
+  dataLength = 100
+
+# [-d] show each zk-node's info with data content
+./zkSnapShotToolkit.sh -d /data/zkdata/version-2/snapshot.fa01000186d
+/zk-latencies2/session_26229
+  cZxid = 0x00000900007ba0
+  ctime = Wed Aug 15 20:13:52 CST 2018
+  mZxid = 0x00000900007ba0
+  mtime = Wed Aug 15 20:13:52 CST 2018
+  pZxid = 0x00000900007ba0
+  cversion = 0
+  dataVersion = 0
+  aclVersion = 0
+  ephemeralOwner = 0x00000000000000
+  data = eHh4eHh4eHh4eHh4eA==
+
+# [-json] show each zk-node's info in json format
+./zkSnapShotToolkit.sh -json /data/zkdata/version-2/snapshot.fa01000186d
+[[1,0,{"progname":"SnapshotFormatter.java","progver":"0.01","timestamp":1559788148637},[{"name":"\/","asize":0,"dsize":0,"dev":0,"ino":1001},[{"name":"zookeeper","asize":0,"dsize":0,"dev":0,"ino":1002},{"name":"config","asize":0,"dsize":0,"dev":0,"ino":1003},[{"name":"quota","asize":0,"dsize":0,"dev":0,"ino":1004},[{"name":"test","asize":0,"dsize":0,"dev":0,"ino":1005},{"name":"zookeeper_limits","asize":52,"dsize":52,"dev":0,"ino":1006},{"name":"zookeeper_stats","asize":15,"dsize":15,"dev":0,"ino":1007}]]],{"name":"test","asize":0,"dsize":0,"dev":0,"ino":1008}]]
+```
+
+<a name="zkSnapshotRecursiveSummaryToolkit"></a>
+
+### zkSnapshotRecursiveSummaryToolkit.sh
+Recursively collect and display child count and data size for a selected node.
+
+    $./zkSnapshotRecursiveSummaryToolkit.sh
+    USAGE:
+
+    SnapshotRecursiveSummary <snapshot_file> <starting_node> <max_depth>
+
+    snapshot_file: path to the ZooKeeper snapshot
+    starting_node: the path in the ZooKeeper tree where the traversal should begin
+    max_depth: defines the depth where the tool still writes to the output. 0 means there is no depth limit and every non-leaf node's stats will be displayed, 1 means it will only contain the starting node's and its children's stats, 2 adds another level, and so on. This ONLY affects the level of detail displayed, NOT the calculation.
+
+```bash
+# recursively collect and display child count and data for the root node and 2 levels below it
+./zkSnapshotRecursiveSummaryToolkit.sh /data/zkdata/version-2/snapshot.fa01000186d / 2
+
+/
+  children: 1250511
+  data: 1952186580
+-- /zookeeper
+-- children: 1
+-- data: 0
+-- /solr
+-- children: 1773
+-- data: 8419162
+---- /solr/configs
+---- children: 1640
+---- data: 8407643
+---- /solr/overseer
+---- children: 6
+---- data: 0
+---- /solr/live_nodes
+---- children: 3
+---- data: 0
+```
+
+<a name="zkSnapshotComparer"></a>
+
+### zkSnapshotComparer.sh
+SnapshotComparer is a tool that loads and compares two snapshots with configurable thresholds and various filters, and outputs information about the delta.
+
+The delta includes the specific znode paths added, updated, or deleted when comparing one snapshot to another.
+
+It's useful in use cases that involve snapshot analysis, such as offline data consistency checking and data trending analysis (e.g., what's growing under which zNode path during which period).
+
+This tool only outputs information about permanent nodes, ignoring both sessions and ephemeral nodes.
+
+It provides two tuning parameters to help filter out noise:
+1. `--nodes` Threshold number of children added/removed;
+2. `--bytes` Threshold number of bytes added/removed.
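The effect of the two thresholds can be sketched as a print-filter predicate. This is a hypothetical helper, not the tool's actual code; the sample output's wording ("larger than 2 bytes or node count difference larger than 1") suggests the thresholds combine with OR, which is assumed here:

```java
public final class DeltaFilter {
    // Decides whether a node's delta is large enough to report, given
    // the -b (bytes) and -n (nodes) thresholds. Exceeding either
    // threshold is assumed to be sufficient.
    public static boolean shouldPrint(long byteDelta, long nodeDelta,
                                      long bytesThreshold, long nodesThreshold) {
        return Math.abs(byteDelta) > bytesThreshold
                || Math.abs(nodeDelta) > nodesThreshold;
    }
}
```

With `-b 2 -n 1`, a node that grew by 2903 bytes and 12 descendants is reported, while a node that changed by only 2 bytes and 1 descendant is filtered out.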
+
+#### Locate Snapshots
+Snapshots can be found in the [ZooKeeper Data Directory](zookeeperAdmin.html#The+Data+Directory), which is configured in [conf/zoo.cfg](zookeeperStarted.html#sc_InstallingSingleMode) when setting up the ZooKeeper server.
+
+#### Supported Snapshot Formats
+This tool supports the uncompressed snapshot format as well as the compressed snapshot file formats `snappy` and `gz`. Snapshots in different formats can be compared with this tool directly, without decompressing them first.
+
+#### Running the Tool
+When run with no command line arguments or with an unrecognized argument, the tool outputs the following help page:
+
+```
+usage: java -cp <classPath> org.apache.zookeeper.server.SnapshotComparer
+ -b,--bytes        (Required) The node data delta size threshold, in bytes, for printing the node.
+ -d,--debug        Use debug output.
+ -i,--interactive  Enter interactive mode.
+ -l,--left         (Required) The left snapshot file.
+ -n,--nodes        (Required) The descendant node delta size threshold, in nodes, for printing the node.
+ -r,--right        (Required) The right snapshot file.
+```
+Example Command:
+
+```
+./bin/zkSnapshotComparer.sh -l /zookeeper-data/backup/snapshot.d.snappy -r /zookeeper-data/backup/snapshot.44 -b 2 -n 1
+```
+
+Example Output:
+```
+...
+Deserialized snapshot in snapshot.44 in 0.002741 seconds
+Processed data tree in 0.000361 seconds
+Node count: 10
+Total size: 0
+Max depth: 4
+Count of nodes at depth 0: 1
+Count of nodes at depth 1: 2
+Count of nodes at depth 2: 4
+Count of nodes at depth 3: 3
+
+Node count: 22
+Total size: 2903
+Max depth: 5
+Count of nodes at depth 0: 1
+Count of nodes at depth 1: 2
+Count of nodes at depth 2: 4
+Count of nodes at depth 3: 7
+Count of nodes at depth 4: 8
+
+Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
+Analysis for depth 0
+Node found in both trees. Delta: 2903 bytes, 12 descendants
+Analysis for depth 1
+Node /zk_test found in both trees. 
Delta: 2903 bytes, 12 descendants
+Analysis for depth 2
+Node /zk_test/gz found in both trees. Delta: 730 bytes, 3 descendants
+Node /zk_test/snappy found in both trees. Delta: 2173 bytes, 9 descendants
+Analysis for depth 3
+Node /zk_test/gz/12345 found in both trees. Delta: 9 bytes, 1 descendants
+Node /zk_test/gz/a found only in right tree. Descendant size: 721. Descendant count: 0
+Node /zk_test/snappy/anotherTest found in both trees. Delta: 1738 bytes, 2 descendants
+Node /zk_test/snappy/test_1 found only in right tree. Descendant size: 344. Descendant count: 3
+Node /zk_test/snappy/test_2 found only in right tree. Descendant size: 91. Descendant count: 2
+Analysis for depth 4
+Node /zk_test/gz/12345/abcdef found only in right tree. Descendant size: 9. Descendant count: 0
+Node /zk_test/snappy/anotherTest/abc found only in right tree. Descendant size: 1738. Descendant count: 0
+Node /zk_test/snappy/test_1/a found only in right tree. Descendant size: 93. Descendant count: 0
+Node /zk_test/snappy/test_1/b found only in right tree. Descendant size: 251. Descendant count: 0
+Node /zk_test/snappy/test_2/xyz found only in right tree. Descendant size: 33. Descendant count: 0
+Node /zk_test/snappy/test_2/y found only in right tree. Descendant size: 58. Descendant count: 0
+All layers compared.
+```
+
+#### Interactive Mode
+Use "-i" or "--interactive" to enter interactive mode:
+```
+./bin/zkSnapshotComparer.sh -l /zookeeper-data/backup/snapshot.d.snappy -r /zookeeper-data/backup/snapshot.44 -b 2 -n 1 -i
+```
+
+There are three options to proceed:
+```
+- Press enter to move to print current depth layer;
+- Type a number to jump to and print all nodes at a given depth;
+- Enter an ABSOLUTE path to print the immediate subtree of a node. Path must start with '/'.
+```
+
+Note: As indicated by the interactive messages, the tool only shows analysis for results that pass the tuning-parameter filters (the bytes threshold and the nodes threshold). 
+ +Press enter to print current depth layer: + +``` +Current depth is 0 +Press enter to move to print current depth layer; +... +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for depth 0 +Node found in both trees. Delta: 2903 bytes, 12 descendants +``` + +Type a number to jump to and print all nodes at a given depth: + +(Jump forward) + +``` +Current depth is 1 +... +Type a number to jump to and print all nodes at a given depth; +... +3 +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for depth 3 +Node /zk_test/gz/12345 found in both trees. Delta: 9 bytes, 1 descendants +Node /zk_test/gz/a found only in right tree. Descendant size: 721. Descendant count: 0 +Filtered node /zk_test/gz/anotherOne of left size 0, right size 0 +Filtered right node /zk_test/gz/b of size 0 +Node /zk_test/snappy/anotherTest found in both trees. Delta: 1738 bytes, 2 descendants +Node /zk_test/snappy/test_1 found only in right tree. Descendant size: 344. Descendant count: 3 +Node /zk_test/snappy/test_2 found only in right tree. Descendant size: 91. Descendant count: 2 +``` + +(Jump back) + +``` +Current depth is 3 +... +Type a number to jump to and print all nodes at a given depth; +... +0 +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for depth 0 +Node found in both trees. Delta: 2903 bytes, 12 descendants +``` + +Out of range depth is handled: + +``` +Current depth is 1 +... +Type a number to jump to and print all nodes at a given depth; +... +10 +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Depth must be in range [0, 4] +``` + +Enter an ABSOLUTE path to print the immediate subtree of a node: + +``` +Current depth is 3 +... +Enter an ABSOLUTE path to print the immediate subtree of a node. 
+/zk_test +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for node /zk_test +Node /zk_test/gz found in both trees. Delta: 730 bytes, 3 descendants +Node /zk_test/snappy found in both trees. Delta: 2173 bytes, 9 descendants +``` + +Invalid path is handled: + +``` +Current depth is 3 +... +Enter an ABSOLUTE path to print the immediate subtree of a node. +/non-exist-path +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for node /non-exist-path +Path /non-exist-path is neither found in left tree nor right tree. +``` + +Invalid input is handled: +``` +Current depth is 1 +- Press enter to move to print current depth layer; +- Type a number to jump to and print all nodes at a given depth; +- Enter an ABSOLUTE path to print the immediate subtree of a node. Path must start with '/'. +12223999999999999999999999999999999999999 +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Input 12223999999999999999999999999999999999999 is not valid. Depth must be in range [0, 4]. Path must be an absolute path which starts with '/'. +``` + +Exit interactive mode automatically when all layers are compared: + +``` +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for depth 4 +Node /zk_test/gz/12345/abcdef found only in right tree. Descendant size: 9. Descendant count: 0 +Node /zk_test/snappy/anotherTest/abc found only in right tree. Descendant size: 1738. Descendant count: 0 +Filtered right node /zk_test/snappy/anotherTest/abcd of size 0 +Node /zk_test/snappy/test_1/a found only in right tree. Descendant size: 93. Descendant count: 0 +Node /zk_test/snappy/test_1/b found only in right tree. Descendant size: 251. Descendant count: 0 +Filtered right node /zk_test/snappy/test_1/c of size 0 +Node /zk_test/snappy/test_2/xyz found only in right tree. 
Descendant size: 33. Descendant count: 0
+Node /zk_test/snappy/test_2/y found only in right tree. Descendant size: 58. Descendant count: 0
+All layers compared.
+```
+
+Or use `^c` to exit interactive mode at any time.
+
+
+
+
+## Benchmark
+
+
+
+### YCSB
+
+#### Quick Start
+
+This section describes how to run YCSB on ZooKeeper.
+
+#### 1. Start ZooKeeper Server(s)
+
+#### 2. Install Java and Maven
+
+#### 3. Set Up YCSB
+
+Git clone YCSB and compile:
+
+    git clone http://github.com/brianfrankcooper/YCSB.git
+    # see the landing page for instructions on downloading YCSB (https://github.com/brianfrankcooper/YCSB#getting-started).
+    cd YCSB
+    mvn -pl site.ycsb:zookeeper-binding -am clean package -DskipTests
+
+#### 4. Provide ZooKeeper Connection Parameters
+
+Set connectString, sessionTimeout, and watchFlag in the workload you plan to run.
+
+- `zookeeper.connectString`
+- `zookeeper.sessionTimeout`
+- `zookeeper.watchFlag`
+  * A parameter for enabling ZooKeeper's watches; optional values: true or false. The default value is false.
+  * This parameter does not benchmark watch performance itself; it tests what effect enabling watches has on read/write requests.
+
+  ```bash
+  ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p zookeeper.watchFlag=true
+  ```
+
+Or, you can set configs on the command line, e.g.:
+
+    # create a /benchmark namespace for the sake of cleaning up the workspace after the test.
+    # e.g. the CLI: create /benchmark
+    ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p zookeeper.sessionTimeout=30000
+
+#### 5. 
Load data and run tests
+
+Load the data:
+
+    # -p recordcount, the count of records/paths you want to insert
+    ./bin/ycsb load zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p recordcount=10000 > outputLoad.txt
+
+Run the workload test:
+
+    # YCSB workloadb is the most suitable workload for the read-heavy usage ZooKeeper sees in the real world.
+
+    # -p fieldlength, test how the length of the value/data content affects performance
+    ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p fieldlength=1000
+
+    # -p fieldcount
+    ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p fieldcount=20
+
+    # -p hdrhistogram.percentiles, show the hdrhistogram benchmark result
+    ./bin/ycsb run zookeeper -threads 1 -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p hdrhistogram.percentiles=10,25,50,75,90,95,99,99.9 -p histogram.buckets=500
+
+    # -threads: multi-client test; increase **maxClientCnxns** in zoo.cfg to handle more connections.
+    ./bin/ycsb run zookeeper -threads 10 -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark
+
+    # show the timeseries benchmark result
+    ./bin/ycsb run zookeeper -threads 1 -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p measurementtype=timeseries -p timeseries.granularity=50
+
+    # cluster test
+    ./bin/ycsb run zookeeper -P workloads/workloadb -p zookeeper.connectString=192.168.10.43:2181,192.168.10.45:2181,192.168.10.27:2181/benchmark
+
+    # test the leader's read/write performance by setting zookeeper.connectString to the leader's (192.168.10.43:2181)
+    ./bin/ycsb run zookeeper -P workloads/workloadb -p zookeeper.connectString=192.168.10.43:2181/benchmark
+
+    # test large znodes (by default, jute.maxbuffer is 1048575 bytes / 1 MB). Note: jute.maxbuffer should also be set to the same value on all the zk servers. 
+
+    ./bin/ycsb run zookeeper -jvm-args="-Djute.maxbuffer=4194304" -s -P workloads/workloadc -p zookeeper.connectString=127.0.0.1:2181/benchmark
+
+    # Clean up the workspace after finishing the benchmark.
+    # e.g. the CLI: deleteall /benchmark
+
+
+
+
+### zk-smoketest
+
+**zk-smoketest** provides a simple smoketest client for a ZooKeeper ensemble. Useful for verifying new, updated, or
+existing installations. More details are [here](https://github.com/phunt/zk-smoketest).
+
+
+
+
+## Testing
+
+
+
+### Fault Injection Framework
+
+
+
+#### Byteman
+
+- **Byteman** is a tool which makes it easy to trace, monitor and test the behaviour of Java applications and JDK runtime code.
+It injects Java code into your application's methods or into Java runtime methods without the need for you to recompile, repackage or even redeploy your application.
+Injection can be performed at JVM startup, or after startup while the application is still running.
+- Visit the official [website](https://byteman.jboss.org/) to download the latest release
+- A brief tutorial can be found [here](https://developer.jboss.org/wiki/ABytemanTutorial)
+
+  ```bash
+  # Preparations:
+  # attach byteman to the 3 zk servers at runtime
+  # 55001,55002,55003 are the byteman binding ports; 714,740,758 are the zk server pids
+  ./bminstall.sh -b -Dorg.jboss.byteman.transform.all -Dorg.jboss.byteman.verbose -p 55001 714
+  ./bminstall.sh -b -Dorg.jboss.byteman.transform.all -Dorg.jboss.byteman.verbose -p 55002 740
+  ./bminstall.sh -b -Dorg.jboss.byteman.transform.all -Dorg.jboss.byteman.verbose -p 55003 758
+
+  # load the fault injection script
+  ./bmsubmit.sh -p 55002 -l my_zk_fault_injection.btm
+  # unload the fault injection script
+  ./bmsubmit.sh -p 55002 -u my_zk_fault_injection.btm
+  ```
+
+Look at the examples below to customize your byteman fault injection script.
+
+Example 1: This script makes the leader's zxid roll over, forcing a re-election. 
+
+```bash
+cat zk_leader_zxid_roll_over.btm
+
+RULE trace zk_leader_zxid_roll_over
+CLASS org.apache.zookeeper.server.quorum.Leader
+METHOD propose
+IF true
+DO
+   traceln("*** Leader zxid has rolled over, forcing re-election ***");
+   $1.zxid = 4294967295L
+ENDRULE
+```
+
+Example 2: This script makes the leader drop the ping packet to a specific follower.
+The leader will close the **LearnerHandler** for that follower, and the follower will enter the LOOKING state,
+then re-enter the quorum in the FOLLOWING state.
+
+```bash
+cat zk_leader_drop_ping_packet.btm
+
+RULE trace zk_leader_drop_ping_packet
+CLASS org.apache.zookeeper.server.quorum.LearnerHandler
+METHOD ping
+AT ENTRY
+IF $0.sid == 2
+DO
+   traceln("*** Leader drops ping packet to sid: 2 ***");
+   return;
+ENDRULE
+```
+
+Example 3: This script makes one follower drop the ACK packet, which has no big effect in the broadcast phase, since after receiving
+ACKs from a majority of the followers, the leader can still commit the proposal.
+
+```bash
+cat zk_follower_drop_ack_packet.btm
+
+RULE trace zk.follower_drop_ack_packet
+CLASS org.apache.zookeeper.server.quorum.SendAckRequestProcessor
+METHOD processRequest
+AT ENTRY
+IF true
+DO
+   traceln("*** Follower drops ACK packet ***");
+   return;
+ENDRULE
+```
+
+
+
+
+### Jepsen Test
+A framework for distributed systems verification, with fault injection.
+Jepsen has been used to verify everything from eventually-consistent commutative databases to linearizable coordination systems to distributed task schedulers.
+More details can be found at [jepsen-io](https://github.com/jepsen-io/jepsen).
+
+Running the [Dockerized Jepsen](https://github.com/jepsen-io/jepsen/blob/master/docker/README.md) is the simplest way to use Jepsen.
+
+Installation:
+
+```bash
+git clone git@github.com:jepsen-io/jepsen.git
+cd docker
+# the first init may take a long time 
+./up.sh +# docker ps to check one control node and five db nodes are up +docker ps + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + 8265f1d3f89c docker_control "/bin/sh -c /init.sh" 9 hours ago Up 4 hours 0.0.0.0:32769->8080/tcp jepsen-control + 8a646102da44 docker_n5 "/run.sh" 9 hours ago Up 3 hours 22/tcp jepsen-n5 + 385454d7e520 docker_n1 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n1 + a62d6a9d5f8e docker_n2 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n2 + 1485e89d0d9a docker_n3 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n3 + 27ae01e1a0c5 docker_node "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-node + 53c444b00ebd docker_n4 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n4 +``` + +Running & Test + +```bash +# Enter into the container:jepsen-control +docker exec -it jepsen-control bash +# Test +cd zookeeper && lein run test --concurrency 10 +# See something like the following to assert that ZooKeeper has passed the Jepsen test +INFO [2019-04-01 11:25:23,719] jepsen worker 8 - jepsen.util 8 :ok :read 2 +INFO [2019-04-01 11:25:23,722] jepsen worker 3 - jepsen.util 3 :invoke :cas [0 4] +INFO [2019-04-01 11:25:23,760] jepsen worker 3 - jepsen.util 3 :fail :cas [0 4] +INFO [2019-04-01 11:25:23,791] jepsen worker 1 - jepsen.util 1 :invoke :read nil +INFO [2019-04-01 11:25:23,794] jepsen worker 1 - jepsen.util 1 :ok :read 2 +INFO [2019-04-01 11:25:24,038] jepsen worker 0 - jepsen.util 0 :invoke :write 4 +INFO [2019-04-01 11:25:24,073] jepsen worker 0 - jepsen.util 0 :ok :write 4 +............................................................................... +Everything looks good! ヽ(‘ー`)ノ + +``` + +Reference: +read [this blog](https://aphyr.com/posts/291-call-me-maybe-zookeeper) to learn more about the Jepsen test for the Zookeeper. 
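The `:invoke`/`:ok`/`:fail` lines above are a history of read, write, and cas operations against a single register; Jepsen's checker then verifies that this history is consistent. As a toy illustration of the idea only (this is not Jepsen's actual checker, which is written in Clojure and handles concurrent histories), a purely sequential single-register history can be validated like this:

```java
import java.util.List;

// Toy sequential checker for a single register, in the spirit of the Jepsen
// history above. Each entry is an int[]: {0, v} = write v, {1, v} = read must
// observe v, {2, expected, nv} = cas succeeds only if the register holds
// `expected`. Illustrative names; not part of Jepsen or ZooKeeper.
public class RegisterChecker {

    // Returns true if the sequential history is consistent.
    static boolean check(List<int[]> history) {
        Integer reg = null;
        for (int[] op : history) {
            switch (op[0]) {
                case 0: // write(v): unconditionally set the register
                    reg = op[1];
                    break;
                case 1: // read(v): must match the current value
                    if (reg == null || reg != op[1]) return false;
                    break;
                case 2: // cas(expected, nv): only applies when expected matches
                    if (reg != null && reg == op[1]) reg = op[2];
                    break;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // write 2, read 2 -> consistent; write 2, read 3 -> inconsistent
        System.out.println(check(List.of(new int[]{0, 2}, new int[]{1, 2})));
        System.out.println(check(List.of(new int[]{0, 2}, new int[]{1, 3})));
    }
}
```

Jepsen does far more (concurrent operations, fault injection, linearizability search), but "replay the history and reject impossible reads" is the core of what "Everything looks good!" asserts.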
diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTutorial.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTutorial.md new file mode 100644 index 0000000000000000000000000000000000000000..366c0a9f081dd2e7d7e80e186d4a805d73a639bc --- /dev/null +++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTutorial.md @@ -0,0 +1,666 @@ + + +# Programming with ZooKeeper - A basic tutorial + +* [Introduction](#ch_Introduction) +* [Barriers](#sc_barriers) +* [Producer-Consumer Queues](#sc_producerConsumerQueues) +* [Complete example](#Complete+example) + * [Queue test](#Queue+test) + * [Barrier test](#Barrier+test) + * [Source Listing](#sc_sourceListing) + + + +## Introduction + +In this tutorial, we show simple implementations of barriers and +producer-consumer queues using ZooKeeper. We call the respective classes Barrier and Queue. +These examples assume that you have at least one ZooKeeper server running. + +Both primitives use the following common excerpt of code: + + static ZooKeeper zk = null; + static Integer mutex; + + String root; + + SyncPrimitive(String address) { + if(zk == null){ + try { + System.out.println("Starting ZK:"); + zk = new ZooKeeper(address, 3000, this); + mutex = new Integer(-1); + System.out.println("Finished starting ZK: " + zk); + } catch (IOException e) { + System.out.println(e.toString()); + zk = null; + } + } + } + + synchronized public void process(WatchedEvent event) { + synchronized (mutex) { + mutex.notify(); + } + } + + + +Both classes extend SyncPrimitive. In this way, we execute steps that are +common to all primitives in the constructor of SyncPrimitive. To keep the examples +simple, we create a ZooKeeper object the first time we instantiate either a barrier +object or a queue object, and we declare a static variable that is a reference +to this object. 
The subsequent instances of Barrier and Queue check whether a
+ZooKeeper object exists. Alternatively, we could have the application create a
+ZooKeeper object and pass it to the constructors of Barrier and Queue.
+
+We use the process() method to process notifications triggered by watches.
+In the following discussion, we present code that sets watches. A watch is an internal
+structure that enables ZooKeeper to notify a client of a change to a node. For example,
+if a client is waiting for other clients to leave a barrier, then it can set a watch and
+wait for modifications to a particular node, which can indicate the end of the wait.
+This point becomes clear once we go over the examples.
+
+
+
+## Barriers
+
+A barrier is a primitive that enables a group of processes to synchronize the
+beginning and the end of a computation. The general idea of this implementation
+is to have a barrier node that serves the purpose of being a parent for individual
+process nodes. Suppose that we call the barrier node "/b1". Each process "p" then
+creates a node "/b1/p". Once enough processes have created their corresponding
+nodes, the joined processes can start the computation.
+
+In this example, each process instantiates a Barrier object, and its constructor takes as parameters:
+
+* the address of a ZooKeeper server (e.g., "zoo1.foo.com:2181")
+* the path of the barrier node on ZooKeeper (e.g., "/b1")
+* the size of the group of processes
+
+The constructor of Barrier passes the address of the ZooKeeper server to the
+constructor of the parent class. The parent class creates a ZooKeeper instance if
+one does not exist. The constructor of Barrier then creates a
+barrier node on ZooKeeper, which is the parent node of all process nodes; we
+call it root (**Note:** This is not the ZooKeeper root "/"). 
+
+    /**
+     * Barrier constructor
+     *
+     * @param address
+     * @param root
+     * @param size
+     */
+    Barrier(String address, String root, int size) {
+        super(address);
+        this.root = root;
+        this.size = size;
+        // Create barrier node
+        if (zk != null) {
+            try {
+                Stat s = zk.exists(root, false);
+                if (s == null) {
+                    zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE,
+                            CreateMode.PERSISTENT);
+                }
+            } catch (KeeperException e) {
+                System.out
+                        .println("Keeper exception when instantiating queue: "
+                                + e.toString());
+            } catch (InterruptedException e) {
+                System.out.println("Interrupted exception");
+            }
+        }
+
+        // My node name
+        try {
+            name = new String(InetAddress.getLocalHost().getCanonicalHostName().toString());
+        } catch (UnknownHostException e) {
+            System.out.println(e.toString());
+        }
+    }
+
+
+To enter the barrier, a process calls enter(). The process creates a node under
+the root to represent it, using its host name to form the node name. It then waits
+until enough processes have entered the barrier. A process does this by checking
+the number of children the root node has with "getChildren()", and waiting for
+notifications in case there are not enough. To receive a notification when
+there is a change to the root node, a process has to set a watch, and does so
+through the call to "getChildren()". In the code, "getChildren()"
+takes two parameters. The first one states the node to read from, and the second is
+a boolean flag that enables the process to set a watch. In the code the flag is true. 
+
+    /**
+     * Join barrier
+     *
+     * @return
+     * @throws KeeperException
+     * @throws InterruptedException
+     */
+
+    boolean enter() throws KeeperException, InterruptedException{
+        zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE,
+                CreateMode.EPHEMERAL);
+        while (true) {
+            synchronized (mutex) {
+                List list = zk.getChildren(root, true);
+
+                if (list.size() < size) {
+                    mutex.wait();
+                } else {
+                    return true;
+                }
+            }
+        }
+    }
+
+
+Note that enter() throws both KeeperException and InterruptedException, so it is
+the responsibility of the application to catch and handle such exceptions.
+
+Once the computation is finished, a process calls leave() to leave the barrier.
+First it deletes its corresponding node, and then it gets the children of the root
+node. If there is at least one child, then it waits for a notification (note
+that the second parameter of the call to getChildren() is true, meaning that
+ZooKeeper sets a watch on the root node). Upon receiving a notification,
+it checks once more whether the root node has any children.
+
+    /**
+     * Wait until all reach barrier
+     *
+     * @return
+     * @throws KeeperException
+     * @throws InterruptedException
+     */
+
+    boolean leave() throws KeeperException, InterruptedException {
+        zk.delete(root + "/" + name, 0);
+        while (true) {
+            synchronized (mutex) {
+                List list = zk.getChildren(root, true);
+                if (list.size() > 0) {
+                    mutex.wait();
+                } else {
+                    return true;
+                }
+            }
+        }
+    }
+
+
+
+
+## Producer-Consumer Queues
+
+A producer-consumer queue is a distributed data structure that groups of processes
+use to generate and consume items. Producer processes create new elements and add
+them to the queue. Consumer processes remove elements from the queue, and process them.
+In this implementation, the elements are simple integers. The queue is represented
+by a root node, and to add an element to the queue, a producer process creates a new node,
+a child of the root node. 
+
+The following excerpt of code corresponds to the constructor of the object. As
+with Barrier objects, it first calls the constructor of the parent class, SyncPrimitive,
+which creates a ZooKeeper object if one doesn't exist. It then checks whether the root
+node of the queue exists, and creates it if it doesn't.
+
+    /**
+     * Constructor of producer-consumer queue
+     *
+     * @param address
+     * @param name
+     */
+    Queue(String address, String name) {
+        super(address);
+        this.root = name;
+        // Create ZK node name
+        if (zk != null) {
+            try {
+                Stat s = zk.exists(root, false);
+                if (s == null) {
+                    zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE,
+                            CreateMode.PERSISTENT);
+                }
+            } catch (KeeperException e) {
+                System.out
+                        .println("Keeper exception when instantiating queue: "
+                                + e.toString());
+            } catch (InterruptedException e) {
+                System.out.println("Interrupted exception");
+            }
+        }
+    }
+
+
+A producer process calls "produce()" to add an element to the queue, and passes
+an integer as an argument. To add an element to the queue, the method creates a
+new node using "create()", and uses the SEQUENCE flag to instruct ZooKeeper to
+append the value of the sequence counter associated with the root node. In this way,
+we impose a total order on the elements of the queue, thus guaranteeing that the
+oldest element of the queue is the next one consumed.
+
+    /**
+     * Add element to the queue.
+     *
+     * @param i
+     * @return
+     */
+
+    boolean produce(int i) throws KeeperException, InterruptedException{
+        ByteBuffer b = ByteBuffer.allocate(4);
+        byte[] value;
+
+        // Add child with value i
+        b.putInt(i);
+        value = b.array();
+        zk.create(root + "/element", value, Ids.OPEN_ACL_UNSAFE,
+                CreateMode.PERSISTENT_SEQUENTIAL);
+
+        return true;
+    }
+
+
+To consume an element, a consumer process obtains the children of the root node,
+reads the node with the smallest counter value, and returns the element. 
Note that
+if there is a conflict, then one of the two contending processes won't be able to
+delete the node and the delete operation will throw an exception.
+
+A call to getChildren() returns the list of children in lexicographic order.
+As lexicographic order does not necessarily follow the numerical order of the counter
+values, we need to decide which element is the smallest. To decide which one has
+the smallest counter value, we traverse the list, and remove the prefix "element"
+from each one.
+
+    /**
+     * Remove first element from the queue.
+     *
+     * @return
+     * @throws KeeperException
+     * @throws InterruptedException
+     */
+    int consume() throws KeeperException, InterruptedException{
+        int retvalue = -1;
+        Stat stat = null;
+
+        // Get the first element available
+        while (true) {
+            synchronized (mutex) {
+                List list = zk.getChildren(root, true);
+                if (list.size() == 0) {
+                    System.out.println("Going to wait");
+                    mutex.wait();
+                } else {
+                    Integer min = new Integer(list.get(0).substring(7));
+                    String minNode = list.get(0);
+                    for(String s : list){
+                        Integer tempValue = new Integer(s.substring(7));
+                        //System.out.println("Temporary value: " + tempValue);
+                        if(tempValue < min) {
+                            min = tempValue;
+                            minNode = s;
+                        }
+                    }
+                    System.out.println("Temporary value: " + root + "/" + minNode);
+                    byte[] b = zk.getData(root + "/" + minNode,
+                                false, stat);
+                    zk.delete(root + "/" + minNode, 0);
+                    ByteBuffer buffer = ByteBuffer.wrap(b);
+                    retvalue = buffer.getInt();
+
+                    return retvalue;
+                }
+            }
+        }
+    }
+
+
+
+
+## Complete example
+
+In the following section you can find a complete command line application to demonstrate the above mentioned
+recipes. Use the following command to run it.
+
+    ZOOBINDIR="[path_to_distro]/bin"
+    . 
"$ZOOBINDIR"/zkEnv.sh + java SyncPrimitive [Test Type] [ZK server] [No of elements] [Client type] + + + +### Queue test + +Start a producer to create 100 elements + + java SyncPrimitive qTest localhost 100 p + + +Start a consumer to consume 100 elements + + java SyncPrimitive qTest localhost 100 c + + + +### Barrier test + +Start a barrier with 2 participants (start as many times as many participants you'd like to enter) + + java SyncPrimitive bTest localhost 2 + + + +### Source Listing + +#### SyncPrimitive.Java + + import java.io.IOException; + import java.net.InetAddress; + import java.net.UnknownHostException; + import java.nio.ByteBuffer; + import java.util.List; + import java.util.Random; + + import org.apache.zookeeper.CreateMode; + import org.apache.zookeeper.KeeperException; + import org.apache.zookeeper.WatchedEvent; + import org.apache.zookeeper.Watcher; + import org.apache.zookeeper.ZooKeeper; + import org.apache.zookeeper.ZooDefs.Ids; + import org.apache.zookeeper.data.Stat; + + public class SyncPrimitive implements Watcher { + + static ZooKeeper zk = null; + static Integer mutex; + String root; + + SyncPrimitive(String address) { + if(zk == null){ + try { + System.out.println("Starting ZK:"); + zk = new ZooKeeper(address, 3000, this); + mutex = new Integer(-1); + System.out.println("Finished starting ZK: " + zk); + } catch (IOException e) { + System.out.println(e.toString()); + zk = null; + } + } + //else mutex = new Integer(-1); + } + + synchronized public void process(WatchedEvent event) { + synchronized (mutex) { + //System.out.println("Process: " + event.getType()); + mutex.notify(); + } + } + + /** + * Barrier + */ + static public class Barrier extends SyncPrimitive { + int size; + String name; + + /** + * Barrier constructor + * + * @param address + * @param root + * @param size + */ + Barrier(String address, String root, int size) { + super(address); + this.root = root; + this.size = size; + + // Create barrier node + if (zk != null) { + try { 
+ Stat s = zk.exists(root, false); + if (s == null) { + zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE, + CreateMode.PERSISTENT); + } + } catch (KeeperException e) { + System.out + .println("Keeper exception when instantiating queue: " + + e.toString()); + } catch (InterruptedException e) { + System.out.println("Interrupted exception"); + } + } + + // My node name + try { + name = new String(InetAddress.getLocalHost().getCanonicalHostName().toString()); + } catch (UnknownHostException e) { + System.out.println(e.toString()); + } + + } + + /** + * Join barrier + * + * @return + * @throws KeeperException + * @throws InterruptedException + */ + + boolean enter() throws KeeperException, InterruptedException{ + zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE, + CreateMode.EPHEMERAL); + while (true) { + synchronized (mutex) { + List list = zk.getChildren(root, true); + + if (list.size() < size) { + mutex.wait(); + } else { + return true; + } + } + } + } + + /** + * Wait until all reach barrier + * + * @return + * @throws KeeperException + * @throws InterruptedException + */ + boolean leave() throws KeeperException, InterruptedException{ + zk.delete(root + "/" + name, 0); + while (true) { + synchronized (mutex) { + List list = zk.getChildren(root, true); + if (list.size() > 0) { + mutex.wait(); + } else { + return true; + } + } + } + } + } + + /** + * Producer-Consumer queue + */ + static public class Queue extends SyncPrimitive { + + /** + * Constructor of producer-consumer queue + * + * @param address + * @param name + */ + Queue(String address, String name) { + super(address); + this.root = name; + // Create ZK node name + if (zk != null) { + try { + Stat s = zk.exists(root, false); + if (s == null) { + zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE, + CreateMode.PERSISTENT); + } + } catch (KeeperException e) { + System.out + .println("Keeper exception when instantiating queue: " + + e.toString()); + } catch (InterruptedException e) { + 
System.out.println("Interrupted exception"); + } + } + } + + /** + * Add element to the queue. + * + * @param i + * @return + */ + + boolean produce(int i) throws KeeperException, InterruptedException{ + ByteBuffer b = ByteBuffer.allocate(4); + byte[] value; + + // Add child with value i + b.putInt(i); + value = b.array(); + zk.create(root + "/element", value, Ids.OPEN_ACL_UNSAFE, + CreateMode.PERSISTENT_SEQUENTIAL); + + return true; + } + + /** + * Remove first element from the queue. + * + * @return + * @throws KeeperException + * @throws InterruptedException + */ + int consume() throws KeeperException, InterruptedException{ + int retvalue = -1; + Stat stat = null; + + // Get the first element available + while (true) { + synchronized (mutex) { + List list = zk.getChildren(root, true); + if (list.size() == 0) { + System.out.println("Going to wait"); + mutex.wait(); + } else { + Integer min = new Integer(list.get(0).substring(7)); + String minNode = list.get(0); + for(String s : list){ + Integer tempValue = new Integer(s.substring(7)); + //System.out.println("Temporary value: " + tempValue); + if(tempValue < min) { + min = tempValue; + minNode = s; + } + } + System.out.println("Temporary value: " + root + "/" + minNode); + byte[] b = zk.getData(root + "/" + minNode, + false, stat); + zk.delete(root + "/" + minNode, 0); + ByteBuffer buffer = ByteBuffer.wrap(b); + retvalue = buffer.getInt(); + + return retvalue; + } + } + } + } + } + + public static void main(String args[]) { + if (args[0].equals("qTest")) + queueTest(args); + else + barrierTest(args); + } + + public static void queueTest(String args[]) { + Queue q = new Queue(args[1], "/app1"); + + System.out.println("Input: " + args[1]); + int i; + Integer max = new Integer(args[2]); + + if (args[3].equals("p")) { + System.out.println("Producer"); + for (i = 0; i < max; i++) + try{ + q.produce(10 + i); + } catch (KeeperException e){ + + } catch (InterruptedException e){ + + } + } else { + 
System.out.println("Consumer");
+
+            for (i = 0; i < max; i++) {
+                try {
+                    int r = q.consume();
+                    System.out.println("Item: " + r);
+                } catch (KeeperException e) {
+                    i--;
+                } catch (InterruptedException e) {
+                }
+            }
+        }
+    }
+
+    public static void barrierTest(String args[]) {
+        Barrier b = new Barrier(args[1], "/b1", Integer.parseInt(args[2]));
+        try {
+            boolean flag = b.enter();
+            System.out.println("Entered barrier: " + args[2]);
+            if (!flag) System.out.println("Error when entering the barrier");
+        } catch (KeeperException e) {
+        } catch (InterruptedException e) {
+        }
+
+        // Sleep for a random number of 100 ms intervals
+        Random rand = new Random();
+        int r = rand.nextInt(100);
+        for (int i = 0; i < r; i++) {
+            try {
+                Thread.sleep(100);
+            } catch (InterruptedException e) {
+            }
+        }
+        try {
+            b.leave();
+        } catch (KeeperException e) {
+
+        } catch (InterruptedException e) {
+
+        }
+        System.out.println("Left barrier");
+    }
+}
+
diff --git a/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperUseCases.md b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperUseCases.md
new file mode 100644
index 0000000000000000000000000000000000000000..98045444457f5841a40a1eeb349292798942732c
--- /dev/null
+++ b/local-test-zookeeper-delta-01/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperUseCases.md
@@ -0,0 +1,385 @@
+
+
+# ZooKeeper Use Cases
+
+- Applications and organizations using ZooKeeper include (alphabetically) [1].
+- If you would like your use case to be listed here, please submit a pull request or write an email to **dev@zookeeper.apache.org**,
+  and it will be included.
+- If this documentation violates your intellectual property rights, or your or your company's privacy, write an email to **dev@zookeeper.apache.org**
+  and we will handle it in a timely manner.
+
+
+## Free Software Projects
+
+### [AdroitLogic UltraESB](http://adroitlogic.org/)
+  - Uses ZooKeeper to implement node coordination in its clustering support. This allows the management of the complete cluster,
+    or any specific node, from any other node connected via JMX. A cluster-wide command framework developed on top of the
+    ZooKeeper coordination allows commands that fail on some nodes to be retried. We also support the automated graceful
+    round-robin restart of a complete cluster of nodes using the same framework [1].
+
+### [Akka](http://akka.io/)
+  - Akka is a toolkit and runtime for building highly concurrent, distributed, and fault tolerant event-driven applications on the JVM [1].
+
+### [Eclipse Communication Framework](http://www.eclipse.org/ecf)
+  - The Eclipse ECF project provides an implementation of its Abstract Discovery services using ZooKeeper. ECF itself
+    is used in many projects providing base functionality for communication, all based on OSGi [1].
+
+### [Eclipse Gyrex](http://www.eclipse.org/gyrex)
+  - The Eclipse Gyrex project provides a platform for building your own Java OSGi based clouds.
+  - ZooKeeper is used as the core cloud component for node membership and management, coordination of jobs executing among workers,
+    a lock service, a simple queue service and a lot more [1].
+
+### [GoldenOrb](http://www.goldenorbos.org/)
+  - Massive-scale graph analysis [1].
+
+### [Juju](https://juju.ubuntu.com/)
+  - Service deployment and orchestration framework, formerly called Ensemble [1].
+
+### [Katta](http://katta.sourceforge.net/)
+  - Katta serves distributed Lucene indexes in a grid environment.
+  - ZooKeeper is used for node, master and index management in the grid [1].
+
+### [KeptCollections](https://github.com/anthonyu/KeptCollections)
+  - KeptCollections is a library of drop-in replacements for the data structures in the Java Collections framework.
+  - KeptCollections uses Apache ZooKeeper as a backing store, thus making its data structures distributed and scalable [1].
+
+### [Neo4j](https://neo4j.com/)
+  - Neo4j is a graph database. It is a disk-based, ACID-compliant transactional storage engine for big graphs and fast graph traversals,
+    using external indices like Lucene/Solr for global searches.
+  - We use ZooKeeper in the Neo4j High Availability components for write-master election,
+    read slave coordination and other cool stuff. ZooKeeper is a great and focused project, and we like it! [1].
+
+### [Norbert](http://sna-projects.com/norbert)
+  - Partitioned routing and cluster management [1].
+
+### [spring-cloud-zookeeper](https://spring.io/projects/spring-cloud-zookeeper)
+  - Spring Cloud Zookeeper provides Apache ZooKeeper integrations for Spring Boot apps through autoconfiguration
+    and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations
+    you can quickly enable and configure the common patterns inside your application and build large distributed systems with ZooKeeper.
+    The patterns provided include Service Discovery and Distributed Configuration [38].
+
+### [spring-statemachine](https://projects.spring.io/spring-statemachine/)
+  - Spring Statemachine is a framework for application developers to use state machine concepts with Spring applications.
+  - Spring Statemachine can provide a distributed state machine based on ZooKeeper [31,32].
+
+### [spring-xd](https://projects.spring.io/spring-xd/)
+  - Spring XD is a unified, distributed, and extensible system for data ingestion, real time analytics, batch processing, and data export.
+    The project’s goal is to simplify the development of big data applications.
+  - ZooKeeper - Provides all runtime information for the XD cluster. Tracks running containers, in which containers modules
+    and jobs are deployed, stream definitions, deployment manifests, and the like [30,31].
+
+### [Talend ESB](http://www.talend.com/products-application-integration/application-integration-esb-se.php)
+  - Talend ESB is a versatile and flexible enterprise service bus.
+  - It uses ZooKeeper as the endpoint repository of both REST and SOAP Web services.
+    By using ZooKeeper, Talend ESB is able to provide failover and load balancing capabilities in a very light-weight manner [1].
+
+### [redis_failover](https://github.com/ryanlecompte/redis_failover)
+  - Redis Failover is a ZooKeeper-based automatic master/slave failover solution for Ruby [1].
+
+
+## Apache Projects
+
+### [Apache Accumulo](https://accumulo.apache.org/)
+  - Accumulo is a distributed key/value store that provides expressive, cell-level access labels.
+  - Apache ZooKeeper plays a central role within the Accumulo architecture. Its quorum consistency model supports an overall
+    Accumulo architecture with no single point of failure. Beyond that, Accumulo leverages ZooKeeper to store and communicate
+    configuration information for users and tables, as well as operational states of processes and tablets [2].
+
+### [Apache Atlas](http://atlas.apache.org)
+  - Atlas is a scalable and extensible set of core foundational governance services, enabling enterprises to effectively and efficiently meet
+    their compliance requirements within Hadoop while allowing integration with the whole enterprise data ecosystem.
+  - Atlas uses ZooKeeper for coordination to provide redundancy and high availability of HBase and Kafka [31,35].
+
+### [Apache BookKeeper](https://bookkeeper.apache.org/)
+  - A scalable, fault-tolerant, and low-latency storage service optimized for real-time workloads.
+  - BookKeeper requires a metadata storage service to store information related to ledgers and available bookies.
BookKeeper currently uses + ZooKeeper for this and other tasks [3]. + +### [Apache CXF DOSGi](http://cxf.apache.org/distributed-osgi.html) + - Apache CXF is an open source services framework. CXF helps you build and develop services using frontend programming + APIs, like JAX-WS and JAX-RS. These services can speak a variety of protocols such as SOAP, XML/HTTP, RESTful HTTP, + or CORBA and work over a variety of transports such as HTTP, JMS or JBI. + - The Distributed OSGi implementation at Apache CXF uses ZooKeeper for its Discovery functionality [4]. + +### [Apache Drill](http://drill.apache.org/) + - Schema-free SQL Query Engine for Hadoop, NoSQL and Cloud Storage + - ZooKeeper maintains ephemeral cluster membership information. The Drillbits use ZooKeeper to find other Drillbits in the cluster, + and the client uses ZooKeeper to find Drillbits to submit a query [28]. + +### [Apache Druid](https://druid.apache.org/) + - Apache Druid is a high performance real-time analytics database. + - Apache Druid uses Apache ZooKeeper (ZK) for management of current cluster state. The operations that happen over ZK are [27]: + - Coordinator leader election + - Segment "publishing" protocol from Historical and Realtime + - Segment load/drop protocol between Coordinator and Historical + - Overlord leader election + - Overlord and MiddleManager task management + +### [Apache Dubbo](http://dubbo.apache.org) + - Apache Dubbo is a high-performance, java based open source RPC framework. + - Zookeeper is used for service registration discovery and configuration management in Dubbo [6]. + +### [Apache Flink](https://flink.apache.org/) + - Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. + Flink has been designed to run in all common cluster environments, perform computations at in-memory speed and at any scale. 
+ - To enable JobManager High Availability you have to set the high-availability mode to zookeeper, configure a ZooKeeper quorum and set up a masters file with all JobManagers hosts and their web UI ports. + Flink leverages ZooKeeper for distributed coordination between all running JobManager instances. ZooKeeper is a separate service from Flink, + which provides highly reliable distributed coordination via leader election and light-weight consistent state storage [23]. + +### [Apache Flume](https://flume.apache.org/) + - Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts + of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant + with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model + that allows for online analytic application. + - Flume supports Agent configurations via Zookeeper. This is an experimental feature [5]. + +### [Apache Fluo](https://fluo.apache.org/) + - Apache Fluo is a distributed processing system that lets users make incremental updates to large data sets. + - Apache Fluo is built on Apache Accumulo which uses Apache Zookeeper for consensus [31,37]. + +### [Apache Griffin](https://griffin.apache.org/) + - Big Data Quality Solution For Batch and Streaming. + - Griffin uses Zookeeper for coordination to provide redundancy and high availability of Kafka [31,36]. + +### [Apache Hadoop](http://hadoop.apache.org/) + - The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across + clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, + each offering local computation and storage. 
Rather than rely on hardware to deliver high-availability, + the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures. + - The implementation of automatic HDFS failover relies on ZooKeeper for the following things: + - **Failure detection** - each of the NameNode machines in the cluster maintains a persistent session in ZooKeeper. + If the machine crashes, the ZooKeeper session will expire, notifying the other NameNode that a failover should be triggered. + - **Active NameNode election** - ZooKeeper provides a simple mechanism to exclusively elect a node as active. If the current active NameNode crashes, + another node may take a special exclusive lock in ZooKeeper indicating that it should become the next active. + - The ZKFailoverController (ZKFC) is a new component which is a ZooKeeper client which also monitors and manages the state of the NameNode. + Each of the machines which runs a NameNode also runs a ZKFC, and that ZKFC is responsible for: + - **Health monitoring** - the ZKFC pings its local NameNode on a periodic basis with a health-check command. + So long as the NameNode responds in a timely fashion with a healthy status, the ZKFC considers the node healthy. + If the node has crashed, frozen, or otherwise entered an unhealthy state, the health monitor will mark it as unhealthy. + - **ZooKeeper session management** - when the local NameNode is healthy, the ZKFC holds a session open in ZooKeeper. + If the local NameNode is active, it also holds a special “lock” znode. This lock uses ZooKeeper’s support for “ephemeral” nodes; + if the session expires, the lock node will be automatically deleted. + - **ZooKeeper-based election** - if the local NameNode is healthy, and the ZKFC sees that no other node currently holds the lock znode, + it will itself try to acquire the lock. 
If it succeeds, then it has “won the election”, and is responsible for running a failover to make its local NameNode active.
+      The failover process is similar to the manual failover described above: first, the previous active is fenced if necessary,
+      and then the local NameNode transitions to active state [7].
+
+### [Apache HBase](https://hbase.apache.org/)
+  - HBase is the Hadoop database. It is an open-source, distributed, column-oriented store.
+  - HBase uses ZooKeeper for master election, server lease management, bootstrapping, and coordination between servers.
+    A distributed Apache HBase installation depends on a running ZooKeeper cluster. All participating nodes and clients
+    need to be able to access the running ZooKeeper ensemble [8].
+  - As you can see, ZooKeeper is a fundamental part of HBase. All operations that require coordination, such as region
+    assignment, master failover, replication, and snapshots, are built on ZooKeeper [20].
+
+### [Apache Helix](http://helix.apache.org/)
+  - A cluster management framework for partitioned and replicated distributed resources.
+  - We need a distributed store to maintain the state of the cluster and a notification system to notify if there is any change in the cluster state.
+    Helix uses Apache ZooKeeper to achieve this functionality [21].
+    ZooKeeper provides:
+    - A way to represent PERSISTENT state, which remains until it is deleted
+    - A way to represent TRANSIENT/EPHEMERAL state, which vanishes when the process that created the state dies
+    - A notification mechanism when there is a change in PERSISTENT or EPHEMERAL state
+
+### [Apache Hive](https://hive.apache.org)
+  - The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed
+    storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.
+  - Hive has been using ZooKeeper as a distributed lock manager to support concurrency in HiveServer2 [25,26].
+
+### [Apache Ignite](https://ignite.apache.org/)
+  - Ignite is a memory-centric distributed database, caching, and processing platform for
+    transactional, analytical, and streaming workloads, delivering in-memory speeds at petabyte scale.
+  - The Apache Ignite discovery mechanism ships with a ZooKeeper-based implementation which allows scaling Ignite clusters to 100s and 1000s of nodes
+    while preserving linear scalability and performance [31,34].
+
+### [Apache James Mailbox](http://james.apache.org/mailbox/)
+  - The Apache James Mailbox is a library providing a flexible Mailbox storage accessible by mail protocols
+    (IMAP4, POP3, SMTP, ...) and other protocols.
+  - Uses ZooKeeper and the Curator Framework for generating distributed unique IDs [31].
+
+### [Apache Kafka](https://kafka.apache.org/)
+  - Kafka is a distributed publish/subscribe messaging system.
+  - Apache Kafka relies on ZooKeeper for the following things:
+    - **Controller election**
+      The controller is one of the most important broker entities in a Kafka ecosystem, and it has the responsibility
+      to maintain the leader-follower relationship across all the partitions. If a node shuts down for some reason,
+      it is the controller’s responsibility to tell all the replicas to act as partition leaders in order to fulfill the
+      duties of the partition leaders on the node that is about to fail. So, whenever a node shuts down, a new controller
+      can be elected, and it is also ensured that at any given time there is only one controller and all the follower nodes have agreed on that.
+    - **Configuration of topics**
+      The configuration regarding all the topics, including the list of existing topics, the number of partitions for each topic,
+      the location of all the replicas, the list of configuration overrides for all topics, which node is the preferred leader, etc.
+    - **Access control lists**
+      Access control lists or ACLs for all the topics are also maintained within ZooKeeper.
+    - **Membership of the cluster**
+      ZooKeeper also maintains a list of all the brokers that are functioning at any given moment and are a part of the cluster [9].
+
+### [Apache Kylin](http://kylin.apache.org/)
+  - Apache Kylin is an open source Distributed Analytics Engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop/Spark supporting extremely large datasets,
+    originally contributed by eBay Inc.
+  - Apache Kylin leverages ZooKeeper for job coordination [31,33].
+
+### [Apache Mesos](http://mesos.apache.org/)
+  - Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual),
+    enabling fault-tolerant and elastic distributed systems to easily be built and run effectively.
+  - Mesos has a high-availability mode that uses multiple Mesos masters: one active master (called the leader or leading master)
+    and several backups in case it fails. The masters elect the leader, with Apache ZooKeeper both coordinating the election
+    and handling leader detection by masters, agents, and scheduler drivers [10].
+
+### [Apache Oozie](https://oozie.apache.org)
+  - Oozie is a workflow scheduler system to manage Apache Hadoop jobs.
+  - The Oozie servers use ZooKeeper for coordinating access to the database and for communicating with each other. In order to have full HA,
+    there should be at least 3 ZooKeeper servers [29].
+
+### [Apache Pulsar](https://pulsar.apache.org)
+  - Apache Pulsar is an open-source distributed pub-sub messaging system originally created at Yahoo and now part of the Apache Software Foundation.
+  - Pulsar uses Apache ZooKeeper for metadata storage, cluster configuration, and coordination. In a Pulsar instance:
+    - A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
+    - Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as ownership metadata,
+      broker load reports, BookKeeper ledger metadata, and more [24].
+
+### [Apache Solr](https://lucene.apache.org/solr/)
+  - Solr is the popular, blazing-fast, open source enterprise search platform built on Apache Lucene.
+  - In the "Cloud" edition (v4.x and up) of the enterprise search engine Apache Solr, ZooKeeper is used for configuration,
+    leader election and more [12,13].
+
+### [Apache Spark](https://spark.apache.org/)
+  - Apache Spark is a unified analytics engine for large-scale data processing.
+  - Utilizing ZooKeeper to provide leader election and some state storage, you can launch multiple Masters in your cluster connected to the same ZooKeeper instance.
+    One will be elected “leader” and the others will remain in standby mode. If the current leader dies, another Master will be elected,
+    recover the old Master’s state, and then resume scheduling [14].
+
+### [Apache Storm](http://storm.apache.org)
+  - Apache Storm is a free and open source distributed realtime computation system. Apache Storm makes it easy to reliably
+    process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing.
+    Apache Storm is simple, can be used with any programming language, and is a lot of fun to use!
+  - Storm uses ZooKeeper for coordinating the cluster [22].
+
+
+## Companies
+
+### [AGETO](http://www.ageto.de/)
+  - The AGETO RnD team uses ZooKeeper in a variety of internal as well as external consulting projects [1].
+
+### [Benipal Technologies](http://www.benipaltechnologies.com/)
+  - ZooKeeper is used for internal application development with Solr and Hadoop with HBase [1].
+
+### [Box](http://box.net/)
+  - Box uses ZooKeeper for service discovery, service coordination, Solr and Hadoop support, etc [1].
+
+### [Deepdyve](http://www.deepdyve.com/)
+  - We do search for research and provide access to high quality content using advanced search technologies.
+    ZooKeeper is used to manage server state, control index deployment and a myriad of other tasks [1].
+
+### [Facebook](https://www.facebook.com/)
+  - Facebook uses Zeus ([17,18]) for configuration management. Zeus is a forked version of ZooKeeper, with many scalability
+    and performance enhancements in order to work at the Facebook scale.
+    It runs a consensus protocol among servers distributed across multiple regions for resilience. If the leader fails,
+    a follower is converted into a new leader.
+
+### [Idium Portal](http://www.idium.no/no/idium_portal/)
+  - Idium Portal is a hosted web-publishing system delivered by the Norwegian company Idium AS.
+  - ZooKeeper is used for cluster messaging, service bootstrapping, and service coordination [1].
+
+### [Makara](http://www.makara.com/)
+  - Using ZooKeeper on a 2-node cluster on VMware workstation, Amazon EC2, Zen
+  - Using zkpython
+  - Looking into expanding into a 100 node cluster [1].
+
+### [Midokura](http://www.midokura.com/)
+  - We do virtualized networking for the cloud computing era. We use ZooKeeper for various aspects of our distributed control plane [1].
+
+### [Pinterest](https://www.pinterest.com/)
+  - Pinterest uses ZooKeeper for service discovery and dynamic configuration. Like many large scale web sites, Pinterest’s infrastructure consists of servers that communicate with
+    backend services composed of a number of individual servers for managing load and fault tolerance. Ideally, we’d like the configuration to reflect only the active hosts,
+    so clients don’t need to deal with bad hosts as often. ZooKeeper provides a well known pattern to solve this problem [19].
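The ephemeral-node discovery pattern described in the Pinterest entry (live hosts register themselves under a service znode; clients read that znode's children, so crashed hosts disappear automatically) can be illustrated without a running ensemble. The `TinyRegistry` class below is a hypothetical in-memory stand-in, not the real ZooKeeper client API; the names `TinyRegistry`, `registerEphemeral`, and `sessionExpired` are invented for this sketch:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory stand-in for the ephemeral-node discovery pattern.
// It is NOT the real ZooKeeper client API.
class TinyRegistry {
    private final Map<String, Set<String>> children = new ConcurrentHashMap<>();

    // A server's session registers an ephemeral child under the service path,
    // analogous to zk.create(path + "/" + host, data, acl, CreateMode.EPHEMERAL).
    void registerEphemeral(String servicePath, String host) {
        children.computeIfAbsent(servicePath, k -> ConcurrentHashMap.newKeySet()).add(host);
    }

    // In real ZooKeeper the ensemble deletes ephemeral nodes when the owning
    // session expires; here that cleanup is modeled as an explicit call.
    void sessionExpired(String servicePath, String host) {
        Set<String> set = children.get(servicePath);
        if (set != null) {
            set.remove(host);
        }
    }

    // Clients read the current membership, analogous to zk.getChildren(path, true).
    List<String> getChildren(String servicePath) {
        List<String> out = new ArrayList<>(children.getOrDefault(servicePath, Set.of()));
        Collections.sort(out);
        return out;
    }
}

public class DiscoveryDemo {
    public static void main(String[] args) {
        TinyRegistry zk = new TinyRegistry();
        // Two backend servers come up and register themselves.
        zk.registerEphemeral("/services/search", "host-a:9000");
        zk.registerEphemeral("/services/search", "host-b:9000");
        System.out.println(zk.getChildren("/services/search"));
        // host-a crashes; its session expires and its ephemeral node vanishes,
        // so clients immediately stop seeing it.
        zk.sessionExpired("/services/search", "host-a:9000");
        System.out.println(zk.getChildren("/services/search"));
    }
}
```

In a real deployment, clients would also pass a watcher to `getChildren` so they are notified of membership changes instead of polling.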
+
+### [Rackspace](http://www.rackspace.com/email_hosting)
+  - The Email & Apps team uses ZooKeeper to coordinate sharding and responsibility changes in a distributed e-mail client
+    that pulls and indexes data for search. ZooKeeper also provides distributed locking for connections to prevent a cluster from overwhelming servers [1].
+
+### [Sematext](http://sematext.com/)
+  - Uses ZooKeeper in SPM (which includes a ZooKeeper monitoring component, too!), Search Analytics, and Logsene [1].
+
+### [Tubemogul](http://tubemogul.com/)
+  - Uses ZooKeeper for leader election, configuration management, locking, and group membership [1].
+
+### [Twitter](https://twitter.com/)
+  - ZooKeeper is used at Twitter as the source of truth for storing critical metadata. It serves as a coordination kernel to
+    provide distributed coordination services, such as leader election and distributed locking.
+    Some concrete examples of ZooKeeper in action include [15,16]:
+    - ZooKeeper is used to store the service registry, which is used by Twitter’s naming service for service discovery.
+    - Manhattan (Twitter’s in-house key-value database), Nighthawk (sharded Redis), and Blobstore (in-house photo and video storage)
+      store their cluster topology information in ZooKeeper.
+    - EventBus, Twitter’s pub-sub messaging system, stores critical metadata in ZooKeeper and uses ZooKeeper for leader election.
+    - Mesos, Twitter’s compute platform, uses ZooKeeper for leader election.
+
+### [Vast.com](http://www.vast.com/)
+  - Used internally as a part of sharding services, distributed synchronization of data/index updates, configuration management and failover support [1].
+
+### [Wealthfront](http://wealthfront.com/)
+  - Wealthfront uses ZooKeeper for service discovery, leader election and distributed locking among its many backend services.
+ ZK is an essential part of Wealthfront's continuous [deployment infrastructure](http://eng.wealthfront.com/2010/05/02/deployment-infrastructure-for-continuous-deployment/) [1]. + +### [Yahoo!](http://www.yahoo.com/) + - ZooKeeper is used for a myriad of services inside Yahoo! for doing leader election, configuration management, sharding, locking, group membership etc [1]. + +### [Zynga](http://www.zynga.com/) + - ZooKeeper at Zynga is used for a variety of services including configuration management, leader election, sharding and more [1]. + + +#### References +- [1] https://cwiki.apache.org/confluence/display/ZOOKEEPER/PoweredBy +- [2] https://www.youtube.com/watch?v=Ew53T6h9oRw +- [3] https://bookkeeper.apache.org/docs/4.7.3/getting-started/concepts/#ledgers +- [4] http://cxf.apache.org/dosgi-discovery-demo-page.html +- [5] https://flume.apache.org/FlumeUserGuide.html +- [6] http://dubbo.apache.org/en-us/blog/dubbo-zk.html +- [7] https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html +- [8] https://hbase.apache.org/book.html#zookeeper +- [9] https://www.cloudkarafka.com/blog/2018-07-04-cloudkarafka_what_is_zookeeper.html +- [10] http://mesos.apache.org/documentation/latest/high-availability/ +- [11] http://incubator.apache.org/projects/s4.html +- [12] https://lucene.apache.org/solr/guide/6_6/using-zookeeper-to-manage-configuration-files.html#UsingZooKeepertoManageConfigurationFiles-StartupBootstrap +- [13] https://lucene.apache.org/solr/guide/6_6/setting-up-an-external-zookeeper-ensemble.html +- [14] https://spark.apache.org/docs/latest/spark-standalone.html#standby-masters-with-zookeeper +- [15] https://blog.twitter.com/engineering/en_us/topics/infrastructure/2018/zookeeper-at-twitter.html +- [16] https://blog.twitter.com/engineering/en_us/topics/infrastructure/2018/dynamic-configuration-at-twitter.html +- [17] TANG, C., KOOBURAT, T., VENKATACHALAM, P.,CHANDER, A., WEN, Z., NARAYANAN, A., DOWELL,P., AND KARL, R. 
Holistic Configuration Management + at Facebook. In Proceedings of the 25th Symposium on Operating System Principles (SOSP’15) (Monterey, CA,USA, Oct. 2015). +- [18] https://www.youtube.com/watch?v=SeZV373gUZc +- [19] https://medium.com/@Pinterest_Engineering/zookeeper-resilience-at-pinterest-adfd8acf2a6b +- [20] https://blog.cloudera.com/what-are-hbase-znodes/ +- [21] https://helix.apache.org/Architecture.html +- [22] http://storm.apache.org/releases/current/Setting-up-a-Storm-cluster.html +- [23] https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/jobmanager_high_availability.html +- [24] https://pulsar.apache.org/docs/en/concepts-architecture-overview/#metadata-store +- [25] https://cwiki.apache.org/confluence/display/Hive/Locking +- [26] *ZooKeeperHiveLockManager* implementation in the [hive](https://github.com/apache/hive/) code base +- [27] https://druid.apache.org/docs/latest/dependencies/zookeeper.html +- [28] https://mapr.com/blog/apache-drill-architecture-ultimate-guide/ +- [29] https://oozie.apache.org/docs/4.1.0/AG_Install.html +- [30] https://docs.spring.io/spring-xd/docs/current/reference/html/ +- [31] https://cwiki.apache.org/confluence/display/CURATOR/Powered+By +- [32] https://projects.spring.io/spring-statemachine/ +- [33] https://www.tigeranalytics.com/blog/apache-kylin-architecture/ +- [34] https://apacheignite.readme.io/docs/cluster-discovery +- [35] http://atlas.apache.org/HighAvailability.html +- [36] http://griffin.apache.org/docs/usecases.html +- [37] https://fluo.apache.org/ +- [38] https://spring.io/projects/spring-cloud-zookeeper