Kitxuuu committed on
Commit feaa20b · verified · 1 Parent(s): ea8198f

Add files using upload-large-folder tool

Files changed (20)
  1. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/index.md +65 -0
  2. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/recipes.md +416 -0
  3. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/releasenotes.md +267 -0
  4. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/getMenu.js +45 -0
  5. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/menu.js +48 -0
  6. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/print.css +54 -0
  7. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/screen.css +531 -0
  8. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAdmin.md +0 -0
  9. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAuditLogs.md +140 -0
  10. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperCLI.md +573 -0
  11. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperHierarchicalQuorums.md +47 -0
  12. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperInternals.md +382 -0
  13. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md +269 -0
  14. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperOver.md +336 -0
  15. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperReconfig.md +908 -0
  16. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperSnapshotAndRestore.md +68 -0
  17. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperStarted.md +373 -0
  18. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTools.md +698 -0
  19. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTutorial.md +666 -0
  20. local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperUseCases.md +385 -0
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/index.md ADDED
@@ -0,0 +1,65 @@
<!--
Copyright 2002-2004 The Apache Software Foundation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
//-->

## ZooKeeper: Because Coordinating Distributed Systems is a Zoo

ZooKeeper is a high-performance coordination service for distributed applications. It exposes common services - such as naming, configuration management, synchronization, and group services - in a simple interface so you don't have to write them from scratch. You can use it off-the-shelf to implement consensus, group management, leader election, and presence protocols. And you can build on it for your own, specific needs.

The following documents describe concepts and procedures to get you started using ZooKeeper. If you have more questions, please ask the [mailing list](http://zookeeper.apache.org/mailing_lists.html) or browse the archives.

+ **ZooKeeper Overview**
  Technical Overview Documents for Client Developers, Administrators, and Contributors
    + [Overview](zookeeperOver.html) - a bird's eye view of ZooKeeper, including design concepts and architecture
    + [Getting Started](zookeeperStarted.html) - a tutorial-style guide for developers to install, run, and program to ZooKeeper
    + [Release Notes](releasenotes.html) - new developer and user facing features, improvements, and incompatibilities
+ **Developers**
  Documents for Developers using the ZooKeeper Client API
    + [API Docs](apidocs/zookeeper-server/index.html) - the technical reference to ZooKeeper Client APIs
    + [Programmer's Guide](zookeeperProgrammers.html) - a client application developer's guide to ZooKeeper
    + [ZooKeeper Use Cases](zookeeperUseCases.html) - a series of use cases using ZooKeeper
    + [ZooKeeper Java Example](javaExample.html) - a simple ZooKeeper client application, written in Java
    + [Barrier and Queue Tutorial](zookeeperTutorial.html) - sample implementations of barriers and queues
    + [ZooKeeper Recipes](recipes.html) - higher level solutions to common problems in distributed applications
+ **Administrators & Operators**
  Documents for Administrators and Operations Engineers of ZooKeeper Deployments
    + [Administrator's Guide](zookeeperAdmin.html) - a guide for system administrators and anyone else who might deploy ZooKeeper
    + [Quota Guide](zookeeperQuotas.html) - a guide for system administrators on quotas in ZooKeeper
    + [Snapshot and Restore Guide](zookeeperSnapshotAndRestore.html) - a guide for system administrators on taking snapshots of and restoring ZooKeeper
    + [JMX](zookeeperJMX.html) - how to enable JMX in ZooKeeper
    + [Hierarchical Quorums](zookeeperHierarchicalQuorums.html) - a guide on how to use hierarchical quorums
    + [Oracle Quorum](zookeeperOracleQuorums.html) - an introduction to the Oracle Quorum, which increases the availability of a cluster of 2 ZooKeeper instances with a failure detector
    + [Observers](zookeeperObservers.html) - non-voting ensemble members that easily improve ZooKeeper's scalability
    + [Dynamic Reconfiguration](zookeeperReconfig.html) - a guide on how to use dynamic reconfiguration in ZooKeeper
    + [ZooKeeper CLI](zookeeperCLI.html) - a guide on how to use the ZooKeeper command line interface
    + [ZooKeeper Tools](zookeeperTools.html) - a guide on how to use a series of tools for ZooKeeper
    + [ZooKeeper Monitor](zookeeperMonitor.html) - a guide on how to monitor ZooKeeper
    + [Audit Logging](zookeeperAuditLogs.html) - a guide on how to configure audit logs in the ZooKeeper server and what contents are logged
+ **Contributors**
  Documents for Developers Contributing to the ZooKeeper Open Source Project
    + [ZooKeeper Internals](zookeeperInternals.html) - assorted topics on the inner workings of ZooKeeper
+ **Miscellaneous ZooKeeper Documentation**
    + [Wiki](https://cwiki.apache.org/confluence/display/ZOOKEEPER)
    + [FAQ](https://cwiki.apache.org/confluence/display/ZOOKEEPER/FAQ)

local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/recipes.md ADDED
@@ -0,0 +1,416 @@
<!--
Copyright 2002-2004 The Apache Software Foundation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
//-->

# ZooKeeper Recipes and Solutions

* [A Guide to Creating Higher-level Constructs with ZooKeeper](#ch_recipes)
    * [Important Note About Error Handling](#sc_recipes_errorHandlingNote)
    * [Out of the Box Applications: Name Service, Configuration, Group Membership](#sc_outOfTheBox)
    * [Barriers](#sc_recipes_eventHandles)
        * [Double Barriers](#sc_doubleBarriers)
    * [Queues](#sc_recipes_Queues)
        * [Priority Queues](#sc_recipes_priorityQueues)
    * [Locks](#sc_recipes_Locks)
        * [Recoverable Errors and the GUID](#sc_recipes_GuidNote)
        * [Shared Locks](#Shared+Locks)
        * [Revocable Shared Locks](#sc_revocableSharedLocks)
    * [Two-phased Commit](#sc_recipes_twoPhasedCommit)
    * [Leader Election](#sc_leaderElection)

<a name="ch_recipes"></a>

## A Guide to Creating Higher-level Constructs with ZooKeeper

In this article, you'll find guidelines for using ZooKeeper to implement higher-order functions. All of them are conventions implemented at the client and do not require special support from ZooKeeper. Hopefully the community will capture these conventions in client-side libraries to ease their use and to encourage standardization.

One of the most interesting things about ZooKeeper is that even though ZooKeeper uses _asynchronous_ notifications, you can use it to build _synchronous_ consistency primitives, such as queues and locks. As you will see, this is possible because ZooKeeper imposes an overall order on updates and has mechanisms to expose this ordering.

Note that the recipes below attempt to employ best practices. In particular, they avoid polling, timers, or anything else that would result in a "herd effect", causing bursts of traffic and limiting scalability.

There are many useful functions that can be imagined that aren't included here - revocable read-write priority locks, as just one example. And some of the constructs mentioned here - locks, in particular - illustrate certain points, even though you may find other constructs, such as event handles or queues, a more practical means of performing the same function. In general, the examples in this section are designed to stimulate thought.

<a name="sc_recipes_errorHandlingNote"></a>

### Important Note About Error Handling

When implementing the recipes you must handle recoverable exceptions (see the [FAQ](https://cwiki.apache.org/confluence/display/ZOOKEEPER/FAQ)). In particular, several of the recipes employ sequential ephemeral nodes. When creating a sequential ephemeral node there is an error case in which the create() succeeds on the server but the server crashes before returning the name of the node to the client. When the client reconnects its session is still valid and, thus, the node is not removed. The implication is that it is difficult for the client to know if its node was created or not. The recipes below include measures to handle this.

<a name="sc_outOfTheBox"></a>

### Out of the Box Applications: Name Service, Configuration, Group Membership

Name service and configuration are two of the primary applications of ZooKeeper. These two functions are provided directly by the ZooKeeper API.

Another function directly provided by ZooKeeper is _group membership_. The group is represented by a node. Members of the group create ephemeral nodes under the group node. The nodes of members that fail abnormally are removed automatically when ZooKeeper detects the failure. The sketch below illustrates the pattern.

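Here is a minimal Java sketch of group membership, assuming an already-connected ZooKeeper handle and an existing group node; the method and parameter names are purely illustrative.

```java
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class GroupMembership {

    // Join the group by creating an ephemeral child of the group node.
    // The child disappears automatically when this client's session ends,
    // which is exactly the failure detection described above.
    static void join(ZooKeeper zk, String group, String member)
            throws KeeperException, InterruptedException {
        zk.create(group + "/" + member, new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    }

    // List the current members; passing true registers the handle's default
    // watcher so the caller hears about the next membership change.
    static List<String> members(ZooKeeper zk, String group)
            throws KeeperException, InterruptedException {
        return zk.getChildren(group, true);
    }
}
```
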
<a name="sc_recipes_eventHandles"></a>

### Barriers

Distributed systems use _barriers_ to block processing of a set of nodes until a condition is met, at which time all the nodes are allowed to proceed. Barriers are implemented in ZooKeeper by designating a barrier node. The barrier is in place if the barrier node exists. Here's the pseudo code:

1. Client calls the ZooKeeper API's **exists()** function on the barrier node, with _watch_ set to true.
1. If **exists()** returns false, the barrier is gone and the client proceeds.
1. Else, if **exists()** returns true, the client waits for a watch event from ZooKeeper for the barrier node.
1. When the watch event is triggered, the client reissues the **exists( )** call, again waiting until the barrier node is removed.

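The following is a minimal Java sketch of this loop, assuming an already-connected handle. A per-call watcher stands in for the boolean watch flag; because session events also fire the watcher, the loop simply re-checks **exists()** after every wakeup.

```java
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class BarrierClient {

    // Block until the barrier node no longer exists.
    static void waitForBarrier(ZooKeeper zk, String barrierPath)
            throws KeeperException, InterruptedException {
        while (true) {
            CountDownLatch latch = new CountDownLatch(1);
            // exists() with a per-call watcher: it fires once on any change
            // to the barrier node (including its deletion).
            if (zk.exists(barrierPath, event -> latch.countDown()) == null) {
                return; // barrier is gone, the client may proceed
            }
            latch.await(); // wait for the watch event, then re-check
        }
    }
}
```
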
<a name="sc_doubleBarriers"></a>

#### Double Barriers

Double barriers enable clients to synchronize the beginning and the end of a computation. When enough processes have joined the barrier, processes start their computation and leave the barrier once they have finished. This recipe shows how to use a ZooKeeper node as a barrier.

The pseudo code in this recipe represents the barrier node as _b_. Every client process _p_ registers with the barrier node on entry and unregisters when it is ready to leave. A node registers with the barrier node via the **Enter** procedure below and waits until _x_ client processes have registered before proceeding with the computation. (The _x_ here is up to you to determine for your system.) A sketch of the Enter procedure appears at the end of this section.

| **Enter** | **Leave** |
|-----------------------------------|-------------------------------|
| 1. Create a name _n_ = _b_ + "/" + _p_ | 1. **L = getChildren(b, false)** |
| 2. Set watch: **exists(_b_ + "/ready", true)** | 2. if no children, exit |
| 3. Create child: **create(_n_, EPHEMERAL)** | 3. if _p_ is the only process node in L, **delete(_n_)** and exit |
| 4. **L = getChildren(b, false)** | 4. if _p_ is the lowest process node in L, wait on the highest process node in L |
| 5. if fewer children in L than _x_, wait for watch event | 5. else **delete(_n_)** if it still exists and wait on the lowest process node in L |
| 6. else **create(b + "/ready", REGULAR)** | 6. goto 1 |

On entering, all processes watch on a ready node and create an ephemeral node as a child of the barrier node. Each process but the last enters the barrier and waits for the ready node to appear at line 5. The process that creates the xth node, the last process, will see x nodes in the list of children and create the ready node, waking up the other processes. Note that waiting processes wake up only when it is time to exit, so waiting is efficient.

On exit, you can't use a flag such as _ready_ because you are watching for process nodes to go away. By using ephemeral nodes, processes that fail after the barrier has been entered do not prevent correct processes from finishing. When processes are ready to leave, they need to delete their process nodes and wait for all other processes to do the same.

Processes exit when there are no process nodes left as children of _b_. However, as an efficiency, you can use the lowest process node as the ready flag. All other processes that are ready to exit watch for the lowest existing process node to go away, and the owner of the lowest process node watches for any other process node (picking the highest for simplicity) to go away. This means that only a single process wakes up on each node deletion except for the last node, which wakes up everyone when it is removed.

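A minimal Java sketch of the **Enter** column, assuming a connected handle and an existing barrier node _b_. A per-call watcher replaces the boolean watch flag of line 2, and the race where the ready node already exists is handled by checking the return value of exists().

```java
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class DoubleBarrier {

    static void enter(ZooKeeper zk, String b, String p, int x)
            throws KeeperException, InterruptedException {
        CountDownLatch ready = new CountDownLatch(1);
        String n = b + "/" + p;                                 // 1. our name
        if (zk.exists(b + "/ready",                             // 2. watch the
                event -> ready.countDown()) != null) {          //    ready node
            ready.countDown();                                  // already there
        }
        zk.create(n, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,  // 3. register
                CreateMode.EPHEMERAL);
        if (zk.getChildren(b, false).size() < x) {              // 4./5. early:
            ready.await();                                      // wait for flag
        } else {                                                // 6. last one
            zk.create(b + "/ready", new byte[0],                //    creates it
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
    }
}
```
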
<a name="sc_recipes_Queues"></a>

### Queues

Distributed queues are a common data structure. To implement a distributed queue in ZooKeeper, first designate a znode to hold the queue, the queue node. The distributed clients put something into the queue by calling create() with a pathname ending in "queue-", with the _sequence_ and _ephemeral_ flags in the create() call set to true. Because the _sequence_ flag is set, the new pathname will have the form _path-to-queue-node_/queue-X, where X is a monotonically increasing number. A client that wants to remove an item from the queue calls ZooKeeper's **getChildren( )** function, with _watch_ set to true on the queue node, and begins processing nodes with the lowest number. The client does not need to issue another **getChildren( )** until it exhausts the list obtained from the first **getChildren( )** call. If there are no children in the queue node, the reader waits for a watch notification to check the queue again. A sketch of both sides appears below.

###### Note
>There now exists a Queue implementation in the ZooKeeper recipes directory. This is distributed with the release -- see the zookeeper-recipes/zookeeper-recipes-queue directory of the release artifact.

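Here is a minimal Java sketch of both sides, assuming a connected handle and an existing queue node. PERSISTENT_SEQUENTIAL is used so items survive the producer's session; the text's ephemeral variant would use EPHEMERAL_SEQUENTIAL instead.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class DistributedQueue {

    // Producer: the SEQUENTIAL mode makes the server append a monotonically
    // increasing, zero-padded counter, so children sort in insertion order.
    static void put(ZooKeeper zk, String queue, byte[] data)
            throws KeeperException, InterruptedException {
        zk.create(queue + "/queue-", data,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
    }

    // Consumer: read and delete the lowest-numbered child, retrying when
    // another consumer claims it first, and waiting on a watch when empty.
    static byte[] take(ZooKeeper zk, String queue)
            throws KeeperException, InterruptedException {
        while (true) {
            CountDownLatch changed = new CountDownLatch(1);
            List<String> children =
                    zk.getChildren(queue, event -> changed.countDown());
            if (children.isEmpty()) {
                changed.await();             // wait for something to arrive
                continue;
            }
            Collections.sort(children);      // "queue-" names sort by number
            for (String child : children) {
                String path = queue + "/" + child;
                try {
                    byte[] data = zk.getData(path, false, null);
                    zk.delete(path, -1);     // -1 matches any version
                    return data;             // this element is ours
                } catch (KeeperException.NoNodeException e) {
                    // another consumer got it first; try the next child
                }
            }
        }
    }
}
```
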
<a name="sc_recipes_priorityQueues"></a>

#### Priority Queues

To implement a priority queue, you need only make two simple changes to the generic [queue recipe](#sc_recipes_Queues). First, to add to a queue, the pathname ends with "queue-YY" where YY is the priority of the element, with lower numbers representing higher priority (just like UNIX). Second, when removing from the queue, a client uses an up-to-date children list, meaning that the client will invalidate previously obtained children lists if a watch notification triggers for the queue node.

<a name="sc_recipes_Locks"></a>

### Locks

ZooKeeper can be used to implement fully distributed locks that are globally synchronous, meaning that at any snapshot in time no two clients think they hold the same lock. As with priority queues, first define a lock node.

###### Note
>There now exists a Lock implementation in the ZooKeeper recipes directory. This is distributed with the release -- see the zookeeper-recipes/zookeeper-recipes-lock directory of the release artifact.

Clients wishing to obtain a lock do the following:

1. Call **create( )** with a pathname of "_locknode_/guid-lock-" and the _sequence_ and _ephemeral_ flags set. The _guid_ is needed in case the create() result is missed. See the note below.
1. Call **getChildren( )** on the lock node _without_ setting the watch flag (this is important to avoid the herd effect).
1. If the pathname created in step **1** has the lowest sequence number suffix, the client has the lock and the client exits the protocol.
1. The client calls **exists( )** with the watch flag set on the path in the lock directory with the next lowest sequence number.
1. If **exists( )** returns null, go to step **2**. Otherwise, wait for a notification for the pathname from the previous step before going to step **2**.

The unlock protocol is very simple: clients wishing to release a lock simply delete the node they created in step 1.

Here are a few things to notice:

* The removal of a node will only cause one client to wake up since each node is watched by exactly one client. In this way, you avoid the herd effect.

* There is no polling or timeouts.

* Because of the way you implement locking, it is easy to see the amount of lock contention, break locks, debug locking problems, etc.

A sketch of the acquire loop appears below.

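A minimal Java sketch of the acquire loop, assuming a connected handle and an existing lock node. It relies on the server zero-padding sequence numbers to ten digits, so children can be ordered by their suffix even though each name starts with a different guid; for brevity it omits the getChildren()-based recovery described in the next section.

```java
import java.util.Comparator;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class DistributedLock {

    // Returns the full path of our lock node; pass it to unlock() later.
    static String lock(ZooKeeper zk, String lockNode)
            throws KeeperException, InterruptedException {
        // Step 1: guid-lock- with SEQUENCE|EPHEMERAL.
        String me = zk.create(lockNode + "/" + UUID.randomUUID() + "-lock-",
                new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);
        String myName = me.substring(lockNode.length() + 1);
        while (true) {
            // Step 2: getChildren without a watch (no herd effect), ordered
            // by the ten-digit sequence suffix the server appended.
            List<String> children = zk.getChildren(lockNode, false);
            children.sort(Comparator.comparing(
                    (String s) -> s.substring(s.length() - 10)));
            int i = children.indexOf(myName);
            if (i == 0) {
                return me;                 // Step 3: we hold the lock
            }
            // Step 4: watch only the next lowest node.
            CountDownLatch gone = new CountDownLatch(1);
            String prev = lockNode + "/" + children.get(i - 1);
            if (zk.exists(prev, event -> gone.countDown()) == null) {
                continue;                  // Step 5: it vanished, re-check
            }
            gone.await();                  // wait for notification, re-check
        }
    }

    static void unlock(ZooKeeper zk, String me)
            throws KeeperException, InterruptedException {
        zk.delete(me, -1);                 // release: delete our node
    }
}
```
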
<a name="sc_recipes_GuidNote"></a>

#### Recoverable Errors and the GUID

* If a recoverable error occurs calling **create()**, the client should call **getChildren()** and check for a node containing the _guid_ used in the path name. This handles the case (noted [above](#sc_recipes_errorHandlingNote)) of the create() succeeding on the server but the server crashing before returning the name of the new node.

<a name="Shared+Locks"></a>

#### Shared Locks

You can implement shared locks by making a few changes to the lock protocol:

| **Obtaining a read lock:** | **Obtaining a write lock:** |
|----------------------------|-----------------------------|
| 1. Call **create( )** to create a node with pathname "*guid-/read-*". This is the lock node used later in the protocol. Make sure to set both the _sequence_ and _ephemeral_ flags. | 1. Call **create( )** to create a node with pathname "*guid-/write-*". This is the lock node used later in the protocol. Make sure to set both the _sequence_ and _ephemeral_ flags. |
| 2. Call **getChildren( )** on the lock node _without_ setting the _watch_ flag - this is important, as it avoids the herd effect. | 2. Call **getChildren( )** on the lock node _without_ setting the _watch_ flag - this is important, as it avoids the herd effect. |
| 3. If there are no children with a pathname starting with "*write-*" and having a lower sequence number than the node created in step **1**, the client has the lock and can exit the protocol. | 3. If there are no children with a lower sequence number than the node created in step **1**, the client has the lock and the client exits the protocol. |
| 4. Otherwise, call **exists( )**, with the _watch_ flag set, on the node in the lock directory with a pathname starting with "*write-*" having the next lowest sequence number. | 4. Call **exists( )**, with the _watch_ flag set, on the node with the pathname that has the next lowest sequence number. |
| 5. If **exists( )** returns _false_, go to step **2**. | 5. If **exists( )** returns _false_, go to step **2**. Otherwise, wait for a notification for the pathname from the previous step before going to step **2**. |
| 6. Otherwise, wait for a notification for the pathname from the previous step before going to step **2**. | |

Notes:

* It might appear that this recipe creates a herd effect: when there is a large group of clients waiting for a read lock, all of them get notified more or less simultaneously when the "*write-*" node with the lowest sequence number is deleted. In fact, that's valid behavior: all those waiting reader clients should be released, since they now hold the lock. The herd effect refers to releasing a "herd" when in fact only a single or a small number of machines can proceed.

* See the [note for Locks](#sc_recipes_GuidNote) on how to use the guid in the node.

<a name="sc_revocableSharedLocks"></a>

#### Revocable Shared Locks

With minor modifications to the Shared Lock protocol, you can make shared locks revocable:

In step **1** of both the read and write lock protocols, call **getData( )** with _watch_ set, immediately after the call to **create( )**. If the client subsequently receives a notification for the node it created in step **1**, it does another **getData( )** on that node, with _watch_ set, and looks for the string "unlock", which signals to the client that it must release the lock. This is because, according to this shared lock protocol, you can request that the client holding the lock give it up by calling **setData()** on the lock node, writing "unlock" to that node.

Note that this protocol requires the lock holder to consent to releasing the lock. Such consent is important, especially if the lock holder needs to do some processing before releasing the lock. Of course you can always implement _Revocable Shared Locks with Freaking Laser Beams_ by stipulating in your protocol that the revoker is allowed to delete the lock node if after some length of time the lock isn't deleted by the lock holder.

<a name="sc_recipes_twoPhasedCommit"></a>

### Two-phased Commit

A two-phase commit protocol is an algorithm that lets all clients in a distributed system agree either to commit a transaction or to abort.

In ZooKeeper, you can implement a two-phased commit by having a coordinator create a transaction node, say "/app/Tx", and one child node per participating site, say "/app/Tx/s_i". When the coordinator creates a child node, it leaves the content undefined. Once each site involved in the transaction receives the transaction from the coordinator, the site reads each child node and sets a watch. Each site then processes the query and votes "commit" or "abort" by writing to its respective node. Once the write completes, the other sites are notified, and as soon as all sites have all votes, they can decide either "abort" or "commit". Note that a site can decide "abort" earlier if some site votes for "abort". A sketch of a participant's side appears at the end of this section.

An interesting aspect of this implementation is that the only role of the coordinator is to decide upon the group of sites, to create the ZooKeeper nodes, and to propagate the transaction to the corresponding sites. In fact, even propagating the transaction can be done through ZooKeeper by writing it in the transaction node.

There are two important drawbacks to the approach described above. One is the message complexity, which is O(n²). The second is the impossibility of detecting failures of sites through ephemeral nodes. To detect the failure of a site using ephemeral nodes, it is necessary that the site create the node.

To solve the first problem, you can have only the coordinator notified of changes to the transaction nodes, and then have it notify the sites once it reaches a decision. Note that this approach is scalable, but it is slower too, as it requires all communication to go through the coordinator.

To address the second problem, you can have the coordinator propagate the transaction to the sites, and have each site create its own ephemeral node.

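A minimal Java sketch of a participant's side, assuming a connected handle and that the coordinator has already created the transaction node with one empty child per site; the "commit"/"abort" strings and the path layout follow the description above and are otherwise a choice of this sketch.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class TwoPhaseCommitSite {

    // Write our vote, then wait until every site has voted "commit"
    // (return true) or any site has voted "abort" (return false).
    static boolean voteAndAwait(ZooKeeper zk, String txNode, String mySite,
                                boolean commit)
            throws KeeperException, InterruptedException {
        zk.setData(txNode + "/" + mySite,
                (commit ? "commit" : "abort").getBytes(StandardCharsets.UTF_8),
                -1);                               // -1: any version
        while (true) {
            CountDownLatch changed = new CountDownLatch(1);
            int commits = 0;
            List<String> sites = zk.getChildren(txNode, false);
            for (String site : sites) {
                // Per-call watcher: any later vote on this node wakes us up.
                byte[] data = zk.getData(txNode + "/" + site,
                        event -> changed.countDown(), null);
                String vote = new String(data, StandardCharsets.UTF_8);
                if (vote.equals("abort")) {
                    return false;                  // abort immediately
                }
                if (vote.equals("commit")) {
                    commits++;
                }
            }
            if (commits == sites.size()) {
                return true;                       // unanimous commit
            }
            changed.await();                       // wait for more votes
        }
    }
}
```
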
<a name="sc_leaderElection"></a>

### Leader Election

A simple way of doing leader election with ZooKeeper is to use the **SEQUENCE|EPHEMERAL** flags when creating znodes that represent "proposals" of clients. The idea is to have a znode, say "/election", such that each client creates a child znode "/election/guid-n_" with both flags SEQUENCE|EPHEMERAL. With the sequence flag, ZooKeeper automatically appends a sequence number that is greater than any one previously appended to a child of "/election". The process that created the znode with the smallest appended sequence number is the leader.

That's not all, though. It is important to watch for failures of the leader, so that a new client arises as the new leader in the case the current leader fails. A trivial solution is to have all application processes watch the current smallest znode, and check if they are the new leader when the smallest znode goes away (note that the smallest znode will go away if the leader fails because the node is ephemeral). But this causes a herd effect: upon a failure of the current leader, all other processes receive a notification and execute getChildren on "/election" to obtain the current list of children of "/election". If the number of clients is large, it causes a spike in the number of operations that ZooKeeper servers have to process. To avoid the herd effect, it is sufficient to watch for the next znode down in the sequence of znodes. If a client receives a notification that the znode it is watching is gone, then it becomes the new leader in the case that there is no smaller znode. Note that this avoids the herd effect by not having all clients watching the same znode.

Here's the pseudo code (a Java sketch follows it):

Let ELECTION be a path of choice of the application. To volunteer to be a leader:

1. Create znode z with path "ELECTION/guid-n_" with both SEQUENCE and EPHEMERAL flags;
1. Let C be the children of "ELECTION", and i the sequence number of z;
1. Watch for changes on "ELECTION/guid-n_j", where j is the largest sequence number such that j < i and n_j is a znode in C;

Upon receiving a notification of znode deletion:

1. Let C be the new set of children of ELECTION;
1. If z is the smallest node in C, then execute the leader procedure;
1. Otherwise, watch for changes on "ELECTION/guid-n_j", where j is the largest sequence number such that j < i and n_j is a znode in C;

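A minimal Java sketch of the volunteer side, assuming a connected handle and an existing ELECTION node; as in the lock sketch, children are ordered by the ten-digit sequence suffix, and becomeLeader() stands in for the application's leader procedure.

```java
import java.util.Comparator;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class LeaderElection {

    static void volunteer(ZooKeeper zk, String election)
            throws KeeperException, InterruptedException {
        // Create znode z with SEQUENCE|EPHEMERAL; the guid guards against
        // the lost-create case described for locks.
        String z = zk.create(election + "/" + UUID.randomUUID() + "-n_",
                new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);
        String me = z.substring(election.length() + 1);
        while (true) {
            List<String> c = zk.getChildren(election, false);
            c.sort(Comparator.comparing(
                    (String s) -> s.substring(s.length() - 10)));
            int i = c.indexOf(me);
            if (i == 0) {
                becomeLeader();        // smallest znode: we are the leader
                return;
            }
            // Watch only the next znode down to avoid the herd effect.
            CountDownLatch gone = new CountDownLatch(1);
            if (zk.exists(election + "/" + c.get(i - 1),
                    event -> gone.countDown()) != null) {
                gone.await();          // re-evaluate when it goes away
            }
        }
    }

    static void becomeLeader() { /* application-specific leader procedure */ }
}
```
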
Notes:

* The fact that a znode has no preceding znode in the list of children does not imply that the creator of this znode is aware that it is the current leader. Applications may consider creating a separate znode to acknowledge that the leader has executed the leader procedure.

* See the [note for Locks](#sc_recipes_GuidNote) on how to use the guid in the node.

local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/releasenotes.md ADDED
@@ -0,0 +1,267 @@
<!--
Copyright 2002-2004 The Apache Software Foundation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
//-->

# ZooKeeper 3.0.0 Release Notes

* [Migration Instructions when Upgrading to 3.0.0](#migration)
    * [Migrating Client Code](#migration_code)
        * [Watch Management](#Watch+Management)
        * [Java API](#Java+API)
        * [C API](#C+API)
    * [Migrating Server Data](#migration_data)
    * [Migrating Server Configuration](#migration_config)
* [Changes Since ZooKeeper 2.2.1](#changes)

These release notes include new developer and user facing incompatibilities, features, and major improvements.

* [Migration Instructions](#migration)
* [Changes](#changes)

<a name="migration"></a>
## Migration Instructions when Upgrading to 3.0.0

*You should only have to read this section if you are upgrading from a previous version of ZooKeeper to version 3.0.0; otherwise, skip down to [changes](#changes).*

A small number of changes in this release have resulted in non-backward-compatible ZooKeeper client code and server instance data. The following instructions provide details on how to migrate code and data from version 2.2.1 to version 3.0.0.

Note: ZooKeeper increments the major version number (major.minor.fix) when backward incompatible changes are made to the source base. As part of the migration from SourceForge we changed the package structure (com.yahoo.zookeeper.* to org.apache.zookeeper.*) and felt it was a good time to incorporate some changes that we had been withholding. As a result, the following will be required when migrating from version 2.2.1 to version 3.0.0 of ZooKeeper.

* [Migrating Client Code](#migration_code)
* [Migrating Server Data](#migration_data)
* [Migrating Server Configuration](#migration_config)

<a name="migration_code"></a>
### Migrating Client Code

The underlying client-server protocol has changed in version 3.0.0 of ZooKeeper. As a result, clients must be upgraded along with serving clusters to ensure proper operation of the system (old pre-3.0.0 clients are not guaranteed to operate against upgraded 3.0.0 servers, and vice-versa).

<a name="Watch+Management"></a>
#### Watch Management

In previous releases of ZooKeeper any watches registered by clients were lost if the client lost its connection to a ZooKeeper server. This meant that developers had to track the watches they were interested in and re-register them if a session disconnect event was received. In this release the client library tracks the watches a client has registered and re-registers them when a connection is made to a new server. Applications that still manually re-register interest should continue to work properly as long as they are able to handle unsolicited watches. For example, an old application may register watches for /foo and /goo, lose the connection, and re-register only /goo. As long as the application is able to receive a notification for /foo (probably ignoring it), it does not need to be changed. One caveat to the watch management: it is possible to miss an event for the creation and deletion of a znode if you are watching for creation and both the create and the delete happen while the client is disconnected from ZooKeeper.

This release also allows clients to specify call-specific watch functions. This gives the developer the ability to modularize logic in different watch functions rather than cramming everything into the watch function attached to the ZooKeeper handle. Call-specific watch functions receive all session events for as long as they are active, but only receive the watch callbacks for which they are registered.

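For illustration, a small Java example of a call-specific watch function using today's API; the connect string and the /myapp/config path are hypothetical.

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class PerCallWatchExample {

    public static void main(String[] args) throws Exception {
        // The watcher passed to the constructor remains the default watcher
        // and continues to receive session events.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000,
                (WatchedEvent e) -> System.out.println("default: " + e));

        // exists() with its own Watcher instead of the boolean flag: only
        // this watcher receives the callback for /myapp/config.
        zk.exists("/myapp/config",
                (WatchedEvent e) -> System.out.println("config: " + e));
    }
}
```
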
<a name="Java+API"></a>
#### Java API

1. The Java package structure has changed from **com.yahoo.zookeeper*** to **org.apache.zookeeper***. This will probably affect all of your Java code which makes use of ZooKeeper APIs (typically import statements).
1. A number of constants used in the client ZooKeeper API were re-specified using enums (rather than ints). See [ZOOKEEPER-7](https://issues.apache.org/jira/browse/ZOOKEEPER-7), [ZOOKEEPER-132](https://issues.apache.org/jira/browse/ZOOKEEPER-132) and [ZOOKEEPER-139](https://issues.apache.org/jira/browse/ZOOKEEPER-139) for full details.
1. [ZOOKEEPER-18](https://issues.apache.org/jira/browse/ZOOKEEPER-18) removed KeeperStateChanged; use KeeperStateDisconnected instead.

Also see [the current Java API](http://zookeeper.apache.org/docs/current/apidocs/zookeeper-server/index.html).

<a name="C+API"></a>
#### C API

1. A number of constants used in the client ZooKeeper API were renamed in order to reduce namespace collisions; see [ZOOKEEPER-6](https://issues.apache.org/jira/browse/ZOOKEEPER-6) for full details.

<a name="migration_data"></a>
### Migrating Server Data
The following issues resulted in changes to the on-disk data format (the snapshot and transaction log files contained within the ZK data directory) and require a migration utility to be run.

* [ZOOKEEPER-27 Unique DB identifiers for servers and clients](https://issues.apache.org/jira/browse/ZOOKEEPER-27)
* [ZOOKEEPER-32 CRCs for ZooKeeper data](https://issues.apache.org/jira/browse/ZOOKEEPER-32)
* [ZOOKEEPER-33 Better ACL management](https://issues.apache.org/jira/browse/ZOOKEEPER-33)
* [ZOOKEEPER-38 headers (version+) in log/snap files](https://issues.apache.org/jira/browse/ZOOKEEPER-38)

**The following must be run once, and only once, when upgrading the ZooKeeper server instances to version 3.0.0.**

###### Note
> The <dataLogDir> and <dataDir> directories referenced below are specified by the *dataLogDir* and *dataDir* settings in your ZooKeeper config file, respectively. *dataLogDir* defaults to the value of *dataDir* if not specified explicitly in the ZooKeeper server config file (in which case provide the same directory for both parameters to the upgrade utility).

1. Shut down the ZooKeeper server cluster.
1. Back up your <dataLogDir> and <dataDir> directories.
1. Run the upgrade using
    * `bin/zkServer.sh upgrade <dataLogDir> <dataDir>`

    or

    * `java -classpath pathtolog4j:pathtozookeeper.jar UpgradeMain <dataLogDir> <dataDir>`

    where <dataLogDir> is the directory where all transaction logs (log.*) are stored and <dataDir> is the directory where all the snapshots (snapshot.*) are stored.
1. Restart the cluster.

If anything fails during the upgrade procedure, keep reading to learn how to sanitize your database.

This is how the upgrade works in ZooKeeper; it will help you troubleshoot in case you have problems while upgrading:

1. Upgrade moves files from `<dataLogDir>` and `<dataDir>` to `<dataLogDir>/version-1/` and `<dataDir>/version-1` respectively (the version-1 sub-directory is created by the upgrade utility).
1. Upgrade creates the new version sub-directories `<dataDir>/version-2` and `<dataLogDir>/version-2`.
1. Upgrade reads the old database from `<dataDir>/version-1` and `<dataLogDir>/version-1` into memory and creates a new upgraded snapshot.
1. Upgrade writes the new database into `<dataDir>/version-2`.

Troubleshooting:

1. If you start ZooKeeper 3.0 on a 2.0 database without upgrading, the servers will start up with an empty database. This is because the servers assume that `<dataDir>/version-2` and `<dataLogDir>/version-2` will hold the database to start with. Since these will be empty if no upgrade was run, the servers will start with an empty database. In such a case, shut down the ZooKeeper servers, remove the version-2 directory (remember, this will lead to loss of any updates made after you started 3.0) and then start the upgrade procedure.
1. If the upgrade fails while trying to rename files into the version-1 directory, try moving all the files under `<dataDir>/version-1` and `<dataLogDir>/version-1` back to `<dataDir>` and `<dataLogDir>` respectively, then run the upgrade again.
1. If you do not wish to run with ZooKeeper 3.0, prefer to run with ZooKeeper 2.0, and have already upgraded, you can run ZooKeeper 2 with the `<dataDir>` and `<dataLogDir>` directories changed to `<dataDir>/version-1` and `<dataLogDir>/version-1`. Remember that you will lose all the updates that you made after the upgrade.

<a name="migration_config"></a>
### Migrating Server Configuration

There is a significant change to the ZooKeeper server configuration file.

The default election algorithm, specified by the *electionAlg* configuration attribute, has changed from a default of *0* to a default of *3*. See the [Cluster Options](zookeeperAdmin.html#sc_clusterOptions) section of the administrator's guide, specifically the *electionAlg* and *server.X* properties.

You will either need to explicitly set *electionAlg* to its previous default value of *0* or change your *server.X* options to include the leader election port.

<a name="changes"></a>
## Changes Since ZooKeeper 2.2.1

Version 2.2.1 code, documentation, binaries, etc... are still accessible on [SourceForge](http://sourceforge.net/projects/zookeeper)

| Issue | Notes |
|-------|-------|
|[ZOOKEEPER-43](https://issues.apache.org/jira/browse/ZOOKEEPER-43)|Server side of auto reset watches.|
|[ZOOKEEPER-132](https://issues.apache.org/jira/browse/ZOOKEEPER-132)|Create Enum to replace CreateFlag in ZooKepper.create method|
|[ZOOKEEPER-139](https://issues.apache.org/jira/browse/ZOOKEEPER-139)|Create Enums for WatcherEvent's KeeperState and EventType|
|[ZOOKEEPER-18](https://issues.apache.org/jira/browse/ZOOKEEPER-18)|keeper state inconsistency|
|[ZOOKEEPER-38](https://issues.apache.org/jira/browse/ZOOKEEPER-38)|headers in log/snap files|
|[ZOOKEEPER-8](https://issues.apache.org/jira/browse/ZOOKEEPER-8)|Stat enchaned to include num of children and size|
|[ZOOKEEPER-6](https://issues.apache.org/jira/browse/ZOOKEEPER-6)|List of problem identifiers in zookeeper.h|
|[ZOOKEEPER-7](https://issues.apache.org/jira/browse/ZOOKEEPER-7)|Use enums rather than ints for types and state|
|[ZOOKEEPER-27](https://issues.apache.org/jira/browse/ZOOKEEPER-27)|Unique DB identifiers for servers and clients|
|[ZOOKEEPER-32](https://issues.apache.org/jira/browse/ZOOKEEPER-32)|CRCs for ZooKeeper data|
|[ZOOKEEPER-33](https://issues.apache.org/jira/browse/ZOOKEEPER-33)|Better ACL management|
|[ZOOKEEPER-203](https://issues.apache.org/jira/browse/ZOOKEEPER-203)|fix datadir typo in releasenotes|
|[ZOOKEEPER-145](https://issues.apache.org/jira/browse/ZOOKEEPER-145)|write detailed release notes for users migrating from 2.x to 3.0|
|[ZOOKEEPER-23](https://issues.apache.org/jira/browse/ZOOKEEPER-23)|Auto reset of watches on reconnect|
|[ZOOKEEPER-191](https://issues.apache.org/jira/browse/ZOOKEEPER-191)|forrest docs for upgrade.|
|[ZOOKEEPER-201](https://issues.apache.org/jira/browse/ZOOKEEPER-201)|validate magic number when reading snapshot and transaction logs|
|[ZOOKEEPER-200](https://issues.apache.org/jira/browse/ZOOKEEPER-200)|the magic number for snapshot and log must be different|
|[ZOOKEEPER-199](https://issues.apache.org/jira/browse/ZOOKEEPER-199)|fix log messages in persistence code|
|[ZOOKEEPER-197](https://issues.apache.org/jira/browse/ZOOKEEPER-197)|create checksums for snapshots|
|[ZOOKEEPER-198](https://issues.apache.org/jira/browse/ZOOKEEPER-198)|apache license header missing from FollowerSyncRequest.java|
|[ZOOKEEPER-5](https://issues.apache.org/jira/browse/ZOOKEEPER-5)|Upgrade Feature in Zookeeper server.|
|[ZOOKEEPER-194](https://issues.apache.org/jira/browse/ZOOKEEPER-194)|Fix terminology in zookeeperAdmin.xml|
|[ZOOKEEPER-151](https://issues.apache.org/jira/browse/ZOOKEEPER-151)|Document change to server configuration|
|[ZOOKEEPER-193](https://issues.apache.org/jira/browse/ZOOKEEPER-193)|update java example doc to compile with latest zookeeper|
|[ZOOKEEPER-187](https://issues.apache.org/jira/browse/ZOOKEEPER-187)|CreateMode api docs missing|
|[ZOOKEEPER-186](https://issues.apache.org/jira/browse/ZOOKEEPER-186)|add new "releasenotes.xml" to forrest documentation|
|[ZOOKEEPER-190](https://issues.apache.org/jira/browse/ZOOKEEPER-190)|Reorg links to docs and navs to docs into related sections|
|[ZOOKEEPER-189](https://issues.apache.org/jira/browse/ZOOKEEPER-189)|forrest build not validated xml of input documents|
|[ZOOKEEPER-188](https://issues.apache.org/jira/browse/ZOOKEEPER-188)|Check that election port is present for all servers|
|[ZOOKEEPER-185](https://issues.apache.org/jira/browse/ZOOKEEPER-185)|Improved version of FLETest|
|[ZOOKEEPER-184](https://issues.apache.org/jira/browse/ZOOKEEPER-184)|tests: An explicit include directive is needed for the usage of memcpy functions|
|[ZOOKEEPER-183](https://issues.apache.org/jira/browse/ZOOKEEPER-183)|Array subscript is above array bounds in od_completion, src/cli.c.|
|[ZOOKEEPER-182](https://issues.apache.org/jira/browse/ZOOKEEPER-182)|zookeeper_init accepts empty host-port string and returns valid pointer to zhandle_t.|
|[ZOOKEEPER-17](https://issues.apache.org/jira/browse/ZOOKEEPER-17)|zookeeper_init doc needs clarification|
|[ZOOKEEPER-181](https://issues.apache.org/jira/browse/ZOOKEEPER-181)|Some Source Forge Documents did not get moved over: javaExample, zookeeperTutorial, zookeeperInternals|
|[ZOOKEEPER-180](https://issues.apache.org/jira/browse/ZOOKEEPER-180)|Placeholder sections needed in document for new topics that the umbrella jira discusses|
|[ZOOKEEPER-179](https://issues.apache.org/jira/browse/ZOOKEEPER-179)|Programmer's Guide "Basic Operations" section is missing content|
|[ZOOKEEPER-178](https://issues.apache.org/jira/browse/ZOOKEEPER-178)|FLE test.|
|[ZOOKEEPER-159](https://issues.apache.org/jira/browse/ZOOKEEPER-159)|Cover two corner cases of leader election|
|[ZOOKEEPER-156](https://issues.apache.org/jira/browse/ZOOKEEPER-156)|update programmer guide with acl details from old wiki page|
|[ZOOKEEPER-154](https://issues.apache.org/jira/browse/ZOOKEEPER-154)|reliability graph diagram in overview doc needs context|
|[ZOOKEEPER-157](https://issues.apache.org/jira/browse/ZOOKEEPER-157)|Peer can't find existing leader|
|[ZOOKEEPER-155](https://issues.apache.org/jira/browse/ZOOKEEPER-155)|improve "the zookeeper project" section of overview doc|
|[ZOOKEEPER-140](https://issues.apache.org/jira/browse/ZOOKEEPER-140)|Deadlock in QuorumCnxManager|
|[ZOOKEEPER-147](https://issues.apache.org/jira/browse/ZOOKEEPER-147)|This is version of the documents with most of the [tbd...] scrubbed out|
|[ZOOKEEPER-150](https://issues.apache.org/jira/browse/ZOOKEEPER-150)|zookeeper build broken|
|[ZOOKEEPER-136](https://issues.apache.org/jira/browse/ZOOKEEPER-136)|sync causes hang in all followers of quorum.|
|[ZOOKEEPER-134](https://issues.apache.org/jira/browse/ZOOKEEPER-134)|findbugs cleanup|
|[ZOOKEEPER-133](https://issues.apache.org/jira/browse/ZOOKEEPER-133)|hudson tests failing intermittently|
|[ZOOKEEPER-144](https://issues.apache.org/jira/browse/ZOOKEEPER-144)|add tostring support for watcher event, and enums for event type/state|
|[ZOOKEEPER-21](https://issues.apache.org/jira/browse/ZOOKEEPER-21)|Improve zk ctor/watcher|
|[ZOOKEEPER-142](https://issues.apache.org/jira/browse/ZOOKEEPER-142)|Provide Javadoc as to the maximum size of the data byte array that may be stored within a znode|
|[ZOOKEEPER-93](https://issues.apache.org/jira/browse/ZOOKEEPER-93)|Create Documentation for Zookeeper|
|[ZOOKEEPER-117](https://issues.apache.org/jira/browse/ZOOKEEPER-117)|threading issues in Leader election|
|[ZOOKEEPER-137](https://issues.apache.org/jira/browse/ZOOKEEPER-137)|client watcher objects can lose events|
|[ZOOKEEPER-131](https://issues.apache.org/jira/browse/ZOOKEEPER-131)|Old leader election can elect a dead leader over and over again|
|[ZOOKEEPER-130](https://issues.apache.org/jira/browse/ZOOKEEPER-130)|update build.xml to support apache release process|
|[ZOOKEEPER-118](https://issues.apache.org/jira/browse/ZOOKEEPER-118)|findbugs flagged switch statement in followerrequestprocessor.run|
|[ZOOKEEPER-115](https://issues.apache.org/jira/browse/ZOOKEEPER-115)|Potential NPE in QuorumCnxManager|
|[ZOOKEEPER-114](https://issues.apache.org/jira/browse/ZOOKEEPER-114)|cleanup ugly event messages in zookeeper client|
|[ZOOKEEPER-112](https://issues.apache.org/jira/browse/ZOOKEEPER-112)|src/java/main ZooKeeper.java has test code embedded into it.|
|[ZOOKEEPER-39](https://issues.apache.org/jira/browse/ZOOKEEPER-39)|Use Watcher objects rather than boolean on read operations.|
|[ZOOKEEPER-97](https://issues.apache.org/jira/browse/ZOOKEEPER-97)|supports optional output directory in code generator.|
|[ZOOKEEPER-101](https://issues.apache.org/jira/browse/ZOOKEEPER-101)|Integrate ZooKeeper with "violations" feature on hudson|
|[ZOOKEEPER-105](https://issues.apache.org/jira/browse/ZOOKEEPER-105)|Catch Zookeeper exceptions and print on the stderr.|
|[ZOOKEEPER-42](https://issues.apache.org/jira/browse/ZOOKEEPER-42)|Change Leader Election to fast tcp.|
|[ZOOKEEPER-48](https://issues.apache.org/jira/browse/ZOOKEEPER-48)|auth_id now handled correctly when no auth ids present|
|[ZOOKEEPER-44](https://issues.apache.org/jira/browse/ZOOKEEPER-44)|Create sequence flag children with prefixes of 0's so that they can be lexicographically sorted.|
|[ZOOKEEPER-108](https://issues.apache.org/jira/browse/ZOOKEEPER-108)|Fix sync operation reordering on a Quorum.|
|[ZOOKEEPER-25](https://issues.apache.org/jira/browse/ZOOKEEPER-25)|Fuse module for Zookeeper.|
|[ZOOKEEPER-58](https://issues.apache.org/jira/browse/ZOOKEEPER-58)|Race condition on ClientCnxn.java|
|[ZOOKEEPER-56](https://issues.apache.org/jira/browse/ZOOKEEPER-56)|Add clover support to build.xml.|
|[ZOOKEEPER-75](https://issues.apache.org/jira/browse/ZOOKEEPER-75)|register the ZooKeeper mailing lists with nabble.com|
|[ZOOKEEPER-54](https://issues.apache.org/jira/browse/ZOOKEEPER-54)|remove sleeps in the tests.|
|[ZOOKEEPER-55](https://issues.apache.org/jira/browse/ZOOKEEPER-55)|build.xml fails to retrieve a release number from SVN and the ant target "dist" fails|
|[ZOOKEEPER-89](https://issues.apache.org/jira/browse/ZOOKEEPER-89)|invoke WhenOwnerListener.whenNotOwner when the ZK connection fails|
|[ZOOKEEPER-90](https://issues.apache.org/jira/browse/ZOOKEEPER-90)|invoke WhenOwnerListener.whenNotOwner when the ZK session expires and the znode is the leader|
|[ZOOKEEPER-82](https://issues.apache.org/jira/browse/ZOOKEEPER-82)|Make the ZooKeeperServer more DI friendly.|
|[ZOOKEEPER-110](https://issues.apache.org/jira/browse/ZOOKEEPER-110)|Build script relies on svnant, which is not compatible with subversion 1.5 working copies|
|[ZOOKEEPER-111](https://issues.apache.org/jira/browse/ZOOKEEPER-111)|Significant cleanup of existing tests.|
|[ZOOKEEPER-122](https://issues.apache.org/jira/browse/ZOOKEEPER-122)|Fix NPE in jute's Utils.toCSVString.|
|[ZOOKEEPER-123](https://issues.apache.org/jira/browse/ZOOKEEPER-123)|Fix the wrong class is specified for the logger.|
|[ZOOKEEPER-2](https://issues.apache.org/jira/browse/ZOOKEEPER-2)|Fix synchronization issues in QuorumPeer and FastLeader election.|
|[ZOOKEEPER-125](https://issues.apache.org/jira/browse/ZOOKEEPER-125)|Remove unwanted class declaration in FastLeaderElection.|
|[ZOOKEEPER-61](https://issues.apache.org/jira/browse/ZOOKEEPER-61)|Address in client/server test cases.|
|[ZOOKEEPER-75](https://issues.apache.org/jira/browse/ZOOKEEPER-75)|cleanup the library directory|
|[ZOOKEEPER-109](https://issues.apache.org/jira/browse/ZOOKEEPER-109)|cleanup of NPE and Resource issue nits found by static analysis|
|[ZOOKEEPER-76](https://issues.apache.org/jira/browse/ZOOKEEPER-76)|Commit 677109 removed the cobertura library, but not the build targets.|
|[ZOOKEEPER-63](https://issues.apache.org/jira/browse/ZOOKEEPER-63)|Race condition in client close|
|[ZOOKEEPER-70](https://issues.apache.org/jira/browse/ZOOKEEPER-70)|Add skeleton forrest doc structure for ZooKeeper|
|[ZOOKEEPER-79](https://issues.apache.org/jira/browse/ZOOKEEPER-79)|Document jacob's leader election on the wiki recipes page|
|[ZOOKEEPER-73](https://issues.apache.org/jira/browse/ZOOKEEPER-73)|Move ZK wiki from SourceForge to Apache|
|[ZOOKEEPER-72](https://issues.apache.org/jira/browse/ZOOKEEPER-72)|Initial creation/setup of ZooKeeper ASF site.|
|[ZOOKEEPER-71](https://issues.apache.org/jira/browse/ZOOKEEPER-71)|Determine what to do re ZooKeeper Changelog|
|[ZOOKEEPER-68](https://issues.apache.org/jira/browse/ZOOKEEPER-68)|parseACLs in ZooKeeper.java fails to parse elements of ACL, should be lastIndexOf rather than IndexOf|
|[ZOOKEEPER-130](https://issues.apache.org/jira/browse/ZOOKEEPER-130)|update build.xml to support apache release process.|
|[ZOOKEEPER-131](https://issues.apache.org/jira/browse/ZOOKEEPER-131)|Fix Old leader election can elect a dead leader over and over again.|
|[ZOOKEEPER-137](https://issues.apache.org/jira/browse/ZOOKEEPER-137)|client watcher objects can lose events|
|[ZOOKEEPER-117](https://issues.apache.org/jira/browse/ZOOKEEPER-117)|threading issues in Leader election|
|[ZOOKEEPER-128](https://issues.apache.org/jira/browse/ZOOKEEPER-128)|test coverage on async client operations needs to be improved|
|[ZOOKEEPER-127](https://issues.apache.org/jira/browse/ZOOKEEPER-127)|Use of non-standard election ports in config breaks services|
|[ZOOKEEPER-53](https://issues.apache.org/jira/browse/ZOOKEEPER-53)|tests failing on solaris.|
|[ZOOKEEPER-172](https://issues.apache.org/jira/browse/ZOOKEEPER-172)|FLE Test|
|[ZOOKEEPER-41](https://issues.apache.org/jira/browse/ZOOKEEPER-41)|Sample startup script|
|[ZOOKEEPER-33](https://issues.apache.org/jira/browse/ZOOKEEPER-33)|Better ACL management|
|[ZOOKEEPER-49](https://issues.apache.org/jira/browse/ZOOKEEPER-49)|SetACL does not work|
|[ZOOKEEPER-20](https://issues.apache.org/jira/browse/ZOOKEEPER-20)|Child watches are not triggered when the node is deleted|
|[ZOOKEEPER-15](https://issues.apache.org/jira/browse/ZOOKEEPER-15)|handle failure better in build.xml:test|
|[ZOOKEEPER-11](https://issues.apache.org/jira/browse/ZOOKEEPER-11)|ArrayList is used instead of List|
|[ZOOKEEPER-45](https://issues.apache.org/jira/browse/ZOOKEEPER-45)|Restructure the SVN repository after initial import|
|[ZOOKEEPER-1](https://issues.apache.org/jira/browse/ZOOKEEPER-1)|Initial ZooKeeper code contribution from Yahoo!|
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/getMenu.js ADDED
@@ -0,0 +1,45 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
/**
 * This script, when included in an HTML file, can be used to make collapsible menus.
 *
 * Typical usage:
 * <script type="text/javascript" language="JavaScript" src="menu.js"></script>
 */

if (document.getElementById) {
    document.write('<style type="text/css">.menuitemgroup{display: none;}</style>');
}

function SwitchMenu(obj, thePath) {
    var open = 'url("' + thePath + 'chapter_open.gif")';
    var close = 'url("' + thePath + 'chapter.gif")';
    if (document.getElementById) {
        var el = document.getElementById(obj);
        var title = document.getElementById(obj + 'Title');

        // Toggle the group and swap the chapter icon to match.
        if (el.style.display != "block") {
            title.style.backgroundImage = open;
            el.style.display = "block";
        } else {
            title.style.backgroundImage = close;
            el.style.display = "none";
        }
    } // end - if (document.getElementById)
} // end - function SwitchMenu(obj, thePath)
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/menu.js ADDED
@@ -0,0 +1,48 @@
+ /*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+ /**
+ * This script, when included in a html file, can be used to make collapsible menus
+ *
+ * Typical usage:
+ * <script type="text/javascript" language="JavaScript" src="menu.js"></script>
+ */
+
+ if (document.getElementById){
+   document.write('<style type="text/css">.menuitemgroup{display: none;}</style>')
+ }
+
+ function SwitchMenu(obj)
+ {
+   if(document.getElementById) {
+     var el = document.getElementById(obj);
+     var title = document.getElementById(obj+'Title');
+
+     if(obj.indexOf("_selected_")==0&&el.style.display == ""){
+       el.style.display = "block";
+       title.className = "pagegroupselected";
+     }
+
+     if(el.style.display != "block"){
+       el.style.display = "block";
+       title.className = "pagegroupopen";
+     }
+     else{
+       el.style.display = "none";
+       title.className = "pagegroup";
+     }
+   }// end - if(document.getElementById)
+ }//end - function SwitchMenu(obj)
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/print.css ADDED
@@ -0,0 +1,54 @@
+ /*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+ body {
+   font-family: Georgia, Palatino, serif;
+   font-size: 12pt;
+   background: white;
+ }
+
+ #tabs,
+ #menu,
+ #content .toc {
+   display: none;
+ }
+
+ #content {
+   width: auto;
+   padding: 0;
+   float: none !important;
+   color: black;
+   background: inherit;
+ }
+
+ a:link, a:visited {
+   color: #336699;
+   background: inherit;
+   text-decoration: underline;
+ }
+
+ #top .logo {
+   padding: 0;
+   margin: 0 0 2em 0;
+ }
+
+ #footer {
+   margin-top: 4em;
+ }
+
+ acronym {
+   border: 0;
+ }
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/screen.css ADDED
@@ -0,0 +1,531 @@
+ /*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+ body { margin: 0px 0px 0px 0px; font-family: Verdana, Helvetica, sans-serif; }
+
+ h1 { font-size : 160%; margin: 0px 0px 0px 0px; padding: 0px; }
+ h2 { font-size : 140%; margin: 1em 0px 0.8em 0px; padding: 0px; font-weight : bold;}
+ h3 { font-size : 130%; margin: 0.8em 0px 0px 0px; padding: 0px; font-weight : bold; }
+ .h3 { margin: 22px 0px 3px 0px; }
+ h4 { font-size : 120%; margin: 0.7em 0px 0px 0px; padding: 0px; font-weight : normal; text-align: left; }
+ .h4 { margin: 18px 0px 0px 0px; }
+ h4.faq { font-size : 120%; margin: 18px 0px 0px 0px; padding: 0px; font-weight : bold; text-align: left; }
+ h5 { font-size : 100%; margin: 14px 0px 0px 0px; padding: 0px; font-weight : normal; text-align: left; }
+
+ /**
+ * table
+ */
+ table .title { background-color: #000000; }
+ .ForrestTable {
+   color: #ffffff;
+   background-color: #7099C5;
+   width: 100%;
+   font-size : 100%;
+   empty-cells: show;
+ }
+ table caption {
+   padding-left: 5px;
+   color: white;
+   text-align: left;
+   font-weight: bold;
+   background-color: #000000;
+ }
+ .ForrestTable td {
+   color: black;
+   background-color: #f0f0ff;
+ }
+ .ForrestTable th { text-align: center; }
+ /**
+ * Page Header
+ */
+
+ #top {
+   position: relative;
+   float: left;
+   width: 100%;
+   background: #294563; /* if you want a background in the header, put it here */
+ }
+
+ #top .breadtrail {
+   background: #CFDCED;
+   color: black;
+   border-bottom: solid 1px white;
+   padding: 3px 10px;
+   font-size: 75%;
+ }
+ #top .breadtrail a { color: black; }
+
+ #top .header {
+   float: left;
+   width: 100%;
+   background: url("header_white_line.gif") repeat-x bottom;
+ }
+
+ #top .grouplogo {
+   padding: 7px 0 10px 10px;
+   float: left;
+   text-align: left;
+ }
+ #top .projectlogo {
+   padding: 7px 0 10px 10px;
+   float: left;
+   width: 33%;
+   text-align: right;
+ }
+ #top .projectlogoA1 {
+   padding: 7px 0 10px 10px;
+   float: right;
+ }
+ html>body #top .searchbox {
+   bottom: 0px;
+ }
+ #top .searchbox {
+   position: absolute;
+   right: 10px;
+   height: 42px;
+   font-size: 70%;
+   white-space: nowrap;
+   bottom: -1px; /* compensate for IE rendering issue */
+   border-radius: 5px 5px 0px 0px;
+ }
+
+ #top .searchbox form {
+   padding: 5px 10px;
+   margin: 0;
+ }
+ #top .searchbox p {
+   padding: 0 0 2px 0;
+   margin: 0;
+ }
+ #top .searchbox input {
+   font-size: 100%;
+ }
+
+ #tabs {
+   clear: both;
+   padding-left: 10px;
+   margin: 0;
+   list-style: none;
+ }
+
+ #tabs li {
+   float: left;
+   margin: 0 3px 0 0;
+   padding: 0;
+   border-radius: 5px 5px 0px 0px;
+ }
+
+ /*background: url("tab-left.gif") no-repeat left top;*/
+ #tabs li a {
+   float: left;
+   display: block;
+   font-family: verdana, arial, sans-serif;
+   text-decoration: none;
+   color: black;
+   white-space: nowrap;
+   padding: 5px 15px 4px;
+   width: .1em; /* IE/Win fix */
+ }
+
+ #tabs li a:hover {
+
+   cursor: pointer;
+   text-decoration:underline;
+ }
+
+ #tabs > li a { width: auto; } /* Rest of IE/Win fix */
+
+ /* Commented Backslash Hack hides rule from IE5-Mac \*/
+ #tabs a { float: none; }
+ /* End IE5-Mac hack */
+
+ #top .header .current {
+   background-color: #4C6C8F;
+ }
+ #top .header .current a {
+   font-weight: bold;
+   padding-bottom: 5px;
+   color: white;
+ }
+ #publishedStrip {
+   padding-right: 10px;
+   padding-left: 20px;
+   padding-top: 3px;
+   padding-bottom:3px;
+   color: #ffffff;
+   font-size : 60%;
+   font-weight: bold;
+   background-color: #4C6C8F;
+   text-align:right;
+ }
+
+ #level2tabs {
+   margin: 0;
+   float:left;
+   position:relative;
+
+ }
+
+
+
+ #level2tabs a:hover {
+
+   cursor: pointer;
+   text-decoration:underline;
+
+ }
+
+ #level2tabs a{
+
+   cursor: pointer;
+   text-decoration:none;
+   background-image: url('chapter.gif');
+   background-repeat: no-repeat;
+   background-position: center left;
+   padding-left: 6px;
+   margin-left: 6px;
+ }
+
+ /*
+ * border-top: solid #4C6C8F 15px;
+ */
+ #main {
+   position: relative;
+   background: white;
+   clear:both;
+ }
+ #main .breadtrail {
+   clear:both;
+   position: relative;
+   background: #CFDCED;
+   color: black;
+   border-bottom: solid 1px black;
+   border-top: solid 1px black;
+   padding: 0px 180px;
+   font-size: 75%;
+   z-index:10;
+ }
+
+ img.corner {
+   width: 15px;
+   height: 15px;
+   border: none;
+   display: block !important;
+ }
+
+ img.cornersmall {
+   width: 5px;
+   height: 5px;
+   border: none;
+   display: block !important;
+ }
+ /**
+ * Side menu
+ */
+ #menu a { font-weight: normal; text-decoration: none;}
+ #menu a:visited { font-weight: normal; }
+ #menu a:active { font-weight: normal; }
+ #menu a:hover { font-weight: normal; text-decoration:underline;}
+
+ #menuarea { width:10em;}
+ #menu {
+   position: relative;
+   float: left;
+   width: 160px;
+   padding-top: 0px;
+   padding-bottom: 15px;
+   top:-18px;
+   left:10px;
+   z-index: 20;
+   background-color: #f90;
+   font-size : 70%;
+   border-radius: 0px 0px 15px 15px;
+ }
+
+ .menutitle {
+   cursor:pointer;
+   padding: 3px 12px;
+   margin-left: 10px;
+   background-image: url('chapter.gif');
+   background-repeat: no-repeat;
+   background-position: center left;
+   font-weight : bold;
+ }
+
+ .menutitle.selected {
+   background-image: url('chapter_open.gif');
+ }
+
+ .menutitle:hover{text-decoration:underline;cursor: pointer;}
+
+ #menu .menuitemgroup {
+   margin: 0px 0px 6px 8px;
+   padding: 0px;
+   font-weight : bold; }
+
+ #menu .selectedmenuitemgroup{
+   margin: 0px 0px 0px 8px;
+   padding: 0px;
+   font-weight : normal;
+
+ }
+
+ #menu .menuitem {
+   padding: 2px 0px 1px 13px;
+   background-image: url('page.gif');
+   background-repeat: no-repeat;
+   background-position: center left;
+   font-weight : normal;
+   margin-left: 10px;
+ }
+
+ #menu .selected {
+   font-style : normal;
+   margin-right: 10px;
+
+ }
+ .menuitem .selected {
+   border-style: solid;
+   border-width: 1px;
+ }
+ #menu .menupageitemgroup {
+   padding: 3px 0px 4px 6px;
+   font-style : normal;
+   border-bottom: 1px solid ;
+   border-left: 1px solid ;
+   border-right: 1px solid ;
+   margin-right: 10px;
+ }
+ #menu .menupageitem {
+   font-style : normal;
+   font-weight : normal;
+   border-width: 0px;
+   font-size : 90%;
+ }
+ #menu .searchbox {
+   text-align: center;
+ }
+ #menu .searchbox form {
+   padding: 3px 3px;
+   margin: 0;
+ }
+ #menu .searchbox input {
+   font-size: 100%;
+ }
+
+ #content {
+   padding: 20px 20px 20px 180px;
+   margin: 0;
+   font : small Verdana, Helvetica, sans-serif;
+   font-size : 80%;
+ }
+
+ #content ul {
+   margin: 0;
+   padding: 0 25px;
+ }
+ #content li {
+   padding: 0 5px;
+ }
+ #feedback {
+   color: black;
+   background: #CFDCED;
+   text-align:center;
+   margin-top: 5px;
+ }
+ #feedback #feedbackto {
+   font-size: 90%;
+   color: black;
+ }
+ #footer {
+   clear: both;
+   position: relative; /* IE bugfix (http://www.dracos.co.uk/web/css/ie6floatbug/) */
+   width: 100%;
+   background: #CFDCED;
+   border-top: solid 1px #4C6C8F;
+   color: black;
+ }
+ #footer .copyright {
+   position: relative; /* IE bugfix cont'd */
+   padding: 5px;
+   margin: 0;
+   width: 60%;
+ }
+ #footer .lastmodified {
+   position: relative; /* IE bugfix cont'd */
+   float: right;
+   width: 30%;
+   padding: 5px;
+   margin: 0;
+   text-align: right;
+ }
+ #footer a { color: white; }
+
+ #footer #logos {
+   text-align: left;
+ }
+
+
+ /**
+ * Misc Styles
+ */
+
+ acronym { cursor: help; }
+ .boxed { background-color: #a5b6c6;}
+ .underlined_5 {border-bottom: solid 5px #4C6C8F;}
+ .underlined_10 {border-bottom: solid 10px #4C6C8F;}
+ /* ==================== snail trail ============================ */
+
+ .trail {
+   position: relative; /* IE bugfix cont'd */
+   font-size: 70%;
+   text-align: right;
+   float: right;
+   margin: -10px 5px 0px 5px;
+   padding: 0;
+ }
+
+ #motd-area {
+   position:relative;
+   float:right;
+   width: 35%;
+   background-color: #f0f0ff;
+   border: solid 1px #4C6C8F;
+   margin: 0px 0px 10px 10px;
+   padding: 5px;
+ }
+
+ #minitoc-area {
+   border-top: solid 1px #4C6C8F;
+   border-bottom: solid 1px #4C6C8F;
+   margin: 15px 10% 5px 15px;
+   /* margin-bottom: 15px;
+   margin-left: 15px;
+   margin-right: 10%;*/
+   padding-bottom: 7px;
+   padding-top: 5px;
+ }
+ .minitoc {
+   list-style-image: url('current.gif');
+   font-weight: normal;
+ }
+
+ .abstract{
+   text-align:justify;
+ }
+
+ li p {
+   margin: 0;
+   padding: 0;
+ }
+
+ .pdflink {
+   position: relative; /* IE bugfix cont'd */
+   float: right;
+   margin: 0px 5px;
+   padding: 0;
+ }
+ .pdflink br {
+   margin-top: -10px;
+   padding-left: 1px;
+ }
+ .pdflink a {
+   display: block;
+   font-size: 70%;
+   text-align: center;
+   margin: 0;
+   padding: 0;
+ }
+
+ .pdflink img {
+   display: block;
+   height: 16px;
+   width: 16px;
+ }
+ .xmllink {
+   position: relative; /* IE bugfix cont'd */
+   float: right;
+   margin: 0px 5px;
+   padding: 0;
+ }
+ .xmllink br {
+   margin-top: -10px;
+   padding-left: 1px;
+ }
+ .xmllink a {
+   display: block;
+   font-size: 70%;
+   text-align: center;
+   margin: 0;
+   padding: 0;
+ }
+
+ .xmllink img {
+   display: block;
+   height: 16px;
+   width: 16px;
+ }
+ .podlink {
+   position: relative; /* IE bugfix cont'd */
+   float: right;
+   margin: 0px 5px;
+   padding: 0;
+ }
+ .podlink br {
+   margin-top: -10px;
+   padding-left: 1px;
+ }
+ .podlink a {
+   display: block;
+   font-size: 70%;
+   text-align: center;
+   margin: 0;
+   padding: 0;
+ }
+
+ .podlink img {
+   display: block;
+   height: 16px;
+   width: 16px;
+ }
+
+ .printlink {
+   position: relative; /* IE bugfix cont'd */
+   float: right;
+ }
+ .printlink br {
+   margin-top: -10px;
+   padding-left: 1px;
+ }
+ .printlink a {
+   display: block;
+   font-size: 70%;
+   text-align: center;
+   margin: 0;
+   padding: 0;
+ }
+ .printlink img {
+   display: block;
+   height: 16px;
+   width: 16px;
+ }
+
+ p.instruction {
+   display: list-item;
+   list-style-image: url('../instruction_arrow.png');
+   list-style-position: outside;
+   margin-left: 2em;
+ }
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAdmin.md ADDED
The diff for this file is too large to render. See raw diff
 
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAuditLogs.md ADDED
@@ -0,0 +1,140 @@
+ <!--
+ Copyright 2002-2022 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ //-->
+
+ # ZooKeeper Audit Logging
+
+ * [ZooKeeper Audit Logs](#ch_auditLogs)
+ * [ZooKeeper Audit Log Configuration](#ch_auditConfig)
+ * [Who is taken as user in audit logs?](#ch_zkAuditUser)
+
+ <a name="ch_auditLogs"></a>
+
+ ## ZooKeeper Audit Logs
+
+ Apache ZooKeeper supports audit logs from version 3.6.0. By default audit logs are disabled. To enable audit logs,
+ configure `audit.enable=true` in _conf/zoo.cfg_. Audit logs are not written on every ZooKeeper server, but only on the server the client is connected to, as depicted in the figure below.
+
+ ![Audit Logs](images/zkAuditLogs.jpg)
+
+ The audit log captures detailed information for the operations that are selected to be audited. The audit information is written as a set of key=value pairs for the following keys:
+
+ | Key | Value |
+ | ----- | ----- |
+ | session | client session id |
+ | user | comma-separated list of users associated with the client session. For more on this, see [Who is taken as user in audit logs](#ch_zkAuditUser). |
+ | ip | client IP address |
+ | operation | any one of the operations selected for audit. Possible values are (serverStart, serverStop, create, delete, setData, setAcl, multiOperation, reconfig, ephemeralZNodeDeletionOnSessionCloseOrExpire) |
+ | znode | path of the znode |
+ | znode type | type of the znode in case of a creation operation |
+ | acl | String representation of the znode ACL like cdrwa (create, delete, read, write, admin). This is logged only for the setAcl operation |
+ | result | result of the operation. Possible values are (success/failure/invoked). Result "invoked" is used for the serverStop operation because stop is logged before ensuring that the server actually stopped. |
+
+ Below are sample audit logs for all operations, where the client connects from 192.168.1.2, the client principal is zkcli@HADOOP.COM, and the server principal is zookeeper/192.168.1.3@HADOOP.COM:
+
+     user=zookeeper/192.168.1.3 operation=serverStart result=success
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=create znode=/a znode_type=persistent result=success
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=create znode=/a znode_type=persistent result=failure
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setData znode=/a result=failure
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setData znode=/a result=success
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setAcl znode=/a acl=world:anyone:cdrwa result=failure
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setAcl znode=/a acl=world:anyone:cdrwa result=success
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=create znode=/b znode_type=persistent result=success
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setData znode=/b result=success
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=delete znode=/b result=success
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=multiOperation result=failure
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=delete znode=/a result=failure
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=delete znode=/a result=success
+     session=0x19344730001 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=create znode=/ephemeral znode_type=ephemeral result=success
+     session=0x19344730001 user=zookeeper/192.168.1.3 operation=ephemeralZNodeDeletionOnSessionCloseOrExpire znode=/ephemeral result=success
+     session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=reconfig znode=/zookeeper/config result=success
+     user=zookeeper/192.168.1.3 operation=serverStop result=invoked
+
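+ Because every record is a flat list of key=value fields, audit logs are easy to post-process. The following is a minimal, hypothetical Java sketch (not part of ZooKeeper) that splits one record into its fields; it assumes fields are whitespace-separated and that only the first `=` in a field separates the key from the value, so base64 values with trailing `=` survive intact:
+
+     import java.util.LinkedHashMap;
+     import java.util.Map;
+
+     public final class AuditRecordParser {
+
+         // Split one audit record into its key=value fields.
+         public static Map<String, String> parse(String line) {
+             Map<String, String> fields = new LinkedHashMap<>();
+             for (String token : line.trim().split("\\s+")) {
+                 int eq = token.indexOf('=');
+                 if (eq > 0) {
+                     fields.put(token.substring(0, eq), token.substring(eq + 1));
+                 }
+             }
+             return fields;
+         }
+
+         public static void main(String[] args) {
+             Map<String, String> f = parse(
+                 "session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM "
+                 + "ip=192.168.1.2 operation=create znode=/a "
+                 + "znode_type=persistent result=success");
+             // prints: create on /a -> success
+             System.out.println(f.get("operation") + " on " + f.get("znode")
+                 + " -> " + f.get("result"));
+         }
+     }
+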
+ <a name="ch_auditConfig"></a>
+
+ ## ZooKeeper Audit Log Configuration
+
+ By default audit logs are disabled. To enable audit logs, configure `audit.enable=true` in _conf/zoo.cfg_.
+ Audit logging is done using logback. The following is the default logback configuration for audit logs in `conf/logback.xml`:
+
+     <!--
+       zk audit logging
+     -->
+     <!--property name="zookeeper.auditlog.file" value="zookeeper_audit.log" />
+     <property name="zookeeper.auditlog.threshold" value="INFO" />
+     <property name="audit.logger" value="INFO, RFAAUDIT" />
+
+     <appender name="RFAAUDIT" class="ch.qos.logback.core.rolling.RollingFileAppender">
+       <File>${zookeeper.log.dir}/${zookeeper.auditlog.file}</File>
+       <encoder>
+         <pattern>%d{ISO8601} %p %c{2}: %m%n</pattern>
+       </encoder>
+       <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
+         <level>${zookeeper.auditlog.threshold}</level>
+       </filter>
+       <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
+         <maxIndex>10</maxIndex>
+         <FileNamePattern>${zookeeper.log.dir}/${zookeeper.auditlog.file}.%i</FileNamePattern>
+       </rollingPolicy>
+       <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
+         <MaxFileSize>10MB</MaxFileSize>
+       </triggeringPolicy>
+     </appender>
+
+     <logger name="org.apache.zookeeper.audit.Slf4jAuditLogger" additivity="false" level="${audit.logger}">
+       <appender-ref ref="RFAAUDIT" />
+     </logger-->
+
+ Change the above configuration to customize the audit log file, the number of backups, the max file size, a custom audit logger, etc.
+
+ <a name="ch_zkAuditUser"></a>
+
+ ## Who is taken as user in audit logs?
+
+ By default there are only four authentication providers:
+
+ * IPAuthenticationProvider
+ * SASLAuthenticationProvider
+ * X509AuthenticationProvider
+ * DigestAuthenticationProvider
+
+ The user is decided based on the configured authentication provider:
+
+ * When IPAuthenticationProvider is configured, the authenticated IP is taken as the user
+ * When SASLAuthenticationProvider is configured, the client principal is taken as the user
+ * When X509AuthenticationProvider is configured, the client certificate is taken as the user
+ * When DigestAuthenticationProvider is configured, the authenticated user name is taken as the user
+
+ A custom authentication provider can override org.apache.zookeeper.server.auth.AuthenticationProvider.getUserName(String id)
+ to provide the user name. If the authentication provider does not override this method, then whatever is stored in
+ org.apache.zookeeper.data.Id.id is taken as the user.
+ Generally only the user name is stored in this field, but it is up to the custom authentication provider what it stores there;
+ for audit logging, the value of org.apache.zookeeper.data.Id.id is taken as the user.
+
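+ As an illustration, a minimal sketch of such an override might look like the following. The class name and the trimming rule are hypothetical; the sketch simply reuses IPAuthenticationProvider and reports only the address part of the authenticated id as the audit user. The provider would be registered through the usual `authProvider.X` server property:
+
+     import org.apache.zookeeper.server.auth.IPAuthenticationProvider;
+
+     // Hypothetical provider: report only the address portion of the id
+     // (everything before the first ':') as the user in audit logs.
+     public class PortlessIPAuthenticationProvider extends IPAuthenticationProvider {
+
+         @Override
+         public String getUserName(String id) {
+             int colon = id.indexOf(':');
+             return colon == -1 ? id : id.substring(0, colon);
+         }
+     }
+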
+ In a ZooKeeper server not all operations are performed by clients; some are performed by the server itself. For example, when a client closes its session, its ephemeral znodes are deleted by the server. These deletions are not done by clients directly but by the server itself; they are called system operations. For these system operations, the user associated with the ZooKeeper server is taken as the user in the audit log. For example, if the ZooKeeper server principal is zookeeper/hadoop.hadoop.com@HADOOP.COM, then this becomes the system user and all system operations will be logged with this user name:
+
+     user=zookeeper/hadoop.hadoop.com@HADOOP.COM operation=serverStart result=success
+
+ If there is no user associated with the ZooKeeper server, then the user who started the ZooKeeper server is taken as the user. For example, if the server was started by root, then root is taken as the system user:
+
+     user=root operation=serverStart result=success
+
+ A single client can attach multiple authentication schemes to a session; in that case all authenticated schemes are taken as the user, presented as a comma-separated list. For example, if a client is authenticated with principal zkcli@HADOOP.COM and ip 127.0.0.1, then the create-znode audit log will be:
+
+     session=0x10c0bcb0000 user=zkcli@HADOOP.COM,127.0.0.1 ip=127.0.0.1 operation=create znode=/a result=success
+
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperCLI.md ADDED
@@ -0,0 +1,573 @@
+ <!--
+ Copyright 2002-2021 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ //-->
+
+ # ZooKeeper-cli: the ZooKeeper command line interface
+
+ ## Pre-requisites
+ Enter the ZooKeeper CLI:
+
+ ```bash
+ # connect to localhost with the default port: 2181
+ bin/zkCli.sh
+ # connect to a remote host with a timeout of 3s
+ bin/zkCli.sh -timeout 3000 -server remoteIP:2181
+ # connect with the -waitforconnection option to wait for connection success before executing commands
+ bin/zkCli.sh -waitforconnection -timeout 3000 -server remoteIP:2181
+ # connect with a custom client configuration properties file
+ bin/zkCli.sh -client-configuration /path/to/client.properties
+ ```
+ ## help
+ Show help for the ZooKeeper commands.
+
+ ```bash
+ [zkshell: 1] help
+ # a sample one
+ [zkshell: 2] h
+ ZooKeeper -server host:port cmd args
+ addauth scheme auth
+ close
+ config [-c] [-w] [-s]
+ connect host:port
+ create [-s] [-e] [-c] [-t ttl] path [data] [acl]
+ delete [-v version] path
+ deleteall path
+ delquota [-n|-b|-N|-B] path
+ get [-s] [-w] path
+ getAcl [-s] path
+ getAllChildrenNumber path
+ getEphemerals path
+ history
+ listquota path
+ ls [-s] [-w] [-R] path
+ printwatches on|off
+ quit
+ reconfig [-s] [-v version] [[-file path] | [-members serverID=host:port1:port2;port3[,...]*]] | [-add serverId=host:port1:port2;port3[,...]]* [-remove serverId[,...]*]
+ redo cmdno
+ removewatches path [-c|-d|-a] [-l]
+ set [-s] [-v version] path data
+ setAcl [-s] [-v version] [-R] path acl
+ setquota -n|-b|-N|-B val path
+ stat [-w] path
+ sync path
+ version
+ ```
+
+ ## addauth
+ Add an authorized user for ACLs.
+
+ ```bash
+ [zkshell: 9] getAcl /acl_digest_test
+ Insufficient permission : /acl_digest_test
+ [zkshell: 10] addauth digest user1:12345
+ [zkshell: 11] getAcl /acl_digest_test
+ 'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE=
+ : cdrwa
+ # add a super user
+ # Notice: set zookeeper.DigestAuthenticationProvider.superDigest first,
+ # e.g. zookeeper.DigestAuthenticationProvider.superDigest=zookeeper:qW/HnTfCSoQpB5G8LgkwT3IbiFc=
+ [zkshell: 12] addauth digest zookeeper:admin
+ ```
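+
+ The same authentication can be attached programmatically from the Java client. The following is a minimal sketch (connection string, timeout, credentials, and znode path are illustrative):
+
+ ```java
+ import org.apache.zookeeper.ZooKeeper;
+
+ public class AddAuthExample {
+     public static void main(String[] args) throws Exception {
+         ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
+         // Same effect as "addauth digest user1:12345" in zkCli
+         zk.addAuthInfo("digest", "user1:12345".getBytes());
+         // Now the ACL-protected znode is readable by this session
+         System.out.println(zk.getACL("/acl_digest_test", null));
+         zk.close();
+     }
+ }
+ ```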
+
+ ## close
+ Close this client/session.
+
+ ```bash
+ [zkshell: 0] close
+ 2019-03-09 06:42:22,178 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@528] - EventThread shut down for session: 0x10007ab7c550006
+ 2019-03-09 06:42:22,179 [myid:] - INFO [main:ZooKeeper@1346] - Session: 0x10007ab7c550006 closed
+ ```
+
+ ## config
+ Show the configuration of the quorum membership.
+
+ ```bash
+ [zkshell: 17] config
+ server.1=[2001:db8:1:0:0:242:ac11:2]:2888:3888:participant
+ server.2=[2001:db8:1:0:0:242:ac11:2]:12888:13888:participant
+ server.3=[2001:db8:1:0:0:242:ac11:2]:22888:23888:participant
+ version=0
+ ```
+ ## connect
+ Connect to a ZooKeeper server.
+
+ ```bash
+ [zkshell: 4] connect
+ 2019-03-09 06:43:33,179 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@986] - Socket connection established, initiating session, client: /127.0.0.1:35144, server: localhost/127.0.0.1:2181
+ 2019-03-09 06:43:33,189 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1421] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10007ab7c550007, negotiated timeout = 30000
+ # connect using a connection string listing several servers
+ connect "localhost:2181,localhost:2182,localhost:2183"
+
+ # connect to a remote server
+ [zkshell: 5] connect remoteIP:2181
+ ```
+ ## create
+ Create a znode.
+
+ ```bash
+ # create a persistent node
+ [zkshell: 7] create /persistent_node
+ Created /persistent_node
+
+ # create an ephemeral node
+ [zkshell: 8] create -e /ephemeral_node mydata
+ Created /ephemeral_node
+
+ # create a persistent-sequential node
+ [zkshell: 9] create -s /persistent_sequential_node mydata
+ Created /persistent_sequential_node0000000176
+
+ # create an ephemeral-sequential node
+ [zkshell: 10] create -s -e /ephemeral_sequential_node mydata
+ Created /ephemeral_sequential_node0000000174
+
+ # create a node with an ACL scheme
+ [zkshell: 11] create /zk-node-create-schema mydata digest:user1:+owfoSBn/am19roBPzR1/MfCblE=:crwad
+ Created /zk-node-create-schema
+ [zkshell: 12] addauth digest user1:12345
+ [zkshell: 13] getAcl /zk-node-create-schema
+ 'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE=
+ : cdrwa
+
+ # create a container node. When the last child of a container is deleted, the container becomes a candidate for deletion by the server
+ [zkshell: 14] create -c /container_node mydata
+ Created /container_node
+ [zkshell: 15] create -c /container_node/child_1 mydata
+ Created /container_node/child_1
+ [zkshell: 16] create -c /container_node/child_2 mydata
+ Created /container_node/child_2
+ [zkshell: 17] delete /container_node/child_1
+ [zkshell: 18] delete /container_node/child_2
+ [zkshell: 19] get /container_node
+ org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /container_node
+
+ # create a TTL node.
+ # set zookeeper.extendedTypesEnabled=true first
+ # otherwise: KeeperErrorCode = Unimplemented for /ttl_node
+ [zkshell: 20] create -t 3000 /ttl_node mydata
+ Created /ttl_node
+ # 3s later
+ [zkshell: 21] get /ttl_node
+ org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /ttl_node
+ ```
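+
+ The same node types can be created from the Java client. The following is a minimal sketch (connection string, paths, and data are illustrative; the TTL variant assumes `zookeeper.extendedTypesEnabled=true` on the servers):
+
+ ```java
+ import org.apache.zookeeper.CreateMode;
+ import org.apache.zookeeper.ZooDefs;
+ import org.apache.zookeeper.ZooKeeper;
+
+ public class CreateExamples {
+     public static void main(String[] args) throws Exception {
+         ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
+
+         // create /persistent_node mydata
+         zk.create("/persistent_node", "mydata".getBytes(),
+                   ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+
+         // create -e /ephemeral_node mydata
+         zk.create("/ephemeral_node", "mydata".getBytes(),
+                   ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+
+         // create -t 3000 /ttl_node mydata
+         zk.create("/ttl_node", "mydata".getBytes(),
+                   ZooDefs.Ids.OPEN_ACL_UNSAFE,
+                   CreateMode.PERSISTENT_WITH_TTL, null, 3000);
+         zk.close();
+     }
+ }
+ ```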
+ ## delete
+ Delete a node at a specific path.
+
+ ```bash
+ [zkshell: 2] delete /config/topics/test
+ [zkshell: 3] ls /config/topics/test
+ Node does not exist: /config/topics/test
+ ```
+
+ ## deleteall
+ Delete all nodes under a specific path.
+
+ ```bash
+ [zkshell: 1] ls /config
+ [changes, clients, topics]
+ [zkshell: 2] deleteall /config
+ [zkshell: 3] ls /config
+ Node does not exist: /config
+ ```
+
+ ## delquota
+ Delete the quota under a path.
+
+ ```bash
+ [zkshell: 1] delquota /quota_test
+ [zkshell: 2] listquota /quota_test
+ absolute path is /zookeeper/quota/quota_test/zookeeper_limits
+ quota for /quota_test does not exist.
+ [zkshell: 3] delquota -n /c1
+ [zkshell: 4] delquota -N /c2
+ [zkshell: 5] delquota -b /c3
+ [zkshell: 6] delquota -B /c4
+ ```
+ ## get
+ Get the data at a specific path.
+
+ ```bash
+ [zkshell: 10] get /latest_producer_id_block
+ {"version":1,"broker":0,"block_start":"0","block_end":"999"}
+
+ # -s to show the stat
+ [zkshell: 11] get -s /latest_producer_id_block
+ {"version":1,"broker":0,"block_start":"0","block_end":"999"}
+ cZxid = 0x90000009a
+ ctime = Sat Jul 28 08:14:09 UTC 2018
+ mZxid = 0x9000000a2
+ mtime = Sat Jul 28 08:14:12 UTC 2018
+ pZxid = 0x90000009a
+ cversion = 0
+ dataVersion = 1
+ aclVersion = 0
+ ephemeralOwner = 0x0
+ dataLength = 60
+ numChildren = 0
+
+ # -w to set a watch on data changes. Notice: turn on printwatches
+ [zkshell: 12] get -w /latest_producer_id_block
+ {"version":1,"broker":0,"block_start":"0","block_end":"999"}
+ [zkshell: 13] set /latest_producer_id_block mydata
+ WATCHER::
+ WatchedEvent state:SyncConnected type:NodeDataChanged path:/latest_producer_id_block
+ ```
+
+ ## getAcl
+ Get the ACL permissions of a path.
+
+ ```bash
+ [zkshell: 4] create /acl_test mydata ip:127.0.0.1:crwda
+ Created /acl_test
+ [zkshell: 5] getAcl /acl_test
+ 'ip,'127.0.0.1
+ : cdrwa
+ [zkshell: 6] getAcl /testwatch
+ 'world,'anyone
+ : cdrwa
+ ```
+ ## getAllChildrenNumber
+ Get the total number of children nodes under a specific path.
+
+ ```bash
+ [zkshell: 1] getAllChildrenNumber /
+ 73779
+ [zkshell: 2] getAllChildrenNumber /zookeeper
+ 2
+ [zkshell: 3] getAllChildrenNumber /zookeeper/quota
+ 0
+ ```
+ ## getEphemerals
+ Get all the ephemeral nodes created by this session.
+
+ ```bash
+ [zkshell: 1] create -e /test-get-ephemerals "ephemeral node"
+ Created /test-get-ephemerals
+ [zkshell: 2] getEphemerals
+ [/test-get-ephemerals]
+ [zkshell: 3] getEphemerals /
+ [/test-get-ephemerals]
+ [zkshell: 4] create -e /test-get-ephemerals-1 "ephemeral node"
+ Created /test-get-ephemerals-1
+ [zkshell: 5] getEphemerals /test-get-ephemerals
+ [/test-get-ephemerals-1, /test-get-ephemerals]
+ [zkshell: 6] getEphemerals /test-get-ephemerals-1
+ [/test-get-ephemerals-1]
+ ```
+
+ ## history
+ Show the history of the most recent commands (up to 11) that you have executed.
+
+ ```bash
+ [zkshell: 7] history
+ 0 - close
+ 1 - close
+ 2 - ls /
+ 3 - ls /
+ 4 - connect
+ 5 - ls /
+ 6 - ll
+ 7 - history
+ ```
+
+ ## listquota
+ List the quota of a path.
+
+ ```bash
+ [zkshell: 1] listquota /c1
+ absolute path is /zookeeper/quota/c1/zookeeper_limits
+ Output quota for /c1 count=-1,bytes=-1=;byteHardLimit=-1;countHardLimit=2
+ Output stat for /c1 count=4,bytes=0
+ ```
+
+ ## ls
+ List the child nodes of a path.
+
+ ```bash
+ [zkshell: 36] ls /quota_test
+ [child_1, child_2, child_3]
+
+ # -s to show the stat
+ [zkshell: 37] ls -s /quota_test
+ [child_1, child_2, child_3]
+ cZxid = 0x110000002d
+ ctime = Thu Mar 07 11:19:07 UTC 2019
+ mZxid = 0x110000002d
+ mtime = Thu Mar 07 11:19:07 UTC 2019
+ pZxid = 0x1100000033
+ cversion = 3
+ dataVersion = 0
+ aclVersion = 0
+ ephemeralOwner = 0x0
+ dataLength = 0
+ numChildren = 3
+
+ # -R to list the child nodes recursively
+ [zkshell: 38] ls -R /quota_test
+ /quota_test
+ /quota_test/child_1
+ /quota_test/child_2
+ /quota_test/child_3
+
+ # -w to set a watch on child changes. Notice: turn on printwatches
+ [zkshell: 39] ls -w /brokers
+ [ids, seqid, topics]
+ [zkshell: 40] delete /brokers/ids
+ WATCHER::
+ WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers
+ ```
+
+ ## printwatches
+ A switch to turn the printing of watch notifications on or off.
+
+ ```bash
+ [zkshell: 0] printwatches
+ printwatches is on
+ [zkshell: 1] printwatches off
+ [zkshell: 2] printwatches
+ printwatches is off
+ [zkshell: 3] printwatches on
+ [zkshell: 4] printwatches
+ printwatches is on
+ ```
+
+ ## quit
+ Quit the CLI.
+
+ ```bash
+ [zkshell: 1] quit
+ ```
+
+ ## reconfig
+ Change the membership of the ensemble at runtime.
+
+ Before using this CLI, read the [Dynamic Reconfiguration](zookeeperReconfig.html) documentation for details on the reconfig feature, especially the "Security" section.
+
+ Pre-requisites:
+
+ 1. set reconfigEnabled=true in zoo.cfg
+
+ 2. add a super user or skip ACLs, otherwise you will get "Insufficient permission", e.g. addauth digest zookeeper:admin
+
+ ```bash
+ # Change follower 2 to an observer and change its port from 2182 to 12182
+ # Add observer 5 to the ensemble
+ # Remove observer 4 from the ensemble
+ [zkshell: 1] reconfig -add 2=localhost:2781:2786:observer;12182 -add 5=localhost:2784:2789:observer;2185 -remove 4
+ Committed new configuration:
+ server.1=localhost:2780:2785:participant;0.0.0.0:2181
+ server.2=localhost:2781:2786:observer;0.0.0.0:12182
+ server.3=localhost:2782:2787:participant;0.0.0.0:2183
+ server.5=localhost:2784:2789:observer;0.0.0.0:2185
+ version=1c00000002
+
+ # -members to appoint the membership
+ [zkshell: 2] reconfig -members server.1=localhost:2780:2785:participant;0.0.0.0:2181,server.2=localhost:2781:2786:observer;0.0.0.0:12182,server.3=localhost:2782:2787:participant;0.0.0.0:12183
+ Committed new configuration:
+ server.1=localhost:2780:2785:participant;0.0.0.0:2181
+ server.2=localhost:2781:2786:observer;0.0.0.0:12182
+ server.3=localhost:2782:2787:participant;0.0.0.0:12183
+ version=f9fe0000000c
+
+ # Change the current config to the one in myNewConfig.txt,
+ # but only if the current config version is 2100000010
+ [zkshell: 3] reconfig -file /data/software/zookeeper/zookeeper-test/conf/myNewConfig.txt -v 2100000010
+ Committed new configuration:
+ server.1=localhost:2780:2785:participant;0.0.0.0:2181
+ server.2=localhost:2781:2786:observer;0.0.0.0:12182
+ server.3=localhost:2782:2787:participant;0.0.0.0:2183
+ server.5=localhost:2784:2789:observer;0.0.0.0:2185
+ version=220000000c
+ ```
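+
+ Reconfiguration can also be issued programmatically through org.apache.zookeeper.admin.ZooKeeperAdmin. The following is a minimal sketch (connection string and server strings are illustrative; passing -1 means the change is not conditioned on a particular config version):
+
+ ```java
+ import org.apache.zookeeper.admin.ZooKeeperAdmin;
+
+ public class ReconfigExample {
+     public static void main(String[] args) throws Exception {
+         ZooKeeperAdmin admin = new ZooKeeperAdmin("127.0.0.1:2181", 30000, event -> { });
+         // Equivalent of "addauth digest zookeeper:admin" for the super user
+         admin.addAuthInfo("digest", "zookeeper:admin".getBytes());
+         // Add observer 5, remove server 4
+         byte[] newConfig = admin.reconfigure(
+             "5=localhost:2784:2789:observer;2185", // joining servers
+             "4",                                   // leaving servers
+             null,                                  // new membership (unused here)
+             -1, null);
+         System.out.println(new String(newConfig));
+         admin.close();
+     }
+ }
+ ```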
+
+ ## redo
+ Redo the command at the given index from history.
+
+ ```bash
+ [zkshell: 4] history
+ 0 - ls /
+ 1 - get /consumers
+ 2 - get /hbase
+ 3 - ls /hbase
+ 4 - history
+ [zkshell: 5] redo 3
+ [backup-masters, draining, flush-table-proc, hbaseid, master-maintenance, meta-region-server, namespace, online-snapshot, replication, rs, running, splitWAL, switch, table, table-lock]
+ ```
+
+ ## removewatches
+ Remove the watches under a node.
+
+ ```bash
+ [zkshell: 1] get -w /brokers
+ null
+ [zkshell: 2] removewatches /brokers
+ WATCHER::
+ WatchedEvent state:SyncConnected type:DataWatchRemoved path:/brokers
+ ```
+
+ ## set
+ Set/update the data on a path.
+
+ ```bash
+ [zkshell: 50] set /brokers myNewData
+
+ # -s to show the stat of this node
+ [zkshell: 51] set -s /quota_test mydata_for_quota_test
+ cZxid = 0x110000002d
+ ctime = Thu Mar 07 11:19:07 UTC 2019
+ mZxid = 0x1100000038
+ mtime = Thu Mar 07 11:42:41 UTC 2019
+ pZxid = 0x1100000033
+ cversion = 3
+ dataVersion = 2
+ aclVersion = 0
+ ephemeralOwner = 0x0
+ dataLength = 21
+ numChildren = 3
+
+ # -v to set the data with CAS; the version can be found in dataVersion via stat
+ [zkshell: 52] set -v 0 /brokers myNewData
+ [zkshell: 53] set -v 0 /brokers myNewData
+ version No is not valid : /brokers
+ ```
+
+ ## setAcl
+ Set the ACL permissions on a node.
+
+ ```bash
+ [zkshell: 28] addauth digest user1:12345
+ [zkshell: 30] setAcl /acl_auth_test auth:user1:12345:crwad
+ [zkshell: 31] getAcl /acl_auth_test
+ 'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE=
+ : cdrwa
+
+ # -R to set the ACL recursively
+ [zkshell: 32] ls /acl_auth_test
+ [child_1, child_2]
+ [zkshell: 33] getAcl /acl_auth_test/child_2
+ 'world,'anyone
+ : cdrwa
+ [zkshell: 34] setAcl -R /acl_auth_test auth:user1:12345:crwad
+ [zkshell: 35] getAcl /acl_auth_test/child_2
+ 'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE=
+ : cdrwa
+
+ # -v to set the ACL with the ACL version, which can be found in aclVersion via stat
+ [zkshell: 36] stat /acl_auth_test
+ cZxid = 0xf9fc0000001c
+ ctime = Tue Mar 26 16:50:58 CST 2019
+ mZxid = 0xf9fc0000001c
+ mtime = Tue Mar 26 16:50:58 CST 2019
+ pZxid = 0xf9fc0000001f
+ cversion = 2
+ dataVersion = 0
+ aclVersion = 3
+ ephemeralOwner = 0x0
+ dataLength = 0
+ numChildren = 2
+ [zkshell: 37] setAcl -v 3 /acl_auth_test auth:user1:12345:crwad
+ ```
+
+ ## setquota
+ Set a quota on a path.
+
+ ```bash
+ # -n to limit the number of child nodes (including itself)
+ [zkshell: 18] setquota -n 2 /quota_test
+ [zkshell: 19] create /quota_test/child_1
+ Created /quota_test/child_1
+ [zkshell: 20] create /quota_test/child_2
+ Created /quota_test/child_2
+ [zkshell: 21] create /quota_test/child_3
+ Created /quota_test/child_3
+ # Notice: this is not a hard constraint; only a warning is logged
+ 2019-03-07 11:22:36,680 [myid:1] - WARN [SyncThread:0:DataTree@374] - Quota exceeded: /quota_test count=3 limit=2
+ 2019-03-07 11:22:41,861 [myid:1] - WARN [SyncThread:0:DataTree@374] - Quota exceeded: /quota_test count=4 limit=2
+
+ # -b to limit the bytes (data length) of a path
+ [zkshell: 22] setquota -b 5 /brokers
+ [zkshell: 23] set /brokers "I_love_zookeeper"
+ # Notice: this is not a hard constraint; only a warning is logged
+ WARN [CommitProcWorkThread-7:DataTree@379] - Quota exceeded: /brokers bytes=4206 limit=5
+
+ # -N count hard quota
+ [zkshell: 3] create /c1
+ Created /c1
+ [zkshell: 4] setquota -N 2 /c1
+ [zkshell: 5] listquota /c1
+ absolute path is /zookeeper/quota/c1/zookeeper_limits
+ Output quota for /c1 count=-1,bytes=-1=;byteHardLimit=-1;countHardLimit=2
+ Output stat for /c1 count=2,bytes=0
+ [zkshell: 6] create /c1/ch-3
+ Count Quota has exceeded : /c1/ch-3
+
+ # -B byte hard quota
+ [zkshell: 3] create /c2
+ [zkshell: 4] setquota -B 4 /c2
+ [zkshell: 5] set /c2 "foo"
+ [zkshell: 6] set /c2 "foo-bar"
+ Bytes Quota has exceeded : /c2
+ [zkshell: 7] get /c2
+ foo
+ ```
+
+ ## stat
+ Show the stat/metadata of a node.
+
+ ```bash
+ [zkshell: 1] stat /hbase
+ cZxid = 0x4000013d9
+ ctime = Wed Jun 27 20:13:07 CST 2018
+ mZxid = 0x4000013d9
+ mtime = Wed Jun 27 20:13:07 CST 2018
+ pZxid = 0x500000001
+ cversion = 17
+ dataVersion = 0
+ aclVersion = 0
+ ephemeralOwner = 0x0
+ dataLength = 0
+ numChildren = 15
+ ```
+
+ ## sync
+ Sync the data of a node between the leader and the followers (asynchronous).
+
+ ```bash
+ [zkshell: 14] sync /
+ [zkshell: 15] Sync is OK
+ ```
+
+ ## version
+ Show the version of the ZooKeeper client/CLI.
+
+ ```bash
+ [zkshell: 1] version
+ ZooKeeper CLI version: 3.6.0-SNAPSHOT-29f9b2c1c0e832081f94d59a6b88709c5f1bb3ca, built on 05/30/2019 09:26 GMT
+ ```
+
+ ## whoami
+ Show all authentication information added to the current session.
+
+ ```bash
+ [zkshell: 1] whoami
+ Auth scheme: User
+ ip: 127.0.0.1
+ [zkshell: 2] addauth digest user1:12345
+ [zkshell: 3] whoami
+ Auth scheme: User
+ ip: 127.0.0.1
+ digest: user1
+ ```
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperHierarchicalQuorums.md ADDED
@@ -0,0 +1,47 @@
+ <!--
+ Copyright 2002-2004 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ //-->
+
+ # Introduction to hierarchical quorums
+
+ This document gives an example of how to use hierarchical quorums. The basic idea is
+ very simple. First, we split servers into groups, and add a line for each group listing
+ the servers that form this group. Next we have to assign a weight to each server.
+
+ The following example shows how to configure a system with three groups of three servers
+ each, and we assign a weight of 1 to each server:
+
+     group.1=1:2:3
+     group.2=4:5:6
+     group.3=7:8:9
+
+     weight.1=1
+     weight.2=1
+     weight.3=1
+     weight.4=1
+     weight.5=1
+     weight.6=1
+     weight.7=1
+     weight.8=1
+     weight.9=1
+
+ When running the system, we are able to form a quorum once we have a majority of votes from
+ a majority of non-zero-weight groups. Groups that have zero weight are discarded and not
+ considered when forming quorums. Looking at the example, we are able to form a quorum once
+ we have votes from at least two servers from each of two different groups.
+
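+ To make the rule concrete, below is a small Java sketch of the quorum predicate. It is an illustration only, not ZooKeeper's actual QuorumHierarchical implementation, whose behavior it merely approximates: a set of voters forms a quorum when a weighted majority is reached inside a majority of the groups with non-zero total weight.
+
+     import java.util.HashMap;
+     import java.util.Map;
+     import java.util.Set;
+
+     public final class HierarchicalQuorumCheck {
+
+         // groupOf: serverId -> groupId, weightOf: serverId -> weight
+         public static boolean containsQuorum(Set<Long> voters,
+                                              Map<Long, Long> groupOf,
+                                              Map<Long, Long> weightOf) {
+             Map<Long, Long> total = new HashMap<>();
+             Map<Long, Long> voted = new HashMap<>();
+             for (Map.Entry<Long, Long> e : groupOf.entrySet()) {
+                 long w = weightOf.getOrDefault(e.getKey(), 1L);
+                 total.merge(e.getValue(), w, Long::sum);
+                 if (voters.contains(e.getKey())) {
+                     voted.merge(e.getValue(), w, Long::sum);
+                 }
+             }
+             long nonZeroGroups = total.values().stream().filter(w -> w > 0).count();
+             long groupMajorities = total.entrySet().stream()
+                 .filter(e -> e.getValue() > 0)
+                 .filter(e -> 2 * voted.getOrDefault(e.getKey(), 0L) > e.getValue())
+                 .count();
+             return 2 * groupMajorities > nonZeroGroups;
+         }
+
+         public static void main(String[] args) {
+             Map<Long, Long> group = new HashMap<>();
+             Map<Long, Long> weight = new HashMap<>();
+             for (long id = 1; id <= 9; id++) {
+                 group.put(id, (id - 1) / 3 + 1);  // groups {1,2,3} {4,5,6} {7,8,9}
+                 weight.put(id, 1L);
+             }
+             System.out.println(containsQuorum(Set.of(1L, 2L, 4L, 5L), group, weight)); // true
+             System.out.println(containsQuorum(Set.of(1L, 4L, 7L), group, weight));     // false
+         }
+     }
+
+ For the nine-server example above, the voter set {1, 2, 4, 5} forms a quorum (two of the three servers in each of two of the three groups), while {1, 4, 7} does not.
+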
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperInternals.md ADDED
@@ -0,0 +1,382 @@
+ <!--
+ Copyright 2002-2022 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ //-->
+
+ # ZooKeeper Internals
+
+ * [Introduction](#ch_Introduction)
+ * [Atomic Broadcast](#sc_atomicBroadcast)
+     * [Guarantees, Properties, and Definitions](#sc_guaranteesPropertiesDefinitions)
+     * [Leader Activation](#sc_leaderElection)
+     * [Active Messaging](#sc_activeMessaging)
+     * [Summary](#sc_summary)
+     * [Comparisons](#sc_comparisons)
+ * [Consistency Guarantees](#sc_consistency)
+ * [Quorums](#sc_quorum)
+ * [Logging](#sc_logging)
+     * [Developer Guidelines](#sc_developerGuidelines)
+         * [Logging at the Right Level](#sc_rightLevel)
+         * [Use of Standard slf4j Idioms](#sc_slf4jIdioms)
+
+ <a name="ch_Introduction"></a>
+
+ ## Introduction
+
+ This document contains information on the inner workings of ZooKeeper.
+ It discusses the following topics:
+
+ * [Atomic Broadcast](#sc_atomicBroadcast)
+ * [Consistency Guarantees](#sc_consistency)
+ * [Quorums](#sc_quorum)
+ * [Logging](#sc_logging)
+
+ <a name="sc_atomicBroadcast"></a>
+
+ ## Atomic Broadcast
+
+ At the heart of ZooKeeper is an atomic messaging system that keeps all of the servers in sync.
+
+ <a name="sc_guaranteesPropertiesDefinitions"></a>
+
+ ### Guarantees, Properties, and Definitions
+
+ The specific guarantees provided by the messaging system used by ZooKeeper are the following:
+
+ * *_Reliable delivery_* :
+ If a message `m` is delivered
+ by one server, message `m` will eventually be delivered by all servers.
+
+ * *_Total order_* :
+ If a message `a` is
+ delivered before message `b` by one server, message `a` will be delivered before `b` by all
+ servers.
+
+ * *_Causal order_* :
+ If a message `b` is sent after a message `a` has been delivered by the sender of `b`,
+ message `a` must be ordered before `b`. If a sender sends `c` after sending `b`, `c` must be ordered after `b`.
+
+ The ZooKeeper messaging system also needs to be efficient, reliable, and easy to
+ implement and maintain. We make heavy use of messaging, so we need the system to
+ be able to handle thousands of requests per second. Although we can require at
+ least k+1 correct servers to send new messages, we must be able to recover from
+ correlated failures such as power outages. When we implemented the system we had
+ little time and few engineering resources, so we needed a protocol that is
+ accessible to engineers and is easy to implement. We found that our protocol
+ satisfied all of these goals.
+
+ Our protocol assumes that we can construct point-to-point FIFO channels between
+ the servers. While similar services usually assume message delivery that can
+ lose or reorder messages, our assumption of FIFO channels is very practical
+ given that we use TCP for communication. Specifically we rely on the following property of TCP:
+
+ * *_Ordered delivery_* :
+ Data is delivered in the same order it is sent and a message `m` is
+ delivered only after all messages sent before `m` have been delivered.
+ (The corollary to this is that if message `m` is lost all messages after `m` will be lost.)
+
+ * *_No message after close_* :
+ Once a FIFO channel is closed, no messages will be received from it.
+
+ FLP proved that consensus cannot be achieved in asynchronous distributed systems
+ if failures are possible. To ensure that we achieve consensus in the presence of failures
+ we use timeouts. However, we rely on time for liveness, not for correctness. So,
+ if timeouts stop working (e.g., skewed clocks) the messaging system may
+ hang, but it will not violate its guarantees.
+
+ When describing the ZooKeeper messaging protocol we will talk of packets,
+ proposals, and messages:
+
+ * *_Packet_* :
+ a sequence of bytes sent through a FIFO channel.
+
+ * *_Proposal_* :
+ a unit of agreement. Proposals are agreed upon by exchanging packets
+ with a quorum of ZooKeeper servers. Most proposals contain messages, however the
+ NEW_LEADER proposal is an example of a proposal that does not correspond to a message.
+
+ * *_Message_* :
+ a sequence of bytes to be atomically broadcast to all ZooKeeper
+ servers. A message will be put into a proposal and agreed upon before it is delivered.
+
+ As stated above, ZooKeeper guarantees a total order of messages, and it also
+ guarantees a total order of proposals. ZooKeeper exposes the total ordering using
+ a ZooKeeper transaction id (_zxid_). Each proposal is stamped with a zxid when
+ it is proposed, and the zxid exactly reflects the total ordering. Proposals are sent to all
+ ZooKeeper servers and committed when a quorum of them acknowledge the proposal.
+ If a proposal contains a message, the message will be delivered when the proposal
+ is committed. Acknowledgement means the server has recorded the proposal to persistent storage.
+ Our quorums have the requirement that any pair of quorums must have at least one server
+ in common. We ensure this by requiring that all quorums have size (_n/2+1_) where
+ n is the number of servers that make up a ZooKeeper service.
+
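+ A quick way to convince yourself of the overlap property: two quorums of size _n/2+1_ together contain _2*(n/2+1) > n_ servers, so they must share at least one. The following throwaway Java snippet (an illustration only, not ZooKeeper code) checks this for small ensembles:
+
+     public class MajorityOverlap {
+         public static void main(String[] args) {
+             for (int n = 1; n <= 9; n++) {
+                 int quorum = n / 2 + 1;  // integer division: floor(n/2) + 1
+                 // Two disjoint quorums would need 2*quorum distinct servers,
+                 // which always exceeds n, so any two quorums intersect.
+                 System.out.printf("n=%d quorum=%d 2*quorum=%d overlap=%b%n",
+                                   n, quorum, 2 * quorum, 2 * quorum > n);
+             }
+         }
+     }
+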
124
+ The zxid has two parts: the epoch and a counter. In our implementation the zxid
125
+ is a 64-bit number. We use the high order 32-bits for the epoch and the low order
126
+ 32-bits for the counter. Because zxid consists of two parts, zxid can be represented both as a
127
+ number and as a pair of integers, (_epoch, count_). The epoch number represents a
128
+ change in leadership. Each time a new leader comes into power it will have its
129
+ own epoch number. We have a simple algorithm to assign a unique zxid to a proposal:
130
+ the leader simply increments the zxid to obtain a unique zxid for each proposal. _Leadership activation will ensure that only one leader uses a given epoch, so our
131
+ simple algorithm guarantees that every proposal will have a unique id._
132
+
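+ For illustration only, the packing arithmetic can be sketched in a few lines of
+ Java (ZooKeeper keeps similar helpers in its server utilities; the method names
+ below are made up for this example):
+
+     // pack an (epoch, counter) pair into a 64-bit zxid
+     static long makeZxid(long epoch, long counter) {
+         return (epoch << 32) | (counter & 0xffffffffL);
+     }
+     static long epochOf(long zxid)   { return zxid >> 32; }        // high 32 bits
+     static long counterOf(long zxid) { return zxid & 0xffffffffL; } // low 32 bits
+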
133
+ ZooKeeper messaging consists of two phases:
134
+
135
+ * *_Leader activation_* :
136
+ In this phase a leader establishes the correct state of the system
137
+ and gets ready to start making proposals.
138
+
139
+ * *_Active messaging_* :
140
+ In this phase a leader accepts messages to propose and coordinates message delivery.
141
+
142
+ ZooKeeper is a holistic protocol. We do not focus on individual proposals; rather,
+ we look at the stream of proposals as a whole. Our strict ordering allows us to do this
144
+ efficiently and greatly simplifies our protocol. Leadership activation embodies
145
+ this holistic concept. A leader becomes active only when a quorum of followers
146
+ (the leader counts as a follower as well; you can always vote for yourself) has synced
+ up with the leader, so that they have the same state. This state consists of all of the
148
+ proposals that the leader believes have been committed and the proposal to follow
149
+ the leader, the NEW_LEADER proposal. (Hopefully you are thinking to
150
+ yourself, _Does the set of proposals that the leader believes has been committed
151
+ include all the proposals that really have been committed?_ The answer is _yes_.
152
+ Below, we make clear why.)
153
+
154
+ <a name="sc_leaderElection"></a>
155
+
156
+ ### Leader Activation
157
+
158
+ Leader activation includes leader election (`FastLeaderElection`).
159
+ ZooKeeper messaging doesn't care about the exact method of electing a leader as long as the following holds:
160
+
161
+ * The leader has seen the highest zxid of all the followers.
162
+ * A quorum of servers have committed to following the leader.
163
+
164
+ Of these two requirements only the first, the highest zxid among the followers,
165
+ needs to hold for correct operation. The second requirement, a quorum of followers,
166
+ just needs to hold with high probability. We are going to recheck the second requirement,
167
+ so if a failure happens during or after the leader election and quorum is lost,
168
+ we will recover by abandoning leader activation and running another election.
169
+
170
+ After leader election a single server will be designated as a leader and start
171
+ waiting for followers to connect. The rest of the servers will try to connect to
172
+ the leader. The leader will sync up with the followers by sending any proposals they
173
+ are missing, or if a follower is missing too many proposals, it will send a full
174
+ snapshot of the state to the follower.
175
+
176
+ There is a corner case in which a follower that has proposals, `U`, not seen
177
+ by a leader arrives. Proposals are seen in order, so the proposals of `U` will have zxids
178
+ higher than zxids seen by the leader. The follower must have arrived after the
179
+ leader election, otherwise the follower would have been elected leader given that
180
+ it has seen a higher zxid. Since committed proposals must be seen by a quorum of
181
+ servers, and a quorum of servers that elected the leader did not see `U`, the proposals
182
+ of `U` have not been committed, so they can be discarded. When the follower connects
183
+ to the leader, the leader will tell the follower to discard `U`.
184
+
185
+ A new leader establishes a zxid to start using for new proposals by getting the
186
+ epoch, e, of the highest zxid it has seen and setting the next zxid to use to be
187
+ (e+1, 0). After the leader syncs with a follower, it will propose a NEW_LEADER
188
+ proposal. Once the NEW_LEADER proposal has been committed, the leader will activate
189
+ and start receiving and issuing proposals.
190
+
191
+ It all sounds complicated, but here are the basic rules of operation during leader
192
+ activation:
193
+
194
+ * A follower will ACK the NEW_LEADER proposal after it has synced with the leader.
195
+ * A follower will only ACK a NEW_LEADER proposal with a given zxid from a single server.
196
+ * A new leader will COMMIT the NEW_LEADER proposal when a quorum of followers has ACKed it.
197
+ * A follower will commit any state it received from the leader when it receives the COMMIT for the NEW_LEADER proposal.
198
+ * A new leader will not accept new proposals until the NEW_LEADER proposal has been COMMITTED.
199
+
200
+ If leader election terminates erroneously, we don't have a problem since the
201
+ NEW_LEADER proposal will not be committed, because the leader will not have quorum.
202
+ When this happens, the leader and any remaining followers will time out and go back
203
+ to leader election.
204
+
205
+ <a name="sc_activeMessaging"></a>
206
+
207
+ ### Active Messaging
208
+
209
+ Leader activation does all the heavy lifting. Once the leader is crowned it can
+ start blasting out proposals. As long as it remains the leader no other leader can
+ emerge, since no other candidate will be able to get a quorum of followers. If a new
+ leader does emerge, it means that the old leader has lost quorum, and the new leader
+ will clean up any mess left over during its leadership activation.
215
+
216
+ ZooKeeper messaging operates similarly to a classic two-phase commit.
217
+
218
+ ![Two phase commit](images/2pc.jpg)
219
+
220
+ All communication channels are FIFO, so everything is done in order. Specifically
221
+ the following operating constraints are observed:
222
+
223
+ * The leader sends proposals to all followers in
224
+ the same order. Moreover, this order follows the order in which requests have been
225
+ received. Because we use FIFO channels this means that followers also receive proposals in order.
226
+ * Followers process messages in the order they are received. This
227
+ means that messages will be ACKed in order and the leader will receive ACKs from
228
+ followers in order, due to the FIFO channels. It also means that if message `m`
229
+ has been written to non-volatile storage, all messages that were proposed before
230
+ `m` have been written to non-volatile storage.
231
+ * The leader will issue a COMMIT to all followers as soon as a
232
+ quorum of followers have ACKed a message. Since messages are ACKed in order,
233
+ COMMITs will be sent by the leader, and received by the followers, in order.
234
+ * COMMITs are processed in order. Followers deliver a proposal
235
+ message when that proposal is committed.
236
+
237
+ <a name="sc_summary"></a>
238
+
239
+ ### Summary
240
+
241
+ So there you go. Why does it work? Specifically, why does the set of proposals
+ believed committed by a new leader always contain any proposal that has actually been committed?
243
+ First, all proposals have a unique zxid, so unlike other protocols, we never have
244
+ to worry about two different values being proposed for the same zxid; followers
245
+ (a leader is also a follower) see and record proposals in order; proposals are
246
+ committed in order; there is only one active leader at a time since followers only
247
+ follow a single leader at a time; a new leader has seen all committed proposals
248
+ from the previous epoch since it has seen the highest zxid from a quorum of servers;
249
+ any uncommitted proposals from a previous epoch seen by a new leader will be committed
250
+ by that leader before it becomes active.
251
+
252
+ <a name="sc_comparisons"></a>
253
+
254
+ ### Comparisons
255
+
256
+ Isn't this just Multi-Paxos? No, Multi-Paxos requires some way of assuring that
257
+ there is only a single coordinator. We do not count on such assurances. Instead
258
+ we use the leader activation to recover from leadership change or old leaders
259
+ believing they are still active.
260
+
261
+ Isn't this just Paxos? Doesn't your active messaging phase look just like phase 2 of Paxos?
+ Actually, to us active messaging looks just like two-phase commit without the need to
+ handle aborts. Active messaging is different from both in the sense that it has
+ cross-proposal ordering requirements. If we do not maintain strict FIFO ordering of
265
+ all packets, it all falls apart. Also, our leader activation phase is different from
266
+ both of them. In particular, our use of epochs allows us to skip blocks of uncommitted
267
+ proposals and to not worry about duplicate proposals for a given zxid.
268
+
269
+ <a name="sc_consistency"></a>
270
+
271
+
272
+ ## Consistency Guarantees
273
+
274
+ The [consistency](https://jepsen.io/consistency) guarantees of ZooKeeper lie between sequential consistency and linearizability. In this section, we explain the exact consistency guarantees that ZooKeeper provides.
275
+
276
+ Write operations in ZooKeeper are *linearizable*. In other words, each `write` will appear to take effect atomically at some point between when the client issues the request and receives the corresponding response. This means that the writes performed by all the clients in ZooKeeper can be totally ordered in such a way that respects the real-time ordering of these writes. However, merely stating that write operations are linearizable is meaningless unless we also talk about read operations.
277
+
278
+ Read operations in ZooKeeper are *not linearizable* since they can return potentially stale data. This is because a `read` in ZooKeeper is not a quorum operation and a server will respond immediately to a client that is performing a `read`. ZooKeeper does this because it prioritizes performance over consistency for the read use case. However, reads in ZooKeeper are *sequentially consistent*, because `read` operations will appear to take effect in some sequential order that furthermore respects the order of each client's operations. A common pattern to work around this is to issue a `sync` before issuing a `read`. This too does **not** strictly guarantee up-to-date data because `sync` is [not currently a quorum operation](https://issues.apache.org/jira/browse/ZOOKEEPER-1675). To illustrate, consider a scenario where two servers simultaneously think they are the leader, something that could occur if the TCP connection timeout is smaller than `syncLimit * tickTime`. Note that this is [unlikely](https://www.amazon.com/ZooKeeper-Distributed-Coordination-Flavio-Junqueira/dp/1449361307) to occur in practice, but should be kept in mind nevertheless when discussing strict theoretical guarantees. Under this scenario, it is possible that the `sync` is served by the “leader” with stale data, thereby allowing the following `read` to be stale as well. The stronger guarantee of linearizability is provided if an actual quorum operation (e.g., a `write`) is performed before a `read`.
279
+
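+ As a sketch in the Java client (path, handle, and error handling are
+ illustrative), the sync-before-read pattern looks like this; `sync` is
+ asynchronous, so the read belongs in its callback:
+
+     // zk is a connected org.apache.zookeeper.ZooKeeper handle
+     zk.sync("/config", (rc, path, ctx) -> {
+         try {
+             // best-effort freshness: read after the sync completes
+             byte[] value = zk.getData("/config", false, null);
+         } catch (KeeperException | InterruptedException e) {
+             // handle the failed read
+         }
+     }, null);
+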
280
+ Overall, the consistency guarantees of ZooKeeper are formally captured by the notion of [ordered sequential consistency](http://webee.technion.ac.il/people/idish/ftp/OSC-IPL17.pdf) or `OSC(U)` to be exact, which lies between sequential consistency and linearizability.
281
+
282
+ <a name="sc_quorum"></a>
283
+
284
+ ## Quorums
285
+
286
+ Atomic broadcast and leader election use the notion of quorum to guarantee a consistent
287
+ view of the system. By default, ZooKeeper uses majority quorums, which means that every
+ decision in one of these protocols requires a majority of servers to vote on it. One example is
289
+ acknowledging a leader proposal: the leader can only commit once it receives an
290
+ acknowledgement from a quorum of servers.
291
+
292
+ If we extract the properties that we really need from our use of majorities, we have that we only
293
+ need to guarantee that groups of processes used to validate an operation by voting (e.g., acknowledging
294
+ a leader proposal) pairwise intersect in at least one server. Using majorities guarantees such a property.
295
+ However, there are other ways of constructing quorums different from majorities. For example, we can assign
296
+ weights to the votes of servers, and say that the votes of some servers are more important. To obtain a quorum,
297
+ we get enough votes so that the sum of weights of all votes is larger than half of the total sum of all weights.
298
+
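+ As a sketch (this is not ZooKeeper's implementation), the weighted-quorum test
+ reduces to a simple predicate over the voters' weights:
+
+     // true if the voters' combined weight exceeds half of the total weight
+     static boolean isWeightedQuorum(java.util.Map<Long, Long> weightById,
+                                     java.util.Set<Long> voterIds) {
+         long total = 0, voted = 0;
+         for (java.util.Map.Entry<Long, Long> e : weightById.entrySet()) {
+             total += e.getValue();
+             if (voterIds.contains(e.getKey())) voted += e.getValue();
+         }
+         return 2 * voted > total;
+     }
+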
299
+ A different construction that uses weights and is useful in wide-area deployments (co-locations) is a hierarchical
300
+ one. With this construction, we split the servers into disjoint groups and assign weights to processes. To form
301
+ a quorum, we have to get a hold of enough servers from a majority of groups G, such that for each group g in G,
302
+ the sum of votes from g is larger than half of the sum of weights in g. Interestingly, this construction enables
303
+ smaller quorums. If we have, for example, 9 servers, we split them into 3 groups, and assign a weight of 1 to each
304
+ server, then we are able to form quorums of size 4. Note that two subsets of processes composed each of a majority
305
+ of servers from each of a majority of groups necessarily have a non-empty intersection. It is reasonable to expect
306
+ that a majority of co-locations will have a majority of servers available with high probability.
307
+
308
+ With ZooKeeper, we provide the user with the ability to configure servers to use majority quorums, weights, or a
309
+ hierarchy of groups.
310
+
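+ For the nine-server example above, such a hierarchy is expressed in the
+ configuration with the _group_ and _weight_ keywords (see the Hierarchical
+ Quorums documentation for the full syntax):
+
+     group.1=1:2:3
+     group.2=4:5:6
+     group.3=7:8:9
+     weight.1=1
+     weight.2=1
+     weight.3=1
+     weight.4=1
+     weight.5=1
+     weight.6=1
+     weight.7=1
+     weight.8=1
+     weight.9=1
+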
311
+ <a name="sc_logging"></a>
312
+
313
+ ## Logging
314
+
315
+ ZooKeeper uses [slf4j](http://www.slf4j.org/index.html) as an abstraction layer for logging.
+ [Logback](https://logback.qos.ch/) has been the chosen logging backend since ZooKeeper version 3.8.0.
317
+ For better embedding support, it is planned in the future to leave the decision of choosing the final logging implementation to the end user.
318
+ Therefore, always use the slf4j api to write log statements in the code, but configure logback for how to log at runtime.
319
+ Note that slf4j has no FATAL level; former messages at FATAL level have been moved to ERROR level.
320
+ For information on configuring logback for
321
+ ZooKeeper, see the [Logging](zookeeperAdmin.html#sc_logging) section
322
+ of the [ZooKeeper Administrator's Guide.](zookeeperAdmin.html)
323
+
324
+ <a name="sc_developerGuidelines"></a>
325
+
326
+ ### Developer Guidelines
327
+
328
+ Please follow the [slf4j manual](http://www.slf4j.org/manual.html) when creating log statements within code.
329
+ Also read the [FAQ on performance](http://www.slf4j.org/faq.html#logging\_performance) when creating log statements. Patch reviewers will look for the following:
330
+
331
+ <a name="sc_rightLevel"></a>
332
+
333
+ #### Logging at the Right Level
334
+
335
+ There are several levels of logging in slf4j.
336
+
337
+ It's important to pick the right one. In order from higher to lower severity:
338
+
339
+ 1. ERROR level designates error events that might still allow the application to continue running.
340
+ 1. WARN level designates potentially harmful situations.
341
+ 1. INFO level designates informational messages that highlight the progress of the application at coarse-grained level.
342
+ 1. DEBUG Level designates fine-grained informational events that are most useful to debug an application.
343
+ 1. TRACE Level designates finer-grained informational events than the DEBUG.
344
+
345
+ ZooKeeper is typically run in production such that log messages of INFO level
346
+ severity and higher (more severe) are output to the log.
347
+
348
+ <a name="sc_slf4jIdioms"></a>
349
+
350
+ #### Use of Standard slf4j Idioms
351
+
352
+ _Static Message Logging_
353
+
354
+ LOG.debug("process completed successfully!");
355
+
356
+ However, when parameterized messages are required, use formatting anchors.
357
+
358
+ LOG.debug("got {} messages in {} minutes",new Object[]{count,time});
359
+
360
+ _Naming_
361
+
362
+ Loggers should be named after the class in which they are used.
363
+
364
+ import org.slf4j.Logger;
+ import org.slf4j.LoggerFactory;
+
+ public class Foo {
+     private static final Logger LOG = LoggerFactory.getLogger(Foo.class);
+     // ... other members ...
+     public Foo() {
+         LOG.info("constructing Foo");
+     }
+ }
369
+
370
+ _Exception handling_
371
+
372
+ try {
373
+ // code
374
+ } catch (XYZException e) {
375
+ // do this
376
+ LOG.error("Something bad happened", e);
377
+ // don't do this (generally)
378
+ // LOG.error(e);
379
+ // why? because the "don't do" case hides the stack trace
380
+
381
+ // continue process here as you need... recover or (re)throw
382
+ }
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md ADDED
@@ -0,0 +1,269 @@
1
+ <!--
2
+ Copyright 2002-2021 The Apache Software Foundation
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ //-->
16
+
17
+ # ZooKeeper Monitor Guide
18
+
19
+ * [New Metrics System](#Metrics-System)
20
+ * [Metrics](#Metrics)
21
+ * [Prometheus](#Prometheus)
22
+ * [Alerting with Prometheus](#Alerting)
23
+ * [Grafana](#Grafana)
24
+ * [InfluxDB](#influxdb)
25
+
26
+ * [JMX](#JMX)
27
+
28
+ * [Four letter words](#four-letter-words)
29
+
30
+ <a name="Metrics-System"></a>
31
+
32
+ ## New Metrics System
33
+ The `New Metrics System` has been available since 3.6.0. It provides abundant metrics
+ to help users monitor ZooKeeper on topics such as znode, network, disk, quorum, leader election,
+ client, security, failures, watch/session, requestProcessor, and so forth.
36
+
37
+ <a name="Metrics"></a>
38
+
39
+ ### Metrics
40
+ All the metrics are included in `ServerMetrics.java`.
41
+
42
+ <a name="Prometheus"></a>
43
+
44
+
45
+ ### Pre-requisites:
46
+ - Enable the `Prometheus MetricsProvider` by setting the following in `zoo.cfg`:
47
+ ```conf
48
+ metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
49
+ ```
50
+
51
+ - The port for Prometheus metrics can be configured using:
52
+ ```conf
53
+ metricsProvider.httpPort=7000 # Default port is 7000
54
+ ```
55
+
56
+ #### Enabling HTTPS for Prometheus Metrics:
57
+
58
+ ZooKeeper also supports SSL for Prometheus metrics, which provides secure data transmission. To enable this, configure an HTTPS port and set up SSL certificates as follows:
59
+
60
+ - Define the HTTPS port:
61
+ ```conf
62
+ metricsProvider.httpsPort=4443
63
+ ```
64
+
65
+ - Configure the SSL key store (holds the server’s private key and certificates):
66
+ ```conf
67
+ metricsProvider.ssl.keyStore.location=/path/to/keystore.jks
68
+ metricsProvider.ssl.keyStore.password=your_keystore_password
69
+ metricsProvider.ssl.keyStore.type=jks # Default is JKS
70
+ ```
71
+
72
+ - Configure the SSL trust store (used to verify client certificates):
73
+ ```conf
74
+ metricsProvider.ssl.trustStore.location=/path/to/truststore.jks
75
+ metricsProvider.ssl.trustStore.password=your_truststore_password
76
+ metricsProvider.ssl.trustStore.type=jks # Default is JKS
77
+ ```
78
+
79
+ - **Note**: You can enable both HTTP and HTTPS simultaneously by defining both ports:
80
+ ```conf
81
+ metricsProvider.httpPort=7000
82
+ metricsProvider.httpsPort=4443
83
+ ```
84
+ ### Prometheus
85
+ - Running a [Prometheus](https://prometheus.io/) monitoring service is the easiest way to ingest and record ZooKeeper's metrics.
86
+
87
+ - Install Prometheus:
88
+ Go to the official website download [page](https://prometheus.io/download/) and download the latest release.
89
+
90
+ - Set Prometheus's scraper to target the ZooKeeper cluster endpoints:
91
+
92
+ ```bash
93
+ cat > /tmp/test-zk.yaml <<EOF
94
+ global:
95
+ scrape_interval: 10s
96
+ scrape_configs:
97
+ - job_name: test-zk
98
+ static_configs:
99
+ - targets: ['192.168.10.32:7000','192.168.10.33:7000','192.168.10.34:7000']
100
+ EOF
101
+ cat /tmp/test-zk.yaml
102
+ ```
103
+
104
+ - Set up the Prometheus handler:
105
+
106
+ ```bash
107
+ nohup /tmp/prometheus \
108
+ --config.file /tmp/test-zk.yaml \
109
+ --web.listen-address ":9090" \
110
+ --storage.tsdb.path "/tmp/test-zk.data" >> /tmp/test-zk.log 2>&1 &
111
+ ```
112
+
113
+ - Now Prometheus will scrape zk metrics every 10 seconds.
114
+
115
+ <a name="Alerting"></a>
116
+
117
+ ### Alerting with Prometheus
118
+ - We recommend that you read [Prometheus Official Alerting Page](https://prometheus.io/docs/practices/alerting/) to explore
119
+ some principles of alerting.
120
+
121
+ - We recommend that you use [Prometheus Alertmanager](https://www.prometheus.io/docs/alerting/latest/alertmanager/) which can
122
+ help users receive alerting emails or instant messages (by webhook) in a more convenient way.
123
+
124
+ - We provide an alerting example highlighting metrics that deserve special attention. Note: this is for your reference only,
+ and you need to adjust them according to your actual situation and resource environment.
126
+
127
+
128
+ use ./promtool check rules rules/zk.yml to check the correctness of the config file
129
+ cat rules/zk.yml
130
+
131
+ groups:
132
+ - name: zk-alert-example
133
+ rules:
134
+ - alert: ZooKeeper server is down
135
+ expr: up == 0
136
+ for: 1m
137
+ labels:
138
+ severity: critical
139
+ annotations:
140
+ summary: "Instance {{ $labels.instance }} ZooKeeper server is down"
141
+ description: "{{ $labels.instance }} of job {{$labels.job}} ZooKeeper server is down: [{{ $value }}]."
142
+
143
+ - alert: create too many znodes
144
+ expr: znode_count > 1000000
145
+ for: 1m
146
+ labels:
147
+ severity: warning
148
+ annotations:
149
+ summary: "Instance {{ $labels.instance }} create too many znodes"
150
+ description: "{{ $labels.instance }} of job {{$labels.job}} create too many znodes: [{{ $value }}]."
151
+
152
+ - alert: create too many connections
153
+ expr: num_alive_connections > 50 # suppose we use the default maxClientCnxns: 60
154
+ for: 1m
155
+ labels:
156
+ severity: warning
157
+ annotations:
158
+ summary: "Instance {{ $labels.instance }} create too many connections"
159
+ description: "{{ $labels.instance }} of job {{$labels.job}} create too many connections: [{{ $value }}]."
160
+
161
+ - alert: znode total occupied memory is too big
162
+ expr: approximate_data_size /1024 /1024 > 1 * 1024 # more than 1024 MB(1 GB)
163
+ for: 1m
164
+ labels:
165
+ severity: warning
166
+ annotations:
167
+ summary: "Instance {{ $labels.instance }} znode total occupied memory is too big"
168
+ description: "{{ $labels.instance }} of job {{$labels.job}} znode total occupied memory is too big: [{{ $value }}] MB."
169
+
170
+ - alert: set too many watch
171
+ expr: watch_count > 10000
172
+ for: 1m
173
+ labels:
174
+ severity: warning
175
+ annotations:
176
+ summary: "Instance {{ $labels.instance }} set too many watch"
177
+ description: "{{ $labels.instance }} of job {{$labels.job}} set too many watch: [{{ $value }}]."
178
+
179
+ - alert: a leader election happens
180
+ expr: increase(election_time_count[5m]) > 0
181
+ for: 1m
182
+ labels:
183
+ severity: warning
184
+ annotations:
185
+ summary: "Instance {{ $labels.instance }} a leader election happens"
186
+ description: "{{ $labels.instance }} of job {{$labels.job}} a leader election happens: [{{ $value }}]."
187
+
188
+ - alert: open too many files
189
+ expr: open_file_descriptor_count > 300
190
+ for: 1m
191
+ labels:
192
+ severity: warning
193
+ annotations:
194
+ summary: "Instance {{ $labels.instance }} open too many files"
195
+ description: "{{ $labels.instance }} of job {{$labels.job}} open too many files: [{{ $value }}]."
196
+
197
+ - alert: fsync time is too long
198
+ expr: rate(fsynctime_sum[1m]) > 100
199
+ for: 1m
200
+ labels:
201
+ severity: warning
202
+ annotations:
203
+ summary: "Instance {{ $labels.instance }} fsync time is too long"
204
+ description: "{{ $labels.instance }} of job {{$labels.job}} fsync time is too long: [{{ $value }}]."
205
+
206
+ - alert: take snapshot time is too long
207
+ expr: rate(snapshottime_sum[5m]) > 100
208
+ for: 1m
209
+ labels:
210
+ severity: warning
211
+ annotations:
212
+ summary: "Instance {{ $labels.instance }} take snapshot time is too long"
213
+ description: "{{ $labels.instance }} of job {{$labels.job}} take snapshot time is too long: [{{ $value }}]."
214
+
215
+ - alert: avg latency is too high
216
+ expr: avg_latency > 100
217
+ for: 1m
218
+ labels:
219
+ severity: warning
220
+ annotations:
221
+ summary: "Instance {{ $labels.instance }} avg latency is too high"
222
+ description: "{{ $labels.instance }} of job {{$labels.job}} avg latency is too high: [{{ $value }}]."
223
+
224
+ - alert: JvmMemoryFillingUp
225
+ expr: jvm_memory_bytes_used / jvm_memory_bytes_max{area="heap"} > 0.8
226
+ for: 5m
227
+ labels:
228
+ severity: warning
229
+ annotations:
230
+ summary: "JVM memory filling up (instance {{ $labels.instance }})"
231
+ description: "JVM memory is filling up (> 80%)\n labels: {{ $labels }} value = {{ $value }}\n"
232
+
233
+
234
+ <a name="Grafana"></a>
235
+
236
+ ### Grafana
237
+ - Grafana has built-in Prometheus support; just add a Prometheus data source:
238
+
239
+ ```bash
240
+ Name: test-zk
241
+ Type: Prometheus
242
+ Url: http://localhost:9090
243
+ Access: proxy
244
+ ```
245
+ - Then download and import the default ZooKeeper dashboard [template](https://grafana.com/grafana/dashboards/10465) and customize.
246
+ - Users who have good improvements to contribute can ask for a Grafana dashboard account by writing an email to **dev@zookeeper.apache.org**.
247
+
248
+ <a name="influxdb"></a>
249
+
250
+ ### InfluxDB
251
+
252
+ InfluxDB is an open source time series database that is often used to store metrics
+ from ZooKeeper. You can [download](https://portal.influxdata.com/downloads/) the
+ open source version or create a [free](https://cloud2.influxdata.com/signup)
+ account on InfluxDB Cloud. In either case, configure the [Apache ZooKeeper
+ Telegraf plugin](https://www.influxdata.com/integration/apache-zookeeper/) to
+ start collecting and storing metrics from your ZooKeeper clusters into your
+ InfluxDB instance. There is also an [Apache ZooKeeper InfluxDB
+ template](https://www.influxdata.com/influxdb-templates/zookeeper-monitor/) that
+ includes the Telegraf configurations and a dashboard to get you set up right
+ away.
262
+
263
+ <a name="JMX"></a>
264
+ ## JMX
265
+ More details can be found [here](http://zookeeper.apache.org/doc/current/zookeeperJMX.html).
266
+
267
+ <a name="four-letter-words"></a>
268
+ ## Four letter words
269
+ More details can be found [here](http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkCommands).
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperOver.md ADDED
@@ -0,0 +1,336 @@
1
+ <!--
2
+ Copyright 2002-2004 The Apache Software Foundation
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ //-->
16
+
17
+ # ZooKeeper
18
+
19
+ * [ZooKeeper: A Distributed Coordination Service for Distributed Applications](#ch_DesignOverview)
20
+ * [Design Goals](#sc_designGoals)
21
+ * [Data model and the hierarchical namespace](#sc_dataModelNameSpace)
22
+ * [Nodes and ephemeral nodes](#Nodes+and+ephemeral+nodes)
23
+ * [Conditional updates and watches](#Conditional+updates+and+watches)
24
+ * [Guarantees](#Guarantees)
25
+ * [Simple API](#Simple+API)
26
+ * [Implementation](#Implementation)
27
+ * [Uses](#Uses)
28
+ * [Performance](#Performance)
29
+ * [Reliability](#Reliability)
30
+ * [The ZooKeeper Project](#The+ZooKeeper+Project)
31
+
32
+ <a name="ch_DesignOverview"></a>
33
+
34
+ ## ZooKeeper: A Distributed Coordination Service for Distributed Applications
35
+
36
+ ZooKeeper is a distributed, open-source coordination service for
37
+ distributed applications. It exposes a simple set of primitives that
38
+ distributed applications can build upon to implement higher level services
39
+ for synchronization, configuration maintenance, and groups and naming. It
40
+ is designed to be easy to program to, and uses a data model styled after
41
+ the familiar directory tree structure of file systems. It runs in Java and
42
+ has bindings for both Java and C.
43
+
44
+ Coordination services are notoriously hard to get right. They are
45
+ especially prone to errors such as race conditions and deadlock. The
46
+ motivation behind ZooKeeper is to relieve distributed applications of the
47
+ responsibility of implementing coordination services from scratch.
48
+
49
+ <a name="sc_designGoals"></a>
50
+
51
+ ### Design Goals
52
+
53
+ **ZooKeeper is simple.** ZooKeeper
54
+ allows distributed processes to coordinate with each other through a
55
+ shared hierarchical namespace which is organized similarly to a standard
56
+ file system. The namespace consists of data registers - called znodes,
57
+ in ZooKeeper parlance - and these are similar to files and directories.
58
+ Unlike a typical file system, which is designed for storage, ZooKeeper
59
+ data is kept in-memory, which means ZooKeeper can achieve high
60
+ throughput and low latency numbers.
61
+
62
+ The ZooKeeper implementation puts a premium on high performance,
63
+ highly available, strictly ordered access. The performance aspects of
64
+ ZooKeeper mean it can be used in large, distributed systems. The
65
+ reliability aspects keep it from being a single point of failure. The
66
+ strict ordering means that sophisticated synchronization primitives can
67
+ be implemented at the client.
68
+
69
+ **ZooKeeper is replicated.** Like the
70
+ distributed processes it coordinates, ZooKeeper itself is intended to be
71
+ replicated over a set of hosts called an ensemble.
72
+
73
+ ![ZooKeeper Service](images/zkservice.jpg)
74
+
75
+ The servers that make up the ZooKeeper service must all know about
76
+ each other. They maintain an in-memory image of state, along with
77
+ transaction logs and snapshots in a persistent store. As long as a
78
+ majority of the servers are available, the ZooKeeper service will be
79
+ available.
80
+
81
+ Clients connect to a single ZooKeeper server. The client maintains
82
+ a TCP connection through which it sends requests, gets responses, gets
83
+ watch events, and sends heartbeats. If the TCP connection to the server
84
+ breaks, the client will connect to a different server.
85
+
86
+ **ZooKeeper is ordered.** ZooKeeper
87
+ stamps each update with a number that reflects the order of all
88
+ ZooKeeper transactions. Subsequent operations can use the order to
89
+ implement higher-level abstractions, such as synchronization
90
+ primitives.
91
+
92
+ **ZooKeeper is fast.** It is
93
+ especially fast in "read-dominant" workloads. ZooKeeper applications run
94
+ on thousands of machines, and it performs best where reads are more
95
+ common than writes, at ratios of around 10:1.
96
+
97
+ <a name="sc_dataModelNameSpace"></a>
98
+
99
+ ### Data model and the hierarchical namespace
100
+
101
+ The namespace provided by ZooKeeper is much like that of a
102
+ standard file system. A name is a sequence of path elements separated by
103
+ a slash (/). Every node in ZooKeeper's namespace is identified by a
104
+ path.
105
+
106
+ #### ZooKeeper's Hierarchical Namespace
107
+
108
+ ![ZooKeeper's Hierarchical Namespace](images/zknamespace.jpg)
109
+
110
+ <a name="Nodes+and+ephemeral+nodes"></a>
111
+
112
+ ### Nodes and ephemeral nodes
113
+
114
+ Unlike standard file systems, each node in a ZooKeeper
115
+ namespace can have data associated with it as well as children. It is
116
+ like having a file-system that allows a file to also be a directory.
117
+ (ZooKeeper was designed to store coordination data: status information,
118
+ configuration, location information, etc., so the data stored at each
119
+ node is usually small, in the byte to kilobyte range.) We use the term
120
+ _znode_ to make it clear that we are talking about
121
+ ZooKeeper data nodes.
122
+
123
+ Znodes maintain a stat structure that includes version numbers for
124
+ data changes, ACL changes, and timestamps, to allow cache validations
125
+ and coordinated updates. Each time a znode's data changes, the version
126
+ number increases. For instance, whenever a client retrieves data it also
127
+ receives the version of the data.
128
+
129
+ The data stored at each znode in a namespace is read and written
130
+ atomically. Reads get all the data bytes associated with a znode and a
131
+ write replaces all the data. Each node has an Access Control List (ACL)
132
+ that restricts who can do what.
133
+
134
+ ZooKeeper also has the notion of ephemeral nodes. These znodes
135
+ exist as long as the session that created the znode is active. When the
136
+ session ends the znode is deleted.
137
+
138
+ <a name="Conditional+updates+and+watches"></a>
139
+
140
+ ### Conditional updates and watches
141
+
142
+ ZooKeeper supports the concept of _watches_.
143
+ Clients can set a watch on a znode. A watch will be triggered and
144
+ removed when the znode changes. When a watch is triggered, the client
145
+ receives a packet saying that the znode has changed. If the
146
+ connection between the client and one of the ZooKeeper servers is
147
+ broken, the client will receive a local notification.
148
+
149
+ **New in 3.6.0:** Clients can also set
150
+ permanent, recursive watches on a znode that are not removed when triggered
151
+ and that trigger for changes on the registered znode as well as any children
152
+ znodes recursively.
153
+
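+ As a minimal sketch with the Java client (path and handling are illustrative):
+
+     // zk is a connected org.apache.zookeeper.ZooKeeper handle; the watch
+     // below fires once, e.g. on NodeCreated, NodeDeleted or NodeDataChanged
+     Stat stat = zk.exists("/app/leader", event ->
+         System.out.println("watch event: " + event.getType()));
+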
154
+ <a name="Guarantees"></a>
155
+
156
+ ### Guarantees
157
+
158
+ ZooKeeper is very fast and very simple. Since its goal, though, is
159
+ to be a basis for the construction of more complicated services, such as
160
+ synchronization, it provides a set of guarantees. These are:
161
+
162
+ * Sequential Consistency - Updates from a client will be applied
163
+ in the order that they were sent.
164
+ * Atomicity - Updates either succeed or fail. No partial
165
+ results.
166
+ * Single System Image - A client will see the same view of the
167
+ service regardless of the server that it connects to. i.e., a
168
+ client will never see an older view of the system even if the
169
+ client fails over to a different server with the same session.
170
+ * Reliability - Once an update has been applied, it will persist
171
+ from that time forward until a client overwrites the update.
172
+ * Timeliness - The client's view of the system is guaranteed to
173
+ be up-to-date within a certain time bound.
174
+
175
+ <a name="Simple+API"></a>
176
+
177
+ ### Simple API
178
+
179
+ One of the design goals of ZooKeeper is providing a very simple
180
+ programming interface. As a result, it supports only these
181
+ operations:
182
+
183
+ * *create* :
184
+ creates a node at a location in the tree
185
+
186
+ * *delete* :
187
+ deletes a node
188
+
189
+ * *exists* :
190
+ tests if a node exists at a location
191
+
192
+ * *get data* :
193
+ reads the data from a node
194
+
195
+ * *set data* :
196
+ writes data to a node
197
+
198
+ * *get children* :
199
+ retrieves a list of children of a node
200
+
201
+ * *sync* :
202
+ waits for data to be propagated
203
+
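+ A minimal sketch of these operations through the Java client (connection
+ string, paths, and data are illustrative; error handling is omitted):
+
+     ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 3000, event -> {});
+     zk.create("/app/config", "v1".getBytes(),              // create
+               ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+     byte[] data = zk.getData("/app/config", false, null);  // get data
+     zk.setData("/app/config", "v2".getBytes(), -1);        // set data (any version)
+     zk.delete("/app/config", -1);                          // delete (any version)
+     zk.close();
+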
204
+ <a name="Implementation"></a>
205
+
206
+ ### Implementation
207
+
208
+ [ZooKeeper Components](#zkComponents) shows the high-level components
209
+ of the ZooKeeper service. With the exception of the request processor,
210
+ each of
211
+ the servers that make up the ZooKeeper service replicates its own copy
212
+ of each of the components.
213
+
214
+ <a name="zkComponents"></a>
215
+
216
+ ![ZooKeeper Components](images/zkcomponents.jpg)
217
+
218
+ The replicated database is an in-memory database containing the
219
+ entire data tree. Updates are logged to disk for recoverability, and
220
+ writes are serialized to disk before they are applied to the in-memory
221
+ database.
222
+
223
+ Every ZooKeeper server services clients. Clients connect to
224
+ exactly one server to submit requests. Read requests are serviced from
225
+ the local replica of each server database. Requests that change the
226
+ state of the service, write requests, are processed by an agreement
227
+ protocol.
228
+
229
+ As part of the agreement protocol all write requests from clients
230
+ are forwarded to a single server, called the
231
+ _leader_. The rest of the ZooKeeper servers, called
232
+ _followers_, receive message proposals from the
233
+ leader and agree upon message delivery. The messaging layer takes care
234
+ of replacing leaders on failures and syncing followers with
235
+ leaders.
236
+
237
+ ZooKeeper uses a custom atomic messaging protocol. Since the
238
+ messaging layer is atomic, ZooKeeper can guarantee that the local
239
+ replicas never diverge. When the leader receives a write request, it
240
+ calculates what the state of the system is when the write is to be
241
+ applied and transforms this into a transaction that captures this new
242
+ state.
243
+
244
+ <a name="Uses"></a>
245
+
246
+ ### Uses
247
+
248
+ The programming interface to ZooKeeper is deliberately simple.
249
+ With it, however, you can implement higher-order operations, such as
+ synchronization primitives, group membership, ownership, etc.
251
+
252
+ <a name="Performance"></a>
253
+
254
+ ### Performance
255
+
256
+ ZooKeeper is designed to be highly performant. But is it? The
+ results of ZooKeeper's development team at Yahoo! Research indicate
258
+ that it is. (See [ZooKeeper Throughput as the Read-Write Ratio Varies](#zkPerfRW).) Its performance is
+ especially high in applications where reads outnumber writes, since writes
260
+ involve synchronizing the state of all servers. (Reads outnumbering
261
+ writes is typically the case for a coordination service.)
262
+
263
+ <a name="zkPerfRW"></a>
264
+
265
+ ![ZooKeeper Throughput as the Read-Write Ratio Varies](images/zkperfRW-3.2.jpg)
266
+
267
+ The [ZooKeeper Throughput as the Read-Write Ratio Varies](#zkPerfRW) is a throughput
268
+ graph of ZooKeeper release 3.2 running on servers with dual 2GHz
+ Xeon processors and two SATA 15K RPM drives. One drive was used as a
270
+ dedicated ZooKeeper log device. The snapshots were written to
271
+ the OS drive. Write requests were 1K writes and the reads were
272
+ 1K reads. "Servers" indicate the size of the ZooKeeper
273
+ ensemble, the number of servers that make up the
274
+ service. Approximately 30 other servers were used to simulate
275
+ the clients. The ZooKeeper ensemble was configured such that
276
+ leaders do not allow connections from clients.
277
+
278
+ ######Note
279
+ >In version 3.2 r/w performance improved by ~2x compared to
280
+ the [previous 3.1 release](http://zookeeper.apache.org/docs/r3.1.1/zookeeperOver.html#Performance).
281
+
282
+ Benchmarks also indicate that it is reliable, too.
283
+ [Reliability in the Presence of Errors](#zkPerfReliability) shows how a deployment responds to
284
+ various failures. The events marked in the figure are the following:
285
+
286
+ 1. Failure and recovery of a follower
287
+ 1. Failure and recovery of a different follower
288
+ 1. Failure of the leader
289
+ 1. Failure and recovery of two followers
290
+ 1. Failure of another leader
291
+
292
+ <a name="Reliability"></a>
293
+
294
+ ### Reliability
295
+
296
+ To show the behavior of the system over time as
297
+ failures are injected we ran a ZooKeeper service made up of
298
+ 7 machines. We ran the same saturation benchmark as before,
299
+ but this time we kept the write percentage at a constant
300
+ 30%, which is a conservative ratio of our expected
301
+ workloads.
302
+
303
+ <a name="zkPerfReliability"></a>
304
+
305
+ ![Reliability in the Presence of Errors](images/zkperfreliability.jpg)
306
+
307
+ There are a few important observations from this graph. First, if
308
+ followers fail and recover quickly, then ZooKeeper is able to sustain a
309
+ high throughput despite the failure. Second, and maybe more importantly, the
310
+ leader election algorithm allows for the system to recover fast enough
311
+ to prevent throughput from dropping substantially. In our observations,
312
+ ZooKeeper takes less than 200ms to elect a new leader. Third, as
313
+ followers recover, ZooKeeper is able to raise throughput again once they
314
+ start processing requests.
315
+
316
+ <a name="The+ZooKeeper+Project"></a>
317
+
318
+ ### The ZooKeeper Project
319
+
320
+ ZooKeeper has been
321
+ [successfully used](https://cwiki.apache.org/confluence/display/ZOOKEEPER/PoweredBy)
322
+ in many industrial applications. It is used at Yahoo! as the
323
+ coordination and failure recovery service for Yahoo! Message
324
+ Broker, which is a highly scalable publish-subscribe system
325
+ managing thousands of topics for replication and data
326
+ delivery. It is used by the Fetching Service for Yahoo!
327
+ crawler, where it also manages failure recovery. A number of
328
+ Yahoo! advertising systems also use ZooKeeper to implement
329
+ reliable services.
330
+
331
+ All users and developers are encouraged to join the
332
+ community and contribute their expertise. See the
333
+ [ZooKeeper Project on Apache](http://zookeeper.apache.org/)
334
+ for more information.
335
+
336
+
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperReconfig.md ADDED
@@ -0,0 +1,908 @@
1
+ <!--
2
+ Copyright 2002-2004 The Apache Software Foundation
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ //-->
16
+
17
+ # ZooKeeper Dynamic Reconfiguration
18
+
19
+ * [Overview](#ch_reconfig_intro)
20
+ * [Changes to Configuration Format](#ch_reconfig_format)
21
+ * [Specifying the client port](#sc_reconfig_clientport)
22
+ * [Specifying multiple server addresses](#sc_multiaddress)
23
+ * [The standaloneEnabled flag](#sc_reconfig_standaloneEnabled)
24
+ * [The reconfigEnabled flag](#sc_reconfig_reconfigEnabled)
25
+ * [Dynamic configuration file](#sc_reconfig_file)
26
+ * [Backward compatibility](#sc_reconfig_backward)
27
+ * [Upgrading to 3.5.0](#ch_reconfig_upgrade)
28
+ * [Dynamic Reconfiguration of the ZooKeeper Ensemble](#ch_reconfig_dyn)
29
+ * [API](#ch_reconfig_api)
30
+ * [Security](#sc_reconfig_access_control)
31
+ * [Retrieving the current dynamic configuration](#sc_reconfig_retrieving)
32
+ * [Modifying the current dynamic configuration](#sc_reconfig_modifying)
33
+ * [General](#sc_reconfig_general)
34
+ * [Incremental mode](#sc_reconfig_incremental)
35
+ * [Non-incremental mode](#sc_reconfig_nonincremental)
36
+ * [Conditional reconfig](#sc_reconfig_conditional)
37
+ * [Error conditions](#sc_reconfig_errors)
38
+ * [Additional comments](#sc_reconfig_additional)
39
+ * [Rebalancing Client Connections](#ch_reconfig_rebalancing)
40
+
41
+ <a name="ch_reconfig_intro"></a>
42
+
43
+ ## Overview
44
+
45
+ Prior to the 3.5.0 release, the membership and all other configuration
46
+ parameters of ZooKeeper were static - loaded during boot and immutable at
+ runtime. Operators resorted to "rolling restarts" - a manually intensive
48
+ and error-prone method of changing the configuration that has caused data
49
+ loss and inconsistency in production.
50
+
51
+ Starting with 3.5.0, “rolling restarts” are no longer needed!
52
+ ZooKeeper comes with full support for automated configuration changes: the
53
+ set of ZooKeeper servers, their roles (participant / observer), all ports,
54
+ and even the quorum system can be changed dynamically, without service
55
+ interruption and while maintaining data consistency. Reconfigurations are
56
+ performed immediately, just like other operations in ZooKeeper. Multiple
57
+ changes can be done using a single reconfiguration command. The dynamic
58
+ reconfiguration functionality does not limit operation concurrency, does
59
+ not require client operations to be stopped during reconfigurations, has a
60
+ very simple interface for administrators and no added complexity to other
61
+ client operations.
62
+
63
+ New client-side features allow clients to find out about configuration
64
+ changes and to update the connection string (list of servers and their
65
+ client ports) stored in their ZooKeeper handle. A probabilistic algorithm
66
+ is used to rebalance clients across the new configuration servers while
67
+ keeping the extent of client migrations proportional to the change in
68
+ ensemble membership.
69
+
70
+ This document provides the administrator manual for reconfiguration.
71
+ For a detailed description of the reconfiguration algorithms, performance
72
+ measurements, and more, please see our paper:
73
+
74
+ * *Shraer, A., Reed, B., Malkhi, D., Junqueira, F. Dynamic
75
+ Reconfiguration of Primary/Backup Clusters. In _USENIX Annual
76
+ Technical Conference (ATC)_(2012), 425-437* :
77
+ Links: [paper (pdf)](https://www.usenix.org/system/files/conference/atc12/atc12-final74.pdf), [slides (pdf)](https://www.usenix.org/sites/default/files/conference/protected-files/shraer\_atc12\_slides.pdf), [video](https://www.usenix.org/conference/atc12/technical-sessions/presentation/shraer), [hadoop summit slides](http://www.slideshare.net/Hadoop\_Summit/dynamic-reconfiguration-of-zookeeper)
78
+
79
+ **Note:** Starting with 3.5.3, the dynamic reconfiguration
80
+ feature is disabled by default, and has to be explicitly turned on via
81
+ [reconfigEnabled](zookeeperAdmin.html#sc_advancedConfiguration) configuration option.
82
+
83
+ <a name="ch_reconfig_format"></a>
84
+
85
+ ## Changes to Configuration Format
86
+
87
+ <a name="sc_reconfig_clientport"></a>
88
+
89
+ ### Specifying the client port
90
+
91
+ A client port of a server is the port on which the server accepts plaintext (non-TLS) client connection requests
92
+ and the secure client port is the port on which the server accepts TLS client connection requests.
93
+
94
+ Starting with 3.5.0 the
95
+ _clientPort_ and _clientPortAddress_ configuration parameters should no longer be used in zoo.cfg.
96
+
97
+ Starting with 3.10.0 the
98
+ _secureClientPort_ and _secureClientPortAddress_ configuration parameters should no longer be used in zoo.cfg.
99
+
100
+ Instead, this information is now part of the server keyword specification, which
101
+ becomes as follows:
102
+
103
+ server.<positive id> = <address1>:<quorum port>:<leader election port>[:role];[[<client port address>:]<client port>][;[<secure client port address>:]<secure client port>]
104
+
105
+ - [New in ZK 3.10.0] The client port specification is optional and is to the right of the
106
+ first semicolon. The secure client port specification is also optional and is to the right
107
+ of the second semicolon. However, the client port and secure client port specifications
+ cannot both be omitted; at least one of them should be present. If the user intends to omit the client
109
+ port specification and provide only secure client port specification (TLS-only server), a second
110
+ semicolon should still be specified to indicate an empty client port specification (see last
111
+ example below). In either spec, the port address is optional, and if not specified it defaults
112
+ to "0.0.0.0".
113
+ - As usual, role is also optional; it can be _participant_ or _observer_ (_participant_ by default).
114
+
115
+ Examples of legal server statements:
116
+
117
+ server.5 = 125.23.63.23:1234:1235;1236 (non-TLS server)
118
+ server.5 = 125.23.63.23:1234:1235;1236;1237 (non-TLS + TLS server)
119
+ server.5 = 125.23.63.23:1234:1235;;1237 (TLS-only server)
120
+ server.5 = 125.23.63.23:1234:1235:participant;1236 (non-TLS server)
121
+ server.5 = 125.23.63.23:1234:1235:observer;1236 (non-TLS server)
122
+ server.5 = 125.23.63.23:1234:1235;125.23.63.24:1236 (non-TLS server)
123
+ server.5 = 125.23.63.23:1234:1235:participant;125.23.63.23:1236 (non-TLS server)
124
+ server.5 = 125.23.63.23:1234:1235:participant;125.23.63.23:1236;125.23.63.23:1237 (non-TLS + TLS server)
125
+ server.5 = 125.23.63.23:1234:1235:participant;;125.23.63.23:1237 (TLS-only server)
126
+
127
+
128
+ <a name="sc_multiaddress"></a>
129
+
130
+ ### Specifying multiple server addresses
131
+
132
+ Since ZooKeeper 3.6.0 it is possible to specify multiple addresses for each
133
+ ZooKeeper server (see [ZOOKEEPER-3188](https://issues.apache.org/jira/projects/ZOOKEEPER/issues/ZOOKEEPER-3188)).
134
+ This helps to increase availability and adds network level
135
+ resiliency to ZooKeeper. When multiple physical network interfaces are used
136
+ for the servers, ZooKeeper is able to bind on all interfaces and switch at runtime
+ to a working interface in case of a network error. The different addresses can be
138
+ specified in the config using a pipe ('|') character.
139
+
140
+ Examples of valid configurations using multiple addresses:
141
+
142
+ server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889;2188
143
+ server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889|zoo2-net3:2890:3890;2188
144
+ server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889;zoo2-net1:2188
145
+ server.2=zoo2-net1:2888:3888:observer|zoo2-net2:2889:3889:observer;2188
146
+
147
+ <a name="sc_reconfig_standaloneEnabled"></a>
148
+
149
+ ### The _standaloneEnabled_ flag
150
+
151
+ Prior to 3.5.0, one could run ZooKeeper in Standalone mode or in a
152
+ Distributed mode. These are separate implementation stacks, and
153
+ switching between them during run time is not possible. By default (for
154
+ backward compatibility) _standaloneEnabled_ is set to
155
+ _true_. The consequence of using this default is that
156
+ if started with a single server the ensemble will not be allowed to
157
+ grow, and if started with more than one server it will not be allowed to
158
+ shrink to contain fewer than two participants.
159
+
160
+ Setting the flag to _false_ instructs the system
161
+ to run the Distributed software stack even if there is only a single
162
+ participant in the ensemble. To achieve this the (static) configuration
163
+ file should contain:
164
+
165
+ standaloneEnabled=false
166
+
167
+ With this setting it is possible to start a ZooKeeper ensemble
168
+ containing a single participant and to dynamically grow it by adding
169
+ more servers. Similarly, it is possible to shrink an ensemble so that
170
+ just a single participant remains, by removing servers.
171
+
172
+ Since running the Distributed mode allows more flexibility, we
173
+ recommend setting the flag to _false_. We expect that
174
+ the legacy Standalone mode will be deprecated in the future.
175
+
176
+ <a name="sc_reconfig_reconfigEnabled"></a>
177
+
178
+ ### The _reconfigEnabled_ flag
179
+
180
+ Starting with 3.5.0 and prior to 3.5.3, there is no way to disable
181
+ the dynamic reconfiguration feature. We would like to offer the option of
+ disabling the reconfiguration feature because, with reconfiguration enabled,
183
+ we have a security concern that a malicious actor can make arbitrary changes
184
+ to the configuration of a ZooKeeper ensemble, including adding a compromised
185
+ server to the ensemble. We prefer to leave to the discretion of the user to
186
+ decide whether to enable it or not and make sure that the appropriate security
187
+ measures are in place. So in 3.5.3 the [reconfigEnabled](zookeeperAdmin.html#sc_advancedConfiguration) configuration option is introduced
188
+ such that the reconfiguration feature can be completely disabled and any attempts
189
+ to reconfigure a cluster through reconfig API with or without authentication
190
+ will fail by default, unless **reconfigEnabled** is set to
191
+ **true**.
192
+
193
+ To set the option to true, the configuration file (zoo.cfg) should contain:
194
+
195
+ reconfigEnabled=true
196
+
197
+ <a name="sc_reconfig_file"></a>
198
+
199
+ ### Dynamic configuration file
200
+
201
+ Starting with 3.5.0 we're distinguishing between dynamic
202
+ configuration parameters, which can be changed during runtime, and
203
+ static configuration parameters, which are read from a configuration
204
+ file when a server boots and don't change during its execution. For now,
205
+ the following configuration keywords are considered part of the dynamic
206
+ configuration: _server_, _group_
207
+ and _weight_.
208
+
209
+ Dynamic configuration parameters are stored in a separate file on
210
+ the server (which we call the dynamic configuration file). This file is
211
+ linked from the static config file using the new
212
+ _dynamicConfigFile_ keyword.
213
+
214
+ **Example**
215
+
216
+ #### zoo_replicated1.cfg
217
+
218
+
219
+ tickTime=2000
220
+ dataDir=/zookeeper/data/zookeeper1
221
+ initLimit=5
222
+ syncLimit=2
223
+ dynamicConfigFile=/zookeeper/conf/zoo_replicated1.cfg.dynamic
224
+
225
+
226
+ #### zoo_replicated1.cfg.dynamic
227
+
228
+
229
+ server.1=125.23.63.23:2780:2783:participant;2791
230
+ server.2=125.23.63.24:2781:2784:participant;2792
231
+ server.3=125.23.63.25:2782:2785:participant;2793
232
+
233
+
234
+ When the ensemble configuration changes, the static configuration
235
+ parameters remain the same. The dynamic parameters are pushed by
236
+ ZooKeeper and overwrite the dynamic configuration files on all servers.
237
+ Thus, the dynamic configuration files on the different servers are
238
+ usually identical (they can only differ momentarily when a
239
+ reconfiguration is in progress, or if a new configuration hasn't
240
+ propagated yet to some of the servers). Once created, the dynamic
241
+ configuration file should not be manually altered. Changes are only made
242
+ through the new reconfiguration commands outlined below. Note that
243
+ changing the config of an offline cluster could result in an
244
+ inconsistency with respect to configuration information stored in the
245
+ ZooKeeper log (and the special configuration znode, populated from the
246
+ log) and is therefore highly discouraged.
247
+
248
+ **Example 2**
249
+
250
+ Users may prefer to initially specify a single configuration file.
251
+ The following is thus also legal:
252
+
253
+ #### zoo_replicated1.cfg
254
+
255
+
256
+ tickTime=2000
257
+ dataDir=/zookeeper/data/zookeeper1
258
+ initLimit=5
259
+ syncLimit=2
260
+ clientPort=2791
+ server.1=125.23.63.23:2780:2783:participant;2791
+ server.2=125.23.63.24:2781:2784:participant;2792
+ server.3=125.23.63.25:2782:2785:participant;2793
261
+
262
+
263
+ The configuration files on each server will be automatically split
264
+ into dynamic and static files, if they are not already in this format.
265
+ So the configuration file above will be automatically transformed into
266
+ the two files in Example 1. Note that the clientPort and
267
+ clientPortAddress lines (if specified) will be automatically removed
268
+ during this process, if they are redundant (as in the example above).
269
+ The original static configuration file is backed up (in a .bak
270
+ file).
271
+
272
+ <a name="sc_reconfig_backward"></a>
273
+
274
+ ### Backward compatibility
275
+
276
+ We still support the old configuration format. For example, the
277
+ following configuration file is acceptable (but not recommended):
278
+
279
+ #### zoo_replicated1.cfg
280
+
281
+ tickTime=2000
282
+ dataDir=/zookeeper/data/zookeeper1
283
+ initLimit=5
284
+ syncLimit=2
285
+ clientPort=2791
286
+ server.1=125.23.63.23:2780:2783:participant
287
+ server.2=125.23.63.24:2781:2784:participant
288
+ server.3=125.23.63.25:2782:2785:participant
289
+
290
+
291
+ During boot, a dynamic configuration file is created and contains
292
+ the dynamic part of the configuration as explained earlier. In this
293
+ case, however, the line "clientPort=2791" will remain in the static
294
+ configuration file of server 1 since it is not redundant -- it was not
295
+ specified as part of the "server.1=..." using the format explained in
296
+ the section [Changes to Configuration Format](#ch_reconfig_format). If a reconfiguration
297
+ is invoked that sets the client port of server 1, we remove
298
+ "clientPort=2791" from the static configuration file (the dynamic file
299
+ now contains this information as part of the specification of server
300
+ 1).
301
+
302
+ <a name="ch_reconfig_upgrade"></a>
303
+
304
+ ## Upgrading to 3.5.0
305
+
306
+ Upgrading a running ZooKeeper ensemble to 3.5.0 should be done only
307
+ after upgrading your ensemble to the 3.4.6 release. Note that this is only
308
+ necessary for rolling upgrades (if you're fine with shutting down the
309
+ system completely, you don't have to go through 3.4.6). If you attempt a
310
+ rolling upgrade without going through 3.4.6 (for example from 3.4.5), you
311
+ may get the following error:
312
+
313
+ 2013-01-30 11:32:10,663 [myid:2] - INFO [localhost/127.0.0.1:2784:QuorumCnxManager$Listener@498] - Received connection request /127.0.0.1:60876
314
+ 2013-01-30 11:32:10,663 [myid:2] - WARN [localhost/127.0.0.1:2784:QuorumCnxManager@349] - Invalid server id: -65536
315
+
316
+ During a rolling upgrade, each server is taken down in turn and
317
+ rebooted with the new 3.5.0 binaries. Before starting the server with
318
+ 3.5.0 binaries, we highly recommend updating the configuration file so
319
+ that all server statements "server.x=..." contain client ports (see the
320
+ section [Specifying the client port](#sc_reconfig_clientport)). As explained earlier
321
+ you may leave the configuration in a single file, as well as leave the
322
+ clientPort/clientPortAddress statements (although if you specify client
323
+ ports in the new format, these statements are now redundant).
324
+
325
+ <a name="ch_reconfig_dyn"></a>
326
+
327
+ ## Dynamic Reconfiguration of the ZooKeeper Ensemble
328
+
329
+ The ZooKeeper Java and C API were extended with getConfig and reconfig
330
+ commands that facilitate reconfiguration. Both commands have a synchronous
331
+ (blocking) variant and an asynchronous one. We demonstrate these commands
332
+ here using the Java CLI, but note that you can similarly use the C CLI or
333
+ invoke the commands directly from a program just like any other ZooKeeper
334
+ command.
335
+
336
+ <a name="ch_reconfig_api"></a>
337
+
338
+ ### API
339
+
340
+ There are two sets of APIs for both the Java and C clients.
341
+
342
+ * ***Reconfiguration API*** :
343
+ Reconfiguration API is used to reconfigure the ZooKeeper cluster.
344
+ Starting with 3.5.3, reconfiguration Java APIs are moved into ZooKeeperAdmin class
345
+ from ZooKeeper class, and use of this API requires ACL setup and user
346
+ authentication (see [Security](#sc_reconfig_access_control) for more information). A minimal usage sketch follows this list.
347
+
348
+ * ***Get Configuration API*** :
349
+ Get configuration APIs are used to retrieve ZooKeeper cluster configuration information
350
+ stored in /zookeeper/config znode. Use of this API does not require specific setup or authentication,
351
+ because /zookeeper/config is readable by any user.
352
+
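+ As a minimal sketch (not a recipe) of the reconfiguration API, the
+ following uses ZooKeeperAdmin to incrementally add one server; the
+ connect string, credentials and server specification are illustrative:
+
+ import org.apache.zookeeper.admin.ZooKeeperAdmin;
+ import org.apache.zookeeper.data.Stat;
+
+ ZooKeeperAdmin admin = new ZooKeeperAdmin("127.0.0.1:2791", 30000, event -> { });
+ admin.addAuthInfo("digest", "super:secret".getBytes()); // authenticate as an authorized user
+ byte[] newConfig = admin.reconfigure(
+     "server.4=localhost:2114:2115;2116", // joining servers (comma separated)
+     null,                                // leaving servers
+     null,                                // new membership (bulk mode only)
+     -1,                                  // config version; -1 = apply unconditionally
+     new Stat());
+ System.out.println(new String(newConfig));
+ admin.close();
+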
353
+ <a name="sc_reconfig_access_control"></a>
354
+
355
+ ### Security
356
+
357
+ Prior to **3.5.3**, there is no enforced security mechanism
358
+ over reconfig, so any ZooKeeper client that can connect to the ZooKeeper server ensemble
359
+ has the ability to change the state of a ZooKeeper cluster via reconfig.
360
+ It is thus possible for a malicious client to change the membership of an ensemble,
361
+ e.g., add a compromised server, or remove legitimate servers.
362
+ Cases like these can constitute security vulnerabilities, depending on the deployment.
363
+
364
+ To address this security concern, we introduced access control over reconfig
365
+ starting from **3.5.3** such that only a specific set of users
366
+ can use reconfig commands or APIs, and these users need to be configured explicitly. In addition,
367
+ the setup of ZooKeeper cluster must enable authentication so ZooKeeper clients can be authenticated.
368
+
369
+ We also provide an escape hatch for users who operate and interact with a ZooKeeper ensemble in a secured
370
+ environment (i.e. behind company firewall). For those users who want to use reconfiguration feature but
371
+ don't want the overhead of configuring an explicit list of authorized users for reconfig access checks,
372
+ they can set ["skipACL"](zookeeperAdmin.html#sc_authOptions) to "yes" which will
373
+ skip ACL check and allow any user to reconfigure cluster.
374
+
375
+ Overall, ZooKeeper provides flexible configuration options for the reconfiguration feature
376
+ that allow users to choose based on their security requirements.
377
+ We leave it to the user's discretion to ensure that appropriate security measures are in place.
378
+
379
+ * ***Access Control*** :
380
+ The dynamic configuration is stored in a special znode
381
+ ZooDefs.CONFIG_NODE = /zookeeper/config. This node by default is read only
382
+ for all users, except the super user and users that are explicitly configured for write
383
+ access.
384
+ Clients that need to use reconfig commands or reconfig API should be configured as users
385
+ that have write access to CONFIG_NODE. By default, only the super user has full control including
386
+ write access to CONFIG_NODE. Additional users can be granted write access through superuser
387
+ by setting an ACL that has write permission associated with specified user.
388
+ A few examples of how to setup ACLs and use reconfiguration API with authentication can be found in
389
+ ReconfigExceptionTest.java and TestReconfigServer.cc.
390
+
391
+ * ***Authentication*** :
392
+ Authentication of users is orthogonal to the access control and is delegated to
393
+ existing authentication mechanism supported by ZooKeeper's pluggable authentication schemes.
394
+ See [ZooKeeper and SASL](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL) for more details on this topic.
395
+
396
+ * ***Disable ACL check*** :
397
+ ZooKeeper supports ["skipACL"](zookeeperAdmin.html#sc_authOptions) option such that ACL
398
+ check will be completely skipped, if skipACL is set to "yes". In such cases any unauthenticated
399
+ users can use reconfig API.
400
+
401
+ <a name="sc_reconfig_retrieving"></a>
402
+
403
+ ### Retrieving the current dynamic configuration
404
+
405
+ The dynamic configuration is stored in a special znode
406
+ ZooDefs.CONFIG_NODE = /zookeeper/config. The new
407
+ `config` CLI command reads this znode (currently it is
408
+ simply a wrapper to `get /zookeeper/config`). As with
409
+ normal reads, to retrieve the latest committed value you should do a
410
+ `sync` first.
411
+
412
+ [zk: 127.0.0.1:2791(CONNECTED) 3] config
413
+ server.1=localhost:2780:2783:participant;localhost:2791
414
+ server.2=localhost:2781:2784:participant;localhost:2792
415
+ server.3=localhost:2782:2785:participant;localhost:2793
+ version=400000003
416
+
417
+ Notice the last line of the output. This is the configuration
418
+ version. The version equals the zxid of the reconfiguration command
419
+ which created this configuration. The version of the first established
420
+ configuration equals the zxid of the NEWLEADER message sent by the
421
+ first successfully established leader. When a configuration is written
422
+ to a dynamic configuration file, the version automatically becomes part
423
+ of the filename and the static configuration file is updated with the
424
+ path to the new dynamic configuration file. Configuration files
425
+ corresponding to earlier versions are retained for backup
426
+ purposes.
427
+
428
+ During boot time the version (if it exists) is extracted from the
429
+ filename. The version should never be altered manually by users or the
430
+ system administrator. It is used by the system to know which
431
+ configuration is most up-to-date. Manipulating it manually can result in
432
+ data loss and inconsistency.
433
+
434
+ Just like a `get` command, the
435
+ `config` CLI command accepts the _-w_
436
+ flag for setting a watch on the znode, and _-s_ flag for
437
+ displaying the Stats of the znode. It additionally accepts a new flag
438
+ _-c_ which outputs only the version and the client
439
+ connection string corresponding to the current configuration. For
440
+ example, for the configuration above we would get:
441
+
442
+ [zk: 127.0.0.1:2791(CONNECTED) 17] config -c
443
+ 400000003 localhost:2791,localhost:2793,localhost:2792
444
+
445
+ Note that when using the API directly, this command is called
446
+ `getConfig`.
447
+
448
+ As any read command it returns the configuration known to the
449
+ follower to which your client is connected, which may be slightly
450
+ out-of-date. One can use the `sync` command for
451
+ stronger guarantees. For example using the Java API:
452
+
453
+ zk.sync(ZooDefs.CONFIG_NODE, void_callback, context);
454
+ zk.getConfig(watcher, callback, context);
455
+
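+ A synchronous variant can be built on top of the asynchronous
+ `sync()`. The following is a minimal sketch; the helper name is ours,
+ not part of the API:
+
+ import java.util.concurrent.CountDownLatch;
+ import org.apache.zookeeper.ZooDefs;
+ import org.apache.zookeeper.ZooKeeper;
+ import org.apache.zookeeper.data.Stat;
+
+ static byte[] readLatestConfig(ZooKeeper zk) throws Exception {
+     final CountDownLatch synced = new CountDownLatch(1);
+     // bring this server's state up to date with the leader first
+     zk.sync(ZooDefs.CONFIG_NODE, (rc, path, ctx) -> synced.countDown(), null);
+     synced.await();
+     Stat stat = new Stat();
+     return zk.getConfig(false, stat); // no watch; stat is filled in
+ }
+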
456
+ Note: in 3.5.0 it doesn't really matter which path is passed to the
457
+ `sync()` command as all the server's state is brought
458
+ up to date with the leader (so one could use a different path instead of
459
+ ZooDefs.CONFIG_NODE). However, this may change in the future.
460
+
461
+ <a name="sc_reconfig_modifying"></a>
462
+
463
+ ### Modifying the current dynamic configuration
464
+
465
+ Modifying the configuration is done through the
466
+ `reconfig` command. There are two modes of
467
+ reconfiguration: incremental and non-incremental (bulk). The
468
+ non-incremental simply specifies the new dynamic configuration of the
469
+ system. The incremental specifies changes to the current configuration.
470
+ The `reconfig` command returns the new
471
+ configuration.
472
+
473
+ A few examples are in: *ReconfigTest.java*,
474
+ *ReconfigRecoveryTest.java* and
475
+ *TestReconfigServer.cc*.
476
+
477
+ <a name="sc_reconfig_general"></a>
478
+
479
+ #### General
480
+
481
+ **Removing servers:** Any server can
482
+ be removed, including the leader (although removing the leader will
483
+ result in a short unavailability, see Figures 6 and 8 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters)). The server will not be shut-down automatically.
484
+ Instead, it becomes a "non-voting follower". This is somewhat similar
485
+ to an observer in that its votes don't count towards the Quorum of
486
+ votes necessary to commit operations. However, unlike a non-voting
487
+ follower, an observer doesn't actually see any operation proposals and
488
+ does not ACK them. Thus a non-voting follower has a more significant
489
+ negative effect on system throughput compared to an observer.
490
+ Non-voting follower mode should only be used as a temporary mode,
491
+ before shutting the server down, or adding it as a follower or as an
492
+ observer to the ensemble. We do not shut the server down automatically
493
+ for two main reasons. The first reason is that we do not want all the
494
+ clients connected to this server to be immediately disconnected,
495
+ causing a flood of connection requests to other servers. Instead, it
496
+ is better if each client decides when to migrate independently. The
497
+ second reason is that removing a server may sometimes (rarely) be
498
+ necessary in order to change it from "observer" to "participant" (this
499
+ is explained in the section [Additional comments](#sc_reconfig_additional)).
500
+
501
+ Note that the new configuration should have some minimal number of
502
+ participants in order to be considered legal. If the proposed change
503
+ would leave the cluster with fewer than 2 participants and standalone
504
+ mode is enabled (standaloneEnabled=true, see the section [The _standaloneEnabled_ flag](#sc_reconfig_standaloneEnabled)), the reconfig will not be
505
+ processed (BadArgumentsException). If standalone mode is disabled
506
+ (standaloneEnabled=false) then it's legal to remain with 1 or more
507
+ participants.
508
+
509
+ **Adding servers:** Before a
510
+ reconfiguration is invoked, the administrator must make sure that a
511
+ quorum (majority) of participants from the new configuration are
512
+ already connected and synced with the current leader. To achieve this
513
+ we need to connect a new joining server to the leader before it is
514
+ officially part of the ensemble. This is done by starting the joining
515
+ server using an initial list of servers which is technically not a
516
+ legal configuration of the system but (a) contains the joiner, and (b)
517
+ gives sufficient information to the joiner in order for it to find and
518
+ connect to the current leader. We list a few different options of
519
+ doing this safely.
520
+
521
+ 1. Initial configuration of joiners is comprised of servers in
522
+ the last committed configuration and one or more joiners, where
523
+ **joiners are listed as observers.**
524
+ For example, if servers D and E are added at the same time to (A,
525
+ B, C) and server C is being removed, the initial configuration of
526
+ D could be (A, B, C, D) or (A, B, C, D, E), where D and E are
527
+ listed as observers. Similarly, the configuration of E could be
528
+ (A, B, C, E) or (A, B, C, D, E), where D and E are listed as
529
+ observers. **Note that listing the joiners as
530
+ observers will not actually make them observers - it will only
531
+ prevent them from accidentally forming a quorum with other
532
+ joiners.** Instead, they will contact the servers in the
533
+ current configuration and adopt the last committed configuration
534
+ (A, B, C), where the joiners are absent. Configuration files of
535
+ joiners are backed up and replaced automatically as this happens.
536
+ After connecting to the current leader, joiners become non-voting
537
+ followers until the system is reconfigured and they are added to
538
+ the ensemble (as participant or observer, as appropriate).
539
+ 1. Initial configuration of each joiner is comprised of servers
540
+ in the last committed configuration + **the
541
+ joiner itself, listed as a participant.** For example, to
542
+ add a new server D to a configuration consisting of servers (A, B,
543
+ C), the administrator can start D using an initial configuration
544
+ file consisting of servers (A, B, C, D). If both D and E are added
545
+ at the same time to (A, B, C), the initial configuration of D
546
+ could be (A, B, C, D) and the configuration of E could be (A, B,
547
+ C, E). Similarly, if D is added and C is removed at the same time,
548
+ the initial configuration of D could be (A, B, C, D). Never list
549
+ more than one joiner as participant in the initial configuration
550
+ (see warning below).
551
+ 1. Whether listing the joiner as an observer or as participant,
552
+ it is also fine not to list all the current configuration servers,
553
+ as long as the current leader is in the list. For example, when
554
+ adding D we could start D with a configuration file consisting of
555
+ just (A, D) if A is the current leader. However, this is more
556
+ fragile since if A fails before D officially joins the ensemble, D
557
+ doesn’t know anyone else and therefore the administrator will have
558
+ to intervene and restart D with another server list.
559
+
560
+ ######Note
561
+ >##### Warning
562
+
563
+ >Never specify more than one joining server in the same initial
564
+ configuration as participants. Currently, the joining servers don’t
565
+ know that they are joining an existing ensemble; if multiple joiners
566
+ are listed as participants they may form an independent quorum
567
+ creating a split-brain situation such as processing operations
568
+ independently from your main ensemble. It is OK to list multiple
569
+ joiners as observers in an initial config.
570
+
571
+ If the configuration of existing servers changes or they become unavailable
572
+ before the joiner manages to connect and learn about the configuration changes, the
573
+ joiner may need to be restarted with an updated configuration file in order to be
574
+ able to connect.
575
+
576
+ Finally, note that once connected to the leader, a joiner adopts
577
+ the last committed configuration, in which it is absent (the initial
578
+ config of the joiner is backed up before being rewritten). If the
579
+ joiner restarts in this state, it will not be able to boot since it is
580
+ absent from its configuration file. In order to start it you’ll once
581
+ again have to specify an initial configuration.
582
+
583
+ **Modifying server parameters:** One
584
+ can modify any of the ports of a server, or its role
585
+ (participant/observer) by adding it to the ensemble with different
586
+ parameters. This works in both the incremental and the bulk
587
+ reconfiguration modes. It is not necessary to remove the server and
588
+ then add it back; just specify the new parameters as if the server is
589
+ not yet in the system. The server will detect the configuration change
590
+ and perform the necessary adjustments. See an example in the section
591
+ [Incremental mode](#sc_reconfig_incremental) and an exception to this
592
+ rule in the section [Additional comments](#sc_reconfig_additional).
593
+
594
+ It is also possible to change the Quorum System used by the
595
+ ensemble (for example, change the Majority Quorum System to a
596
+ Hierarchical Quorum System on the fly). This, however, is only allowed
597
+ using the bulk (non-incremental) reconfiguration mode. In general,
598
+ incremental reconfiguration only works with the Majority Quorum
599
+ System. Bulk reconfiguration works with both Hierarchical and Majority
600
+ Quorum Systems.
601
+
602
+ **Performance Impact:** There is
603
+ practically no performance impact when removing a follower, since it
604
+ is not being automatically shut down (the effect of removal is that
605
+ the server's votes are no longer being counted). When adding a server,
606
+ there is no leader change and no noticeable performance disruption.
607
+ For details and graphs please see Figures 6, 7 and 8 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters).
608
+
609
+ The most significant disruption will happen when a leader change
610
+ is caused, in one of the following cases:
611
+
612
+ 1. Leader is removed from the ensemble.
613
+ 1. Leader's role is changed from participant to observer.
614
+ 1. The port used by the leader to send transactions to others
615
+ (quorum port) is modified.
616
+
617
+ In these cases we perform a leader hand-off where the old leader
618
+ nominates a new leader. The resulting unavailability is usually
619
+ shorter than when a leader crashes since detecting leader failure is
620
+ unnecessary and electing a new leader can usually be avoided during a
621
+ hand-off (see Figures 6 and 8 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters)).
622
+
623
+ When the client port of a server is modified, the server does not drop
624
+ existing client connections. New connections to the server will have
625
+ to use the new client port.
626
+
627
+ **Progress guarantees:** Up to the
628
+ invocation of the reconfig operation, a quorum of the old
629
+ configuration is required to be available and connected for ZooKeeper
630
+ to be able to make progress. Once reconfig is invoked, a quorum of
631
+ both the old and of the new configurations must be available. The
632
+ final transition happens once (a) the new configuration is activated,
633
+ and (b) all operations scheduled before the new configuration is
634
+ activated by the leader are committed. Once (a) and (b) happen, only a
635
+ quorum of the new configuration is required. Note, however, that
636
+ neither (a) nor (b) are visible to a client. Specifically, when a
637
+ reconfiguration operation commits, it only means that an activation
638
+ message was sent out by the leader. It does not necessarily mean that
639
+ a quorum of the new configuration got this message (which is required
640
+ in order to activate it) or that (b) has happened. If one wants to
641
+ make sure that both (a) and (b) has already occurred (for example, in
642
+ order to know that it is safe to shut down old servers that were
643
+ removed), one can simply invoke an update
644
+ (`set-data`, or some other quorum operation, but not
645
+ a `sync`) and wait for it to commit. An alternative
646
+ way to achieve this would have been to introduce another round to the
647
+ reconfiguration protocol (which, for simplicity and compatibility with
648
+ Zab, we decided to avoid).
649
+
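+ As a sketch, such a barrier update could use a scratch znode (the path
+ below is illustrative); any committed quorum operation works:
+
+ // once this create/delete pair commits, the new configuration is
+ // activated and fully in effect
+ String barrier = zk.create("/reconfig-barrier", new byte[0],
+     ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
+ zk.delete(barrier, -1);
+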
650
+ <a name="sc_reconfig_incremental"></a>
651
+
652
+ #### Incremental mode
653
+
654
+ The incremental mode allows adding and removing servers to the
655
+ current configuration. Multiple changes are allowed. For
656
+ example:
657
+
658
+ > reconfig -remove 3 -add
659
+ server.5=125.23.63.23:1234:1235;1236
660
+
661
+ Both the add and the remove options get a list of comma separated
662
+ arguments (no spaces):
663
+
664
+ > reconfig -remove 3,4 -add
665
+ server.5=localhost:2111:2112;2113,6=localhost:2114:2115:observer;2116
666
+
667
+ The format of the server statement is exactly the same as
668
+ described in the section [Specifying the client port](#sc_reconfig_clientport) and
669
+ includes the client port. Notice that here instead of "server.5=" you
670
+ can just say "5=". In the example above, if server 5 is already in the
671
+ system, but has different ports or is not an observer, it is updated
672
+ and once the configuration commits becomes an observer and starts
673
+ using these new ports. This is an easy way to turn participants into
674
+ observers and vice versa or change any of their ports, without
675
+ rebooting the server.
676
+
677
+ ZooKeeper supports two types of Quorum Systems – the simple
678
+ Majority system (where the leader commits operations after receiving
679
+ ACKs from a majority of voters) and a more complex Hierarchical
680
+ system, where votes of different servers have different weights and
681
+ servers are divided into voting groups. Currently, incremental
682
+ reconfiguration is allowed only if the last proposed configuration
683
+ known to the leader uses a Majority Quorum System
684
+ (BadArgumentsException is thrown otherwise).
685
+
686
+ Incremental mode - examples using the Java API:
687
+
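+ // remove servers 1 and 2 from the ensemble; -1 = apply unconditionally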
688
+ List<String> leavingServers = new ArrayList<String>();
689
+ leavingServers.add("1");
690
+ leavingServers.add("2");
691
+ byte[] config = zk.reconfig(null, leavingServers, null, -1, new Stat());
692
+
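+ // remove server 1 and add server 4 in a single reconfiguration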
693
+ List<String> leavingServers = new ArrayList<String>();
694
+ List<String> joiningServers = new ArrayList<String>();
695
+ leavingServers.add("1");
696
+ joiningServers.add("server.4=localhost:1234:1235;1236");
697
+ byte[] config = zk.reconfig(joiningServers, leavingServers, null, -1, new Stat());
698
+
699
+ String configStr = new String(config);
700
+ System.out.println(configStr);
701
+
702
+ There is also an asynchronous API, and an API accepting comma
703
+ separated Strings instead of List<String>. See
704
+ src/java/main/org/apache/zookeeper/ZooKeeper.java.
705
+
706
+ <a name="sc_reconfig_nonincremental"></a>
707
+
708
+ #### Non-incremental mode
709
+
710
+ The second mode of reconfiguration is non-incremental, whereby a
711
+ client gives a complete specification of the new dynamic system
712
+ configuration. The new configuration can either be given in place or
713
+ read from a file:
714
+
715
+ > reconfig -file newconfig.cfg
716
+
717
+ //newconfig.cfg is a dynamic config file, see [Dynamic configuration file](#sc_reconfig_file)
718
+
719
+ > reconfig -members
720
+ server.1=125.23.63.23:2780:2783:participant;2791,server.2=125.23.63.24:2781:2784:participant;2792,server.3=125.23.63.25:2782:2785:participant;2793
721
+
722
+ The new configuration may use a different Quorum System. For
723
+ example, you may specify a Hierarchical Quorum System even if the
724
+ current ensemble uses a Majority Quorum System.
725
+
726
+ Bulk mode - example using the Java API:
727
+
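+ // replace the entire membership with the three servers listed below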
728
+ List<String> newMembers = new ArrayList<String>();
729
+ newMembers.add("server.1=1111:1234:1235;1236");
730
+ newMembers.add("server.2=1112:1237:1238;1239");
731
+ newMembers.add("server.3=1114:1240:1241:observer;1242");
732
+
733
+ byte[] config = zk.reconfig(null, null, newMembers, -1, new Stat());
734
+
735
+ String configStr = new String(config);
736
+ System.out.println(configStr);
737
+
738
+ There is also an asynchronous API, and an API accepting comma
739
+ separated String containing the new members instead of
740
+ List<String>. See
741
+ src/java/main/org/apache/zookeeper/ZooKeeper.java.
742
+
743
+ <a name="sc_reconfig_conditional"></a>
744
+
745
+ #### Conditional reconfig
746
+
747
+ Sometimes (especially in non-incremental mode) a new proposed
748
+ configuration depends on what the client "believes" to be the current
749
+ configuration, and should be applied only to that configuration.
750
+ Specifically, the `reconfig` succeeds only if the
751
+ last configuration at the leader has the specified version.
752
+
753
+ > reconfig -file <filename> -v <version>
754
+
755
+ In the previously listed Java examples, instead of -1 one could
756
+ specify a configuration version to condition the
757
+ reconfiguration.
758
+
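+ For example, the following sketch conditions an incremental change on
+ the configuration version read from /zookeeper/config (the server
+ specification is illustrative):
+
+ // the configuration version is the hex "version=" field of the config data
+ byte[] data = zk.getConfig(false, new Stat());
+ String cfg = new String(data);
+ long version = Long.parseLong(cfg.substring(cfg.lastIndexOf("version=") + 8).trim(), 16);
+ // fails with BadVersionException if the configuration changed in the meantime
+ byte[] newCfg = zk.reconfig("server.4=localhost:1234:1235;1236", null, null, version, new Stat());
+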
759
+ <a name="sc_reconfig_errors"></a>
760
+
761
+ #### Error conditions
762
+
763
+ In addition to normal ZooKeeper error conditions, a
764
+ reconfiguration may fail for the following reasons:
765
+
766
+ 1. another reconfig is currently in progress
767
+ (ReconfigInProgress)
768
+ 1. the proposed change would leave the cluster with fewer than 2
769
+ participants while standalone mode is enabled; if standalone
770
+ mode is disabled it is legal to remain with 1 or more
771
+ participants (BadArgumentsException)
772
+ 1. no quorum of the new configuration was connected and
773
+ up-to-date with the leader when the reconfiguration processing
774
+ began (NewConfigNoQuorum)
775
+ 1. `-v x` was specified, but the version
776
+ `y` of the latest configuration is not
777
+ `x` (BadVersionException)
778
+ 1. an incremental reconfiguration was requested but the last
779
+ configuration at the leader uses a Quorum System which is
780
+ different from the Majority system (BadArgumentsException)
781
+ 1. syntax error (BadArgumentsException)
782
+ 1. I/O exception when reading the configuration from a file
783
+ (BadArgumentsException)
784
+
785
+ Most of these are illustrated by test-cases in
786
+ *ReconfigFailureCases.java*.
787
+
788
+ <a name="sc_reconfig_additional"></a>
789
+
790
+ #### Additional comments
791
+
792
+ **Liveness:** To better understand
793
+ the difference between incremental and non-incremental
794
+ reconfiguration, suppose that client C1 adds server D to the system
795
+ while a different client C2 adds server E. With the non-incremental
796
+ mode, each client would first invoke `config` to find
797
+ out the current configuration, and then locally create a new list of
798
+ servers by adding its own suggested server. The new configuration can
799
+ then be submitted using the non-incremental
800
+ `reconfig` command. After both reconfigurations
801
+ complete, only one of E or D will be added (not both), depending on
802
+ which client's request arrives second to the leader, overwriting the
803
+ previous configuration. The other client can repeat the process until
804
+ its change takes effect. This method guarantees system-wide progress
805
+ (i.e., for one of the clients), but does not ensure that every client
806
+ succeeds. To have more control, C2 may request to only execute the
807
+ reconfiguration in case the version of the current configuration
808
+ hasn't changed, as explained in the section [Conditional reconfig](#sc_reconfig_conditional). In this way it may avoid blindly
809
+ overwriting the configuration of C1 if C1's configuration reached the
810
+ leader first.
811
+
812
+ With incremental reconfiguration, both changes will take effect as
813
+ they are simply applied by the leader one after the other to the
814
+ current configuration, whatever that is (assuming that the second
815
+ reconfig request reaches the leader after it sends a commit message
816
+ for the first reconfig request -- currently the leader will refuse to
817
+ propose a reconfiguration if another one is already pending). Since
818
+ both clients are guaranteed to make progress, this method guarantees
819
+ stronger liveness. In practice, multiple concurrent reconfigurations
820
+ are probably rare. Non-incremental reconfiguration is currently the
821
+ only way to dynamically change the Quorum System. Incremental
822
+ reconfiguration is currently only allowed with the Majority Quorum
823
+ System.
824
+
825
+ **Changing an observer into a
826
+ follower:** Clearly, changing a server that participates in
827
+ voting into an observer may fail if error (2) occurs, i.e., if fewer
828
+ than the minimal allowed number of participants would remain. However,
829
+ converting an observer into a participant may sometimes fail for a
830
+ more subtle reason: Suppose, for example, that the current
831
+ configuration is (A, B, C, D), where A is the leader, B and C are
832
+ followers and D is an observer. In addition, suppose that B has
833
+ crashed. If a reconfiguration is submitted where D is said to become a
834
+ follower, it will fail with error (3) since in this configuration, a
835
+ majority of voters in the new configuration (any 3 voters), must be
836
+ connected and up-to-date with the leader. An observer cannot
837
+ acknowledge the history prefix sent during reconfiguration, and
838
+ therefore it does not count towards these 3 required servers and the
839
+ reconfiguration will be aborted. In case this happens, a client can
840
+ achieve the same task by two reconfig commands: first invoke a
841
+ reconfig to remove D from the configuration and then invoke a second
842
+ command to add it back as a participant (follower). During the
843
+ intermediate state D is a non-voting follower and can ACK the state
844
+ transfer performed during the second reconfig command.
845
+
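+ Expressed with the incremental CLI syntax shown earlier, the two steps
+ might look like this (server id and ports are illustrative):
+
+ > reconfig -remove 4
+ > reconfig -add server.4=localhost:2111:2112;2113
+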
846
+ <a name="ch_reconfig_rebalancing"></a>
847
+
848
+ ## Rebalancing Client Connections
849
+
850
+ When a ZooKeeper cluster is started, if each client is given the same
851
+ connection string (list of servers), the client will randomly choose a
852
+ server in the list to connect to, which makes the expected number of
853
+ client connections per server the same for each of the servers. We
854
+ implemented a method that preserves this property when the set of servers
855
+ changes through reconfiguration. See Sections 4 and 5.1 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters).
856
+
857
+ In order for the method to work, all clients must subscribe to
858
+ configuration changes (by setting a watch on /zookeeper/config either
859
+ directly or through the `getConfig` API command). When
860
+ the watch is triggered, the client should read the new configuration by
861
+ invoking `sync` and `getConfig` and if
862
+ the configuration is indeed new invoke the
863
+ `updateServerList` API command. To avoid mass client
864
+ migration at the same time, it is better to have each client sleep a
865
+ random short period of time before invoking
866
+ `updateServerList`.
867
+
868
+ A few examples can be found in:
869
+ *StaticHostProviderTest.java* and
870
+ *TestReconfig.cc*
871
+
872
+ Example (this is not a recipe, but a simplified example just to
873
+ explain the general idea):
874
+
875
+ public void process(WatchedEvent event) {
876
+ synchronized (this) {
877
+ if (event.getType() == EventType.None) {
878
+ connected = (event.getState() == KeeperState.SyncConnected);
879
+ notifyAll();
880
+ } else if (event.getPath()!=null && event.getPath().equals(ZooDefs.CONFIG_NODE)) {
881
+ // in prod code never block the event thread!
882
+ zk.sync(ZooDefs.CONFIG_NODE, this, null);
883
+ zk.getConfig(this, this, null);
884
+ }
885
+ }
886
+ }
887
+
888
+ public void processResult(int rc, String path, Object ctx, byte[] data, Stat stat) {
889
+ if (path!=null && path.equals(ZooDefs.CONFIG_NODE)) {
890
+ String config[] = ConfigUtils.getClientConfigStr(new String(data)).split(" "); // similar to config -c
891
+ long version = Long.parseLong(config[0], 16);
892
+ if (this.configVersion == null){
893
+ this.configVersion = version;
894
+ } else if (version > this.configVersion) {
895
+ hostList = config[1];
896
+ try {
897
+ // the following command is not blocking but may cause the client to close the socket and
898
+ // migrate to a different server. In practice it's better to wait a short period of time, chosen
899
+ // randomly, so that different clients migrate at different times
900
+ zk.updateServerList(hostList);
901
+ } catch (IOException e) {
902
+ System.err.println("Error updating server list");
903
+ e.printStackTrace();
904
+ }
905
+ this.configVersion = version;
906
+ }
907
+ }
908
+ }
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperSnapshotAndRestore.md ADDED
@@ -0,0 +1,68 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--
2
+ Copyright 2002-2004 The Apache Software Foundation
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ //-->
16
+
17
+ # ZooKeeper Snapshot and Restore Guide
18
+
19
+ Zookeeper is designed to withstand machine failures. A Zookeeper cluster can automatically recover
20
+ from temporary failures such as a machine reboot. It can also tolerate up to (N-1)/2 permanent
21
+ failures for a cluster of N members due to hardware failures or disk corruption, etc. When a member
22
+ permanently fails, it loses access to the cluster. If the cluster permanently loses more than
23
+ (N-1)/2 members, it disastrously fails and loses quorum. Once the quorum is lost, the cluster
24
+ cannot reach consensus and therefore cannot continue to accept updates.
25
+
26
+ To recover from such disastrous failures, Zookeeper provides snapshot and restore functionalities to
27
+ restore a cluster from a snapshot.
28
+
29
+ 1. Snapshot and restore operate on the connected server via Admin Server APIs
30
+ 1. Snapshot and restore are rate limited to protect the server from being overloaded
31
+ 1. Snapshot and restore require authentication and authorization on the root path with ALL permission.
32
+ The supported auth schemes are digest, x509 and IP.
33
+
34
+ * [Snapshot](#zookeeper_snapshot)
35
+ * [Restore](#zookeeper_restore)
36
+
37
+ <a name="zookeeper_snapshot"></a>
38
+
39
+ ## Snapshot
40
+ Recovering a cluster requires a snapshot from a ZooKeeper cluster. Users can periodically take
41
+ snapshots from a live server which has the highest zxid and stream the data out to a local
42
+ or external storage/file system (e.g., S3).
43
+
44
+ ```bash
45
+ # The snapshot command takes a snapshot from the server it connects to and is rate limited to once every 5 mins by default
46
+ curl -H 'Authorization: digest root:root_passwd' http://hostname:adminPort/commands/snapshot?streaming=true --output snapshotFileName
47
+ ```
48
+
49
+ <a name="zookeeper_restore"></a>
50
+ ## Restore
51
+
52
+ Restoring a cluster needs a single snapshot as an input stream. Restore can be used for recovering a
53
+ cluster after quorum loss or for building a brand-new cluster with seed data.
54
+
55
+ All members should restore using the same snapshot. The following are the recommended steps:
56
+
57
+ - Block traffic on the client port or client secure port before restore starts
58
+ - Take a snapshot of the latest database state using the snapshot admin server command if applicable
59
+ - For each server
60
+ - Move the files in dataDir and dataLogDir to a different location to prevent the restored database
61
+ from being overwritten when the server restarts after restore
62
+ - Restore the server using the restore admin server command
63
+ - Unblock traffic on the client port or client secure port after restore completes
64
+
65
+ ```bash
66
+ # The restore command takes a snapshot as an input stream and restores the db of the server it connects to. It is rate limited to once every 5 mins by default
67
+ curl -H 'Content-Type:application/octet-stream' -H 'Authorization: digest root:root_passwd' -X POST http://hostname:adminPort/commands/restore --data-binary "@snapshotFileName"
68
+ ```
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperStarted.md ADDED
@@ -0,0 +1,373 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--
2
+ Copyright 2002-2022 The Apache Software Foundation
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ //-->
16
+
17
+ # ZooKeeper Getting Started Guide
18
+
19
+ * [Getting Started: Coordinating Distributed Applications with ZooKeeper](#getting-started-coordinating-distributed-applications-with-zooKeeper)
20
+ * [Pre-requisites](#sc_Prerequisites)
21
+ * [Download](#sc_Download)
22
+ * [Standalone Operation](#sc_InstallingSingleMode)
23
+ * [Managing ZooKeeper Storage](#sc_FileManagement)
24
+ * [Connecting to ZooKeeper](#sc_ConnectingToZooKeeper)
25
+ * [Programming to ZooKeeper](#sc_ProgrammingToZooKeeper)
26
+ * [Running Replicated ZooKeeper](#sc_RunningReplicatedZooKeeper)
27
+ * [Other Optimizations](#other-optimizations)
28
+
29
+ <a name="getting-started-coordinating-distributed-applications-with-zooKeeper"></a>
30
+
31
+ ## Getting Started: Coordinating Distributed Applications with ZooKeeper
32
+
33
+ This document contains information to get you started quickly with
34
+ ZooKeeper. It is aimed primarily at developers hoping to try it out, and
35
+ contains simple installation instructions for a single ZooKeeper server, a
36
+ few commands to verify that it is running, and a simple programming
37
+ example. Finally, as a convenience, there are a few sections regarding
38
+ more complicated installations, for example running replicated
39
+ deployments, and optimizing the transaction log. However for the complete
40
+ instructions for commercial deployments, please refer to the [ZooKeeper
41
+ Administrator's Guide](zookeeperAdmin.html).
42
+
43
+ <a name="sc_Prerequisites"></a>
44
+
45
+ ### Pre-requisites
46
+
47
+ See [System Requirements](zookeeperAdmin.html#sc_systemReq) in the Admin guide.
48
+
49
+ <a name="sc_Download"></a>
50
+
51
+ ### Download
52
+
53
+ To get a ZooKeeper distribution, download a recent
54
+ [stable](http://zookeeper.apache.org/releases.html) release from one of the Apache Download
55
+ Mirrors.
56
+
57
+ <a name="sc_InstallingSingleMode"></a>
58
+
59
+ ### Standalone Operation
60
+
61
+ Setting up a ZooKeeper server in standalone mode is
62
+ straightforward. The server is contained in a single JAR file,
63
+ so installation consists of creating a configuration.
64
+
65
+ Once you've downloaded a stable ZooKeeper release, unpack
66
+ it and cd to the root.
67
+
68
+ To start ZooKeeper you need a configuration file. Here is a sample,
69
+ create it in **conf/zoo.cfg**:
70
+
71
+
72
+ tickTime=2000
73
+ dataDir=/var/lib/zookeeper
74
+ clientPort=2181
75
+
76
+
77
+ This file can be called anything, but for the sake of this
78
+ discussion call
79
+ it **conf/zoo.cfg**. Change the
80
+ value of **dataDir** to specify an
81
+ existing (empty to start with) directory. Here are the meanings
82
+ for each of the fields:
83
+
84
+ * ***tickTime*** :
85
+ the basic time unit in milliseconds used by ZooKeeper. It is
86
+ used to do heartbeats and the minimum session timeout will be
87
+ twice the tickTime.
88
+
89
+ * ***dataDir*** :
90
+ the location to store the in-memory database snapshots and,
91
+ unless specified otherwise, the transaction log of updates to the
92
+ database.
93
+
94
+ * ***clientPort*** :
95
+ the port to listen for client connections
96
+
97
+ Now that you created the configuration file, you can start
98
+ ZooKeeper:
99
+
100
+
101
+ bin/zkServer.sh start
102
+
103
+
104
+ ZooKeeper logs messages using _logback_ -- more detail
105
+ available in the
106
+ [Logging](zookeeperProgrammers.html#Logging)
107
+ section of the Programmer's Guide. You will see log messages
108
+ coming to the console (default) and/or a log file depending on
109
+ the logback configuration.
110
+
111
+ The steps outlined here run ZooKeeper in standalone mode. There is
112
+ no replication, so if the ZooKeeper process fails, the service will go down.
113
+ This is fine for most development situations, but to run ZooKeeper in
114
+ replicated mode, please see [Running Replicated
115
+ ZooKeeper](#sc_RunningReplicatedZooKeeper).
116
+
117
+ <a name="sc_FileManagement"></a>
118
+
119
+ ### Managing ZooKeeper Storage
120
+
121
+ For long running production systems ZooKeeper storage must
122
+ be managed externally (dataDir and logs). See the section on
123
+ [maintenance](zookeeperAdmin.html#sc_maintenance) for
124
+ more details.
125
+
126
+ <a name="sc_ConnectingToZooKeeper"></a>
127
+
128
+ ### Connecting to ZooKeeper
129
+
130
+
131
+ $ bin/zkCli.sh -server 127.0.0.1:2181
132
+
133
+
134
+ This lets you perform simple, file-like operations.
135
+
136
+ Once you have connected, you should see something like:
137
+
138
+
139
+ Connecting to localhost:2181
140
+ ...
141
+ Welcome to ZooKeeper!
142
+ JLine support is enabled
143
+ [zkshell: 0]
144
+
145
+ From the shell, type `help` to get a listing of commands that can be executed from the client, as in:
146
+
147
+
148
+ [zkshell: 0] help
149
+ ZooKeeper -server host:port cmd args
150
+ addauth scheme auth
151
+ close
152
+ config [-c] [-w] [-s]
153
+ connect host:port
154
+ create [-s] [-e] [-c] [-t ttl] path [data] [acl]
155
+ delete [-v version] path
156
+ deleteall path
157
+ delquota [-n|-b] path
158
+ get [-s] [-w] path
159
+ getAcl [-s] path
160
+ getAllChildrenNumber path
161
+ getEphemerals path
162
+ history
163
+ listquota path
164
+ ls [-s] [-w] [-R] path
165
+ printwatches on|off
166
+ quit
167
+ reconfig [-s] [-v version] [[-file path] | [-members serverID=host:port1:port2;port3[,...]*]] | [-add serverId=host:port1:port2;port3[,...]]* [-remove serverId[,...]*]
168
+ redo cmdno
169
+ removewatches path [-c|-d|-a] [-l]
170
+ set [-s] [-v version] path data
171
+ setAcl [-s] [-v version] [-R] path acl
172
+ setquota -n|-b val path
173
+ stat [-w] path
174
+ sync path
175
+
176
+
177
+ From here, you can try a few simple commands to get a feel for this simple command line interface. First, start by issuing the list command, as
178
+ in `ls`, yielding:
179
+
180
+
181
+ [zkshell: 8] ls /
182
+ [zookeeper]
183
+
184
+
185
+ Next, create a new znode by running `create /zk_test my_data`. This creates a new znode and associates the string "my_data" with the node.
186
+ You should see:
187
+
188
+
189
+ [zkshell: 9] create /zk_test my_data
190
+ Created /zk_test
191
+
192
+
193
+ Issue another `ls /` command to see what the directory looks like:
194
+
195
+
196
+ [zkshell: 11] ls /
197
+ [zookeeper, zk_test]
198
+
199
+
200
+ Notice that the zk_test directory has now been created.
201
+
202
+ Next, verify that the data was associated with the znode by running the `get` command, as in:
203
+
204
+
205
+ [zkshell: 12] get /zk_test
206
+ my_data
207
+ cZxid = 5
208
+ ctime = Fri Jun 05 13:57:06 PDT 2009
209
+ mZxid = 5
210
+ mtime = Fri Jun 05 13:57:06 PDT 2009
211
+ pZxid = 5
212
+ cversion = 0
213
+ dataVersion = 0
214
+ aclVersion = 0
215
+ ephemeralOwner = 0
216
+ dataLength = 7
217
+ numChildren = 0
218
+
219
+
220
+ We can change the data associated with zk_test by issuing the `set` command, as in:
221
+
222
+
223
+ [zkshell: 14] set /zk_test junk
224
+ cZxid = 5
225
+ ctime = Fri Jun 05 13:57:06 PDT 2009
226
+ mZxid = 6
227
+ mtime = Fri Jun 05 14:01:52 PDT 2009
228
+ pZxid = 5
229
+ cversion = 0
230
+ dataVersion = 1
231
+ aclVersion = 0
232
+ ephemeralOwner = 0
233
+ dataLength = 4
234
+ numChildren = 0
235
+ [zkshell: 15] get /zk_test
236
+ junk
237
+ cZxid = 5
238
+ ctime = Fri Jun 05 13:57:06 PDT 2009
239
+ mZxid = 6
240
+ mtime = Fri Jun 05 14:01:52 PDT 2009
241
+ pZxid = 5
242
+ cversion = 0
243
+ dataVersion = 1
244
+ aclVersion = 0
245
+ ephemeralOwner = 0
246
+ dataLength = 4
247
+ numChildren = 0
248
+
249
+
250
+ (Notice we did a `get` after setting the data and it did, indeed, change.)
251
+
252
+ Finally, let's `delete` the node by issuing:
253
+
254
+
255
+ [zkshell: 16] delete /zk_test
256
+ [zkshell: 17] ls /
257
+ [zookeeper]
258
+ [zkshell: 18]
259
+
260
+
261
+ That's it for now. To explore more, see the [Zookeeper CLI](zookeeperCLI.html).
262
+
263
+ <a name="sc_ProgrammingToZooKeeper"></a>
264
+
265
+ ### Programming to ZooKeeper
266
+
267
+ ZooKeeper has Java bindings and C bindings. They are
268
+ functionally equivalent. The C bindings exist in two variants: single
269
+ threaded and multi-threaded. These differ only in how the messaging loop
270
+ is done. For more information, see the [Programming
271
+ Examples in the ZooKeeper Programmer's Guide](zookeeperProgrammers.html#ch_programStructureWithExample) for
272
+ sample code using the different APIs.
273
+
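+ As a taste of the Java API, here is a minimal sketch (illustrative
+ only; production code should wait for the connection event before
+ issuing operations):
+
+ import org.apache.zookeeper.*;
+ import org.apache.zookeeper.ZooDefs.Ids;
+
+ ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
+ zk.create("/zk_test", "my_data".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+ byte[] data = zk.getData("/zk_test", false, null);
+ System.out.println(new String(data));
+ zk.close();
+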
274
+ <a name="sc_RunningReplicatedZooKeeper"></a>
275
+
276
+ ### Running Replicated ZooKeeper
277
+
278
+ Running ZooKeeper in standalone mode is convenient for evaluation,
279
+ some development, and testing. But in production, you should run
280
+ ZooKeeper in replicated mode. A replicated group of servers in the same
281
+ application is called a _quorum_, and in replicated
282
+ mode, all servers in the quorum have copies of the same configuration
283
+ file.
284
+
285
+ ######Note
286
+ >For replicated mode, a minimum of three servers are required,
287
+ and it is strongly recommended that you have an odd number of
288
+ servers. If you only have two servers, then you are in a
289
+ situation where if one of them fails, there are not enough
290
+ machines to form a majority quorum. Two servers are inherently
291
+ **less** stable than a single server, because there are two single
292
+ points of failure.
293
+
294
+ The required
295
+ **conf/zoo.cfg**
296
+ file for replicated mode is similar to the one used in standalone
297
+ mode, but with a few differences. Here is an example:
298
+
299
+ tickTime=2000
300
+ dataDir=/var/lib/zookeeper
301
+ clientPort=2181
302
+ initLimit=5
303
+ syncLimit=2
304
+ server.1=zoo1:2888:3888
305
+ server.2=zoo2:2888:3888
306
+ server.3=zoo3:2888:3888
307
+
308
+ The new entry, **initLimit**, is a
309
+ timeout ZooKeeper uses to limit the length of time the ZooKeeper
310
+ servers in quorum have to connect to a leader. The entry **syncLimit** limits how far out of date a server can
311
+ be from a leader.
312
+
313
+ With both of these timeouts, you specify the unit of time using
314
+ **tickTime**. In this example, the timeout
315
+ for initLimit is 5 ticks at 2000 milliseconds a tick, or 10
316
+ seconds.
317
+
318
+ The entries of the form _server.X_ list the
319
+ servers that make up the ZooKeeper service. When the server starts up,
320
+ it knows which server it is by looking for the file
321
+ _myid_ in the data directory. That file
322
+ contains the server number, in ASCII.
323
+
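+ For example, on the machine hosting server 1 (using the dataDir from
+ the sample configuration above):
+
+ $ echo "1" > /var/lib/zookeeper/myid
+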
324
+ Finally, note the two port numbers after each server
325
+ name: "2888" and "3888". Peers use the former port to connect
326
+ to other peers. Such a connection is necessary so that peers
327
+ can communicate, for example, to agree upon the order of
328
+ updates. More specifically, a ZooKeeper server uses this port
329
+ to connect followers to the leader. When a new leader arises, a
330
+ follower opens a TCP connection to the leader using this
331
+ port. Because the default leader election also uses TCP, we
332
+ currently require another port for leader election. This is the
333
+ second port in the server entry.
334
+
335
+ ######Note
336
+ >If you want to test multiple servers on a single
337
+ machine, specify the servername
338
+ as _localhost_ with unique quorum &
339
+ leader election ports (i.e. 2888:3888, 2889:3889, 2890:3890 in
340
+ the example above) for each server.X in that server's config
341
+ file. Of course separate _dataDir_s and
342
+ distinct _clientPort_s are also necessary
343
+ (in the above replicated example, running on a
344
+ single _localhost_, you would still have
345
+ three config files).
346
+
347
+ >Please be aware that setting up multiple servers on a single
348
+ machine will not create any redundancy. If something were to
349
+ happen which caused the machine to die, all of the zookeeper
350
+ servers would be offline. Full redundancy requires that each
351
+ server have its own machine. It must be a completely separate
352
+ physical server. Multiple virtual machines on the same physical
353
+ host are still vulnerable to the complete failure of that host.
354
+
355
+ >If you have multiple network interfaces in your ZooKeeper machines,
356
+ you can also instruct ZooKeeper to bind on all of your interfaces and
357
+ automatically switch to a healthy interface in case of a network failure.
358
+ For details, see the [Configuration Parameters](zookeeperAdmin.html#id_multi_address).
359
+
360
+ <a name="other-optimizations"></a>
361
+
362
+ ### Other Optimizations
363
+
364
+ There are a couple of other configuration parameters that can
365
+ greatly increase performance:
366
+
367
+ * To get low latencies on updates it is important to
368
+ have a dedicated transaction log directory. By default
369
+ transaction logs are put in the same directory as the data
370
+ snapshots and _myid_ file. The dataLogDir
371
+ parameter indicates a different directory to use for the
372
+ transaction logs, as in the example below.
373
+
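+ For example, adding the following line to **conf/zoo.cfg** places the
+ transaction logs on a separate device (the path is illustrative):
+
+ dataLogDir=/var/lib/zookeeper_txnlog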
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTools.md ADDED
@@ -0,0 +1,698 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--
2
+ Copyright 2002-2022 The Apache Software Foundation
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ //-->
16
+
17
+ # A series of tools for ZooKeeper
18
+
19
+ * [Scripts](#Scripts)
20
+ * [zkServer.sh](#zkServer)
21
+ * [zkCli.sh](#zkCli)
22
+ * [zkEnv.sh](#zkEnv)
23
+ * [zkCleanup.sh](#zkCleanup)
24
+ * [zkTxnLogToolkit.sh](#zkTxnLogToolkit)
25
+ * [zkSnapShotToolkit.sh](#zkSnapShotToolkit)
26
+ * [zkSnapshotRecursiveSummaryToolkit.sh](#zkSnapshotRecursiveSummaryToolkit)
27
+ * [zkSnapshotComparer.sh](#zkSnapshotComparer)
28
+
29
+ * [Benchmark](#Benchmark)
30
+ * [YCSB](#YCSB)
31
+ * [zk-smoketest](#zk-smoketest)
32
+
33
+ * [Testing](#Testing)
34
+ * [Fault Injection Framework](#fault-injection)
35
+ * [Byteman](#Byteman)
36
+ * [Jepsen Test](#jepsen-test)
37
+
38
+ <a name="Scripts"></a>
39
+
40
+ ## Scripts
41
+
42
+ <a name="zkServer"></a>
43
+
44
+ ### zkServer.sh
45
+ A command for operating the ZooKeeper server.
46
+
47
+ ```bash
48
+ Usage: ./zkServer.sh {start|start-foreground|stop|version|restart|status|upgrade|print-cmd}
49
+ # start the server
50
+ ./zkServer.sh start
51
+
52
+ # start the server in the foreground for debugging
53
+ ./zkServer.sh start-foreground
54
+
55
+ # stop the server
56
+ ./zkServer.sh stop
57
+
58
+ # restart the server
59
+ ./zkServer.sh restart
60
+
61
+ # show the status, mode and role of the server
62
+ ./zkServer.sh status
63
+ JMX enabled by default
64
+ Using config: /data/software/zookeeper/conf/zoo.cfg
65
+ Mode: standalone
66
+
67
+ # Deprecated
68
+ ./zkServer.sh upgrade
69
+
70
+ # print the parameters of the start-up
71
+ ./zkServer.sh print-cmd
72
+
73
+ # show the version of the ZooKeeper server
74
+ ./zkServer.sh version
75
+ Apache ZooKeeper, version 3.6.0-SNAPSHOT 06/11/2019 05:39 GMT
76
+
77
+ ```
78
+
79
+ The `status` command establishes a client connection to the server to execute diagnostic commands.
80
+ When the ZooKeeper cluster is started in client-SSL-only mode (by omitting the clientPort
81
+ from the zoo.cfg), additional SSL-related configuration has to be provided before using
82
+ the `./zkServer.sh status` command to find out if the ZooKeeper server is running. An example:
83
+
84
+ CLIENT_JVMFLAGS="-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.ssl.trustStore.location=/tmp/clienttrust.jks -Dzookeeper.ssl.trustStore.password=password -Dzookeeper.ssl.keyStore.location=/tmp/client.jks -Dzookeeper.ssl.keyStore.password=password -Dzookeeper.client.secure=true" ./zkServer.sh status
85
+
86
+
87
+ <a name="zkCli"></a>
88
+
89
+ ### zkCli.sh
90
+ See the [ZooKeeper CLI](zookeeperCLI.html) documentation.
91
+
92
+ <a name="zkEnv"></a>
93
+
94
+ ### zkEnv.sh
95
+ The environment settings for the ZooKeeper server.
96
+
97
+ ```bash
98
+ # the setting of log property
99
+ ZOO_LOG_DIR: the directory to store the logs
100
+ ```
101
+
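+ For example, the variable can be exported in the shell before invoking the
+ server scripts so that `zkEnv.sh` picks it up (the path below is illustrative):
+
+ ```bash
+ # send server logs to a custom directory instead of the default
+ export ZOO_LOG_DIR=/var/log/zookeeper
+ ./zkServer.sh start
+ ```
+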
102
+ <a name="zkCleanup"></a>
103
+
104
+ ### zkCleanup.sh
105
+ Clean up the old snapshots and transaction logs.
106
+
107
+ ```bash
108
+ Usage:
109
+ * args dataLogDir [snapDir] -n count
110
+ * dataLogDir -- path to the txn log directory
111
+ * snapDir -- path to the snapshot directory
112
+ * count -- the number of old snaps/logs you want to keep, value should be greater than or equal to 3
113
+ # Keep the latest 5 logs and snapshots
114
+ ./zkCleanup.sh -n 5
115
+ ```
116
+
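+ A sketch of running the cleanup periodically via cron (the schedule, install
+ path and retention count below are illustrative):
+
+ ```bash
+ # crontab entry: prune old snapshots/logs nightly, keeping the latest 5
+ 0 2 * * * /opt/zookeeper/bin/zkCleanup.sh -n 5
+ ```
+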
117
+ <a name="zkTxnLogToolkit"></a>
118
+
119
+ ### zkTxnLogToolkit.sh
120
+ TxnLogToolkit is a command line tool shipped with ZooKeeper which
121
+ is capable of recovering transaction log entries with broken CRC.
122
+
123
+ When run without any command line parameters or with the `-h,--help` argument, it outputs the following help page:
124
+
125
+ $ bin/zkTxnLogToolkit.sh
126
+ usage: TxnLogToolkit [-dhrv] txn_log_file_name
127
+ -d,--dump Dump mode. Dump all entries of the log file. (this is the default)
128
+ -h,--help Print help message
129
+ -r,--recover Recovery mode. Re-calculate CRC for broken entries.
130
+ -v,--verbose Be verbose in recovery mode: print all entries, not just fixed ones.
131
+ -y,--yes Non-interactive mode: repair all CRC errors without asking
132
+
133
+ The default behaviour is safe: it dumps the entries of the given
134
+ transaction log file to the screen (same as using the `-d,--dump` parameter):
135
+
136
+ $ bin/zkTxnLogToolkit.sh log.100000001
137
+ ZooKeeper Transactional Log File with dbid 0 txnlog format version 2
138
+ 4/5/18 2:15:58 PM CEST session 0x16295bafcc40000 cxid 0x0 zxid 0x100000001 createSession 30000
139
+ CRC ERROR - 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
140
+ 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
141
+ 4/5/18 2:16:12 PM CEST session 0x26295bafcc90000 cxid 0x0 zxid 0x100000003 createSession 30000
142
+ 4/5/18 2:17:34 PM CEST session 0x26295bafcc90000 cxid 0x0 zxid 0x200000001 closeSession null
143
+ 4/5/18 2:17:34 PM CEST session 0x16295bd23720000 cxid 0x0 zxid 0x200000002 createSession 30000
144
+ 4/5/18 2:18:02 PM CEST session 0x16295bd23720000 cxid 0x2 zxid 0x200000003 create '/andor,#626262,v{s{31,s{'world,'anyone}}},F,1
145
+ EOF reached after 6 txns.
146
+
147
+ There's a CRC error in the 2nd entry of the above transaction log file. In **dump**
148
+ mode, the toolkit only prints this information to the screen without touching the original file. In
149
+ **recovery** mode (`-r,--recover` flag) the original file still remains
150
+ untouched and all transactions will be copied over to a new txn log file with a ".fixed" suffix. It recalculates
151
+ CRC values and writes the recalculated value whenever it doesn't match the original txn entry.
152
+ By default, the tool works interactively: it asks for confirmation whenever a CRC error is encountered.
153
+
154
+ $ bin/zkTxnLogToolkit.sh -r log.100000001
155
+ ZooKeeper Transactional Log File with dbid 0 txnlog format version 2
156
+ CRC ERROR - 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
157
+ Would you like to fix it (Yes/No/Abort) ?
158
+
159
+ Answering **Yes** means the newly calculated CRC value will be written
160
+ to the new file. **No** means that the original CRC value will be copied over.
161
+ **Abort** aborts the entire operation and exits.
162
+ (In this case the ".fixed" file is not deleted and is left in a half-complete state: it contains only entries which
163
+ have already been processed, or only the header if the operation was aborted at the first entry.)
164
+
165
+ $ bin/zkTxnLogToolkit.sh -r log.100000001
166
+ ZooKeeper Transactional Log File with dbid 0 txnlog format version 2
167
+ CRC ERROR - 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
168
+ Would you like to fix it (Yes/No/Abort) ? y
169
+ EOF reached after 6 txns.
170
+ Recovery file log.100000001.fixed has been written with 1 fixed CRC error(s)
171
+
172
+ The default behaviour of recovery is to be silent: only entries with a CRC error get printed to the screen.
173
+ One can turn on verbose mode with the `-v,--verbose` parameter to see all records.
174
+ Interactive mode can be turned off with the `-y,--yes` parameter. In this case all CRC errors will be fixed
175
+ in the new transaction file.
176
+
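+ For example, a non-interactive, verbose recovery run combining the flags above
+ (the file name is reused from the earlier examples):
+
+     $ bin/zkTxnLogToolkit.sh -r -v -y log.100000001
+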
177
+ <a name="zkSnapShotToolkit"></a>
178
+
179
+ ### zkSnapShotToolkit.sh
180
+ Dump a snapshot file to stdout, showing detailed information for each zk-node.
181
+
182
+ ```bash
183
+ # help
184
+ ./zkSnapShotToolkit.sh
185
+ /usr/bin/java
186
+ USAGE: SnapshotFormatter [-d|-json] snapshot_file
187
+ -d dump the data for each znode
188
+ -json dump znode info in json format
189
+
190
+ # show each zk-node's info without data content
191
+ ./zkSnapShotToolkit.sh /data/zkdata/version-2/snapshot.fa01000186d
192
+ /zk-latencies_4/session_946
193
+ cZxid = 0x00000f0003110b
194
+ ctime = Wed Sep 19 21:58:22 CST 2018
195
+ mZxid = 0x00000f0003110b
196
+ mtime = Wed Sep 19 21:58:22 CST 2018
197
+ pZxid = 0x00000f0003110b
198
+ cversion = 0
199
+ dataVersion = 0
200
+ aclVersion = 0
201
+ ephemeralOwner = 0x00000000000000
202
+ dataLength = 100
203
+
204
+ # [-d] show each zk-node's info with data content
205
+ ./zkSnapShotToolkit.sh -d /data/zkdata/version-2/snapshot.fa01000186d
206
+ /zk-latencies2/session_26229
207
+ cZxid = 0x00000900007ba0
208
+ ctime = Wed Aug 15 20:13:52 CST 2018
209
+ mZxid = 0x00000900007ba0
210
+ mtime = Wed Aug 15 20:13:52 CST 2018
211
+ pZxid = 0x00000900007ba0
212
+ cversion = 0
213
+ dataVersion = 0
214
+ aclVersion = 0
215
+ ephemeralOwner = 0x00000000000000
216
+ data = eHh4eHh4eHh4eHh4eA==
217
+
218
+ # [-json] show each zk-node's info in json format
219
+ ./zkSnapShotToolkit.sh -json /data/zkdata/version-2/snapshot.fa01000186d
220
+ [[1,0,{"progname":"SnapshotFormatter.java","progver":"0.01","timestamp":1559788148637},[{"name":"\/","asize":0,"dsize":0,"dev":0,"ino":1001},[{"name":"zookeeper","asize":0,"dsize":0,"dev":0,"ino":1002},{"name":"config","asize":0,"dsize":0,"dev":0,"ino":1003},[{"name":"quota","asize":0,"dsize":0,"dev":0,"ino":1004},[{"name":"test","asize":0,"dsize":0,"dev":0,"ino":1005},{"name":"zookeeper_limits","asize":52,"dsize":52,"dev":0,"ino":1006},{"name":"zookeeper_stats","asize":15,"dsize":15,"dev":0,"ino":1007}]]],{"name":"test","asize":0,"dsize":0,"dev":0,"ino":1008}]]
221
+ ```
222
+ <a name="zkSnapshotRecursiveSummaryToolkit"></a>
223
+
224
+ ### zkSnapshotRecursiveSummaryToolkit.sh
225
+ Recursively collect and display child count and data size for a selected node.
226
+
227
+ $./zkSnapshotRecursiveSummaryToolkit.sh
228
+ USAGE:
229
+
230
+ SnapshotRecursiveSummary <snapshot_file> <starting_node> <max_depth>
231
+
232
+ snapshot_file: path to the zookeeper snapshot
233
+ starting_node: the path in the zookeeper tree where the traversal should begin
234
+ max_depth: defines the depth down to which the tool still writes to the output. 0 means there is no depth limit and every non-leaf node's stats will be displayed, 1 means the output will only contain the starting node's and its children's stats, 2 adds another level, and so on. This ONLY affects the level of detail displayed, NOT the calculation.
235
+
236
+ ```bash
237
+ # recursively collect and display child count and data for the root node and 2 levels below it
238
+ ./zkSnapshotRecursiveSummaryToolkit.sh /data/zkdata/version-2/snapshot.fa01000186d / 2
239
+
240
+ /
241
+ children: 1250511
242
+ data: 1952186580
243
+ -- /zookeeper
244
+ -- children: 1
245
+ -- data: 0
246
+ -- /solr
247
+ -- children: 1773
248
+ -- data: 8419162
249
+ ---- /solr/configs
250
+ ---- children: 1640
251
+ ---- data: 8407643
252
+ ---- /solr/overseer
253
+ ---- children: 6
254
+ ---- data: 0
255
+ ---- /solr/live_nodes
256
+ ---- children: 3
257
+ ---- data: 0
258
+ ```
259
+
260
+ <a name="zkSnapshotComparer"></a>
261
+
262
+ ### zkSnapshotComparer.sh
263
+ SnapshotComparer is a tool that loads and compares two snapshots, with configurable thresholds and various filters, and outputs information about the delta.
264
+
265
+ The delta includes the specific znode paths added, updated, or deleted when comparing one snapshot to the other.
266
+
267
+ It's useful in use cases that involve snapshot analysis, such as offline data consistency checking and data trending analysis (e.g. what's growing under which zNode path, and when).
268
+
269
+ This tool only outputs information about permanent nodes, ignoring both sessions and ephemeral nodes.
270
+
271
+ It provides two tuning parameters to help filter out noise:
272
+ 1. `--nodes` Threshold number of children added/removed;
273
+ 2. `--bytes` Threshold number of bytes added/removed.
274
+
275
+ #### Locate Snapshots
276
+ Snapshots can be found in the [ZooKeeper Data Directory](zookeeperAdmin.html#The+Data+Directory), which is configured in [conf/zoo.cfg](zookeeperStarted.html#sc_InstallingSingleMode) when setting up the ZooKeeper server.
277
+
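+ For example, with the default on-disk layout, snapshot files live under the
+ `version-2` subdirectory of the data directory (the path below is illustrative):
+
+ ```bash
+ # snapshot files are named snapshot.<zxid>
+ ls /data/zkdata/version-2/snapshot.*
+ ```
+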
278
+ #### Supported Snapshot Formats
279
+ This tool supports the uncompressed snapshot format as well as the compressed snapshot file formats `snappy` and `gz`. Snapshots with different formats can be compared using this tool directly, without decompression.
280
+
281
+ #### Running the Tool
282
+ When run with no command line arguments or with an unrecognized argument, the tool outputs the following help page:
283
+
284
+ ```
285
+ usage: java -cp <classPath> org.apache.zookeeper.server.SnapshotComparer
286
+ -b,--bytes <BYTETHRESHOLD> (Required) The node data delta size threshold, in bytes, for printing the node.
287
+ -d,--debug Use debug output.
288
+ -i,--interactive Enter interactive mode.
289
+ -l,--left <LEFT> (Required) The left snapshot file.
290
+ -n,--nodes <NODETHRESHOLD> (Required) The descendant node delta size threshold, in nodes, for printing the node.
291
+ -r,--right <RIGHT> (Required) The right snapshot file.
292
+ ```
293
+ Example Command:
294
+
295
+ ```
296
+ ./bin/zkSnapshotComparer.sh -l /zookeeper-data/backup/snapshot.d.snappy -r /zookeeper-data/backup/snapshot.44 -b 2 -n 1
297
+ ```
298
+
299
+ Example Output:
300
+ ```
301
+ ...
302
+ Deserialized snapshot in snapshot.44 in 0.002741 seconds
303
+ Processed data tree in 0.000361 seconds
304
+ Node count: 10
305
+ Total size: 0
306
+ Max depth: 4
307
+ Count of nodes at depth 0: 1
308
+ Count of nodes at depth 1: 2
309
+ Count of nodes at depth 2: 4
310
+ Count of nodes at depth 3: 3
311
+
312
+ Node count: 22
313
+ Total size: 2903
314
+ Max depth: 5
315
+ Count of nodes at depth 0: 1
316
+ Count of nodes at depth 1: 2
317
+ Count of nodes at depth 2: 4
318
+ Count of nodes at depth 3: 7
319
+ Count of nodes at depth 4: 8
320
+
321
+ Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
322
+ Analysis for depth 0
323
+ Node found in both trees. Delta: 2903 bytes, 12 descendants
324
+ Analysis for depth 1
325
+ Node /zk_test found in both trees. Delta: 2903 bytes, 12 descendants
326
+ Analysis for depth 2
327
+ Node /zk_test/gz found in both trees. Delta: 730 bytes, 3 descendants
328
+ Node /zk_test/snappy found in both trees. Delta: 2173 bytes, 9 descendants
329
+ Analysis for depth 3
330
+ Node /zk_test/gz/12345 found in both trees. Delta: 9 bytes, 1 descendants
331
+ Node /zk_test/gz/a found only in right tree. Descendant size: 721. Descendant count: 0
332
+ Node /zk_test/snappy/anotherTest found in both trees. Delta: 1738 bytes, 2 descendants
333
+ Node /zk_test/snappy/test_1 found only in right tree. Descendant size: 344. Descendant count: 3
334
+ Node /zk_test/snappy/test_2 found only in right tree. Descendant size: 91. Descendant count: 2
335
+ Analysis for depth 4
336
+ Node /zk_test/gz/12345/abcdef found only in right tree. Descendant size: 9. Descendant count: 0
337
+ Node /zk_test/snappy/anotherTest/abc found only in right tree. Descendant size: 1738. Descendant count: 0
338
+ Node /zk_test/snappy/test_1/a found only in right tree. Descendant size: 93. Descendant count: 0
339
+ Node /zk_test/snappy/test_1/b found only in right tree. Descendant size: 251. Descendant count: 0
340
+ Node /zk_test/snappy/test_2/xyz found only in right tree. Descendant size: 33. Descendant count: 0
341
+ Node /zk_test/snappy/test_2/y found only in right tree. Descendant size: 58. Descendant count: 0
342
+ All layers compared.
343
+ ```
344
+
345
+ #### Interactive Mode
346
+ Use "-i" or "--interactive" to enter interactive mode:
347
+ ```
348
+ ./bin/zkSnapshotComparer.sh -l /zookeeper-data/backup/snapshot.d.snappy -r /zookeeper-data/backup/snapshot.44 -b 2 -n 1 -i
349
+ ```
350
+
351
+ There are three options to proceed:
352
+ ```
353
+ - Press enter to move to print current depth layer;
354
+ - Type a number to jump to and print all nodes at a given depth;
355
+ - Enter an ABSOLUTE path to print the immediate subtree of a node. Path must start with '/'.
356
+ ```
357
+
358
+ Note: As indicated by the interactive messages, the tool only shows analysis on results that pass the tuning-parameter filters (bytes threshold and nodes threshold).
359
+
360
+ Press enter to print current depth layer:
361
+
362
+ ```
363
+ Current depth is 0
364
+ Press enter to move to print current depth layer;
365
+ ...
366
+ Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
367
+ Analysis for depth 0
368
+ Node found in both trees. Delta: 2903 bytes, 12 descendants
369
+ ```
370
+
371
+ Type a number to jump to and print all nodes at a given depth:
372
+
373
+ (Jump forward)
374
+
375
+ ```
376
+ Current depth is 1
377
+ ...
378
+ Type a number to jump to and print all nodes at a given depth;
379
+ ...
380
+ 3
381
+ Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
382
+ Analysis for depth 3
383
+ Node /zk_test/gz/12345 found in both trees. Delta: 9 bytes, 1 descendants
384
+ Node /zk_test/gz/a found only in right tree. Descendant size: 721. Descendant count: 0
385
+ Filtered node /zk_test/gz/anotherOne of left size 0, right size 0
386
+ Filtered right node /zk_test/gz/b of size 0
387
+ Node /zk_test/snappy/anotherTest found in both trees. Delta: 1738 bytes, 2 descendants
388
+ Node /zk_test/snappy/test_1 found only in right tree. Descendant size: 344. Descendant count: 3
389
+ Node /zk_test/snappy/test_2 found only in right tree. Descendant size: 91. Descendant count: 2
390
+ ```
391
+
392
+ (Jump back)
393
+
394
+ ```
395
+ Current depth is 3
396
+ ...
397
+ Type a number to jump to and print all nodes at a given depth;
398
+ ...
399
+ 0
400
+ Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
401
+ Analysis for depth 0
402
+ Node found in both trees. Delta: 2903 bytes, 12 descendants
403
+ ```
404
+
405
+ Out of range depth is handled:
406
+
407
+ ```
408
+ Current depth is 1
409
+ ...
410
+ Type a number to jump to and print all nodes at a given depth;
411
+ ...
412
+ 10
413
+ Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
414
+ Depth must be in range [0, 4]
415
+ ```
416
+
417
+ Enter an ABSOLUTE path to print the immediate subtree of a node:
418
+
419
+ ```
420
+ Current depth is 3
421
+ ...
422
+ Enter an ABSOLUTE path to print the immediate subtree of a node.
423
+ /zk_test
424
+ Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
425
+ Analysis for node /zk_test
426
+ Node /zk_test/gz found in both trees. Delta: 730 bytes, 3 descendants
427
+ Node /zk_test/snappy found in both trees. Delta: 2173 bytes, 9 descendants
428
+ ```
429
+
430
+ Invalid path is handled:
431
+
432
+ ```
433
+ Current depth is 3
434
+ ...
435
+ Enter an ABSOLUTE path to print the immediate subtree of a node.
436
+ /non-exist-path
437
+ Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
438
+ Analysis for node /non-exist-path
439
+ Path /non-exist-path is neither found in left tree nor right tree.
440
+ ```
441
+
442
+ Invalid input is handled:
443
+ ```
444
+ Current depth is 1
445
+ - Press enter to move to print current depth layer;
446
+ - Type a number to jump to and print all nodes at a given depth;
447
+ - Enter an ABSOLUTE path to print the immediate subtree of a node. Path must start with '/'.
448
+ 12223999999999999999999999999999999999999
449
+ Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
450
+ Input 12223999999999999999999999999999999999999 is not valid. Depth must be in range [0, 4]. Path must be an absolute path which starts with '/'.
451
+ ```
452
+
453
+ Exit interactive mode automatically when all layers are compared:
454
+
455
+ ```
456
+ Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1.
457
+ Analysis for depth 4
458
+ Node /zk_test/gz/12345/abcdef found only in right tree. Descendant size: 9. Descendant count: 0
459
+ Node /zk_test/snappy/anotherTest/abc found only in right tree. Descendant size: 1738. Descendant count: 0
460
+ Filtered right node /zk_test/snappy/anotherTest/abcd of size 0
461
+ Node /zk_test/snappy/test_1/a found only in right tree. Descendant size: 93. Descendant count: 0
462
+ Node /zk_test/snappy/test_1/b found only in right tree. Descendant size: 251. Descendant count: 0
463
+ Filtered right node /zk_test/snappy/test_1/c of size 0
464
+ Node /zk_test/snappy/test_2/xyz found only in right tree. Descendant size: 33. Descendant count: 0
465
+ Node /zk_test/snappy/test_2/y found only in right tree. Descendant size: 58. Descendant count: 0
466
+ All layers compared.
467
+ ```
468
+
469
+ Or use `^C` to exit interactive mode at any time.
470
+
471
+
472
+ <a name="Benchmark"></a>
473
+
474
+ ## Benchmark
475
+
476
+ <a name="YCSB"></a>
477
+
478
+ ### YCSB
479
+
480
+ #### Quick Start
481
+
482
+ This section describes how to run YCSB on ZooKeeper.
483
+
484
+ #### 1. Start ZooKeeper Server(s)
485
+
486
+ #### 2. Install Java and Maven
487
+
488
+ #### 3. Set Up YCSB
489
+
490
+ Git clone YCSB and compile:
491
+
492
+ git clone http://github.com/brianfrankcooper/YCSB.git
493
+ # more details on downloading YCSB are on the landing page (https://github.com/brianfrankcooper/YCSB#getting-started).
494
+ cd YCSB
495
+ mvn -pl site.ycsb:zookeeper-binding -am clean package -DskipTests
496
+
497
+ #### 4. Provide ZooKeeper Connection Parameters
498
+
499
+ Set connectString, sessionTimeout, watchFlag in the workload you plan to run.
500
+
501
+ - `zookeeper.connectString`
502
+ - `zookeeper.sessionTimeout`
503
+ - `zookeeper.watchFlag`
504
+ * A parameter for enabling ZooKeeper watches; optional values: true or false. The default value is false.
505
+ * This parameter does not measure watch performance itself, but shows what effect enabling watches has on read/write requests.
506
+
507
+ ```bash
508
+ ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p zookeeper.watchFlag=true
509
+ ```
510
+
511
+ Or, you can set configs on the command line, e.g.:
512
+
513
+ # create a /benchmark namespace for the sake of cleaning up the workspace after the test.
514
+ # e.g. the CLI: create /benchmark
515
+ ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p zookeeper.sessionTimeout=30000
516
+
517
+ #### 5. Load data and run tests
518
+
519
+ Load the data:
520
+
521
+ # -p recordcount,the count of records/paths you want to insert
522
+ ./bin/ycsb load zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p recordcount=10000 > outputLoad.txt
523
+
524
+ Run the workload test:
525
+
526
+ # YCSB workloadb is the most suitable workload for the read-heavy usage typical of ZooKeeper in the real world.
527
+
528
+ # -p fieldlength, test the effect of the value/data-content length on performance
529
+ ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p fieldlength=1000
530
+
531
+ # -p fieldcount
532
+ ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p fieldcount=20
533
+
534
+ # -p hdrhistogram.percentiles, show the hdrhistogram benchmark result
535
+ ./bin/ycsb run zookeeper -threads 1 -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p hdrhistogram.percentiles=10,25,50,75,90,95,99,99.9 -p histogram.buckets=500
536
+
537
+ # -threads: multi-client test; increase **maxClientCnxns** in the zoo.cfg to handle more connections.
538
+ ./bin/ycsb run zookeeper -threads 10 -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark
539
+
540
+ # show the timeseries benchmark result
541
+ ./bin/ycsb run zookeeper -threads 1 -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p measurementtype=timeseries -p timeseries.granularity=50
542
+
543
+ # cluster test
544
+ ./bin/ycsb run zookeeper -P workloads/workloadb -p zookeeper.connectString=192.168.10.43:2181,192.168.10.45:2181,192.168.10.27:2181/benchmark
545
+
546
+ # test leader's read/write performance by setting zookeeper.connectString to leader's(192.168.10.43:2181)
547
+ ./bin/ycsb run zookeeper -P workloads/workloadb -p zookeeper.connectString=192.168.10.43:2181/benchmark
548
+
549
+ # test for large znodes (by default, jute.maxbuffer is 1048575 bytes / 1 MB). Notice: jute.maxbuffer should also be set to the same value on all the zk servers; a server-side sketch follows below.
550
+ ./bin/ycsb run zookeeper -jvm-args="-Djute.maxbuffer=4194304" -s -P workloads/workloadc -p zookeeper.connectString=127.0.0.1:2181/benchmark
551
+
552
+ # Cleaning up the workspace after finishing the benchmark.
553
+ # e.g the CLI:deleteall /benchmark
554
+
555
+
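+ A sketch of the matching server-side setting for the jute.maxbuffer note above,
+ assuming the standard `SERVER_JVMFLAGS` hook read by the server scripts (the
+ value mirrors the client-side `-jvm-args` example):
+
+ ```bash
+ # raise jute.maxbuffer to 4 MB on each server, then restart it
+ SERVER_JVMFLAGS="-Djute.maxbuffer=4194304" ./zkServer.sh restart
+ ```
+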
556
+ <a name="zk-smoketest"></a>
557
+
558
+ ### zk-smoketest
559
+
560
+ **zk-smoketest** provides a simple smoketest client for a ZooKeeper ensemble. It is useful for verifying new, updated,
561
+ or existing installations. More details are [here](https://github.com/phunt/zk-smoketest).
562
+
563
+
564
+ <a name="Testing"></a>
565
+
566
+ ## Testing
567
+
568
+ <a name="fault-injection"></a>
569
+
570
+ ### Fault Injection Framework
571
+
572
+ <a name="Byteman"></a>
573
+
574
+ #### Byteman
575
+
576
+ - **Byteman** is a tool which makes it easy to trace, monitor and test the behaviour of Java applications and JDK runtime code.
577
+ It injects Java code into your application methods or into Java runtime methods without the need for you to recompile, repackage or even redeploy your application.
578
+ Injection can be performed at JVM startup or after startup while the application is still running.
579
+ - Visit the official [website](https://byteman.jboss.org/) to download the latest release
580
+ - A brief tutorial can be found [here](https://developer.jboss.org/wiki/ABytemanTutorial)
581
+
582
+ ```bash
583
+ Preparations:
584
+ # attach the byteman to 3 zk servers during runtime
585
+ # 55001,55002,55003 are the byteman binding ports; 714,740,758 are the zk server pids
586
+ ./bminstall.sh -b -Dorg.jboss.byteman.transform.all -Dorg.jboss.byteman.verbose -p 55001 714
587
+ ./bminstall.sh -b -Dorg.jboss.byteman.transform.all -Dorg.jboss.byteman.verbose -p 55002 740
588
+ ./bminstall.sh -b -Dorg.jboss.byteman.transform.all -Dorg.jboss.byteman.verbose -p 55003 758
589
+
590
+ # load the fault injection script
591
+ ./bmsubmit.sh -p 55002 -l my_zk_fault_injection.btm
592
+ # unload the fault injection script
593
+ ./bmsubmit.sh -p 55002 -u my_zk_fault_injection.btm
594
+ ```
595
+
596
+ Look at the examples below to customize your Byteman fault injection script.
597
+
598
+ Example 1: This script makes the leader's zxid roll over, forcing re-election.
599
+
600
+ ```bash
601
+ cat zk_leader_zxid_roll_over.btm
602
+
603
+ RULE trace zk_leader_zxid_roll_over
604
+ CLASS org.apache.zookeeper.server.quorum.Leader
605
+ METHOD propose
606
+ IF true
607
+ DO
608
+ traceln("*** Leader zxid has rolled over, forcing re-election ***");
609
+ $1.zxid = 4294967295L
610
+ ENDRULE
611
+ ```
612
+
613
+ Example 2: This script makes the leader drop the ping packet to a specific follower.
614
+ The leader will close the **LearnerHandler** for that follower, and the follower will enter the LOOKING state,
616
+ then re-enter the quorum in the FOLLOWING state.
616
+
617
+ ```bash
618
+ cat zk_leader_drop_ping_packet.btm
619
+
620
+ RULE trace zk_leader_drop_ping_packet
621
+ CLASS org.apache.zookeeper.server.quorum.LearnerHandler
622
+ METHOD ping
623
+ AT ENTRY
624
+ IF $0.sid == 2
625
+ DO
626
+ traceln("*** Leader drops ping packet to sid: 2 ***");
627
+ return;
628
+ ENDRULE
629
+ ```
630
+
631
+ Example 3: This script makes one follower drop ACK packets, which has no big effect in the broadcast phase, since after receiving
632
+ a majority of ACKs from the followers, the leader can still commit the proposal.
633
+
634
+ ```bash
635
+ cat zk_follower_drop_ack_packet.btm
636
+
637
+ RULE trace zk.follower_drop_ack_packet
638
+ CLASS org.apache.zookeeper.server.quorum.SendAckRequestProcessor
639
+ METHOD processRequest
640
+ AT ENTRY
641
+ IF true
642
+ DO
643
+ traceln("*** Follower drops ACK packet ***");
644
+ return;
645
+ ENDRULE
646
+ ```
647
+
648
+
649
+ <a name="jepsen-test"></a>
650
+
651
+ ### Jepsen Test
652
+ A framework for distributed systems verification, with fault injection.
653
+ Jepsen has been used to verify everything from eventually-consistent commutative databases to linearizable coordination systems to distributed task schedulers.
654
+ More details can be found at [jepsen-io](https://github.com/jepsen-io/jepsen).
655
+
656
+ Running the [Dockerized Jepsen](https://github.com/jepsen-io/jepsen/blob/master/docker/README.md) is the simplest way to use Jepsen.
657
+
658
+ Installation:
659
+
660
+ ```bash
661
+ git clone git@github.com:jepsen-io/jepsen.git
662
+ cd jepsen/docker
663
+ # the first init may take a long time
664
+ ./up.sh
665
+ # docker ps to check one control node and five db nodes are up
666
+ docker ps
667
+ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
668
+ 8265f1d3f89c docker_control "/bin/sh -c /init.sh" 9 hours ago Up 4 hours 0.0.0.0:32769->8080/tcp jepsen-control
669
+ 8a646102da44 docker_n5 "/run.sh" 9 hours ago Up 3 hours 22/tcp jepsen-n5
670
+ 385454d7e520 docker_n1 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n1
671
+ a62d6a9d5f8e docker_n2 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n2
672
+ 1485e89d0d9a docker_n3 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n3
673
+ 27ae01e1a0c5 docker_node "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-node
674
+ 53c444b00ebd docker_n4 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n4
675
+ ```
676
+
677
+ Running & Test
678
+
679
+ ```bash
680
+ # Enter into the container:jepsen-control
681
+ docker exec -it jepsen-control bash
682
+ # Test
683
+ cd zookeeper && lein run test --concurrency 10
684
+ # See something like the following to assert that ZooKeeper has passed the Jepsen test
685
+ INFO [2019-04-01 11:25:23,719] jepsen worker 8 - jepsen.util 8 :ok :read 2
686
+ INFO [2019-04-01 11:25:23,722] jepsen worker 3 - jepsen.util 3 :invoke :cas [0 4]
687
+ INFO [2019-04-01 11:25:23,760] jepsen worker 3 - jepsen.util 3 :fail :cas [0 4]
688
+ INFO [2019-04-01 11:25:23,791] jepsen worker 1 - jepsen.util 1 :invoke :read nil
689
+ INFO [2019-04-01 11:25:23,794] jepsen worker 1 - jepsen.util 1 :ok :read 2
690
+ INFO [2019-04-01 11:25:24,038] jepsen worker 0 - jepsen.util 0 :invoke :write 4
691
+ INFO [2019-04-01 11:25:24,073] jepsen worker 0 - jepsen.util 0 :ok :write 4
692
+ ...............................................................................
693
+ Everything looks good! ヽ(‘ー`)ノ
694
+
695
+ ```
696
+
697
+ Reference:
698
+ Read [this blog](https://aphyr.com/posts/291-call-me-maybe-zookeeper) to learn more about the Jepsen test for ZooKeeper.
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTutorial.md ADDED
@@ -0,0 +1,666 @@
1
+ <!--
2
+ Copyright 2002-2004 The Apache Software Foundation
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ //-->
16
+
17
+ # Programming with ZooKeeper - A basic tutorial
18
+
19
+ * [Introduction](#ch_Introduction)
20
+ * [Barriers](#sc_barriers)
21
+ * [Producer-Consumer Queues](#sc_producerConsumerQueues)
22
+ * [Complete example](#Complete+example)
23
+ * [Queue test](#Queue+test)
24
+ * [Barrier test](#Barrier+test)
25
+ * [Source Listing](#sc_sourceListing)
26
+
27
+ <a name="ch_Introduction"></a>
28
+
29
+ ## Introduction
30
+
31
+ In this tutorial, we show simple implementations of barriers and
32
+ producer-consumer queues using ZooKeeper. We call the respective classes Barrier and Queue.
33
+ These examples assume that you have at least one ZooKeeper server running.
34
+
35
+ Both primitives use the following common excerpt of code:
36
+
37
+ static ZooKeeper zk = null;
38
+ static Integer mutex;
39
+
40
+ String root;
41
+
42
+ SyncPrimitive(String address) {
43
+ if(zk == null){
44
+ try {
45
+ System.out.println("Starting ZK:");
46
+ zk = new ZooKeeper(address, 3000, this);
47
+ mutex = new Integer(-1);
48
+ System.out.println("Finished starting ZK: " + zk);
49
+ } catch (IOException e) {
50
+ System.out.println(e.toString());
51
+ zk = null;
52
+ }
53
+ }
54
+ }
55
+
56
+ synchronized public void process(WatchedEvent event) {
57
+ synchronized (mutex) {
58
+ mutex.notify();
59
+ }
60
+ }
61
+
62
+
63
+
64
+ Both classes extend SyncPrimitive. In this way, we execute steps that are
65
+ common to all primitives in the constructor of SyncPrimitive. To keep the examples
66
+ simple, we create a ZooKeeper object the first time we instantiate either a barrier
67
+ object or a queue object, and we declare a static variable that is a reference
68
+ to this object. The subsequent instances of Barrier and Queue check whether a
69
+ ZooKeeper object exists. Alternatively, we could have the application creating a
70
+ ZooKeeper object and passing it to the constructor of Barrier and Queue.
71
+
72
+ We use the process() method to process notifications triggered due to watches.
73
+ In the following discussion, we present code that sets watches. A watch is internal
74
+ structure that enables ZooKeeper to notify a client of a change to a node. For example,
75
+ if a client is waiting for other clients to leave a barrier, then it can set a watch and
76
+ wait for modifications to a particular node, which can indicate that it is the end of the wait.
77
+ This point becomes clear once we go over the examples.
78
+
79
+ <a name="sc_barriers"></a>
80
+
81
+ ## Barriers
82
+
83
+ A barrier is a primitive that enables a group of processes to synchronize the
84
+ beginning and the end of a computation. The general idea of this implementation
85
+ is to have a barrier node that serves the purpose of being a parent for individual
86
+ process nodes. Suppose that we call the barrier node "/b1". Each process "p" then
87
+ creates a node "/b1/p". Once enough processes have created their corresponding
88
+ nodes, joined processes can start the computation.
89
+
90
+ In this example, each process instantiates a Barrier object, and its constructor takes as parameters:
91
+
92
+ * the address of a ZooKeeper server (e.g., "zoo1.foo.com:2181")
93
+ * the path of the barrier node on ZooKeeper (e.g., "/b1")
94
+ * the size of the group of processes
95
+
96
+ The constructor of Barrier passes the address of the Zookeeper server to the
97
+ constructor of the parent class. The parent class creates a ZooKeeper instance if
98
+ one does not exist. The constructor of Barrier then creates a
99
+ barrier node on ZooKeeper, which is the parent node of all process nodes, and
100
+ we call root (**Note:** This is not the ZooKeeper root "/").
101
+
102
+ /**
103
+ * Barrier constructor
104
+ *
105
+ * @param address
106
+ * @param root
107
+ * @param size
108
+ */
109
+ Barrier(String address, String root, int size) {
110
+ super(address);
111
+ this.root = root;
112
+ this.size = size;
113
+ // Create barrier node
114
+ if (zk != null) {
115
+ try {
116
+ Stat s = zk.exists(root, false);
117
+ if (s == null) {
118
+ zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE,
119
+ CreateMode.PERSISTENT);
120
+ }
121
+ } catch (KeeperException e) {
122
+ System.out
123
+ .println("Keeper exception when instantiating queue: "
124
+ + e.toString());
125
+ } catch (InterruptedException e) {
126
+ System.out.println("Interrupted exception");
127
+ }
128
+ }
129
+
130
+ // My node name
131
+ try {
132
+ name = new String(InetAddress.getLocalHost().getCanonicalHostName().toString());
133
+ } catch (UnknownHostException e) {
134
+ System.out.println(e.toString());
135
+ }
136
+ }
137
+
138
+
139
+ To enter the barrier, a process calls enter(). The process creates a node under
140
+ the root to represent it, using its host name to form the node name. It then wait
141
+ until enough processes have entered the barrier. A process does it by checking
142
+ the number of children the root node has with "getChildren()", and waiting for
143
+ notifications in the case it does not have enough. To receive a notification when
144
+ there is a change to the root node, a process has to set a watch, and does it
145
+ through the call to "getChildren()". In the code, we have that "getChildren()"
146
+ has two parameters. The first one states the node to read from, and the second is
147
+ a boolean flag that enables the process to set a watch. In the code the flag is true.
148
+
149
+ /**
150
+ * Join barrier
151
+ *
152
+ * @return
153
+ * @throws KeeperException
154
+ * @throws InterruptedException
155
+ */
156
+
157
+ boolean enter() throws KeeperException, InterruptedException{
158
+ zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE,
159
+ CreateMode.EPHEMERAL);
160
+ while (true) {
161
+ synchronized (mutex) {
162
+ List<String> list = zk.getChildren(root, true);
163
+
164
+ if (list.size() < size) {
165
+ mutex.wait();
166
+ } else {
167
+ return true;
168
+ }
169
+ }
170
+ }
171
+ }
172
+
173
+
174
+ Note that enter() throws both KeeperException and InterruptedException, so it is
175
+ the responsibility of the application to catch and handle such exceptions.
176
+
177
+ Once the computation is finished, a process calls leave() to leave the barrier.
178
+ First it deletes its corresponding node, and then it gets the children of the root
179
+ node. If there is at least one child, then it waits for a notification (obs: note
180
+ that the second parameter of the call to getChildren() is true, meaning that
181
+ ZooKeeper has to set a watch on the root node). Upon reception of a notification,
182
+ it checks once more whether the root node has any children.
183
+
184
+ /**
185
+ * Wait until all reach barrier
186
+ *
187
+ * @return
188
+ * @throws KeeperException
189
+ * @throws InterruptedException
190
+ */
191
+
192
+ boolean leave() throws KeeperException, InterruptedException {
193
+ zk.delete(root + "/" + name, 0);
194
+ while (true) {
195
+ synchronized (mutex) {
196
+ List<String> list = zk.getChildren(root, true);
197
+ if (list.size() > 0) {
198
+ mutex.wait();
199
+ } else {
200
+ return true;
201
+ }
202
+ }
203
+ }
204
+ }
205
+
206
+
207
+ <a name="sc_producerConsumerQueues"></a>
208
+
209
+ ## Producer-Consumer Queues
210
+
211
+ A producer-consumer queue is a distributed data structure that groups of processes
212
+ use to generate and consume items. Producer processes create new elements and add
213
+ them to the queue. Consumer processes remove elements from the list, and process them.
214
+ In this implementation, the elements are simple integers. The queue is represented
215
+ by a root node, and to add an element to the queue, a producer process creates a new node,
216
+ a child of the root node.
217
+
218
+ The following excerpt of code corresponds to the constructor of the object. As
219
+ with Barrier objects, it first calls the constructor of the parent class, SyncPrimitive,
220
+ that creates a ZooKeeper object if one doesn't exist. It then verifies if the root
221
+ node of the queue exists, and creates if it doesn't.
222
+
223
+ /**
224
+ * Constructor of producer-consumer queue
225
+ *
226
+ * @param address
227
+ * @param name
228
+ */
229
+ Queue(String address, String name) {
230
+ super(address);
231
+ this.root = name;
232
+ // Create ZK node name
233
+ if (zk != null) {
234
+ try {
235
+ Stat s = zk.exists(root, false);
236
+ if (s == null) {
237
+ zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE,
238
+ CreateMode.PERSISTENT);
239
+ }
240
+ } catch (KeeperException e) {
241
+ System.out
242
+ .println("Keeper exception when instantiating queue: "
243
+ + e.toString());
244
+ } catch (InterruptedException e) {
245
+ System.out.println("Interrupted exception");
246
+ }
247
+ }
248
+ }
249
+
250
+
251
+ A producer process calls "produce()" to add an element to the queue, and passes
252
+ an integer as an argument. To add an element to the queue, the method creates a
253
+ new node using "create()", and uses the SEQUENCE flag to instruct ZooKeeper to
254
+ append the value of the sequencer counter associated to the root node. In this way,
255
+ we impose a total order on the elements of the queue, thus guaranteeing that the
256
+ oldest element of the queue is the next one consumed.
257
+
258
+ /**
259
+ * Add element to the queue.
260
+ *
261
+ * @param i
262
+ * @return
263
+ */
264
+
265
+ boolean produce(int i) throws KeeperException, InterruptedException{
266
+ ByteBuffer b = ByteBuffer.allocate(4);
267
+ byte[] value;
268
+
269
+ // Add child with value i
270
+ b.putInt(i);
271
+ value = b.array();
272
+ zk.create(root + "/element", value, Ids.OPEN_ACL_UNSAFE,
273
+ CreateMode.PERSISTENT_SEQUENTIAL);
274
+
275
+ return true;
276
+ }
277
+
278
+
279
+ To consume an element, a consumer process obtains the children of the root node,
280
+ reads the node with smallest counter value, and returns the element. Note that
281
+ if there is a conflict, then one of the two contending processes won't be able to
282
+ delete the node and the delete operation will throw an exception.
283
+
284
+ A call to getChildren() returns the list of children in lexicographic order.
285
+ As lexicographic order does not necessarily follow the numerical order of the counter
286
+ values, we need to decide which element is the smallest. To decide which one has
287
+ the smallest counter value, we traverse the list, and remove the prefix "element"
288
+ from each one.
289
+
290
+ /**
291
+ * Remove first element from the queue.
292
+ *
293
+ * @return
294
+ * @throws KeeperException
295
+ * @throws InterruptedException
296
+ */
297
+ int consume() throws KeeperException, InterruptedException{
298
+ int retvalue = -1;
299
+ Stat stat = null;
300
+
301
+ // Get the first element available
302
+ while (true) {
303
+ synchronized (mutex) {
304
+ List<String> list = zk.getChildren(root, true);
305
+ if (list.size() == 0) {
306
+ System.out.println("Going to wait");
307
+ mutex.wait();
308
+ } else {
309
+ Integer min = new Integer(list.get(0).substring(7));
310
+ String minNode = list.get(0);
+ for(String s : list){
311
+ Integer tempValue = new Integer(s.substring(7));
312
+ //System.out.println("Temporary value: " + tempValue);
313
+ if(tempValue < min) {
+ min = tempValue;
+ minNode = s;
+ }
314
+ }
315
+ System.out.println("Temporary value: " + root + "/" + minNode);
316
+ byte[] b = zk.getData(root + "/" + minNode,
317
+ false, stat);
318
+ zk.delete(root + "/" + minNode, 0);
319
+ ByteBuffer buffer = ByteBuffer.wrap(b);
320
+ retvalue = buffer.getInt();
321
+
322
+ return retvalue;
323
+ }
324
+ }
325
+ }
326
+ }
327
+ }
328
+
329
+
330
+ <a name="Complete+example"></a>
331
+
332
+ ## Complete example
333
+
334
+ In the following section you can find a complete command line application to demonstrate the above mentioned
335
+ recipes. Use the following command to run it.
336
+
337
+ ZOOBINDIR="[path_to_distro]/bin"
338
+ . "$ZOOBINDIR"/zkEnv.sh
339
+ java SyncPrimitive [Test Type] [ZK server] [No of elements] [Client type]
340
+
341
+ <a name="Queue+test"></a>
342
+
343
+ ### Queue test
344
+
345
+ Start a producer to create 100 elements
346
+
347
+ java SyncPrimitive qTest localhost 100 p
348
+
349
+
350
+ Start a consumer to consume 100 elements
351
+
352
+ java SyncPrimitive qTest localhost 100 c
353
+
354
+ <a name="Barrier+test"></a>
355
+
356
+ ### Barrier test
357
+
358
+ Start a barrier with 2 participants (start as many times as many participants you'd like to enter)
359
+
360
+ java SyncPrimitive bTest localhost 2
361
+
362
+ <a name="sc_sourceListing"></a>
363
+
364
+ ### Source Listing
365
+
366
+ #### SyncPrimitive.Java
367
+
368
+ import java.io.IOException;
369
+ import java.net.InetAddress;
370
+ import java.net.UnknownHostException;
371
+ import java.nio.ByteBuffer;
372
+ import java.util.List;
373
+ import java.util.Random;
374
+
375
+ import org.apache.zookeeper.CreateMode;
376
+ import org.apache.zookeeper.KeeperException;
377
+ import org.apache.zookeeper.WatchedEvent;
378
+ import org.apache.zookeeper.Watcher;
379
+ import org.apache.zookeeper.ZooKeeper;
380
+ import org.apache.zookeeper.ZooDefs.Ids;
381
+ import org.apache.zookeeper.data.Stat;
382
+
383
+ public class SyncPrimitive implements Watcher {
384
+
385
+ static ZooKeeper zk = null;
386
+ static Integer mutex;
387
+ String root;
388
+
389
+ SyncPrimitive(String address) {
390
+ if(zk == null){
391
+ try {
392
+ System.out.println("Starting ZK:");
393
+ zk = new ZooKeeper(address, 3000, this);
394
+ mutex = new Integer(-1);
395
+ System.out.println("Finished starting ZK: " + zk);
396
+ } catch (IOException e) {
397
+ System.out.println(e.toString());
398
+ zk = null;
399
+ }
400
+ }
401
+ //else mutex = new Integer(-1);
402
+ }
403
+
404
+ synchronized public void process(WatchedEvent event) {
405
+ synchronized (mutex) {
406
+ //System.out.println("Process: " + event.getType());
407
+ mutex.notify();
408
+ }
409
+ }
410
+
411
+ /**
412
+ * Barrier
413
+ */
414
+ static public class Barrier extends SyncPrimitive {
415
+ int size;
416
+ String name;
417
+
418
+ /**
419
+ * Barrier constructor
420
+ *
421
+ * @param address
422
+ * @param root
423
+ * @param size
424
+ */
425
+ Barrier(String address, String root, int size) {
426
+ super(address);
427
+ this.root = root;
428
+ this.size = size;
429
+
430
+ // Create barrier node
431
+ if (zk != null) {
432
+ try {
433
+ Stat s = zk.exists(root, false);
434
+ if (s == null) {
435
+ zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE,
436
+ CreateMode.PERSISTENT);
437
+ }
438
+ } catch (KeeperException e) {
439
+ System.out
440
+ .println("Keeper exception when instantiating queue: "
441
+ + e.toString());
442
+ } catch (InterruptedException e) {
443
+ System.out.println("Interrupted exception");
444
+ }
445
+ }
446
+
447
+ // My node name
448
+ try {
449
+ name = new String(InetAddress.getLocalHost().getCanonicalHostName().toString());
450
+ } catch (UnknownHostException e) {
451
+ System.out.println(e.toString());
452
+ }
453
+
454
+ }
455
+
456
+ /**
457
+ * Join barrier
458
+ *
459
+ * @return
460
+ * @throws KeeperException
461
+ * @throws InterruptedException
462
+ */
463
+
464
+ boolean enter() throws KeeperException, InterruptedException{
465
+ zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE,
466
+ CreateMode.EPHEMERAL);
467
+ while (true) {
468
+ synchronized (mutex) {
469
+ List<String> list = zk.getChildren(root, true);
470
+
471
+ if (list.size() < size) {
472
+ mutex.wait();
473
+ } else {
474
+ return true;
475
+ }
476
+ }
477
+ }
478
+ }
479
+
480
+ /**
481
+ * Wait until all reach barrier
482
+ *
483
+ * @return
484
+ * @throws KeeperException
485
+ * @throws InterruptedException
486
+ */
487
+ boolean leave() throws KeeperException, InterruptedException{
488
+ zk.delete(root + "/" + name, 0);
489
+ while (true) {
490
+ synchronized (mutex) {
491
+ List<String> list = zk.getChildren(root, true);
492
+ if (list.size() > 0) {
493
+ mutex.wait();
494
+ } else {
495
+ return true;
496
+ }
497
+ }
498
+ }
499
+ }
500
+ }
501
+
502
+ /**
503
+ * Producer-Consumer queue
504
+ */
505
+ static public class Queue extends SyncPrimitive {
506
+
507
+ /**
508
+ * Constructor of producer-consumer queue
509
+ *
510
+ * @param address
511
+ * @param name
512
+ */
513
+ Queue(String address, String name) {
514
+ super(address);
515
+ this.root = name;
516
+ // Create ZK node name
517
+ if (zk != null) {
518
+ try {
519
+ Stat s = zk.exists(root, false);
520
+ if (s == null) {
521
+ zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE,
522
+ CreateMode.PERSISTENT);
523
+ }
524
+ } catch (KeeperException e) {
525
+ System.out
526
+ .println("Keeper exception when instantiating queue: "
527
+ + e.toString());
528
+ } catch (InterruptedException e) {
529
+ System.out.println("Interrupted exception");
530
+ }
531
+ }
532
+ }
533
+
534
+ /**
535
+ * Add element to the queue.
536
+ *
537
+ * @param i
538
+ * @return
539
+ */
540
+
541
+ boolean produce(int i) throws KeeperException, InterruptedException{
542
+ ByteBuffer b = ByteBuffer.allocate(4);
543
+ byte[] value;
544
+
545
+ // Add child with value i
546
+ b.putInt(i);
547
+ value = b.array();
548
+ zk.create(root + "/element", value, Ids.OPEN_ACL_UNSAFE,
549
+ CreateMode.PERSISTENT_SEQUENTIAL);
550
+
551
+ return true;
552
+ }
553
+
554
+ /**
555
+ * Remove first element from the queue.
556
+ *
557
+ * @return
558
+ * @throws KeeperException
559
+ * @throws InterruptedException
560
+ */
561
+ int consume() throws KeeperException, InterruptedException{
562
+ int retvalue = -1;
563
+ Stat stat = null;
564
+
565
+ // Get the first element available
566
+ while (true) {
567
+ synchronized (mutex) {
568
+ List<String> list = zk.getChildren(root, true);
569
+ if (list.size() == 0) {
570
+ System.out.println("Going to wait");
571
+ mutex.wait();
572
+ } else {
573
+ Integer min = new Integer(list.get(0).substring(7));
574
+ String minNode = list.get(0);
575
+ for(String s : list){
576
+ Integer tempValue = new Integer(s.substring(7));
577
+ //System.out.println("Temporary value: " + tempValue);
578
+ if(tempValue < min) {
579
+ min = tempValue;
580
+ minNode = s;
581
+ }
582
+ }
583
+ System.out.println("Temporary value: " + root + "/" + minNode);
584
+ byte[] b = zk.getData(root + "/" + minNode,
585
+ false, stat);
586
+ zk.delete(root + "/" + minNode, 0);
587
+ ByteBuffer buffer = ByteBuffer.wrap(b);
588
+ retvalue = buffer.getInt();
589
+
590
+ return retvalue;
591
+ }
592
+ }
593
+ }
594
+ }
595
+ }
596
+
597
+ public static void main(String args[]) {
598
+ if (args[0].equals("qTest"))
599
+ queueTest(args);
600
+ else
601
+ barrierTest(args);
602
+ }
603
+
604
+ public static void queueTest(String args[]) {
605
+ Queue q = new Queue(args[1], "/app1");
606
+
607
+ System.out.println("Input: " + args[1]);
608
+ int i;
609
+ Integer max = new Integer(args[2]);
610
+
611
+ if (args[3].equals("p")) {
612
+ System.out.println("Producer");
613
+ for (i = 0; i < max; i++)
614
+ try{
615
+ q.produce(10 + i);
616
+ } catch (KeeperException e){
617
+
618
+ } catch (InterruptedException e){
619
+
620
+ }
621
+ } else {
622
+ System.out.println("Consumer");
623
+
624
+ for (i = 0; i < max; i++) {
625
+ try{
626
+ int r = q.consume();
627
+ System.out.println("Item: " + r);
628
+ } catch (KeeperException e){
629
+ i--;
630
+ } catch (InterruptedException e){
631
+ }
632
+ }
633
+ }
634
+ }
635
+
636
+ public static void barrierTest(String args[]) {
637
+ Barrier b = new Barrier(args[1], "/b1", new Integer(args[2]));
638
+ try{
639
+ boolean flag = b.enter();
640
+ System.out.println("Entered barrier: " + args[2]);
641
+ if(!flag) System.out.println("Error when entering the barrier");
642
+ } catch (KeeperException e){
643
+ } catch (InterruptedException e){
644
+ }
645
+
646
+ // Generate random integer
647
+ Random rand = new Random();
648
+ int r = rand.nextInt(100);
649
+ // Loop for rand iterations
650
+ for (int i = 0; i < r; i++) {
651
+ try {
652
+ Thread.sleep(100);
653
+ } catch (InterruptedException e) {
654
+ }
655
+ }
656
+ try{
657
+ b.leave();
658
+ } catch (KeeperException e){
659
+
660
+ } catch (InterruptedException e){
661
+
662
+ }
663
+ System.out.println("Left barrier");
664
+ }
665
+ }
666
+
local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperUseCases.md ADDED
@@ -0,0 +1,385 @@
1
+ <!--
2
+ Copyright 2002-2021 The Apache Software Foundation
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ //-->
16
+
17
+ # ZooKeeper Use Cases
18
+
19
+ - Applications and organizations using ZooKeeper include (alphabetically) [1].
20
+ - If your use case wants to be listed here. Please do not hesitate, submit a pull request or write an email to **dev@zookeeper.apache.org**,
21
+ and then, your use case will be included.
22
+ - If this documentation has violated your intellectual property rights or you and your company's privacy, write an email to **dev@zookeeper.apache.org**,
23
+ we will handle them in a timely manner.
24
+
25
+
26
+ ## Free Software Projects
27
+
28
+ ### [AdroitLogic UltraESB](http://adroitlogic.org/)
29
+ - Uses ZooKeeper to implement node coordination, in clustering support. This allows the management of the complete cluster,
30
+ or any specific node - from any other node connected via JMX. A Cluster wide command framework developed on top of the
31
+ ZooKeeper coordination allows commands that fail on some nodes to be retried etc. We also support the automated graceful
32
+ round-robin-restart of a complete cluster of nodes using the same framework [1].
33
+
34
+ ### [Akka](http://akka.io/)
35
+ - Akka is the platform for the next generation event-driven, scalable and fault-tolerant architectures on the JVM.
36
+ Or: Akka is a toolkit and runtime for building highly concurrent, distributed, and fault tolerant event-driven applications on the JVM [1].
37
+
38
+ ### [Eclipse Communication Framework](http://www.eclipse.org/ecf)
39
+ - The Eclipse ECF project provides an implementation of its Abstract Discovery services using Zookeeper. ECF itself
40
+ is used in many projects providing base functionality for communication, all based on OSGi [1].
41
+
42
+ ### [Eclipse Gyrex](http://www.eclipse.org/gyrex)
43
+ - The Eclipse Gyrex project provides a platform for building your own Java OSGi based clouds.
44
+ - ZooKeeper is used as the core cloud component for node membership and management, coordination of jobs executing among workers,
45
+ a lock service and a simple queue service and a lot more [1].
46
+
47
+ ### [GoldenOrb](http://www.goldenorbos.org/)
48
+ - massive-scale Graph analysis [1].
49
+
50
+ ### [Juju](https://juju.ubuntu.com/)
51
+ - Service deployment and orchestration framework, formerly called Ensemble [1].
52
+
53
+ ### [Katta](http://katta.sourceforge.net/)
54
+ - Katta serves distributed Lucene indexes in a grid environment.
55
+ - Zookeeper is used for node, master and index management in the grid [1].
56
+
57
+ ### [KeptCollections](https://github.com/anthonyu/KeptCollections)
58
+ - KeptCollections is a library of drop-in replacements for the data structures in the Java Collections framework.
59
+ - KeptCollections uses Apache ZooKeeper as a backing store, thus making its data structures distributed and scalable [1].
60
+
61
+ ### [Neo4j](https://neo4j.com/)
62
+ - Neo4j is a Graph Database. It's a disk based, ACID compliant transactional storage engine for big graphs and fast graph traversals,
63
+ using external indices like Lucene/Solr for global searches.
64
+ - We use ZooKeeper in the Neo4j High Availability components for write-master election,
65
+ read slave coordination and other cool stuff. ZooKeeper is a great and focused project - we like! [1].
66
+
67
+ ### [Norbert](http://sna-projects.com/norbert)
68
+ - Partitioned routing and cluster management [1].
69
+
70
+ ### [spring-cloud-zookeeper](https://spring.io/projects/spring-cloud-zookeeper)
71
+ - Spring Cloud Zookeeper provides Apache Zookeeper integrations for Spring Boot apps through autoconfiguration
72
+ and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations
73
+ you can quickly enable and configure the common patterns inside your application and build large distributed systems with Zookeeper.
74
+ The patterns provided include Service Discovery and Distributed Configuration [38].
75
+
76
+ ### [spring-statemachine](https://projects.spring.io/spring-statemachine/)
77
+ - Spring Statemachine is a framework for application developers to use state machine concepts with Spring applications.
78
+ - Spring Statemachine can provide this feature:Distributed state machine based on a Zookeeper [31,32].
79
+
80
+ ### [spring-xd](https://projects.spring.io/spring-xd/)
81
+ - Spring XD is a unified, distributed, and extensible system for data ingestion, real time analytics, batch processing, and data export.
82
+ The project’s goal is to simplify the development of big data applications.
83
+ - ZooKeeper - Provides all runtime information for the XD cluster. Tracks running containers, in which containers modules
84
+ and jobs are deployed, stream definitions, deployment manifests, and the like [30,31].
85
+
86
+ ### [Talend ESB](http://www.talend.com/products-application-integration/application-integration-esb-se.php)
87
+ - Talend ESB is a versatile and flexible, enterprise service bus.
88
+ - It uses ZooKeeper as endpoint repository of both REST and SOAP Web services.
89
+ By using ZooKeeper Talend ESB is able to provide failover and load balancing capabilities in a very light-weight manner [1].
90
+
91
+ ### [redis_failover](https://github.com/ryanlecompte/redis_failover)
92
+ - Redis Failover is a ZooKeeper-based automatic master/slave failover solution for Ruby [1].
93
+
94
+
+ ## Apache Projects
+
+ ### [Apache Accumulo](https://accumulo.apache.org/)
+ - Accumulo is a distributed key/value store that provides expressive, cell-level access labels.
+ - Apache ZooKeeper plays a central role within the Accumulo architecture. Its quorum consistency model supports an overall
+ Accumulo architecture with no single points of failure. Beyond that, Accumulo leverages ZooKeeper to store and communicate
+ configuration information for users and tables, as well as operational states of processes and tablets [2].
+
+ ### [Apache Atlas](http://atlas.apache.org)
+ - Atlas is a scalable and extensible set of core foundational governance services, enabling enterprises to effectively and efficiently meet
+ their compliance requirements within Hadoop and allowing integration with the whole enterprise data ecosystem.
+ - Atlas uses ZooKeeper for coordination, to provide redundancy and high availability of HBase and Kafka [31,35].
+
+ ### [Apache BookKeeper](https://bookkeeper.apache.org/)
+ - A scalable, fault-tolerant, and low-latency storage service optimized for real-time workloads.
+ - BookKeeper requires a metadata storage service to store information related to ledgers and available bookies. BookKeeper currently uses
+ ZooKeeper for this and other tasks [3].
+
+ ### [Apache CXF DOSGi](http://cxf.apache.org/distributed-osgi.html)
+ - Apache CXF is an open source services framework. CXF helps you build and develop services using frontend programming
+ APIs, like JAX-WS and JAX-RS. These services can speak a variety of protocols such as SOAP, XML/HTTP, RESTful HTTP,
+ or CORBA and work over a variety of transports such as HTTP, JMS or JBI.
+ - The Distributed OSGi implementation at Apache CXF uses ZooKeeper for its Discovery functionality [4].
+
+ ### [Apache Drill](http://drill.apache.org/)
+ - Schema-free SQL query engine for Hadoop, NoSQL and cloud storage.
+ - ZooKeeper maintains ephemeral cluster membership information. The Drillbits use ZooKeeper to find other Drillbits in the cluster,
+ and the client uses ZooKeeper to find Drillbits to submit a query [28].
+
+ ### [Apache Druid](https://druid.apache.org/)
+ - Apache Druid is a high performance real-time analytics database.
+ - Apache Druid uses Apache ZooKeeper (ZK) for management of current cluster state. The operations that happen over ZK are [27]:
+     - Coordinator leader election
+     - Segment "publishing" protocol from Historical and Realtime
+     - Segment load/drop protocol between Coordinator and Historical
+     - Overlord leader election
+     - Overlord and MiddleManager task management
+
+ ### [Apache Dubbo](http://dubbo.apache.org)
+ - Apache Dubbo is a high-performance, Java-based open source RPC framework.
+ - ZooKeeper is used for service registration, discovery, and configuration management in Dubbo [6].
+
+ ### [Apache Flink](https://flink.apache.org/)
+ - Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams.
+ Flink has been designed to run in all common cluster environments, and to perform computations at in-memory speed and at any scale.
+ - To enable JobManager High Availability you have to set the high-availability mode to zookeeper, configure a ZooKeeper quorum, and set up a masters file with all JobManager hosts and their web UI ports.
+ Flink leverages ZooKeeper for distributed coordination between all running JobManager instances. ZooKeeper is a separate service from Flink,
+ which provides highly reliable distributed coordination via leader election and light-weight consistent state storage [23].
+
+ ### [Apache Flume](https://flume.apache.org/)
+ - Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts
+ of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant
+ with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model
+ that allows for online analytic application.
+ - Flume supports agent configuration via ZooKeeper; this is an experimental feature [5].
+
+ ### [Apache Fluo](https://fluo.apache.org/)
+ - Apache Fluo is a distributed processing system that lets users make incremental updates to large data sets.
+ - Apache Fluo is built on Apache Accumulo, which uses Apache ZooKeeper for consensus [31,37].
+
+ ### [Apache Griffin](https://griffin.apache.org/)
+ - A big data quality solution for batch and streaming.
+ - Griffin uses ZooKeeper for coordination, to provide redundancy and high availability of Kafka [31,36].
+
+ ### [Apache Hadoop](http://hadoop.apache.org/)
+ - The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across
+ clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines,
+ each offering local computation and storage. Rather than rely on hardware to deliver high availability,
+ the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.
+ - The implementation of automatic HDFS failover relies on ZooKeeper for the following things:
+     - **Failure detection** - each of the NameNode machines in the cluster maintains a persistent session in ZooKeeper.
+     If the machine crashes, the ZooKeeper session will expire, notifying the other NameNode that a failover should be triggered.
+     - **Active NameNode election** - ZooKeeper provides a simple mechanism to exclusively elect a node as active. If the current active NameNode crashes,
+     another node may take a special exclusive lock in ZooKeeper indicating that it should become the next active.
+ - The ZKFailoverController (ZKFC) is a new component: a ZooKeeper client that also monitors and manages the state of the NameNode.
+ Each of the machines which runs a NameNode also runs a ZKFC, and that ZKFC is responsible for:
+     - **Health monitoring** - the ZKFC pings its local NameNode on a periodic basis with a health-check command.
+     So long as the NameNode responds in a timely fashion with a healthy status, the ZKFC considers the node healthy.
+     If the node has crashed, frozen, or otherwise entered an unhealthy state, the health monitor will mark it as unhealthy.
+     - **ZooKeeper session management** - when the local NameNode is healthy, the ZKFC holds a session open in ZooKeeper.
+     If the local NameNode is active, it also holds a special “lock” znode. This lock uses ZooKeeper’s support for “ephemeral” nodes;
+     if the session expires, the lock node will be automatically deleted.
+     - **ZooKeeper-based election** - if the local NameNode is healthy, and the ZKFC sees that no other node currently holds the lock znode,
+     it will itself try to acquire the lock. If it succeeds, then it has “won the election”, and is responsible for running a failover to make its local NameNode active.
+     The failover process is similar to the manual failover described above: first, the previous active is fenced if necessary,
+     and then the local NameNode transitions to active state [7] (a minimal sketch of this lock-znode pattern follows this list).
+
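+ The lock-znode election described above maps directly onto ZooKeeper's ephemeral znodes. The following is a minimal sketch
+ of the pattern in plain ZooKeeper Java API terms, not the actual ZKFC code; the `/election/lock` path and the identifiers
+ are invented for the example, and the persistent `/election` parent is assumed to exist already.
+
+     import org.apache.zookeeper.*;
+
+     public class LockZnodeElection implements Watcher {
+
+         private static final String LOCK = "/election/lock"; // hypothetical lock znode
+         private final ZooKeeper zk;
+         private final byte[] myId;
+
+         public LockZnodeElection(String connectString, byte[] myId) throws Exception {
+             this.zk = new ZooKeeper(connectString, 15000, this);
+             this.myId = myId;
+         }
+
+         /** Try to become active by grabbing the ephemeral lock znode. */
+         public void runForActive() throws Exception {
+             try {
+                 zk.create(LOCK, myId, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+                 System.out.println("Won the election: transitioning to active");
+             } catch (KeeperException.NodeExistsException e) {
+                 // Another node is active; leave a watch on the lock so we run again when it goes away.
+                 if (zk.exists(LOCK, true) == null) {
+                     runForActive(); // the lock vanished between create() and exists()
+                 }
+             }
+         }
+
+         @Override
+         public void process(WatchedEvent event) {
+             // The ephemeral lock is deleted automatically when the active node's session expires.
+             if (event.getType() == Event.EventType.NodeDeleted && LOCK.equals(event.getPath())) {
+                 try {
+                     runForActive();
+                 } catch (Exception e) {
+                     e.printStackTrace();
+                 }
+             }
+         }
+     }
+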
+ ### [Apache HBase](https://hbase.apache.org/)
+ - HBase is the Hadoop database: an open-source, distributed, column-oriented store.
+ - HBase uses ZooKeeper for master election, server lease management, bootstrapping, and coordination between servers.
+ A distributed Apache HBase installation depends on a running ZooKeeper cluster. All participating nodes and clients
+ need to be able to access the running ZooKeeper ensemble [8].
+ - ZooKeeper is a fundamental part of HBase: all operations that require coordination, such as region
+ assignment, master failover, replication, and snapshots, are built on ZooKeeper [20].
+
+ ### [Apache Helix](http://helix.apache.org/)
+ - A cluster management framework for partitioned and replicated distributed resources.
+ - We need a distributed store to maintain the state of the cluster and a notification system to notify if there is any change in the cluster state.
+ Helix uses Apache ZooKeeper to achieve this functionality [21].
+ ZooKeeper provides (see the sketch after this list):
+     - A way to represent PERSISTENT state, which remains until it is deleted
+     - A way to represent TRANSIENT/EPHEMERAL state, which vanishes when the process that created it dies
+     - A notification mechanism for changes in PERSISTENT and EPHEMERAL state
+
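+ These three primitives map directly onto the plain ZooKeeper API. The snippet below is a minimal sketch for illustration;
+ the `/cluster/...` paths and data are invented, and the persistent parent nodes are assumed to exist already.
+
+     import org.apache.zookeeper.*;
+
+     public class StateKinds {
+         public static void main(String[] args) throws Exception {
+             ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> { });
+
+             // PERSISTENT state: remains until it is explicitly deleted.
+             zk.create("/cluster/config", "v1".getBytes(),
+                     ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+
+             // EPHEMERAL state: vanishes when the creating session dies.
+             zk.create("/cluster/live/node-1", new byte[0],
+                     ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+
+             // Notification: a one-shot watch that fires on create, delete, or data change.
+             zk.exists("/cluster/config",
+                     event -> System.out.println("config changed: " + event.getType()));
+         }
+     }
+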
+ ### [Apache Hive](https://hive.apache.org)
+ - The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed
+ storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.
+ - Hive uses ZooKeeper as a distributed lock manager to support concurrency in HiveServer2 [25,26].
+
+ ### [Apache Ignite](https://ignite.apache.org/)
+ - Ignite is a memory-centric distributed database, caching, and processing platform for
+ transactional, analytical, and streaming workloads, delivering in-memory speeds at petabyte scale.
+ - The Apache Ignite discovery mechanism ships with a ZooKeeper implementation, which allows scaling Ignite clusters to hundreds and thousands of nodes
+ while preserving linear scalability and performance [31,34].
+
+ ### [Apache James Mailbox](http://james.apache.org/mailbox/)
+ - The Apache James Mailbox is a library providing flexible Mailbox storage accessible by mail protocols
+ (IMAP4, POP3, SMTP, ...) and other protocols.
+ - Uses ZooKeeper and the Curator Framework for generating distributed unique IDs [31].
+
+ ### [Apache Kafka](https://kafka.apache.org/)
+ - Kafka is a distributed publish/subscribe messaging system.
+ - Apache Kafka relies on ZooKeeper for the following things (see the sketch after this list):
+     - **Controller election**
+     The controller is one of the most important broker entities in a Kafka ecosystem, and it has the responsibility
+     to maintain the leader-follower relationship across all the partitions. If a node shuts down for some reason,
+     it is the controller's responsibility to tell all the replicas to act as partition leaders in order to fulfill the
+     duties of the partition leaders on the node that is about to fail. So, whenever a node shuts down, a new controller
+     can be elected, and it is also guaranteed that at any given time there is only one controller and all the follower nodes agree on it.
+     - **Configuration of topics**
+     The configuration regarding all the topics, including the list of existing topics, the number of partitions for each topic,
+     the location of all the replicas, the list of configuration overrides for all topics, which node is the preferred leader, etc.
+     - **Access control lists**
+     Access control lists, or ACLs, for all the topics are also maintained within ZooKeeper.
+     - **Membership of the cluster**
+     ZooKeeper also maintains a list of all the brokers that are functioning at any given moment and are a part of the cluster [9].
+
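+ For illustration, in ZooKeeper-backed Kafka versions each live broker holds an ephemeral child znode under `/brokers/ids`,
+ so the current membership can be read with a single `getChildren` call. A minimal sketch (connection string invented,
+ error handling omitted):
+
+     import java.util.List;
+     import org.apache.zookeeper.ZooKeeper;
+
+     public class LiveBrokers {
+         public static void main(String[] args) throws Exception {
+             // Connect to the same ensemble the Kafka cluster uses.
+             ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> { });
+
+             // Each running broker owns an ephemeral child here; a broker that dies
+             // drops out of the list as soon as its ZooKeeper session expires.
+             List<String> brokerIds = zk.getChildren("/brokers/ids", false);
+             System.out.println("Live brokers: " + brokerIds);
+
+             zk.close();
+         }
+     }
+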
+ ### [Apache Kylin](http://kylin.apache.org/)
+ - Apache Kylin is an open source distributed analytics engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop/Spark, supporting extremely large datasets,
+ originally contributed by eBay Inc.
+ - Apache Kylin leverages ZooKeeper for job coordination [31,33].
+
+ ### [Apache Mesos](http://mesos.apache.org/)
+ - Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual),
+ enabling fault-tolerant and elastic distributed systems to easily be built and run effectively.
+ - Mesos has a high-availability mode that uses multiple Mesos masters: one active master (called the leader or leading master)
+ and several backups in case it fails. The masters elect the leader, with Apache ZooKeeper both coordinating the election
+ and handling leader detection by masters, agents, and scheduler drivers [10].
+
+ ### [Apache Oozie](https://oozie.apache.org)
+ - Oozie is a workflow scheduler system to manage Apache Hadoop jobs.
+ - The Oozie servers use ZooKeeper for coordinating access to the database and communicating with each other. In order to have full HA,
+ there should be at least 3 ZooKeeper servers [29].
+
+ ### [Apache Pulsar](https://pulsar.apache.org)
+ - Apache Pulsar is an open-source distributed pub-sub messaging system originally created at Yahoo and now part of the Apache Software Foundation.
+ - Pulsar uses Apache ZooKeeper for metadata storage, cluster configuration, and coordination. In a Pulsar instance:
+     - A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
+     - Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as ownership metadata,
+     broker load reports, BookKeeper ledger metadata, and more [24].
+
+ ### [Apache Solr](https://lucene.apache.org/solr/)
+ - Solr is the popular, blazing-fast, open source enterprise search platform built on Apache Lucene.
+ - In the "Cloud" edition (v4.x and up) of the enterprise search engine Apache Solr, ZooKeeper is used for configuration,
+ leader election and more [12,13].
+
+ ### [Apache Spark](https://spark.apache.org/)
+ - Apache Spark is a unified analytics engine for large-scale data processing.
+ - Utilizing ZooKeeper to provide leader election and some state storage, you can launch multiple Masters in your cluster connected to the same ZooKeeper instance.
+ One will be elected “leader” and the others will remain in standby mode. If the current leader dies, another Master will be elected,
+ recover the old Master’s state, and then resume scheduling [14].
+
+ ### [Apache Storm](http://storm.apache.org)
+ - Apache Storm is a free and open source distributed realtime computation system. Apache Storm makes it easy to reliably
+ process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing.
+ Apache Storm is simple, can be used with any programming language, and is a lot of fun to use!
+ - Storm uses ZooKeeper for coordinating the cluster [22].
+
+
+ ## Companies
+
+ ### [AGETO](http://www.ageto.de/)
+ - The AGETO R&D team uses ZooKeeper in a variety of internal as well as external consulting projects [1].
+
+ ### [Benipal Technologies](http://www.benipaltechnologies.com/)
+ - ZooKeeper is used for internal application development with Solr, and Hadoop with HBase [1].
+
+ ### [Box](http://box.net/)
+ - Box uses ZooKeeper for service discovery, service coordination, Solr and Hadoop support, etc. [1].
+
+ ### [Deepdyve](http://www.deepdyve.com/)
+ - We do search for research and provide access to high quality content using advanced search technologies. ZooKeeper is used to
+ manage server state, control index deployment and a myriad of other tasks [1].
+
+ ### [Facebook](https://www.facebook.com/)
+ - Facebook uses Zeus [17,18] for configuration management: a forked version of ZooKeeper, with many scalability
+ and performance enhancements in order to work at Facebook scale.
+ It runs a consensus protocol among servers distributed across multiple regions for resilience. If the leader fails,
+ a follower is converted into a new leader.
+
+ ### [Idium Portal](http://www.idium.no/no/idium_portal/)
+ - Idium Portal is a hosted web-publishing system delivered by the Norwegian company Idium AS.
+ - ZooKeeper is used for cluster messaging, service bootstrapping, and service coordination [1].
+
+ ### [Makara](http://www.makara.com/)
+ - Using ZooKeeper on a 2-node cluster on VMware Workstation, Amazon EC2, and Xen
+ - Using zkpython
+ - Looking into expanding into a 100-node cluster [1].
+
+ ### [Midokura](http://www.midokura.com/)
+ - We do virtualized networking for the cloud computing era. We use ZooKeeper for various aspects of our distributed control plane [1].
+
+ ### [Pinterest](https://www.pinterest.com/)
+ - Pinterest uses ZooKeeper for service discovery and dynamic configuration. Like many large-scale web sites, Pinterest’s infrastructure consists of servers that communicate with
+ backend services composed of a number of individual servers for managing load and fault tolerance. Ideally, we’d like the configuration to reflect only the active hosts,
+ so clients don’t need to deal with bad hosts as often. ZooKeeper provides a well-known pattern to solve this problem [19], sketched below.
+
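+ The well-known pattern referred to above is ephemeral registration: each healthy backend creates an ephemeral znode for
+ itself, and clients list (and watch) the children so that only live hosts are ever returned. Below is a minimal sketch of
+ the pattern, not Pinterest's code; the `/services/...` layout is invented and the persistent parent is assumed to exist.
+
+     import java.util.List;
+     import org.apache.zookeeper.*;
+
+     public class ServiceRegistry {
+
+         private final ZooKeeper zk;
+
+         public ServiceRegistry(String connectString) throws Exception {
+             this.zk = new ZooKeeper(connectString, 15000, event -> { });
+         }
+
+         /** Called by a backend server on startup; the znode vanishes if the server dies. */
+         public void register(String service, String hostPort) throws Exception {
+             zk.create("/services/" + service + "/" + hostPort, new byte[0],
+                     ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+         }
+
+         /** Called by clients; the watcher fires whenever the set of live hosts changes. */
+         public List<String> liveHosts(String service, Watcher onChange) throws Exception {
+             return zk.getChildren("/services/" + service, onChange);
+         }
+     }
+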
+ ### [Rackspace](http://www.rackspace.com/email_hosting)
+ - The Email & Apps team uses ZooKeeper to coordinate sharding and responsibility changes in a distributed e-mail client
+ that pulls and indexes data for search. ZooKeeper also provides distributed locking for connections to prevent a cluster from overwhelming servers [1].
+
+ ### [Sematext](http://sematext.com/)
+ - Uses ZooKeeper in SPM (which includes a ZooKeeper monitoring component, too!), Search Analytics, and Logsene [1].
+
+ ### [Tubemogul](http://tubemogul.com/)
+ - Uses ZooKeeper for leader election, configuration management, locking, and group membership [1].
+
+ ### [Twitter](https://twitter.com/)
+ - ZooKeeper is used at Twitter as the source of truth for storing critical metadata. It serves as a coordination kernel to
+ provide distributed coordination services, such as leader election and distributed locking.
+ Some concrete examples of ZooKeeper in action include [15,16]:
+     - ZooKeeper is used to store the service registry, which is used by Twitter’s naming service for service discovery.
+     - Manhattan (Twitter’s in-house key-value database), Nighthawk (sharded Redis), and Blobstore (in-house photo and video storage)
+     store their cluster topology information in ZooKeeper.
+     - EventBus, Twitter’s pub-sub messaging system, stores critical metadata in ZooKeeper and uses ZooKeeper for leader election.
+     - Mesos, Twitter’s compute platform, uses ZooKeeper for leader election.
+
+ ### [Vast.com](http://www.vast.com/)
+ - Used internally as a part of sharding services, distributed synchronization of data/index updates, configuration management and failover support [1].
+
+ ### [Wealthfront](http://wealthfront.com/)
+ - Wealthfront uses ZooKeeper for service discovery, leader election and distributed locking among its many backend services.
+ ZK is an essential part of Wealthfront's continuous [deployment infrastructure](http://eng.wealthfront.com/2010/05/02/deployment-infrastructure-for-continuous-deployment/) [1].
+
+ ### [Yahoo!](http://www.yahoo.com/)
+ - ZooKeeper is used for a myriad of services inside Yahoo! for doing leader election, configuration management, sharding, locking, group membership, etc. [1].
+
+ ### [Zynga](http://www.zynga.com/)
+ - ZooKeeper at Zynga is used for a variety of services including configuration management, leader election, sharding and more [1].
+
+
+ #### References
+ - [1] https://cwiki.apache.org/confluence/display/ZOOKEEPER/PoweredBy
+ - [2] https://www.youtube.com/watch?v=Ew53T6h9oRw
+ - [3] https://bookkeeper.apache.org/docs/4.7.3/getting-started/concepts/#ledgers
+ - [4] http://cxf.apache.org/dosgi-discovery-demo-page.html
+ - [5] https://flume.apache.org/FlumeUserGuide.html
+ - [6] http://dubbo.apache.org/en-us/blog/dubbo-zk.html
+ - [7] https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
+ - [8] https://hbase.apache.org/book.html#zookeeper
+ - [9] https://www.cloudkarafka.com/blog/2018-07-04-cloudkarafka_what_is_zookeeper.html
+ - [10] http://mesos.apache.org/documentation/latest/high-availability/
+ - [11] http://incubator.apache.org/projects/s4.html
+ - [12] https://lucene.apache.org/solr/guide/6_6/using-zookeeper-to-manage-configuration-files.html#UsingZooKeepertoManageConfigurationFiles-StartupBootstrap
+ - [13] https://lucene.apache.org/solr/guide/6_6/setting-up-an-external-zookeeper-ensemble.html
+ - [14] https://spark.apache.org/docs/latest/spark-standalone.html#standby-masters-with-zookeeper
+ - [15] https://blog.twitter.com/engineering/en_us/topics/infrastructure/2018/zookeeper-at-twitter.html
+ - [16] https://blog.twitter.com/engineering/en_us/topics/infrastructure/2018/dynamic-configuration-at-twitter.html
+ - [17] TANG, C., KOOBURAT, T., VENKATACHALAM, P., CHANDER, A., WEN, Z., NARAYANAN, A., DOWELL, P., AND KARL, R. Holistic Configuration Management
+ at Facebook. In Proceedings of the 25th Symposium on Operating Systems Principles (SOSP '15) (Monterey, CA, USA, Oct. 2015).
+ - [18] https://www.youtube.com/watch?v=SeZV373gUZc
+ - [19] https://medium.com/@Pinterest_Engineering/zookeeper-resilience-at-pinterest-adfd8acf2a6b
+ - [20] https://blog.cloudera.com/what-are-hbase-znodes/
+ - [21] https://helix.apache.org/Architecture.html
+ - [22] http://storm.apache.org/releases/current/Setting-up-a-Storm-cluster.html
+ - [23] https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/jobmanager_high_availability.html
+ - [24] https://pulsar.apache.org/docs/en/concepts-architecture-overview/#metadata-store
+ - [25] https://cwiki.apache.org/confluence/display/Hive/Locking
+ - [26] the *ZooKeeperHiveLockManager* implementation in the [hive](https://github.com/apache/hive/) code base
+ - [27] https://druid.apache.org/docs/latest/dependencies/zookeeper.html
+ - [28] https://mapr.com/blog/apache-drill-architecture-ultimate-guide/
+ - [29] https://oozie.apache.org/docs/4.1.0/AG_Install.html
+ - [30] https://docs.spring.io/spring-xd/docs/current/reference/html/
+ - [31] https://cwiki.apache.org/confluence/display/CURATOR/Powered+By
+ - [32] https://projects.spring.io/spring-statemachine/
+ - [33] https://www.tigeranalytics.com/blog/apache-kylin-architecture/
+ - [34] https://apacheignite.readme.io/docs/cluster-discovery
+ - [35] http://atlas.apache.org/HighAvailability.html
+ - [36] http://griffin.apache.org/docs/usecases.html
+ - [37] https://fluo.apache.org/
+ - [38] https://spring.io/projects/spring-cloud-zookeeper