id
stringlengths
6
26
chapter
stringclasses
36 values
section
stringlengths
3
5
title
stringlengths
3
27
source_file
stringlengths
13
29
question_markdown
stringlengths
17
6.29k
answer_markdown
stringlengths
3
6.76k
code_blocks
listlengths
0
9
has_images
bool
2 classes
image_refs
listlengths
0
7
08-8-6
08
8-6
8-6
docs/Chap08/Problems/8-6.md
The problem of merging two sorted lists arises frequently. We have seen a procedure for it as the subroutine $\text{MERGE}$ in Section 2.3.1. In this problem, we will prove a lower bound of $2n - 1$ on the worst-case number of comparisons required to merge two sorted lists, each containing $n$ items. First we will sho...
**a.** There are $\binom{2n}{n}$ ways to divide $2n$ numbers into two sorted lists, each with $n$ numbers. **b.** Based on Exercise C.1.13, $$ \begin{aligned} \binom{2n}{n} & \le 2^h \\\\ h & \ge \lg\frac{(2n)!}{(n!)^2} \\\\ & = \lg (2n!) - 2\lg (n!) \\\\ & = \Theta(2n\lg 2...
[]
false
[]
08-8-7
08
8-7
8-7
docs/Chap08/Problems/8-7.md
A **_compare-exchange_** operation on two array elements $A[i]$ and $A[j]$, where $i < j$, has the form ```cpp COMPARE-EXCHANGE(A, i, j) if A[i] > A[j] exchange A[i] with A[j] ``` After the compare-exchange operation, we know that $A[i] \le A[j]$. An **_oblivious compare-exchange algorithm_** operates solely by a se...
(Removed)
[ { "lang": "cpp", "code": "> COMPARE-EXCHANGE(A, i, j)\n> if A[i] > A[j]\n> exchange A[i] with A[j]\n>" }, { "lang": "cpp", "code": "> INSERTION-SORT(A)\n> for j = 2 to A.length\n> for i = j - 1 downto 1\n> COMPARE-EXCHANGE(A, i, i + 1)\n>" } ]
false
[]
09-9.1-1
09
9.1
9.1-1
docs/Chap09/9.1.md
Show that the second smallest of $n$ elements can be found with $n + \lceil \lg n \rceil - 2$ comparisons in the worst case. ($\textit{Hint:}$ Also find the smallest element.)
We can compare the elements in a tournament fashion - we split them into pairs, compare each pair and then proceed to compare the winners in the same fashion. We need to keep track of each "match" the potential winners have participated in. We select a winner in $n − 1$ matches. At this point, we know that the second ...
[]
false
[]
09-9.1-2
09
9.1
9.1-2 $\star$
docs/Chap09/9.1.md
Prove the lower bound of $\lceil 3n / 2 \rceil - 2$ comparisons in the worst case to find both the maximum and minimum of $n$ numbers. ($\textit{Hint:}$ Consider how many numbers are potentially either the maximum or minimum, and investigate how a comparison affects these counts.)
If $n$ is odd, there are $$ \begin{aligned} 1 + \frac{3(n-3)}{2} + 2 & = \frac{3n}{2} - \frac{3}{2} \\\\ & = (\bigg\lceil \frac{3n}{2} \bigg\rceil - \frac{1}{2}) - \frac{3}{2} \\\\ & = \bigg\lceil \frac{3n}{2} \bigg\rceil - 2 \end{aligned} $$ comparisons. If $n$ is even, there are $$ \begin{aligned} 1 +...
[]
false
[]
09-9.2-1
09
9.2
9.2-1
docs/Chap09/9.2.md
Show that $\text{RANDOMIZED-SELECT}$ never makes a recursive call to a $0$-length array.
Calling a $0$-length array would mean that the second and third arguments are equal. So, if the call is made on line 8, we would need that $p = q - 1$, which means that $q - p + 1 = 0$. However, $i$ is assumed to be a nonnegative number, and to be executing line 8, we would need that $i < k = q - p + 1 = 0$, a contrad...
[]
false
[]
09-9.2-2
09
9.2
9.2-2
docs/Chap09/9.2.md
Argue that the indicator random variable $X_k$ and the value $T(\max(k - 1, n - k))$ are independent.
The probability that $X_k$ is equal to $1$ is unchanged when we know the max of $k - 1$ and $n - k$. In other words, $\Pr\\{X_k = a \mid \max(k - 1, n - k) = m\\} = \Pr\\{X_k = a\\}$ for $a = 0, 1$ and $m = k - 1, n - k$ so $X_k$ and $\max(k - 1, n - k)$ are independent. By C.3-5, so are $X_k$ and $T(\max(k - 1, n - k...
[]
false
[]
09-9.2-3
09
9.2
9.2-3
docs/Chap09/9.2.md
Write an iterative version of $\text{RANDOMIZED-SELECT}$.
```cpp PARTITION(A, p, r) x = A[r] i = p for k = p - 1 to r if A[k] < x i = i + 1 swap A[i] with A[k] i = i + 1 swap A[i] with A[r] return i ``` ```cpp RANDOMIZED-PARTITION(A, p, r) x = RANDOM(p - 1, r) swap A[x] with A[r] return PARTITION(A, p, r) ``` ...
[ { "lang": "cpp", "code": "PARTITION(A, p, r)\n x = A[r]\n i = p\n for k = p - 1 to r\n if A[k] < x\n i = i + 1\n swap A[i] with A[k]\n i = i + 1\n swap A[i] with A[r]\n return i" }, { "lang": "cpp", "code": "RANDOMIZED-PARTITION(A, p, r)\n x = RANDO...
false
[]
09-9.2-4
09
9.2
9.2-4
docs/Chap09/9.2.md
Suppose we use $\text{RANDOMIZED-SELECT}$ to select the minimum element of the array $A = \langle 3, 2, 9, 0, 7, 5, 4, 8, 6, 1 \rangle$. Describe a sequence of partitions that results in a worst-case performance of $\text{RANDOMIZED-SELECT}$.
When the partition selected is always the maximum element of the array we get worst-case performance. In the example, the sequence would be $\langle 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 \rangle$.
[]
false
[]
09-9.3-1
09
9.3
9.3-1
docs/Chap09/9.3.md
In the algorithm $\text{SELECT}$, the input elements are divided into groups of $5$. Will the algorithm work in linear time if they are divided into groups of $7$? Argue that $\text{SELECT}$ does not run in linear time if groups of $3$ are used.
It will still work if they are divided into groups of $7$, because we will still know that the median of medians is less than at least $4$ elements from half of the $\lceil n / 7 \rceil$ groups, so, it is greater than roughly $4n / 14$ of the elements. Similarly, it is less than roughly $4n / 14$ of the elements. So, ...
[]
false
[]
09-9.3-2
09
9.3
9.3-2
docs/Chap09/9.3.md
Analyze $\text{SELECT}$ to show that if $n \ge 140$, then at least $\lceil n / 4 \rceil$ elements are greater than the median-of-medians $x$ and at least $\lceil n / 4 \rceil$ elements are less than $x$.
$$ \begin{aligned} \frac{3n}{10} - 6 & \ge \lceil \frac{n}{4} \rceil \\\\ \frac{3n}{10} - 6 & \ge \frac{n}{4} + 1 \\\\ 12n - 240 & \ge 10n + 40 \\\\ n & \ge 140. \end{aligned} $$
[]
false
[]
09-9.3-3
09
9.3
9.3-3
docs/Chap09/9.3.md
Show how quicksort can be made to run in $O(n\lg n)$ time in the worst case, assuming that all elements are distinct.
We can modify quicksort to run in worst case $n\lg n$ time by choosing our pivot element to be the exact median by using quick select. Then, we are guaranteed that our pivot will be good, and the time taken to find the median is on the same order of the rest of the partitioning.
[]
false
[]
09-9.3-4
09
9.3
9.3-4 $\star$
docs/Chap09/9.3.md
Suppose that an algorithm uses only comparisons to find the $i$th smallest element in a set of $n$ elements. Show that it can also find the $i - 1$ smaller elements and $n - i$ larger elements without performing additional comparisons.
Create a graph with $n$ vertices and draw a directed edge from vertex $i$ to vertex $j$ if the $i$th and $j$th elements of the array are compared in the algorithm and we discover that $A[i] \ge A[j]$. Observe that $A[i]$ is one of the $i - 1$ smaller elements if there exists a path from $x$ to $i$ in the graph, and $A[...
[]
false
[]
09-9.3-5
09
9.3
9.3-5
docs/Chap09/9.3.md
Suppose that you have a "black-box" worst-case linear-time median subroutine. Give a simple, linear-time algorithm that solves the selection problem for an arbitrary order statistic.
To use it, just find the median, partition the array based on that median. - If $i$ is less than half the length of the original array, recurse on the first half. - If $i$ is half the length of the array, return the element coming from the median finding black box. - If $i$ is more than half the length of the array, s...
[]
false
[]
09-9.3-6
09
9.3
9.3-6
docs/Chap09/9.3.md
The $k$th **_quantiles_** of an $n$-element set are the $k - 1$ order statistics that divide the sorted set into $k$ equal-sized sets (to within $1$). Give an $O(n\lg k)$-time algorithm to list the $k$th quantiles of a set.
Pre-calculate the positions of the quantiles in $O(k)$, we use the $O(n)$ select algorithm to find the $\lfloor k / 2 \rfloor$th position, after that the elements are divided into two sets by the pivot the $\lfloor k / 2 \rfloor$th position, we do it recursively in the two sets to find other positions. Since the maximu...
[ { "lang": "cpp", "code": "PARTITION(A, p, r)\n x = A[r]\n i = p\n for k = p to r\n if A[k] < x\n i = i + 1\n swap A[i] with A[k]\n i = i + 1\n swap a[i] with a[r]\n return i" }, { "lang": "cpp", "code": "RANDOMIZED-PARTITION(A, p, r)\n x = RANDOM...
false
[]
09-9.3-7
09
9.3
9.3-7
docs/Chap09/9.3.md
Describe an $O(n)$-time algorithm that, given a set $S$ of $n$ distinct numbers and a positive integer $k \le n$, determines the $k$ numbers in $S$ that are closest to the median of $S$.
Find the median in $O(n)$; create a new array, each element is the absolute value of the original value subtract the median; find the $k$th smallest number in $O(n)$, then the desired values are the elements whose absolute difference with the median is less than or equal to the $k$th smallest number in the new array.
[]
false
[]
09-9.3-8
09
9.3
9.3-8
docs/Chap09/9.3.md
Let $X[1..n]$ and $Y[1..n]$ be two arrays, each containing $n$ numbers already in sorted order. Give an $O(\lg n)$-time algorithm to find the median of all $2n$ elements in arrays $X$ and $Y$.
Without loss of generality, assume $n$ is a power of $2$. ```cpp MEDIAN(X, Y, n) if n == 1 return min(X[1], Y[1]) if X[n / 2] < Y[n / 2] return MEDIAN(X[n / 2 + 1..n], Y[1..n / 2], n / 2) return MEDIAN(X[1..n / 2], Y[n / 2 + 1..n], n / 2) ```
[ { "lang": "cpp", "code": "MEDIAN(X, Y, n)\n if n == 1\n return min(X[1], Y[1])\n if X[n / 2] < Y[n / 2]\n return MEDIAN(X[n / 2 + 1..n], Y[1..n / 2], n / 2)\n return MEDIAN(X[1..n / 2], Y[n / 2 + 1..n], n / 2)" } ]
false
[]
09-9.3-9
09
9.3
9.3-9
docs/Chap09/9.3.md
Professor Olay is consulting for an oil company, which is planning a large pipeline running east to west through an oil field of $n$ wells. The company wants to connect a spur pipeline from each well directly to the main pipeline along a shortest route (either north or south), as shown in Figure 9.2. Given the $x$- and...
- If $n$ is odd, we pick the $y$ coordinate of the main pipeline to be equal to the median of all the $y$ coordinates of the wells. - If $n$ is even, we pick the $y$ coordinate of the pipeline to be anything between the $y$ coordinates of the wells with $y$-coordinates which have order statistics $\lfloor (n + 1) / 2 ...
[]
false
[]
09-9-1
09
9-1
9-1
docs/Chap09/Problems/9-1.md
Given a set of $n$ numbers, we wish to find the $i$ largest in sorted order using a comparison-based algorithm. Find the algorithm that implements each of the following methods with the best asymptotic worst-case running time, and analyze the running times of the algorithms in terms of $n$ and $i$ . **a.** Sort the nu...
**a.** The running time of sorting the numbers is $O(n\lg n)$, and the running time of listing the $i$ largest is $O(i)$. Therefore, the total running time is $O(n\lg n + i)$. **b.** The running time of building a max-priority queue (using a heap) from the numbers is $O(n)$, and the running time of each call $\text{EX...
[]
false
[]
09-9-2
09
9-2
9-2
docs/Chap09/Problems/9-2.md
For $n$ distinct elements $x_1, x_2, \ldots, x_n$ with positive weights $w_1, w_2, \ldots, w_n$ such that $\sum_{i = 1}^n w_i = 1$, the **_weighted (lower) median_** is the element $x_k$ satisfying $$\sum_{x_i < x_k} w_i < \frac{1}{2}$$ and $$\sum_{x_i > x_k} w_i \le \frac{1}{2}.$$ For example, if the elements are ...
**a.** Let $m_k$ be the number of $x_i$ smaller than $x_k$. When weights of $1 / n$ are assigned to each $x_i$, we have $\sum_{x_i < x_k} w_i = m_k / n$ and $\sum_{x_i > x_k} w_i = (n - m_k - 1) / 2$. The only value of $m_k$ which makes these sums $< 1 / 2$ and $\le 1 / 2$ respectively is when $\lceil n / 2 \rceil - 1$...
[]
false
[]
09-9-3
09
9-3
9-3
docs/Chap09/Problems/9-3.md
We showed that the worst-case number $T(n)$ of comparisons used by $\text{SELECT}$ to select the $i$th order statistic from $n$ numbers satisfies $T(n) = \Theta(n)$, but the constant hidden by the $\Theta$-notation is rather large. When $i$ is small relative to $n$, we can implement a different procedure that uses $\te...
(Removed)
[]
false
[]
09-9-4
09
9-4
9-4
docs/Chap09/Problems/9-4.md
In this problem, we use indicator random variables to analyze the $\text{RANDOMIZED-SELECT}$ procedure in a manner akin to our analysis of $\text{RANDOMIZED-QUICKSORT}$ in Section 7.4.2. As in the quicksort analysis, we assume that all elements are distinct, and we rename the elements of the input array $A$ as $z_1, z...
(Removed)
[]
false
[]
10-10.1-1
10
10.1
10.1-1
docs/Chap10/10.1.md
Using Figure 10.1 as a model, illustrate the result of each operation in the sequence $\text{PUSH}(S, 4)$, $\text{PUSH}(S, 1)$, $\text{PUSH}(S, 3)$, $\text{POP}(S)$, $\text{PUSH}(S, 8)$, and $\text{POP}(S)$ on an initially empty stack $S$ stored in array $S[1..6]$.
$$ \begin{array}{l|ccc} \text{PUSH($S, 4$)} & 4 & & \\\\ \text{PUSH($S, 1$)} & 4 & 1 & \\\\ \text{PUSH($S, 3$)} & 4 & 1 & 3 \\\\ \text{POP($S$)} & 4 & 1 & \\\\ \text{PUSH($S, 8$)} & 4 & 1 & 8 \\\\ \text{POP($S$)} & 4 & 1 & \end{array} $$
[]
false
[]
10-10.1-2
10
10.1
10.1-2
docs/Chap10/10.1.md
Explain how to implement two stacks in one array $A[1..n]$ in such a way that neither stack overflows unless the total number of elements in both stacks together is $n$. The $\text{PUSH}$ and $\text{POP}$ operations should run in $O(1)$ time.
The first stack starts at $1$ and grows up towards n, while the second starts form $n$ and grows down towards $1$. Stack overflow happens when an element is pushed when the two stack pointers are adjacent.
[]
false
[]
10-10.1-3
10
10.1
10.1-3
docs/Chap10/10.1.md
Using Figure 10.2 as a model, illustrate the result of each operation in the sequence $\text{ENQUEUE}(Q, 4)$, $\text{ENQUEUE}(Q ,1)$, $\text{ENQUEUE}(Q, 3)$, $\text{DEQUEUE}(Q)$, $\text{ENQUEUE}(Q, 8)$, and $\text{DEQUEUE}(Q)$ on an initially empty queue $Q$ stored in array $Q[1..6]$.
$$ \begin{array}{l|cccc} \text{ENQUEUE($Q, 4$)} & 4 & & & \\\\ \text{ENQUEUE($Q, 1$)} & 4 & 1 & & \\\\ \text{ENQUEUE($Q, 3$)} & 4 & 1 & 3 & \\\\ \text{DEQUEUE($Q$)} & & 1 & 3 & \\\\ \text{ENQUEUE($Q, 8$)} & & 1 & 3 & 8 \\\\ \text{DEQUEUE($Q$)} & & & 3 & 8 \end{array} $$
[]
false
[]
10-10.1-4
10
10.1
10.1-4
docs/Chap10/10.1.md
Rewrite $\text{ENQUEUE}$ and $\text{DEQUEUE}$ to detect underflow and overflow of a queue.
To detect underflow and overflow of a queue, we can implement $\text{QUEUE-EMPTY}$ and $\text{QUEUE-FULL}$ first. ```cpp QUEUE-EMPTY(Q) if Q.head == Q.tail return true else return false ``` ```cpp QUEUE-FULL(Q) if Q.head == Q.tail + 1 or (Q.head == 1 and Q.tail == Q.length) return true ...
[ { "lang": "cpp", "code": "QUEUE-EMPTY(Q)\n if Q.head == Q.tail\n return true\n else return false" }, { "lang": "cpp", "code": "QUEUE-FULL(Q)\n if Q.head == Q.tail + 1 or (Q.head == 1 and Q.tail == Q.length)\n return true\n else return false" }, { "lang": "cpp", ...
false
[]
10-10.1-5
10
10.1
10.1-5
docs/Chap10/10.1.md
Whereas a stack allows insertion and deletion of elements at only one end, and a queue allows insertion at one end and deletion at the other end, a **_deque_** (double-ended queue) allows insertion and deletion at both ends. Write four $O(1)$-time procedures to insert elements into and delete elements from both ends of...
The procedures $\text{QUEUE-EMPTY}$ and $\text{QUEUE-FULL}$ are implemented in Exercise 10.1-4. ```cpp HEAD-ENQUEUE(Q, x) if QUEUE-FULL(Q) error "overflow" else if Q.head == 1 Q.head = Q.length else Q.head = Q.head - 1 Q[Q.head] = x ``` ```cpp TAIL-ENQUEUE(Q, x) ...
[ { "lang": "cpp", "code": "HEAD-ENQUEUE(Q, x)\n if QUEUE-FULL(Q)\n error \"overflow\"\n else\n if Q.head == 1\n Q.head = Q.length\n else Q.head = Q.head - 1\n Q[Q.head] = x" }, { "lang": "cpp", "code": "TAIL-ENQUEUE(Q, x)\n if QUEUE-FULL(Q)\n ...
false
[]
10-10.1-6
10
10.1
10.1-6
docs/Chap10/10.1.md
Show how to implement a queue using two stacks. Analyze the running time of the queue operations.
- $\text{ENQUEUE}$: $\Theta(1)$. - $\text{DEQUEUE}$: worst $O(n)$, amortized $\Theta(1)$. Let the two stacks be $A$ and $B$. $\text{ENQUEUE}$ pushes elements on $B$. $\text{DEQUEUE}$ pops elements from $A$. If $A$ is empty, the contents of $B$ are transfered to $A$ by popping them out of $B$ and pushing them to $A$. ...
[]
false
[]
10-10.1-7
10
10.1
10.1-7
docs/Chap10/10.1.md
Show how to implement a stack using two queues. Analyze the running time of the stack operations.
- $\text{PUSH}$: $\Theta(1)$. - $\text{POP}$: $\Theta(n)$. We have two queues and mark one of them as active. $\text{PUSH}$ queues an element on the active queue. $\text{POP}$ should dequeue all but one element of the active queue and queue them on the inactive. The roles of the queues are then reversed, and the final...
[]
false
[]
10-10.2-1
10
10.2
10.2-1
docs/Chap10/10.2.md
Can you implement the dynamic-set operation $\text{INSERT}$ on a singly linked list in $O(1)$ time? How about $\text{DELETE}$?
- $\text{INSERT}$: can be implemented in constant time by prepending it to the list. ```cpp LIST-INSERT(L, x) x.next = L.head L.head = x ``` - $\text{DELETE}$: you can copy the value from the successor to element you want to delete, and then you can delete the successor in $O(1)$ time. Thi...
[ { "lang": "cpp", "code": " LIST-INSERT(L, x)\n x.next = L.head\n L.head = x" } ]
false
[]
10-10.2-2
10
10.2
10.2-2
docs/Chap10/10.2.md
Implement a stack using a singly linked list $L$. The operations $\text{PUSH}$ and $\text{POP}$ should still take $O(1)$ time.
```cpp STACK-EMPTY(L) if L.head == NIL return true else return false ``` - $\text{PUSH}$: adds an element in the beginning of the list. ```cpp PUSH(L, x) x.next = L.head L.head = x ``` - $\text{POP}$: removes the first element from the list. ```cpp POP(L) ...
[ { "lang": "cpp", "code": "STACK-EMPTY(L)\n if L.head == NIL\n return true\n else return false" }, { "lang": "cpp", "code": " PUSH(L, x)\n x.next = L.head\n L.head = x" }, { "lang": "cpp", "code": " POP(L)\n if STACK-EMPTY(L)\n error ...
false
[]
10-10.2-3
10
10.2
10.2-3
docs/Chap10/10.2.md
Implement a queue by a singly linked list $L$. The operations $\text{ENQUEUE}$ and $\text{DEQUEUE}$ should still take $O(1)$ time.
```cpp QUEUE-EMPTY(L) if L.head == NIL return true else return false ``` - $\text{ENQUEUE}$: inserts an element at the end of the list. In this case we need to keep track of the last element of the list. We can do that with a sentinel. ```cpp ENQUEUE(L, x) if QUEUE-EMPTY(L) ...
[ { "lang": "cpp", "code": "QUEUE-EMPTY(L)\n if L.head == NIL\n return true\n else return false" }, { "lang": "cpp", "code": " ENQUEUE(L, x)\n if QUEUE-EMPTY(L)\n L.head = x\n else L.tail.next = x\n L.tail = x\n x.next = NIL" }, { "lan...
false
[]
10-10.2-4
10
10.2
10.2-4
docs/Chap10/10.2.md
As written, each loop iteration in the $\text{LIST-SEARCH}'$ procedure requires two tests: one for $x \ne L.nil$ and one for $x.key \ne k$. Show how to eliminate the test for $x \ne L.nil$ in each iteration.
```cpp LIST-SEARCH'(L, k) x = L.nil.next L.nil.key = k while x.key != k x = x.next return x ```
[ { "lang": "cpp", "code": "LIST-SEARCH'(L, k)\n x = L.nil.next\n L.nil.key = k\n while x.key != k\n x = x.next\n return x" } ]
false
[]
10-10.2-5
10
10.2
10.2-5
docs/Chap10/10.2.md
Implement the dictionary operations $\text{INSERT}$, $\text{DELETE}$, and $\text{SEARCH}$ using singly linked, circular lists. What are the running times of your procedures?
- $\text{INSERT}$: $O(1)$. ```cpp LIST-INSERT''(L, x) x.next = L.nil.next L.nil.next = x ``` - $\text{DELETE}$: $O(n)$. ```cpp LIST-DELETE''(L, x) prev = L.nil while prev.next != x if prev.next == L.nil error "element not exist" ...
[ { "lang": "cpp", "code": " LIST-INSERT''(L, x)\n x.next = L.nil.next\n L.nil.next = x" }, { "lang": "cpp", "code": " LIST-DELETE''(L, x)\n prev = L.nil\n while prev.next != x\n if prev.next == L.nil\n error \"element not exist\"\n ...
false
[]
10-10.2-6
10
10.2
10.2-6
docs/Chap10/10.2.md
The dynamic-set operation $\text{UNION}$ takes two disjoint sets $S_1$ and $S_2$ as input, and it returns a set $S = S_1 \cup S_2$ consisting of all the elements of $S_1$ and $S_2$. The sets $S_1$ and $S_2$ are usually destroyed by the operation. Show how to support $\text{UNION}$ in $O(1)$ time using a suitable list d...
If both sets are a doubly linked lists, we just point link the last element of the first list to the first element in the second. If the implementation uses sentinels, we need to destroy one of them. ```cpp LIST-UNION(L[1], L[2]) L[2].nil.next.prev = L[1].nil.prev L[1].nil.prev.next = L[2].nil.next L[2].ni...
[ { "lang": "cpp", "code": "LIST-UNION(L[1], L[2])\n L[2].nil.next.prev = L[1].nil.prev\n L[1].nil.prev.next = L[2].nil.next\n L[2].nil.prev.next = L[1].nil\n L[1].nil.prev = L[2].nil.prev" } ]
false
[]
10-10.2-7
10
10.2
10.2-7
docs/Chap10/10.2.md
Give a $\Theta(n)$-time nonrecursive procedure that reverses a singly linked list of $n$ elements. The procedure should use no more than constant storage beyond that needed for the list itself.
```cpp LIST-REVERSE(L) p[1] = NIL p[2] = L.head while p[2] != NIL p[3] = p[2].next p[2].next = p[1] p[1] = p[2] p[2] = p[3] L.head = p[1] ```
[ { "lang": "cpp", "code": "LIST-REVERSE(L)\n p[1] = NIL\n p[2] = L.head\n while p[2] != NIL\n p[3] = p[2].next\n p[2].next = p[1]\n p[1] = p[2]\n p[2] = p[3]\n L.head = p[1]" } ]
false
[]
10-10.2-8
10
10.2
10.2-8 $\star$
docs/Chap10/10.2.md
Explain how to implement doubly linked lists using only one pointer value $x.np$ per item instead of the usual two ($next$ and $prev$). Assume all pointer values can be interpreted as $k$-bit integers, and define $x.np$ to be $x.np = x.next \text{ XOR } x.prev$, the $k$-bit "exclusive-or" of $x.next$ and $x.prev$. (The...
```cpp LIST-SEARCH(L, k) prev = NIL x = L.head while x != NIL and x.key != k next = prev XOR x.np prev = x x = next return x ``` ```cpp LIST-INSERT(L, x) x.np = NIL XOR L.tail if L.tail != NIL L.tail.np = (L.tail.np XOR NIL) XOR x // tail.prev XOR x if L.he...
[ { "lang": "cpp", "code": "LIST-SEARCH(L, k)\n prev = NIL\n x = L.head\n while x != NIL and x.key != k\n next = prev XOR x.np\n prev = x\n x = next\n return x" }, { "lang": "cpp", "code": "LIST-INSERT(L, x)\n x.np = NIL XOR L.tail\n if L.tail != NIL\n ...
false
[]
10-10.3-1
10
10.3
10.3-1
docs/Chap10/10.3.md
Draw a picture of the sequence $\langle 13, 4, 8, 19, 5, 11 \rangle$ stored as a doubly linked list using the multiple-array representation. Do the same for the single-array representation.
- A multiple-array representation with $L = 2$, $$ \begin{array}{|r|c|c|c|c|c|c|c|} \hline index & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ \hline next & & 3 & 4 & 5 & 6 & 7 & \diagup \\\\ \hline key & & 13 & 4 & 8 & 19 & 5 & 11 \\\\ \hline prev & & \diagup & ...
[]
false
[]
10-10.3-2
10
10.3
10.3-2
docs/Chap10/10.3.md
Write the procedures $\text{ALLOCATE-OBJECT}$ and $\text{FREE-OBJECT}$ for a homogeneous collection of objects implemented by the single-array representation.
```cpp ALLOCATE-OBJECT() if free == NIL error "out of space" else x = free free = A[x + 1] return x ``` ```cpp FREE-OBJECT(x) A[x + 1] = free free = x ```
[ { "lang": "cpp", "code": "ALLOCATE-OBJECT()\n if free == NIL\n error \"out of space\"\n else x = free\n free = A[x + 1]\n return x" }, { "lang": "cpp", "code": "FREE-OBJECT(x)\n A[x + 1] = free\n free = x" } ]
false
[]
10-10.3-3
10
10.3
10.3-3
docs/Chap10/10.3.md
Why don't we need to set or reset the $prev$ attributes of objects in the implementation of the $\text{ALLOCATE-OBJECT}$ and $\text{FREE-OBJECT}$ procedures?
We implement $\text{ALLOCATE-OBJECT}$ and $\text{FREE-OBJECT}$ in the hope of managing the storage of currently non-used object in the free list so that one can be allocated for reusing. As the free list acts like a stack, to maintain this stack-like collection, we merely remember its first pointer and set the $next$ a...
[]
false
[]
10-10.3-4
10
10.3
10.3-4
docs/Chap10/10.3.md
It is often desirable to keep all elements of a doubly linked list compact in storage, using, for example, the first $m$ index locations in the multiple-array representation. (This is the case in a paged, virtual-memory computing environment.) Explain how to implement the procedures $\text{ALLOCATE-OBJECT}$ and $\text{...
```cpp ALLOCATE-OBJECT() if STACK-EMPTY(F) error "out of space" else x = POP(F) return x ``` ```cpp FREE-OBJECT(x) p = F.top - 1 p.prev.next = x p.next.prev = x x.key = p.key x.prev = p.prev x.next = p.next PUSH(F, p) ```
[ { "lang": "cpp", "code": "ALLOCATE-OBJECT()\n if STACK-EMPTY(F)\n error \"out of space\"\n else x = POP(F)\n return x" }, { "lang": "cpp", "code": "FREE-OBJECT(x)\n p = F.top - 1\n p.prev.next = x\n p.next.prev = x\n x.key = p.key\n x.prev = p.prev\n x.next ...
false
[]
10-10.3-5
10
10.3
10.3-5
docs/Chap10/10.3.md
Let $L$ be a doubly linked list of length $n$ stored in arrays $key$, $prev$, and $next$ of length $m$. Suppose that these arrays are managed by $\text{ALLOCATE-OBJECT}$ and $\text{FREE-OBJECT}$ procedures that keep a doubly linked free list $F$. Suppose further that of the $m$ items, exactly $n$ are on list $L$ and $m...
We represent the combination of arrays $key$, $prev$, and $next$ by a multible-array $A$. Each object of $A$'s is either in list $L$ or in the free list $F$, but not in both. The procedure $\text{COMPACTIFY-LIST}$ transposes the first object in $L$ with the first object in $A$, the second objects until the list $L$ is ...
[ { "lang": "cpp", "code": "COMPACTIFY-LIST(L, F)\n TRANSPOSE(A[L.head], A[1])\n if F.head == 1\n F.head = L.head\n L.head = 1\n l = A[L.head].next\n i = 2\n while l != NIL\n TRANSPOSE(A[l], A[i])\n if F == i\n F = l\n l = A[l].next\n i = i + 1" ...
false
[]
10-10.4-1
10
10.4
10.4-1
docs/Chap10/10.4.md
Draw the binary tree rooted at index $6$ that is represented by the following attributes: $$ \begin{array}{cccc} \text{index} & key & left & right \\\\ \hline 1 & 12 & 7 & 3 \\\\ 2 & 15 & 8 & \text{NIL} \\\\ 3 & 4 & 10 & \text{NIL} \\\\ 4 & 10 & 5 & 9 \\\\ 5 & 2 &...
![](../img/10.4-1.png)
[]
true
[ "../img/10.4-1.png" ]
10-10.4-2
10
10.4
10.4-2
docs/Chap10/10.4.md
Write an $O(n)$-time recursive procedure that, given an $n$-node binary tree, prints out the key of each node in the tree.
```cpp PRINT-BINARY-TREE(T) PRINT-BINARY-TREE-AUX(T.root) PRINT-BINARY-TREE-AUX(x) if node != NIL PRINT-BINARY-TREE-AUX(x.left) print x.key PRINT-BINARY-TREE-AUX(x.right) ```
[ { "lang": "cpp", "code": "PRINT-BINARY-TREE(T)\n PRINT-BINARY-TREE-AUX(T.root)\n\nPRINT-BINARY-TREE-AUX(x)\n if node != NIL\n PRINT-BINARY-TREE-AUX(x.left)\n print x.key\n PRINT-BINARY-TREE-AUX(x.right)" } ]
false
[]
10-10.4-3
10
10.4
10.4-3
docs/Chap10/10.4.md
Write an O$(n)$-time nonrecursive procedure that, given an $n$-node binary tree, prints out the key of each node in the tree. Use a stack as an auxiliary data structure.
```cpp PRINT-BINARY-TREE(T, S) PUSH(S, T.root) while !STACK-EMPTY(S) x = S[S.top] while x != NIL // store all nodes on the path towards the leftmost leaf PUSH(S, x.left) x = S[S.top] POP(S) // S has NIL on its top, so pop it if !STACK-EMP...
[ { "lang": "cpp", "code": "PRINT-BINARY-TREE(T, S)\n PUSH(S, T.root)\n while !STACK-EMPTY(S)\n x = S[S.top]\n while x != NIL // store all nodes on the path towards the leftmost leaf\n PUSH(S, x.left)\n x = S[S.top]\n POP(S) // S has NIL on it...
false
[]
10-10.4-4
10
10.4
10.4-4
docs/Chap10/10.4.md
Write an $O(n)$-time procedure that prints all the keys of an arbitrary rooted tree with $n$ nodes, where the tree is stored using the left-child, right-sibling representation.
```cpp PRINT-LCRS-TREE(T) x = T.root if x != NIL print x.key lc = x.left-child if lc != NIL PRINT-LCRS-TREE(lc) rs = lc.right-sibling while rs != NIL PRINT-LCRS-TREE(rs) rs = rs.right-sibling ```
[ { "lang": "cpp", "code": "PRINT-LCRS-TREE(T)\n x = T.root\n if x != NIL\n print x.key\n lc = x.left-child\n if lc != NIL\n PRINT-LCRS-TREE(lc)\n rs = lc.right-sibling\n while rs != NIL\n PRINT-LCRS-TREE(rs)\n rs = rs.r...
false
[]
10-10.4-5
10
10.4
10.4-5 $\star$
docs/Chap10/10.4.md
Write an $O(n)$-time nonrecursive procedure that, given an $n$-node binary tree, prints out the key of each node. Use no more than constant extra space outside of the tree itself and do not modify the tree, even temporarily, during the procedure.
```cpp PRINT-KEY(T) prev = NIL x = T.root while x != NIL if prev = x.parent print x.key prev = x if x.left x = x.left else if x.right x = x.right else x = x.parent...
[ { "lang": "cpp", "code": "PRINT-KEY(T)\n prev = NIL\n x = T.root\n while x != NIL\n if prev = x.parent\n print x.key\n prev = x\n if x.left\n x = x.left\n else\n if x.right\n x = x.right\n ...
false
[]
10-10.4-6
10
10.4
10.4-6 $\star$
docs/Chap10/10.4.md
The left-child, right-sibling representation of an arbitrary rooted tree uses three pointers in each node: _left-child_, _right-sibling_, and _parent_. From any node, its parent can be reached and identified in constant time and all its children can be reached and identified in time linear in the number of children. Sh...
Use boolean to identify the last sibling, and the last sibling's right-sibling points to the parent.
[]
false
[]
10-10-1
10
10-1
10-1
docs/Chap10/Problems/10-1.md
For each of the four types of lists in the following table, what is the asymptotic worst-case running time for each dynamic-set operation listed? $$ \begin{array}{l|c|c|c|c|} & \text{unsorted, singly linked} & \text{sorted, singly linked} & \text{unsorted, doubly linked} & \text{sorted, doubly linked} \\\\ \hline \tex...
$$ \begin{array}{l|c|c|c|c|} & \text{unsorted, singly linked} & \text{sorted, singly linked} & \text{unsorted, doubly linked} & \text{sorted, doubly linked} \\\\ \hline \text{SEARCH($L, k$)} & \Theta(n) & \T...
[]
false
[]
10-10-2
10
10-2
10-2
docs/Chap10/Problems/10-2.md
A **_mergeable heap_** supports the following operations: $\text{MAKE-HEAP}$ (which creates an empty mergeable heap), $\text{INSERT}$, $\text{MINIMUM}$, $\text{EXTRACT-MIN}$, and $\text{UNION}$. Show how to implement mergeable heaps using linked lists in each of the following cases. Try to make each operation as effici...
In all three cases, $\text{MAKE-HEAP}$ simply creates a new list $L$, sets $L.head = \text{NIL}$, and returns $L$ in constant time. Assume lists are doubly linked. To realize a linked list as a heap, we imagine the usual array implementation of a binary heap, where the children of the $i$th element are $2i$ and $2i + 1...
[ { "lang": "cpp", "code": "EXTRACT-MIN(L)\n min = MINIMIM(L)\n linearly scan for the second smallest element, located in position i\n L.head.key = L[i]\n L[i].key = L[L.length].key\n DELETE(L, L[L.length])\n MIN-HEAPIFY(L[i], i)\n return min" }, { "lang": "cpp", "code": "MIN-...
false
[]
10-10-3
10
10-3
10-3
docs/Chap10/Problems/10-3.md
Exercise 10.3-4 asked how we might maintain an $n$-element list compactly in the first $n$ positions of an array. We shall assume that all keys are distinct and that the compact list is also sorted, that is, $key[i] < key[next[i]]$ for all $i = 1, 2, \ldots, n$ such that $next[i] \ne \text{NIL}$. We will also assume th...
**a.** If the original version of the algorithm takes only $t$ iterations, then, we have that it was only at most t random skips though the list to get to the desired value, since each iteration of the original while loop is a possible random jump followed by a normal step through the linked list. **b.** The for loop ...
[ { "lang": "cpp", "code": "> COMPACT-LIST-SEARCH(L, n, k)\n> i = L\n> while i != NIL and key[i] < k\n> j = RANDOM(1, n)\n> if key[i] < key[j] and key[j] ≤ k\n> i = j\n> if key[i] == k\n> return i\n> i = next[i]\n> if i == NIL or key[...
false
[]
11-11.1-1
11
11.1
11.1-1
docs/Chap11/11.1.md
Suppose that a dynamic set $S$ is represented by a direct-address table $T$ of length $m$. Describe a procedure that finds the maximum element of $S$. What is the worst-case performance of your procedure?
As the dynamic set $S$ is represented by the direct-address table $T$, for each key $k$ in $S$, there is a slot $k$ in $T$ points to it. If no element with key $k$ in $S$, then $T[k] = \text{NIL}$. Using this property, we can find the maximum element of $S$ by traversing down from the highest slot to seek the first non...
[ { "lang": "cpp", "code": "MAXIMUM(S)\n return TABLE-MAXIMUM(T, m - 1)" }, { "lang": "cpp", "code": "TABLE-MAXIMUM(T, l)\n if l < 0\n return NIL\n else if DIRECT-ADDRESS-SEARCH(T, l) != NIL\n return l\n else return TABLE-MAXIMUM(T, l - 1)" } ]
false
[]
11-11.1-2
11
11.1
11.1-2
docs/Chap11/11.1.md
A **_bit vector_** is simply an array of bits ($0$s and $1$s). A bit vector of length $m$ takes much less space than an array of $m$ pointers. Describe how to use a bit vector to represent a dynamic set of distinct elements with no satellite data. Dictionary operations should run in $O(1)$ time.
Using the bit vector data structure, we can represent keys less than $m$ by a string of $m$ bits, denoted by $V[0..m - 1]$, in which each position occupied by the bit $1$ corresponds to a key in the set $S$. If the set contains no element with key $k$, then $V[k] = 0$. For instance, we can store the set $\\{2, 4,...
[ { "lang": "cpp", "code": "BITMAP-SEARCH(V, k)\n if V[k] != 0\n return k\n else return NIL" }, { "lang": "cpp", "code": "BITMAP-INSERT(V, x)\n V[x] = 1" }, { "lang": "cpp", "code": "BITMAP-DELETE(V, x)\n V[x] = 0" } ]
false
[]
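The three operations can be sketched in runnable form. The sketch below is one possible Python transcription (the class name `BitVector` and the use of a single arbitrary-precision integer as the bit string are choices of this sketch, not from the text):

```python
class BitVector:
    """Dynamic set of distinct keys drawn from {0, ..., m - 1}, no satellite data."""

    def __init__(self, m):
        self.m = m     # universe size, kept only for documentation
        self.bits = 0  # one integer used as the m-bit string V[0..m-1]

    def insert(self, k):
        self.bits |= 1 << k     # set V[k] = 1

    def delete(self, k):
        self.bits &= ~(1 << k)  # set V[k] = 0

    def search(self, k):
        # return k if V[k] == 1, else None (playing the role of NIL)
        return k if (self.bits >> k) & 1 else None
```

Each operation is a single bit manipulation, hence $O(1)$.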
11-11.1-3
11
11.1
11.1-3
docs/Chap11/11.1.md
Suggest how to implement a direct-address table in which the keys of stored elements do not need to be distinct and the elements can have satellite data. All three dictionary operations ($\text{INSERT}$, $\text{DELETE}$, and $\text{SEARCH}$) should run in $O(1)$ time. (Don't forget that $\text{DELETE}$ takes as an argu...
Assuming that fetching an element should return the satellite data of all the stored elements with the given key, we can have each key map to a doubly linked list. - $\text{INSERT}$: appends the element to the list in constant time - $\text{DELETE}$: removes the element from the linked list in constant time (the element contains pointer...
[]
false
[]
11-11.1-4
11
11.1
11.1-4 $\star$
docs/Chap11/11.1.md
We wish to implement a dictionary by using direct addressing on a _huge_ array. At the start, the array entries may contain garbage, and initializing the entire array is impractical because of its size. Describe a scheme for implementing a direct-address dictionary on a huge array. Each stored object should use $O(1)$ ...
The additional data structure will be a stack $S$. Initially, set $S$ to be empty, and do nothing to initialize the huge array. Each object stored in the huge array will have two parts: the key value, and a pointer to an element of $S$, which contains a pointer back to the object in the huge array. - To insert $x$, p...
[]
false
[]
11-11.2-1
11
11.2
11.2-1
docs/Chap11/11.2.md
Suppose we use a hash function $h$ to hash $n$ distinct keys into an array $T$ of length $m$. Assuming simple uniform hashing, what is the expected number of collisions? More precisely, what is the expected cardinality of $\\{\\{k, l\\}: k \ne l \text{ and } h(k) = h(l)\\}$?
Under the assumption of simple uniform hashing, we will use linearity of expectation to compute this. Suppose that all the keys are totally ordered $\\{k_1, \dots, k_n\\}$. Let $X_i$ be the number of $\ell$'s such that $\ell > k_i$ and $h(\ell) = h(k_i)$. So $X_i$ is the (expected) number of times that key $k_i$ is co...
[]
false
[]
11-11.2-2
11
11.2
11.2-2
docs/Chap11/11.2.md
Demonstrate what happens when we insert the keys $5, 28, 19, 15, 20, 33, 12, 17, 10$ into a hash table with collisions resolved by chaining. Let the table have $9$ slots, and let the hash function be $h(k) = k \mod 9$.
Let us number our slots $0, 1, \dots, 8$. Then our resulting hash table will look like following: $$ \begin{array}{c|l} h(k) & \text{keys} \\\\ \hline 0 \mod 9 & \\\\ 1 \mod 9 & 10 \to 19 \to 28 \\\\ 2 \mod 9 & 20 \\\\ 3 \mod 9 & 12 \\\\ 4 \mod 9 & ...
[]
false
[]
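The resulting table can be checked mechanically. A small Python sketch (list-of-lists chaining with head insertion, as in $\text{CHAINED-HASH-INSERT}$; the helper name is ours):

```python
def build_chained_table(keys, m):
    """Insert keys into an m-slot table with h(k) = k mod m and chaining.

    Each new key goes to the head of its slot's chain, so keys inserted
    later appear earlier in the chain.
    """
    table = [[] for _ in range(m)]
    for k in keys:
        table[k % m].insert(0, k)  # head insertion
    return table
```

Running it on the keys above reproduces, for example, the chain $10 \to 19 \to 28$ in slot $1$.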
11-11.2-3
11
11.2
11.2-3
docs/Chap11/11.2.md
Professor Marley hypothesizes that he can obtain substantial performance gains by modifying the chaining scheme to keep each list in sorted order. How does the professor's modification affect the running time for successful searches, unsuccessful searches, insertions, and deletions?
- Successful searches: no difference, $\Theta(1 + \alpha)$. - Unsuccessful searches: faster but still $\Theta(1 + \alpha)$. - Insertions: same as successful searches, $\Theta(1 + \alpha)$. - Deletions: same as before if we use doubly linked lists, $\Theta(1)$.
[]
false
[]
11-11.2-4
11
11.2
11.2-4
docs/Chap11/11.2.md
Suggest how to allocate and deallocate storage for elements within the hash table itself by linking all unused slots into a free list. Assume that one slot can store a flag and either one element plus a pointer or two pointers. All dictionary and free-list operations should run in $O(1)$ expected time. Does the free li...
The flag in each slot of the hash table will be $1$ if the element contains a value, and $0$ if it is free. The free list must be doubly linked. - Search is unmodified, so it has expected time $O(1)$. - To insert an element $x$, first check if $T[h(x.key)]$ is free. If it is, delete $T[h(x.key)]$ and change the flag o...
[]
false
[]
11-11.2-5
11
11.2
11.2-5
docs/Chap11/11.2.md
Suppose that we are storing a set of $n$ keys into a hash table of size $m$. Show that if the keys are drawn from a universe $U$ with $|U| > nm$, then $U$ has a subset of size $n$ consisting of keys that all hash to the same slot, so that the worst-case searching time for hashing with chaining is $\Theta(n)$.
Suppose that each of $m - 1$ of the slots receives at most $n - 1$ keys of $U$; then the remaining slot must receive at least $$|U| - (m - 1)(n - 1) > nm - (m - 1)(n - 1) = n + m - 1 \ge n$$ keys, so $U$ has a subset of size $n$ whose keys all hash to that slot.
[]
false
[]
11-11.2-6
11
11.2
11.2-6
docs/Chap11/11.2.md
Suppose we have stored $n$ keys in a hash table of size $m$, with collisions resolved by chaining, and that we know the length of each chain, including the length $L$ of the longest chain. Describe a procedure that selects a key uniformly at random from among the keys in the hash table and returns it in expected time $...
Choose one of the $m$ slots of the hash table, say slot $k$, uniformly at random. Let $n_k$ denote the number of elements stored at $T[k]$. Next pick a number $x$ from $1$ to $L$ uniformly at random. If $x \le n_k$, then return the $x$th element on the list at $T[k]$. Otherwise, repeat this process. Any element in the hash table will be selected with prob...
[]
false
[]
11-11.3-1
11
11.3
11.3-1
docs/Chap11/11.3.md
Suppose we wish to search a linked list of length $n$, where each element contains a key $k$ along with a hash value $h(k)$. Each key is a long character string. How might we take advantage of the hash values when searching the list for an element with a given key?
Since every element contains the hash value $h(k)$ of its long character string, when searching the list for a given key we first compare the stored hash value of each node with the hash of the key we seek, and move on without a full string comparison whenever the hash values disagree. This can speed up the search by a factor proportional to the length of the long character strings, since full string comparisons are needed only when the hash values match.
[]
false
[]
11-11.3-2
11
11.3
11.3-2
docs/Chap11/11.3.md
Suppose that we hash a string of $r$ characters into $m$ slots by treating it as a radix-128 number and then using the division method. We can easily represent the number $m$ as a 32-bit computer word, but the string of $r$ characters, treated as a radix-128 number, takes many words. How can we apply the division metho...
```cpp sum = 0 for i = 1 to r sum = (sum * 128 + s[i]) % m ``` The final value of `sum` is the hash value; reducing mod $m$ after each character keeps every intermediate value small enough to fit in a constant number of words.
[ { "lang": "cpp", "code": " sum = 0\n for i = 1 to r\n sum = (sum * 128 + s[i]) % m" } ]
false
[]
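A runnable Python version of the pseudocode above (function name ours); reducing mod `m` after every character keeps each intermediate value below $128m$:

```python
def hash_radix128(s, m):
    """Hash string s, viewed as a radix-128 number, into {0, ..., m - 1}."""
    total = 0  # plays the role of `sum` in the pseudocode
    for ch in s:
        total = (total * 128 + ord(ch)) % m  # Horner step with reduction
    return total
```

The result agrees with reducing the full radix-128 value mod $m$, because $(a \bmod m) \cdot 128 + d \equiv a \cdot 128 + d \pmod m$.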
11-11.3-3
11
11.3
11.3-3
docs/Chap11/11.3.md
Consider a version of the division method in which $h(k) = k \mod m$, where $m = 2^p - 1$ and $k$ is a character string interpreted in radix $2^p$. Show that if we can derive string $x$ from string $y$ by permuting its characters, then $x$ and $y$ hash to the same value. Give an example of an application in which this ...
We will show that each string hashes to the sum of its digits $\mod 2^p − 1$. We will do this by induction on the length of the string. - Base case: if the string is a single character, then the value of that character is the value of $k$, which is then taken $\mod m$. - Inductive step: let $w = w_1w_2$...
[]
false
[]
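The permutation claim is easy to check numerically. Taking $p = 8$ gives radix $2^8 = 256$ and $m = 255$; since $256 \equiv 1 \pmod{255}$, every character contributes just its own value. A small sketch (function name ours):

```python
def hash_radix_2p(s, p):
    """Interpret ASCII string s in radix 2**p and reduce mod m = 2**p - 1."""
    m = (1 << p) - 1
    total = 0
    for b in s.encode("ascii"):  # each byte is one radix-2**p digit
        total = (total * (1 << p) + b) % m
    return total
```

Anagrams such as `"stop"`, `"pots"`, and `"tops"` all hash to the same slot, which is exactly the kind of application where this behavior is bad.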
11-11.3-4
11
11.3
11.3-4
docs/Chap11/11.3.md
Consider a hash table of size $m = 1000$ and a corresponding hash function $h(k) = \lfloor m (kA \mod 1) \rfloor$ for $A = (\sqrt 5 - 1) / 2$. Compute the locations to which the keys $61$, $62$, $63$, $64$, and $65$ are mapped.
- $h(61) = \lfloor 1000(61 \cdot \frac{\sqrt 5 - 1}{2} \mod 1) \rfloor = 700$. - $h(62) = \lfloor 1000(62 \cdot \frac{\sqrt 5 - 1}{2} \mod 1) \rfloor = 318$. - $h(63) = \lfloor 1000(63 \cdot \frac{\sqrt 5 - 1}{2} \mod 1) \rfloor = 936$. - $h(64) = \lfloor 1000(64 \cdot \frac{\sqrt 5 - 1}{2} \mod 1) \rfloor = 554$. - $h...
[]
false
[]
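These locations can be reproduced directly; double-precision arithmetic is accurate enough here. A sketch (function name ours):

```python
import math

def mult_hash(k, m=1000, A=(math.sqrt(5) - 1) / 2):
    """Multiplication-method hash h(k) = floor(m * (k * A mod 1))."""
    return math.floor(m * ((k * A) % 1))
```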
11-11.3-5
11
11.3
11.3-5 $\star$
docs/Chap11/11.3.md
Define a family $\mathcal H$ of hash functions from a finite set $U$ to a finite set $B$ to be **_$\epsilon$-universal_** if for all pairs of distinct elements $k$ and $l$ in $U$, $$\Pr\\{h(k) = h(l)\\} \le \epsilon,$$ where the probability is over the choice of the hash function $h$ drawn at random from the family $...
As a simplifying assumption, assume that $|B|$ divides $|U|$. It's just a bit messier if it doesn't divide evenly. Suppose to a contradiction that $\epsilon > \frac{1}{|B|} - \frac{1}{|U|}$. This means that $\forall$ pairs $k, \ell \in U$, we have that the number $n_{k, \ell}$ of hash functions in $\mathcal H$ that ha...
[]
false
[]
11-11.3-6
11
11.3
11.3-6 $\star$
docs/Chap11/11.3.md
Let $U$ be the set of $n$-tuples of values drawn from $\mathbb Z_p$, and let $B = \mathbb Z_p$, where $p$ is prime. Define the hash function $h_b: U \rightarrow B$ for $b \in \mathbb Z_p$ on an input $n$-tuple $\langle a_0, a_1, \ldots, a_{n - 1} \rangle$ from $U$ as $$h_b(\langle a_0, a_1, \ldots, a_{n - 1} \rangle) ...
Fix distinct $n$-tuples $x, y \in U$. Their difference gives a nonzero polynomial in $b$ of degree at most $n - 1$ over $\mathbb Z_p$, so by Exercise 31.4-4 there are at most $n - 1$ values of $b$ for which $h_b(x) = h_b(y)$. Since $b$ is drawn uniformly from the $p$ elements of $\mathbb Z_p$, the probability that $h_b(x) = h_b(y)$ is bounded from above by $\frac{n - 1}{p}$. Since this holds for any pair of distinct tuples, $\mathcal H$ is ...
[]
false
[]
11-11.4-1
11
11.4
11.4-1
docs/Chap11/11.4.md
Consider inserting the keys $10, 22, 31, 4, 15, 28, 17, 88, 59$ into a hash table of length $m = 11$ using open addressing with the auxiliary hash function $h'(k) = k$. Illustrate the result of inserting these keys using linear probing, using quadratic probing with $c_1 = 1$ and $c_2 = 3$, and using double hashing with...
We use $T_t$ to represent each time stamp $t$ starting with $i = 0$, and if encountering a collision, then we iterate $i$ from $i = 1$ to $i = m - 1 = 10$ until there is no collision. - **Linear probing**: $$ \begin{array}{r|ccccccccc} h(k, i) = (k + i) \mod 11 & T_0 & T_1 & T_2 & T_3 & T_4 & T_5 & T_6 & ...
[]
false
[]
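The tables can be verified by simulation; the three schemes differ only in the probe function passed in. A sketch (function names ours, with $h_2(k) = 1 + (k \bmod 10)$ for double hashing as in the exercise):

```python
def open_address_insert(keys, m, probe):
    """Insert keys into a size-m open-address table using probe(k, i)."""
    table = [None] * m
    for k in keys:
        for i in range(m):
            j = probe(k, i) % m
            if table[j] is None:
                table[j] = k
                break
        else:
            raise RuntimeError("hash table overflow")
    return table

def linear(k, i):
    return k + i                   # h(k, i) = (k + i) mod 11

def quadratic(k, i):
    return k + i + 3 * i * i       # c1 = 1, c2 = 3

def double_hash(k, i):
    return k + i * (1 + (k % 10))  # h2(k) = 1 + (k mod (m - 1))
```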
11-11.4-2
11
11.4
11.4-2
docs/Chap11/11.4.md
Write pseudocode for $\text{HASH-DELETE}$ as outlined in the text, and modify $\text{HASH-INSERT}$ to handle the special value $\text{DELETED}$.
```cpp HASH-DELETE(T, k) i = 0 repeat j = h(k, i) if T[j] == k T[j] = DELETED return j else i = i + 1 until T[j] == NIL or i == m error "element does not exist" ``` By implementing $\text{HASH-DELETE}$ in this way, the $\text{HASH-INSERT}$ needs to be modified t...
[ { "lang": "cpp", "code": "HASH-DELETE(T, k)\n i = 0\n repeat\n j = h(k, i)\n if T[j] == k\n T[j] = DELETED\n return j\n else i = i + 1\n until T[j] == NIL or i == m\n error \"element does not exist\"" }, { "lang": "cpp", "code": "HASH-INSERT(T, k)...
false
[]
11-11.4-3
11
11.4
11.4-3
docs/Chap11/11.4.md
Consider an open-address hash table with uniform hashing. Give upper bounds on the expected number of probes in an unsuccessful search and on the expected number of probes in a successful search when the load factor is $3 / 4$ and when it is $7 / 8$.
- $\alpha = 3 / 4$, - unsuccessful: $\frac{1}{1 - \frac{3}{4}} = 4$ probes, - successful: $\frac{1}{\frac{3}{4}} \ln\frac{1}{1-\frac{3}{4}} \approx 1.848$ probes. - $\alpha = 7 / 8$, - unsuccessful: $\frac{1}{1 - \frac{7}{8}} = 8$ probes, - successful: $\frac{1}{\frac{7}{8}} \ln\frac{1}{1 - \frac{7...
[]
false
[]
11-11.4-4
11
11.4
11.4-4 $\star$
docs/Chap11/11.4.md
Suppose that we use double hashing to resolve collisions—that is, we use the hash function $h(k, i) = (h_1(k) + ih_2(k)) \mod m$. Show that if $m$ and $h_2(k)$ have greatest common divisor $d \ge 1$ for some key $k$, then an unsuccessful search for key $k$ examines $(1/d)$th of the hash table before returning to slot $...
Suppose $d = \gcd(m, h_2(k))$ and let $l = \text{lcm}(m, h_2(k)) = m \cdot h_2(k) / d$. Since $d \mid h_2(k)$, we have $l = m \cdot (h_2(k) / d)$, a multiple of $m$, so $l \bmod m = 0$. Therefore $(ih_2(k) + l) \bmod m = ih_2(k) \bmod m$, i.e., the sequence $ih_2(k) \bmod m$ is periodic with period $l / h_2(k) = m / d$. Hence an unsuccessful search probes only $m / d$ distinct slots, a $(1/d)$th of the table, before returning to slot $h_1(k)$.
[]
false
[]
11-11.4-5
11
11.4
11.4-5 $\star$
docs/Chap11/11.4.md
Consider an open-address hash table with a load factor $\alpha$. Find the nonzero value $\alpha$ for which the expected number of probes in an unsuccessful search equals twice the expected number of probes in a successful search. Use the upper bounds given by Theorems 11.6 and 11.8 for these expected numbers of probes.
$$ \begin{aligned} \frac{1}{1 - \alpha} & = 2 \cdot \frac{1}{\alpha} \ln\frac{1}{1 - \alpha} \\\\ \alpha & = 0.71533. \end{aligned} $$
[]
false
[]
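The root can be confirmed numerically, for example by bisection on the difference $g(\alpha) = \frac{1}{1-\alpha} - \frac{2}{\alpha}\ln\frac{1}{1-\alpha}$ (a sketch; the bracket $[0.1, 0.9]$ is chosen by inspection of the sign of $g$):

```python
import math

def g(a):
    # unsuccessful-search bound minus twice the successful-search bound;
    # the desired load factor is the root of g
    return 1 / (1 - a) - (2 / a) * math.log(1 / (1 - a))

lo, hi = 0.1, 0.9    # g(lo) < 0 < g(hi), a single sign change
for _ in range(60):  # bisect down to machine precision
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha = (lo + hi) / 2
```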
11-11.5-1
11
11.5
11.5-1 $\star$
docs/Chap11/11.5.md
Suppose that we insert $n$ keys into a hash table of size $m$ using open addressing and uniform hashing. Let $p(n, m)$ be the probability that no collisions occur. Show that $p(n, m) \le e^{-n(n - 1) / 2m}$. ($\textit{Hint:}$ See equation $\text{(3.12)}$.) Argue that when $n$ exceeds $\sqrt m$, the probability of avoid...
$$ \begin{aligned} p(n, m) & = \frac{m}{m} \cdot \frac{m - 1}{m} \cdots \frac{m - n + 1}{m} \\\\ & = \frac{m \cdot (m - 1) \cdots (m - n + 1)}{m^n}. \end{aligned} $$ $$ \begin{aligned} (m - i) \cdot (m - n + i) & = (m - \frac{n}{2} + \frac{n}{2} - i) \cdot (m - \frac{n}{2} - \frac{n}{2} + i) \\\\ & ...
[]
false
[]
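The bound follows by applying $1 - x \le e^{-x}$ to each factor of $p(n, m)$. A quick numeric spot check of both the exact product and the bound (function names ours):

```python
import math

def p_no_collision(n, m):
    """Exact probability that n keys land in n distinct slots out of m."""
    prob = 1.0
    for i in range(n):
        prob *= (m - i) / m  # the i-th key avoids the i slots already used
    return prob

def e_bound(n, m):
    return math.exp(-n * (n - 1) / (2 * m))
```

Once $n$ passes $\sqrt m$, the exponent $n(n - 1)/2m$ passes $1/2$ and keeps growing, so the chance of avoiding all collisions drops toward zero; the classic birthday values $n = 23$, $m = 365$ already give less than an even chance.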
11-11-1
11
11-1
11-1
docs/Chap11/Problems/11-1.md
Suppose that we use an open-addressed hash table of size $m$ to store $n \le m / 2$ items. **a.** Assuming uniform hashing, show that for $i = 1, 2, \ldots, n$, the probability is at most $2^{-k}$ that the $i$th insertion requires strictly more than $k$ probes. **b.** Show that for $i = 1, 2, \ldots, n$, the probabil...
(Removed)
[]
false
[]
11-11-2
11
11-2
11-2
docs/Chap11/Problems/11-2.md
Suppose that we have a hash table with $n$ slots, with collisions resolved by chaining, and suppose that $n$ keys are inserted into the table. Each key is equally likely to be hashed to each slot. Let $M$ be the maximum number of keys in any slot after all the keys have been inserted. Your mission is to prove an $O(\lg...
(Removed)
[]
false
[]
11-11-3
11
11-3
11-3
docs/Chap11/Problems/11-3.md
Suppose that we are given a key $k$ to search for in a hash table with positions $0, 1, \ldots, m - 1$, and suppose that we have a hash function $h$ mapping the key space into the set $\\{0, 1, \ldots, m - 1\\}$. The search scheme is as follows: 1. Compute the value $j = h(k)$, and set $i = 0$. 2. Probe in position $j...
(Removed)
[]
false
[]
11-11-4
11
11-4
11-4
docs/Chap11/Problems/11-4.md
Let $\mathcal H$ be a class of hash functions in which each hash function $h \in \mathcal H$ maps the universe $U$ of keys to $\\{0, 1, \ldots, m - 1 \\}$. We say that $\mathcal H$ is **_k-universal_** if, for every fixed sequence of $k$ distinct keys $\langle x^{(1)}, x^{(2)}, \ldots, x^{(k)} \rangle$ and for any $h$ ...
**a.** The number of hash functions for which $h(k) = h(l)$ is $\frac{m}{m^2}|\mathcal H| = \frac{1}{m}|\mathcal H|$, therefore the family is universal. **b.** For $x = \langle 0, 0, \ldots, 0 \rangle$, $\mathcal H$ could not be $2$-universal. **c.** Let $x, y \in U$ be fixed, distinct $n$-tuples. As $a_i$ and $b$ ra...
[]
false
[]
12-12.1-1
12
12.1
12.1-1
docs/Chap12/12.1.md
For the set of $\\{ 1, 4, 5, 10, 16, 17, 21 \\}$ of keys, draw binary search trees of heights $2$, $3$, $4$, $5$, and $6$.
- $height = 2$: ![](../img/12.1-1-1.png) - $height = 3$: ![](../img/12.1-1-2.png) - $height = 4$: ![](../img/12.1-1-3.png) - $height = 5$: ![](../img/12.1-1-4.png) - $height = 6$: ![](../img/12.1-1-5.png)
[]
true
[ "../img/12.1-1-1.png", "../img/12.1-1-2.png", "../img/12.1-1-3.png", "../img/12.1-1-4.png", "../img/12.1-1-5.png" ]
12-12.1-2
12
12.1
12.1-2
docs/Chap12/12.1.md
What is the difference between the binary-search-tree property and the min-heap property (see page 153)? Can the min-heap property be used to print out the keys of an $n$-node tree in sorted order in $O(n)$ time? Show how, or explain why not.
- The binary-search-tree property guarantees that all nodes in the left subtree are smaller, and all nodes in the right subtree are larger. - The min-heap property only guarantees the general child-larger-than-parent relation, but doesn't distinguish between left and right children. For this reason, the min-heap proper...
[]
false
[]
12-12.1-3
12
12.1
12.1-3
docs/Chap12/12.1.md
Give a nonrecursive algorithm that performs an inorder tree walk. ($\textit{Hint:}$ An easy solution uses a stack as an auxiliary data structure. A more complicated, but elegant, solution uses no stack but assumes that we can test two pointers for equality.)
```cpp INORDER-TREE-WALK(T) let S be an empty stack current = T.root done = 0 while !done if current != NIL PUSH(S, current) current = current.left else if !S.EMPTY() current = POP(S) print current curren...
[ { "lang": "cpp", "code": "INORDER-TREE-WALK(T)\n let S be an empty stack\n current = T.root\n done = 0\n while !done\n if current != NIL\n PUSH(S, current)\n current = current.left\n else\n if !S.EMPTY()\n current = POP(S)\n ...
false
[]
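The stack-based walk translates directly into Python (the minimal node class is ours, assumed to have `key`, `left`, and `right`):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(root):
    """Nonrecursive inorder walk: push the left spine, pop, then go right."""
    result, stack, current = [], [], root
    while True:
        if current is not None:
            stack.append(current)       # defer this node, visit left first
            current = current.left
        elif stack:
            current = stack.pop()       # leftmost unvisited node
            result.append(current.key)  # "print" it
            current = current.right
        else:
            return result
```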
12-12.1-4
12
12.1
12.1-4
docs/Chap12/12.1.md
Give recursive algorithms that perform preorder and postorder tree walks in $\Theta(n)$ time on a tree of $n$ nodes.
```cpp PREORDER-TREE-WALK(x) if x != NIL print x.key PREORDER-TREE-WALK(x.left) PREORDER-TREE-WALK(x.right) ``` ```cpp POSTORDER-TREE-WALK(x) if x != NIL POSTORDER-TREE-WALK(x.left) POSTORDER-TREE-WALK(x.right) print x.key ```
[ { "lang": "cpp", "code": "PREORDER-TREE-WALK(x)\n if x != NIL\n print x.key\n PREORDER-TREE-WALK(x.left)\n PREORDER-TREE-WALK(x.right)" }, { "lang": "cpp", "code": "POSTORDER-TREE-WALK(x)\n if x != NIL\n POSTORDER-TREE-WALK(x.left)\n POSTORDER-TREE-WALK(x...
false
[]
12-12.1-5
12
12.1
12.1-5
docs/Chap12/12.1.md
Argue that since sorting $n$ elements takes $\Omega(n\lg n)$ time in the worst case in the comparison model, any comparison-based algorithm for constructing a binary search tree from an arbitrary list of $n$ elements takes $\Omega(n\lg n)$ time in the worst case.
Assume, for the sake of contradiction, that some comparison-based algorithm constructs the binary search tree in $o(n\lg n)$ time. Since an inorder tree walk takes $\Theta(n)$ time, we could then output the elements in sorted order in $o(n\lg n)$ time, which contradicts the fact that sorting $n$ element...
[]
false
[]
12-12.2-1
12
12.2
12.2-1
docs/Chap12/12.2.md
Suppose that we have numbers between $1$ and $1000$ in a binary search tree, and we want to search for the number $363$. Which of the following sequences could _not_ be the sequence of nodes examined? **a.** $2, 252, 401, 398, 330, 344, 397, 363$. **b.** $924, 220, 911, 244, 898, 258, 362, 363$. **c.** $925, 202, 91...
- **c.** could not be the sequence of nodes explored because we take the left child from the $911$ node, and yet somehow manage to get to the $912$ node which cannot belong the left subtree of $911$ because it is greater. - **e.** is also impossible because we take the right subtree on the $347$ node and yet later come...
[]
false
[]
12-12.2-2
12
12.2
12.2-2
docs/Chap12/12.2.md
Write recursive versions of $\text{TREE-MINIMUM}$ and $\text{TREE-MAXIMUM}$.
```cpp TREE-MINIMUM(x) if x.left != NIL return TREE-MINIMUM(x.left) else return x ``` ```cpp TREE-MAXIMUM(x) if x.right != NIL return TREE-MAXIMUM(x.right) else return x ```
[ { "lang": "cpp", "code": "TREE-MINIMUM(x)\n if x.left != NIL\n return TREE-MINIMUM(x.left)\n else return x" }, { "lang": "cpp", "code": "TREE-MAXIMUM(x)\n if x.right != NIL\n return TREE-MAXIMUM(x.right)\n else return x" } ]
false
[]
12-12.2-3
12
12.2
12.2-3
docs/Chap12/12.2.md
Write the $\text{TREE-PREDECESSOR}$ procedure.
```cpp TREE-PREDECESSOR(x) if x.left != NIL return TREE-MAXIMUM(x.left) y = x.p while y != NIL and x == y.left x = y y = y.p return y ```
[ { "lang": "cpp", "code": "TREE-PREDECESSOR(x)\n if x.left != NIL\n return TREE-MAXIMUM(x.left)\n y = x.p\n while y != NIL and x == y.left\n x = y\n y = y.p\n return y" } ]
false
[]
12-12.2-4
12
12.2
12.2-4
docs/Chap12/12.2.md
Professor Bunyan thinks he has discovered a remarkable property of binary search trees. Suppose that the search for key $k$ in a binary search tree ends up in a leaf. Consider three sets: $A$, the keys to the left of the search path; $B$, the keys on the search path; and $C$, the keys to the right of the search path. P...
Consider the tree with root $5$ whose right child is $8$, where $8$ has left child $7$ and right child $9$. Search for $9$ in this tree. Then $A = \\{7\\}$, $B = \\{5, 8, 9\\}$ and $C = \\{\\}$. Since $7 \in A$, $5 \in B$, and $7 > 5$, this breaks the professor's claim that $a \le b$.
[]
false
[]
12-12.2-5
12
12.2
12.2-5
docs/Chap12/12.2.md
Show that if a node in a binary search tree has two children, then its successor has no left child and its predecessor has no right child.
Suppose the node $x$ has two children. Then its successor is the minimum element of the BST rooted at $x.right$. If it had a left child then it wouldn't be the minimum element, so it must not have a left child. Similarly, the predecessor must be the maximum element of the left subtree, so it cannot have a right child.
[]
false
[]
12-12.2-6
12
12.2
12.2-6
docs/Chap12/12.2.md
Consider a binary search tree $T$ whose keys are distinct. Show that if the right subtree of a node $x$ in $T$ is empty and $x$ has a successor $y$, then $y$ is the lowest ancestor of $x$ whose left child is also an ancestor of $x$. (Recall that every node is its own ancestor.)
First we establish that $y$ must be an ancestor of $x$. If $y$ weren't an ancestor of $x$, then let $z$ denote the first common ancestor of $x$ and $y$. By the binary-search-tree property, $x < z < y$, so $y$ cannot be the successor of $x$. Next observe that $y.left$ must be an ancestor of $x$ because if it weren't, t...
[]
false
[]
12-12.2-7
12
12.2
12.2-7
docs/Chap12/12.2.md
An alternative method of performing an inorder tree walk of an $n$-node binary search tree finds the minimum element in the tree by calling $\text{TREE-MINIMUM}$ and then making $n - 1$ calls to $\text{TREE-SUCCESSOR}$. Prove that this algorithm runs in $\Theta(n)$ time.
To show this bound on the runtime, we will show that using this procedure, we traverse each edge twice. This will suffice because the number of edges in a tree is one less than the number of vertices. Consider a vertex of a BST, say $x$. Then, we have that the edge between $x.p$ and $x$ gets used when successor is cal...
[]
false
[]
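The walk itself is easy to exercise: find the minimum, then call successor $n - 1$ times. A self-contained sketch with parent pointers (node class and function names ours):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right, self.p = key, left, right, None
        for child in (left, right):
            if child is not None:
                child.p = self  # wire up the parent pointer

def tree_minimum(x):
    while x.left is not None:
        x = x.left
    return x

def tree_successor(x):
    if x.right is not None:
        return tree_minimum(x.right)
    y = x.p
    while y is not None and x is y.right:  # climb until we leave a left subtree
        x, y = y, y.p
    return y

def inorder_by_successor(root, n):
    """TREE-MINIMUM followed by n - 1 TREE-SUCCESSOR calls."""
    x, keys = tree_minimum(root), []
    for _ in range(n):
        keys.append(x.key)
        x = tree_successor(x)
    return keys
```

Over the whole sequence of calls, each tree edge is traversed at most twice, matching the $\Theta(n)$ bound argued above.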
12-12.2-8
12
12.2
12.2-8
docs/Chap12/12.2.md
Prove that no matter what node we start at in a height-$h$ binary search tree, $k$ successive calls to $\text{TREE-SUCCESSOR}$ take $O(k + h)$ time.
Suppose $x$ is the starting node and $y$ is the ending node. The distance between $x$ and $y$ is at most $2h$, and all the edges connecting the $k$ nodes are visited twice, therefore it takes $O(k + h)$ time.
[]
false
[]
12-12.2-9
12
12.2
12.2-9
docs/Chap12/12.2.md
Let $T$ be a binary search tree whose keys are distinct, let $x$ be a leaf node, and let $y$ be its parent. Show that $y.key$ is either the smallest key in $T$ larger than $x.key$ or the largest key in $T$ smaller than $x.key$.
- If $x = y.left$, then calling $\text{TREE-SUCCESSOR}$ on $x$ results in no iterations of the while loop, so it returns $y$; hence $y.key$ is the smallest key in $T$ larger than $x.key$. - If $x = y.right$, then calling $\text{TREE-PREDECESSOR}$ on $x$ (see Exercise 12.2-3) likewise executes no iterations of its while loop and returns $y$; hence $y.key$ is the largest key in $T$ smaller than $x.key$.
[]
false
[]
12-12.3-1
12
12.3
12.3-1
docs/Chap12/12.3.md
Give a recursive version of the $\text{TREE-INSERT}$ procedure.
```cpp RECURSIVE-TREE-INSERT(T, z) if T.root == NIL T.root = z else INSERT(NIL, T.root, z) ``` ```cpp INSERT(p, x, z) if x == NIL z.p = p if z.key < p.key p.left = z else p.right = z else if z.key < x.key INSERT(x, x.left, z) else INSERT(x, x.righ...
[ { "lang": "cpp", "code": "RECURSIVE-TREE-INSERT(T, z)\n if T.root == NIL\n T.root = z\n else INSERT(NIL, T.root, z)" }, { "lang": "cpp", "code": "INSERT(p, x, z)\n if x == NIL\n z.p = p\n if z.key < p.key\n p.left = z\n else p.right = z\n else i...
false
[]
12-12.3-2
12
12.3
12.3-2
docs/Chap12/12.3.md
Suppose that we construct a binary search tree by repeatedly inserting distinct values into the tree. Argue that the number of nodes examined in searching for a value in the tree is one plus the number of nodes examined when the value was first inserted into the tree.
The number of nodes examined while searching includes the node containing the value itself, which did not yet exist when the value was first inserted; the search otherwise retraces exactly the nodes examined during insertion, giving one extra node.
[]
false
[]
12-12.3-3
12
12.3
12.3-3
docs/Chap12/12.3.md
We can sort a given set of $n$ numbers by first building a binary search tree containing these numbers (using $\text{TREE-INSERT}$ repeatedly to insert the numbers one by one) and then printing the numbers by an inorder tree walk. What are the worst-case and best-case running times for this sorting algorithm?
- The worst-case is that the tree formed has height $n$ because we were inserting them in already sorted order. This will result in a runtime of $\Theta(n^2)$. - The best-case is that the tree formed is approximately balanced. This will mean that the height doesn't exceed $O(\lg n)$. Note that it can't have a smaller h...
[]
false
[]
12-12.3-4
12
12.3
12.3-4
docs/Chap12/12.3.md
Is the operation of deletion "commutative" in the sense that deleting $x$ and then $y$ from a binary search tree leaves the same tree as deleting $y$ and then $x$? Argue why it is or give a counterexample.
No; the following is a counterexample. - Delete $A$ first, then delete $B$: ``` A C C / \ / \ \ B D B D D / C ``` - Delete $B$ first, then delete $A$: ``` A A D / \ \ / B D D C ...
[ { "lang": "", "code": " A C C\n / \\ / \\ \\\n B D B D D\n /\n C" }, { "lang": "", "code": " A A D\n / \\ \\ /\n B D D C\n / /\n C C" } ]
false
[]
12-12.3-5
12
12.3
12.3-5
docs/Chap12/12.3.md
Suppose that instead of each node $x$ keeping the attribute $x.p$, pointing to $x$'s parent, it keeps $x.succ$, pointing to $x$'s successor. Give pseudocode for $\text{SEARCH}$, $\text{INSERT}$, and $\text{DELETE}$ on a binary search tree $T$ using this representation. These procedures should operate in time $O(h)$, wh...
We don't need to change $\text{SEARCH}$. We first implement $\text{PARENT}$, which does most of the work. ```cpp PARENT(T, x) if x == T.root return NIL y = TREE-MAXIMUM(x).succ if y == NIL y = T.root else if y.left == x return y y = y.left while y.right...
[ { "lang": "cpp", "code": "PARENT(T, x)\n if x == T.root\n return NIL\n y = TREE-MAXIMUM(x).succ\n if y == NIL\n y = T.root\n else\n if y.left == x\n return y\n y = y.left\n while y.right != x\n y = y.right\n return y" }, { "lang": "cpp"...
false
[]
12-12.3-6
12
12.3
12.3-6
docs/Chap12/12.3.md
When node $z$ in $\text{TREE-DELETE}$ has two children, we could choose node $y$ as its predecessor rather than its successor. What other changes to $\text{TREE-DELETE}$ would be necessary if we did so? Some have argued that a fair strategy, giving equal priority to predecessor and successor, yields better empirical pe...
Update line 5 so that $y$ is set equal to $\text{TREE-MAXIMUM}(z.left)$ and lines 6-12 so that every $y.left$ and $z.left$ is replaced with $y.right$ and $z.right$ and vice versa. To implement the fair strategy, we could randomly decide each time $\text{TREE-DELETE}$ is called whether or not to use the predecessor or ...
[]
false
[]
12-12.4-1
12
12.4
12.4-1
docs/Chap12/12.4.md
Prove equation $\text{(12.3)}$.
Consider all the possible positions of the largest element of the subset of $n + 3$ of size $4$. Suppose it were in position $i + 4$ for some $i \le n − 1$. Then, we have that there are $i + 3$ positions from which we can select the remaining three elements of the subset. Since every subset with different largest eleme...
[]
false
[]
12-12.4-2
12
12.4
12.4-2
docs/Chap12/12.4.md
Describe a binary search tree on n nodes such that the average depth of a node in the tree is $\Theta(\lg n)$ but the height of the tree is $\omega(\lg n)$. Give an asymptotic upper bound on the height of an $n$-node binary search tree in which the average depth of a node is $\Theta(\lg n)$.
To keep the average depth low but maximize height, the desired tree will be a complete binary search tree, but with a chain of length $c(n)$ hanging down from one of the leaf nodes. Let $k = \lg(n − c(n))$ be the height of the complete binary search tree. Then the average depth is approximately given by $$\frac{1}{n}...
[]
false
[]
12-12.4-3
12
12.4
12.4-3
docs/Chap12/12.4.md
Show that the notion of a randomly chosen binary search tree on $n$ keys, where each binary search tree of $n$ keys is equally likely to be chosen, is different from the notion of a randomly built binary search tree given in this section. ($\textit{Hint:}$ List the possibilities when $n = 3$.)
Suppose we have the elements $\\{1, 2, 3\\}$. Then, if we construct a tree by a random ordering, then, we get trees which appear with probabilities some multiple of $\frac{1}{6}$. However, if we consider all the valid binary search trees on the key set of $\\{1, 2, 3\\}$. Then, we will have only five different possibil...
[]
false
[]
12-12.4-4
12
12.4
12.4-4
docs/Chap12/12.4.md
Show that the function $f(x) = 2^x$ is convex.
The second derivative is $f''(x) = 2^x \ln^2 2$, which is always positive, so the function is convex.
[]
false
[]
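Besides the second-derivative argument, convexity can be spot-checked directly from the definition $f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda)f(y)$:

```python
def f(x):
    return 2.0 ** x

# check the convexity inequality on a grid of points and weights;
# the small epsilon absorbs floating-point rounding
points = [-3.0, -1.0, 0.0, 0.5, 2.0, 4.0]
ok = all(
    f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-12
    for x in points
    for y in points
    for lam in (0.0, 0.25, 0.5, 0.75, 1.0)
)
```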