Write a program that takes a string on standard input and an integer k as a command-line argument and puts on standard output a sorted list of the k-grams found in the string, each followed by its index in the string.
3.1.15

Searches        Percentage of total time spent on insertions
1000            0.00%
1000000         6.10%
1000000000      96.26%

The average insertion cost is N, so the total insertion cost for N keys is N^2. The average search cost is lg N, so the total search cost for M keys is M * lg N. According to the question, M = 1000 * N.

The total cost is then:
total cost = N^2 + M * lg N = N^2 + 1000N * lg N

The insertion percentage is:
P = N^2 / (N^2 + 1000N * lg N)

N^2 is the higher-order term in the expression, so as N increases, the insertion percentage approaches 100%.

Thanks to faame (https://github.com/faame) for improving this answer.
https://github.com/reneargento/algorithms-sedgewick-wayne/issues/222
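The percentage formula above can be evaluated directly. A minimal sketch (class and method names are mine, not the book's) showing that the insertion share grows with N and tends toward 100%:

```java
public class InsertionShare {
    // lg N (base-2 logarithm)
    static double lg(double n) { return Math.log(n) / Math.log(2); }

    // P = N^2 / (N^2 + 1000 * N * lg N), the fraction of the total cost
    // spent on insertions when M = 1000 * N searches are performed
    static double insertionShare(double n) {
        return (n * n) / (n * n + 1000 * n * lg(n));
    }

    public static void main(String[] args) {
        // The share is increasing in N and approaches 100%.
        for (double n = 1e3; n <= 1e9; n *= 1000) {
            System.out.println("N = " + n + ": " + insertionShare(n));
        }
    }
}
```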
[]
{ "number": "3.5.15", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 3, "chapter_title": "Searching", "section": 3.5, "section_title": "Applications", "type": "Exercise" }
Multisets. After referring to Exercises 3.5.2 and 3.5.3 and the previous exercise, develop APIs MultiHashSET and MultiSET for multisets (sets that can have equal keys) and implementations SeparateChainingMultiSET and BinarySearchMultiSET for multisets and ordered multisets, respectively.
3.1.18

The rank() method in BinarySearchST always starts the search in the middle of the array if it has an odd number of elements. If the array has an even number of elements, rank() starts the search on the left of the two middle elements.

After comparing the middle element with the search key:
- if the middle element is smaller, the search continues on the right side of the array;
- if it is bigger, the search continues on the left side of the array;
- if they have the same value, the current element's index is the rank we are looking for.

This guarantees that if an element exists in the symbol table, its rank will be found by the rank() method. When the element does not exist, the value of the "low" variable will have passed the value of the "high" variable, pointing to the correct rank of where the key should be. This only happens after both the element on the left of the final rank and the element at the current (final) rank have been checked. After these checks, low points to the correct rank location.

Example:
Symbol table keys: 0 1 2 3 5 6
Rank of key 4 (non-existent):
1- The initial search range is [0..5]. rank() checks the left of the two middle elements, index 2 -> value 2.
2- 2 is less than 4, so the new range to search is [3..5]. rank() checks the middle element, index 4 -> value 5.
3- 5 is more than 4, so the new range to search is [3..3]. rank() checks the only element left, index 3 -> value 3.
4- 3 is less than 4, so the new range to search is [4..3]. Now the "low" variable is bigger than the "high" variable.
5- rank() returns the value of the "low" variable, 4. This is the correct rank for key 4.
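The steps above can be sketched in code. This is an illustrative version of the rank() binary search, not the book's exact implementation:

```java
// Minimal sketch of the rank() binary search described above; class and
// method names are illustrative, not the book's exact code.
public class RankSketch {
    // Number of keys in keys[] that are smaller than key.
    static int rank(int[] keys, int key) {
        int low = 0;
        int high = keys.length - 1;
        while (low <= high) {
            int middle = low + (high - low) / 2;
            if (key < keys[middle]) {
                high = middle - 1;       // continue on the left side
            } else if (key > keys[middle]) {
                low = middle + 1;        // continue on the right side
            } else {
                return middle;           // key found: its index is its rank
            }
        }
        return low;  // key absent: low passed high and points to the rank
    }

    public static void main(String[] args) {
        int[] keys = {0, 1, 2, 3, 5, 6};
        System.out.println(rank(keys, 4));  // rank of the non-existent key 4: prints 4
    }
}
```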
[]
{ "number": "3.5.18", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 3, "chapter_title": "Searching", "section": 3.5, "section_title": "Applications", "type": "Creative Problem" }
Concordance. Write an ST client Concordance that puts on standard output a concordance of the strings in the standard input stream (see page 498).
3.1.20

Proposition B. Binary search in an ordered array with N keys uses no more than lg N + 1 compares for a search (successful or unsuccessful).

Proof: Let C(N) be the number of compares to search for a key in a symbol table of size N. We have C(0) = 0, C(1) = 1, and for N > 0 we can write a recurrence relationship that directly mirrors the recursive method:

C(N) <= C(FLOOR(N / 2)) + 1

Whether the search goes to the left or to the right, the size of the subarray is no more than FLOOR(N / 2), and we use one compare to check for equality and to choose whether to go left or right.

Let's first prove by induction that C(N) is monotonic: C(N) <= C(N + 1) for all N > 0.

Base case: it is trivial to see that
C(1) = 1
C(2) = 1 or 2
Thus we have C(1) <= C(2).

Inductive step: assume that C(N + 1) >= C(N) for all N in [0, K]; in particular, C(K + 1) >= C(K).

At the beginning of the binary search, low has value 0 and high has value N. Mid is computed as:
mid = low + (high - low) / 2 = high / 2

Let L be the size of the left half and R the size of the right half; the mid element has size 1. When N = K + 1, the total size is L + 1 + R. When N = K + 2:
mid = low + (high - low) / 2 = (high + 1) / 2 = (N + 1) / 2

When N is incremented by 1 (from K + 1 to K + 2), the mid point either remains in the same place or shifts to the right by 1.

1- If mid remains in the same place, then L remains the same and R is incremented by 1. Since R + 1 <= K + 1, we have:
C(K + 2) = C(L) + 1 + C(R + 1) >= C(L) + 1 + C(R) = C(K + 1)

2- If mid shifts to the right by 1, then L is incremented by 1 and R remains the same. Since L + 1 <= K + 1, we have:
C(K + 2) = C(L + 1) + 1 + C(R) >= C(L) + 1 + C(R) = C(K + 1)

Therefore, given C(K + 1) >= C(K), it is also true that C(K + 2) >= C(K + 1), which proves that C(N) is monotonic.
For a general N, we have:

C(N) <= C(N / 2) + 1 (one compare to check equality or to decide which half of the subarray to search)
C(N / 2) <= C(N / 4) + 1

Substituting the value of C(N / 2) into the first inequality:
C(N) <= C(N / 4) + 1 + 1

and continuing the substitution until the subarray has size 1:
C(N) <= C(N / 8) + 1 + 1 + 1
C(N) <= C(N / 16) + 1 + 1 + 1 + 1
...
C(N) <= C(N / 2^k) + 1 + 1 + ... + 1
C(N) <= 1 + 1 + 1 + ... + 1 (even if N is not divisible by 2, there is still a compare operation)
C(N) <= k + 1

In this case 2^k = N, so k = lg N and:
C(N) <= lg N + 1

We can also prove it using the master theorem. The binary search recurrence relation is:
T(N) = T(N / 2) + O(1)

The master theorem applies to recurrences of the form T(N) = a * T(N / b) + f(N). Here a = 1, b = 2 and f(N) is O(1) (constant).
c = log_b(a) = log_2(1) = 0

This is case 2 of the master theorem, which we can see by taking k = 0 in:
O(N^c * (log N)^k) = O(N^0 * (log N)^0) = O(1) = f(N)

From case 2 of the master theorem we know that:
T(N) = O(N^(log_b a) * (log N)^(k + 1)) = O(N^0 * (log N)^1) = O(log N)

With binary search, we achieve a logarithmic-time search guarantee.

Reference: https://en.wikipedia.org/wiki/Master_theorem

Thanks to faame (https://github.com/faame) for adding the section proving that C(N) is monotonic.
https://github.com/reneargento/algorithms-sedgewick-wayne/issues/223
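The bound can also be checked empirically. A sketch (not the book's code) that counts the compares made by a 3-way binary search and verifies they never exceed floor(lg N) + 1:

```java
// Illustrative check of Proposition B: count the compares binary search
// makes and verify the lg N + 1 bound for every hit and miss.
public class CompareBound {
    // Number of compares used to search for key in a[] (hit or miss).
    static int compares(int[] a, int key) {
        int low = 0, high = a.length - 1, count = 0;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            count++;                         // one compare per iteration
            if (key < a[mid]) high = mid - 1;
            else if (key > a[mid]) low = mid + 1;
            else return count;
        }
        return count;
    }

    // floor(lg n) + 1 = number of bits in the binary representation of n
    static int bound(int n) { return Integer.toBinaryString(n).length(); }

    public static void main(String[] args) {
        for (int n = 1; n <= 1024; n++) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = 2 * i;   // even keys only
            for (int key = -1; key <= 2 * n; key++) {   // hits and misses
                if (compares(a, key) > bound(n)) throw new AssertionError("n = " + n);
            }
        }
        System.out.println("all searches used at most lg N + 1 compares");
    }
}
```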
[]
{ "number": "3.5.20", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 3, "chapter_title": "Searching", "section": 3.5, "section_title": "Applications", "type": "Creative Problem" }
Inverted concordance. Write a program InvertedConcordance that takes a concordance on standard input and puts the original string on the standard output stream. Note: This computation is associated with a famous story having to do with the Dead Sea Scrolls. The team that discovered the original tablets enforced a secrecy rule that essentially resulted in their making public only a concordance. After a while, other researchers figured out how to invert the concordance, and the full text was eventually made public.
3.1.21 - Memory usage

* BinarySearchST
object overhead -> 16 bytes
Key[] reference (keys) -> 8 bytes
Value[] reference (values) -> 8 bytes
int value (size) -> 4 bytes
padding -> 4 bytes
Key[] object overhead -> 16 bytes
int value (length) -> 4 bytes
padding -> 4 bytes
N Key references -> between 8N and 32N bytes (the resizing array may be 25% to 100% full)
Value[] object overhead -> 16 bytes
int value (length) -> 4 bytes
padding -> 4 bytes
N Value references -> between 8N and 32N bytes (the resizing array may be 25% to 100% full)

Amount of memory needed:
16 + 8 + 8 + 4 + 4 + 16 + 4 + 4 + (8N to 32N) + 16 + 4 + 4 + (8N to 32N) = (16N to 64N) + 88 bytes

* SequentialSearchST
object overhead -> 16 bytes
Node reference (first) -> 8 bytes
int value (size) -> 4 bytes
padding -> 4 bytes
Per Node:
object overhead -> 16 bytes
extra overhead for reference to the enclosing instance -> 8 bytes
Key reference (key) -> 8 bytes
Value reference (value) -> 8 bytes
Node reference (next) -> 8 bytes
N Node objects -> 48N bytes

Amount of memory needed:
16 + 8 + (16 + 8 + 8 + 8 + 8)N + 4 + 4 = 48N + 32 bytes
[]
{ "number": "3.5.21", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 3, "chapter_title": "Searching", "section": 3.5, "section_title": "Applications", "type": "Creative Problem" }
Sparse matrices. Develop an API and an implementation for sparse 2D matrices. Support matrix addition and matrix multiplication. Include constructors for row and column vectors.
3.1.23 - Analysis of binary search

As the book and exercise 3.1.20 have proven, the maximum number of compares used for a binary search in a table of size N is FLOOR(lg N) + 1.

A number N has exactly FLOOR(lg N) + 1 bits, because shifting 1 bit to the right divides the number by 2 (rounded down). For example:

N    Bit representation    Number of bits    FLOOR(lg N) + 1
1    1                     1                 1
2    10                    2                 2
4    100                   3                 3
5    101                   3                 3
9    1001                  4                 4

Therefore, the maximum number of compares used for a binary search in a table of size N is precisely the number of bits in the binary representation of N.
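The bit-count identity above is easy to check exhaustively. A small sketch (names are mine) comparing the bit length of N with FLOOR(lg N) + 1 computed by repeated right shifts:

```java
// Quick check of the claim above: the binary representation of N has
// exactly floor(lg N) + 1 bits.
public class BitLength {
    // Number of bits in the binary representation of n (n >= 1).
    static int bits(int n) {
        return Integer.toBinaryString(n).length();
    }

    // floor(lg n) + 1, computed by halving (right-shifting) until n = 1.
    static int floorLgPlusOne(int n) {
        int lg = 0;
        while (n > 1) { n >>= 1; lg++; }
        return lg + 1;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 100000; n++) {
            if (bits(n) != floorLgPlusOne(n)) throw new AssertionError("n = " + n);
        }
        System.out.println("bits(N) == floor(lg N) + 1 for N in [1, 100000]");
    }
}
```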
[]
{ "number": "3.5.23", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 3, "chapter_title": "Searching", "section": 3.5, "section_title": "Applications", "type": "Creative Problem" }
List. Develop an implementation of the following API:

public class List<Item> implements Iterable<Item>
            List()                  create a list
       void addFront(Item item)     add item to the front
       void addBack(Item item)      add item to the back
       Item deleteFront()           remove from the front
       Item deleteBack()            remove from the back
       void delete(Item item)       remove item from the list
       void add(int i, Item item)   add item as the ith in the list
       Item delete(int i)           remove the ith item from the list
    boolean contains(Item item)     is key in the list?
    boolean isEmpty()               is the list empty?
        int size()                  number of items in the list

Hint: Use two symbol tables, one to find the ith item in the list efficiently, and the other to efficiently search by item. (Java’s java.util.List interface contains methods like these but does not supply any implementation that efficiently supports all operations.)
3.1.27 - Small tables

Building the binary search symbol table requires N calls to put(). Every put() operation calls rank() and does a search, an operation with order of growth O(lg N).

Assuming that we can choose the order of the keys to insert, we can build the table in sorted order, starting with the smallest element and ending with the largest. By doing this, every element is inserted at the end of the keys[] and values[] arrays, making the put() operation use O(lg N) for the rank() call and O(1) for the insert (there is no need to move keys and values to the right, since the new element is the rightmost element). N inserts then have an order of growth of O(N lg N).

A search operation has order of growth O(lg N). Therefore, the order of growth of S should be O(N): S search operations then have an order of growth of O(N lg N), making the cost of building the table the same as the cost of all the searches.

If the items are inserted in random order, the put() operation has an order of growth of O(N): O(lg N) for the rank() call and O(N) for inserting an element. N inserts then have an order of growth of O(N^2). In this case, the order of growth of S should be O(N^2 / lg N).

Thanks to dragon-dreamer (https://github.com/dragon-dreamer) for correcting the orders of growth of S.
https://github.com/reneargento/algorithms-sedgewick-wayne/issues/120
[]
{ "number": "3.5.27", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 3, "chapter_title": "Searching", "section": 3.5, "section_title": "Applications", "type": "Creative Problem" }
What is the maximum number of edges in a graph with V vertices and no parallel edges? What is the minimum number of edges in a graph with V vertices, none of which are isolated?
4.1.1

The maximum number of edges in a graph with V vertices and no parallel edges is V * (V - 1) / 2. Since we do not have self-loops or parallel edges, each vertex can connect to V - 1 other vertices. In an undirected graph, vertex v connected to vertex w is the same edge as vertex w connected to vertex v, so we divide the result by 2.

Example (V = 4, 6 edges):

o - o
| X |
o - o

The minimum number of edges in a graph with V vertices, none of which are isolated (have degree 0), is CEILING(V / 2): pair the vertices up so that each edge gives two vertices degree 1; if V is odd, the one vertex left over is attached by one extra edge. Fewer edges is impossible, since E edges touch at most 2E vertices, and covering all V vertices requires 2E >= V.

Examples:

V = 4: o - o   o - o
V = 3: o - o - o
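Both formulas are one-liners. A small sketch (class and method names are mine; minEdgesNoIsolatedVertex encodes the CEILING(V / 2) pairing argument):

```java
// Sanity check of the two edge-count formulas above.
public class EdgeBounds {
    // Maximum edges with no self-loops or parallel edges: V * (V - 1) / 2.
    static int maxEdges(int v) { return v * (v - 1) / 2; }

    // Minimum edges with no isolated vertex: ceil(V / 2).
    static int minEdgesNoIsolatedVertex(int v) { return (v + 1) / 2; }

    public static void main(String[] args) {
        System.out.println(maxEdges(4));                 // complete graph on 4 vertices: prints 6
        System.out.println(minEdgesNoIsolatedVertex(4)); // two disjoint edges: prints 2
        System.out.println(minEdgesNoIsolatedVertex(3)); // a path o - o - o: prints 2
    }
}
```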
[]
{ "number": "4.1.1", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Draw, in the style of the figure in the text (page 524), the adjacency lists built by Graph’s input stream constructor for the file tinyGex2.txt depicted at left.
4.1.2

adj[]
 0 -> 5 -> 2 -> 6
 1 -> 4 -> 8 -> 11
 2 -> 5 -> 6 -> 0 -> 3
 3 -> 10 -> 6 -> 2
 4 -> 1 -> 8
 5 -> 0 -> 10 -> 2
 6 -> 2 -> 3 -> 0
 7 -> 8 -> 11
 8 -> 1 -> 11 -> 7 -> 4
 9
10 -> 5 -> 3
11 -> 8 -> 7 -> 1

(vertex 9 has no adjacent vertices)
[]
{ "number": "4.1.2", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Consider the four-vertex graph with edges 0-1, 1-2, 2-3, and 3-0. Draw an array of adjacency-lists that could not have been built calling addEdge() for these edges no matter what order.
4.1.6

The edges form a cycle, so reversing the order of one vertex's adjacency list relative to the others creates an impossible configuration:

adj[]
0 -> 1 -> 3 (the original was 0 -> 3 -> 1)
1 -> 2 -> 0
2 -> 3 -> 1
3 -> 0 -> 2

Since addEdge() adds to the front of both adjacency lists, each list's order fixes the relative order in which its two edges were added: this configuration requires edge 3-0 to have been added before 0-1, 0-1 before 1-2, 1-2 before 2-3, and 2-3 before 3-0, a cyclic (and therefore impossible) insertion order.
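This can be verified by brute force. A sketch (names are mine) that simulates addEdge() with prepend semantics, as the book's Bag does, for all 4! = 24 insertion orders and confirms none produces the lists above:

```java
import java.util.ArrayList;
import java.util.List;

public class ImpossibleAdjacency {
    static final int[][] EDGES = {{0, 1}, {1, 2}, {2, 3}, {3, 0}};

    // Builds the adjacency lists for the given edge insertion order.
    static List<List<Integer>> build(int[] order) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int v = 0; v < 4; v++) adj.add(new ArrayList<>());
        for (int index : order) {
            int v = EDGES[index][0];
            int w = EDGES[index][1];
            adj.get(v).add(0, w);  // prepend to both lists, like Bag
            adj.get(w).add(0, v);
        }
        return adj;
    }

    // Tries every permutation of the 4 edges against the target lists.
    static boolean anyOrderMatches(List<List<Integer>> target) {
        return permute(new int[]{0, 1, 2, 3}, 0, target);
    }

    static boolean permute(int[] p, int k, List<List<Integer>> target) {
        if (k == p.length) return build(p).equals(target);
        for (int i = k; i < p.length; i++) {
            int t = p[k]; p[k] = p[i]; p[i] = t;
            if (permute(p, k + 1, target)) return true;
            t = p[k]; p[k] = p[i]; p[i] = t;
        }
        return false;
    }

    public static void main(String[] args) {
        List<List<Integer>> target = List.of(
                List.of(1, 3), List.of(2, 0), List.of(3, 1), List.of(0, 2));
        System.out.println(anyOrderMatches(target));  // prints false
    }
}
```

By contrast, flipping adj[0] back to 0 -> 3 -> 1 yields a configuration that the insertion order 0-1, 1-2, 2-3, 3-0 does produce.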
[]
{ "number": "4.1.6", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Show, in the style of the figure on page 533, a detailed trace of the call dfs(0) for the graph built by Graph’s input stream constructor for the file tinyGex2.txt (see Exercise 4.1.2). Also, draw the tree represented by edgeTo[].
4.1.9

Trace of dfs(0), using the adjacency lists from exercise 4.1.2 (a vertex is marked at the moment dfs(v) is called):

dfs(0)
  dfs(5)
    check 0
    dfs(10)
      check 5
      dfs(3)
        check 10
        dfs(6)
          dfs(2)
            check 5
            check 6
            check 0
            check 3
          2 done
          check 3
          check 0
        6 done
        check 2
      3 done
    10 done
    check 2
  5 done
  check 2
  check 6
0 done

Vertices 1, 4, 7, 8, 9 and 11 are in other connected components, so they are never marked by dfs(0).

edgeTo[]
 2 | 6
 3 | 10
 5 | 0
 6 | 3
10 | 5

Tree represented by edgeTo[] (a single path):

0 - 5 - 10 - 3 - 6 - 2
[]
{ "number": "4.1.9", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Prove that every connected graph has a vertex whose removal (including all adjacent edges) will not disconnect the graph, and write a DFS method that finds such a vertex. Hint: Consider a vertex whose adjacent vertices are all marked.
4.1.10

Every connected graph has a vertex whose removal (including all incident edges) will not disconnect the graph.

Proof: If the graph has a node of degree one, removing it leaves a connected graph.

Example: o - o

Otherwise, every path in the graph belongs to a cycle. To see this, start with any path and notice that the terminal nodes of this path must be connected to other nodes that are not yet in the path, so we can extend the path with them. Since the graph is connected, continuing this process shows that eventually all the nodes of the graph are in the path. If among all those paths there were no path from which one can remove a node without disconnecting the graph, then all those paths would consist of bridges. In that case, their edges would not belong to any cycle, which is a contradiction (assuming the graph is finite).

Based on: https://math.stackexchange.com/questions/891325/proof-verification-a-connected-graph-always-has-a-vertex-that-is-not-a-cut-vert
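One way to realize the exercise's hint in code (a sketch with my own names, not the book's solution): a vertex all of whose neighbors were already marked when DFS first visited it is a leaf of the DFS tree, so removing it leaves the DFS tree on the remaining vertices, and hence the graph, connected:

```java
// Sketch: find a vertex of a connected graph whose removal does not
// disconnect it, by locating a leaf of the DFS tree (a vertex from which
// dfs made no recursive calls). Adjacency lists are plain arrays here to
// keep the example self-contained.
public class NonCutVertex {
    static boolean[] marked;
    static int[][] adj;
    static int candidate = -1;

    static void dfs(int v) {
        marked[v] = true;
        boolean leaf = true;
        for (int w : adj[v]) {
            if (!marked[w]) {
                leaf = false;   // dfs recursed, so v is not a DFS-tree leaf
                dfs(w);
            }
        }
        if (leaf) candidate = v;  // all of v's neighbors were already marked
    }

    // Returns a vertex of the connected graph whose removal keeps it connected.
    static int find(int[][] adjacency) {
        adj = adjacency;
        marked = new boolean[adj.length];
        candidate = -1;
        dfs(0);
        return candidate;
    }

    public static void main(String[] args) {
        // Cycle 0-1-2-3-0: every vertex is removable; DFS finds one of them.
        int[][] cycle = {{1, 3}, {0, 2}, {1, 3}, {2, 0}};
        System.out.println(find(cycle));
    }
}
```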
[]
{ "number": "4.1.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Draw the tree represented by edgeTo[] after the call bfs(G, 0) in ALGORITHM 4.2 for the graph built by Graph’s input stream constructor for the file tinyGex2.txt (see Exercise 4.1.2).
4.1.11

Tree represented by edgeTo[] after the call bfs(G, 0):

    0
  / | \
 5  2  6
 |  |
10  3

(edgeTo[5] = 0, edgeTo[2] = 0, edgeTo[6] = 0, edgeTo[10] = 5, edgeTo[3] = 2)
[]
{ "number": "4.1.11", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
What does the BFS tree tell us about the distance from v to w when neither is at the root?
4.1.12

When neither v nor w is at the root, the BFS tree tells us that if they are on the same branch, there is a path between v and w whose length equals the number of edges between them along the branch. If they are not on the same branch, there is a path between v and w (through their common ancestor) of length at most Dv + Dw, where Dv is the distance from the root to vertex v and Dw is the distance from the root to vertex w. In both cases, the distance from v to w is at most Dv + Dw.
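The bound dist(v, w) <= Dv + Dw can be checked on a small graph. A sketch (graph and names are mine) that computes BFS distances from a root and compares them with the true pairwise distances:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class BfsBound {
    // Breadth-first search distances from source s; -1 means unreachable.
    static int[] distances(int[][] adj, int s) {
        int[] dist = new int[adj.length];
        Arrays.fill(dist, -1);
        Queue<Integer> queue = new ArrayDeque<>();
        dist[s] = 0;
        queue.add(s);
        while (!queue.isEmpty()) {
            int v = queue.remove();
            for (int w : adj[v]) {
                if (dist[w] == -1) {
                    dist[w] = dist[v] + 1;
                    queue.add(w);
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // Small example graph with edges 0-1, 0-2, 1-2, 1-4, 2-3, 3-4.
        int[][] adj = {{1, 2}, {0, 2, 4}, {0, 1, 3}, {2, 4}, {1, 3}};
        int[] fromRoot = distances(adj, 0);
        // Check dist(v, w) <= Dv + Dw for every pair of vertices.
        for (int v = 0; v < adj.length; v++) {
            int[] fromV = distances(adj, v);
            for (int w = 0; w < adj.length; w++) {
                if (fromV[w] > fromRoot[v] + fromRoot[w]) throw new AssertionError();
            }
        }
        System.out.println("dist(v, w) <= Dv + Dw holds for all pairs");
    }
}
```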
[]
{ "number": "4.1.12", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Suppose you use a stack instead of a queue when running breadth-first search. Does it still compute shortest paths?
4.1.14

If we use a stack instead of a queue when running breadth-first search, it may not compute shortest paths. This can be seen in the following graph:

  0 (source)
 / \
1 - 2
|   |
4 - 3

If the edge 0-2 is inserted before the edge 0-1:
Using a stack, the distance from 0 to 4 will be 3.
Using a queue, the distance from 0 to 4 will be 2.

If the edge 0-1 is inserted before the edge 0-2:
Using a stack, the distance from 0 to 3 will be 3.
Using a queue, the distance from 0 to 3 will be 2.

Thanks to lemonadeseason (https://github.com/lemonadeseason) for correcting the example in this exercise.
https://github.com/reneargento/algorithms-sedgewick-wayne/issues/24
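A concrete run of the two variants on this graph, with one fixed adjacency order (a sketch; names and the exact adjacency order are mine). With this order, the stack-based variant overestimates the distance from 0 to 4:

```java
import java.util.ArrayDeque;
import java.util.Arrays;

public class StackVsQueue {
    // Edges 0-1, 0-2, 1-2, 1-4, 2-3, 3-4 with a fixed adjacency order.
    static final int[][] ADJ = {{1, 2}, {0, 2, 4}, {0, 1, 3}, {2, 4}, {1, 3}};

    // Runs the search from vertex 0, using a FIFO queue or a LIFO stack.
    static int[] search(boolean useQueue) {
        int[] dist = new int[ADJ.length];
        Arrays.fill(dist, -1);
        ArrayDeque<Integer> deque = new ArrayDeque<>();
        dist[0] = 0;
        deque.add(0);
        while (!deque.isEmpty()) {
            int v = useQueue ? deque.removeFirst() : deque.removeLast();
            for (int w : ADJ[v]) {
                if (dist[w] == -1) {
                    dist[w] = dist[v] + 1;  // distance assigned when first seen
                    deque.addLast(w);
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        System.out.println(search(true)[4]);   // queue: prints 2 (shortest path)
        System.out.println(search(false)[4]);  // stack: prints 3 (not shortest)
    }
}
```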
[]
{ "number": "4.1.14", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Show, in the style of the figure on page 545, a detailed trace of CC for finding the connected components in the graph built by Graph’s input stream constructor for the file tinyGex2.txt (see EXERCISE 4.1.2).
4.1.19

Trace of CC, using the adjacency lists from exercise 4.1.2. Each source vertex that is still unmarked starts a new component, identified by count:

count = 0:
dfs(0)
  dfs(5)
    check 0
    dfs(10)
      check 5
      dfs(3)
        check 10
        dfs(6)
          dfs(2)
            check 5
            check 6
            check 0
            check 3
          2 done
          check 3
          check 0
        6 done
        check 2
      3 done
    10 done
    check 2
  5 done
  check 2
  check 6
0 done

count = 1:
dfs(1)
  dfs(4)
    check 1
    dfs(8)
      check 1
      dfs(11)
        check 8
        dfs(7)
          check 8
          check 11
        7 done
        check 1
      11 done
      check 7
      check 4
    8 done
  4 done
  check 8
  check 11
1 done

count = 2:
dfs(9)
9 done

id[]
vertex  0  1  2  3  4  5  6  7  8  9  10  11
id      0  1  0  0  1  0  0  1  1  2   0   1
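The component structure in the trace can be reproduced with a short DFS (a sketch with my own names; the edge set is read off the adjacency lists in exercise 4.1.2):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Components {
    // Returns component ids; vertices in the same component share an id.
    static int[] components(int vertexCount, int[][] edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int v = 0; v < vertexCount; v++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(e[1]);
            adj.get(e[1]).add(e[0]);
        }
        int[] id = new int[vertexCount];
        Arrays.fill(id, -1);      // -1 means not yet visited
        int count = 0;
        for (int s = 0; s < vertexCount; s++) {
            if (id[s] == -1) dfs(adj, id, s, count++);
        }
        return id;
    }

    static void dfs(List<List<Integer>> adj, int[] id, int v, int count) {
        id[v] = count;
        for (int w : adj.get(v)) {
            if (id[w] == -1) dfs(adj, id, w, count);
        }
    }

    public static void main(String[] args) {
        // Edge set of tinyGex2.txt, reconstructed from exercise 4.1.2.
        int[][] edges = {{0, 5}, {0, 2}, {0, 6}, {1, 4}, {1, 8}, {1, 11},
                         {2, 5}, {2, 6}, {2, 3}, {3, 10}, {3, 6}, {4, 8},
                         {5, 10}, {7, 8}, {7, 11}, {8, 11}};
        int[] id = components(12, edges);
        System.out.println(id[9]);  // vertex 9 sits alone in component 2
    }
}
```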
[]
{ "number": "4.1.19", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Show, in the style of the figures in this section, a detailed trace of Cycle for finding a cycle in the graph built by Graph’s input stream constructor for the file tinyGex2.txt (see EXERCISE 4.1.2). What is the order of growth of the running time of the Cycle constructor, in the worst case?
4.1.20

Trace of Cycle, using the adjacency lists from exercise 4.1.2. The "has cycle?" flag starts as F:

dfs(0)
  dfs(5)
    check 0
    dfs(10)
      check 5
      dfs(3)
        check 10
        dfs(6)
          dfs(2)
            check 5 -> 5 is marked and is not the vertex that led to 2: has cycle? T
            check 6
            check 0
            check 3
          2 done
          check 3
          check 0
        6 done
        check 2
      3 done
    10 done
    check 2
  5 done
  check 2
  check 6
0 done
dfs(1)
  dfs(4)
    check 1
    dfs(8)
      check 1
      dfs(11)
        check 8
        dfs(7)
          check 8
          check 11
        7 done
        check 1
      11 done
      check 7
      check 4
    8 done
  4 done
  check 8
  check 11
1 done
dfs(9)
9 done

The order of growth of the running time of the Cycle constructor, in the worst case, is O(V + E): each adjacency-list entry is examined exactly once, and there are 2 * E such entries (two for each edge); initializing the marked[] array takes time proportional to V.
[]
{ "number": "4.1.20", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Show, in the style of the figures in this section, a detailed trace of TwoColor for finding a two-coloring of the graph built by Graph’s input stream constructor for the file tinyGex2.txt (see EXERCISE 4.1.2). What is the order of growth of the running time of the TwoColor constructor, in the worst case?
4.1.21

Trace of TwoColor, using the adjacency lists from exercise 4.1.2. Colors alternate between F and T at each level of the dfs; the "is 2-colorable?" flag starts as T:

dfs(0)    color[0] = F
  dfs(5)    color[5] = T
    check 0
    dfs(10)   color[10] = F
      check 5
      dfs(3)    color[3] = T
        check 10
        dfs(6)    color[6] = F
          dfs(2)    color[2] = T
            check 5 -> 2 and 5 are adjacent and both colored T: is 2-colorable? F
            check 6
            check 0
            check 3 -> 2 and 3 are also adjacent and both colored T
          2 done
          check 3
          check 0
        6 done
        check 2
      3 done
    10 done
    check 2
  5 done
  check 2
  check 6
0 done
dfs(1)    color[1] = F
  dfs(4)    color[4] = T
    check 1
    dfs(8)    color[8] = F
      check 1
      dfs(11)   color[11] = T
        check 8
        dfs(7)    color[7] = F
          check 8
          check 11
        7 done
        check 1
      11 done
      check 7
      check 4
    8 done
  4 done
  check 8
  check 11
1 done
dfs(9)    color[9] = F
9 done

The dfs finds adjacent vertices with the same color (the graph contains the odd cycle 0-5-2, among others), so the graph is not two-colorable: the final answer is F.

The order of growth of the running time of the TwoColor constructor, in the worst case, is O(V + E): each adjacency-list entry is examined exactly once, and there are 2 * E such entries (two for each edge); initializing the marked[] and color[] arrays takes time proportional to V.
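The outcome of the trace can be reproduced with a short bipartiteness check (a sketch with my own names; the tinyGex2 edge set is reconstructed from the adjacency lists in exercise 4.1.2):

```java
import java.util.ArrayList;
import java.util.List;

public class TwoColorSketch {
    static List<List<Integer>> adj;
    static boolean[] marked;
    static boolean[] color;
    static boolean twoColorable;

    static boolean isTwoColorable(int vertexCount, int[][] edges) {
        adj = new ArrayList<>();
        for (int v = 0; v < vertexCount; v++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(e[1]);
            adj.get(e[1]).add(e[0]);
        }
        marked = new boolean[vertexCount];
        color = new boolean[vertexCount];
        twoColorable = true;
        for (int s = 0; s < vertexCount; s++) {
            if (!marked[s]) dfs(s);
        }
        return twoColorable;
    }

    static void dfs(int v) {
        marked[v] = true;
        for (int w : adj.get(v)) {
            if (!marked[w]) {
                color[w] = !color[v];  // alternate colors along dfs edges
                dfs(w);
            } else if (color[w] == color[v]) {
                twoColorable = false;  // adjacent vertices share a color
            }
        }
    }

    public static void main(String[] args) {
        int[][] tinyGex2 = {{0, 5}, {0, 2}, {0, 6}, {1, 4}, {1, 8}, {1, 11},
                            {2, 5}, {2, 6}, {2, 3}, {3, 10}, {3, 6}, {4, 8},
                            {5, 10}, {7, 8}, {7, 11}, {8, 11}};
        System.out.println(isTwoColorable(12, tinyGex2));  // prints false
        // An even cycle, by contrast, is two-colorable.
        int[][] square = {{0, 1}, {1, 2}, {2, 3}, {3, 0}};
        System.out.println(isTwoColorable(4, square));     // prints true
    }
}
```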
[]
{ "number": "4.1.21", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Compute the number of connected components in movies.txt, the size of the largest component, and the number of components of size less than 10. Find the eccentricity, diameter, radius, a center, and the girth of the largest component in the graph. Does it contain Kevin Bacon?
4.1.24

Number of connected components: 33
Size of the largest component: 118774
Number of components of size less than 10: 5
Does the largest component contain Kevin Bacon? Yes

Eccentricity, diameter, radius, center and girth:

The strategy for computing the eccentricity, diameter, radius, center and girth of the largest component was the following. The algorithm required for these exact computations has a complexity of O(V * E) (running breadth-first search from all vertices). V ~ 10^5, which means the exact computation would require ~ 10^10 operations. In order to reduce the cost of the computations (and to be able to run them at all), domain knowledge was used to compute approximate results.

Based on the Kevin Bacon game, we know that Kevin Bacon has been in several movies and that he is closely connected to most of the actors and actresses in the graph. Therefore, he has a high probability of being the center of the graph. Using Kevin Bacon as the center, we compute his vertex eccentricity to get the graph radius. A breadth-first search using Kevin Bacon as the source finds the vertices that are furthest from the center, and computing the eccentricities of these vertices gives the diameter of the graph. The eccentricities of the center and of the vertices furthest from it are shown in the results; they fall in the range [10, 18]. Computing the eccentricities of all vertices would bring us back to the original problem of ~ 10^10 operations.

Finally, for the girth, we know that there is a very high probability that two actors have worked together on two different movies. This gives a girth of 4, which is the minimum possible for the movies graph (an actor-movie graph is bipartite, so it has no cycles of length 3):

Actor -- Movie -- Actor
    \             /
     \-- Movie --/

To validate this theory we run the algorithm that computes the girth of the graph, but stop once we find a cycle of length 4, since it is the shortest cycle possible.
Eccentricities of Kevin Bacon and of the vertices furthest from the center in the largest component:

Vertices 22970-22974 and 22976-22980: eccentricity 16
Vertices 51437-51520: eccentricity 18
Vertices 86241-86259: eccentricity 16
Vertices 118353-118364: eccentricity 18
Vertex 9145 (Kevin Bacon, the chosen center): eccentricity 10

Diameter of largest component: 18
Radius of largest component: 10
Center of largest component: 9145
Girth of largest component: 4
[]
{ "number": "4.1.24", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Determine the amount of memory used by Graph to represent a graph with V vertices and E edges, using the memory-cost model of SECTION 1.4.
4.1.27

Integer
* object overhead -> 16 bytes
* int value -> 4 bytes
* padding -> 4 bytes
Amount of memory needed: 16 + 4 + 4 = 24 bytes

Node
* object overhead -> 16 bytes
* extra overhead for reference to the enclosing instance -> 8 bytes
* Item reference (item) -> 8 bytes
* Node reference (next) -> 8 bytes
Amount of memory needed: 16 + 8 + 8 + 8 = 40 bytes

Bag
* object overhead -> 16 bytes
* Node reference (first) -> 8 bytes
* int value (size) -> 4 bytes
* padding -> 4 bytes
* N Nodes -> 40N bytes
* N Integer items -> 24N bytes
Amount of memory needed: 16 + 8 + 4 + 4 + 40N + 24N = 64N + 32 bytes

Graph
* object overhead -> 16 bytes
* int value (V) -> 4 bytes
* int value (E) -> 4 bytes
* Bag<Integer>[] reference (adj) -> 8 bytes
* Bag<Integer>[] (adj):
  object overhead -> 16 bytes
  int value (length) -> 4 bytes
  padding -> 4 bytes
  V Bag references -> 8V bytes
* V Bags with 2E nodes in total -> 32V + 128E bytes (each Bag has 32 bytes of overhead; each of the 2E nodes costs 40 + 24 = 64 bytes)
Amount of memory needed: 16 + 4 + 4 + 8 + 16 + 4 + 4 + 8V + 32V + 128E = 128E + 40V + 56 bytes
[]
{ "number": "4.1.27", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Two graphs are isomorphic if there is a way to rename the vertices of one to make it identical to the other. Draw all the nonisomorphic graphs with two, three, four, and five vertices.
4.1.28

Non-isomorphic graphs:

There are 2 non-isomorphic graphs with 2 vertices:

o o     o-o

There are 4 non-isomorphic graphs with 3 vertices:

o o o     o o-o     o-o-o       o
                               / \
                              o---o

There are 11 non-isomorphic graphs with 4 vertices and 34 non-isomorphic graphs with 5 vertices; see the reference below for the full drawings.

Based on: http://www.graphclasses.org/smallgraphs.html
[]
{ "number": "4.1.28", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Exercise" }
Eulerian and Hamiltonian cycles. Consider the graphs defined by the following four sets of edges: 0-1 0-2 0-3 1-3 1-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8 0-1 0-2 0-3 1-3 0-3 2-5 5-6 3-6 4-7 4-8 5-8 5-9 6-7 6-9 8-8 0-1 1-2 1-3 0-3 0-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8 4-1 7-9 6-2 7-3 5-0 0-2 0-8 1-6 3-9 6-3 2-8 1-5 9-8 4-5 4-7 Which of these graphs have Euler cycles (cycles that visit each edge exactly once)? Which of them have Hamilton cycles (cycles that visit each vertex exactly once)?
4.1.30 - Eulerian and Hamiltonian cycles

An Eulerian cycle (or Eulerian circuit) is a path which starts and ends at the same vertex and includes every edge exactly once.
A Hamiltonian cycle is a path which starts and ends at the same vertex and includes every vertex exactly once (except for the source, which is visited twice).
By Euler's theorem, a graph has an Eulerian cycle/circuit if and only if all of its edges belong to a single connected component and it does not have any vertices of odd degree.

First graph: 0-1 0-2 0-3 1-3 1-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8
It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8 and 9).
It has a Hamiltonian cycle: 1-4 4-8 8-7 7-6 6-9 9-5 5-2 2-0 0-3 3-1

Second graph: 0-1 0-2 0-3 1-3 0-3 2-5 5-6 3-6 4-7 4-8 5-8 5-9 6-7 6-9 8-8
It has an Eulerian cycle (all the vertices have even degrees): 0-3 3-0 0-2 2-5 5-9 9-6 6-5 5-8 8-8 8-4 4-7 7-6 6-3 3-1 1-0
There is no Hamiltonian cycle.

Third graph: 0-1 1-2 1-3 0-3 0-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8
It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).
It has a Hamiltonian cycle: 4-8 8-7 7-6 6-9 9-5 5-2 2-1 1-3 3-0 0-4

Fourth graph: 4-1 7-9 6-2 7-3 5-0 0-2 0-8 1-6 3-9 6-3 2-8 1-5 9-8 4-5 4-7
It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).
It has a Hamiltonian cycle: 0-5 5-4 4-1 1-6 6-3 3-7 7-9 9-8 8-2 2-0
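The degree-parity part of Euler's condition can be checked mechanically from an edge list. This sketch only tests the parity condition (connectivity of the edges must be checked separately), and the class and method names are illustrative:

```java
// Checks whether every vertex has even degree, the parity half of the
// Eulerian-cycle condition. A self-loop v-v correctly contributes 2
// to degree[v] because both endpoints are counted.
public class EulerDegreeCheck {

    public static boolean allDegreesEven(int vertexCount, int[][] edges) {
        int[] degree = new int[vertexCount];
        for (int[] edge : edges) {
            degree[edge[0]]++;
            degree[edge[1]]++;
        }
        for (int d : degree) {
            if (d % 2 != 0) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // First graph from the exercise: every vertex has degree 3 (odd)
        int[][] first = {{0,1},{0,2},{0,3},{1,3},{1,4},{2,5},{2,9},{3,6},
                         {4,7},{4,8},{5,8},{5,9},{6,7},{6,9},{7,8}};
        // Second graph: every vertex has even degree (note the self-loop 8-8)
        int[][] second = {{0,1},{0,2},{0,3},{1,3},{0,3},{2,5},{5,6},{3,6},
                          {4,7},{4,8},{5,8},{5,9},{6,7},{6,9},{8,8}};
        System.out.println(allDegreesEven(10, first));   // false
        System.out.println(allDegreesEven(10, second));  // true
    }
}
```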
[]
{ "number": "4.1.30", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Creative Problem" }
Graph enumeration. How many different undirected graphs are there with V vertices and E edges (and no parallel edges)?
4.1.31 - Graph enumeration

If we were considering graphs with no self-loops:
There are C(V, 2) = V(V - 1) / 2 ways to choose a set {u, v} of two distinct vertices, and we must choose which E of those pairs to connect.
So there are

C(C(V, 2), E) = (V(V - 1) / 2)! / (E! * (V(V - 1) / 2 - E)!)

different undirected graphs with V vertices and E edges (and no parallel edges and no self-loops).

Since we are also considering graphs with self-loops, there are C(V, 2) * 2^V ways to choose a set {u, v} of two vertices in which vertices may or may not have a self-loop.
So there are

C(C(V, 2) * 2^V, E) = (V(V - 1) / 2 * 2^V)! / (E! * (V(V - 1) / 2 * 2^V - E)!)

different undirected graphs with V vertices and E edges (and no parallel edges).

Reference: Handbook of Discrete and Combinatorial Mathematics by Kenneth H. Rosen, page 580
https://math.stackexchange.com/questions/1072726/counting-simple-connected-labeled-graphs-with-n-vertices-and-k-edges
https://math.stackexchange.com/questions/128439/how-to-determine-the-number-of-directed-undirected-graphs
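The no-self-loop count C(C(V, 2), E) is easy to evaluate exactly with BigInteger. This is a sketch; the class and method names are illustrative:

```java
import java.math.BigInteger;

// Counts labeled graphs by evaluating C(C(V, 2), E): choose E edges
// out of the C(V, 2) possible vertex pairs (no self-loops, no
// parallel edges).
public class GraphEnumeration {

    // Iterative binomial coefficient; each intermediate division is exact
    // because the running product is always C(n - k + i, i) * 1.
    public static BigInteger binomial(long n, long k) {
        BigInteger result = BigInteger.ONE;
        for (long i = 1; i <= k; i++) {
            result = result.multiply(BigInteger.valueOf(n - k + i))
                           .divide(BigInteger.valueOf(i));
        }
        return result;
    }

    public static BigInteger graphsWithoutSelfLoops(long vertices, long edges) {
        long possibleEdges = vertices * (vertices - 1) / 2;  // C(V, 2)
        return binomial(possibleEdges, edges);
    }

    public static void main(String[] args) {
        // 4 vertices have C(4, 2) = 6 possible edges; choosing 3 of them
        // gives C(6, 3) = 20 labeled graphs.
        System.out.println(graphsWithoutSelfLoops(4, 3));
    }
}
```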
[]
{ "number": "4.1.31", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Creative Problem" }
Odd cycles. Prove that a graph is two-colorable (bipartite) if and only if it contains no odd-length cycle.
4.1.33 - Odd cycles

A graph is two-colorable (bipartite) if and only if it contains no odd-length cycle.

Proof:

1- Proving that a graph with an odd-length cycle cannot be bipartite:
If a graph G is bipartite with vertex sets V1 and V2, every step along a walk takes you either from V1 to V2 or from V2 to V1. To end up where you started, therefore, you must take an even number of steps.

2- Proving that a graph with only even-length cycles is bipartite:
Consider G to be a graph with only even-length cycles. Let v0 be any vertex. For each vertex v in the same component C0 as v0, let d(v) be the length of the shortest path from v0 to v.
Color red every vertex in C0 whose distance from v0 is even, and color the other vertices of C0 blue. Do the same for each component of G.
Check that if G had any edge between two red vertices or between two blue vertices, it would have an odd cycle. Thus, G is bipartite, the red vertices and the blue vertices being the two parts.

Reference: https://math.stackexchange.com/questions/311665/proof-a-graph-is-bipartite-if-and-only-if-it-contains-no-odd-cycles
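The coloring argument in part 2 translates directly into code: assign alternating colors while searching, and report an odd cycle whenever an edge joins two same-colored vertices. A sketch with illustrative names:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Two-colors each component; an edge whose endpoints receive the same
// color witnesses an odd-length cycle, so the graph is not bipartite.
public class TwoColorable {

    public static boolean isBipartite(int vertexCount, int[][] edges) {
        List<List<Integer>> adjacent = new ArrayList<>();
        for (int i = 0; i < vertexCount; i++) {
            adjacent.add(new ArrayList<>());
        }
        for (int[] edge : edges) {
            adjacent.get(edge[0]).add(edge[1]);
            adjacent.get(edge[1]).add(edge[0]);
        }
        int[] color = new int[vertexCount];  // 0 = uncolored, 1 and 2 = the two colors
        for (int source = 0; source < vertexCount; source++) {
            if (color[source] == 0 && !colorComponent(adjacent, color, source)) {
                return false;
            }
        }
        return true;
    }

    private static boolean colorComponent(List<List<Integer>> adjacent,
                                          int[] color, int source) {
        Deque<Integer> stack = new ArrayDeque<>();
        color[source] = 1;
        stack.push(source);
        while (!stack.isEmpty()) {
            int vertex = stack.pop();
            for (int neighbor : adjacent.get(vertex)) {
                if (color[neighbor] == 0) {
                    color[neighbor] = 3 - color[vertex];  // the opposite color
                    stack.push(neighbor);
                } else if (color[neighbor] == color[vertex]) {
                    return false;  // same-colored endpoints -> odd cycle
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        int[][] triangle = {{0, 1}, {1, 2}, {2, 0}};        // odd cycle
        int[][] square = {{0, 1}, {1, 2}, {2, 3}, {3, 0}};  // even cycle
        System.out.println(isBipartite(3, triangle));  // false
        System.out.println(isBipartite(4, square));    // true
    }
}
```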
[]
{ "number": "4.1.33", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Creative Problem" }
Biconnectedness. A graph is biconnected if every pair of vertices is connected by two disjoint paths. An articulation point in a connected graph is a vertex that would disconnect the graph if it (and its adjacent edges) were removed. Prove that any graph with no articulation points is biconnected. Hint: Given a pair of vertices s and t and a path connecting them, use the fact that none of the vertices on the path are articulation points to construct two disjoint paths connecting s and t.
4.1.35 - Biconnectedness

Any graph with no articulation points is biconnected.

Proof:
Consider two vertices s, t, and a path P1 connecting s to t.
We know that no vertex in P1 is an articulation point, so for each vertex v in the path there is always another path P2 connecting s to t that does not include it.
Also, P2 does not include any of the vertices of P1; otherwise, any vertex included in both paths would be an articulation point (being the only way to connect s to t).
This means that every pair of vertices is connected by two vertex-disjoint paths (such as P1 and P2), making the graph biconnected.

Graph illustration (P1 is the path s-v1 v1-v2 v2-t and P2 is the path s-v3 v3-v4 v4-t):

 s
 | \
v1  v3
 |   |
v2  v4
 | /
 t
[]
{ "number": "4.1.35", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.1, "section_title": "Undirected Graphs", "type": "Creative Problem" }
What is the maximum number of edges in a digraph with V vertices and no parallel edges? What is the minimum number of edges in a digraph with V vertices, none of which are isolated?
4.1.1

The maximum number of edges in a graph with V vertices and no parallel edges is V * (V - 1) / 2.
Since we do not have self-loops or parallel edges, each vertex can connect to V - 1 other vertices. In an undirected graph, vertex v connected to vertex w is the same as vertex w connected to vertex v, so we divide the result by 2.

Example:

o-o
|X|
o-o

The minimum number of edges in a graph with V vertices, none of which are isolated (have degree 0), is V - 1.

Example:

o-o-o
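The V * (V - 1) / 2 bound is simple enough to express as a one-line method (a sketch; the class name is illustrative):

```java
// Maximum number of edges in an undirected graph with no self-loops
// and no parallel edges: each of the V vertices can reach V - 1
// others, and each undirected edge is counted twice.
public class EdgeBounds {

    public static long maxEdges(long vertices) {
        return vertices * (vertices - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(maxEdges(4));  // 6, the complete graph on 4 vertices
    }
}
```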
[]
{ "number": "4.2.1", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Exercise" }
Draw, in the style of the figure in the text (page 524), the adjacency lists built by Digraph’s input stream constructor for the file tinyDGex2.txt depicted at left.
4.1.2

adj[]
 0 -> 5 -> 2 -> 6
 1 -> 4 -> 8 -> 11
 2 -> 5 -> 6 -> 0 -> 3
 3 -> 10 -> 6 -> 2
 4 -> 1 -> 8
 5 -> 0 -> 10 -> 2
 6 -> 2 -> 3 -> 0
 7 -> 8 -> 11
 8 -> 1 -> 11 -> 7 -> 4
 9 -> 10 -> 5 -> 3
11 -> 8 -> 7 -> 1
[]
{ "number": "4.2.2", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Exercise" }
Develop a test client for Digraph.
4.1.6

The edges form a cycle, so changing the connection order of one of the vertices' adjacency lists creates an impossible sequence of connections.

adj[]
0 -> 1 -> 3 (the original was 0 -> 3 -> 1)
1 -> 2 -> 0
2 -> 3 -> 1
3 -> 0 -> 2
[]
{ "number": "4.2.6", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Exercise" }
Write a method that checks whether or not a given permutation of a DAG’s vertices is a topological order of that DAG.
4.1.9 marked[] adj[] dfs (0) 0 T 0 5 2 6 1 1 4 8 11 2 2 5 6 0 3 3 3 10 6 2 4 4 1 8 5 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 10 5 3 11 11 8 7 1 dfs (5) 0 T 0 5 2 6 check 0 1 1 4 8 11 2 2 5 6 0 3 3 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 10 5 3 11 11 8 7 1 dfs (10) 0 T 0 5 2 6 check 5 1 1 4 8 11 2 2 5 6 0 3 3 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 dfs (3) 0 T 0 5 2 6 check 10 1 1 4 8 11 2 2 5 6 0 3 3 T 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 dfs (6) 0 T 0 5 2 6 1 1 4 8 11 2 2 5 6 0 3 3 T 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 T 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 dfs (2) 0 T 0 5 2 6 check 5 1 1 4 8 11 check 6 2 T 2 5 6 0 3 check 0 3 T 3 10 6 2 check 3 4 4 1 8 2 done 5 T 5 0 10 2 6 T 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 check 3 check 0 6 done check 2 3 done 10 done check 2 5 done check 2 check 6 0 done edgeTo[] tree 0 5 10 3 6 2
[]
{ "number": "4.2.9", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Exercise" }
Given a DAG, does there exist a topological order that cannot result from applying a DFS-based algorithm, no matter in what order the vertices adjacent to each vertex are chosen? Prove your answer.
4.1.10 Every connected graph has a vertex whose removal (including all incident edges) will not disconnect the graph. Proof by contradiction: If the graph has a node of degree one, removing it gives a connected graph. Example: o - o Otherwise, every path in the graph belongs to a cycle. To see this, start with any path, then notice that the terminal nodes of this path must be connected to other nodes that are not in the path, so we add them to the path to make a new path. Since the graph is connected, continuing this process it is possible to see that in the end all the nodes from the graph will be in the path. If among all those paths there is no path from which one can remove a node without disconnecting the graph, then all those paths are bridges. In this case, they do not belong to a cycle, which is a contradiction (assuming the graph is finite). Based on: https://math.stackexchange.com/questions/891325/proof-verification-a-connected-graph-always-has-a-vertex-that-is-not-a-cut-vert
[]
{ "number": "4.2.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Exercise" }
Describe a family of sparse digraphs whose number of directed cycles grows exponentially in the number of vertices.
4.1.11

Tree represented by edgeTo[] after call to bfs(G, 0):

    0
  / | \
 5  2  6
 |  |
10  3
[]
{ "number": "4.2.11", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Exercise" }
How many edges are there in the transitive closure of a digraph that is a simple directed path with V vertices and V–1 edges?
4.1.12 When neither v nor w are at the root, the BFS tree tell us that if they are on the same branch, there is a path between v and w of distance equal to the number of edges between them in the branch. If they are not on the same branch, there is a path between v and w of distance Dv + Dw, where Dv is the distance from the root to vertex v and Dw is the distance from the root to vertex w.
[]
{ "number": "4.2.12", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Exercise" }
Prove that the strong components in G^R are the same as in G.
4.1.14

If we use a stack instead of a queue when running breadth-first search, it may not compute shortest paths. This can be seen in the following graph:

  0 (source)
 / \
1 - 2
|   |
4 - 3

If the edge 0 - 2 is inserted before the edge 0 - 1:
Using a stack, the distance from 0 to 4 will be 3.
Using a queue, the distance from 0 to 4 will be 2.

If the edge 0 - 1 is inserted before the edge 0 - 2:
Using a stack, the distance from 0 to 3 will be 3.
Using a queue, the distance from 0 to 3 will be 2.

Thanks to lemonadeseason (https://github.com/lemonadeseason) for correcting the example in this exercise.
https://github.com/reneargento/algorithms-sedgewick-wayne/issues/24
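The difference can be reproduced on that graph by running the same search loop with a stack and with a queue. This is a sketch with illustrative names; the adjacency order below fixes 0-1 before 0-2 in adjacent[0]:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Runs the search loop on the graph 0-1, 0-2, 1-2, 1-4, 2-3, 3-4,
// removing either from the front (queue/BFS) or the back (stack).
// Vertices are marked (given a distance) when they are added.
public class StackVsQueueSearch {

    static final int[][] ADJACENT = {
        {1, 2}, {0, 2, 4}, {0, 1, 3}, {2, 4}, {1, 3}
    };

    public static int[] distances(boolean useStack) {
        int[] distance = new int[ADJACENT.length];
        Arrays.fill(distance, -1);
        Deque<Integer> collection = new ArrayDeque<>();
        distance[0] = 0;
        collection.add(0);
        while (!collection.isEmpty()) {
            int vertex = useStack ? collection.removeLast() : collection.removeFirst();
            for (int neighbor : ADJACENT[vertex]) {
                if (distance[neighbor] == -1) {
                    distance[neighbor] = distance[vertex] + 1;
                    collection.add(neighbor);
                }
            }
        }
        return distance;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(distances(false)));  // queue: [0, 1, 1, 2, 2]
        System.out.println(Arrays.toString(distances(true)));   // stack: vertex 4 ends up at distance 3
    }
}
```

With the queue, every vertex gets its true shortest distance; with the stack, vertex 4 is reached along the longer route 0-2-3-4.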
[]
{ "number": "4.2.14", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Exercise" }
Topological sort and BFS. Explain why the following algorithm does not necessarily produce a topological order: Run BFS, and label the vertices by increasing distance to their respective source.
4.1.19 count marked[] id[] 0 1 2 3 4 5 6 7 8 9 10 11 0 1 2 3 4 5 6 7 8 9 10 11 dfs(0) 0 T 0 dfs(5) 0 T T 0 0 check 0 dfs(10) 0 T T T 0 0 0 check 5 dfs(3) 0 T T T T 0 0 0 0 check 10 dfs(6) 0 T T T T T 0 0 0 0 0 dfs(2) 0 T T T T T T 0 0 0 0 0 0 check 5 check 6 check 0 check 3 2 done check 3 check 0 6 done check 2 3 done 10 done check 2 5 done check 2 check 6 0 done dfs(1) 1 T T T T T T T 0 1 0 0 0 0 0 dfs(4) 1 T T T T T T T T 0 1 0 0 1 0 0 0 check 1 dfs(8) 1 T T T T T T T T T 0 1 0 0 1 0 0 1 0 check 1 dfs(11) 1 T T T T T T T T T T 0 1 0 0 1 0 0 1 0 1 check 8 dfs(7) 1 T T T T T T T T T T T 0 1 0 0 1 0 0 1 1 0 1 check 8 check 11 7 done check 1 11 done check 7 check 4 8 done 4 done check 8 check 11 1 done dfs(9) 2 T T T T T T T T T T T T 0 1 0 0 1 0 0 1 1 2 0 1 9 done
[]
{ "number": "4.2.19", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Creative Problem" }
Directed Eulerian cycle. An Eulerian cycle is a directed cycle that contains each edge exactly once. Write a graph client Euler that finds an Eulerian cycle or reports that no such tour exists. Hint: Prove that a digraph G has a directed Eulerian cycle if and only if G is connected and each vertex has its indegree equal to its outdegree.
4.1.20 Has Cycle? marked[] F 0 1 2 3 4 5 6 7 8 9 10 11 dfs(0) T dfs(5) T T check 0 F dfs(10) T T T check 5 F dfs(3) T T T T check 10 F dfs(6) T T T T T dfs(2) T T T T T T check 5 T (cycle found here) check 6 T check 0 T check 3 T 2 done check 3 T check 0 T 6 done check 2 T 3 done 10 done check 2 T 5 done check 2 T check 6 T 0 done dfs(1) T T T T T T T dfs(4) T T T T T T T T check 1 T dfs(8) T T T T T T T T T check 1 T dfs(11) T T T T T T T T T T check 8 T dfs(7) T T T T T T T T T T T check 8 T check 11 T 7 done check 1 T 11 done check 7 T check 4 T 8 done 4 done check 8 T check 11 T 1 done dfs(9) T T T T T T T T T T T T 9 done The order of growth of the running time of the Cycle constructor, in the worst case is O(V + E). Each adjacency-list entry is examined exactly once, and there are 2 * E such entries (two for each edge); initializing the marked[] array takes time proportional to V.
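The O(V + E) bound discussed above comes from each adjacency entry being examined exactly once. A minimal sketch of such a DFS-based cycle check for a simple undirected graph (illustrative names; the parent trick below assumes no parallel edges):

```java
// DFS cycle detection: a cycle exists if DFS reaches an already-marked
// vertex that is not the immediate parent in the DFS tree. Every
// adjacency entry is examined once, so the scan is O(V + E).
public class CycleCheck {

    public static boolean hasCycle(int[][] adjacent) {
        boolean[] marked = new boolean[adjacent.length];
        for (int source = 0; source < adjacent.length; source++) {
            if (!marked[source] && dfs(adjacent, marked, source, source)) {
                return true;
            }
        }
        return false;
    }

    private static boolean dfs(int[][] adjacent, boolean[] marked, int vertex, int parent) {
        marked[vertex] = true;
        for (int neighbor : adjacent[vertex]) {
            if (!marked[neighbor]) {
                if (dfs(adjacent, marked, neighbor, vertex)) {
                    return true;
                }
            } else if (neighbor != parent) {
                return true;  // back edge -> cycle
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] triangle = {{1, 2}, {0, 2}, {0, 1}};
        int[][] path = {{1}, {0, 2}, {1}};
        System.out.println(hasCycle(triangle));  // true
        System.out.println(hasCycle(path));      // false
    }
}
```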
[]
{ "number": "4.2.20", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Creative Problem" }
LCA of a DAG. Given a DAG and two vertices v and w, find the lowest common ancestor (LCA) of v and w. The LCA of v and w is an ancestor of v and w that has no descendants that are also ancestors of v and w. Computing the LCA is useful in multiple inheritance in programming languages, analysis of genealogical data (find degree of inbreeding in a pedigree graph), and other applications. Hint: Define the height of a vertex v in a DAG to be the length of the longest path from a root to v. Among vertices that are ancestors of both v and w, the one with the greatest height is an LCA of v and w.
4.1.21 Is 2-colorable? marked[] color[] T 0 1 2 3 4 5 6 7 8 9 10 11 0 1 2 3 4 5 6 7 8 9 10 11 dfs(0) T F F F F F F F F F F F F dfs(5) T T F F F F F T F F F F F F check 0 T dfs(10) T T T F F F F F T F F F F F F check 5 T dfs(3) T T T T F F F T F T F F F F F F check 10 T dfs(6) T T T T T F F F T F T F F F F F F dfs(2) T T T T T T F F T T F T F F F F F F check 5 F check 6 F check 0 F check 3 F 2 done check 3 F check 0 F 6 done check 2 F 3 done 10 done check 2 F 5 done check 2 F check 6 F 0 done dfs(1) T T T T T T T F F T T F T F F F F F F dfs(4) T T T T T T T T F F T T T T F F F F F F check 1 F dfs(8) T T T T T T T T T F F T T F T F F F F F F check 1 F dfs(11) T T T T T T T T T T F F T T F T F F F F F T check 8 F dfs(7) T T T T T T T T T T T F F T T F T F F F F F F check 8 F check 11 F 7 done check 1 F 11 done check 7 F check 4 F 8 done 4 done check 8 F check 11 F 1 done dfs(9) T T T T T T T T T T T T F F T T F T F F F F F F 9 done The order of growth of the running time of the TwoColor constructor, in the worst case is O(V + E). Each adjacency-list entry is examined exactly once, and there are 2 * E such entries (two for each edge); initializing the marked[] and color[] arrays takes time proportional to V.
[]
{ "number": "4.2.21", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Creative Problem" }
Hamiltonian path in DAGs. Given a DAG, design a linear-time algorithm to determine whether there is a directed path that visits each vertex exactly once. Answer: Compute a topological sort and check if there is an edge between each consecutive pair of vertices in the topological order.
4.1.24

Number of connected components: 33
Size of the largest component: 118774
Number of components of size less than 10: 5
Does the largest component contain Kevin Bacon: Yes

Eccentricity, diameter, radius, center and girth:

The strategy for computing the eccentricity, diameter, radius, center and girth of the largest component was the following:
The algorithm required for these exact computations has a complexity of O(V * E) (running breadth-first search from all vertices). V is ~= 10^5, which means the algorithm required for these exact computations has complexity ~= 10^10.
In order to reduce the complexity of the computations (and to be able to compute them), domain knowledge was used to compute approximate results.
Based on the Kevin Bacon game, we know that Kevin Bacon has been in several movies and that he is closely connected to most of the actors and actresses in the graph. Therefore, he has a high probability of being the center of the graph.
Using Kevin Bacon as the center, we compute his vertex eccentricity to get the graph radius.
A breadth-first search using Kevin Bacon as the source computes the vertices that are furthest from the center. Computing the eccentricities of these vertices, we can find the diameter of the graph.
The eccentricities of the center and of the vertices furthest from it are shown in the results. The range of the eccentricities is [10, 18]. Computing the eccentricities of all vertices would bring us back to the original problem of ~= 10^10 operations.
Finally, for the girth, we know that there is a very high probability that two actors have worked together on two different movies. This gives a girth of 4, which is the minimum girth possible for the movies graph:

Actor --- Movie --- Actor
    \               /
     ---- Movie ----

To validate this theory we run the algorithm to compute the girth of the graph, but stop once we find a cycle of length 4, since it is the shortest cycle possible.
Eccentricities of Kevin Bacon and of vertices furthest from the center in the largest component: Eccentricity of vertex 22970: 16 Eccentricity of vertex 22971: 16 Eccentricity of vertex 22972: 16 Eccentricity of vertex 22973: 16 Eccentricity of vertex 22974: 16 Eccentricity of vertex 22976: 16 Eccentricity of vertex 22977: 16 Eccentricity of vertex 22978: 16 Eccentricity of vertex 22979: 16 Eccentricity of vertex 22980: 16 Eccentricity of vertex 51437: 18 Eccentricity of vertex 51438: 18 Eccentricity of vertex 51439: 18 Eccentricity of vertex 51440: 18 Eccentricity of vertex 51441: 18 Eccentricity of vertex 51442: 18 Eccentricity of vertex 51443: 18 Eccentricity of vertex 51444: 18 Eccentricity of vertex 51445: 18 Eccentricity of vertex 51446: 18 Eccentricity of vertex 51447: 18 Eccentricity of vertex 51448: 18 Eccentricity of vertex 51449: 18 Eccentricity of vertex 51450: 18 Eccentricity of vertex 51451: 18 Eccentricity of vertex 51452: 18 Eccentricity of vertex 51453: 18 Eccentricity of vertex 51454: 18 Eccentricity of vertex 51455: 18 Eccentricity of vertex 51456: 18 Eccentricity of vertex 51457: 18 Eccentricity of vertex 51458: 18 Eccentricity of vertex 51459: 18 Eccentricity of vertex 51460: 18 Eccentricity of vertex 51461: 18 Eccentricity of vertex 51462: 18 Eccentricity of vertex 51463: 18 Eccentricity of vertex 51464: 18 Eccentricity of vertex 51465: 18 Eccentricity of vertex 51466: 18 Eccentricity of vertex 51467: 18 Eccentricity of vertex 51468: 18 Eccentricity of vertex 51469: 18 Eccentricity of vertex 51470: 18 Eccentricity of vertex 51471: 18 Eccentricity of vertex 51472: 18 Eccentricity of vertex 51473: 18 Eccentricity of vertex 51474: 18 Eccentricity of vertex 51475: 18 Eccentricity of vertex 51476: 18 Eccentricity of vertex 51477: 18 Eccentricity of vertex 51478: 18 Eccentricity of vertex 51479: 18 Eccentricity of vertex 51480: 18 Eccentricity of vertex 51481: 18 Eccentricity of vertex 51482: 18 Eccentricity of vertex 51483: 18 Eccentricity of 
vertex 51484: 18 Eccentricity of vertex 51485: 18 Eccentricity of vertex 51486: 18 Eccentricity of vertex 51487: 18 Eccentricity of vertex 51488: 18 Eccentricity of vertex 51489: 18 Eccentricity of vertex 51490: 18 Eccentricity of vertex 51491: 18 Eccentricity of vertex 51492: 18 Eccentricity of vertex 51493: 18 Eccentricity of vertex 51494: 18 Eccentricity of vertex 51495: 18 Eccentricity of vertex 51496: 18 Eccentricity of vertex 51497: 18 Eccentricity of vertex 51498: 18 Eccentricity of vertex 51499: 18 Eccentricity of vertex 51500: 18 Eccentricity of vertex 51501: 18 Eccentricity of vertex 51502: 18 Eccentricity of vertex 51503: 18 Eccentricity of vertex 51504: 18 Eccentricity of vertex 51505: 18 Eccentricity of vertex 51506: 18 Eccentricity of vertex 51507: 18 Eccentricity of vertex 51508: 18 Eccentricity of vertex 51509: 18 Eccentricity of vertex 51510: 18 Eccentricity of vertex 51511: 18 Eccentricity of vertex 51512: 18 Eccentricity of vertex 51513: 18 Eccentricity of vertex 51514: 18 Eccentricity of vertex 51515: 18 Eccentricity of vertex 51516: 18 Eccentricity of vertex 51517: 18 Eccentricity of vertex 51518: 18 Eccentricity of vertex 51519: 18 Eccentricity of vertex 51520: 18 Eccentricity of vertex 86241: 16 Eccentricity of vertex 86242: 16 Eccentricity of vertex 86243: 16 Eccentricity of vertex 86244: 16 Eccentricity of vertex 86245: 16 Eccentricity of vertex 86246: 16 Eccentricity of vertex 86247: 16 Eccentricity of vertex 86248: 16 Eccentricity of vertex 86249: 16 Eccentricity of vertex 86250: 16 Eccentricity of vertex 86251: 16 Eccentricity of vertex 86252: 16 Eccentricity of vertex 86253: 16 Eccentricity of vertex 86254: 16 Eccentricity of vertex 86255: 16 Eccentricity of vertex 86256: 16 Eccentricity of vertex 86257: 16 Eccentricity of vertex 86258: 16 Eccentricity of vertex 86259: 16 Eccentricity of vertex 118353: 18 Eccentricity of vertex 118354: 18 Eccentricity of vertex 118355: 18 Eccentricity of vertex 118356: 18 Eccentricity of vertex 118357: 
18 Eccentricity of vertex 118358: 18 Eccentricity of vertex 118359: 18 Eccentricity of vertex 118360: 18 Eccentricity of vertex 118361: 18 Eccentricity of vertex 118362: 18 Eccentricity of vertex 118363: 18 Eccentricity of vertex 118364: 18 Eccentricity of vertex 9145: 10 Diameter of largest component: 18 Radius of largest component: 10 Center of largest component: 9145 Girth of largest component: 4
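The per-vertex eccentricities reported above come from one BFS per source vertex, which is exactly the O(V * E) step the strategy avoids running for every vertex of the movies graph. A minimal sketch with illustrative names:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

// Eccentricity of a vertex: one BFS from the source, returning the
// largest distance reached within its connected component.
public class Eccentricity {

    public static int eccentricity(int[][] adjacent, int source) {
        int[] distance = new int[adjacent.length];
        Arrays.fill(distance, -1);
        Queue<Integer> queue = new ArrayDeque<>();
        distance[source] = 0;
        queue.add(source);
        int farthest = 0;
        while (!queue.isEmpty()) {
            int vertex = queue.remove();
            farthest = Math.max(farthest, distance[vertex]);
            for (int neighbor : adjacent[vertex]) {
                if (distance[neighbor] == -1) {
                    distance[neighbor] = distance[vertex] + 1;
                    queue.add(neighbor);
                }
            }
        }
        return farthest;
    }

    public static void main(String[] args) {
        // Path 0-1-2: the center (vertex 1) has eccentricity 1 (the radius),
        // and the endpoints have eccentricity 2 (the diameter).
        int[][] path = {{1}, {0, 2}, {1}};
        System.out.println(eccentricity(path, 1));  // 1
        System.out.println(eccentricity(path, 0));  // 2
    }
}
```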
[]
{ "number": "4.2.24", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Creative Problem" }
Digraph enumeration. Show that the number of different V-vertex digraphs with no parallel edges is 2^(V^2). (How many digraphs are there that contain V vertices and E edges?) Then compute an upper bound on the percentage of 20-vertex digraphs that could ever be examined by any computer, under the assumptions that every electron in the universe examines a digraph every nanosecond, that the universe has fewer than 10^80 electrons, and that the age of the universe will be less than 10^20 years.
4.1.27

Integer
* object overhead -> 16 bytes
* int value -> 4 bytes
* padding -> 4 bytes
Amount of memory needed: 16 + 4 + 4 = 24 bytes

Node
* object overhead -> 16 bytes
* extra overhead for reference to the enclosing instance -> 8 bytes
* Item reference (item) -> 8 bytes
* Node reference (next) -> 8 bytes
Amount of memory needed: 16 + 8 + 8 + 8 = 40 bytes

Bag
* object overhead -> 16 bytes
* Node reference (first) -> 8 bytes
* int value (size) -> 4 bytes
* padding -> 4 bytes
* N Nodes -> 40N bytes
* N Integers (item) -> 24N bytes
Amount of memory needed: 16 + 8 + 4 + 4 + 40N + 24N = 64N + 32 bytes

Graph
* object overhead -> 16 bytes
* int value (V) -> 4 bytes
* int value (E) -> 4 bytes
* Bag<Integer>[] reference (adj) -> 8 bytes
* Bag<Integer>[] (adj)
    object overhead -> 16 bytes
    int value (length) -> 4 bytes
    padding -> 4 bytes
    Bag references -> 8V bytes
* Bags -> 64N + 32 bytes each; there are V Bags and, in total, they hold 2E nodes -> 128E + 32V bytes
Amount of memory needed: 16 + 4 + 4 + 8 + 16 + 4 + 4 + 8V + 128E + 32V = 128E + 40V + 56 bytes
[]
{ "number": "4.2.27", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Creative Problem" }
DAG enumeration. Give a formula for the number of V-vertex DAGs with E edges.
4.1.28

Non-isomorphic graphs:

There are 2 non-isomorphic graphs with 2 vertices:

o o     o-o

There are 4 non-isomorphic graphs with 3 vertices:

o o o     o o-o     o-o-o       o
                               / \
                              o---o

There are 11 non-isomorphic graphs with 4 vertices and 34 non-isomorphic graphs with 5 vertices; see the reference below for the full drawings.

Based on: http://www.graphclasses.org/smallgraphs.html
[]
{ "number": "4.2.28", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Creative Problem" }
Queue-based topological sort. Develop a topological sort implementation that maintains a vertex-indexed array that keeps track of the indegree of each vertex. Initialize the array and a queue of sources in a single pass through all the edges, as in EXERCISE 4.2.7. Then, perform the following operations until the source queue is empty: ■ Remove a source from the queue and label it. ■ Decrement the entries in the indegree array corresponding to the destination vertex of each of the removed vertex’s edges. ■ If decrementing any entry causes it to become 0, insert the corresponding vertex onto the source queue.
4.1.30 - Eulerian and Hamiltonian cycles

An Eulerian cycle (or Eulerian circuit) is a path which starts and ends at the same vertex and includes every edge exactly once.
A Hamiltonian cycle is a path which starts and ends at the same vertex and includes every vertex exactly once (except for the source, which is visited twice).
By Euler's theorem, a graph has an Eulerian cycle/circuit if and only if all of its edges belong to a single connected component and it does not have any vertices of odd degree.

First graph: 0-1 0-2 0-3 1-3 1-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8
It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8 and 9).
It has a Hamiltonian cycle: 1-4 4-8 8-7 7-6 6-9 9-5 5-2 2-0 0-3 3-1

Second graph: 0-1 0-2 0-3 1-3 0-3 2-5 5-6 3-6 4-7 4-8 5-8 5-9 6-7 6-9 8-8
It has an Eulerian cycle (all the vertices have even degrees): 0-3 3-0 0-2 2-5 5-9 9-6 6-5 5-8 8-8 8-4 4-7 7-6 6-3 3-1 1-0
There is no Hamiltonian cycle.

Third graph: 0-1 1-2 1-3 0-3 0-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8
It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).
It has a Hamiltonian cycle: 4-8 8-7 7-6 6-9 9-5 5-2 2-1 1-3 3-0 0-4

Fourth graph: 4-1 7-9 6-2 7-3 5-0 0-2 0-8 1-6 3-9 6-3 2-8 1-5 9-8 4-5 4-7
It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).
It has a Hamiltonian cycle: 0-5 5-4 4-1 1-6 6-3 3-7 7-9 9-8 8-2 2-0
[]
{ "number": "4.2.30", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Creative Problem" }
Euclidean digraphs. Modify your solution to EXERCISE 4.1.37 to create an API EuclideanDigraph for graphs whose vertices are points in the plane, so that you can work with graphical representations.
4.1.31 - Graph enumeration If we were considering graphs with no self-loops: there are C(V, 2) = V! / (2! * (V - 2)!) ways to choose a set {u, v} of two vertices, i.e. C(V, 2) possible edges, from which we choose the E edges to include. So there are C(C(V, 2), E) = C(V, 2)! / (E! * (C(V, 2) - E)!) different undirected graphs with V vertices and E edges (and no parallel edges and no self-loops). Since we are also considering graphs with self-loops, there are C(V, 2) * 2^V ways to choose a set {u, v} of two vertices in which vertices may or may not have a self-loop. So there are C(C(V, 2) * 2^V, E) = (C(V, 2) * 2^V)! / (E! * (C(V, 2) * 2^V - E)!) different undirected graphs with V vertices and E edges (and no parallel edges). Reference: Handbook of Discrete and Combinatorial Mathematics by Kenneth H. Rosen, page 580 https://math.stackexchange.com/questions/1072726/counting-simple-connected-labeled-graphs-with-n-vertices-and-k-edges https://math.stackexchange.com/questions/128439/how-to-determine-the-number-of-directed-undirected-graphs
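The no-self-loop count above, choosing E of the C(V, 2) possible edges, can be evaluated for small inputs. A sketch using long arithmetic (overflow-free only for small V and E; names are illustrative):

```java
public class GraphEnumeration {

    // Binomial coefficient C(n, k) via the multiplicative formula;
    // each intermediate value C(n - k + i, i) is an exact integer
    public static long binomial(long n, long k) {
        if (k < 0 || k > n) {
            return 0;
        }
        long result = 1;
        for (long i = 1; i <= k; i++) {
            result = result * (n - k + i) / i;
        }
        return result;
    }

    // Number of simple undirected graphs (no self-loops, no parallel edges)
    // with V labeled vertices and E edges: C(C(V, 2), E)
    public static long simpleGraphs(int vertices, int edges) {
        return binomial(binomial(vertices, 2), edges);
    }
}
```

For example, 4 labeled vertices give C(4, 2) = 6 possible edges, and choosing 3 of them yields C(6, 3) = 20 graphs.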
[]
{ "number": "4.2.31", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.2, "section_title": "Directed Graphs", "type": "Creative Problem" }
Prove that you can rescale the weights by adding a positive constant to all of them or by multiplying them all by a positive constant without affecting the MST.
4.1.1 The maximum number of edges in a graph with V vertices and no parallel edges is V * (V - 1) / 2. Since we do not have self-loops or parallel edges, each vertex can connect to V - 1 other vertices. In an undirected graph vertex v connected to vertex w is the same as vertex w connected to vertex v, so we divide the result by 2. Example: o - o | X | o — o The minimum number of edges in a graph with V >= 2 vertices, none of which are isolated (have degree 0), is ceil(V / 2): pair the vertices up with disjoint edges (using one path of two edges for the last three vertices if V is odd). Fewer edges cannot suffice, because each edge gives a nonzero degree to at most two vertices. Example: o — o   o — o
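The maximum-edge bound can be written down directly. A small sketch (names are illustrative):

```java
public class EdgeBounds {

    // Maximum number of edges with no self-loops and no parallel edges:
    // each of the V vertices connects to at most V - 1 others, and every
    // edge v-w is counted twice in that product, hence the division by 2
    public static long maxEdges(long vertices) {
        return vertices * (vertices - 1) / 2;
    }
}
```

The 4-vertex example above (a complete graph on 4 vertices) has 4 * 3 / 2 = 6 edges.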
[]
{ "number": "4.3.1", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Draw all of the MSTs of the graph depicted at right (all edge weights are equal).
4.1.2 adj[] 0 -> 5 -> 2 -> 6 1 -> 4 -> 8 -> 11 2 -> 5 -> 6 -> 0 -> 3 3 -> 10 -> 6 -> 2 4 -> 1 -> 8 5 -> 0 -> 10 -> 2 6 -> 2 -> 3 -> 0 7 -> 8 -> 11 8 -> 1 -> 11 -> 7 -> 4 9 -> 10 -> 5 -> 3 11 -> 8 -> 7 -> 1
[]
{ "number": "4.3.2", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Give the MST of the weighted graph obtained by deleting vertex 7 from tinyEWG.txt (see page 604).
4.1.6 The edges form a cycle, so changing the connection order of one of the vertices' adjacency list creates an impossible sequence of connections. adj[] 0 -> 1 -> 3 (the original was 0 -> 3 -> 1) 1 -> 2 -> 0 2 -> 3 -> 1 3 -> 0 -> 2
[]
{ "number": "4.3.6", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Implement the constructor for EdgeWeightedGraph that reads a graph from the input stream, by suitably modifying the constructor from Graph (see page 526).
4.1.9 marked[] adj[] dfs (0) 0 T 0 5 2 6 1 1 4 8 11 2 2 5 6 0 3 3 3 10 6 2 4 4 1 8 5 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 10 5 3 11 11 8 7 1 dfs (5) 0 T 0 5 2 6 check 0 1 1 4 8 11 2 2 5 6 0 3 3 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 10 5 3 11 11 8 7 1 dfs (10) 0 T 0 5 2 6 check 5 1 1 4 8 11 2 2 5 6 0 3 3 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 dfs (3) 0 T 0 5 2 6 check 10 1 1 4 8 11 2 2 5 6 0 3 3 T 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 dfs (6) 0 T 0 5 2 6 1 1 4 8 11 2 2 5 6 0 3 3 T 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 T 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 dfs (2) 0 T 0 5 2 6 check 5 1 1 4 8 11 check 6 2 T 2 5 6 0 3 check 0 3 T 3 10 6 2 check 3 4 4 1 8 2 done 5 T 5 0 10 2 6 T 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 check 3 check 0 6 done check 2 3 done 10 done check 2 5 done check 2 check 6 0 done edgeTo[] tree 0 5 10 3 6 2
[]
{ "number": "4.3.9", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Develop an EdgeWeightedGraph implementation for dense graphs that uses an adjacency-matrix (two-dimensional array of weights) representation. Disallow parallel edges.
4.1.10 Every connected graph has a vertex whose removal (including all incident edges) will not disconnect the graph. Proof by contradiction: If the graph has a node of degree one, removing it gives a connected graph. Example: o - o Otherwise, every path in the graph belongs to a cycle. To see this, start with any path, then notice that the terminal nodes of this path must be connected to other nodes that are not in the path, so we add them to the path to make a new path. Since the graph is connected, continuing this process it is possible to see that in the end all the nodes from the graph will be in the path. If among all those paths there is no path from which one can remove a node without disconnecting the graph, then all those paths are bridges. In this case, they do not belong to a cycle, which is a contradiction (assuming the graph is finite). Based on: https://math.stackexchange.com/questions/891325/proof-verification-a-connected-graph-always-has-a-vertex-that-is-not-a-cut-vert
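The existential claim can be checked by brute force on small graphs: try each vertex, remove it, and test whether the remainder stays connected. A sketch (adjacency-matrix input; names are illustrative):

```java
public class RemovableVertex {

    // Depth-first search that ignores the vertex 'skip'
    private static void dfs(boolean[][] adjacent, boolean[] marked, int v, int skip) {
        marked[v] = true;
        for (int w = 0; w < adjacent.length; w++) {
            if (w != skip && adjacent[v][w] && !marked[w]) {
                dfs(adjacent, marked, w, skip);
            }
        }
    }

    // Returns a vertex whose removal keeps the rest of the graph connected
    public static int findRemovableVertex(boolean[][] adjacent) {
        int vertices = adjacent.length;
        if (vertices < 2) {
            return -1; // the claim applies to graphs with at least two vertices
        }
        for (int candidate = 0; candidate < vertices; candidate++) {
            boolean[] marked = new boolean[vertices];
            marked[candidate] = true; // pretend the candidate is gone
            int start = candidate == 0 ? 1 : 0;
            dfs(adjacent, marked, start, candidate);
            boolean connected = true;
            for (boolean m : marked) {
                connected = connected && m;
            }
            if (connected) {
                return candidate;
            }
        }
        return -1; // never reached for a connected graph, per the claim above
    }
}
```

On a path 0-1-2 the endpoints are removable; on a star the center is not, but every leaf is.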
[]
{ "number": "4.3.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Determine the amount of memory used by EdgeWeightedGraph to represent a graph with V vertices and E edges, using the memory-cost model of SECTION 1.4.
4.1.11 Tree represented by edgeTo[] after call to bfs(G, 0): 0 5 2 6 10 3
[]
{ "number": "4.3.11", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Suppose that a graph has distinct edge weights. Does its shortest edge have to belong to the MST? Can its longest edge belong to the MST? Does a min-weight edge on every cycle have to belong to the MST? Prove your answer to each question or give a counterexample.
4.1.12 When neither v nor w are at the root, the BFS tree tells us that if they are on the same branch, there is a path between v and w of distance equal to the number of edges between them in the branch. If they are not on the same branch, there is a path between v and w of distance Dv + Dw, where Dv is the distance from the root to vertex v and Dw is the distance from the root to vertex w.
[]
{ "number": "4.3.12", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Given an MST for an edge-weighted graph G, suppose that an edge in G that does not disconnect G is deleted. Describe how to find an MST of the new graph in time proportional to E.
4.1.14 If we use a stack instead of a queue when running breadth-first search, it may not compute shortest paths. This can be seen in the following graph: 0 (source) / \ 1 - 2 | | 4 - 3 If the edge 0 - 2 is inserted before the edge 0 - 1: Using a stack, the distance from 0 to 4 will be 3. Using a queue, the distance from 0 to 4 will be 2. If the edge 0 - 1 is inserted before the edge 0 - 2: Using a stack, the distance from 0 to 3 will be 3. Using a queue, the distance from 0 to 3 will be 2. Thanks to lemonadeseason (https://github.com/lemonadeseason) for correcting the example in this exercise. https://github.com/reneargento/algorithms-sedgewick-wayne/issues/24
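The queue-based distances quoted above come from a standard breadth-first search. A sketch of that version on the same example graph (adjacency-matrix input; names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class BfsDistances {

    // Shortest-path distances (in edges) from source in an undirected graph
    // given as an adjacency matrix; unreachable vertices keep distance -1
    public static int[] distances(boolean[][] adjacent, int source) {
        int vertices = adjacent.length;
        int[] distance = new int[vertices];
        Arrays.fill(distance, -1);
        distance[source] = 0;

        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(source);
        while (!queue.isEmpty()) {
            int v = queue.remove();
            for (int w = 0; w < vertices; w++) {
                if (adjacent[v][w] && distance[w] == -1) {
                    distance[w] = distance[v] + 1;
                    queue.add(w);
                }
            }
        }
        return distance;
    }
}
```

With the edges 0-1, 0-2, 1-2, 1-4, 2-3 and 4-3 from the drawing, the queue-based search puts both 3 and 4 at distance 2 from the source, regardless of edge insertion order; replacing the queue with a stack is what loses that guarantee.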
[]
{ "number": "4.3.14", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Suppose that you use a priority-queue implementation that maintains a sorted list. What would be the order of growth of the worst-case running time for Prim’s algorithm and for Kruskal’s algorithm for graphs with V vertices and E edges? When would this method be appropriate, if ever? Defend your answer.
4.1.19 count marked[] id[] 0 1 2 3 4 5 6 7 8 9 10 11 0 1 2 3 4 5 6 7 8 9 10 11 dfs(0) 0 T 0 dfs(5) 0 T T 0 0 check 0 dfs(10) 0 T T T 0 0 0 check 5 dfs(3) 0 T T T T 0 0 0 0 check 10 dfs(6) 0 T T T T T 0 0 0 0 0 dfs(2) 0 T T T T T T 0 0 0 0 0 0 check 5 check 6 check 0 check 3 2 done check 3 check 0 6 done check 2 3 done 10 done check 2 5 done check 2 check 6 0 done dfs(1) 1 T T T T T T T 0 1 0 0 0 0 0 dfs(4) 1 T T T T T T T T 0 1 0 0 1 0 0 0 check 1 dfs(8) 1 T T T T T T T T T 0 1 0 0 1 0 0 1 0 check 1 dfs(11) 1 T T T T T T T T T T 0 1 0 0 1 0 0 1 0 1 check 8 dfs(7) 1 T T T T T T T T T T T 0 1 0 0 1 0 0 1 1 0 1 check 8 check 11 7 done check 1 11 done check 7 check 4 8 done 4 done check 8 check 11 1 done dfs(9) 2 T T T T T T T T T T T T 0 1 0 0 1 0 0 1 1 2 0 1 9 done
[]
{ "number": "4.3.19", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
True or false: At any point during the execution of Kruskal’s algorithm, each vertex is closer to some vertex in its subtree than to any vertex not in its subtree. Prove your answer.
4.1.20 Has Cycle? marked[] F 0 1 2 3 4 5 6 7 8 9 10 11 dfs(0) T dfs(5) T T check 0 F dfs(10) T T T check 5 F dfs(3) T T T T check 10 F dfs(6) T T T T T dfs(2) T T T T T T check 5 T (cycle found here) check 6 T check 0 T check 3 T 2 done check 3 T check 0 T 6 done check 2 T 3 done 10 done check 2 T 5 done check 2 T check 6 T 0 done dfs(1) T T T T T T T dfs(4) T T T T T T T T check 1 T dfs(8) T T T T T T T T T check 1 T dfs(11) T T T T T T T T T T check 8 T dfs(7) T T T T T T T T T T T check 8 T check 11 T 7 done check 1 T 11 done check 7 T check 4 T 8 done 4 done check 8 T check 11 T 1 done dfs(9) T T T T T T T T T T T T 9 done The order of growth of the running time of the Cycle constructor, in the worst case is O(V + E). Each adjacency-list entry is examined exactly once, and there are 2 * E such entries (two for each edge); initializing the marked[] array takes time proportional to V.
[]
{ "number": "4.3.20", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Provide an implementation of edges() for PrimMST (page 622). Solution: public Iterable<Edge> edges() { Bag<Edge> mst = new Bag<Edge>(); for (int v = 1; v < edgeTo.length; v++) mst.add(edgeTo[v]); return mst; }
4.1.21 Is 2-colorable? marked[] color[] T 0 1 2 3 4 5 6 7 8 9 10 11 0 1 2 3 4 5 6 7 8 9 10 11 dfs(0) T F F F F F F F F F F F F dfs(5) T T F F F F F T F F F F F F check 0 T dfs(10) T T T F F F F F T F F F F F F check 5 T dfs(3) T T T T F F F T F T F F F F F F check 10 T dfs(6) T T T T T F F F T F T F F F F F F dfs(2) T T T T T T F F T T F T F F F F F F check 5 F check 6 F check 0 F check 3 F 2 done check 3 F check 0 F 6 done check 2 F 3 done 10 done check 2 F 5 done check 2 F check 6 F 0 done dfs(1) T T T T T T T F F T T F T F F F F F F dfs(4) T T T T T T T T F F T T T T F F F F F F check 1 F dfs(8) T T T T T T T T T F F T T F T F F F F F F check 1 F dfs(11) T T T T T T T T T T F F T T F T F F F F F T check 8 F dfs(7) T T T T T T T T T T T F F T T F T F F F F F F check 8 F check 11 F 7 done check 1 F 11 done check 7 F check 4 F 8 done 4 done check 8 F check 11 F 1 done dfs(9) T T T T T T T T T T T T F F T T F T F F F F F F 9 done The order of growth of the running time of the TwoColor constructor, in the worst case is O(V + E). Each adjacency-list entry is examined exactly once, and there are 2 * E such entries (two for each edge); initializing the marked[] and color[] arrays takes time proportional to V.
[]
{ "number": "4.3.21", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Exercise" }
Reverse-delete algorithm. Develop an implementation that computes the MST as follows: Start with a graph containing all of the edges. Then repeatedly go through the edges in decreasing order of weight. For each edge, check if deleting that edge will disconnect the graph; if not, delete it. Prove that this algorithm computes the MST. What is the order of growth of the number of edge-weight compares performed by your implementation?
4.1.24 Number of connected components: 33 Size of the largest component: 118774 Number of components of size less than 10: 5 Does the largest component contain Kevin Bacon: Yes Eccentricity, diameter, radius, center and girth: The strategy for computing the eccentricity, diameter, radius, center and girth of the largest component was the following: The algorithm required for these exact computations has complexity of O(V * E) (running breadth-first search from all vertices). V is ~= 10^5, which means the algorithm required for these exact computations has complexity ~= 10^10. In order to reduce the complexity of the computations (and to be able to compute them), domain knowledge was used to compute approximate results. Based on the Kevin Bacon game, we know that Kevin Bacon has been in several movies and that he is closely connected to most of the actors and actresses in the graph. Therefore, he has a high probability of being the center of the graph. Using Kevin Bacon as the center, we compute his vertex eccentricity to get the graph radius. A breadth-first search using Kevin Bacon as the source computes the vertices that are furthest from the center. Computing the eccentricities of these vertices we can find the diameter of the graph. The eccentricities of the center and of the vertices furthest from it are shown in the results. The range of the eccentricities is [10, 18]. Computing the eccentricities of all vertices would bring us back to the original problem of ~= 10^10 operations. Finally, for the girth, we know that there is a very high probability that two actors have worked together on two different movies. This gives a girth of 4, which is the minimum girth possible for the movies graph: Actor -- Movie -- Actor \ / \ Movie / To validate this theory we run the algorithm to compute the girth of the graph but stop once we find a cycle of length 4, since it is the shortest cycle possible.
Eccentricities of Kevin Bacon and of vertices furthest from the center in the largest component: Eccentricity of vertex 22970: 16 Eccentricity of vertex 22971: 16 Eccentricity of vertex 22972: 16 Eccentricity of vertex 22973: 16 Eccentricity of vertex 22974: 16 Eccentricity of vertex 22976: 16 Eccentricity of vertex 22977: 16 Eccentricity of vertex 22978: 16 Eccentricity of vertex 22979: 16 Eccentricity of vertex 22980: 16 Eccentricity of vertex 51437: 18 Eccentricity of vertex 51438: 18 Eccentricity of vertex 51439: 18 Eccentricity of vertex 51440: 18 Eccentricity of vertex 51441: 18 Eccentricity of vertex 51442: 18 Eccentricity of vertex 51443: 18 Eccentricity of vertex 51444: 18 Eccentricity of vertex 51445: 18 Eccentricity of vertex 51446: 18 Eccentricity of vertex 51447: 18 Eccentricity of vertex 51448: 18 Eccentricity of vertex 51449: 18 Eccentricity of vertex 51450: 18 Eccentricity of vertex 51451: 18 Eccentricity of vertex 51452: 18 Eccentricity of vertex 51453: 18 Eccentricity of vertex 51454: 18 Eccentricity of vertex 51455: 18 Eccentricity of vertex 51456: 18 Eccentricity of vertex 51457: 18 Eccentricity of vertex 51458: 18 Eccentricity of vertex 51459: 18 Eccentricity of vertex 51460: 18 Eccentricity of vertex 51461: 18 Eccentricity of vertex 51462: 18 Eccentricity of vertex 51463: 18 Eccentricity of vertex 51464: 18 Eccentricity of vertex 51465: 18 Eccentricity of vertex 51466: 18 Eccentricity of vertex 51467: 18 Eccentricity of vertex 51468: 18 Eccentricity of vertex 51469: 18 Eccentricity of vertex 51470: 18 Eccentricity of vertex 51471: 18 Eccentricity of vertex 51472: 18 Eccentricity of vertex 51473: 18 Eccentricity of vertex 51474: 18 Eccentricity of vertex 51475: 18 Eccentricity of vertex 51476: 18 Eccentricity of vertex 51477: 18 Eccentricity of vertex 51478: 18 Eccentricity of vertex 51479: 18 Eccentricity of vertex 51480: 18 Eccentricity of vertex 51481: 18 Eccentricity of vertex 51482: 18 Eccentricity of vertex 51483: 18 Eccentricity of 
vertex 51484: 18 Eccentricity of vertex 51485: 18 Eccentricity of vertex 51486: 18 Eccentricity of vertex 51487: 18 Eccentricity of vertex 51488: 18 Eccentricity of vertex 51489: 18 Eccentricity of vertex 51490: 18 Eccentricity of vertex 51491: 18 Eccentricity of vertex 51492: 18 Eccentricity of vertex 51493: 18 Eccentricity of vertex 51494: 18 Eccentricity of vertex 51495: 18 Eccentricity of vertex 51496: 18 Eccentricity of vertex 51497: 18 Eccentricity of vertex 51498: 18 Eccentricity of vertex 51499: 18 Eccentricity of vertex 51500: 18 Eccentricity of vertex 51501: 18 Eccentricity of vertex 51502: 18 Eccentricity of vertex 51503: 18 Eccentricity of vertex 51504: 18 Eccentricity of vertex 51505: 18 Eccentricity of vertex 51506: 18 Eccentricity of vertex 51507: 18 Eccentricity of vertex 51508: 18 Eccentricity of vertex 51509: 18 Eccentricity of vertex 51510: 18 Eccentricity of vertex 51511: 18 Eccentricity of vertex 51512: 18 Eccentricity of vertex 51513: 18 Eccentricity of vertex 51514: 18 Eccentricity of vertex 51515: 18 Eccentricity of vertex 51516: 18 Eccentricity of vertex 51517: 18 Eccentricity of vertex 51518: 18 Eccentricity of vertex 51519: 18 Eccentricity of vertex 51520: 18 Eccentricity of vertex 86241: 16 Eccentricity of vertex 86242: 16 Eccentricity of vertex 86243: 16 Eccentricity of vertex 86244: 16 Eccentricity of vertex 86245: 16 Eccentricity of vertex 86246: 16 Eccentricity of vertex 86247: 16 Eccentricity of vertex 86248: 16 Eccentricity of vertex 86249: 16 Eccentricity of vertex 86250: 16 Eccentricity of vertex 86251: 16 Eccentricity of vertex 86252: 16 Eccentricity of vertex 86253: 16 Eccentricity of vertex 86254: 16 Eccentricity of vertex 86255: 16 Eccentricity of vertex 86256: 16 Eccentricity of vertex 86257: 16 Eccentricity of vertex 86258: 16 Eccentricity of vertex 86259: 16 Eccentricity of vertex 118353: 18 Eccentricity of vertex 118354: 18 Eccentricity of vertex 118355: 18 Eccentricity of vertex 118356: 18 Eccentricity of vertex 118357: 
18 Eccentricity of vertex 118358: 18 Eccentricity of vertex 118359: 18 Eccentricity of vertex 118360: 18 Eccentricity of vertex 118361: 18 Eccentricity of vertex 118362: 18 Eccentricity of vertex 118363: 18 Eccentricity of vertex 118364: 18 Eccentricity of vertex 9145: 10 Diameter of largest component: 18 Radius of largest component: 10 Center of largest component: 9145 Girth of largest component: 4
[]
{ "number": "4.3.24", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Creative Problem" }
Animations. Write a client program that does dynamic graphical animations of MST algorithms. Run your program for mediumEWG.txt to produce images like the figures on page 621 and page 624.
4.1.27 Integer * object overhead -> 16 bytes * int value -> 4 bytes * padding -> 4 bytes Amount of memory needed: 16 + 4 + 4 = 24 bytes Node * object overhead -> 16 bytes * extra overhead for reference to the enclosing instance -> 8 bytes * Item reference (item) -> 8 bytes * Node reference (next) -> 8 bytes Amount of memory needed: 16 + 8 + 8 + 8 = 40 bytes Bag * object overhead -> 16 bytes * Node reference (first) -> 8 bytes * int value (size) -> 4 bytes * padding -> 4 bytes * N Nodes -> 40N bytes * Integer (item) -> 24N bytes Amount of memory needed: 16 + 8 + 4 + 4 + 40N + 24N = 64N + 32 bytes Graph * object overhead -> 16 bytes * int value (V) -> 4 bytes * int value (E) -> 4 bytes * Bag<Integer>[] reference (adj) -> 8 bytes * Bag<Integer>[] (adj) object overhead -> 16 bytes int value (length) -> 4 bytes padding -> 4 bytes Bag references -> 8V Bag -> 64E + 32 bytes -> There are V Bags and in total, they have 2E nodes -> 128E + 32V Amount of memory needed: 16 + 4 + 4 + 8 + 16 + 4 + 4 + 8V + 128E + 32V = 128E + 40V + 56 bytes
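The totals above can be encoded as a quick estimator; a sketch whose constants simply mirror the breakdown in this answer (56 fixed bytes, 40 per vertex, 128 per edge, for the adjacency-list Graph storing Integer items):

```java
public class GraphMemory {

    // Estimated bytes used by the adjacency-list Graph representation,
    // per the per-object accounting above: 128E + 40V + 56
    public static long estimatedBytes(long vertices, long edges) {
        return 128 * edges + 40 * vertices + 56;
    }
}
```

For example, a graph with 10 vertices and 13 edges would be estimated at 128 * 13 + 40 * 10 + 56 = 2120 bytes under this model.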
[]
{ "number": "4.3.27", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Creative Problem" }
Space-efficient data structures. Develop an implementation of the lazy version of Prim’s algorithm that saves space by using lower-level data structures for EdgeWeightedGraph and for MinPQ instead of Bag and Edge. Estimate the amount of memory saved as a function of V and E, using the memory-cost model of SECTION 1.4 (see EXERCISE 4.3.11).
4.1.28 Non-isomorphic graphs: There are 2 non-isomorphic graphs with 2 vertices: o o o-o There are 4 non-isomorphic graphs with 3 vertices: o o o o o-o o-o-o o / \ o-o There are 11 non-isomorphic graphs with 4 vertices: o o o o o-o o o o-o-o o o-o-o-o o-o o-o o | o / \ o o o / \ o---o o o-o | | o-o o | o / \ o---o o / \ o---o \ / o o /|\ / o \ / / \ \ o------o There are 34 non-isomorphic graphs with 5 vertices: o o o o o o o o o-o o o o / o-o o o o \ / o o o / | \ o o o o o / \ o o o-o o / \ o o \ o o o / \ o----o o o o /|\\ o o oo o-o | | o-o o o | o-o | | o o o | o / \ o---o o o-o-o-o-o o o /| | o | | \| | o o o--o | | o--o | o o o | | o---o \ / o o--o \/ o--o--o o / \ o o \ / o-o o (This is a complete graph, where all vertices have degree = 4) / / \\ o------o \ /\ /\/ \|/\ |/ o---o o / / \\ o------o \ /\ /\/ \|/\ |/ o o o---o |\ /|\ | X | o |/ \|/ o---o o---o |\ /| | o | |/ \| o---o o---o |\ /|\ | X | o |/ \| o---o o // \ o-o o \/ o o-o-o-o \| | / o o /\ / \ / \ o------o \ \ / / \ /\ / o o o---o | X | o o---o o o |\ /| | o | |/ \| o o o /|\ o--o | o \|/ o o | o /|\ o | o \|/ o o / \ o---o | | o---o o---o | /| | o | |/ | o---o o | o | o / \ o---o o---o | \ | o---o o Based on: http://www.graphclasses.org/smallgraphs.html
[]
{ "number": "4.3.28", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Creative Problem" }
Euclidean weighted graphs. Modify your solution to EXERCISE 4.1.37 to create an API EuclideanEdgeWeightedGraph for graphs whose vertices are points in the plane, so that you can work with graphical representations.
4.1.30 - Eulerian and Hamiltonian cycles An Eulerian cycle (or Eulerian circuit) is a path which starts and ends at the same vertex and includes every edge exactly once. A Hamiltonian cycle is a path which starts and ends at the same vertex and includes every vertex exactly once (except for the source, which is visited twice). According to Euler's theorem, a connected graph has an Eulerian cycle/circuit if and only if it does not have any vertices of odd degree. First graph: 0-1 0-2 0-3 1-3 1-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8 It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8 and 9). It has a Hamiltonian cycle: 1-4 4-8 8-7 7-6 6-9 9-5 5-2 2-0 0-3 3-1 Second graph: 0-1 0-2 0-3 1-3 0-3 2-5 5-6 3-6 4-7 4-8 5-8 5-9 6-7 6-9 8-8 It has an Eulerian cycle (all the vertices have even degrees): 0-3 3-0 0-2 2-5 5-9 9-6 6-5 5-8 8-8 8-4 4-7 7-6 6-3 3-1 1-0 There is no Hamiltonian cycle. Third graph: 0-1 1-2 1-3 0-3 0-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8 It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). It has a Hamiltonian cycle: 4-8 8-7 7-6 6-9 9-5 5-2 2-1 1-3 3-0 0-4 Fourth graph: 4-1 7-9 6-2 7-3 5-0 0-2 0-8 1-6 3-9 6-3 2-8 1-5 9-8 4-5 4-7 It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). It has a Hamiltonian cycle: 0-5 5-4 4-1 1-6 6-3 3-7 7-9 9-8 8-2 2-0
[]
{ "number": "4.3.30", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Creative Problem" }
MST weights. Develop implementations of weight() for LazyPrimMST, PrimMST, and KruskalMST, using a lazy strategy that iterates through the MST edges when the client calls weight(). Then develop alternate implementations that use an eager strategy that maintains a running total as the MST is computed.
4.1.31 - Graph enumeration If we were considering graphs with no self-loops: there are C(V, 2) = V! / (2! * (V - 2)!) ways to choose a set {u, v} of two vertices, i.e. C(V, 2) possible edges, from which we choose the E edges to include. So there are C(C(V, 2), E) = C(V, 2)! / (E! * (C(V, 2) - E)!) different undirected graphs with V vertices and E edges (and no parallel edges and no self-loops). Since we are also considering graphs with self-loops, there are C(V, 2) * 2^V ways to choose a set {u, v} of two vertices in which vertices may or may not have a self-loop. So there are C(C(V, 2) * 2^V, E) = (C(V, 2) * 2^V)! / (E! * (C(V, 2) * 2^V - E)!) different undirected graphs with V vertices and E edges (and no parallel edges). Reference: Handbook of Discrete and Combinatorial Mathematics by Kenneth H. Rosen, page 580 https://math.stackexchange.com/questions/1072726/counting-simple-connected-labeled-graphs-with-n-vertices-and-k-edges https://math.stackexchange.com/questions/128439/how-to-determine-the-number-of-directed-undirected-graphs
[]
{ "number": "4.3.31", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Creative Problem" }
Certification. Write an MST and EdgeWeightedGraph client check() that uses the following cut optimality conditions implied by PROPOSITION J to verify that a proposed set of edges is in fact an MST: A set of edges is an MST if it is a spanning tree and every edge is a minimum-weight edge in the cut defined by removing that edge from the tree. What is the order of growth of the running time of your method?
4.1.33 - Odd cycles A graph is two-colorable (bipartite) if and only if it contains no odd-length cycle. Proof: 1- Proving that a graph with an odd-length cycle cannot be bipartite: If a graph G is bipartite with vertex sets V1 and V2, every step along a walk takes you either from V1 to V2 or from V2 to V1. To end up where you started, therefore, you must take an even number of steps. 2- Proving that a graph with only even-length cycles is bipartite: Consider G to be a graph with only even-length cycles. Let v0 be any vertex. For each vertex v in the same component C0 as v0 let d(v) be the length of the shortest path from v0 to v. Color red every vertex in C0 whose distance from v0 is even, and color the other vertices of C0 blue. Do the same for each component of G. Check that if G had any edge between two red vertices or between two blue vertices, it would have an odd cycle. Thus, G is bipartite, the red vertices and the blue vertices being the two parts. Reference: https://math.stackexchange.com/questions/311665/proof-a-graph-is-bipartite-if-and-only-if-it-contains-no-odd-cycles
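The proof's coloring-by-distance-parity argument translates directly into a breadth-first check: color each vertex by the parity of its distance from the component's start vertex and fail if an edge joins two vertices of the same color. A sketch (names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class OddCycleCheck {

    // Returns true if the graph is two-colorable, i.e. has no odd-length cycle.
    // Colors each vertex by the parity of its BFS distance, as in the proof.
    public static boolean isBipartite(List<List<Integer>> graph) {
        int vertices = graph.size();
        int[] color = new int[vertices]; // 0 = uncolored, 1 and 2 = the two colors

        for (int source = 0; source < vertices; source++) { // one BFS per component
            if (color[source] != 0) {
                continue;
            }
            color[source] = 1;
            Queue<Integer> queue = new ArrayDeque<>();
            queue.add(source);
            while (!queue.isEmpty()) {
                int v = queue.remove();
                for (int w : graph.get(v)) {
                    if (color[w] == 0) {
                        color[w] = 3 - color[v]; // opposite color
                        queue.add(w);
                    } else if (color[w] == color[v]) {
                        return false; // edge inside one color class: odd cycle
                    }
                }
            }
        }
        return true;
    }
}
```

A triangle (odd cycle) fails the check, while a 4-cycle passes, matching the theorem.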
[]
{ "number": "4.3.33", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.3, "section_title": "Minimum Spanning Trees", "type": "Creative Problem" }
True or false: Adding a constant to every edge weight does not change the solution to the single-source shortest-paths problem.
4.1.1 The maximum number of edges in a graph with V vertices and no parallel edges is V * (V - 1) / 2. Since we do not have self-loops or parallel edges, each vertex can connect to V - 1 other vertices. In an undirected graph vertex v connected to vertex w is the same as vertex w connected to vertex v, so we divide the result by 2. Example: o - o | X | o — o The minimum number of edges in a graph with V >= 2 vertices, none of which are isolated (have degree 0), is ceil(V / 2): pair the vertices up with disjoint edges (using one path of two edges for the last three vertices if V is odd). Fewer edges cannot suffice, because each edge gives a nonzero degree to at most two vertices. Example: o — o   o — o
[]
{ "number": "4.4.1", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Provide an implementation of toString() for EdgeWeightedDigraph.
4.1.2 adj[] 0 -> 5 -> 2 -> 6 1 -> 4 -> 8 -> 11 2 -> 5 -> 6 -> 0 -> 3 3 -> 10 -> 6 -> 2 4 -> 1 -> 8 5 -> 0 -> 10 -> 2 6 -> 2 -> 3 -> 0 7 -> 8 -> 11 8 -> 1 -> 11 -> 7 -> 4 9 -> 10 -> 5 -> 3 11 -> 8 -> 7 -> 1
[]
{ "number": "4.4.2", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Give a trace that shows the process of computing the SPT of the digraph defined in EXERCISE 4.4.5 with the eager version of Dijkstra’s algorithm.
4.1.6 The edges form a cycle, so changing the connection order of one of the vertices' adjacency list creates an impossible sequence of connections. adj[] 0 -> 1 -> 3 (the original was 0 -> 3 -> 1) 1 -> 2 -> 0 2 -> 3 -> 1 3 -> 0 -> 2
[]
{ "number": "4.4.6", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
The table below, from an old published road map, purports to give the length of the shortest routes connecting the cities. It contains an error. Correct the table. Also, add a table that shows how to achieve the shortest routes.
4.1.9 marked[] adj[] dfs (0) 0 T 0 5 2 6 1 1 4 8 11 2 2 5 6 0 3 3 3 10 6 2 4 4 1 8 5 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 10 5 3 11 11 8 7 1 dfs (5) 0 T 0 5 2 6 check 0 1 1 4 8 11 2 2 5 6 0 3 3 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 10 5 3 11 11 8 7 1 dfs (10) 0 T 0 5 2 6 check 5 1 1 4 8 11 2 2 5 6 0 3 3 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 dfs (3) 0 T 0 5 2 6 check 10 1 1 4 8 11 2 2 5 6 0 3 3 T 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 dfs (6) 0 T 0 5 2 6 1 1 4 8 11 2 2 5 6 0 3 3 T 3 10 6 2 4 4 1 8 5 T 5 0 10 2 6 T 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 dfs (2) 0 T 0 5 2 6 check 5 1 1 4 8 11 check 6 2 T 2 5 6 0 3 check 0 3 T 3 10 6 2 check 3 4 4 1 8 2 done 5 T 5 0 10 2 6 T 6 2 3 0 7 7 8 11 8 8 1 11 7 4 9 9 10 T 10 5 3 11 11 8 7 1 check 3 check 0 6 done check 2 3 done 10 done check 2 5 done check 2 check 6 0 done edgeTo[] tree 0 5 10 3 6 2
[]
{ "number": "4.4.9", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Consider the edges in the digraph defined in EXERCISE 4.4.4 to be undirected edges such that each edge corresponds to equal-weight edges in both directions in the edge-weighted digraph. Answer EXERCISE 4.4.6 for this corresponding edge-weighted digraph.
4.1.10 Every connected graph has a vertex whose removal (including all incident edges) will not disconnect the graph. Proof by contradiction: If the graph has a node of degree one, removing it gives a connected graph. Example: o - o Otherwise, every path in the graph belongs to a cycle. To see this, start with any path, then notice that the terminal nodes of this path must be connected to other nodes that are not in the path, so we add them to the path to make a new path. Since the graph is connected, continuing this process it is possible to see that in the end all the nodes from the graph will be in the path. If among all those paths there is no path from which one can remove a node without disconnecting the graph, then all those paths are bridges. In this case, they do not belong to a cycle, which is a contradiction (assuming the graph is finite). Based on: https://math.stackexchange.com/questions/891325/proof-verification-a-connected-graph-always-has-a-vertex-that-is-not-a-cut-vert
[]
{ "number": "4.4.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Use the memory-cost model of SECTION 1.4 to determine the amount of memory used by EdgeWeightedDigraph to represent a graph with V vertices and E edges.
4.1.11 Tree represented by edgeTo[] after call to bfs(G, 0): 0 5 2 6 10 3
[]
{ "number": "4.4.11", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Adapt the DirectedCycle and Topological classes from SECTION 4.2 to use the EdgeWeightedDigraph and DirectedEdge APIs of this section, thus implementing EdgeWeightedCycleFinder and EdgeWeightedTopological classes.
4.1.12 When neither v nor w is the root, the BFS tree tells us that if they are on the same branch, there is a path between v and w whose length equals the number of edges between them on that branch. If they are on different branches, there is a path between v and w of length Dv + Dw (through the root), where Dv is the distance from the root to vertex v and Dw is the distance from the root to vertex w.
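The distTo[] values that make this argument work come straight out of one BFS per source; a small Python sketch (names are mine) that also checks the Dv + Dw path through the root is an upper bound on the true distance:

```python
from collections import deque

def bfs_distances(adj, source):
    """distTo[] of the BFS tree rooted at source: dist[v] is the length
    of a shortest path from source to v."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist
```

On a 6-cycle rooted at 0, vertices 2 and 4 sit on different branches; the true distance between them (2) never exceeds D2 + D4 = 4.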
[]
{ "number": "4.4.12", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Show the paths that would be discovered by the two strawman approaches described on page 668 for the example tinyEWDn.txt shown on that page.
4.1.14 If we use a stack instead of a queue when running breadth-first search, it may not compute shortest paths. This can be seen in the following graph:

  0 (source)
 / \
1 - 2
|   |
4 - 3

If the edge 0-2 is inserted before the edge 0-1: using a stack, the distance from 0 to 4 will be 3; using a queue, the distance from 0 to 4 will be 2. If the edge 0-1 is inserted before the edge 0-2: using a stack, the distance from 0 to 3 will be 3; using a queue, the distance from 0 to 3 will be 2. Thanks to lemonadeseason (https://github.com/lemonadeseason) for correcting the example in this exercise. https://github.com/reneargento/algorithms-sedgewick-wayne/issues/24
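A small sketch contrasting the two frontiers on this graph. The adjacency order below is one concrete order that reproduces the effect at vertex 3; as the answer notes, with a different insertion order the wrong distance shows up at vertex 4 instead:

```python
from collections import deque

# the graph from the answer: edges 0-1, 0-2, 1-2, 1-4, 2-3, 3-4
adj = {0: [2, 1], 1: [0, 2, 4], 2: [0, 1, 3], 3: [2, 4], 4: [1, 3]}

def search(adj, source, use_queue):
    """BFS skeleton whose frontier is a queue (FIFO) or a stack (LIFO);
    vertices are marked and assigned a distance when first discovered."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft() if use_queue else frontier.pop()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                frontier.append(w)
    return dist
```

With the queue, vertex 3 gets its true distance 2 (path 0-2-3); with the stack, it is reached late via 0-1-4-3 and gets distance 3.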
[]
{ "number": "4.4.14", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Find the lowest-weight cycle (best arbitrage opportunity) in the example shown in the text.
4.1.19 count marked[] id[] 0 1 2 3 4 5 6 7 8 9 10 11 0 1 2 3 4 5 6 7 8 9 10 11 dfs(0) 0 T 0 dfs(5) 0 T T 0 0 check 0 dfs(10) 0 T T T 0 0 0 check 5 dfs(3) 0 T T T T 0 0 0 0 check 10 dfs(6) 0 T T T T T 0 0 0 0 0 dfs(2) 0 T T T T T T 0 0 0 0 0 0 check 5 check 6 check 0 check 3 2 done check 3 check 0 6 done check 2 3 done 10 done check 2 5 done check 2 check 6 0 done dfs(1) 1 T T T T T T T 0 1 0 0 0 0 0 dfs(4) 1 T T T T T T T T 0 1 0 0 1 0 0 0 check 1 dfs(8) 1 T T T T T T T T T 0 1 0 0 1 0 0 1 0 check 1 dfs(11) 1 T T T T T T T T T T 0 1 0 0 1 0 0 1 0 1 check 8 dfs(7) 1 T T T T T T T T T T T 0 1 0 0 1 0 0 1 1 0 1 check 8 check 11 7 done check 1 11 done check 7 check 4 8 done 4 done check 8 check 11 1 done dfs(9) 2 T T T T T T T T T T T T 0 1 0 0 1 0 0 1 1 2 0 1 9 done
[]
{ "number": "4.4.19", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Find a currency-conversion table online or in a newspaper. Use it to build an arbitrage table. Note: Avoid tables that are derived (calculated) from a few values and that therefore do not give sufficiently accurate conversion information to be interesting. Extra credit: Make a killing in the money-exchange market!
4.1.20 Has Cycle? marked[] F 0 1 2 3 4 5 6 7 8 9 10 11 dfs(0) T dfs(5) T T check 0 F dfs(10) T T T check 5 F dfs(3) T T T T check 10 F dfs(6) T T T T T dfs(2) T T T T T T check 5 T (cycle found here) check 6 T check 0 T check 3 T 2 done check 3 T check 0 T 6 done check 2 T 3 done 10 done check 2 T 5 done check 2 T check 6 T 0 done dfs(1) T T T T T T T dfs(4) T T T T T T T T check 1 T dfs(8) T T T T T T T T T check 1 T dfs(11) T T T T T T T T T T check 8 T dfs(7) T T T T T T T T T T T check 8 T check 11 T 7 done check 1 T 11 done check 7 T check 4 T 8 done 4 done check 8 T check 11 T 1 done dfs(9) T T T T T T T T T T T T 9 done The order of growth of the running time of the Cycle constructor, in the worst case is O(V + E). Each adjacency-list entry is examined exactly once, and there are 2 * E such entries (two for each edge); initializing the marked[] array takes time proportional to V.
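The O(V + E) cycle check traced above can be sketched compactly in Python (the book's Cycle class is in Java; names here are mine):

```python
def has_cycle(adj):
    """DFS cycle detection in an undirected graph: reaching an already
    marked vertex that is not the parent closes a cycle. Each adjacency
    entry is examined at most once, so the worst case is O(V + E)."""
    marked = set()

    def dfs(v, parent):
        marked.add(v)
        for w in adj[v]:
            if w not in marked:
                if dfs(w, v):
                    return True
            elif w != parent:
                return True  # back edge found: cycle
        return False

    return any(v not in marked and dfs(v, None) for v in adj)
```

A triangle reports a cycle; a simple path does not.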
[]
{ "number": "4.4.20", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Show, in the style of the trace in the text, the process of computing the SPT with the Bellman-Ford algorithm for the edge-weighted digraph of EXERCISE 4.4.5.
4.1.21 Is 2-colorable? marked[] color[] T 0 1 2 3 4 5 6 7 8 9 10 11 0 1 2 3 4 5 6 7 8 9 10 11 dfs(0) T F F F F F F F F F F F F dfs(5) T T F F F F F T F F F F F F check 0 T dfs(10) T T T F F F F F T F F F F F F check 5 T dfs(3) T T T T F F F T F T F F F F F F check 10 T dfs(6) T T T T T F F F T F T F F F F F F dfs(2) T T T T T T F F T T F T F F F F F F check 5 F check 6 F check 0 F check 3 F 2 done check 3 F check 0 F 6 done check 2 F 3 done 10 done check 2 F 5 done check 2 F check 6 F 0 done dfs(1) T T T T T T T F F T T F T F F F F F F dfs(4) T T T T T T T T F F T T T T F F F F F F check 1 F dfs(8) T T T T T T T T T F F T T F T F F F F F F check 1 F dfs(11) T T T T T T T T T T F F T T F T F F F F F T check 8 F dfs(7) T T T T T T T T T T T F F T T F T F F F F F F check 8 F check 11 F 7 done check 1 F 11 done check 7 F check 4 F 8 done 4 done check 8 F check 11 F 1 done dfs(9) T T T T T T T T T T T T F F T T F T F F F F F F 9 done The order of growth of the running time of the TwoColor constructor, in the worst case is O(V + E). Each adjacency-list entry is examined exactly once, and there are 2 * E such entries (two for each edge); initializing the marked[] and color[] arrays takes time proportional to V.
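The two-coloring logic traced above, sketched in Python (the book's TwoColor class is in Java; names here are mine):

```python
def is_two_colorable(adj):
    """DFS two-coloring: color each vertex the opposite of its parent and
    fail when an edge connects two same-colored vertices. O(V + E)."""
    color = {}

    def dfs(v, c):
        color[v] = c
        for w in adj[v]:
            if w not in color:
                if not dfs(w, not c):
                    return False
            elif color[w] == c:
                return False  # edge v-w has the same color on both ends
        return True

    return all(v in color or dfs(v, True) for v in adj)
```

A 4-cycle is bipartite; a triangle (odd cycle) is not.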
[]
{ "number": "4.4.21", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Exercise" }
Multisource shortest paths. Develop an API and implementation that uses Dijkstra’s algorithm to solve the multisource shortest-paths problem on edge-weighted digraphs with positive edge weights: given a set of sources, find a shortest-paths forest that enables implementation of a method that returns to clients the shortest path from any source to each vertex. Hint: Add a dummy vertex with a zero-weight edge to each source, or initialize the priority queue with all sources, with their distTo[] entries set to 0.
4.1.24 Number of connected components: 33
Size of the largest component: 118774
Number of components of size less than 10: 5
Does the largest component contain Kevin Bacon? Yes

Eccentricity, diameter, radius, center and girth: The strategy for computing the eccentricity, diameter, radius, center and girth of the largest component was the following. The algorithm required for the exact computations (running breadth-first search from all vertices) has complexity O(V * E). V is ~= 10^5, which means the exact computation requires ~= 10^10 operations. In order to reduce the cost of the computations (and to be able to run them at all), domain knowledge was used to compute approximate results. Based on the Kevin Bacon game, we know that Kevin Bacon has been in several movies and that he is closely connected to most of the actors and actresses in the graph. Therefore, he has a high probability of being the center of the graph. Using Kevin Bacon as the center, we compute his vertex eccentricity to get the graph radius. A breadth-first search using Kevin Bacon as the source finds the vertices that are furthest from the center. Computing the eccentricities of these vertices, we can find the diameter of the graph. The eccentricities of the center and of the vertices furthest from it are shown in the results; the range of these eccentricities is [10, 18]. Computing the eccentricities of all vertices would bring us back to the original problem of ~= 10^10 operations. Finally, for the girth, we know that there is a very high probability that two actors have worked together on two different movies. This gives a girth of 4, which is the minimum girth possible for the movies graph:

Actor -- Movie -- Actor
    \             /
     \-- Movie --/

To validate this theory, we run the algorithm to compute the girth of the graph but stop once we find a cycle of length 4, since it is the shortest cycle possible.
Eccentricities of Kevin Bacon and of vertices furthest from the center in the largest component: Eccentricity of vertex 22970: 16 Eccentricity of vertex 22971: 16 Eccentricity of vertex 22972: 16 Eccentricity of vertex 22973: 16 Eccentricity of vertex 22974: 16 Eccentricity of vertex 22976: 16 Eccentricity of vertex 22977: 16 Eccentricity of vertex 22978: 16 Eccentricity of vertex 22979: 16 Eccentricity of vertex 22980: 16 Eccentricity of vertex 51437: 18 Eccentricity of vertex 51438: 18 Eccentricity of vertex 51439: 18 Eccentricity of vertex 51440: 18 Eccentricity of vertex 51441: 18 Eccentricity of vertex 51442: 18 Eccentricity of vertex 51443: 18 Eccentricity of vertex 51444: 18 Eccentricity of vertex 51445: 18 Eccentricity of vertex 51446: 18 Eccentricity of vertex 51447: 18 Eccentricity of vertex 51448: 18 Eccentricity of vertex 51449: 18 Eccentricity of vertex 51450: 18 Eccentricity of vertex 51451: 18 Eccentricity of vertex 51452: 18 Eccentricity of vertex 51453: 18 Eccentricity of vertex 51454: 18 Eccentricity of vertex 51455: 18 Eccentricity of vertex 51456: 18 Eccentricity of vertex 51457: 18 Eccentricity of vertex 51458: 18 Eccentricity of vertex 51459: 18 Eccentricity of vertex 51460: 18 Eccentricity of vertex 51461: 18 Eccentricity of vertex 51462: 18 Eccentricity of vertex 51463: 18 Eccentricity of vertex 51464: 18 Eccentricity of vertex 51465: 18 Eccentricity of vertex 51466: 18 Eccentricity of vertex 51467: 18 Eccentricity of vertex 51468: 18 Eccentricity of vertex 51469: 18 Eccentricity of vertex 51470: 18 Eccentricity of vertex 51471: 18 Eccentricity of vertex 51472: 18 Eccentricity of vertex 51473: 18 Eccentricity of vertex 51474: 18 Eccentricity of vertex 51475: 18 Eccentricity of vertex 51476: 18 Eccentricity of vertex 51477: 18 Eccentricity of vertex 51478: 18 Eccentricity of vertex 51479: 18 Eccentricity of vertex 51480: 18 Eccentricity of vertex 51481: 18 Eccentricity of vertex 51482: 18 Eccentricity of vertex 51483: 18 Eccentricity of 
vertex 51484: 18 Eccentricity of vertex 51485: 18 Eccentricity of vertex 51486: 18 Eccentricity of vertex 51487: 18 Eccentricity of vertex 51488: 18 Eccentricity of vertex 51489: 18 Eccentricity of vertex 51490: 18 Eccentricity of vertex 51491: 18 Eccentricity of vertex 51492: 18 Eccentricity of vertex 51493: 18 Eccentricity of vertex 51494: 18 Eccentricity of vertex 51495: 18 Eccentricity of vertex 51496: 18 Eccentricity of vertex 51497: 18 Eccentricity of vertex 51498: 18 Eccentricity of vertex 51499: 18 Eccentricity of vertex 51500: 18 Eccentricity of vertex 51501: 18 Eccentricity of vertex 51502: 18 Eccentricity of vertex 51503: 18 Eccentricity of vertex 51504: 18 Eccentricity of vertex 51505: 18 Eccentricity of vertex 51506: 18 Eccentricity of vertex 51507: 18 Eccentricity of vertex 51508: 18 Eccentricity of vertex 51509: 18 Eccentricity of vertex 51510: 18 Eccentricity of vertex 51511: 18 Eccentricity of vertex 51512: 18 Eccentricity of vertex 51513: 18 Eccentricity of vertex 51514: 18 Eccentricity of vertex 51515: 18 Eccentricity of vertex 51516: 18 Eccentricity of vertex 51517: 18 Eccentricity of vertex 51518: 18 Eccentricity of vertex 51519: 18 Eccentricity of vertex 51520: 18 Eccentricity of vertex 86241: 16 Eccentricity of vertex 86242: 16 Eccentricity of vertex 86243: 16 Eccentricity of vertex 86244: 16 Eccentricity of vertex 86245: 16 Eccentricity of vertex 86246: 16 Eccentricity of vertex 86247: 16 Eccentricity of vertex 86248: 16 Eccentricity of vertex 86249: 16 Eccentricity of vertex 86250: 16 Eccentricity of vertex 86251: 16 Eccentricity of vertex 86252: 16 Eccentricity of vertex 86253: 16 Eccentricity of vertex 86254: 16 Eccentricity of vertex 86255: 16 Eccentricity of vertex 86256: 16 Eccentricity of vertex 86257: 16 Eccentricity of vertex 86258: 16 Eccentricity of vertex 86259: 16 Eccentricity of vertex 118353: 18 Eccentricity of vertex 118354: 18 Eccentricity of vertex 118355: 18 Eccentricity of vertex 118356: 18 Eccentricity of vertex 118357: 
18 Eccentricity of vertex 118358: 18 Eccentricity of vertex 118359: 18 Eccentricity of vertex 118360: 18 Eccentricity of vertex 118361: 18 Eccentricity of vertex 118362: 18 Eccentricity of vertex 118363: 18 Eccentricity of vertex 118364: 18 Eccentricity of vertex 9145: 10 Diameter of largest component: 18 Radius of largest component: 10 Center of largest component: 9145 Girth of largest component: 4
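The eccentricity computation at the heart of this strategy is one BFS per source vertex; a Python sketch (names are mine):

```python
from collections import deque

def eccentricity(adj, source):
    """Largest shortest-path distance from source, via one BFS: O(V + E).
    Running it from every vertex (the O(V * E)-class exact computation
    mentioned above) yields the diameter (largest eccentricity) and the
    radius (smallest eccentricity)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())
```

On a path 0-1-2-3 the endpoints have eccentricity 3 (the diameter) and the middle vertices have eccentricity 2 (the radius).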
[]
{ "number": "4.4.24", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Creative Problem" }
Shortest paths in Euclidean graphs. Adapt our APIs to speed up Dijkstra’s algorithm in the case where it is known that vertices are points in the plane.
4.1.27

Integer
* object overhead -> 16 bytes
* int value -> 4 bytes
* padding -> 4 bytes
Amount of memory needed: 16 + 4 + 4 = 24 bytes

Node
* object overhead -> 16 bytes
* extra overhead for reference to the enclosing instance -> 8 bytes
* Item reference (item) -> 8 bytes
* Node reference (next) -> 8 bytes
Amount of memory needed: 16 + 8 + 8 + 8 = 40 bytes

Bag
* object overhead -> 16 bytes
* Node reference (first) -> 8 bytes
* int value (size) -> 4 bytes
* padding -> 4 bytes
* N Nodes -> 40N bytes
* N Integer items -> 24N bytes
Amount of memory needed: 16 + 8 + 4 + 4 + 40N + 24N = 64N + 32 bytes

Graph
* object overhead -> 16 bytes
* int value (V) -> 4 bytes
* int value (E) -> 4 bytes
* Bag<Integer>[] reference (adj) -> 8 bytes
* Bag<Integer>[] array (adj):
    object overhead -> 16 bytes
    int value (length) -> 4 bytes
    padding -> 4 bytes
    Bag references -> 8V bytes
* V Bags of 64N + 32 bytes each; together they hold 2E nodes (two per edge) -> 128E + 32V bytes
Amount of memory needed: 16 + 4 + 4 + 8 + 16 + 4 + 4 + 8V + 128E + 32V = 128E + 40V + 56 bytes
[]
{ "number": "4.4.27", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Creative Problem" }
Longest paths in DAGs. Develop an implementation AcyclicLP that can solve the longest-paths problem in edge-weighted DAGs, as described in PROPOSITION T.
4.1.28 Non-isomorphic graphs:

There are 2 non-isomorphic graphs with 2 vertices:

o o      o-o

There are 4 non-isomorphic graphs with 3 vertices:

o o o    o o-o    o-o-o      o
                            / \
                           o---o

There are 11 non-isomorphic graphs with 4 vertices, and 34 non-isomorphic graphs with 5 vertices (one of which is the complete graph on 5 vertices, where all vertices have degree 4).

Based on: http://www.graphclasses.org/smallgraphs.html
[]
{ "number": "4.4.28", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Creative Problem" }
All-pairs shortest path in graphs with negative cycles. Articulate an API like the one implemented on page 656 for the all-pairs shortest-paths problem in graphs with no negative cycles. Develop an implementation that runs a version of Bellman-Ford to identify weights pi[v] such that for any edge v->w, the edge weight plus the difference between pi[v] and pi[w] is nonnegative. Then use these weights to reweight the graph, so that Dijkstra’s algorithm is effective for finding all shortest paths in the reweighted graph.
4.1.30 - Eulerian and Hamiltonian cycles

An Eulerian cycle (or Eulerian circuit) is a path which starts and ends at the same vertex and includes every edge exactly once. A Hamiltonian cycle is a path which starts and ends at the same vertex and includes every vertex exactly once (except for the source, which is visited twice). According to Euler's theorem, a connected graph has an Eulerian cycle/circuit if and only if it has no vertices of odd degree.

First graph: 0-1 0-2 0-3 1-3 1-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8
It does not have an Eulerian cycle because it has vertices of odd degree (all of 0 through 9 have degree 3). It has a Hamiltonian cycle: 1-4 4-8 8-7 7-6 6-9 9-5 5-2 2-0 0-3 3-1

Second graph: 0-1 0-2 0-3 1-3 0-3 2-5 5-6 3-6 4-7 4-8 5-8 5-9 6-7 6-9 8-8
It has an Eulerian cycle (all the vertices have even degrees): 0-3 3-0 0-2 2-5 5-9 9-6 6-5 5-8 8-8 8-4 4-7 7-6 6-3 3-1 1-0
There is no Hamiltonian cycle.

Third graph: 0-1 1-2 1-3 0-3 0-4 2-5 2-9 3-6 4-7 4-8 5-8 5-9 6-7 6-9 7-8
It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). It has a Hamiltonian cycle: 4-8 8-7 7-6 6-9 9-5 5-2 2-1 1-3 3-0 0-4

Fourth graph: 4-1 7-9 6-2 7-3 5-0 0-2 0-8 1-6 3-9 6-3 2-8 1-5 9-8 4-5 4-7
It does not have an Eulerian cycle because it has vertices of odd degree (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). It has a Hamiltonian cycle: 0-5 5-4 4-1 1-6 6-3 3-7 7-9 9-8 8-2 2-0
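The degree condition in Euler's theorem is easy to check mechanically; a Python sketch (names are mine) applied to the first two graphs above:

```python
def has_even_degrees(edges):
    """Degree condition from Euler's theorem (for a connected graph):
    every vertex must have even degree. A self-loop such as 8-8
    contributes 2 to its vertex's degree."""
    degree = {}
    for v, w in edges:
        degree[v] = degree.get(v, 0) + 1
        degree[w] = degree.get(w, 0) + 1
    return all(d % 2 == 0 for d in degree.values())
```

The first graph fails (every vertex has degree 3) and the second graph passes, matching the answer.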
[]
{ "number": "4.4.30", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Creative Problem" }
All-pairs shortest path on a line. Given a weighted line graph (undirected connected graph, all vertices of degree 2, except two endpoints which have degree 1), devise an algorithm that preprocesses the graph in linear time and can return the distance of the shortest path between any two vertices in constant time.
4.1.31 - Graph enumeration

If we were considering graphs with no self-loops: there are C(V, 2) = V! / (2! * (V - 2)!) ways to choose a set {u, v} of two vertices, and the E edges must be chosen from among those C(V, 2) possibilities. So there are

C(C(V, 2), E) = C(V, 2)! / (E! * (C(V, 2) - E)!)

different undirected graphs with V vertices and E edges (and no parallel edges and no self-loops).

Since we are also considering graphs with self-loops, there are C(V, 2) * 2^V ways to choose a set {u, v} of two vertices in which vertices may or may not have a self-loop. So there are

C(C(V, 2) * 2^V, E) = (C(V, 2) * 2^V)! / (E! * ((C(V, 2) * 2^V) - E)!)

different undirected graphs with V vertices and E edges (and no parallel edges).

Reference: Handbook of Discrete and Combinatorial Mathematics by Kenneth H. Rosen, page 580
https://math.stackexchange.com/questions/1072726/counting-simple-connected-labeled-graphs-with-n-vertices-and-k-edges
https://math.stackexchange.com/questions/128439/how-to-determine-the-number-of-directed-undirected-graphs
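The first formula (no self-loops) can be sanity-checked by brute-force enumeration for small V and E; a Python sketch (names are mine):

```python
from itertools import combinations
from math import comb

def count_graphs(V, E):
    """Number of simple undirected labeled graphs on V vertices with
    exactly E edges (no self-loops, no parallel edges): C(C(V, 2), E)."""
    return comb(comb(V, 2), E)

def count_graphs_brute_force(V, E):
    """Enumerate every E-subset of the C(V, 2) possible edges."""
    possible_edges = list(combinations(range(V), 2))
    return sum(1 for _ in combinations(possible_edges, E))
```

For example, with V = 4 and E = 3 both approaches give C(6, 3) = 20.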
[]
{ "number": "4.4.31", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Creative Problem" }
Shortest path in a grid. Given an N-by-N matrix of positive integers, find the shortest path from the (0, 0) entry to the (N-1, N-1) entry, where the length of the path is the sum of the integers in the path. Repeat the problem but assume you can only move right and down.
4.1.33 - Odd cycles A graph is two-colorable (bipartite) if and only if it contains no odd-length cycle. Proof: 1- Proving that a graph with an odd-length cycle cannot be bipartite: If a graph G is bipartite with vertex sets V1 and V2, every step along a walk takes you either from V1 to V2 or from V2 to V1. To end up where you started, therefore, you must take an even number of steps. 2- Proving that a graph with only even-length cycles is bipartite: Consider G to be a graph with only even-length cycles. Let v0 be any vertex. For each vertex v in the same component C0 as v0 let d(v) be the length of the shortest path from v0 to v. Color red every vertex in C0 whose distance from v0 is even, and color the other vertices of C0 blue. Do the same for each component of G. Check that if G had any edge between two red vertices or between two blue vertices, it would have an odd cycle. Thus, G is bipartite, the red vertices and the blue vertices being the two parts. Reference: https://math.stackexchange.com/questions/311665/proof-a-graph-is-bipartite-if-and-only-if-it-contains-no-odd-cycles
[]
{ "number": "4.4.33", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Creative Problem" }
Bitonic shortest path. Given a digraph, find a bitonic shortest path from s to every other vertex (if one exists). A path is bitonic if there is an intermediate vertex v such that the edges on the path from s to v are strictly increasing and the edges on the path from v to t are strictly decreasing. The path should be simple (no repeated vertices).
4.1.35 - Biconnectedness

Any graph with no articulation points is biconnected. Proof: Consider two vertices s, t and a path P1 connecting s to t. We know that no vertex in P1 is an articulation point, so for each vertex v in the path there is always another path P2 connecting s to t that does not include it. Also, P2 does not include any of the vertices of P1; otherwise, a vertex included in both paths would be an articulation point (being the only way to connect s to t). This means that every pair of vertices is connected by two vertex-disjoint paths (such as P1 and P2), making the graph biconnected.

Graph illustration (P1 is the path s-v1 v1-v2 v2-t and P2 is the path s-v3 v3-v4 v4-t):

s
|\
| \
v1 v3
|   \
|   v4
v2  /
|  /
|/
t
[]
{ "number": "4.4.35", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 4, "chapter_title": "Graphs", "section": 4.4, "section_title": "Shortest Paths", "type": "Creative Problem" }
Give a trace for LSD string sort for the keys no is th ti fo al go pe to co to th ai of th pa
5.1.2 Trace for LSD string sort (same model as used in the book):

input   d=1   d=0   output
 no      pa    ai    ai
 is      pe    al    al
 th      of    co    co
 ti      th    fo    fo
 fo      th    go    go
 al      th    is    is
 go      ti    no    no
 pe      ai    of    of
 to      al    pa    pa
 co      no    pe    pe
 to      fo    th    th
 th      go    th    th
 ai      to    th    th
 of      co    ti    ti
 th      to    to    to
 pa      is    to    to
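The pass structure of this trace can be reproduced with a short LSD sort sketch (the book's version is Java; this Python translation uses the same counting-sort passes, and the names are mine):

```python
def lsd_sort(keys, w):
    """LSD string sort: one stable key-indexed counting pass per character
    position, from the last position (d = w-1) back to the first (d = 0)."""
    R = 256  # extended ASCII alphabet size
    keys = list(keys)
    for d in range(w - 1, -1, -1):
        count = [0] * (R + 1)
        for key in keys:                  # count frequencies
            count[ord(key[d]) + 1] += 1
        for r in range(R):                # transform counts to indices
            count[r + 1] += count[r]
        aux = [None] * len(keys)
        for key in keys:                  # distribute, preserving order
            aux[count[ord(key[d])]] = key
            count[ord(key[d])] += 1
        keys = aux                        # copy back for the next pass
    return keys
```

Running it on the exercise keys yields the sorted output column of the trace.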
[]
{ "number": "5.1.2", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Exercise" }
Give a trace for MSD string sort for the keys no is th ti fo al go pe to co to th ai of th pa
5.1.3 According to ASC II values, indices of chars 'a' through 'z' start at 97. But this trace indices follow the book convention of 'a' starting at 0. Trace for MSD string sort (same model as used in the book): Top level of sort(array, 0, 15, 0): input 0 no 1 is 2 th 3 ti 4 fo 5 al 6 go 7 pe 8 to 9 co 10 to 11 th 12 ai 13 of 14 th 15 pa d=0 Count frequencies 0 0 1 0 2 a 2 3 b 0 4 c 1 5 d 0 6 e 0 7 f 1 8 g 1 9 h 0 10 i 1 11 j 0 12 k 0 13 l 0 14 m 0 15 n 1 16 o 1 17 p 2 18 q 0 19 r 0 20 s 0 21 t 6 22 u 0 23 v 0 24 x 0 25 w 0 26 y 0 27 z 0 Transform counts to indices 0 0 1 0 2 a 2 3 b 2 4 c 3 5 d 3 6 e 3 7 f 4 8 g 5 9 h 5 10 i 6 11 j 6 12 k 6 13 l 6 14 m 6 15 n 7 16 o 8 17 p 10 18 q 10 19 r 10 20 s 10 21 t 16 22 u 16 23 v 16 24 x 16 25 w 16 26 y 16 27 z 16 Distribute and copy back 0 al 1 ai 2 co 3 fo 4 go 5 is 6 no 7 of 8 pe 9 pa 10 th 11 ti 12 to 13 to 14 th 15 th Indices at completion of distribute phase 0 0 1 2 2 a 2 3 b 3 4 c 3 5 d 3 6 e 4 7 f 5 8 g 5 9 h 6 10 i 6 11 j 6 12 k 6 13 l 6 14 m 7 15 n 8 16 o 10 17 p 10 18 q 10 19 r 10 20 s 16 21 t 16 22 u 16 23 v 16 24 x 16 25 w 16 26 y 16 27 z 16 Recursively sort subarrays sort(a, 0, 1, 1); sort(a, 2, 1, 1); sort(a, 2, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 3, 1); sort(a, 4, 4, 1); sort(a, 5, 4, 1); sort(a, 5, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 6, 1); sort(a, 7, 7, 1); sort(a, 8, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); Sorted result 0 ai 1 al 2 co 3 fo 4 go 5 is 6 no 7 of 8 pa 9 pe 10 th 11 th 12 th 13 ti 14 to 15 to Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted) input __ __ output no al ai ai ai ai is ai al al al al th co -- co co co ti fo co fo fo fo fo go fo go go go al is go is is is go no is no no 
no pe of no of of of to pe of -- pa pa co pa pe pa pe pe to th pa pe -- th th ti th -- th th ai to ti th th th of to to ti th ti th th to to ti to pa th th to to to -- th th to th --
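The recursion pattern in this trace can be reproduced with a compact MSD sort sketch. This Python version (names are mine) uses dictionary buckets instead of the book's count[] arrays, but keeps the book's convention that an exhausted key (charAt = -1) sorts before every real character:

```python
def msd_sort(keys):
    """MSD string sort sketch: bucket by the character at position d,
    then recursively sort each bucket on position d + 1."""
    def char_at(s, d):
        return ord(s[d]) if d < len(s) else -1  # -1 marks end of string

    def sort(a, d):
        if len(a) <= 1:
            return a
        buckets = {}
        for key in a:                       # stable distribution by char d
            buckets.setdefault(char_at(key, d), []).append(key)
        result = []
        for c in sorted(buckets):
            if c == -1:
                result.extend(buckets[c])   # keys that ended here are equal
            else:
                result.extend(sort(buckets[c], d + 1))
        return result

    return sort(list(keys), 0)
```

It handles variable-length keys too, e.g. "sea" sorts before "sells" and "she".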
[]
{ "number": "5.1.3", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Exercise" }
Give a trace for 3-way string quicksort for the keys no is th ti fo al go pe to co to th ai of th pa
5.1.4 Trace for 3-way string quicksort (same model as used in the book): -- -- -- 0 no is ai ai ai ai 1 is ai co al -- al 2 th co fo -- al co 3 ti fo al fo -- fo 4 fo al go go co go 5 al go -- co -- is 6 go -- is -- fo no 7 pe no -- is -- of 8 to -- no -- go pa 9 co to -- no -- pe 10 to pe pe -- is th 11 th to of of -- th 12 ai th pa -- no th 13 of ti -- pe -- ti 14 th of th pa of to 15 pa th ti -- -- to pa to th pa th th th -- to th pe th -- -- -- to th to th ti th -- ti -- to to --
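The partitioning stages of this trace follow from the 3-way scheme: split on the character at position d into less/equal/greater, and advance to d + 1 only inside the equal partition. A Python sketch (names are mine; list-comprehension partitioning instead of the book's in-place exchanges):

```python
def three_way_string_quicksort(keys):
    """3-way string quicksort sketch: partition on the character at
    position d and recurse on d + 1 only for the equal partition."""
    def char_at(s, d):
        return ord(s[d]) if d < len(s) else -1  # -1 marks end of string

    def sort(a, d):
        if len(a) <= 1:
            return a
        pivot = char_at(a[0], d)
        lt = [s for s in a if char_at(s, d) < pivot]
        eq = [s for s in a if char_at(s, d) == pivot]
        gt = [s for s in a if char_at(s, d) > pivot]
        middle = eq if pivot == -1 else sort(eq, d + 1)  # exhausted keys are equal
        return sort(lt, d) + middle + sort(gt, d)

    return sort(list(keys), 0)
```

Running it on the exercise keys yields the sorted right-hand column of the trace.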
[]
{ "number": "5.1.4", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Exercise" }
Give a trace for MSD string sort for the keys now is the time for all good people to come to the aid of
5.1.5 According to ASC II values, indices of chars 'a' through 'z' start at 97. But this trace indices follow the book convention of 'a' starting at 0. Trace for MSD string sort (same model as used in the book): Top level of sort(array, 0, 13, 0): input 0 now 1 is 2 the 3 time 4 for 5 all 6 good 7 people 8 to 9 come 10 to 11 the 12 aid 13 of d=0 Count frequencies 0 0 1 0 2 a 2 3 b 0 4 c 1 5 d 0 6 e 0 7 f 1 8 g 1 9 h 0 10 i 1 11 j 0 12 k 0 13 l 0 14 m 0 15 n 1 16 o 1 17 p 1 18 q 0 19 r 0 20 s 0 21 t 5 22 u 0 23 v 0 24 x 0 25 w 0 26 y 0 27 z 0 Transform counts to indices 0 0 1 0 2 a 2 3 b 2 4 c 3 5 d 3 6 e 3 7 f 4 8 g 5 9 h 5 10 i 6 11 j 6 12 k 6 13 l 6 14 m 6 15 n 7 16 o 8 17 p 9 18 q 9 19 r 9 20 s 9 21 t 14 22 u 14 23 v 14 24 x 14 25 w 14 26 y 14 27 z 14 Distribute and copy back 0 all 1 aid 2 come 3 for 4 good 5 is 6 now 7 of 8 people 9 the 10 time 11 to 12 to 13 the Indices at completion of distribute phase 0 0 1 2 2 a 2 3 b 3 4 c 3 5 d 3 6 e 4 7 f 5 8 g 5 9 h 6 10 i 6 11 j 6 12 k 6 13 l 6 14 m 7 15 n 8 16 o 9 17 p 9 18 q 9 19 r 9 20 s 14 21 t 14 22 u 14 23 v 14 24 x 14 25 w 14 26 y 14 27 z 14 Recursively sort subarrays sort(a, 0, 1, 1); sort(a, 2, 1, 1); sort(a, 2, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 3, 1); sort(a, 4, 4, 1); sort(a, 5, 4, 1); sort(a, 5, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 6, 1); sort(a, 7, 7, 1); sort(a, 8, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 13, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); Sorted result 0 aid 1 all 2 come 3 for 4 good 5 is 6 now 7 of 8 people 9 the 10 the 11 time 12 to 13 to Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted) input ____ ___ output now all aid aid aid aid is aid all all all all the come --- come come come time for come for for for for good for good 
good good all is good is is is good now is now now now people of now of of of to people of people people people come the people --- --- the to time the the the the the to time the the time aid to to time --- to of the to to time to ---- the to to --- to
[]
{ "number": "5.1.5", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Exercise" }
Give a trace for 3-way string quicksort for the keys now is the time for all good people to come to the aid of
5.1.6 Trace for 3-way string quicksort (same model as used in the book): ---- --- --- --- 0 now is aid aid aid aid aid 1 is aid come all --- --- all 2 the come for --- all all come 3 time for all for --- --- for 4 for all good good come come good 5 all good ---- come ---- ---- is 6 good --- is ---- for for now 7 people now --- is ---- ---- of 8 to --- now --- good good people 9 come to --- now ---- ---- the 10 to people people --- is is the 11 the to of of --- --- time 12 aid the --- ------ now now to 13 of time to people --- --- to of the ------ of of the time the ------ ------ to time people people the the ------ ------ --- ---- the the to the the to ---- ---- ---- time time ---- ---- to to to to -- --
[]
{ "number": "5.1.6", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Exercise" }
Give the number of characters examined by MSD string sort and 3-way string quicksort for a file of N keys a, aa, aaa, aaaa, aaaaa, . . .
5.1.8 Both MSD string sort and 3-way string quicksort examine all characters in the N keys. That number is equal to 1 + 2 + ... + N = (N^2 + N) / 2 characters. MSD string sort, however, generates (R - 1) * N empty subarrays (an empty subarray for all digits in R other than 'a', in every pass) while 3-way string quicksort generates 2N empty subarrays (empty subarrays for digits smaller than 'a' and for digits higher than 'a', or empty subarrays for digits smaller than '-1' and for digits equal to '-1', in every pass). MSD string sort trace (no cutoff for small subarrays, subarrays of size 0 and 1 omitted): input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ---- 3-way string quicksort trace: input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ----
[]
{ "number": "5.1.8", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Exercise" }
What is the total number of characters examined by 3-way string quicksort when sorting N fixed-length strings (all of length W), in the worst case?
5.1.10 The total number of characters examined by 3-way string quicksort when sorting N fixed-length strings (all of length W) in the worst case is O(N * W * R). This can be seen with a recurrence relation T(W).

The base case T(1) is when all the strings have length 1. An example with R = 3 is { "a", "b", "c" }. In the worst case they are in reverse order. For example: { "c", "b", "a" }. In this case we only remove one string from the list in each pass. If we consider N = R^W (in this case, W = 1), the number of comparisons is equal to:

Characters examined = Sum[i=0..R] i
Characters examined = R * (R + 1) / 2

To build the worst case for strings of length 2 (T(2)), we take each string from T(1) and append it to the end of each character in R. So for single character strings "a", "b", "c", with R = 3, the two character list is: "aa", "ab", "ac", "ba", "bb", "bc", "ca", "cb", "cc". The list can then be split into R groups: one for each character in R that is a prefix to every string of length W - 1.

During the partitioning phase all strings that start with "a" will be in the same partition and the algorithm will do the same process as in T(1) because removing the first character 'a' will lead to the same 1-length strings { "c", "b", "a" } as before. The same thing happens for strings starting with "b" and "c". So, for R = 3, the algorithm will check 3 * R + 2 * R + R characters in the first position of the strings (which is 3 + 2 + 1 characters times R groups). Then it will check the second characters in the strings in each of the R groups.

For T(W), where W > 2, the list will then again be split into R groups: one for each character in R that is a prefix to every string of length W - 2. Quicksort will then remove R strings from the list in each partition. It will then check R * T(W - 1) more characters for each of those groups.
This gives the recurrence T(W) = (R^(W - 1) * Sum[i=0..R] (R - i)) + R * T(W - 1), which simplifies to:

T(W) = (R^(W + 1) + R^W) / 2 + R * T(W - 1)

Solving the recurrence gives us:

T(W) = (W * R^W * (R + 1)) / 2

Substituting N = R^W:

T(W) = (W * N * (R + 1)) / 2

Which is O(N * W * R).

Thanks to dragon-dreamer (https://github.com/dragon-dreamer) for finding a more accurate worst case. https://github.com/reneargento/algorithms-sedgewick-wayne/issues/153
Thanks to GenevaS (https://github.com/GenevaS) for finding a more accurate worst case. https://github.com/reneargento/algorithms-sedgewick-wayne/issues/245
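A quick sanity check that the closed form really does satisfy the recurrence (hypothetical helper class; small R and W only, since the values grow exponentially):

```java
public class WorstCaseRecurrence {

    // the recurrence from the text: T(W) = (R^(W+1) + R^W) / 2 + R * T(W-1),
    // with base case T(1) = R * (R + 1) / 2
    public static long recurrence(int R, int W) {
        if (W == 1) return (long) R * (R + 1) / 2;
        long rPowW = (long) Math.pow(R, W);
        return (rPowW * R + rPowW) / 2 + R * recurrence(R, W - 1);
    }

    // the closed form derived above: T(W) = W * R^W * (R + 1) / 2
    public static long closedForm(int R, int W) {
        return (long) W * (long) Math.pow(R, W) * (R + 1) / 2;
    }
}
```

For example, with R = 3 the two agree for every W (T(1) = 6, T(2) = 36, and so on).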
[]
{ "number": "5.1.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Exercise" }
Hybrid sort. Investigate the idea of using standard MSD string sort for large arrays, in order to get the advantage of multiway partitioning, and 3-way string quicksort for smaller arrays, in order to avoid the negative effects of large numbers of empty bins.
5.1.13 - Hybrid sort

Idea: using standard MSD string sort for large arrays, in order to get the advantage of multiway partitioning, and 3-way string quicksort for smaller arrays, in order to avoid the negative effects of large numbers of empty bins.

This idea will work well for random strings because, in general, the higher the number of keys to be sorted, the higher the number of non-empty subarrays generated on each pass of MSD string sort. Such a scenario would work well due to the advantage of having multiway partitioning. However, MSD string sort will still generate a large number of empty subarrays if there is a large number of equal keys (or a large number of keys with long common prefixes).

3-way string quicksort will avoid the negative effects of large numbers of empty bins not only for smaller arrays, but also for large arrays, while also having the benefit of using less space than MSD string sort, since it does not require space for frequency counts or for an auxiliary array. On the other hand, it involves more data movement than MSD string sort when the number of nonempty subarrays is large, because it has to do a series of 3-way partitions to get the effect of the multiway partition. This would not be a problem in the hybrid sort if there were many equal keys in smaller arrays, since 3-way string quicksort would be the algorithm of choice in such a situation.

Overall, hybrid sort would be a good choice for random strings. However, a version of hybrid sort that chooses between MSD string sort and 3-way string quicksort based on the percentage of equal keys (choosing MSD string sort if there is a low percentage of equal keys and choosing 3-way string quicksort if there is a high percentage of equal keys) would be more effective than a version that makes the choice based on the number of keys.
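A minimal sketch of this hybrid (class name, cutoff value, and the extended-ASCII alphabet are my own assumptions): large subarrays get one MSD key-indexed counting pass, small ones fall through to 3-way string quicksort.

```java
public class HybridStringSort {
    private static final int R = 256;      // extended ASCII alphabet
    private static final int CUTOFF = 15;  // below this size, use 3-way string quicksort

    private static int charAt(String s, int d) {
        return d < s.length() ? s.charAt(d) : -1;
    }

    public static void sort(String[] a) {
        sort(a, new String[a.length], 0, a.length - 1, 0);
    }

    private static void sort(String[] a, String[] aux, int lo, int hi, int d) {
        if (hi <= lo) return;
        if (hi - lo < CUTOFF) {            // small subarray: avoid empty bins
            quick3way(a, lo, hi, d);
            return;
        }
        // large subarray: one multiway (key-indexed counting) partitioning pass
        int[] count = new int[R + 2];
        for (int i = lo; i <= hi; i++) count[charAt(a[i], d) + 2]++;
        for (int r = 0; r < R + 1; r++) count[r + 1] += count[r];
        for (int i = lo; i <= hi; i++) aux[count[charAt(a[i], d) + 1]++] = a[i];
        for (int i = lo; i <= hi; i++) a[i] = aux[i - lo];
        for (int r = 0; r < R; r++)
            sort(a, aux, lo + count[r], lo + count[r + 1] - 1, d + 1);
    }

    private static void quick3way(String[] a, int lo, int hi, int d) {
        if (hi <= lo) return;
        int lt = lo, gt = hi, i = lo + 1;
        int pivot = charAt(a[lo], d);
        while (i <= gt) {
            int t = charAt(a[i], d);
            if (t < pivot) exchange(a, lt++, i++);
            else if (t > pivot) exchange(a, i, gt--);
            else i++;
        }
        quick3way(a, lo, lt - 1, d);
        if (pivot >= 0) quick3way(a, lt, gt, d + 1);
        quick3way(a, gt + 1, hi, d);
    }

    private static void exchange(String[] a, int i, int j) {
        String temp = a[i]; a[i] = a[j]; a[j] = temp;
    }
}
```

As discussed above, switching on a measured percentage of equal keys instead of the subarray size would likely be even more effective.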
[]
{ "number": "5.1.13", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Creative Problem" }
In-place key-indexed counting. Develop a version of key-indexed counting that uses only a constant amount of extra space. Prove that your version is stable or provide a counterexample.
5.1.17 - In-place key-indexed counting

LSD and MSD sorts that use only a constant amount of extra space are not stable.

Counterexample for LSD sort:

The array
["4PGC938", "2IYE230", "3CIO720", "1ICK750", "1OHV845", "4JZY524", "1ICK750", "3CIO720", "1OHV845", "1OHV845", "2RLA629", "2RLA629", "3ATW723"]
after being sorted by in-place LSD becomes:
["1OHV845", "1OHV845", "1OHV845", "1ICK750", "1ICK750", "2RLA629", "2IYE230", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4PGC938", "4JZY524"]
If it were sorted by non-in-place LSD the output would be:
["1ICK750", "1ICK750", "1OHV845", "1OHV845", "1OHV845", "2IYE230", "2RLA629", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4JZY524", "4PGC938"]

Counterexample for both LSD and MSD sorts:

The array ["CAA" (index 0), "ABB" (index 1), "ABB" (index 2)] after being sorted by either in-place LSD or in-place MSD becomes:
["ABB" (original index 2), "ABB" (original index 1), "CAA" (original index 0)]
If it were sorted by non-in-place LSD or MSD the output would be:
["ABB" (original index 1), "ABB" (original index 2), "CAA" (original index 0)]
[]
{ "number": "5.1.17", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Creative Problem" }
Timings. Compare the running times of MSD string sort and 3-way string quicksort, using various key generators. For fixed-length keys, include LSD string sort.
5.1.22 - Timings

Running 10 experiments with 1000000 strings for random decimal keys (with fixed-length of 10 characters), random CA license plates, random fixed-length words (with fixed-length of 10 characters) and random variable length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type    | Sort type               | Average time spent
Decimal keys          | Least-Significant-Digit | 2.30
Decimal keys          | Most-Significant-Digit  | 0.45
Decimal keys          | 3-way string quicksort  | 0.32
CA license plates     | Least-Significant-Digit | 1.48
CA license plates     | Most-Significant-Digit  | 0.41
CA license plates     | 3-way string quicksort  | 0.33
Fixed length words    | Least-Significant-Digit | 2.52
Fixed length words    | Most-Significant-Digit  | 0.28
Fixed length words    | 3-way string quicksort  | 0.35
Variable length items | Most-Significant-Digit  | 1.80
Variable length items | 3-way string quicksort  | 0.55

The experiment results show that for all random string types, LSD sort had the worst results. For random decimal keys, random CA license plates and random variable length items 3-way string quicksort had the best results. For random fixed-length words, MSD had the best running time.

Having to always scan all characters in all keys may explain why LSD sort had the slowest running times when sorting all random string types. 3-way string quicksort may have had good results because it does not create a high number of empty subarrays, as MSD sort does, and because it can handle well keys with long common prefixes (which are likely to happen in random decimal keys, random CA license plates and random variable length items). Random fixed-length words are less likely to have long common prefixes (because all their characters are in the range [40, 125]), which may explain why MSD sort had better results than both LSD sort and 3-way string quicksort during their sort.
[]
{ "number": "5.1.22", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Experiment" }
Array accesses. Compare the number of array accesses used by MSD string sort and 3-way string sort, using various key generators. For fixed-length keys, include LSD string sort.
5.1.23 - Array accesses

Running 10 experiments with 1000000 strings for random decimal keys (with fixed-length of 10 characters), random CA license plates, random fixed-length words (with fixed-length of 10 characters) and random variable length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type    | Sort type               | Number of array accesses
Decimal keys          | Least-Significant-Digit | 40000000
Decimal keys          | Most-Significant-Digit  | 35443947
Decimal keys          | 3-way string quicksort  | 78124405
CA license plates     | Least-Significant-Digit | 28000000
CA license plates     | Most-Significant-Digit  | 25703588
CA license plates     | 3-way string quicksort  | 82889002
Fixed length words    | Least-Significant-Digit | 40000000
Fixed length words    | Most-Significant-Digit  | 14841196
Fixed length words    | 3-way string quicksort  | 95310075
Variable length items | Most-Significant-Digit  | 72121457
Variable length items | 3-way string quicksort  | 97523400

The experiment results show that for all random string types, 3-way string quicksort accessed the array more times than LSD and MSD sort; LSD sort accessed the array more times than MSD sort; and MSD sort had the lowest number of array accesses.

A possible explanation for these results is the fact that 3-way string quicksort accesses the array 4 times for each exchange operation, which leads to more array accesses than both LSD and MSD sorts, which do not make in-place exchanges. LSD sort will always access the array 4 * N * W times, where N is the number of strings and W is the length of the strings (which is equivalent to 4 array accesses for each character in the keys), and MSD sort will only access the array while the strings have common prefixes, which explains why MSD sort has the lowest number of array accesses of all sort types.
[]
{ "number": "5.1.23", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Experiment" }
Rightmost character accessed. Compare the position of the rightmost character accessed for MSD string sort and 3-way string quicksort, using various key generators.
5.1.24 - Rightmost character accessed

Running 1 experiment with 1000000 strings for random decimal keys (with fixed-length of 10 characters), random CA license plates, random fixed-length words (with fixed-length of 10 characters) and random variable length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type    | Sort type               | Rightmost character accessed
Decimal keys          | Most-Significant-Digit  | 9
Decimal keys          | 3-way string quicksort  | 9
CA license plates     | Most-Significant-Digit  | 6
CA license plates     | 3-way string quicksort  | 6
Fixed length words    | Most-Significant-Digit  | 5
Fixed length words    | 3-way string quicksort  | 5
Variable length items | Most-Significant-Digit  | 20
Variable length items | 3-way string quicksort  | 20

In all experiments the rightmost character position accessed in MSD sort and in 3-way string quicksort was the same, which shows that both algorithms scan the same characters.
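One way to run this measurement is to instrument charAt() in the sort and record the deepest character position ever examined. A sketch for 3-way string quicksort (class name and structure are my own; the same counter can be dropped into MSD sort):

```java
public class RightmostCharTracker {
    private int rightmost = -1; // deepest character position examined so far

    // every character access goes through here, so the maximum d is recorded
    private int charAt(String s, int d) {
        if (d > rightmost) rightmost = d;
        return d < s.length() ? s.charAt(d) : -1;
    }

    // sorts a[] with 3-way string quicksort and reports the rightmost position accessed
    public int rightmostAccessed(String[] a) {
        rightmost = -1;
        sort(a, 0, a.length - 1, 0);
        return rightmost;
    }

    private void sort(String[] a, int lo, int hi, int d) {
        if (hi <= lo) return;
        int lt = lo, gt = hi, i = lo + 1;
        int pivot = charAt(a[lo], d);
        while (i <= gt) {
            int t = charAt(a[i], d);
            if (t < pivot) exchange(a, lt++, i++);
            else if (t > pivot) exchange(a, i, gt--);
            else i++;
        }
        sort(a, lo, lt - 1, d);
        if (pivot >= 0) sort(a, lt, gt, d + 1);
        sort(a, gt + 1, hi, d);
    }

    private void exchange(String[] a, int i, int j) {
        String temp = a[i]; a[i] = a[j]; a[j] = temp;
    }
}
```

For the keys "aab" and "aaa", for example, the sort must look at position 2 to break the tie, so the tracker reports 2.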
[]
{ "number": "5.1.24", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.1, "section_title": "String Sorts", "type": "Experiment" }
Draw the TST that results when the keys no is th ti fo al go pe to co to th ai of th pa are inserted in that order into an initially empty TST.
5.1.2 Trace for LSD string sort (same model as used in the book):

input   d=1   d=0   output
no      pa    ai    ai
is      pe    al    al
th      of    co    co
ti      th    fo    fo
fo      th    go    go
al      th    is    is
go      ti    no    no
pe      ai    of    of
to      al    pa    pa
co      no    pe    pe
to      fo    th    th
th      go    th    th
ai      to    th    th
of      co    ti    ti
th      to    to    to
pa      is    to    to
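The trace above is plain LSD string sort; for reference, a sketch in the style of the book's Algorithm 5.1 (fixed-length keys, extended ASCII alphabet assumed):

```java
public class LSD {
    private static final int R = 256; // extended ASCII alphabet

    // sorts the fixed-length strings in a[] on their leading w characters
    public static void sort(String[] a, int w) {
        int n = a.length;
        String[] aux = new String[n];
        for (int d = w - 1; d >= 0; d--) {
            // key-indexed counting on character d; each pass is stable,
            // which is what makes right-to-left passes correct
            int[] count = new int[R + 1];
            for (int i = 0; i < n; i++) count[a[i].charAt(d) + 1]++;
            for (int r = 0; r < R; r++) count[r + 1] += count[r];
            for (int i = 0; i < n; i++) aux[count[a[i].charAt(d)]++] = a[i];
            for (int i = 0; i < n; i++) a[i] = aux[i];
        }
    }
}
```

Running it on the 16 two-letter keys from the trace reproduces the output column ai al co ... to to.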
[]
{ "number": "5.2.2", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Exercise" }
Draw the R-way trie that results when the keys now is the time for all good people to come to the aid of are inserted in that order into an initially empty trie (do not draw null links).
5.1.3 According to ASCII values, indices of chars 'a' through 'z' start at 97, but the indices in this trace follow the book convention of 'a' starting at 0.

Trace for MSD string sort (same model as used in the book):

Top level of sort(array, 0, 15, 0):

input
 0 no    4 fo     8 to    12 ai
 1 is    5 al     9 co    13 of
 2 th    6 go    10 to    14 th
 3 ti    7 pe    11 th    15 pa

d=0

Count frequencies
 0 - 0    1 - 0    2 a 2    3 b 0    4 c 1    5 d 0    6 e 0
 7 f 1    8 g 1    9 h 0   10 i 1   11 j 0   12 k 0   13 l 0
14 m 0   15 n 1   16 o 1   17 p 2   18 q 0   19 r 0   20 s 0
21 t 6   22 u 0   23 v 0   24 w 0   25 x 0   26 y 0   27 z 0

Transform counts to indices
 0 - 0    1 - 0    2 a 2    3 b 2    4 c 3    5 d 3    6 e 3
 7 f 4    8 g 5    9 h 5   10 i 6   11 j 6   12 k 6   13 l 6
14 m 6   15 n 7   16 o 8   17 p 10  18 q 10  19 r 10  20 s 10
21 t 16  22 u 16  23 v 16  24 w 16  25 x 16  26 y 16  27 z 16

Distribute and copy back
 0 al    4 go     8 pe    12 to
 1 ai    5 is     9 pa    13 to
 2 co    6 no    10 th    14 th
 3 fo    7 of    11 ti    15 th

Indices at completion of distribute phase
 0 - 0    1 - 2    2 a 2    3 b 3    4 c 3    5 d 3    6 e 4
 7 f 5    8 g 5    9 h 6   10 i 6   11 j 6   12 k 6   13 l 6
14 m 7   15 n 8   16 o 10  17 p 10  18 q 10  19 r 10  20 s 16
21 t 16  22 u 16  23 v 16  24 w 16  25 x 16  26 y 16  27 z 16

Recursively sort subarrays
sort(a, 0, 1, 1);   sort(a, 2, 1, 1);   sort(a, 2, 2, 1);   sort(a, 3, 2, 1);
sort(a, 3, 2, 1);   sort(a, 3, 3, 1);   sort(a, 4, 4, 1);   sort(a, 5, 4, 1);
sort(a, 5, 5, 1);   sort(a, 6, 5, 1);   sort(a, 6, 5, 1);   sort(a, 6, 5, 1);
sort(a, 6, 5, 1);   sort(a, 6, 6, 1);   sort(a, 7, 7, 1);   sort(a, 8, 9, 1);
sort(a, 10, 9, 1);  sort(a, 10, 9, 1);  sort(a, 10, 9, 1);  sort(a, 10, 15, 1);
sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1);
sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1);

Sorted result
 0 ai    4 go     8 pa    12 th
 1 al    5 is     9 pe    13 ti
 2 co    6 no    10 th    14 to
 3 fo    7 of    11 th    15 to

Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted):

input                     output
no   al   ai   ai   ai    ai
is   ai   al   al   al    al
th   co   --   co   co    co
ti   fo   co   fo   fo    fo
fo   go   fo   go   go    go
al   is   go   is   is    is
go   no   is   no   no    no
pe   of   no   of   of    of
to   pe   of   --   pa    pa
co   pa   pe   pa   pe    pe
to   th   pa   pe   --    th
th   ti   th   --   th    th
ai   to   ti   th   th    th
of   to   to   ti   th    ti
th   th   to   to   ti    to
pa   th   th   to   to    to
--   th   th   to   th    --
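The trace corresponds to plain MSD string sort; for reference, a sketch in the style of the book's Algorithm 5.2 (no small-subarray cutoff, extended ASCII alphabet assumed):

```java
public class MSD {
    private static final int R = 256; // extended ASCII alphabet

    // returns the d-th character as an int, or -1 past the end of the string
    private static int charAt(String s, int d) {
        return d < s.length() ? s.charAt(d) : -1;
    }

    public static void sort(String[] a) {
        String[] aux = new String[a.length];
        sort(a, aux, 0, a.length - 1, 0);
    }

    private static void sort(String[] a, String[] aux, int lo, int hi, int d) {
        if (hi <= lo) return;
        int[] count = new int[R + 2];                             // count frequencies (-1 maps to slot 1)
        for (int i = lo; i <= hi; i++) count[charAt(a[i], d) + 2]++;
        for (int r = 0; r < R + 1; r++) count[r + 1] += count[r]; // transform counts to indices
        for (int i = lo; i <= hi; i++) aux[count[charAt(a[i], d) + 1]++] = a[i];
        for (int i = lo; i <= hi; i++) a[i] = aux[i - lo];        // copy back
        for (int r = 0; r < R; r++)                               // sort each character bucket recursively
            sort(a, aux, lo + count[r], lo + count[r + 1] - 1, d + 1);
    }
}
```

Running it on the 16 two-letter keys above reproduces the sorted result ai al co ... to to.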
[]
{ "number": "5.2.3", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Exercise" }
Draw the TST that results when the keys now is the time for all good people to come to the aid of are inserted in that order into an initially empty TST.
5.1.4 Trace for 3-way string quicksort (same model as used in the book):

     input                         output
 0   no    is    ai    ai    ai    ai
 1   is    ai    co    al    --    al
 2   th    co    fo    --    al    co
 3   ti    fo    al    fo    --    fo
 4   fo    al    go    go    co    go
 5   al    go    --    co    --    is
 6   go    --    is    --    fo    no
 7   pe    no    --    is    --    of
 8   to    --    no    --    go    pa
 9   co    to    --    no    --    pe
10   to    pe    pe    --    is    th
11   th    to    of    of    --    th
12   ai    th    pa    --    no    th
13   of    ti    --    pe    --    ti
14   th    of    th    pa    of    to
15   pa    th    ti    --    --    to

(remaining column entries; column alignment lost in extraction:)
pa to th pa th th
th -- to th pe th
-- -- -- to th to
th ti th -- ti --
to to --
[]
{ "number": "5.2.4", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Exercise" }
Develop nonrecursive versions of TrieST and TST.
5.1.5 According to ASCII values, indices of chars 'a' through 'z' start at 97, but the indices in this trace follow the book convention of 'a' starting at 0.

Trace for MSD string sort (same model as used in the book):

Top level of sort(array, 0, 13, 0):

input
 0 now     7 people
 1 is      8 to
 2 the     9 come
 3 time   10 to
 4 for    11 the
 5 all    12 aid
 6 good   13 of

d=0

Count frequencies
 0 - 0    1 - 0    2 a 2    3 b 0    4 c 1    5 d 0    6 e 0
 7 f 1    8 g 1    9 h 0   10 i 1   11 j 0   12 k 0   13 l 0
14 m 0   15 n 1   16 o 1   17 p 1   18 q 0   19 r 0   20 s 0
21 t 5   22 u 0   23 v 0   24 w 0   25 x 0   26 y 0   27 z 0

Transform counts to indices
 0 - 0    1 - 0    2 a 2    3 b 2    4 c 3    5 d 3    6 e 3
 7 f 4    8 g 5    9 h 5   10 i 6   11 j 6   12 k 6   13 l 6
14 m 6   15 n 7   16 o 8   17 p 9   18 q 9   19 r 9   20 s 9
21 t 14  22 u 14  23 v 14  24 w 14  25 x 14  26 y 14  27 z 14

Distribute and copy back
 0 all     7 of
 1 aid     8 people
 2 come    9 the
 3 for    10 time
 4 good   11 to
 5 is     12 to
 6 now    13 the

Indices at completion of distribute phase
 0 - 0    1 - 2    2 a 2    3 b 3    4 c 3    5 d 3    6 e 4
 7 f 5    8 g 5    9 h 6   10 i 6   11 j 6   12 k 6   13 l 6
14 m 7   15 n 8   16 o 9   17 p 9   18 q 9   19 r 9   20 s 14
21 t 14  22 u 14  23 v 14  24 w 14  25 x 14  26 y 14  27 z 14

Recursively sort subarrays
sort(a, 0, 1, 1);   sort(a, 2, 1, 1);   sort(a, 2, 2, 1);   sort(a, 3, 2, 1);
sort(a, 3, 2, 1);   sort(a, 3, 3, 1);   sort(a, 4, 4, 1);   sort(a, 5, 4, 1);
sort(a, 5, 5, 1);   sort(a, 6, 5, 1);   sort(a, 6, 5, 1);   sort(a, 6, 5, 1);
sort(a, 6, 5, 1);   sort(a, 6, 6, 1);   sort(a, 7, 7, 1);   sort(a, 8, 8, 1);
sort(a, 9, 8, 1);   sort(a, 9, 8, 1);   sort(a, 9, 8, 1);   sort(a, 9, 13, 1);
sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1);
sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1);

Sorted result
 0 aid     7 of
 1 all     8 people
 2 come    9 the
 3 for    10 the
 4 good   11 time
 5 is     12 to
 6 now    13 to

Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted):

input                                        output
now     all     aid     aid     aid     aid
is      aid     all     all     all     all
the     come    ---     come    come    come
time    for     come    for     for     for
for     good    for     good    good    good
all     is      good    is      is      is
good    now     is      now     now     now
people  of      now     of      of      of
to      people  of      people  people  people
come    the     people  ---     ---     the
to      time    the     the     the     the
the     to      time    the     the     time
aid     to      to      time    ---     to
of      the     to      to      time    to
----    the     to      to      ---     to
[]
{ "number": "5.2.5", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Exercise" }
Implement the following API, for a StringSET data type:

public class StringSET
           StringSET()             create a string set
      void add(String key)         put key into the set
      void delete(String key)      remove key from the set
   boolean contains(String key)    is key in the set?
   boolean isEmpty()               is the set empty?
       int size()                  number of keys in the set
    String toString()              string representation of the set
5.1.6 Trace for 3-way string quicksort (same model as used in the book):

     input                                             output
 0   now      is       aid      aid     aid    aid     aid
 1   is       aid      come     all     ---    ---     all
 2   the      come     for      ---     all    all     come
 3   time     for      all      for     ---    ---     for
 4   for      all      good     good    come   come    good
 5   all      good     ----     come    ----   ----    is
 6   good     ---      is       ----    for    for     now
 7   people   now      ---      is      ----   ----    of
 8   to       ---      now      ---     good   good    people
 9   come     to       ---      now     ----   ----    the
10   to       people   people   ---     is     is      the
11   the      to       of       of      ---    ---     time
12   aid      the      ---      ------  now    now     to
13   of       time     to       people  ---    ---     to

(remaining column entries; column alignment lost in extraction:)
of the ------ of of the time
the ------ ------ to time people people
the the ------ ------ --- ---- the
the to the the to ---- ----
---- time time ---- ---- to to
to to -- --
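A sketch of one way to implement the StringSET API from the exercise, backed by an R-way trie (structure and names beyond the API itself are my own assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class StringSET {
    private static final int R = 256; // extended ASCII alphabet

    private Node root;
    private int size;

    private static class Node {
        boolean isKey;                // does a key end at this node?
        Node[] next = new Node[R];
    }

    public void add(String key) { root = add(root, key, 0); }

    private Node add(Node x, String key, int d) {
        if (x == null) x = new Node();
        if (d == key.length()) {
            if (!x.isKey) size++;
            x.isKey = true;
        } else {
            char c = key.charAt(d);
            x.next[c] = add(x.next[c], key, d + 1);
        }
        return x;
    }

    public boolean contains(String key) {
        Node x = get(root, key, 0);
        return x != null && x.isKey;
    }

    private Node get(Node x, String key, int d) {
        if (x == null) return null;
        if (d == key.length()) return x;
        return get(x.next[key.charAt(d)], key, d + 1);
    }

    public void delete(String key) {
        if (contains(key)) {
            size--;
            root = delete(root, key, 0);
        }
    }

    private Node delete(Node x, String key, int d) {
        if (x == null) return null;
        if (d == key.length()) x.isKey = false;
        else {
            char c = key.charAt(d);
            x.next[c] = delete(x.next[c], key, d + 1);
        }
        if (x.isKey) return x;
        for (int c = 0; c < R; c++)
            if (x.next[c] != null) return x; // keep internal nodes that still lead to keys
        return null;
    }

    public boolean isEmpty() { return size == 0; }

    public int size() { return size; }

    public String toString() {
        List<String> keys = new ArrayList<>();
        collect(root, new StringBuilder(), keys);
        return String.join(" ", keys); // keys come out in sorted (trie) order
    }

    private void collect(Node x, StringBuilder prefix, List<String> keys) {
        if (x == null) return;
        if (x.isKey) keys.add(prefix.toString());
        for (char c = 0; c < R; c++) {
            prefix.append(c);
            collect(x.next[c], prefix, keys);
            prefix.deleteCharAt(prefix.length() - 1);
        }
    }
}
```

A trie keeps contains() at O(key length) and makes toString() produce the keys in sorted order for free.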
[]
{ "number": "5.2.6", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Exercise" }
Ordered operations for tries. Implement the floor(), ceil(), rank(), and select() (from our standard ordered ST API from Chapter 3) for TrieST.
5.1.8 Both MSD string sort and 3-way string quicksort examine all characters in the N keys. That number is equal to 1 + 2 + ... + N = (N^2 + N) / 2 characters.

MSD string sort, however, generates (R - 1) * N empty subarrays (an empty subarray for all digits in R other than 'a', in every pass) while 3-way string quicksort generates 2N empty subarrays (empty subarrays for digits smaller than 'a' and for digits higher than 'a', or empty subarrays for digits smaller than '-1' and for digits equal to '-1', in every pass).

MSD string sort trace (no cutoff for small subarrays, subarrays of size 0 and 1 omitted; column alignment lost in extraction):
input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa
aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ----

3-way string quicksort trace (column alignment lost in extraction):
input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa
aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ----
[]
{ "number": "5.2.8", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Creative Problem" }
Size. Implement very eager size() (that keeps in each node the number of keys in its subtree) for TrieST and TST.
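A sketch of the very eager size() for a trie: each node carries the number of keys in its subtree, recomputed on the way back up from put() (delete() would maintain the same counters). Class and field names, and the O(R) rescan of children per node on the search path, are my own choices for brevity:

```java
public class SizedTrie {
    private static final int R = 256; // extended ASCII alphabet
    private Node root;

    private static class Node {
        Object value;
        int subtreeSize;              // number of keys at or below this node
        Node[] next = new Node[R];
    }

    public void put(String key, Object value) { root = put(root, key, value, 0); }

    private Node put(Node x, String key, Object value, int d) {
        if (x == null) x = new Node();
        if (d == key.length()) x.value = value;
        else {
            char c = key.charAt(d);
            x.next[c] = put(x.next[c], key, value, d + 1);
        }
        // recompute the subtree size eagerly on the way up
        x.subtreeSize = (x.value != null) ? 1 : 0;
        for (int c = 0; c < R; c++)
            if (x.next[c] != null) x.subtreeSize += x.next[c].subtreeSize;
        return x;
    }

    public int size() { return root == null ? 0 : root.subtreeSize; }
}
```

With the counts maintained this way, size() is O(1); a subtler update (adjusting counters by +1 only when a new key is inserted) would avoid the per-node child scan.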
5.1.10 The total number of characters examined by 3-way string quicksort when sorting N fixed-length strings (all of length W) in the worst case is O(N * W * R). This can be seen with a recurrence relation T(W).

The base case T(1) is when all the strings have length 1. An example with R = 3 is { "a", "b", "c" }. In the worst case they are in reverse order. For example: { "c", "b", "a" }. In this case we only remove one string from the list in each pass. If we consider N = R^W (in this case, W = 1), the number of comparisons is equal to:

Characters examined = Sum[i=0..R] i
Characters examined = R * (R + 1) / 2

To build the worst case for strings of length 2 (T(2)), we take each string from T(1) and append it to the end of each character in R. So for single character strings "a", "b", "c", with R = 3, the two character list is: "aa", "ab", "ac", "ba", "bb", "bc", "ca", "cb", "cc". The list can then be split into R groups: one for each character in R that is a prefix to every string of length W - 1.

During the partitioning phase all strings that start with "a" will be in the same partition and the algorithm will do the same process as in T(1) because removing the first character 'a' will lead to the same 1-length strings { "c", "b", "a" } as before. The same thing happens for strings starting with "b" and "c". So, for R = 3, the algorithm will check 3 * R + 2 * R + R characters in the first position of the strings (which is 3 + 2 + 1 characters times R groups). Then it will check the second characters in the strings in each of the R groups.

For T(W), where W > 2, the list will then again be split into R groups: one for each character in R that is a prefix to every string of length W - 2. Quicksort will then remove R strings from the list in each partition. It will then check R * T(W - 1) more characters for each of those groups.
This gives the recurrence T(W) = (R^(W - 1) * Sum[i=0..R] (R - i)) + R * T(W - 1), which simplifies to:

T(W) = (R^(W + 1) + R^W) / 2 + R * T(W - 1)

Solving the recurrence gives us:

T(W) = (W * R^W * (R + 1)) / 2

Substituting N = R^W:

T(W) = (W * N * (R + 1)) / 2

Which is O(N * W * R).

Thanks to dragon-dreamer (https://github.com/dragon-dreamer) for finding a more accurate worst case. https://github.com/reneargento/algorithms-sedgewick-wayne/issues/153
Thanks to GenevaS (https://github.com/GenevaS) for finding a more accurate worst case. https://github.com/reneargento/algorithms-sedgewick-wayne/issues/245
[]
{ "number": "5.2.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Creative Problem" }
Hybrid TST with R^2-way branching at the root. Add code to TST to do multiway branching at the first two levels, as described in the text.
5.1.13 - Hybrid sort

Idea: using standard MSD string sort for large arrays, in order to get the advantage of multiway partitioning, and 3-way string quicksort for smaller arrays, in order to avoid the negative effects of large numbers of empty bins.

This idea will work well for random strings because, in general, the higher the number of keys to be sorted, the higher the number of non-empty subarrays generated on each pass of MSD string sort. Such a scenario would work well due to the advantage of having multiway partitioning. However, MSD string sort will still generate a large number of empty subarrays if there is a large number of equal keys (or a large number of keys with long common prefixes).

3-way string quicksort will avoid the negative effects of large numbers of empty bins not only for smaller arrays, but also for large arrays, while also having the benefit of using less space than MSD string sort, since it does not require space for frequency counts or for an auxiliary array. On the other hand, it involves more data movement than MSD string sort when the number of nonempty subarrays is large, because it has to do a series of 3-way partitions to get the effect of the multiway partition. This would not be a problem in the hybrid sort if there were many equal keys in smaller arrays, since 3-way string quicksort would be the algorithm of choice in such a situation.

Overall, hybrid sort would be a good choice for random strings. However, a version of hybrid sort that chooses between MSD string sort and 3-way string quicksort based on the percentage of equal keys (choosing MSD string sort if there is a low percentage of equal keys and choosing 3-way string quicksort if there is a high percentage of equal keys) would be more effective than a version that makes the choice based on the number of keys.
[]
{ "number": "5.2.13", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Creative Problem" }
Spell checking. Write a TST client SpellChecker that takes as command-line argument the name of a file containing a dictionary of words in the English language, and then reads a string from standard input and prints out any word that is not in the dictionary. Use a string set.
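A sketch of the SpellChecker client (here java.util.HashSet stands in for the string set the exercise asks for; the dictionary-file format of one word per line is an assumption):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Scanner;
import java.util.Set;

public class SpellChecker {

    // returns every word of text that is missing from the dictionary
    public static List<String> misspelled(Set<String> dictionary, String text) {
        List<String> unknown = new ArrayList<>();
        for (String word : text.toLowerCase(Locale.ROOT).split("[^a-z]+")) {
            if (!word.isEmpty() && !dictionary.contains(word)) unknown.add(word);
        }
        return unknown;
    }

    public static void main(String[] args) throws IOException {
        // dictionary file name comes from the command line; text comes from standard input
        Set<String> dictionary = new HashSet<>(Files.readAllLines(Paths.get(args[0])));
        Scanner scanner = new Scanner(System.in);
        while (scanner.hasNextLine()) {
            for (String word : misspelled(dictionary, scanner.nextLine())) {
                System.out.println(word);
            }
        }
    }
}
```

Swapping the HashSet for a TST-backed set is a drop-in change, since only add-and-contains behavior is used.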
5.1.17 - In-place key-indexed counting

LSD and MSD sorts that use only a constant amount of extra space are not stable.

Counterexample for LSD sort:

The array
["4PGC938", "2IYE230", "3CIO720", "1ICK750", "1OHV845", "4JZY524", "1ICK750", "3CIO720", "1OHV845", "1OHV845", "2RLA629", "2RLA629", "3ATW723"]
after being sorted by in-place LSD becomes:
["1OHV845", "1OHV845", "1OHV845", "1ICK750", "1ICK750", "2RLA629", "2IYE230", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4PGC938", "4JZY524"]
If it were sorted by non-in-place LSD the output would be:
["1ICK750", "1ICK750", "1OHV845", "1OHV845", "1OHV845", "2IYE230", "2RLA629", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4JZY524", "4PGC938"]

Counterexample for both LSD and MSD sorts:

The array ["CAA" (index 0), "ABB" (index 1), "ABB" (index 2)] after being sorted by either in-place LSD or in-place MSD becomes:
["ABB" (original index 2), "ABB" (original index 1), "CAA" (original index 0)]
If it were sorted by non-in-place LSD or MSD the output would be:
["ABB" (original index 1), "ABB" (original index 2), "CAA" (original index 0)]
[]
{ "number": "5.2.17", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Creative Problem" }
Typing monkeys. Suppose that a typing monkey creates random words by appending each of 26 possible letters with probability p to the current word and finishes the word with probability 1 - 26p. Write a program to estimate the frequency distribution of the lengths of words produced. If "abc" is produced more than once, count it only once.
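A sketch of the simulation (class and method names are my own; p < 1/26 is assumed so that words terminate). Each trial keeps typing with probability 26p, picking each letter uniformly, and a HashSet deduplicates words so that each distinct word is counted once:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Random;
import java.util.Set;

public class TypingMonkey {

    // runs the given number of trials and returns, for each word length,
    // how many distinct words of that length were produced
    public static Map<Integer, Integer> lengthDistribution(double p, int trials, long seed) {
        Random random = new Random(seed);
        Set<String> distinctWords = new HashSet<>();
        for (int t = 0; t < trials; t++) {
            StringBuilder word = new StringBuilder();
            while (random.nextDouble() < 26 * p) {              // append a letter with probability 26p
                word.append((char) ('a' + random.nextInt(26))); // each letter with probability p
            }
            distinctWords.add(word.toString());                 // duplicates counted only once
        }
        Map<Integer, Integer> countByLength = new HashMap<>();
        for (String word : distinctWords) {
            countByLength.merge(word.length(), 1, Integer::sum);
        }
        return countByLength;
    }
}
```

With, say, p = 0.02 the word lengths follow a geometric-style distribution, but deduplication flattens it at short lengths (there is only one possible empty word, 26 words of length 1, and so on).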
5.1.22 - Timings

Running 10 experiments with 1000000 strings for random decimal keys (with fixed-length of 10 characters), random CA license plates, random fixed-length words (with fixed-length of 10 characters) and random variable length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type    | Sort type               | Average time spent
Decimal keys          | Least-Significant-Digit | 2.30
Decimal keys          | Most-Significant-Digit  | 0.45
Decimal keys          | 3-way string quicksort  | 0.32
CA license plates     | Least-Significant-Digit | 1.48
CA license plates     | Most-Significant-Digit  | 0.41
CA license plates     | 3-way string quicksort  | 0.33
Fixed length words    | Least-Significant-Digit | 2.52
Fixed length words    | Most-Significant-Digit  | 0.28
Fixed length words    | 3-way string quicksort  | 0.35
Variable length items | Most-Significant-Digit  | 1.80
Variable length items | 3-way string quicksort  | 0.55

The experiment results show that for all random string types, LSD sort had the worst results. For random decimal keys, random CA license plates and random variable length items 3-way string quicksort had the best results. For random fixed-length words, MSD had the best running time.

Having to always scan all characters in all keys may explain why LSD sort had the slowest running times when sorting all random string types. 3-way string quicksort may have had good results because it does not create a high number of empty subarrays, as MSD sort does, and because it can handle well keys with long common prefixes (which are likely to happen in random decimal keys, random CA license plates and random variable length items). Random fixed-length words are less likely to have long common prefixes (because all their characters are in the range [40, 125]), which may explain why MSD sort had better results than both LSD sort and 3-way string quicksort during their sort.
[]
{ "number": "5.2.22", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Creative Problem" }
Duplicates (revisited again). Redo Exercise 3.5.30 using StringSET (see Exercise 5.2.6) instead of HashSET. Compare the running times of the two approaches. Then use Dedup to run the experiments for N = 10^7, 10^8, and 10^9, repeat the experiments for random long values, and discuss the results.
5.1.23 - Array accesses

Running 10 experiments with 1000000 strings for random decimal keys (with fixed-length of 10 characters), random CA license plates, random fixed-length words (with fixed-length of 10 characters) and random variable length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type    | Sort type               | Number of array accesses
Decimal keys          | Least-Significant-Digit | 40000000
Decimal keys          | Most-Significant-Digit  | 35443947
Decimal keys          | 3-way string quicksort  | 78124405
CA license plates     | Least-Significant-Digit | 28000000
CA license plates     | Most-Significant-Digit  | 25703588
CA license plates     | 3-way string quicksort  | 82889002
Fixed length words    | Least-Significant-Digit | 40000000
Fixed length words    | Most-Significant-Digit  | 14841196
Fixed length words    | 3-way string quicksort  | 95310075
Variable length items | Most-Significant-Digit  | 72121457
Variable length items | 3-way string quicksort  | 97523400

The experiment results show that for all random string types, 3-way string quicksort accessed the array more times than LSD and MSD sort; LSD sort accessed the array more times than MSD sort; and MSD sort had the lowest number of array accesses.

A possible explanation for these results is the fact that 3-way string quicksort accesses the array 4 times for each exchange operation, which leads to more array accesses than both LSD and MSD sorts, which do not make in-place exchanges. LSD sort will always access the array 4 * N * W times, where N is the number of strings and W is the length of the strings (which is equivalent to 4 array accesses for each character in the keys), and MSD sort will only access the array while the strings have common prefixes, which explains why MSD sort has the lowest number of array accesses of all sort types.
[]
{ "number": "5.2.23", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Experiment" }
Spell checker. Redo Exercise 3.5.31, which uses the file dictionary.txt from the booksite and the BlackFilter client on page 491 to print all misspelled words in a text file. Compare the performance of TrieST and TST for the file war.txt with this client and discuss the results.
5.1.24 - Rightmost character accessed

Running 1 experiment with 1000000 strings for random decimal keys (with a fixed length of 10 characters), random CA license plates, random fixed-length words (with a fixed length of 10 characters) and random variable-length items (with values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type    | Sort type               | Rightmost character accessed
Decimal keys            Most-Significant-Digit    9
Decimal keys            3-way string quicksort    9
CA license plates       Most-Significant-Digit    6
CA license plates       3-way string quicksort    6
Fixed length words      Most-Significant-Digit    5
Fixed length words      3-way string quicksort    5
Variable length items   Most-Significant-Digit    20
Variable length items   3-way string quicksort    20

In all experiments the rightmost character position accessed in MSD sort and in 3-way string quicksort was the same, which shows that both algorithms scan the same characters.
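A minimal way to collect the "rightmost character accessed" statistic is to route every character read through a wrapper that records the largest index seen. The sketch below does this for 3-way string quicksort; the class name, field name, and counting wrapper are made up for illustration and are not the repo's experiment code:

```java
// 3-way string quicksort instrumented to record the rightmost (largest)
// character position it ever examines. The -1 sentinel marks end-of-string,
// so the recorded depth can be one past the longest common prefix of
// duplicate keys.
class Quick3StringDepth {
    static int rightmost;   // largest character index passed to charAt()

    static int charAt(String s, int d) {
        if (d > rightmost) rightmost = d;
        return d < s.length() ? s.charAt(d) : -1;
    }

    static void sort(String[] a) {
        rightmost = -1;
        sort(a, 0, a.length - 1, 0);
    }

    private static void sort(String[] a, int lo, int hi, int d) {
        if (hi <= lo) return;
        int lt = lo, gt = hi;
        int v = charAt(a[lo], d);
        int i = lo + 1;
        while (i <= gt) {
            int t = charAt(a[i], d);
            if      (t < v) exchange(a, lt++, i++);
            else if (t > v) exchange(a, i, gt--);
            else            i++;
        }
        sort(a, lo, lt - 1, d);
        if (v >= 0) sort(a, lt, gt, d + 1);
        sort(a, gt + 1, hi, d);
    }

    private static void exchange(String[] a, int i, int j) {
        String temp = a[i];
        a[i] = a[j];
        a[j] = temp;
    }

    public static void main(String[] args) {
        // Variable values 'A' and 'B' only, as in the experiment above
        String[] a = { "AAB", "AAA", "ABA", "AAB", "BAA" };
        sort(a);
        System.out.println(java.util.Arrays.toString(a) + ", rightmost = " + rightmost);
    }
}
```

The same wrapper dropped into MSD sort would let both columns of the table be filled from one run per input.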
[]
{ "number": "5.2.24", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.2, "section_title": "Tries", "type": "Experiment" }
Give the dfa[][] array for the Knuth-Morris-Pratt algorithm for the pattern A A A A A A A A A, and draw the DFA, in the style of the figures in the text.
5.1.2

Trace for LSD string sort (same model as used in the book):

input  d=1  d=0  output
no     pa   ai   ai
is     pe   al   al
th     of   co   co
ti     th   fo   fo
fo     th   go   go
al     th   is   is
go     ti   no   no
pe     ai   of   of
to     al   pa   pa
co     no   pe   pe
to     fo   th   th
th     go   th   th
ai     to   th   th
of     co   ti   ti
th     to   to   to
pa     is   to   to
[]
{ "number": "5.3.2", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }