Dataset columns: question (string, lengths 34 to 5.67k), answer (string, lengths 20 to 20.1k), support_files (list, lengths 0 to 4), metadata (dict).
Give the dfa[][] array for the Knuth-Morris-Pratt algorithm for the pattern A B R A C A D A B R A, and draw the DFA, in the style of the figures in the text.
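One way to generate the dfa[][] entries (rather than working them out by hand) is the construction loop from the book's KMP constructor (ALGORITHM 5.6); a minimal sketch, with the class name mine:

```java
// Builds the KMP DFA for a pattern over an R-character alphabet,
// following the constructor of ALGORITHM 5.6 in the book.
class KmpDfa {
    static int[][] build(String pat, int R) {
        int m = pat.length();
        int[][] dfa = new int[R][m];
        dfa[pat.charAt(0)][0] = 1;            // match on the first character
        for (int x = 0, j = 1; j < m; j++) {
            for (int c = 0; c < R; c++)
                dfa[c][j] = dfa[c][x];        // copy mismatch cases from restart state x
            dfa[pat.charAt(j)][j] = j + 1;    // set match case
            x = dfa[pat.charAt(j)][x];        // update restart state
        }
        return dfa;
    }
}
```

Printing dfa[c][j] for each character c of the pattern's alphabet and each state j reproduces the table the exercise asks for.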
5.1.3 According to ASCII values, the indices of chars 'a' through 'z' start at 97, but the indices in this trace follow the book's convention of 'a' starting at 0.
Trace for MSD string sort (same model as used in the book):
Top level of sort(array, 0, 15, 0):
input 0 no 1 is 2 th 3 ti 4 fo 5 al 6 go 7 pe 8 to 9 co 10 to 11 th 12 ai 13 of 14 th 15 pa
d=0 Count frequencies 0 0 1 0 2 a 2 3 b 0 4 c 1 5 d 0 6 e 0 7 f 1 8 g 1 9 h 0 10 i 1 11 j 0 12 k 0 13 l 0 14 m 0 15 n 1 16 o 1 17 p 2 18 q 0 19 r 0 20 s 0 21 t 6 22 u 0 23 v 0 24 w 0 25 x 0 26 y 0 27 z 0
Transform counts to indices 0 0 1 0 2 a 2 3 b 2 4 c 3 5 d 3 6 e 3 7 f 4 8 g 5 9 h 5 10 i 6 11 j 6 12 k 6 13 l 6 14 m 6 15 n 7 16 o 8 17 p 10 18 q 10 19 r 10 20 s 10 21 t 16 22 u 16 23 v 16 24 w 16 25 x 16 26 y 16 27 z 16
Distribute and copy back 0 al 1 ai 2 co 3 fo 4 go 5 is 6 no 7 of 8 pe 9 pa 10 th 11 ti 12 to 13 to 14 th 15 th
Indices at completion of distribute phase 0 0 1 2 2 a 2 3 b 3 4 c 3 5 d 3 6 e 4 7 f 5 8 g 5 9 h 6 10 i 6 11 j 6 12 k 6 13 l 6 14 m 7 15 n 8 16 o 10 17 p 10 18 q 10 19 r 10 20 s 16 21 t 16 22 u 16 23 v 16 24 w 16 25 x 16 26 y 16 27 z 16
Recursively sort subarrays sort(a, 0, 1, 1); sort(a, 2, 1, 1); sort(a, 2, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 3, 1); sort(a, 4, 4, 1); sort(a, 5, 4, 1); sort(a, 5, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 6, 1); sort(a, 7, 7, 1); sort(a, 8, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1);
Sorted result 0 ai 1 al 2 co 3 fo 4 go 5 is 6 no 7 of 8 pa 9 pe 10 th 11 th 12 th 13 ti 14 to 15 to
Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted) input __ __ output no al ai ai ai ai is ai al al al al th co -- co co co ti fo co fo fo fo fo go fo go go go al is go is is is go no is no no
no pe of no of of of to pe of -- pa pa co pa pe pa pe pe to th pa pe -- th th ti th -- th th ai to ti th th th of to to ti th ti th th to to ti to pa th th to to to -- th th to th --
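The count-frequencies, transform-counts, distribute, and copy-back phases shown in the trace map one-to-one onto the book's MSD string sort (ALGORITHM 5.2); a compact sketch, with class and helper names mine:

```java
// MSD string sort following ALGORITHM 5.2: key-indexed counting on the
// d-th character, then recursive sorts of the R subarrays.
class MsdSketch {
    private static final int R = 256;

    // returns a sorted copy (the book's sort() works in place)
    static String[] sorted(String[] keys) {
        String[] a = keys.clone();
        sort(a, new String[a.length], 0, a.length - 1, 0);
        return a;
    }

    // -1 acts as an end-of-string sentinel smaller than any character
    private static int charAt(String s, int d) {
        return d < s.length() ? s.charAt(d) : -1;
    }

    private static void sort(String[] a, String[] aux, int lo, int hi, int d) {
        if (hi <= lo) return;
        int[] count = new int[R + 2];
        for (int i = lo; i <= hi; i++) count[charAt(a[i], d) + 2]++;             // count frequencies
        for (int r = 0; r < R + 1; r++) count[r + 1] += count[r];                // transform counts to indices
        for (int i = lo; i <= hi; i++) aux[count[charAt(a[i], d) + 1]++] = a[i]; // distribute
        for (int i = lo; i <= hi; i++) a[i] = aux[i - lo];                       // copy back
        for (int r = 0; r < R; r++)                                              // recursively sort subarrays
            sort(a, aux, lo + count[r], lo + count[r + 1] - 1, d + 1);
    }
}
```

Running sorted() on the 16 keys of the trace reproduces the sorted result shown above.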
[]
{ "number": "5.3.3", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
Write an efficient method that takes a string txt and an integer M as arguments and returns the position of the first occurrence of M consecutive blanks in the string, txt.length if there is no such occurrence. Estimate the number of character compares used by your method, on typical text and in the worst case.
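One possible solution (a sketch; class and method names mine): a single left-to-right scan keeping a run counter uses exactly one character compare per text position, so about N compares both on typical text and in the worst case:

```java
// Returns the index of the first occurrence of M consecutive blanks in txt,
// or txt.length() if there is no such occurrence.
class BlankSearch {
    static int search(String txt, int M) {
        int run = 0;                              // length of the current run of blanks
        for (int i = 0; i < txt.length(); i++) {
            if (txt.charAt(i) == ' ') {
                if (++run == M) return i - M + 1; // start index of the run
            } else {
                run = 0;                          // run broken
            }
        }
        return txt.length();                      // no such occurrence
    }
}
```

Each text character is examined exactly once, so the method uses N character compares on a text of length N, in the worst case as well as the typical case.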
5.1.4 Trace for 3-way string quicksort (same model as used in the book): -- -- -- 0 no is ai ai ai ai 1 is ai co al -- al 2 th co fo -- al co 3 ti fo al fo -- fo 4 fo al go go co go 5 al go -- co -- is 6 go -- is -- fo no 7 pe no -- is -- of 8 to -- no -- go pa 9 co to -- no -- pe 10 to pe pe -- is th 11 th to of of -- th 12 ai th pa -- no th 13 of ti -- pe -- ti 14 th of th pa of to 15 pa th ti -- -- to pa to th pa th th th -- to th pe th -- -- -- to th to th ti th -- ti -- to to --
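The partitioning traced above follows the book's 3-way string quicksort (ALGORITHM 5.3); a minimal sketch, with class and helper names mine:

```java
// 3-way string quicksort following ALGORITHM 5.3: partition on the d-th
// character of the first key, then recurse on the <, =, > subarrays.
class Quick3StringSketch {
    // returns a sorted copy (the book's sort() works in place)
    static String[] sorted(String[] keys) {
        String[] a = keys.clone();
        sort(a, 0, a.length - 1, 0);
        return a;
    }

    // -1 acts as an end-of-string sentinel smaller than any character
    private static int charAt(String s, int d) {
        return d < s.length() ? s.charAt(d) : -1;
    }

    private static void sort(String[] a, int lo, int hi, int d) {
        if (hi <= lo) return;
        int lt = lo, gt = hi, i = lo + 1;
        int v = charAt(a[lo], d);                 // partitioning character
        while (i <= gt) {
            int t = charAt(a[i], d);
            if      (t < v) exch(a, lt++, i++);
            else if (t > v) exch(a, i, gt--);
            else            i++;
        }
        sort(a, lo, lt - 1, d);                   // keys with smaller d-th char
        if (v >= 0) sort(a, lt, gt, d + 1);       // equal d-th char: move to next char
        sort(a, gt + 1, hi, d);                   // keys with larger d-th char
    }

    private static void exch(String[] a, int i, int j) {
        String t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```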
[]
{ "number": "5.3.4", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
Develop a brute-force substring search implementation BruteForceRL that processes the pattern from right to left (a simplified version of ALGORITHM 5.7).
5.1.5 According to ASCII values, the indices of chars 'a' through 'z' start at 97, but the indices in this trace follow the book's convention of 'a' starting at 0.
Trace for MSD string sort (same model as used in the book):
Top level of sort(array, 0, 13, 0):
input 0 now 1 is 2 the 3 time 4 for 5 all 6 good 7 people 8 to 9 come 10 to 11 the 12 aid 13 of
d=0 Count frequencies 0 0 1 0 2 a 2 3 b 0 4 c 1 5 d 0 6 e 0 7 f 1 8 g 1 9 h 0 10 i 1 11 j 0 12 k 0 13 l 0 14 m 0 15 n 1 16 o 1 17 p 1 18 q 0 19 r 0 20 s 0 21 t 5 22 u 0 23 v 0 24 w 0 25 x 0 26 y 0 27 z 0
Transform counts to indices 0 0 1 0 2 a 2 3 b 2 4 c 3 5 d 3 6 e 3 7 f 4 8 g 5 9 h 5 10 i 6 11 j 6 12 k 6 13 l 6 14 m 6 15 n 7 16 o 8 17 p 9 18 q 9 19 r 9 20 s 9 21 t 14 22 u 14 23 v 14 24 w 14 25 x 14 26 y 14 27 z 14
Distribute and copy back 0 all 1 aid 2 come 3 for 4 good 5 is 6 now 7 of 8 people 9 the 10 time 11 to 12 to 13 the
Indices at completion of distribute phase 0 0 1 2 2 a 2 3 b 3 4 c 3 5 d 3 6 e 4 7 f 5 8 g 5 9 h 6 10 i 6 11 j 6 12 k 6 13 l 6 14 m 7 15 n 8 16 o 9 17 p 9 18 q 9 19 r 9 20 s 14 21 t 14 22 u 14 23 v 14 24 w 14 25 x 14 26 y 14 27 z 14
Recursively sort subarrays sort(a, 0, 1, 1); sort(a, 2, 1, 1); sort(a, 2, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 3, 1); sort(a, 4, 4, 1); sort(a, 5, 4, 1); sort(a, 5, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 6, 1); sort(a, 7, 7, 1); sort(a, 8, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 13, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1);
Sorted result 0 aid 1 all 2 come 3 for 4 good 5 is 6 now 7 of 8 people 9 the 10 the 11 time 12 to 13 to
Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted) input ____ ___ output now all aid aid aid aid is aid all all all all the come --- come come come time for come for for for for good for good
good good all is good is is is good now is now now now people of now of of of to people of people people people come the people --- --- the to time the the the the the to time the the time aid to to time --- to of the to to time to ---- the to to --- to
[]
{ "number": "5.3.5", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
Give the right[] array computed by the constructor in ALGORITHM 5.7 for the pattern A B R A C A D A B R A.
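For reference, the constructor in ALGORITHM 5.7 computes right[] with a single pass that records the rightmost occurrence of each pattern character; a sketch (class name mine):

```java
// Builds Boyer-Moore's right[] table: rightmost index of each character in
// the pattern, -1 for characters not in the pattern (as in ALGORITHM 5.7).
class BoyerMooreRight {
    static int[] build(String pat, int R) {
        int[] right = new int[R];
        java.util.Arrays.fill(right, -1);
        for (int j = 0; j < pat.length(); j++)
            right[pat.charAt(j)] = j;     // later occurrences overwrite earlier ones
        return right;
    }
}
```

For A B R A C A D A B R A this yields right['A'] = 10, right['B'] = 8, right['C'] = 4, right['D'] = 6, right['R'] = 9, and -1 for every other character.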
5.1.6 Trace for 3-way string quicksort (same model as used in the book): ---- --- --- --- 0 now is aid aid aid aid aid 1 is aid come all --- --- all 2 the come for --- all all come 3 time for all for --- --- for 4 for all good good come come good 5 all good ---- come ---- ---- is 6 good --- is ---- for for now 7 people now --- is ---- ---- of 8 to --- now --- good good people 9 come to --- now ---- ---- the 10 to people people --- is is the 11 the to of of --- --- time 12 aid the --- ------ now now to 13 of time to people --- --- to of the ------ of of the time the ------ ------ to time people people the the ------ ------ --- ---- the the to the the to ---- ---- ---- time time ---- ---- to to to to -- --
[]
{ "number": "5.3.6", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
Add to KMP a count() method to count occurrences and a searchAll() method to print all occurrences.
5.1.8 Both MSD string sort and 3-way string quicksort examine all characters in the N keys. That number is equal to 1 + 2 + ... + N = (N^2 + N) / 2 characters. MSD string sort, however, generates (R - 1) * N empty subarrays (an empty subarray for all digits in R other than 'a', in every pass) while 3-way string quicksort generates 2N empty subarrays (empty subarrays for digits smaller than 'a' and for digits higher than 'a', or empty subarrays for digits smaller than '-1' and for digits equal to '-1', in every pass). MSD string sort trace (no cutoff for small subarrays, subarrays of size 0 and 1 omitted): input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ---- 3-way string quicksort trace: input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ----
[]
{ "number": "5.3.8", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
Add to RabinKarp a count() method to count occurrences and a searchAll() method to print all occurrences.
5.1.10 The total number of characters examined by 3-way string quicksort when sorting N fixed-length strings (all of length W) in the worst case is O(N * W * R). This can be seen with a recurrence relation T(W). The base case T(1) is when all the strings have length 1. An example with R = 3 is { "a", "b", "c" }. In the worst case they are in reverse order. For example: { "c", "b", "a" }. In this case we only remove one string from the list in each pass. If we consider N = R^W (in this case, W = 1), the number of comparisons is equal to: Characters examined = Sum[i=0..R] i Characters examined = R * (R + 1) / 2 To build the worst case for strings of length 2 (T(2)), we take each string from T(1) and append it to the end of each character in R. So for single character strings "a", "b", "c", with R = 3, the two character list is: "aa", "ab", "ac", "ba", "bb", "bc", "ca", "cb", "cc". The list can then be split into R groups: one for each character in R that is a prefix to every string of length W - 1. During the partitioning phase all strings that start with "a" will be in the same partition and the algorithm will do the same process as in T(1) because removing the first character 'a' will lead to the same 1-length strings { "c", "b", "a" } as before. The same thing happens for strings starting with "b" and "c". So, for R = 3, the algorithm will check 3 * R + 2 * R + R characters in the first position of the strings (which is 3 + 2 + 1 characters times R groups). Then it will check the second characters in the strings in each of the R groups. For T(W), where W > 2, the list will then again be split into R groups: one for each character in R that is a prefix to every string of length W - 2. Quicksort will then remove R strings from the list in each partition. It will then check R * T(W - 1) more characters for each of those groups. 
This gives the recurrence T(W) = (R^(W - 1) * Sum[i=0..R] (R - i)) + R * T(W - 1), which simplifies to:
T(W) = (R^(W + 1) + R^W) / 2 + R * T(W - 1)
Solving the recurrence gives us:
T(W) = W * R^W * (R + 1) / 2
Substituting N = R^W:
T(W) = W * N * (R + 1) / 2
Which is O(N * W * R).
Thanks to dragon-dreamer (https://github.com/dragon-dreamer) for finding a more accurate worst case. https://github.com/reneargento/algorithms-sedgewick-wayne/issues/153
Thanks to GenevaS (https://github.com/GenevaS) for finding a more accurate worst case. https://github.com/reneargento/algorithms-sedgewick-wayne/issues/245
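The closed form can also be sanity-checked numerically; a small sketch (names mine) that evaluates the recurrence T(W) = (R^(W+1) + R^W) / 2 + R * T(W-1), with base case T(1) = R * (R + 1) / 2, and compares it to W * R^W * (R + 1) / 2:

```java
// Numeric check of the recurrence against its claimed closed form.
class Recurrence {
    static long pow(long r, int w) { long p = 1; while (w-- > 0) p *= r; return p; }

    static long t(long R, int W) {                       // the recurrence
        if (W == 1) return R * (R + 1) / 2;
        return (pow(R, W + 1) + pow(R, W)) / 2 + R * t(R, W - 1);
    }

    static long closed(long R, int W) {                  // W * R^W * (R + 1) / 2
        return W * pow(R, W) * (R + 1) / 2;
    }
}
```

Both divisions are exact, since R^W * (R + 1) is always even (one of R and R + 1 is even).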
[]
{ "number": "5.3.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
In the Boyer-Moore implementation in ALGORITHM 5.7, show that you can set right[c] to the penultimate occurrence of c when c is the last character in the pattern.
5.1.13 - Hybrid sort
Idea: use standard MSD string sort for large arrays, in order to get the advantage of multiway partitioning, and 3-way string quicksort for smaller arrays, in order to avoid the negative effects of large numbers of empty bins. This idea will work well for random strings because, in general, the higher the number of keys to be sorted, the higher the number of non-empty subarrays generated on each pass of MSD string sort. Such a scenario would work well due to the advantage of having multiway partitioning. However, MSD string sort will still generate a large number of empty subarrays if there is a large number of equal keys (or a large number of keys with long common prefixes). 3-way string quicksort will avoid the negative effects of large numbers of empty bins not only for smaller arrays, but also for large arrays, while also having the benefit of using less space than MSD string sort, since it does not require space for frequency counts or for an auxiliary array. On the other hand, it involves more data movement than MSD string sort when the number of nonempty subarrays is large, because it has to do a series of 3-way partitions to get the effect of the multiway partition. This would not be a problem in the hybrid sort if there were many equal keys in smaller arrays, since 3-way string quicksort would be the algorithm of choice in such a situation. Overall, hybrid sort would be a good choice for random strings. However, a version of hybrid sort that chooses between MSD string sort and 3-way string quicksort based on the percentage of equal keys (choosing MSD string sort if there is a low percentage of equal keys and choosing 3-way string quicksort if there is a high percentage of equal keys) would be more effective than a version that makes the choice based on the number of keys.
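A sketch of the hybrid described above (the class name and the cutoff value of 15 are mine): MSD key-indexed counting for large subarrays, handing subarrays below the cutoff to 3-way string quicksort:

```java
// Hybrid string sort: MSD key-indexed counting for large subarrays,
// 3-way string quicksort below CUTOFF. Names and cutoff are illustrative.
class HybridStringSort {
    private static final int R = 256, CUTOFF = 15;

    static String[] sorted(String[] keys) {
        String[] a = keys.clone();
        msd(a, new String[a.length], 0, a.length - 1, 0);
        return a;
    }

    private static int charAt(String s, int d) { return d < s.length() ? s.charAt(d) : -1; }

    private static void msd(String[] a, String[] aux, int lo, int hi, int d) {
        if (hi <= lo) return;
        if (hi - lo < CUTOFF) { quick3(a, lo, hi, d); return; } // small subarray: 3-way quicksort
        int[] count = new int[R + 2];
        for (int i = lo; i <= hi; i++) count[charAt(a[i], d) + 2]++;
        for (int r = 0; r < R + 1; r++) count[r + 1] += count[r];
        for (int i = lo; i <= hi; i++) aux[count[charAt(a[i], d) + 1]++] = a[i];
        for (int i = lo; i <= hi; i++) a[i] = aux[i - lo];
        for (int r = 0; r < R; r++) msd(a, aux, lo + count[r], lo + count[r + 1] - 1, d + 1);
    }

    private static void quick3(String[] a, int lo, int hi, int d) {
        if (hi <= lo) return;
        int lt = lo, gt = hi, i = lo + 1, v = charAt(a[lo], d);
        while (i <= gt) {
            int t = charAt(a[i], d);
            if      (t < v) exch(a, lt++, i++);
            else if (t > v) exch(a, i, gt--);
            else            i++;
        }
        quick3(a, lo, lt - 1, d);
        if (v >= 0) quick3(a, lt, gt, d + 1);
        quick3(a, gt + 1, hi, d);
    }

    private static void exch(String[] a, int i, int j) { String t = a[i]; a[i] = a[j]; a[j] = t; }
}
```

On 16 keys the top level uses MSD partitioning and every recursive subarray falls below the cutoff, so the 3-way quicksort branch handles the rest.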
[]
{ "number": "5.3.13", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
Draw the KMP DFA for the following pattern strings. a. AAAAAAB b. AACAAAB c. ABABABAB d. ABAABAAABAAAB e. ABAABCABAABCB
5.1.17 - In-place key-indexed counting LSD and MSD sorts that use only a constant amount of extra space are not stable. Counterexample for LSD sort: The array ["4PGC938", "2IYE230", "3CIO720", "1ICK750", "1OHV845", "4JZY524", "1ICK750", "3CIO720", "1OHV845", "1OHV845", "2RLA629", "2RLA629", "3ATW723"] after being sorted by in-place LSD becomes: ["1OHV845", "1OHV845", "1OHV845", "1ICK750", "1ICK750", "2RLA629", "2IYE230", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4PGC938", "4JZY524"] If it were sorted by non-in-place LSD the output would be: ["1ICK750", "1ICK750", "1OHV845", "1OHV845", "1OHV845", "2IYE230", "2RLA629", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4JZY524", "4PGC938"] Counterexample for both LSD and MSD sorts: The array ["CAA" (index 0), "ABB" (index 1), "ABB" (index 2)] after being sorted by either in-place LSD or in-place MSD becomes: ["ABB" (original index 2), "ABB" (original index 1), "CAA" (original index 0)] If it were sorted by non-in-place LSD or MSD the output would be: ["ABB" (original index 1), "ABB" (original index 2), "CAA" (original index 0)]
[]
{ "number": "5.3.17", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
How would you modify the Rabin-Karp algorithm to search for an H-by-V pattern in an N-by-N text?
5.1.22 - Timings
Running 10 experiments with 1000000 strings for random decimal keys (with fixed length of 10 characters), random CA license plates, random fixed-length words (with fixed length of 10 characters) and random variable-length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was 15.
Random string type | Sort type | Average time spent
Decimal keys | Least-Significant-Digit | 2.30
Decimal keys | Most-Significant-Digit | 0.45
Decimal keys | 3-way string quicksort | 0.32
CA license plates | Least-Significant-Digit | 1.48
CA license plates | Most-Significant-Digit | 0.41
CA license plates | 3-way string quicksort | 0.33
Fixed-length words | Least-Significant-Digit | 2.52
Fixed-length words | Most-Significant-Digit | 0.28
Fixed-length words | 3-way string quicksort | 0.35
Variable-length items | Most-Significant-Digit | 1.80
Variable-length items | 3-way string quicksort | 0.55
The experiment results show that for all random string types, LSD sort had the worst results. For random decimal keys, random CA license plates and random variable-length items, 3-way string quicksort had the best results. For random fixed-length words, MSD had the best running time. Having to always scan all characters in all keys may explain why LSD sort had the slowest running times across all random string types. 3-way string quicksort may have had good results because it does not create a high number of empty subarrays, as MSD sort does, and because it handles keys with long common prefixes well (which are likely to occur in random decimal keys, random CA license plates and random variable-length items). Random fixed-length words are less likely to have long common prefixes (because all their characters are in the range [40, 125]), which may explain why MSD sort had better results than both LSD sort and 3-way string quicksort when sorting them.
[]
{ "number": "5.3.22", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
Write a program that reads characters one at a time and reports at each instant if the current string is a palindrome. Hint : Use the Rabin-Karp hashing idea.
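One way to act on the hint (a sketch; all names are mine): maintain a forward hash Sum c_i * R^(n-1-i) and a reverse hash Sum c_i * R^i modulo a prime Q. The two agree exactly when the string read so far is a palindrome (with high probability, since hashes can collide), and each new character updates both in constant time:

```java
// Streaming palindrome check via Rabin-Karp style hashing: the string read
// so far is reported as a palindrome when its forward and reverse hashes
// agree (a Monte Carlo answer; a direct character check could confirm it).
class PalindromeStream {
    private static final long Q = 1_000_000_007L, R = 256;
    private long fwd = 0, rev = 0, pow = 1;   // pow = R^n mod Q, n = chars read

    boolean read(char c) {
        fwd = (fwd * R + c) % Q;              // fwd = sum of c_i * R^(n-1-i)
        rev = (rev + c * pow) % Q;            // rev = sum of c_i * R^i
        pow = (pow * R) % Q;
        return fwd == rev;                    // equal iff palindrome, w.h.p.
    }

    // convenience: feed a whole string, report on its final state
    static boolean check(String s) {
        PalindromeStream p = new PalindromeStream();
        boolean last = true;
        for (int i = 0; i < s.length(); i++) last = p.read(s.charAt(i));
        return last;
    }
}
```

The key observation: the forward hash of the reversed string equals the reverse hash of the original, so fwd == rev characterizes palindromes up to hash collisions.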
5.1.23 - Array accesses
Running 10 experiments with 1000000 strings for random decimal keys (with fixed length of 10 characters), random CA license plates, random fixed-length words (with fixed length of 10 characters) and random variable-length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was 15.
Random string type | Sort type | Number of array accesses
Decimal keys | Least-Significant-Digit | 40000000
Decimal keys | Most-Significant-Digit | 35443947
Decimal keys | 3-way string quicksort | 78124405
CA license plates | Least-Significant-Digit | 28000000
CA license plates | Most-Significant-Digit | 25703588
CA license plates | 3-way string quicksort | 82889002
Fixed-length words | Least-Significant-Digit | 40000000
Fixed-length words | Most-Significant-Digit | 14841196
Fixed-length words | 3-way string quicksort | 95310075
Variable-length items | Most-Significant-Digit | 72121457
Variable-length items | 3-way string quicksort | 97523400
The experiment results show that for all random string types, 3-way string quicksort accessed the array more times than LSD and MSD sort, LSD sort accessed the array more times than MSD sort, and MSD sort had the lowest number of array accesses. A possible explanation for these results is that 3-way string quicksort accesses the array 4 times for each exchange operation, which leads to more array accesses than both LSD and MSD sorts, which do not make in-place exchanges. LSD sort will always access the array 4 * N * W times, where N is the number of strings and W is the length of the strings (equivalent to 4 array accesses for each character in the keys), while MSD sort will only access the array while the strings have common prefixes, which explains why MSD sort has the lowest number of array accesses of all sort types.
[]
{ "number": "5.3.23", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Exercise" }
Find all occurrences. Add a method findAll() to each of the four substring search algorithms given in the text that returns an Iterable<Integer> that allows clients to iterate through all offsets of the pattern in the text.
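The general shape of such a findAll() method (a sketch; String.indexOf stands in for whichever of the four search algorithms is being extended, and a List is returned as the Iterable<Integer>):

```java
// findAll: collect every offset at which pat occurs in txt. In the real
// exercise, the scanning loop would call the surrounding algorithm's
// search-from-offset logic; String.indexOf keeps the sketch self-contained.
class FindAllSketch {
    static java.util.List<Integer> findAll(String pat, String txt) {
        java.util.List<Integer> offsets = new java.util.ArrayList<>();
        for (int i = txt.indexOf(pat); i != -1; i = txt.indexOf(pat, i + 1))
            offsets.add(i);                   // restarting at i + 1 keeps overlapping matches
        return offsets;
    }
}
```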
5.1.24 - Rightmost character accessed
Running 1 experiment with 1000000 strings for random decimal keys (with fixed length of 10 characters), random CA license plates, random fixed-length words (with fixed length of 10 characters) and random variable-length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was 15.
Random string type | Sort type | Rightmost character accessed
Decimal keys | Most-Significant-Digit | 9
Decimal keys | 3-way string quicksort | 9
CA license plates | Most-Significant-Digit | 6
CA license plates | 3-way string quicksort | 6
Fixed-length words | Most-Significant-Digit | 5
Fixed-length words | 3-way string quicksort | 5
Variable-length items | Most-Significant-Digit | 20
Variable-length items | 3-way string quicksort | 20
In all experiments the rightmost character position accessed by MSD sort and by 3-way string quicksort was the same, which suggests that both algorithms examine the same characters.
[]
{ "number": "5.3.24", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Creative Problem" }
Tandem repeat search. A tandem repeat of a base string b in a string s is a substring of s having at least two consecutive copies b (nonoverlapping). Develop and implement a linear-time algorithm that, given two strings b and s, returns the index of the beginning of the longest tandem repeat of b in s. For example, your program should return 3 when b is abcab and s is abcabcababcababcababcab.
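A sketch of one approach (class and method names mine): mark every occurrence of b in s, then chain occurrences spaced exactly b.length() apart from right to left. String.indexOf stands in here for a linear-time KMP scan of s, which would make the whole method linear:

```java
// Longest tandem repeat of b in s: mark occurrence starts, then chain
// occurrences spaced b.length() apart. Returns the start index of the
// longest run of >= 2 consecutive copies, or -1 if there is none.
class TandemRepeat {
    static int search(String b, String s) {
        int m = b.length(), n = s.length();
        if (m == 0 || n < 2 * m) return -1;
        boolean[] occ = new boolean[n + 1];           // occ[i]: b occurs at index i
        for (int i = s.indexOf(b); i != -1; i = s.indexOf(b, i + 1))
            occ[i] = true;                            // indexOf stands in for KMP here
        int[] copies = new int[n + 1];                // consecutive copies starting at i
        int bestStart = -1, bestCopies = 1;
        for (int i = n - m; i >= 0; i--) {
            if (!occ[i]) continue;
            copies[i] = 1 + copies[i + m];            // chain with the copy right after, if any
            if (copies[i] >= 2 && copies[i] > bestCopies) {
                bestCopies = copies[i];
                bestStart = i;
            }
        }
        return bestStart;
    }
}
```

On the example from the exercise, b = abcab occurs at offsets 0, 3, 8, 13 and 18; the chain 3, 8, 13, 18 gives four consecutive copies starting at index 3.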
5.5.27 - Long repeats
Compressing 2 * 1000 random characters (for N = 1000), encoded with 16000 bits (8 bits per character).
% java -cp algs4.jar:. RunLengthEncoding - < 5.5.27_random.txt | java -cp algs4.jar:. BinaryDump 0
53736 bits
Compression ratio: 53736 / 16000 = 336%
% java -cp algs4.jar:. edu.princeton.cs.algs4.Huffman - < 5.5.27_random.txt | java -cp algs4.jar:. BinaryDump 0
12624 bits
Compression ratio: 12624 / 16000 = 79%
% java -cp algs4.jar:. edu.princeton.cs.algs4.LZW - < 5.5.27_random.txt | java -cp algs4.jar:. BinaryDump 0
13416 bits
Compression ratio: 13416 / 16000 = 84%
[]
{ "number": "5.3.27", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Creative Problem" }
Random patterns. How many character compares are needed to do a substring search for a random pattern of length 100 in a given text? Answer: None. The method public boolean search(char[] txt) { return false; } is quite effective for this problem, since the chances of a random pattern of length 100 appearing in any text are so low that you may consider them to be 0.
5.3.31 - Random patterns None. The method public boolean search(char[] text) { return false; } is quite effective for this problem, since the chances of a random pattern of length 100 appearing in any text are so low that you may consider them to be 0.
[]
{ "number": "5.3.31", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Creative Problem" }
Timings. Write a program that times the four methods for the task of searching for the substring it is a far far better thing that i do than i have ever done in the text of Tale of Two Cities (tale.txt). Discuss the extent to which your results validate the hypotheses about performance that are stated in the text.
5.3.39 - Timings
Method | Time spent
Brute force | 0.01
Knuth-Morris-Pratt | 0.01
Boyer-Moore | 0.00
Rabin-Karp | 0.04
The results validate the hypotheses about performance stated in the text: both the brute-force and Knuth-Morris-Pratt methods took 0.01 seconds to do the search, which is aligned with their expected performance of 1.1N operations when searching in typical texts. Boyer-Moore took 0.00 seconds to do the search, which suggests that it did a sublinear number of operations; this is aligned with its expected performance of N / M operations when searching in typical texts. Rabin-Karp took 0.04 seconds, a longer time than the other three methods, but this is expected according to the hypothesis in the text that it performs 7N operations when searching in typical texts.
[]
{ "number": "5.3.39", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.3, "section_title": "Substring Search", "type": "Experiment" }
Give a brief English description of each of the following REs: a. .* b. A.*A | A c. .*ABBABBA.* d. .* A.*A.*A.*A.*
5.1.2 Trace for LSD string sort (same model as used in the book): input d=1 d=0 output no pa ai ai is pe al al th of co co ti th fo fo fo th go go al th is is go ti no no pe ai of of to al pa pa co no pe pe to fo th th th go th th ai to th th of co ti ti th to to to pa is to to
[]
{ "number": "5.4.2", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Exercise" }
What is the maximum number of different strings that can be described by a regular expression with M or operators and no closure operators (parentheses and concatenation are allowed)?
5.1.3 According to ASCII values, the indices of chars 'a' through 'z' start at 97, but the indices in this trace follow the book's convention of 'a' starting at 0.
Trace for MSD string sort (same model as used in the book):
Top level of sort(array, 0, 15, 0):
input 0 no 1 is 2 th 3 ti 4 fo 5 al 6 go 7 pe 8 to 9 co 10 to 11 th 12 ai 13 of 14 th 15 pa
d=0 Count frequencies 0 0 1 0 2 a 2 3 b 0 4 c 1 5 d 0 6 e 0 7 f 1 8 g 1 9 h 0 10 i 1 11 j 0 12 k 0 13 l 0 14 m 0 15 n 1 16 o 1 17 p 2 18 q 0 19 r 0 20 s 0 21 t 6 22 u 0 23 v 0 24 w 0 25 x 0 26 y 0 27 z 0
Transform counts to indices 0 0 1 0 2 a 2 3 b 2 4 c 3 5 d 3 6 e 3 7 f 4 8 g 5 9 h 5 10 i 6 11 j 6 12 k 6 13 l 6 14 m 6 15 n 7 16 o 8 17 p 10 18 q 10 19 r 10 20 s 10 21 t 16 22 u 16 23 v 16 24 w 16 25 x 16 26 y 16 27 z 16
Distribute and copy back 0 al 1 ai 2 co 3 fo 4 go 5 is 6 no 7 of 8 pe 9 pa 10 th 11 ti 12 to 13 to 14 th 15 th
Indices at completion of distribute phase 0 0 1 2 2 a 2 3 b 3 4 c 3 5 d 3 6 e 4 7 f 5 8 g 5 9 h 6 10 i 6 11 j 6 12 k 6 13 l 6 14 m 7 15 n 8 16 o 10 17 p 10 18 q 10 19 r 10 20 s 16 21 t 16 22 u 16 23 v 16 24 w 16 25 x 16 26 y 16 27 z 16
Recursively sort subarrays sort(a, 0, 1, 1); sort(a, 2, 1, 1); sort(a, 2, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 3, 1); sort(a, 4, 4, 1); sort(a, 5, 4, 1); sort(a, 5, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 6, 1); sort(a, 7, 7, 1); sort(a, 8, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1);
Sorted result 0 ai 1 al 2 co 3 fo 4 go 5 is 6 no 7 of 8 pa 9 pe 10 th 11 th 12 th 13 ti 14 to 15 to
Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted) input __ __ output no al ai ai ai ai is ai al al al al th co -- co co co ti fo co fo fo fo fo go fo go go go al is go is is is go no is no no
no pe of no of of of to pe of -- pa pa co pa pe pa pe pe to th pa pe -- th th ti th -- th th ai to ti th th th of to to ti th ti th th to to ti to pa th th to to to -- th th to th --
[]
{ "number": "5.4.3", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Exercise" }
Draw the NFA corresponding to the pattern ( ( ( A | B ) * | C D * | E F G ) * ) * .
5.1.4 Trace for 3-way string quicksort (same model as used in the book): -- -- -- 0 no is ai ai ai ai 1 is ai co al -- al 2 th co fo -- al co 3 ti fo al fo -- fo 4 fo al go go co go 5 al go -- co -- is 6 go -- is -- fo no 7 pe no -- is -- of 8 to -- no -- go pa 9 co to -- no -- pe 10 to pe pe -- is th 11 th to of of -- th 12 ai th pa -- no th 13 of ti -- pe -- ti 14 th of th pa of to 15 pa th ti -- -- to pa to th pa th th th -- to th pe th -- -- -- to th to th ti th -- ti -- to to --
[]
{ "number": "5.4.4", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Exercise" }
Draw the digraph of ε-transitions for the NFA from Exercise 5.4.4.
5.1.5 According to ASCII values, the indices of chars 'a' through 'z' start at 97, but the indices in this trace follow the book's convention of 'a' starting at 0.
Trace for MSD string sort (same model as used in the book):
Top level of sort(array, 0, 13, 0):
input 0 now 1 is 2 the 3 time 4 for 5 all 6 good 7 people 8 to 9 come 10 to 11 the 12 aid 13 of
d=0 Count frequencies 0 0 1 0 2 a 2 3 b 0 4 c 1 5 d 0 6 e 0 7 f 1 8 g 1 9 h 0 10 i 1 11 j 0 12 k 0 13 l 0 14 m 0 15 n 1 16 o 1 17 p 1 18 q 0 19 r 0 20 s 0 21 t 5 22 u 0 23 v 0 24 w 0 25 x 0 26 y 0 27 z 0
Transform counts to indices 0 0 1 0 2 a 2 3 b 2 4 c 3 5 d 3 6 e 3 7 f 4 8 g 5 9 h 5 10 i 6 11 j 6 12 k 6 13 l 6 14 m 6 15 n 7 16 o 8 17 p 9 18 q 9 19 r 9 20 s 9 21 t 14 22 u 14 23 v 14 24 w 14 25 x 14 26 y 14 27 z 14
Distribute and copy back 0 all 1 aid 2 come 3 for 4 good 5 is 6 now 7 of 8 people 9 the 10 time 11 to 12 to 13 the
Indices at completion of distribute phase 0 0 1 2 2 a 2 3 b 3 4 c 3 5 d 3 6 e 4 7 f 5 8 g 5 9 h 6 10 i 6 11 j 6 12 k 6 13 l 6 14 m 7 15 n 8 16 o 9 17 p 9 18 q 9 19 r 9 20 s 14 21 t 14 22 u 14 23 v 14 24 w 14 25 x 14 26 y 14 27 z 14
Recursively sort subarrays sort(a, 0, 1, 1); sort(a, 2, 1, 1); sort(a, 2, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 3, 1); sort(a, 4, 4, 1); sort(a, 5, 4, 1); sort(a, 5, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 6, 1); sort(a, 7, 7, 1); sort(a, 8, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 13, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1);
Sorted result 0 aid 1 all 2 come 3 for 4 good 5 is 6 now 7 of 8 people 9 the 10 the 11 time 12 to 13 to
Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted) input ____ ___ output now all aid aid aid aid is aid all all all all the come --- come come come time for come for for for for good for good
good good all is good is is is good now is now now now people of now of of of to people of people people people come the people --- --- the to time the the the the the to time the the time aid to to time --- to of the to to time to ---- the to to --- to
[]
{ "number": "5.4.5", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Exercise" }
Give the sets of states reachable by your NFA from EXERCISE 5.4.4 after each character match and subsequent ε-transitions for the input A B B A C E F G E F G C A A B .
5.1.6 Trace for 3-way string quicksort (same model as used in the book): ---- --- --- --- 0 now is aid aid aid aid aid 1 is aid come all --- --- all 2 the come for --- all all come 3 time for all for --- --- for 4 for all good good come come good 5 all good ---- come ---- ---- is 6 good --- is ---- for for now 7 people now --- is ---- ---- of 8 to --- now --- good good people 9 come to --- now ---- ---- the 10 to people people --- is is the 11 the to of of --- --- time 12 aid the --- ------ now now to 13 of time to people --- --- to of the ------ of of the time the ------ ------ to time people people the the ------ ------ --- ---- the the to the the to ---- ---- ---- time time ---- ---- to to to to -- --
[]
{ "number": "5.4.6", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Exercise" }
Write a regular expression for each of the following sets of binary strings: a. Contains at least three consecutive 1s b. Contains the substring 110 c. Contains the substring 1101100 d. Does not contain the substring 110
5.1.8 Both MSD string sort and 3-way string quicksort examine all characters in the N keys. That number is equal to 1 + 2 + ... + N = (N^2 + N) / 2 characters. MSD string sort, however, generates (R - 1) * N empty subarrays (an empty subarray for all digits in R other than 'a', in every pass) while 3-way string quicksort generates 2N empty subarrays (empty subarrays for digits smaller than 'a' and for digits higher than 'a', or empty subarrays for digits smaller than '-1' and for digits equal to '-1', in every pass). MSD string sort trace (no cutoff for small subarrays, subarrays of size 0 and 1 omitted): input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ---- 3-way string quicksort trace: input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ----
[]
{ "number": "5.4.8", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Exercise" }
Write a regular expression for each of the following sets of binary strings: a. Has at least 3 characters, and the third character is 0 b. Number of 0s is a multiple of 3 c. Starts and ends with the same character d. Odd length e. Starts with 0 and has odd length, or starts with 1 and has even length f. Length is at least 1 and at most 3
5.1.10 The total number of characters examined by 3-way string quicksort when sorting N fixed-length strings (all of length W) in the worst case is O(N * W * R). This can be seen with a recurrence relation T(W). The base case T(1) is when all the strings have length 1. An example with R = 3 is { "a", "b", "c" }. In the worst case they are in reverse order, for example { "c", "b", "a" }. In this case we only remove one string from the list in each pass. If we consider N = R^W (in this case, W = 1), the number of comparisons is equal to:

Characters examined = Sum[i=0..R] i = R * (R + 1) / 2

To build the worst case for strings of length 2 (T(2)), we take each string from T(1) and append it to the end of each character in R. So for single-character strings "a", "b", "c", with R = 3, the two-character list is: "aa", "ab", "ac", "ba", "bb", "bc", "ca", "cb", "cc". The list can then be split into R groups: one for each character in R that is a prefix to every string of length W - 1. During the partitioning phase all strings that start with "a" will be in the same partition, and the algorithm will do the same process as in T(1), because removing the first character 'a' leads to the same 1-length strings { "c", "b", "a" } as before. The same thing happens for strings starting with "b" and "c". So, for R = 3, the algorithm will check 3 * R + 2 * R + R characters in the first position of the strings (which is 3 + 2 + 1 characters times R groups). Then it will check the second characters in the strings in each of the R groups. For T(W), where W > 2, the list will again be split into R groups: one for each character in R that is a prefix to every string of length W - 2. Quicksort will then remove R strings from the list in each partition. It will then check R * T(W - 1) more characters for each of those groups.
This gives the recurrence T(W) = (R^(W - 1) * Sum[i=0..R] (R - i)) + R * T(W - 1), which simplifies to:

T(W) = (R^(W + 1) + R^W) / 2 + R * T(W - 1)

Solving the recurrence gives us:

T(W) = W * R^W * (R + 1) / 2

Substituting N = R^W:

T(W) = W * N * (R + 1) / 2

which is O(N * W * R).
Thanks to dragon-dreamer (https://github.com/dragon-dreamer) for finding a more accurate worst case: https://github.com/reneargento/algorithms-sedgewick-wayne/issues/153
Thanks to GenevaS (https://github.com/GenevaS) for finding a more accurate worst case: https://github.com/reneargento/algorithms-sedgewick-wayne/issues/245
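A quick numerical check of the recurrence against the closed form (a Python sketch added for verification; not part of the original solution):

```python
def t_recurrence(w, r):
    # T(1) = R * (R + 1) / 2; T(W) = (R^(W+1) + R^W) / 2 + R * T(W-1)
    if w == 1:
        return r * (r + 1) // 2
    return (r ** (w + 1) + r ** w) // 2 + r * t_recurrence(w - 1, r)

def t_closed(w, r):
    # closed form: W * R^W * (R + 1) / 2
    return w * r ** w * (r + 1) // 2

# the two agree for a range of alphabet sizes and string lengths
for r in (2, 3, 5, 10):
    for w in range(1, 8):
        assert t_recurrence(w, r) == t_closed(w, r)
```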
[]
{ "number": "5.4.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Exercise" }
Challenging REs. Construct an RE that describes each of the following sets of strings over the binary alphabet: a. All strings except 11 or 111 b. Strings with 1 in every odd-number bit position c. Strings with at least two 0s and at most one 1 d. Strings with no two consecutive 1s
5.1.13 - Hybrid sort
Idea: use standard MSD string sort for large arrays, in order to get the advantage of multiway partitioning, and 3-way string quicksort for smaller arrays, in order to avoid the negative effects of large numbers of empty bins.
This idea will work well for random strings because, in general, the higher the number of keys to be sorted, the higher the number of non-empty subarrays generated on each pass of MSD string sort. Such a scenario works well due to the advantage of having multiway partitioning. However, MSD string sort will still generate a large number of empty subarrays if there is a large number of equal keys (or a large number of keys with long common prefixes).
3-way string quicksort avoids the negative effects of large numbers of empty bins not only for smaller arrays, but also for large arrays, while also having the benefit of using less space than MSD string sort, since it does not require space for frequency counts or for an auxiliary array. On the other hand, it involves more data movement than MSD string sort when the number of nonempty subarrays is large, because it has to do a series of 3-way partitions to get the effect of the multiway partition. This would not be a problem in the hybrid sort if there were many equal keys in smaller arrays, since 3-way string quicksort would be the algorithm of choice in that situation.
Overall, hybrid sort would be a good choice for random strings. However, a version of hybrid sort that chooses between MSD string sort and 3-way string quicksort based on the percentage of equal keys (choosing MSD string sort if there is a low percentage of equal keys and 3-way string quicksort if there is a high percentage of equal keys) would be more effective than a version that makes the choice based on the number of keys.
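A minimal sketch of the hybrid in Python (the repository's implementations are Java; the cutoff value of 10 and the R = 256 extended-ASCII alphabet here are assumptions): subarrays at or above the cutoff get MSD's multiway partition, smaller ones fall through to 3-way string quicksort.

```python
CUTOFF = 10   # assumed threshold; tune experimentally
R = 256       # extended ASCII

def char_at(s, d):
    # -1 signals "past the end of the string"
    return ord(s[d]) if d < len(s) else -1

def hybrid_sort(a):
    aux = [None] * len(a)
    _msd(a, aux, 0, len(a) - 1, 0)

def _msd(a, aux, lo, hi, d):
    if hi <= lo:
        return
    if hi - lo + 1 < CUTOFF:
        _quick3(a, lo, hi, d)          # small subarray: 3-way string quicksort
        return
    count = [0] * (R + 2)              # key-indexed counting (multiway partition)
    for i in range(lo, hi + 1):
        count[char_at(a[i], d) + 2] += 1
    for r in range(R + 1):
        count[r + 1] += count[r]
    for i in range(lo, hi + 1):        # distribute into the auxiliary array
        c = char_at(a[i], d) + 1
        aux[count[c]] = a[i]
        count[c] += 1
    for i in range(lo, hi + 1):        # copy back
        a[i] = aux[i - lo]
    for r in range(R):                 # one recursive call per character value
        _msd(a, aux, lo + count[r], lo + count[r + 1] - 1, d + 1)

def _quick3(a, lo, hi, d):
    if hi <= lo:
        return
    lt, gt, i = lo, hi, lo + 1
    v = char_at(a[lo], d)
    while i <= gt:                     # 3-way partition on character d
        t = char_at(a[i], d)
        if t < v:
            a[lt], a[i] = a[i], a[lt]; lt += 1; i += 1
        elif t > v:
            a[gt], a[i] = a[i], a[gt]; gt -= 1
        else:
            i += 1
    _quick3(a, lo, lt - 1, d)
    if v >= 0:
        _quick3(a, lt, gt, d + 1)      # equal keys: move to the next character
    _quick3(a, gt + 1, hi, d)
```

A variant that dispatches on an estimated fraction of equal keys, as suggested above, would only need to change the test at the top of _msd.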
[]
{ "number": "5.4.13", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Creative Problem" }
Wildcard. Add to NFA the capability to handle wildcards.
5.1.17 - In-place key-indexed counting LSD and MSD sorts that use only a constant amount of extra space are not stable. Counterexample for LSD sort: The array ["4PGC938", "2IYE230", "3CIO720", "1ICK750", "1OHV845", "4JZY524", "1ICK750", "3CIO720", "1OHV845", "1OHV845", "2RLA629", "2RLA629", "3ATW723"] after being sorted by in-place LSD becomes: ["1OHV845", "1OHV845", "1OHV845", "1ICK750", "1ICK750", "2RLA629", "2IYE230", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4PGC938", "4JZY524"] If it were sorted by non-in-place LSD the output would be: ["1ICK750", "1ICK750", "1OHV845", "1OHV845", "1OHV845", "2IYE230", "2RLA629", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4JZY524", "4PGC938"] Counterexample for both LSD and MSD sorts: The array ["CAA" (index 0), "ABB" (index 1), "ABB" (index 2)] after being sorted by either in-place LSD or in-place MSD becomes: ["ABB" (original index 2), "ABB" (original index 1), "CAA" (original index 0)] If it were sorted by non-in-place LSD or MSD the output would be: ["ABB" (original index 1), "ABB" (original index 2), "CAA" (original index 0)]
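For contrast, the standard non-in-place key-indexed counting pass is stable because it distributes into an auxiliary array. A Python sketch (the index tags are added only to make stability visible) run over the three-string counterexample:

```python
def key_indexed_counting(a, key, R):
    # stable: equal keys keep their relative order via the aux array
    count = [0] * (R + 1)
    for item in a:
        count[key(item) + 1] += 1
    for r in range(R):                 # transform counts to indices
        count[r + 1] += count[r]
    aux = [None] * len(a)
    for item in a:                     # distribute, preserving input order
        aux[count[key(item)]] = item
        count[key(item)] += 1
    return aux

items = [("CAA", 0), ("ABB", 1), ("ABB", 2)]
for d in (2, 1, 0):                    # LSD: rightmost character first
    items = key_indexed_counting(items, lambda it: ord(it[0][d]) - ord("A"), 26)

# the two ABBs keep their original relative order (1 before 2)
assert items == [("ABB", 1), ("ABB", 2), ("CAA", 0)]
```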
[]
{ "number": "5.4.17", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Creative Problem" }
Proof. Develop a version of NFA that prints a proof that a given string is in the language recognized by the NFA (a sequence of state transitions that ends in the accept state).
5.1.22 - Timings
Running 10 experiments with 1000000 strings for random decimal keys (with fixed length of 10 characters), random CA license plates, random fixed-length words (with fixed length of 10 characters) and random variable-length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type | Sort type | Average time spent
Decimal keys | Least-Significant-Digit | 2.30
Decimal keys | Most-Significant-Digit | 0.45
Decimal keys | 3-way string quicksort | 0.32
CA license plates | Least-Significant-Digit | 1.48
CA license plates | Most-Significant-Digit | 0.41
CA license plates | 3-way string quicksort | 0.33
Fixed-length words | Least-Significant-Digit | 2.52
Fixed-length words | Most-Significant-Digit | 0.28
Fixed-length words | 3-way string quicksort | 0.35
Variable-length items | Most-Significant-Digit | 1.80
Variable-length items | 3-way string quicksort | 0.55

The experiment results show that for all random string types, LSD sort had the worst results. For random decimal keys, random CA license plates and random variable-length items, 3-way string quicksort had the best results. For random fixed-length words, MSD had the best running time.
Having to always scan all characters in all keys may explain why LSD sort had the slowest running times for all random string types. 3-way string quicksort may have had good results because it does not create a high number of empty subarrays, as MSD sort does, and because it handles keys with long common prefixes well (which are likely to occur in random decimal keys, random CA license plates and random variable-length items). Random fixed-length words are less likely to have long common prefixes (because all their characters are in the range [40, 125]), which may explain why MSD sort had better results than both LSD sort and 3-way string quicksort.
[]
{ "number": "5.4.22", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.4, "section_title": "Regular Expressions", "type": "Creative Problem" }
Give an example of a uniquely decodable code that is not prefix-free. Answer: Any suffix-free code is uniquely decodable, so one example is {0, 01}: it is suffix-free (neither codeword is a suffix of the other) but not prefix-free (0 is a prefix of 01).
5.1.2 Trace for LSD string sort (same model as used in the book):

input  d=1  d=0  output
no     pa   ai   ai
is     pe   al   al
th     of   co   co
ti     th   fo   fo
fo     th   go   go
al     th   is   is
go     ti   no   no
pe     ai   of   of
to     al   pa   pa
co     no   pe   pe
to     fo   th   th
th     th   th   th
ai     to   th   th
of     co   ti   ti
th     to   to   to
pa     is   to   to
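The trace can be reproduced with any stable sort applied once per character position, from right to left. A Python sketch using the built-in sorted, which is guaranteed stable (the keys are the exercise's 16 two-letter words):

```python
words = ["no", "is", "th", "ti", "fo", "al", "go", "pe",
         "to", "co", "to", "th", "ai", "of", "th", "pa"]

a = words[:]
for d in reversed(range(2)):           # d = 1, then d = 0
    a = sorted(a, key=lambda s: s[d])  # each pass must be stable

assert a == sorted(words)              # LSD yields fully sorted output
```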
[]
{ "number": "5.5.2", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
Give an example of a uniquely decodable code that is not prefix-free or suffix-free. Answer: {0011, 011, 11, 1110}. (Note that {01, 10, 011, 110} does not work: it is not uniquely decodable, since 01110 can be parsed both as 011 · 10 and as 01 · 110.)
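Unique decodability can be checked mechanically with the Sardinas–Patterson test: generate dangling suffixes round by round, and the code is uniquely decodable if and only if no codeword ever appears among them. A Python sketch of the test (not from the book):

```python
def dangling(u, v):
    # if u is a proper prefix of v, return the leftover suffix of v
    if v.startswith(u) and len(v) > len(u):
        return v[len(u):]
    return None

def uniquely_decodable(code):
    code = set(code)
    # S1: dangling suffixes between distinct codewords
    frontier = set()
    for u in code:
        for v in code:
            if u != v:
                w = dangling(u, v)
                if w is not None:
                    frontier.add(w)
    seen = set()
    while frontier:
        if frontier & code:            # a dangling suffix is a codeword
            return False               # -> some string has two encodings
        seen |= frontier
        nxt = set()
        for s in frontier:             # S_{n+1} from S_n and the code
            for c in code:
                for a, b in ((s, c), (c, s)):
                    w = dangling(a, b)
                    if w is not None:
                        nxt.add(w)
        frontier = nxt - seen          # suffix sets are finite, so this halts
    return True
```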
5.1.3 According to ASCII values, indices of chars 'a' through 'z' start at 97, but the indices in this trace follow the book convention of 'a' starting at 0.
Trace for MSD string sort (same model as used in the book):
Top level of sort(array, 0, 15, 0):
input
0 no 1 is 2 th 3 ti 4 fo 5 al 6 go 7 pe 8 to 9 co 10 to 11 th 12 ai 13 of 14 th 15 pa
d=0
Count frequencies
0 0 1 0 2 a 2 3 b 0 4 c 1 5 d 0 6 e 0 7 f 1 8 g 1 9 h 0 10 i 1 11 j 0 12 k 0 13 l 0 14 m 0 15 n 1 16 o 1 17 p 2 18 q 0 19 r 0 20 s 0 21 t 6 22 u 0 23 v 0 24 w 0 25 x 0 26 y 0 27 z 0
Transform counts to indices
0 0 1 0 2 a 2 3 b 2 4 c 3 5 d 3 6 e 3 7 f 4 8 g 5 9 h 5 10 i 6 11 j 6 12 k 6 13 l 6 14 m 6 15 n 7 16 o 8 17 p 10 18 q 10 19 r 10 20 s 10 21 t 16 22 u 16 23 v 16 24 w 16 25 x 16 26 y 16 27 z 16
Distribute and copy back
0 al 1 ai 2 co 3 fo 4 go 5 is 6 no 7 of 8 pe 9 pa 10 th 11 ti 12 to 13 to 14 th 15 th
Indices at completion of distribute phase
0 0 1 2 2 a 2 3 b 3 4 c 3 5 d 3 6 e 4 7 f 5 8 g 5 9 h 6 10 i 6 11 j 6 12 k 6 13 l 6 14 m 7 15 n 8 16 o 10 17 p 10 18 q 10 19 r 10 20 s 16 21 t 16 22 u 16 23 v 16 24 w 16 25 x 16 26 y 16 27 z 16
Recursively sort subarrays
sort(a, 0, 1, 1); sort(a, 2, 1, 1); sort(a, 2, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 3, 1); sort(a, 4, 4, 1); sort(a, 5, 4, 1); sort(a, 5, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 6, 1); sort(a, 7, 7, 1); sort(a, 8, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 9, 1); sort(a, 10, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1); sort(a, 16, 15, 1);
Sorted result
0 ai 1 al 2 co 3 fo 4 go 5 is 6 no 7 of 8 pa 9 pe 10 th 11 th 12 th 13 ti 14 to 15 to
Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted)
input __ __ output no al ai ai ai ai is ai al al al al th co -- co co co ti fo co fo fo fo fo go fo go go go al is go is is is go no is no no no pe of no of of of to pe of -- pa pa co pa pe pa pe pe to th pa pe -- th th ti th -- th th ai to ti th th th of to to ti th ti th th to to ti to pa th th to to to -- th th to th --
[]
{ "number": "5.5.3", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
Are { 01, 1001, 1011, 111, 1110 } and { 01, 1001, 1011, 111, 1110 } uniquely decodable? If not, find a string with two encodings.
5.1.4 Trace for 3-way string quicksort (same model as used in the book): -- -- -- 0 no is ai ai ai ai 1 is ai co al -- al 2 th co fo -- al co 3 ti fo al fo -- fo 4 fo al go go co go 5 al go -- co -- is 6 go -- is -- fo no 7 pe no -- is -- of 8 to -- no -- go pa 9 co to -- no -- pe 10 to pe pe -- is th 11 th to of of -- th 12 ai th pa -- no th 13 of ti -- pe -- ti 14 th of th pa of to 15 pa th ti -- -- to pa to th pa th th th -- to th pe th -- -- -- to th to th ti th -- ti -- to to --
[]
{ "number": "5.5.4", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
Use RunLength on the file q128x192.bin from the booksite. How many bits are there in the compressed file?
5.1.5 According to ASCII values, indices of chars 'a' through 'z' start at 97, but the indices in this trace follow the book convention of 'a' starting at 0.
Trace for MSD string sort (same model as used in the book):
Top level of sort(array, 0, 13, 0):
input
0 now 1 is 2 the 3 time 4 for 5 all 6 good 7 people 8 to 9 come 10 to 11 the 12 aid 13 of
d=0
Count frequencies
0 0 1 0 2 a 2 3 b 0 4 c 1 5 d 0 6 e 0 7 f 1 8 g 1 9 h 0 10 i 1 11 j 0 12 k 0 13 l 0 14 m 0 15 n 1 16 o 1 17 p 1 18 q 0 19 r 0 20 s 0 21 t 5 22 u 0 23 v 0 24 w 0 25 x 0 26 y 0 27 z 0
Transform counts to indices
0 0 1 0 2 a 2 3 b 2 4 c 3 5 d 3 6 e 3 7 f 4 8 g 5 9 h 5 10 i 6 11 j 6 12 k 6 13 l 6 14 m 6 15 n 7 16 o 8 17 p 9 18 q 9 19 r 9 20 s 9 21 t 14 22 u 14 23 v 14 24 w 14 25 x 14 26 y 14 27 z 14
Distribute and copy back
0 all 1 aid 2 come 3 for 4 good 5 is 6 now 7 of 8 people 9 the 10 time 11 to 12 to 13 the
Indices at completion of distribute phase
0 0 1 2 2 a 2 3 b 3 4 c 3 5 d 3 6 e 4 7 f 5 8 g 5 9 h 6 10 i 6 11 j 6 12 k 6 13 l 6 14 m 7 15 n 8 16 o 9 17 p 9 18 q 9 19 r 9 20 s 14 21 t 14 22 u 14 23 v 14 24 w 14 25 x 14 26 y 14 27 z 14
Recursively sort subarrays
sort(a, 0, 1, 1); sort(a, 2, 1, 1); sort(a, 2, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 2, 1); sort(a, 3, 3, 1); sort(a, 4, 4, 1); sort(a, 5, 4, 1); sort(a, 5, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 5, 1); sort(a, 6, 6, 1); sort(a, 7, 7, 1); sort(a, 8, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 8, 1); sort(a, 9, 13, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1); sort(a, 13, 14, 1);
Sorted result
0 aid 1 all 2 come 3 for 4 good 5 is 6 now 7 of 8 people 9 the 10 the 11 time 12 to 13 to
Trace of recursive calls for MSD string sort (no cutoff for small subarrays, subarrays of size 0 and 1 omitted)
input ____ ___ output now all aid aid aid aid is aid all all all all the come --- come come come time for come for for for for good for good good good all is good is is is good now is now now now people of now of of of to people of people people people come the people --- --- the to time the the the the the to time the the time aid to to time --- to of the to to time to ---- the to to --- to
[]
{ "number": "5.5.5", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
How many bits are needed to encode N copies of the symbol a (as a function of N)? N copies of the sequence abc?
5.1.6 Trace for 3-way string quicksort (same model as used in the book): ---- --- --- --- 0 now is aid aid aid aid aid 1 is aid come all --- --- all 2 the come for --- all all come 3 time for all for --- --- for 4 for all good good come come good 5 all good ---- come ---- ---- is 6 good --- is ---- for for now 7 people now --- is ---- ---- of 8 to --- now --- good good people 9 come to --- now ---- ---- the 10 to people people --- is is the 11 the to of of --- --- time 12 aid the --- ------ now now to 13 of time to people --- --- to of the ------ of of the time the ------ ------ to time people people the the ------ ------ --- ---- the the to the the to ---- ---- ---- time time ---- ---- to to to to -- --
[]
{ "number": "5.5.6", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
Give the result of encoding the strings ab, abab, ababab, abababab, ... (strings consisting of N repetitions of ab) with run-length, Huffman, and LZW encoding. What is the compression ratio as a function of N?
5.1.8 Both MSD string sort and 3-way string quicksort examine all characters in the N keys. That number is equal to 1 + 2 + ... + N = (N^2 + N) / 2 characters. MSD string sort, however, generates (R - 1) * N empty subarrays (an empty subarray for all digits in R other than 'a', in every pass) while 3-way string quicksort generates 2N empty subarrays (empty subarrays for digits smaller than 'a' and for digits higher than 'a', or empty subarrays for digits smaller than '-1' and for digits equal to '-1', in every pass). MSD string sort trace (no cutoff for small subarrays, subarrays of size 0 and 1 omitted): input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ---- 3-way string quicksort trace: input ---- a a a a a a a aa aa ---- aa aa aa aa aaa aaa aa ---- aaa aaa aaa aaaa aaaa aaa aaa ---- aaaa aaaa ... ... aaaa aaaa aaaa ---- ... ---- ... ... ... ... ---- ---- ---- ----
[]
{ "number": "5.5.8", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
In the style of the figure in the text, show the Huffman coding tree construction process when you use Huffman for the string "it was the age of foolishness". How many bits does the compressed bitstream require?
5.1.10 The total number of characters examined by 3-way string quicksort when sorting N fixed-length strings (all of length W) in the worst case is O(N * W * R). This can be seen with a recurrence relation T(W). The base case T(1) is when all the strings have length 1. An example with R = 3 is { "a", "b", "c" }. In the worst case they are in reverse order, for example { "c", "b", "a" }. In this case we only remove one string from the list in each pass. If we consider N = R^W (in this case, W = 1), the number of comparisons is equal to:

Characters examined = Sum[i=0..R] i = R * (R + 1) / 2

To build the worst case for strings of length 2 (T(2)), we take each string from T(1) and append it to the end of each character in R. So for single-character strings "a", "b", "c", with R = 3, the two-character list is: "aa", "ab", "ac", "ba", "bb", "bc", "ca", "cb", "cc". The list can then be split into R groups: one for each character in R that is a prefix to every string of length W - 1. During the partitioning phase all strings that start with "a" will be in the same partition, and the algorithm will do the same process as in T(1), because removing the first character 'a' leads to the same 1-length strings { "c", "b", "a" } as before. The same thing happens for strings starting with "b" and "c". So, for R = 3, the algorithm will check 3 * R + 2 * R + R characters in the first position of the strings (which is 3 + 2 + 1 characters times R groups). Then it will check the second characters in the strings in each of the R groups. For T(W), where W > 2, the list will again be split into R groups: one for each character in R that is a prefix to every string of length W - 2. Quicksort will then remove R strings from the list in each partition. It will then check R * T(W - 1) more characters for each of those groups.
This gives the recurrence T(W) = (R^(W - 1) * Sum[i=0..R] (R - i)) + R * T(W - 1), which simplifies to:

T(W) = (R^(W + 1) + R^W) / 2 + R * T(W - 1)

Solving the recurrence gives us:

T(W) = W * R^W * (R + 1) / 2

Substituting N = R^W:

T(W) = W * N * (R + 1) / 2

which is O(N * W * R).
Thanks to dragon-dreamer (https://github.com/dragon-dreamer) for finding a more accurate worst case: https://github.com/reneargento/algorithms-sedgewick-wayne/issues/153
Thanks to GenevaS (https://github.com/GenevaS) for finding a more accurate worst case: https://github.com/reneargento/algorithms-sedgewick-wayne/issues/245
[]
{ "number": "5.5.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
Suppose that all of the symbol frequencies are equal. Describe the Huffman code.
5.1.13 - Hybrid sort
Idea: use standard MSD string sort for large arrays, in order to get the advantage of multiway partitioning, and 3-way string quicksort for smaller arrays, in order to avoid the negative effects of large numbers of empty bins.
This idea will work well for random strings because, in general, the higher the number of keys to be sorted, the higher the number of non-empty subarrays generated on each pass of MSD string sort. Such a scenario works well due to the advantage of having multiway partitioning. However, MSD string sort will still generate a large number of empty subarrays if there is a large number of equal keys (or a large number of keys with long common prefixes).
3-way string quicksort avoids the negative effects of large numbers of empty bins not only for smaller arrays, but also for large arrays, while also having the benefit of using less space than MSD string sort, since it does not require space for frequency counts or for an auxiliary array. On the other hand, it involves more data movement than MSD string sort when the number of nonempty subarrays is large, because it has to do a series of 3-way partitions to get the effect of the multiway partition. This would not be a problem in the hybrid sort if there were many equal keys in smaller arrays, since 3-way string quicksort would be the algorithm of choice in that situation.
Overall, hybrid sort would be a good choice for random strings. However, a version of hybrid sort that chooses between MSD string sort and 3-way string quicksort based on the percentage of equal keys (choosing MSD string sort if there is a low percentage of equal keys and 3-way string quicksort if there is a high percentage of equal keys) would be more effective than a version that makes the choice based on the number of keys.
[]
{ "number": "5.5.13", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
Characterize the tricky situation in LZW coding. Solution: Whenever it encounters cScSc, where c is a symbol and S is a string, cS is in the dictionary already but cSc is not.
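A minimal Python LZW sketch (7-bit seed alphabet and an unbounded code table, both simplifications relative to the book's 12-bit Java version) shows the situation in action: for "ABABABA" the compressor emits the code for "ABA" on the very step after creating it, so the expander sees a code that is not yet in its table and must reconstruct it as w + w[0].

```python
def lzw_compress(s):
    table = {chr(i): i for i in range(128)}   # seed with single chars
    nxt, out, w = 128, [], ""
    for c in s:
        if w + c in table:
            w += c                            # extend the current match
        else:
            out.append(table[w])
            table[w + c] = nxt                # add new table entry
            nxt += 1
            w = c
    if w:
        out.append(table[w])
    return out

def lzw_expand(codes):
    table = {i: chr(i) for i in range(128)}
    nxt = 128
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        if k in table:
            entry = table[k]
        elif k == nxt:
            entry = w + w[0]   # tricky case (cScSc): code not in table yet
        else:
            raise ValueError("bad compressed code")
        out.append(entry)
        table[nxt] = w + entry[0]
        nxt += 1
        w = entry
    return "".join(out)

codes = lzw_compress("ABABABA")
assert 130 in codes                # code for "ABA", used right after creation
assert lzw_expand(codes) == "ABABABA"
```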
5.1.17 - In-place key-indexed counting LSD and MSD sorts that use only a constant amount of extra space are not stable. Counterexample for LSD sort: The array ["4PGC938", "2IYE230", "3CIO720", "1ICK750", "1OHV845", "4JZY524", "1ICK750", "3CIO720", "1OHV845", "1OHV845", "2RLA629", "2RLA629", "3ATW723"] after being sorted by in-place LSD becomes: ["1OHV845", "1OHV845", "1OHV845", "1ICK750", "1ICK750", "2RLA629", "2IYE230", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4PGC938", "4JZY524"] If it were sorted by non-in-place LSD the output would be: ["1ICK750", "1ICK750", "1OHV845", "1OHV845", "1OHV845", "2IYE230", "2RLA629", "2RLA629", "3ATW723", "3CIO720", "3CIO720", "4JZY524", "4PGC938"] Counterexample for both LSD and MSD sorts: The array ["CAA" (index 0), "ABB" (index 1), "ABB" (index 2)] after being sorted by either in-place LSD or in-place MSD becomes: ["ABB" (original index 2), "ABB" (original index 1), "CAA" (original index 0)] If it were sorted by non-in-place LSD or MSD the output would be: ["ABB" (original index 1), "ABB" (original index 2), "CAA" (original index 0)]
[]
{ "number": "5.5.17", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
Prove the following fact about Huffman codes: If the frequency of symbol i is strictly larger than the frequency of symbol j, then the length of the codeword for symbol i is less than or equal to the length of the codeword for symbol j.
5.1.22 - Timings
Running 10 experiments with 1000000 strings for random decimal keys (with fixed length of 10 characters), random CA license plates, random fixed-length words (with fixed length of 10 characters) and random variable-length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type | Sort type | Average time spent
Decimal keys | Least-Significant-Digit | 2.30
Decimal keys | Most-Significant-Digit | 0.45
Decimal keys | 3-way string quicksort | 0.32
CA license plates | Least-Significant-Digit | 1.48
CA license plates | Most-Significant-Digit | 0.41
CA license plates | 3-way string quicksort | 0.33
Fixed-length words | Least-Significant-Digit | 2.52
Fixed-length words | Most-Significant-Digit | 0.28
Fixed-length words | 3-way string quicksort | 0.35
Variable-length items | Most-Significant-Digit | 1.80
Variable-length items | 3-way string quicksort | 0.55

The experiment results show that for all random string types, LSD sort had the worst results. For random decimal keys, random CA license plates and random variable-length items, 3-way string quicksort had the best results. For random fixed-length words, MSD had the best running time.
Having to always scan all characters in all keys may explain why LSD sort had the slowest running times for all random string types. 3-way string quicksort may have had good results because it does not create a high number of empty subarrays, as MSD sort does, and because it handles keys with long common prefixes well (which are likely to occur in random decimal keys, random CA license plates and random variable-length items). Random fixed-length words are less likely to have long common prefixes (because all their characters are in the range [40, 125]), which may explain why MSD sort had better results than both LSD sort and 3-way string quicksort.
[]
{ "number": "5.5.22", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
What would be the result of breaking up a Huffman-encoded string into five-bit characters and Huffman-encoding that string?
5.1.23 - Array accesses
Running 10 experiments with 1000000 strings for random decimal keys (with fixed length of 10 characters), random CA license plates, random fixed-length words (with fixed length of 10 characters) and random variable-length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type | Sort type | Number of array accesses
Decimal keys | Least-Significant-Digit | 40000000
Decimal keys | Most-Significant-Digit | 35443947
Decimal keys | 3-way string quicksort | 78124405
CA license plates | Least-Significant-Digit | 28000000
CA license plates | Most-Significant-Digit | 25703588
CA license plates | 3-way string quicksort | 82889002
Fixed-length words | Least-Significant-Digit | 40000000
Fixed-length words | Most-Significant-Digit | 14841196
Fixed-length words | 3-way string quicksort | 95310075
Variable-length items | Most-Significant-Digit | 72121457
Variable-length items | 3-way string quicksort | 97523400

The experiment results show that for all random string types, 3-way string quicksort accessed the array more times than LSD and MSD sort; LSD sort accessed the array more times than MSD sort; and MSD sort had the lowest number of array accesses.
A possible explanation for these results is the fact that 3-way string quicksort accesses the array 4 times for each exchange operation, which leads to more array accesses than both LSD and MSD sorts, which do not make in-place exchanges. LSD sort will always access the array 4 * N * W times, where N is the number of strings and W is the length of the strings (which is equivalent to 4 array accesses for each character in the keys), and MSD sort will only access the array while the strings have common prefixes, which explains why MSD sort has the lowest number of array accesses of all sort types.
[]
{ "number": "5.5.23", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
In the style of the figures in the text, show the encoding trie and the compression and expansion processes when LZW is used for the string it was the best of times it was the worst of times
5.1.24 - Rightmost character accessed
Running 1 experiment with 1000000 strings for random decimal keys (with fixed length of 10 characters), random CA license plates, random fixed-length words (with fixed length of 10 characters) and random variable-length items (with given values 'A' and 'B'). The cutoff for small subarrays used in Most-Significant-Digit sort was equal to 15.

Random string type | Sort type | Rightmost character accessed
Decimal keys | Most-Significant-Digit | 9
Decimal keys | 3-way string quicksort | 9
CA license plates | Most-Significant-Digit | 6
CA license plates | 3-way string quicksort | 6
Fixed-length words | Most-Significant-Digit | 5
Fixed-length words | 3-way string quicksort | 5
Variable-length items | Most-Significant-Digit | 20
Variable-length items | 3-way string quicksort | 20

In all experiments the rightmost character position accessed by MSD sort and by 3-way string quicksort was the same, which shows that both algorithms scan the same characters.
[]
{ "number": "5.5.24", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Exercise" }
Long repeats. Estimate the compression ratio achieved by run-length, Huffman, and LZW encoding for a string of length 2N formed by concatenating two copies of a random ASCII string of length N (see EXERCISE 5.5.9), under any assumptions that you think are reasonable.
5.5.27 - Long repeats

Compressing 2 * 1000 random characters (N = 1000), 16000 bits in total (8 bits per character).

% java -cp algs4.jar:. RunLengthEncoding - < 5.5.27_random.txt | java -cp algs4.jar:. BinaryDump 0
53736 bits
Compression ratio: 53736 / 16000 = 336%

% java -cp algs4.jar:. edu.princeton.cs.algs4.Huffman - < 5.5.27_random.txt | java -cp algs4.jar:. BinaryDump 0
12624 bits
Compression ratio: 12624 / 16000 = 79%

% java -cp algs4.jar:. edu.princeton.cs.algs4.LZW - < 5.5.27_random.txt | java -cp algs4.jar:. BinaryDump 0
13416 bits
Compression ratio: 13416 / 16000 = 84%

As expected for random text, run-length encoding expands the input (runs of equal bits are short), while Huffman and LZW achieve modest compression; none of the three fully exploits the fact that the second half of the string is an exact copy of the first.
[]
{ "number": "5.5.27", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 5, "chapter_title": "Strings", "section": 5.5, "section_title": "Data Compression", "type": "Creative Problem" }
Molecules travel very quickly (faster than a speeding jet) but diffuse slowly because they collide with other molecules, thereby changing their direction. Extend the model to have a boundary shape where two vessels are connected by a pipe containing two different types of particles. Run a simulation and measure the fraction of particles of each type in each vessel as a function of time.
Results

Time   | Left vessel                 | Right vessel
0      | 10 (type 1: 10, type 2: 0)  | 10 (type 1: 0, type 2: 10)
1000   | 14 (type 1: 9, type 2: 5)   |  6 (type 1: 1, type 2: 5)
2000   | 11 (type 1: 6, type 2: 5)   |  9 (type 1: 4, type 2: 5)
3000   | 13 (type 1: 7, type 2: 6)   |  6 (type 1: 2, type 2: 4)
4000   | 13 (type 1: 7, type 2: 6)   |  7 (type 1: 3, type 2: 4)
5000   | 10 (type 1: 6, type 2: 4)   | 10 (type 1: 4, type 2: 6)
6000   | 11 (type 1: 6, type 2: 5)   |  9 (type 1: 4, type 2: 5)
7000   | 10 (type 1: 5, type 2: 5)   | 10 (type 1: 5, type 2: 5)
8000   |  8 (type 1: 3, type 2: 5)   | 12 (type 1: 7, type 2: 5)
9000   | 10 (type 1: 3, type 2: 7)   | 10 (type 1: 7, type 2: 3)
10000  |  9 (type 1: 5, type 2: 4)   | 11 (type 1: 5, type 2: 6)

Over time the system tends to achieve a balance between the numbers of particles of both types in both vessels.
[]
{ "number": "6.9", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.1, "section_title": "Collision Simulation", "type": "Exercise" }
After running a simulation, negate all velocities and then run the system backward. It should return to its original state! Measure roundoff error by measuring the difference between the final and original states of the system.
Roundoff error: 0.22176242098152166
[]
{ "number": "6.10", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.1, "section_title": "Collision Simulation", "type": "Exercise" }
Add a method pressure() to Particle that measures pressure by accumulating the number and magnitude of collisions against walls. The pressure of the system is the sum of these quantities. Then add a method pressure() to CollisionSystem and write a client that validates the equation pv = nRT.
Run | Pressure from P V = n R T | Pressure from wall collisions
1   | 5.480029392202992E-4      | 4.394236724937985E-4
2   | 3.855098323894351E-4      | 4.389151787833636E-4
3   | 4.675754466697337E-4      | 4.3342966247778204E-4
4   | 4.441454588925588E-4      | 4.4036995403253654E-4
5   | 5.251377268433578E-4      | 4.385114019830638E-4
6   | 4.931824176831048E-4      | 4.3656839984208926E-4
7   | 6.345006963980435E-4      | 4.3819347783853905E-4
8   | 5.26662616889756E-4       | 4.3843233793566013E-4
9   | 4.024926450876052E-4      | 4.378203542088311E-4
10  | 6.089323091252579E-4      | 4.3764997179323103E-4

The pressures obtained from the P V = n R T formula and from the measurement based on wall collisions are similar, with small differences.
[]
{ "number": "6.11", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.1, "section_title": "Collision Simulation", "type": "Exercise" }
Instrument the priority queue and test Pressure at various temperatures to identify the computational bottleneck. If warranted, try switching to a different priority-queue implementation for better performance at high temperatures.
Tests done with a random baseline temperature T and with 100 T, 10000 T, 1000000 T and 100000000 T. The standard priority queue was used for all temperatures; the index priority queue was also used for the two highest temperatures (1000000 T and 100000000 T).

Results:

**** Standard priority queue tests ****

Temperature: 5.817661439511093E12
  insert: 0.02200    deleteMin: 0.01300    isEmpty: 0.00200
  Computational bottleneck: insert operations

Temperature: 8.270951314920601E14
  insert: 0.00700    deleteMin: 0.01400    isEmpty: 0.00000
  Computational bottleneck: deleteMin operations

Temperature: 8.2709513149206064E16
  insert: 0.00800    deleteMin: 0.01500    isEmpty: 0.00300
  Computational bottleneck: deleteMin operations

Temperature: 8.2709513149206047E18
  insert: 0.01900    deleteMin: 0.06700    isEmpty: 0.00300
  Computational bottleneck: deleteMin operations

Temperature: 8.270951314920604E20
  insert: 0.04700    deleteMin: 0.12100    isEmpty: 0.02500
  Computational bottleneck: deleteMin operations

**** Index priority queue tests ****

Temperature: 6.3274920297362524E18
  insert: 0.00900    deleteMin: 0.00500    isEmpty: 0.00000
  contains: 0.00300  min: 0.00100          delete: 0.00400
  Computational bottleneck: insert operations

Temperature: 6.327492029736252E20
  insert: 0.00600    deleteMin: 0.00700    isEmpty: 0.00000
  contains: 0.00300  min: 0.00200          delete: 0.00500
  Computational bottleneck: deleteMin operations

(Times are total seconds spent on each operation.) For the standard priority queue at the lowest temperature the computational bottleneck was the insert operations; at all higher temperatures it was the deleteMin operations. For the index priority queue the bottleneck was likewise the insert operations in the first test and, at the higher temperature, the deleteMin operations.
[]
{ "number": "6.13", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.1, "section_title": "Collision Simulation", "type": "Exercise" }
Suppose that, in a three-level tree, we can afford to keep a links in internal memory, between b and 2b links in pages representing internal nodes, and between c and 2c items in pages representing external nodes. What is the maximum number of items that we can hold in such a tree, as a function of a, b, and c?
The maximum number of items is achieved when there are a links in internal memory (the links between the first and second levels of the tree), 2b links in each page representing an internal node (the highest possible branching) and 2c items in each page representing an external node (the highest possible number of items per external node).

Maximum number of items = a * 2b * 2c = 4abc
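A trivial sketch of the bound (class name and parameter values are illustrative):

```java
public class ThreeLevelCapacity {
    // a links in internal memory, at most 2b links per internal-node page,
    // at most 2c items per external-node page: a * 2b * 2c items in total.
    public static long maxItems(long a, long b, long c) {
        return a * (2 * b) * (2 * c);
    }

    public static void main(String[] args) {
        // e.g. a = 1000, b = c = 500 gives room for a billion items
        System.out.println(maxItems(1000, 500, 500));
    }
}
```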
[]
{ "number": "6.14", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.2, "section_title": "B-Trees", "type": "Exercise" }
Estimate the average number of probes per search in a B-tree for S random searches, in a typical cache system, where the T most-recently-accessed pages are kept in memory (and therefore add 0 to the probe count). Assume that S is much larger than T.
As seen in Proposition B, a search in a B-tree of order M with N items requires between log_M N and log_(M/2) N probes. Assume that:

1- We are in the setting of the example given with Proposition B, with M = 1000 and N less than 62.5 billion, where the number of probes is less than 4; i.e., the B-tree has height 3.
2- The average number of probes per search when the target page is not in memory is 2 (3 / 2 rounded up).
3- T (the number of most-recently-accessed pages kept in memory) covers 5% of the S random searches; this is reasonable since S is much larger than T.
4- The T pages are already in memory, so a cache hit adds 0 to the probe count.

Average number of probes = 0 * 0.05 + 2 * 0.95 = 1.9

The average number of probes per search in this B-tree is 1.9.
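The weighted average can be sketched directly (the 5% hit rate is the assumption made above, not a measured value; the class name is illustrative):

```java
public class CacheProbes {
    // Expected probes per search: cache hits cost 0 probes,
    // misses cost 'missProbes' probes on average.
    public static double avgProbes(double hitRate, double missProbes) {
        return (1.0 - hitRate) * missProbes;
    }

    public static void main(String[] args) {
        // 5% hit rate, 2 probes per miss -> ~1.9 probes per search
        System.out.println(avgProbes(0.05, 2.0));
    }
}
```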
[]
{ "number": "6.18", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.2, "section_title": "B-Trees", "type": "Exercise" }
Consider the sibling split (or B*-tree) heuristic for B-trees: When it comes time to split a node because it contains M entries, we combine the node with its sibling. If the sibling has k entries with k < M - 1, we reallocate the items giving the sibling and the full node each about (M+k)/2 entries. Otherwise, we create a new node and give each of the three nodes about 2M/3 entries. Also, we allow the root to grow to hold about 4M/3 items, splitting it and creating a new root node with two entries when it reaches that bound. State bounds on the number of probes used for a search or an insertion in a B*-tree of order M with N items. Compare your bounds with the corresponding bounds for B-trees (see PROPOSITION B). Develop an insert implementation for B*-trees.
6.20 - B* trees

Bounds on the number of probes used for a search or an insertion in a B*-tree of order M with N items: between log_M N and log_(2M/3) N probes.

This is because almost all internal nodes of the tree have between 2M / 3 and M - 1 links, since they are formed from a split of a full node with M keys and can only grow in size. The only exception in which an internal node can have fewer than 2M / 3 links arises when a node's child keys are reallocated during the creation of a new child node and its rightmost child does not receive enough entries; this happens when there are K keys to be reallocated to C child nodes and K / C < 2M / 3.

As seen in Proposition B, the corresponding bounds for a B-tree of order M with N items are between log_M N and log_(M/2) N probes. Since log_(2M/3) N < log_(M/2) N, B*-trees are more efficient for both search and insert operations. This efficiency comes with a tradeoff: the split() operation, which runs in time proportional to M in B-trees, takes time proportional to M^2 in B*-trees.
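To see how much the stricter occupancy guarantee helps, compare the worst-case heights implied by the two bounds (a rough sketch; M = 1000 and N = 62.5 billion are the example values from Proposition B, and the class name is illustrative):

```java
public class TreeHeightBounds {
    // Worst-case height ~ log base 'minBranch' of n, where minBranch is
    // the guaranteed minimum branching factor of an internal node.
    public static double worstHeight(double minBranch, double n) {
        return Math.log(n) / Math.log(minBranch);
    }

    public static void main(String[] args) {
        double n = 62.5e9, m = 1000;
        // B-tree: nodes at least half full; B*-tree: at least 2/3 full
        System.out.println(worstHeight(m / 2, n));      // ~4.0
        System.out.println(worstHeight(2 * m / 3, n));  // ~3.8
    }
}
```

The B*-tree's larger minimum branching factor yields a slightly smaller worst-case height, hence fewer probes in the worst case.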
[]
{ "number": "6.20", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.2, "section_title": "B-Trees", "type": "Exercise" }
Write a program to compute the average number of external pages for a B-tree of order M built from N random insertions into an initially empty tree. Run your program for reasonable values of M and N.
Results:

Order M | Number of items | Avg number of external pages
4       | 100000          | 42865
4       | 1000000         | 428592
4       | 10000000        | 4276006
16      | 100000          | 9424
16      | 1000000         | 94288
16      | 10000000        | 940648
64      | 100000          | 2277
64      | 1000000         | 22822
64      | 10000000        | 227472
256     | 100000          | 542
256     | 1000000         | 5650
256     | 10000000        | 57418

As expected, the higher the order M, the lower the number of external pages in the B-tree. Comparing experiments with the same order M but different numbers of items N: the more items, the more external pages.
[]
{ "number": "6.21", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.2, "section_title": "B-Trees", "type": "Exercise" }
If your system supports virtual memory, design and conduct experiments to compare the performance of B-trees with that of binary search, for random searches in a huge symbol table.
Results:

Number of searches | B-tree time | Binary search time
1000               | 0.005       | 0.001
100000             | 0.082       | 0.013
10000000           | 6.352       | 0.910

For random searches in a huge symbol table (with 10,000,000 entries), binary search performs better than B-trees.

Reference related to virtual memory on Macs: https://www.howtogeek.com/319151/why-you-shouldnt-turn-off-virtual-memory-on-your-mac/
[]
{ "number": "6.22", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.2, "section_title": "B-Trees", "type": "Exercise" }
For your internal-memory implementation of Page in EXERCISE 6.15, run experiments to determine the value of M that leads to the fastest search times for a B-tree implementation supporting random search operations in a huge symbol table. Restrict your attention to values of M that are multiples of 100.
Results:

Order M | Number of searches | Total time
100     | 1000               | 0.005
100     | 100000             | 0.198
100     | 10000000           | 19.973
200     | 1000               | 0.002
200     | 100000             | 0.195
200     | 10000000           | 18.331
400     | 1000               | 0.003
400     | 100000             | 0.188
400     | 10000000           | 17.533
1000    | 1000               | 0.002
1000    | 100000             | 0.176
1000    | 10000000           | 16.818
1500    | 1000               | 0.002
1500    | 100000             | 0.162
1500    | 10000000           | 15.784
2000    | 1000               | 0.003
2000    | 100000             | 0.205
2000    | 10000000           | 17.821

The value of M (order) that leads to the fastest search times for a B-tree doing random search operations in a huge symbol table (with 10,000,000 entries) is 1500.
[]
{ "number": "6.23", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.2, "section_title": "B-Trees", "type": "Exercise" }
Run experiments to compare search times for internal B-trees (using the value of M determined in the previous exercise), linear probing hashing, and red-black trees for random search operations in a huge symbol table.
Results:

Number of searches | B-tree time | Linear probing hashing time | Red-black tree time
1000               | 0.004       | 0.001                       | 0.002
100000             | 0.173       | 0.019                       | 0.114
10000000           | 16.659      | 1.931                       | 11.297

In the experiments using B-trees of order 1500, linear probing hashing and red-black trees for random searches in a huge symbol table (with 10,000,000 entries), the best search performance by far was achieved by linear probing hashing, which took 1.931 seconds to search 10 million random keys. It was followed by red-black trees, at 11.297 seconds. B-trees had the worst search performance, taking 16.659 seconds for the same searches. These results can be explained by the fact that linear probing hashing performs searches in O(1) on average, while B-trees perform searches in O(log_(M/2) N) - O(log_750 N) in this case - and red-black trees perform searches in O(lg N).
[]
{ "number": "6.24", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.2, "section_title": "B-Trees", "type": "Exercise" }
Give, in the style of the figure on page 882, the suffixes, sorted suffixes, index() and lcp() tables for the following strings: a. abacadaba b. mississippi c. abcdefghij d. aaaaaaaaaa
a. abacadaba

 i  suffix        |  i  index(i)  lcp(i)  sorted suffix
 0  abacadaba     |  0     8        0     a
 1  bacadaba      |  1     6        1     aba
 2  acadaba       |  2     0        3     abacadaba
 3  cadaba        |  3     2        1     acadaba
 4  adaba         |  4     4        1     adaba
 5  daba          |  5     7        0     ba
 6  aba           |  6     1        2     bacadaba
 7  ba            |  7     3        0     cadaba
 8  a             |  8     5        0     daba

b. mississippi

 i  suffix        |  i  index(i)  lcp(i)  sorted suffix
 0  mississippi   |  0    10        0     i
 1  ississippi    |  1     7        1     ippi
 2  ssissippi     |  2     4        1     issippi
 3  sissippi      |  3     1        4     ississippi
 4  issippi       |  4     0        0     mississippi
 5  ssippi        |  5     9        0     pi
 6  sippi         |  6     8        1     ppi
 7  ippi          |  7     6        0     sippi
 8  ppi           |  8     3        2     sissippi
 9  pi            |  9     5        1     ssippi
10  i             | 10     2        3     ssissippi

c. abcdefghij

 i  suffix        |  i  index(i)  lcp(i)  sorted suffix
 0  abcdefghij    |  0     0        0     abcdefghij
 1  bcdefghij     |  1     1        0     bcdefghij
 2  cdefghij      |  2     2        0     cdefghij
 3  defghij       |  3     3        0     defghij
 4  efghij        |  4     4        0     efghij
 5  fghij         |  5     5        0     fghij
 6  ghij          |  6     6        0     ghij
 7  hij           |  7     7        0     hij
 8  ij            |  8     8        0     ij
 9  j             |  9     9        0     j

d. aaaaaaaaaa

 i  suffix        |  i  index(i)  lcp(i)  sorted suffix
 0  aaaaaaaaaa    |  0     9        0     a
 1  aaaaaaaaa     |  1     8        1     aa
 2  aaaaaaaa      |  2     7        2     aaa
 3  aaaaaaa       |  3     6        3     aaaa
 4  aaaaaa        |  4     5        4     aaaaa
 5  aaaaa         |  5     4        5     aaaaaa
 6  aaaa          |  6     3        6     aaaaaaa
 7  aaa           |  7     2        7     aaaaaaaa
 8  aa            |  8     1        8     aaaaaaaaa
 9  a             |  9     0        9     aaaaaaaaaa
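The index() and lcp() tables can be checked mechanically; a brute-force sketch (quadratic, which is fine for strings this short; the class name is illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;

public class SuffixTables {
    // index(i) = starting position of the i-th smallest suffix of s.
    public static int[] index(String s) {
        int n = s.length();
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        // sort suffix start positions by the suffix they denote
        Arrays.sort(order, Comparator.comparing(s::substring));
        int[] idx = new int[n];
        for (int i = 0; i < n; i++) idx[i] = order[i];
        return idx;
    }

    // lcp(i) = longest common prefix of sorted suffixes i and i-1 (lcp(0) = 0).
    public static int[] lcp(String s) {
        int[] idx = index(s);
        int[] lcp = new int[s.length()];
        for (int i = 1; i < s.length(); i++) {
            String prev = s.substring(idx[i - 1]), cur = s.substring(idx[i]);
            int k = 0;
            while (k < Math.min(prev.length(), cur.length())
                    && prev.charAt(k) == cur.charAt(k)) k++;
            lcp[i] = k;
        }
        return lcp;
    }

    public static void main(String[] args) {
        // matches the mississippi table above
        System.out.println(Arrays.toString(index("mississippi")));
        System.out.println(Arrays.toString(lcp("mississippi")));
    }
}
```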
[]
{ "number": "6.25", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.3, "section_title": "Suffix Arrays", "type": "Exercise" }
Identify the problem with the following code fragment to compute all the suffixes for suffix sort: suffix = ""; for (int i = s.length() - 1; i >= 0; i--) { suffix = s.charAt(i) + suffix; suffixes[i] = suffix; }
The problem with the code fragment is that each iteration evaluates s.charAt(i) + suffix, which creates a brand-new string by copying all the characters of the suffix built so far. The total work is therefore 1 + 2 + ... + N ~ N^2 / 2, so the loop takes quadratic time and space. The same problem affects the alternative suffixes[i] = s.substring(i) in Java 7 and later, where substring() takes linear time and space; in Java 6, substring() took constant time and space (the substring shared the original string's backing array), which made the substring-based loop linear.
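A sketch of the two variants side by side (class and method names are illustrative); they produce the same suffixes and differ only in cost:

```java
public class SuffixBuild {
    // The fragment from the exercise: each concatenation copies the whole
    // suffix built so far, so total work is ~ N^2 / 2 characters.
    public static String[] byConcatenation(String s) {
        String[] suffixes = new String[s.length()];
        String suffix = "";
        for (int i = s.length() - 1; i >= 0; i--) {
            suffix = s.charAt(i) + suffix;   // copies O(N - i) characters
            suffixes[i] = suffix;
        }
        return suffixes;
    }

    // Linear overall in Java 6 (shared backing array); quadratic again
    // since Java 7, where substring() also copies its characters.
    public static String[] bySubstring(String s) {
        String[] suffixes = new String[s.length()];
        for (int i = 0; i < s.length(); i++)
            suffixes[i] = s.substring(i);
        return suffixes;
    }
}
```

Truly linear time and space requires not materializing the suffix strings at all, e.g. sorting an array of start indices into the original string.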
[]
{ "number": "6.26", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.3, "section_title": "Suffix Arrays", "type": "Exercise" }
Under the assumptions described in SECTION 1.4. give the memory usage of a SuffixArray object with a string of length N.
Suffix
* object overhead -> 16 bytes
* String reference (text) -> 8 bytes
* int value (index) -> 4 bytes
* padding -> 4 bytes
Amount of memory needed: 16 + 8 + 4 + 4 = 32 bytes

SuffixArray
* object overhead -> 16 bytes
* Suffix[] (suffixes)
    object overhead -> 16 bytes
    int value (length) -> 4 bytes
    padding -> 4 bytes
    Suffix references -> 8N bytes
    N Suffix objects -> 32N bytes
Amount of memory needed: 16 + 16 + 4 + 4 + 8N + 32N = 40N + 40 bytes
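The count can be written down as a formula (a sketch under the Section 1.4 memory model; the class name is illustrative):

```java
public class SuffixArrayMemory {
    // Section 1.4 model: 16-byte object overhead, 8-byte references,
    // 4-byte ints, padding to a multiple of 8 bytes.
    public static long suffixBytes() {
        return 16 + 8 + 4 + 4;                      // one Suffix object: 32 bytes
    }

    public static long suffixArrayBytes(long n) {
        long arrayBytes = 16 + 4 + 4 + 8 * n;       // Suffix[] overhead + references
        return 16 + arrayBytes + n * suffixBytes(); // total: 40N + 40 bytes
    }
}
```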
[]
{ "number": "6.29", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.3, "section_title": "Suffix Arrays", "type": "Exercise" }
Write a SuffixArray client LCS that take two file-names as command-line arguments, reads the two text files, and finds the longest substring that appears in both in linear time. (In 1970, D. Knuth conjectured that this task was impossible.) Hint: Create a suffix array for s#t where s and t are the two text strings and # is a character that does not appear in either.
Suffix array with Suffix class:

Running time: O(N^2) key compares in the worst case, but about 2N ln N on average (chapter 6, Proposition C). The worst case occurs when sorting the suffixes of a string consisting of N copies of the same character, or suffixes that are already sorted.

Memory: 40N + 40 bytes (as computed in exercise 6.29).

Suffix array without Suffix class:

Running time: O(N * w * log R), where w is the average suffix length and R is the alphabet size. For suffix arrays this is O(N * N/2 * log 256) = O(N^2) character compares. The worst case occurs when sorting the suffixes of a string consisting of N copies of the same character. On average it runs in about 2N ln N character compares (chapter 5, Proposition E).

Memory:
SuffixArrayNoInnerClass
* object overhead -> 16 bytes
* char[] reference (textChars) -> 8 bytes
* char[] (textChars)
    object overhead -> 16 bytes
    int value (length) -> 4 bytes
    padding -> 4 bytes
    char items -> 2N bytes
* int[] reference (indexes) -> 8 bytes
* int[] (indexes)
    object overhead -> 16 bytes
    int value (length) -> 4 bytes
    padding -> 4 bytes
    int items -> 4N bytes
* int value (textLength) -> 4 bytes
* padding -> 4 bytes
Amount of memory needed: 16 + 8 + 16 + 4 + 4 + 2N + 8 + 16 + 4 + 4 + 4N + 4 + 4 = 6N + 88 bytes

The running time of the suffix array implementation without the Suffix class is better than that of the implementation using it: the former's worst case is O(N^2) character compares, while the latter's is O(N^2) key compares; on average, the former makes about 2N ln N character compares against the latter's 2N ln N key compares. The difference becomes clear when dealing with suffixes that have long common prefixes, where the implementation without the Suffix class does not have to re-compare those prefixes during the sort.

In terms of memory usage, the implementation without the Suffix class also uses less memory (6N + 88 bytes) than the implementation with it (40N + 40 bytes). Overall, the suffix array implementation without the Suffix class is better in both running time and memory usage.
[]
{ "number": "6.30", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.3, "section_title": "Suffix Arrays", "type": "Exercise" }
If capacities are positive integers less than M, what is the maximum possible flow value for any st-network with V vertices and E edges? Give two answers, depending on whether or not parallel edges are allowed.
Maximum possible flow value for any st-network with V vertices and E edges (with positive integer capacities less than M):

Let C = M - 1 be the highest possible edge capacity.

1- Parallel edges not allowed

Max flow = (V - 2) * C = (V - 2) * (M - 1)

Since parallel edges are not allowed, connect the source to each of the V - 2 intermediate vertices with an edge of capacity C, and each intermediate vertex to the sink with an edge of capacity C. Saturating all these edges gives a flow of (V - 2) * C.

2- Parallel edges allowed

Max flow = E * C = E * (M - 1)

With parallel edges allowed, connect the source directly to the sink with E parallel edges, each of capacity C. Saturating all of them gives a flow of E * C.
[]
{ "number": "6.36", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.4, "section_title": "Maxflow", "type": "Exercise" }
Give an algorithm to solve the maxflow problem for the case that the network forms a tree if the sink is removed.
If the network forms a tree when the sink is removed, then every vertex other than the sink has a unique path from the source, and the only non-tree edges are the edges into the sink. For example: the source connects to vertices 1, 2 and 3; vertex 1 connects to vertex 4 and vertex 2 connects to vertex 5; the sink receives edges from vertices 3, 4 and 5. Removing the sink leaves a tree rooted at the source.

In this case the maxflow can be computed with a single depth-first traversal of the tree, computing for each vertex the maximum flow that can be pushed out of it:

- If v is the sink, it absorbs all the flow offered to it.
- Otherwise, the flow out of v is the sum, over its outgoing edges (v, w), of the flow that w can forward, where each edge contributes at most its capacity, and the total is capped by the flow available at v.

The value computed at the source is the maxflow.
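A hedged recursive sketch of this traversal (adjacency maps, class and method names are illustrative; vertices are ints, and the tree shape is assumed, i.e. no vertex except the sink has more than one incoming edge):

```java
import java.util.Collections;
import java.util.Map;

public class TreeMaxflow {
    // cap.get(v) maps each head w of an edge v->w to that edge's capacity;
    // the sink has no outgoing edges.
    public static int maxflow(Map<Integer, Map<Integer, Integer>> cap,
                              int source, int sink) {
        return flowOut(cap, source, sink, Integer.MAX_VALUE);
    }

    // Maximum flow that can travel from v to the sink, given that at most
    // 'available' units of flow can reach v from the source.
    private static int flowOut(Map<Integer, Map<Integer, Integer>> cap,
                               int v, int sink, int available) {
        if (v == sink) return available;   // sink absorbs everything offered
        int total = 0;
        Map<Integer, Integer> out = cap.getOrDefault(v, Collections.emptyMap());
        for (Map.Entry<Integer, Integer> e : out.entrySet()) {
            int w = e.getKey(), c = e.getValue();
            total += flowOut(cap, w, sink, Math.min(available, c));
        }
        return Math.min(total, available); // flow conservation at v
    }
}
```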
[]
{ "number": "6.37", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.4, "section_title": "Maxflow", "type": "Exercise" }
If true provide a short proof, if false give a counterexample: a. In any max flow, there is no directed cycle on which every edge carries positive flow b. There exists a max flow for which there is no directed cycle on which every edge carries positive flow c. If all edge capacities are distinct, the max flow is unique d. If all edge capacities are increased by an additive constant, the min cut remains unchanged e. If all edge capacities are multiplied by a positive integer, the min cut remains unchanged
6.38 - True or false

a. In any max flow, there is no directed cycle on which every edge carries positive flow.

False. Counterexample: take edges s->1 (capacity 2, flow 2), 1->s (capacity 1, flow 1) and 1->t (capacity 1, flow 1). This flow has value 1, which is maximum (the only edge into t has capacity 1), and the directed cycle s->1->s carries positive flow on every edge.

b. There exists a max flow for which there is no directed cycle on which every edge carries positive flow.

True. Proof: Let f be a maximum flow and let C be a directed cycle on which every edge carries positive flow. Let g be the minimum of f(e) over the edges e in C, i.e. the minimum edge flow among the edges of the cycle. Reducing the flow on each edge of C by g preserves flow conservation and the value of the max flow, and sets the flow of at least one edge of C to zero. Repeating this for every such cycle yields a max flow with no positive-flow cycle.

c. If all edge capacities are distinct, the max flow is unique.

False. Counterexample: take edges s->1 (capacity 1), 1->2 (capacity 2), 1->t (capacity 3) and 2->t (capacity 4). Two distinct max flows of value 1 exist: route the single unit of flow along s->1->t, or along s->1->2->t.

d. If all edge capacities are increased by an additive constant, the min cut remains unchanged.

False. Counterexample: the source connects to vertex 1 by an edge of capacity 4, and vertex 1 connects to the sink through three disjoint two-edge paths whose edges all have capacity 1. The min cut consists of the three capacity-1 edges leaving vertex 1, with value 3. After adding 1 to every capacity, the three paths have capacity 2 each (total 6) while the edge s->1 has capacity 5, so the min cut becomes the single edge s->1.

e. If all edge capacities are multiplied by a positive integer, the min cut remains unchanged.

True. Proof: Let g be the positive integer by which every edge capacity is multiplied. The value of every cut also gets multiplied by g, so the relative order of the cuts does not change and the min cut before the multiplication remains the min cut after it.

References:
http://algo2.iti.kit.edu/sanders/courses/algdat03/sol3.pdf
https://stackoverflow.com/questions/40277603/is-minimum-cut-same-for-the-graph-after-increasing-edge-capacity-by-1-for-all-ed
[]
{ "number": "6.38", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.4, "section_title": "Maxflow", "type": "Exercise" }
Complete the proof of PROPOSITION G: Show that each time an edge is a critical edge, the length of the augmenting path through it must increase by 2.
Proposition: Each time an edge is a critical edge, the length of the augmenting path through it must increase by 2.

Proof: Let df(s, v) be the distance from s to v along augmenting paths at the moment edge (u, v) is critical. Since augmenting paths are shortest paths in the residual network:

df(s, v) = df(s, u) + 1

After (u, v) is saturated it disappears from the residual network. Before it can become critical again it must reappear, which happens only when some later augmenting path sends flow through the reverse edge (v, u). Let df' denote the distances at that moment. Since (v, u) lies on a shortest augmenting path:

df'(s, u) = df'(s, v) + 1

Distances along augmenting paths never decrease over the course of the algorithm, so df'(s, v) >= df(s, v), and therefore:

df'(s, u) = df'(s, v) + 1 >= df(s, v) + 1 = df(s, u) + 2

Thus, between two moments at which (u, v) is critical, the length of the augmenting path through it increases by at least 2.

Reference: https://www.cs.usfca.edu/~galles/cs673/lecture/lecture18.pdf
[]
{ "number": "6.39", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.4, "section_title": "Maxflow", "type": "Exercise" }
Prove that the shortest-paths problem reduces to linear programming.
Reduction from the shortest-paths problem to linear programming:

Method 1:

We consider a system of inequalities and equations involving the following variables:

l(u,v) -> the length of the directed edge u -> v.
x(u,v) -> indicator variable for whether edge u -> v is on the shortest path: 1 if it is, 0 if it is not.

Given a directed graph G with a source vertex s and a target vertex t, the linear programming formulation is:

Minimize   sum over all edges u -> v of  l(u,v) * x(u,v)

Subject to the constraints

sum over v of x(s,v)  -  sum over v of x(v,s)  =  1
sum over v of x(t,v)  -  sum over v of x(v,t)  = -1
sum over v of x(u,v)  -  sum over v of x(v,u)  =  0    for every vertex u other than s and t
x(u,v) >= 0    for every edge u -> v

The constraints state that the path starts at s, ends at t, and at every other vertex enters as often as it leaves. This selects a set of edges of minimal total length that forms a path from s to t. The solution is easily converted to a solution of the shortest-paths problem: the edges u -> v with x(u,v) = 1 form the shortest path, and the sum of their lengths is the length of the shortest path from s to t.

Method 2:

We consider a system of inequalities and equations involving the following variables:

d(v) -> the shortest distance from s to vertex v.
l(u,v) -> the length of the directed edge u -> v.

Given a directed graph G with vertices V, edges E, a source vertex s and a target vertex t, the linear programming formulation is:

Maximize   d(t)

Subject to the constraints

d(s) = 0
d(v) - d(u) <= l(u,v)    for every edge u -> v

The constraints state that d(v) is at most the shortest-path distance from s to v, and maximizing d(t) makes it exactly the shortest-path distance from s to t.

To find the edges on the shortest path from s to t: pre-process the edges to map each vertex to its incoming edges. Starting with vertex t, find an incoming neighbor v with d(v) = d(t) - l(v,t), add the edge v -> t to the shortest path, replace t with v, and iterate until vertex s is reached.

To compute the shortest paths from s to every other vertex, use:

Maximize   sum over v of d(v)

Subject to the constraints

d(s) = 0
d(v) - d(u) <= l(u,v)    for every edge u -> v

The resulting values d(v) are the shortest-path distances from s to every vertex v; the path edges can be recovered by the method above, replacing t with v.

To compute the shortest paths between all pairs of vertices, introduce variables d(u,v) for the shortest distance from vertex u to vertex v and use:

Maximize   sum over all pairs u, v of d(u,v)

Subject to the constraints

d(v,v) = 0    for every vertex v
d(u,w) - d(u,v) <= l(v,w)    for every vertex u in V and every edge v -> w in E

The resulting values d(u,v) are the shortest-path distances from every vertex u to every vertex v; the path edges can be recovered by the same method, replacing t with v and s with u.

References:
https://en.wikipedia.org/wiki/Shortest_path_problem#Linear_programming_formulation
https://courses.engr.illinois.edu/cs498dl1/sp2015/notes/26-lp.pdf
http://www.cs.yale.edu/homes/aspnes/pinewiki/attachments/LinearProgramming/lp.pdf
[]
{ "number": "6.50", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Could there be an algorithm that solves an NP-complete problem in an average time of N^(log N), if P != NP? Explain your answer.
Could there be an algorithm that solves an NP-complete problem in an average time of N^(log N), if P != NP? Explain your answer.

Yes, there could be - it is not possible to rule this out. A running time of N^(log N) is neither polynomial nor exponential; it belongs to an intermediate class called quasi-polynomial time (QP). Quasi-polynomial time algorithms run slower than polynomial time, yet not so slowly as to take exponential time. P != NP would only mean that there is no polynomial-time algorithm that solves an NP-complete problem; it would remain possible (though unknown) that NP-complete problems admit quasi-polynomial-time algorithms.
[]
{ "number": "6.51", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Suppose that someone discovers an algorithm that is guaranteed to solve the boolean satisfiability problem in time proportional to 1.1^N. Does this imply that we can solve other NP-complete problems in time proportional to 1.1^N?
Yes, essentially. Boolean satisfiability is NP-complete, so every problem in NP poly-time reduces to it: to solve another NP-complete problem, reduce it to boolean satisfiability and run the 1.1^N algorithm on the result. One caveat: the reduction may increase the instance size by a polynomial factor p(N), so the resulting bound is proportional to 1.1^(p(N)) - still exponential of the same flavor, though not necessarily 1.1^N itself.
[]
{ "number": "6.52", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
What would be the significance of a program that could solve the integer linear programming problem in time proportional to 1.1^N?
A program that could solve the integer linear programming problem in time proportional to 1.1^N could also be used to solve any other problem in NP in comparable time, because the integer linear programming problem is NP-complete and all problems in NP poly-time reduce to it. 1.1^N is still exponential running time, so it proves nothing about whether P = NP, but it would mean that we can solve instances of any problem in NP of size up to ~218 efficiently (about 10^9 operations). Reference: https://www.ics.uci.edu/~eppstein/265/exponential.html
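The "~218" figure can be checked with a quick calculation. Assuming a budget of roughly 10^9 operations (an assumption chosen only to make "efficiently" concrete), the largest N with 1.1^N within budget is log(10^9) / log(1.1):

```python
import math

# Largest N such that 1.1^N stays within ~10^9 operations.
# The budget is an illustrative assumption, not a claim from the text.
budget = 1e9
n_max = math.log(budget) / math.log(1.1)
print(round(n_max))   # about 217, in line with the ~218 quoted above
```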
[]
{ "number": "6.53", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Give a poly-time reduction from vertex cover to 0-1 integer linear inequality satisfiability.
Poly-time reduction from vertex cover to 0-1 integer linear inequality satisfiability:

We consider a system of inequalities that involves the following variables:
x(v) -> 0-1 variable that is equal to 1 if vertex v is selected and 0 otherwise.

Given a graph G with a set of vertices V, a set of edges E, and a target cover size k, the satisfiability formulation asks whether there is an assignment satisfying:

x(u) + x(v) >= 1      for every edge u - v in E
Σ_v x(v) <= k         summing over all v in V
x(v) ∈ {0, 1}         for every vertex v in V

The first family of inequalities states that every edge has at least one selected endpoint, so the selected vertices form a vertex cover; the second states that at most k vertices are selected. The system is satisfiable if and only if G has a vertex cover of size at most k, and the vertices with x(v) equal to 1 form such a cover. Writing down one inequality per edge plus the size inequality clearly takes polynomial time.

Reference: https://cs.stackexchange.com/questions/68232/reduction-vertex-cover-to-binary-integer-program-decision-problem
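On a tiny instance the 0-1 system can be checked exhaustively. The sketch below is an illustration with a made-up 4-cycle (which needs two vertices to cover its four edges), not a practical solver:

```python
from itertools import product

# Toy graph: a 4-cycle on vertices 0..3; graph and k values are illustrative.
vertices = [0, 1, 2, 3]
graph_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def has_cover_of_size(k):
    """Is there a 0/1 assignment with sum(x) <= k satisfying
    x(u) + x(v) >= 1 for every edge?  Brute force over all assignments."""
    for x in product([0, 1], repeat=len(vertices)):
        if sum(x) <= k and all(x[u] + x[v] >= 1 for u, v in graph_edges):
            return True
    return False

print(has_cover_of_size(1), has_cover_of_size(2))   # False True
```

One vertex of the cycle leaves two edges uncovered, while opposite vertices such as {0, 2} cover all four, matching the brute-force answers.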
[]
{ "number": "6.54", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Prove that the problem of finding a Hamiltonian path in a directed graph is NP-complete, using the NP-completeness of the Hamiltonian-path problem for undirected graphs.
Prove that the problem of finding a Hamiltonian path in a directed graph is NP-complete, using the NP-completeness of the Hamiltonian path problem for undirected graphs.

Let's call the Hamiltonian path problem in undirected graphs HPg and the Hamiltonian path problem in directed graphs HPdg.

Reduction from HPg to HPdg: replace every edge v - w in the undirected graph of HPg with two directed edges in the directed graph of HPdg: a v -> w directed edge and a w -> v directed edge. Solve HPdg. The directed edges selected in the solution to HPdg map directly to edges in a solution to HPg: if either v -> w or w -> v is in the HPdg solution, then v - w is in the HPg solution. The mapping is valid because consecutive vertices of the directed path are adjacent in the original undirected graph, and the reduction itself (duplicating each edge once) clearly takes polynomial time.

A problem is NP-complete if it is in NP and all problems in NP poly-time reduce to it. HPdg is in NP because, given a path as a certificate, it is possible to check in polynomial time that it uses only edges of the digraph and visits every vertex exactly once. Since HPg is NP-complete, all problems in NP poly-time reduce to it, and as shown above HPg poly-time reduces to HPdg. By transitivity, all problems in NP poly-time reduce to HPdg, and it is therefore NP-complete.

All problems in NP
        |
        V
Hamiltonian path problem for undirected graphs
        |
        V
Hamiltonian path problem for directed graphs
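The reduction is mechanical enough to sketch in a few lines of Python. The brute-force hamiltonian_path function below is a hypothetical stand-in for a directed-graph solver, and the path graph 0-1-2-3 is made up:

```python
from itertools import permutations

def hamiltonian_path(n, dir_edges):
    """Brute-force stand-in for a directed Hamiltonian-path solver:
    try every vertex order and check that consecutive pairs are edges."""
    edge_set = set(dir_edges)
    for perm in permutations(range(n)):
        if all((perm[i], perm[i + 1]) in edge_set for i in range(n - 1)):
            return list(perm)
    return None

# Undirected path graph 0-1-2-3 (illustrative); the reduction replaces
# each undirected edge v-w with the two directed edges v->w and w->v.
undirected = [(0, 1), (1, 2), (2, 3)]
directed = undirected + [(w, v) for v, w in undirected]

path = hamiltonian_path(4, directed)
print(path)   # [0, 1, 2, 3]
```

Consecutive vertices of the directed path are adjacent in the undirected graph, so the same vertex order is an undirected Hamiltonian path, exactly as the reduction argues.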
[]
{ "number": "6.55", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Suppose that two problems are known to be NP-complete. Does this imply that there is a poly-time reduction from one to the other?
Yes, if two problems are known to be NP-complete this implies that there is a poly-time reduction from one to the other. This is because all problems in NP poly-time reduce to any NP-complete problem. All NP-complete problems are in NP, therefore, both problems poly-time reduce from one to the other.
[]
{ "number": "6.56", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Suppose that X is NP-complete, X poly-time reduces to Y, and Y poly-time reduces to X. Is Y necessarily NP-complete?
Suppose that X is NP-complete, X poly-time reduces to Y, and Y poly-time reduces to X. Is Y necessarily NP-complete?

For Y to be NP-complete the following conditions must be met:
1) Y is in NP
2) Every problem in NP poly-time reduces to Y

Condition 2 is true: since X is NP-complete, all problems in NP poly-time reduce to X, and since X poly-time reduces to Y, they transitively reduce to Y.

Nothing guarantees condition 1, so Y is not necessarily NP-complete. It may be the case that Y is not in NP because it is not possible to verify a given solution to it in polynomial time. In this case (condition 2 satisfied but condition 1 not satisfied) Y would be NP-hard, but not NP-complete. For example, if X = CIRCUIT-SAT and Y = CO-CIRCUIT-SAT, then X and Y satisfy the conditions in the problem statement, but it is unknown whether Y is in NP.

This answer is based on the definition of reductions as polynomial-time Turing reductions (not Karp reductions).

Reference: https://algs4.cs.princeton.edu/66intractability/
[]
{ "number": "6.57", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Suppose that we have an algorithm to solve the decision version of boolean satisfiability, which indicates that there exists an assignment of truth values to the variables that satisfies the boolean expression. Show how to find the assignment.
Given an algorithm that solves the decision version of boolean satisfiability - which indicates whether there exists an assignment of truth values to the variables that satisfies a boolean expression - the assignment can be found with the following method. Consider a boolean expression BE composed of a set of distinct variables x1, x2, ..., xn.

1- Use the algorithm to check whether BE is satisfiable. If the answer is no, the expression is not satisfiable and there is no possible assignment. If the answer is yes:
2- Fix x1 to true and use the algorithm to check whether BE is still satisfiable.
3- If the answer is yes, keep x1 equal to true. Otherwise set x1 to false (this branch must be satisfiable, since BE was satisfiable and the true branch is not).
4- Repeat steps 2 and 3 for each remaining variable, keeping all previously fixed values.

In total, the assignment is found in n + 1 runs of the algorithm, where n is the number of distinct variables in BE.

Reference: https://en.wikipedia.org/wiki/Boolean_satisfiability_problem#Self-reducibility
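The self-reducibility method can be sketched with a brute-force stand-in for the assumed decision algorithm. The formula and the oracle below are illustrative; in practice the oracle would be the given decision algorithm:

```python
from itertools import product

def satisfiable(formula, n, fixed):
    """Decision oracle (brute-force stand-in): is the n-variable formula
    satisfiable with the first len(fixed) variables pinned to these values?"""
    k = len(fixed)
    return any(formula(tuple(fixed) + rest)
               for rest in product([False, True], repeat=n - k))

def find_assignment(formula, n):
    if not satisfiable(formula, n, []):        # step 1: satisfiable at all?
        return None
    fixed = []
    for _ in range(n):                         # steps 2-4: fix one variable per round
        if satisfiable(formula, n, fixed + [True]):
            fixed.append(True)
        else:
            fixed.append(False)                # must be satisfiable if True is not
    return fixed                               # n + 1 oracle calls in total

# Illustrative formula: (x1 or x2) and (not x1 or x3), with n = 3 variables.
f = lambda x: (x[0] or x[1]) and ((not x[0]) or x[2])
a = find_assignment(f, 3)
print(a, f(tuple(a)))
```

Because the true branch is tried first, this particular run fixes every variable to true, and the final check confirms the assignment satisfies the formula.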
[]
{ "number": "6.58", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Suppose that we have an algorithm to solve the decision version of the vertex cover problem, which indicates that there exists a vertex cover of a given size. Show how to solve the optimization version of finding the vertex cover of minimum cardinality.
Given an algorithm that solves the decision version of the vertex cover problem - which indicates whether there exists a vertex cover of a given size - the optimization version of finding a vertex cover of minimum cardinality can be solved with the following method. Consider a graph G with V vertices and E edges.

1- Using the algorithm, binary search for k, the cardinality of the minimum vertex cover (the answer is monotone in the size, so binary search applies).
2- Find a vertex v such that G - {v} (removing the vertex and its incident edges) has a vertex cover of size k - 1. Any vertex in any minimum vertex cover has this property.
3- Include v in the vertex cover.
4- Recursively find a minimum vertex cover of size k - 1 in G - {v}, stopping when k reaches 0 (no edges remain to cover).

Reference: https://www.cs.princeton.edu/~wayne/kleinberg-tardos/pearson/08Intractability.pdf
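A sketch of this method, with a brute-force stand-in for the assumed decision algorithm (the graph, helper names, and oracle are illustrative, and the recursion of step 4 is written as a loop):

```python
from itertools import combinations

def has_cover(edges, vertices, k):
    """Decision oracle (brute-force stand-in): vertex cover of size <= k?"""
    if not edges:
        return True
    return any(all(u in c or v in c for u, v in edges)
               for c in combinations(vertices, min(k, len(vertices))))

def remove_vertex(edges, v):
    """G - {v}: drop the vertex's incident edges."""
    return [(a, b) for a, b in edges if v not in (a, b)]

def min_vertex_cover(edges, vertices):
    # Step 1: binary search for k, the minimum cover size.
    lo, hi = 0, len(vertices)
    while lo < hi:
        mid = (lo + hi) // 2
        if has_cover(edges, vertices, mid):
            hi = mid
        else:
            lo = mid + 1
    k = lo
    # Steps 2-4: repeatedly pick a vertex whose removal needs a (k-1)-cover;
    # such a vertex always exists while k > 0, so the loop terminates.
    cover = []
    while k > 0:
        for v in vertices:
            rest = [u for u in vertices if u != v]
            if has_cover(remove_vertex(edges, v), rest, k - 1):
                cover.append(v)
                edges, vertices, k = remove_vertex(edges, v), rest, k - 1
                break
    return cover

# Toy graph: 4-cycle 0-1-2-3 plus a pendant edge 0-4; minimum cover is {0, 2}.
cover = min_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0), (0, 4)],
                         [0, 1, 2, 3, 4])
print(cover)   # [0, 2]
```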
[]
{ "number": "6.59", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Explain why the optimization version of the vertex cover problem is not necessarily a search problem.
The optimization version of the vertex cover problem is not necessarily a search problem because its output (a vertex cover of minimum cardinality) is not known to be certifiable in polynomial time. It is possible to certify in polynomial time that the output is a vertex cover of the claimed size, but not that no smaller vertex cover exists.
[]
{ "number": "6.60", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Suppose that X and Y are two search problems and that X poly-time reduces to Y. Which of the following can we infer? a. If Y is NP-complete then so is X. b. If X is NP-complete then so is Y. c. If X is in P, then Y is in P. d. If Y is in P, then X is in P.
If X and Y are two search problems and X poly-time reduces to Y, we can infer that:

b. If X is NP-complete then so is Y.
Since both X and Y are search problems, they are both in NP. If X is NP-complete, then all problems in NP poly-time reduce to it; and since X poly-time reduces to Y, all problems in NP also poly-time reduce to Y (through X). Y is in NP and all problems in NP poly-time reduce to it, so Y is NP-complete.

d. If Y is in P, then X is in P.
If Y is in P it can be solved in polynomial time. Since X poly-time reduces to Y, we can solve X by transforming the instance to one of Y in polynomial time and then solving Y in polynomial time, so X is also in P.

The following alternatives are wrong:

a. If Y is NP-complete then so is X.
Not necessarily: the reduction goes from X to Y, so nothing guarantees that all problems in NP reduce to X.

c. If X is in P, then Y is in P.
Not necessarily: Y may be NP-complete, and it is currently unknown whether P = NP.
[]
{ "number": "6.61", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }
Suppose that P != NP. Which of the following can we infer? e. If X is NP-complete, then X cannot be solved in polynomial time. f. If X is in NP, then X cannot be solved in polynomial time. g. If X is in NP but not NP-complete, then X can be solved in polynomial time. h. If X is in P, then X is not NP-complete.
If P != NP, we can infer that:

e. If X is NP-complete, then X cannot be solved in polynomial time.
If any NP-complete problem could be solved in polynomial time, every problem in NP could be as well (via reductions), and P would equal NP. So if P != NP, no NP-complete problem can be solved in polynomial time.

h. If X is in P, then X is not NP-complete.
If P != NP, problems in P can be solved in polynomial time whereas NP-complete problems cannot, so X cannot be both in P and NP-complete.

The following alternatives are wrong:

f. If X is in NP, then X cannot be solved in polynomial time.
Not necessarily: X may be in P, the class of problems in NP that can be solved in polynomial time.

g. If X is in NP but not NP-complete, then X can be solved in polynomial time.
Not necessarily: by Ladner's theorem, if P != NP there exist NP-intermediate problems, which are in NP, not NP-complete, and still not solvable in polynomial time.
[]
{ "number": "6.62", "code_execution": false, "url": null, "params": null, "dependencies": null, "chapter": 6, "chapter_title": "Context", "section": 6.5, "section_title": "Reductions and Intractability", "type": "Exercise" }