Backward Exact String Searching Strategy

Ebaa Fayyoumi* and Ahmed Al-Jaber*

Received on Oct. 8, 2002. Accepted for publication on April 9, 2003.

Abstract

A new strategy is presented that finds all occurrences of one given string within another. To our knowledge, no study in the literature has considered reversing the direction of the text scan; starting the match of the pattern at the end of the text often allows the algorithm to proceed faster. The performance of this method was measured by applying several exact string searching algorithms and their backward counterparts to texts of different sizes. It reduces the required running time by up to 5.6%.

Keywords: String searching; Text editing; Information retrieval; Boyer-Moore-Horspool Algorithm; Raita Algorithm; Cycle Algorithm.

1. Introduction

String searching is an important component of many problems, including text editing, information retrieval, and symbol manipulation. The string searching (or string matching) problem consists of finding all occurrences (or the first occurrence) of a pattern in a text, where the pattern and the text are strings over the same alphabet. Let $pat_i$ be the $i^{th}$ character of the pattern string $Pat = pat_0 \ldots pat_{m-1}$, of length $m$, and let $text_j$ be the $j^{th}$ character of the text string $Text = text_0 \ldots text_{n-1}$, of length $n$, where $n$ is considerably larger than $m$. Many algorithms address this problem [1-24]. String searching admits two kinds of solutions, depending on which string, the pattern or the text, is given first. Algorithms based on automata or on combinatorial properties of strings are commonly used to preprocess the pattern and solve the first kind of problem [8,4,5,13]. The notion of indexes, realized by trees or automata, is used in the second kind of solution [15]. This study deals with the first kind only.
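As a concrete baseline, the all-occurrences problem just defined can be sketched with a naive scan in C. This is an illustrative sketch only; the identifiers (`naive_search`, `pos`) are ours, not the paper's, and the algorithms discussed below improve on this O(nm) scan by skipping ahead after a mismatch.

```c
#include <string.h>

/* Naive baseline for the all-occurrences problem: try every window
   start i and compare the pattern left to right. Sketch only. */
static int naive_search(const char *text, const char *pat, int pos[])
{
    int n = (int)strlen(text), m = (int)strlen(pat), count = 0;
    for (int i = 0; i + m <= n; i++) {
        int j = 0;
        while (j < m && text[i + j] == pat[j])
            j++;
        if (j == m)              /* whole pattern matched at window start i */
            pos[count++] = i;
    }
    return count;
}
```

For the text and pattern of Example 1 below, this reports a single occurrence, at index 20.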
The best way to understand how a string matching algorithm works is to imagine a window on the text. This window has the same length as the pattern. It is first aligned with the left end of the text, and the algorithm then checks whether the characters of the window match the characters of the pattern (this check is called an attempt). After each attempt the window is shifted to the right over the text until it passes the right end of the text (the sliding window mechanism). A string matching algorithm is thus a succession of attempts and shifts, and a good algorithm aims to minimize the work done during each attempt and to maximize the length of each shift [13]. Most previous studies of exact string searching algorithms differ in how the comparison between pattern characters and text characters is performed at each attempt. Four categories arise. The most natural way is to compare from left to right, the reading direction, as in the Shift-Or Algorithm [1] and the KMP Algorithm [12]. The second category generally leads to the most practical algorithms by comparing from right to left, as in the Boyer-Moore Algorithm [7] and the Turbo-Boyer-Moore Algorithm [8]. The best theoretical bounds are reached when comparisons are done in a specific order, as in the Two-Way String-Matching Algorithm [9] and the Time-Space-Optimal String Matching Algorithm [10]. Finally, for some algorithms the order in which comparisons are done is irrelevant, as in the Quick Search Algorithm [20], the Raita Algorithm [17], the Boyer-Moore-Horspool Algorithm [21], and, most recently, the Cycle Algorithm [14]. This article consists of five sections. The first section is this introduction. The second section explains the Cycle Algorithm with an example, while the third section concentrates on the new methodology.
The resulting code is presented in the fourth section. Finally, the conclusion and further work are given.

2. Cycle Algorithm

This algorithm finds the exact occurrences of a pattern \( pat_0 \ldots pat_{m-1} \) in a text \( text_0 \ldots text_{n-1} \). The Cycle Algorithm treats the pattern logically as a cycle, which means there is no fixed order of comparison. At the beginning of the search, the algorithm chooses the first character of the pattern to be compared first. In each checking step it starts by comparing the character that mismatched in the previous step. When the comparison successfully completes a full round in one checking step, a complete match is found. If there is a mismatch, the text character next to the rightmost character of the current window is chosen for the skip. If that character does not occur in the pattern, the skip distance is \( m+1 \), the maximum distance this algorithm can move; otherwise, the window is shifted so as to align that text character with its rightmost occurrence in the pattern [14]. The Cycle Algorithm is based on the idea of Smith's adaptive method [22]: the mismatched character should be given high priority in the next checking step. The difference between the two methods is that the information about mismatches is used in a statistical way in Smith's adaptive method, whereas the character to compare first is chosen in a self-adaptive way in the Cycle method. The character that is hardest to match in the pattern will most frequently be chosen to compare first [22,14].

Example 1. Assume the text string is "ABCDCDFBCDRBCDFABCDFABCDR" and the pattern string is "ABCDR". The Cycle Algorithm is used in this example because it is faster than the other algorithms.
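The skip table used in Example 1 can be sketched as follows, assuming the convention that skip[c] = m − i for the rightmost occurrence i of c in the pattern, and m + 1 for absent characters (our reading of the Cycle Algorithm's table, not code from the paper):

```c
#include <string.h>

enum { ALPHA = 256 };

/* Cycle-style skip table: skip[c] = m - (rightmost index of c in pat),
   or m + 1 when c does not occur in the pattern. Sketch only. */
static void build_skip(const char *pat, int skip[ALPHA])
{
    int m = (int)strlen(pat);
    for (int c = 0; c < ALPHA; c++)
        skip[c] = m + 1;                      /* absent characters */
    for (int i = 0; i < m; i++)
        skip[(unsigned char)pat[i]] = m - i;  /* rightmost occurrence wins */
}
```

For the pattern "ABCDR" this yields Skip['A'] = 5, Skip['B'] = 4, Skip['C'] = 3, Skip['D'] = 2, Skip['R'] = 1, and 6 otherwise, matching the values listed below.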
According to the given pattern, the values of the skip array are

Skip['A'] = 5, Skip['B'] = 4, Skip['C'] = 3, Skip['D'] = 2, Skip['R'] = 1;

otherwise the value of the skip array is m+1 = 6.

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

At the beginning, the cycle starts the comparison from left to right. The characters pat_0 to pat_3 match the corresponding text characters, and there is a mismatch at pat_4 with the character 'C' at text_4. Five character comparisons are performed, plus four comparisons to test whether the end of the pattern has been reached. For the skipping step, since the character 'D' at text_5 occurs in the pattern, the window is shifted two positions to the right to align text_5 with the rightmost 'D' in the pattern:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

Since the mismatch occurred at pat_4, the comparison starts from that position, and there is again a mismatch at pat_4. One character comparison is needed, and the window is shifted four positions:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

Starting the comparison at pat_4, there is a match. Having reached the end of the pattern, the comparison continues as a logical circle at pat_0, where a mismatch occurs. Two character comparisons and one logical-end comparison are needed, and the window moves four positions:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

There is a mismatch at pat_0; only one comparison is needed, and the window moves five positions:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

The characters pat_0 to pat_3 match the corresponding text characters, and there is a mismatch at pat_4. Five comparisons and four logical-end comparisons are needed, and the window is shifted five positions:
ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

Finally, every character of the pattern matches the corresponding text character, so the pattern is found. Ten comparisons are required, five character comparisons and five logical-end comparisons. In total, 19 character comparisons and 14 logical-end comparisons are performed, with 6 shifts.

3. Backward String Searching Strategy

Exact string searching algorithms conventionally start scanning the text from its beginning; none of them starts scanning from the end of the text. This study introduces the idea of changing the direction of the text scan and evaluates its effect on the pattern preprocessing phase, on the numbers of comparisons and shifts, and on the running time. Many algorithms try to improve the length of the shift by matching suffixes of the pattern [7,17,14]. Correspondingly, it is possible to invert string searching algorithms so that the length of the shift is improved by matching prefixes of the pattern, scanning the characters of the window from left to right. Three algorithms have been adapted from the literature for testing and validation, each with its own way of searching [21, 2, 17, 14]. In the preprocessing phase, the skip array must express the leftmost, rather than the rightmost, occurrence of each character in the pattern in order to preserve the correctness of the algorithm. At first sight it may appear that preprocessing the pattern from left to right is equivalent to preprocessing it from right to left. The following example analyzes the difference between the two directions.
Example 2. Assume three patterns, "ABCDR", "EKKKCB", and "ABCKC", are to be preprocessed from right to left and from left to right.

1) Pattern "ABCDR"

Preprocessed from right to left:
Skip['A'] = 4, Skip['B'] = 3, Skip['C'] = 2, Skip['D'] = 1, Skip['R'] = 5.

Preprocessed from left to right:
Skip['A'] = 5, Skip['B'] = 1, Skip['C'] = 2, Skip['D'] = 3, Skip['R'] = 4.

Total number of shifts = 15.

2) Pattern "EKKKCB"

Preprocessed from right to left:
Skip['E'] = 5, Skip['K'] = 2, Skip['C'] = 1, Skip['B'] = 6.

Preprocessed from left to right:
Skip['E'] = 6, Skip['K'] = 1, Skip['C'] = 4, Skip['B'] = 5.

Total number of shifts = 14.

3) Pattern "ABCKC"

Preprocessed from right to left:
Skip['A'] = 5, Skip['B'] = 1, Skip['C'] = 2, Skip['K'] = 4.

Preprocessed from left to right:
Skip['A'] = 4, Skip['B'] = 3, Skip['C'] = 1, Skip['K'] = 5.

Total number of shifts = 13.

The difference appears only when a character is repeated in the pattern; otherwise the totals of the skip values are equal. During the searching phase, the algorithm starts at the end of the text to find an occurrence, or all occurrences, of the pattern by parsing the characters of the window from left to right instead of from right to left. This can be useful when the text is large and there is prior knowledge about the frequency of each character.
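The effect of the preprocessing direction on Example 2's patterns can be checked mechanically. The sketch below uses one plausible pair of conventions (not necessarily the paper's exact tables): the forward table keeps the rightmost occurrence, skip = m − i, the backward table the leftmost, skip = i + 1, with m + 1 for absent characters. Under these conventions the two totals agree for the repetition-free pattern "ABCDR" and differ for "EKKKCB" and "ABCKC".

```c
#include <string.h>

enum { ALPHABET = 256 };

/* Sum of the skip values over the distinct characters of the pattern,
   under a forward (rightmost, skip = m - i) or backward (leftmost,
   skip = i + 1) convention. Hedged sketch, not the paper's code. */
static int skip_total(const char *pat, int backward)
{
    int m = (int)strlen(pat), total = 0;
    int seen[ALPHABET] = {0}, skip[ALPHABET];
    for (int i = 0; i < m; i++) {
        unsigned char c = (unsigned char)pat[i];
        if (backward) {
            if (!seen[c])
                skip[c] = i + 1;   /* leftmost occurrence wins */
        } else {
            skip[c] = m - i;       /* rightmost occurrence wins */
        }
        seen[c] = 1;
    }
    for (int c = 0; c < ALPHABET; c++)
        if (seen[c])               /* every seen entry was written above */
            total += skip[c];
    return total;
}
```

For "ABCDR" both directions total 15, as in the example; for the patterns with repeated characters the totals differ.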
Then the high-frequency characters receive the largest skips from the auxiliary table computed in the preprocessing phase. Here the Boyer-Moore-Horspool Algorithm, the Raita Algorithm, and the Cycle Algorithm are inverted, as shown in Figures 1, 2, and 3.

```c
char pat[m];            /* pattern */
char text[n];           /* text */
int  L_occ[alphasize];  /* skip table */
int  n;                 /* text length */
int  m;                 /* pattern length */
int  pos;               /* number of pattern occurrences */

/* Preprocessing phase */
for (i = 0; i < alphasize; i++)
    L_occ[i] = m;
for (i = m - 1; i >= 1; i--)
    L_occ[pat[i]] = i;          /* leftmost occurrence wins */

/* Searching phase */
i = n - m;
pos = 1;
while (i >= 0) {
    k = i;
    j = 0;
    while (j < m && text[k] == pat[j]) {
        k++;
        j++;
    }
    if (j == m) {
        position[pos] = i;
        pos++;
    }
    i = i - L_occ[text[i]];
}
```

**Figure 1.** Backward Boyer-Moore-Horspool Algorithm (BBMHA)

```c
char pat[m];            /* pattern */
char text[n];           /* text */
int  skip[alphasize];   /* skip table */
int  n;                 /* text length */
int  m;                 /* pattern length */
int  poss;              /* number of pattern occurrences */

/* Preprocessing phase */
for (j = 0; j < alphasize; j++)
    skip[j] = m;
for (j = m - 1; j >= 1; j--)
    skip[pat[j]] = j;           /* leftmost occurrence wins */

/* Searching phase */
poss = 0;
i = n - m;
while (i >= 0) {
    if (text[i] == pat[0] && text[i + m - 1] == pat[m - 1]) {
        for (j = 1, k = i + 1; j < m - 1; k++, j++)
            if (text[k] != pat[j])
                break;
        if (j == m - 1) {
            poss++;
            position[poss] = i;
        }
    }
    i = i - skip[text[i]];
}
```

**Figure 2.** Backward Raita Algorithm (BRA)

```c
char pat[m];            /* pattern */
char text[n];           /* text */
int  skip[alphasize];   /* skip table */
int  n;                 /* text length */
int  m;                 /* pattern length */
int  poss;              /* number of pattern occurrences */
```

**Figure 3.** Backward Cycle Algorithm (BCA)

Example 3. Assume the text string is "ABCDCDFBCDRBCDFABCDFABCDR" and the pattern string is "ABCDR". The Backward Cycle Algorithm is used in this example.
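The backward scan can be exercised with a self-contained sketch in the spirit of Figure 1's BBMHA. This is a hedged reconstruction, not the authors' exact code: the text is scanned from its end, each window is compared left to right, and the shift is taken from a leftmost-occurrence table.

```c
#include <string.h>

enum { SIGMA = 256 };

/* Backward Horspool-style search: report every index at which pat
   occurs in text, scanning windows from the end of the text toward
   its beginning. Hedged sketch in the spirit of Figure 1. */
static int backward_search(const char *text, const char *pat, int pos[])
{
    int n = (int)strlen(text), m = (int)strlen(pat), count = 0;
    int l_occ[SIGMA];
    for (int c = 0; c < SIGMA; c++)
        l_occ[c] = m;                        /* absent: jump a full window */
    for (int i = m - 1; i >= 1; i--)
        l_occ[(unsigned char)pat[i]] = i;    /* leftmost occurrence wins */
    for (int i = n - m; i >= 0; ) {
        int j = 0;
        while (j < m && text[i + j] == pat[j])
            j++;
        if (j == m)
            pos[count++] = i;                /* occurrence found at i */
        i -= l_occ[(unsigned char)text[i]];  /* move the window left */
    }
    return count;
}
```

On the text and pattern of Example 1, this finds the single occurrence at index 20.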
According to the given pattern, the values of the skip array are

Skip['A'] = 1, Skip['B'] = 2, Skip['C'] = 3, Skip['D'] = 4, Skip['R'] = 5;

otherwise the value of the skip array is m+1 = 6. At the beginning, the backward algorithm starts the comparison from left to right at the end of the text:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

Every character of the pattern matches the corresponding text character, so an occurrence of the pattern is found; five comparisons are required to detect it. For the skipping step, six positions have to be moved left, since the character 'F' at text_19, just left of the window, does not occur in the pattern:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

There is a mismatch at pat_0 with the corresponding character 'F' at text_14, so one comparison is needed; four positions are moved left, since the character 'D' at text_13 occurs in the pattern:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

Since the previous mismatch occurred at pat_0, the comparison starts from the same position, and a mismatch is again found at pat_0. One comparison is needed, and four positions are moved left, relative to the 'D' at text_9:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

Starting the comparison at pat_0, there is a mismatch. One comparison is needed, and four positions are moved left, relative to the 'D' at text_5:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

There is a mismatch between text_2 and pat_0, so one comparison is needed, and two positions are moved left, relative to the 'B' at text_1:

ABCDCDFBCDRBCDFABCDFABCDR
ABCDR

Finally, the characters pat_0 to pat_3 match the corresponding text characters, but there is a mismatch at pat_4 with the text character 'C' at text_4; five comparisons are performed. The example shows that the top-down and bottom-up text scans have the same number of shifts.
However, the total numbers of character comparisons and logical-end comparisons are reduced to 14 and 9, instead of 19 and 14, respectively.

4. Experimental Results

The Boyer-Moore-Horspool, Raita, and Cycle Algorithms were compared with their backward versions. The algorithms were implemented in the C language, owing to its use as a systems language and in most previous string searching work [8,14]. They were applied to three English texts extracted from the Internet, online papers, and documents in different fields such as health, sports, computing, architecture, and CNN news.

1. The first text was about 0.5 million characters (exactly 530,440 characters).
2. The second text was about 1.0 million characters (exactly 1,111,842 characters).
3. The third text was about 2.0 million characters (exactly 2,110,613 characters).

A C program was designed to randomly select 560 patterns varying in length from 3 to 30; each pattern length has 20 different patterns with different numbers of occurrences. This program was applied to the three texts, hereafter called text1, text2, and text3. Another program was designed to select 1,120 patterns varying in length from 3 to 30; each pattern length has 40 different patterns with various numbers of occurrences. The search cost is measured by three criteria: total time, average number of comparisons, and average number of shifts to find all occurrences of all patterns in each text. The preprocessing time of a pattern is taken each time as an average over 13 searches. The tests were run on a Pentium microcomputer. Table 1 shows an improvement in comparisons obtained by changing the direction of the text scan, ranging from 0.96% to 1.933% depending on the size of the text and the algorithm.
There is also an improvement in the average number of shifts for BCA, ranging from 1.659% to 2.257%, while for the other algorithms the average number of shifts is almost the same, as shown in Table 2. The improvement in running time follows from the improvements in comparisons and shifts; as shown in Table 3, the time improvement reaches 4.18% using BBMHA, 3.94% using BRA, and 5.69% using BCA. Figures 4, 5, and 6 show the improvement in the number of comparisons, the number of shifts, and the running time, respectively. The Backward Cycle Algorithm achieves the highest improvements: up to 5.618% in average running time, 2.193% in the average number of comparisons, and 2.257% in the average number of shifts. This may be attributed to which character is chosen to determine the shift. The Cycle Algorithm chooses the text character just past the window to determine the shift, so the maximum shift length is m+1 [14], whereas the Boyer-Moore-Horspool and Raita Algorithms choose the text character aligned with the last character of the pattern, so the maximum shift length is m [8,23]. Table 1. Average of averages of the total number of comparisons for top-down and bottom-up scans, for the different text lengths.
<table> <thead> <tr> <th>Algorithm*</th> <th>Text1**</th> <th>Text2</th> <th>Text3</th> </tr> </thead> <tbody> <tr> <td>BMHA</td> <td>194621</td> <td>398422</td> <td>75993</td> </tr> <tr> <td>BBMHA</td> <td>19783</td> <td>393388</td> <td>751121</td> </tr> <tr> <td>RA</td> <td>128278</td> <td>266417</td> <td>50161</td> </tr> <tr> <td>BRA</td> <td>126906</td> <td>262859</td> <td>502076</td> </tr> <tr> <td>CA</td> <td>118759</td> <td>243577</td> <td>497121</td> </tr> <tr> <td>BCA</td> <td>16884</td> <td>238021</td> <td>458185</td> </tr> </tbody> </table> * BMHA denotes the Boyer-Moore-Horspool Algorithm, RA the Raita Algorithm, and CA the Cycle Algorithm. ** Text sizes equal 0.5, 1.0, and 2.0 million characters, respectively. Figure 4 shows the percent improvement of the bottom-up scan in the average of averages of the total number of comparisons. Table 2. Average of averages of the total number of shifts, and percent improvement of top-down versus bottom-up scan, for the different text lengths. <table> <thead> <tr> <th>Algorithm</th> <th>Text1</th> <th>Improvement %</th> <th>Text2</th> <th>Improvement %</th> <th>Text3</th> <th>Improvement %</th> </tr> </thead> <tbody> <tr> <td>BMHA</td> <td>62461</td> <td>-0.172</td> <td>130081</td> <td>-0.142</td> <td>247852</td> <td>-0.161</td> </tr> <tr> <td>BBMHA</td> <td>62569</td> <td></td> <td>130266</td> <td></td> <td>248252</td> <td></td> </tr> <tr> <td>RA</td> <td>62461</td> <td>-0.172</td> <td>130082</td> <td>-0.142</td> <td>247852</td> <td>-0.161</td> </tr> <tr> <td>BRA</td> <td>62569</td> <td></td> <td>130266</td> <td></td> <td>248252</td> <td></td> </tr> <tr> <td>CA</td> <td>56910</td> <td>1.659</td> <td>118052</td> <td>2.257</td> <td>225445</td> <td>2.00</td> </tr> <tr> <td>BCA</td> <td>55966</td> <td></td> <td>115388</td> <td></td> <td>220937</td> <td></td> </tr> </tbody> </table> Figure 5 shows the percent improvement of the bottom-up scan in the average of averages of the total number of shifts.
Table 3. Average of averages of total time (seconds), and percent improvement of top-down versus bottom-up scan, for the different text lengths. <table> <thead> <tr> <th>Algorithm</th> <th>Text1</th> <th>Improvement %</th> <th>Text2</th> <th>Improvement %</th> <th>Text3</th> <th>Improvement %</th> </tr> </thead> <tbody> <tr> <td>BMHA</td> <td>9.1913</td> <td>1.488</td> <td>21.013</td> <td>4.175</td> <td>69.09</td> <td>2.828</td> </tr> <tr> <td>BBMHA</td> <td>9.0546</td> <td></td> <td>20.136</td> <td></td> <td>67.136</td> <td></td> </tr> <tr> <td>RA</td> <td>8.8535</td> <td>3.939</td> <td>17.891</td> <td>1.859</td> <td>68.043</td> <td>2.928</td> </tr> <tr> <td>BRA</td> <td>8.5047</td> <td></td> <td>17.559</td> <td></td> <td>66.051</td> <td></td> </tr> <tr> <td>CA</td> <td>7.2767</td> <td>5.618</td> <td>15.556</td> <td>5.013</td> <td>59.738</td> <td>5.618</td> </tr> <tr> <td>BCA</td> <td>6.8679</td> <td></td> <td>14.776</td> <td></td> <td>56.382</td> <td></td> </tr> </tbody> </table> Figure 6 shows the percent improvement of the bottom-up scan in the average of averages of total time (seconds). 5. Conclusion and Further Work A new backward strategy was developed based on the three exact string searching algorithms BMHA, RA, and CA. The new algorithms process the text in the direction opposite to the original algorithms, while the preprocessing of the pattern uses the same heuristic tables as the originals, with size not exceeding the alphabet size. Experiments compared these algorithms with their original counterparts according to three main factors: number of comparisons, number of shifts, and total execution time. From changing the direction of the text scan, we can conclude the following. First, the new algorithms give better performance than the originals.
The running time is reduced in all adapted algorithms, with improvements in the range of 1.49% to 5.6%. Second, the Backward Cycle Algorithm achieves the best running time among all the algorithms for large texts (improvements of up to 5.618%). Third, the running time is governed more by the number of comparisons than by the number of shifts, as Tables 1 and 3 indicate. This can be justified for two reasons. First, by changing the direction of the text scan, different characters are compared when scanning from the end of the text, which results in different shift amounts; in other words, it is a language-dependent factor. Second, changing the direction in the preprocessing step contributes significantly to the search process; see Example 2. Finally, it is worth mentioning that this new strategy is useful for finding a pattern starting from any position in a given text: a forward scan searches from the starting position to the end of the text, while a backward scan searches from the starting position back to the beginning. The backward scan is best given a large text, in order to reduce the cost of searching, since backward search is more efficient than forward search, as shown in the tables [8,12,20]. Future research could be directed towards parallelizing the three algorithms, or investigating their behavior in a different language, such as Arabic.

Searching a Text in the Backward Direction

Ebaa Fayyoumi and Ahmed Al-Jaber

Abstract (translated from Arabic): This paper addresses the use of a backward approach in the text search process to locate the positions at which a substring occurs, rebuilding the well-known algorithms so that they use a backward scan, and comparing the new algorithms with the algorithms known in this field.
Experiments on texts of different lengths (half a million to one million characters), taken from various sites on the Internet, show that this approach is more effective than the original algorithms, reducing the execution time of the algorithm by up to 5.6% of the original algorithm's time in some cases.

References
CENG-492 CONFIGURATION MANAGEMENT REPORT YENILINK PROJECT Assistant: Ali Orkan Bayer GROUP MEMBERS <table> <thead> <tr> <th>Name</th> <th>Number</th> <th>Email</th> </tr> </thead> <tbody> <tr> <td>Furkan Kürşat Danışmaz</td> <td>1394881</td> <td><a href="mailto:k.furkan@gmail.com">k.furkan@gmail.com</a></td> </tr> <tr> <td>Ömer Nebil Yaveroğlu</td> <td>1449248</td> <td><a href="mailto:omernebil@hotmail.com">omernebil@hotmail.com</a></td> </tr> <tr> <td>Mehmet Bahattin Yaşar</td> <td>1395664</td> <td><a href="mailto:e1395664@ceng.metu.edu.tr">e1395664@ceng.metu.edu.tr</a></td> </tr> <tr> <td>Gülsüm Selcen Mülazimoğlu</td> <td>1395276</td> <td><a href="mailto:selcen.mulazimoglu@gmail.com">selcen.mulazimoglu@gmail.com</a></td> </tr> </tbody> </table> Table of Content 1. INTRODUCTION ........................................................................................................... 2 1.1. Purpose of CMP ........................................................................................................ 2 1.2. Scope of Document .................................................................................................... 2 1.3. Definitions, Acronyms, and Abbreviations ............................................................... 3 1.4. Document References ................................................................................................. 3 1.5. Document Overview .................................................................................................... 3 2. THE ORGANIZATION & CM FRAMEWORK ...................................................................... 4 2.1. Organization ................................................................................................................ 4 2.2. Responsibilities ......................................................................................................... 5 2.3. 
Tools & Infrastructure ............................................................................................... 5 3. THE PROCESS OF CONFIGURATION MANAGEMENT ...................................................... 5 3.1. Identification .............................................................................................................. 5 3.2. Management & Control ............................................................................................... 7 3.3. Status Accounting ...................................................................................................... 7 3.4. Auditing ..................................................................................................................... 7 4. PROJECT SCHEDULE AND CM MILESTONES .................................................................. 8 5. PROJECT RESOURCES ..................................................................................................... 9 6. PLAN OPTIMIZATION ..................................................................................................... 9 APPENDIX ....................................................................................................................... 10 1. INTRODUCTION 1.1. Purpose of CMP The purpose of this software configuration management (SCM) plan is to maintain the continuity of our project "Yenilink". Since the change is inevitable during development of all software projects, and our project is not an exception, making modifications should be easy enough in our project. In addition, the possible modifications should not confuse any one of our project group members. Thus, this configuration management plan is prepared to define the process of identifying, managing, and auditing the changes as they occur throughout the lifecycle of “Yenilink” Project. 1.2. Scope of Document The scope of this document is to define and explain all the configuration management properties of Pseudosoft’s YeniLink project. 
The activities discussed here are applicable to all documentation, source code development, software and hardware tools used, and any other process involved. Thus, this document determines the responsibilities and authorities for accomplishing the planned activities, the details of the items under the configuration management process, and the necessary coordination of SCM activities with the other activities in the project. Using this Configuration Management Plan makes the recognition, reexamination, and identification of the items of the project’s software configuration unambiguous to all project team members. It clarifies the control of code changes and allows us to describe the software configuration more realistically over time. In other words, this report is intended for the group members in the first place. Our supervisor and the instructor are among the other audience of the document. 1.3. Definitions, Acronyms, and Abbreviations JSWS: Job Seeking Web Site TMP: Temporary GUI Package: Graphical User Interface Package WS Package: Web Services Package Docs: Documentations CM: Configuration Management CMP: Configuration Management Plan SCM: Software Configuration Management 1.4. Document References While preparing this plan, we used the following documents as references: - Software Configuration Management Plan, presentation prepared by METU Computer Engineering Department for the course Ceng 492 - Our Requirement Analysis Report - Our Final Design Report 1.5. Document Overview This document consists of six main parts, each of which is described below: • **Introduction:** In the introduction part, we explained the purpose of the CMP, the scope of the document, definitions and abbreviations, and references. • **The Organization & CM Framework:** In this part, we determined the organization and responsibilities of all team members for CM. Moreover, we explained the infrastructure and the tools we will use in configuration management. 
• **The CM Process:** The identification, management, and the auditing of the configuration items (CIs) are discussed here. • **Project Schedules and CM Milestones:** The deadlines for the CM activities are given in this part. • **Project Resources:** In this part, we explained the project resources which will be necessary for CM. • **Plan Optimization:** This section explains the methods that can be used to optimize the CMP. ### 2. THE ORGANIZATION & CM FRAMEWORK #### 2.1. Organization Our group consists of four people. There is good communication among our group members, since each of us has a complete understanding of how things are to be handled. To carry on the YeniLink project successfully, we have grouped the tasks to be fulfilled into categories. These categories can also be seen as our modules. They are mainly “Databases”, “User Interfaces” and “Web Services”. Depending on these categories, we have planned our CVS structure. You can see a graphical representation of our CVS directory structure in the Appendix part. We have designed this in a way that goes parallel to our development process. We more or less complete our tasks module by module, so keeping a directory structure based on modules seemed more suitable for us. Updates and changes will only affect the module being worked on; the others will not be affected. We have also divided these modules into the different parts of the project, like the portal and JSWS, and this way a change made in one part will not affect another part of the project. If we had based our development on these parts of the project, we might have considered another directory structure in which the first division was based on project parts and the second on modules. But this directory structure didn’t seem suitable for our aim. 2.2. Responsibilities We have mentioned our CVS folder structure in the “Organization” part of this report. 
Now in this part of the report, we will mention who is responsible for which part of this tree. By responsibility, we mean both the development itself and the transfer of the developed code to CVS. Keeping the directory tidy and avoiding any mess is also important. - SQL, JSWS1 and JSWS2 directories under DBPackage directory → Bahattin - Portal directory under DBPackage directory → Furkan - Portal directory under GUIPackage directory → Ömer + Selcen + Furkan - Bank, JSWS1 and JSWS2 directory under GUIPackage directory → Ömer + Selcen - All the unmentioned remaining parts → All group members Apart from these responsibilities, Ömer is responsible for informing the group members of any important or large change. 2.3. Tools & Infrastructure - Eclipse CVS in the design of Web Services and GUI Modules - NetBeans CVS in the design of Database Modules 3. THE PROCESS OF CONFIGURATION MANAGEMENT 3.1. Identification There are three main parts of our project, which are “the Database”, “Graphical User Interface”, and “the Web Services”, as we use the “Three Tier Architecture”. Each of these three is being developed as an independent project for now. The Database: Apart from the database structure and SQL scripts (the create statements), we have our database utility modules containing insert, delete, update, and select methods implemented in Java using Hibernate. The database directory will contain three subdirectories, as we have three different databases (one for the portal, and two for the two different Job Seeking Websites we are simulating). And each subdirectory will contain the following subdirectories under “src”: - globalPack - independentTablesPack - utilityPack - HibernateTesting - SQL “globalPack” contains “SessionUtility.java”, which is used by every Hibernate utility class. “independentTablesPack” contains a Java class and a Hibernate mapping file for every table in the corresponding database. 
“utilityPack” contains different Java classes for each table containing insert, delete, update, and select methods. Finally, the “HibernateTesting” package contains different Java classes for each utility class. This testing package will be taken out in the release versions of the project since it contains nothing but testing code. The Graphical User Interface: This directory contains four different subdirectories (apart from tmp), as we have four different websites to be implemented (one for our portal, two career websites, and one for the bank). The Web Services: This directory contains all the Web services. Again we have three different subdirectories (one for the portal, and one for each career website) apart from “Tmp”. Each Web service is independent of the others to be deployed on the server. Therefore we will have different subdirectories for each service under the corresponding directory. SQL: Finally, the SQL directory as a final subdirectory under “Database” includes the SQL scripts for creating the databases and the tables for each database. 3.2. Management & Control We have divided our work into fragments so that the dependence of one developer on another is minimized. We take all the decisions together, but the development of each module is under a different person’s responsibility. Therefore, there will be no difficulty in versioning the modules. Anyone can upload the new version of a module without bothering the others. The only thing to be done while versioning is informing the other members about the new version of the module so that the integration of the module takes place immediately. As new modules are added to the system, their integration and testing will be done, and if the changes cannot be applied successfully, the new version of the module will be rejected. 3.3. Status Accounting Every member is responsible for preparing a “readme” file for each module, in fact for anything that is under his/her responsibility. 
For every change, there will be an explanation of what change was made and why. For such a big project, the reasons for the decisions could easily be forgotten. Therefore, every change and every decision should be documented with all the reasons why they were applied. Apart from the changes and the decisions, the bugs will also be reported. 3.4. Auditing As any new module is developed or a new version of any module is uploaded, it will be tested and the decisions about it will be taken by all the members together. Although the dependency of development is minimized, the group members are still dependent on each other. As one example, the select methods of the database utility classes should be implemented according to the user interface decisions. Therefore, the auditing will be done together as new modules, or new versions of existing modules, are uploaded. 4. PROJECT SCHEDULE AND CM MILESTONES We have made some changes in our project schedule during the preparation of the living schedule. These changes were made not because the schedule was unrealistic, but because we had prepared it in too much detail. These details made our living schedule difficult to follow on a day-by-day basis. Also, there were some parts completed in the first term although they were in the scope of the second term. There were also some parts which were not done although they were planned for the first semester. These differences occurred since the flow of the development process led us that way. So we updated and generalized our schedule. You can find our schedule in the “Appendix” part of this report. Apart from our project schedule, we have defined some milestones for our project development. There are demo days defined by our assistant (the first one is on 1 April and the second one is on 6 May). There is also one final demo on 13 June. These demo days constrain our planning considerably. 
Taking these demonstration days into account, we have defined our milestones as below: 17 March → All database implementations will be completed. 1 April → Some of the web services will be implemented. → Most of the user interface requirements will be implemented. 28 April → All web services will be implemented at a basic level. 6 May → All user interface parts will be implemented. → Some optimizations will be made. 1 June → All web services will be completed in detail. 13 June → All parts will be integrated and tested to work together. 5. PROJECT RESOURCES Up to this day, we have managed our code and documents using Groove and manual transfers. But the project is getting bigger and the importance of our project files increases. Although we take many backups on DVDs and hard disks, continuing this way does not seem reliable enough. Since we have been provided with CVS accounts, we will use CVS as our configuration management tool. We plan to use the NetBeans or Eclipse IDE as our means of connecting to our CVS account. These two IDEs are quite similar in terms of CVS usage. The decision will be made depending on the IDE we use at the time of submission. Whichever of these we use, we will continue to back up our code using Groove and disks. We have weekly meetings and share the new things we have made or discovered. We also communicate with each other using Groove and MSN Messenger when we need immediate access. Through one of these channels, we will inform each other about the changes on CVS. We will create our documentation about the project using “Wordpad”, since we plan to include our documentation files on CVS. This will give us the flexibility of opening the documentation files with any of the word processors existing on any computer. 6. PLAN OPTIMIZATION Although we have written our configuration management plan here, we are sure that there will be slight changes in the dates and the order of the process, since the plan is based on predictions. 
We will try to stick to the configuration management plan during our development process, but we may need some small changes in this plan. These changes will be welcomed by our group if they don’t cause a big problem in our demos. We have even changed our schedule since it became out of date after a while. So changes are vital, and we will optimize the plan in a way that there will not be any problems in our demonstrations. APPENDIX 1. OUR CVS DIRECTORY STRUCTURE (The CVS directory tree figure is not reproduced here.) 2. PROJECT SCHEDULE <table> <thead> <tr> <th>Task Name</th> <th>Duration</th> <th>Start</th> <th>Finish</th> <th>Resource Names</th> </tr> </thead> <tbody> <tr> <td>Learning and Practicing Development Tools</td> <td>100 days</td> <td>Mon 15.10.07</td> <td>Thu 28.02.08</td> <td></td> </tr> <tr> <td>Tomcat &amp; Axis 2</td> <td>24 days</td> <td>Mon 15.10.07</td> <td>Thu 15.11.07</td> <td>Everyone</td> </tr> <tr> <td>Flash Development</td> <td>14 days</td> <td>Mon 15.10.07</td> <td>Thu 28.02.08</td> <td>Everyone</td> </tr> <tr> <td>J2EE Topics &amp; Web Services</td> <td>54 days</td> <td>Fri 02.11.07</td> <td>Tue 15.01.08</td> <td>Everyone</td> </tr> <tr> <td>Hibernate</td> <td>70 days</td> <td>Tue 16.10.07</td> <td>Fri 18.01.08</td> <td>Everyone</td> </tr> <tr> <td>Design Of Databases</td> <td>51 days</td> <td>Mon 12.11.07</td> <td>Fri 18.01.08</td> <td>Everyone</td> </tr> <tr> <td>Database Design Of JSWS1</td> <td>51 days</td> <td>Mon 12.11.07</td> <td>Fri 18.01.08</td> <td>Everyone</td> </tr> <tr> <td>Database Design Of JSWS2</td> <td>51 days</td> <td>Mon 12.11.07</td> <td>Fri 18.01.08</td> <td>Everyone</td> </tr> <tr> <td>Database Design Of Yenilink Portal</td> <td>51 days</td> <td>Mon 12.11.07</td> <td>Fri 18.01.08</td> <td>Everyone</td> </tr> <tr> <td>User Interface Design</td> <td>26 days</td> <td>Sat 15.12.07</td> <td>Fri 18.01.08</td> <td>Everyone</td> </tr> <tr> <td>User Interface Implementation</td> <td>71 days</td> <td>Fri 01.02.08</td> <td>Fri 09.05.08</td> <td></td> </tr> <tr> <td>User Interface Implementation Of 
JSWS 1</td> <td>71 days</td> <td>Fri 01.02.08</td> <td>Fri 09.05.08</td> <td>Ömer Selcen</td> </tr> <tr> <td>User Interface Implementation Of JSWS 2</td> <td>71 days</td> <td>Fri 01.02.08</td> <td>Fri 09.05.08</td> <td>Ömer Selcen</td> </tr> <tr> <td>User Interface Implementation Of Yenilink Portal</td> <td>71 days</td> <td>Fri 01.02.08</td> <td>Fri 09.05.08</td> <td>Ömer Selcen</td> </tr> <tr> <td>Implementation Of Databases With Hibernate</td> <td>20 days</td> <td>Mon 18.02.08</td> <td>Sun 16.03.08</td> <td></td> </tr> <tr> <td>Implementation Of JSWS1</td> <td>20 days</td> <td>Mon 18.02.08</td> <td>Fri 14.03.08</td> <td>Bahattin</td> </tr> <tr> <td>Implementation Of JSWS2</td> <td>20 days</td> <td>Mon 18.02.08</td> <td>Fri 14.03.08</td> <td>Selcen</td> </tr> <tr> <td>Implementation Of Yenilink Portal</td> <td>20 days</td> <td>Mon 18.02.08</td> <td>Fri 14.03.08</td> <td>Furkan</td> </tr> <tr> <td>Testing Databases</td> <td>20 days</td> <td>Mon 18.02.08</td> <td>Sun 16.03.08</td> <td>Everyone</td> </tr> <tr> <td>Implementation Of Web Services</td> <td>33 days</td> <td>Mon 17.03.08</td> <td>Wed 30.04.08</td> <td>Bahattin</td> </tr> <tr> <td>Implementation Of JSWS1</td> <td>33 days</td> <td>Mon 17.03.08</td> <td>Wed 30.04.08</td> <td>Furkan</td> </tr> <tr> <td>Implementation Of JSWS2</td> <td>33 days</td> <td>Mon 17.03.08</td> <td>Wed 30.04.08</td> <td>Selcen</td> </tr> <tr> <td>Implementation Of Yenilink Portal</td> <td>33 days</td> <td>Mon 17.03.08</td> <td>Wed 30.04.08</td> <td>Selcen</td> </tr> <tr> <td>Testing Of Web Services</td> <td>33 days</td> <td>Mon 17.03.08</td> <td>Wed 30.04.08</td> <td>Everyone</td> </tr> <tr> <td>Integration Of All Components</td> <td>11 days</td> <td>Thu 01.05.08</td> <td>Thu 15.05.08</td> <td>Everyone</td> </tr> <tr> <td>General Testing and Debugging</td> <td>21 days</td> <td>Thu 15.05.08</td> <td>Thu 12.06.08</td> <td>Everyone</td> </tr> </tbody> </table>
Features - Atmel® ATSAM3X8E microcontroller - Atmel AT86RF231 2.4GHz radio transceiver - Atmel proprietary Lightweight Mesh software stack - 10/100Mbps Ethernet - LwIP TCP/IP stack support - TCP/IP client - Single MCU gateway solution - Preprogrammed firmware for wireless lighting control via PC software Introduction This application note mainly describes the software architecture and the application programming interfaces (API) of the Lightweight Mesh to Ethernet Gateway reference design (hereafter the Gateway). A getting started guide at the end of this document gives details about the setup and operation of the preprogrammed firmware. Figure 1. Lightweight Mesh to Ethernet Gateway. The Gateway is based on the Atmel ATSAM3X8E microcontroller and the Atmel AT86RF231 2.4GHz radio transceiver. For gateway hardware design details, refer to Atmel AT2200: ZigBee® to Ethernet and Wi-Fi Gateway with SAM3X - Hardware User's Guide. For this reference design, the hardware design files (schematic, BOM and PCB Gerber) and the software source code can be downloaded from the Atmel website. The provided hardware documentation can be used with no limitations to manufacture the reference hardware solution for the design. # Table of Contents 1. Overview .............................................................................................. 4 2. Development Tools .............................................................................. 4 3. Software Architecture........................................................................... 4 3.1 Atmel Lightweight Mesh Software Stack ........................................... 5 3.2 LwIP TCP/IP Stack .......................................................................... 6 3.3 PC software ..................................................................................... 6 4. Inside the Gateway Application ............................................................ 
8 4.1 Gateway Application Layer Structure ............................................. 8 4.2 Lightweight Mesh Task .................................................................... 8 4.3 LwIP Task ....................................................................................... 10 5. Main API Introduction ........................................................................... 11 5.1 Lightweight Mesh Software Stack API .......................................... 11 5.2 LwIP API Introduction ................................................................... 12 6. Software Package Content ................................................................... 13 7. Footprint .............................................................................................. 16 8. Getting Started Guide .......................................................................... 17 8.1 Programming the Gateway ............................................................. 17 8.2 Connecting to Ethernet .................................................................. 18 8.2.1 Step-by-step guide .................................................................. 18 8.3 PC software Menu .......................................................................... 20 Appendix A. Additional Information ....................................................... 21 A.1 Lightweight Mesh Configuration .................................................... 21 A.2 LwIP Configuration ....................................................................... 21 Appendix B. Revision History ................................................................. 22 1. **Overview** The Lightweight Mesh to Ethernet Gateway is designed to interface a Lightweight Mesh network to an Ethernet network. Through the Gateway, the user can control and monitor any node in the Lightweight Mesh network remotely via Ethernet. A typical application scenario is shown in Figure 1-1. **Figure 1-1. 
Typical Gateway Application Scenario.** In the preprogrammed firmware, a wireless lighting control and monitoring network is established. The lights in the Lightweight Mesh network can be controlled and monitored remotely via PC software. 2. **Development Tools** To download or debug the preprogrammed firmware, the following development toolchain is needed: - **Atmel Studio 6.** Version: 6.1.2674 with Service Pack 1 or above - **Atmel ARM® GNU Toolchain.** Version: 4.7.3.158 - GCC 4.7.3 or above - **Atmel Software Framework.** Version: 3.9.1 or above - **Programming and debugging device:** *Atmel SAM-ICE™* - **SAM-ICE Adaptor:** a minimized (1.27mm pitch 10-pin header) adaptor for Atmel SAM-ICE. For more details refer to Atmel AVR2033: SAM-ICE Adapter - Hardware User Manual 3. **Software Architecture** The software for the Gateway is composed of two main parts: - **Atmel Lightweight Mesh Software Stack** - **LwIP TCP/IP Stack** Besides the Gateway itself, PC software named “TCPServer” is also provided as a simple graphical user interface (GUI). The Gateway application is designed based on the Atmel Software Framework (ASF). In fact, except for the Lightweight Mesh Software Stack, all other modules are from ASF. The Gateway exchanges data between the Lightweight Mesh network and the Ethernet network. The Lightweight Mesh network data is collected by the Gateway, transferred into the Ethernet network, and finally displayed in the PC software. User input in the PC software is also sent back to the Lightweight Mesh network via the Gateway. The software block diagram of the Gateway is given in Figure 3-1. **Figure 3-1. The Gateway Software Block Diagram.** --- ### 3.1 Atmel Lightweight Mesh Software Stack *Atmel Lightweight Mesh* is the easy-to-use proprietary low power wireless mesh network protocol from Atmel. It is designed to work with all Atmel IEEE® 802.15.4 transceivers and SoCs. 
To find more detailed information about the Lightweight Mesh architecture and application development process, refer to *Atmel AVR2130: Lightweight Mesh Developer Guide*. Atmel Lightweight Mesh software stack features: - Simplicity of configuration and use - Up to 65535 nodes in one network (theoretical limit) - Up to 65535 separate PANs on one channel - Up to 15 independent application endpoints - No dedicated node is required to start a network - No periodic service traffic occupying bandwidth - Two distinct types of nodes: - Routing (network address < 0x8000) - Non-routing (network address >= 0x8000) - Once powered on, the node is ready to send and receive data; no special joining procedure is required - No child-parent relationship between the nodes - Non-routing nodes can send and receive data to/from any other node (including non-routing nodes), but they will never be used for routing purposes - Route discovery happens automatically if the route to the destination is not known - The routing table is updated automatically based on the data from the received and transmitted frames - Duplicate frames (broadcast or multipath unicast) are rejected - Small footprint (less than 8kB of Flash and 4kB of RAM for a typical application) Currently the public release version of the Lightweight Mesh software stack works with AVR®-based MCUs, but given its extreme portability and low resource requirements, it can be run on almost any Atmel MCU. In the Gateway, it runs on the ATSAM3X8E MCU. The Lightweight Mesh software stack version is v1.0.0. Note that at the time of writing this application note, the Atmel Lightweight Mesh Software Stack is not integrated into ASF. In this reference design, the files in the hal folder of the Lightweight Mesh Software Stack are modified to reuse the low level drivers from ASF. 3.2 LwIP TCP/IP Stack The Lightweight TCP/IP stack is designed for embedded systems. The focus of the LwIP TCP/IP implementation is to reduce resource usage while still having a full-scale TCP. 
LwIP features: - IP (Internet Protocol) including packet forwarding over multiple network interfaces - ICMP (Internet Control Message Protocol) for network maintenance and debugging - UDP (User Datagram Protocol) including experimental UDP-lite extensions - TCP (Transmission Control Protocol) with congestion control, RTT estimation and fast recovery/fast retransmit - Specialized raw API for enhanced performance - Optional Berkeley-alike socket API - DHCP (Dynamic Host Configuration Protocol) - PPP (Point-to-Point Protocol) - ARP (Address Resolution Protocol) for Ethernet For more detailed information about LwIP, refer to the LwIP Wiki: http://lwip.wikia.com/wiki/LwIP_Wiki or the Atmel AVR32817: Getting Started with the 32-bit AVR® UC3 Software Framework LwIP TCP/IP Stack application note. In the Gateway, only the TCP/IP client is implemented. The LwIP version is 1.4.0. 3.3 PC software PC software named “TCPServer” is provided to control and monitor devices in the Lightweight Mesh network. The following information is displayed in the “TCPServer” main window. - Device: End device or Router - Address: Device short address in the Lightweight Mesh network - Status: Device in network or not - Channel: Working channel - PAN ID: Lightweight Mesh network PAN ID - LQI: Link Quality Indicator of the last data transfer - RSSI: Received Signal Strength Indication of the last data transfer - TSensor: Dummy sensor data sent from devices in the Lightweight Mesh network - LED status: LED on / off status When clicking on a specific device in the main window, the selected device can be controlled and monitored in the “TCPServer” control window. - The LED status will be displayed. 
Red stands for “ON”, and blank indicates “OFF” - The LED can be turned on or off by selecting “ON” or “OFF” and clicking “Submit” The following information is displayed in the status bar: - The server status - Client IP address - Port number - Number of devices in the Lightweight Mesh network - Received data packets A screenshot of the PC software is shown in Figure 3-2. For details of the PC software operation, check Chapter 8. Figure 3-2. The Gateway PC Software “TCPServer”. 4. **Inside the Gateway Application** In this chapter, the overall Gateway application layer structure is explained. Then the main application tasks in the Gateway are introduced in two parts: the Lightweight Mesh task and the LwIP task. 4.1 **Gateway Application Layer Structure** As the Gateway is based on the WSNDemo example from the Lightweight Mesh Software Stack, it has a similar structure in main(). The main() function of the Gateway is shown below: ```c int main(void) { sysclk_init(); board_init(); ... init_ethernet(); ... /* This is the main polling loop */ while (1) { /* LwMesh task handler */ SYS_TaskHandler(); HAL_UartTaskHandler(); ... APP_TaskHandler(); ... ethernet_task(); ... } } ``` `sysclk_init()` and `board_init()` are two functions to initialize the clock and the board from ASF. The Lightweight Mesh Software Stack hardware initializations are also put in `board_init()`. `init_ethernet()` initializes Ethernet. The task handlers in the while loop handle the application tasks. They are introduced in the following sections. By following this structure, more functions can be added into main() to enrich the Gateway features. Several macro switches are defined in `conf_board.h` to give different application options. Here are some examples. - `#define LWMESH_USED 1 // Enable LWMESH PHY` - `#define ETH_USED 1 // Enable Ethernet` To run the Gateway with the default features, do not change the macro switches unless you clearly understand their effects. 
4.2 **Lightweight Mesh Task** In the Gateway, the Lightweight Mesh task is based on the WSNDemo example from the Lightweight Mesh Software Stack. `SYS_TaskHandler()` and `APP_TaskHandler()` are the two APIs of the Lightweight Mesh task. They should be called as frequently as possible. The Lightweight Mesh stack low layer tasks are handled by calling `SYS_TaskHandler()`. The application layer is handled in `APP_TaskHandler()`. The Gateway is a node in the Ethernet network. It sends data to Ethernet by calling `appSendData()` from `APP_TaskHandler()` at a predefined interval. The Gateway acts as Coordinator in the Lightweight Mesh network. It receives data from other Lightweight Mesh devices in the callback function `appDataInd()` and sends data back to the Lightweight Mesh network by filling the Acknowledgment command frame in this function. Figure 4-1. Lightweight Mesh Task Flow. - **APP_STATE_INITIAL**: `appInit();` - **APP_STATE_SEND** - `appSendData();` - **APP_STATE_SENDING_DONE** - `SYS_TimerStart();` - **APP_STATE_WAIT_SEND_TIMER** - `appDataSendingTimerHandler();` - **Data received** - `appDataInd();` ACK Control field filled 4.3 LwIP Task The Gateway is configured as a TCP client when the LwIP network connection is initialized. The LwIP task uses the `ethernet_task()` function to read data packets and run periodical tasks. This function should be called periodically. If Ethernet is used, the Gateway tries to connect to the TCP server on a port specified in firmware. After a successful connection, the Gateway is informed of incoming data through the callback function `tcp_client_received()`. The data is sent to LwIP by calling `appSendMessageToLwIP()`. Figure 4-2. LwIP Task Flow. - `lwip_init` - `ethernet_configure_interface` - `ethernet_task` - `Data received?` - `IP packet?` - `ARP packet?` - `etharp_arp_input` - `ip_input` - `TCP protocol?` - `tcp_input` - `Other protocol handler` - `tcp_client_received` - `appSendMessageToLwIP` - `tcp_write` - `tcp_output` 5. 
**Main API Introduction**

The main API introduction is divided into two parts: the Lightweight Mesh Software Stack API and the LwIP stack API. Each part is introduced in the following sections. Note that the APIs described here focus on the application layer. For complete API descriptions, refer to the stacks and their documents mentioned in Chapter 3.

5.1 **Lightweight Mesh Software Stack API**

As the Lightweight Mesh task is based on the WSNDemo example, APIs similar to those in WSNDemo are used in the Gateway. Most of the Lightweight Mesh APIs used in the Gateway are in WSNDemo.c. The main APIs are as below.

- **SYS_Init()** It initializes the Lightweight Mesh HAL, PHY, NWK layer and system timer. It's called from `board_init()`.
- **SYS_TaskHandler()** It's the core API of Lightweight Mesh. The PHY, NWK and system timer task handlers are called in this API.
- **APP_TaskHandler()** It's the application layer task handler of Lightweight Mesh.

The main purpose of using Lightweight Mesh on the Gateway is to exchange data. Here are some APIs for sending and receiving data.

- **appDataInd()** The callback function registered by `NWK_OpenEndpoint()` in `appInit()`. It's called when valid data is received from the Lightweight Mesh low-level layer. It also fills the Acknowledgment command frame control field by calling `NWK_SetAckControl()`. Thus, sending data back to the Lightweight Mesh network is implemented in a very simple manner. The limitation is that the control field is only one byte. If a large amount of data needs to be transmitted, a dedicated API and structure should be used. Refer to the non-coordinator code in WSNDemo.c for an example.
- **appSendData()** It's called at a predefined interval from `APP_TaskHandler()`. `appSendMessageToLwIP()` and `appSendMessage()` are called in this function to send data to the LwIP Tx buffer.

The timer API is used to generate a fixed interval through a callback function. Here is the timer callback function in the application layer.
- **appDataSendingTimerHandler()** It is registered by `SYS_TimerStart()` and called at a predefined interval.

As the Gateway acts as Coordinator in the Lightweight Mesh network, some APIs from the stack are not used. For more details about other APIs in Lightweight Mesh, refer to the software package and the documents inside. The latest Lightweight Mesh Software Stack package can be downloaded from [http://www.atmel.com/tools/LIGHTWEIGHT_MESH.aspx](http://www.atmel.com/tools/LIGHTWEIGHT_MESH.aspx).

5.2 **LwIP API Introduction**

As no operating system is running, the LwIP raw API is used in the Gateway, and the Gateway is set as a TCP client. Most of the LwIP APIs are in ethernet_sam.c. The main APIs of LwIP are as below:

- **init_ethernet()** It initializes the LwIP Ethernet interface and related hardware.
- **ethernet_task()** The LwIP Ethernet task handler. It polls the Ethernet tasks periodically. The specific application code is implemented in several callback functions.
- **tcp_client_init()** It initializes the Gateway as a TCP client. By default, a static IP is assigned to the Gateway and a port number is bound. In this function, it tries to connect to the TCP server with the default parameters.
- **appSendMessageToLwIP()** This function sends data from Lightweight Mesh to LwIP. It fills the data buffer to be transferred. tcp_write() and tcp_output() are the actual functions that send data in LwIP.

Here is the list of callback functions used in the Gateway.

- **tcp_client_received()** It's the callback function invoked whenever a data packet is received from LwIP. For the Gateway, it stores data received from the TCP server in a buffer.
- **tcp_client_connected()** It's the callback function invoked when a TCP connection is established. It sends a string to the TCP server after a successful connection and sets the TCP client in receive state by registering the callback function tcp_client_received() via tcp_recv().
- **tcp_err_handler()** It's the callback function for the TCP error handler.
It re-initializes the Gateway as a TCP client if a connection abort or connection reset occurs in LwIP.

- **status_callback()** It's the callback function for a status change in the default network interface. It initializes the Gateway as a TCP client by calling tcp_client_init().

For more details about the LwIP APIs, refer to the LwIP stack.

6. **Software Package Content**

As mentioned before, the Gateway is developed based on ASF. The directory structure of the software package integrates the ASF structure and the Lightweight Mesh Software Stack structure. For details of the structure of ASF, refer to Atmel AVR4029: Atmel Software Framework - Getting Started. For the structure of Lightweight Mesh, refer to Atmel AVR2130: Lightweight Mesh Developer Guide. The Gateway directory structure is shown below:

**Figure 6-1. The Gateway Directory Structure.**

The directory details are described below:

- **apps:**
  - **WSNDemo** - The Gateway application layer code. The main() is in WSNDemo.c.
- **asf:**
  - **common**
    - **boards** - This directory contains the various board definitions shared between multiple architectures. As the Gateway is not a standard Atmel kit, it's defined as USER_BOARD, and the board details are defined in user_board.h and conf_board.h.
    - **services** - ASF common services.
    - **utils** - ASF common utilities.
  - **sam**
    - **components** - Components supported by SAM. The Gateway Ethernet PHY chip is supported by ASF here.
    - **drivers** - ASF SAM drivers. It contains the low-level drivers of the SAM peripherals.
    - **utils** - ASF SAM utilities.
  - **thirdparty**
    - **CMSIS** - ARM Cortex® Microcontroller Software Interface Standard folder.
    - **lwip** - LwIP stack folder.
- **config:**
  - **conf_board.h** - The ASF config file of the board. The Gateway board settings and macros are placed in this file.
  - **conf_clock.h** - The ASF config file of the clock. The Gateway clock settings can be configured here.
  - **conf_eth.h** - The ASF config file of EMAC.
The Gateway Ethernet hardware, MAC address and IP address (if static IP is used), etc. are defined in this file.
  - **conf_spi_master.h** - The ASF config file of SPI in master mode.
  - **conf_uart_serial.h** - The ASF config file of the UART port. It's for debug purposes in the Gateway.
  - **config.h** - The config file of the Lightweight Mesh Software Stack. The device type and working channel are defined in this file.
  - **lwipopts.h** - The config file of the LwIP stack.
- **stack:**
  - **hal**
    - **atsam3x8e** - Hardware abstraction layer of the Lightweight Mesh Software Stack. In the Gateway, it reuses the low-level drivers from ASF.
  - **nwk** - Network layer of the Lightweight Mesh Software Stack.
  - **phy**
    - **at86rf231** - The radio PHY chip supported by the Gateway.
  - **services** - The Lightweight Mesh application services. The OTA service is provided, but it's not used in the Gateway. It can be removed from the project folder with no harm; it's kept there to avoid breaking the original Lightweight Mesh Software Stack structure.
  - **sys** - The Lightweight Mesh system services.

7. **Footprint**

Figure 7-1 and Figure 7-2 illustrate the CODE and RAM space that each module uses in the Gateway software.

**Figure 7-1. Lightweight Mesh to Ethernet Gateway CODE Footprint [KB].**

**Figure 7-2. Lightweight Mesh to Ethernet Gateway RAM Footprint [KB].**

The Lightweight Mesh and LwIP stack RAM usage should be configured in their corresponding configuration files according to the real application.

8. **Getting Started Guide**

This chapter gives a step-by-step guide to set up the Gateway and run the preprogrammed firmware.

8.1 **Programming the Gateway**

Along with this document, two hex files are provided. One is for the Gateway (LwMesh_Gateway.hex) and the other is for Lightweight Mesh devices connecting to the Gateway (LwMesh_RCB128RFA1.hex). To program the Gateway hex file, the SAM-ICE adapter mentioned in Chapter 2 is needed. The steps are as below.

1.
Connect SAM-ICE to the SAM-ICE adapter.
2. Connect the SAM-ICE adapter to the Gateway programming header J2.
3. Power the Gateway via the USB cable.
4. Open Atmel Studio and select menu “Tools -> Device Programming”.
5. Choose SAM-ICE for Tool, ATSAM3X8E for Device and JTAG for Interface, and then click the “Apply” button.
6. Click the Device signature “Read” button to check if the connection is correct.
7. Select the Memories tab and then select the pre-built image for the Gateway from “… in the Flash section.
8. Click Program. If the pre-built image is downloaded to the board, the message “Verifying Flash…OK” appears.

Figure 8-2. Programming the Gateway.

The hex file for Lightweight Mesh devices runs on the RCB128RFA1. It acts as an End Device in the network. LED (D4) on the RCB128RFA1 can be monitored and controlled through the Gateway. Refer to Atmel AVR2131: Lightweight Mesh Getting Started Guide for setting up and programming the RCB128RFA1 Radio Control Board.

8.2 **Connecting to Ethernet**

In the preprogrammed firmware, the Gateway Ethernet is configured as below.

- TCP client

The parameters can be changed in the Gateway firmware, but the PC software “TCPServer” provided with this document is designed to work with the settings above.

8.2.1 **Step-by-step guide**

To directly connect the Gateway to a PC via Ethernet, follow the steps below.

1. Configure the PC IP address to 192.168.1.105, Subnet mask: 255.255.255.0, Default Gateway: 192.168.1.1 as shown in Figure 8-1.

Figure 8-1. PC IP address configuration.

2. Connect an Ethernet cable between the Gateway and the PC.
3. Power on the Gateway by connecting the USB cable. The Ethernet connection status is indicated by LEDs D2, D3 and D4 on the Gateway. The red color of the bi-color LED D7 on the Gateway will be on when the Lightweight Mesh network is ready.
4. Open “TCPServer” on the PC. If the Gateway and PC are set up correctly, “TCPServer” will report “Connected”, “Client IP” and “Port” in the status bar as shown in Figure 8-2.
5. Power on the other Lightweight Mesh devices.
If the Lightweight Mesh devices are set up correctly, the device information will be displayed in the main window of “TCPServer” as shown in Figure 8-3. The green color of the bi-color LED D7 on the Gateway will toggle each time it receives data from connected devices.

6. Click on one device in the main window. The LED status of the selected device is displayed in the control window. Select “ON” or “OFF” and then click “Submit”; the LED on the device is controlled accordingly.

8.3 **PC Software Menu**

The TCPServer provided with this document is a simple GUI to demonstrate the Gateway reference design. In a real application, a more complicated GUI may be used. For an overview of the PC software features, refer to Section 3.3. The menu of TCPServer is simple. Only the menu items under “Action” are implemented. They are described below.

- **Refresh List:** Refresh the device list manually
- **Start Listening:** Start listening on the default server port 8840
- **Stop Listening:** Stop listening on the default server port 8840

Whenever the device list is not updated or “Not Found” is shown in the main window as in Figure 8-4, the TCPServer can be restarted by selecting “Stop Listening” and then “Start Listening”.

**Figure 8-4. Device Not Found in TCPServer.**

Appendix A. Additional Information

A.1 Lightweight Mesh Configuration

Table A-1 lists the Lightweight Mesh Software Stack configuration used in this reference design. This configuration can be modified in src/config/config.h.

Table A-1. Lightweight Mesh Options.

<table> <thead> <tr> <th>Option</th> <th>Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>APP_ADDR</td> <td>0 - Coordinator</td> <td>Node network address. It should be 0 for the Gateway</td> </tr> <tr> <td>APP_CHANNEL</td> <td>0x18</td> <td>Radio transceiver channel.
Valid range for 2.4GHz radios is 11 – 26 (0x0b – 0x1a)</td> </tr> <tr> <td>APP_PAN_ID</td> <td>0x1234</td> <td>Network identifier</td> </tr> <tr> <td>APP_ENDPOINT</td> <td>1</td> <td>Application main data communication endpoint</td> </tr> <tr> <td>NWK_BUFFERS_AMOUNT</td> <td>10</td> <td>Number of buffers reserved for stack operation</td> </tr> <tr> <td>HAL_UART_RX_FIFO_SIZE</td> <td>4</td> <td>UART RX buffer size</td> </tr> </tbody> </table>

A.2 LwIP Configuration

Table A-2 lists the LwIP stack configuration in this reference design. These configurations can be modified in src/config/lwipopts.h.

Table A-2. LwIP Options.

<table> <thead> <tr> <th>Option</th> <th>Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>MEM_SIZE</td> <td>3*1024</td> <td>The size of the heap memory</td> </tr> <tr> <td>MEMP_NUM_PBUF</td> <td>12</td> <td>The number of memp struct pbufs</td> </tr> <tr> <td>MEMP_NUM_TCP_PCB</td> <td>2</td> <td>The number of simultaneously active TCP connections</td> </tr> <tr> <td>MEMP_NUM_TCP_PCB_LISTEN</td> <td>1</td> <td>The number of listening TCP connections</td> </tr> <tr> <td>MEMP_NUM_TCP_SEG</td> <td>9</td> <td>The number of simultaneously queued TCP segments</td> </tr> <tr> <td>PBUF_POOL_SIZE</td> <td>6</td> <td>The number of buffers in the pbuf pool</td> </tr> <tr> <td>PBUF_POOL_BUFSIZE</td> <td>512</td> <td>The size of each pbuf in the pbuf pool</td> </tr> <tr> <td>LWIP_TCP</td> <td>1</td> <td>Turn on TCP</td> </tr> <tr> <td>TCP_WND</td> <td>1500</td> <td>The size of a TCP window</td> </tr> <tr> <td>TCP_MSS</td> <td>1500</td> <td>TCP Maximum segment size</td> </tr> <tr> <td>TCP_SND_BUF</td> <td>2150</td> <td>TCP sender buffer space</td> </tr> <tr> <td>TCP_SND_QUEUELEN</td> <td>((6 * TCP_SND_BUF) + (TCP_MSS - 1))/TCP_MSS</td> <td>TCP sender buffer space (pbufs)</td> </tr> </tbody> </table>

## Appendix B. Revision History <table> <thead> <tr> <th>Doc.
Rev.</th> <th>Date</th> <th>Comments</th> </tr> </thead> <tbody> <tr> <td>42165A</td> <td>11/2013</td> <td>Initial document release</td> </tr> </tbody> </table>
Expressing Quality of Service in Agent Communication Lisa Cingiser DiPippo and Lekshmi Nair Department of Computer Science The University of Rhode Island Kingston, RI USA 02881 Abstract This paper presents extensions to a well-known agent communication language for the expression of quality of service. It describes the semantics of the extensions, while allowing quality of service to be interpreted as broadly as possible. The paper then describes the specific extensions to KQML through added performative parameters. A prototype implementation of the extended language is also discussed. Keywords: agent, communication, quality of service, semantics 1. Introduction An agent communication language (ACL) provides a mechanism for agents to express their desires and intentions to other agents in a content language independent manner. Agents can converse about what they know and what they want to know from other agents. This sharing of information allows multiple agents to work together to meet common goals, as well as individual goals. However, in some applications, it is not enough for one agent to let another agent know that it wants some information. A requesting agent must also be able to express something about how it wants the information to be delivered. For example, consider a system in which multiple agents communicate to provide stock market information to an end user. It is not enough for a UserAgent to request the price of Intel stock from a QuotingAgent because the price of stocks changes so rapidly. There must be a way for the UserAgent to express that it needs the price information within a certain amount of time, or with a specified degree of accuracy. In general, in many applications, it is important for an agent to be able to express a desired quality of service (QoS) as part of a communication with another agent. 
Further, it is also necessary for agents to be able to express the level of quality that they can provide to other agents in the services they offer. In this paper we present a methodology for expressing QoS in the capabilities of agents and in the requirements of agents. Section 2 defines the semantics of QoS in agent communication by extending the semantics of a well-known communication language (KQML). Section 3 presents extensions to KQML that allow for the expression of QoS in the language. Section 4 briefly describes a prototype that we have implemented to demonstrate the use of these language extensions. Section 5 concludes with a summary and discussion of the applicability of our work. 2. Semantics of Quality of Service The QoS provided or required by an agent should be an integral part of the communications among agents. This allows communicating agents to “know what they are getting”. In this section, we describe the semantics of the Knowledge Query and Manipulation Language (KQML) [1], a well-known ACL. We then describe what is meant by QoS in the context of agent communication. Finally, we present an extended semantics of KQML to allow for the expression of QoS capabilities and requirements. 2.1. KQML Semantics KQML is an agent communication language in which agents communicate through the expression of performatives [1]. Each performative specifies the kind of communication the speaking agent wants to have with the receiving agent.
For instance, the tell performative allows one agent to inform another agent about something it knows about. The semantics of KQML is based on speech act theory [2]. Cognitive states of KQML-speaking agents are expressed using a meta-language of operators that specify propositional attitudes [2]. These operators express the beliefs, knowledge, desires and intentions of an agent. The meta-language operators are used to describe the semantics of the performatives through preconditions, postconditions and completion conditions. For further details on the specifics of KQML semantics, see [2]. We more fully discuss the semantics in Section 2.3 through our extension of KQML. 2.2. Quality of Service Quality of service is a broad term that can encompass many criteria within a multi-agent system. It can be used to express timing capabilities of an agent, or the accuracy of a response that an agent can provide. For example, if an agent can find the price of a requested stock within 10 seconds, this can be expressed as a QoS capability. Furthermore, if the same agent can find the price of the same stock more quickly, with slightly lower accuracy, this can be expressed through QoS as well. QoS can also express other criteria such as level of security and network bandwidth. For the purpose of expressing agent communications with QoS, we do not distinguish among the different criteria that can be expressed. Rather, we use a general QoS parameter in the specification of agent communication semantics. Quality of service is treated as a general concept, and can be interpreted in any semantic context as “level of quality” provided by or required by an agent. 2.3. Extended Semantics To express QoS as an integral part of agent communication, we have extended the semantics of KQML to include a “level of quality” parameter in the meta-language for expressing agents’ states, and in the expression of pre-, post- and completion conditions for KQML performatives. 2.3.1. 
QoS in Agent State The cognitive state of an agent is expressed through beliefs, knowledge, desires and intentions. The following operators are extended from [2] to express QoS as a part of the agent’s state. - \( \text{BEL}(A,P) \) – Agent \( A \) believes \( P \) to be true. - \( \text{KNOW}(A,S) \) – \( A \) has some knowledge about \( S \), where \( S \) is a state description. - \( \text{WANT}(A,S,Q) \) – \( A \) desires state (or action) \( S \) to occur in the future, with a level of quality \( Q \). - \( \text{INT}(A,S,Q) \) – \( A \) intends on doing \( S \) with a level of quality \( Q \). The belief (BEL) and knowledge (KNOW) operators refer to the current state of the agent’s knowledge. They do not require any extension for expression of QoS. On the other hand, it is necessary for desires (WANT) and intentions (INT) to be able to express QoS. In the context of QoS expression, it is not enough to say that agent \( A \) wants to know something about \( S \), or wants action \( S \) to occur. We must be able to express when or how \( A \) wants to know about \( S \). Similarly, when expressing intentions, we must be able to express that \( A \) intends to do \( S \) within a specified level of quality. 2.3.2. QoS in Agent Performatives We now use the agent state operators described above to express the semantics of KQML performatives that can express QoS capabilities and requirements. In [2], the semantics of a performative are described with (1) a natural language description, (2) a formalization of the description, (3) a set of preconditions, (4) a set of postconditions, and (5) a completion condition. We use these same descriptors to characterize the extensions that we have made to the performative semantics. We present the semantics for three performatives: ask-if, tell and advertise. These semantics are representative of the extensions that we have made to all of the performatives in KQML for the expression of QoS. 
We begin by presenting the semantics for ask-if, followed by an explanation of the specified conditions. \[ \text{ask-if}(A,B,X,Q) \] 1. \( A \) wants to know, with level of quality \( Q \), what \( B \) believes about the truthfulness of \( X \). 2. \( \text{WANT}(A,\text{KNOW}(A,Y),Q) \) \[ Y = (\text{BEL}(B,X)) \text{ or } Y = (\neg \text{BEL}(B,X)) \] 3. \( \text{Pre}(A): \text{WANT}(A,\text{KNOW}(A,Y),Q) \wedge \text{KNOW}(A,\text{INT}(B,\text{PROC}(B,M),Q)) \) \[ \text{Where } M = \text{ask-if}(A,B,X) \] 4. \( \text{Post}(A): \text{INT}(A,\text{KNOW}(A,Y),Q) \) \( \text{Post}(B): \text{KNOW}(B,\text{WANT}(A,\text{KNOW}(A,Y),Q)) \) 5. Completion: \( \text{KNOW}(A,Y) \) The above preconditions imply that before agent \( A \) sends an ask-if message, it wants to know something about \( X \) within a certain level of quality, and it knows that \( B \) can process the request within this level of quality. Also, \( B \) intends to process an ask-if message from \( A \) about \( X \) with level of quality \( Q \). The postconditions indicate that after the ask-if message is sent, \( A \) intends to know something about \( X \) with the specified level of quality, and \( B \) knows that \( A \) wants to know something about \( X \) with that level of quality. The completion condition, which specifies the result of the conversation in which this message exists, indicates that when the conversation is over, \( A \) will know something about \( X \). For example, if agent \( A \) asked agent \( B \) for the price of Intel's stock within 15 seconds, the above semantics imply that \( A \) wants to know what \( B \) knows about the stock price of Intel within 15 seconds, and that \( B \) has expressed its intention to provide this stock price, perhaps through an advertisement. The extended semantics for the tell and advertise performatives are shown below.
The interpretation of the pre-, post- and completion conditions are similar to those for ask-if, so we do not explain the semantics further. \[ \text{tell}(A,B,X,Q) \] 1. \( A \) states to \( B \), with a level of quality \( Q \), that \( X \) is true. 2. \( \text{BEL}(A,X) \) 3. \( \text{Pre}(A): \text{BEL}(A,X) \wedge \text{KNOW}(A,\text{WANT}(B,\text{KNOW}(B,Y),Q)) \) \[ \text{Where } Y = (\text{BEL}(A,X)) \text{ or } Y = (\neg \text{BEL}(A,X)) \] 4. \( \text{Post}(A): \text{KNOW}(A,\text{KNOW}(B,\text{BEL}(A,X))) \) \( \text{Post}(B): \text{KNOW}(B,\text{BEL}(A,X)) \) 5. Completion: \( \text{KNOW}(B,\text{BEL}(A,X)) \) To continue the example above, if agent \( B \) tells agent \( A \) the stock price of Intel, it is expressing its belief about the stock price. The semantics also imply that \( B \) knows that \( A \) wants this information within 15 seconds. \[ \text{advertise}(A,B,M,Q) \] 1. \( A \) states to \( B \) that it can (and will) process the message \( M \) from \( B \) with level of quality \( Q \). 2. \( \text{INT}(A,\text{PROC}(A,M),Q) \) \[ \text{Where } M \text{ is a performative} \] 3. \( \text{Pre}(A): \text{INT}(A,\text{PROC}(A,M),Q) \) \( \text{Pre}(B): \text{none} \) 4. \( \text{Post}(A): \text{KNOW}(A,\text{KNOW}(B,\text{INT}(A,\text{PROC}(A,M),Q))) \) \( \text{Post}(B): \text{KNOW}(B,\text{INT}(A,\text{PROC}(A,M),Q)) \) The advertise performative can be used to allow an agent to inform another agent about its capabilities. It can also be used to allow an agent to register its capabilities with a facilitator agent that helps match agent requests with servicing agents. 3. Extending KQML to Express QoS The semantics expressed in the previous section provide a foundation for extending KQML performatives to express QoS capabilities and requirements. In this section we briefly describe the agent model on which our work is based. We then explain how we have extended KQML performatives with a QoS parameter.
We show examples of the extension for several specific performatives.

3.1. QoS Agent Model

Our QoS extensions to KQML are based upon a model of a real-time multi-agent system (RTMAS) that we have developed [3]. The model is based on the assumption that agents may be able to perform their tasks in multiple ways. It is made up of a set of real-time agents (RTAgent) and a set of communications among the real-time agents (Message). Figure 1 displays the elements of the model.

\[\text{RTAgent} = \{S_1, S_2, ..., S_n\}\]
\[S_i = \langle O, ES \rangle\]
\[ES = \{es_1, es_2, ..., es_f\}\]
\[es_i = \langle ex, a, tv \rangle\]
\[tv_i = \frac{(a_i - a_{i+1})}{(ex_i - ex_{i+1})}\]
\[Message = \langle A, V, Q \rangle\]
\[Q = \langle I, D, H \rangle\]

Figure 1 - RTMAS Model Elements

3.1.1. RTAgent

Each RTAgent is comprised of a set of solvables, \(\{S_1, S_2, ..., S_n\}\), where a solvable is a problem that the agent is designed to solve. Each solvable within the agent is represented by an optimal result (O) and a set of execution strategies (ES). The optimal result is a system-specific definition of what is considered to be the best result for this problem. For instance, in a system in which agents buy, sell and recommend stocks, a BuySellAgent may have a solvable, BuyStock, to purchase a specified stock. The optimal solution in this scenario might be to buy the stock at the current price with no fee.

The ES component of a solvable is the set of execution strategies that can be used to produce a result for a solvable. For example, the solvable BuyStock may have an execution strategy, BS_1, that uses a discount broker with a low fee. This execution strategy may come close to the no-fee requirement of the optimal result, but if the discount broker typically has a longer turnaround time, then the deadline of the BuyStock request may be violated and the price of the stock may have changed.
On the other hand, an execution strategy, BS_2, that uses a more expensive broker may be able to handle the request more quickly.

Each execution strategy of a solvable is comprised of three elements. This model uses criteria for expressing real-time constraints; however, it can easily be modified to handle other measures of QoS. The execution time, ex, represents the amount of time it takes a strategy to run. The level of accuracy, a, is a rating of the result of an execution strategy. Accuracy is calculated as a percentage of the optimal result \((a = \text{strategy result} / \text{optimal result})\). In the example above, we quantify the optimal result of the BuyStock solvable by specifying zero fee for the transaction. While this optimal result may be impossible to achieve, it provides a metric by which to measure the results of the actual execution strategies. The accuracy of a result of a particular execution strategy may be known a priori; in other cases the accuracy can be based upon a statistical average of returned results.

The last component of an execution strategy is the tradeoff value (tv). This parameter provides a measure of how much value would be lost by choosing one execution strategy of a solvable over another one. That is, it measures the percent change in accuracy per change in execution strategy execution time.

3.1.2. Message

Communication among agents in this model is performed through messages sent between agents. The formal specification for a message is displayed in Figure 1. A represents the name of the real-time agent to which the message is directed. V is the name of the solvable that the requesting agent wants to be performed, and Q is the QoS requirement of the message. While Q can represent any number of QoS constraints, we specifically represent three parameters that model real-time agent behavior. I is the level of importance of the request. This value is based on some system-wide scale of importance agreed upon by all agents.
D represents the deadline by which the request must be completed. H specifies the accuracy threshold for the request. If the servicing agent cannot provide at least this accuracy, then the requesting agent may choose to abort the request.

As an example of a real-time agent message, consider a UserAgent in the stock trading example that sends a message to a QuotingAgent to find the price of Intel stock (GetPrice). The deadline that the UserAgent specifies on this message may be based on the amount of time available before a decision must be made. The UserAgent may specify an accuracy threshold that allows for a quarter of a point difference from the actual stock price in order to meet its deadline. The importance of the request depends upon the overall transaction that the UserAgent is attempting to perform. If the transaction involves spending a few hundred dollars, then the importance may be low. But if it involves thousands of dollars, the importance may be higher.

3.2. QoS in Performatives

The expression of QoS in agent communication is in the form of an agent making known its capabilities, and of an agent specifying its requirements to another agent. We have extended KQML by adding two optional parameters to each performative [4]. We refer to the extended language as KQML-Q. The first parameter is QoS_requirement, which allows an agent to specify the level of quality that it requires from the other agent to whom it is sending a message. This parameter is meant to be used with performatives that specify a request for information from another agent. The other parameter that we have added is QoS_capabilities. This parameter is designed to be used with an advertise performative to allow an agent to let other agents know what levels of quality it can provide for a particular request. In our examples, and in our implementation, we have considered timing information and accuracy as our QoS measures.
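As an illustration of the Message specification in Figure 1 (the triple A, V, Q with Q = I, D, H), the following is a minimal sketch; the class names, field names, and concrete values are our own assumptions for illustration, not part of the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class QoS:
    importance: int            # I: level of importance on a system-wide scale
    deadline: float            # D: seconds until the request must complete
    accuracy_threshold: float  # H: minimum acceptable accuracy (percent)

@dataclass
class Message:
    agent: str     # A: name of the real-time agent the message is directed to
    solvable: str  # V: name of the solvable to be performed
    qos: QoS       # Q: the QoS requirement of the message

# The GetPrice example from the text; the concrete values are illustrative.
msg = Message("QuotingAgent", "GetPrice",
              QoS(importance=2, deadline=15.0, accuracy_threshold=95.0))
print(msg.qos.deadline)
```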
However, we have designed the KQML extension in such a way that other measures can be added easily.

3.2.1. Agent Message

In our RTMAS model, there are three kinds of QoS constraints: deadline, importance, and accuracy threshold. The QoS_requirement parameter includes these constraint specifications. Consider an example in the stock trading system in which a UserAgent requests information from another agent (TrendWatcher) that keeps track of trends in a particular segment of the stock market. The following example shows how the UserAgent would ask the TrendWatcher to report on current trends in internet stocks within 15 seconds, providing at least 75% accuracy.

(ask-one
  :sender UserAgent
  :receiver TrendWatcher
  :content Watch(internet)
  :QoS_requirement (dl 15, imp 4, acc 75))

Note that in our examples, we leave out some performative parameters for brevity. The TrendWatcher could respond to this request with the following message:

(tell
  :sender TrendWatcher
  :receiver UserAgent
  :content ReportTrend(35)
  :QoS_requirement (dl 5, imp 4, acc 75))

This tell message expresses a QoS parameter because agent communication is asynchronous, and therefore all messages must be sent explicitly. All KQML-Q performatives may express QoS constraints in the form of the QoS_requirement parameter, so that they can be scheduled to meet their constraints.

3.2.2. Advertisement

Communication between agents, and from agents to facilitators, is extended to allow for expression of QoS capabilities. Agents that provide some services to other agents may advertise to other agents, or to a facilitator.
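The ask-one message in Section 3.2.1 could be assembled programmatically. The helper below is hypothetical (ours, not part of any KQML toolkit); it simply renders a performative with the QoS_requirement parameter fields used in the examples: deadline (dl), importance (imp) and accuracy threshold (acc).

```python
# Hypothetical helper that renders a KQML-Q performative string with the
# QoS_requirement parameter (dl = deadline, imp = importance, acc = accuracy).
def kqml_q(performative, sender, receiver, content, dl, imp, acc):
    return (f"({performative}\n"
            f"  :sender {sender}\n"
            f"  :receiver {receiver}\n"
            f"  :content {content}\n"
            f"  :QoS_requirement (dl {dl}, imp {imp}, acc {acc}))")

# The UserAgent's request to the TrendWatcher from the text.
msg = kqml_q("ask-one", "UserAgent", "TrendWatcher",
             "Watch(internet)", dl=15, imp=4, acc=75)
print(msg)
```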
For example, if an agent that can buy and sell stocks (BuyerSeller) wants to advertise to a facilitator that it can buy a stock with two explicit execution strategies, one with execution time 5 seconds and 85% accuracy, the other with execution time 2 seconds and 65% accuracy, the advertise performative would be as follows:

(advertise
  :sender BuyerSeller
  :receiver Facilitator
  :content BuyStock(A)
  :QoS_capabilities ((ex 5, acc 85) (ex 2, acc 65)))

The facilitator can use this information to match requests to buy a stock given specific QoS specifications with this BuyerSeller agent.

4. Implementation

This section describes the implementation of a prototype RTMAS that allows agents to communicate through KQML-Q. The implementation is based on the KCobalt system [5] that maps KQML messages to CORBA Interface Definition Language (IDL) [6]. All agents in our implementation are represented as CORBA [6] objects whose interface contains a method for each KQML-Q performative. The IDL interface for an agent object in our implementation extends the IDL of KCobalt, and includes the following specifications:

```java
interface CoreS {
    void askOne (in string sender,
                 in string receiver,
                 ...
                 in string qos_Info);
    ...
}
```

This segment of IDL code shows the ask-one method of an agent object. Each other KQML-Q performative is represented as a method as well, with a string parameter for each performative parameter. We have extended the specification of each KCobalt performative interface to provide a parameter for expression of QoS constraints.

Figure 2 displays the flow of control in our implementation. When a real-time agent object expresses a KQML-Q string (1), a parser object parses the string, determines what performative is being requested and forwards the information to a dispatcher object (2). Given the QoS specification in the KQML-Q message, the dispatcher calls a scheduling object (3) that provides real-time scheduling parameters to the system.
The scheduling object also provides information that will be used by the servicing agent to determine which execution strategy will meet the QoS specifications of the requesting agent (4). Finally, the dispatcher calls the method corresponding to the requested performative on the servicing agent, with the QoS information provided by the scheduler. Further details on scheduling our real-time agents can be found in [6,7].

Figure 2 - Implementation

5. Conclusion

In this paper we have presented a way of expressing quality of service in agent communication. By making clear the semantics of this expression, we were able to easily show how the KQML language could be extended. The semantics of QoS that we have presented are independent of the kind of quality that is being expressed. While we have specifically focused on real-time characteristics to define quality, the QoS extension to KQML is flexible enough to be easily modified to handle any expression of quality that is appropriate for a particular application.

We have chosen to apply our QoS extensions to KQML because it is a widely accepted agent communication language that has many current implementations. We are confident that the spirit of this work would apply equally well to the only other well-known agent communication language, FIPA's ACL [8]. While the semantics of FIPA-ACL are somewhat different from KQML semantics, we feel that the expression of QoS requirements and capabilities is sufficiently straightforward to apply to FIPA-ACL semantics as well.

References
Dynamic subscription to YANG Events and Datastores over RESTCONF
draft-ietf-netconf-restconf-notif-12

Abstract

This document provides a RESTCONF binding to the dynamic subscription capability of both subscribed notifications and YANG-Push.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on July 15, 2019.

Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

1. Introduction

Mechanisms to support event subscription and push are defined in [I-D.draft-ietf-netconf-subscribed-notifications]. Enhancements to [I-D.draft-ietf-netconf-subscribed-notifications] which enable YANG datastore subscription and push are defined in [I-D.ietf-netconf-yang-push]. This document provides a transport specification for dynamic subscriptions over RESTCONF [RFC8040]. Driving these requirements is [RFC7923].
The streaming of notifications encapsulating the resulting information push is done via the mechanism described in Section 6.3 of [RFC8040].

2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

The following terms use the definitions from [I-D.draft-ietf-netconf-subscribed-notifications]: dynamic subscription, event stream, notification message, publisher, receiver, subscriber, and subscription. Other terms reused include datastore, which is defined in [RFC8342], and HTTP2 stream, which maps to the definition of "stream" within [RFC7540], Section 2.

[ note to the RFC Editor - please replace XXXX within this document with the number of this document ]

3. Dynamic Subscriptions

This section provides specifics on how to establish and maintain dynamic subscriptions over RESTCONF [RFC8040]. Subscribing to event streams is accomplished via RPCs defined within [I-D.draft-ietf-netconf-subscribed-notifications] Section 2.4; the RPCs are invoked via RESTCONF POST requests. YANG datastore subscription is accomplished via augmentations to [I-D.draft-ietf-netconf-subscribed-notifications] as described within [I-D.ietf-netconf-yang-push] Section 4.4.

As described in [RFC8040] Section 6.3, a GET needs to be made against a specific URI on the publisher. Subscribers cannot pre-determine the URI against which a subscription might exist on a publisher, as the URI will only exist after the "establish-subscription" RPC has been accepted. Therefore, the POST for the "establish-subscription" RPC replaces the GET request for the "location" leaf which is used in [RFC8040] to obtain the URI. The subscription URI will be determined and sent as part of the response to the "establish-subscription" RPC, and a subsequent GET to this URI will be done in order to start the flow of notification messages back to the subscriber.
A subscription does not move to the active state, as per Section 2.4.1 of [I-D.draft-ietf-netconf-subscribed-notifications], until the GET is received.

3.1. Transport Connectivity

For a dynamic subscription, where a RESTCONF session doesn't already exist, a new RESTCONF session is initiated from the subscriber. As stated in Section 2.1 of [RFC8040], a subscriber MUST establish the HTTP session over TLS [RFC5246] in order to secure the content in transit.

Without the involvement of additional protocols, HTTP sessions by themselves do not allow for a quick recognition of when the communication path has been lost with the publisher. Where quick recognition of the loss of a publisher is required, a subscriber SHOULD use a TLS heartbeat [RFC6520], just from receiver to publisher, to track HTTP session continuity. Loss of the heartbeat MUST result in any subscription related TCP sessions between those endpoints being torn down. A subscriber can then attempt to re-establish the dynamic subscription by using the procedure described in Section 3.

3.2. Discovery

Subscribers can learn what event streams a RESTCONF server supports by querying the "streams" container of ietf-subscribed-notifications.yang in [I-D.draft-ietf-netconf-subscribed-notifications]. Support for the "streams" container of ietf-restconf-monitoring.yang in [RFC8040] is not required. If it is supported, the event streams which are in the "streams" container of ietf-subscribed-notifications.yang SHOULD also be in the "streams" container of ietf-restconf-monitoring.yang.

Subscribers can learn what datastores a RESTCONF server supports by following Section 2 of [I-D.draft-ietf-netconf-nmda-restconf].

3.3. RESTCONF RPCs and HTTP Status Codes

Specific HTTP response codes as defined in [RFC7231] Section 6 will indicate the result of RESTCONF RPC requests with the publisher.
An HTTP status code of 200 is the proper response to any successful RPC defined within [I-D.draft-ietf-netconf-subscribed-notifications] or [I-D.ietf-netconf-yang-push]. If a publisher fails to serve the RPC request for one of the reasons indicated in [I-D.draft-ietf-netconf-subscribed-notifications] Section 2.4.6 or [I-D.ietf-netconf-yang-push] Appendix A, this will be indicated by a "406" status code transported in the HTTP response.

When a "406" status code is returned, the RPC reply MUST include an "rpc-error" element per [RFC8040] Section 7.1 with the following parameter values:

- an "error-type" node of "application".
- an "error-tag" node of "operation-failed".
- an "error-app-tag" node with the value being a string that corresponds to an identity associated with the error, as defined in [I-D.draft-ietf-netconf-subscribed-notifications] Section 2.4.6 for general subscriptions, and [I-D.ietf-netconf-yang-push] Appendix A.1 for datastore subscriptions. The tag to use depends on the RPC for which the error occurred. Viable errors for different RPCs are as follows:

   RPC                       select an identity with a base
   ----------------------    ------------------------------
   establish-subscription    establish-subscription-error
   modify-subscription       modify-subscription-error
   delete-subscription       delete-subscription-error
   kill-subscription         kill-subscription-error
   resync-subscription       resync-subscription-error

Each error identity will be inserted as the "error-app-tag" using JSON encoding following the form <modulename>:<identityname>. An example of such a valid encoding would be "ietf-subscribed-notifications:no-such-subscription".

In case of error responses to an "establish-subscription" or "modify-subscription" request there is the option of including an "error-info" node.
This node may contain hints for parameter settings that might lead to successful RPC requests in the future. Following are the yang-data structures which may be returned:

   establish-subscription    returns hints in yang-data structure
   ----------------------    ------------------------------------
   target: event stream      establish-subscription-stream-error-info
   target: datastore         establish-subscription-datastore-error-info

   modify-subscription       returns hints in yang-data structure
   ----------------------    ------------------------------------
   target: event stream      modify-subscription-stream-error-info
   target: datastore         modify-subscription-datastore-error-info

The yang-data included within "error-info" SHOULD NOT include the optional leaf "error-reason", as such a leaf would be redundant with information that is already placed within the "error-app-tag".

In case of an rpc error as a result of a "delete-subscription", a "kill-subscription", or a "resync-subscription" request, no "error-info" needs to be included, as the "subscription-id" is the only RPC input parameter and no hints regarding this RPC input parameter need to be provided.

Note that "error-path" [RFC8040] does not need to be included with the "rpc-error" element, as subscription errors are generally associated with the choice of RPC input parameters.

3.4. Call Flow for Server-Sent Events (SSE)

The call flow is defined in Figure 1. The logical connections denoted by (a) and (b) can be a TCP connection or an HTTP2 stream (multiple HTTP2 streams can be carried in one TCP connection). Requests to [I-D.draft-ietf-netconf-subscribed-notifications] or [I-D.ietf-netconf-yang-push] augmented RPCs are sent on a connection indicated by (a). A successful "establish-subscription" will result in an RPC response returned with both a subscription identifier which uniquely identifies a subscription, as well as a URI which uniquely identifies the location of the subscription on the publisher (b).
This URI is defined via the "uri" leaf of the Data Model in Section 7. An HTTP GET is then sent on a separate logical connection (b) to the URI on the publisher. This triggers the publisher to initiate the flow of notification messages, which are sent in SSE [W3C-20150203] as a response to the GET.

Additional requirements for dynamic subscriptions over SSE include:

- All subscription state notifications from a publisher MUST be returned in a separate SSE message used by the subscription to which the state change refers.
- Subscription RPCs MUST NOT use the connection currently providing notification messages for that subscription.
- In addition to an RPC response for a "modify-subscription" RPC traveling over (a), a "subscription-modified" state change notification MUST be sent within (b). This allows the receiver to know exactly when the new terms of the subscription have been applied to the notification messages. See arrow (c).
- In addition to any required access permissions (e.g., NACM), the RPCs modify-subscription, resync-subscription and delete-subscription SHOULD only be allowed by the same RESTCONF username [RFC8040] which invoked establish-subscription.
- The kill-subscription RPC can be invoked by any RESTCONF username with the required administrative permissions.

A publisher MUST terminate a subscription in the following cases:

- Receipt of a "delete-subscription" or a "kill-subscription" RPC for that subscription.
- Loss of TLS heartbeat.

A publisher MAY terminate a subscription at any time as stated in [I-D.draft-ietf-netconf-subscribed-notifications] Section 1.3.

4. QoS Treatment

To meet subscription quality of service promises, the publisher MUST take any existing subscription "dscp" and apply it to the DSCP marking in the IP header.
In addition, where HTTP2 transport is available to a notification message queued for transport to a receiver, the publisher MUST:

- take any existing subscription "priority", as specified by the "weighting" leaf node in [I-D.draft-ietf-netconf-subscribed-notifications], and copy it into the HTTP2 stream weight, [RFC7540] Section 5.3, and
- take any existing subscription "dependency", as specified by the "dependency" leaf node in [I-D.draft-ietf-netconf-subscribed-notifications], and use the HTTP2 stream for the parent subscription as the HTTP2 stream dependency, [RFC7540] Section 5.3.1, of the dependent subscription, and
- set the exclusive flag, [RFC7540] Section 5.3.1, to 0.

5. Notification Messages

Notification messages transported over RESTCONF will be encoded according to [RFC8040], Section 6.4.

6. YANG Tree

The YANG model defined in Section 7 has one leaf augmented into three places of [I-D.draft-ietf-netconf-subscribed-notifications].

module: ietf-restconf-subscribed-notifications
  augment /sn:establish-subscription/sn:output:
    +--ro uri?   inet:uri
  augment /sn:subscriptions/sn:subscription:
    +--ro uri?   inet:uri
  augment /sn:subscription-modified:
    +--ro uri?   inet:uri

7. YANG Module

This module references [I-D.draft-ietf-netconf-subscribed-notifications].
<CODE BEGINS> file "ietf-restconf-subscribed-notifications@2019-01-11.yang"

module ietf-restconf-subscribed-notifications {
  yang-version 1.1;
  namespace
    "urn:ietf:params:xml:ns:yang:" +
    "ietf-restconf-subscribed-notifications";

  prefix rsn;

  import ietf-subscribed-notifications {
    prefix sn;
  }
  import ietf-inet-types {
    prefix inet;
  }

  organization "IETF NETCONF (Network Configuration) Working Group";
  contact
    "WG Web:   <http://tools.ietf.org/wg/netconf/>
     WG List:  <mailto:netconf@ietf.org>

     Editor:   Eric Voit
               <mailto:evoit@cisco.com>

     Editor:   Alexander Clemm
               <mailto:ludwig@clemm.org>

     Editor:   Reshad Rahman
               <mailto:rrahman@cisco.com>";

  description
    "Defines RESTCONF as a supported transport for subscribed
     event notifications.

     Copyright (c) 2019 IETF Trust and the persons identified as
     authors of the code.  All rights reserved.

     Redistribution and use in source and binary forms, with or
     without modification, is permitted pursuant to, and subject to
     the license terms contained in, the Simplified BSD License set
     forth in Section 4.c of the IETF Trust's Legal Provisions
     Relating to IETF Documents
     (https://trustee.ietf.org/license-info).

     This version of this YANG module is part of RFC XXXX; see the
     RFC itself for full legal notices.";

  revision 2019-01-11 {
    description "Initial version";
    reference
      "RFC XXXX: RESTCONF Transport for Event Notifications";
  }

  grouping uri {
    description
      "Provides a reusable description of a URI.";
    leaf uri {
      type inet:uri;
      config false;
      description
        "Location of a subscription specific URI on the publisher.";
    }
  }

  augment "/sn:establish-subscription/sn:output" {
    description
      "This augmentation allows RESTCONF specific parameters for a
       response to a publisher's subscription request.";
    uses uri;
  }

  augment "/sn:subscriptions/sn:subscription" {
    description
      "This augmentation allows RESTCONF specific parameters to be
       exposed for a subscription.";
    uses uri;
  }
  augment "/sn:subscription-modified" {
    description
      "This augmentation allows RESTCONF specific parameters to be
       included as part of the notification that a subscription has
       been modified.";
    uses uri;
  }
}
<CODE ENDS>

8. IANA Considerations

This document registers the following namespace URI in the "IETF XML Registry" [RFC3688]:

   URI: urn:ietf:params:xml:ns:yang:ietf-restconf-subscribed-notifications
   Registrant Contact: The IESG.
   XML: N/A; the requested URI is an XML namespace.

This document registers the following YANG module in the "YANG Module Names" registry [RFC6020]:

   Name: ietf-restconf-subscribed-notifications
   Namespace: urn:ietf:params:xml:ns:yang:ietf-restconf-subscribed-notifications
   Prefix: rsn
   Reference: RFC XXXX: RESTCONF Transport for Event Notifications

9. Security Considerations

The YANG module specified in this document defines a schema for data that is designed to be accessed via network management transports such as NETCONF [RFC6241] or RESTCONF [RFC8040]. The lowest NETCONF layer is the secure transport layer, and the mandatory-to-implement secure transport is Secure Shell (SSH) [RFC6242]. The lowest RESTCONF layer is HTTPS, and the mandatory-to-implement secure transport is TLS [RFC5246].

The one new data node introduced in this YANG module may be considered sensitive or vulnerable in some network environments. It is thus important to control read access (e.g., via get, get-config, or notification) to this data node. These are the subtrees and data nodes and their sensitivity/vulnerability:

Container: "/subscriptions"

- "uri": this leaf will show where subscribed resources might be located on a publisher. Access control must be set so that only someone with proper access permissions, and perhaps even the proper HTTP session, has the ability to access this resource. The subscription URI is implementation specific and is encrypted via the use of TLS. Therefore, even if an attacker succeeds in guessing the subscription URI, a RESTCONF username [RFC8040] with the required administrative permissions must be used to be able to access or modify that subscription.

10.
Acknowledgments

We wish to acknowledge the helpful contributions, comments, and suggestions that were received from: Ambika Prasad Tripathy, Alberto Gonzalez Prieto, Susan Hares, Tim Jenkins, Balazs Lengyel, Kent Watsen, Michael Scharf, Guangying Zheng, Martin Bjorklund, Qin Wu and Robert Wilton.

11. References

11.1. Normative References

[I-D.draft-ietf-netconf-subscribed-notifications]
   Voit, E., Clemm, A., Gonzalez Prieto, A., Tripathy, A., and E. Nilsen-Nygaard, "Custom Subscription to Event Streams", draft-ietf-netconf-subscribed-notifications-21 (work in progress), January 2019.

[I-D.ietf-netconf-yang-push]
   Clemm, A., Voit, E., Gonzalez Prieto, A., Prasad Tripathy, A., Nilsen-Nygaard, E., Bierman, A., and B. Lengyel, "Subscribing to YANG datastore push updates", draft-ietf-netconf-yang-push-20 (work in progress), October 2018.

[RFC2119]
   Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

11.2. Informative References

[I-D.draft-ietf-netconf-netconf-event-notifications]
   Clemm, A., Voit, E., Gonzalez Prieto, A., Nilsen-Nygaard, E., and A. Tripathy, "NETCONF support for event notifications", May 2018, <https://datatracker.ietf.org/doc/draft-ietf-netconf-netconf-event-notifications/>.

[I-D.draft-ietf-netconf-nmda-restconf]
   Bjorklund, M., Schoenwaelder, J., Shafer, P., Watsen, K., and R. Wilton, "RESTCONF Extensions to Support the Network Management Datastore Architecture", April 2018, <https://datatracker.ietf.org/doc/draft-ietf-netconf-nmda-restconf/>.

[RFC7231]
   Fielding, R., Ed. and J. Reschke, Ed., "Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content", RFC 7231, DOI 10.17487/RFC7231, June 2014.

[RFC7923]
   Voit, E., Clemm, A., and A. Gonzalez Prieto, "Requirements for Subscription to YANG Datastores", RFC 7923, DOI 10.17487/RFC7923, June 2016.

[RFC8347]
   Liu, X., Kyparlis, A., Parikh, R., Lindem, A., and M. Zhang, "A YANG Data Model for the Virtual Router Redundancy Protocol (VRRP)", RFC 8347, DOI 10.17487/RFC8347, March 2018.

[XPATH]
   Clark, J. and S. DeRose, "XML Path Language (XPath) Version 1.0", November 1999, <http://www.w3.org/TR/1999/REC-xpath-19991116>.
Appendix A. Examples

This section is non-normative. To allow easy comparison, this section mirrors the functional examples shown with NETCONF over XML within [I-D.draft-ietf-netconf-netconf-event-notifications]. HTTP2 and HTTP1.1 headers are not shown, as the contents of the JSON-encoded objects are identical in both cases.

A.1. Dynamic Subscriptions

A.1.1. Establishing Dynamic Subscriptions

The following figure shows two successful "establish-subscription" RPC requests as per [I-D.draft-ietf-netconf-subscribed-notifications]. The first request is given a subscription identifier of 22, the second, an identifier of 23.

```
+------------+                         +-----------+
| Subscriber |                         | Publisher |
+------------+                         +-----------+
      |                                      |
      |-------establish-subscription-------->|
      |<-----HTTP 200 OK, id#22, URI#1-------|
      |                                      |
      |-------establish-subscription-------->|
      |<-----HTTP 200 OK, id#23, URI#2-------|
      |                                      |
      |--------------GET (URI#1)------------>|
      |<---HTTP 200 OK, notif-mesg (id#22)---|
      |                                      |
      |--------------GET (URI#2)------------>|
      |<---HTTP 200 OK, notif-mesg (id#23)---|
      |                                      |
      |<---------notif-mesg (id#22)----------|
      |<---------notif-mesg (id#23)----------|
```

Figure 2: Multiple subscriptions over RESTCONF/HTTP

To provide examples of the information being transported, example messages for the interactions in Figure 2 are detailed below:

POST /restconf/operations
     /ietf-subscribed-notifications:establish-subscription

{
  "ietf-subscribed-notifications:input": {
    "stream-xpath-filter": "/example-module:foo/",
    "stream": "NETCONF",
    "dscp": "10"
  }
}

Figure 3: establish-subscription request (a)

As the publisher was able to fully satisfy the request, the publisher sends the subscription identifier of the accepted subscription, and the URI:

HTTP status code - 200

{
  "id": "22",
  "uri": "https://example.com/restconf/subscriptions/22"
}

Figure 4: establish-subscription success (b)

Upon receipt of the successful response, the subscriber does a GET on
the provided URI to start the flow of notification messages. When the publisher receives this, the subscription is moved to the active state (c).

GET /restconf/subscriptions/22

Figure 5: establish-subscription subsequent GET

While not shown in Figure 2, if the publisher had not been able to fully satisfy the request, or the subscriber had no authorization to establish the subscription, the publisher would have sent an RPC error response. For instance, if the "dscp" value of 10 asserted by the subscriber in Figure 3 proved unacceptable, the publisher might have returned:

HTTP status code - 406

```
{
  "ietf-restconf:errors" : {
    "error" : [
      {
        "error-type": "application",
        "error-tag": "operation-failed",
        "error-severity": "error",
        "error-app-tag":
          "ietf-subscribed-notifications:dscp-unavailable"
      }
    ]
  }
}
```

Figure 6: an unsuccessful establish-subscription

The subscriber can use this information in future attempts to establish a subscription.

A.1.2. Modifying Dynamic Subscriptions

An existing subscription may be modified. The following exchange shows the negotiation of such a modification via several exchanges between a subscriber and a publisher. The negotiation consists of a failed RPC modification request/response, followed by a successful one.

If the subscription being modified in Figure 7 is a datastore subscription as per [I-D.ietf-netconf-yang-push], the modification request made in (d) may look like that shown in Figure 8. As can be seen, the modifications being attempted are the application of a new XPath filter as well as the setting of a new periodic time interval.
```
POST /restconf/operations
     /ietf-subscribed-notifications:modify-subscription

{
  "ietf-subscribed-notifications:input": {
    "id": "23",
    "ietf-yang-push:datastore-xpath-filter":
      "/example-module:foo/example-module:bar",
    "ietf-yang-push:periodic": {
      "ietf-yang-push:period": "500"
    }
  }
}
```

Figure 8: Subscription modification request (d)

If the publisher can satisfy both changes, the publisher sends a positive result for the RPC. If the publisher cannot satisfy either of the proposed changes, the publisher sends an RPC error response (e). The following is an example RPC error response for (e) which includes a hint. This hint is an alternative time period value which might have resulted in a successful modification:

HTTP status code - 406

```
{
  "ietf-restconf:errors" : {
    "error" : [
      {
        "error-type": "application",
        "error-tag": "operation-failed",
        "error-severity": "error",
        "error-app-tag": "ietf-yang-push:period-unsupported",
        "error-info": {
          "ietf-yang-push:modify-subscription-datastore-error-info": {
            "period-hint": "3000"
          }
        }
      }
    ]
  }
}
```

Figure 9: Modify subscription failure with hint (e)

A.1.3. Deleting Dynamic Subscriptions

The following demonstrates deleting a subscription. This subscription may have been to either a stream or a datastore.

POST /restconf/operations
     /ietf-subscribed-notifications:delete-subscription

```
{
  "delete-subscription": {
    "id": "22"
  }
}
```

Figure 10: Delete subscription

If the publisher can satisfy the request, the publisher replies with success to the RPC request. If the publisher cannot satisfy the request, the publisher sends an RPC error response indicating why the deletion failed.
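Subscriber-side construction of the delete request and classification of a failure reply can be sketched as follows. This is an illustrative helper, not part of either draft: the function names are assumptions, the request body mirrors Figure 10, and the failure body mirrors the "ietf-restconf:errors" structure used throughout these examples.

```python
import json

def delete_subscription_body(sub_id: str) -> str:
    """Build the JSON body for the delete-subscription RPC (mirrors Figure 10)."""
    return json.dumps({"delete-subscription": {"id": sub_id}})

def error_app_tags(error_response: dict) -> list:
    """Collect error-app-tag values from an "ietf-restconf:errors" reply."""
    errors = error_response.get("ietf-restconf:errors", {}).get("error", [])
    return [e.get("error-app-tag") for e in errors if isinstance(e, dict)]

body = delete_subscription_body("22")

# A failure reply carries a machine-readable tag the subscriber can act on:
failure = {
    "ietf-restconf:errors": {
        "error": [{
            "error-type": "application",
            "error-tag": "operation-failed",
            "error-severity": "error",
            "error-app-tag":
                "ietf-subscribed-notifications:no-such-subscription",
        }]
    }
}
tags = error_app_tags(failure)
```

The tag-based classification lets a subscriber distinguish, for example, a nonexistent subscription from an authorization failure without parsing human-readable text.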
Figure 11 shows a valid response for a subscription identifier which exists, but which was created on a different transport session:

HTTP status code - 406

```json
{
  "ietf-restconf:errors" : {
    "error" : [
      {
        "error-type": "application",
        "error-tag": "operation-failed",
        "error-severity": "error",
        "error-app-tag":
          "ietf-subscribed-notifications:no-such-subscription"
      }
    ]
  }
}
```

Figure 11: Unsuccessful delete subscription

A.2. Subscription State Notifications

A publisher will send subscription state notifications according to the definitions within [I-D.draft-ietf-netconf-subscribed-notifications].

A.2.1. subscription-modified

A "subscription-modified" notification encoded in JSON would look like:

```json
{
  "ietf-restconf:notification" : {
    "eventTime": "2007-09-01T10:00:00Z",
    "ietf-subscribed-notifications:subscription-modified": {
      "id": "39",
      "uri": "https://example.com/restconf/subscriptions/22"
    }
  }
}
```

Figure 12: subscription-modified subscription state notification

A.2.2. subscription-completed, subscription-resumed, and replay-complete

A "subscription-completed" notification would look like:

```json
{
  "ietf-restconf:notification": {
    "eventTime": "2007-09-01T10:00:00Z",
    "ietf-subscribed-notifications:subscription-completed": {
      "id": "39"
    }
  }
}
```

Figure 13: subscription-completed notification in JSON

The "subscription-resumed" and "replay-complete" notifications are virtually identical, with "subscription-completed" simply being replaced by "subscription-resumed" and "replay-complete" respectively.

A.2.3. subscription-terminated and subscription-suspended

A "subscription-terminated" notification would look like:

```json
{
  "ietf-restconf:notification": {
    "eventTime": "2007-09-01T10:00:00Z",
    "ietf-subscribed-notifications:subscription-terminated": {
      "id": "39",
      "error-id": "suspension-timeout"
    }
  }
}
```

Figure 14: subscription-terminated subscription state notification

The "subscription-suspended" notification is virtually identical, with "subscription-terminated" simply being replaced by "subscription-suspended".

A.3.
Filter Example

This section provides an example which illustrates the method of filtering event record contents. The example is based on the YANG notification "vrrp-protocol-error-event" as defined in the ietf-vrrp.yang module within [RFC8347]. Event records based on this specification which are generated by the publisher might appear as:

```
data: {
data:   "ietf-restconf:notification": {
data:     "eventTime": "2018-09-14T08:22:33.44Z",
data:     "ietf-vrrp:vrrp-protocol-error-event": {
data:       "protocol-error-reason": "checksum-error"
data:     }
data:   }
data: }
```

Figure 15: RFC 8347 (VRRP) - Example Notification

Suppose a subscriber wanted to establish a subscription which only passes instances of event records where there is a "checksum-error" as part of a VRRP protocol event. Also assume the publisher places such event records into the NETCONF stream. To get a continuous series of matching event records, the subscriber might request the application of an XPath filter against the NETCONF stream. An "establish-subscription" RPC to meet this objective might be:

POST /restconf/operations
     /ietf-subscribed-notifications:establish-subscription

{
  "ietf-subscribed-notifications:input": {
    "stream": "NETCONF",
    "stream-xpath-filter": "/ietf-vrrp:vrrp-protocol-error-event[
      protocol-error-reason='checksum-error']/"
  }
}

Figure 16: Establishing a subscription error reason via XPath

For more examples of XPath filters, see [XPATH].

Suppose the "establish-subscription" in Figure 16 was accepted. And suppose later the subscriber decided to broaden this subscription to cover all VRRP protocol events (i.e., not just those with a "checksum-error"). The subscriber might attempt to modify the subscription in a way which replaces the XPath filter with a subtree filter which sends all VRRP protocol events to the subscriber.
Such a "modify-subscription" RPC might look like:

POST /restconf/operations
     /ietf-subscribed-notifications:modify-subscription

{
  "ietf-subscribed-notifications:input": {
    "stream": "NETCONF",
    "stream-subtree-filter": {
      "/ietf-vrrp:vrrp-protocol-error-event" : {}
    }
  }
}

Figure 17

For more examples of subtree filters, see [RFC6241], Section 6.4.

Appendix B. Changes between revisions

(To be removed by RFC editor prior to publication)

v11 - v12

- Added text in 3.2 for expected behavior when ietf-restconf-monitoring.yang is also supported.
- Added section 2 to the reference to draft-ietf-netconf-nmda-restconf.
- Replaced kill-subscription-error by delete-subscription-error in section 3.3.
- Clarified vertical lines (a) and (b) in Figure 1 of section 3.4.
- Section 3.4, 3rd bullet after Figure 1, replaced "must" with "MUST".
- Modified text in section 3.4 regarding access to RPCs modify-subscription, resync-subscription, delete-subscription and kill-subscription.
- Section 4, first bullet for HTTP2: replaced dscp and priority with weighting and weight.
- Section 6, added YANG tree diagram and fixed description of the module.
- Section 7, fixed indentation of module description statement.
- Section 7, in YANG module changed year in copyright statement to 2019.
- Section 8, added text on how server protects access to the subscription URI.
- Fixed outdated references and removed unused references.
- Fixed the instances of line too long.
- Fixed example in Figure 3.

v10 - v11

- Per Kent's request, added name attribute to artwork which need to be extracted.

v09 - v10

- Fixed typo for resync.
- Added text wrt RPC permissions and RESTCONF username.

v08 - v09

- Addressed comments received during WGLC.

v07 - v08

- Aligned with RESTCONF mechanism.
- YANG model: removed augment of subscription-started, added restconf transport.
- Added Appendix A.3 for filter example.

v06 - v07

- Removed configured subscriptions.
- Subscription identifier renamed to id.

v05 - v06

- JSON examples updated by Reshad.
v04 - v05

- Error mechanisms updated to match embedded RESTCONF mechanisms.
- Restructured format and sections of document.
- Added a YANG data model for HTTP specific parameters.
- Mirrored the examples from the NETCONF transport draft to allow easy comparison.

v03 - v04

- Draft not fully synched to new version of subscribed-notifications yet.
- References updated.

v02 - v03

- Event notification reframed to notification message.
- Tweaks to wording/capitalization/format.

v01 - v02

- Removed sections now redundant with [I-D.draft-ietf-netconf-subscribed-notifications] and [I-D.ietf-netconf-yang-push] such as: mechanisms for subscription maintenance, terminology definitions, stream discovery.
- 3rd party subscriptions are out-of-scope.
- SSE only used with RESTCONF and HTTP1.1 dynamic subscriptions.
- Timeframes for event tagging are self-defined.
- Clean-up of wording, references to terminology, section numbers.

v00 - v01

- Removed the ability for more than one subscription to go to a single HTTP2 stream.
- Updated call flows. Extensively.
- SSE only used with RESTCONF and HTTP1.1 dynamic subscriptions.
- HTTP is not used to determine that a receiver has gone silent and is not receiving event notifications.
- Many clean-ups of wording and terminology.

Authors' Addresses

Eric Voit
Cisco Systems
Email: evoit@cisco.com

Reshad Rahman
Cisco Systems
Email: rrahman@cisco.com

Einar Nilsen-Nygaard
Cisco Systems
Email: einarnn@cisco.com

Alexander Clemm
Huawei
Email: ludwig@clemm.org

Andy Bierman
YumaWorks
Email: andy@yumaworks.com
Asymmetric Key Packages

Abstract

This document defines the syntax for private-key information and a content type for it. Private-key information includes a private key for a specified public-key algorithm and a set of attributes. The Cryptographic Message Syntax (CMS), as defined in RFC 5652, can be used to digitally sign, digest, authenticate, or encrypt the asymmetric key format content type. This document obsoletes RFC 5208.

Status of This Memo

This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc5958.

Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

1. Introduction

This document defines the syntax for private-key information and a Cryptographic Message Syntax (CMS) [RFC5652] content type for it. Private-key information includes a private key for a specified public-key algorithm and a set of attributes. The CMS can be used to digitally sign, digest, authenticate, or encrypt the asymmetric key format content type.
This document obsoletes PKCS #8 v1.2 [RFC5208].

1.1. Requirements Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

1.2. ASN.1 Syntax Notation

The key package is defined using ASN.1 [X.680], [X.681], [X.682], and [X.683].

1.3. Summary of Updates to RFC 5208

The following summarizes the updates to [RFC5208]:

- Changed the name "PrivateKeyInfo" to "OneAsymmetricKey". This reflects the addition of the publicKey field to allow both parts of the asymmetric key to be conveyed separately. Not all algorithms will use both fields; however, the publicKey field was added for completeness.

- Defined the Asymmetric Key Package CMS content type.

- Removed redundant IMPLICIT from attributes.

- Added publicKey to OneAsymmetricKey and updated the version number.

- Added that PKCS #9 attributes may be supported.

- Added discussion of compatibility with other private-key formats.

- Added requirements for the encoding rule set.

- Changed imports from PKCS #5 to [RFC5912] and [RFC5911].

- Replaced ALGORITHM-IDENTIFIER with ALGORITHM from [RFC5912].

- Registered the application/pkcs8 media type and .p8 file extension.

2. Asymmetric Key Package CMS Content Type

The asymmetric key package CMS content type is used to transfer one or more plaintext asymmetric keys from one party to another. An asymmetric key package MAY be encapsulated in one or more CMS protecting content types (see Section 4). Earlier versions of this specification [RFC5208] did not specify a particular encoding rule set, but generators SHOULD use DER [X.690] and receivers MUST support BER [X.690], which also includes DER [X.690].
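To make the encoding-rule requirement concrete, the sketch below hand-encodes the definite-length tag-length-value form that DER mandates, building a minimal version-1 private-key structure (an INTEGER version, an algorithm OID inside a SEQUENCE, and an OCTET STRING). This is only an illustration, not a replacement for a real ASN.1 toolkit; the example uses the id-ecPublicKey OID (1.2.840.10045.2.1) with dummy key bytes, and a real EC AlgorithmIdentifier would also carry curve parameters.

```python
def der_len(n: int) -> bytes:
    """Encode a DER length: short form below 128, long form otherwise."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_tlv(tag: int, value: bytes) -> bytes:
    """Wrap a value in a tag-length-value triple."""
    return bytes([tag]) + der_len(len(value)) + value

def encode_oid(arcs) -> bytes:
    """DER-encode an OBJECT IDENTIFIER from its integer arcs."""
    first = bytes([40 * arcs[0] + arcs[1]])   # first two arcs share a byte
    rest = b""
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]                  # base-128, high bit = "more"
        arc >>= 7
        while arc:
            chunk.append(0x80 | (arc & 0x7F))
            arc >>= 7
        rest += bytes(reversed(chunk))
    return der_tlv(0x06, first + rest)

def minimal_private_key_info(oid_arcs, private_key: bytes) -> bytes:
    """SEQUENCE { INTEGER v1(0), SEQUENCE { OID }, OCTET STRING }."""
    version = der_tlv(0x02, b"\x00")                  # version v1(0)
    alg_id  = der_tlv(0x30, encode_oid(oid_arcs))     # algorithm, no params
    privkey = der_tlv(0x04, private_key)              # raw key octets
    return der_tlv(0x30, version + alg_id + privkey)  # outer SEQUENCE

der = minimal_private_key_info([1, 2, 840, 10045, 2, 1], b"\x01\x02\x03")
```

Because DER fixes one definite-length encoding per value, the same input always produces byte-identical output, which is what makes it suitable for signed structures.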
The asymmetric key package content type has the following syntax:

```
ct-asymmetric-key-package CONTENT-TYPE ::=
    { AsymmetricKeyPackage IDENTIFIED BY id-ct-KP-aKeyPackage }

id-ct-KP-aKeyPackage OBJECT IDENTIFIER ::=
    { joint-iso-itu-t(2) country(16) us(840) organization(1)
      gov(101) dod(2) infosec(1) formats(2)
      key-package-content-types(78) 5 }

AsymmetricKeyPackage ::= SEQUENCE SIZE (1..MAX) OF OneAsymmetricKey

OneAsymmetricKey ::= SEQUENCE {
    version                   Version,
    privateKeyAlgorithm       PrivateKeyAlgorithmIdentifier,
    privateKey                PrivateKey,
    attributes            [0] Attributes OPTIONAL,
    ...,
    [[2: publicKey        [1] PublicKey OPTIONAL ]],
    ...
}

PrivateKeyInfo ::= OneAsymmetricKey
-- PrivateKeyInfo is used by [P12].  If any items tagged as version
-- 2 are used, the version must be v2, else the version should be
-- v1.  When v1, PrivateKeyInfo is the same as it was in [RFC5208].

Version ::= INTEGER { v1(0), v2(1) } (v1, ..., v2)

PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier
                                    { PUBLIC-KEY,
                                      { PrivateKeyAlgorithms } }

PrivateKey ::= OCTET STRING
-- Content varies based on type of key.  The
-- algorithm identifier dictates the format of
-- the key.

PublicKey ::= BIT STRING
-- Content varies based on type of key.  The
-- algorithm identifier dictates the format of
-- the key.

Attributes ::= SET OF Attribute { { OneAsymmetricKeyAttributes } }
```

The AsymmetricKeyPackage contains one or more OneAsymmetricKey elements. The syntax of OneAsymmetricKey accommodates a version number, an indication of the asymmetric algorithm to be used with the private key, a private key, optional keying material attributes (e.g., userCertificate from [X.520]), and an optional public key. In general, either the public key or the certificate will be present. Only in very rare cases will both the public key and the certificate be present, as that would include two copies of the public key. OneAsymmetricKey renames the PrivateKeyInfo syntax defined in [RFC5208].
The new name better reflects the ability to carry both private- and public-key components. Backwards compatibility with the original PrivateKeyInfo is preserved via the version number.

The fields in OneAsymmetricKey are used as follows:

- version identifies the version of OneAsymmetricKey. If publicKey is present, then version is set to v2; otherwise, version is set to v1.

- privateKeyAlgorithm identifies the private-key algorithm and optionally contains parameters associated with the asymmetric key pair. The algorithm is identified by an object identifier (OID), and the format of the parameters depends on the OID, but the PrivateKeyAlgorithms information object set restricts the permissible OIDs. The value placed in privateKeyAlgorithmIdentifier is the value an originator would apply to indicate which algorithm is to be used with the private key.

- privateKey is an OCTET STRING that contains the value of the private key. The interpretation of the content is defined in the registration of the private-key algorithm. For example, a DSA key is an INTEGER, an RSA key is represented as RSAPrivateKey as defined in [RFC3447], and an Elliptic Curve Cryptography (ECC) key is represented as ECPrivateKey as defined in [RFC5915].

- attributes is OPTIONAL. It contains information corresponding to the public key (e.g., certificates). The attributes field uses the class ATTRIBUTE, which is restricted by the OneAsymmetricKeyAttributes information object set. OneAsymmetricKeyAttributes is an open-ended set in this document. Other documents can constrain these values. Attributes from [RFC2985] MAY be supported.

- publicKey is OPTIONAL. When present, it contains the public key encoded in a BIT STRING. The structure within the BIT STRING, if any, depends on the privateKeyAlgorithm. For example, a DSA key is an INTEGER.
Note that RSA public keys are included in RSAPrivateKey (i.e., n and e are present), as per [RFC3447], and ECC public keys are included in ECPrivateKey (i.e., in the publicKey field), as per [RFC5915].

3. Encrypted Private Key Info

This section gives the syntax for encrypted private-key information, which is used by [P12]. Encrypted private-key information shall have ASN.1 type EncryptedPrivateKeyInfo:

```
EncryptedPrivateKeyInfo ::= SEQUENCE {
    encryptionAlgorithm  EncryptionAlgorithmIdentifier,
    encryptedData        EncryptedData }

EncryptionAlgorithmIdentifier ::= AlgorithmIdentifier
                                    { CONTENT-ENCRYPTION,
                                      { KeyEncryptionAlgorithms } }

EncryptedData ::= OCTET STRING
```

The fields in EncryptedPrivateKeyInfo are used as follows:

- encryptionAlgorithm identifies the algorithm under which the private-key information is encrypted.

- encryptedData is the result of encrypting the private-key information (i.e., the PrivateKeyInfo).

The encryption process involves the following two steps:

1. The private-key information is encoded, yielding an octet string. Generators SHOULD use DER [X.690] and receivers MUST support BER [X.690], which also includes DER [X.690].

2. The result of step 1 is encrypted with the secret key to give an octet string, the result of the encryption process.

4. Protecting the AsymmetricKeyPackage

CMS protecting content types, [RFC5652] and [RFC5083], can be used to provide security to the AsymmetricKeyPackage:

- SignedData can be used to apply a digital signature to the AsymmetricKeyPackage.

- EncryptedData can be used to encrypt the AsymmetricKeyPackage with symmetric encryption, where the sender and the receiver already share the necessary encryption key.

- EnvelopedData can be used to encrypt the AsymmetricKeyPackage with symmetric encryption, where the sender and the receiver do not share the necessary encryption key.
- AuthenticatedData can be used to protect the AsymmetricKeyPackage with message authentication codes, where key management information is handled in a manner similar to EnvelopedData.

- AuthEnvelopedData can be used to protect the AsymmetricKeyPackage with algorithms that support authenticated encryption, where key management information is handled in a manner similar to EnvelopedData.

5. Other Private-Key Format Considerations

This document defines the syntax and the semantics for a content type that exchanges asymmetric private keys. There are two other formats that have been used for the transport of asymmetric private keys:

- Personal Information Exchange (PFX) Syntax Standard [P12], which is more commonly referred to as PKCS #12 or simply P12, is a transfer syntax for personal identity information, including private keys, certificates, miscellaneous secrets, and extensions. OneAsymmetricKey, PrivateKeyInfo, and EncryptedPrivateKeyInfo can be carried in a P12 message. The private-key information, OneAsymmetricKey and PrivateKeyInfo, is carried in the P12 keyBag BAG-TYPE. EncryptedPrivateKeyInfo is carried in the P12 pkcs8ShroudedKeyBag BAG-TYPE. In current implementations, the file extensions .pfx and .p12 can be used interchangeably.

- Microsoft's private-key proprietary transfer syntax. The .pvk file extension is used for local storage. The .pvk and .p12/.pfx formats are not interchangeable; however, conversion tools exist to convert from one format to another.

To extract the private-key information from the AsymmetricKeyPackage, the encapsulating layers need to be removed. At a minimum, the outer ContentInfo [RFC5652] layer needs to be removed. If the AsymmetricKeyPackage is encapsulated in a SignedData [RFC5652], then the SignedData and EncapsulatedContentInfo layers [RFC5652] also need to be removed. The same is true for EnvelopedData, EncryptedData, and AuthenticatedData, all from [RFC5652], as well as AuthEnvelopedData from [RFC5083].
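After the encapsulating layers are removed, a recovered PrivateKeyInfo is often stored as a .p8 file, sometimes PEM-encoded as described below. A minimal sketch of that armoring, assuming the conventional 64-character Base64 line length (the helper names are illustrative, not from this specification):

```python
import base64
import textwrap

def pem_encode(der: bytes, label: str = "PRIVATE KEY") -> str:
    """Wrap DER bytes in Base64 ([RFC4648], Section 4) between PEM armor lines."""
    b64 = base64.b64encode(der).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))  # conventional 64-char lines
    return f"-----BEGIN {label}-----\n{body}\n-----END {label}-----\n"

def pem_decode(pem: str) -> bytes:
    """Recover the DER bytes from a PEM blob (armor lines discarded)."""
    lines = [l for l in pem.splitlines() if l and not l.startswith("-----")]
    return base64.b64decode("".join(lines))

# Toy DER input: SEQUENCE { INTEGER 0 }, standing in for a real PrivateKeyInfo.
toy_der = b"\x30\x03\x02\x01\x00"
pem = pem_encode(toy_der)
```

An EncryptedPrivateKeyInfo would use the "ENCRYPTED PRIVATE KEY" label instead, matching the armor lines shown below.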
Once all the outer layers are removed, there are as many sets of private-key information as there are OneAsymmetricKey structures. OneAsymmetricKey and PrivateKeyInfo are the same structure; therefore, either can be saved as a .p8 file or copied into the P12 keyBag BAG-TYPE. Removing encapsulating security layers will invalidate any signature and may expose the key to unauthorized disclosure.

.p8 files are sometimes PEM-encoded. When .p8 files are PEM-encoded, they use the .pem file extension. PEM encoding is either the Base64 encoding, from Section 4 of [RFC4648], of the DER-encoded EncryptedPrivateKeyInfo sandwiched between:

```
-----BEGIN ENCRYPTED PRIVATE KEY-----
-----END ENCRYPTED PRIVATE KEY-----
```

or the Base64 encoding, from Section 4 of [RFC4648], of the DER-encoded PrivateKeyInfo sandwiched between:

```
-----BEGIN PRIVATE KEY-----
-----END PRIVATE KEY-----
```

6. Security Considerations

Protection of the private-key information is vital to public-key cryptography. Disclosure of the private-key material to another entity can lead to masquerades. The encryption algorithm used in the encryption process must be as 'strong' as the key it is protecting.

The asymmetric key package contents are not protected. This content type can be combined with a security protocol to protect the contents of the package.

7. IANA Considerations

This document makes use of object identifiers to identify a CMS content type and the ASN.1 module found in Appendix A. The CMS content type OID is registered in a DoD arc. The ASN.1 module OID is registered in an arc delegated by RSADSI to the SMIME Working Group. No further action by IANA is necessary for this document or any anticipated updates.

This specification also defines a new media subtype that IANA has registered at http://www.iana.org/.

7.1.
Registration of media subtype application/pkcs8

Type name: application

Subtype name: pkcs8

Required parameters: None

Optional parameters: None

Encoding considerations: binary

Security considerations: Carries a cryptographic private key. See Section 6.

Interoperability considerations: The PKCS #8 object inside this media type MUST be DER-encoded PrivateKeyInfo.

Published specification: RFC 5958

Applications which use this media type: Any MIME-compliant transport that processes asymmetric keys.

Additional information:
  Magic number(s): None
  File extension(s): .p8
  Macintosh File Type Code(s):

Person & email address to contact for further information: Sean Turner <turners@ieca.com>

Restrictions on usage: none

Author: Sean Turner <turners@ieca.com>

Intended usage: COMMON

Change controller: The IESG

8. References

8.1. Normative References

8.2. Informative References

Appendix A. ASN.1 Module

This annex provides the normative ASN.1 definitions for the structures described in this specification using ASN.1 as defined in [X.680] through [X.683].

AsymmetricKeyPackageModuleV1
    { iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-9(9)
      smime(16) modules(0) id-mod-asymmetricKeyPkgV1(50) }

DEFINITIONS IMPLICIT TAGS ::=
BEGIN

-- EXPORTS ALL

IMPORTS

-- From New SMIME ASN.1 [RFC5911]
  Attribute{}, CONTENT-TYPE
  FROM CryptographicMessageSyntax-2009
    { iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1)
      pkcs-9(9) smime(16) modules(0) id-mod-cms-2004-02(41) }

-- From New PKIX ASN.1 [RFC5912]
  ATTRIBUTE
  FROM PKIX-CommonTypes-2009
    { iso(1) identified-organization(3) dod(6) internet(1)
      security(5) mechanisms(5) pkix(7) id-mod(0)
      id-mod-pkixCommon-02(57) }

-- From New PKIX ASN.1 [RFC5912]
  AlgorithmIdentifier{}, ALGORITHM, PUBLIC-KEY, CONTENT-ENCRYPTION
  FROM AlgorithmInformation-2009
    { iso(1) identified-organization(3) dod(6) internet(1)
      security(5) mechanisms(5) pkix(7) id-mod(0)
      id-mod-algorithmInformation-02(58) } ;

ContentSet CONTENT-TYPE ::= {
  ct-asymmetric-key-package,
  ...
     -- Expect additional content types --
   }

   ct-asymmetric-key-package CONTENT-TYPE ::=
     { AsymmetricKeyPackage IDENTIFIED BY id-ct-KP-aKeyPackage }

   id-ct-KP-aKeyPackage OBJECT IDENTIFIER ::=
     { joint-iso-itu-t(2) country(16) us(840) organization(1)
       gov(101) dod(2) infosec(1) formats(2)
       key-package-content-types(78) 5 }

   AsymmetricKeyPackage ::= SEQUENCE SIZE (1..MAX) OF OneAsymmetricKey

   OneAsymmetricKey ::= SEQUENCE {
     version                   Version,
     privateKeyAlgorithm       PrivateKeyAlgorithmIdentifier,
     privateKey                PrivateKey,
     attributes            [0] Attributes OPTIONAL,
     ...,
     [[2: publicKey        [1] PublicKey OPTIONAL ]],
     ...
   }

   PrivateKeyInfo ::= OneAsymmetricKey

   -- PrivateKeyInfo is used by [P12]. If any items tagged as version
   -- 2 are used, the version must be v2, else the version should be
   -- v1. When v1, PrivateKeyInfo is the same as it was in [RFC5208].

   Version ::= INTEGER { v1(0), v2(1) } (v1, ..., v2)

   PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier
                                       { PUBLIC-KEY,
                                         { PrivateKeyAlgorithms } }

   PrivateKey ::= OCTET STRING
     -- Content varies based on type of key. The
     -- algorithm identifier dictates the format of
     -- the key.

   PublicKey ::= BIT STRING
     -- Content varies based on type of key. The
     -- algorithm identifier dictates the format of
     -- the key.

   Attributes ::= SET OF Attribute { { OneAsymmetricKeyAttributes } }

   OneAsymmetricKeyAttributes ATTRIBUTE ::= {
     ... -- For local profiles
   }

   -- An alternate representation that makes full use of ASN.1
   -- constraints follows. Also note that PUBLIC-KEY needs to be
   -- imported from the new PKIX ASN.1 Algorithm Information module
   -- and PrivateKeyAlgorithms needs to be commented out.
   -- OneAsymmetricKey ::= SEQUENCE {
   --   version                  Version,
   --   privateKeyAlgorithm      SEQUENCE {
   --       algorithm   PUBLIC-KEY.&id({PublicKeySet}),
   --       parameters  PUBLIC-KEY.&Params({PublicKeySet}
   --                       ({@privateKeyAlgorithm.algorithm})
   --                       OPTIONAL}
   --   privateKey               OCTET STRING (CONTAINING
   --                                PUBLIC-KEY.&PrivateKey({PublicKeySet}
   --                                ({@privateKeyAlgorithm.algorithm})),
   --   attributes           [0] Attributes OPTIONAL,
   --   ...,
   --   [[2: publicKey       [1] BIT STRING (CONTAINING
   --                                PUBLIC-KEY.&Params({PublicKeySet}
   --                                ({@privateKeyAlgorithm.algorithm})
   --                                OPTIONAL ]],
   --   ...
   -- }

   EncryptedPrivateKeyInfo ::= SEQUENCE {
     encryptionAlgorithm  EncryptionAlgorithmIdentifier,
     encryptedData        EncryptedData }

   EncryptionAlgorithmIdentifier ::= AlgorithmIdentifier
                                       { CONTENT-ENCRYPTION,
                                         { KeyEncryptionAlgorithms } }

   EncryptedData ::= OCTET STRING -- Encrypted PrivateKeyInfo

   PrivateKeyAlgorithms ALGORITHM ::= {
     ... -- Extensible
   }

   KeyEncryptionAlgorithms ALGORITHM ::= {
     ... -- Extensible
   }

   END

Acknowledgements

Many thanks go out to Burt Kaliski and Jim Randall at RSA. Without the prior version of the document, this one wouldn't exist. I'd also like to thank Pasi Eronen, Roni Even, Alfred Hoenes, Russ Housley, Jim Schaad, and Carl Wallace.

Author's Address

   Sean Turner
   IECA, Inc.
   3057 Nutley Street, Suite 106
   Fairfax, VA 22031
   USA

   EMail: turners@ieca.com
A conceptual Bayesian net model for integrated software quality prediction

Łukasz Radliński*

Institute of Information Technology in Management, University of Szczecin
Mickiewicza 64, 71-101 Szczecin, Poland
* lukrad@uoo.univ.szczecin.pl

Abstract – Software quality can be described by a set of features, such as functionality, reliability, usability, efficiency, maintainability, portability and others. Various models for software quality prediction have been developed in the past; unfortunately, they typically focus on a single quality feature. The main goal of this study is to develop a predictive model that integrates several features of software quality, including the relationships between them. This model is an expert-driven Bayesian net, which can be used in diverse analyses and simulations. The paper discusses model structure, behaviour, calibration and enhancement options as well as possible use in fields other than software engineering.

1 Introduction

Software quality has been one of the most widely studied areas of software engineering. One aspect of quality assurance is quality prediction, and several predictive models have been proposed since the 1970s. A clear trade-off can be observed between a model's analytical potential and the number of quality features it uses. Models that contain a wide range of quality features [1, 2, 3] typically have low analytical potential and serve more as frames for building calibrated predictive models. On the other hand, models with higher analytical potential typically focus on a single or very few aspects of quality, for example on reliability [4, 5]. This trade-off has been the main motivation for research focused on building predictive models that both incorporate various aspects of software quality and have high analytical potential. The aim of this paper is to build such a predictive model as a Bayesian net (BN).
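As a minimal illustration of what a BN provides, the sketch below builds a two-node net, ProcessQuality → Reliability, in pure Python. All numbers are made-up assumptions for illustration, not values from the paper's model, and the node names are hypothetical.

```python
# Two-node Bayesian net sketch: ProcessQuality -> Reliability.
# All probabilities are illustrative assumptions, not taken from the paper.

p_process = {"good": 0.6, "poor": 0.4}   # prior over the parent node
p_rel_given_proc = {                     # conditional table of the child
    ("good", "high"): 0.8, ("good", "low"): 0.2,
    ("poor", "high"): 0.3, ("poor", "low"): 0.7,
}

def forward(rel_state):
    """Forward inference: marginal P(Reliability = rel_state)."""
    return sum(p_process[q] * p_rel_given_proc[(q, rel_state)]
               for q in p_process)

def backward(proc_state, rel_state):
    """Backward inference via Bayes' rule:
    P(ProcessQuality = proc_state | Reliability = rel_state)."""
    joint = p_process[proc_state] * p_rel_given_proc[(proc_state, rel_state)]
    return joint / forward(rel_state)
```

Observing high reliability raises the belief that process quality was good from the prior 0.6 to backward("good", "high") = 0.8; this bidirectional reasoning over the same structure is one of the BN properties the study relies on.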
This model may be used to deliver information for decision-makers about managing software projects to achieve specific targets for software quality.

Bayesian nets have been selected for this study for several reasons. The most important relates to the ability to incorporate both expert knowledge and empirical data. Typically, predictive models in software engineering are built using data-driven techniques such as multiple regression, neural networks, nearest neighbours or decision trees. For the current type of study, a dataset of past projects with high volume and an appropriate level of detail is typically not available. Thus, the model has to be based mostly on expert knowledge and only partially on empirical data. Other advantages of BNs include the ability to incorporate causal relationships between variables, explicit incorporation of uncertainty through the probabilistic definition of variables, no fixed lists of independent and dependent variables, running the model with incomplete data, forward and backward inference, and graphical representation. More information on BN theory can be found in [6, 7], while recent applications in software engineering are discussed in [8, 9, 10, 11, 12, 13, 14, 15, 16].

The rest of this paper is organized as follows: Section 2 presents the view of software quality adopted in this research. Section 3 discusses the background knowledge used when building the predictive model. Section 4 provides details on the structure of the proposed predictive model. Section 5 focuses on the behaviour of this model. Section 6 discusses possibilities for calibrating and extending the proposed model. Section 7 considers the use of this type of model in other areas. Section 8 summarizes the study.

2 Software Quality

Software quality is typically expressed in science and industry as a range of features rather than a single aggregated value.
This study follows the ISO approach, where software quality is defined as a "degree to which the software product satisfies stated and implied needs when used under specified conditions" [1]. This standard defines eleven characteristics, shown in Fig. 1 with dark backgrounds. The last three characteristics (on the left) refer to "quality in use" while the others refer to internal and external metrics. Each characteristic is decomposed into sub-characteristics, shown in Fig. 1 with white backgrounds. At the next level, each sub-characteristic aggregates the values of metrics that describe the software product. The metrics are not shown here because they should be selected depending on the particular environment where such a quality assessment would be used.

Fig. 1. Quality features and sub-features.

Other quality models have been proposed in the literature [17, 3], from which some concepts may be adapted when building a customized predictive model. In our approach we follow the general taxonomy of software quality proposed by ISO. However, our approach is not limited to the ISO point of view and may be adjusted according to specific needs. For this reason our approach uses slightly different terminology, with "features" at the highest level, "sub-features" at the second level and "measures" at the lowest level.

3 Background knowledge

Our approach assumes that an industrial-scale model for integrated software quality prediction has to be calibrated for the specific needs and environment before it can be used in decision support. Normally such calibration should be performed among domain experts from the target environment, for example using a questionnaire survey. However, at this point such a survey has not yet been completed, so the current model has been built entirely from the available literature and the expert knowledge of the modellers. This is the reason why the model is currently at the "conceptual" stage.
The literature used includes quality standards [1, 18, 2, 19, 20, 21], widely accepted results on software quality [22, 23, 24, 17, 3, 25, 26, 27, 28, 29], and experience from building models for similar areas of software engineering [8, 9, 10, 11, 12, 30, 13, 14, 15, 16].

The available literature provides useful information on the relationships among quality features. Fig. 2 illustrates the relationships encoded in the proposed predictive model. There are two types of relationships: positive ("+") and negative ("−"). A positive relationship indicates a situation where an increased level of one feature causes a probable increase in the level of another feature. A negative relationship indicates a situation where an increased level of one feature causes a probable decrease in the level of another feature unless some compensation is provided. This compensation typically takes the form of additional effort, an increase in development process quality, or the use of better tools or technologies. Table 1 summarizes the relationships between effort and the quality features.

Fig. 2. Impact of controllable factors on quality features.

Table 1. Relationships between effort and quality features.
<table>
<thead>
<tr> <th>Quality feature</th> <th>Requirements effort</th> <th>Implementation effort</th> <th>Testing effort</th> </tr>
</thead>
<tbody>
<tr> <td>functional suitability</td> <td>+</td> <td>+</td> <td></td> </tr>
<tr> <td>reliability</td> <td>+</td> <td>+</td> <td></td> </tr>
<tr> <td>performance efficiency</td> <td>+</td> <td>+</td> <td>+</td> </tr>
<tr> <td>operability</td> <td>+</td> <td>+</td> <td></td> </tr>
<tr> <td>security</td> <td></td> <td>+</td> <td></td> </tr>
<tr> <td>compatibility</td> <td>+</td> <td>+</td> <td>+</td> </tr>
<tr> <td>maintainability</td> <td>+</td> <td>+</td> <td>+</td> </tr>
<tr> <td>portability</td> <td></td> <td>+</td> <td></td> </tr>
<tr> <td>usability</td> <td>+</td> <td>+</td> <td>+</td> </tr>
<tr> <td>safety</td> <td></td> <td>+</td> <td>+</td> </tr>
<tr> <td>flexibility</td> <td>+</td> <td>-</td> <td>+</td> </tr>
</tbody>
</table>

Currently there are two groups of controllable factors in the model: effort and process quality, defined separately for three development phases. It is assumed that an increase of effort or process quality has a positive impact on the selected quality features. This impact is not deterministic, though: increased effort does not guarantee better quality, it only makes better quality more probable. It should be noted that the relationships in Fig. 2 and Table 1 may be defined differently in specific target environments.

4 Model Structure

The proposed predictive model is a Bayesian net where the variables are defined as conditional probability distributions given their parents (i.e. immediate predecessors). It is beyond the scope of this paper to discuss the structure of the whole model, which contains over 100 variables; for full transparency and reproducibility, the full model definition is available on-line [31]. Fig. 3 illustrates a part of the model structure by showing two quality features and relevant relationships.
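Looking back at Table 1, the assumed "+"/"−" impacts of phase efforts on quality features are simple structured data. The sketch below transcribes a subset of the table into a Python mapping; the feature and phase names follow the paper, the helper function is hypothetical.

```python
# Subset of Table 1: assumed sign of the impact of each development-phase
# effort on a quality feature ("+" = probable increase, "-" = probable
# decrease unless compensated).

impact = {
    "functional suitability": {"requirements": "+", "implementation": "+"},
    "reliability":            {"requirements": "+", "implementation": "+"},
    "maintainability":        {"requirements": "+", "implementation": "+",
                               "testing": "+"},
    "flexibility":            {"requirements": "+", "implementation": "-",
                               "testing": "+"},
}

def phases_needing_compensation(feature):
    """Phases whose increased effort would probably decrease the feature
    unless compensated (extra effort, better process, or better tools)."""
    return [phase for phase, sign in impact.get(feature, {}).items()
            if sign == "-"]
```

For example, the table's single negative entry means flexibility is the one feature here where implementation effort calls for compensation.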
The whole model is a set of linked hierarchical naïve Bayesian classifiers, where each quality feature is modelled by one classifier: the quality feature is the root of the classifier, sub-features are at the second level (children) and measures are the leaves. To enable relatively easy model calibration and enhancement, the model was built with the following assumptions:

- the links between various aspects of software quality may be defined only at the level of features;
- controllable factors are aggregated as the "effectiveness" variables, which, in turn, influence selected quality features.

Currently, all variables in the model, except measures, are expressed on a five-point ranked scale from 'very low' to 'very high'. Two important concepts, implemented in the AgenaRisk tool [32], were used to simplify the definition of probability distributions. First, the whole scale of a ranked variable is internally treated as a numeric range (0, 1) with five intervals, i.e. for 'very low' the interval (0, 0.2), for 'low' the interval (0.2, 0.4), etc. This gives the possibility to express the variable not only as a probability distribution but also using summary statistics, such as the mean (used in the next section). It also opens the door for the second concept: using expressions to define probability distributions for variables. Instead of manually filling probability tables for each variable, which is time-consuming and prone to inconsistencies, it is sufficient to provide only a few parameters for expressions such as the Normal distribution (mean, variance), the TNormal distribution (mean, variance, lower bound, upper bound), or the weighted mean function – wmean(weight for parameter 1, parameter 1, weight for parameter 2, parameter 2, etc.). Table 2 provides the definitions for selected variables in different layers of the model.

Table 2. Definition of selected variables.
<table>
<thead>
<tr> <th>Type</th> <th>Variable</th> <th>Definition</th> </tr>
</thead>
<tbody>
<tr> <td>feature</td> <td>usability</td> <td>TNormal(wmean(1, 0.5, 3, wmean(3, reg_effect, 2, impl_effect, 1, test_effect), 1, funct_suit), 0.05, 0.1)</td> </tr>
<tr> <td>sub-feature</td> <td>effectiveness</td> <td>TNormal(utility, 0.01, 0.1)</td> </tr>
<tr> <td>measure</td> <td>percentage of tasks accomplished</td> <td>effectiveness = 'very high' → Normal(95, 10); 'high' → Normal(90, 40); 'medium' → Normal(75, 60); 'low' → Normal(65, 80); 'very low' → Normal(50, 100)</td> </tr>
<tr> <td>controllable</td> <td>testing effort</td> <td>TNormal(0.5, 0.05, 0.1)</td> </tr>
<tr> <td>controllable</td> <td>testing effectiveness</td> <td>TNormal(wmean(3, test_effort, 4, test_procc), 0.001, 0.1)</td> </tr>
</tbody>
</table>

5 Model Behaviour

To demonstrate model behaviour, four simulations were performed, each analysing the impact of one group of variables on another.

**Simulation 1** focused on the sensitivity analysis of quality features in response to the levels of controllable factors. An observation of the state of a single controllable factor was entered into the model and the predictions for all quality features were analyzed; this procedure was repeated for each state of each controllable factor. Fig. 4 illustrates the results of one such run by showing the changes in the predicted levels of maintainability and performance efficiency caused by different levels of implementation effort. These results were compared with the background knowledge in Table 1 to validate whether the relationships had been correctly defined, i.e. whether a change in the level of a controllable factor causes the assumed direction of change in the level of a quality feature. In this case the obtained results confirm that the background knowledge was correctly incorporated into the model. With these graphs, it is possible to analyze the strength of the impact of controllable factors on quality features.
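The expression mechanics behind Table 2 can be imitated in a few lines. The sketch below (pure Python; an illustration of the idea, not of AgenaRisk's actual semantics) maps ranked states to midpoints of their (0, 1) intervals, implements wmean, and, as a simplification, tracks only the central tendency of a TNormal by clamping its mean parameter into the truncation bounds.

```python
# Ranked scale: each state occupies a 0.2-wide interval on (0, 1).
RANKED = ["very low", "low", "medium", "high", "very high"]

def state_midpoint(state):
    """'very low' -> midpoint of (0, 0.2) = 0.1, ..., 'very high' -> 0.9."""
    i = RANKED.index(state)
    return i * 0.2 + 0.1

def wmean(*pairs):
    """wmean(w1, x1, w2, x2, ...) as used in the variable definitions."""
    weights, values = pairs[0::2], pairs[1::2]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

def tnormal_mean(mean, variance, lower=0.0, upper=1.0):
    """Simplified TNormal(mean, variance, lower, upper): for this sketch,
    just the mean parameter clamped into the truncation bounds."""
    return min(max(mean, lower), upper)

# e.g. testing effectiveness = TNormal(wmean(3, test_effort, 4, test_proc), ...)
test_effort = state_midpoint("high")     # 0.7
test_proc = state_midpoint("medium")     # 0.5
effectiveness = tnormal_mean(wmean(3, test_effort, 4, test_proc), 0.001)
```

Note that clamping the mean ignores the probability mass that a true truncated normal shifts inward from the bounds; it is enough to reproduce the mean-value reasoning used in the simulations below.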
The impact of implementation effort is larger on maintainability than on performance efficiency: the predicted probability distributions are more 'responsive' to different states of implementation effort for maintainability than for performance efficiency. Such information may be used in decision support.

**Simulation 2** is similar to simulation 1 in that it also analyses the impact of controllable factors on quality features. However, this simulation involves the analysis of summary statistics (mean values) rather than full probability distributions. Here, an observation 'very high' was entered for each controllable factor (one at a time) and the mean value of the predicted probability distribution for each quality feature was analyzed. Table 3 summarizes the results for effort at various phases. All of these mean values are above the default value of 0.5, which suggests an increase in the predicted level of the specific quality features. These values correspond to the "+" signs in Table 1, which further confirms the correct incorporation of the relationships between the controllable factors and the quality features.

**Table 3. Predictions in simulation 2.**

<table>
<thead>
<tr> <th>Quality feature</th> <th>Requirements effort</th> <th>Implementation effort</th> <th>Testing effort</th> </tr>
</thead>
<tbody>
<tr> <td>functional suitability</td> <td>0.55</td> <td>0.56</td> <td></td> </tr>
<tr> <td>reliability</td> <td></td> <td>0.56</td> <td>0.55</td> </tr>
<tr> <td>performance efficiency</td> <td></td> <td>0.54</td> <td>0.53</td> </tr>
<tr> <td>operability</td> <td>0.60</td> <td></td> <td></td> </tr>
<tr> <td>security</td> <td></td> <td></td> <td>0.55</td> </tr>
<tr> <td>compatibility</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>maintainability</td> <td>0.56</td> <td>0.57</td> <td></td> </tr>
<tr> <td>portability</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>usability</td> <td>0.56</td> <td>0.55</td> <td>0.52</td> </tr>
<tr> <td>flexibility</td> <td>0.57</td> <td>0.56</td> <td>0.55</td> </tr>
<tr> <td>safety</td> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>

**Simulation 3** focused on the analysis of the relationships among the quality features. Similarly to simulation 2, it also covered the analysis of the mean values of predicted probability distributions. The results are presented in Table 4.

Table 4. Predictions in simulation 3.
<table>
<thead>
<tr> <th>Observed \ Predicted</th> <th>functional suitability</th> <th>reliability</th> <th>security</th> <th>operability</th> <th>performance efficiency</th> <th>maintainability</th> <th>portability</th> <th>usability</th> <th>safety</th> <th>flexibility</th> </tr>
</thead>
<tbody>
<tr> <td>functionality</td> <td>0.55</td> <td>0.55</td> <td>0.55</td> <td>0.58</td> <td>0.58</td> <td>0.53</td> <td>0.57</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>reliability</td> <td>0.55</td> <td>0.55</td> <td>0.52</td> <td>0.55</td> <td>0.55</td> <td>0.54</td> <td>0.54</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>security</td> <td></td> <td>0.46</td> <td>0.46</td> <td>0.53</td> <td>0.53</td> <td>0.52</td> <td>0.47</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>compatibility</td> <td>0.55</td> <td>0.46</td> <td>0.53</td> <td>0.48</td> <td>0.55</td> <td>0.55</td> <td>0.47</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>operability</td> <td>0.55</td> <td>0.52</td> <td>0.53</td> <td>0.46</td> <td>0.55</td> <td>0.56</td> <td>0.56</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>performance efficiency</td> <td></td> <td>0.46</td> <td>0.48</td> <td>0.46</td> <td>0.47</td> <td>0.44</td> <td>0.48</td> <td>&lt;0.50</td> <td>0.47</td> <td></td> </tr>
<tr> <td>maintainability</td> <td>0.57</td> <td>0.55</td> <td>0.55</td> <td>0.47</td> <td>0.56</td> <td>0.57</td> <td>0.58</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>portability</td> <td></td> <td>0.55</td> <td>0.55</td> <td>0.57</td> <td>0.55</td> <td>0.56</td> <td>0.56</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>usability</td> <td>0.58</td> <td>0.55</td> <td>0.53</td> <td>0.56</td> <td>0.58</td> <td>0.54</td> <td>0.57</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>safety</td> <td>0.53</td> <td>0.54</td> <td>0.52</td> <td>0.55</td> <td>&lt;0.50</td> <td>0.54</td> <td>0.54</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>flexibility</td> <td>0.57</td> <td>0.54</td> <td>0.47</td> <td>0.57</td> <td>0.58</td> <td>0.56</td> <td>0.57</td> <td>0.48</td> <td></td> <td></td> </tr>
</tbody>
</table>

The predicted mean values are either lower or higher than the default value of 0.5. Values lower than 0.5 correspond to the "−" signs in Fig. 2, while values higher than 0.5 correspond to the "+" signs in Fig. 2. Such results confirm that the model correctly incorporates the assumed relationships among the quality features.

**Simulation 4** focused on demonstrating more advanced model capabilities for delivering information for decision support using what-if and trade-off analysis. Although such an analysis may involve more variables, for simplicity four variables were investigated: implementation effort, testing effort, maintainability, and performance efficiency. Some input data on a hypothetical project under consideration were entered into the model, which then provided predictions for these four variables as shown in Fig. 5 (scenario: baseline). Let us assume that a manager is not satisfied with the low level of maintainability. Apart from the previously entered input data, an additional constraint is entered into the model to analyse how to achieve a high level of maintainability (maintainability = 'high' → mean(maintainability) = 0.7). As shown in Fig. 5, scenario: revision 1, the model predicts that such a target is achievable with increased levels of implementation effort and testing effort (although the required increase in testing effort is very small). The model also predicts that the level of performance efficiency is expected to be lower, due to the negative relationship between maintainability and performance efficiency (Fig. 2). Let us further assume that, due to limited resources, not only is an increase of effort impossible, but effort even has to be reduced to the level 'low' for implementation and testing.
In such a case the level of performance efficiency is expected to decrease further (scenario: revision 2).

It is possible to perform various other simulations similar to simulation 4, using the model for what-if, trade-off and goal-seeking analyses in decision support. Such simulations may involve more steps and more variables, and will be performed in future to enhance the validation of model correctness and usefulness.

6 Calibration and Enhancement Options

The proposed model has a structure that enables relatively easy calibration. As the variables are defined using expressions, calibration requires setting appropriate parameters in these expressions:

- the values of weights in wmean functions – a higher weight indicates a stronger impact of the particular variable on the aggregated value;
- the value of variance in TNormal expressions (the second parameter) – a value closer to zero indicates a stronger relationship, higher values indicate weaker relationships. Note that, since ranked variables are internally defined over the range (0, 1), a variance of 0.001 typically indicates a very strong relationship and 0.01 a medium relationship.

Apart from calibration focused on defining parameters for the existing structure, the model may be enhanced to meet specific needs:

- by adding new sub-features to features or new measures to sub-features – such a change requires only the definition of the newly added variable; no change in the definitions of existing variables is necessary;
- by adding new controllable factors – such a change requires changing the definition of the "effectiveness" variable for the specific phase, typically by setting new weights in the wmean function;
- by adding a new quality feature – such a change requires the most work because it involves setting sub-features and measures, relationships among features, and relationships between the controllable factors and the new feature.

Currently the model does not contain many causal relationships.
This may reduce its analytical potential. Defining the model with more causal relationships may increase analytical potential but may also make the model more difficult to calibrate. Thus, this issue needs to be investigated carefully when building a tailored model.

The model enables static analysis, i.e. for an assumed point in time. Because both the project and the development environment evolve over time, it may be useful to reflect such dynamics in the model. However, such an enhancement requires significantly more time spent on modelling and makes calibration more difficult because more parameters need to be set.

7 Possible Use in Other Fields

The proposed predictive model is focused on the software quality area. Such an approach may also be used in other fields/domains because the general constraints on model structure may apply there as well. Possible use outside the software quality area depends on the following conditions:

- the problem under investigation is complex but can be divided into a set of sub-problems,
- there is no, or not enough, empirical data to generate a reliable model,
- a domain expert (or group of experts) is able to define, calibrate and enhance the model,
- the relationships are of a stochastic and non-linear nature,
- there is a need for high analytical potential.

However, even when these conditions are met, use in other fields may be difficult. This happens in the case of a high number of additional deterministic relationships, which have to be reflected in the model with high precision. Possible use in other fields will be investigated in detail in future.

8 Conclusions

This paper introduced a new model for integrated software quality prediction. Formally, the model is a Bayesian net. It contains a wide range of quality aspects (features, sub-features, measures) together with relationships among them.
To make the model useful in decision support, it also contains a set of controllable factors (currently effort and process quality in the different development phases). The model encodes knowledge on the software quality area published in the literature as well as personal expert judgement. To prepare the model for use in a target environment it is necessary to calibrate it, for example using questionnaires. The model may also be enhanced to meet specific needs. The model was partially validated for correctness and usefulness in providing information for decision support.

In future, such a model may become the heart of an intelligent system for analysing and managing software quality. To achieve this, a higher level of automation would be required, for example in calibration and enhancement through automated extraction of relevant data and knowledge. In addition, the model would have to reflect more details of the development process, the project, or the software architecture. The stages of building customized models will be formalized in a framework supporting the proposed approach. This framework may also be used to build models with a similar general structure in fields other than software quality.

Acknowledgement: This work has been supported by research funds from the Ministry of Science and Higher Education as research grant no. N N111 291738 for the years 2010-2012.
```python
class A:
    def __init__(self, x):
        self.x = x

    def __repr__(self):
        return self.x

    def __str__(self):
        return self.x * 2


class B:
    def __init__(self):
        print('boo!')
        self.a = []

    def add_a(self, a):
        self.a.append(a)

    def __repr__(self):
        print(len(self.a))
        ret = ''
        for a in self.a:
            ret += str(a)
        return ret
```

Given the above class definitions, what will the following lines output?

```python
>>> A('one')
>>> print(A('one'))
```

Note: This worksheet is a problem bank—most TAs will not cover all the problems in discussion section.

```python
>>> repr(A('two'))
```

```python
>>> b = B()
```

```python
>>> b.add_a(A('a'))
>>> b.add_a(A('b'))
>>> b
```

Linked Lists

There are many different implementations of sequences in Python. Today, we'll explore the linked list implementation.

A linked list is either an empty linked list, or a Link object containing a first value and the rest of the linked list.

To check if a linked list is an empty linked list, compare it against the class attribute `Link.empty`:

```python
if link is Link.empty:
    print('This linked list is empty!')
else:
    print('This linked list is not empty!')
```

You can find an implementation of the `Link` class below:

```python
class Link:
    """A linked list."""
    empty = ()

    def __init__(self, first, rest=empty):
        assert rest is Link.empty or isinstance(rest, Link)
        self.first = first
        self.rest = rest

    def __repr__(self):
        if self.rest:
            rest_repr = ', ' + repr(self.rest)
        else:
            rest_repr = ''
        return 'Link(' + repr(self.first) + rest_repr + ')'

    def __str__(self):
        string = '<'
        while self.rest is not Link.empty:
            string += str(self.first) + ' '  # added a space
            self = self.rest
        return string + str(self.first) + '>'
```

Q2: The Hy-rules of Linked Lists

In this question, we are given the following linked list:

```python
ganondorf = Link('zelda', Link('link', Link('sheik', Link.empty)))
```

What expression would give us the value 'sheik' from this linked list?
What is the value of `ganondorf.rest.first`?

What would be the value of `str(ganondorf)`?

What expression would mutate this linked list to `<zelda ganondorf sheik>`?

Q3: Sum Nums

Write a function that takes in a linked list and returns the sum of all its elements. You may assume all elements in `s` are integers. Try to implement this recursively!

```python
def sum_nums(s):
    """
    >>> a = Link(1, Link(6, Link(7)))
    >>> sum_nums(a)
    14
    """
    "*** YOUR CODE HERE ***"
    # You can use more space on the back if you want
```

Q4: Multiply Links

Write a function that takes in a Python list of linked lists and multiplies them element-wise. It should return a new linked list. If not all of the Link objects are of equal length, return a linked list whose length is that of the shortest linked list given. You may assume the Link objects are shallow linked lists, and that `lst_of_lnks` contains at least one linked list.

```python
def multiply_lnks(lst_of_lnks):
    """
    >>> a = Link(2, Link(3, Link(5)))
    >>> b = Link(6, Link(4, Link(2)))
    >>> c = Link(4, Link(1, Link(0, Link(2))))
    >>> p = multiply_lnks([a, b, c])
    >>> p.first
    48
    >>> p.rest.first
    12
    >>> p.rest.rest.rest is Link.empty
    True
    """
    # Implementation note: you might not need all lines in this skeleton code
    ____________________ = ____________________
    for ____________________________:
        if ____________________________:
            ____________________________
        ____________________________
    ____________________________
    ____________________________
```

Q5: Flip Two

Write a recursive function `flip_two` that takes as input a linked list `s` and mutates `s` so that every pair is flipped.
```python
def flip_two(s):
    """
    >>> one_lnk = Link(1)
    >>> flip_two(one_lnk)
    >>> one_lnk
    Link(1)
    >>> lnk = Link(1, Link(2, Link(3, Link(4, Link(5)))))
    >>> flip_two(lnk)
    >>> lnk
    Link(2, Link(1, Link(4, Link(3, Link(5)))))
    """
    "*** YOUR CODE HERE ***"

    # For an extra challenge, try writing out an iterative approach as well below!
    "*** YOUR CODE HERE ***"
    # You can use more space on the back if you want
```

Trees

We define a tree to be a recursive data abstraction that has a label (the value stored in the root of the tree) and branches (a list of trees directly underneath the root).

Previously, we implemented the tree abstraction using Python lists. Let's look at another implementation using objects instead:

```python
class Tree:
    def __init__(self, label, branches=[]):
        for b in branches:
            assert isinstance(b, Tree)
        self.label = label
        self.branches = branches

    def is_leaf(self):
        return not self.branches
```

With this implementation, we can mutate a tree using attribute assignment, which wasn't possible in the previous implementation using lists. That's why we sometimes call these objects "mutable trees."

```python
>>> t = Tree(3, [Tree(4), Tree(5)])
>>> t.label = 5
>>> t.label
5
```

Q6: Make Even

Define a function `make_even` which takes in a tree `t` whose values are integers, and mutates the tree such that all the odd integers are increased by 1 and all the even integers remain the same.

```python
def make_even(t):
    """
    >>> t = Tree(1, [Tree(2, [Tree(3)]), Tree(4), Tree(5)])
    >>> make_even(t)
    >>> t.label
    2
    >>> t.branches[0].branches[0].label
    4
    """
    "*** YOUR CODE HERE ***"
    # You can use more space on the back if you want
```

Q7: Add Leaves

Implement `add_d_leaves`, a function that takes in a `Tree` instance `t` and a number `v`.

We define the depth of a node in `t` to be the number of edges from the root to that node. The depth of the root is therefore 0.

For each node in the tree, you should add `d` leaves to it, where `d` is the depth of the node.
Every added leaf should have a label of `v`. If the node at this depth has existing branches, you should add these leaves to the end of that list of branches. For example, you should be adding 1 leaf with label `v` to each node at depth 1, 2 leaves to each node at depth 2, and so on.

Here is an example of a tree `t` (shown on the left) and the result after `add_d_leaves` is applied with `v` as 5. Try drawing out the second doctest to visualize how the function is mutating `t3`.

**Hint:** Use a helper function to keep track of the depth!

```python
def add_d_leaves(t, v):
    """Add d leaves containing v to each node at every depth d.

    >>> t_one_to_four = Tree(1, [Tree(2), Tree(3, [Tree(4)])])
    >>> print(t_one_to_four)
    1
      2
      3
        4
    >>> add_d_leaves(t_one_to_four, 5)
    >>> print(t_one_to_four)
    1
      2
        5
      3
        4
          5
          5
        5
    >>> t1 = Tree(1, [Tree(3)])
    >>> add_d_leaves(t1, 4)
    >>> t1
    Tree(1, [Tree(3, [Tree(4)])])
    >>> t2 = Tree(2, [Tree(5), Tree(6)])
    >>> t3 = Tree(3, [t1, Tree(0), t2])
    >>> print(t3)
    3
      1
        3
          4
      0
      2
        5
        6
    >>> add_d_leaves(t3, 10)
    >>> print(t3)
    3
      1
        3
          4
            10
            10
            10
          10
          10
        10
      0
        10
      2
        5
          10
          10
        6
          10
          10
        10
    """
    "*** YOUR CODE HERE ***"
```

Efficiency (Orders of Growth)

When we talk about the efficiency of a function, we are often interested in the following: as the size of the input grows, how does the runtime of the function change? And what do we mean by runtime?

**Example 1:** \( \text{square}(1) \) requires one primitive operation: multiplication. \( \text{square}(100) \) also requires one. No matter what input \( n \) we pass into \( \text{square} \), it always takes a *constant* number of operations (1). In other words, this function has a runtime complexity of \( \Theta(1) \).
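The worksheet does not show `square`'s definition; the following is a minimal sketch consistent with the discussion above (the body `n * n` is an assumption):

```python
def square(n):
    # One multiplication and nothing else: the operation count
    # is 1 no matter how large n is, which is what Theta(1) means.
    return n * n
```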
As an illustration, check out the table below:

<table>
  <thead>
    <tr> <th>input</th> <th>function call</th> <th>return value</th> <th>operations</th> </tr>
  </thead>
  <tbody>
    <tr> <td>1</td> <td>square(1)</td> <td>1*1</td> <td>1</td> </tr>
    <tr> <td>2</td> <td>square(2)</td> <td>2*2</td> <td>1</td> </tr>
    <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr>
    <tr> <td>100</td> <td>square(100)</td> <td>100*100</td> <td>1</td> </tr>
    <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr>
    <tr> <td>n</td> <td>square(n)</td> <td>n*n</td> <td>1</td> </tr>
  </tbody>
</table>

**Example 2:** \( \text{factorial}(1) \) requires one multiplication, but \( \text{factorial}(100) \) requires 100 multiplications. As we increase the input size \( n \), the runtime (number of operations) increases *linearly* in the input. In other words, this function has a runtime complexity of \( \Theta(n) \). As an illustration, check out the table below:

<table>
  <thead>
    <tr> <th>input</th> <th>function call</th> <th>return value</th> <th>operations</th> </tr>
  </thead>
  <tbody>
    <tr> <td>1</td> <td>factorial(1)</td> <td>1*1</td> <td>1</td> </tr>
    <tr> <td>2</td> <td>factorial(2)</td> <td>2*1*1</td> <td>2</td> </tr>
    <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr>
    <tr> <td>100</td> <td>factorial(100)</td> <td>100*99*...*1*1</td> <td>100</td> </tr>
    <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr>
    <tr> <td>n</td> <td>factorial(n)</td> <td>n*(n-1)*...*1*1</td> <td>n</td> </tr>
  </tbody>
</table>

**Example 3:** Consider the following function:

```python
def bar(n):
    for a in range(n):
        for b in range(n):
            print(a, b)
```

\( \text{bar}(1) \) requires 1 print statement, while \( \text{bar}(100) \) requires \( 100*100 = 10000 \) print statements (each time \( a \) increments, we have 100 print statements due to the inner for loop). Thus, the runtime increases *quadratically* in the input.
In other words, this function has a runtime complexity of \( \Theta(n^2) \).

<table>
  <thead>
    <tr> <th>input</th> <th>function call</th> <th>operations (prints)</th> </tr>
  </thead>
  <tbody>
    <tr> <td>1</td> <td>bar(1)</td> <td>1</td> </tr>
    <tr> <td>2</td> <td>bar(2)</td> <td>4</td> </tr>
    <tr> <td>...</td> <td>...</td> <td>...</td> </tr>
    <tr> <td>100</td> <td>bar(100)</td> <td>10000</td> </tr>
    <tr> <td>...</td> <td>...</td> <td>...</td> </tr>
    <tr> <td>n</td> <td>bar(n)</td> <td>n^2</td> </tr>
  </tbody>
</table>

**Example 4:** Consider the following function:

```python
def rec(n):
    if n == 0:
        return 1
    else:
        return rec(n - 1) + rec(n - 1)
```

`rec(1)` requires one addition, as it returns `rec(0) + rec(0)`, and `rec(0)` hits the base case and requires no further additions. But `rec(4)` requires $2^4 - 1 = 15$ additions. To further understand the intuition, we can take a look at the recursive tree below. To get `rec(4)`, we need one addition. We have two calls to `rec(3)`, which each require one addition, so this level needs two additions. Then we have four calls to `rec(2)`, so this level requires four additions, and so on down the tree. In total, this adds up to $1 + 2 + 4 + 8 = 15$ additions.

![Recursive Call Tree](image)

As we increase the input size `n`, the runtime (number of operations) increases exponentially in the input. In other words, this function has a runtime complexity of $\Theta(2^n)$.
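The $2^n - 1$ addition count described above can be checked with a small instrumented sketch; `count_additions` is our own helper (not part of the worksheet) that mirrors `rec`'s recursion, and `rec` is repeated here so the snippet is self-contained:

```python
def rec(n):
    if n == 0:
        return 1
    else:
        return rec(n - 1) + rec(n - 1)

def count_additions(n):
    """Count the additions rec(n) performs: one addition at this
    level, plus whatever each of the two recursive calls performs."""
    if n == 0:
        return 0
    return 1 + 2 * count_additions(n - 1)

print(rec(4), count_additions(4))  # 16 15 -- rec(4) == 2**4, with 2**4 - 1 additions
```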
As an illustration, check out the table below (note that `rec(n)` returns $2^n$ while performing $2^n - 1$ additions):

<table>
  <thead>
    <tr> <th>input</th> <th>function call</th> <th>return value</th> <th>operations</th> </tr>
  </thead>
  <tbody>
    <tr> <td>1</td> <td><code>rec(1)</code></td> <td>2</td> <td>1</td> </tr>
    <tr> <td>2</td> <td><code>rec(2)</code></td> <td>4</td> <td>3</td> </tr>
    <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr>
    <tr> <td>10</td> <td><code>rec(10)</code></td> <td>1024</td> <td>1023</td> </tr>
    <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr>
    <tr> <td><code>n</code></td> <td><code>rec(n)</code></td> <td>$2^n$</td> <td>$2^n - 1$</td> </tr>
  </tbody>
</table>

Here are some general guidelines for finding the order of growth for the runtime of a function:

- If the function is recursive or iterative, you can subdivide the problem as seen above:
  - Count the number of recursive calls/iterations that will be made in terms of input size \( n \).
  - Find how much work is done per recursive call or iteration in terms of input size \( n \).
  - The answer is usually the product of the above two, but be sure to pay attention to control flow!
- If the function calls helper functions that are not constant-time, you need to take the runtime of the helper functions into consideration.
- We can ignore constant factors. For example, \( 1000000n \) and \( n \) steps are both linear.
- We can also ignore smaller factors. For example, if \( h \) calls \( f \) and \( g \), and \( f \) is quadratic while \( g \) is linear, then \( h \) is quadratic.
- For the purposes of this class, we take a fairly coarse view of efficiency. All the problems we cover in this course can be grouped as one of the following:
  - Constant: the amount of time does not change based on the input size. Rule: \( n \rightarrow 2n \) means \( t \rightarrow t \).
  - Logarithmic: the amount of time changes based on the logarithm of the input size. Rule: \( n \rightarrow 2n \) means \( t \rightarrow t + k \).
  - Linear: the amount of time changes in direct proportion to the size of the input. Rule: \( n \rightarrow 2n \) means \( t \rightarrow 2t \).
  - Quadratic: the amount of time changes based on the square of the input size. Rule: \( n \rightarrow 2n \) means \( t \rightarrow 4t \).
  - Exponential: the amount of time grows exponentially in the input size. Rule: \( n \rightarrow n + 1 \) means \( t \rightarrow 2t \).

Q8: WWPD: Orders of Growth

What is the worst case (i.e. when \( n \) is prime) order of growth of `is_prime` in terms of \( n \)?

```python
def is_prime(n):
    for i in range(2, n):
        if n % i == 0:
            return False
    return True
```

Choose one of:
- Constant
- Logarithmic
- Linear
- Quadratic
- Exponential
- None of these

What is the order of growth of `bar` in terms of `n`?

```python
def bar(n):
    i, sum = 1, 0
    while i <= n:
        sum += biz(n)
        i += 1
    return sum

def biz(n):
    i, sum = 1, 0
    while i <= n:
        sum += i**3
        i += 1
    return sum
```

Choose one of:
- Constant
- Logarithmic
- Linear
- Quadratic
- Exponential
- None of these

What is the order of growth of `foo` in terms of `n`, where `n` is the length of `lst`? Assume that slicing a list and calling `len` on a list can both be done in constant time. Write your answer in \( \Theta \) notation.

```python
def foo(lst, i):
    mid = len(lst) // 2
    if mid == 0:
        return lst
    elif i > 0:
        return foo(lst[mid:], -1)
    else:
        return foo(lst[:mid], 1)
```
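Not an official solution, but one way to sanity-check your answer for `foo` is to count its recursive calls. The sketch below uses `foo_calls`, our own instrumented variant (the name and the extra `calls` parameter are assumptions), which has the same control flow as `foo` but returns the call count; since every call discards half the list, doubling the input adds roughly one call:

```python
def foo_calls(lst, i, calls=1):
    # Same control flow as foo above, but returns how many calls
    # were made instead of the final list.
    mid = len(lst) // 2
    if mid == 0:
        return calls
    elif i > 0:
        return foo_calls(lst[mid:], -1, calls + 1)
    else:
        return foo_calls(lst[:mid], 1, calls + 1)

print(foo_calls(list(range(16)), 1))    # 5
print(foo_calls(list(range(1024)), 1))  # 11
```

Lengths shrink as 16, 8, 4, 2, 1 — one call per halving — which is the signature of logarithmic growth.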
Software Quality Revisited

Ronan Fitzpatrick (Technological University Dublin, Ronan.fitzpatrick@tudublin.ie), Peter Smith, Brendan O'Shea

This work is licensed under a [Creative Commons Attribution-Noncommercial-Share Alike 4.0 License](https://creativecommons.org/licenses/by-nc-sa/4.0/)

Abstract

Definitions of software quality have focused on software product quality factors. Quality that focuses on product quality is referred to by Kaoru Ishikawa as a narrow view of quality, and he suggests that a broader, more embracing and inclusive view is really necessary. The requirements of successful E-Commerce Web sites demonstrate this view. While the site might be considered as the product, Web site producers, owners and visitors also have a "quality" requirement. This broader view gives rise to the need to research and understand quality-of-development, quality-of-ownership and quality-of-engagement as well as quality-of-product. From the quality-of-product perspective, while many of the well established and understood software quality factors of McCall and Boehm still apply in this new domain, they need to be reinterpreted and they are no longer a complete set. Additional quality factors are needed for the WWW. Already identified are quality factors like visibility, intelligibility, credibility, engagibility and differentiation. In this new situation it is also necessary to take a step beyond MIS practice in order to achieve Web site quality, and in this regard the paper further considers proprietor development, engagibility, Software Quality - Metric Ratio Analysis (SQ-MRA) and progressive maintenance. Finally, having revisited software quality definitions and interpretations, it is appropriate to review original thinking regarding software quality factors in order to determine if lessons learned from this revisit apply to the quality of traditional Information Systems.
1. Introduction

The study of software quality has focused on product quality [1] and [2]. However, quality that is limited to product quality is referred to by Kaoru Ishikawa (the founding father of the Japanese quality movement) as a narrow view of quality, and he suggests that a broader view of quality is necessary [3]. The requirements of successful E-Commerce Web sites demonstrate this view and have precipitated a need to cater for the requirements of Web site visitors and owner organizations [4] in addition to product quality. While the site might be considered as the product, Web site owners and visitors also have a "quality" requirement. These E-Commerce sites have a sales focus and their quality is being driven by the sales and marketing professionals whose principal objective is to attract and retain customers. The visitors' perspective is described by [4] in terms of Total Customer Experience (TCE), which addresses the issues involved in attracting and retaining E-Commerce customers. A full understanding of the issues involved is also important when determining the work effort and cost of Web site development and for complying with the legal requirements of Web sites.

The purpose of this paper is to present some of the findings of continuing research which is motivated by the broader-view philosophy and seeks to extend that philosophy by going a step beyond it in the context of E-Commerce. The paper also introduces measurement considerations being addressed by the research. The paper will be of interest to academics and professionals who are researching, studying or practicing in areas where quality is a driver of software measurement.

Section 2 revisits software quality definitions to reflect E-Commerce requirements. Section 3 addresses software quality drivers and Section 4 outlines quality challenges in E-Commerce.
In the context of going a step beyond traditional MIS ownership and use, Section 5 outlines quality considerations that need to be addressed by software quality professionals in order to achieve quality Web sites.

2. Software quality definitions revisited

In a series of definitions relating to quality control, Kaoru Ishikawa [3] refers to products which can "satisfy the requirements of consumers". This, he explains, should be "narrowly interpreted to mean quality of products". He continues that "broadly interpreted, quality means quality of work, quality of service, quality of information, quality of process, quality of division, quality of people including workers, engineers, managers and executives, quality of system, quality of company, quality of objects etc. To control quality in its every manifestation is our basic approach".

This paper embraces this broader interpretation of quality and adds that, in the context of E-Commerce Web sites, in addition to quality-of-product and quality-of-production as implied by Ishikawa, quality also means quality-of-ownership and quality of the visitor experience, i.e. quality-of-use. While the broader view is the correct view and is a foundation of this research, it is inappropriate to use the term quality to describe quality; that is, it is incorrect to say "broadly interpreted, quality means quality of work..." and so on. What should we understand by quality of work? An opportunity to define quality is missed, so this paper defines quality as a measure of excellence and suggests that measures of excellence apply to all of the perspectives. To control this quality it is necessary to be able to quantify the attributes of excellence of these perspectives and to understand metrics which are appropriate to measuring them.
3. Software quality drivers

In keeping with the broader-view philosophy and in relation to software quality, it is appropriate to review strategic considerations that influence software stakeholders in order to determine a broader understanding of what actually contributes to their perspective of software quality. This research has focused on identifying strategic issues that drive quality from the procurer's (owner's) and the software producer's perspectives. The Software Quality – Strategic Driver Model (SQ-SDM) explains both of these perspectives and both are fully described by [5]. They are illustrated in Figure 1 and definitions for all drivers are set out in Figure 2.

Figure 1: Software Quality – Strategic Driver Model

The growth and demands of E-Commerce are resulting in requests from software acquirers for Web sites that will provide them with competitive support. While explained in the context of traditional IT systems, these drivers can be easily interpreted for Web site owners. Additionally, aspects of them (e.g., Technical excellence and User acceptance) are easily interpreted for Web site visitors such that both parties in a B2C contract are considered.

4. Quality challenges in E-Commerce Web sites

This section considers three significant challenges associated with quality in the domain of the World Wide Web. The first challenge concerns itself with interpreting the strategic driver perspectives in relation to quality Web sites. The second challenge is concerned with the sufficiency of product quality factors when interpreted for the WWW, and the third challenge considers the impact of these first two challenges on the metrics and measurement methods that are used in connection with quality Web sites.

4.1. Strategic driver perspectives

The Software Quality – Strategic Driver Model (SQ-SDM) sets out drivers associated with development and ownership, that is, quality-of-development and quality-of-ownership.
In the case of quality Web sites, quality-of-development and quality-of-ownership do not address all of the issues, and there are significant considerations regarding the quality of the site visitor's experience. Product quality (Kaoru Ishikawa's narrow view of quality) also has to be revisited.

4.1.1. Quality-of-Development

Quality-of-development considers all of the drivers set out in the producer's perspective in Figure 2. In particular, it is important that developers' own Web sites display a quality perspective in terms of all of the aspects of quality discussed here. Indeed, how is a developer going to instill confidence in their development capabilities unless they do so? Furthermore, consideration must be given to design and development methodologies for Web sites and the extent to which developers use and adhere to these.

4.1.2. Quality-of-Ownership

Quality-of-ownership considers all of the drivers set out in the procurer's perspective in Figure 2. The research is now principally focusing on the User acceptance driver and in particular its extended meaning in the context of E-Commerce. The research shows that there is a connection between the owner's need to create consumer-centric Web sites and the visitor's need for an engaging experience [5]. In addition, opportunities and challenges relating to proprietor development exist, and these are considered later in Section 5.1. Technical excellence (the strategic quality driver for excellence in software product support) also takes on new meaning, especially in relation to the on-going maintenance which is essential for a quality Web site. This is considered further in Section 5.4.

4.1.3. Quality-of-Engagement

Quality-of-engagement is visitor focused and is concerned with the quality of the visit or the quality of the experience, and is similar to quality-of-use of traditional MIS applications [5].
However, in the context of the WWW the use has a much more engaging dimension from the visitor's perspective while at the same time being a significant on-going consideration for the Web site owner. The product is no longer an artefact sold by a seller and purchased by a customer. The product is now a core sales and marketing tool of the seller and is designed to attract and retain customers. Quality-of-engagement considers all of the external quality factors as illustrated in Figure 3 while at the same time remaining conscious of the competitive advantage requirements of quality-of-ownership.

4.1.4. Quality-of-Product

In this case the product is a Web site and this research has interpreted acknowledged software quality factors and identified additional factors for the WWW. These additional quality factors are addressed in Section 4.2 which follows.

4.2. Web site quality factors

The continuing research has in the first instance identified a comprehensive set of core software quality factors which extend those published by [1] and embrace requirements of European Community law [7]. In addition, five new domain-specific quality factors for the WWW have been identified [8].
<table>
<thead>
<tr>
<th></th>
<th>External quality factors</th>
<th>Internal quality factors</th>
<th>Strategic quality factors</th>
</tr>
</thead>
<tbody>
<tr>
<td>Core quality factors</td>
<td>suitability, installability, functionality, adaptability, ease-of-use, learnability, interoperability, reliability, safety, security, correctness, efficiency</td>
<td>maintainability, testability, flexibility, reusability, portability</td>
<td></td>
</tr>
<tr>
<td>Domain-specific quality factors</td>
<td>visibility, intelligibility, credibility, engagibility</td>
<td></td>
<td>differentiation</td>
</tr>
</tbody>
</table>

Figure 3: Combined set of software quality factors for the World Wide Web

These five new quality factors are visibility, intelligibility, credibility, engagibility and differentiation. The combined set of quality factors is illustrated in Figure 3. The taxonomy in which they are presented is similar to the traditional External and Internal categorisation of quality factors [9]. In these categories the emphasis is on usability (external quality), which is principally of interest to the user, and on technical excellence (internal quality), which is principally of interest to software engineering and IS professionals. The Core quality factors, which are appropriate to all software applications, are shown separate from the Domain-specific quality factors, which are appropriate to the Web. A third category is added in order to reflect owner interest. This category is named Strategic quality factors and differentiation is shown in this category.
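One way to make the Figure 3 taxonomy machine-readable, for example as input to the quality accreditation system discussed later, is a simple mapping. The dictionary layout and helper function below are illustrative assumptions, not part of the original research; only the factor names and categories come from Figure 3.

```python
# Illustrative encoding of the Figure 3 taxonomy. The data-structure layout
# is an assumption; the factor names and categories are from the paper.
QUALITY_FACTORS = {
    "core": {
        "external": ["suitability", "installability", "functionality",
                     "adaptability", "ease-of-use", "learnability",
                     "interoperability", "reliability", "safety",
                     "security", "correctness", "efficiency"],
        "internal": ["maintainability", "testability", "flexibility",
                     "reusability", "portability"],
        "strategic": [],
    },
    "domain-specific": {
        "external": ["visibility", "intelligibility", "credibility",
                     "engagibility"],
        "internal": [],
        "strategic": ["differentiation"],
    },
}

def factors_for(category: str) -> list[str]:
    """Flatten all factors in one category (external/internal/strategic)."""
    return [f for group in QUALITY_FACTORS.values() for f in group[category]]
```

For instance, `factors_for("strategic")` yields only `differentiation`, reflecting the owner-interest category added by the research.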
The research includes definitions and characteristics for the five new quality factors for the WWW and these are set out in Figure 4 [8]. The first four of these fit with what [4] also refer to as the Total Customer Experience (TCE) and are user focused, while the fifth, differentiation, is an issue of primary concern to a quality Web site owner. They clearly reflect quality-of-use and quality-of-ownership.

<table>
<thead>
<tr>
<th>Quality factor</th>
<th>Characteristics</th>
</tr>
</thead>
<tbody>
<tr>
<td>Visibility</td>
<td>Traceability, Retrievability, Ease-of-access</td>
</tr>
<tr>
<td>Intelligibility</td>
<td>Legibility, Audibility, Comprehensibility</td>
</tr>
<tr>
<td>Credibility</td>
<td>Integrity, Accuracy</td>
</tr>
<tr>
<td>Engagibility</td>
<td>Navigability, Interactivity, Appeal</td>
</tr>
<tr>
<td>Differentiation</td>
<td>Speciality, Identity</td>
</tr>
</tbody>
</table>

*Figure 4: Definitions and characteristics of additional quality factors for the WWW*

From a software measurement viewpoint, the Core quality factors, when combined with the Domain-specific quality factors (Figure 3), can be used as essential components of a quality accreditation system for Web sites. Furthermore, the research also includes a comprehensive set of ENABLERS for the domain-specific quality factors which can be used by specifiers, designers, developers and evaluators as essential issues which must be addressed in order to estimate, quality assure and evaluate quality Web sites [8].

4.3. Metrics and methods

Within the discipline of software engineering, product quality is assured through testing using well understood and proven processes. Typically, this specifies what must be tested, its timing during the life cycle, the methods to be used and the expected results. This is not the case in the context of Web sites. The issues of "what has to be tested" include the additional quality factors identified by this research.
From research observations, its timing during the life cycle appears to be mainly confined to evaluation after the artefact has been created. Similarly, the methods might benefit from classification and mapping to the life cycle. However, "the expected results" is a major challenge, as metrics, especially in an E-Commerce context, are not well researched or well understood. While some metrics, such as Nielsen's guideline of less than 10 seconds for WWW response time, are accepted norms, similar types of metrics must be identified and defined, especially metrics which relate to the quality-of-use factors per Figure 4. In addition to these quality-of-use metrics there is also a need to consider metrics from a quality-of-ownership and quality-of-producer perspective too. The reader will appreciate that these metrics are in addition to quality-of-product metrics. It follows that, since new quality factors have been identified for quality Web sites, new metrics associated with these will necessitate revisiting estimation, quality assurance and evaluation methods and techniques.

5. Going a step beyond MIS practice to achieve Web site quality

Having considered the broader perspective of software quality, this section focuses on quality in the domain of the WWW and considers how a step beyond the understood Management Information Systems practice relating to implementation, usability, measurement and maintenance is appropriate in the context of quality Web sites. In the context of the WWW the section addresses the next wave of end-user development, site visitor engagement, assessment by way of software quality Metric Ratio Analysis and enhancement by way of progressive maintenance. It outlines quality considerations that need to be addressed by systems professionals whose work is impacted by software quality.

5.1. A development step beyond implementation

Quality Web sites provide an excellent opportunity for the next wave of end-user development.
Opportunities abound for professionals in all business sectors to become their own Web site proprietor. Typical of those already availing of such opportunities are medical consultants and academics who have large numbers of clients and students who willingly visit their specialist Web site. This approach avoids the cost of having a solutions provider implement a Web site for them, so the end-user concept fits well. From the professional's perspective a quality Web site provides an excellent solution to a significant communications problem, and from the site visitor's perspective the delivery of their requirements is consistent and assured. Respectively, these contribute to quality-of-ownership and quality-of-use. This style of Web site development is proprietor development and is a step beyond implementation. It is successfully achieved by combining development tools with a sound understanding of what constitutes a quality Web site in the area of specialism. And it need not be limited to the examples already explained. Similar opportunities abound for organisations to support qualified staff to expand their intranets and extranets. In the context of the broader view of quality, definitions of quality Web sites need to be revisited and best practice guidelines need to be formulated.

5.2. An engagement step beyond usability

In the domain of Management Information Systems external software quality (i.e., usability) and its evaluation are well understood by those involved in software measurement. Usability addresses all issues that impact the user of the software [7]. This interaction with software is called quality-of-use by [6]. But usability is the limit of MIS interactivity, in that the users' ability to contribute to the software artefact is limited to tailoring the interface to suit their own preference. Beyond that the artefact is static.
In the domain of the World Wide Web the artefact is a Web site that can be dynamically created to suit visitors. This type of interaction is named (by this research) engagibility and is an engagement step beyond usability. It is achieved by the inclusion of communication paths such as visitor contribution to Web site content, moderated mailing lists, chat rooms, support for feedback contribution, comment forums and similar paths, which empower and enable site visitors. It is a significant consideration in E-Commerce and the Total Customer Experience (TCE). The inclusion of this level of engagibility dictates that software quality measurement (methods and metrics) needs to be revisited in relation to productivity, effort and cost estimation, quality assurance and usability evaluation.

5.3. An assessment step beyond system review

The benefits of ownership of Information Systems have been the subject of extensive research for many years and there is a mature understanding of the issues involved. This research advances our understanding and focuses on quality-of-ownership in keeping with the broader view of Ishikawa [4]. In particular the research addresses the owner's perspective in the Software Quality – Strategic Driver Model (SQ-SDM) [5]. In the context of the World Wide Web our understanding of quality-of-ownership of Web sites involves new criteria not found in traditional MIS applications. Measuring these criteria typically addresses analysis of use for competitive advantage and involves measurements associated with page mining, hyperlink excursions, activity usage and a set of similar metrics. This analysis is an assessment step beyond system review and is part of what this research names Software Quality – Metric Ratio Analysis (SQ-MRA). In relation to quality Web sites, SQ-MRA uses a formula to determine a numeric value for individual ratios from a set of strategic quality-of-ownership Web site ratios. These ratios relate to usage considerations.
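The paper does not publish the SQ-MRA formulae or the ratio set itself, so the following is only a sketch of the *style* of usage ratio described: a numeric value computed from Web site usage counts. The ratio names (`return_visit_ratio`, `conversion_ratio`) are invented for illustration and are not taken from the research.

```python
def ratio(numerator: float, denominator: float) -> float:
    """A single SQ-MRA-style usage ratio; returns 0.0 when undefined."""
    return numerator / denominator if denominator else 0.0

# Hypothetical quality-of-ownership ratios (names invented for illustration;
# the actual SQ-MRA ratio set is defined by the research and not shown here).
def return_visit_ratio(return_visits: int, total_visits: int) -> float:
    return ratio(return_visits, total_visits)

def conversion_ratio(purchases: int, total_visits: int) -> float:
    return ratio(purchases, total_visits)
```

Monitoring such ratios over time would support the monitor-analyse-tune loop that the research associates with quality-of-ownership.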
SQ-MRA is motivated by the style of ratio analysis that is well understood as a core measure in the principles of financial accounting, where ratios are regarded as a simple yet powerful approach to analysing business performance; this research proposes similar analysis for quality Web sites. The concept is similar to the work of [10], who focus on a set of marketing-focused metrics which they style E-Metrics. The value of SQ-MRA is that it identifies a set of Web site quality metrics which Web site owners can use as a guide in order to specify requirements and performance objectives for the site. Once live, the Web site can be monitored and analysed and then, subject to the findings, can be tuned for optimum usage. Results achieved from this monitoring and analysis give rise to an evolutionary form of maintenance as discussed in Section 5.4 which follows.

5.4. An enhancement step beyond maintenance

A Management Information System, once coded and installed, is maintained by way of adaptive, corrective and perfective maintenance [11]. A significant element which contributes to this form of software maintenance is defect detection and repair. In an MIS system the functionality remains consistent. This is not the case with a Web site. A quality Web site is different because it can continue to evolve (in some instances on a daily basis) and needs to be continuously maintained by way of updating or refreshing general presentation, updating and repairing hyperlinks, adding new content, updating brochureware and responding to visitor communication. This type of maintenance is progressive maintenance (to add to adaptive, corrective and perfective) and is an enhancement step beyond traditional maintenance. It is achieved through a proactive policy of system review which regularly (often daily) revisits the Web site in order to update, upgrade, repair, respond, improve and sometimes delete aspects of the Web site.
Progressive maintenance results in crisp, clean, fresh Web sites that impress new visitors, retain existing customers and generally contribute to the overall objective of engagibility. From a Web site owner's perspective, progressive maintenance typically supports tangible and intangible benefits of quality-of-ownership. Progressive maintenance is an evolving component of software maintainability (an original software quality factor) and so it follows that estimation, quality assurance and usability evaluation need to be reconsidered in order to determine how they are impacted by this new component. Enhancements achieved by a policy of progressive maintenance will contribute to Web site differentiation and support competitive advantage. All of the considerations in this Section illustrate the need for improved understanding of what constitutes a quality Web site from the owner's and user's perspective. There is also a need for a similar understanding from the developer's perspective. It follows that measurement methods and metrics appropriate to the area of estimation, quality assurance and usability evaluation need to be revisited in order to address the requirements of quality Web sites.

6. Closing observations

Before closing there are two observations worth recording. These are:

- Despite much reworking of Systems Life Cycle models over many years, none of the acknowledged and most cited models (Waterfall, Spiral, V or Star) includes quality in its conceptual model.
- To date few formal computing curricula have focussed on quality. Quality is usually covered within syllabuses on system design and software engineering. Examples of formal benchmarks for computing courses are those offered by the United Kingdom Quality Assurance Agency [12] and by the British Computer Society examinations [13]. Both of these examples do make substantial references to quality within their syllabus guidance.
However, it is the belief of the authors that this context would benefit from expansion and more explicit reference to a holistic view of quality.

7. Conclusion

This paper has explained that in the context of E-Commerce the broader view philosophy of quality as referred to by Kaoru Ishikawa is especially relevant. The paper has shown that this broader view addresses quality-of-development, quality-of-ownership, quality-of-engagement and quality-of-product. Driven by sales and marketing professionals, quality-of-ownership and quality-of-engagement have competitive advantage issues that need to be addressed. The paper also explains that in the context of E-Commerce there is a need to go a step beyond the traditional quality issues of MIS applications. This step beyond addresses proprietor development, engagibility, Software Quality – Metric Ratio Analysis (SQ-MRA) and progressive maintenance. Revisiting software quality also means reviewing current approaches to quality in the system life cycle model, the curriculum, measurement, and evaluation methods and metrics. It follows that standards that are impacted by such considerations also need to be reviewed when software quality is revisited.

8. References
Issues in Developing Object-Oriented Database Systems for Real-Time Applications

Juhnyoung Lee and Sang H. Son
Department of Computer Science, University of Virginia, Charlottesville, VA 22903

Myung-Joon Lee
Department of Computer Science, University of Ulsan, Ulsan, Kyung-Nam 680-749, Korea

Abstract

Database systems for real-time applications must satisfy timing constraints associated with transactions, in addition to maintaining data consistency. Recently, interest in object-oriented databases has been growing for non-traditional applications of database systems, and several real-time applications are being developed using an object-oriented paradigm. The object-oriented approach seems promising for developing complex real-time database applications. However, it is not clear whether object-oriented database systems would be superior to relational database systems for supporting real-time applications. In this paper, we address issues that must be investigated in order to design and develop an object-oriented database system for real-time applications. Also, we present a model that integrates features for scheduling real-time transactions with the traditional object-oriented database model.

1. Introduction

A real-time database system (RTDBS) is a transaction processing system where transactions have explicit timing constraints. Typically, a timing constraint is expressed in the form of a deadline, a certain time in the future by which a transaction needs to be completed. A deadline is said to be hard if it cannot be missed or else the result is useless. If a deadline can be missed, it is a soft deadline. With soft deadlines, the usefulness of a result may decrease after the deadline is missed. In an RTDBS, the correctness of transaction processing depends not only on maintaining consistency constraints and producing correct results, but also on the time at which a transaction is completed.
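The hard/soft deadline semantics just described can be sketched as a value function over completion time, a common formalisation in the RTDBS literature. The linear decay for soft deadlines is an illustrative assumption, not something the paper specifies.

```python
def transaction_value(completion, deadline, kind, full_value=1.0, decay=0.1):
    """Value of a completed transaction as a function of its completion time.

    kind: 'hard' -> the result is useless after the deadline;
          'soft' -> usefulness decreases after the deadline (the linear
                    decay rate here is an illustrative choice).
    """
    if completion <= deadline:
        return full_value
    if kind == "hard":
        return 0.0
    # soft deadline: value decreases after the deadline, never below zero
    return max(0.0, full_value - decay * (completion - deadline))
```

For example, a hard-deadline transaction completing after its deadline contributes no value, while a soft-deadline one retains partial, declining value.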
Transactions must be scheduled and processed in such a way that they can be completed before their corresponding deadlines expire. Real-time database systems are being used for a variety of applications such as process control, mission-critical applications in command and control systems and radar systems, computer integrated manufacturing systems, and air traffic control systems, among others. Conventional data models and databases are not adequate for time-critical applications, since they are not designed to provide the features required to support real-time transactions. They are designed to provide good average performance, while possibly yielding unacceptable worst-case response times. Very few of them allow users to specify or ensure timing constraints. During the last few years, several research and development efforts on RTDBSs have been reported [19, 21, 22, 26]. However, almost all of them are based on the relational data model. Although object-oriented database systems have received a lot of attention for the last several years, not much work has been done in investigating how the object-oriented model can benefit database systems for real-time applications. Only recently have object-oriented data models attracted the attention of researchers in RTDBSs [5, 6, 13, 14, 15].

2. Preliminary Questions

There are several questions to be answered before object-oriented databases can be considered for real-time applications. First, do we need object-oriented data models to satisfy real-time database requirements? Related questions are: Are the features of object-oriented data models helpful/necessary to satisfy timing constraints? Or do they interfere with timely execution of transactions? Why do we need to consider object-oriented models for real-time database systems?
In general, using object-oriented data models for an RTDBS does not directly help the system to improve timeliness or to guarantee deterministic behavior of transaction execution, because none of the object-oriented data model's features provides active pursuit of timely/deterministic processing of transactions. In addition, the poor performance of current object-oriented database systems, partly due to the lack of efficient implementation techniques, will have a negative impact on satisfying timing constraints.

1. This work was supported in part by ONR, IBM and CIT.
2. Currently visiting Department of Computer Science at University of Virginia.

[Report Documentation Page: report date 1994; University of Virginia, Department of Computer Science, 151 Engineer's Way, Charlottesville, VA, 22094-4740; approved for public release, distribution unlimited; 5 pages.]

Two major benefits of object-oriented data models are (1) better support of advanced data-intensive applications by providing the capabilities for modeling, storing and manipulating complex objects, and (2) better software engineering in building large and complex application systems by providing support for encapsulated objects. The need for supporting real-time database requirements with object-oriented data models may arise because real-time applications may require modeling complex encapsulated objects. Is there any inherent problem in object-oriented data models in satisfying timing constraints of real-time applications? It is obvious that the basic features of object-oriented data models (objects, attributes, methods, messages, classes, class hierarchy, and inheritance) do not directly help timely processing of transactions. It is also true, however, that none of them particularly interferes with active pursuit of timely processing of transactions, except for the potential lack of efficient implementation techniques.
Thus, issues in supporting real-time requirements with an object-oriented data model lie in extending the object-oriented data model to include specification of timing constraints on objects (more specifically on attributes/methods), and to actively pursue timely processing of transactions, rather than in combating any incompatibility between object-oriented data models and real-time requirements.

3. An Extended Object-Oriented Database Model

In this section, we first describe the traditional model of object-oriented databases [3, 8]. Since the notion of nesting is natural in object-oriented databases [1, 8, 20], the model allows objects to include methods that are not necessarily atomic and may invoke other methods. Then, we briefly discuss features required for scheduling transactions with real-time requirements, and extend the model of object-oriented databases to include policies for scheduling real-time transactions. An object-oriented database is a collection of classes and instances of these classes. Both classes and instances are referred to as objects. A class defines a set of attributes for its instances and procedures through which instances can be manipulated. The procedures associated with a class are referred to as methods, and a method may invoke other methods on other objects in the database. In this model, we allow inheritance of properties (attributes and methods) between classes, i.e., the classes are structured as a hierarchy. All subclasses of a class inherit all properties defined for the class and have additional properties local to the subclass. Users of an object-oriented database access the instance objects by executing methods. Since multiple users often may need to access several classes and instances, the traditional transaction model for database systems can be used to ensure atomicity of user interactions.
Users access the database by executing transactions, where a transaction is a partially ordered set of operations on class and instance objects. We use commutativity as the basis for determining whether a particular operation invocation can be allowed to execute concurrently with those in progress [24]. Two operations commute if the order in which they execute does not affect the results of the operations, i.e., the results returned by the operations as well as the resulting state of the objects accessed. Two operations in different transactions conflict with each other if they do not commute. To include nested transactions in this object model, we assume that a method execution is a transaction which may invoke atomic operations or invoke other methods on other objects. Namely, an operation in a transaction may be an atomic operation or another transaction, and the transaction now has a tree structure. Hadzilacos and Hadzilacos [8] established an analogue to the classical serializability theorem to prove the correctness of nested transaction execution in object-oriented databases. As in the classical serializability theorem, the correctness of a history $H$ over a set of nested transactions can be determined by constructing the serialization graph of $H$, denoted $SG(H)$. $SG(H)$ is a directed graph whose nodes correspond to the transactions in $H$ and whose edges capture orderings of transactions that must be obeyed by an equivalent serial history. In [8], it was shown that if $SG(H)$ is acyclic then $H$ is serializable on the basis of view serializability. To synchronize the concurrent execution of nested transactions, a concurrency control protocol, namely nested two-phase locking (N2PL), has been proposed [17]. In [8], the protocol has been modified to synchronize nested transactions in object-oriented databases, and its correctness was argued by using the notion of serialization graphs for nested transactions.
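The acyclicity test on $SG(H)$ can be illustrated with a standard depth-first search; the class below is our sketch, not the construction from [8].

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of a serialization graph SG(H): nodes are transactions in H, edges
// capture orderings that any equivalent serial history must obey. The history
// is serializable (in this sense) iff the graph is acyclic.
class SerializationGraph {
    private final Map<String, List<String>> successors = new HashMap<>();

    void addEdge(String from, String to) {
        successors.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        successors.computeIfAbsent(to, k -> new ArrayList<>());
    }

    // Standard DFS cycle detection: a back edge into the current path means a cycle.
    boolean isAcyclic() {
        Set<String> finished = new HashSet<>();
        Set<String> onPath = new HashSet<>();
        for (String node : successors.keySet()) {
            if (!finished.contains(node) && hasCycle(node, finished, onPath)) {
                return false;
            }
        }
        return true;
    }

    private boolean hasCycle(String node, Set<String> finished, Set<String> onPath) {
        onPath.add(node);
        for (String next : successors.get(node)) {
            if (onPath.contains(next)) return true; // back edge: cycle found
            if (!finished.contains(next) && hasCycle(next, finished, onPath)) return true;
        }
        onPath.remove(node);
        finished.add(node);
        return false;
    }
}
```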
We use this protocol as the basic synchronization mechanism for transaction execution in object-oriented databases. In order for an object-oriented database to support real-time applications, we need to integrate features for scheduling real-time transactions with the conventional object-oriented database model. There are four major policies for scheduling transactions with real-time requirements: (1) priority assignment, i.e., a policy for assigning priorities to transactions, (2) eligibility test, i.e., a policy to determine which transactions are eligible for service, (3) concurrency control, i.e., a basic synchronization mechanism, and (4) conflict resolution, i.e., a policy for resolving conflicts between two (or more) transactions that access the same data object. The scheduling policies should work cooperatively to maximize the number of transactions that meet their deadlines. Transaction scheduling in a real-time database system (RTDBS) can be studied from several different perspectives. This largely depends on how the system is specified in terms of data consistency requirements and timing constraints. In this study, we assume that data consistency is defined by the correctness notion of serializability (i.e., a relaxation of serializability is not considered for improving timeliness), and that the timing constraints associated with transactions are firm deadlines (i.e., transactions which miss their deadlines are useless and need to be discarded from the system). In addition, we assume that transactions arrive sporadically with unpredictable arrival times, and that the data and resource requirements of each transaction are unknown to concurrency control beforehand. Our model of an object-oriented database for scheduling real-time transactions is an extension of the traditional object-oriented database model to include the four features for scheduling real-time transactions.
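As an illustration, policies (1) and (2) can be sketched as follows. The transaction fields, the earliest-deadline comparator, and the use of a minimal execution time estimate in the eligibility test are illustrative assumptions, not the paper's exact design.

```java
import java.util.PriorityQueue;

// Sketch: each transaction carries a firm deadline and a minimal execution
// time estimate. Priorities follow the earliest deadline; the eligibility
// test discards transactions that can no longer finish by their deadlines.
class Txn {
    final String id;
    final long deadline;     // absolute firm deadline
    final long minExecTime;  // minimal execution time estimate

    Txn(String id, long deadline, long minExecTime) {
        this.id = id;
        this.deadline = deadline;
        this.minExecTime = minExecTime;
    }

    // Eligible only if it could still complete by its deadline starting now.
    boolean isEligible(long now) {
        return now + minExecTime <= deadline;
    }
}

class EdfScheduler {
    private final PriorityQueue<Txn> ready =
            new PriorityQueue<>((a, b) -> Long.compare(a.deadline, b.deadline));

    void submit(Txn t) {
        ready.add(t);
    }

    // Next transaction by earliest deadline; ineligible ones are discarded
    // outright, since a firm-deadline transaction is useless once late.
    Txn next(long now) {
        while (!ready.isEmpty()) {
            Txn t = ready.poll();
            if (t.isEligible(now)) return t;
        }
        return null;
    }
}
```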
First, considering the assumptions we made regarding real-time requirements for transactions, we choose to use the Earliest Deadline First (EDF) algorithm. Since real-time scheduling theory ensures that the EDF algorithm, which assigns the highest priority to the transaction with the earliest deadline, is optimal for dynamic priority assignment [12], EDF is a plausible choice for the given transaction model. Second, to maintain data consistency, we employ the nested two-phase locking protocol. As mentioned in the previous section, the correctness of the protocol has been proven for the execution of nested transactions in object-oriented databases. Third, for conflict resolution, we employ the high priority and wait promote [2] schemes to be incorporated into the basic concurrency control mechanism, i.e., N2PL. We also consider a conditional conflict resolution scheme discussed in [9], which switches between the two schemes using information about the lock-holding transaction's current state. Finally, we abort transactions that have missed their deadlines by using the eligibility test. Due to the firm deadline assumption, the aborted transactions are discarded from the system. One salient point about the eligibility test used in this model is that it can screen out transactions that not only have missed but also are about to miss their deadlines. To decide the eligibility of transactions, we use the minimal execution times of methods defined on objects. The minimal execution times of methods may be relatively easy to compute by empirically measuring their running times under no contention for objects among transactions. Note that the nested structure of transaction execution helps the computation and use of the minimal execution times of component methods in a nested transaction. The details of this extended model of object-oriented databases for scheduling real-time transactions and related concurrency control protocols are given in [13]. 4.
Concurrency Control Issues In this section, we first consider the difficulties in synchronizing concurrent execution of transactions in object-oriented databases and discuss research directions to enhance the performance of concurrency control in object-oriented databases. Then we discuss a simple object-oriented database system model for the development of a complete object-oriented database for real-time applications. Object-oriented databases generalize the traditional database model in several ways. First, nested executions of transactions on objects are natural since a method may invoke another method on some other objects. Second, instead of simple read and write operations on database objects, object-oriented databases permit arbitrary operations on objects. Finally, inheritance in object-oriented databases allows class hierarchies. These properties often make the problem of ensuring data consistency in object-oriented databases more difficult, because objects of arbitrary complexity become the unit of locking (and thus less concurrency in transaction execution results), and sometimes concurrency control requires locking not only the object accessed by a transaction, but also several other objects not directly accessed by the transaction. Specifically, due to inheritance, (1) while a transaction accesses instances of a class, another transaction should not be able to modify the definition of any of the superclasses of the class, and (2) while a transaction is evaluating a query, a set of class sub-hierarchies must not be modified by a conflicting transaction [11]. In order to overcome the inefficiency in ensuring data consistency in object-oriented databases, an extensive study on improving concurrency in transaction execution in object-oriented databases has been done.
Three major approaches are: (1) exploiting the structure of complex objects for enhanced concurrency or reduced overhead, (2) exploiting the semantics of operations on encapsulated objects to enhance concurrency, and (3) automating the process of extracting possible concurrency from the specification of objects. Examples of approach (1) include the concurrency control mechanisms of the Orion [7] and O2 systems [4]. Orion uses locking on three orthogonal types of hierarchy, including granularity locking (to minimize the number of locks to be set); however, the eight lock modes used in Orion only consider read and write operations, and require a complex lock compatibility table without considering operation semantics. Approach (2) is related to work on concurrency control for abstract data types (ADTs), and the use of fine-grained and ad hoc commutativity relations of operations in such ADTs as sets, maps, stacks, and counters [10, 23, 24]. Examples of previous work applying this approach for concurrency control in object-oriented databases include [1, 18, 20]. Finally, an example of approach (3) can be found in [16]. The degree of concurrency that can be extracted from this static analysis of operation specifications at compile time seems limited. Difficulties in managing transactions caused by objects of arbitrary complexity and their hierarchical relationships also make the implementation of an object-oriented database system complicated, and have an adverse impact on its capability to support real-time applications. In order for an object-oriented database system to efficiently support real-time applications, the system needs to be carefully designed to mitigate the complexity in transaction management. Now we describe a simple object-oriented database system model that is designed taking this consideration into account. Two key concepts of this model are atomic objects and the class manager.
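As a toy illustration of approach (2), a commutativity relation for a counter ADT might be specified as follows, assuming increments and decrements return no results (so two blind updates commute, while a read conflicts with any update):

```java
// Sketch of a static commutativity relation for a counter ADT. Assuming
// increment/decrement return no result, two such blind updates commute;
// a read conflicts with any update, while two reads commute.
class CounterCommutativity {
    enum Op { INCREMENT, DECREMENT, READ }

    static boolean commute(Op a, Op b) {
        if (a == Op.READ || b == Op.READ) {
            return a == Op.READ && b == Op.READ; // only read/read commutes
        }
        return true; // blind update vs. blind update
    }
}
```

Under a lock-based scheme, such a table allows two concurrent increments on the same counter that read/write locking would serialize.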
Atomic objects are basic entities for ensuring atomicity of transactions in this model, and the class manager is the major vehicle that lessens the complexity involved in transaction management in the object-oriented database system. The notion of atomic objects was studied in a number of papers, including [10, 23, 24, 25], and was first used in the context of real-time object-oriented databases in [5]. Atomic objects are ones that provide appropriate synchronization and recovery. Encapsulating the synchronization and recovery needed to support atomicity in the implementations of the shared objects is feasible because methods defined in an object provide the only means to access the object's data, and data contention can occur only among method invocations within the object. With atomic objects, we can enhance modularity; in addition, we can increase concurrency among transactions by using information about the specifications of the shared objects. For efficient support of transaction management in an object-oriented database system, we believe that the task of managing class hierarchies and method commutativities should be performed by a single module. Thus, our model uses a class manager to maintain the definitions of classes and the information on class hierarchies due to inheritance and composition. The class manager uses this information to maintain the commutativity relation among methods of each class, and provides concurrency control with this information when requested. Note that in this model, the commutativity of method invocations is statically determined and maintained by the class manager, while concurrency control, which uses the information of method commutativity, is dynamically performed by each atomic object. In [5, 6], DiPippo and Wolfe proposed a comprehensive model for real-time object-oriented databases and flexible approaches to processing real-time transactions in such a model.
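A minimal sketch of an atomic object in Java: since methods are the only way to reach the object's data, the synchronization needed for atomicity can live inside the object itself (recovery, the other half of atomicity, is omitted from this sketch).

```java
// Sketch of an atomic object: synchronization is encapsulated in the object,
// because its methods are the only means to access its data and contention
// can occur only among method invocations within the object.
class AtomicCounter {
    private long value = 0;

    synchronized void increment() { value++; }
    synchronized void decrement() { value--; }
    synchronized long read() { return value; }
}
```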
To determine the compatibility relation of methods, the approach of DiPippo and Wolfe considers not only a broad domain of semantic information affecting logical consistency, but also temporal consistency constraints. In addition, the approach allows a wide range of correctness criteria for logical consistency that relax serializability. Our work (described in this paper and [13]) differs from theirs in a number of aspects. First, we use the correctness notion of serializability to define data consistency requirements, but do not consider any relaxation of serializability for improving timeliness. The work in [5, 6] proposed a concurrency control technique that allows imprecision to accumulate in data values and in transactions as a result of trading off logical consistency and temporal consistency. Second, in our system, commutativity of methods (and of method invocations, when the range of parameter values is discrete) is determined a priori at compile time, and run-time checking of commutativity is as efficient as checking lock compatibility. We believe that this simple and efficient run-time checking is beneficial for supporting real-time transactions, because it may increase predictability in transaction execution. One drawback of this scheme is that the concurrency level may be relatively limited, because it does not exploit dynamic information about objects. In [5, 6], granting of locks is controlled by run-time evaluation of a set of preconditions and compatibility functions defined on every ordered pair of methods. This approach seems to increase the concurrency level among transactions at the cost of run-time overhead. Finally, in [13] we proposed a synchronization mechanism for scheduling real-time transactions in an object-oriented database, which uses the minimal execution time estimates of methods to decide the eligibility of transactions for service. Note that our system model helps to accurately estimate the minimal execution times due to its run-time efficiency.
The minimal execution times of methods are useful in scheduling real-time transactions in an object-oriented database, while in general worst-case execution time estimates may not be helpful due to the large variance of transaction execution times in typical database systems. 5. Conclusion In summary, object-oriented data models do not directly help database systems improve timeliness or guarantee deterministic behavior of real-time applications. They do not provide features to support active pursuit of timely and deterministic processing of transactions. However, since object-oriented database models allow better support for managing complex objects and encapsulation, real-time systems that need to handle large and complex applications would require an object-oriented approach. Considering the implications of the inherent complexity of concurrency control in object-oriented paradigms, we need to start from a simple model that can be easily extended to support real-time transactions and temporal constraints of real-time data. The model outlined in this paper can be one candidate for the development of a complete object-oriented database for real-time applications. References
Part I. Multiple Choice Questions (2 points each): 1. Which of the following represents the life-cycle of software development? (a) Analysis -> Design -> Coding -> Testing -> Operation and Maintenance ***** (b) Design -> Analysis -> Coding -> Testing -> Operation and Maintenance (c) Design -> Analysis -> Coding -> Testing (d) Analysis -> Design -> Coding -> Operation and Maintenance 2. Defining a class so that the implementation of the data and methods of the class are not known to the programmers who use the class is called: (a) Data Binding (b) Polymorphism (c) Encapsulation ***** (d) Inheritance 3. Which of the following is an incorrect identifier? (a) 3theValue***** (b) THE_IDENTIFIER (c) a_b_ (d) neolithic123FOUR 4. In the following block of code, what is the value of theVar?

```java
int theVar = // 2
    /* /* 4 + 5 */ 6 * 3 // - 2
    ;
```

(a) 18 ***** (b) 9 (c) -2 (d) 2 5. Which of the following is the proper order of promotion? (a) short -> byte -> long -> int -> float -> double (b) short -> byte -> int -> long -> float -> double (c) short -> byte -> int -> float -> double -> long (d) byte -> short -> int -> long -> float -> double***** 6. In the following block of code, what is the value of thePhrase? ```java String S1 = "anabolic regzrding vaccination"; String S2 = "itate"; String S3 = "grad"; String thePhrase = S1.substring(S1.indexOf("r"), S1.indexOf("z")) + S3.substring(1) + S2; ``` (a) "egraditate" (b) "regraditate" ****** (c) "regzgraditate" (d) None of these 7. Which of the following is an illegal assignment expression? (a) float x = 3.5; ****** (b) int x = 3; (c) double x = 4.66f; (d) long x = (int)4; 8. What is the resulting value of the following Java expression? ```java double x = (4.0f + (3.0)/(int)1.5) * (3/(int)4.0); ``` (a) 4.125 (b) 3.5 (c) 0.0 ****** (d) 4.5 9. Given the following class and usage thereof, which of the labeled lines are incorrect? 
```java public class Exam1 { private final int aQuandry; public Exam1( int quandry ) { I: aQuandry = quandry; } } // ... In some other class, in some method: II: Exam1 exam = new Exam1(); III: exam.aQuandry = 42; IV: Exam1 = new Exam1( 99 ); ``` (a) I, II (b) III, IV (c) II, III, IV ****** (d) II, III 10. What is printed by the code below? ```java public class Test { private static final int value = 5; public static void main( String[] args ) { int total, value = 4; total = value + value; total = total + someMethod( total ); System.out.println( total ); } public static int someMethod( int val ) { return value; } } ``` (a) 13 ***** (b) None of these (c) 16 (d) 15 11. A mutator method is a method that: (a) prints to the screen the value of a data member (b) reads and returns the value of a data member (c) changes the value of a data member ***** (d) constructs a class 12. What is the output of the following program? ```java public class Query { private static String someString = "hello"; private String name; public Query(String newName) { name = newName; } public static void main(String[] args) { Query query = new Query("Gordon"); changeString(someString); changeName(query); System.out.println(someString + query.name); } public static void changeString(String str) { str = "Howdy"; } public static void changeName(Query q) { q.name = "Lightfoot"; } } ``` (a) HowdyLightfoot (b) helloLightfoot (c) HowdyGordon (d) helloGordon 13. Which of the following boolean expressions is always true? (a) 10 <= x && !(x >= 10 ) (b) y == x + y && x == x + y (c) 10 <= x || !(x >= 10 ) (d) y == x + y || x == x + y 14. What is displayed by the following? 
```java public class Quest { public Quest() { } public void display(String goal, String days, int adj) { System.out.println("I am on a "+adj+" quest for the " +goal+" in "+days+" days."); } public static void main(String[] args) { String adj = "perilous", goal = "sticky wicket"; int days = 3; Quest q = new Quest(); q.display(adj, goal, days); } } ``` (a) I am on a sticky wicket quest for the perilous in 3 days. (b) I am on a perilous quest for the sticky wicket in 3 days. (c) I am on a 3 quest for the perilous in sticky wicket days. **** (d) I am on a perilous quest for the 3 in sticky wicket days. 15. Assuming: a = -1, b = -2, c = -4, d = 2, e = -1. What is the output of the following code fragment? ```java if (a < 0) if (b < 0) if (c < 0) if (!(d < 0 && e < 0)) System.out.println("One"); else System.out.println("Two"); else System.out.println("Three"); else System.out.println("Four"); ``` (a) Two Three (b) Two Four (c) One Four (d) One **** Three 16. Which of the following if statements is equivalent to this switch statement? ```plaintext switch( grade ) { case 5: case 4: a = 1; b = 2; break; case 3: a = 5; break; default: a = 2; break; } (a) if (grade == 4 || grade == 5) { a = 1; b = 2; } else if (grade == 3) { a = 5; } else { a = 2; } (b) if( grade == 4 ) { a = 1; b = 2; } else if( grade == 3 ) { a = 5; } else { a = 2; } (c) if (grade == 4 && grade == 5) { a = 1; b = 2; } else if (grade == 3) { a = 5; } else { a = 2; } (d) if( grade != 5 ) { if( grade == 4 ) { a = 1; b = 2; } else if( grade == 3 ) { a = 5; } else { a = 2; } } ``` 17. 
Given the following StudentID class: ```java class StudentID { private String id; public StudentID( String newid ) { id = newid; } public String getID() { return id; } public boolean equals( StudentID otherid ) { return id.equals( otherid.getID() ); } } ``` What is the output of the following code: ```java StudentID s1 = new StudentID( "8675309" ); StudentID s2 = new StudentID( "8675309" ); boolean result1 = s1 == s2; boolean result2 = s1.equals( s2 ); System.out.println( result1 ); System.out.println( result2 ); ``` (a) true (b) false ***** (c) true (d) false 18. Which two of the following statements are true about constructors: I. A constructor has no return type and is therefore a void method. II. A constructor has the same name as the class. III. A class can have more than one constructor. IV. Constructors are called like any other method. (a) III and IV (b) I and II (c) II and III ***** (d) I and IV 19. The basic idea of _________ is that it allows the same program instruction to mean different things in different contexts. (a) object oriented programming (b) polymorphism ***** (c) encapsulation (d) inheritance 20. Complete the following Java statement to allow the instance of the Scanner class to read keyboard input. Scanner keyboard = new Scanner(__________); (a) System.out (b) System.in ***** (c) System.keyboard (d) System.input The version of your test is A. Please FILL IN CIRCLE (A) for the TEST FORM field on the BUBBLE SHEET directly under the DATE field and turn in your exam booklet and answer sheet to the stack labeled (A) in the front of the classroom. Thank you. Part II. Programming Questions (60 points total): 1. (15 pts) Create a class called Kitten that has three fields: String name, Person owner, and int age. Create a constructor for Kitten that takes a String name and a Person owner for the Kitten and uses them for initialization. Have the age for a Kitten start at 0; Implement accessor and mutator methods for both name and owner. 
Make the mutator for name such that whenever a name is applied to a Kitten, the actual name of the Kitten is "<Given Name> the Feline". (e.g. given "Bob", the Kitten’s name becomes "Bob the Feline") Implement only an accessor for age. Implement a method called haveBirthday that does not return anything and simply increases a Kitten’s age by one. Finally, write a method called toString that returns a string of the form: "<Kitten name> is <age> and belongs to <Owner name>" e.g. "Bob the Feline is 87 and belongs to Gregor Samsa" The definition for Person is found below. ```java public class Person { private final String name; public Person(String newName) { name = newName; } public String getName() { return name; } } ``` public class Kitten { private String name; private Person owner; private int age; public Kitten( String name, Person owner ) { setName( name ); setOwner( owner ); age = 0; } public void setName( String newName ) { name = newName + " the Feline"; } public String getName() { return name; } public void setOwner( Person newOwner ) { owner = newOwner; } public Person getOwner() { return owner; } public void haveBirthday() { ++age; } public int getAge() { return age; } public String toString() { return name+" is "+age+" and belongs to "+owner.getName(); } } 2. (15 pts) Implement a Bicycle class which has the following three methods: ```java public void increaseSpeed(); public void decreaseSpeed(); public boolean isMoving(); ``` Within the Bicycle class, you must keep track of the Bicycle’s state: moving or not moving. You must also keep track of the Bicycle’s current speed. Whenever the Bicycle has a positive current speed, the state should be moving. Whenever the Bicycle has a current speed of 0 the state must be not moving. You must use a boolean variable to maintain the Bicycle’s state. The methods increaseSpeed() and decreaseSpeed() always increment and decrement [respectively] the current speed by 1. 
If increaseSpeed() is called on a Bicycle which is not currently moving, the Bicycle should be set to moving, and the current speed should be increased by 1. If decreaseSpeed() is called on a Bicycle which is moving, the current speed should be decremented by one. If the current speed is ever decreased to 0, the Bicycle’s state should change from moving to not moving. The method isMoving() should return the status of the Bicycle. Your Bicycle class should also provide two constructors. One constructor takes no arguments and the other takes an integer representing the initial speed of the Bicycle. The default constructor should create a Bicycle which is not currently moving and has a current speed of 0. The second constructor should set the current speed to the passed initial speed ONLY if the speed is positive. It should also set the Bicycle’s state to moving. If the initial speed given is negative or 0, the current speed should be set to 0 and the Bicycle’s state should be not moving. public class Bicycle { private boolean isMoving; private int currentSpeed; public Bicycle() { isMoving = false; currentSpeed = 0; } public Bicycle( int speed ) { if( speed > 0 ) { isMoving = true; currentSpeed = speed; } else { isMoving = false; currentSpeed = 0; } } public void increaseSpeed() { isMoving = true; currentSpeed++; } public void decreaseSpeed() { if( isMoving ) { currentSpeed--; if( currentSpeed == 0 ) { isMoving = false; } } } public boolean isMoving() { return isMoving; } } 3. (15 pts) Part I: In mathematics, a polynomial equation of the second degree is commonly known as a quadratic equation. This equation can be generalized to the following: \[ ax^2 + bx + c = 0 \] We also know that there is a very simple formula to solve this equation. 
Recall this formula as the following: \[ x_1 = \frac{-b + \sqrt{b^2 - 4ac}}{2a} \quad \text{and} \quad x_2 = \frac{-b - \sqrt{b^2 - 4ac}}{2a} \] Write a method, solveQuadratic, which takes in a, b, and c and solves the appropriate quadratic equation. Your method should return a Pair object, as defined by the class below. This object simply holds two doubles, in this case, the two doubles are the solutions to the quadratic. If the quadratic equation has no real roots (i.e., the discriminant under the square root is negative), then return a Pair object where both numbers are Double.NaN (standing for Not a Number). ```java public class Pair { private double x1, x2; public Pair() { x1 = Double.NaN; x2 = Double.NaN; } public Pair(double newX1, double newX2) { x1 = newX1; x2 = newX2; } public void setPair(double newX1, double newX2) { x1 = newX1; x2 = newX2; } public double getX() { return x1; } public double getY() { return x2; } } ``` public Pair solveQuadratic(double a, double b, double c) { // write your code here double discriminant = b*b - 4*a*c; if (discriminant < 0) return new Pair(); return new Pair((-b+Math.sqrt(discriminant))/(2*a), (-b-Math.sqrt(discriminant))/(2*a)); } (15 pts) Part II: A less known equation is a unique quartic equation called the biquadratic equation. This is a fourth order equation of the form: \[ ax^4 + bx^2 + c = 0 \] To solve a biquadratic equation, we can observe that this type of equation can be made to the form of a quadratic by substituting \( z = x^2 \). This results in the following equation: \[ az^2 + bz + c = 0 \] Using the quadratic equation, we can get a pair of solutions \((z_1, z_2)\). 
To get the four solutions to the biquadratic equation, substitute back in for \( x \): \[ x_1 = \sqrt{z_1} \quad x_2 = -\sqrt{z_1} \quad x_3 = \sqrt{z_2} \quad x_4 = -\sqrt{z_2} \] Using the quadratic equation solver from PART I (you may assume it is written correctly if you are unsure of your solution), write a biquadratic equation solver method, solveBiquadratic. Your method should call the solveQuadratic method to solve the quadratic equation defined by the above substitution and return a Quad object, which simply holds four doubles. If a particular solution to the quadratic equation is not real (that is, the Pair object returned from solveQuadratic contains a Double.NaN value), then its associated pair of biquadratic solutions is also not real and should be set to Double.NaN as well. ```java public class Quad { private double x1, x2, x3, x4; public Quad() { x1 = Double.NaN; x2 = Double.NaN; x3 = Double.NaN; x4 = Double.NaN; } public Quad(double newX1, double newX2, double newX3, double newX4) { x1 = newX1; x2 = newX2; x3 = newX3; x4 = newX4; } public void setQuad(double newX1, double newX2, double newX3, double newX4) { x1 = newX1; x2 = newX2; x3 = newX3; x4 = newX4; } public double getX1() { return x1; } public double getX2() { return x2; } public double getX3() { return x3; } public double getX4() { return x4; } } ``` public Quad solveBiquadratic(double a, double b, double c) { // write your code here Pair z = solveQuadratic(a, b, c); // Math.sqrt returns Double.NaN for NaN (or negative) input, so non-real solutions propagate automatically return new Quad( Math.sqrt(z.getX()), -Math.sqrt(z.getX()), Math.sqrt(z.getY()), -Math.sqrt(z.getY()) ); }
Linear Time Algorithm for Weak Parity Games

Krishnendu Chatterjee (c.krish@eecs.berkeley.edu)
Electrical Engineering and Computer Sciences, University of California at Berkeley
Technical Report No. UCB/EECS-2006-153
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-153.html
November 19, 2006

Acknowledgement: This research was supported in part by the AFOSR MURI grant F49620-00-1-0327, and the NSF grant CCR-0225610.

Abstract. We consider games played on graphs with the winning conditions for the players specified as weak-parity conditions. In weak-parity conditions the winner of a play is decided by looking into the set of states appearing in the play, rather than the set of states appearing infinitely often in the play. A naive analysis of the classical algorithm for weak-parity games yields a quadratic time algorithm. We present a linear time algorithm for solving weak-parity games.

1 Introduction

We consider two-player games on graphs with winning objectives formalized as weak-parity objectives. In a two-player game, the set of vertices or states is partitioned into player 1 states and player 2 states. At player 1 states player 1 decides the successor, and likewise for player 2. We consider weak-parity objectives, where we have a priority function that maps every state to an integer priority. A play is an infinite sequence of states, and in a weak-parity objective the winner of a play is decided by considering the minimum priority state that appears in the play: if the minimum priority is even, then player 1 wins, and otherwise player 2 is the winner. The classical algorithm to solve weak-parity games with a naive running time analysis works in $O(d \cdot m)$ time, where $d$ is the number of priorities and $m$ is the number of edges of the game graph.
Since $d$ can be $O(n)$, in the worst case the naive analysis requires $O(n \cdot m)$ time, where $n$ is the number of states. We present an improved analysis of the algorithm and show that the algorithm works in $O(m)$ time.

2 Definitions

We consider turn-based deterministic games played by two players with weak-parity objectives; we call them weak-parity games. We define game graphs, plays, strategies, objectives, and the notion of winning below.

**Game graphs.** A game graph $G = ((S, E), (S_1, S_2))$ consists of a directed graph $(S, E)$ with a finite state space $S$ and a set $E$ of edges, and a partition $(S_1, S_2)$ of the state space $S$ into two sets. The states in $S_1$ are player 1 states, and the states in $S_2$ are player 2 states. For a state $s \in S$, we write $E(s) = \{ t \in S \mid (s, t) \in E \}$ for the set of successor states of $s$. We assume that every state has at least one out-going edge, i.e., $E(s)$ is non-empty for all states $s \in S$.

**Plays.** A game is played by two players: player 1 and player 2, who form an infinite path in the game graph by moving a token along edges. They start by placing the token on an initial state, and then they take moves indefinitely in the following way. If the token is on a state in $S_1$, then player 1 moves the token along one of the edges going out of the state. If the token is on a state in $S_2$, then player 2 does likewise. The result is an infinite path in the game graph; we refer to such infinite paths as plays. Formally, a play is an infinite sequence $\langle s_0, s_1, s_2, \ldots \rangle$ of states such that $(s_k, s_{k+1}) \in E$ for all $k \geq 0$. We write $\Omega$ for the set of all plays.
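As a concrete illustration (the representation is ours, not the paper's), a game graph can be stored as successor lists plus the player partition; the constructor below enforces the paper's assumption that $E(s)$ is non-empty for every state:

```java
import java.util.List;

public class GameGraph {
    final List<List<Integer>> succ; // succ.get(s) = E(s), the successors of s
    final boolean[] isPlayer1;      // isPlayer1[s] <=> s is in S_1

    GameGraph(List<List<Integer>> succ, boolean[] isPlayer1) {
        // The paper assumes every state has at least one out-going edge.
        for (List<Integer> e : succ) {
            if (e.isEmpty())
                throw new IllegalArgumentException("state with no out-going edge");
        }
        this.succ = succ;
        this.isPlayer1 = isPlayer1;
    }

    List<Integer> successors(int s) { return succ.get(s); }
}
```

For example, `new GameGraph(List.of(List.of(1), List.of(0, 1)), new boolean[]{true, false})` builds a two-state graph with a player 1 state 0 and a player 2 state 1.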
**Strategies.** A strategy $\sigma$ for player 1 is a function $\sigma : S^* \cdot S_1 \rightarrow S$ that, given a finite sequence of states (representing the history of the play so far) which ends in a player 1 state, chooses the next state. The strategy must choose only available successors, i.e., for all $w \in S^*$ and $s \in S_1$ we have $\sigma(w \cdot s) \in E(s)$. The strategies for player 2 are defined analogously. We write $\Sigma$ and $\Pi$ for the sets of all strategies for player 1 and player 2, respectively. An important special class of strategies are memoryless strategies. The memoryless strategies do not depend on the history of a play, but only on the current state. Each memoryless strategy for player 1 can be specified as a function $\sigma : S_1 \rightarrow S$. Given a starting state $s \in S$, a strategy $\sigma \in \Sigma$ for player 1, and a strategy $\pi \in \Pi$ for player 2, there is a unique play, denoted $\omega(s, \sigma, \pi) = \langle s_0, s_1, s_2, \ldots \rangle$, which is defined as follows: $s_0 = s$ and for all $k \geq 0$, if $s_k \in S_1$, then $\sigma(s_0, s_1, \ldots, s_k) = s_{k+1}$, and if $s_k \in S_2$, then $\pi(s_0, s_1, \ldots, s_k) = s_{k+1}$.

**Weak-parity objectives.** We consider game graphs with weak-parity objectives for player 1 and the complementary weak-parity objectives for player 2. For a play $\omega = \langle s_0, s_1, s_2, \ldots \rangle \in \Omega$, we define $\text{Occur}(\omega) = \{s \in S \mid s_k = s \text{ for some } k \geq 0 \}$ to be the set of states that occur in $\omega$. We also define reachability and safety objectives, as they will be useful in the analysis of the algorithms.

1. **Reachability and safety objectives.** Given a set $T \subseteq S$ of states, the reachability objective $\text{Reach}(T)$ requires that some state in $T$ be visited, and dually, the safety objective $\text{Safe}(F)$ requires that only states in $F$ be visited.
Formally, the sets of winning plays are $\text{Reach}(T) = \{\langle s_0, s_1, s_2, \ldots \rangle \in \Omega \mid \exists k \geq 0. s_k \in T \}$ and $\text{Safe}(F) = \{\langle s_0, s_1, s_2, \ldots \rangle \in \Omega \mid \forall k \geq 0. s_k \in F \}$. The reachability and safety objectives are dual in the sense that $\text{Reach}(T) = \Omega \setminus \text{Safe}(S \setminus T)$. 2. **Weak-parity objectives.** For $d \in \mathbb{N}$, we let $[d] = \{0, 1, \ldots, d-1\}$ and $[d]_+ = \{1, 2, \ldots, d\}$. Let $p : S \rightarrow [d]$ be a function that assigns a priority $p(s)$ to every state $s \in S$. The weak-parity objective requires that the minimal priority occurring is even. Formally, the set of winning plays is $\text{WeakParityEven}(p) = \{\omega \in \Omega \mid \min(p(\text{Occur}(\omega))) \text{ is even} \}$. The complementary objective to $\text{WeakParityEven}(p)$ is $\text{WeakParityOdd}(p)$ defined as the set $\text{WeakParityOdd}(p) = \{\omega \in \Omega \mid \min(p(\text{Occur}(\omega))) \text{ is odd} \}$ of winning plays. **Winning strategies and sets.** Given a game graph $G$ and an objective $\Phi \subseteq \Omega$ for player 1, a strategy $\sigma \in \Sigma$ is a winning strategy for player 1 from a state $s$ if for all player 2 strategies $\pi \in \Pi$ the play $\omega(s, \sigma, \pi)$ is winning, i.e., $\omega(s, \sigma, \pi) \in \Phi$. The winning strategies for player 2 are defined analogously. A state $s \in S$ is winning for player 1 with respect to the objective $\Phi$ if player 1 has a winning strategy from $s$. Formally, the set of winning states for player 1 with respect to the objective $\Phi$ in a game graph $G$ is $W_1^G(\Phi) = \{s \in S \mid \exists \sigma \in \Sigma. \forall \pi \in \Pi. \omega(s, \sigma, \pi) \in \Phi \}$. Analogously, the set of winning states for player 2 with respect to an objective $\Psi \subseteq \Omega$ is $W_2^G(\Psi) = \{s \in S \mid \exists \pi \in \Pi. \forall \sigma \in \Sigma. 
\omega(s, \sigma, \pi) \in \Psi \}$. If the game graph is clear from the context we drop the game graph from the superscript. We say that there exists a memoryless winning strategy for player 1 with respect to the objective $\Phi$ if there exists such a strategy from all states in $W_1(\Phi)$; and similarly for player 2.

**Theorem 1.** For all game graphs $G = ((S, E), (S_1, S_2))$, for all weak-parity objectives $\Phi = \text{WeakParityEven}(p)$ for player 1, and the complementary objective $\Psi = \Omega \setminus \Phi$ for player 2, the following assertions hold. 1. We have $W_1(\Phi) = S \setminus W_2(\Psi)$. 2. There exist memoryless winning strategies for both players.

**Closed sets and attractors.** Some notions that will play key roles in the analysis of the algorithms are the notions of closed sets and attractors. We define them below.

**Closed sets.** A set $U \subseteq S$ of states is a closed set for player 1 if the following two conditions hold: (a) for all states $u \in (U \cap S_1)$, we have $E(u) \subseteq U$, i.e., all successors of player 1 states in $U$ are again in $U$; and (b) for all $u \in (U \cap S_2)$, we have $E(u) \cap U \neq \emptyset$, i.e., every player 2 state in $U$ has a successor in $U$. A player 1 closed set is also called a trap for player 1. The closed sets for player 2 are defined analogously. Every closed set $U$ for player $\ell$, for $\ell \in \{1, 2\}$, induces a sub-game graph, denoted $G \mid U$.

**Proposition 1.** Consider a game graph $G$, and a closed set $U$ for player 2. For every objective $\Phi$ for player 1, we have $W_1^{G \mid U}(\Phi) \subseteq W_1^{G}(\Phi)$.

**Attractors.** Given a game graph $G$, a set $U \subseteq S$ of states, and a player $\ell \in \{1, 2\}$, the set $\text{Attr}_\ell(U, G)$ contains the states from which player $\ell$ has a strategy to reach a state in $U$ against all strategies of the other player; that is, $\text{Attr}_\ell(U, G) = W_\ell(\text{Reach}(U))$.
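To make the weak-parity winning condition concrete, the following sketch (names are ours) evaluates $\min(p(\text{Occur}(\omega)))$ given the priority function as an array and the set $\text{Occur}(\omega)$ as a list of state indices, and checks its parity:

```java
public class WeakParityCheck {
    // p[s] = priority of state s; occur = the states in Occur(omega),
    // i.e., every state that appears in the play at least once.
    public static int minOccurringPriority(int[] p, int[] occur) {
        int min = Integer.MAX_VALUE;
        for (int s : occur) min = Math.min(min, p[s]);
        return min;
    }

    // Player 1 wins the weak-parity objective iff the minimum
    // occurring priority is even.
    public static boolean player1Wins(int[] p, int[] occur) {
        return minOccurringPriority(p, occur) % 2 == 0;
    }
}
```

For instance, with priorities p = {2, 1, 0}, a play that visits only states 0 and 1 has minimum occurring priority 1 (odd, player 2 wins), while a play that also visits state 2 has minimum priority 0 (even, player 1 wins).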
The set $\text{Attr}_1(U, G)$ can be computed inductively as follows: let $R_0 = U$; let $$R_{i+1} = R_i \cup \{s \in S_1 \mid E(s) \cap R_i \neq \emptyset\} \cup \{s \in S_2 \mid E(s) \subseteq R_i\} \quad \text{for all } i \geq 0;$$ then $\text{Attr}_1(U, G) = \bigcup_{i \geq 0} R_i$. The inductive computation of $\text{Attr}_2(U, G)$ is analogous. For all states $s \in \text{Attr}_1(U, G)$, define $\text{rank}(s) = i$ if $s \in R_i \setminus R_{i-1}$, that is, $\text{rank}(s)$ denotes the least $i \geq 0$ such that $s$ is included in $R_i$. Define a memoryless attractor strategy $\sigma \in \Sigma$ for player 1 as follows: for each state $s \in (\text{Attr}_1(U, G) \cap S_1)$ with $\text{rank}(s) = i$, choose a successor $\sigma(s) \in (R_{i-1} \cap E(s))$ (such a successor exists by the inductive definition). It follows that for all states $s \in \text{Attr}_1(U, G)$ and all strategies $\pi \in \Pi$ for player 2, the play $\omega(s, \sigma, \pi)$ reaches $U$ in at most $|\text{Attr}_1(U, G)|$ transitions.

**Proposition 2.** For all game graphs $G$, all players $\ell \in \{1, 2\}$, and all sets $U \subseteq S$ of states, the set $S \setminus \text{Attr}_\ell(U, G)$ is a closed set for player $\ell$.

**Notation.** For a game graph $G = ((S, E), (S_1, S_2))$, a set $U \subseteq S$ and $\ell \in \{1, 2\}$, we write $G \setminus \text{Attr}_\ell(U, G)$ to denote the game graph $G \mid (S \setminus \text{Attr}_\ell(U, G))$.

**Computation of attractors.** Given a game graph $G = (S, E)$ and a set $T \subseteq S$ of states, let us denote by $A = \text{Attr}_\ell(T, G)$ the attractor for a player $\ell \in \{1, 2\}$ to the set $T$. A naive analysis of the computation of the attractor shows that the computation can be done in $O(m)$ time, where $m$ is the number of edges. An improved analysis can be done as follows. For every state $s \in S \setminus T$ we keep a counter initialized to 0.
Whenever a state $t$ is included in the set $A$, for all states $s$ such that $(s, t) \in E$ we increase the counter of $s$ by 1. For a state $s \in S_\ell$, if the counter is positive, then we include it in $A$, and for a state $s \in S \setminus S_\ell$, if the counter equals the number of out-going edges $|E(s)|$, then we include it in $A$. Let us consider the following set of edges: $E_A = E \cap ((S \setminus T) \times A)$. The work of the attractor computation is only on edges with the start state in $(S \setminus T)$ and the end state in $A$. That is, the total work of the attractor computation on edges is $O(m_A)$, where $m_A = |E_A|$. Also, the counter initialization phase does not require initializing counters for all states, but only initializes a counter for a state $s$ when some state $t \in E(s)$ gets included in $A$ for the first time. This gives us the following lemma.

**Lemma 1.** Given a game graph $G = (S, E)$ and a set $T \subseteq S$ of states, let us denote by $A = \text{Attr}_\ell(T, G)$ the attractor for a player $\ell \in \{1, 2\}$ to the set $T$. The set $A$ can be computed in time $O(|E_A|)$, where $E_A = E \cap ((S \setminus T) \times A)$.

3 The Classical Algorithm

We first present the classical algorithm for weak-parity games and then give an improved analysis to show that the algorithm has linear-time complexity. We begin with an informal description of the algorithm; a formal description is given as Algorithm 1.

**Informal description of the classical algorithm.** We will consider a priority function $p : S \rightarrow [d]$. The objective $\Phi$ for player 1 is the weak-parity objective $\text{WeakParityEven}(p)$ and the objective for player 2 is the complementary objective $\Psi = \text{WeakParityOdd}(p)$. The algorithm proceeds by computing attractors, removing the attractors from the game graph, and proceeding on the subgame graph.
At iteration $i$, we denote the game graph by $G^i$, the state space by $S^i$, and the set of edges of $G^i$ by $E^i$. At iteration $i$, the attractor to the set of states of priority $i$ in $G^i$ (i.e., the attractor to $p^{-1}(i) \cap S^i$) is computed. If $i$ is even, the set is included in the winning set for player 1, and otherwise it is included in the winning set for player 2; the set is then removed from the game graph for the next iterations.

**Correctness.** The following theorem states the correctness of Algorithm 1.

**Theorem 2 (Correctness).** Given a game graph $G = ((S,E),(S_1,S_2))$ and a priority function $p : S \rightarrow [d]$, we have $$W_1 = W_1(\text{WeakParityEven}(p)); \quad \quad S \setminus W_1 = W_2(\text{WeakParityOdd}(p)),$$ where $(W_1, W_2)$ is the output of Algorithm 1.

**Proof.** Observe that in the game graph $G^i$ we have $S^i \subseteq \bigcup_{j \geq i} p^{-1}(j)$, i.e., the priorities in $G^i$ are at least $i$. Let us denote by $W_1^i$ and $W_2^i$ the sets $W_1$ and $W_2$ at the end of iteration $i - 1$ of Algorithm 1. Then for all $s \in S^i \cap S_1$ we have $E(s) \subseteq S^i \cup W_2^i$, and for all $s \in S^i \cap S_2$ we have $E(s) \subseteq S^i \cup W_1^i$. We prove by induction that the following two conditions hold: $$W_1^i \subseteq W_1^G(\text{WeakParityEven}(p) \cap \{\omega \mid \min(p(\text{Occur}(\omega))) < i\});$$ $$W_2^i \subseteq W_2^G(\text{WeakParityOdd}(p) \cap \{\omega \mid \min(p(\text{Occur}(\omega))) < i\}).$$ The base case is trivial and we now prove the inductive case.
For $i$ even, for a state $s \in A_i$, the attractor strategy $\sigma$ for player 1 in $G^i$ to reach $p^{-1}(i) \cap S^i$, followed by choosing edges in $S^i$, ensures that for all strategies $\pi$ for player 2 we have $$\omega(s, \sigma, \pi) \in (\text{WeakParityEven}(p) \cap \{\omega \mid \min(p(\text{Occur}(\omega))) \leq i\}) \cup \text{Reach}(W_1^i).$$ By the inductive hypothesis it follows that \[ A_i \subseteq W_1^G (\text{WeakParityEven}(p) \cap \{ \omega \mid \min(p(\text{Occur}(\omega))) < i + 1 \}). \] Similarly, it follows for \( i \) odd that \( A_i \subseteq W_2^G (\text{WeakParityOdd}(p) \cap \{ \omega \mid \min(p(\text{Occur}(\omega))) < i + 1 \}) \). The desired result follows. \( \blacksquare \)

**Running time analysis.** In the running time analysis we will denote by \( n \) the number of states, and by \( m \) the number of edges in the game graph. The naive analysis of the running time of Algorithm 1 yields an \( O(d \cdot m) \) bound. This is because the loop of step 2 runs \( d \) times, and each iteration can be computed in \( O(m) \) time. Since \( d \) can be \( O(n) \), the worst case bound of the naive analysis is \( O(n \cdot m) \), which is quadratic. We will now present a linear-time analysis of the algorithm. The two key issues in the running time analysis of the algorithm are the computation of the attractors (step 2.1 of the algorithm) and obtaining the target sets.

**The attractor computations.** We first argue that the attractor computations over all iterations can be done in \( O(m) \) time. To prove this claim we observe that the sets \( A_i \) computed at step 2.1 of the algorithm satisfy \( A_i \cap A_j = \emptyset \) for \( i \neq j \) (since the set \( A_i \), once computed, is removed from the game graph for further iterations). Let us consider the set \( E_{A_i} = E^i \cap (S^i \times A_i) \) of edges. Then for \( i \neq j \) we have \( E_{A_i} \cap E_{A_j} = \emptyset \).
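The counter-based attractor computation behind Lemma 1 can be sketched as follows (the representation is ours: predecessor lists, precomputed out-degrees, and a worklist; shown for player 1, so a player 1 state joins $A$ as soon as one successor is in $A$, and a player 2 state joins when all of its successors are):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class AttractorCounters {
    // pred.get(t) lists the states s with an edge (s, t); outDeg[s] = |E(s)|.
    // Returns inA with inA[s] true iff s is in Attr_1(target, G).
    // Each edge into A is touched once, giving the O(|E_A|) bound of Lemma 1.
    public static boolean[] attractor1(List<List<Integer>> pred, int[] outDeg,
                                       boolean[] isP1, int[] target) {
        int n = outDeg.length;
        boolean[] inA = new boolean[n];
        int[] cnt = new int[n];                 // counter of successors already in A
        Deque<Integer> queue = new ArrayDeque<>();
        for (int t : target) { inA[t] = true; queue.add(t); }
        while (!queue.isEmpty()) {
            int t = queue.poll();
            for (int s : pred.get(t)) {
                if (inA[s]) continue;
                cnt[s]++;
                // player 1 state: one attracted successor suffices;
                // player 2 state: all |E(s)| successors must be attracted.
                if (isP1[s] || cnt[s] == outDeg[s]) {
                    inA[s] = true;
                    queue.add(s);
                }
            }
        }
        return inA;
    }
}
```

For example, in a three-state graph with edges 0→1, 1→2, 2→2 where only state 0 belongs to player 1, every state is attracted to {2}; if instead state 1 is a player 2 state with the extra escape edge 1→0, only state 2 itself is in the attractor.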
By Lemma 1 it follows that the \( i \)-th iteration of the attractor can be computed in \( O(|E_{A_i}|) \) time. Hence the total time for attractor computations over all iterations is \[ \sum_{i=0}^{d-1} O(|E_{A_i}|) = O(|E|) = O(m), \] where the first equality follows since the edges \( E_{A_i} \) and \( E_{A_j} \) are disjoint for \( i \neq j \).

**Obtaining the target sets.** We will now argue that the target sets \( p^{-1}(i) \cap S^i \) can be computed in \( O(n) \) time over all iterations. Without loss of generality we assume that the states in \( S \) are numbered \( 0, 1, \ldots, n - 1 \) and the priority function \( p : S \rightarrow [d] \) is given as an array \( P[0..n-1] \) of integers such that \( P[i] = p(i) \). The procedure for obtaining the target sets involves several steps. We present the steps below.

1. Renaming phase. First, we construct a renaming of the states such that the states in \( p^{-1}(i) \) are numbered lower than the states in \( p^{-1}(j) \) for \( i < j \). Here is an \( O(n) \) time procedure for the renaming. (a) Consider an array of counters \( \text{ct}[0..d-1] \), all initialized to 0, and arrays \( A[0], A[1], \ldots, A[d-1] \) (each \( A[i] \) is an array and will contain the states of priority \( i \)). (b) The first step is as follows.

$$\text{for } (i := 0; i < n; i := i + 1)$$
$$\{$$
$$\quad k := P[i]; \; j := \text{ct}[k];$$
$$\quad A[k][j] := i;$$
$$\quad \text{ct}[k] := \text{ct}[k] + 1;$$
$$\}$$

This step assigns to the array \( A[i] \) the set of states with priority \( i \) (in the same relative order) and also works in \( O(n) \) time. The counter \( \text{ct}[i] \) is the number of states with priority \( i \). (c) The renaming step. We now construct arrays $B$ and $C$ in $O(n)$ time to store the renaming and the inverse renaming. The procedure is as follows.
Using a running offset \( \text{off} \), initialized to 0, the loop is:

$$\text{for } (i := 0; i < d; i := i + 1)$$
$$\{$$
$$\quad \text{for } (j := 0; j < ct[i]; j := j + 1)$$
$$\quad \{$$
$$\qquad B[\text{off} + j] := A[i][j];$$
$$\qquad C[A[i][j]] := \text{off} + j;$$
$$\quad \}$$
$$\quad \text{off} := \text{off} + ct[i];$$
$$\}$$

This creates the renaming such that $B[0..ct[0] - 1]$ holds the states of priority 0, followed by the states of priority 1, and so on. The array $C$ stores the inverse of the renaming, i.e., if $B[i] = j$, then $C[j] = i$. Moreover, though it is a nested loop, since $\sum_{i=0}^{d-1} ct[i] = n$ this procedure also works in $O(n)$ time.

2. In the renaming phase we have obtained in $O(n)$ time a renaming in the array $B$ and the inverse renaming in the array $C$. Since the renaming and its inverse, for a given state, can be obtained in constant time\(^1\), we can move back and forth between the renamings without increasing the time complexity other than in constants. We now obtain the sets of states used as targets in the attractor computation of step 2.1 of Algorithm 1 in total $O(n)$ time across the whole computation. First, we create a copy of $B$ as an array $D$, and keep a global counter called $g$, initialized to 0. We modify the attractor computation in step 2.1 such that when a state $j$ is removed from the game graph, the entry of the array $D$ that represents state $j$ is set to $-1$; this is simply done as $D[C[j]] = -1$. This is constant work per state, and hence the extra work in the attractor computation of step 2.1 across the whole computation is $O(n)$. The computation to obtain the target set for priority $i$ (i.e., $p^{-1}(i) \cap S^i$), denoted as procedure $\text{ObtainTargets}$, is described below. The procedure $\text{ObtainTargets}$ is called by Algorithm 1 with parameter $i$ in step 2.1 to obtain $p^{-1}(i) \cap S^i$. (a) We have the global counter $g := 0$ (initialized to 0), and the value of the global counter persists across calls to the procedure $\text{ObtainTargets}$.
We present the pseudocode for the procedure $\text{ObtainTargets}$, which returns in an array $T$ the set $p^{-1}(i) \cap S^i$ of states. The procedure assumes that when $\text{ObtainTargets}(i)$ is invoked we have $g = 0$ if $i = 0$, and for $i > 0$ we have $g = \sum_{j=0}^{i-1} ct[j]$. Also, for all $j \in S \setminus S^i$ we have $D[C[j]] = -1$ (the entries for the states in $S \setminus S^i$ are set to $-1$ in the attractor computation). The procedure invoked with $i$ returns $T$ as an array with the states in $p^{-1}(i) \cap S^i$, and sets $g = \sum_{j=0}^{i} ct[j]$.

$$\text{ObtainTargets}(i)$$
$$k := 0;$$
$$\text{for } (j := 0; j < ct[i]; j := j + 1)$$
$$\{$$
$$\quad \text{if } (D[j + g] \neq -1)$$
$$\quad \{$$
$$\qquad T[k] := D[j + g]; \; k := k + 1;$$
$$\quad \}$$
$$\}$$
$$g := g + ct[i];$$
$$\text{return } T.$$

\(^1\) We assume the random access model, in which an element of the arrays $B$ and $C$ can be accessed in constant time.

The work for a given $i$ is $O(ct[i])$, and since $\sum_{i=0}^{d-1} ct[i] = n$, the total work to obtain the target sets over all iterations is $O(n)$. This completes the $O(n + m) = O(m)$ running time analysis of Algorithm 1. This yields the following result.

**Theorem 3 (Running time).** Given a game graph $G = ((S, E), (S_1, S_2))$ and a priority function $p : S \rightarrow [d]$, the sets $W_1(\text{WeakParityEven}(p))$ and $W_2(\text{WeakParityOdd}(p))$ can be computed in $O(m)$ time, where $m = |E|$.

**Acknowledgments.** I thank Florian Horn for useful comments.
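Putting the pieces together, the overall structure of Algorithm 1 can be sketched as follows. This is an illustrative version only: it uses a simple round-based attractor rather than the counter-based one, so it runs in $O(d \cdot m)$ rather than the $O(m)$ bound proved above, and all names and the representation are ours:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class WeakParityGame {
    // For i = 0, 1, ..., d-1: compute, in the remaining subgame, the
    // attractor of the remaining priority-i states for the player for whom
    // i is good (player 1 if i is even, player 2 if i is odd), assign that
    // set to the player's winning set, and remove it from the game.
    // Returns w with w[s] = 1 or 2, the winner from state s.
    // Assumes every remaining state keeps a remaining successor, which
    // holds because the removed sets are attractors (Proposition 2).
    public static int[] solve(List<List<Integer>> succ, boolean[] isP1,
                              int[] p, int d) {
        int n = p.length;
        int[] w = new int[n];                // 0 = not yet decided
        boolean[] removed = new boolean[n];
        for (int i = 0; i < d; i++) {
            int player = (i % 2 == 0) ? 1 : 2;
            Set<Integer> a = new HashSet<>();
            for (int s = 0; s < n; s++)
                if (!removed[s] && p[s] == i) a.add(s);   // p^{-1}(i) /\ S^i
            boolean changed = true;                        // attractor fixpoint
            while (changed) {
                changed = false;
                for (int s = 0; s < n; s++) {
                    if (removed[s] || a.contains(s)) continue;
                    boolean some = false, all = true;
                    for (int t : succ.get(s)) {
                        if (removed[t]) continue;          // edge left the subgame
                        if (a.contains(t)) some = true; else all = false;
                    }
                    boolean ours = (player == 1) == isP1[s];
                    if (ours ? some : all) { a.add(s); changed = true; }
                }
            }
            for (int s : a) { w[s] = player; removed[s] = true; }
        }
        return w;
    }
}
```

As a check: with states 0 (player 1, priority 1) and 1 (player 2, priority 2), and the only edges 0→1 and 1→1, the minimum priority occurring from state 0 is 1 (player 2 wins) and from state 1 it is 2 (player 1 wins).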
Improving the software architecture design process by reusing technology-specific experience

Glib Kutepov
Information Systems Development
Fraunhofer IESE
Fraunhofer-Platz 1
67663 Kaiserslautern
glib.kutepov@iese.fraunhofer.de

Abstract: Experience with particular technologies such as SOA, Cloud Computing, or Mobile Apps plays a crucial role when designing the architecture of a software system. Being aware of the challenges usually encountered when using a technology and knowing in advance how to resolve these challenges can dramatically increase the quality of the software system architecture and decrease the design effort. However, it is not always a straightforward process to collect the necessary architectural experience, persist it on the organizational level, and reuse it in the right way, especially if the technology is new. This paper describes how architecture design processes can be improved by supplementing them with architectural experience related to a particular technology. We explain how architectural experience can be described using architectural scenarios and solution patterns, and how it can be persisted in the architecture design process. The efficiency of the approach is validated with the help of a case study.

1. Introduction

Consider a modern software development organization that develops its products with a strong focus on the architecture. During the software architecture design process, the architect identifies key challenges that influence the current system architecture and finds appropriate solutions. This task is carried out successfully because the architect bases his or her system on familiar, wide-spread IT technologies whose caveats are well known to him or her.
Fortunately, the world of technology does not stand still: Every day, some new technology appears on the IT scene, which opens up new opportunities: maybe an alternative or an improvement to an existing technology, or a completely new technological paradigm, such as SOA, Cloud Computing, or Mobile Applications. The organization may have to incorporate the new technology in its architectural practices in order to stay competitive. Initially, the identification of challenges and solutions will be less effective than before due to a lack of knowledge on the architect’s part regarding the new technology. This may increase the time needed for architecture design and decrease quality. However, with time the process will become more effective and the quality of the product will improve. This will be mostly due to the tacit experience of the software architect. Having one person as the only source of this specific knowledge may jeopardize all the benefits of adopting the new technology. Thus, a way to systematically collect and persist the architect’s experience in an organization has to be found. Hofmeister et al. [HN99] suggest performing similar activities before designing every architectural view and refer to these activities as “Global Analysis”. The core of “Global Analysis” is the identification of factor-strategy pairs, which are then applied in the actual view design phase. “Factors” represent the challenges or problems that may influence architecture design and “strategies” represent solutions to them. We see such pairs as an effective tool for describing and persisting technology-specific experience. In this paper, we propose an approach for systematically collecting, describing, and reusing such pairs in the context of the architecture design process. The pairs are collected with the help of project post-mortems, while the factors are described in the form of architectural scenarios and strategies with solution patterns. 
Furthermore, several scenarios are provided for using such pairs in the architecture design phase, thus operationalizing the experience. The paper is structured as follows: After an introductory part, the pros and cons of well-known techniques for persisting architectural experience are described in section 2. In section 3, the details of the approach are given. Our explanation is based on Fraunhofer ACES - the architecture design process [KK11] used for software architecture design at Fraunhofer IESE. Section 4 shows how we instantiated our idea for the actively evolving technology of mobile “apps” and particularly its application in the business domain. The “Initial Validation” chapter features a case study that was performed in order to take first steps towards the validation of the approach. The section “Conclusion” concludes the paper. 2. Related Work The aim of this paper is to show the reader how new technology can be efficiently adopted in the architectural practices of software engineering companies. The main challenge in adopting a new technology is the lack of structured organization-wide architectural experience that can be reused in an operational manner. Thus, our goal is to find a way to collect and persist this experience inside the architecture design process. Well-known processes such as ADD [BK01] or BAPO/CAFCR [Sm03] provide solid guidance for architecture design but, unfortunately, do not offer any facilities for collecting and persisting architectural experience for further reuse. This means that such facilities have to be either invented or created by combining prior works. We examined related work on existing solutions to the following challenges: ways to collect architectural experience and ways to persist it within the architecture design process. One of the prominent techniques for collecting experience is “Project Postmortems” [Ti90]. How this technique was used for our purpose will be explained in section 3.2. 
A popular approach for persisting architectural experience is the use of domain-specific software architectures [Tr95]. Researchers have developed a body of methodologies for such architectures, ranging from guidelines and recommendations for creating domain-specific architectures [AA07] to proposals of ready-to-use reference architectures [ZL00], [DN08]. Unfortunately, the applicability of these approaches is limited to particular well-scoped business domains, which is not our case: Our target is a technology that can have multiple application areas; it would therefore be too effort-consuming and inflexible to model it with a fixed set of software components.

An alternative approach for persisting technology-specific experience is the use of pattern languages, such as [BM00], [GH94], or the factor-strategy pairs of [HN99]. Although pattern languages are a very effective approach, we feel that they lack precision in describing architectural problems: In [HN99], the description of the factor (problem) is rather unstructured, and in the “Gang of Four” work [GH94], the problems are described in terms of object-oriented programming rather than architecture. In this paper, these drawbacks are mitigated with the help of architectural scenarios [KB94], [DF99].

It is always a challenge to find the right level of detail for describing the solution part of a pattern. For example, the level of detail of the [GH94] patterns differs from ours: Unlike the implementation level used by [GH94], we describe patterns on the architectural level, which is more abstract and can lead to multiple implementations; it is therefore a complex task to describe a pattern in such a way that it gives the architect solid guidance without prescribing any particular implementation. We therefore propose our own generic template for solution descriptions. The next section describes how these approaches were combined in order to mitigate the drawbacks of each one and develop the needed technique.

3.
Approach

This section contains a step-by-step description of our approach for persisting technology-specific architectural experience in the architecture design process.

3.1 Baseline Architecture Method

As a basis for the technology-specific architecture design process, we take Fraunhofer ACES [KK11], the architecture design process actively used by Fraunhofer IESE. The input to the process is a set of system requirements obtained in previous phases of the software system engineering process, and the output is, among others, a set of documented architectural views of the system. ACES is a comprehensive process that guides architects through all their activities, from the identification of stakeholders and the understanding of architectural drivers via architecture realization all the way to the documentation and validation of the resulting system architecture. The approach consists of two parts: core competence, the main part, which includes domain-independent sub-processes for architecture design, and domain competencies, an optional part, which contains domain-specific static experience artifacts related to architecture design. The structure of Fraunhofer ACES is shown in Figure 1.

The “domain competencies” (in our case rather “technology competencies”) part of Fraunhofer ACES foresees three crucial experience collection areas: challenges, solutions, and technologies, which represent a concept similar to the factor-strategy pairs of the “Global Analysis” of [HN99]. Challenges correspond to factors, and solutions correspond to strategies. The technologies area is used for persisting information about COTS solutions that are relevant for the considered domain; solutions can reference technologies in their descriptions. This paper focuses on challenges and solutions in architecture design; the technology part of ACES is therefore not addressed here.
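As a purely illustrative sketch (the class and instance names below are our own, not part of ACES), the challenges, solutions, and technologies experience areas, together with their traceability links, could be modeled as simple records:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Technology:
    """A COTS solution relevant for the considered domain."""
    name: str
    description: str = ""

@dataclass
class Challenge:
    """A 'factor': a recurring problem influencing architecture design."""
    name: str
    scenario: str  # described with the architectural scenario template (section 3.3)

@dataclass
class Solution:
    """A 'strategy': a solution pattern; may reference technologies."""
    name: str
    description: str = ""
    technologies: List[Technology] = field(default_factory=list)

# Hypothetical entries; the challenge name is taken from the mobile example
# later in the paper, the solution and technology names are invented here.
challenge = Challenge("Connection Loss Tolerance",
                      "Application must operate when there is no internet connection")
solution = Solution("Local Caching", technologies=[Technology("SQLite")])

# Traceability matrix: which patterns resolve which scenarios (section 3.2, step 4).
traceability = {challenge.name: [solution.name]}
```

The traceability dictionary mirrors the matrix described in section 3.2: updating either side of a challenge-solution pair requires updating this mapping as well.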
ACES provides experience areas, but gives no concrete guidelines on how to collect this experience or how to represent it. In order to reuse architectural experience during the architecture design process, we examined two sub-processes of the Fraunhofer ACES core competencies: ASR [KK11] (Architecturally Significant Requirements), the process for analyzing stakeholders and eliciting requirements specifically related to the architecture, and DMM [KK11] (Design, Modeling, Migration), the process for designing the actual system architecture based on the elicited requirements. In sections 3.2–3.5, we describe how one can fill the “domain competencies” part of ACES with the necessary experience and then reuse it in the ASR and DMM processes.

3.2 Eliciting Challenges and Solutions

The idea of collecting experience by identifying the challenges of a particular area is not new and has already been proposed by [DF99]. However, [DF99] did not specify where to get these challenges from or how to describe them. Knowledge about the challenges of a particular technology comes from experience; therefore, an experience source is required. We consider the best experience source to be pilot projects implemented with the technology in question. Therefore, several initial projects have to be performed during which the experience base is filled. After each project, a project postmortem [DD05], [Ti90] has to be performed in order to elicit the tacit architectural experience residing in the mind of the architect or in the artifacts of the project, such as documentation, code, etc. Postmortems are techniques for systematically examining a completed project in order to elicit pitfalls to avoid in the future and best practices to be reused. We use postmortems for eliciting challenges that recur when applying a specific technology, and we perform them in the form of interviews with architects. These interviews consist of the following steps: 1.
**Find out challenging architectural scenarios.** Our experience in designing software architectures shows that most challenges are related to the quality (non-functional) requirements of the architecture. In Fraunhofer ACES [KK11], quality requirements are described in the form of architectural scenarios [DF99]. Thus, all that architects need to do during the post-mortem meeting is to point out architectural scenarios that, in their view, do not merely represent a unique challenge met in one project but reflect a crosscutting challenge of the technology at hand. More details on describing challenges with architectural scenarios can be found in section 3.3.

2. **Check for duplicates.** Check whether the architectural scenario already exists in the experience base. If it does not, add it. If it does, check whether the related solution was refined or improved in the current project and, if so, supplement the existing solution with the new details.

3. **Elicit and describe the solution.** An architect needs to elicit the solution for the identified challenge from the project documentation and describe it according to the solution pattern template given in section 3.4. A solution described with the solution pattern template is then added to the experience base.

4. **Retain traceability.** We trace the relationship between architectural scenarios and solution patterns with the help of a traceability matrix; this matrix therefore has to be updated every time a change is made to the experience base.

The artifacts elicited in this phase are: a set of architectural challenges described using the architectural scenario template presented in section 3.3, a set of solution patterns described using the template presented in section 3.4, and a traceability matrix, which establishes the relationship between the two.

### 3.3 Describing the Challenges with Architectural Scenarios

Quality requirements for software architectures are described in ACES with architectural scenarios [KB94].
According to [RW05], “An architectural scenario is a crisp, concise description of a situation that the system is likely to face, along with a definition of the response required of the system”. Initially, architectural scenarios were used as a tool for software architecture evaluation [KB94], but the approach later also turned out to be suitable for describing quality requirements for software architectures [DF99]. The template shown in Figure 2 can be used for describing architectural scenarios. To illustrate this, an example of an architectural scenario is given in section 4.

<table>
<thead>
<tr> <th>Scenario</th> <th>Name of scenario</th> </tr>
</thead>
<tbody>
<tr> <td>Quality</td> <td>Related quality attribute</td> </tr>
<tr> <td>Environment</td> <td>Context applying to this scenario</td> </tr>
<tr> <td>Stimulus</td> <td>The event or condition arising from this scenario</td> </tr>
<tr> <td>Response</td> <td>The expected reaction of the system to the scenario event</td> </tr>
<tr> <td>Response Measure</td> <td>The measurable effects showing if the scenario is fulfilled by the architecture</td> </tr>
</tbody>
</table>

Figure 2: Architectural scenario template

3.4 Describing Solutions

Following the idea of pattern languages [BM00], [GH94], we use “Solution Patterns” for the description of our solutions. The crucial point in describing solution patterns is the level of detail. On the one hand, the more details the solution contains, the better. On the other hand, each additional detail induces a certain assumption about the context in which the system is developed, which makes the solution less widely applicable. Developers of pattern languages use templates for their descriptions [GH94]. Strict templates work well when the context of an application is well known, as in the case of pattern languages for specific domains. In our case, we consider a whole technology that can be applied in various ways.
The template for describing the solution must therefore be detailed enough to be understandable and implementable, yet remain applicable in different contexts. We therefore keep our solution pattern template simple and base it on examples rather than on principles, which leaves more freedom during description and subsequent application. The template consists of the following parts:

1. **Textual description.** A clear description of how the pattern resolves the challenge described in the architectural scenario. The description must include pros, cons, restrictions, and known uses of the solution. Should the challenge be resolved by employing COTS components (frameworks, platforms, etc.), a link to the component has to be specified.

2. **Structural Diagram.** A structural view of the system that supports the current architectural scenario.

3. **Behavioral Diagram.** One or more instances of interaction among the components of the system that support the current architectural scenario(s).

3.5 Enriching the Process

Once the static experience artifacts (challenges and solutions) have been collected and described appropriately, they must be integrated into the architecture design process, and the users of the process must be provided with a usage guide. How the artifacts are included in Fraunhofer ACES and then reused is sketched in Figure 3.

Figure 3: Technology-specific Fraunhofer ACES

We guide users with the help of reuse scenarios that describe how the experience artifacts shall be applied. Three key reuse scenarios (numbered to match Figure 3) that may occur when designing a software architecture with the technology-specific Fraunhofer ACES are:

1. **Using architectural scenarios for finding missing requirements.** There are certain quality requirements that are very likely to appear in the context of a particular technology.
The customer might at first overlook these requirements and come back to them only in a later phase of the project, when any change is costly. The architect will therefore find it handy to use typical architectural scenarios of the technology as a checklist for finding missing requirements.

2. **Using architectural scenarios as input for the architecture design process.** Architectural scenarios accurately describe the challenge that needs to be resolved by the architecture of a particular software product. Ideally, the architectural scenario will be linked directly to the solution pattern (see point 3) that resolves it; otherwise, its precise description will help the architect to significantly narrow the solution search space and will later serve as a basis for assessing the selected solution.

3. **Applying solution patterns for quality-driven design.** Based on the architectural scenarios to be satisfied, an architect selects solution patterns to be implemented. Using proven solution patterns makes the architecture design process more efficient and increases the quality of the resulting product.

4. Application Example

In order to collect initial evidence regarding the effectiveness of our approach, we applied it to one of the technologies currently under research at Fraunhofer IESE. Due to the current trend towards ubiquitous computing and the growing need for mobile support in enterprises, we chose mobile apps as our target technology and scoped it to the business domain (excluding, e.g., the gaming domain). The challenges encountered here stem mostly from the fact that business-oriented mobile applications have the same quality requirements as common desktop applications but run in a totally different environment.
These applications suffer from Internet connectivity problems, high power dependency, and other challenges brought on, for example, by the specifics of the operating system or a particular application store. It is therefore essential to be aware of the typical challenges of this technology and to make sure they are covered by the system architecture.

4.1 Identified Challenges

After conducting several pilot projects at Fraunhofer IESE and performing their post-mortems (section 3.2), we identified a set of challenges related to mobile business applications. Some of these challenges are presented in the table in Figure 4. The first column classifies the challenges into challenge areas; the second and third columns contain the name of each challenge and the architectural scenario that describes it.

<table>
<thead>
<tr> <th>Challenge Area</th> <th>Challenge (Quality Requirement)</th> <th>Architectural Scenario</th> </tr>
</thead>
<tbody>
<tr> <td>Unreliable Connectivity</td> <td>Application must operate with bad internet connection</td> <td>Seamless Connectivity</td> </tr>
<tr> <td></td> <td>Application must operate when there is no internet connection</td> <td>Connection Loss Tolerance</td> </tr>
<tr> <td></td> <td>Remote communication must be fast</td> <td>Reduced Network Latency</td> </tr>
<tr> <td></td> <td>Remote communication must be reliable</td> <td>Consistent Communication</td> </tr>
<tr> <td>Limited Energy Supply</td> <td>Application must be energy efficient</td> <td>Reduced Power Consumption</td> </tr>
<tr> <td>Deployment</td> <td>It must be possible to deploy application update within one hour</td> <td>Rapid Application Deployment</td> </tr>
</tbody>
</table>

Figure 4: Challenges of Mobile Business Applications

One prominent challenge for mobile apps is deployment. In contrast to desktop or web applications, the standard facility for deploying apps is an “appstore”.
An “appstore” is a completely proprietary entity that lies beyond the developers’ control: There is no guarantee that the application will pass the internal approval process and that deployment will be allowed. Furthermore, the duration of the approval process cannot be foreseen, so deployment time cannot be guaranteed to the customer. Controlled deployment is clearly a crucial requirement for a mobile application development organization that is legally obliged to guarantee software defect removal within a certain timeframe. The architectural scenario “Rapid Application Deployment” in Figure 5 describes a concrete instance of the controlled deployment challenge. Such a precise description allows an architect to easily evaluate the suitability of a proposed solution simply by playing through the scenario.

4.2 Identified Solutions

Based on the experience obtained during the pilot projects, we were also able to find appropriate solution patterns for the identified challenges. Figure 6 shows a traceability matrix that establishes the relationship between solution patterns and the architectural scenarios they resolve. It can be clearly seen that one pattern can resolve several scenarios and that one scenario can be resolved by multiple patterns. Figure 7, Figure 8, and Figure 9 give an example of a solution pattern described according to the template from section 3.4. The pattern is named “Deployment bypassing appstore” and resolves the architectural scenario given in Figure 5.

4.3 Initial Validation

This section features an initial validation of our assumption that reusing technology-specific experience during the architecture design process improves the efficiency of the process and the quality of the end product. Both points are hard to assess without a real project setting. However, a coarse validation of the efficiency aspect can be carried out with the help of a case study.
The case study was designed as follows: We took the architecture document of one of the mobile business applications developed at our institute, designed a change request for it, and asked three Fraunhofer IESE employees with different levels of software engineering knowledge and only general knowledge of mobile systems to perform several tasks related to this change request with the help of the mobile technology experience persisted in Fraunhofer ACES. The experience base included a catalog with 21 architectural scenarios and 22 solution patterns (partially shown in Figure 6) identified at Fraunhofer IESE using the approach described in section 3. The participants had to imagine having to implement the change request and thus had to redesign some of the system structures accordingly. They were asked to record the time they spent on:

1. Finding the relevant architectural scenario and solution pattern in the catalog;
2. Reading the pattern, understanding how they would apply it to the given system architecture, and sketching modified structural and behavioral views of the system.

According to the results, the participants reusing experience artifacts needed five minutes on average to find the matching architectural scenario and solution pattern in the catalog, and 18 minutes on average to understand how to apply the chosen pattern to the current system architecture and to sketch the modified views. The participants were also asked to compare working with the extended ACES approach to working with the original one. Their assessment was that, given Fraunhofer ACES without technology-specific experience artifacts, they would have required 2.5 hours on average to come up with their own solution for the given problem. The use of the extended approach thus reduced the time spent to less than 20%. However, these results are based on only one case and on the subjective judgment of the case study participants.
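The reported reduction can be checked with a few lines of arithmetic (5 minutes of searching plus 18 minutes of application, against the self-estimated baseline of 2.5 hours):

```python
search_min = 5            # finding scenario and pattern in the catalog
apply_min = 18            # understanding the pattern and sketching views
baseline_min = 2.5 * 60   # estimated effort without experience artifacts

fraction = (search_min + apply_min) / baseline_min
print(f"{fraction:.0%}")  # 15%, i.e., less than 20% of the baseline
```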
Obviously, a more thorough validation of the proposed approach is needed before more concrete conclusions about its efficiency can be drawn. It will be necessary to perform similar case studies or full-size experiments while varying factors such as the number of participants, the participants’ level of experience, the size of the catalog, the experiment context, and the type of tasks given to the participants. Ways to validate the quality aspect of the resulting product also have to be found.

5. Conclusion

In this paper, we have shown a way to collect and persist technology-specific architectural experience in an organization. Reuse of this experience during follow-up projects is expected to increase the efficiency of the software architecture design process and the quality of the resulting software product. Furthermore, the stored experience lowers an organization’s dependence on individual knowledge carriers. We described how technology-specific experience can be persisted and reused beneficially within the architecture design process. For experience persistence, we used challenge-solution pairs. We described a method for collecting these pairs using project postmortems and gave templates for their precise description with architectural scenarios and solution patterns. Finally, we gave an example of a typical challenge-solution pair for the technology of mobile apps and described it using the given templates. A case study served as a coarse validation and allowed us to draw first conclusions regarding the efficiency of a software architecture design process supplemented with technology-specific experience artifacts. Although the case study showed a noticeable increase in efficiency, it is too early to draw final conclusions in this regard; a more thorough validation of the approach has to be performed.

References

[GH94] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Massachusetts, 1994.
Learning Unified Features from Natural and Programming Languages for Locating Buggy Source Code*

Xuan Huo and Ming Li and Zhi-Hua Zhou
National Key Laboratory for Novel Software Technology, Nanjing University
Collaborative Innovation Center of Novel Software Technology and Industrialization
Nanjing 210023, China
{huox, lim, zhouzh}@lamda.nju.edu.cn

Abstract

Bug reports provide an effective way for end-users to disclose potential bugs hidden in a software system, while automatically locating the potentially buggy source code for a given bug report remains a great challenge in software maintenance. Many previous studies treated the source code as natural language by representing both the bug report and the source code with bag-of-words features and correlating them by measuring similarity in the same lexical feature space. However, these approaches fail to consider the structural information of source code, which carries additional semantics beyond the lexical terms; such information is important in modeling program functionality. In this paper, we propose a novel convolutional neural network, NP-CNN, which leverages both lexical and program structure information to learn unified features from natural language and source code in programming language for automatically locating the potentially buggy source code for a given bug report. Experimental results on widely used software projects indicate that NP-CNN significantly outperforms the state-of-the-art methods in locating buggy source files.

1 Introduction

Software quality assurance is vital to the success of a software system. As software systems become larger and more complex, it is extremely difficult to identify every software defect (or bug, informally) before formal release due to limited software testing resources and tight development schedules. Thus, software systems are often shipped with bugs.
To facilitate fast and efficient identification and fixing of bugs in a released software system, end-users generate bug reports: documents written in natural language that specify situations in which the software fails to behave as expected or to follow the technical requirements of the system. These reports are then submitted to the software maintenance team. Once a bug report is received and verified, the team reads its textual description to locate the potentially buggy files in the source code and assigns an appropriate developer to fix the bug. However, for large and evolving software, the maintenance team may receive a large number of bug reports over a period of time, and manually locating the potentially buggy source files based on bug reports is costly.

Bug localization, which aims to alleviate this burden by automatically locating potentially buggy files in the source code base for a given bug report, has drawn significant attention in the software engineering community. The key to bug localization is to correlate the abnormal program behaviors written in natural language with the source code, written in a programming language, that implements the corresponding functionality. Most state-of-the-art methods treat the source code as natural language by representing both bug reports and source files with bag-of-words features and correlating them by measuring similarity in the same feature space. For example, Lukins et al. [2008] apply generative probabilistic Latent Dirichlet Allocation (LDA) to represent source code and bug reports and locate the buggy files according to their similarities. Gay et al.
[2009] represent both source files and bug reports using the vector space model (VSM), based on which the similarities between source files and a bug report are computed to localize the corresponding buggy files; their experimental results suggest that the VSM model may perform better than the LDA model. Zhou et al. [2012] propose a revised vector space model (rVSM), in which similar historical bug reports and their corresponding buggy files are further exploited to improve the results obtained by simply measuring the similarity between bug reports and source files. Recently, Lam et al. [2015] employ an autoencoder to learn features that correlate frequently occurring terms in bug reports and source files in order to enhance the bag-of-words features.

While enjoying the convenience of correlating the heterogeneous data in the same lexical feature space, these methods also suffer from a loss of information when tailoring the programming language to natural language by ignoring the program structure. The program structure specifies how different statements interact with each other to accomplish certain functionality, and thus provides additional semantics about the program functionality beyond the lexical terms. For example, given a private string variable path initialized with a default value DEFAULT_PATH, the two pieces of code “path = getNewPath(); File f = File.open(path);” and “File f = File.open(path); path = getNewPath();” may result in different program behaviors. Thus, to better represent the program functionality, a richer feature representation that captures both the lexical semantics of terms and the program structure needs to be extracted from source code.

*This research was supported by NSFC (61333014, 61422304, 61272217, 61321491), JiangsuSF (BK20131278) and NCET-13-0275.
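The point about statement order can be mirrored in runnable form. The sketch below (Python stand-ins for the Java-like calls in the text) shows that the two orderings contain exactly the same tokens, so bag-of-words features cannot tell them apart, yet they behave differently:

```python
DEFAULT_PATH = "/default/path"

def get_new_path():
    return "/new/path"

def open_file(path):
    # stand-in for File.open in the example above
    return f"opened {path}"

# Ordering 1: path is refreshed before the file is opened.
path = DEFAULT_PATH
path = get_new_path()
f1 = open_file(path)

# Ordering 2: the same two statements swapped; the stale default is used.
path = DEFAULT_PATH
f2 = open_file(path)
path = get_new_path()

assert f1 == "opened /new/path" and f2 == "opened /default/path"
```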
However, enriching the features of the source code places bug reports and source files in two different feature spaces and consequently increases the difficulty of measuring the correlation between reports and source code. One question arises: can we learn a unified feature representation from both natural and programming language in which the semantics of the lexicon and the program structure are captured and the correlations between bug reports and source files for bug localization are carefully embedded?

In this paper, we propose a novel convolutional neural network called NP-CNN (Natural language and Programming language Convolutional Neural Network) to learn unified features from bug reports in natural language and source code in programming language. The model consists of two consecutive parts. The first part is the intra-language feature extraction layers, which extract features from bug reports and source files, respectively, using multiple layers of convolutional neurons, where the convolution operation for source code is specifically designed to reflect the program structure. The second part is the cross-language feature fusion layers, which combine the extracted features from bug reports and source files into a unified representation for the purpose of correctly identifying the source code related to a given bug report. Experimental results on widely used software projects indicate that learning unified features while respecting the program structure is beneficial and that the proposed NP-CNN significantly outperforms the state-of-the-art bug localization methods.

The contributions of our work are twofold:

- We propose a CNN-based deep neural network to learn unified features from natural language and programming language for locating buggy files.
- We design convolution operations that respect the program structure and are thus able to capture the semantics of a program from both lexical and program-structural perspectives.
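As a rough, simplified illustration of such an intra-language feature extraction layer (a single convolution with max pooling over token embeddings; this is not the actual NP-CNN architecture, and all sizes are toy values):

```python
import numpy as np

def conv_max_pool(token_ids, emb, W, b):
    """One convolutional layer over a token sequence with max pooling.

    emb: (vocab, d) embedding matrix; W: (k, d, m) filters over windows
    of k consecutive tokens; b: (m,) bias. Returns an m-dim feature vector.
    Assumes the sequence has at least k tokens.
    """
    k, d, m = W.shape
    x = emb[token_ids]                                   # (n, d) embedded tokens
    feats = [np.tanh(np.tensordot(x[i:i + k], W, axes=([0, 1], [0, 1])) + b)
             for i in range(len(token_ids) - k + 1)]     # one (m,) vector per window
    return np.max(feats, axis=0)                         # max over window positions

rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 8))                 # toy vocabulary of 50 tokens
W, b = rng.normal(size=(3, 8, 4)), np.zeros(4)
report_vec = conv_max_pool([1, 7, 3, 9, 2], emb, W, b)
print(report_vec.shape)  # (4,)
```

In the paper's design, one such stack processes the bug report and another, structure-aware stack processes the source file before the fusion layers combine their outputs.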
The rest of this paper is organized as follows. In Section 2, we discuss related work. In Section 3, we present the proposed NP-CNN model. In Section 4, we report the experimental results, and finally, in Section 5, we conclude the paper and discuss some future work. 2 Related Work Bug localization, which locates the source files potentially responsible for the bugs described in bug reports, is an important but costly activity in software maintenance. Most existing approaches treat the source files as documents and formalize the bug localization problem as a document retrieval problem. Various models have been constructed to compute the similarity or relevancy between the bug reports and the source files. Many information retrieval based bug localization methods have been proposed. Poshyvanyk et al. [2007] proposed a feature location model to mine buggy files based on Latent Semantic Indexing (LSI), which identifies the relationship between reports and terms based on Singular Value Decomposition (SVD). Lukins et al. [2008] treated bug reports as a mixture of various topics that generate words with certain probabilities, and applied a generative probabilistic Latent Dirichlet Allocation (LDA) model for locating buggy files. Gay et al. [2009] employed the Vector Space Model (VSM) based on concept localization to represent bug reports and source code files as feature vectors, which are used to measure the similarity between bug reports and source files. Zhou et al. [2012] proposed the BugLocator approach using a revised Vector Space Model (rVSM), which incorporates document length and similar previously resolved bugs as new features. However, all these models ignore the structural information of software code, which may disclose important semantics of the source files beyond their textual representation. In natural language processing (NLP), deep learning has been applied to learn word vector representations. Collobert et al.
[2011] presented a multi-layer neural network architecture that can handle a number of NLP tasks and was designed to avoid task-specific engineering. Kim [2014] conducted a series of experiments with CNNs trained on top of pre-trained word vectors, showing that a simple CNN with little hyperparameter tuning achieves excellent results on sentence classification tasks. Zhang et al. [2015] applied temporal ConvNets to various large-scale text understanding tasks, in which the ConvNets require no knowledge of words or syntax. Johnson and Zhang [2015] studied CNNs for text categorization that exploit the word order of text data, showing that CNNs provide an alternative mechanism for the effective use of word order. Recently, deep learning has also been applied to software engineering problems. White et al. [2015] applied deep learning to induce high-quality models for code suggestion. Mou et al. [2016] applied CNNs on abstract syntax trees to detect code snippets of certain patterns. Lam et al. [2015] combined an autoencoder with information retrieval based methods to locate buggy files. 3 Convolutional Neural Networks for Natural and Programming Languages The goal of bug localization is to locate the potentially buggy source files that produce the program behaviors specified in a given bug report. Let \( C = \{c_1, c_2, \ldots, c_{N_1}\} \) denote the set of source code files of a software project and \( R = \{r_1, r_2, \ldots, r_{N_2}\} \) denote the collection of bug reports received by the software maintenance team, where \( N_1, N_2 \) are the numbers of source files and bug reports, respectively. The bug reports and source files can be collected from bug tracking systems (e.g., Bugzilla, Jira, etc.) and version control systems (e.g., CVS, Git, etc.).
Unlike many existing methods [Gay et al., 2009; Zhou et al., 2012] which represent bug reports and source code in the same lexical feature space and compute similarity to identify their correlation, we formalize bug localization as a learning task, which attempts to learn a prediction function \( f : \mathcal{R} \times \mathcal{C} \rightarrow \mathcal{Y} \). Here \( y_{ij} \in \mathcal{Y} = \{+1, -1\} \) indicates whether a source code file \( c_j \in \mathcal{C} \) is related to a bug report \( r_i \in \mathcal{R} \), which can be obtained by investigating software commit logs and bug report descriptions [Fischer et al., 2003]. The prediction function \( f \) can be learned by minimizing the following objective function \[ \min_f \sum_{i,j} \mathcal{L}(f(r_i, c_j), y_{ij}) + \lambda \Omega(f), \] where \( \mathcal{L}(\cdot, \cdot) \) is the empirical loss and \( \Omega(f) \) is a regularization term imposed on the prediction function. The trade-off between \( \mathcal{L}(\cdot, \cdot) \) and \( \Omega(f) \) is balanced by \( \lambda \). We instantiate the learning task by proposing a novel convolutional neural network, NP-CNN, which takes the raw data of bug reports and source code as input and learns a unified feature mapping \( \varphi(\cdot, \cdot) \) for a given \( r_i \) and \( c_j \), based on which the prediction can be made with a subsequent linear output layer. Based on one-hot encoding, a bug report or a source file with \( n \) regions of sentences can be represented by \( X \in \mathbb{R}^{n \times k} \), which is then fed into the subsequent convolutional layers. Such encoding directly transforms textual data into a raw binary representation with no requirement for domain knowledge in data representation, and has been shown to be effective in processing textual data [Kim, 2014].
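A minimal sketch of the one-hot encoding step described above, assuming a toy vocabulary (the vocabulary and tokens are illustrative, not from the paper):

```python
import numpy as np

def one_hot_encode(tokens, vocab):
    # Map each token to a k-dimensional one-hot row of X in R^{n x k};
    # unknown tokens are left as all-zero rows.
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(tokens), len(vocab)))
    for r, tok in enumerate(tokens):
        if tok in index:
            X[r, index[tok]] = 1.0
    return X

vocab = ["file", "open", "path", "null"]   # illustrative vocabulary
X = one_hot_encode(["file", "open", "path"], vocab)
assert X.shape == (3, 4)   # n regions x k vocabulary size
assert X.sum() == 3.0      # exactly one active entry per known token
```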
After the preprocessing of the input layer, the encoded data \( X_i^r \) of a bug report \( r_i \) and \( X_j^c \) of a source code file \( c_j \) are passed to the intra-language feature extraction layers. In these layers, bug reports and source code are processed separately by different convolutional networks to extract middle-level intra-language features, where the convolution operations are designed with respect to the different characteristics of natural language and programming language. Then, the intra-language features from the bug report and the source code are fused into a unified feature representation by the cross-language feature fusion layers, followed by a linear output layer mapping the unified feature to \( \mathcal{Y} \), which indicates whether \( c_j \) is related to \( r_i \). The key to the NP-CNN model lies in the intra-language feature extraction layers and the cross-language feature fusion layers, which are discussed in detail in the following subsections. ### 3.1 Intra-language Feature Extraction Layers Intra-language feature extraction layers employ separate convolutional neural networks to extract intra-language features from natural language and programming language. Since extracting features from natural language using CNNs has been widely studied [Johnson and Zhang, 2015], we follow the standard approach to extract features from bug reports, and focus here on building convolutional networks for source code in programming language. Programming language, although in textual format, differs from natural language mainly in two aspects. First, the basic language component carrying meaningful semantics in natural language is the word or term, and the semantics of natural language can be inferred from a bag of words.
By contrast, in programming language the basic language component carrying meaningful semantics is the statement, and the semantics of a program can be inferred from the semantics of multiple statements together with the way these statements interact with each other along the execution path. Thus, to extract features from programming language, the convolution operations should explicitly respect the atomicity of statements in semantics. Second, natural language organizes words in a “flat” way while programming language organizes its statements in a “structured” way to produce richer semantics. For example, a branching structure “if-then-else” defines two parallel groups of statements. Each group interacts with the statements before and after the branching block, while there is no interaction between the two groups. Thus, to extract features from programming language, the convolution operations should obey the program structure defined by the programming language. Based on the aforementioned considerations, we propose the substructure of NP-CNN responsible for extracting features from source code based on a convolutional neural network. The network structure is specified in Figure 2. The first convolutional and pooling layer aims to represent the semantics of a statement based on the tokens within the statement, and the subsequent convolutional and pooling layers aim to model the semantics conveyed by the interactions between statements with respect to the program structure while preserving the integrity of statements. The fully connected networks are connected to the cross-language feature fusion layers. In the first convolutional layer, a window of \( d \) tokens is represented as a concatenated vector \( s_q \in \mathbb{R}^{dk} \).
The first convolutional layer employs a filter \( \mathbf{w} \in \mathbb{R}^{dk} \) and a non-linear activation function \( \sigma \) to convert a statement of \( n \) words into a new vector \( \mathbf{z} \in \mathbb{R}^{n-d+1} \). Since the length of each statement is different, the extracted features cannot be fed directly to a fixed-size neural layer. Therefore, we fix the number of pooling units and dynamically determine the pooling region size for each data point, which has been shown to be effective in previous works [Zhang et al., 2015; Johnson and Zhang, 2015]. It is noteworthy that after the first convolutional and pooling layer each row of the feature map represents one line of code, and consequently the integrity of the statements is well-preserved. The subsequent convolutional and pooling layers aim to model high-order interactions between statements at different granularities by varying the size of the convolution windows. For example, the first filter operates on a window of two statements, which can be viewed as extracting and representing the information shared between two consecutive statements along the execution path. The second filter, on a window with \( d = 3 \), can be viewed as extracting features from three consecutive statements along the execution path, and so on. To avoid the poor performance caused by using a large window size [Collobert and Weston, 2008; Kim, 2014], we slice the program into different building blocks [Binkley et al., 2014] and set the maximal window size to the average length of program blocks. Moreover, we pad the windows located on the boundaries of branches and loops to ensure that the interactions between statements do not violate the execution path.
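The first convolution and the dynamic pooling described above can be sketched as follows. This is a simplified numpy sketch under stated assumptions: a single filter, max-pooling as the pooling operation, and random illustrative values for the input and filter.

```python
import numpy as np

def conv1d(S, w, d):
    # S: (n, k) rows of one-hot/encoded tokens; w: filter of length d*k.
    # Slide a window of d rows, flatten to length d*k, apply w and ReLU,
    # yielding z in R^{n-d+1} as in the text.
    n, k = S.shape
    z = np.empty(n - d + 1)
    for t in range(n - d + 1):
        z[t] = max(0.0, S[t:t + d].reshape(-1) @ w)
    return z

def dynamic_max_pool(z, p):
    # Fixed number of pooling units p; the region size adapts to len(z),
    # so variable-length inputs yield a fixed-size output.
    regions = np.array_split(z, p)
    return np.array([r.max() for r in regions])

rng = np.random.default_rng(1)
n, k, d = 10, 6, 3                # illustrative dimensions
S = rng.random((n, k))
w = rng.normal(size=d * k)

z = conv1d(S, w, d)
assert z.shape == (n - d + 1,)    # matches z in R^{n-d+1}
pooled = dynamic_max_pool(z, p=4)
assert pooled.shape == (4,)       # fixed-size output regardless of n
```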
### 3.2 Cross-language Feature Fusion Layers In the cross-language feature fusion layers, we employ a fully connected neural network to fuse the middle-level features extracted from bug reports and source files into a unified feature representation, where the network is learned so as to facilitate the determination of whether the given source code file is related to the given bug report based on the unified feature. In most cases of bug localization, a reported bug may be related to only one or a few source code files, while a large number of source code files are irrelevant to the given bug report. Such an imbalanced nature increases the difficulty of learning a well-performing prediction function based on the unified feature. To address this problem, we propose to learn a unified feature that may counteract the negative influence of the imbalanced data in the subsequent learning of the prediction function. Inspired by [Zhou and Liu, 2006], we introduce unequal misclassification costs according to the imbalance ratio and train the fully connected network in a cost-sensitive manner. Let \( \text{cost}_{a} \) denote the cost of incorrectly associating an irrelevant source code file with a bug report and \( \text{cost}_{p} \) denote the cost of missing a buggy source code file that is responsible for the reported bugs. The weights \( \mathbf{w} \) of the fully connected networks can be learned by minimizing the following objective function based on SGD (stochastic gradient descent): \[ \min_{\mathbf{w}} \sum_{i,j} \left[ \text{cost}_{a} \, L(z_i^r, z_j^c; y_{ij}; \mathbf{w})(1 - y_{ij}) + \text{cost}_{p} \, L(z_i^r, z_j^c; y_{ij}; \mathbf{w})(1 + y_{ij}) \right] + \lambda \|\mathbf{w}\|^2, \] where \( L \) is the loss function, \( z_i^r \) and \( z_j^c \) are the intra-language features of \( r_i \) and \( c_j \), and \( \lambda \) is the trade-off parameter. 4 Experiments To evaluate the effectiveness of NP-CNN, we conduct experiments on open source software projects and compare with several state-of-the-art bug localization methods.
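The cost-sensitive objective of Section 3.2 can be sketched as follows. This is a minimal sketch, not the paper's implementation: the logistic loss is an illustrative choice for \( L \), and the cost values and scores are assumed for demonstration.

```python
import numpy as np

def cost_sensitive_loss(scores, labels, w, cost_a=1.0, cost_p=5.0, lam=1e-3):
    # labels in {+1, -1}; logistic loss as an illustrative per-pair loss L.
    per_pair = np.log1p(np.exp(-labels * scores))
    # cost_a weighs false associations (negative pairs),
    # cost_p weighs missed buggy files (positive pairs).
    weights = np.where(labels > 0, cost_p, cost_a)
    return float((weights * per_pair).sum() + lam * (w @ w))

scores = np.array([2.0, -1.0, 0.5])   # illustrative prediction scores
labels = np.array([+1, -1, -1])       # one buggy pair, two irrelevant pairs
w = np.zeros(4)                       # stand-in weight vector

loss_balanced = cost_sensitive_loss(scores, labels, w, cost_a=1.0, cost_p=1.0)
loss_skewed = cost_sensitive_loss(scores, labels, w, cost_a=1.0, cost_p=5.0)
# Up-weighting positives increases the contribution of the (rare) positive pair.
assert loss_skewed > loss_balanced
```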
4.1 Experiment Settings The data sets used in the experiments are extracted from four well-known open source software projects, and their statistics are shown in Table 1. The data set JDT (Java Development Tools) is an Eclipse project used for plug-in support and the development of Java applications. The project PF (Eclipse Platform) contains a set of frameworks and common services that make up the Eclipse infrastructure. Another project, PDE (Plug-in Development Environment), is a tool to create and deploy features and plug-ins of Eclipse. We also investigate the AspectJ project, an aspect-oriented extension to the Java programming language. All the projects and the labels of software code and bug reports can be extracted from the bug tracking system and the CVS/Git version control systems, as has been widely done in previous studies [Zhou et al., 2012; Lam et al., 2015]. Table 1: Statistics of our data sets. <table> <thead> <tr> <th>Data sets</th> <th># fixed bug reports</th> <th># source files</th> <th># avg buggy files per report</th> </tr> </thead> <tbody> <tr> <td>JDT</td> <td>12,826</td> <td>2,272</td> <td>4.39</td> </tr> <tr> <td>PF</td> <td>14,893</td> <td>1,012</td> <td>6.79</td> </tr> <tr> <td>PDE</td> <td>4,034</td> <td>2,970</td> <td>8.34</td> </tr> <tr> <td>AspectJ</td> <td>1,734</td> <td>1,136</td> <td>1.73</td> </tr> </tbody> </table> As indicated by Table 1, the number of candidate source files is large, but the data sets are highly imbalanced in that only a few source files are related to a given bug report. Therefore, we use AUC, which has been widely applied to evaluate learning performance on imbalanced problems. Besides, we also evaluate the performance using MAP (Mean Average Precision) and Top k Rank, which are widely used for evaluating the cost-effectiveness of bug localization [Zhou et al., 2012; Ye et al., 2014; Lam et al., 2015].
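The ranking metrics above can be sketched per bug report as follows (a minimal sketch; the cited works may differ in details such as tie handling, and the file names below are illustrative):

```python
def average_precision(ranked_files, buggy):
    # ranked_files: files sorted by predicted relevance;
    # buggy: set of truly buggy files for this report.
    # MAP is this value averaged over all bug reports.
    hits, precisions = 0, []
    for rank, f in enumerate(ranked_files, start=1):
        if f in buggy:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(buggy) if buggy else 0.0

def top_k_hit(ranked_files, buggy, k):
    # 1 if at least one truly buggy file appears in the top k, else 0;
    # averaging over reports gives the Top k Rank metric.
    return int(any(f in buggy for f in ranked_files[:k]))

ranking = ["a.java", "b.java", "c.java", "d.java"]
buggy = {"b.java", "d.java"}
assert abs(average_precision(ranking, buggy) - 0.5) < 1e-9  # (1/2 + 2/4) / 2
assert top_k_hit(ranking, buggy, k=1) == 0
assert top_k_hit(ranking, buggy, k=2) == 1
```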
We compare the proposed model NP-CNN with the following baseline methods: - BugLocator [Zhou et al., 2012]: a state-of-the-art bug localization method which employs the revised Vector Space Model to measure the similarity between bug reports and source files in order to identify potential buggy files related to a given bug report. - Two-phase [Kim et al., 2013]: a state-of-the-art bug localization model that first uses Naive Bayes to filter uninformative bug reports and then uses a vector space model to predict buggy files. - HyLoc [Lam et al., 2015]: a recently proposed bug localization model which employs an autoencoder and a vector space model to identify potential buggy files related to a given bug report. - CNN: a straightforward CNN-based approach which merges textual bug reports and textual source code together and feeds them directly to a CNN. - US-CNN (CNN with Under Sampling): a variant of NP-CNN which addresses the imbalance problem by applying under-sampling to the training set in advance to reduce the number of negative pairs of bug reports and irrelevant source files. - N-CNN (Natural language CNN): a variant of NP-CNN where no program structure is considered in the “intra-language feature extraction layers” for source code, and source code is processed in the same way as the textual bug reports. For BugLocator and Two-phase, we use the same parameter settings suggested in [Zhou et al., 2012; Kim et al., 2013], respectively. For all data sets, we fix the activation function to \( \sigma(x) = \max(x, 0) \). We use window sizes \( d \) of 2, 3, 4, and 5 with 100 feature maps each, and roughly the 5,000 words that appear most frequently in the bug reports and software code are used in the experiment. In addition, we use two techniques to improve prediction performance: response normalization [Krizhevsky et al., 2012] and dropout [Hinton et al., 2012]. Response normalization scales the output \( z \) of the pooling layer by multiplying by \( 1 + |z|^2 \).
Dropout is used to prevent the co-adaptation of hidden units by randomly dropping out values. In our experiments, we set the dropout probability to \( p=0.5 \) in the fusion layers. 4.2 Experiment Results For each data set, 10-fold cross validation is repeated 10 times, and the average performance of all the compared methods with respect to AUC and MAP is tabulated in Table 2 and Table 3, respectively, where the best performance on each data set is boldfaced; the performance with respect to Top k Rank is depicted in Figure 3. We conduct the Mann-Whitney test at the 95% confidence level. If NP-CNN significantly outperforms a compared method, the inferior performance of the compared method is marked with “•”, and a value significantly better than NP-CNN would be marked with “◦”. It can be observed from the tables that the proposed NP-CNN achieves the best average performance (0.891) in terms of AUC, which improves over BugLocator (0.747) by 19.2%, Two-phase (0.738) by 20.7%, and HyLoc (0.807) by 10.4%, and NP-CNN achieves the best performance (0.557) with respect to MAP on all data sets except JDT. The superiority of NP-CNN is statistically significant. Figure 3 also indicates the superiority of NP-CNN over the other compared methods with respect to Top k Rank. NP-CNN achieves an average Top k Rank of 0.881, which improves over the average value of BugLocator (0.691) by 27.4%, Two-phase (0.600) by 46.8%, and HyLoc (0.752) by 17.2%. The superior performance of NP-CNN over the state-of-the-art bug localization methods indicates that the unified features learned by NP-CNN from natural and programming languages facilitate better bug localization.
It can further be observed from Table 2 and Table 3 that NP-CNN outperforms N-CNN by 3.7% in terms of AUC and 5.6% in terms of MAP, which suggests that our convolutional neural network for programming language can extract better features from source code than a natural language network. In addition, to evaluate the effectiveness of the cost-sensitive cross-language fusion layers, we use US-CNN for comparison, a variant implementation of NP-CNN which first applies an undersampling operation to the training data sets to discard negative pairs until their number equals that of the positive ones. The sampling procedure is repeated 10 times and the results are ensembled at the end. It can be clearly observed from Table 2 and Table 3 that NP-CNN performs better than US-CNN on all data sets in terms of MAP and AUC. In summary, the experimental results suggest that NP-CNN can learn a unified feature representation from natural and programming languages to facilitate better bug localization. 5 Conclusion In this paper, we propose a novel convolutional neural network called NP-CNN to learn unified features from bug reports in natural language and source code in programming language for the bug localization problem, where particular convolution operations that reflect the program structure are carefully designed to generate features that capture semantics from both lexicon and program structure.
Experimental results on widely-used software projects indicate that learning unified features by respecting the program structure is beneficial and that the proposed NP-CNN significantly outperforms the state-of-the-art bug localization methods. NP-CNN exploits the program structure by explicitly modeling the high-order interactions between statements. Combining richer program structure information derived from program analysis tools for extracting features from programming languages will be investigated in the future. Moreover, incorporating additional data to enrich the structure of NP-CNN is another interesting direction for future work. References
1. Key Fob Development Platform The Si4010 key fob development platform is a flexible platform for comfortably developing software and testing the whole system using the Silicon Laboratories software development IDE. The platform also allows programming of the NVM on chip. The kit has three versions: one for the 434 MHz band (P/N 4010-KFOBDEV-434), one for the 868 MHz band (P/N 4010-KFOBDEV-868) and one for the 915 MHz band (P/N 4010-KFOBDEV-915). 1.1. Kit Content <table> <thead> <tr> <th>Qty</th> <th>Part Number</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>4010-KFOBDEV-434</td> <td>Si4010 Key Fob Development Kit 434 MHz</td> <td></td> </tr> <tr> <td>2</td> <td>4010-KFOB-434-NF</td> <td>Si4010 key fob demo board 434 MHz w/o IC</td> </tr> <tr> <td>1</td> <td>MSC-DKPE1</td> <td>SOIC/MSOP socketed development board</td> </tr> <tr> <td>3</td> <td>Si4010-C2-GS</td> <td>Si4010-C2-GS transmitter IC, SOIC Package</td> </tr> <tr> <td>1</td> <td>4010-DKPB434-BM</td> <td>Si4010 MSOP key fob development board 434 MHz, SMA</td> </tr> <tr> <td>1</td> <td>4355-LED-434-SRX</td> <td>Si4355 RFStick 434 MHz receiver board</td> </tr> <tr> <td>1</td> <td>MSC-PLPB_1</td> <td>Key Fob Plastic Case (translucent grey)</td> </tr> <tr> <td>1</td> <td>MSC-BA5</td> <td>Programming interface board</td> </tr> <tr> <td>1</td> <td>MSC-BA4</td> <td>Burning adapter board</td> </tr> <tr> <td>1</td> <td>EC3</td> <td>USB Debug Adapter</td> </tr> <tr> <td>1</td> <td>Toolstick_BA</td> <td>Toolstick Base Adapter</td> </tr> <tr> <td>1</td> <td>MSC-DKCS5</td> <td>USB Cable</td> </tr> <tr> <td>1</td> <td>USB extender cable (USBA-USBA)</td> <td></td> </tr> <tr> <td>2</td> <td>AAA</td> <td>AAA battery</td> </tr> <tr> <td>2</td> <td>CRD2032</td> <td>CR2032 3 V coin battery</td> </tr> <tr> <td>4010-KFOBDEV-868</td> <td>Si4010 Key Fob Development Kit 868 MHz</td> <td></td> </tr> <tr> <td>2</td> <td>4010-KFOB-868-NF</td> <td>Si4010 key fob demo board 868 MHz w/o IC</td> </tr> <tr> <td>1</td> 
<td>MSC-DKPE1</td> <td>SOIC/MSOP socketed development board</td> </tr> <tr> <td>3</td> <td>Si4010-C2-GS</td> <td>Si4010-C2-GS transmitter IC, SOIC Package</td> </tr> <tr> <td>1</td> <td>4010-DKPB868-BM</td> <td>Si4010 MSOP key fob development board 868 MHz, SMA</td> </tr> <tr> <td>1</td> <td>4355-LED-868-SRX</td> <td>Si4355 RFStick 868 MHz receiver board</td> </tr> </tbody> </table> # Si4010-DK ## Table 1. Kit Content (Continued) <table> <thead> <tr> <th>Quantity</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>MSC-PLPB_1 Key Fob Plastic Case (translucent grey)</td> </tr> <tr> <td>1</td> <td>MSC-BA5 Programming interface board</td> </tr> <tr> <td>1</td> <td>MSC-BA4 Burning adapter board</td> </tr> <tr> <td>1</td> <td>EC3 USB Debug Adapter</td> </tr> <tr> <td>1</td> <td>Toolstick_BA Toolstick Base Adapter</td> </tr> <tr> <td>1</td> <td>MSC-DKCS5 USB Cable</td> </tr> <tr> <td>1</td> <td>USB extender cable (USBA-USBA)</td> </tr> <tr> <td>2</td> <td>AAA AAA battery</td> </tr> <tr> <td>2</td> <td>CRD2032 CR2032 3 V coin battery</td> </tr> </tbody> </table> ### 4010-KFOBDEV-915 <table> <thead> <tr> <th>Quantity</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>2</td> <td>4010-KFOB-915-NF Si4010 key fob demo board 915 MHz w/o IC</td> </tr> <tr> <td>1</td> <td>MSC-DKPE1 SOIC/MSOP socketed development board</td> </tr> <tr> <td>3</td> <td>Si4010-C2-GS Si4010-C2-GS transmitter IC, SOIC Package</td> </tr> <tr> <td>1</td> <td>4010-DKPB915-BM Si4010 MSOP key fob development board 915 MHz, SMA</td> </tr> <tr> <td>1</td> <td>4355-LED-915-SRX Si4355 RFStick 915 MHz receiver board</td> </tr> <tr> <td>1</td> <td>MSC-PLPB_1 Key Fob Plastic Case (translucent grey)</td> </tr> <tr> <td>1</td> <td>MSC-BA5 Programming interface board</td> </tr> <tr> <td>1</td> <td>MSC-BA4 Burning adapter board</td> </tr> <tr> <td>1</td> <td>EC3 USB Debug Adapter</td> </tr> <tr> <td>1</td> <td>Toolstick_BA Toolstick Base Adapter</td> </tr> <tr> <td>1</td> <td>MSC-DKCS5 USB 
Cable</td> </tr> <tr> <td>1</td> <td>USB extender cable (USBA-USBA)</td> </tr> <tr> <td>2</td> <td>AAA AAA battery</td> </tr> <tr> <td>2</td> <td>CRD2032 CR2032 3 V coin battery</td> </tr> </tbody> </table> 1.1.1. Burning Adapter (P/N MSC-BA4) The burning adapter board serves as an interface between the debug adapter and the Socketed Key Fob Development Board or the Development Key Fob. It provides 6.5 V for NVM programming. The power source is activated by a sliding switch on the board. It is required when the user wants to program the internal NVM memory on the chip. The burning adapter board contains an 8-pin header, to which GPIO0 to GPIO5, along with power and ground, are connected from the development boards. Therefore, the user can tap into that header to control or monitor the chip pins. 1.1.2. Si4010 Socketed Key Fob Development Board (P/N MSC-DKPE1) Socketed (both SOIC and MSOP) key fob board with SMA connector. 1.1.3. Si4010 MSOP Key Fob Development Board 434 MHz, SMA (P/N 4010-DKPB434-BM) This development board has an unburned soldered Si4010, five push buttons, a matched 50 \(\Omega\) SMA RF output, a battery clip, and a battery switch. This board allows running the user application from RAM during program development, even while the board is disconnected and powered by the battery. The SMA output connector allows wired measurements of the RF output signal. **Note:** Instead of this board, some 434 MHz development kits may contain the pcb antenna version of this board, described in "1.2.2. Si4010 Key Fob Development Board 434 MHz" on page 7. 1.1.4. Si4355 RFstick 434 MHz receiver board (P/N 4355-LED-434-SRX) Receiver board factory-programmed with the simple receiver program srx_demo. It can be used for link testing with an Si4010 programmed with the rke_demo. 1.1.5. Programming Interface Board (P/N MSC-BA5) Adapter board for interfacing a customer PCB to the debug adapter. 1.1.6. 4010 Key Fob Demo Board 434 MHz without IC (P/N 4010-KFOB-434-NF) 1.1.7.
Key Fob Plastic Case (translucent grey) (P/N MSC-PLPB_1) 1.1.8. Toolstick Base Adapter (P/N Toolstick_BA) Debugging adapter compatible with Si4355 RFstick receiver board and the Si4010 development boards. 1.1.9. Si4010 sample, SOIC package (P/N Si4010-C2-GS) 1.1.10. USB Cable (P/N MSC-DKCS5) Cable to connect EC3 Debug Adapter to PC. 1.1.11. EC3 Debug Adapter (P/N EC3) Silicon Labs debugging adapter, used by other Silicon Labs’ MCU products as well, compatible with the development platform. 1.2. Other Boards The following boards are not part of the development kit but can be ordered separately from Silicon Labs. <table> <thead> <tr> <th>Part Number</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>4010-DKPB434-BS</td> <td>Si4010 SOIC key fob development board 434 MHz, SMA</td> </tr> <tr> <td>4010-DKPB_434</td> <td>Si4010 key fob development board 434 MHz</td> </tr> <tr> <td>4010-DKMX_434</td> <td>Si4010 matrix keyboard development board 434 MHz</td> </tr> <tr> <td>4010-KFOB-315-NF</td> <td>Si4010 key fob demo board 315 MHz w/o IC</td> </tr> </tbody> </table> 1.2.1. Si4010 SOIC Key Fob Development Board 434 MHz, SMA This board is similar to the board described in "1.1.3. Si4010 MSOP Key Fob Development Board 434 MHz, SMA (P/N 4010-DKPB434-BM)" but contains the SOIC package version of the Si4010. 1.2.2. Si4010 Key Fob Development Board 434 MHz This is also a key fob development board, but with a pcb antenna instead of the SMA connector. 1.3. Usage of the Key Fob Development Platform The Silicon Labs IDE communicates with the USB Debug Adapter through the USB bus. The following debugging scenarios are possible: 1. **EC3 debug adapter → Burning adapter → Si4010 socketed key fob development board** This setup is suitable for downloading, running, and debugging the program in RAM or burning the program in the NVM and running it. The antenna or measuring instrument can be connected through an SMA connector. 
Since sockets on the board allow use of unsoldered ICs, this is the ideal scenario for burning the NVM memory of the Si4010.

2. **EC3 debug adapter → Burning adapter → Si4010 MSOP key fob development board** This setup is suitable for downloading, running, and debugging the program in RAM. This board has a PCB antenna and a battery, so after downloading the program and starting the execution by disconnecting the IDE, the board can be physically disconnected from the programming interface and tested in mobile form. A switch is provided on the board to connect/disconnect the battery. The Toolstick Base Adapter can also be used in the two scenarios above as a debug adapter. It can be connected to the pcb edge connector of the burning adapter. *Note:* Although burning is also possible with this setup, it is not practical, since the Si4010 is soldered on the key fob development board.

3. **EC3 debug adapter → Programming adapter → User's own application** In this setup, the user can incorporate the debugging capabilities into the final application using a cheap 4-pin header connection.

### 2. Debugging an Application

To debug an application, the user is provided with the Silicon Laboratories IDE (Integrated Development Environment). The IDE has integrated help. This section is not a user manual for the IDE, but highlights the items that are important when working with the IDE.

#### 2.1. Installing the IDE and USB Debug Adapters

Download the Silicon Labs IDE (Integrated Development Environment) from the following URL: [http://www.silabs.com/products/mcu/Pages/SiliconLaboratoriesIDE.aspx](http://www.silabs.com/products/mcu/Pages/SiliconLaboratoriesIDE.aspx) and install it on your computer. The IDE gets installed into its own directory. The main executable file is `IDE.exe`. The IDE works with the USB Debug Adapter or the Toolstick Base Adapter, shown in the section above.
When the IDE recognizes a Silicon Labs USB debug adapter, it checks whether the adapter's internal firmware is compatible with the Si4010. If not, it notifies the user and requests permission to update the adapter's firmware. Silicon Labs also provides a program, `usb_debug_adapter_firmware_reset.exe`, to clear the adapter's firmware manually before connection to the IDE. The program resides in the same directory as the IDE main executable. For the Si4010 debugging chain, this manual adapter firmware clearing must be done for each USB adapter before it is first used with the key fob debugging chain. That operation needs to be done only once per USB Debug Adapter. The IDE will then program the correct firmware into the adapter.

The reset firmware executable will scan the USB ports and give the user a list of connected Silicon Labs USB adapters. The USB Debug Adapter name starts with EC. Users can have more than one USB adapter connected to the computer.

#### 2.2. Keil Toolchain Integration

The project files in the examples assume that the Keil toolchain is installed in the C:\Keil directory. The location of the Keil toolchain can be easily changed in the Silabs IDE in the Project → Tool Chain Integration menu. An evaluation version of the Keil toolchain can be downloaded from the Keil website at [http://www.keil.com/](http://www.keil.com/). This free version has a 2 kB code size limitation and starts the code at address 0x0800. The Keil free evaluation version can be unlocked to become a 4 kB version with no code placement limitation by following the directions given in application note AN104 about Keil toolchain integration and license management. The unlock code can be found on the WDS CDROM in the root folder in the Keil_license_number.txt file. Contact your Silicon Laboratories sales representative or distributor for application assistance.

2.3. IDE Features

The IDE allows the following:

1.
Download the OMF-51 linker output format (Keil BL51 linker output, for example) and match the source code lines with the compiled file. This allows source code debugging, including variable value viewing, setting breakpoints, single-stepping, etc. Note that the output of the Keil LX51 linker is not understood by the IDE.

2. Download the IntelHEX file for the application. When using the IntelHEX file, source code debugging is not available. The user can set a breakpoint for a specific code address by going through the Debug → Breakpoints (Alt+B) menu item. The user can also single-step through the disassembly of the loaded code.

3. Setting at least 4 breakpoints, with a possible maximum of 8. The actual number of breakpoints available is determined by the IDE from the factory setting of the chip.

4. Single-stepping through the disassembly of the code. If the OMF file is loaded, the single-stepping is matched with the source code.

5. Viewing and changing variables, SFR registers, XREG registers, and the contents of both DATA/IDATA RAM and CODE/XDATA RAM on the fly during debugging. When changes are made by the user in the corresponding windows, the user must press the Refresh Values (Alt+R) button on the toolbar to update the values in the device. Just changing values in the IDE will not automatically update them in the device.

2.4. IDE Debugging Session

The typical IDE debugging session consists of the following sequence:

1. Connect the IDE to the chip by hitting the Connect toolbar button or invoking the Debug → Connect menu item.

2. Download the OMF file either by hitting the Download code (Alt+D) toolbar button or from the Debug → Download object code menu item. The latter also allows IntelHEX download, but without the source code debugging capability.

3. After the code download, the device is automatically halted at address 0x0000 in CODE/XDATA RAM. Then the user can set breakpoints, single-step, animate, etc.

4.
The user can hit the Reset (Ctrl+R) toolbar button any time the device is halted (not running). The internal digital system level reset is invoked and the device goes through the boot sequence. The code previously loaded by the user into CODE/XDATA RAM is preserved, and the device is halted at address 0x0000 of CODE/XDATA RAM.

5. When a bug is found, the user can download a new OMF file whenever the device is halted. There is no need to disconnect the device from the debug chain or to hit reset. The download, item 2 above, will automatically reset the device after the new OMF/IntelHEX code download is finished.

It is very important to note that whenever the Disconnect toolbar button is hit or the Debug → Disconnect menu item is invoked, the debugging chain does the following:

• Enables the LED driver. During the debugging sessions the LED current driver is forcibly disabled.

• Clears all the breakpoints.

• Releases the device from halt and lets it run from the point where it was halted.

2.5. Important Note about Single-Stepping Over ROM Code

Single-stepping through the ROM code is disabled. Whenever the user encounters a call to the ROM API functions, he or she should use the Step Over (F10) toolbar button rather than the Step (F11) or Multiple Step button. Even though single-stepping through a ROM API function using the Step (F11) button works from the user's point of view, the CPU timing is modified, and real-time performance is not guaranteed when using the Step (F11) or Multiple Step buttons over the ROM API functions. Therefore, it is highly recommended to use the Step Over (F10) toolbar button when stepping over the ROM API functions in the IDE.

Single-stepping over the bMtp_Write() function using the Step (F11) or Multiple Step buttons may yield unpredictable results in the MTP (EEPROM) and is highly discouraged. One should use the Step Over (F10) tool, run to cursor, or set a breakpoint when debugging around the MTP write function.

2.6.
Device Version

1. The device ID information can be read in the IDE through Views → Debug Windows → Si4010 → XREG Regs. The last item, bREV_ID, is the device revision. The user can also call the API function bSys_GetRevId().

2. The trim version can be read by the Silicon Labs IDE as External Memory through Views → Debug Windows → External Memory at location 0x11D6. There is a macro bSys_TrimId_c defined in the headers for use in customer code as well. The user needs to know the TrimId before writing any code, so manual access to it is adequate. The trim version will rarely change, and customers will be notified about any change.

The provided NVM burner program reads both the bREV_ID device revision and the trim version bSys_TrimId in the [Device] tab after the burner is connected to the device.

2.7. Debugging an Application which Drives the LED

To maximize utilization of the package pins, the LED current driver output is shared with the debug chain clock signal C2CLK. To share the functionality and still be able to use the IDE for debugging, there are some limitations to note and rules to follow. The following figure shows the recommended connection of the USB 10-pin debug header to the device in the user application.

Note: The LED must be isolated by the 470 Ω resistor for the debug chain to work.

Facts about using the LED with the IDE chain:

1. The IDE chain can connect to the device only if the LED current driver is off and the LED is not lit.

2. Once the IDE chain is connected to the device, it blocks the device LED driver. Therefore, the application can be written in a normal fashion, using the LED as desired in the final application, without worry of being disconnected from the debug chain. The only limitation is that the LED will not be lit by the application during the IDE debug session. The user will still observe LED activity, but that activity is related to the debug chain communicating with the device, not the user application driving the LED.

3.
Once the IDE chain is disconnected from the device (for example, by pressing the Disconnect button in the IDE), the device is released from halt, and at the same time the blocking of the LED driver is removed. From that point on, the application behaves and runs as a regular application, and the LED activity reflects what the application desires to do with the LED.

4. If the user wants to reconnect the IDE to the device, the only requirement is that the LED must not be lit by the application and the C2 debug interface must be active (not turned off by the application). Therefore, if the device user software is stuck in an infinite loop driving the LED constantly, or the C2 interface was turned off by the application, the IDE chain will not be able to connect to the device. In such a situation, the device power has to be cycled to invoke an internal power-on reset. (See item 1 above.) Cycling the power to the part in this context means either physical removal of the power to the device or calling the vSys_Shutdown() function from within the application, which achieves the same result.

2.8. Hardware Issue with Debugging an LED Application

There is an issue with the LED turning on and off and the functionality of the GPIO4. There is no issue when the part is programmed as the Run part and runs the final application code. Therefore, the issue affects only application development. There are several possible software workarounds, depending on the approach the user wants to take.

2.8.1. Application LED Control

The user can control the LED intensity and whether the LED is on or off. The LED intensity has 4 values, 0 to 3: off, 0.3 mA, 0.6 mA, and 1 mA current. The user can set the intensity at any time, but the LED is not going to be turned on until the GPIO_LED is set to 1. The GPIO_LED is an alias for the P0.5 bit. After reset the P0.5 bit is set to 1, so it is recommended that the user set GPIO_LED = 0 at the beginning of the user application.
To turn the LED off at the very beginning of the user application:

```c
/* Clear the GPIO_LED .. reset will set this bit! */
GPIO_LED = 0;
```

To turn the LED on and off inside the user application:

```c
/* Set LED intensity .. acceptable values are 0 (off) or 1, 2, and 3 */
vSys_LedIntensity( 3 );
...
/* To turn the LED on at the currently set intensity */
GPIO_LED = 1;
...
/* To turn the LED off, keep the intensity setting */
GPIO_LED = 0;
```

The intensity setting can be changed at any time, even when GPIO_LED = 1. This is basically how the LED control operates. This approach will work when the part status is finalized as the Run device, since for that program level the C2 interface is turned off after the boot by the boot routine. However, when the code above is used for a device in the Factory or User programming state, the GPIO4 will stop working after the first LED blink. The LED must actually be turned on and off by the application (blink) for this problem to appear.

2.8.2. Solution 1: Living with the Limitation

The simplest solution is to know about the issue and decide to live with it. After the first LED blink, the GPIO4 will not work. In this scenario, the user may decide to test the GPIO4 only when the part is fully programmed as the Run part.

2.8.3. Solution 2: Controlled Compilation

The user may use a `#define` statement to define a LED "on" value. For button-press debugging, when the LED can stay off, the code is compiled with the value set to 0, so the LED will never light up and the GPIO4 will always function. For debugging the LED, and for the final application compilation for the Run state of the device, the user compiles the application with the LED "on" value set to 1. For example:

```c
#ifdef DEBUG
#define gLedOnValue_c 0
#else
#define gLedOnValue_c 1
#endif

/* Clear the GPIO_LED off after reset .. reset will set this bit! */
GPIO_LED = 0;

/* Set LED intensity .. acceptable values are 0 (off) or 1, 2, and 3 */
vSys_LedIntensity( 3 );
...
/* Turn the LED on at the currently set intensity */
GPIO_LED = gLedOnValue_c;
...
/* Turn the LED off, keep the intensity setting */
GPIO_LED = 0;
```

One advantage of this solution is that the code size is identical in both cases, Debug or Run.

2.8.4. Solution 3: Dynamic C2 Disable (Recommended)

The GPIO4 issue manifests itself when the LED is actually turned on and off by the application: the LED physically blinks (it is not blocked by a connection to the IDE debug chain), and the C2 interface is active and enabled. If we disable the C2 interface when the device is not connected to the IDE chain, before the LED is lit, then the GPIO4 problem will not occur. To do that, the user must add the following function to the user application and use it to turn the LED on.

C function to turn the LED on:

```c
/*
 * INCLUDES:
 */
#include "si4010.h"

/*
 * VISIBLE FUNCTIONS:
 */
void vLedOn (void)
/*
 * FUNCTION DESCRIPTION:
 *   Turn the LED on with disabling of the C2.
 *   The C2 is disabled only if the part is not connected
 *   to the IDE debugging chain.
 */
{
  GPIO_LED = 1;
  if ( 0 != (RBIT_DATA & M_GPIO_LED_DRIVE) )
  {
    PROTO_CTRL |= M_C2_OFF;
  }
}
```

The assembly version of the same function follows, assuming that the file name is ledon.a51, for the Keil toolchain.
```
; INCLUDES:
$NOLIST
$INCLUDE (si4010.inc)
$LIST

; SEGMENTS:
NAME LEDON

; EXTERNALS AND PUBLIC:
PUBLIC vLedOn

; CODE:
vLedOn:
    setb  GPIO_LED
    mov   A, RBIT_DATA
    jnb   ACC.B_GPIO_LED_DRIVE, NoC2Disable
    orl   PROTO_CTRL, #M_C2_OFF
NoC2Disable:
    ret

END
```

The function is able to determine whether the device is connected to the IDE chain. If it is not connected, the function turns the C2 interface off. Once that is done, it is not possible to turn the C2 interface back on unless the power to the device, or at least to the digital portion of the device, is cycled. See the discussion below about advantages and disadvantages.

The following is an example of how to use the vLedOn() function:

```c
/* Clear the GPIO_LED off after reset .. reset will set this bit! */
GPIO_LED = 0;

/* Set LED intensity .. acceptable values are 0 (off) or 1, 2, and 3 */
vSys_LedIntensity( 3 );
...
/* Turn the LED on at the currently set intensity */
vLedOn();
...
/* Turn the LED off, keep the intensity setting */
GPIO_LED = 0;
```

Following are the advantages and disadvantages of this solution:

**Advantages:**

1. Uniform code, no need for conditional compilation; the GPIO4 and LED will function as expected under all scenarios.

2. The user can use GPIO_LED = 1 in the code, which will block the GPIO4. But a subsequent call to vLedOn() will clear the blocking of the GPIO4, and it will start functioning normally again.

**Disadvantages:**

1. Once the LED has physically blinked, it is not possible for the IDE to connect to the part until the power is cycled or vSys_Shutdown() is called from within the application. It is up to the user to make sure that the power is cycled.

2. If the part is programmed as the User part with the option to execute the user code automatically after the boot without stopping, then the user application must not use the vLedOn() function just to blink the LED without user input.
If the application blinks the LED on its own, the IDE will not be able to connect to the part, since the C2 interface is disabled at the time the LED is turned on. If the user does not use the option to execute user code without stopping after the boot, there is no problem, since the device will load the User code after the reset and wait for further instructions, essentially waiting for the IDE to connect to it without executing the User code.

3. The `vLedOn()` function code is bigger than a simple `GPIO_LED = 1` and is not necessary for the Run part, so conditional compilation for the LED bug may still be an option.

One recommendation when using the `vLedOn()` function is that the user application include monitoring for several buttons pressed simultaneously. If that combination occurs, `vSys_Shutdown()` is invoked and the IDE chain will be able to connect to the part again. That satisfies the power cycling requirement without actually cycling the physical power to the device.

2.9. Notes about USB Adapter Use

The following facts are worth noting when using the IDE debug chain:

1. Whenever the Reset button is pressed in the IDE, the system reset is invoked and the part goes through a boot sequence.

2. Every time new code, in OMF or HEX format, is downloaded to the part through the IDE, the IDE issues a system reset and the device reboots. The content of the RAM memories is not touched by the boot, with the exception of the API reserved regions in CODE/XDATA and DATA/IDATA memories. The register banks RB0, RB1, and RB3 are cleared by the boot routine.

3. Whenever the ToolStick adapter is directly connected to the key fob design platform and the IDE is connecting to the part, the GPIO0 will be forcibly driven to 1 for about 260 ms around the beginning of the connection sequence.
In the Silicon Labs-provided key fob platform, the GPIO0 is isolated by a resistor, so if the user is pressing a GPIO0 button during the connection sequence, the GPIO0 value will be seen as 1 by the internal CPU during the IDE connection to the device.

4. It is recommended that the user use the Burning adapter board along with the USB Debug Adapter.

3. Examples Provided

There are 6 demonstration examples provided with the development kit documentation pack:

1. aes_demo
2. fcast_demo
3. fstep_demo
4. tone_demo
5. keyfob_demo
6. rke_demo

For convenience, all are precompiled and ready to be used without compilation. The user just needs to go to the <name>_demo/bin directory and open the *.wsp project file with the Silicon Labs IDE. Each demo can be built and debugged from within the Silicon Labs IDE.

3.1. AES demo - aes_demo

AES example with a timer usage example. The timer counts the number of system clock cycles needed to run encryption, decryption key preparation, and decryption.

3.2. Frequency casting demo - fcast_demo

This example shows the main flow when using the main vFCast_Tune tuning function. It also shows how to transmit a predefined data packet when a button is pressed. The buttons are not debounced in this simple example.

3.3. Frequency casting two step demo - fstep_demo

This example shows the main flow when the user wants to switch between several frequencies quickly. It is possible to call vFCast_Tune() for several frequencies in advance, collect the calculated information, and then quickly apply it during transmission. This is for cases when the 5-6 ms spent in vFCast_Tune() is prohibitive for switching between frequencies.

3.4. Tone (CW) generation demo - tone_demo

This example shows the steps to generate a continuous wave (tone) at a desired frequency. There are two main files compiled into two separate example applications:

**tone_demo** Run the main tune vFCast_Tune once, then use only fine tuning to track temperature changes.
**tone_demo_ptune** Periodic tuning: run the main tune vFCast_Tune every minute and use fine tuning only in between the main tuning events. However, there will be about a 6 ms interruption of the output during the main tuning, once per minute.

The Keil µVision project tone_demo.Uv2 covers both targets:

**tone_demo**
**tone_demo_ptune**

For the Silicon Labs IDE there is only one target per *.wsp file, so there are two project files:

**tone_demo.wsp**
**tone_demo_ptune.wsp**

3.5. Simple key fob demo - keyfob_demo

This example demonstrates a basic key fob application transmitting a packet for every button push. Packets can be received by a 4355-LED-XXX-SRX board. Buttons are debounced using the Button Service API functions.

3.6. RKE key fob demo - rke_demo

An advanced key fob demo using AES encryption, a rolling counter in MTP memory, battery voltage measurement, and the production ID of the chip as the node address. This is the firmware used in the Si4010 Demo Key Fobs, available in Silicon Labs key fob demo kits.

CONTACT INFORMATION

Silicon Laboratories Inc.
400 West Cesar Chavez
Austin, TX 78701
Tel: +1 (512) 416-8500
Fax: +1 (512) 416-9669
Toll Free: +1 (877) 444-3032

Please visit the Silicon Labs Technical Support web page and register to submit a technical support request.
Almost all Languages are undecidable

Set of all languages: $|S| = |\mathcal{P}(\{0, 1\}^*)| = |\mathbb{R}|$

Set of all decidable languages: $|\{\langle M \rangle \in \{0, 1\}^* \mid M \text{ is a decider TM}\}| = |\{0, 1\}^*| = |\mathbb{N}|$

$\Rightarrow$ Most languages do not have a TM deciding them

Question: Is it just weird languages that no one would care about which are undecidable?

Answer (due to Turing, 1936): Sadly, no. There are many natural languages one would like to compute but which are undecidable.

Many interesting Languages are undecidable

In particular, problems related to non-wimpy / Turing-equivalent computation are undecidable.

Example: Program Equivalence

Given a program P and a program P', we would like to automatically decide whether both do the same thing. Formally:

\[ \text{EQUIV}_{TM} = \{ \langle P, P' \rangle \mid P \text{ and } P' \text{ are Python programs and } L(P) = L(P') \} \]

Useful for:
- Compiler optimization
- Matching programs to their specification
- Autograder for 112 or 251

Decidable Problems

\[ \text{ACCEPT}_{DFA} = \{ \langle D, x \rangle \mid D \text{ is a DFA that accepts } x \} \]
\[ \text{SELF-ACCEPT}_{DFA} = \{ \langle D \rangle \mid D \text{ is a DFA that accepts } \langle D \rangle \} \]
\[ \text{EMPTY}_{DFA} = \{ \langle D \rangle \mid D \text{ is a DFA that accepts no } x \} \]
\[ \text{EQUIV}_{DFA} = \{ \langle D, D' \rangle \mid D \text{ and } D' \text{ are DFAs and } L(D) = L(D') \} \]

Theorem: $\text{ACCEPT}_{DFA}$, $\text{SELF-ACCEPT}_{DFA}$, $\text{EMPTY}_{DFA}$, and $\text{EQUIV}_{DFA}$ are decidable.

Undecidable Problems

\[ \text{ACCEPT} = \{ \langle M, x \rangle \mid M \text{ is a TM that accepts } x \} \]
\[ \text{SELF-ACCEPT} = \{ \langle M \rangle \mid M \text{ is a TM that accepts } \langle M \rangle \} \]
\[ \text{EMPTY} = \{ \langle M \rangle \mid M \text{ is a TM that accepts no } x \} \]
\[ \text{EQUIV} = \{ \langle M, M' \rangle \mid M \text{ and } M' \text{ are TMs and } L(M) = L(M') \} \]

Theorem: \[ \text{ACCEPT}, \text{SELF-ACCEPT}, \text{EMPTY} \text{ and } \text{EQUIV}
\text{ are undecidable.} \] A simple undecidable language Autograder / Hello World problem: Given a program P, is it terminating and outputting "Hello World"? \[ \text{HELLO} = \{ \langle M \rangle \mid M \text{ is a TM that outputs ``Hello World'' when run on the empty input} \} \] Hello Problem Instance #1: This C program prints out all the lyrics of "The Twelve Days Of Christmas." Hello Problem Instance #2:

```python
def HelloWorld():
    t = 3
    while True:
        for n in range(3, t + 1):
            for x in range(1, t + 1):
                for y in range(1, t + 1):
                    for z in range(1, t + 1):
                        if x**n + y**n == z**n:
                            return "Hello World"
        t += 1
```

Terminates and outputs "Hello World" if and only if Fermat's Last Theorem is false. Hello Problem Instance #3:

```python
def is_prime(p):
    return p > 1 and all(p % d != 0 for d in range(2, int(p**0.5) + 1))

number_to_test = 2
flag = True
while flag:
    flag = False
    number_to_test += 2
    for p in range(2, number_to_test):
        if is_prime(p) and is_prime(number_to_test - p):
            flag = True
            break
print("HELLO WORLD")
```

Terminates and outputs "HELLO WORLD" if and only if Goldbach's Conjecture is false. A simple undecidable language **Autograder / Hello World problem:** Given a program P, is it terminating and outputting "Hello World"? HELLO = \{\langle M \rangle \mid M is a TM that outputs "Hello World" on the empty input ε\} **Halting problem:** Given a program P, is it terminating? HALT_ε = \{\langle M \rangle \mid M is a TM terminating on ε\} HALT = \{\langle M, x \rangle \mid M is a TM terminating on x\} The Halting Problem is Undecidable (1936) Theorem: The language \[ \text{HALT} = \{ \langle M, x \rangle : M \text{ is a TM terminating on } x \} \] is undecidable. Proof: Assume for the sake of contradiction that \( M_{\text{HALT}} \) is a decider TM which decides HALT. The Halting Problem is Undecidable Here is the description of another TM called \( D \), which uses \( M_{\text{HALT}} \) as a subroutine: \[ D: \quad \text{Given as input } \langle M \rangle, \text{ the encoding of a TM } M: \begin{align*} & D \text{ executes } M_{\text{HALT}}(\langle M, \langle M \rangle \rangle).
\\ & \text{If this call accepts, } D \text{ enters an infinite loop.} \\ & \text{If this call rejects, } D \text{ halts (say, it accepts).} \end{align*} \] In other words… \[ D(\langle M \rangle) \quad \text{loops if } M(\langle M \rangle) \text{ halts}, \\ \text{halts if } M(\langle M \rangle) \text{ loops.} \] The Halting Problem is Undecidable Assume $M_{\text{HALT}}$ is a decider TM which decides $\text{HALT}$. We can use it to construct a machine $D$ such that $$D(\langle M \rangle) \begin{cases} \text{loops} & \text{if } M(\langle M \rangle) \text{ halts}, \\ \text{halts} & \text{if } M(\langle M \rangle) \text{ loops}. \end{cases}$$ Time for the contradiction: **Does $D(\langle D \rangle)$ loop or halt?** By definition, if it loops it halts, and if it halts it loops. Contradiction. BTW: This is essentially just Cantor's Diagonal Argument. The set of all TMs is countable, so list it (entry $(i,j)$ records whether $M_i$ halts on input $\langle M_j \rangle$): <table> <thead> <tr> <th></th> <th>$\langle M_1 \rangle$</th> <th>$\langle M_2 \rangle$</th> <th>$\langle M_3 \rangle$</th> <th>$\langle M_4 \rangle$</th> <th>...</th> </tr> </thead> <tbody> <tr> <td>$M_1$</td> <td>halts</td> <td>loops</td> <td>halts</td> <td>loops</td> <td>...</td> </tr> <tr> <td>$M_2$</td> <td>loops</td> <td>loops</td> <td>loops</td> <td>loops</td> <td>...</td> </tr> <tr> <td>$M_3$</td> <td>halts</td> <td>loops</td> <td>halts</td> <td>halts</td> <td>...</td> </tr> <tr> <td>$M_4$</td> <td>halts</td> <td>halts</td> <td>halts</td> <td>loops</td> <td>...</td> </tr> <tr> <td>$M_5$</td> <td>halts</td> <td>loops</td> <td>halts</td> <td>loops</td> <td>...</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table> How could $D$ be on this list? What would the diagonal entry be?
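The self-referential construction of $D$ can be sketched in Python with a hypothetical halting oracle. The name `m_halt` and its stub body are assumptions for illustration: no correct, always-terminating implementation of it can exist, which is exactly what the diagonal argument shows.

```python
def m_halt(program_source, input_value):
    """Hypothetical oracle: would return True iff the program described
    by program_source halts on input_value.  By the theorem above, no
    such total, always-correct function exists -- this stub just marks
    where the impossible subroutine would go."""
    raise NotImplementedError("a HALT decider cannot exist")

def D(program_source):
    # D inverts whatever behaviour the oracle predicts for a program
    # run on its own source code.
    if m_halt(program_source, program_source):
        while True:      # predicted to halt  -> loop forever
            pass
    else:
        return "halt"    # predicted to loop  -> halt immediately

# Feeding D its own source forces the contradiction:
# D(<D>) halts iff m_halt says D(<D>) loops, and vice versa.
```

Because the oracle is only a stub here, calling `D` simply raises; the point of the sketch is the control-flow inversion, not an executable decider.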
The Halting Problem: given some code, determine if it terminates. It's not: "we don't know how to solve it efficiently". It's not: "we don't know if it's a solvable problem". We know that it is unsolvable by any algorithm, any mechanism, any human being, anything in this world and any (physical) world we can imagine. **ACCEPT is undecidable** Theorem: \[ \text{ACCEPT} = \{\langle M, x \rangle \mid M \text{ is a TM which accepts } x\} \] is undecidable. We could use the same diagonalization proof for ACCEPT. But maybe there is an easier way… In particular, ACCEPT seems clearly harder than HALT. After all, how can I decide if a program accepts if I don't even know whether it halts? **ACCEPT is undecidable** New Proof Strategy: Try to show that ACCEPT is at least as hard as HALT — equivalently, HALT is at most as hard as ACCEPT — equivalently, HALT would be easy if ACCEPT were easy. Theorem: \[ \text{ACCEPT} = \{ \langle M, x \rangle \mid M \text{ is a TM which accepts } x \} \] is undecidable.
Proof (by contradiction): Assume ACCEPT is decidable and show that HALT would then also be decidable. Suppose \( M_{\text{ACCEPT}} \) is a TM deciding ACCEPT. Here is a description of a TM deciding HALT: "Given \( \langle M, x \rangle \), run \( M_{\text{ACCEPT}}(\langle M, x \rangle) \). If it accepts, then accept. Otherwise, reverse the accept & reject states in \( M \), forming \( \tilde{M} \). Run \( M_{\text{ACCEPT}}(\langle \tilde{M}, x \rangle) \). If it accepts (i.e., \( M \) rejects \( x \)), then accept. Else reject." New Proof Strategy summarized: Want to show: Problem L is undecidable. New Proof Strategy: Deciding L is at least as hard as deciding HALT \[ \iff \] HALT would be easy if L were easy \[ \iff \] HALT reduces to L \[ \iff \] HALT \( \leq_T \) L Reductions Definition: Language A reduces to language B means: "It is possible to decide A using an algorithm for deciding B as a subroutine." Notation: \[ A \leq_T B \quad (T \text{ stands for Turing}) \] Think, "A is no harder than B". Reductions Fact: Suppose $A \leq_T B$, i.e., A reduces to B. If B is decidable, then so is A. We actually used the contrapositive: Fact: Suppose $A \leq_T B$, i.e., A reduces to B. If A is undecidable, then so is B. Note that "$A \leq_T B$" is a stronger statement than merely proving that A is decidable under the assumption that B is decidable. Reductions Reductions are the main technique for showing undecidability. Interesting: we use a positive statement, i.e., the existence of a reduction algorithm, in order to prove a negative (impossibility) result. Reductions (HALT $\leq_T$ ACCEPT) Theorem: HALT $\leq_T$ ACCEPT. Proof: Suppose $M_{\text{ACCEPT}}$ is a subroutine deciding ACCEPT. Here is a description of a TM deciding HALT: "Given $\langle M, x \rangle$, run $M_{\text{ACCEPT}}(\langle M, x \rangle)$. If it accepts, then accept. Otherwise, reverse the accept & reject states in $M$, forming $M'$. Run $M_{\text{ACCEPT}}(\langle M', x \rangle)$. If it accepts (i.e., $M$ rejects $x$), then accept. Else reject." More Reductions (ACCEPT $\leq_T$ ALL) Theorem: $\text{ALL} = \{\langle M \rangle \mid M \text{ accepts all strings}\}$ is undecidable.
Proof: (ACCEPT $\leq_T$ ALL) Suppose $M_{\text{ALL}}$ is a subroutine deciding ALL. Here is a description of a TM deciding ACCEPT: "Given $\langle M, x \rangle$, write down the description $\langle M_x \rangle$ of a TM $M_x$ which does this: 'Overwrite the input with x and then run M.' Call subroutine $M_{\text{ALL}}$ on input $\langle M_x \rangle$. Accept if it accepts; reject otherwise." (Note that $M_x$ behaves the same on all inputs, and in particular $M_x$ accepts all strings if and only if $M$ accepts $x$.) More Reductions (ACCEPT $\leq_T$ EMPTY) Theorem: We also have ACCEPT $\leq_T$ EMPTY. Proof: (ACCEPT $\leq_T$ EMPTY) Suppose $M_{\text{EMPTY}}$ is a subroutine deciding EMPTY. Here is a description of a TM deciding ACCEPT: "Given $\langle M, x \rangle$, write down the description $\langle M_x \rangle$ of a TM $M_x$ which does this: 'Overwrite the input with x and then run M.' Call subroutine $M_{\text{EMPTY}}$ on input $\langle M_x \rangle$. If it accepts, reject; else accept." More Reductions (ALL, EMPTY $\leq_T$ EQUIV) Theorem: $\text{EQUIV} = \{\langle M, M' \rangle \mid L(M) = L(M')\}$ is undecidable. Proof: (ALL $\leq_T$ EQUIV and EMPTY $\leq_T$ EQUIV) Suppose $M_{\text{EQUIV}}$ is a subroutine deciding EQUIV. Here is a description of a TM deciding ALL: "Given $\langle M \rangle$, write down the description $\langle M' \rangle$ of a TM $M'$ which always accepts. Then call subroutine $M_{\text{EQUIV}}$ on input $\langle M, M' \rangle$." (For EMPTY $\leq_T$ EQUIV, use an $M'$ which always rejects instead.) Poll – Test your Intuition We just showed: \[ \text{HALT} \leq_T \text{ACCEPT} \leq_T \text{EMPTY} \leq_T \text{EQUIV} \] and \[ \text{ACCEPT} \leq_T \text{ALL} \leq_T \text{EQUIV} \] Which of the following do you believe also hold? \[ \text{HALT} \leq_T \text{EMPTY} \] \[ \text{HALT} \leq_T \text{EQUIV} \] \[ \text{EMPTY} \leq_T \text{ACCEPT} \] \[ \text{EQUIV} \leq_T \text{EMPTY} \] \[ \text{EQUIV} \leq_T \text{HALT} \] More Reductions (EMPTY \(\leq_T\) HALT) Theorem: HALT, ACCEPT, EMPTY are all equally hard. Proof: (EMPTY \(\leq_T\) HALT) Suppose \(M_{\text{HALT}}\) is a subroutine deciding HALT.
Here is a description of a TM deciding EMPTY: "Given \(\langle M \rangle\), write down the description \(\langle M' \rangle\) of a TM \(M'\) which does this: 'For \(t = 1\) to \(\infty\), run \(M\) on each string of length at most \(t\) for \(t\) steps. If any execution terminates and accepts, then terminate (and accept).' Then call subroutine \(M_{\text{HALT}}\) on input \(\langle M', \epsilon \rangle\), but reverse the accept/reject answer." More Undecidability Theorem: HALT, ACCEPT, EMPTY are all equally hard. What about EQUIV and ALL? Fun Fact #1: EQUIV and ALL are harder than HALT, and so are \[ \text{TOTAL} = \{ \langle M \rangle \mid M \text{ halts on all inputs } x \} \] \[ \text{FINITE} = \{ \langle M \rangle \mid L(M) \text{ is finite} \} \] and in fact all these problems are equally hard. Fun Fact #2: There is an infinite hierarchy of harder and harder undecidable languages. More Undecidability Fun Fact #2: There is an infinite hierarchy of harder and harder undecidable languages (which however still only covers countably many languages). How does one define / construct this hierarchy? Look at TMs which have a subroutine/oracle that solves HALT. These oracle TMs can solve ACCEPT and other equivalent problems easily, BUT they cannot decide if an oracle TM given to them halts. This makes the HALTing problem for oracle TMs even harder. … Question: Do all undecidable problems involve TMs? Answer: No! Some very different problems are undecidable! Cellular Automata Input: A CA with its initial configuration, e.g. a Game of Life pattern. Theorem: Deciding whether the input CA loops is an undecidable problem. Post's Correspondence Problem Input: A finite collection of "dominoes", having strings written on each half. E.g. dominoes such as [a / ab], [a / cabc], [bcc / c]. Definition: A match is a sequence of dominoes, repetitions allowed, such that top string = bottom string.
Match: a sequence of dominoes whose concatenated top strings and concatenated bottom strings spell the same word, here = abccabcc. Post's Correspondence Problem Task: Output YES if and only if there is a match. Theorem (Post, 1946): Undecidable. There is no algorithm solving this problem. (More formally, PCP = \(\{\langle \text{Domino Set} \rangle : \text{there's a match}\}\) is an undecidable language.) Two-second proof sketch: Given a TM M, you can make a domino set such that the only matches are execution traces of M which end in the accepting state. Hence $\text{ACCEPT} \leq_T \text{PCP}$. Wang Tiles Input: A finite collection of "Wang tiles" (squares) with colors on the edges (figure: four example tiles). Task: Output YES if and only if it's possible to tile an infinite grid with copies of them, where touching sides must color-match. Theorem (Berger, 1966): Undecidable. Modular Systems Input: A finite set of rules of the form "from $ax+b$, can derive $cx+d$", where $a,b,c,d \in \mathbb{Z}$. Also given are a starting integer $u$ and a target $v$. Task: Decide if $v$ can be derived starting from $u$. E.g.: "from $2x$ derive $x$", "from $2x+1$ derive $6x+4$", target $v = 1$. Starting from $u$, this is equivalent to asking if the "3n+1 problem" halts on $u$. Theorem (Börger, 1989): Undecidable. Richardson's Problem Input: A set S of rational numbers. What you can do: Make an expression E using the numbers in S, the numbers π and ln(2), the variable x, and the operations +, −, ×; sin, exp, abs. Question: Can you make an E such that E ≡ 0?
Theorem (Richardson, 1968): Undecidable. Mortal Matrices Input: Two 21×21 matrices of integers, A & B. Question: Is it possible to multiply A and B together (multiple times, in any order) to get the 0 matrix? Hilbert's 10th Problem (Matiyasevich, Robinson, Davis, Putnam) Input: A multivariate polynomial with integer coefficients. Question: Does it have an integer root? Undecidable. Question: Does it have a real root? Decidable. Question: Does it have a rational root? Not known to be decidable or undecidable. Study Guide Definitions: Halting and other problems. Theorems/proofs: undecidability of HALT; the many reduction proofs. Practice: diagonalization; reductions; programming with TMs.
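To make the PCP definition above concrete, a bounded brute-force search can look for a match over a small domino set. The set below is chosen for illustration only (it is an assumption of this sketch, not necessarily the exact set on the slides). Note the depth cap: since PCP is undecidable, an unbounded search can confirm a match but can never, in general, refute one.

```python
from itertools import product

def pcp_match(dominoes, max_len=8):
    """Brute-force PCP search: try every sequence of domino indices
    (repetition allowed) up to length max_len, and return the first
    sequence whose concatenated tops equal the concatenated bottoms.
    Returns (sequence, matched_string), or None if no match is found
    within the bound."""
    for n in range(1, max_len + 1):
        for seq in product(range(len(dominoes)), repeat=n):
            top = "".join(dominoes[i][0] for i in seq)
            bot = "".join(dominoes[i][1] for i in seq)
            if top == bot:
                return seq, top
    return None

# Illustrative domino set (hypothetical, chosen so a short match exists):
dominoes = [("a", "ab"), ("ab", "cabc"), ("bcc", "c"), ("cc", "c")]
result = pcp_match(dominoes)
print(result)  # a match exists: tops and bottoms spell the same string
```

One reasonable design choice here is plain enumeration with `itertools.product`; a more serious searcher would do BFS over "overhang" states (the unmatched suffix of the longer side), which prunes sequences whose tops and bottoms already disagree.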
Empirical Research on Customer Communication Challenges in Companies Adopting Agile Practices Paolo Ciancarini\textsuperscript{2}, Shokhista Ergasheva\textsuperscript{1}, Ilyuza Gizzatullina\textsuperscript{1}, Vladimir Ivanov\textsuperscript{1}, Sergey Masyagin\textsuperscript{1} and Giancarlo Succi\textsuperscript{1} \textsuperscript{1}Innopolis University, Innopolis, Russia \textsuperscript{2}University of Bologna, Italy Keywords: Software Requirements Engineering, Software Metrics, Customer Communication, Agile Methodologies. Abstract: One of the most critical aspects of the software development process is Requirements Engineering, and in particular the definition of correct and understandable requirements in Agile methodologies. Hence, Requirements Engineering in Agile directly affects overall project success. This paper presents a research study on the usage of Agile methods in a set of industrial companies located in Russia. The survey gives insights into different aspects of the method: communication challenges and issues arising during the Software Requirements Engineering phase, in particular the challenges in communication with customers. To investigate these issues, the paper presents an analysis of the state of the art, done with the help of a research survey. The results of the interview sessions are summarized, and a set of suggestions to overcome the challenges is proposed. 30 representatives from 20 different companies, mainly Product Owners and Product Managers, participated in the survey. As the results indicate, communication is always a key challenge for the companies. The analysis of particular qualities of the communication field, in the context of a rapidly changing software development environment, helped to define the outcomes related to customer communication.
1 INTRODUCTION Agile methodologies were formalized a couple of decades ago to address the so-called "software crisis" (Maurer et al., 1999; Kivi et al., 2000; Succi et al., 2001a; Sillitti et al., 2002; Succi et al., 2002; Pedrycz et al., 2011; Sillitti et al., 2012; Janes and Succi, 2014; Coman et al., 2014). One of their key approaches has been a close and continuous involvement of the customer throughout the Requirements Management phase. The practice introduces customer collaboration and feedback on each development iteration, as well as adaptiveness and response to change. Although both Agile and traditional methodology practices have been adopted for decades, the Requirements Engineering phase of the Software Development Life Cycle remains an issue of paramount importance (Baruah, 2015). Among the recent studies of Requirements Engineering in Agile methodologies concerning customer collaboration, most are centered around customer collaboration challenges. Some authors emphasize the importance of face-to-face communication over written specifications and describe the challenges occurring in customer-related agile requirements engineering activities (Cao and Ramesh, 2008). The challenges claimed in that paper include customer availability, consensus among customer groups, customer trust in the agile team, and many others. Much of the research done so far on the collaboration between customer and production team may turn out to be deficient because of the aging of the approaches taken towards communication between customer and development team: with modern communication channels such as social networks, messengers and platforms for easier connection, physical availability no longer needs to be taken into consideration.
The research survey mainly focuses on issues related to customer communication during the Requirements Engineering process in companies practising Agile. It includes the definition of the challenges in customer communication, and it gives suggestions to overcome and minimize the issues. The paper starts with a discussion of related work in Section 2, covering contributions in the area of Requirements Engineering and the collaboration of the customer with the development team. Section 3 illustrates the methodology underlying this paper, the research experiments, the interview process and the evaluation details. The results and analysis concerning the conducted interviews, the challenges derived in the area and the solutions suggested for the stated issues are explained in detail in Section 4. Finally, Section 5 discusses the validity and threats of this study and concludes the results, and Section 6 gives directions for future research. 2 RELATED WORK In this domain, dozens of publications have described the activities to be conducted in the Requirements Engineering process which require customer collaboration. According to Paetsch et al. (2003), agile methodology adoption includes the following main activities during the Requirements Engineering process (Paetsch et al., 2003): 1. Requirements Elicitation - requirements can be derived in the form of interviews, use cases, user scenarios, observation and social analysis, focus groups, and prototyping of the software. 2. Requirements Analysis - the main techniques for analysis are Joint Application Development (JAD), requirements prioritization and modeling. 3. Documentation - the baseline for evaluating the subsequent product and process. 4. Requirements Validation - requirements review, requirements testing and validating whether the requirements are acceptable for the system to be implemented. 5.
Requirements Management - concerned with change and version control, requirements tracing and requirements status tracking. Abdullah et al. (2011) identify three major activities of requirements engineering in the context of user story exploitation: gathering, clarifying, and evolving a story card. Each of these activities involves active collaboration with customers. Bjarnason et al. (2011) also highlighted the core Requirements Engineering process activities performed in collaboration with active customers (or their representatives) (Bjarnason et al., 2011), in particular: (1) One Continuous Scope Flow, where product backlog prioritization and update is performed for both the business and the development side; (2) Cross-functional Development Teams, where the customer is included in the requirements definition, implementation and testing phases; (3) Integrated Requirements Engineering, where requirements definition and documentation are done simultaneously with the design and development phases; (4) Gradual and Iterative Detailing of the requirements refinement process; (5) documentation of requirements in the form of User Stories and Acceptance Criteria. Korkala et al. (2006) describe the importance of effective communication and feedback in agile development and investigate customer-developer communication. As the authors conclude, the main challenge lies in the selection of the communication channel: less informative communication channels yield a higher defect rate. As a result, the paper suggests that agile teams pay close attention to the means of communication they select in order to understand client needs properly. There are also several Systematic Literature Reviews investigating the challenges of the Requirements Engineering process in Agile. One such SLR was conducted by Irum Inayat et al. (2015) (Inayat et al., 2015).
The paper describes particular challenges of traditional as compared to agile methodology practice during the Requirements Engineering process. Traditional methodology challenges in the RE process include communication gaps, over-scoping, requirements validation, requirements documentation, and rare customer involvement. The agile RE process challenges addressed in the paper include customer availability, customer inability and agreement, contractual limitations, requirements volatility, requirements change and change evaluation, and the associated prediction of quality and productivity (Marino and Succi, 1989; Velario et al., 1997; Vernazza et al., 2000; Musilek et al., 2002; Sillitti et al., 2004; Clark et al., 2004; Scotto et al., 2004; Pedrycz and Succi, 2005; Ronchetti et al., 2006; Scotto et al., 2006; Moser et al., 2008a; Moser et al., 2008b; Pedrycz et al., 2012). Another SLR, Schön et al. (2017) (Schön et al., 2017), reviews papers concentrating on stakeholder and user involvement. The authors highlight several approaches for involving customers in agile requirements engineering processes: the combination of XP with co-design sessions (Bellucci et al., 2015); Qualitative/Quantitative Customer-driven Development (QCD) (Olsson and Bosch, 2015); organizing additional roles in agile development, such as business users and the Agile-UCD Specialist (AUS), who uses a usability-pattern-based requirement-analysis method (Dragicevic et al., 2014); and Agile Software Development (ASD) with Participatory Design (Liskin et al., 2014). (Sillitti et al., 2005) mention problems related to the customer's uncertainty in requirements engineering in ASD (Agile Software Development), which shows more significant results compared to document-driven software development.
The problems listed in the papers are cognitive limits of memory, attention and understanding; lack of information and knowledge; problems of communication, language and communication channels; emotional and relational limits or difficulties; and lack of clarity in business objectives. (Pikkarainen et al., 2008) dive deeper into the communication details and divide communication into two types: internal and external, where internal communication includes only team members, while external communication is the communication between the customers and the development team. The main activities requiring communication in agile are iteration planning meetings, iteration reviews, daily meetings and iteration retrospectives; formal tool/media-type communication was refused. Customers collaborate with the development team in the following RE activities: feasibility study, requirements elicitation, analysis, and validation. The following customer communication challenges are found in each RE activity: 1. Feasibility study - the right choice of the customer or the customer groups. It directly influences the overall customer collaboration process during further requirements engineering activities, resulting in less well-defined requirements elicitation, prioritization and validation. 2. Requirements Elicitation - the important factor is the means of communication used. 3. Requirements Analysis - prioritization of requirements. 4. Requirements Validation - presence of the customer and their involvement in the QA process (sprint review). 5. Requirements Management - analysis, prioritization and validation of the requirements. According to most of the research publications, requirements-driven customer collaboration is one of the central ideas behind RE activities in agile. Furthermore, there is a great need for further case studies of the customer communication problem in agile requirements engineering, due to the scarcity of such research and of qualitative data.
3 METHODOLOGY This section describes the methodology chosen for the research study, from the generation of the questions to the revealing of the results. 3.1 Rationale The review of the literature showed what kinds of customer communication challenges in agile requirements engineering activities exist and require solutions. The objective behind conducting this experimental survey can be stated as exploratory and explanatory, as described below: - **Exploratory purpose activities:** to reveal existing communication problems in Requirements Engineering processes within the Agile methodology (Scrum, modified Scrum and the exploitation of agile practices with no definite methodology), to highlight the existing procedure, to find the correlation between agile in theory and in practice, and to explore new hypotheses for further research. - **Explanatory purpose activities:** to build a framework and a language-based pattern on the collected data, issues and root causes of the issues during the process, to emphasize the most commonly repeated activities, and to figure out the usual sequence of events. 3.2 Research Questions To proceed with the research study we defined the essential questions that need to be answered. The research questions are the following: - What communication challenges occur between customers and the development team in the context of agile requirements engineering? - How are the communication challenges revealed in RQ1 being solved in the context of the investigated companies? To answer these questions, a questionnaire with a set of questions was devised according to the best practises in the field. It was submitted to 40 senior IT specialists (managers, CTOs, CEOs) of the companies. The companies considered in the research survey are located in Russia, in particular in Kazan, Innopolis and Moscow. As the first-degree contact in the interviews, we used a semi-structured interview model according to (Lethbridge et al., 2005).
Moreover, in comparison to structured interview sessions, where it is suggested to strictly follow a pre-written script, a semi-structured interview session seems more natural. Furthermore, it is more applicable when the interview session is organized by the researcher himself. This method is considered one of the most flexible ones because it is impossible to predict the interview process, and having some freedom during the process allows the interviewer to adapt to the interviewed person and find out more relevant information, generating a natural environment during the interview session (Myers and Newman, 2007). The scientific validity of the research depends on minimizing biased results during the data collection process. Therefore, in order to decrease the level of bias in the data during the interview sessions, we tried not to interrupt the interviewees. Based on the outcomes of the first interview session, the final version of the questionnaire was formed. 3.3 Experiment Design and Planning As the base plan for this research experiment we chose the plan suggested by (Robson, 2002). This research experiment can be considered a single-case study with multiple units of analysis. Single case: customer communication challenges within agile RE. Multiple units of analysis: multiple people from multiple companies. During the research we studied the software development teams in companies related to information systems and technologies. We worked closely with experienced IT experts in the companies, such as POs, PMs, team members performing the role of a customer representative, and team members directly communicating with a customer. The study follows a single-case design with multiple units of analysis according to (Yin, 2003), investigating different teams in different companies (different units of analysis) in one context.
The collection of the data from the companies under consideration was done with the help of direct methods: - interviews and informal personal communication, - questionnaires and informal Q&A (question and answer) sessions. Selection Strategy and Criteria. The data needed for the research was collected from Russian IT companies located mostly in Innopolis, a high-tech city in Russia, as well as in Kazan and Moscow. By size, the industrial companies considered can be separated into a few large companies and a majority of small companies. The exact distribution of the companies is the following: - Large - about half of the companies can be considered large (more than 250 employees); - Small - a quarter of the companies in the survey (10-49 employees); - Medium - the rest of the companies can be considered medium (50-249 employees) and micro-sized (< 10). As the selection criteria for the company representatives, we required that the representative be either a PO or a PM, or perform the role of a customer representative, or directly communicate with the customer. The team is required to adopt agile methodology practises or to practise Scrum or modified Scrum. Despite the fact that the respondents had projects and products of different scopes (see Figure 1), all of them relate to the production of software. (Runeson and Höst, 2009) proposed a checklist for the validation of case studies. Based on this checklist, the questionnaire for interviewing and the interview planning were made in two iterations. It was applied for the analysis of the state of the research before the overall interviewing phase. It helped to reveal existing weak points and modify the interview questions in case of necessity. 3.4 Interview Process The 30 interviews with the representatives of the companies were conducted during a 6-week period, each taking 20 minutes on average.
Interviews were conducted face-to-face offline or online via Skype video call, depending on the accessibility of the respondents to the authors. All the interviews were recorded on a voice recorder with the permission of the respondents. In order to establish a comfortable atmosphere that could let respondents go beyond the question borders and reveal unique phenomena in the area, it was decided not to take notes during the interview itself. The notes were taken after the interview, based on the recording. The very first interview was conducted as a test interview, checking the applicability of the questionnaire to the process and the relevance of the questions; this test interview is not included in the results of the survey. As a result of the test interview, a few updates were made to the questionnaire before its final form. Interview transcripts were analysed with the Google voice-to-text extension and pattern coding. Table 1: Scope of the Respondents' Work Projects (Products). <table> <thead> <tr> <th>ID</th> <th>Product Type</th> <th>Industry Sector</th> </tr> </thead> <tbody> <tr> <td>R1</td> <td>Inner bank product for the bank employees</td> <td>Communication System</td> </tr> <tr> <td>R2</td> <td>Internal technological platform for the product teams of the company</td> <td>Communication System</td> </tr> <tr> <td>R3</td> <td>Virtual analytics</td> <td>AI, Machine Learning</td> </tr> <tr> <td>R4</td> <td>Music streaming platform</td> <td>Search engine</td> </tr> <tr> <td>R5</td> <td>Marketplace</td> <td>Retail</td> </tr> <tr> <td>R6</td> <td>Product for inner use in the group of companies</td> <td>Communication System</td> </tr> <tr> <td>R7</td> <td>Product for the inner use of the employees to control and manage customer accounts</td> <td>Management System</td> </tr> <tr> <td>R8</td> <td>Checks scanning application</td> <td></td> </tr> <tr> <td>R9</td> <td>Robots for houses/apartments construction</td> <td>Robotics</td> </tr> <tr> <td>R10</td> <td>Online cashbox</td>
<td>Trading</td> </tr> <tr> <td>R11</td> <td>Inner bank product for the bank employees</td> <td>Banking</td> </tr> <tr> <td>R12</td> <td>Mobile games</td> <td>Gaming</td> </tr> <tr> <td>R13</td> <td>Bank software for legal entities, retail paying system for Individuals</td> <td>Banking</td> </tr> <tr> <td>R14</td> <td>Platform for the work of distributed teams, AI agents</td> <td>Banking</td> </tr> <tr> <td>R15</td> <td>SaaS product to work with geo-data</td> <td>Geo-information system</td> </tr> <tr> <td>R16</td> <td>Cloud geo-information system</td> <td>Geo-information system</td> </tr> <tr> <td>R17</td> <td>Platform for the ministry of ecology to reveal violations in the nature</td> <td>Geo-information system</td> </tr> <tr> <td>R18</td> <td>Inner digital platform for bank</td> <td>Communication System</td> </tr> <tr> <td>R19</td> <td>Robots</td> <td>Robotics</td> </tr> <tr> <td>R20</td> <td>Software products delivery</td> <td>Service</td> </tr> <tr> <td>R21</td> <td>Inner bank product for the bank employees</td> <td>Communication System</td> </tr> <tr> <td>R22</td> <td>Contact center (operator assistant app)</td> <td>Management System</td> </tr> <tr> <td>R23</td> <td>Digital solutions for airlines</td> <td>Management System</td> </tr> <tr> <td>R24</td> <td>Platform for HRs to make office corporate bonuses customizable and motivate employees</td> <td>Management System</td> </tr> <tr> <td>R25</td> <td>B2B platform for the fish retail</td> <td>Retail</td> </tr> <tr> <td>R26</td> <td>Portal for the communication with the company partners</td> <td>Communication System</td> </tr> <tr> <td>R27</td> <td>Dispatching system for autonomous vehicles</td> <td>Robotics, AI &amp; Machine Learning</td> </tr> <tr> <td>R28</td> <td>Voice recognition, artificial intelligence</td> <td>AI &amp; Machine Learning</td> </tr> <tr> <td>R29</td> <td>Chatbots</td> <td>AI</td> </tr> <tr> <td>R30</td> <td>Autonomous computer vision traffic analysis system</td> <td>AI, Machine Learning</td>
</tr> </tbody> </table> The transcript analysis was complemented with the manual notes taken after each interview. The challenges encountered in the process of pattern coding, as a final step of the qualitative analysis, are described in Section 4.1. 3.5 Questionnaire Construction In the challenging process of creating an effective survey, the way the questions are defined, organized and put in context can influence the results of the survey significantly. To address these limitations we followed the rules suggested in the literature (Basili, 1992; Basili et al., 1994; Bond and Fox, 2013; Vannette and Krosnick, 2014; Krosnick and Presser, 2010; Lietz, 2008; Thayer-Hart et al., 2010). Besides, using this approach helped us to avoid redundancy, replication and leading questions during the interview sessions. In particular, we paid significant attention to preventing bias and confirmation in the responses, based on the approach described in (Furnham, 1986; Podsakoff et al., 2003). The first part of the questionnaire contains background questions to learn the environment of the case that is studied, i.e., to get information about the respondents and the company. This part of the questionnaire consists mainly of single-choice questions, because such background research requires direct and unambiguous answers. However, the “Product scope” questions require plain-text answers, as the respondents work on a variety of products, which requires more information and particular details. The second part of the questionnaire consists of several questions about the way they organize the customer collaboration and about the agile RE activities. There are also single-choice questions with Likert-type scale answer choices. Finally, the third part consists of general free-form questions to find out about the challenges encountered during the work and the way the problems with customers are handled.
3.6 Research Validity As in any survey, there are threats to the external validity of our survey. The main question lies in the degree of our representatives' responsiveness. To address this threat and increase the external validity of our findings, we followed the best practices suggested in the literature (Szolnoki and Hoffmann, 2013; Khazaal et al., 2014; Maalej et al., 2014). To provide an overall validation framework for the research, we used the 4 design tests by (Yin, 2009). Results of the application of the design tests are shown in Table 2. 4 RESULTS In the scope of the research study, we contacted 40 Product Owners and Product Managers from 30 companies adopting Agile methodology practices. Out of the 40 contacted representatives, only 30 responded, from 20 different firms. The distribution of the roles of the company representatives involved in the research study is the following: 17 of the representatives are Product Owners (PO), 10 are Product Managers (PM), 2 are CTOs and 1 is a CEO (see Figure 1). The 2 CTOs and 1 CEO represent 3 startups in the starting phase and/or transitioning to another business model. The limitation of these companies is that the roles of PO/PM are not specifically established because of the limited number of employees in the team. A particular situation arose during the investigation of the case: most of the teams are small-sized, which fully fits the agile concept of a team. 24 of the survey participants mentioned a team size of 1-10 people, whereas 3 of them stated a team size of 10-20, and only 3 of them have a team of 20 and more (see Figure 2). Some of the most influential data came from the company representatives who were in chief positions and had more than 20 team members and several teams at the same time. We considered all the subordinates of their branch as a single unified team.
Project teams are transformed into product teams due to the adoption of agile practices (see Figure 3). As defined from the research, Scrum is the methodology adopted by the majority of our respondents. Besides, there are many studies related to effective Scrum teams, such as (Matharu et al., 2015). The number of our representatives from the companies was 30.
Figure 1: Distribution of the roles.
Figure 2: Distribution of the size of the teams.
Table 2: Case study tactics for 4 design tests. <table> <thead> <tr> <th>Tests</th> <th>Case study tactic</th> <th>Phase of research in which tactic occurs</th> <th>Application to the current research</th> </tr> </thead> <tbody> <tr> <td>Construct validity</td> <td>Use multiple sources of evidence</td> <td>Data collection</td> <td>Applied</td> </tr> <tr> <td></td> <td>Establish chain of evidence</td> <td>Data collection</td> <td>Applied</td> </tr> <tr> <td></td> <td>Have key informants review draft case study report</td> <td>Composition</td> <td>Applied</td> </tr> <tr> <td>Internal validity</td> <td>Do pattern-matching</td> <td>Data analysis</td> <td>Applied</td> </tr> <tr> <td></td> <td>Do explanation building</td> <td>Data analysis</td> <td>Applied</td> </tr> <tr> <td></td> <td>Do time-series analysis</td> <td>Data analysis</td> <td>To be done in future research</td> </tr> <tr> <td>External validity</td> <td>Use replication logic in multiple case studies</td> <td>Research design</td> <td>Not Applied</td> </tr> <tr> <td>Reliability</td> <td>Use case study protocol</td> <td>Data collection</td> <td>Applied</td> </tr> <tr> <td></td> <td>Develop case study database</td> <td>Data collection</td> <td>Applied</td> </tr> </tbody> </table>
According to the practises of the interviewees, the Product Owners and Product Managers are the ones who most of the time (96.7%) directly communicate with the customer, rather than the other team members in the company (see Figure 5). Although the major part of the communication with the customer is conducted by the Product Owners and Project Managers, it is inevitable that the development team also needs the direct involvement of customers to understand the customer needs fully. Direct involvement of the customer during the Requirements Engineering phase provides transparency and improves the understanding of the requirements. However, the direct involvement of the customer in the process is not a popular practise among the participants of our survey, as can be seen from Figure 6. Although face-to-face, live communication with the customer during the RE process is the most efficient way, there are also other possibilities to interact with the customer, such as the internet and online meetings. Almost all of the participants of our survey mentioned face-to-face communication as one of the ways with the least distortion of information. Table 3: Number of used methodologies per representative. <table> <thead> <tr> <th>Methodology</th> <th>Count</th> </tr> </thead> <tbody> <tr> <td>Scrum</td> <td>10</td> </tr> <tr> <td>Modified Scrum</td> <td>13</td> </tr> <tr> <td>Kanban</td> <td>4</td> </tr> <tr> <td>No particular methodology but the combination of agile practices</td> <td>4</td> </tr> <tr> <td>Scaled Agile</td> <td>3</td> </tr> <tr> <td>Disciplined Agile (DAD)</td> <td>2</td> </tr> <tr> <td>Do not apply any methodology</td> <td>2</td> </tr> </tbody> </table> Figure 6: Customer collaboration with the development team during the requirements engineering phase. The interviews also revealed controversial points where the communication of the team with the customer can be discouraged in some particular cases, such as: 1.
Situation 1 - It is better to isolate business customers from internal development processes, especially when the customers do not have a technical background. The Product Owner is the only person in the team who communicates with the customer in this case. - Otherwise, the customers need to communicate closely with the team and participate in all the activities. 2. Situation 2 - POs and PMs should have a technical background in order to communicate effectively with the team and be able to form requirements efficiently. - If POs and PMs do not have a technical background, they need to get expertise from the developers to form adequate requirements. 4.1 Challenges The research survey revealed many challenges arising from the customer collaboration, specifically during the Requirements Engineering process in the project. The challenges mentioned by the participants of the interview sessions include many more details; nonetheless, they were incorporated into general groups according to the categories of the problems (see Table 4). 4.2 Solutions Suggested The experience of the challenges in the communication with the customer led the participants to define some solutions to these challenges. This subsection includes the solutions suggested for the challenges encountered by the participants of the survey. Most of the suggested solutions the representatives had tried to implement in their projects. The suggested solutions for the PM/PO and managers include the following: 1. PO/PM can write down the best practices for the developers, so that they will be ready for the customer collaboration, with the help of special customer collaboration training, so that both of them can form proper requirements. ### Table 4: Challenges encountered during RE process in collaboration with the customer.
<table> <thead> <tr> <th>#</th> <th>Description of the challenges</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Excessive expertise of the developers and PO/PM. They need to simplify the language they speak to make customers understand what they explain.</td> </tr> <tr> <td>2</td> <td>Developers are separated from requirements engineering. That is why they are far from understanding the user needs.</td> </tr> <tr> <td>3</td> <td>Customers do not agree on the product vision. Different interpretations of the same things, different design views (UX and scenarios). Customers insist on their, strictly speaking, incorrect view. For example, customers think that developers are just code writers and do not have a product vision; but, in fact, the developers are experts in the product.</td> </tr> <tr> <td>4</td> <td>The development team does not want to be aware of the business part. Customers do not want to collaborate with the team.</td> </tr> <tr> <td>5</td> <td>A big number of users has to be analyzed in order to form hypotheses for customers.</td> </tr> <tr> <td>6</td> <td>Customers are pressed by the PO/PM's and developers' suggestions and step back from what they wanted. As a result, the end product differs from their expectations.</td> </tr> <tr> <td>7</td> <td>Customers are not flexible and require deadlines. They are often not available, and the team creates features as it thinks is correct. Customers do not want to go to demos or meetings, and get in touch with the team randomly, when it is convenient for them. Customers do the minimum of what the team asks from them (e.g., when asked to perform user testing, they tested only one person).</td> </tr> <tr> <td>8</td> <td>Project (product) requirements approval takes a long time; the company's lawyers approve everything very slowly. When customers are big organizations with a lot of departments, decisions are taken in several steps and are often time-prolonged.</td> </tr> <tr> <td>9</td> <td>Lack of a common information model for requirements-related activities.
There is a need to establish one universal communication/process standard.</td> </tr> <tr> <td>10</td> <td>After the customer's previous representative is fired, it takes quite a long time for the new representative to understand the previously elicited requirements.</td> </tr> <tr> <td>11</td> <td>The customer reproaches the PO/PM for the ex-PO/PM's mistakes.</td> </tr> <tr> <td>12</td> <td>Too big a number of customers, which leads to too diverse and contradictory requirements. Customers think everything is easy to implement, so they form inadequate requirements. Customers are not well-versed even in the non-technical scope of the software under development.</td> </tr> <tr> <td>13</td> <td>Developed requirements differ from product to test. Testability of the requirements is not provided in advance.</td> </tr> </tbody> </table> 2. PO/PM can involve the developers in the Requirements Engineering process to get a common understanding of what the customer wants (applicable to the case when only the PO/PM is responsible for working with the requirements). 3. To try to find a consensus with the customer in case of a different product/feature vision. Suggest alternatives. Use facts to encourage the customer to come to a compromise. 4. To make a more universal feedback form with checkboxes and automated visualization, in case more than one customer is involved. 5. To isolate customers from technical specifications, terms and notation, and ask more business-related questions to form the requirements. 6. To explain everything in the language of the customers, involving analysts in the activity. 7. To plan the customer meetings properly. In case of long project approval sessions, to find the extreme cases, possible mistakes and failures together with the developers, and then collaborate with the customer with a ready set of solutions. 8. To have more communication with the end users and get more information about the expectations from the system. 9.
To rearrange the Requirements Engineering process in case the current one did not work. 10. To introduce the step-by-step Requirements Engineering activities suggested by Agile methodologies. Besides, to work with the customers continuously to introduce Agile practices to the customer and let them experience these practices. 11. To develop personal patience, building emotional stability during the communication with the customer. ### 5 CONCLUSIONS Requirements engineering takes its most efficient form in agile, compared to traditional development. According to the communication channel efficiency model suggested by Alistair Cockburn, the two most efficient and rich channels are face-to-face communication with and without a whiteboard (Cockburn, 2002). That is why all important moments in the Requirements Engineering process should be discussed directly with the customer, as is implied in Agile. The paper reviewed the existing literature and work done in this area, proposed a methodology to organize and conduct an interview session, and discussed the results of the interviews. In our survey, which was done in the context of identifying the challenges of customer communication in the Requirements Engineering process in companies adopting Agile practises, many problems were identified. The survey involved participants from companies of different complexity and size, located in three cities of the Russian Federation: Innopolis, Kazan and Moscow. The participants are POs, PMs, CTOs and CEOs of the companies under consideration. The 30 representatives of the 20 companies work in several teams, with different team sizes and methodologies applied in the company. The survey derived many challenges which can be encountered during the Requirements Engineering process, and solutions for the PM/POs to ease and minimize the issues with the customer collaboration during the RE process.
The main constraint in the survey was that the company representatives rarely share their internal problems publicly. They share their best practices, as well as bad experiences which they managed to turn into success or consider a good lesson. Very few of the companies can share their current situation and problems; this, as a rule, has the form of insider information. The conducted research allowed us to analyze particular qualities of the communication field in the context of the rapidly changing software development environment. There is a need for further work on the formation of communication patterns from the described challenges and solutions in the field of RE. Moreover, it would also be extremely important and interesting to consider the case of Open Source development processes (Succi et al., 2001b; Kovács et al., 2004; Paulson et al., 2004; Rossi et al., 2010; Petrinja et al., 2010; Fitzgerald et al., 2011; Rossi et al., 2012; Di Bella et al., 2013), which are strictly related to agile methods. Also, the mobile market would be very interesting to analyse, given its constant and very fast evolution (Moser et al., 2008a; Corral et al., 2011; Corral et al., 2013; Corral et al., 2014; Corral et al., 2015). Finally, this research has been developed in the context of Russian software development companies, and in further research, practitioners from other countries will also be considered. ACKNOWLEDGEMENTS This research project is carried out under the support of the Russian Science Foundation Grant No 19-19-00623. REFERENCES Pedrycz, W., Russo, B., and Succi, G. (2011). A model of job satisfaction for collaborative development pro-
1 INTRODUCTION The Semantic Web is founded on technologies related to ontologies. An ontology can be built by using a language whose semantics is formal, e.g., OWL (Web Ontology Language\(^1\)). This allows us to define unambiguously the meaning of new terms from a set of atomic terms, which can be concepts, relations or individuals. Such an OWL ontology enables new knowledge to be inferred by using reasoners such as Pellet (Sirin et al., 2007) and FaCT++ (Tsarkov and Horrocks, 2006). In addition, an application domain can be described by several ontologies which are related in some way. In this context, a reconciliation between such ontologies can yield new pieces of knowledge which establish correspondences between entities of two ontologies. A set of such correspondences is called an alignment. (Borgida and Serafini, 2003) proposed a formalism, namely DDL (Distributed Description Logics), to represent a system of ontologies with alignments. The DRAGO reasoner\(^2\) resulting from this work allows checking the consistency of such a system and provides other applications related to alignment manipulation, e.g., alignment debugging and minimization. Most of these reasoners have been integrated within ontology editors such as Protégé, Swoop, etc. However, they are only usable through application programming interfaces (APIs). These APIs are more and more numerous and are often specific to particular reasoners (e.g. the KAON2 API). The proliferation of APIs leads to two issues. On the one hand, a change of environment may require modifying the source code of an application and recompiling it in order to be able to use a given reasoner. On the other hand, there exist several reasoners supporting different semantics, such as standard Description Logics (DL), Distributed Description Logics (DDL) (Borgida and Serafini, 2003), Integrated Distributed Description Logics (IDDL) (Zimmermann and Le Duc, 2008), etc. This fact implies that specific APIs offer the same services, but with different syntaxes.
Developers must make efforts to learn the APIs on which their applications depend. To address this issue, a W3C working group proposes a new protocol, namely OWLlink (Liebig et al., 2009). Its main goal is to facilitate (i) the specification of reasoners with their associated knowledge bases, (ii) the specification of axioms, and (iii) the manner of asking for inferred results. This protocol seems particularly interesting since it is extensible. --- \(^1\)http://www.w3.org/TR/owl-features/ \(^2\)http://drago.itc.it/download.html --- This enables us to add functionalities adequate for different kinds of current and future reasoners. The main functionalities of the proposed API are the creation and consistency checking of a network of OWL ontologies with alignments. In addition, it allows users to select a semantics associated with a given ontology network (e.g. DL, DDL, IDDL), so that the reasoner corresponding to the selected semantics is determined as well (Horridge et al., 2007). In this paper, we focus on the IDDL reasoner which implements the algorithm presented in (Zimmermann and Le Duc, 2008). This reasoner supports distributed reasoning services that allow us: - to deal with ontologies and ontology alignments (i.e. an ontology network constituted of local OWL ontologies and their alignments), - to check local and global consistency of a network of OWL ontologies with alignments. Roughly speaking, the IDDL reasoner performs the following tasks: (i) checking the consistency of alignments by using a global reasoner (e.g. Pellet), (ii) propagating knowledge from alignments to each local ontology and asking for the consistency of the local ontology with respect to the propagated knowledge, and (iii) collecting the consistency result from each local ontology and deciding the consistency of the whole system. According to this schema, checking the consistency of each local ontology with respect to the knowledge propagated from alignments can be carried out independently by local reasoners.
Consequently, the computation for deciding the consistency of the whole system is performed in a distributed way. In the following, we present the main features of different reasoners in Section 2. We introduce in Section 3 the new protocol OWLlink, which facilitates communication between reasoners. We propose in Section 4 different ways to use the IDDL reasoner through our own Java API based on OWLlink. Section 5 provides details on an optimized implementation of the algorithm presented in (Zimmermann and Le Duc, 2008). Section 6 presents how to integrate the IDDL reasoner within the NeOn Toolkit. We conclude and present future work in Section 7. 2 EXISTING REASONERS Reasoning on ontologies essentially aims at inferring new knowledge. Reasoners can also be used to check ontology consistency and to classify terms (classes or relations) in an ontology. 2.1 Reasoners for Simple Ontologies Amongst existing reasoners which have a Java interface and use the OWL API, Pellet and FaCT++ are the most common. Therefore, we claim that an API used by distributed reasoners must be compatible with them. We chose Pellet to experiment with the integration of new reasoners in a way that is transparent to the end user. The tested interface with Pellet remains identical for other open reasoners. In the context of distributed reasoning supported by the IDDL reasoner, Pellet is used to deal with reasoning at the global level with alignments. The OWL API has a standard interface org.semanticweb.owl.inference.OWLReasoner with several methods such as isConsistent(), isSatisfiable(OWLClass), isEquivalent(OWLClass, OWLClass), etc. Pellet provides an implementation of these methods.
In terms of interoperability with existing reasoners, Pellet can be used with Jena as follows:

```
import org.mindswap.pellet.jena.PelletReasonerFactory;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.Reasoner;

// creating Pellet reasoner
Reasoner reasoner = PelletReasonerFactory.theInstance().create();
// creating empty model
Model emptyModel = ModelFactory.createDefaultModel();
// creating inference engine
InfModel model = ModelFactory.createInfModel(reasoner, emptyModel);
```

This interoperability makes Pellet easier to integrate within systems based on RDF ontologies. 2.2 Reasoners for Networked Ontologies Other reasoners, such as DRAGO, support distributed reasoning on ontologies and alignments. The peer-to-peer DRAGO system uses a procedure based on a distributed tableau algorithm for reasoning with several ontologies interconnected by semantic mappings. This distributed reasoner is built on top of a standard tableau reasoner such as Pellet or FaCT++. The algorithm for distributed reasoning described in (Serafini and Tamilin, 2005) builds a distributed tableau and consults the consistency of other local ontologies. \(^3\)http://gforge.inria.fr/projects/iddl \(^4\)http://neon-toolkit.org/wiki \(^5\)http://drago.iltc.it/download.html If there is a local ontology participating in bridge rules (mappings), the algorithm consults another distributed reasoner at the peer where the local ontology is located. This process may be propagated through the whole network. The major disadvantage of this algorithm is that it requires tableau-based reasoning on every peer. Therefore, heterogeneity of reasoning mechanisms is impossible. Moreover, DRAGO adapts the Pellet API by adding “D” to method signatures to denote distributed reasoning (e.g., isDconsistent(), isDSatisfiable()). However, this interface does not satisfy one of our criteria (see Section 4), which says that the name of services (e.g.
consistency, satisfiability) definable in two different semantics must not be different. ContentMap\(^6\) is a tool supporting the integration of independently designed ontologies by using mappings. This tool uses standard reasoners to generate semantic consequences of the integrated ontologies. It can help users to detect potential errors and to evaluate mappings. This work is comparable to that of (Meilicke et al., 2009), who proposed a method to help human experts check automatically created alignments. They used DDL to formalize mappings as bridge rules and exploited DRAGO to reason with alignments. 3 OWLlink PROTOCOL If ontologies are expressed in OWL, it is also necessary to use a standard communication protocol to ease interactions between reasoners and different services, mainly in a distributed ontology context. OWLlink (Liebig et al., 2009), the successor of the DIG protocol, is an implementation-neutral communication interface for OWL (Figure 1). It follows the new W3C recommendations about extensible communication protocols for systems based on OWL 2 (Motik et al., 2009), which allow one to add new functionalities required by specific applications. OWLlink overcomes the drawbacks of the DIG protocol (Bechhofer et al., 2003) (Dickinson, 2004) since it relies on the DIG 2.0 proposals (Turhan et al., 2006). OWLlink consists of two parts: (i) a structural protocol specification, and (ii) a binding mechanism to transport protocols. 3.1 OWLlink Specification OWLlink manages client/server communication (over the HTTP protocol) by messages exchanged within a session (Wessel and Luther, 2009). To each message of request type (requestMessage()) on an ontology corresponds a response message (requestResponse()). These two message types embed respectively a set of Request and Response objects. Associated with these messages, an error management mechanism informs the client about errors arising from syntactic problems (SyntaxError), semantic problems (SemanticError) or ontology management (KBError).
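As an illustration, a request message combining KB creation, a tell and a query might look as follows. This is a hedged sketch only: the element names and namespace follow the spirit of the OWLlink structural specification and its XML binding, but should be checked against the specification itself; the kb IRI and class IRIs are hypothetical.

```xml
<!-- Client request: create a KB, tell one axiom, then query satisfiability -->
<RequestMessage xmlns="http://www.owllink.org/owllink#"
                xmlns:owl="http://www.w3.org/2002/07/owl#">
  <CreateKB kb="http://example.org/kb1"/>
  <Tell kb="http://example.org/kb1">
    <owl:SubClassOf>
      <owl:Class IRI="http://example.org/onto#Dog"/>
      <owl:Class IRI="http://example.org/onto#Animal"/>
    </owl:SubClassOf>
  </Tell>
  <IsKBSatisfiable kb="http://example.org/kb1"/>
</RequestMessage>
```

The server would answer with a corresponding response message containing one Response object per Request, or a SyntaxError/SemanticError/KBError element when a request fails.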
Clients can also get information about the server itself (getDescription() and configuration()). Moreover, an OWLlink server can manage simultaneously several ontologies, identified by IRI or by ontology name. It can check axioms about classes, properties, or individuals. To do so, the client must make a request (tell) embedding the axioms to be checked. In terms of interoperability, OWLlink provides a mechanism enabling existing OWL-based reasoners (Pellet, FaCT++, etc.) to be encapsulated. This facilitates the design of systems which need to deal with several ontologies associated with local reasoners. This ability, together with client/server communication, gives the fundamental elements on which an API dedicated to distributed reasoning can be elaborated. 3.2 Basic Primitives OWLlink provides a set of basic primitives\(^7\) to access ontologies and obtain information about entities, status, schema and individuals. For instance, the primitive isKBDeclaredConsistent() checks ontology consistency. Regarding the schema of ontologies, the primitives isClassSatisfiable() and isClassSubsumedBy() allow one to ask whether a class is satisfiable or whether a class is subsumed by another one. Finally, knowledge about individuals can be obtained by using isInstanceOf(), areIndividualsRelated() or getEquivalentIndividuals(), etc. 4 IDDL REASONER INTERFACE Zimmermann (Zimmermann, 2007) introduced IDDL (Integrated Distributed Description Logics) as \(^6\)http://krono.act.uji.es/people/Ernesto/contentmap \(^7\)http://owllink-owlapi.sourceforge.net/documentation.html a new formalism enabling the representation of a set of ontologies with their alignments, i.e. networked ontologies interconnected by alignments. Such a network consists of ontologies expressed in OWL, and alignments considered as sets of subsumption or disjointness relations between ontology entities (concepts/roles/individuals).
This formalism is particularly adapted to reasoning with OWL ontology alignments automatically generated by tools such as Alignment Server.\footnote{http://aserv.inrialpes.fr/} The differences between IDDL and other formalisms are: 1. In IDDL, alignments are considered as pieces of knowledge independent from ontology knowledge. As a result, when knowledge is propagated from alignments to local ontologies, new semantic consequences can be entailed. 2. IDDL does not make any expressiveness hypothesis on the formalisms used in local ontologies, apart from decidability. This enables heterogeneity of the reasoning mechanisms and formalisms used in local ontologies. For instance, one local reasoner may use a tableau-based algorithm while another decides consistency with the help of an automata-based algorithm. 3. IDDL supports truly distributed reasoning, i.e. all reasoning on local ontologies can be carried out independently. Like most reasoners, IDDL offers two main services: consistency checking and entailment. The IDDL reasoner provides the two following interfaces: 1. a “standalone” interface compatible with classical reasoners (Pellet, FaCT++). The IDDL reasoner consists of the main class IDDLReasoner, which implements the standard interface org.semanticweb.owl.inference.OWLReasoner.\footnote{http://owllink-owlapi.sourceforge.net/} 2. an interface for distributed reasoning which is based on the OWLlinkReasoner presented in Section 3. It is important to note that the services available in the IDDL API must have the same signature, which should not depend on which reasoner associated with a semantics is used. Moreover, from the users’ point of view, this feature facilitates switching from one reasoner to another, and corresponds to the fact that we can talk about the consistency of a system including several logics, provided that the notion of consistency is definable for each one. This is the case for a system consisting of DL (OWL), DDL, IDDL, etc.
Besides, distributed reasoners depend on APIs for manipulating correspondences between ontologies. For this purpose, the IDDL reasoner currently uses the Alignment API (Euzenat, 2004). An issue may arise if users replace one distributed reasoner by another and the two use different alignment APIs. 4.1 Classical Reasoning with a Standalone Interface Regarding classical reasoning, the IDDL reasoner merges all local ontologies and their alignments, then applies reasoning to the resulting single ontology. To do so, the IDDL reasoner provides the following primitives (where <domain>=fr.inrialpes.exmo.iddl): - <domain>.IDDLReasoner.isConsistent() returns true if and only if the resulting ontology is consistent, - <domain>.IDDLReasoner.isEntailed(OWLAxiom) returns true if and only if the axiom can be entailed from the resulting ontology, - <domain>.IDDLReasoner.isEntailed(Alignment) returns true if and only if each axiom belonging to the Alignment can be entailed from the resulting ontology. From this point of view, the OWLlink interface (presented in Section 3) can be used to encapsulate the IDDL reasoner in the same way as a classical reasoner (i.e. OWLReasoner). In this case, the IDDL reasoner with the OWLlink protocol is comparable to a local IDDL system with regard to the global IDDL system. 4.2 Distributed Reasoning with OWLlinkReasoner In a distributed reasoning context, the IDDL reasoner consists of a global reasoner, which also uses Pellet. Moreover, the IDDL reasoner needs to know the consistency of each local extended ontology (Zimmermann and Le Duc, 2008) with the help of the associated local reasoners. The local reasoners can be Pellet, FaCT++, DRAGO, IDDL itself, etc., and they can be located at different sites. The OWLlinkReasoner interface provides the necessary elements that the IDDL reasoner needs to ensure communication between local and global reasoners.
Indeed, in order to implement the method:

```java
IDDLReasoner.addOntology(OWLOntology onto1, URL localReasoner);
```

the following code can be used:

```java
URL reasonerURL = new URL(...);
OWLlinkReasoner reasoner =
    new OWLlinkHTTPXMLReasoner(manager, reasonerURL);
CreateKB createKB = new CreateKB();
KB kb = reasoner.answer(createKB);
IRI kbIRI = kb.getKB();
// configuration: a set of global axioms
Tell tell = new Tell(kbIRI, configuration);
OK ok = reasoner.answer(tell);
...
```

Roughly speaking, a configuration is a set of axioms propagated from alignments to local ontologies. This notion will be clarified in Section 5. This code associates a local ontology (createKB) with a local reasoner (reasoner), which is responsible for deciding whether the local ontology is consistent with respect to the configuration (new Tell(kbIRI, configuration)) and for transmitting the result to the global reasoner (reasoner.answer(tell)). The following code shows how to use the IDDL reasoner.

```java
import fr.inrialpes.exmo.iddl.IDDLReasoner;
import fr.inrialpes.exmo.iddl.types.Semantics;
import org.semanticweb.owl.model.OWLAxiom;
import org.semanticweb.owl.align.Alignment;
...
IDDLReasoner reasoner = new IDDLReasoner();
reasoner.addOntology(onto1);
reasoner.addAlignment(align1);
reasoner.setSemantics(Semantics.IDDL);
// check consistency of the IDDL system
reasoner.isConsistent();
// check entailment of an OWL axiom
reasoner.isEntailed(ax);
// check entailment of an alignment
reasoner.isEntailed(al);
...
```

5 OPTIMIZATION AND IMPLEMENTATION This section briefly presents the principle of the algorithm for distributed reasoning and its implementation in the IDDL reasoner. The current version does not allow disjointness correspondences to occur in alignments.
This restriction is not a serious drawback in terms of expressiveness, since alignments generated by the majority of matching algorithms rarely include disjointness correspondences. 5.1 Algorithm and Optimization The distributed algorithm implemented in the IDDL reasoner was presented in (Zimmermann and Le Duc, 2008). However, the vocabulary used in (Zimmermann and Le Duc, 2008) differs from the one used here. According to the metamodel, wherever ontology is used in (Zimmermann and Le Duc, 2008), we can replace it by module without loss of correctness. Where alignment is used in (Zimmermann and Le Duc, 2008), we use mapping here. Finally, a correspondence in (Zimmermann and Le Duc, 2008) is the equivalent of a mapping assertion here. 5.1.1 Preliminary Assumptions The IDDL reasoner works by having a module reasoner communicate with the reasoners of the imported modules. The imported modules’ reasoners are supposed to be encapsulated, so that their implementation is unknown but they can be used via an interface. Consequently, for each imported module \( m_i \), we assume that there exists an oracle \( F_i \) which takes a set of DL axioms \( A_i \) as argument and returns a boolean equal to Consistency\( (m_i \cup A_i) \). **Definition 1** (Reasoning oracle). Let \( O \) be an ontology defined in a logic \( L \). A reasoning oracle is a boolean function \( F : 2^{\mathcal{L}} \rightarrow \text{Bool} \) which returns \[ F(A) = \text{Consistency}_L(O \cup A), \text{ for all sets of axioms } A \subseteq \mathcal{L}. \] The term ontology in Definition 1 is to be taken in a general sense. It can be a module or even a distributed system, as long as the associated reasoner can interpret DL axioms and offers correct and complete reasoning capabilities. In practice, such oracles will be implemented as an interface which encapsulates a reasoner like Pellet, FaCT++, or a module reasoner.
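The oracle of Definition 1 can be sketched as a plain Java interface. This is a hypothetical illustration: axioms are represented as strings for brevity, and the toy implementation below simply forbids one axiom, whereas a real oracle would delegate to Pellet or FaCT++ through the OWL API.

```java
import java.util.Set;

// Sketch of Definition 1: an oracle answers Consistency(O ∪ A)
// for a set of additional axioms A (here plain strings).
interface ReasoningOracle {
    boolean consistent(Set<String> axioms);
}

// Toy oracle whose hidden "ontology" contradicts exactly one axiom.
// A real implementation would encapsulate a DL reasoner instead.
class ForbiddingOracle implements ReasoningOracle {
    private final String forbidden;

    ForbiddingOracle(String forbidden) {
        this.forbidden = forbidden;
    }

    @Override
    public boolean consistent(Set<String> axioms) {
        // consistent unless the forbidden axiom is asserted
        return !axioms.contains(forbidden);
    }
}
```

The point of the interface is exactly the encapsulation argument made above: the module reasoner only sees the boolean answer, so the local reasoning mechanism behind each oracle can differ freely.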
A module reasoner must call the oracles associated with the imported modules with well-chosen axioms in order to determine consistency. The choice of axioms will be explained below. In addition, the module reasoner has access to the mappings that may exist between imported modules. From the importing module’s point of view, these mappings are treated like local axioms. Therefore, we can consider that the mappings are equivalent to an ontology (called the alignment ontology in (Zimmermann and Le Duc, 2008)). **Definition 2** (Alignment ontology). Let \( A \) be a set of mappings. The alignment ontology is an ontology \( \hat{A} \) such that, for each mapping assertion \( i : C \leftarrow j : D \) with \( C \) and \( D \) local concepts, the corresponding subsumption axiom between the translated concepts \( i : C \) and \( j : D \) belongs to \( \hat{A} \). In order to check the global consistency of the module, we also assume that there is a reasoning oracle \( F_A \) associated with \( \hat{A} \). The algorithm consists in questioning all the reasoning oracles with well-chosen axioms, detailed just below. **5.1.2 Algorithm Overview** In (Zimmermann and Le Duc, 2008), it is formally proven that consistency checking of an IDDL system with only subsumption mapping assertions can be reduced to determining the emptiness and non-emptiness of specific concepts. More precisely, we define the notion of configuration, which serves to explicitly separate a given set of concepts into empty concepts and non-empty concepts. It can be represented by the subset of the given set of concepts which contains the asserted non-empty concepts. **Definition 3** (Configuration). Let \( \mathcal{C} \) be a set of concepts. A configuration \( \Omega \) over \( \mathcal{C} \) is a subset of \( \mathcal{C} \). In principle, a configuration \( \Omega \) implicitly asserts that for all \( C \in \Omega \), \( C(a) \) for some individual \( a \), and for all \( C \notin \Omega \), \( C \sqsubseteq \perp \).
A similar notion of role configuration is also defined in (Zimmermann and Le Duc, 2008), but for the sake of simplicity we only present the notion for concepts. The algorithm then consists in selecting a configuration over the set of all concepts of the alignment ontology. The axioms associated with the configuration are then sent to the oracles to verify the consistency of the resulting ontologies. If they all return true (i.e., they are all consistent with these additional axioms) then the modular ontology is consistent. Otherwise, another configuration must be chosen. If all configurations have been tested negatively, the modular ontology is inconsistent, according to the proof in (Zimmermann and Le Duc, 2008). Since there is a finite number of configurations, this algorithm is correct and complete. The sets of axioms that must be used to query the oracles are defined according to a configuration \( \Omega \) as follows. Let \( A \) (resp. \( A_1, \ldots, A_n \)) be the set of axioms associated with the oracle of the alignment ontology (resp. with the oracles of modules \( m_1, \ldots, m_n \)). For all imported modules \( m_i \): - if \( i : C \in \Omega \), then \( i : C(a) \in A \) and \( C(a_C) \in A_i \), where \( a \) is a fixed individual and \( a_C \) is a new individual in \( m_i \); - if \( i : C \notin \Omega \), then \( i : C \sqsubseteq \perp \in A \) and \( C \sqsubseteq \perp \in A_i \). **5.1.3 Optimization** We can identify, from the algorithm as described above, some situations that may lead to a complexity blow-up. 1. The algorithm answers negatively only after it has tested all possible configurations. So, every time a module is inconsistent, the reasoner must call all the oracles \( 2^n \) times, where \( n \) is the number of concepts in the alignment ontology. This situation is hardly acceptable in a practical reasoner. However, optimizations can be carried out to improve this situation. In particular, backtracking algorithms can be applied to this procedure.
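The exhaustive search over configurations described above can be sketched in plain Java. This is a toy illustration with hypothetical names: each oracle is modeled as a predicate over a configuration (the set of concepts asserted non-empty), whereas the real reasoner queries local reasoners with the axiom sets \( A, A_1, \ldots, A_n \).

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

class ConfigSearch {
    // A configuration is the set of concepts asserted non-empty.
    // Each oracle answers: "is my ontology consistent under this configuration?"
    static boolean isConsistent(List<String> concepts,
                                List<Predicate<Set<String>>> oracles) {
        int n = concepts.size();
        for (int mask = 0; mask < (1 << n); mask++) {   // enumerate 2^n configurations
            Set<String> omega = new HashSet<>();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) omega.add(concepts.get(i));
            }
            boolean allAccept = true;
            for (Predicate<Set<String>> oracle : oracles) {
                if (!oracle.test(omega)) { allAccept = false; break; }
            }
            if (allAccept) return true;    // some configuration satisfies every oracle
        }
        return false;                      // all 2^n configurations were rejected
    }

    public static void main(String[] args) {
        List<String> concepts = List.of("C", "D");
        // Toy oracles: one module requires C non-empty, another forbids D.
        List<Predicate<Set<String>>> oracles = new ArrayList<>();
        oracles.add(omega -> omega.contains("C"));
        oracles.add(omega -> !omega.contains("D"));
        System.out.println(isConsistent(concepts, oracles)); // prints "true"
    }
}
```

The inconsistent case illustrates the blow-up discussed above: when no configuration works, the loop inspects all \(2^n\) subsets before answering, which is exactly what the backtracking optimizations aim to avoid.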
Indeed, for each concept \( C \) appearing in a mapping, it must be decided whether it is empty or not. There are cases where it can be deduced immediately that \( C \) is empty (resp. not empty). In such a case, it is not necessary to test configurations where \( C \) is not empty (resp. empty). This can be visualized in Figure 2. 2. Theoretically, there are \(2^n\) possible global configurations, where \(n\) is the number of terms occurring in alignments. Each configuration \(\Omega\) is of the form \(\Omega = (X_1, \ldots, X_n)\), where \(X_i = C(a)\) asserts the non-emptiness of the concept \(C\) or \(X_i = (C \sqsubseteq \bot)\) asserts the emptiness of the concept \(C\). We denote \(\overline{X_i} = C(a)\) if \(X_i = (C \sqsubseteq \bot)\), and \(\overline{X_i} = (C \sqsubseteq \bot)\) if \(X_i = C(a)\). It holds that every full configuration extending a partial configuration \(\Omega \cup \{X_i\}\) is not consistent whenever \(\Omega \cup \{X_i\}\) is not consistent; in that case only the extensions of \(\Omega \cup \{\overline{X_i}\}\) need to be explored. This observation allows the global reasoner to reduce dramatically the number of consistency checks on local ontologies, since it can reuse the inconsistency result of \(\Omega \cup \{X_i\}\) for every configuration extending it. 6 INTEGRATION OF THE IDDL REASONER WITHIN THE NEON TOOLKIT In this section we describe the principle of the integration of the IDDL reasoner within the NeOn Toolkit. This integration is performed by developing a plug-in, namely the IDDL reasoner plug-in, which plays an interface role between the IDDL reasoner and the NeOn Toolkit plug-ins, e.g. the module API, the Ontology Navigator, the Alignment Plugin, etc. The NeOn Toolkit is an environment for managing and manipulating networked ontologies. It was developed within the NeOn project as a plug-in for managing ontologies under Eclipse and extends previous products such as KAON2. The NeOn Toolkit features run-time and design-time ontology alignment support.
It can be extended through a plug-in mechanism, so it can be customized to the users’ needs. As a development environment for ontology management, the NeOn Toolkit supports the W3C recommendations OWL and RDF as well as F-Logic for processing rules. With the support of the integrated mapping tool, named OntoMap, heterogeneous data sources, e.g., databases, file systems, UML diagrams, can be connected to ontologies quickly and easily. The IDDL reasoner API for ontology modules uses the module API to access the alignments and imported modules of an ontology module. In other terms, the IDDL reasoner plug-in is developed such that it can get access to the alignments and imported modules of an ontology module and pass them to the IDDL reasoner with the help of the IDDL reasoner API. More precisely, from the NeOn Toolkit environment the IDDL reasoner plug-in gets the URIs of the imported modules and the alignments of an ontology module. In most cases, where alignments are not available from the ontology module, the plug-in can fetch available alignments from an alignment server and use them. This feature allows users to use alignments permanently stored on servers and to select the most suitable alignments for an intended purpose. The IDDL reasoner plug-in relies on the IDDL reasoner and the core module API, which provides basic operations to manipulate ontology modules and alignments. By using the core module API, the IDDL reasoner plug-in can get the necessary inputs from an ontology module. In the case where the alignments obtained from the ontology module in question are not appropriate, the plug-in can connect to the Alignment Server\(^9\) to fetch the available alignments. For this purpose, the plug-in offers users an interface for visualizing and selecting alignments. Figure 3 is a screenshot of the plug-in integrated within the NeOn Toolkit. From a determined input, the IDDL reasoner plug-in can obtain an answer about consistency from the IDDL reasoner.
In the case where the answer is negative, the plug-in can obtain an explanation indicating the configurations and/or correspondences which are responsible for the inconsistency. \(^9\)http://aserv.inrialpes.fr/ The response time of the IDDL reasoner depends on the following elements, which are taken into account in the optimized algorithm design. 1. The more unsatisfiable or non-empty concepts occur in correspondences, the shorter the response time. 2. The more equivalent concepts or properties occur in correspondences, the faster the reasoner answers. 3. If the ontology module is inconsistent, the reasoner likely has to check all configurations; therefore, the response time may be long. In Example 4, we have two imported ontologies, Geopolitics and Geography, with a mapping between them. The axioms of the ontologies and the mapping are expressed in a description logic and can be directly encoded in OWL. We consider the following cases: 1. If these imported modules are merged with the correspondences of the mapping, we obtain an OWL ontology which is not consistent. The reason is that the mapping allows one to deduce that the two classes "EuropeanRegion" and "SouthAmericanRegion" are not disjoint, which contradicts the disjointness axiom in Geography. 2. However, in the context of ontology modules, the IDDL reasoner can check the consistency of the module and answers that the module is consistent (Figure 3). 3. If we now add to the mapping the following correspondence $1 : \text{Guyana} \leftrightarrow \text{SouthAmericanRegion} \sqcap \text{EuropeanRegion}$, the IDDL reasoner answers that the module is no longer consistent. 7 CONCLUSIONS AND FUTURE WORK In this paper, we argued that it is necessary to introduce a generic API for reasoners in order to facilitate environment changes without upgrading the current API version. Moreover, we suggested using the same signature for reasoning services independently of the reasoners associated with a given semantics.
Then, we proposed a design and implementation of such an API which meets these two criteria. Finally, we optimized and implemented in this API a distributed algorithm for checking the consistency of networked ontologies with alignments based on the IDDL semantics. Compared to other approaches, e.g., distributed reasoning based on DDL, IDDL distributed reasoning is more modular and heterogeneous, i.e., all the global reasoner needs to know about a local ontology is whether it is consistent. Thus, local ontologies can use different logics, provided that they remain decidable. This feature of IDDL makes it possible to use different algorithms for reasoning on local ontologies. The current version of the IDDL reasoner provides only explanations for inconsistencies which are caused by correspondences propagated from mappings to local ontologies. A future version of the IDDL reasoner should take advantage of explanations from local reasoners to give more details about how propagated correspondences impact a local ontology. As mentioned earlier, the current version of the IDDL reasoner does not allow disjointness correspondences to occur in alignments. This limitation prevents us from supporting axiom entailment, since entailment is equivalent to the inconsistency of an IDDL system including disjointness correspondences. For instance, the current IDDL reasoner cannot decide whether \( (O, A) \models i:C \sqsubseteq j:D \), where \( O, A \) are the sets of imported ontologies and mappings of an ontology module. In a future version, we plan to extend the reasoner such that it takes into account only disjointness correspondences translated from entailment queries, but not those initially included in alignments. Allowing disjointness correspondences in this controlled way may not lead to a complexity blow-up. REFERENCES Serafini, L. and Tamilin, A. (2005). DRAGO: Distributed reasoning architecture for the semantic web.
In Lecture Notes in Computer Science, Proceed-
Package ‘BioCor’ April 25, 2017

Title Functional similarities
Version 1.0.0
Author Lluís Revilla Sancho <lluis.revilla@gmail.com>
Maintainer Lluís Revilla Sancho <lluis.revilla@gmail.com>
Description Calculates functional similarities based on the pathways described on KEGG and REACTOME or in gene sets. These similarities can be calculated for pathways or gene sets, genes, or clusters and combined with other similarities. They can be used to improve networks, gene selection, testing relationships...
Depends R (>= 3.4.0)
License GPL-3 | file LICENSE
Encoding UTF-8
LazyData true
biocViews Software, StatisticalMethod, Clustering, GeneExpression, Reactome, Network, KEGG, Pathways
Imports org.Hs.eg.db, AnnotationDbi, reactome.db, graph, methods, utils
Suggests WGCNA, testthat, knitr, rmarkdown, BiocStyle, GOSemSim, GSEABase
BugReports https://github.com/llrs/BioCor/issues
URL https://github.com/llrs/BioCor/
VignetteBuilder knitr
RoxygenNote 6.0.1
NeedsCompilation no

R topics documented: BioCor-package, addSimilarities, AintoB, clusterGeneSim, clusterSim, combinadic

BioCor-package *BioCor: A package to calculate functional similarities*

**Description**
Calculates a functional similarity measure between gene identifiers based on the pathways described on KEGG and REACTOME.

**Important functions**
- **pathSim**: Calculates the similarity between two pathways
- **geneSim**: Calculates the similarity (based on pathSim) between two genes
- **clusterSim**: Calculates the similarity between two clusters of genes by joining the pathways of each gene.
- **clusterGeneSim**: Calculates the similarity between two clusters of genes by comparing the similarity between the genes of each cluster
- **similarities**: Allows combining the values of several similarity matrices
- **conversions**: Two functions to convert between similarity measures
- **weighted**: Functions provided to combine similarities

addSimilarities *Additive integration of similarities*

**Description**
Function that combines the previously calculated similarities into a single similarity matrix.

**Usage**
```r
addSimilarities(x, bio_mat, weights = c(0.5, 0.18, 0.1, 0.22))
```

**Arguments**
- **x**: A matrix with the similarity of expression
- **bio_mat**: A list of matrices of the same dimension as x.
- **weights**: A numeric vector of weights to multiply each similarity

**Details**
The total weight can’t be higher than 1, to prevent values above 1, but it can be below 1. It uses weighted.sum with abs = TRUE internally.

**Value**
A square matrix of the same dimensions as the input matrices.

**Author(s)**
Lluís Revilla

**See Also**
similarities, weighted.

**Examples**
```r
set.seed(100)
a <- seq2mat(LETTERS[1:5], rnorm(10))
b <- seq2mat(LETTERS[1:5], seq(from = 0.1, to = 1, by = 0.1))
sim <- list(b)
addSimilarities(a, sim, c(0.5, 0.5))
```

AintoB *Insert a matrix into another matrix*

**Description**
Insert values from a matrix into another matrix based on the rownames and colnames, replacing the existing values.

**Usage**
```r
AintoB(A, B)
```

**Arguments**
- **A**: A matrix to be inserted.
- **B**: A matrix to insert in.

**Details**
If similarities for all the genes with pathway information are already calculated but you would like to use more genes when performing the analysis, insert the values you have already calculated into the larger matrix of genes.
**Value**
A matrix with the values of A in the matrix B.

**Author(s)**
Lluís Revilla

**Examples**
```r
B <- matrix(ncol = 10, nrow = 10,
            dimnames = list(letters[1:10], letters[1:10]))
A <- matrix(c(1:15), byrow = TRUE, nrow = 5,
            dimnames = list(letters[1:5], letters[1:3]))
AintoB(A, B)

# Mixed orders
colnames(A) <- c("c", "h", "e")
rownames(A) <- c("b", "a", "f", "c", "j")
AintoB(A, B)

# Missing columns or rows
colnames(A) <- c("d", "f", "k")
AintoB(A, B)
```

---

**clusterGeneSim** *Similarity score between clusters of genes based on genes similarity*

**Description**
Looks for the similarity between the genes of each group and then between the groups.

**Usage**
```r
clusterGeneSim(cluster1, cluster2, info, method = c("max", "rcmax.avg"), ...)
mclusterGeneSim(clusters, info, method = c("max", "rcmax.avg"), ...)
```

**Arguments**
- **cluster1**: A vector with genes.
- **cluster2**: A vector with genes.
- **info**: A list of genes and the pathways they are involved in.
- **method**: A vector of one or two methods to be passed to combineScores; the first one is used to summarize the similarities between genes, the second one between clusters.
- **...**: Other arguments passed to `combineScores`
- **clusters**: A list of clusters of genes to be found in `id`.

**Details**
Differs from `clusterSim` in that first the similarity for each combination of genes is calculated, and these values are then used to compare the two clusters. Thus `combineScores` is applied twice, once at the gene level and once at the cluster level.

**Value**
- `clusterGeneSim` returns a similarity score of the two clusters or the similarity between the genes of the two clusters.
- `mclusterGeneSim` returns a matrix with the similarity scores for each cluster comparison.

clusterSim *Similarity score between clusters of genes based on pathways similarity*

**Description**
Looks for the similarity between genes in groups.

**Usage**
```r
clusterSim(cluster1, cluster2, info, method = "max", ...)
mclusterSim(clusters, info, method = "max", ...)
```
**Arguments**
- **cluster1, cluster2**: A vector with genes.
- **info**: A list of genes and the pathways they are involved in.
- **method**: The method to combine the scores of each pathway, one of c("avg", "max", "rcmax", "rcmax.avg", "BMA"); if NULL, the matrix of similarities is returned.
- **...**: Other arguments passed to `combineScores`
- **clusters**: A list of clusters of genes to be found in `id`.

**Details**
Once the pathways for each cluster are found, they are combined using `combineScores`.

**Value**
- `clusterSim` returns a similarity score of the two clusters.
- `mclusterSim` returns a matrix with the similarity scores for each cluster comparison.

**Author(s)**
Lluís Revilla

**See Also**
For a different approach see `clusterGeneSim`; see also `combineScores` and `conversions`.

**Examples**
```r
library("org.Hs.eg.db")
# Extract the paths of all genes of org.Hs.eg.db from KEGG (last update in
# data of June 31st 2011)
genes.kegg <- as.list(org.Hs.egPATH)
clusterSim(c("9", "15", "10"), c("33", "19", "20"), genes.kegg)
clusterSim(c("9", "15", "10"), c("33", "19", "20"), genes.kegg, NULL)
clusterSim(c("9", "15", "10"), c("33", "19", "20"), genes.kegg, "avg")
clusters <- list(cluster1 = c("18", "01", "10"),
                 cluster2 = c("100", "10", "1"),
                 cluster3 = c("18", "10", "83"))
mclusterSim(clusters, genes.kegg)
mclusterSim(clusters, genes.kegg, "avg")
```

combinadic *i-th combination of n elements taken from r*

**Description**
Function similar to combn but for larger vectors. To avoid allocating a big vector with all the combinations, each one can be computed individually with this function.
**Usage**
```r
combinadic(n, r, i)
```

**Arguments**
- **n**: Elements to extract the combination from
- **r**: Number of elements per combination
- **i**: i-th combination

**Value**
The i-th combination of the elements.

**Author(s)**
Joshua Ulrich

**References**
StackOverflow answer 4494469/2886003

**See Also**
`combn`

**Examples**
```r
# Output of all combinations
combn(LETTERS[1:5], 2)
# Output of the second combination
combinadic(LETTERS[1:5], 2, 2)
```

---

**combineScores** *Combining values*

**Description**
Combine several values into one by several methods.

**Usage**
```r
combineScores(scores, method, round = FALSE)
```

**Arguments**
- `scores`: Matrix of scores to be combined
- `method`: one of c("avg", "max", "rcmax", "rcmax.avg", "BMA"); see Details
- `round`: Should the resulting value be rounded to the third digit?

**Details**
The methods return:
- `avg`: The average or mean value
- `max`: The max value
- `rcmax`: The max of the column means or row means
- `rcmax.avg`: The sum of the max values by rows and columns divided by the number of columns and rows
- `BMA`: The same as `rcmax.avg`

**Value**
A numeric value as described in Details.

**Note**
This is a version of combineScores from GOSemSim with optional rounding and some internal differences.

**Author(s)**
Lluís Revilla, based on Guangchuang Yu

**Examples**
```r
d <- structure(c(0.4, 0.6, 0.222222222222222, 0.4, 0.4, 0, 0.25, 0.5,
                 0.285714285714286), .Dim = c(3L, 3L),
               .Dimnames = list(c("a", "b", "c"), c("d", "e", "f")))
sapply(c("avg", "max", "rcmax", "rcmax.avg", "BMA"), combineScores, scores = d)
d[1, 2] <- NA
sapply(c("avg", "max", "rcmax", "rcmax.avg", "BMA"), combineScores, scores = d)
```

conversions *Convert the similarities formats*

**Description**
Functions to convert the similarity coefficients between Jaccard and Dice. D2J is the opposite of J2D.
**Usage**
```r
D2J(D)
J2D(J)
```

**Arguments**
- **D**: Dice coefficient, as returned by `diceSim`, `geneSim`, `clusterSim` and `clusterGeneSim`
- **J**: Jaccard coefficient

**Value**
A numeric value.

**Author(s)**
Lluís Revilla

**Examples**
```r
D2J(0.5)
J2D(0.5)
D2J(J2D(0.5))
```

---

**diceSim** *Compare pathways*

**Description**
Function to estimate how much two graphs or lists of genes overlap, by looking at how many of the nodes are shared.

**Usage**
```r
diceSim(g1, g2)
```

**Arguments**
- `g1`, `g2`: Graphs in GraphNEL format, or character lists with the names of the proteins in each pathway.

**Value**
A score between 0 and 1, calculated as twice the number of proteins shared by `g1` and `g2` divided by the total number of proteins in both groups.

**Author(s)**
Lluís Revilla

**See Also**
Used for `geneSim`; see the `conversions` help page to transform a Dice score into a Jaccard score.

**Examples**
```r
genes.id2 <- c("52", "11342", "80895", "57654", "548953", "11586", "45985")
genes.id1 <- c("52", "11342", "80895", "57654", "58493", "1164", "1163",
               "4150", "2130", "159")
diceSim(genes.id1, genes.id2)
diceSim(genes.id2, genes.id2)
```

duplicateIndices *Finds the indices of the duplicated elements of a vector*

**Description**
Finds the indices of duplicated elements in the given vector.

**Usage**
```r
duplicateIndices(vec)
```

**Arguments**
- **vec**: Vector of identifiers, presumably duplicated

**Details**
For each duplicated element it returns a list of indices, or, if all the duplication events are of the same length, a matrix where each column corresponds to one duplicated element.
**Value**
The format is determined by `simplify2array`.

**Author(s)**
Lluís Revilla

**See Also**
`removeDup`

**Examples**
```r
duplicateIndices(c("52", "52", "53", "55"))             # One repeated element
duplicateIndices(c("52", "52", "53", "55", "55"))       # Repeated elements
duplicateIndices(c("52", "55", "53", "55", "52"))       # Mixed repeated elements
```

geneSim *Similarity score between genes based on pathways similarity*

**Description**
Given two genes, calculates the Dice similarity between each pathway, which is combined to obtain a similarity between the genes.

**Usage**
```r
geneSim(gene1, gene2, info, method = "max", ...)
mgeneSim(genes, info, method = "max", ...)
```

**Arguments**
- `gene1, gene2`: Ids of the genes to calculate the similarity for, to be found in genes.
- `info`: A list of genes and the pathways they are involved in.
- `method`: The method to combine the scores of each pathway, one of c("avg", "max", "rcmax", "rcmax.avg", "BMA"); if NULL, the matrix of similarities is returned.
- `...`: Other arguments passed to `combineScores`.
- `genes`: A vector of genes.

**Details**
Given the information about the genes and their pathways, uses the ids of the genes to find the Dice similarity score for each pathway comparison between the genes. These similarities are then combined using `combineScores`.

**Value**
The highest Dice score of all the combinations of pathways between the two ids compared, if a method to combine scores is provided, or NA if there is no information for one gene. A returned NA means that there is no pathway information available for one of the genes. Otherwise a number between 0 and 1 (both included) is returned; note that there is no negative value of similarity. `mgeneSim` returns the matrix of similarities between the genes in the vector.

**Author(s)**
Lluís Revilla

**See Also**
See the `conversions` help page to transform a Dice score into a Jaccard score. For the method to combine the scores see `combineScores`.
**Examples**
```r
library("org.Hs.eg.db")
library("reactome.db")
# Extract the paths of all genes of org.Hs.eg.db from KEGG (last update in
# data of June 31st 2011)
genes.kegg <- as.list(org.Hs.egPATH)
# Extracts the paths of all genes of org.Hs.eg.db from reactome
genes.react <- as.list(reactomeEXTID2PATHID)
geneSim("81", "18", genes.react)
geneSim("81", "18", genes.kegg)
geneSim("81", "18", genes.react, NULL)
geneSim("81", "18", genes.kegg, NULL)
mgeneSim(c("81", "18", "10"), genes.react)
mgeneSim(c("81", "18", "10"), genes.react, "avg")
```

pathSim *Calculates the Dice similarity between pathways*

**Description**
Calculates the similarity between pathways using the Dice similarity score.

**Usage**
```r
pathSim(pathway1, pathway2, info)
mpathSim(pathways, info, method = "max", ...)
```

**Arguments**
- `pathway1, pathway2`: A single pathway each, to calculate the similarity between them
- `info`: A list of genes and the pathways they are involved in.
- `pathways`: Pathways to calculate the similarity for
- `method`: The method to combine the scores of each pathway, one of c("avg", "max", "rcmax", "rcmax.avg", "BMA"); if NULL, the matrix of similarities is returned.
- `...`: Other arguments passed to `combineScores`

**Details**
`diceSim` is used to calculate the similarities between the two pathways. `mpathSim` compares the similarity between several pathways and can use `combineScores` to summarize the similarity between those pathways. If one needs the matrix of similarities between pathways, set the argument `method` to NULL.

**Value**
The similarity between those pathways, or all the similarities between each comparison.

**Author(s)**
Lluís Revilla

**See Also**
`diceSim`, `combineScores`, and the `conversions` help page to transform a Dice score into a Jaccard score.
**Examples**
```r
library("reactome.db")
# Extracts the paths of all genes of org.Hs.eg.db from reactome
genes.react <- as.list(reactomeEXTID2PATHID)
pathways <- c("112315", "112310", "112316", "373753", "916853", "109582",
              "114608", "1500931")
pathSim("112310", "112316", genes.react)
mpathSim(pathways, genes.react, NULL)
```

### removeDup

**Remove duplicated rows and columns**

**Description**
Given the indices of the duplicated entries, removes columns and rows until just one of each is left; it keeps the duplicate with the highest absolute mean value.

**Usage**
```r
removeDup(cor_mat, dupli)
```

**Arguments**
- `cor_mat`: List of matrices
- `dupli`: List of indices with duplicated entries

**Value**
A matrix with only one of each set of duplicated columns and rows.

**Author(s)**
Lluís Revilla

**See Also**
- `duplicateIndices` to obtain the list of indices with duplicated entries.

**Examples**
```r
a <- seq2mat(c("52", "52", "53", "55"), runif(choose(4, 2)))
b <- seq2mat(c("52", "52", "53", "55"), runif(choose(4, 2)))
mat <- list("kegg" = a, "react" = b)
mat
dupli <- duplicateIndices(rownames(a))
remat <- removeDup(mat, dupli)
remat
```

### seq2mat

**Transforms a vector to a symmetric matrix**

**Description**
Fills a matrix of ncol = length(x) and nrow = length(x) with the values in dat, setting the diagonal to 1.

**Usage**
```r
seq2mat(x, dat)
```

**Arguments**
- `x`: Names of the columns and rows, used to define the size of the matrix
- `dat`: Data to fill the matrix with, except the diagonal

**Details**
dat should be of length at least \( \text{choose}(\text{length}(x), 2) \). It assumes that the data provided comes from using the row and column ids to obtain it.

**Value**
A square matrix with the diagonal set to 1 and dat on the upper and lower triangles, with the column and row ids from x.
**Author(s)**
Lluís Revilla

**See Also**
`upper.tri` and `lower.tri`

**Examples**
```r
seq2mat(LETTERS[1:5], 1:10)
seq2mat(LETTERS[1:5], seq(from = 0.1, to = 1, by = 0.1))
```

similarities *Join a list of similarities*

**Description**
Function to join a list of similarities by a function provided by the user.

**Usage**
```r
similarities(sim, func, ...)
```

**Arguments**
- `sim`: List of similarities to be joined. All similarities must have the same dimensions. The genes are assumed to be in the same order for all the matrices.
- `func`: Function to perform on those similarities: prod, sum... It should accept as many arguments as similarity matrices are provided, and should operate on numbers.
- `...`: Other arguments passed to the function func, usually na.rm or similar.

**Value**
A matrix of the size of the similarities.

weighted *Weighted operations*

**Description**
Calculates the weighted sum or product of \( x \). Each value should have its weight; otherwise an error is thrown.

**Usage**
```r
weighted.sum(x, w, abs = TRUE)
weighted.prod(x, w)
```

**Arguments**
- \( x \): an object containing the values whose weighted operation is to be computed
- \( w \): a numerical vector of weights of the same length as \( x \), giving the weights to use for the elements of \( x \)
- \( \text{abs} \): If any \( x \) is negative, should the result be negative too?

**Details**
These functions are intended to be used with *similarities*. As some similarities might be positive and others negative, the argument \( \text{abs} \) is provided for *weighted.sum*, assuming that only one similarity will be negative (usually the one coming from expression correlation).

**Value**
*weighted.sum* returns the sum of the products \( x \cdot w \), removing all NA values. See the parameter \( \text{abs} \) if there are any negative values. *weighted.prod* returns the product of the products \( x \cdot w \), removing all NA values.
**Author(s)**
Lluís Revilla

**See Also**
`similarities` and `addSimilarities`

**Examples**
```r
expr <- c(-0.2, 0.3, 0.5, 0.8, 0.1)
weighted.sum(expr, c(0.5, 0.2, 0.1, 0.1, 0.1))
weighted.sum(expr, c(0.5, 0.2, 0.1, 0.2, 0.1), FALSE)
weighted.sum(expr, c(0.4, 0.2, 0.1, 0.2, 0.1))
weighted.sum(expr, c(0.4, 0.2, 0.1, 0.2, 0.1), FALSE)
weighted.sum(expr, c(0.4, 0.2, 0, 0.2, 0.1))
weighted.sum(expr, c(0.5, 0.2, 0, 0.2, 0.1))
# Compared to weighted.prod:
weighted.prod(expr, c(0.5, 0.2, 0.1, 0.1, 0.1))
weighted.prod(expr, c(0.4, 0.2, 0.1, 0.2, 0.1))
weighted.prod(expr, c(0.4, 0.2, 0, 0.2, 0.1))
weighted.prod(expr, c(0.5, 0.2, 0, 0.2, 0.1))
```

Index
addSimilarities, AintoB, BioCor (BioCor-package), BioCor-package, clusterGeneSim, clusterSim, combinadic, combineScores, combn, conversions, D2J (conversions), diceSim, duplicateIndices, geneSim, J2D (conversions), lower.tri, mclusterGeneSim (clusterGeneSim), mclusterSim (clusterSim), mgeneSim (geneSim), mpathSim (pathSim), pathSim, removeDup, seq2mat, similarities, upper.tri, weighted
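The D2J/J2D conversions documented above follow the standard Dice–Jaccard relationship, J = D / (2 − D) and D = 2J / (1 + J), and diceSim's score is twice the shared elements over the total elements in both groups. A short sketch checking those formulas (shown in Python purely for illustration; these helpers are not part of the R package):

```python
# Illustrative re-statement of the Dice/Jaccard relationship used by the
# D2J and J2D conversions; not BioCor code.

def dice(a, b):
    """Dice score: twice the shared elements over the total elements."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def d2j(d):
    """Convert a Dice coefficient to a Jaccard coefficient."""
    return d / (2 - d)

def j2d(j):
    """Convert a Jaccard coefficient to a Dice coefficient."""
    return 2 * j / (1 + j)

# The two conversions are inverses, so round-tripping recovers the input.
print(d2j(j2d(0.5)))          # 0.5
print(dice("abcd", "cdef"))   # 2*2 / (4+4) = 0.5
```

This mirrors the manual's D2J(J2D(0.5)) example, which returns the original 0.5.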
Dialogue State Tracking with Convolutional Semantic Taggers
Mandy Korpusik, Jim Glass
MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA, USA
ICASSP, Brighton, UK, May 16, 2019

Motivation: Spoken Diet Tracking
*Coco Nutritionist* lets you record what you ate with everyday spoken natural language.
"Welcome back, Mandy! Yesterday was low in these nutrients, so consider taking supplements: potass. (4387 / 4700mg), vitamin B12 (0 / 1mcg). What are you having for breakfast?"

Motivation: Nutrition Multi-turn Dialogue
Nutrition question answering:
– *Is grilled chicken or red meat better?*
– *What should I eat for dinner?*
– *What is a healthy breakfast?*
– *Which cereal is best to keep you satisfied?*
– *How many calories in ## of food item*
– *Is milk healthy? …*
Personalized food recommendation (Korpusik et al., CBRecSys, 2016)

Overview
• Motivation: Nutrition
• Introduction
• Our work in 3 state tracking challenges: DSTC7, DSTC6, DSTC2
• Conclusion

Introduction: Spoken Dialogue Systems
"can you book a table for two in bombay in a cheap price range"
→ Intent Detection → Semantic Tagging → Database Retrieval (Restaurant Knowledge Base)

Introduction: Spoken Dialogue Systems
Our Goal: Develop the ability to do dialogue state tracking.
can you book a table for **two** in **bombay** in a **cheap** price range

**Dialogue State**
- cuisine: None
- location: bombay
- number: 2
- price: cheap
- atmosphere: None

System: any preference on a type of cuisine

**Dialogue State**
- cuisine: **indian**
- location: bombay
- number: 2
- price: cheap
- atmosphere: None

Dialogue State Tracking Challenges (DSTC)
- DSTC1 (2013): human-computer bus timetables
- DSTC2 and 3 (2014): human-computer restaurant info
- DSTC5 (2016): multilingual tourist info
- DSTC6 (2017): 3 tracks, end-to-end learning
- DSTC7 (2019): 3 tracks (response selection, generation, and audio-visual)

Student-Advisor Partial Dialogue:
**ADVISOR** / Hi! What can I help you with?
**STUDENT** / Hello! I’m trying to schedule classes for next semester. Can you help me?
**STUDENT** / Hardware has been an interest of mine.
**STUDENT** / But I don’t want too hard of classes.
**ADVISOR** / So are you interested in pursuing Electrical or Computer Engineering?
**STUDENT** / I’m undecided.
**STUDENT** / I enjoy programming but enjoy hardware a little more.
**ADVISOR** / Computer Engineering consists of both programming and hardware.
**ADVISOR** / I think it will be a great fit for you.
**STUDENT** / Awesome, I think that’s some good advice.
**STUDENT** / What classes should I take to become a Computer Engineer?
**ADVISOR** / You haven’t taken EECS 203, 280, and 270, so it may be in your best interest to take one or two of those classes next semester.
**STUDENT** / Ok. Which of those is in the morning? I like morning classes.
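The slot-filling behaviour shown in the dialogue-state slides can be illustrated with a toy state tracker (purely illustrative Python, not the authors' system; the slot names follow the slides):

```python
# Toy dialogue-state tracker illustrating the slot updates on the slides;
# not the authors' implementation.
SLOTS = ("cuisine", "location", "number", "price", "atmosphere")

def new_state():
    """Start with every slot unfilled (None)."""
    return {slot: None for slot in SLOTS}

def update(state, **tagged):
    """Overwrite slots with newly tagged values, e.g. from a semantic tagger."""
    out = dict(state)
    for slot, value in tagged.items():
        if slot in out:          # ignore anything that is not a known slot
            out[slot] = value
    return out

state = new_state()
state = update(state, location="bombay", number=2, price="cheap")
# System asks "any preference on a type of cuisine"; user answers "indian".
state = update(state, cuisine="indian")
print(state["cuisine"], state["number"])  # indian 2
```

Each user turn only overwrites the slots it mentions, which is exactly the update pattern the slides show between the two dialogue states.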
DSTC7: Convolutional Neural Encoder
Goal: select the best system response.
[Architecture figure: the previous utterances and each candidate system response are embedded, passed through Conv + ReLU and max-pooling layers with dropout and batch norm, combined via a dot product, then mean-pooled with a sigmoid for binary verification: 1 (match) / 0 (not). Example candidate response: "481 is the early morning and is quite similar to EECS381, so you might want to skip it." Example previous utterances: "What time does the course occur? I like afternoon classes and will find something else if it's scheduled too early." / "EECS351 is after lunch. The others are before. EECS481 is from nine to ten thirty and EECS 492 is from ten thirty to twelve."]

Task 1: Issuing API calls / Task 2: Updating API calls (Bordes & Weston, 2016)
Hi! / Hello, what can I help you with today?
I'd like to book a table for six people in an expensive price range with British food. / I'm on it! Where should it be?
In London. / Ok, let me look into some options for you.
Actually I would prefer for four. / Sure. Is there anything else to update?
No. / Ok let me look into some options for you.
→ api_call(British, London, Six, Expensive)

Task 1: API Call
Dialogue State: cuisine: indian, location: bombay, number: 2, price: cheap, atmosphere: casual
→ api_call indian bombay two cheap casual → Restaurant Knowledge Base

Task 2: Updating API Call
Dialogue State: cuisine: indian, location: bombay, number: 2, price: cheap, atmosphere: casual
User: "Actually there are four of us"
→ Dialogue State: cuisine: indian, location: bombay, number: 4, price: cheap, atmosphere: None

DSTC6 Data
• 10,000 simulated training dialogues per task
• KB of restaurants: 10 cuisines, 10 locations, 3 price ranges, 4 party sizes

DSTC6: Related Work
Two challenge participants achieved 100% on all tasks:
- Extended Hybrid Code Networks for DSTC6 (Ham et al., 2017)
- Modeling Conversations to Learn Responding Policies (Bai et al., 2017)

DSTC6: Our Binary CNN Baseline
[Architecture figure: context utterances (e.g. "good morning" / "hello what can i help you with" / "let's do moderate price range, and keep expensive price range for another day") and a candidate response ("ok let me look into some options for you") are each encoded with Conv + ReLU and max-pooling, combined via a dot product, then mean-pooled with a sigmoid for binary verification: 1 (match) / 0 (not).]

DSTC6: Our Full CNN Architecture
Approach:
1. Select action template with CNN.
2. Populate action template with CNN-predicted semantic tags.

Step 1: Semantic Tagging
Problem: the prior state-of-the-art Conditional Random Field (CRF) model requires hand-crafted features.
Solution: use a neural network to automatically learn features during training.

Step 1: Semantic Tagging
"can you book a table for two in bombay in a cheap price range" → Number: two, Location: bombay, Price: cheap

Step 1: Generating Tagging Data
api_call indian bombay two cheap casual → cuisine: indian, location: bombay, number: 2, price: cheap, atmosphere: casual
Utterances: "can you book a table for two in bombay in a cheap price range" / "i'm looking for a casual atmosphere" ...
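The tagging-data generation step (deriving token-level semantic tags for an utterance from the slot values of an api_call) could be sketched as follows; this is a hypothetical simplification for illustration, not the authors' code:

```python
# Hypothetical sketch of deriving per-token semantic tags from api_call
# slot values, as in "Step 1: Generating Tagging Data"; not the authors' code.
def tag_tokens(utterance, state):
    """Label each token with the slot whose value it matches, else 'O'."""
    value_to_slot = {str(v).lower(): slot for slot, v in state.items() if v}
    return [(tok, value_to_slot.get(tok.lower(), "O"))
            for tok in utterance.split()]

# Slot values as they appear in the api_call string from the slide.
state = {"cuisine": "indian", "location": "bombay", "number": "two",
         "price": "cheap", "atmosphere": "casual"}
tags = tag_tokens("can you book a table for two in bombay in a cheap price range",
                  state)
print([t for t in tags if t[1] != "O"])
# [('two', 'number'), ('bombay', 'location'), ('cheap', 'price')]
```

A real system would need BIO tags for multi-word values and normalization (e.g. "two" vs. 2), but the principle of projecting api_call slot values back onto the utterance tokens is the same.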
Step 1: Semantic Tagging Results
<table> <thead> <tr> <th>Semantic Tag</th> <th>Precision</th> <th>Recall</th> <th>F-score</th> </tr> </thead> <tbody> <tr> <td>Cuisine</td> <td>100</td> <td>96.9</td> <td>98.4</td> </tr> <tr> <td>Location</td> <td>100</td> <td>95.9</td> <td>97.9</td> </tr> <tr> <td>Number</td> <td>100</td> <td>100</td> <td>100</td> </tr> <tr> <td>Price</td> <td>96.9</td> <td>96.5</td> <td>96.7</td> </tr> <tr> <td>Atmosphere</td> <td>100</td> <td>100</td> <td>100</td> </tr> <tr> <td>All</td> <td>99.8</td> <td>99.8</td> <td>99.8</td> </tr> </tbody> </table>

Learned filters cluster semantically related tokens:
<table> <thead> <tr> <th>Filter</th> <th>Top-3 Highest Activation Tokens</th> </tr> </thead> <tbody> <tr> <td>19</td> <td>french, spanish, italian</td> </tr> <tr> <td>52</td> <td>two, six, four</td> </tr> <tr> <td>63</td> <td>bombay, london, paris</td> </tr> </tbody> </table>
Example utterances: "expensive is tempting but cheap may be more reasonable"; "let me check if london or bombay would work"

Step 2: Action Template Selection
[Architecture figure: word embeddings of the dialogue context ("good morning ... hello what can i help you with ... let's do moderate price range, and keep expensive price range for another day ...") pass through Conv + ReLU and Maxpool + Softmax layers to rank the action templates, e.g. 1) request_api_slot, 2) ok let me look into some options for you, ...]
Action templates: ok let me look into some options for you; api_call; i’m on it; hello what can i help you with today; sure is there anything else to update; you’re welcome; what do you think of this option:; great let me do the reservation; sure let me find another option for you; here it is; whenever you’re ready; the option was; i am sorry i don’t have an answer to that question; is there anything i can help you with; request_api_slot

Step 3: Final Response Generation
1) Action mask — api_call: masked out if any slots are still unspecified; request_api_slot: masked out if all slots are specified.
2) Use dialogue state — api_call: populate slots with values in dialogue state; request_api_slot: select the next slot missing a value.
<table> <thead> <tr> <th>Slot</th> <th>System Response</th> </tr> </thead> <tbody> <tr> <td>Cuisine</td> <td>any preference on a type of cuisine</td> </tr> <tr> <td>Location</td> <td>where should it be</td> </tr> <tr> <td>Number</td> <td>how many people would be in your party</td> </tr> <tr> <td>Price</td> <td>which price range are you looking for</td> </tr> <tr> <td>Atmosphere</td> <td>are you looking for a specific atmosphere</td> </tr> </tbody> </table>
Dialogue State: cuisine: indian; location: bombay; number: 2; price: cheap; atmosphere: casual.
Dialogue State: cuisine: None; location: bombay; number: 2; price: cheap; atmosphere: None.

DSTC6: Test Results (100% precision on both tasks)
<table> <thead> <tr> <th>Model</th> <th>Task 1 P@1</th> <th>Task 1 P@2</th> <th>Task 1 P@5</th> <th>Task 2 P@1</th> <th>Task 2 P@2</th> <th>Task 2 P@5</th> </tr> </thead> <tbody> <tr> <td>Random</td> <td>10.2</td> <td>20.4</td> <td>50.9</td> <td>0.95</td> <td>19.5</td> <td>46.7</td> </tr> <tr> <td>TFIDF</td> <td>21.0</td> <td>29.9</td> <td>52.2</td> <td>36.7</td> <td>47.4</td> <td>66.9</td> </tr> <tr> <td>SVM</td> <td>81.3</td>
<td>81.6</td> <td>83.0</td> <td>74.5</td> <td>76.4</td> <td>78.9</td> </tr> <tr> <td>LSTM</td> <td>84.3</td> <td>90.6</td> <td>98.5</td> <td>77.8</td> <td>84.0</td> <td>97.8</td> </tr> <tr> <td>Hier. LSTM</td> <td>88.6</td> <td>94.1</td> <td>99.9</td> <td>81.7</td> <td>92.6</td> <td>100</td> </tr> <tr> <td>Bai et al.</td> <td>99.8</td> <td>100</td> <td>100</td> <td>99.7</td> <td>100</td> <td>100</td> </tr> <tr> <td>Ham et al.</td> <td>100</td> <td>100</td> <td>100</td> <td>100</td> <td>100</td> <td>100</td> </tr> <tr> <td>Binary CNN</td> <td>78.9</td> <td>88.9</td> <td>99.7</td> <td>69.0</td> <td>79.3</td> <td>99.6</td> </tr> <tr> <td>Our Model</td> <td>100</td> <td>100</td> <td>100</td> <td>100</td> <td>100</td> <td>100</td> </tr> </tbody> </table> From Simulated to Real Data: WOZ 2.0 Task: predict all the user’s requested and informable slots at each turn in a restaurant booking dialogue. <table> <thead> <tr> <th>Slot</th> <th>Type</th> <th>Num Values</th> </tr> </thead> <tbody> <tr> <td>Food</td> <td>Informable, Requestable</td> <td>75</td> </tr> <tr> <td>Area</td> <td>Informable, Requestable</td> <td>7</td> </tr> <tr> <td>Price range</td> <td>Informable, Requestable</td> <td>4</td> </tr> <tr> <td>Name</td> <td>Requestable</td> <td>N/A</td> </tr> <tr> <td>Address</td> <td>Requestable</td> <td>N/A</td> </tr> <tr> <td>Phone</td> <td>Requestable</td> <td>N/A</td> </tr> <tr> <td>Postcode</td> <td>Requestable</td> <td>N/A</td> </tr> <tr> <td>Signature</td> <td>Requestable</td> <td>N/A</td> </tr> </tbody> </table> (User: Is there any place here in the centre that serves corsica food? food = corsica; area = centre) (System: What price range are you looking for? User: Any price range will do. food = corsica; area = centre; price = dontcare) (System: There are no restaurants available matching your criteria. Would you like to try a different area, price range, or food type? User: Are there any restaurants in the centre that serves North American type of food? 
food = north_american; area = centre; price = dontcare) (System: Yes. The gourmet burger kitchen serves north american food and is located in the centre part of the city. Would you like their location? User: Can I get the phone number? food = north_american; area = centre; price = dontcare; requested = phone) (System: The phone number for gourmet burger kitchen is 01223 312598. User: Thank you. Good bye. food = north_american; area = centre; price = dontcare)

From Simulated to Real Data: WOZ 2.0
Our Goal: show our NN models generalize to real dialogues.
[Figure: requestable-slot model over the dialogue context ("Would you like their location? Can I get the phone number?"); top predictions: 1) $P(\text{phone}) > 0.5$, 2) address.]

Separately trained informable slot models over the user turn "Is there any place here in the centre that serves corsica food?":
**Food Slots:** 1) $P(\text{corsica}) > 0.5$, 2) indian. **Area Slots:** 1) $P(\text{centre}) > 0.5$, 2) north. **Name Slots:** 1) $P(\text{None})$, 2) curry prince. **Price Slots:** 1) $P(\text{None})$, 2) dontcare.

The CNN is competitive with the state of the art, without requiring semantic dictionaries or pre-trained word vectors.

Conclusion
Demonstrated our neural network models’ ability to do dialogue state tracking in several domains.
Future Work:
• Experiment on the remaining DSTC6 subtasks.
• Jointly train tagger and action selector as end-to-end model.
• Automatically learn action mask by adding a feature to action selector model indicating whether all slots have values.
• Apply these techniques to the nutrition domain!
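The Step 3 response-generation logic described earlier (action mask plus dialogue-state population) can be sketched as follows. The slot names and system-response templates come from the tables above, but the helper functions and masking code are a hypothetical illustration, not the authors' implementation:

```python
# Slot -> question templates, taken from the Step 3 table above.
SLOT_QUESTIONS = {
    "cuisine": "any preference on a type of cuisine",
    "location": "where should it be",
    "number": "how many people would be in your party",
    "price": "which price range are you looking for",
    "atmosphere": "are you looking for a specific atmosphere",
}

def action_mask(state):
    # api_call needs every slot filled; request_api_slot needs a gap.
    missing = [s for s, v in state.items() if v is None]
    return {"api_call": not missing, "request_api_slot": bool(missing)}

def respond(ranked_actions, state):
    # Take the highest-ranked action that the mask allows.
    mask = action_mask(state)
    for action in ranked_actions:
        if not mask.get(action, True):
            continue  # masked out
        if action == "api_call":
            # Populate the api_call template from the dialogue state.
            return "api_call " + " ".join(str(state[s]) for s in SLOT_QUESTIONS)
        if action == "request_api_slot":
            # Ask about the next slot that is still missing a value.
            next_slot = next(s for s, v in state.items() if v is None)
            return SLOT_QUESTIONS[next_slot]
        return action  # plain template with no slots to fill

state = {"cuisine": None, "location": "bombay", "number": 2,
         "price": "cheap", "atmosphere": None}
reply = respond(["request_api_slot", "ok let me look into some options for you"], state)
full = dict(state, cuisine="indian", atmosphere="casual")
reply2 = respond(["api_call", "request_api_slot"], full)
```

With cuisine still unspecified, the mask blocks `api_call` and the tracker asks the cuisine question first; once all slots are filled, the `api_call` template is populated from the dialogue state.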
A Formal Model for Abstracting the Interaction of Web Services

Li Bao, Institute of Software Engineering, Dalian Maritime University, Dalian, China. Email: ebond@163.com
Weishi Zhang and Xiong Xie, Institute of Software Engineering, Dalian Maritime University, Dalian, China. Email: {teesiv, xxyj}@dlmu.edu.cn

Abstract—This paper addresses the problem of modeling the interaction of Web services when they are composed together. Many subtle errors, such as messages not being received and deadlocks, may occur due to the uncontrolled concurrency of Web services. A model called IMWSC (Interaction Module for Web Service Composition, IMWSC for short) is proposed. The proposed model is used to abstract and analyze the interaction of web services. IMWSC is given a formal semantics by means of CCS (Calculus of Communicating Systems, CCS for short), a process algebra that can be used to model concurrent systems. The application of this model is further investigated in a case study, and some important points related to verifying the correctness of the interaction of Web services are discussed.

Index Terms—Web Service, Interaction, Formal Method, IMWSC

I. INTRODUCTION

In order to survive the massive competition created by the new online economy, many organizations are rushing to put their core business competencies on the Internet as a collection of web services for more automation and global visibility[1]. The concept of web service has recently become very popular. Web services are software applications which can be used through a network (intranet or Internet) via the exchange of messages based on XML standards[2]. The Web has become a vehicle for web services rather than just a repository of information. The ability to efficiently and effectively share services on the Web is a critical step towards the development of the new online economy driven by Business-to-Business (B2B) e-commerce[1].
Existing enterprises form alliances and integrate their services to share costs, skills, and resources in offering a value-added service, forming what is known as a composite service. A composite web service is a system that consists of several conceptually autonomous but cooperating units. To establish long-running service compositions, many languages and tools have emerged, which provide different schemas for gluing service operations together properly. Service composition approaches can generally be divided into two categories [3,4]: the business-flow-based approach and the semantics-based approach. Some well-known web service projects are based on business flow[24], such as eFlow[5], METEOR-S[6], and SELF-SERV[7]. The semantics-based approach composes services based on ontologies and relies on AI planning techniques to automatically search, orchestrate, compose, and execute services. Representative semantics-based web service research projects are WebDG[8], SWORD[9], and SHOP[10]. From a software engineering viewpoint, the construction of new services by the static or dynamic composition of existing services raises exciting new perspectives which can significantly impact the way industrial applications will be developed in the future, but it also raises a number of challenges. Among them is the essential problem of guaranteeing the correct interaction of independent, communicating software pieces[2]. One legitimate question is therefore whether or not the correct and reliable interaction of web services can be guaranteed, to a great extent, by introducing formal description techniques. Our investigations suggest a positive answer. This paper addresses the problem of formally modeling the interaction of web services when they are composed together, be it in a dynamic or static way. A model for abstracting and analyzing one scenario of the interaction process of web services, called IMWSC, is proposed.
After the interaction of web services is described in an abstract way, an available supporting tool can be used to determine whether or not the interaction process satisfies desired properties expressed in a kind of modal logic. This paper is structured as follows. Section 2 discusses related work. In Section 3, we present IMWSC. Section 4 defines the semantics of IMWSC. The application of IMWSC is investigated in a case study in Section 5, and the conclusion and future work are presented in Section 6.

II. Related Work

Petri nets are a formal model for concurrency. Since the semantics of Petri nets is formally defined, by mapping each BPEL process to a Petri net a formal model of BPEL can be obtained, which allows the verification techniques and tools developed for Petri nets to be exploited in the context of BPEL processes. Many works, such as [11, 12, 21, 22], introduce Petri-net-based methods for describing and verifying web services. In [21], Schmidt and Stahl discuss a mapping from BPEL to Petri nets by giving several examples; each BPEL construct is mapped into a Petri net pattern. In [22], Schlingloff, Martens and Schmidt also consider the usability problem. They show that usability can be expressed in alternating-time temporal logic; as a consequence, model checking algorithms for this logic can be exploited to check for usability. As research aiming at facilitating web service integration and verification, WS-Net, introduced in [11], is an executable architectural description language incorporating the semantics of Colored Petri nets with the style and understandability of object-oriented concepts. In [12], Tao provides a web service composition model based on a kind of advanced Petri net, OOPN (Object Oriented Petri Net). A web service can be mapped to an OOPN system based on this model, and different OOPN systems can be integrated into a composite service via message passing.
A process algebra is a rather small concurrent language that abstracts from many details and focuses on particular features. There are several relevant publications [1, 13, 14] on process-algebra-based methods. Gwen Salaün and Lucas Bordeaux present an overview of the applicability of process algebras in the context of web services in [1]. The authors of [13] present a framework for the design and verification of web services using process algebras and their tools. Li Bao and Weishi Zhang present a CCS-based method for describing and verifying the behaviour of web services in [14].

III. Defining IMWSC

A. Initiative of IMWSC

For the Petri-net-based methods, one major defect is that the number of places and transitions described in a Petri net is too large. Researchers often map each element of a web service composition language to an element of a Petri net and do not restrict the number of places and transitions. If that number is not restricted, designers face state explosion, which is very difficult to deal with. Another major defect of the Petri-net-based methods is the lack of a description of the interaction process of web services: they often put their emphasis on describing the workflow inside a web service, and do not present the complicated interaction process between web services. For the process-algebra-based methods, one major defect is that some kinds of complex web service composition structures cannot be defined; another major defect is the lack of a rigorous translation mechanism between the elements of a web service composition language and the elements of a process algebra. These methods often give only simple correspondence relations and translation rules.
These relations and rules cannot guarantee the correct preservation of behavioral information and are apt to lead to loss of information. We adopt a hierarchically refined description method to define the interaction process of web services: we divide the interaction process into smaller parts, each defined as an Interaction Module for Web Service Composition (IMWSC for short). For each of these parts, a scenario of the interaction of web services is defined. These smaller parts, i.e., modules, have a common property: the outcome of each module is determinate, in other words, each module has only one terminative state. This important property suggests that the modules can be composed. Therefore, by mapping each module to a transition in a Petri net, modules which describe scenarios of the interaction of web services can be strictly composed. However, due to length limitations, we only introduce the definition and properties of a single module, i.e., the IMWSC model; the method for composing these modules will be introduced in future work. Instead of composing activities of web services, we compose modules. The merit of our approach is that it effectively reduces the number of objects to be analyzed, so that the interaction process of web services is described more concisely and state explosion can be avoided. At the same time, web service compositions with complex structure can be described by composing these modules, which the process-algebra-based methods cannot achieve. Another benefit of our approach is the introduction of the semantics of IMWSC. The semantics of IMWSC comprises three parts: a semantic domain, a semantic range, and valuation functions. A process calculus, CCS (Calculus of Communicating Systems, CCS for short), is introduced as the semantic range, and valuation functions are then defined that translate an IMWSC (the semantic domain) into a process term.
Since the valuation functions are rigorously defined, the correct preservation of behavioral information can be guaranteed, so that loss of information is avoided.

B. Formal Definition of IMWSC

A web service is a software application which can be used through a network (intranet or Internet). For a web service, the basic functional unit is the operation; the process of web service invocation is actually the process of operation invocation. In IMWSC, the invocation of an operation is modeled by an Activity. For better control of the structure of activities, we introduce a set of processes, Proc, as the basic control unit. A process, i.e. an element of Proc, is a linear concatenation of activities. If the output data of one operation \( opr_1 \) is the input data of another operation \( opr_2 \), we consider that there is a correspondence relation between \( opr_1 \) and \( opr_2 \); in IMWSC, we introduce the binary relation \( R_a \) to represent this kind of relation. The symbol \( L \) is introduced into IMWSC to record the interaction history of web services. We describe the interaction process of web services in a scenario in the way defined by IMWSC; in other words, the definition of an instance of IMWSC is the definition of the interaction process of web services in a scenario. **Definition 1. 
(IMWSC)** Formally, an IMWSC is a septuple \(<\text{Service}, \text{Proc}, \text{Activity}, L, \text{Message}, R_a, F>\), where: - \( \text{Service} \) denotes a set of web services; - \( \text{Proc} \) is a set of processes; - \( \text{Activity} \) is a set of activities; - \( L \) is a set of sequences of activities; - \( \text{Message} \) is a set of messages that are exchanged by services; - \( R_a \subseteq \text{Activity} \times \text{Activity} \) is a binary relation; - \( F \) is a sextuple \(<f_{pT}, f_{pS}, f_{pU}, f_{aP}, f_{aT}, f_{mA}>\), where: - \( f_{pT} : \text{Proc} \rightarrow \{c, b\} \) is a mapping that describes the type of each process (composite or basic); - \( f_{pS} : \text{Proc} \rightarrow \text{Service} \) is a mapping that associates each process with a service; - \( f_{pU} : \text{Proc} \rightarrow \text{Proc} \) is a mapping that associates a process with a composite process; - \( f_{aP} : \text{Activity} \rightarrow \text{Proc} \) is a mapping that associates each activity with a process; - \( f_{aT} : \text{Activity} \rightarrow \{ii, io, ei, eo, ex\} \) is a mapping that describes the type of each activity (internal input, internal output, environmental input, environmental output, execute); - \( f_{mA} : \text{Message} \rightarrow \text{Activity} \) is a mapping that associates each message with an Activity. 
We let \( f_{con}(proc) = \{a \mid a \in \text{Activity} \wedge f_{aP}(a) = proc\} \) for \( proc \in \text{Proc} \wedge f_{pT}(proc) = b \). Let \( <_c \subseteq \text{Activity} \times \text{Activity} \) be a partial order relation over Activity, defined as: \( <_c = \{(a_1, a_2) \mid a_1, a_2 \in \text{Activity} \wedge f_{aP}(a_1) = f_{aP}(a_2) \wedge (a_1 \text{ happens earlier than } a_2)\} \). An element proc in Proc is constructed by the following grammar: \[ \text{proc} ::= \alpha \mid \text{proc}_1 \,||\, \text{proc}_2 \mid \text{proc}_1 < \text{proc}_2, \] where \( \alpha \in \text{Activity} \) and \( \text{proc}_1, \text{proc}_2 \in \text{Proc} \): - \( \text{proc}_1 \,||\, \text{proc}_2 \) is a new process that performs \( \text{proc}_1 \) and \( \text{proc}_2 \) independently; - \( \text{proc}_1 < \text{proc}_2 \) is a new process that performs \( \text{proc}_1 \) and \( \text{proc}_2 \) sequentially. Fig. 1 presents an illustration of the structure of IMWSC. In Fig. 1, a service is visualized by a circle; the interaction of services is visualized by a pair of parallel arrows (with opposite directions); the interaction process definition, i.e., the definition of an instance of IMWSC, is visualized by a rectangle.

Figure 1. Structure of IMWSC.

**C. The Necessary Condition for the Correctness of IMWSC** The fundamental requirement for a correct interaction process of services is that each input of a service must be met by another service. Thus the basic requirement that guarantees the correctness of IMWSC is: for any activity \( a_1 \in \text{Activity} \), if it is an internal input or output activity, then there must be another activity \( a_2 \in \text{Activity} \) such that \( a_1 R_a a_2 \) or \( a_2 R_a a_1 \). The necessary condition for the correctness of IMWSC can also be stated as the following predicate formula: \[ \forall a \in \text{Activity} \ ((f_{aT}(a) = ii \lor f_{aT}(a) = io) \rightarrow (\exists a' \in \text{Activity} \ (a R_a a' \lor a' R_a a))) \] **IV. 
Formal Semantics of IMWSC** Formal semantic descriptions of a model are the basis for proving properties of the model. Moreover, they provide precise documentation of the model design and standards for implementations, and (sometimes) they can be used for the generation of prototype implementations. The formal semantics of IMWSC comprises three parts: a semantic domain, a semantic range, and valuation functions. A process calculus, CCS (Calculus of Communicating Systems, CCS for short), is introduced as the semantic range, and valuation functions are then defined that translate an IMWSC (the semantic domain) into a process term.

A. Basic Syntax of CCS

Let \( \mathcal{A} \) be a countably infinite collection of names, and let \( \overline{\mathcal{A}} = \{ \overline{a} \mid a \in \mathcal{A} \} \) be the set of complementary names (co-names for short). Let \( \mathcal{L} = \mathcal{A} \cup \overline{\mathcal{A}} \) be the set of labels, and \( \text{Act} = \mathcal{L} \cup \{ \tau \} \) the set of actions, where \( \tau \) denotes an action that is not externally visible. Let \( \mathcal{K} \) be a countably infinite collection of process names. The collection of CCS expressions is given by the following grammar: \[ P, Q ::= K \;\mid\; \alpha.P \;\mid\; \sum_{i \in I} P_i \;\mid\; P \,|\, Q \;\mid\; P[f] \;\mid\; P \setminus L \] where: - \( K \) is a process name in \( \mathcal{K} \); - \( \alpha \) is an action in \( \text{Act} \); - \( I \) is an index set; - \( f : \text{Act} \rightarrow \text{Act} \) is a relabelling function satisfying the constraints \( f(\tau) = \tau \) and \( f(\overline{a}) = \overline{f(a)} \) for each label \( a \); - \( L \) is a set of labels.

B. Operational Semantics of CCS

CCS is formalized using axiomatic and operational semantics. 
To formally capture the semantics of the language CCS, a collection of inference rules is introduced as follows (a transition \( P \xrightarrow{\alpha} Q \) holds for CCS expressions \( P, Q \) if, and only if, it can be proven using these rules):

\[
\begin{align*}
&\text{ACT}: \; \alpha.P \xrightarrow{\alpha} P
&&\text{SUM}_j: \; \frac{P_j \xrightarrow{\alpha} P_j'}{\sum_{i \in I} P_i \xrightarrow{\alpha} P_j'} \;\; (j \in I) \\
&\text{COM1}: \; \frac{P \xrightarrow{\alpha} P'}{P \,|\, Q \xrightarrow{\alpha} P' \,|\, Q}
&&\text{COM2}: \; \frac{Q \xrightarrow{\alpha} Q'}{P \,|\, Q \xrightarrow{\alpha} P \,|\, Q'} \\
&\text{COM3}: \; \frac{P \xrightarrow{a} P' \quad Q \xrightarrow{\overline{a}} Q'}{P \,|\, Q \xrightarrow{\tau} P' \,|\, Q'}
&&\text{REL}: \; \frac{P \xrightarrow{\alpha} P'}{P[f] \xrightarrow{f(\alpha)} P'[f]} \\
&\text{RES}: \; \frac{P \xrightarrow{\alpha} P'}{P \setminus L \xrightarrow{\alpha} P' \setminus L} \;\; (\alpha, \overline{\alpha} \notin L)
&&\text{CON}: \; \frac{P \xrightarrow{\alpha} P'}{K \xrightarrow{\alpha} P'} \;\; (K \stackrel{\mathrm{def}}{=} P)
\end{align*}
\]

For a detailed introduction to the syntax and operational semantics of CCS, readers are referred to [17, 18].

C. Defining Valuation Functions

The valuation functions of IMWSC, and their corresponding semantic domains and semantic ranges, are given in Tab. 1 (IMWSC denotes an IMWSC instance; \( P \) denotes a process term in CCS; \( A \) denotes the set of atomic processes; \( \text{Activity} \) denotes a set of activities; \( \text{Act} \) denotes the set of actions in CCS). <table> <thead> <tr> <th>Valuation Function</th> <th>Semantic Domain</th> <th>Semantic Range</th> </tr> </thead> <tbody> <tr> <td>\( f_{\text{Inst}} \)</td> <td>IMWSC</td> <td>\( P \)</td> </tr> <tr> <td>\( f_c \)</td> <td>Proc</td> <td>\( P \)</td> </tr> <tr> <td>\( f_a \)</td> <td>Proc</td> <td>\( P \)</td> </tr> <tr> <td>\( f_r \)</td> <td>\( A \)</td> <td>\( P \)</td> </tr> <tr> <td>\( f_e \)</td> <td>Activity</td> <td>Act</td> </tr> </tbody> </table>

By means of the valuation functions defined in Tab. 1, an algorithm that translates an IMWSC instance into CCS terms can be developed:

Algorithm IMWSC_Instance_to_CCS
INPUT: an IMWSC instance
OUTPUT: the corresponding CCS terms

Process Trans_fInst(IMWSC instance)
{
    Str Exp = Empty;
    For each p in Proc
        Exp = Exp | Trans_fc(p);
    Return Exp;
}

Process Trans_fc(process p ∈ Proc)
{
    If (f_pT(p) = b)   // basic process
        Return Trans_fa(p);
    Else               // composite process
        Return Trans_fr(p);
}

Process Trans_fa(process p_i ∈ Proc)
{
    Str term = NIL;
    For each activity a_i of process p_i, in reverse <_c order
        If (f_aT(a_i) is an output type) term = ! getName(a_i) . term;
        Else If (f_aT(a_i) is an input type) term = ? getName(a_i) . term;
    Return (getName(p_i) = term);   // a CCS defining equation
}

Process Trans_fr(process p_i ∈ Proc)
{
    Str Exp = Empty;
    For each subprocess u_j of process p_i
        If (the composition type of p_i is parallel) Exp = Exp | Trans_fc(u_j);
        Else Exp = Exp . Trans_fc(u_j);
    Return Exp;
}

If the IMWSC instance to be translated comprises \( m \) basic processes, and \( n = \max_i \{\text{num}(p_i)\} \), where \( \text{num}(p_i) \) returns the number of activities contained in process \( p_i \), the complexity of the above algorithm is \( O(m \times n) \).

V. Case Study: Application of IMWSC to a Concrete Scenario

A. Abstracting the Interaction of Web Services

We will investigate the application of IMWSC in a simple scenario. There are three services involved in this scenario: - The Client Service, which needs to find out some useful information (for convenience, the client is here considered a service); - The Response Service, which is responsible for dealing with information inquiry requests; - The Information Service, which acts as a database and provides the useful information. The business process of this scenario is briefly as follows: 1. The Response Service receives a request from the Client Service, which needs to find out some useful information; 2. The Response Service contacts the Information Service and relays the information inquiry request; 3. 
The Response Service returns the answer to the Client Service. Fig. 3 presents an illustration of the structure of this scenario, where: - A service is visualized by a rectangle (with rounded corners); - A state of a service is visualized by a circle (the initial and terminative states of a service are visualized by dedicated icons); - A transition between states is visualized by a curved arrow from the source state to the target state; - The supply channels of the services in this scenario are visualized by a pair of parallel arrows (with opposite directions). By applying IMWSC, the interaction process of the services in this scenario is described as follows: \[ \begin{align*} f_{con}(\text{Client}) &= \{ \text{cReq}, \text{cAsk}, \text{cInquiry}, \text{cInfo} \}; \\ f_{con}(\text{Response}) &= \{ \text{rReq}, \text{rAsk}, \text{rInquiry}, \text{rAnswer} \}; \\ f_{con}(\text{InfoS}) &= \{ \text{iAnswer}, \text{iInfo} \}; \\ f_{aT}(\text{cReq}) &= ii; \; f_{aT}(\text{cAsk}) = io; \; f_{aT}(\text{cInquiry}) = ii; \\ f_{aT}(\text{cInfo}) &= io; \; f_{aT}(\text{rReq}) = ii; \; f_{aT}(\text{rAsk}) = io; \\ f_{aT}(\text{rInquiry}) &= io; \; f_{aT}(\text{rAnswer}) = ii; \\ f_{aT}(\text{iReq}) &= io; \; f_{aT}(\text{iInfo}) = ii; \; f_{aT}(\text{iAnswer}) = io. \end{align*} \] \( \text{cReq} <_c \text{cAsk} <_c \text{cInquiry} <_c \text{cInfo} \); \( \text{rReq} <_c \text{rAsk} <_c \text{rInquiry} <_c \text{rAnswer} \); \( \text{iAnswer} <_c \text{iInfo} \). \( < \text{rReq}, \text{cReq} > \in R_a \); \( < \text{cAsk}, \text{rAsk} > \in R_a \); \( < \text{rInquiry}, \text{cInquiry} > \in R_a \); \( < \text{cInfo}, \text{iInfo} > \in R_a \); \( < \text{iAnswer}, \text{rAnswer} > \in R_a \). By means of the semantics of IMWSC defined in Section 4, the corresponding CCS terms are as follows: InfoS = ? Answer. ! Info. 
nil;
Scenario = ( Client | Response | InfoS ) / { req, ask, info, Inquiry, Answer }

B. Verifying the Interaction of Web Services

CCS is an effective modeling language with an available supporting tool, CWB-NC (Concurrency Workbench of the New Century) [20]. We use this tool to reason about and verify the behavior of an instance of IMWSC. Using CWB-NC aims at assisting the design and verification of a system. Applying CCS in the design phase of a system helps to show explicitly the interaction of the components that compose the system; after the model of a system has been constructed, the modal $\mu$-calculus [23] can be used to reason about the system behavior. For a detailed introduction to modal logic, readers are referred to, for example, [19, 23].

One type of verification supported by the tool is reachability analysis. Here, as in each type of verification, our first step in using the tool is to write a description of the system in a form supported by CWB-NC. The description is then parsed by the tool and checked for syntactic correctness. We then give a logical formula describing a "bad state" that the system should never reach. Given such a formula and a system description, CWB-NC explores every state the system may reach during execution and checks whether a bad state is reachable. If a bad state is detected, a description of the execution sequence leading to it is reported to the user. Many bugs, such as deadlocks and critical section violations, may be found using this approach [20].

Correct termination is one of the main properties a proper web service should satisfy. We use can_terminate to denote the termination property of a system: can_terminate is true of a system if it can reach a terminative state.
We express this property in the modal $\mu$-calculus:

prop can_terminate = min X = [-]ff \/ <->X

Reachability analysis is actually a special case of a more general type of verification called model checking. In the model checking approach a system is again described using a design language, and a property the system should have is formulated as a logical formula [20].

Another type of verification supported by CWB-NC involves using a design language for defining both systems and specifications. Here the specification describes the system behavior more abstractly than the system description [20]. A relation, observational equivalence, needs to be introduced before we conduct this type of verification. Observational equivalence is useful in verification as it lays the conceptual basis for deciding that the behaviors of two web services can be considered the same. It can also be used as a tool for reducing verification effort by replacing a process with a smaller (in size) but equivalent one.

The bisimulation equivalence between two processes is a relation between their evolutions such that for each evolution of one of the services there is a corresponding evolution of the other service, such that the evolutions are observationally equivalent and lead to processes which are again bisimilar. This characterization of the behavior of web services using the notion of bisimulation helps service designers optimize composite services by, e.g., replacing their component web services with equivalent ones. Another motivation is customization of services: to enhance competitiveness, a service provider may modify its service for customers' convenience, and this customized service must conform to the original one.
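Returning to the can_terminate property above: read as a least fixed point, it holds of a state iff the state is terminative (it has no outgoing transitions) or some transition leads to a state where the property already holds. A minimal Python sketch of this check (an illustration only, not CWB-NC; the transition system below is a hypothetical hand-encoding of the Response service from the scenario):

```python
def can_terminate(lts, start):
    """Least-fixed-point check: a state satisfies the property if it is
    terminative (no outgoing transitions) or some successor satisfies it."""
    satisfying = {s for s, succs in lts.items() if not succs}
    changed = True
    while changed:
        changed = False
        for s, succs in lts.items():
            if s not in satisfying and any(t in satisfying for _, t in succs):
                satisfying.add(s)
                changed = True
    return start in satisfying

# Hypothetical LTS for Response = ? Req. ! Ask. ? Inquiry. ! Answer. nil
response = {
    "s0": [("?Req", "s1")],
    "s1": [("!Ask", "s2")],
    "s2": [("?Inquiry", "s3")],
    "s3": [("!Answer", "s4")],
    "s4": [],  # terminative state
}
print(can_terminate(response, "s0"))  # True
```

A process with only a loop and no terminative state, e.g. `{"a": [("t", "b")], "b": [("t", "a")]}`, fails the check, which is exactly the kind of bad behavior the reachability analysis reports.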
Formally, the relation of observational equivalence is defined as follows.

Definition 1 [Weak Transitions] [23]:
- $q \Rightarrow q'$ iff $q = q_0 \xrightarrow{\tau} q_1 \xrightarrow{\tau} \cdots \xrightarrow{\tau} q_n = q'$, $n \geq 0$;
- $q \stackrel{\hat{\tau}}{\Rightarrow} q'$ iff $q \Rightarrow q'$;
- $q \stackrel{\hat{\alpha}}{\Rightarrow} q'$ iff $q \Rightarrow q_1 \xrightarrow{\alpha} q_2 \Rightarrow q'$ ($\alpha \neq \tau$).

Definition 2 [Observational Equivalence] [23]: Let $S \subseteq Q \times Q$. The relation $S$ is a weak bisimulation relation if whenever $q_1\, S\, q_2$ then:
- $q_1 \xrightarrow{\alpha} q_1'$ implies $q_2 \stackrel{\hat{\alpha}}{\Rightarrow} q_2'$ for some $q_2'$ such that $q_1'\, S\, q_2'$;
- $q_2 \xrightarrow{\alpha} q_2'$ implies $q_1 \stackrel{\hat{\alpha}}{\Rightarrow} q_1'$ for some $q_1'$ such that $q_1'\, S\, q_2'$.

$q_1$ and $q_2$ are observationally equivalent, written $q_1 \approx q_2$, if $q_1\, S\, q_2$ for some weak bisimulation relation $S$.

© 2010 ACADEMY PUBLISHER

In this scenario, Client is considered as a service which interacts with the composition of the services Response and InfoS. The behaviour of the composition of Response and InfoS can be described in two ways:

1. The system description of the composition of services Response and InfoS is:
\[ \text{Response} = ?\,\text{Req}.\ !\,\text{Ask}.\ ?\,\text{Inquiry}.\ !\,\text{Answer}.\ \text{nil}; \]
\[ \text{InfoS} = ?\,\text{Answer}.\ !\,\text{Info}.\ \text{nil}; \]
\[ \text{Info\_Response} = ( \text{Response} \mid \text{InfoS} ) / \{\text{answer}\}; \]

2. The specification of this composition is:
\[ \text{Spe} = ?\,\text{Req}.\ !\,\text{Ask}.\ ?\,\text{Inquiry}.\ !\,\text{Answer}.\ \text{nil}; \]

The command ‘eq -S obseq’ of the CWB-NC tool can be used to examine whether two processes are observationally equivalent. By executing this command, we find that the processes Info\_Response and Spe are observationally equivalent.

VI. Conclusions and Future Work

Formal description and verification of the interaction of web services is an important research field.
After the description and verification of a practical application of web services, we conclude that IMWSC has good capability for abstracting, simulating, and analyzing a scenario of the interaction process of web services, which facilitates correct implementation. Many current service composition methods do not take into account abstracting and analyzing the interactive features of the services in a composition, so mistakes are easily made when using these methods. Our work is an attempt to abstract and verify the interaction process of web services, which makes the composition process more reliable.

Further work will involve defining the way IMWSC instances are composed. An instance of the IMWSC model defines only one scenario of the interaction process of web services. To model the complete interaction process of web services, the instances of the IMWSC model need to be composed. Since Petri nets are a well-known formal model capable of defining the composition process, we plan to compose the instances by using Petri nets. In further work we will also present the fixed point property of the IMWSC model. The fixed point property indicates that the outcome of each instance of the IMWSC model is determinate; in other words, each module has only one terminative state. This property lays the mathematical foundation for mapping a module to a transition in a Petri net.

ACKNOWLEDGMENT

This research is supported by the National Natural Science Foundation of China under Grant No. 60573087.

REFERENCES

Li Bao received the BS degree in computer science from Dalian Nationality University, China, in 2003, and the MS degree in computer science from Dalian Maritime University, China, in 1996. From 2006 to date, he has worked as a PhD candidate in the Institute of Software Engineering, Dalian Maritime University, China.
His research interests include distributed computing, software engineering, and formal description techniques.

Weishi Zhang received the BS degree in computer science from Xi'an Jiaotong University, China, in 1984, and the MS degree in computer science from the Chinese Academy of Sciences, China, in 1986. He received the PhD degree in computer science from the University of Munich, Germany, in 1996. From 1986 to 1990, he was an assistant researcher at the Shenyang Institute of Computing, Chinese Academy of Sciences, China. From 1990 to 1992, he was a visiting scholar at Passau University, Germany. From 1992 to 1997, he was an assistant professor at the University of Munich, Germany. In 1997, he joined the Department of Computer Science, Dalian Maritime University, China, where he is currently a professor of computer science. His research interests include distributed computing, software engineering, software architecture, formal specification techniques, and program semantics models.

Xiong Xie is a PhD candidate in the Institute of Software Engineering, Dalian Maritime University, China. Her research interests include distributed computing, software engineering, and formal description techniques.
CatBoost: gradient boosting with categorical features support

Anna Veronika Dorogush, Vasily Ershov, Andrey Gulin
Yandex

Abstract

In this paper we present CatBoost, a new open-sourced gradient boosting library that successfully handles categorical features and outperforms existing publicly available implementations of gradient boosting in terms of quality on a set of popular publicly available datasets. The library has a GPU implementation of the learning algorithm and a CPU implementation of the scoring algorithm, which are significantly faster than other gradient boosting libraries on ensembles of similar sizes.

1 Introduction

Gradient boosting is a powerful machine-learning technique that achieves state-of-the-art results in a variety of practical tasks. For a number of years, it has remained the primary method for learning problems with heterogeneous features, noisy data, and complex dependencies: web search, recommendation systems, weather forecasting, and many others [2, 15, 17, 18]. It is backed by strong theoretical results that explain how strong predictors can be built by iteratively combining weaker models (base predictors) via a greedy procedure that corresponds to gradient descent in a function space.

Most popular implementations of gradient boosting use decision trees as base predictors. It is convenient to use decision trees for numerical features, but, in practice, many datasets include categorical features, which are also important for prediction. A categorical feature is a feature having a discrete set of values that are not necessarily comparable with each other (e.g., user ID or name of a city). The most common practice for dealing with categorical features in gradient boosting is converting them to numbers before training. In this paper we present a new gradient boosting algorithm that successfully handles categorical features and takes advantage of dealing with them during training, as opposed to at preprocessing time.
Another advantage of the algorithm is that it uses a new schema for calculating leaf values when selecting the tree structure, which helps to reduce overfitting. As a result, the new algorithm outperforms the existing state-of-the-art implementations of gradient boosted decision trees (GBDTs), namely XGBoost [4], LightGBM [1] and H2O [2], on a diverse set of popular tasks (Sec. 6). The algorithm is called CatBoost (for "categorical boosting") and is released in open source [3]. CatBoost has both CPU and GPU implementations. The GPU implementation allows for much faster training and is faster than both state-of-the-art open-source GBDT GPU implementations, XGBoost and LightGBM, on ensembles of similar sizes. The library also has a fast CPU scoring implementation, which outperforms the XGBoost and LightGBM implementations on ensembles of similar sizes.

1 https://github.com/Microsoft/LightGBM
2 http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/gbm.html
3 https://github.com/catboost/catboost

2 Categorical features

Categorical features have a discrete set of values called categories which are not necessarily comparable with each other; thus, such features cannot be used in binary decision trees directly. A common practice for dealing with categorical features is converting them to numbers at preprocessing time, i.e., each category for each example is substituted with one or several numerical values. The most widely used technique, usually applied to low-cardinality categorical features, is one-hot encoding: the original feature is removed and a new binary variable is added for each category. One-hot encoding can be done during the preprocessing phase or during training; the latter can be implemented more efficiently in terms of training time and is what CatBoost implements.

Another way to deal with categorical features is to compute some statistics using the label values of the examples.
Namely, assume that we are given a dataset of observations \( D = \{(X_i, Y_i)\}_{i=1..n} \), where \( X_i = (x_{i,1}, \ldots, x_{i,m}) \) is a vector of \( m \) features, some numerical, some categorical, and \( Y_i \in \mathbb{R} \) is a label value. The simplest way is to substitute the category with the average label value on the whole train dataset. So, \( x_{i,k} \) is substituted with \( \frac{\sum_{j=1}^{n} [x_{j,k}=x_{i,k}]\, Y_j}{\sum_{j=1}^{n} [x_{j,k}=x_{i,k}]} \), where \([\cdot]\) denotes the Iverson bracket, i.e., \([x_{j,k} = x_{i,k}]\) equals 1 if \( x_{j,k} = x_{i,k} \) and 0 otherwise. This procedure obviously leads to overfitting. For example, if there is a single example from the category \( x_{i,k} \) in the whole dataset, then the new numeric feature value will be equal to the label value on this example. A straightforward way to overcome the problem is to partition the dataset into two parts and use one part only to calculate the statistics and the second part to perform training. This reduces overfitting, but it also reduces the amount of data used to train the model and to calculate the statistics. CatBoost uses a more efficient strategy which reduces overfitting and allows using the whole dataset for training. Namely, we perform a random permutation of the dataset, and for each example we compute the average label value over the examples with the same category value placed before the given one in the permutation. Let \( \sigma = (\sigma_1, \ldots, \sigma_n) \) be the permutation; then \( x_{\sigma_p, k} \) is substituted with
\[
\frac{\sum_{j=1}^{p-1} [x_{\sigma_j,k} = x_{\sigma_p,k}]\, Y_{\sigma_j} + a \cdot P}{\sum_{j=1}^{p-1} [x_{\sigma_j,k} = x_{\sigma_p,k}] + a}, \tag{1}
\]
where we also add a prior value \( P \) and a parameter \( a > 0 \), which is the weight of the prior. Adding a prior is a common practice, and it helps to reduce the noise obtained from low-frequency categories.
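The ordered statistic of Eq. (1) can be sketched in a few lines of Python. This is an illustrative reimplementation, not CatBoost's actual code; the function and variable names are our own:

```python
import random

def ordered_target_stats(categories, labels, prior, a=1.0, seed=0):
    """Encode each example using only examples of the same category that
    precede it in a random permutation (Eq. (1)), smoothed by prior P
    with weight a."""
    n = len(categories)
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    sums = {}    # running sum of labels per category
    counts = {}  # running count per category
    encoded = [0.0] * n
    for idx in perm:  # the "history" is everything seen earlier in perm
        c = categories[idx]
        s, cnt = sums.get(c, 0.0), counts.get(c, 0)
        encoded[idx] = (s + a * prior) / (cnt + a)
        sums[c] = s + labels[idx]
        counts[c] = cnt + 1
    return encoded

cats = ["rock", "jazz", "rock", "rock", "jazz"]
ys = [1, 0, 1, 0, 1]
enc = ordered_target_stats(cats, ys, prior=sum(ys) / len(ys))
```

The first occurrence of a category in the permutation has no history, so it is encoded as exactly the prior; later occurrences blend the prior with the running average, which is what keeps a single-example category from leaking its own label.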
For regression tasks, the standard technique for calculating the prior is to take the average label value in the dataset. For a binary classification task, the prior is usually the a priori probability of encountering the positive class. It is also efficient to use several permutations. However, one can see that a straightforward usage of statistics computed for several permutations would lead to overfitting. As we discuss in the next section, CatBoost uses a novel schema for calculating leaf values which allows using several permutations without this problem.

**Feature combinations** Note that any combination of several categorical features could be considered as a new one. For example, assume that the task is music recommendation and we have two categorical features: user ID and musical genre. Some user prefers, say, rock music. When we convert user ID and musical genre to numerical features according to (1), we lose this information. A combination of the two features solves this problem and gives a new powerful feature. However, the number of combinations grows exponentially with the number of categorical features in the dataset, and it is not possible to consider all of them in the algorithm. When constructing a new split for the current tree, CatBoost considers combinations in a greedy way. No combinations are considered for the first split in the tree. For the next splits, CatBoost combines all combinations and categorical features present in the current tree with all categorical features in the dataset. Combination values are converted to numbers on the fly. CatBoost also generates combinations of numerical and categorical features in the following way: all the splits selected in the tree are considered as categorical with two values and used in combinations in the same way as categorical ones.

**Important implementation details** Another way of substituting a category with a number is calculating the number of appearances of this category in the dataset.
This is a simple but powerful technique, and it is implemented in CatBoost. This type of statistic is also calculated for feature combinations. In order to fit the optimal prior at each step of the CatBoost algorithm, we consider several priors and construct a feature for each of them, which is more efficient in terms of quality than the standard techniques mentioned above.

3 Fighting Gradient Bias

CatBoost, as well as all standard gradient boosting implementations, builds each new tree to approximate the gradients of the current model. However, all classical boosting algorithms suffer from overfitting caused by the problem of biased pointwise gradient estimates. The gradients used at each step are estimated using the same data points the current model was built on. This leads to a shift of the distribution of estimated gradients in any domain of the feature space in comparison with the true distribution of gradients in this domain, which leads to overfitting. The idea of biased gradients was discussed in previous literature [5]. We have provided a formal analysis of this problem in [5], which also contains modifications of the classical gradient boosting algorithm that try to solve it. CatBoost implements one of those modifications, briefly described below.

In many GBDTs (e.g., XGBoost, LightGBM), building the next tree comprises two steps: choosing the tree structure and setting values in the leaves after the tree structure is fixed. To choose the best tree structure, the algorithm enumerates different splits, builds trees with these splits, sets values in the obtained leaves, scores the trees and selects the best split. Leaf values in both phases are calculated as approximations for gradients [8] or for Newton steps. In CatBoost the second phase is performed using the traditional GBDT scheme, and for the first phase we use the modified version.
According to intuition obtained from our empirical results and our theoretical analysis in [5], it is highly desirable to use unbiased estimates of the gradient step. Let \( F^i \) be the model constructed after building the first \( i \) trees, and let \( g^i(X_k, Y_k) \) be the gradient value on the \( k \)-th training sample after building \( i \) trees. To make the gradient \( g^i(X_k, Y_k) \) unbiased w.r.t. the model \( F^i \), we need \( F^i \) to be trained without the observation \( X_k \). Since we need unbiased gradients for all training examples, no observations could be used for training \( F^i \), which at first glance makes the training process impossible. We use the following trick to deal with this problem: for each example \( X_k \), we train a separate model \( M_k \) that is never updated using a gradient estimate for this example. With \( M_k \), we estimate the gradient on \( X_k \) and use this estimate to score the resulting tree. Let us present the pseudo-code that explains how this trick can be performed. Let \( \text{Loss}(y, a) \) be the loss function being optimized, where \( y \) is the label value and \( a \) is the formula value.

**Algorithm 1:** Updating the models and calculating model values for gradient estimation

```
input: {(X_k, Y_k)} for k = 1..n ordered according to σ; the number of trees I
1. M_i ← 0 for i = 1..n;
2. for iter ← 1 to I do
3.   for i ← 1 to n do
4.     for j ← 1 to i − 1 do
5.       g_j ← d/da Loss(Y_j, a) |_{a = M_i(X_j)};
6.     M ← LearnOneTree((X_j, g_j) for j = 1..i − 1);
7.     M_i ← M_i + M;
8. return M_1, …, M_n; M_1(X_1), M_2(X_2), …, M_n(X_n)
```

Note that \( M_i \) is trained without using the example \( X_i \).
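A toy sketch of Algorithm 1 in Python, assuming squared loss L(y, a) = (a − y)²/2 (so the gradient is a − y) and replacing LearnOneTree with a stand-in constant base learner; the names are ours, not CatBoost's:

```python
def learn_one_tree(examples):
    """Stand-in base learner: a constant predictor fit to the negative
    gradients (a real implementation would fit a decision tree)."""
    if not examples:
        return lambda x: 0.0
    step = -sum(g for _, g in examples) / len(examples)
    return lambda x, s=step: s

def ordered_boosting(xs, ys, n_trees, lr=0.5):
    n = len(xs)
    models = [[] for _ in range(n)]  # model M_i kept as a list of trees
    predict = lambda i, x: sum(tree(x) for tree in models[i])
    for _ in range(n_trees):
        for i in range(n):
            # gradients for j < i, computed with M_i (never with example i)
            grads = [(xs[j], predict(i, xs[j]) - ys[j]) for j in range(i)]
            tree = learn_one_tree(grads)
            models[i].append(lambda x, t=tree: lr * t(x))
    # residual predictions M_i(X_i): each is unbiased w.r.t. example i
    return [predict(i, xs[i]) for i in range(n)]
```

On a toy run such as `ordered_boosting([0.0] * 4, [1.0] * 4, n_trees=60)`, the prediction for the first example stays 0 (its model never sees any data), while the later models converge toward the true label.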
The CatBoost implementation uses the following relaxation of this idea: all \( M_i \) share the same tree structures. In CatBoost we generate \( s \) random permutations of our training dataset. We use several permutations to enhance the robustness of the algorithm: we sample a random permutation and obtain gradients on its basis. These are the same permutations as the ones used for calculating statistics for categorical features. We use different permutations for training distinct models, so using several permutations does not lead to overfitting. For each permutation \( \sigma \), we train \( n \) different models \( M_i \), as shown above. That means that for building one tree we need to store and recalculate \( O(n^2) \) approximations for each permutation \( \sigma \): for each model \( M_i \), we have to update \( M_i(X_1), \ldots, M_i(X_n) \). Thus, the resulting complexity of this operation is \( O(s\,n^2) \). In our practical implementation, we use one important trick which reduces the complexity of one tree construction to \( O(s\,n) \): for each permutation, instead of storing and updating \( O(n^2) \) values \( M_i(X_j) \), we maintain values \( M'_i(X_j) \), \( i = 1, \ldots, \lfloor \log_2(n) \rfloor \), \( j < 2^{i+1} \), where \( M'_i(X_j) \) is the approximation for the sample \( j \) based on the first \( 2^i \) samples. Then, the number of predictions \( M'_i(X_j) \) is not larger than \( \sum_{0 \leq i \leq \log_2(n)} 2^{i+1} < 4n \). The gradient on the example \( X_k \) used for choosing a tree structure is estimated on the basis of the approximation \( M'_i(X_k) \), where \( i = \lfloor \log_2(k) \rfloor \).

4 Fast scorer

CatBoost uses oblivious trees as base predictors. In such trees the same splitting criterion is used across an entire level of the tree [12, 13]. Such trees are balanced and less prone to overfitting. Gradient boosted oblivious trees were successfully used in various learning tasks [7, 10].
In oblivious trees each leaf index can be encoded as a binary vector with length equal to the depth of the tree. This fact is widely used in the CatBoost model evaluator: we first binarize all used float features, statistics and one-hot encoded features, and then use these binary features to calculate model predictions. All binary feature values for all examples are stored in a continuous vector $B$. Leaf values are stored in a float vector of size $2^d$, where $d$ is the tree depth. To calculate the leaf index for the $t$-th tree and an example $x$, we compute $\sum_{i=0}^{d-1} 2^i \cdot B(x, f(t, i))$, where $B(x, f)$ is the value of the binary feature $f$ on the example $x$ (read from the vector $B$) and $f(t, i)$ is the number of the binary feature from the $t$-th tree at depth $i$. These indices can be computed in a data-parallel manner, which gives up to a 3x speedup. This results in a much faster scorer than all existing ones, as shown in our experiments.

5 Fast training on GPU

**Dense numerical features** One of the most important building blocks for any GBDT implementation is searching for the best split. This block is the main computational burden when building a decision tree on dense numerical datasets. CatBoost uses oblivious decision trees as base learners and performs feature discretization into a fixed number of bins to reduce memory usage [10]. The number of bins is a parameter of the algorithm. As a result, we can use a histogram-based approach to searching for the best splits. Our approach to building decision trees on GPU is similar in spirit to the one described in [11]. We group several numerical features into one 32-bit integer and currently use:
- 1 bit for binary features, grouping 32 features per integer;
- 4 bits for features with no more than 15 bins, 8 features per integer;
- 8 bits for other features (maximum feature discretization is 255 bins), 4 features per integer.

In terms of GPU memory usage CatBoost is at least as efficient as LightGBM [11].
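The leaf-index computation described in the Fast scorer section above amounts to packing one binary feature per tree level into an integer. A minimal sketch with hypothetical names, not the CatBoost evaluator itself:

```python
def leaf_index(binary_features, feature_ids):
    """binary_features: 0/1 vector for one example (after binarization);
    feature_ids[i] is the binary feature tested at depth i of the tree."""
    return sum(binary_features[f] << i for i, f in enumerate(feature_ids))

def predict_oblivious_tree(binary_features, feature_ids, leaf_values):
    # leaf_values has 2^d entries for a tree of depth d
    return leaf_values[leaf_index(binary_features, feature_ids)]

# depth-2 tree testing binary features 3 and 0
feats = [1, 0, 1, 0]
idx = leaf_index(feats, [3, 0])  # bit 0 = feats[3] = 0, bit 1 = feats[0] = 1
print(idx)  # 2
```

Because the index is a pure bit-packing of precomputed binary features, it involves no branching, which is what makes the data-parallel evaluation mentioned above possible.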
The main difference is another way of histogram computation. The algorithms in LightGBM and XGBoost have a major drawback: they rely on atomic operations. Such a technique makes concurrent memory accesses easy to handle, but it is also relatively slow even on the modern generation of GPUs. Actually, histograms can be computed more efficiently without any atomic operations. We describe only the basic idea of our approach via a simplified example: simultaneous computation of four 32-bin histograms with a single float additive statistic per feature. This idea can be generalized to cases with several statistics and multiple histograms. So we have gradient values $g[i]$ and feature groups $(f_1, f_2, f_3, f_4)[i]$. We need to compute 4 histograms: $\text{hist}[j][b] = \sum_{i: f_j[i] = b} g[i]$. CatBoost builds a partial histogram per warp instead of a histogram per thread block. We describe the work done by one warp on the first 32 samples; the thread with index $i$ processes sample $i$. Since we are building 4 histograms at once, we need $32 \times 32 \times 4$ bytes of shared memory per warp. To update the histograms, all 32 threads load sample labels and grouped features into registers. Then the warp updates the shared-memory histograms simultaneously in 4 iterations: on the $l$-th iteration ($l = 0 \ldots 3$), the thread with index $i$ works with feature $f_{(l+i) \mod 4}$ and adds $g[i]$ to $\text{hist}[(l + i) \mod 4][f_{(l+i) \mod 4}[i]]$. With a proper histogram layout this operation avoids any bank conflicts, and statistics are added by all 32 threads in parallel. The CatBoost implementation builds histograms for 8 features per group and 2 statistics; for 32 binary features per group and 2 statistics; and for 4 features per group and 2 statistics with bin counts 32, 64, 128 and 255. In order to achieve fast computation of all these histograms we have to use all available shared memory, so our code cannot achieve 100% occupancy.
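The 4-iteration rotated schedule can be checked with a small NumPy simulation (a sequential model of the warp, not GPU code; it verifies that the rotated updates produce the same histograms as a direct computation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_threads, n_feats, n_bins = 32, 4, 32
g = rng.standard_normal(n_threads)                      # gradient per sample
f = rng.integers(0, n_bins, size=(n_feats, n_threads))  # bin per (feature, sample)

# Rotated schedule: on iteration l, thread i updates histogram (l + i) % 4,
# so within one iteration the 4 feature histograms are written by disjoint
# groups of threads, mirroring the conflict-free shared-memory layout.
hist = np.zeros((n_feats, n_bins))
for l in range(n_feats):
    for i in range(n_threads):
        j = (l + i) % n_feats
        hist[j, f[j, i]] += g[i]

# Direct computation for comparison
direct = np.zeros((n_feats, n_bins))
for j in range(n_feats):
    np.add.at(direct[j], f[j], g)

assert np.allclose(hist, direct)
```

After the 4 iterations every thread has contributed its gradient to each of the 4 histograms exactly once, which is why the result matches the direct computation.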
Because of this lower occupancy, we do loop unrolling to utilize instruction-level parallelism; this technique allows high performance even at lower occupancy.\footnote{https://devblogs.nvidia.com/parallelforall/gradient-boosting-decision-trees-xgboost-cuda/} \footnote{http://www.nvidia.com/content/GTC-2010/pdfs/2238_GTC2010.pdf}

Dealing with categorical features is the slowest and most memory-consuming part of the algorithm on GPU. We use perfect hashing to store the values of categorical features in order to reduce memory usage. Because of memory constraints on GPU, we store bit-compressed perfect hashes in CPU RAM and stream the required data on demand, overlapping computation and memory operations. Construction of feature combinations on the fly requires us to dynamically build (perfect) hash functions for these new features and to compute statistics with respect to some permutation (Eq. (1)) for each unique value of the hash. We use radix sort to build the perfect hashes and group observations by hash. In every group we need to compute prefix sums of some statistic. Computation of this statistic is done with the segmented scan GPU primitive (the CatBoost segmented scan implementation is done via operator transformation [16] and is based on the highly efficient implementation of the scan primitive in CUB [6]).

**Multiple GPU support** The CatBoost GPU implementation supports several GPUs out of the box. Distributed tree learning can be parallelized by samples or by features. Since CatBoost uses a computation scheme with several permutations of the learning dataset and computes statistics for categorical features during training, we use feature-parallel learning.

6 Experiments

**Quality: comparison with baselines** We compare our algorithm with XGBoost, LightGBM and H2O.
The results of the comparison are presented in Table 1. The detailed description of the experimental setup as well as the dataset descriptions are published on our GitHub together with the code of the experiment and a container with all the used libraries, so that the results can be reproduced. In this comparison we preprocessed categorical features using statistics on random-permutation data. Parameter tuning and training were performed on 4/5 of the data, and testing was performed on the other 1/5. Training with the selected parameters was performed 5 times with different random seeds; the reported result is the average logloss over the 5 runs. The table shows that the CatBoost algorithm outperforms the other algorithms on all datasets in the classification task. In our GitHub repo you can also see that CatBoost with default parameters outperforms tuned XGBoost and H2O on all but one dataset.

**GPU vs CPU training performance** Scripts for running the GPU experiments are in our GitHub repo. In the first experiment we compared training speed for our GPU vs CPU implementations. For the CPU version we used a dual-socket server with 2 Intel Xeon CPUs (E5-2650v2, 2.60GHz) and 256GB RAM and ran CatBoost in 32 threads (equal to the number of logical cores). The GPU implementation was run on several servers with different GPU types. Our GPU implementation doesn't require a multi-core server for high performance, so different CPUs and machines shouldn't significantly affect the benchmark results. Results are presented in Table 2. We used the Criteo dataset (36 × 10⁶ samples, 26 categorical and 13 numerical features) to benchmark our categorical features support; we used two GTX 1080 cards because one did not have enough memory. It is clearly seen that the GPU version significantly outperforms CPU training time even on an old-generation GPU (Tesla K40) and gains an impressive 15x speedup on an NVIDIA V100 card.

Table 1: Comparison with baselines. Tuned algorithms.
Logloss <table> <thead> <tr> <th>Dataset</th> <th>CatBoost</th> <th>LightGBM</th> <th>XGBoost</th> <th>H2O</th> </tr> </thead> <tbody> <tr> <td>Adult</td> <td>0.269741</td> <td>0.276018 (+2.33%)</td> <td>0.275423 (+2.11%)</td> <td>0.275104 (+1.99%)</td> </tr> <tr> <td>Amazon</td> <td>0.137720</td> <td>0.163620 (+18.79%)</td> <td>0.163271 (+18.55%)</td> <td>0.162641 (+18.09%)</td> </tr> <tr> <td>Appet</td> <td>0.071511</td> <td>0.071795 (+0.40%)</td> <td>0.071760 (+0.35%)</td> <td>0.072457 (+1.32%)</td> </tr> <tr> <td>Click</td> <td>0.390902</td> <td>0.396226 (+1.39%)</td> <td>0.396242 (+1.37%)</td> <td>0.397595 (+1.75%)</td> </tr> <tr> <td>Internet</td> <td>0.208748</td> <td>0.223154 (+6.90%)</td> <td>0.225323 (+7.94%)</td> <td>0.222091 (+6.39%)</td> </tr> <tr> <td>Kdd98</td> <td>0.194668</td> <td>0.195759 (+0.56%)</td> <td>0.195775 (+0.52%)</td> <td>0.195395 (+0.37%)</td> </tr> <tr> <td>Kdd churn</td> <td>0.231289</td> <td>0.232089 (+0.33%)</td> <td>0.233123 (+0.79%)</td> <td>0.232752 (+0.63%)</td> </tr> <tr> <td>Kick</td> <td>0.284793</td> <td>0.295600 (+3.82%)</td> <td>0.294647 (+3.46%)</td> <td>0.294814 (+3.52%)</td> </tr> </tbody> </table> Table 2: GPU vs CPU training time in seconds (speedup over CPU in parentheses) <table> <thead> <tr> <th>Device</th> <th>Epsilon, 128 bins</th> <th>Epsilon, 32 bins</th> <th>Criteo, 128 bins</th> </tr> </thead> <tbody> <tr> <td>CPU</td> <td>1060 (1.0)</td> <td>655 (1.0)</td> <td>120 (1.0)</td> </tr> <tr> <td>K40</td> <td>373 (2.84)</td> <td>248 (2.6)</td> <td>337 (2.83)</td> </tr> <tr> <td>GTX 1080</td> <td>283 (3.7)</td> <td>120 (5.4)</td> <td>123 (9.6)</td> </tr> <tr> <td>GTX 1080Ti</td> <td>301 (3.5)</td> <td>88 (7.4)</td> <td>123 (9.6)</td> </tr> <tr> <td>P100-PCI</td> <td>82 (12.9)</td> <td>70 (9.3)</td> <td>123 (9.6)</td> </tr> <tr> <td>V100-PCI</td> <td>60 (15)</td> <td>49 (13.3)</td> <td>123 (9.6)</td> </tr> </tbody> </table> **Categorical features** CatBoost implements several ways to deal with categorical features.
For one-hot encoded features we don’t need any special treatment — the histogram-based approach for split searching is easily adapted to this case. Statistics computation for single categorical features can also be done during the preprocessing stage. CatBoost also uses statistics for feature combinations.
We used the Epsilon dataset (4 * 10^5 samples, 2000 features) to benchmark our performance on a dense numerical dataset. For a dense numerical dataset, CatBoost GPU training time depends on the level of feature discretization. In Table 2 we report times for the default 128 bins and for 32 bins, which is often sufficient. We would like to mention that the Epsilon dataset does not have enough samples to fully utilize the GPU, and with bigger datasets we observe larger speedups. **GPU training performance: comparison with baselines** It is very hard to compare different boosting libraries in terms of training speed. Every library has a vast number of parameters which affect training speed, quality and model size in a non-obvious way. Every library has its own quality/training-speed trade-offs, and they can't be compared without domain knowledge (e.g., is 0.5% of a quality metric worth training a model 3-4 times slower?).
Moreover, for each library it is possible to obtain almost the same quality with different ensemble sizes and parameters, so we can't compare the libraries by the time needed to reach a certain level of quality. Instead, we give only some insight into how fast our GPU implementation can train a model of fixed size. We use the Epsilon dataset (4 * 10^5 samples for train, 10^5 samples for test). For this dataset we measure the mean tree construction time one can achieve without feature subsampling and/or bagging with CatBoost and 2 open-source implementations of boosting with GPU support: XGBoost (we use the histogram-based version; the exact version is very slow) and LightGBM. We ran all experiments on the same machines with an NVIDIA P100 accelerator, a dual-socket Intel Xeon E5-2660 CPU and 128GB RAM. For XGBoost and CatBoost we use the default tree depth of 6; for LightGBM we set the leaf count to 64 to have more comparable results. We set the bin count to 15 for all 3 methods. This bin count gives the best performance and the lowest memory usage for LightGBM and CatBoost (a bin count of 128-255 usually makes both algorithms run 2-4 times slower). For XGBoost we could use an even smaller bin count, but the performance gains compared to 15 bins are too small to account for. All algorithms were run with 16 threads, which is equal to the hardware core count. By default CatBoost uses the bias-fighting scheme described in Section 3, which is by design 2-3 times slower than the classical boosting approach. The GPU implementation of CatBoost contains a mode based on the classic scheme for those who need the best training performance; in this benchmark we used the classic scheme. We set the learning rate such that the algorithms start to overfit after approximately 8000 trees (learning curves are displayed in Figure 1; the quality of the obtained models differs by approximately 0.5%). We measure the time to train ensembles of 8000 trees. The mean tree construction time was 17.9ms for CatBoost, 488ms for XGBoost and 40ms for LightGBM.
These times give only a very rough speed comparison, because the training time of one tree depends on the distribution of features and on the ensemble size. At the same time, it shows that for similar ensemble sizes we can expect CatBoost and LightGBM to compete for the fastest method, while XGBoost is significantly slower than both of them. **Scorer performance** We used the LightGBM, XGBoost and CatBoost models for the Epsilon dataset trained as described above. For each model we limit the number of trees used for evaluation to 8000 to make the results comparable, for the reasons described above; thus this comparison gives only some insight into how fast the models can be scored. For each algorithm we loaded the test dataset in Python, converted it to the algorithm's internal representation and measured the wall time of model predictions on an Intel Xeon E5-2660 CPU with 128GB RAM. The results are presented in Table 3. We can see that for similar ensemble sizes CatBoost can be scored around 25 times faster than XGBoost and around 60 times faster than LightGBM. ![Figure 1: GPU learning curves](image) (a) AUC vs Number of trees (b) AUC vs Time <table> <thead> <tr> <th>Method</th> <th>1 thread</th> <th>32 threads</th> </tr> </thead> <tbody> <tr> <td>CatBoost</td> <td>2.4s (x32.5)</td> <td>21ms (x19.5)</td> </tr> <tr> <td>XGBoost</td> <td>78s (x50.8)</td> <td>17.1s (x74)</td> </tr> <tr> <td>LightGBM</td> <td>122s (x50.8)</td> <td>17.1s (x74)</td> </tr> </tbody> </table> Table 3: Scorer comparison References
D.1 Introduction - General differences to Java - Objects and Classes in C++ - Constructors and Destructors - Inheritance - Exceptions - Odds and Ends - Operator overloading - No: Templates - No: Standard Template Library (STL) 1. A Short History of C++ - 1980: Bjarne Stroustrup extends C to *C with Classes* - 1983: Bjarne Stroustrup introduces C++ V1.0 - 1989: ANSI approves Standard C with elements from C++ - 1989: ANSI committee X3J16 begins standardization of C++ (V2.0) - 1991: *The Annotated C++ Reference Manual* defines C++ V3.0 including *Templates* and *Exceptions* - 1993: C++ V3.1 includes *Namespaces* and *Run-Time Type Identification* - 1997: ISO WG21 and ANSI X3J16 adopt C++ and the *Standard Template Library (STL)* as standard ISO/IEC FDIS 14882 2 What is C++? ■ Super-set of C ■ A better C ◆ Strong typing ◆ Prototypes ◆ Overloading ■ Extends C to include object-oriented concepts ◆ Objects ◆ Classes ◆ Inheritance ◆ Polymorphism ■ BUT: C++ does not enforce an object-oriented style of programming ➔ Therefore you learn Java first! 3 Literature ■ ANSI C++ Public Comment Draft, December 1996.
See tutorial web page D.2 General Differences to Java - Input and output - Inlining - Scope operator - Namespaces - Memory management - Function overloading - Reference variables - Default parameters - Constants 1 Input and output - Input and output to Streams via Operators - `cin` Input stream (global) - `cout, cerr, clog` Output streams (global) - `>>` Input operator - `<<` Output operator - Example: ```cpp #include <iostream> void main() { int test; // i/o test variable cin >> test; cout << "test=" << test << "\n"; } ``` - C: `scanf` and `printf` are not type-safe (format string) 2 Inlining - Reserved word `inline`: ```c inline return_type function_name( parameter_list ) { function_body } ``` - Compiler tries to optimize function calls - Instead of a function call the body of the whole function is inserted - Faster calls, but larger programs - Further optimizations possible (e.g. for calls with constant parameters) - Not possible for recursive functions - *Function body must be implemented in the header file (.H or .hh)!!!* - Differences to pre-processor macros (`#define`): - Macros are expanded as normal text - No type checking, often mysterious syntax errors - No repeated expansion for `inline` functions 3 Scope operator - New operator `::` for accessing scopes - Mainly used with classes and namespaces - *Here:* Accessing hidden variables with the same identifier in other scopes - Example: ```c #include <iostream> int test = 4711; // global variable void main() { int test = 1234; // local variable cout << "The global variable is " << ::test << "\n"; cout << "The local variable is " << test << "\n"; } ``` 4 Namespaces - New reserved word `namespace`: ```cpp namespace namespace_name { declarations/definitions } ``` - Opens a new namespace for identifiers - Can be nested - Access via scope operator `::` - Like `package` in Java, but no relation to file organisation Example: ```cpp namespace Date { struct Time { int year; ...
}; } Date::Time today; ``` 4 Namespaces (2) - Import of identifiers from other name spaces via `using`: ```cpp using namespace_name::identifier; ``` - Like `import package.identifier;` in Java - Import of complete name spaces: ```cpp using namespace namespace_name; ``` - Like `import package.*;` in Java Example: ```cpp namespace Date { struct Time { ... }; } namespace MyApp { using Date::Time; Time today; } ``` 5 Memory management - Two operators in C++: - Memory allocation with `new` ``` type *pointer_to_type; pointer_to_type = new type; ``` - If allocation fails a `std::bad_alloc` exception is thrown (or a `NULL` pointer is returned) - C: No explicit type casting necessary - Memory deallocation with `delete` ``` delete pointer_to_type; ``` - Programmer is responsible for deallocation - Pointer is still accessible after deallocation - Common source of programming errors - `delete` for a `NULL` pointer is allowed - C: memory management with `malloc` and `free` 5 Memory management (2) - Example: ``` int *x=0; // okay delete x; // okay x = new int; // okay delete x; // okay delete x; // wrong ``` - Special syntax for arrays: ``` int *ap = new int[7]; delete[] ap; // not delete ap !!! ``` - Never ever mix `malloc / free` with `new / delete` - Caution: E.g.
`strdup` does an implicit `malloc` - Unfortunately no `Garbage Collection` in C++ 6 Function overloading - Same function name for different implementations - Works for pure C functions and C++ methods - Overloaded functions are distinguished by: - Number of parameters - Type of parameters - Sequence of parameter types - Not: Return type of function (Return value may be ignored) - Example: ``` void Print(); // okay void Print(int, char*); // okay int Print(float); // okay int Print(); // error, not distinguishable ``` 7 Reference variables - Address operator `&` in variable declaration ``` type &reference_variable = variable_of_type; ``` - Reference variable - No real variables - Proxy or alias for another variable - Must be initialized during declaration (with `lvalue` - a thing that can be on the left side of an assignment, i.e. it can take a value) - Example: ``` int x = 5; // variable int &rx = x; // reference to x x = 6; // x==6 and rx==6 rx++; // x==7 and rx==7 ``` - Operations on reference variables affect the referenced variables - Similar to pointers with implicit dereferencing but less flexible 7 Reference variables (2) - Reference parameters - Allow implicit *call-by-reference* semantics - No pointers necessary - Caller writes down call with normal syntax - Disadvantage: syntax of call does not show semantics **Example:** ```cpp #include <iostream> void increment(int& x) { x++; } void main() { int x = 5; increment(x); cout << "x=" << x << "\n"; // x==6 } ``` 7 Reference variables (3) - Returning references is also possible - Function returns a variable (*lvalue*) not a value ```cpp int global = 0; // global variable int& func() { return global; // returns reference to global } int main() { int x; x = func() + 1; // x = global + 1; func() = x; // global = x; } ``` - Returning references to local variables is forbidden ```cpp int& func() { int x = 0; int& rx = x; return rx; // forbidden } ``` 8 Default parameters - Function parameters may contain *default* 
values - Will be used when the actual parameter in a call is missing - Only at the end of the parameter list, no gaps allowed **Example:** ```c void print(char* string, int nl = 1); print( "Test", 0 ); print( "Test" ); // is equal to print( "Test", 1 ) print(); // wrong, char* parameter is missing ``` - Caution: overloading and default parameters may generate ambiguities ```c void print(char* string); void print(char* string, int nl = 1); print( "Test" ); // which function ?????????? ``` 9 Constants - Reserved word `const` modifies declaration - `const` variables are read-only (`final` in Java) - Initialization during declaration **Example:** ```c const int k = 42; char* const s1 = "Test1"; const char* s2 = "Test2"; const char* const s3 = "Test3"; k = 4; // error: k is const s1 = "New test"; // error: pointer is const *s1 = 'P'; // okay, characters are not const s2 = "New test"; // okay, pointer is not const *s2 = 'P'; // error: characters are const ``` - Should be preferred to `#define`, because managed by the compiler - Definition of local constants - Pointer to constants possible (like pointers to variables) D.3 Objects and Classes in C++ - Extension of structs - Classes - Visibility - Object creation - Object access - Member functions (methods) 1 Extension of structs - New concept for structs - Every struct defines a type - Local functions in structs - Example: ``` struct Person { char* name; int age; void setName( char* ); void setAge( int ); }; ``` - Disadvantage: unrestricted access to all parts from the outside 2 Classes - Class declaration in C++ with reserved word `class`: ```cpp class class_name { Declaration of member variables and functions }; ``` - Contains declaration of data and methods (in C++ called `members`) - Sending a message means in C++: accessing a member Example: ```cpp class Person { char* name; int age; void setName( char* ); void setAge( int ); }; ``` 3 Visibility - Different visibility for parts of an object: - `private`: Member can be 
accessed only from within its class - `public`: Member can be accessed from anywhere - `protected`: like `private`, but subclasses have access - Parts can be declared in any order and can be repeated - `public` parts are the interface for other objects - Default visibility is `private`! 3 Visibility (2) Example: ```cpp class Person { private: char* name; // private member variables int age; public: void setName( char* ); // public member functions void setAge( int ); }; ``` 4 Object creation - Static creation: syntax is the same as declaring a variable ```cpp Person peter; Person john; ``` - Object deleted when identifier goes out of scope - Dynamic creation: ```cpp Person* peter; peter = new Person; // object is created now ``` - Object explicitly deleted ```cpp delete peter; // object is deleted now ``` 5 Object access - Access from outside the object - Private member variables are not accessible - Private member functions are not accessible - Public member variables and functions are accessible - Access operators - As in structs with the dot operator .
- With pointers to objects use the arrow operator -> - Example: ```cpp Person peter; Person* john = new Person; peter.setName( "Peter Smith" ); // okay, public cout << peter.name; // error, private john->setAge( 35 ); // okay, public cout << john->age; // error, private delete john; ``` 6 Member functions (methods) - Definition within the class declaration: - Function body comes directly after the declaration (as in Java) - Function becomes automatically inline - Usually used in header files (.h, .H or .hh) - Definition outside the class: - Within the class only declaration of the function prototype - During definition you first have to name the class - Afterwards comes the function name separated by the scope operator :: - Usually used in implementation files (.c, .cc, or .cpp) 7 Member functions (methods) (2) - Example: - Header (`Person.h`) ``` #ifndef PERSON_H #define PERSON_H class Person { private: char* name; int age; public: void setName( char* n ) { // inline name = n; } void setAge( int ); }; #endif ``` ``` #include "Person.h" void Person::setAge( int i ) { age = i; } ``` - Implementation (`Person.cpp`) 8 Constant Objects - Variable declared `const` - Initialized when declared - Cannot be changed afterwards - Very useful for method parameters - Silly example: ``` const Person nobody; ``` - Only operations that do not alter the object may be executed - Easy for member variable access - Methods that do not alter members - How does the compiler know? - It does not!
- Needs a hint from the programmer 8 Constant objects (2) - Methods may be declared `const` - `Const` methods do not change the object they are called at - Example: ```cpp class Person { private: char* name; int age; public: int getAge() const { return age; } }; ``` D.4 Constructors and Destructors - Constructors - Destructors - Member objects - Copy constructor - Arrays of objects 1 Constructors - Like in Java - Class method - Method name is the name of the class - No return type (not even `void`) - Different constructors through overloading - Declaration usually in the `public` part of the class - Purpose: New object is automatically initialized after creation - Constructor has to put object in a consistent state - Compiler creates a minimal default constructor (no arguments) if not declared in class 1 Constructors (2) - Called during: - Creation of an object via the operator `new` - Creation of a static object - Minimal default constructor (created by the compiler): ```cpp Person::Person() {} ``` - Default constructor (replaces minimal constructor): ```cpp Person::Person() { name = NULL; age = 0; } ``` 1 Constructors (3) - Other constructors: ``` Person::Person(char *n, int i = 0) { name = n; age = i; } ``` - Default values are possible 2 Destructors - Similar to `finalize` in Java - Class method - Method name is the name of the class with `~` in front - No return type (not even `void`) - Only one destructor possible, no overloading - Destructors have no parameters - Declaration usually in the `public` part of the class - Purpose: Cleaning up before deleting the object - Compiler creates a default destructor (does nothing) if not declared in class 2 Destructors (2) - Called during: - Destruction of an object via the operator `delete` - Leaving the scope of a static object - Minimal default destructor (created by the compiler): ```cpp Person::~Person() {} ``` 3 Member objects - Objects of other classes as members within a class ```cpp class Workplace { Person 
worker; ... }; ``` - Access via operators `.` and `->` as usual - Problems during initialization: - Will the constructors of the member objects be called? - If yes, when will they be called? - Which constructors will be called? - Which parameter values will be used? - Similar problem with object destruction: - When will the destructors of the member objects be called? - No problem: There is only one destructor which has no parameters 3 Member objects (2) - Definition of an initialization list in the constructor: ``` class_name::class_name( parameter_list ) : member1( parameters ), member2( parameters ), ... { ... } ``` - Example: ``` class Person { public: Person( char* ); ... }; class Workplace { Person worker; ... }; Workplace::Workplace( char* name ) : worker( name ) { ... } ``` 4 Copy constructor - When is a copy constructor used? - Object is a value parameter in a function call (call-by-value) - Object is a return value of a function - Initialization of an object with an existing object - Example: ``` Person peter( john ); ``` - Important: use reference operator & (the signature is `Person( const Person& )`) - Default copy constructor (created by the compiler) copies member-by-member (a shallow copy) 5 Arrays of objects - Static arrays - Without initialization - For all elements the default constructor is called ```cpp Person test[4]; // calls 4 times Person::Person() ``` - With initialization - Initialization expressions are used for the first elements, for the rest the default constructor is called ```cpp Person test[4] = { "Peter", Person("John") }; // test[0] and test[1]: Person::Person( char* ) // test[2] and test[3]: Person::Person() ``` 5 Arrays of objects (2) - Dynamically allocated arrays - The default constructor is always called ```cpp Person *table; table = new Person[4]; // 4 times Person::Person() ``` - Access as usual via operator [] ```cpp Person table[4]; table[0].setName( "Peter" ); ``` - Destruction of arrays - For all elements the destructor is called - Dynamically allocated arrays have to be deleted via
`delete[]` D.5 Inheritance - Single Inheritance - Scope operator - Modification of visibility - Constructors and Destructors - Type casting - Virtual methods - Polymorphism - Virtual destructors - Abstract base class - Multiple inheritance 1 Inheritance - Like in Java - Reuse of existing implementations (classes) - New class *inherits* features from the existing class - Denotation: - Class that inherits: Subclass - Class that is inherited from: Superclass or Base class - In C++: *Derivation* of new classes from existing ones - Derivation/Inheritance is an "is-a" relation - One base class: Single inheritance, otherwise Multiple inheritance 1 Inheritance (2) - Syntax: ``` class subclass : [modifier] superclass1, [modifier] superclass2, ... { Declaration of new member variables and new or re-implemented member functions (methods) }; ``` - Not inherited - Constructors - Destructor - Assignment operator 1 Inheritance (3) - Rule in C++: Everything that is not re-implemented, is inherited ``` class Person { ... public: void print(); void setName( char* ); }; class Employee : public Person { ... public: void print(); void setSalary( float ); }; ``` behaves like ``` class Employee : public Person { ... public: void print(); // from Employee void setName( char* ); // from Person void setSalary( float ); // from Employee }; ``` 2 Scope operator - Often access to re-implemented methods of a superclass is needed - **Scope-Operator ::** `class_name::method( ... )` - No `super` as in Java - Example: ```cpp class Employee : public Person { public: void print() { // print(); // no, endless recursion Person::print(); cout << "Salary:" << salary << "\n"; } }; ...
Employee a; a.print(); a.Person::print(); ``` 3 Modification of visibility - Specification of how members of a base class should be visible in the subclass - **public** modifier for inheritance: - `public` stays `public` - `protected` stays `protected` - `private` not accessible in subclass - **protected / private** modifiers for inheritance: - `public` becomes `protected / private` - `protected` becomes `protected / private` - `private` not accessible in subclass 3 Modification of visibility (2) - Usually only public inheritance is used - protected and private inheritance make the interface smaller ➔ Subclass is no longer a subtype of the superclass - Default modifier is private! 4 Constructors - Initialization of superclass members via superclass constructors - Subclass constructor calls superclass constructor via initialisation list ``` class_name::class_name( parameter_list ) : superclass1( parameters ), superclass2( parameters ), ... { ... } ``` - Superclass constructors are called before subclass constructor - Subclass members are initialized after superclass members - Example: ``` Employee::Employee( char* n, int a, float s ) : Person( n, a ), salary( s ) { ... } ``` 5 Destructors - Destruction of superclass members has to happen in the destructor of the superclass - Superclass destructor is *automatically* called after the subclass destructor (the other way round from constructors) - Example: ```cpp Employee::~Employee() { /* destroy only the members added in Employee */ } ``` 6 Pointers to objects - Pointer to a subclass object can be assigned to a pointer to a superclass object: - Subclass is extension of superclass, therefore also subtype - Doesn’t work the other way round: - Explicit type *casting* necessary - Not very nice but sometimes unavoidable - General rule: - Specialized type can be assigned to a more general type.
- Pointers have a *static* and a *dynamic* type: - static: Class from pointer declaration - dynamic: Class of the object that the pointer points to (can be the class from the pointer declaration or any subclass of it) - Static type defines accessible interface (members and methods) 7 Type casting - **C-style casts:** ```cpp class Person { ... }; class Employee : public Person { ... }; ... Employee* e = new Employee; // okay Person* p = new Person; // okay Person* pe = e; // okay Employee* e1 = p; // compiler error Employee* e2 = pe; // compiler error Employee* e3 = (Employee*) pe; // okay Employee* e4 = (Employee*) p; // unrecognisable error ``` - Compiler doesn’t look at dynamic type - Before ANSI-C++ there was no Run-Time Type Information (RTTI) - Avoid them !!! - In ANSI-C++ use `static_cast` or `reinterpret_cast` for low-level type casting ```cpp type variable = static_cast<type>( parameter ); type variable = reinterpret_cast<type>( parameter ); ``` 7 Type casting (2) - **Dynamic casts:** ```cpp type variable = dynamic_cast<type>( parameter ); ``` - Uses Run-Time Type Information to determine if valid - Like all Java casts - Returns NULL if cast fails, no exceptions thrown !!! - Example: ```cpp class Person { ... }; class Employee : public Person { ... }; ...
Employee* e = new Employee;
Person* p = new Person;
Person* pe = e;
Employee* e3 = dynamic_cast<Employee*>( pe );  // okay
Employee* e4 = dynamic_cast<Employee*>( p );   // returns NULL
```

- Additionally `const_cast` for casting away constness

8 Virtual methods

- Up to now:
  - Type of the pointer (static type), not the type of the object pointed to (dynamic type), defines the interface and the semantics of a call
  - Access to subclass members only after type casting of the pointer
- Aim is polymorphism: Execution of the suitable subclass method without explicitly knowing the subclass *(This is what you always have in Java!)*
- Solution: Virtual methods
  - Object defines the semantics, not the pointer
- Syntax with reserved word `virtual`:

```cpp
class class_name {
    virtual return_type method_name( parameter_list ) { ... }
};
```

- `virtual` has to be specified in the base class and is inherited

9 Polymorphism

- Example:

```cpp
class Person {
    ...
public:
    virtual void print();
};
class Employee : public Person {
    ...
public:
    void print();
};
...
Person* p  = new Person;
Person* pe = new Employee;
p->print();   // Person::print()
pe->print();  // Employee::print()
```

- Called method is determined at run-time
- Called object has a defined type, therefore the method to be called is unambiguous
- Compiler generates vtables (jump tables for virtual methods)
- Every object contains a pointer to the vtable of its class, therefore larger objects

10 Virtual destructors

- Dynamically allocated objects may be assigned to superclass pointers.
- Problem: If the object is deleted, only the superclass destructor is called because of the static type of the superclass pointer.
  - Objects are not destroyed properly.
- Solution: **Virtual** destructor:

```
class class_name {
    virtual ~class_name() { ... }
};
```

- **virtual** has to be specified in the base class.
- Is inherited by all subclasses although destructor names are different in subclasses.
11 Abstract classes

- Abstract classes:
  - Not all methods that were declared are also implemented.
  - There can be no instances/objects of such a class.
  - Subclasses can only have instances if all declared methods are also implemented.
- Abstract classes can be used:
  - As superclasses without instances (class with **abstract** methods in Java).
  - To define a type/interface (**interface** in Java).
- Syntax for methods that are not implemented (**pure virtual**):

```
class class_name {
    virtual return_type method_name( parameter_list ) = 0;
};
```

- Pointers to abstract classes are possible but have to be initialized with an object of a subclass that is not abstract.

12 Multiple inheritance

- Subclass has *multiple* superclasses (forbidden in Java)
- Subclass contains *every* superclass as an implicit part
- The subclass constructor can call the constructors of every superclass in the initialization list

```cpp
class Base1 {
    ...
public:
    Base1( int, char* );
};
class Base2 {
    ...
public:
    Base2( int, float );
};
class Derived : public Base1, public Base2 {
    ...
public:
    Derived( char *s, int i )
        : Base1( i, s ), Base2( i, 4.2 )
    { }
};
```

- When an object of the subclass is destroyed, the destructors of all superclasses are called

12 Multiple inheritance (2)

- Problem: *Ambiguities* through name clashes
- Two or more superclasses have the same member:
  - Member variables with the same name
  - Methods with the same name and the same parameters
- First automatic resolution of ambiguities, then access control (visibility)
  - Making one member private doesn't help
- Explicit resolution of name clashes for variables:
  - Specify the superclass before the variable name using the scope operator `::`
- Possible solution for methods:
  - Reimplement the method and use the desired superclass method(s) via the scope operator `::`

12 **Multiple Inheritance (3)**

- Superclass contains the common features (intersection set) of all subclasses (generalization)
- Problem with multiple inheritance: Common base class is contained multiple times
- Example: *(diagram: the common base class appears twice in the subclass)*

12 **Multiple Inheritance (4)**

- Implementation with a *virtual* base class
- Example: *(diagram: the virtual base class is shared by all subclasses)*
- Syntax for *virtual* inheritance:

```
class subclass : virtual public superclass {
    Declaration of member variables and functions
};
```

12 Multiple inheritance (5)

- Example:

```cpp
class Boat {
protected:
    char* name;
public:
    Boat( char* n ) : name( n ) { }
};
class SailingBoat : virtual public Boat {
protected:
    Sail mySail;
public:
    SailingBoat( char* n ) : Boat( n ) { }
};
class MotorBoat : virtual public Boat {
protected:
    Motor myMotor;
public:
    MotorBoat( char* n ) : Boat( n ) { }
};
class SailingBoatWithMotor : public SailingBoat, public MotorBoat {
public:
    SailingBoatWithMotor( char* n )
        : Boat( n ), SailingBoat( n ), MotorBoat( n ) { }
};
```

D.6 Exceptions

- Exception syntax
- How exceptions work
- Example: Resource allocation
- Differences to Java
- Exceptions in ANSI C++
- Solution for the `new` problem

1 Exception Syntax

- 3 reserved words:
  - `try` tries to
execute the following block
  - `throw` creates an exception and starts exception handling
  - `catch` catches an exception from the `try` block and processes it in the following block
- Example:

```cpp
try {
    computation
    if error: throw exception_class(...);
}
catch ( exception_class variable ) {
    exception processing
}
```

2 How Exceptions Work

- Linear processing of the `catch` list
- Grouping of error types through inheritance
  - Catching a base class also catches all its subclasses
- Exceptions are propagated upwards until a `catch` clause is found whose type matches the type of the exception
- All destructors are called when leaving a block because of an exception
- There is no suitable `catch` clause ➔ Program is aborted
- `catch(...)` catches all exceptions

3 Differences to Java

- No **finally** block
- Similar functionality can be achieved through:

```
catch( ... ) {
    // clean up
    throw;  // re-throw caught exception
}
```

  ◆ Attention: Not executed if there are other catch clauses that match or when no exception was thrown
- Exceptions do *not* belong to a method's type
  - Can be thrown anywhere
  - Compiler cannot check if all thrown exceptions are caught at some point

4 Exceptions in ANSI C++

- Functions and methods *may* specify an exception list
- Reserved word `throw` in the function prototype:

```
return_type method_name ( parameter_list ) throw ( exception_list ) {
    Body of method
}
```

- Similar to `throws` in Java
- Exception list is a guarantee to the caller
- `std::unexpected()` is called if an exception that is not in the list leaves the function
- Functions without an exception list may still throw any exception

D.7 Odds and Ends

1.
This pointer

- `this` points to the called object itself
- Implicit parameter in every method call
- Looks like: `class_name * const this`
- If the method is `const`: `const class_name * const this`
- Example:

```cpp
class Person {
    char* name;
public:
    void print() {
        cout << this->name;  // = name
    }
    void insertInto( List* l ) {
        l->insert( this );
    }
    void prettyPrint() {
        cout << "Data: ";
        this->print();  // = print()
    }
};
```

2 Static members

- Normally every object contains its own set of variables
- Except for: member variables declared as `static`
- `static` members exist once per class, no matter how many objects of that class were created
- Makes it possible to use them as a shared variable for all instances of a class
  - Class variable
- Access rights can be specified as with instance variables

2 Static members (2)

- Global initialization outside the class (access rights don't matter for initialization)
- Example:

```
class BankAccount {
    static float interestRate;
    ...
};
...
float BankAccount::interestRate = 0.5;
```

2 Static members (3)

- Methods that only access other *static* members may be declared *static* themselves
- *static* methods can be called without an object
- *No* access to dynamic (per instance) members of the class
- *No* `this` pointer

D.8 Operators

- Operator overloading
- Global operators
- Operators as members
- Binary operators
- Unary operators
- Allocation operators

1 Operator overloading

- In C++ (in contrast to Java) operators can be overloaded to work with new types
- Looks like function or method overloading
- New reserved word `operator`

```cpp
return_type operator operator ( parameter_list ) { ... }
```

- Operators that can be overloaded

```text
+    -    *    /    %    ^    &    ~    !
=    <    >    +=   -=   *=   /=   %=
^=   &=   |=   |    <<   >>   <<=  >>=
==   !=   <=   >=   &&   ||   ++   --
,    ->*  ->   ()   []   new  delete
```

- Operators that cannot be overloaded: `.` `.*` `::` `?:`
- Operator precedence and associativity cannot be changed

2 Global operators

- Work like (global) functions
- Can be friends of classes
- Always have the object itself as a parameter
- Example:

```cpp
class Person {
    char* name;
    friend ostream& operator << ( ostream&, Person& );
};
ostream& operator << ( ostream& os, Person& p ) {
    os << p.name;
    return os;
}

Person p( "Peter" );
cout << p;                // call as operator
operator << ( cout, p );  // call as function
```

3 Operators as members

- Operator is treated like a method of the class
  - Access to all members, there is a `this` pointer
- One parameter less than the same global operator (object passed via `this`)
- Example:

```cpp
class Complex {
    double real, imag;
public:
    Complex( double r=0, double i=0 ) : real( r ), imag( i ) { }
    Complex operator + ( const Complex& c ) const;
};
Complex Complex::operator + ( const Complex& c ) const {
    Complex result( real+c.real, imag+c.imag );
    return result;
}
...
Complex c1, c2, c3;
c3 = c1 + c2;               // normal call
c3 = c1.operator + ( c2 );  // what the compiler generates
```

4 Binary operators

- As a global operator: Two parameters
- As a member: One parameter
- Examples (only member operators):
  - Assignment operator

```cpp
class& class::operator = ( class& )
```

  - Index operator

```cpp
element_type& class::operator [] ( index_type )
```

  Index type is usually `int`
  - Arithmetic operators and their combination with the assignment operator

5 Unary operators

- As a global operator: One parameter
- As a member: No parameters
  - Except for: Postfix operators
- Examples (only member operators):
  - Prefix increment operator

```cpp
class& class::operator ++ ( )
```

  - Postfix increment operator

```cpp
class& class::operator ++ ( int )
```

  `int` is just a dummy parameter to distinguish it from the prefix version
  - Cast operator

```cpp
class::operator target_type ( )
```

  The target type of the cast is operator name and return type at once

6 Allocation operators

- Custom memory allocation strategies
- Global operators for all classes
- Operators for allocation on a per-class basis
  - Override the global operators
  - E.g. a memory pool for short-lived objects
- Operator syntax
  - Allocation operator

```cpp
void* operator new ( size_t )
```

  - Deallocation operator

```cpp
void operator delete ( void* )
```

- For arrays: operators `new[]` and `delete[]`
CS 271 Computer Architecture & Assembly Language

Lecture 9
The System Stack
More MASM Procedures
Intro to Parameter Passing
2/1/22, Tuesday

Odds and Ends

• Label names
  • Do not name them L1, L2, ... (our textbook gives bad examples!)
  • Taking points off starting from programming assignment 4
  • Use meaningful names instead
• Indentation
  • Align in-line comments as well
• Midterm: 2/8 (Next Tuesday) during lecture time, same classroom
  • Review on Thursday

Lecture Topics:
• The System Stack
• More about MASM Procedures
  • Documenting Procedures
  • Register Management for Procedures
• Introduction to Parameter Passing

The System Stack

Stack
• Data structure (ADT)
• Last-in, first-out (LIFO or FILO)
• All operations reference the "top" of the stack
• Special names for operations
  • push, pop
• Applications:
  • Activation stack
  • Iterative implementation of recursive algorithms
  • Base conversion
  • Expression evaluation
  • Many others

The System Stack (Runtime Stack)
• The operating system maintains a stack
  • Implemented in memory
  • LIFO structure
• Managed by the CPU, using two registers
  • SS: address of the stack segment
  • ESP: stack pointer (always points to the "top" of the stack)
  • i.e., ESP contains the address of the top of the stack

PUSH and POP Instructions (32-bit)
• **PUSH** syntax
  • PUSH `r/m32`
  • PUSH `immed`
• **POP** syntax
  • POP `r/m32`

PUSH Operation
• A push operation
  • **Decrements** the stack pointer by 4
  • Copies a value into the location pointed to by the stack pointer
• Actual decrement depends on the size of the operand
• Note: it's best to use 32-bit (DWORD, 4-byte) operands

Example PUSH
• Suppose that ECX contains 317 and ESP contains 0200h. In this case, [ESP] is 25.
• The next instruction is
  • push ecx
• Execute push ecx
  • ESP: 01FCh
  • [ESP]: 317
• Note: ESP is decremented, then 317 is stored in the stack
• Note: [ESP] means "content" of memory at the address in ESP

Stack Segment in Memory

<table>
<thead>
<tr><th>Address</th><th>Contents</th></tr>
</thead>
<tbody>
<tr><td>...</td><td>...</td></tr>
<tr><td>01ECh</td><td>?</td></tr>
<tr><td>01F0h</td><td>?</td></tr>
<tr><td>01F4h</td><td>?</td></tr>
<tr><td>01F8h</td><td>?</td></tr>
<tr><td>01FCh</td><td>317</td></tr>
<tr><td>0200h</td><td>25</td></tr>
</tbody>
</table>

POP Operation
• A pop operation
  • Copies the value at [ESP] into a register or variable
  • **Increments** the stack pointer by 4
• Actual increment depends on the size of the operand
• Note: it's best to use 32-bit (DWORD, 4-byte) operands

Example POP
• Suppose that ESP contains 01FCh. In this case, [ESP] is 317
• The next instruction is `pop eax`
• Execute `pop eax`
  • eax now contains 317
  • ESP: 0200h
  • [ESP]: 25
• Note: 317 is copied to EAX, then ESP is incremented. Memory contents unchanged.

Stack Segment in Memory

<table>
<thead>
<tr><th>Address</th><th>Contents</th></tr>
</thead>
<tbody>
<tr><td>...</td><td>...</td></tr>
<tr><td>01ECh</td><td>?</td></tr>
<tr><td>01F0h</td><td>?</td></tr>
<tr><td>01F4h</td><td>?</td></tr>
<tr><td>01F8h</td><td>?</td></tr>
<tr><td>01FCh</td><td>317</td></tr>
<tr><td>0200h</td><td>25</td></tr>
</tbody>
</table>

Using PUSH and POP
• Save and restore registers when they contain important values.
• POP operands must occur in the opposite order of the PUSH operands

```assembly
push ecx        ; save registers
push ebx
mov  ecx,100h
mov  ebx,0
; etc.
pop  ebx        ; restore registers
pop  ecx
```

Example: Nested Loop
• Push the outer loop counter before entering the inner loop.
• Pop the outer loop counter when the inner loop terminates.
```asm
     mov  ecx,100   ; set outer loop count
L1:                 ; begin the outer loop
     push ecx       ; save outer loop count
     mov  ecx,20    ; set inner loop count
L2:                 ; begin the inner loop
     ;
     ;
     loop L2        ; repeat the inner loop
     pop  ecx       ; restore outer loop count
     loop L1        ; repeat the outer loop
```

When **not** to push
• Be sure that **PUSH** does not hide a return address
• Be sure that **POP** does not lose a return address and/or replace needed values.

CALL and RET Instructions
• The **CALL** instruction calls a procedure
  • Pushes the offset of the next instruction onto the stack
  • Copies the address of the called procedure into EIP
• The **RET** instruction returns from a procedure
  • Pops the top of the stack into EIP

Procedure call/return Example (p1)

```
main PROC
  ...
  mov  eax, 175
  mov  ebx, 37
  mov  edx, 25
  call Sum3
  mov  result, eax
  ...
main ENDP

Sum3 PROC
  add  eax, ebx
  add  eax, edx
  ret
Sum3 ENDP
```

EAX ?  EBX ?  EDX ?  ESP 0200h  EIP 1202h (address of next instruction)

<table>
<thead>
<tr><th>Address</th><th>Contents</th></tr>
</thead>
<tbody>
<tr><td>...etc</td><td></td></tr>
<tr><td>01F8h</td><td>?</td></tr>
<tr><td>01FCh</td><td>?</td></tr>
<tr><td>0200h</td><td>456</td></tr>
</tbody>
</table>

Procedure call/return Example (p2)

```
main PROC
  ...
  mov  eax,175
  mov  ebx,37
  mov  edx,25
  call Sum3
  mov  result,eax
  ...
main ENDP

Sum3 PROC
  add  eax, ebx
  add  eax, edx
  ret
Sum3 ENDP
```

<table>
<thead>
<tr><th>Address</th><th>Contents</th></tr>
</thead>
<tbody>
<tr><td>...etc</td><td></td></tr>
<tr><td>01F8h</td><td>?</td></tr>
<tr><td>01FCh</td><td>?</td></tr>
<tr><td>0200h</td><td>456</td></tr>
</tbody>
</table>

Procedure call/return Example (p3)

```
main PROC
  ...
  mov  eax, 175
  mov  ebx, 37
  mov  edx, 25
  call Sum3
  mov  result, eax
  ...
main ENDP

Sum3 PROC
  add  eax, ebx
  add  eax, edx
  ret
Sum3 ENDP
```

<table>
<thead>
<tr><th>Address</th><th>Contents</th></tr>
</thead>
<tbody>
<tr><td>...etc</td><td></td></tr>
<tr><td>01F8h</td><td>?</td></tr>
<tr><td>01FCh</td><td>1216h (return address)</td></tr>
<tr><td>0200h</td><td>456</td></tr>
</tbody>
</table>

Procedure call/return Example (p4)

```
main PROC
  ...
  mov  eax,175
  mov  ebx,37
  mov  edx,25
  call Sum3
  mov  result,eax
  ...
main ENDP

Sum3 PROC
  add  eax, ebx
  add  eax, edx
  ret
Sum3 ENDP
```

EAX 237  EBX 37  EDX 25  ESP 01FCh  EIP 2C7Ah (address of ret instruction)

Stack Segment in Memory

<table>
<thead>
<tr><th>Address</th><th>Contents</th></tr>
</thead>
<tbody>
<tr><td>...etc</td><td></td></tr>
<tr><td>01F8h</td><td>?</td></tr>
<tr><td>01FCh</td><td>1216h</td></tr>
<tr><td>0200h</td><td>456</td></tr>
</tbody>
</table>

Procedure call/return Example (p5)

```
main PROC
  ...
  mov  eax,175
  mov  ebx,37
  mov  edx,25
  call Sum3
  mov  result,eax
  ...
main ENDP

Sum3 PROC
  add  eax, ebx
  add  eax, edx
  ret
Sum3 ENDP
```

<table>
<thead>
<tr><th>Address</th><th>Contents</th></tr>
</thead>
<tbody>
<tr><td>...etc</td><td></td></tr>
<tr><td>01F8h</td><td>?</td></tr>
<tr><td>01FCh</td><td>1216h</td></tr>
<tr><td>0200h</td><td>456</td></tr>
</tbody>
</table>

EAX 237  EBX 37  EDX 25  ESP 0200h  EIP 1216h (address of mov instruction)

The System Stack
• There is much more to learn about the system stack
  • Parameter passing
  • Activation records
  • Etc.
• Be sure that you understand:
  • How the stack works
  • Push decrements, Pop increments
  • The importance of keeping the stack aligned

More about MASM Procedures
Documenting Procedures
Register Management for Procedures

In MASM Procedures ... Beware!
• Avoid duplicate labels
  • Labels inside a procedure are only visible within that procedure
  • Don't use the same label names in different procedures
• **Preconditions**: Be sure to set required registers before calling library procedures.
• Be aware of registers changed in procedures.
Local and Global Labels
- Procedures should be invoked by executing a `call` statement
  - Bad style (and a **very bad idea**) to jump into a procedure from outside the procedure
- Procedures should terminate by executing a `ret` statement
  - Bad style (and a **very bad idea**) to jump to a label outside a procedure
- Assembly language permits implementing some **very bad ideas** and **very bad styles**
  - However, good programmers don't use them

Nested Procedure calls
• Any procedure might call another procedure
• Return addresses are "stacked" (LIFO)
• **RET** instructions must follow the order on the stack
  • This is one very good reason not to jump into or out of a procedure!
• It is essential that the stack be properly aligned when the **RET** instruction is executed!!

Documenting Procedures
• Documentation for each procedure:
  • Description: A description of the task accomplished by the procedure
  • Receives: A list of input parameters; state usage and requirements
  • Returns: A description of the values returned by the procedure
  • Preconditions: List of requirements that must be satisfied before the procedure is called
  • Registers changed: List of registers that may have different values than they had when the procedure was called
• If a procedure is called without satisfying the preconditions, the procedure's creator makes no promise that it will work.

```asm
; Procedure to calculate the summation
; of integers from a to b.
; receives: a and b are global variables
; returns: global sum = a+(a+1)+ ... +b
; preconditions: a <= b
; registers changed: eax, ebx, ecx
calculate PROC
  ...
  ret
calculate ENDP
```

Saving Registers
• If a procedure changes any registers, the calling procedure might lose important data
• Two ways to save data:
  • By the calling procedure
    • Registers may be saved before the call, and restored after the return
  • By the called procedure
    • Registers may be saved at the beginning of the procedure, and restored before the return

Saving / Restoring Registers
• Methods:
1.
Move register contents to named memory locations, then restore after the procedure returns.
2. Use `pushad` and `popad`
   • Option 1: calling procedure pushes before call, pops after return
   • Option 2: called procedure pushes at beginning, and pops before the return
3. Save selected registers on the system stack
   • Option 1: calling procedure pushes before call, pops after return
   • Option 2: called procedure pushes at beginning, and pops before the return

Method 1: Save Register Contents in Memory
- Example (in main ... aReg, bReg declared in .data)

```assembly
mov aReg, eax        ;save registers
mov bReg, ebx
mov eax, count       ;set parameters
mov ebx, OFFSET val
call someProc
mov eax, aReg        ;restore registers
mov ebx, bReg
```

Method 2: Save all Registers on the System Stack
- **pushad** pushes the 32-bit general-purpose registers onto the stack
  - Order: EAX, ECX, EDX, EBX, ESP, EBP, ESI, EDI
- **popad** pops the same registers off the stack in reverse order
- Note: it's best to use 32-bit (DWORD) operands

Method 2: Save all Registers on the System Stack
• Example (Option 1: in the calling procedure):

```assembly
pushad               ;save registers
call someProc
popad                ;restore registers
...
```

Method 2: Save all Registers on the System Stack
• Example (Option 2: in the called procedure):

```assembly
calcSum PROC
  pushad             ;save registers
  ...                ;procedure body
  popad              ;restore registers
  ret
calcSum ENDP
```

Method 3: Save Selected Registers on the System Stack
• Example:
  • **push eax**
    • Pushes the contents of eax onto the system stack
  • **pop eax**
    • Pops the top of the system stack into eax

Methods 2 and 3: Save Registers on the System Stack
• **Warnings:**
  • Be sure that values don't get lost
  • Be sure that the system stack is properly aligned
  • The return address must be on the top of the stack when the `ret` statement is executed!!
• Experiment with MASM
  • Try several ways to do some simple tasks
  • Use DEBUG to see what happens

Introduction to Parameter Passing

Parameters
• Definitions:
  • **Argument** *(actual parameter)* is a value or reference **passed to** a procedure
  • **Parameter** *(formal parameter)* is a value or reference **received by** a procedure
  • **Return value** is a value determined by the procedure, and **communicated back** to the calling procedure.
• No theoretical limit, but **practicality** and readability rule.

Parameter Classifications
• An **input parameter** is data passed by a calling program to a procedure.
  • The called procedure is not expected to modify the corresponding argument variable, and even if it does, the modification is confined to the procedure itself.
• An **output parameter** is created by passing the **address** of an argument variable when a procedure is called.
  • The "address of" a variable is the same thing as a "**pointer to**" or a "**reference to**" the variable. In MASM, we use **OFFSET**.
  • The procedure does not use any existing data from the variable, but it fills in new contents before it returns.
• An **input-output parameter** is the **address** of an argument variable which contains input that will be both **used** and **modified** by the procedure.
  • The content is modified at the memory address passed by the calling procedure.

Passing Values/Addresses to/from Procedures
• Methods:
  1. Use shared memory (global variables)
  2. Pass parameters in registers
  3. Pass parameters on the system stack

1. Use Shared Memory (Global Variables)
• Set up memory contents before call and/or before return
• Generally ... it's a **bad idea** to use global variables
  • Procedure might change memory contents needed by other procedures (unwanted side-effect)
• **For Program #1 - #4 ... we use globals**
  • Later we will pass parameters on the system stack.

2. Pass Parameters in Registers
• Set up registers before call and/or before return
• Generally ...
it's not a good idea to pass parameters in registers
  • Procedure might change register contents
• However
  • Some Irvine library procedures require values in registers (e.g., "Receives" and "Preconditions" for ReadString)
  • Some Irvine library procedures return values in registers (e.g., "Returns" for ReadInt)

3. Pass Parameters on the System Stack
• Push parameters onto the system stack before the call
• Two ways to use the parameters:
  • Procedure moves parameters from the stack into registers/variables
  • Set up a "stack frame", and reference parameters directly on the stack
• Remove parameters and return to the calling program
• Much more later on this method
  • This is the method used by high-level languages

Register vs. Stack Parameters
- Register parameters require dedicating a register to each parameter.
- Stack parameters make better use of system resources.
- Example:
  - Two ways of calling the Summation procedure.

**Method 1** (parameters in registers):

```
pushad           ;save registers
mov ebx,low
mov ecx,high
call Summation
mov sum, eax
popad            ;restore registers
```

**Method 2** (parameters on stack):

```
push low
push high
push OFFSET sum
call Summation
```

Register vs. Stack Parameters
• Of course, methods of calling a procedure and passing parameters depend on the procedure implementation ... and vice-versa.
• Some "setup" is involved (in the calling procedure)
• Some "bookkeeping" is involved (in the called procedure)
  • Parameters in registers require register management
  • Parameters on the system stack require stack management

Saving Registers
• Remember!
  • There's only one set of registers.
  • If a called procedure changes any registers, the calling procedure might lose important data
• In all cases, when a procedure is called:
  • Be aware of preconditions
    • What conditions must be true before the procedure can perform its task?
  • Be aware of what registers are changed (document!)
  • Save and restore registers if necessary
Package ‘GEOquery’ April 5, 2014

Type Package
Title Get data from NCBI Gene Expression Omnibus (GEO)
Version 2.28.0
Date 2013-04-07
Author Sean Davis <sdavis2@mail.nih.gov>
Maintainer Sean Davis <sdavis2@mail.nih.gov>
Depends methods, Biobase
Imports XML, RCurl
Suggests limma, RUnit
URL http://watson.nci.nih.gov/~sdavis
biocViews Microarray, DataImport, OneChannel, TwoChannel, SAGE
Description The NCBI Gene Expression Omnibus (GEO) is a public repository of microarray data. Given the rich and varied nature of this resource, it is only natural to want to apply BioConductor tools to these data. GEOquery is the bridge between GEO and BioConductor.
License GPL-2

R topics documented: Converting, GDS-class, Generic functions, GEOData-class, GEODataTable-class, getGEO, getGEOfile, getGEOSuppFiles, getGSEDataTables

Converting Convert a GDS data structure to a BioConductor data structure

Description
Functions to take a GDS data structure from getGEO and coerce it to limma MALists or ExpressionSets.

Usage
GDS2MA(GDS, do.log2=FALSE, GPL=NULL, AnnotGPL=TRUE)
GDS2eSet(GDS, do.log2=FALSE, GPL=NULL, AnnotGPL=TRUE)

Arguments
GDS The GDS data structure returned by getGEO
do.log2 Boolean; should the data in the GDS be log2-transformed before inserting into the new data structure
GPL Either a GPL data structure (from a call to getGEO) or NULL. If NULL, this will cause a call to getGEO to produce a GPL.
The gene information from the GPL is then used to construct the genes slot of the resulting limma MAList object or the featureData slot of the ExpressionSet instance.
AnnotGPL In general, the annotation GPL files will be available for GDS records, so the default is to use these files over the user-submitted GPL files

Details
This function just rearranges one data structure into another. For GDS, it also deals appropriately with making the "targets" list item for the limma data structure and the phenoData slot of ExpressionSets.

Value
GDS2MA A limma MAList
GDS2eSet An ExpressionSet object

Author(s)
Sean Davis

References
See the limma and ExpressionSet help in the appropriate packages

Examples
```r
## Not run: gds505 <- getGEO("GDS505")
## Not run: MA <- GDS2MA(gds505)
## Not run: eset <- GDS2eSet(gds505)
```

GDS-class Class "GDS"

Description
A class describing a GEO GDS entity

Objects from the Class
Objects can be created by calls of the form `new("GDS", ...)`

Slots
- `gpl`: Object of class "GPL"
- `dataTable`: Object of class "GEODataTable" containing the data table for the GDS
- `header`: Object of class "list" containing the metadata for the GDS; can be accessed via the `Meta` accessor

Extends
Class "GEOData", directly.

Methods
No methods defined with class "GDS" in the signature, but methods applying to GEOData are also applicable here.

Author(s)
Sean Davis

See Also
GEOData-class

Generic functions Generic functions for GEOquery

Description
The main documentation is in the Class documentation

Author(s)
Sean Davis

See Also
GEOData-class

GEOData-class Class "GEOData"

Description
A virtual class for holding GEO samples, platforms, and datasets

Objects from the Class
Objects can be created by calls of the form new("GEOData", ...).
Slots
header: Object of class "list" containing metadata

Methods
Accession signature(object = "GEOData"): returns the GEO accession for the current object
Columns signature(object = "GEOData"): returns the column descriptions for the current object
Meta signature(object = "GEOData"): returns the metadata for the current object
Table signature(object = "GEOData"): returns the "Table" for the current object
dataTable signature(object = "GEOData"): returns the dataTable (column info and data) for the current object
show signature(object = "GEOData"): a convenience method for showing a GEO object

Author(s)
Sean Davis

See Also
GDS-class, GPL-class, GSM-class, GEODataTable-class.

GEODataTable-class Class "GEODataTable"

Description
Contains the column descriptions and data for the datatable part of a GEO object

Objects from the Class
Objects can be created by calls of the form new("GEODataTable", ...).

Slots
columns: Object of class "data.frame" containing information about the columns in the datatable
table: Object of class "data.frame" containing the actual data

Methods
Columns signature(object = "GEODataTable"): get the column portion of the GEODataTable
Table signature(object = "GEODataTable"): get the table portion of the GEODataTable
show signature(object = "GEODataTable"): convenience show method

Author(s)
Sean Davis

getGEO Get a GEO object from NCBI or file

Description
This function is the main user-level function in the GEOquery package. It directs the download (if no filename is specified) and parsing of a GEO SOFT format file into an R data structure specifically designed to make access to each of the important parts of the GEO SOFT format easily accessible.

Usage
getGEO(GEO = NULL, filename = NULL, destdir = tempdir(), GSElimits=NULL, GSEMatrix=TRUE, AnnotGPL=FALSE)

Arguments
GEO A character string representing a GEO object for download and parsing.
(e.g., 'GDS505', 'GSE2', 'GSM2', 'GPL96')
filename The filename of a previously downloaded GEO SOFT format file or its gzipped representation (in which case the filename must end in .gz). Either one of GEO or filename may be specified, not both. GEO series matrix files are also handled. Note that since a single file is being parsed, the return value is not a list of esets, but a single eset when GSE matrix files are parsed.
destdir The destination directory for any downloads. Defaults to the architecture-dependent tempdir. You may want to specify a different directory if you want to save the file for later use. Doing so is a good idea if you have a slow connection, as some of the GEO files are HUGE!
GSElimits This argument can be used to load only a contiguous subset of the GSMs from a GSE. It should be specified as a vector of length 2 specifying the start and end (inclusive) GSMs to load. This could be useful for splitting up large GSEs into more manageable parts, for example.
GSEMatrix A boolean telling GEOquery whether or not to use GSE Series Matrix files from GEO. The parsing of these files can be many orders of magnitude faster than parsing the GSE SOFT format files. Defaults to TRUE, meaning that the SOFT format parsing will not occur; set to FALSE if you for some reason need other columns from the GSE records.
AnnotGPL A boolean defaulting to FALSE as to whether or not to use the Annotation GPL information. These files are nice to use because they contain up-to-date information remapped from Entrez Gene on a regular basis. However, they do not exist for all GPLs; in general, they are only available for GPLs referenced by a GDS.

Details
getGEO functions to download and parse information available from NCBI GEO (http://www.ncbi.nlm.nih.gov/geo). Here are some details about what is available from GEO. All entity types are handled by getGEO and essentially any information in the GEO SOFT format is reflected in the resulting data structure.
From the GEO website: The Gene Expression Omnibus (GEO) from NCBI serves as a public repository for a wide range of high-throughput experimental data. These data include single and dual channel microarray-based experiments measuring mRNA, genomic DNA, and protein abundance, as well as non-array techniques such as serial analysis of gene expression (SAGE), and mass spectrometry proteomic data. At the most basic level of organization of GEO, there are three entity types that may be supplied by users: Platforms, Samples, and Series. Additionally, there is a curated entity called a GEO dataset.

A Platform record describes the list of elements on the array (e.g., cDNAs, oligonucleotide probe-sets, ORFs, antibodies) or the list of elements that may be detected and quantified in that experiment (e.g., SAGE tags, peptides). Each Platform record is assigned a unique and stable GEO accession number (GPLxxx). A Platform may reference many Samples that have been submitted by multiple submitters.

A Sample record describes the conditions under which an individual Sample was handled, the manipulations it underwent, and the abundance measurement of each element derived from it. Each Sample record is assigned a unique and stable GEO accession number (GSMxxx). A Sample entity must reference only one Platform and may be included in multiple Series.

A Series record defines a set of related Samples considered to be part of a group, how the Samples are related, and if and how they are ordered. A Series provides a focal point and description of the experiment as a whole. Series records may also contain tables describing extracted data, summary conclusions, or analyses. Each Series record is assigned a unique and stable GEO accession number (GSExxx).

GEO DataSets (GDSxxx) are curated sets of GEO Sample data. A GDS record represents a collection of biologically and statistically comparable GEO Samples and forms the basis of GEO’s suite of data display and analysis tools.
Samples within a GDS refer to the same Platform, that is, they share a common set of probe elements. Value measurements for each Sample within a GDS are assumed to be calculated in an equivalent manner, that is, considerations such as background processing and normalization are consistent across the dataset. Information reflecting experimental design is provided through GDS subsets.

Value
An object of the appropriate class (GDS, GPL, GSM, or GSE) is returned. If the GSEMatrix option is used, then a list of ExpressionSet objects is returned, one for each SeriesMatrix file associated with the GSE accession. If the filename argument is used in combination with a GSEMatrix file, then the return value is a single ExpressionSet.

Warning
Some of the files that are downloaded, particularly those associated with GSE entries from GEO, are absolutely ENORMOUS and parsing them can take quite some time and memory. So, particularly when working with large GSE entries, expect that you may need a good chunk of memory and that coffee may be involved when parsing....

Author(s)
Sean Davis

See Also
getGEOfile

Examples
```r
# gds <- getGEO("GDS10")
# gds
```

getGEOfile Download a SOFT format file from GEO to the local machine

Description
This function simply downloads a SOFT format file associated with the GEO accession number given.

Usage
getGEOfile(GEO, destdir = tempdir(), AnnotGPL = FALSE, amount = c("full", "brief", "quick", "data"))

Arguments
GEO Character string, the GEO accession for download (e.g., GDS84, GPL96, GSE2553, or GSM10)
destdir Directory in which to store the resulting downloaded file. Defaults to tempdir()
AnnotGPL A boolean defaulting to FALSE as to whether or not to use the Annotation GPL information. These files are nice to use because they contain up-to-date information remapped from Entrez Gene on a regular basis. However, they do not exist for all GPLs; in general, they are only available for GPLs referenced by a GDS
amount Amount of information to pull from GEO. Only applies to GSE, GPL, or GSM. See details...

Details
This function downloads GEO SOFT files based on accession number. It does not do any parsing. The first two arguments should be fairly self-explanatory, but the last is based on the input to the acc.cgi url at the GEO website. In the default "full" mode, the entire SOFT format file is downloaded. Both "brief" and "quick" offer shortened versions of the files, good for "peeking" at the file before a big download on a slow connection. Finally, "data" downloads only the data table part of the SOFT file and is good for downloading a simple EXCEL-like file for use with other programs (a convenience).

Value
Invisibly returns the full path of the downloaded file.

Author(s)
Sean Davis

getGEOSuppFiles Get supplemental files from GEO

Description
NCBI GEO allows supplemental files to be attached to GEO Series (GSE), GEO platforms (GPL), and GEO samples (GSM). This function "knows" how to get these files based on the GEO accession. No parsing of the downloaded files is attempted, since the file format is not generally knowable by the computer.

Usage
```r
getGEOSuppFiles(GEO, makeDirectory = TRUE, baseDir = getwd())
```

Arguments
GEO A GEO accession number such as GPL1073 or GSM1137
makeDirectory Should a subdirectory for the downloaded files be created? Default is TRUE. If FALSE, the files will be downloaded directly into the baseDir.
baseDir The base directory for the downloads. Default is the current working directory.

Details
Again, just a note that the files are simply downloaded.

Value
A data frame is returned invisibly with rownames representing the full path of the resulting downloaded files and the records in the data.frame the output of file.info for each downloaded file.
Author(s)
Sean Davis <sdavis2@mail.nih.gov>

Examples
```r
# a <- getGEOSuppFiles('GSM1137')
# a
```

getGSEDataTables Get GSE data tables from GEO into R data structures

Description
In some cases, instead of individual sample records (GSM) containing information regarding sample phenotypes, the GEO Series contains that information in an attached data table. An example is given by GSE3494 where there are two data tables with important information contained within them. Using getGEO with the standard parameters downloads the GSEMatrix file which, unfortunately, does not contain the information in the data tables. This function simply downloads the "header" information from the GSE record and parses out the data tables into R data.frames.

Usage
```r
getGSEDataTables(GSE)
```

Arguments
GSE The GSE identifier, such as "GSE3494".

Value
A list of data.frames.

Author(s)
Sean Davis <sdavis2@mail.nih.gov>

See Also
getGEO

Examples
```r
dfl = getGSEDataTables("GSE3494")
lapply(dfl, head)
```

GPL-class Class "GPL"

Description
Contains a full GEO Platform entity

Objects from the Class
Objects can be created by calls of the form `new("GPL", ...)`.

Slots
dataTable: Object of class "GEODataTable"
header: Object of class "list" containing metadata associated with the GPL

Extends
Class "GEOData", directly.

Methods
No methods defined with class "GPL" in the signature, but methods applicable to GEOData are also applicable here.

Author(s)
Sean Davis

See Also
GEOData-class

GSE-class Class "GSE"

Description
Contains a GEO Series entity

Objects from the Class
Objects can be created by calls of the form `new("GSE", ...)`.
Slots
header: Object of class "list" containing metadata for the series
gsms: Object of class "list" containing a list of items of class GSM associated with the series
gpls: Object of class "list" containing a list of items of class GPL associated with the series

Methods
GPLList signature(object = "GSE"): returns a list, each item of the list being a GPL object
GSMList signature(object = "GSE"): returns a list, each item of the list being a GSM object
Meta signature(object = "GSE"): returns a list, the metadata associated with the GSE

Author(s)
Sean Davis

See Also
GPL-class, GSM-class

GSM-class Class "GSM"

Description
A class containing a GEO Sample entity

Objects from the Class
Objects can be created by calls of the form new("GSM", ...).

Slots
dataTable: Object of class "GEODataTable"
header: Object of class "list" containing the metadata associated with the sample

Extends
Class "GEOData", directly.

Methods
No methods defined with class "GSM" in the signature, but methods that apply to GEOData also apply here.

Author(s)
Sean Davis

See Also
GEOData-class

gunzip Gunzip a file

Description
gunzip a file

Usage
```r
gunzip(filename, destname = gsub("[.]gz$", "", filename), overwrite = FALSE, remove = TRUE, BFR.SIZE = 1e+07)
```

Arguments
filename The filename to be unzipped
destname The destination file
overwrite Boolean indicating whether or not to overwrite a destfile of the same name
remove Boolean indicating whether or not to remove the original file after completion
BFR.SIZE The size of the read buffer....

Details
This function was stripped out of R.utils due to breaking some stuff on the bioconductor build machine.

Value
Invisibly, the number of bytes read.

Author(s)
Original author: Henrik Bengtsson

See Also
gzfile

parseGEO Parse GEO text

Description
Workhorse GEO parsers.

Usage
```r
parseGEO(fname, GSElimits)
parseGPL(fname)
parseGDS(fname)
parseGSE(fname, GSElimits)
parseGSM(fname)
```

Arguments
fname The filename of a SOFT format file.
If the filename ends in .gz, a gzfile() connection is used to read the file directly.
GSElimits Used to limit the number of GSMs parsed into the GSE object; useful for memory management for large GSEs.

Details
These are probably not useful to the end-user. Use getGEO to access these functions. parseGEO simply delegates to the appropriate specific parser. There should be no reason to use the parseGPL, parseGDS, parseGSE, or parseGSM functions directly.

Value
parseGEO returns an object of the associated type. For example, if it is passed the text from a GDS entry, a GDS object is returned.

Author(s)
Sean Davis

See Also
getGEO

Index

Topic IO: Converting; Generic functions; getGEO; getGEOfile; getGEOSuppFiles; getGSEDataTables; gunzip; parseGEO
Topic classes: GDS-class; GEOData-class; GEODataTable-class; GPL-class; GSE-class; GSM-class
Topic database: getGEOSuppFiles

Accession (Generic functions); Accession,GEOData-method (GEOData-class); Accession,GEODataTable-method (GEODataTable-class); Columns (Generic functions); Columns,GEOData-method (GEOData-class); Columns,GEODataTable-method (GEODataTable-class); Converting; dataTable (Generic functions); dataTable,GEOData-method (GEOData-class); dataTable,GEODataTable-method (GEODataTable-class); GDS-class; GDS2eSet (Converting); GDS2MA (Converting); Generic functions; GEOData-class; GEODataTable-class; getGEO; getGEOfile; getGEOSuppFiles; getGSEDataTables; GPL-class; GPLList (Generic functions); GPLList,GSE-method (GSE-class); GSE-class; GSM-class; GSMList (Generic functions); GSMList,GSE-method (GSE-class); gunzip; gzfile; Meta (Generic functions); Meta,GEOData-method (GEOData-class); Meta,GEODataTable-method (GEODataTable-class); Meta,GSE-method (GSE-class); parseGDS (parseGEO); parseGEO; parseGPL (parseGEO); parseGSE (parseGEO); parseGSM (parseGEO); show,GEOData-method (GEOData-class); show,GEODataTable-method (GEODataTable-class); Table (Generic functions); Table,GEOData-method (GEOData-class); Table,GEODataTable-method (GEODataTable-class)
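Putting the entity classes and their accessors together, a short R sketch (the accession number is only an example, and the calls require network access to NCBI GEO, so this is a hedged illustration rather than a reproducible session):

```r
library(GEOquery)

## Fetch a full GSE in SOFT format; GSEMatrix = FALSE keeps the
## GSM/GPL object structure instead of returning ExpressionSets.
gse <- getGEO("GSE2", GSEMatrix = FALSE)

Meta(gse)$title                  # series-level metadata (GSE header)
names(GSMList(gse))              # accessions of the samples in the series
head(Table(GSMList(gse)[[1]]))   # data table of the first sample (GSM)
Meta(GPLList(gse)[[1]])$title    # metadata of the platform (GPL)
```

All of the accessors used here (Meta, Table, GSMList, GPLList) are documented in the class pages above.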
Evaluating the Success of IS/IT Projects: How Are Companies Doing It?

João Varajão, University of Minho, Centro ALGORITMI, varajao@dsi.uminho.pt
João Álvaro Carvalho, University of Minho, Centro ALGORITMI, jac@dsi.uminho.pt

Recommended Citation: https://aisel.aisnet.org/irwitpm2018/8

This material is brought to you by the International Research Workshop on IT Project Management (IRWITPM) at AIS Electronic Library (AISeL) and has been accepted for inclusion in International Research Workshop on IT Project Management 2018.

ABSTRACT
The article aims to contribute to a better understanding of project management practices concerned with the evaluation of the success of Information Systems (IS)/Information Technology (IT) projects. It describes an exploratory study that inquired ten companies about their practices of project success evaluation. Results show that, regardless of company size, sector, or adopted project management methodology, the evaluation of project success is currently an informal and rudimentary process, focused mainly on the success of project management and not on the success of the projects’ deliverables. Given the importance and complexity of evaluating project success, companies should define and implement systematic processes for success management, aiming to improve project performance and expected benefits; this seems not to be happening in practice.
Keywords
Information Systems, IS, Information Technology, IT, Project, Success, Evaluation, Assessment, Exploratory Study.

INTRODUCTION
IS/IT projects are temporary endeavors that involve the creation of some unique outcome. This outcome can take very different forms, for example: an IT component (e.g., a software application; the migration of data to a new support; the upgrade of the enterprise’s IT infrastructure); or a change in an enterprise that aims at achieving some mid/long-term benefit resulting from the implantation of a new IT application.

In some cases, the quality of the outcome – and the success of the corresponding project – can be established just after it has been delivered (or at least an early account of that quality and that success can). In other cases, a full account of the outcome’s quality and the project’s success can only be established after time is given for the impact of the outcome on the enterprise to be felt. This delay between the moment when a project’s outcome is made available to an enterprise and the moment when its benefits are determinable brings some difficulties to the evaluation of the success of IS/IT projects.

One can ask how the success of an IS/IT project is defined and evaluated: is success measured only in terms of the quality of the project’s outcome (deliverables) deployed into the enterprise, or does it also take into consideration the impact of the outcome on the enterprise? Is success evaluated when the resources to produce the outcome are no longer needed, or only after a period long enough for its impact to be felt (or both)?

Project success is an issue within project management that demands further investigation. In times of Agile Development, where time, scope and costs are handled in a different manner (Drury-Grogan 2014; Moe et al. 2018) and the success of projects can be defined differently (Serrador et al.
2015), the evaluation of project success is of great interest and concerns researchers as well as practitioners. On the one hand, many studies focus on various aspects of project success, such as success factors (e.g., (Procaccino et al. 2002)) or success criteria (e.g., (Müller et al. 2007)). However, few studies (e.g., (Varajão 2016; Varajão 2018)) address the evaluation process. Proper attention to the success evaluation process is also lacking in studies that report problems in IS/IT projects, since most often these studies refer to the success of the projects (v.g. (Cooke-Davies 2002; StandishGroup 2015)) but do not describe how success is ascertained. Furthermore, little attention is paid to the practices of project managers concerned with the evaluation of projects’ success. Given the undeniable importance of the evaluation of projects’ success (Arviansyah et al. 2015), it is surprising that this topic is underrepresented in the IS/IT and project management literature.

This article addresses this topic. It describes an exploratory multi-case study about the practices of projects’ success evaluation. The central research question is: How is the success of projects being evaluated in practice by companies? Key respondents in ten companies where projects and project management are part of their routine were inquired regarding the following questions:

1. Is the success of projects evaluated?
2. Is the process for evaluating success formally defined?
3. Who is involved in the evaluation process?
4. When is the evaluation done?
5. What criteria are used to evaluate projects’ success?
6. What are the sources of information used?

This paper is organized as follows. Section 2 presents a brief literature review on the evaluation of success. Section 3 presents the research method. Section 4 presents the main results and the discussion. Finally, we conclude with the main contributions and highlights for further research.
THE EVALUATION OF PROJECTS' SUCCESS

The subject of success in the context of projects and project management is complex due to the diverse insights on success (which depend, for example, on the stakeholders), to the characteristics of the project (for example, project size), to circumstantial factors of the projects (for example, offshore outsourcing), and to many other aspects that need to be managed throughout the project lifecycle (for example, the interdependence of projects (Bathallath et al. 2016)) (Varajão 2016). There are also several perspectives on project success. For instance, Shenhar et al. (2007) identify five categories of project success: efficiency; impact on the team; impact on the customer; business success; and preparing for the future. Thomas et al. (2008) state that there are three important dimensions of IT project success: project management success; technical success; and business success. For Baccarini (1999), the two main distinct components of project success are: project management success; and the success of the deliverables of the project. These two components are distinguished as follows: - Project management success is related to the management process, and mainly to the successful realization of the project regarding scope, time and cost. These three dimensions indicate the degree of efficiency and effectiveness of project execution. Typically, project management success can be assessed at project closing. - The success of the deliverables is related to the impacts of the product(s)/service(s)/other results of the project on the customer's business (for instance, an increase in service performance), and most of the time can only be appraised at the post-project stage. The evaluation of both components of project success is of major importance. On the one hand, project management success enables ascertaining the competence of project management and the efficiency in the use of resources.
On the other hand, the success of the deliverables refers to the effectiveness of the project, since it is directly connected to the effects of the results of the project (for instance, business benefits). Several aspects of project success have been the focus of numerous studies over the last years, for instance, related to: causes of project failure (e.g., (Huysegoms et al. 2013; Tsirakidis et al. 2009)); concepts of project success (e.g., (Agarwal et al. 2006; Papke-Shields et al. 2010)); success factors (e.g., (Cooke-Davies 2002; Davis 2014)); success perspectives (e.g., (Davis 2014; Savolainen et al. 2012)); success achieved in projects (e.g., (Marnewick 2012; van Hillegersberg et al. 2016)); and the criteria used in success evaluation (e.g., (Atkinson 1999; Pankratz et al. 2014)). From the literature, it is evident that there is a significant concern with trying to understand what contributes to the success of a project. However, the evaluation process is not addressed in depth. Guides and standards of good practice, such as PMBOK (PMI 2017) and PRINCE2 (OGC 2009), are no exception to this, since they do not systematically address the processes required for success evaluation. While analyzing the various project management guides, it is possible to identify many references to project success, which is not surprising, since the main objective of the guides is indeed to improve success in project management. Nevertheless, that concern is not translated into systematic processes. In other words, even though the main concern is success, we cannot find processes directly related to success management in the guides (for instance, "define success criteria"), in the same way as happens with processes in areas such as communication, risk, and stakeholders, among others, denoting an area that needs more contributions (Varajão 2018).
Considering the sketchy coverage of projects' success evaluation both by theoretical frameworks and by project management guides, several interesting questions can be raised: what are the practices of projects' success evaluation in companies where projects and project management are well established in their operations and/or enterprise development initiatives? Do these practices reflect the mentioned limitations? Or do they somehow overcome them? The exploratory study described in this article is a first attempt to address those questions. The literature provides a starting point for the inquiry on practices of projects' success evaluation. According to Varajão et al. (Varajão 2016; Varajão et al. 2016), these practices should consider several dimensions, namely: when the evaluation process is defined; when evaluation activities are carried out; who gets involved in the evaluation; what evaluation criteria are used; and what information, from what sources, is used. These dimensions were considered in the study described in this article to formulate the questions that integrate the inquiry script. The script was complemented with questions that enable a demographic characterization of the inquired companies, thus providing the overall context for each case study.

**METHOD**

Due to the scarcity of studies covering the topic, it seemed sensible to start with an exploratory study before launching an extensive survey, whose nature and scope demand a deeper understanding of the issue. The results provide support for the decisions regarding the design of subsequent research on the topic. Having as the central research question "How is the success of projects being evaluated in practice by companies?", the study focused on project management practices concerned with the evaluation of projects' success. Ten companies participated in the study.
The main criterion for selection was that IS/IT projects and project management are part of the companies' operational routine, or that they get involved in IS/IT project-based initiatives as part of their continuous development. The identification, selection and study of companies was an iterative process, aiming at achieving a demographically diversified set of companies. In other words, invitations for participation were sent until there was rich diversity regarding size (covering SMEs and large companies), activity sector, form of organization, and the project management practices and approaches followed. Note that not all of the companies approached agreed to participate in the study. Two companies declined the invitation due to momentary difficulties in finding time for the interview. The research can be described as a multi-case study (Yin 2009) that allows a glimpse of current project management practices in a wide range of circumstances. It consisted of an interview in each company, following a script that, besides contextual information, covers the aforementioned key topics related to the evaluation of projects' success. Figure 1 shows the questions related to the evaluation of projects' success. Contextual information included: the company size (number of employees), the activity sector, the location of headquarters, the international presence (national or multinational) and the adopted project management methodology. The data gathering started before each interview, by analyzing the company's website. Since only one interview per company was planned, the selection of the participant was considered critical for the trustworthiness of the results. In this case, reliable answers to the defined questions could only be obtained from the person with top responsibility for the IS/IT projects or the IS/IT top manager. Therefore, when a company was first approached, effort was put into reaching the appropriate interlocutor for the study.
All the companies that answered our request assigned experienced top managers with thorough knowledge of project management practices in the organization (belonging, in several cases, to the top management team). Interviews were scheduled at the interviewees' convenience and followed the pre-defined script in an informal setting. The interviews started with a brief presentation of the study and of the goal of the research, followed by an open question about the company and the type of projects carried out; then, all the questions in the script were addressed. Interviews lasted one hour on average (ranging from 45 minutes to one and a half hours). Some interviews were recorded in audio and later transcribed for content analysis. In others, only notes were taken by the interviewer. Considering the nature of the study and its focus on the questions mentioned above, notes were considered sufficient to capture the basic aspects of projects' success evaluation. All the records were compiled, systematized and analyzed in order to obtain a global perspective of the evaluation of the success of IS/IT projects in the companies and to be able to draw conclusions.

RESULTS AND DISCUSSION

Participating companies range from micro, small and medium-sized and large companies (national reference) to large multinationals (international reference). The smallest company has 14 employees, while the largest has approximately 400,000 employees worldwide. The overwhelming majority of the companies are headquartered in Europe (nine companies), and only one is headquartered in the United States. Virtually all companies (nine) have business in several countries. Diversity is also present in the adopted methodologies for project management.
Although most companies reported using an internally defined methodology (meaning that they have a customized methodology), some mentioned that they incorporated state-of-the-art practices from well-established sources, including agile methodologies (more than half of the companies). This is understandable since they carry out software development projects. One company reported using more than one methodology, depending on the nature of the project and/or the preference of the customer. Table 1 summarizes the results obtained, allowing comparison among the ten companies studied. For confidentiality reasons, the companies studied were anonymized. All companies stated that they evaluate the success of their projects. However, several hints suggest that the evaluation might be approached in a partial/superficial way. A first hint is that the evaluation process is informal, that is, there is no formally defined process. Therefore, a great deal of the evaluation is left to the improvisation of project managers and other participants in the evaluation. Even in the two cases where the projects' success evaluation process was said to be formally defined at project initiation, the process seems to be minimal (taking into account, for instance, the defined evaluation criteria). A second hint is related to the fact that the success of a project is ascertained at its closing. This suggests that the success of the project is viewed only as the success of project management and not as the success of the project's deliverables. As for the participants in the evaluation process, as one would expect, in all cases the project manager is an important actor. In some cases, top management and the client were also mentioned as participants in the evaluation (three companies). It should be noted that in most cases the project manager is the only one involved in the process.
Success criteria should be agreed with the stakeholders before the start of the project, and repeatedly at configuration review points throughout the project (Turner 2004). With only the participation of the project manager, success can be compromised due to the conflicting perspectives that can arise. There are no references to any criterion that addresses the impact of the project deliverables. Yet an important part of project success is the accomplishment of benefits, which many times result from the use of the project deliverables. If the project deliverables are not evaluated considering their effects in the company, it is not possible to be certain whether project prioritization is being well performed and the resources well used. Ignoring this may cause the loss of opportunities for improving the deliverables, the repetition of management and technical mistakes from project to project, and failure to improve project prioritization practices, among others. Another important aspect to consider is the criteria used to evaluate a project's success. Success criteria are the measures used to evaluate project success (Cooke-Davies 2002). The following criteria were pointed out for the evaluation of success: Time (eight companies); Cost (six companies); Scope (four companies); Quality (three companies); Client satisfaction (eight companies).
<table>
<thead>
<tr> <th>Company</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> <th>10</th> </tr>
</thead>
<tbody>
<tr> <td>Employees</td> <td>400 000</td> <td>1 000</td> <td>8 000</td> <td>14</td> <td>550</td> <td>130</td> <td>375 000</td> <td>200</td> <td>250</td> <td>150</td> </tr>
<tr> <td>Sector</td> <td>Industry</td> <td>Services</td> <td>Services</td> <td>Services</td> <td>Services</td> <td>Services</td> <td>Industry</td> <td>R&amp;D Centre</td> <td>Industry</td> <td>Services</td> </tr>
<tr> <td>Headquarters</td> <td>Germany</td> <td>United Kingdom</td> <td>United States of America</td> <td>Portugal</td> <td>Portugal</td> <td>Portugal</td> <td>Germany</td> <td>Portugal</td> <td>Portugal</td> <td>Portugal</td> </tr>
<tr> <td>Multinational</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr>
<tr> <td>Project management methodology</td> <td>Internal</td> <td>Internal - based on several best practice guides</td> <td>Internal - based on several best practice guides</td> <td>Internal - Agile</td> <td>Internal - based on IPMA and Agile</td> <td>Internal - not formal, depends on the client decision - Agile</td> <td>Internal - based on PMBoK and Agile</td> <td>Internal - Agile</td> <td>Internal - Agile</td> <td>Internal - based on IPMA and Agile</td> </tr>
<tr> <td>Success evaluation</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr>
<tr> <td>Success evaluation process</td> <td>Not formal</td> <td>Not formal</td> <td>Not formal</td> <td>Not formal</td> <td>Formal - project initiation</td> <td>Not formal</td> <td>Not formal</td> <td>Not formal</td> <td>Formal - project initiation</td> <td></td> </tr>
<tr> <td>Evaluation process actors</td> <td>Project manager and the Project manager's hierarchical superior</td> <td>Project manager</td> <td>Project manager</td> <td>Project manager</td> <td>Project manager</td> <td>Project manager</td> <td>Project manager</td> <td>Several stakeholders - including the Project manager and the Client</td> <td></td> <td></td> </tr>
<tr> <td>Evaluation milestones</td> <td>At project closing</td> <td>At project closing</td> <td>At project closing</td> <td>At several milestones and closing</td> <td>At several milestones and closing</td> <td>At several milestones and closing</td> <td>At several milestones and closing</td> <td>Project closing</td> <td>At several milestones, closing, and post-project</td> <td></td> </tr>
<tr> <td>Information for success evaluation</td> <td>Project reports</td> <td>Project reports</td> <td>Surveys</td> <td>Meetings</td> <td>Surveys - Client satisfaction</td> <td>Meetings - with Clients</td> <td>Meetings - with Clients</td> <td>Surveys</td> <td>Project reports, Surveys, Meetings</td> <td></td> </tr>
<tr> <td>Evaluation criteria</td> <td>Time Cost Client satisfaction</td> <td>Time Scope Client satisfaction</td> <td>Quality Client satisfaction</td> <td>Time Cost Client satisfaction</td> <td>Time Scope Quality</td> <td>Cost Client satisfaction</td> <td>Time Cost Client satisfaction</td> <td>Time Scope Client satisfaction</td> <td>Time Cost Quality</td> <td>Time Scope</td> </tr>
</tbody>
</table>
Table 1. Evaluation of IS projects' success in 10 companies

There are no references to any criterion that addresses the impact of the project's outcomes in its context. In all the studied cases, project time and money spent seem to be of major concern. These two criteria allow determining the accomplishment of the project's estimates regarding duration and cost. However, they do not focus on the project's outcomes. Too much focus on meeting duration and cost estimates can be detrimental to the success of the project's outcome. In most cases, scope/quality and customer satisfaction are also considered.
What customer satisfaction means depends on the time at which the assessment is made. If the last evaluation of customer satisfaction is carried out when the project outcome is delivered or deployed, customer satisfaction will inevitably address only the fulfillment of requirements, without taking into consideration the motivations for the launch of the project and its expected impact. The defined criteria are one of the most important aspects influencing the result of a project, since they are used when evaluating project success (Varajão 2016). According to Bannerman (2008), the success of a project should be measured based on five aspects: (i) processes; (ii) management; (iii) products; (iv) business; and (v) strategy. The quasi-exclusive focus on the Iron Triangle in the studied companies denotes a primitive or embryonic evaluation process. A limited view of the success of a project – focusing only on time, cost and scope – can lead to projects being managed based on an incomplete set of goals and may subsequently lead to a feeling of dissatisfaction on the part of different stakeholders. Despite success being currently viewed in the literature as multidimensional, with technical, economic, behavioral, business and strategic dimensions (Bannerman 2008; Cao et al. 2011; Ika 2009), in practice this is not evident in the measurement of a project's success. Finally, the sources of information used by the companies for evaluating success are the following: project reports (three companies); surveys (four companies); meetings (five companies). The emphasis on meetings is understandable since in most companies the evaluation of success is an informal process. It is worth noting that it was not possible in this study to point out differences in the evaluation of success related to the size of the companies, the sector, or the project management methodology adopted. In further studies, with a larger set of respondents, this should be explored.
CONCLUSION

The multi-case study enabled a first glimpse of the practices of projects' success evaluation. Regardless of company characteristics, this seems to be an informal and undeveloped process. This should raise concern among both researchers and practitioners, since a limited view of project success or the lack of well-defined processes for the assessment of success can lead to projects being managed according to an unfit and incomplete set of success objectives, later causing stakeholders' dissatisfaction (Varajão 2016; Varajão 2018; Varajão et al. 2016). Furthermore, a recent study showed that, by defining a success management process, companies can achieve several benefits (Varajão et al. 2018): a precise definition of success; a better understanding of the different perspectives of the participating stakeholders; a greater focus on what is most important for achieving project success; the identification and definition of criteria for evaluating success; the definition of milestones to carry out the evaluation; and better monitoring and performance of the project. Whether the situation revealed by the study is common at a broader scale is a relevant question, whose answer demands the launch of a study at a global scale. However, the inquiry instrument for such a survey should incorporate questions that contribute to two research streams. The first research stream addresses deepening the understanding of this issue. It involves "why" and "what factors" research questions that enable establishing in what circumstances companies move from rudimentary to advanced approaches for evaluating the success of projects, that is, approaches that go beyond measures concerning the project itself to measures that encompass the achievement of the mid/long-term benefits that motivated the launch of the project.
Genuine interest in continuous improvement, organizational performance, organizational learning, stakeholder satisfaction, and intangible benefits, among others, can also be among the reasons that motivate companies toward advanced approaches to the evaluation of the success of projects. The second research stream focuses on improving existing guidelines for project management in order to increase the attention paid to the evaluation of the success of projects. The aspects to incorporate in such guidelines include defining and establishing systematic processes for success management, i.e., processes for the planning, assessment, monitoring, and reporting of project success.

Main Contributions

The obtained results present contributions at various levels. On the one hand, they raise awareness among practitioners regarding the need to evaluate success in a properly defined and structured way, according to the projects' characteristics. On the other hand, they allow researchers to identify research opportunities in areas that are not currently receiving the required attention. Finally, they also contribute to IS/IT project management education. Since the evaluation of success is crucial for improving project results, it should be a concern of IS/IT courses and should be included in the courses' curricula.

Limitations and Further Work

This work is based on an exploratory study with the participation of ten companies. Note also that only one interview was held per company. Although a diversified sample was intentionally included, with companies of very different sizes and activity sectors, it is not enough to reach definitive and generalizable conclusions. Note also the high percentage of European-based companies (only one company is based outside Europe, in the United States of America).
Thus, it is proposed to carry out a large-scale survey, in order to substantiate the results presented in this article, and new in-depth case studies to answer the raised questions. There are also several related questions to explore in further studies: Why do companies not commonly have a formal process for evaluating success, since this would be expected at least in large companies? Do they not need a formal process, or are they missing opportunities for improvement? How can the existing gap between research and practice regarding the evaluation of projects' success be closed? Is there a need for new processes/techniques to help project managers in the evaluation of success?

ACKNOWLEDGEMENTS

This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2013. The authors would like to thank the companies for their participation in the study.

REFERENCES
Hit and Peak Finding Algorithms

This note is about n-d array processing algorithms implemented in ImgAlgos.PyAlgos. The algorithms can be called from python, but the low-level implementation is done in C++ with a boost/python wrapper. All examples are shown for the python-level interface.

Content
- Common features of algorithms
  - n-d arrays
  - Windows
  - Mask
  - Make object and set parameters
  - Define ROI using windows and/or mask
- Hit finders
  - Number of pixels above threshold - number_of_pix_above_thr
  - Total intensity above threshold - intensity_of_pix_above_thr
- Peak finders
  - Peak selection parameters
  - Two threshold "Droplet finder" - peak_finder_v1, peak_finder_v4
  - Flood filling algorithm - peak_finder_v2
  - Local maximum search algorithm - peak_finder_v3
  - Demonstration for local maximum map
  - Evaluation of the background level, rms, and S/N ratio
  - Matrices of pixels for r0=3 and 4 and different dr values
  - Matrices of pixels for r0=5 and 6 and different dr values
  - Matrix of pixels for r0=7
- Test of peak finders
- Photon counting
- References

Common features of algorithms

n-d arrays

LCLS detector data come from the DAQ as n-d arrays (ndarray in C++ or numpy.array in Python). In the simplest case, camera data is an image represented by a 2-d array. For composite detectors like CSPAD, CSPAD2X2, EPIX, PNCCD, etc., data comes from a set of sensors as 3-d or 4-d arrays. If the relative sensor positions are known, the sensors can be composed into a 2-d image. But such an image contains a significant portion of "fake" empty pixels, which may be up to ~20-25% in the case of CSPAD. The most efficient data processing algorithms should therefore be able to work with n-d arrays directly.

Windows

In some experiments not all sensors contain useful data. It might be more efficient to select a Region of Interest (ROI) on the sensors where data need to be processed. To support this feature, a tuple (or list) of windows is passed as a constructor parameter.
Each window is represented by a tuple of 5 parameters (segnum, rowmin, rowmax, colmin, colmax), where segnum is a sensor index in the n-d array and the other parameters constrain the rows and columns of the window in the sensor's 2-d matrix. Several windows can be defined for the same sensor using the same segnum. For 2-d arrays the segnum parameter is not used, but it still needs to be present in the window tuple as an arbitrary integer. To increase algorithm efficiency, only pixels in the windows are processed. If windows=None, all sensors will be processed. The array of windows can be converted into a 3-d or 2-d mask array using the method pyimgalgos.GlobalUtils.mask_from_windows.

Mask

Alternatively, the ROI can be defined by a mask of good/bad (1/0) pixels. For a 2-d image the mask can easily be defined in the user's code. In the case of 3-d arrays the Mask Editor helps to produce the ROI mask. The entire procedure includes:
- conversion of the n-d array to a 2-d image using geometry,
- production of the ROI 2-d mask with the Mask Editor,
- conversion of the 2-d mask to the mask n-d array using geometry.

All steps of this procedure can be completed in the Calibration Management Tool under the tab ROI. In addition, the mask accounts for bad pixels which should be discarded in processing. The total mask may be a product of the ROI mask and other masks representing good/bad pixels.

**Make object and set parameters**

Any algorithm object can be created as shown below.

```python
import numpy as np
from ImgAlgos.PyAlgos import PyAlgos

# create object:
alg = PyAlgos(windows=winds, mask=mask, pbits=0)
```

**Define ROI using windows and/or mask**

The Region Of Interest (ROI) is defined by the set of rectangular windows on segments and by the mask, as shown in the example below.
```python
# List of windows
winds = None  # entire size of all segments will be used for peak finding
winds = (( 0,  0, 185,  0, 388),
         ( 1, 20, 160, 30, 300),
         ( 7,  0, 185,  0, 388))

# Mask
mask = None  # (default) all pixels in windows will be used for peak finding
mask = det.mask()  # see class Detector.PyDetector
mask = np.loadtxt(fname_mask)
# mask.shape should be the same as the shape of the data n-d array
```

**Hit finders**

Hit finders return simple values used for decisions on event selection. Two algorithms are implemented in ImgAlgos.PyAlgos. They count the number of pixels and the intensity above threshold in the Region Of Interest (ROI) defined by the windows and mask parameters in the object constructor. Both hit finders receive an input n-d array `data` and a threshold `thr` parameter and return a single value in accordance with the method name.

**Number of pixels above threshold** `number_of_pix_above_thr`

```python
npix = alg.number_of_pix_above_thr(data, thr=10)
```

**Total intensity above threshold** `intensity_of_pix_above_thr`

```python
intensity = alg.intensity_of_pix_above_thr(data, thr=12)
```

**Peak finders**

Peak finders work on a calibrated, background-subtracted n-d array of data, in the region of interest specified by the list of windows and using only good pixels from the mask n-d array. All algorithms implemented here have three major stages:
1. find a list of seed peak candidates
2. process peak candidates and evaluate their parameters
3.
apply selection criteria to the peak candidates and return the list of peaks with their parameters

The list of peaks contains 17 (float, for uniformity) parameters per peak:
- seg - segment index beginning from 0; for example, for CSPAD this index is in the range 0-31
- row - index of row beginning from 0
- col - index of column beginning from 0
- npix - number of pixels accounted in the peak
- amp_max - intensity of the pixel with maximal intensity
- amp_total - total intensity of all pixels accounted in the peak
- row_cgrav - row coordinate of the peak evaluated as a "center of gravity" over pixels accounted in the peak, using their intensities as weights
- col_cgrav - column coordinate of the peak evaluated as a "center of gravity" over pixels accounted in the peak, using their intensities as weights
- row_sigma - row sigma evaluated in the "center of gravity" algorithm
- col_sigma - column sigma evaluated in the "center of gravity" algorithm
- row_min - minimal row of the pixel group accounted in the peak
- col_min - minimal column of the pixel group accounted in the peak
- row_max - maximal row of the pixel group accounted in the peak
- col_max - maximal column of the pixel group accounted in the peak
- bkgd - background level estimated as explained in the section below
- noise - r.m.s. of the background estimated as explained in the section below
- son - signal-over-noise ratio estimated as explained in the section below

There are a couple of classes that help to save/retrieve peak parameter records in/from a text file:
- pyimgalgos.PeakStore
- pyimgalgos.TDFileContainer

Peak selection parameters

Internal peak selection is done at the end of each peak finder, but all peak selection parameters need to be defined right after the algorithm object is created.
These peak selection parameters are set for all peak finders:

```python
# create object:
alg = PyAlgos(windows=winds, mask=mask)

# set peak-selector parameters:
alg.set_peak_selection_pars(npix_min=5, npix_max=5000, amax_thr=0, atot_thr=0, son_min=10)
```

- npix_min: minimum number of pixels that pass the "low threshold" cut
- npix_max: maximum number of pixels that pass the "low threshold" cut
- amax_thr: pixel value must be greater than this high threshold to start a peak
- atot_thr: to be considered a peak, the sum of all pixels in a peak must be greater than this value
- son_min: required signal-over-noise (where the noise region is typically evaluated with the radius/dr parameters). **Set this to zero to disable the signal-over-noise cut.**

All peak finders have a few algorithm-dependent parameters:

- nda - calibrated n-d array of data; pedestals and background should be subtracted and common mode corrected

**Two-threshold "Droplet finder"**

A two-threshold peak-finding algorithm in a restricted region around the pixel with maximal intensity. Using two thresholds speeds up the algorithm: only pixels with intensity above $thr_{high}$ are considered peak-candidate centers. A candidate is accepted as a peak if its intensity is maximal in the (square) region of $radius$ around it. The low threshold is used in the same region to select the pixels contributing to the peak.

`peak_finder_v1`

```python
peaks = alg.peak_finder_v1(nda, thr_low=10, thr_high=150, radius=5, dr=0.05)
```

The parameter $radius$ in this algorithm is used for two purposes:

- it defines the (square) region to search for a local maximum with intensity above $thr_{high}$ and contributing pixels with intensity above $thr_{low}$,
- it is used as the $r_0$ parameter to evaluate background and noise r.m.s. as explained in the section below.
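As an illustration of the two-threshold logic described above, here is a hypothetical numpy sketch; the function name `droplet_peaks` and its return tuple are invented for this sketch, which is not the PyAlgos implementation and omits the mask, windows, and background estimation:

```python
import numpy as np

def droplet_peaks(nda, thr_low=10, thr_high=150, radius=5):
    """Return (row, col, npix, amp_total) for each two-threshold peak candidate."""
    peaks = []
    rows, cols = nda.shape
    for r in range(rows):
        for c in range(cols):
            v = nda[r, c]
            if v < thr_high:
                continue  # only pixels above the high threshold may seed a peak
            r0, r1 = max(0, r - radius), min(rows, r + radius + 1)
            c0, c1 = max(0, c - radius), min(cols, c + radius + 1)
            win = nda[r0:r1, c0:c1]
            if v < win.max():
                continue  # not the maximum of its square neighborhood
            contrib = win > thr_low  # the low threshold selects contributing pixels
            peaks.append((r, c, int(contrib.sum()), float(win[contrib].sum())))
    return peaks
```

The real peak finders additionally evaluate the center-of-gravity, sigma, background, and S/N parameters listed above.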
`peak_finder_v4`

```python
peaks = alg.peak_finder_v4(nda, thr_low=10, thr_high=150, rank=4, r0=5, dr=0.05)
```

The same algorithm as peak_finder_v1, but the parameter $radius$ is split into two: an (unsigned) $rank$ and a (float) $r_0$ with the same meaning as in peak_finder_v3.

**Flood-filling algorithm**

Defines peaks as regions of connected pixels above threshold.

`peak_finder_v2`

```python
peaks = alg.peak_finder_v2(nda, thr=10, r0=5, dr=0.05)
```

Two neighbor pixels are assumed connected if they have a common side. Only pixels with intensity above threshold $thr$ are considered.

**Local-maximum search algorithm**

Defines peaks as local maximums of a specified rank (radius); for example, rank=2 means a 5x5 pixel region around the central pixel.

`peak_finder_v3`

```python
peaks = alg.peak_finder_v3(nda, rank=2, r0=5, dr=0.05)
```

- makes a map of pixels with local maximums of the requested rank for the data ndarray and mask; a pixel code in the map may have bits 0/1/2/4 standing for not-a-maximum / maximum-in-row / maximum-in-column / maximum-in-rectangular-region of radius=rank,
- for each pixel with local maximal intensity in the region defined by the rank radius, counts the number of pixels with intensity above zero, the total positive intensity, the center-of-gravity coordinates, and the r.m.s.,
- using the parameters $r_0$ (e.g. 5.0) and $dr$ (e.g. 0.05), evaluates the background level, the noise r.m.s., and S/N for the pixel with maximal intensity.

**Demonstration of the local maximum map**

Test for a 100x100 image with a random normal distribution of intensities. Example of the map of local maximums found for rank from 1 to 5; color coding of pixels:

- blue=0 - not a local maximum
- green=1 - local maximum in row
- yellow=1+2 - local maximum in row and column
- red=1+2+4 - local maximum in the rectangular region of radius=rank.

Table of rank, associated 2-d region size, fraction of pixels recognized as local maximums for that rank, and time consumption for this algorithm:
<table> <thead> <tr> <th>rank</th> <th>2-d region</th> <th>fraction</th> <th>time, ms</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>3x3</td> <td>0.1062</td> <td>5.4</td> </tr> <tr> <td>2</td> <td>5x5</td> <td>0.0372</td> <td>5.2</td> </tr> <tr> <td>3</td> <td>7x7</td> <td>0.0179</td> <td>5.1</td> </tr> <tr> <td>4</td> <td>9x9</td> <td>0.0104</td> <td>5.2</td> </tr> <tr> <td>5</td> <td>11x11</td> <td>0.0066</td> <td>5.2</td> </tr> </tbody> </table>

**Evaluation of the background level, r.m.s., and S/N ratio**

When a peak is found, its parameters can be refined: the background level, the noise r.m.s., and the signal-over-noise ratio (S/N) can be estimated. All these values are evaluated using pixels surrounding the peak at some distance. The same algorithm is used for all peak finders. The surrounding pixels are defined by a ring with internal radial parameter $r_0$ and ring width $dr$ (both in pixels). The number of surrounding pixels depends on the $r_0$ and $dr$ parameters as shown in the matrices below. We use the notation:

- `+` - central pixel with maximal intensity,
- 1 - pixels counted in the calculation of the averaged background level and noise r.m.s.,
- 0 - pixels not counted.
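The pixel counts quoted with the matrices can be reproduced with a short numpy sketch; treating the ring as $r_0 \le d < r_0 + dr$ (inclusive lower bound) is an assumption that matches those counts:

```python
import numpy as np

def ring_mask(r0, dr, size=15):
    """Boolean mask of pixels whose distance d from the central pixel
    satisfies r0 <= d < r0 + dr (assumed boundary convention)."""
    c = size // 2
    y, x = np.indices((size, size))
    d = np.hypot(y - c, x - c)
    return (d >= r0) & (d < r0 + dr)

# r0=5, dr=0.05 selects exactly the 12 pixels at distance 5:
# (0,±5), (±5,0), (±3,±4), (±4,±3)
n_narrow = int(ring_mask(5, 0.05).sum())  # 12
n_wide = int(ring_mask(5, 0.5).sum())     # 28 (adds distances sqrt(26) and sqrt(29))
```

Widening $dr$ pulls in additional integer-offset distances, which is why the pixel count jumps in discrete steps rather than growing smoothly.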
**Matrices of pixels for different $r_0$ and $dr$ values**

The original page shows 0/1 pixel maps for $r_0$ = 3, 4, 5, 6, and 7 at different $dr$ values; for example, $r_0=5$, $dr=0.05$ selects 12 pixels; $r_0=5$, $dr=0.5$ selects 28 pixels; $r_0=6$, $dr=0.2$ selects 12 pixels; and $r_0=6$, $dr=0.5$ selects 28 pixels. (The 0/1 matrix figures are omitted here.)

**Photon counting**

Photon conversion in pixel
detectors is complicated by the splitting of photons between neighboring pixels. In some cases, the energy deposited by a photon is split between two or (sometimes) more pixels. The photon counting algorithm described here is designed to account for this effect and return an unassembled array with the correct number of photons per pixel. The pythonic API for this algorithm is as follows:

```python
# Import
import psana
det = psana.Detector('myAreaDetectorName')
nphotons_nda = det.photons(evt, nda_calib=None, mask=None, adu_per_photon=None)
```

The `det.photons()` function divides the pixel intensities (ADUs) by `adu_per_photon`, resulting in a fractional number of photons for each pixel. This function is a wrapper around the `photons()` method in PyAlgos:

```python
# Import
from ImgAlgos.PyAlgos import photons

# Merges photons split among pixels and returns an n-d array with the integer
# number of photons per pixel.
nphotons_nda = photons(fphotons, adu_per_photon=30)
```

Sphinx doc

The method `photons` receives a (float) n-d numpy array `fphotons` representing image intensity in terms of a (float) fractional number of photons, and an associated mask of bad pixels. Both arrays should have the same shape. The two lowest dimensions represent pixel rows and columns in 2-d pixel matrix arrays. The algorithm works with good pixels defined by the mask array (1/0 = good/bad pixel). The array `fphotons` is split into two arrays of the same shape: an array containing the whole number of photons (integer) and an array of the leftover fractional number of photons (float). Assuming that photons are only split between two adjacent pixels, we merge and round up adjacent pixels if they sum up to above 0.9 photons. The algorithm is best explained using an example. Let's say we measured the following ADUs on our detector.
`adu_per_photon` is user-defined, but for this example let's set it to 1:

<table> <thead> <tr> <th>ADUs (adu_per_photon=1):</th> </tr> </thead> <tbody> <tr> <td>0.0 3.5 0.1 0.2</td> </tr> <tr> <td>0.2 0.4 0.0 1.2</td> </tr> <tr> <td>0.1 4.7 3.4 0.0</td> </tr> <tr> <td>0.5 0.4 0.4 0.1</td> </tr> </tbody> </table>

We expect the converted photon counts to be:

<table> <thead> <tr> <th>Photons:</th> </tr> </thead> <tbody> <tr> <td>0 4 0 0</td> </tr> <tr> <td>0 0 0 1</td> </tr> <tr> <td>0 5 3 0</td> </tr> <tr> <td>1 0 0 0</td> </tr> </tbody> </table>

To see how we get from ADUs to Photons, we split the ADUs into whole photons and fractional photons.

<table> <thead> <tr> <th>ADUs</th> <th>=</th> <th>Whole photons</th> <th>+</th> <th>Fractional photons</th> </tr> </thead> <tbody> <tr> <td>0.0 3.5 0.1 0.2</td> <td>=</td> <td>0 3 0 0</td> <td>+</td> <td>0.0 0.5 0.1 0.2</td> </tr> <tr> <td>0.2 0.4 0.0 1.2</td> <td>=</td> <td>0 0 0 1</td> <td>+</td> <td>0.2 0.4 0.0 0.2</td> </tr> <tr> <td>0.1 4.7 3.4 0.0</td> <td>=</td> <td>0 4 3 0</td> <td>+</td> <td>0.1 0.7 0.4 0.0</td> </tr> <tr> <td>0.5 0.4 0.4 0.1</td> <td>=</td> <td>0 0 0 0</td> <td>+</td> <td>0.5 0.4 0.4 0.1</td> </tr> </tbody> </table>

Assuming that photons are only split between two adjacent pixels, we search for a pixel that has at least 0.5 photons and an adjacent pixel such that the two sum to above 0.9 photons. In cases where a pixel has multiple adjacent pixels satisfying this, we take the largest adjacent pixel. If such an adjacent pair of pixels is found, the two pixel values are merged into one pixel: the one with the larger value. (See the "After merging adjacent pixels" example below.) The merged pixels are then rounded to whole photons. (See the "Rounded whole photons" example below.)
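The merging step described above can be sketched with numpy; note that the scan order and tie-breaking below are assumptions made for illustration, not the exact ImgAlgos behavior:

```python
import numpy as np

def merge_photons(adus, adu_per_photon=1.0):
    """Split ADUs into whole + fractional photons, merge fractions split
    between 4-connected neighbors, and return integer photons per pixel."""
    f = np.asarray(adus, dtype=float) / adu_per_photon
    whole = np.floor(f).astype(int)
    merged = f - whole  # fractional photons
    used = np.zeros(merged.shape, dtype=bool)
    rows, cols = merged.shape
    for r in range(rows):
        for c in range(cols):
            if used[r, c] or merged[r, c] < 0.5:
                continue
            # find the largest not-yet-consumed 4-connected neighbor
            best, best_rc = -1.0, None
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not used[rr, cc]:
                    if merged[rr, cc] > best:
                        best, best_rc = merged[rr, cc], (rr, cc)
            if best_rc is not None and merged[r, c] + best >= 0.9:
                # merge into the pixel holding the larger fraction
                keep = (r, c) if merged[r, c] >= best else best_rc
                drop = best_rc if keep == (r, c) else (r, c)
                merged[keep] = merged[r, c] + best
                merged[drop] = 0.0
                used[keep] = used[drop] = True
    return whole + np.rint(merged).astype(int)
```

Running it on the ADU grid of the worked example reproduces the merge-then-round behavior described in the text, up to the tie-breaking assumptions noted above.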
<table> <thead> <tr> <th>Fractional photons:</th> </tr> </thead> <tbody> <tr> <td>0.0 0.5 0.1 0.2</td> </tr> <tr> <td>0.2 0.4 0.0 0.2</td> </tr> <tr> <td>0.1 0.7 0.4 0.0</td> </tr> <tr> <td>0.5 0.4 0.4 0.1</td> </tr> </tbody> </table>

<table> <thead> <tr> <th>After merging adjacent pixels:</th> </tr> </thead> <tbody> <tr> <td>0.0 0.9 0.1 0.2</td> </tr> <tr> <td>0.2 0.0 0.0 0.2</td> </tr> <tr> <td>0.1 1.1 0.0 0.0</td> </tr> <tr> <td>0.9 0.0 0.4 0.1</td> </tr> </tbody> </table>

<table> <thead> <tr> <th>Rounded whole photons:</th> </tr> </thead> <tbody> <tr> <td>0 1 0 0</td> </tr> <tr> <td>0 0 0 0</td> </tr> <tr> <td>0 1 0 0</td> </tr> <tr> <td>1 0 0 0</td> </tr> </tbody> </table>

Photons is then the sum of "Whole photons" and "Rounded whole photons":

<table> <thead> <tr> <th>Photons</th> <th>=</th> <th>Whole photons</th> <th>+</th> <th>Rounded whole photons</th> </tr> </thead> <tbody> <tr> <td>0 4 0 0</td> <td>=</td> <td>0 3 0 0</td> <td>+</td> <td>0 1 0 0</td> </tr> <tr> <td>0 0 0 1</td> <td>=</td> <td>0 0 0 1</td> <td>+</td> <td>0 0 0 0</td> </tr> <tr> <td>0 5 3 0</td> <td>=</td> <td>0 4 3 0</td> <td>+</td> <td>0 1 0 0</td> </tr> <tr> <td>1 0 0 0</td> <td>=</td> <td>0 0 0 0</td> <td>+</td> <td>1 0 0 0</td> </tr> </tbody> </table>

**References**

- ImgAlgos.PyAlgos - code documentation
- psalgos - new peak-finder and other algorithms code documentation
- Peak Finding - short announcement about peak finders
- Hit and Peak Finders - examples in Chris' tutorial
- GUI for tuning peak finding - Chun's page in development
- Auto-generated documentation - references to code-based documentation for a few other useful packages
- pyimgalgos.PeakStore - class helping to save peak parameter records in the text file
- pyimgalgos.TDFileContainer - class helping to retrieve peak parameter records from the text file
- Test of Peak Finders - example of exploitation of peak finders
- Test of Peak Finders - V2 - example of exploitation of peak finders after revision 1 (uniformization)
- photons - sphinx doc
- Peak Finding Module - (deprecated) psana module demonstrating examples and results
- Psana Module Catalog - (deprecated) peak finding psana modules
- Psana Module Examples - (deprecated) peak finding examples in psana modules
VulDeBERT: A Vulnerability Detection System Using BERT

Soolin Kim∗†, Jusop Choi∗, Muhammad Ejaz Ahmed†, Surya Nepal† and Hyoungshick Kim∗†
∗ Department of Electrical and Computer Engineering, Sungkyunkwan University, Republic of Korea
† Data61, CSIRO, Australia
{soolinkim, cjs1992, hyoung}@skku.edu, {ejaz.ahmed, surya.nepal}@data61.csiro.au

Abstract—Deep learning technologies have recently received much attention for detecting vulnerable code patterns accurately. This paper proposes a new deep learning-based vulnerability detection tool dubbed VulDeBERT, built by fine-tuning a pre-trained language model, Bidirectional Encoder Representations from Transformers (BERT), on a vulnerable code dataset. To support VulDeBERT, we develop a new code analysis tool to extract well-represented abstract code fragments from C and C++ source code. The experimental results show that VulDeBERT outperforms the state-of-the-art tool, VulDeePecker [1], for two security vulnerability types (CWE-119 and CWE-399). For the CWE-119 dataset, VulDeBERT achieved an F1 score of 94.6%, significantly better than VulDeePecker's F1 score of 86.6% in the same settings. Likewise, for the CWE-399 dataset, VulDeBERT achieved an F1 score of 97.9%, also better than VulDeePecker's F1 score of 95% in the same settings.

Index Terms—Vulnerability Detection, Code Gadget

I. INTRODUCTION

Detecting security vulnerabilities is a well-known fundamental problem in software security because vulnerabilities can potentially be abused for security attacks. Therefore, many static analysis techniques have been proposed to identify vulnerable parts of source code [2], [3], [4], [5]. However, these techniques mainly rely on a database of known vulnerable code patterns or rules, requiring significant human expertise to find effective code patterns and rules. Consequently, they are ineffective at detecting vulnerable code patterns that differ even slightly in some expressions.
Recently, machine learning-based techniques [1], [6] have been proposed to overcome these limitations. For example, Li et al. [1] introduced a deep learning-based vulnerability detection system dubbed VulDeePecker and showed that VulDeePecker could achieve fewer false negatives (with reasonable false positives) than other approaches. This paper presents a more effective and accurate deep learning-based vulnerability detection system dubbed VulDeBERT using Bidirectional Encoder Representations from Transformers (BERT) [7], a well-known model for natural language processing. Because program source code is composed of meaningful sequences of multiple instructions and contains highly repeated instruction sequences, BERT has been applied to various tasks in analyzing program code and has achieved high performance [8]. Thus, we use BERT to develop VulDeBERT for processing program source code as input and detecting vulnerable code fragments. BERT was originally pre-trained with unlabeled data extracted from BooksCorpus and English Wikipedia for natural language processing. Therefore, to adapt BERT to the vulnerable code detection task in C and C++ source code, we found it important to conduct a fine-tuning process with code gadgets. Each code gadget represents a sequence of multiple (abstract) program statements that are semantically related to each other in terms of data and control dependencies. Generating well-structured code gadgets from source code is essential for VulDeBERT. Thus, we develop a new code gadget generation method that can effectively be used for VulDeBERT and other deep learning-based vulnerability detection models. Our code gadget generation method can properly handle code containing nested function calls, which cannot be processed by VulDeePecker [1]. Our contributions are summarized below:

• We propose a vulnerability detection system called VulDeBERT using the BERT model for C and C++ source code.
We will make our code available at https://github.com/SKKU-SecLab/VulDeBERT.git (see Section II).

• We develop a new code gadget generation tool that can be used for static analysis in C and C++ (see Section III).

• Our experimental results show that VulDeBERT outperforms the state-of-the-art method [1] for two security vulnerability types (CWE-119 and CWE-399) on the SARD [9] and NVD [10] datasets (see Section IV).

II. OVERVIEW OF VULDEBERT

In this section, we describe the overview of VulDeBERT, which is designed as a classification model to detect security vulnerabilities in C and C++ source code. VulDeBERT has a training phase and a detection phase, as shown in Figure 1. In the training phase, VulDeBERT takes program source code as input, generates labeled code gadgets representing safe and vulnerable abstract program code fragments, and fine-tunes a pre-trained BERT model with those code gadgets (see Section II-A). In the detection phase, VulDeBERT takes a target source code fragment as input, generates its code gadget, and checks whether it is vulnerable or not with the fine-tuned BERT model (see Section II-B).

(Copyright and Reprint Permission: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limit of U.S. copyright law for private use of patrons those articles in this volume that carry a code at the bottom of the first page, provided the per-copy fee indicated in the code is paid through Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For reprint or republication permission, email to IEEE Copyrights Manager at pubs-permissions@ieee.org. All rights reserved. Copyright ©2022 by IEEE.)

A. Training phase

The training phase aims to train a pre-trained BERT model for detecting a vulnerability by receiving ground-truth data. This phase comprises five stages, as shown in Figure 1a. Given a set of program source codes, VulDeBERT computes program slices related to system function calls (1).
Next, VulDeBERT transforms program slices into (labeled) code gadgets by abstracting the program slices (e.g., replacing variable and function names with symbolic representations) (2). Section III provides a more detailed explanation of generating code gadgets. The code abstraction process can occasionally generate ambiguous code gadgets as a side effect, because identical code gadgets can be produced from both safe and vulnerable program slices. Therefore, we remove ambiguous code gadgets to avoid training on them (3). Finally, we embed the generated code gadgets for BERT input representation to train a BERT model. VulDeBERT appends the special [CLS] token to indicate the start of the input vector and the special [SEP] token to indicate the end of the input vector (4). After encoding the code gadgets, we input them into the BERT model and fine-tune all model parameters in the BERT model to detect the target vulnerabilities. The output vector is fed to a binary classifier to detect a specific vulnerability type (5).

B. Detection phase

In the detection phase, VulDeBERT uses the fine-tuned BERT model to detect the vectors corresponding to vulnerable code gadgets generated from a target source code. Given a target source code, VulDeBERT performs the same steps except for the step of removing ambiguous code gadgets (3 in Figure 1a) to obtain the vector corresponding to the target source code's code gadgets, as shown in Figure 1b. When a vector is classified as a vulnerable one, the vulnerability's location is reported. Otherwise, the vector is classified as a safe one.

Fig. 2: Example of code gadget generation. We collect the code gadget when it is triggered by the call of a system function in the function `free_fun`. If we adjust the depth of the callee/caller functions to unlimited, the code gadget is composed of all functions.

III.
CODE GADGET GENERATION

To use a deep learning model with program source code, we need to generate input vectors representing the source code, which can be fed into the deep learning model. To achieve this goal, we use an abstract representation of a program source code fragment, dubbed a code gadget, representing a sequence of multiple (abstract) program statements that are semantically related to each other in terms of data and control dependencies. The previous work [1] introduced the idea of using code gadgets for vulnerability detection. This paper also uses this idea, with a new code gadget generation method. We are particularly interested in generating code gadgets from code slices related to the parameters of system function calls because our main research interest is finding security vulnerabilities related to system function calls. Therefore, a code gadget is created by collecting the code slices related to a system function call from its caller and callee parts. The code gadget generation of VulDeBERT is composed of three stages: computing program slices related to system function calls (see Section III-A), transforming program slices into code gadgets (see Section III-B), and removing ambiguous code gadgets (see Section III-C).

A. Computing program slices

In VulDeBERT, the first stage of code gadget generation computes the program slices related to system function calls extracted from a given source code. Figure 2 shows an example of the code gadget generation. The first step in computing program slices is removing non-ASCII characters and comments. Then VulDeBERT starts collecting program slices related to a system function call, `free(data)`. We perform backward program slicing from this function call to discover the statements contributing to the value of the `data` variable. The code lines related to the `data` variable are in the `free_fun` function and also in the `main` function. We extract those code slices.
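As a toy illustration of backward slicing over data dependencies (the statement representation and the `backward_slice` helper are invented for this sketch; real slicing also tracks control dependencies and works on parsed C code):

```python
def backward_slice(stmts, seed_vars):
    """stmts: list of (defined_var_or_None, used_vars, source_text) in program order.
    Walk backwards, keep every statement that defines a relevant variable
    (or, for pure calls like free, uses one), and propagate its used variables."""
    relevant = set(seed_vars)
    kept = []
    for lhs, used, text in reversed(stmts):
        if (lhs is not None and lhs in relevant) or (lhs is None and relevant & set(used)):
            kept.append(text)
            relevant |= set(used)
    return list(reversed(kept))

# Hypothetical statement list: only the data/n chain contributes to free(data);
# the x/y statements are unrelated and get sliced away.
stmts = [
    ("n",    [],       "int n = 10;"),
    ("x",    [],       "int x = 1;"),
    ("data", ["n"],    "char *data = malloc(n);"),
    ("y",    ["x"],    "int y = x + 1;"),
    (None,   ["data"], "free(data);"),
]
```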
After repeating such backward program slicing processes, we can collect the program slices related to `free(data)`. We combine those slices into one program code according to the function call order. VulDeePecker [1] used a similar way to compute program slices related to a system function call. However, we found that VulDeePecker's implementation cannot properly handle nested function calls. To address this problem, we develop a new code gadget generation tool that handles k levels of nested function calls from the main function by tracing nested function calls iteratively.

B. Transforming code gadgets

In this stage, VulDeBERT transforms the collected program slices into code gadgets. We need to transform the concrete program code in program slices into generalized, abstract symbolic representations to be fed into the deep learning model. For program analysis, user-specific functions and variables, comments, strings, and constant values may not be important. Therefore, we aim to capture only the core semantic information needed to analyze the structure of vulnerable code. The first step in transforming program slices into code gadgets is replacing user-defined functions and variables with abstract symbolic representations. For example, as shown in Figure 2, the user-defined function name ("free_fun") and variable name ("data") are replaced with abstract symbols ("FUN1" and "VAR1") representing a function and a variable, respectively. We only consider the structure of statements and their relationship. To achieve this goal, we employ the following rules:

- The ith ordered variable name is replaced with "VARi."
- The ith ordered function name is replaced with "FUNi."

However, we preserve the function names of system function calls (e.g., `malloc`, `fread`, and `memset`). Next, we label the generated code gadgets as vulnerable or patched with the corresponding program source code information, which can be collected with source code from NVD and SARD.
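A simplified regex-based sketch of these replacement rules (the keyword and system-function whitelists here are illustrative, not VulDeBERT's actual tables, and a real implementation would use a proper C tokenizer):

```python
import re

# Illustrative whitelists: C keywords stay verbatim, and system function
# names are preserved per the rule above.
C_KEYWORDS = {"int", "char", "void", "if", "else", "for", "while", "return", "sizeof"}
SYSTEM_FUNCS = {"free", "malloc", "memset", "fread", "printf"}

def abstract_slice(code):
    var_map, fun_map = {}, {}
    def repl(m):
        name, call = m.group(1), m.group(2)
        if name in C_KEYWORDS or name in SYSTEM_FUNCS:
            return m.group(0)
        if call:  # identifier followed by '(' -> user-defined function
            sym = fun_map.setdefault(name, "FUN%d" % (len(fun_map) + 1))
            return sym + call
        sym = var_map.setdefault(name, "VAR%d" % (len(var_map) + 1))
        return sym
    return re.sub(r"\b([A-Za-z_]\w*)\b(\s*\()?", repl, code)
```

For instance, a slice using a user-defined allocator and `free` keeps the system call name while its variables and helper functions become VAR1, FUN1, and so on.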
Sometimes, vulnerable and patched code can generate the same code gadgets. Therefore, in VulDeBERT, we exclude such code gadgets as ambiguous. Finally, all generated code gadgets are classified into vulnerable or patched ones. Interestingly, we found that the VulDeePecker code gadget dataset contains several mislabeled code gadgets, which is consistent with the observation in the previous study [11]. Thus, we develop our own code gadget generation method to avoid such cases.

C. Removing ambiguous code gadgets

We manually examined the generated code gadgets and found that code abstraction (i.e., replacing variable and function names with symbolic representations) can occasionally cause false alarms as a side effect, specifically for short source code. A few lines of abstracted code can be too ambiguous to determine whether they match a piece of vulnerable or patched code. We need to remove such code gadgets from the dataset to avoid false alarms.

IV. EVALUATION

In this section, we conduct experiments to show the feasibility of VulDeBERT and evaluate its detection accuracy compared to the state-of-the-art solution, VulDeePecker [1]. We first explain the implementation details and the dataset used in the experiments and then present the evaluation results to discuss the effectiveness of VulDeBERT.

A. Experimental settings

Setup. Our experimental settings are as follows: we use an Intel(R) Xeon(R) 2.10 GHz CPU with 256.0 GB RAM and an NVIDIA GeForce Titan for building the machine learning models used in the experiments. We used PyTorch to implement VulDeBERT. To find the optimized VulDeBERT, we trained it with a learning rate of 0.00001 and three epochs. We set the random seed to 2022 and the batch size to 24. For VulDeBERT, we use the BERT-base model with 24 layers of transformer blocks, a hidden size of 1024, and 16 self-attention heads. Furthermore, we use the Adam optimizer and cross-entropy for the classification loss.
To show the superiority of VulDeBERT, we used VulDeePecker [1] as a baseline model. However, the source code of VulDeePecker is not publicly available. Therefore, we implemented the bidirectional LSTM (BiLSTM) classification model used for VulDeePecker [1] with the same model architecture and parameters described in [1]. We used Keras [12] to implement the BiLSTM. The model's hyperparameters are exactly the same as those in VulDeePecker [1]: the model was trained with a learning rate of 0.00001; the number of tokens in the vector representation of code gadgets was set to 50; the dropout was set to 0.5; the batch size was set to 64; the number of epochs was set to 4; mini-batch gradient descent together with ADAMAX [13] was used for training; and 300 hidden nodes were chosen.

<table> <thead> <tr> <th>CWE</th> <th>Dataset</th> <th># of code gadgets</th> <th># of vulnerable code gadgets</th> <th># of patched code gadgets</th> </tr> </thead> <tbody> <tr> <td>CWE-119</td> <td>Our code gadgets</td> <td>26,702</td> <td>14,206</td> <td>12,496</td> </tr> <tr> <td></td> <td>VulDeePecker code gadgets</td> <td>39,753</td> <td>10,440</td> <td>29,313</td> </tr> <tr> <td>CWE-399</td> <td>Our code gadgets</td> <td>14,807</td> <td>11,555</td> <td>3,252</td> </tr> <tr> <td></td> <td>VulDeePecker code gadgets</td> <td>21,885</td> <td>7,285</td> <td>14,600</td> </tr> </tbody> </table>

**TABLE I: Code gadgets description.**

**Dataset.** We generated our code gadgets from two program source code collections maintained by the National Institute of Standards and Technology (NIST): the NVD [10] and the Software Assurance Reference Dataset (SARD) project [9]. We focus on two types of CWE (i.e., buffer error vulnerabilities (CWE-119) and resource management error vulnerabilities (CWE-399)) to evaluate the performance of VulDeBERT.
For CWE-119, our code gadget generation method generated 14,206 vulnerable code gadgets and 12,496 patched code gadgets, while 10,440 vulnerable and 29,313 patched code gadgets exist in the VulDeePecker code gadget dataset (https://github.com/CGCL-codes/VulDeePecker). For CWE-399, our code gadget generation method generated 11,555 vulnerable code gadgets and 3,252 patched code gadgets, while 7,285 vulnerable and 14,600 patched code gadgets exist in the VulDeePecker code gadget dataset. Table I summarizes the datasets used in the experiments. For CWE-119, we initially obtained 34,805 vulnerable and 24,457 patched code gadgets, including ambiguous ones. However, after removing the ambiguous code gadgets described in Section III-C, we finally have 14,206 vulnerable and 12,496 patched code gadgets. In contrast, for CWE-399, the number of ambiguous code gadgets is relatively smaller: we initially obtained 16,625 vulnerable and 3,368 patched code gadgets, and finally have 11,555 vulnerable and 3,252 patched code gadgets after removing ambiguous ones.

**Training and test code gadgets.** We randomly selected 80% of the code gadgets as training code gadgets and used the remaining 20% as test code gadgets. We used this setting for all experiments.

**Evaluation metrics.** For evaluating our VulDeBERT detection system, we use the following metrics: false positive rate (FPR), false negative rate (FNR), true positive rate (TPR), precision (P), and F1 score (F1) [14]. Let TP be the number of code gadgets with vulnerabilities detected correctly, FP the number of code gadgets with false vulnerabilities detected, FN the number of code gadgets with real vulnerabilities that are not detected, and TN the number of code gadgets correctly identified as not vulnerable.
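These four counts map directly onto the standard metric definitions, which can be computed as:

```python
# Standard confusion-matrix metrics (plain Python sketch)
def metrics(tp, fp, fn, tn):
    fpr = fp / (fp + tn)          # false positive rate
    fnr = fn / (tp + fn)          # false negative rate
    tpr = tp / (tp + fn)          # true positive rate (recall)
    p = tp / (tp + fp)            # precision
    f1 = 2 * p * tpr / (p + tpr)  # harmonic mean of precision and recall
    return {"FPR": fpr, "FNR": fnr, "TPR": tpr, "P": p, "F1": f1}
```

Note that FNR and TPR are complementary (FNR = 1 - TPR), so a low FNR is the same statement as a high recall.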
The false positive rate \(\text{FPR} = \frac{\text{FP}}{\text{FP} + \text{TN}}\) measures the ratio of falsely reported vulnerabilities to all code gadgets that are not vulnerable. The false negative rate \(\text{FNR} = \frac{\text{FN}}{\text{TP} + \text{FN}}\) measures the ratio of missed vulnerabilities to all code gadgets that are vulnerable. The true positive rate \(\text{TPR} = \frac{\text{TP}}{\text{TP} + \text{FN}}\) measures the ratio of correctly detected vulnerabilities to all code gadgets that are vulnerable. The precision \(\text{P} = \frac{\text{TP}}{\text{TP} + \text{FP}}\) measures the correctness of the detected vulnerabilities. The F1 score \(\text{F1} = \frac{2 \times \text{P} \times \text{TPR}}{\text{P} + \text{TPR}}\) is the harmonic mean of precision and true positive rate. **B. Results** **Effectiveness of code gadget generation method.** To demonstrate the effectiveness of our code gadget generation method (see Section II-A), we evaluate the performance of models trained with our own code gadgets and with VulDeePecker code gadgets [1], respectively. Experiments were repeated ten times for each configuration. Tables II and III summarize the evaluation results. ### Table II: Results of detecting CWE-119 ($\mu$: Mean, $\sigma$: Standard deviation). 
<table> <thead> <tr> <th rowspan="2">Model</th> <th rowspan="2">Dataset</th> <th colspan="2">FPR</th> <th colspan="2">FNR</th> <th colspan="2">TPR</th> <th colspan="2">P</th> <th colspan="2">F1 score</th> </tr> <tr> <th>$\mu$</th> <th>$\sigma$</th> <th>$\mu$</th> <th>$\sigma$</th> <th>$\mu$</th> <th>$\sigma$</th> <th>$\mu$</th> <th>$\sigma$</th> <th>$\mu$</th> <th>$\sigma$</th> </tr> </thead> <tbody> <tr> <td rowspan="2">BERT</td> <td>Our code gadgets</td> <td>2.1</td> <td>0.4</td> <td>7.8</td> <td>1.3</td> <td>92.2</td> <td>1.3</td> <td></td> <td></td> <td>94.6</td> <td>0.9</td> </tr> <tr> <td>VulDeePecker code gadgets</td> <td>1.5</td> <td>0.1</td> <td>10.9</td> <td>0.6</td> <td>89.1</td> <td>0.6</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td rowspan="2">BiLSTM</td> <td>Our code gadgets</td> <td>4.8</td> <td>0.5</td> <td>15.6</td> <td>0.7</td> <td>84.4</td> <td>0.7</td> <td></td> <td></td> <td>88.5</td> <td></td> </tr> <tr> <td>VulDeePecker code gadgets</td> <td>2.5</td> <td>0.5</td> <td>29.0</td> <td>1.3</td> <td>71.0</td> <td>1.3</td> <td></td> <td></td> <td>79.8</td> <td></td> </tr> <tr> <td></td> <td>VulDeePecker results [1]</td> <td>2.9</td> <td></td> <td>18.0</td> <td></td> <td>82.0</td> <td></td> <td>91.7</td> <td></td> <td>86.6</td> <td></td> </tr> </tbody> </table> ### Table III: Results of detecting CWE-399 ($\mu$: Mean, $\sigma$: Standard deviation). <table> <thead> <tr> <th rowspan="2">Model</th> <th rowspan="2">Dataset</th> <th colspan="2">FPR</th> <th colspan="2">FNR</th> <th colspan="2">TPR</th> <th colspan="2">P</th> <th colspan="2">F1 score</th> </tr> <tr> <th>$\mu$</th> <th>$\sigma$</th> <th>$\mu$</th> <th>$\sigma$</th> <th>$\mu$</th> <th>$\sigma$</th> <th>$\mu$</th> <th>$\sigma$</th> <th>$\mu$</th> <th>$\sigma$</th> </tr> </thead> <tbody> <tr> <td rowspan="2">BERT</td> <td>Our code gadgets</td> <td>0.3</td> <td>0.2</td> <td>4.0</td> <td>0.3</td> <td>96.0</td> <td>0.3</td> <td></td> <td></td> <td>97.9</td> <td>0.1</td> </tr> <tr> <td>VulDeePecker code gadgets</td> <td>1.0</td> <td>0.2</td> <td>1.1</td> <td>0.5</td> <td>98.9</td> <td>0.5</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td rowspan="2">BiLSTM</td> <td>Our code gadgets</td> <td>3.2</td> <td>0.3</td> <td>10.8</td> <td>0.8</td> <td>89.2</td> <td>0.8</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>VulDeePecker code gadgets</td> <td>2.4</td> <td>3.0</td> <td>25.1</td> <td>6.2</td> <td>74.9</td> <td>6.2</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>VulDeePecker results [1]</td> <td>2.8</td> <td></td> <td>4.7</td> <td></td> <td>95.3</td> <td></td> <td>94.6</td> <td></td> <td>95.0</td> <td></td> </tr> </tbody> </table> Table II shows that the deep learning models (BERT and BiLSTM) trained with our code gadgets overall achieve higher accuracy than those trained with the VulDeePecker code gadgets for CWE-119. 
VulDeBERT with our code gadgets produced the best results, achieving a mean F1 score of 94.6% (standard deviation of 0.9), which is significantly better than the F1 score of 86.6% achieved by VulDeePecker [1]. Although our own BiLSTM implementation's detection accuracy (79.8%) is not superior to VulDeePecker's accuracy (86.6%) reported in [1] when VulDeePecker's code gadgets are used, our BiLSTM implementation produced better detection accuracy (88.5%) than the accuracy reported in [1] when our code gadgets are used. Table III shows the experimental results for CWE-399. Again, BERT and BiLSTM work better with our code gadgets than with the VulDeePecker code gadgets. The best model configuration is BERT with our code gadgets, which achieves a mean F1 score of 97.9% (standard deviation of 0.1). **Effectiveness of deep learning models.** Tables II and III also show that BERT works better than BiLSTM for detecting vulnerable code. For CWE-119, BERT always outperforms BiLSTM when the same code gadget dataset is used for both models, and the same holds for CWE-399. Based on these evaluation results, our recommended configuration is BERT with our code gadgets for both vulnerability types (CWE-119 and CWE-399). ### V. Limitations **Programming language specific analysis.** In theory, VulDeBERT can be implemented for any programming language. In practice, however, it is challenging to provide a code gadget generation tool suitable for a target programming language. Therefore, our current VulDeBERT implementation supports only program source code written in C and C++. **Lack of support for diverse vulnerability types.** Our current VulDeBERT implementation can only discover vulnerabilities related to system function calls. 
Therefore, we must explore new static analysis techniques to compute program slices related to other vulnerability types (e.g., cryptographic misuses). In addition, BERT also needs to be fine-tuned with new code gadgets related to such vulnerabilities. **Focusing on fine-tuning.** In the current VulDeBERT implementation, we used a conventional BERT model pre-trained for natural language processing. To obtain a model better suited to vulnerability detection, we need to consider another pre-trained model, such as CodeBERT [8], which was trained on a program source code dataset. ### VI. Related work Our discussion of related work is grouped as follows. **Conventional vulnerability detection methods.** Conventional vulnerability detection methods use a set of rules or a database of known vulnerable code patterns. Rule-based methods require manual effort to generate effective rules, and their performance varies with the security experts' knowledge. For example, Flawfinder [15], RATS [16], and Checkmarx [17] are well-known tools, but they suffer from high false positive and false negative rates [6]. Other approaches [2], [3], [4], [5] develop static analysis tools based on a database of known vulnerable code patterns. This approach is quite useful in detecting known vulnerable code patterns but is not robust against slightly modified variants of those patterns. **Machine learning-based vulnerability detection methods.** Using machine learning to develop static analysis tools is not new. However, recent deep learning advancements have inspired the development of deep learning-based vulnerability detection tools. In general, this approach builds a classification model from vulnerable and safe code and uses the model to detect (unlabeled) vulnerable code fragments. Li et al. 
[1] introduced the first deep learning-based vulnerability detection model, dubbed VulDeePecker, using a BiLSTM model that feeds one LSTM network [18] with the program statements in the forward direction and another in the backward direction. To train on program source code effectively, they also introduced the concept of code gadgets to transform actual program code into more generalized and normalized code fragments. Their experimental results show that VulDeePecker outperforms other detection methods such as VUDDY [5] and VulPecker [4] in detection accuracy. In addition, VulDeePecker was used to detect four vulnerabilities that had not been reported in public vulnerability databases. Li et al. [6] extended VulDeePecker into a new tool dubbed SySeVR, using a program dependency graph to capture the semantic meaning of program code more effectively. They built a BiGRU-based model [18], [19], [20] for SySeVR because their experimental results showed that BiGRU outperformed other model architectures, including BiLSTM. Jeon et al. [21] proposed a deep learning-based vulnerability detection tool using BERT, dubbed SmartConDetect, for detecting security vulnerabilities in smart contracts on Ethereum. Their experimental results showed that BERT can effectively be used to detect software vulnerabilities in program source code. We propose a novel deep learning-based vulnerability detection tool, dubbed VulDeBERT, for software vulnerabilities in C and C++ program code. Our experimental results demonstrate that VulDeBERT produces better detection accuracy than VulDeePecker [1] for two security vulnerability types (CWE-119 and CWE-399). **BERT.** Bidirectional Encoder Representations from Transformers (BERT) [7] was proposed for pre-training deep bidirectional representations from unlabeled text by jointly considering the forward and backward directions of contextual sentences. 
As a result, BERT has been successfully adopted as a pre-trained representation for various applications (e.g., question answering [22]). Furthermore, BERT is also used as a pre-trained model for program analysis. For example, CuBERT [23] was built with two pre-training tasks: predicting masked tokens and checking whether two logical lines of code are contextually related. Feng et al. [8] proposed CodeBERT as a bimodal pre-trained model for natural language and programming language. The fine-tuned CodeBERT performed well for natural language code search and code-to-documentation generation. ### VII. Conclusion In this paper, we propose VulDeBERT as a novel vulnerability detection model for C and C++ source code. The experimental results demonstrated that VulDeBERT outperforms VulDeePecker [1] in detecting two well-known security vulnerability types (CWE-119 and CWE-399). For the CWE-119 dataset, VulDeBERT achieved an F1 score of 94.6%, which is significantly better than VulDeePecker's F1 score of 86.6%. For the CWE-399 dataset, VulDeBERT achieved an F1 score of 97.9%, which is also better than VulDeePecker's F1 score of 95%. VulDeBERT can be extended to detect security vulnerabilities in other programming languages. Therefore, as part of future work, we plan to extend VulDeBERT into a more generalized model with pre-training tasks for various programming language analyses. Furthermore, we plan to analyze not only the security vulnerabilities related to system function calls but also other types of security vulnerabilities. **ACKNOWLEDGMENT** The authors would like to thank the anonymous reviewers. Hyoungshick Kim is the corresponding author. This work was supported by the Korean government's projects (No.2018-0-00532, No.2022-0-00995) and was carried out while the first and last authors worked at CSIRO Data61. **REFERENCES**
**Realizing Privacy-Preserving Features in Hippocratic Databases**

Yasin Laura-Silva and Walid G. Aref
Department of Computer Science, Purdue University, West Lafayette, IN 47907
\{ylaurasi, aref\}@cs.purdue.edu

Technical Report CSD TR #06-022, December 2006
https://docs.lib.purdue.edu/cstech/1665

This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact epubs@purdue.edu for additional information.

**Abstract.** Preserving privacy has become a crucial requirement for operating a business that manages personal data. Hippocratic databases have been proposed to answer this requirement through a database design that includes responsibility for the privacy of data as a founding tenet. We identify, study, and implement several privacy-preserving features that extend the previous work on limiting disclosure in Hippocratic databases. These features include the support of multiple policy versions, retention time, generalization hierarchies, and multiple SQL operations. The proposed features bring Hippocratic databases one step closer to fitting real-world scenarios. We present the design and implementation guidelines for each of the proposed features. The evaluation of their effect on performance shows that the cost of these extensions is small and scales well to large databases.

### 1. Introduction

Privacy preservation is an important requirement when personal data is collected, stored, and published. 
One of the main challenges is to share information while complying with the data owner's privacy preferences. In recent years, several research directions have received substantial attention, including Hippocratic databases, anonymization and generalization, privacy-preserving data mining, privacy rule languages (e.g., P3P and EPAL), and fine-grained access control techniques in discretionary and mandatory access control. The notion of Hippocratic databases was introduced to incorporate privacy protection as a founding tenet in relational database systems [1] [2] [3] [9]. Ten guiding principles of Hippocratic databases and initial designs to provide limited disclosure and compliance auditing were introduced. One key element of the Hippocratic database architecture is that it makes use of a centralized and standardized definition of privacy rules via a privacy policy. A privacy policy usually originates outside the database system and is expressed using natural language. In order to process this policy more effectively, it is expressed using a standard privacy specification language, e.g., P3P [10] or EPAL [11]. The resulting version is translated into its Hippocratic database equivalent, i.e., the policy rules tables inside the database. The great value of this policy-driven approach is that companies that use a Hippocratic database have at their disposal an important tool to comply with privacy laws and guidelines, e.g., the Health Insurance Portability and Accountability Act (HIPAA), or the OECD Guidelines in Europe. Even though the previous work in the area of limiting disclosure in Hippocratic databases has discussed the main guidelines and proposed an initial architecture, several problems still need to be addressed before a Hippocratic database can efficiently support the requirements of real-world systems. 
Among these problems are the inadequate support of policy retention time, the lack of support for policy versions that would allow a company to use several versions of a policy simultaneously, the lack of an effective and flexible way to ensure that users only use the purposes and recipients that they are supposed to use, and the lack of a way to restrict access not only for the SELECT operation but for all the DML operations. Along with Hippocratic databases, there has been a significant amount of research in the area of anonymization and generalization [4] [5] [6] [7]. The main goal is to transform a database table into an anonymized form that allows users to get useful information without singling out data about individuals (the owners of the data) who want their data to remain private. Two main notions of anonymization have been proposed: k-anonymity [4] [5] and l-diversity [6]. Although both Hippocratic databases and anonymization are important areas in the effort to achieve effective mechanisms for ensuring privacy in database systems, to the best of our knowledge, not much work has been done to integrate their results.

### 1.1. Contributions

We integrate the different design features related to limiting disclosure in Hippocratic databases proposed in previous work, and present a unified architecture to support limited disclosure. We take this unified architecture as our starting point to study various extensions. These extensions solve problems that are faced while implementing Hippocratic databases that support real-world privacy requirements. The extensions covered are: - Mapping purpose, recipient, and data type of a policy to database roles - Support of multiple DML operations - Support of retention time - Support of policy versions - Support of generalization hierarchies We implement these extensions and study their effect on database performance. The rest of the paper is organized as follows. 
Section 2 presents the unified original architecture for limiting disclosure. Section 3 presents the realization of the various extensions cited above. Section 4 presents the evaluation of their effect on performance. Finally, Section 5 contains concluding remarks.

### 2. Unified original architecture for limiting disclosure

We integrate the design elements of previous work [2] [9] [1] into a unified architecture to support limited disclosure in Hippocratic databases, presented in Figure 1. In this figure, P stands for purpose, R for recipient, PolicyDataType for the data type of a P3P-like policy, T for table, C for column or attribute, CT for choice table, and CC for choice column. Furthermore, data type refers to the data categories used in a privacy policy, e.g., PatientDiseaseInfo, not to the regular database data types. The remainder of this section explains the main components of this architecture. - **Privacy policy.** The document that specifies how an organization, e.g., a company, can use data associated with the data owner. It states the purposes, recipients, and retention time of the different pieces of data. A privacy policy is expressed using a privacy specification language, e.g., P3P [10] or EPAL [11]. In this work, we assume the use of a P3P-like language.

### 3. Extending the architecture for limiting disclosure

This section describes each of the extensions to the initial design for limited disclosure in Hippocratic databases introduced in Section 2. The extensions are independent but are presented here incrementally. Figure 3 gives the database schema that is used in the examples. 
**Figure 2: Example of query modification** ``` SELECT name, phone, address FROM PATIENT; Purpose = Treatment; Recipient = Nurses ``` ``` SELECT name, phone, address FROM (SELECT pno, name, NULL AS phone, CASE WHEN EXISTS (SELECT address_option FROM options_patient WHERE patient.pno = options_patient.pno AND options_patient.address_option = TRUE) THEN address ELSE NULL END AS address FROM patient) ``` **Figure 3: Example database schema** - **Privacy catalog.** These tables drive the translation of the P3P-like policy into the database privacy policy. Table Datatypes stores the mapping between the data types used in the privacy policy and the database tables and attributes associated with them. Table OwnerChoices stores the table and attribute names where the individual opt-in/opt-out choices are stored for a combination of purpose-recipient-data type if a choice is available for this combination; this table is known as the choice table. The attribute MapCol in OwnerChoices is used to match each tuple in the table associated to the data type with the corresponding tuple in the choice table. For example, the attribute patient ID could be used to match each tuple in DiseasePatient (table associated to data type PatientDiseaseInfo) with the choice table PatientChoices that stores individual preferences. - **Policy translator.** Translates the privacy policy expressed in the P3P-like language into the privacy metadata tables in the database. - **Policy metadata.** It is the equivalent of the privacy policy inside the database. It contains the tables Rules and ChoiceConditions. Table Rules contains tuples of the form (P,R,T,C,CCOND); each tuple represents a rule that grants access to the table T and column C for the purpose P, and Recipient R. The optional condition CCOND restricts this access in case an opt-in/opt-out choice is available for that combination. 
Table ChoiceConditions stores the SQL statement (similar to a WHERE condition) for each condition used in Rules. - **Query modification.** Before execution, a query is modified into its privacy-preserving form: each table in the FROM clause is transformed into a privacy-preserving view that checks the privacy metadata rules and data-owner preferences. Figure 2 gives the result of modifying a query when the privacy policy does not allow access to the attribute Phone and allows only opt-in access to the attribute Address for the purpose Treatment and the recipient Nurses.

### 3.1 Mapping purpose, recipient and data type of a policy with database roles

The initial design for limiting disclosure translates P3P-like rules of the form (purpose, recipient, data type, opt-in/opt-out condition) into database privacy rules of the form (purpose, recipient, table, column, choice condition). When a user issues a query, we need to determine the purpose and recipient of this access. Purpose and recipient are elements used to specify privacy policies even in their natural language form; consequently, there is not necessarily a one-to-one mapping between recipients and database roles or users. The mapping will depend on the specific way users are organized and the relationships between the roles and the different entities that will receive the data. There are different ways in which the purpose and recipient can be identified when a user issues a query: (1) The user could explicitly state the purpose and recipient along with the query; this requires trusting the users. (2) Dynamically infer the purpose and recipient from the context of the application [2]; a downside of this approach is that it is difficult to capture all possibilities. (3) Register every application or procedure with a purpose and recipient, which becomes a difficult task for complex applications and procedures. 
(4) The user specifies the purpose and the system validates it based on user attributes, e.g., active roles, job position, and location [12]. We propose to use the relationship between purpose-recipient-data type and database roles during privacy policy translation. We accomplish this using an additional privacy catalog table, RoleAccess, that records this mapping. This approach is flexible enough to represent any relationship between the elements of a policy rule and the database roles associated with them. The mapping can be viewed as a way to specify the database roles that can access specific sections of the data using a particular combination of purpose and recipient. The policy translator gets the (purpose, recipient, data type) triplet from each P3P-like rule and creates a database privacy rule for each role associated with this triplet in RoleAccess. The database rule has the following structure: (DBRole, purpose, recipient, table, column, choice condition). The query modification module considers only the rules defined for the roles of the user issuing the query and the purpose-recipient pair specified with this query. If a user is not allowed to use a certain combination of purpose and recipient, the query processing is terminated. This extension allows us to enforce restrictions such as the following: - User Mary should use only recipient Doctors while user Tom should use only recipient Nurses when accessing table Patients for the purpose Treatment. - Given two database roles that are allowed to use purpose Treatment and recipient Doctors, e.g., doctors1 and sysadmin, allow sysadmin to access all the columns of table Patient, and doctors1 only a subset of them. With the extension described in the next section, we will be able to enforce restrictions like: - Allow user Mary, using purpose Treatment and recipient Doctors, to access the table Drugs only to perform SELECT but not UPDATE. 
- Given two database roles that are allowed to use purpose Treatment and recipient Doctors, e.g., doctors1 and sysadmin, allow sysadmin to perform SELECT and UPDATE over table Patient but only SELECT to doctors1.

### 3.2 Support of multiple DML operations

The original architecture for limiting disclosure ensures that access using the SELECT command respects the privacy rules and user preferences. In this section, we extend the ideas used for SELECT to the other DML operations, i.e., INSERT, UPDATE, and DELETE. To support privacy restrictions for the other DML operations, we extend the structure of the privacy catalog table RoleAccess to (P, R, PolicyDataType, DBRole, Operations). Operations is a bitmap in which each bit is associated with one DML operation (bit0=SELECT, bit1=INSERT, bit2=UPDATE, bit3=DELETE). When the value of a bit is 1 the operation is allowed; otherwise it is restricted. For example, the tuples (Treatment, Nurses, DrugAdm, nurse, 0001) and (Treatment, Nurses, DrugAdm, nurse-practitioner, 0111) mean that if the privacy policy contains rules that give access to drug administration data for purpose Treatment and recipient Nurses, the database roles that should receive this access are nurse and nurse-practitioner; additionally, the role nurse receives only access to view the data while the role nurse-practitioner receives access to view and modify it. The policy translator produces privacy rules of the form \((DBRole, P, R, T, C, CCOND, Operations)\), and this information is used when processing DML operations. The processing of the SELECT operation is similar to the one implemented in the original design. The main difference is that when the process checks whether a rule has been defined for purpose \(P\), recipient \(R\), table \(T\), and column \(C\), it also ensures that the operations granted with this rule include SELECT. For the other DML operations, a privacy checking process is performed based on the algorithms given in Figure 4. 
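The Operations bitmap described above can be decoded with plain integer masking; a minimal sketch (the tuple values mirror the example in the text, all identifiers are our own):

```python
# Bit positions for the Operations bitmap in RoleAccess.
OP_SELECT, OP_INSERT, OP_UPDATE, OP_DELETE = 0, 1, 2, 3

def allows(bitmap, op_bit):
    """bitmap is the 4-character string stored in RoleAccess,
    written bit3..bit0 (e.g. "0111"); op_bit is an OP_* position."""
    return (int(bitmap, 2) >> op_bit) & 1 == 1

# The example tuples from the text:
nurse_ops = "0001"               # SELECT only
nurse_practitioner_ops = "0111"  # SELECT, INSERT, and UPDATE
```

This keeps the per-rule permission check to a single integer operation, regardless of how many DML operations the bitmap covers.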
An operation can be allowed, denied, or allowed with limited effect; in this last case, the effect of an update operation is restricted to the subset of the data to which the user has access. As in previous work on limiting disclosure in Hippocratic databases, we use NULL to represent a prohibited value; the advantages and disadvantages of this use are presented in [2]. For the INSERT operation, we treat NULL as a special value that users can always insert independently of the privacy restrictions; this allows a user who only has access to insert on certain columns of a table to insert a tuple with values for these columns and NULL for the remaining columns. Naturally, if there is a NOT NULL column that the user does not have access to insert on, he will be unable to insert into this table. For UPDATE, the user needs access to all the columns being updated independently of the new values; the modified command applies the changes only to those columns the user has access to according to the privacy rules, and to the rows he has access to according to the data-owner preferences. For DELETE, the user needs permission over all the columns of the table; additionally, the translated command deletes only the rows that the user has access to according to the data-owner preferences.

```plaintext
INSERT
Input: INSERT INTO t1 (col_list) VALUES (value_list)
For each column col_list[i] for which value_list[i] is not NULL
    status = checkPermission(purpose, recipient, dbRole, t1,
                             col_list[i], Insert, out conditionChoice)
    case status:  // 0: prohibited, 1: allowed without condition,
                  // 2: allowed with condition
        0: return -1  // abort
        1: break      // continue with the next column
        2: If conditionChoice does not depend on t1
               Check if conditionChoice is fulfilled
Execute (unmodified) INSERT command
If operation was successful
    Insert into the choice tables that depend on t1

UPDATE
Input: UPDATE t1 SET col_i = newValue_i [,...] WHERE conditions
translatedCols = ""
For each column col_i being updated
    status = checkPermission(purpose, recipient, dbRole, t1,
                             col_i, Update, out conditionChoice)
    case status:
        0: break  // update will not affect this column
        1: // update will affect all rows of this column
           translatedCols += col_i + " = " + newValue_i + ","
        2: // update will affect only the allowed rows of this column
           translatedCols += col_i + " = CASE WHEN " + conditionChoice +
                             " THEN " + newValue_i + " ELSE " + col_i + " END,"
Execute "UPDATE " + t1 + " SET " + translatedCols + conditions

DELETE
Input: DELETE FROM t1 WHERE conditions
col_list = set of all columns in t1
newConditions = ""
For each column col_list[i]
    status = checkPermission(purpose, recipient, dbRole, t1,
                             col_list[i], Delete, out conditionChoice)
    case status:
        0: return -1  // abort
        1: break      // there is access to the whole column
        2: // delete will affect only the allowed rows of this column
           newConditions += conditionChoice + " AND "
Execute "DELETE FROM " + t1 + " WHERE " + newConditions + conditions
If operation was successful
    Remove rows in choice tables that depend on t1
```

Figure 4: Algorithms for other DML operations
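The UPDATE rewriting in Figure 4 can be illustrated with plain string building; a minimal sketch in which checkPermission is replaced by a hypothetical in-memory rule table (all names and conditions are illustrative):

```python
# Stand-in for the privacy rules: column -> (status, conditionChoice).
# status 0 = prohibited, 1 = allowed without condition, 2 = allowed with condition
RULES = {
    "name": (1, None),
    "phone": (0, None),
    "address": (2, "options_patient.address_option = TRUE"),
}

def translate_update(table, assignments, where):
    """Rewrite UPDATE so it only touches permitted columns and rows."""
    parts = []
    for col, new_value in assignments.items():
        status, cond = RULES.get(col, (0, None))
        if status == 0:
            continue  # the update will not affect this column
        if status == 1:
            parts.append(f"{col} = {new_value}")
        else:  # limited effect: change only the rows the choice permits
            parts.append(f"{col} = CASE WHEN {cond} "
                         f"THEN {new_value} ELSE {col} END")
    return f"UPDATE {table} SET {', '.join(parts)} WHERE {where}"

sql = translate_update(
    "patient",
    {"name": "'Bob'", "phone": "'555'", "address": "'Elm St'"},
    "pno = 7",
)
```

The prohibited column is silently dropped from the SET list, while the choice-conditioned column keeps its old value for rows whose owners did not opt in.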
The resulting architecture after applying the modifications introduced in the first two extensions is presented in Figure 5. The new or modified components are in bold.

### 3.3 Support of retention time

Limited retention is a principle of Hippocratic databases and a key element of privacy policies. It ensures that data is retained only as long as necessary for the fulfillment of the purposes for which it was collected. The original architecture of the Hippocratic database [1] suggests the implementation of the Data Retention Manager, which basically deletes all data items that have outlived their purpose. The same work recognizes that completely forgetting some information once it is stored in a database, without affecting recovery, is non-trivial. To the best of our knowledge, no further mechanism to support retention time has been proposed in the context of Hippocratic databases. Our approach to supporting retention time is similar to the one used to support opt-in/opt-out preferences. The advantage of this approach is that it does not require deleting the information after the allowed retention time. Additionally, using SQL conditions constitutes a flexible mechanism to express complex retention restrictions. P3P defines the element Retention as part of privacy rules. This element can have several predefined values: no-retention, stated-purpose, legal-requirement, business-practices, and indefinitely [10]. The time length associated with each of these values depends on the specific privacy policy and organization. Furthermore, for values such as stated-purpose or legal-requirement, the time length can also depend on the purpose associated with each privacy rule. We store this mapping between P3P retention value, purpose, and actual time length in the privacy catalog table Retention. We assume there is a table, referred to as the primary table, which stores basic information of the data owner and where each row is associated with exactly one data owner.
Our support of retention time makes use of the Signature-Date table, in which we store the policy signature date for each data owner. During policy translation, if the retention element is included in a P3P rule, the values of the retention and purpose elements are used to determine the retention time length $tl$. The translator also builds a condition that ensures that the date on which a command is executed falls in the period between the privacy signature date $sd$, which will probably be different for each data owner, and $sd+tl$. We store the reference to this condition in the new column $DCOND$ of the table Rules and the actual condition in the table DateConditions. Figure 6 gives a query and its modified form that ensures limited disclosure and limited retention; the example query is SELECT name, phone, address FROM PATIENT, with Purpose = Treatment and Recipient = Nurses. Figure 7 gives the incremental architecture after adding this feature.

Figure 5: Architecture after first two extensions

Figure 6: Example of limited retention

Figure 7: Architecture after adding support for limited retention
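A minimal sketch of how the translator could build such a date condition; the in-memory catalog, the table contents, and the column name `sd` for the signature date are hypothetical stand-ins for the Retention and Signature-Date tables.

```python
# Sketch of building the retention-time condition described above.
# The Retention catalog maps (P3P retention value, purpose) to a time
# length; the entries below are illustrative, not the paper's data.
from datetime import timedelta

RETENTION_CATALOG = {
    ("stated-purpose", "Treatment"): timedelta(days=365),
    ("legal-requirement", "Billing"): timedelta(days=365 * 7),
}

def build_date_condition(retention_value, purpose, signature_date_col="sd"):
    """Return a SQL condition limiting access to the retention period
    [sd, sd + tl], where sd is the per-owner policy signature date."""
    tl = RETENTION_CATALOG[(retention_value, purpose)]
    return (f"CURRENT_DATE BETWEEN {signature_date_col} "
            f"AND {signature_date_col} + INTERVAL '{tl.days} days'")

cond = build_date_condition("stated-purpose", "Treatment")
# cond could then be stored in DateConditions and referenced via Rules.DCOND
```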
- **Single policy, multiple data owners.** Company ABC uses policy P for patients and doctors. Patients and doctors are different entities in the database. **Solution:** We translate P twice. During the first translation, the privacy catalog considers the tables associated with patients; during the second one, the tables associated with doctors.
- **Multiple policies over time.** A policy is updated for old and new patients. **Solution:** We initially translate the original policy. When it is updated, we delete the metadata and translate the updated policy. We have one primary table.
- **Multiple versions.** a) The policy for patients is updated only for new patients. b) Two policy versions for different groups of patients are simultaneously used. **Solution:** Since this case requires the use of two policies associated with the same database entity Patient, it is not directly supported by the frameworks for limiting disclosure proposed in previous work.

The remaining part of this section presents our extension to support multiple policy versions. In our approach, we get the policy ID and version from the P3P-like policy. We assume that the version of a policy is part of its ID. Also, each row of a primary table being used by more than one policy will have a label, i.e., an extra column, with the ID of the active policy for this row. Each data owner has one active policy at any time, but different data owners can have different policy versions. The new privacy catalog table *Policies* contains information about the policies supported by the system, and the primary and signature-date tables they should use. This information is used during policy translation, and each generated rule is stored with its corresponding policy ID. During query modification, the system performs the regular test to determine if there is access for the specific combination of database role, purpose, recipient, data table, and attribute.
In the presence of multiple versions, there will be more than one rule for this combination, and the system will add another level of CASE statement to process the versions accordingly. Figure 8 shows an example of this query modification. We need to propagate the association with policy versions to other tables that store information about data owners; we could add another column to store the version, or we could implement the query modification module such that each privacy-preserving view joins the corresponding primary table and consequently uses its version information. In this work, we use the first approach. Figure 9 shows the incremental design with support for multiple policy versions.

### 3.5 Support of generalization hierarchies in Hippocratic databases

Hippocratic databases and anonymization are two important areas in the effort to achieve effective mechanisms to ensure privacy in database systems. Unfortunately, little work has been done to integrate their results. In the design for limited disclosure presented so far, the support of opt-in/opt-out choices is very limited; data owners can only give full access to the data or deny it completely; there is no option to give access to a generalized version of the data. We propose the study of the integration of Hippocratic databases and anonymization/generalization techniques. The ideas we present in this section represent only the first step on this integration path. We present here a design to introduce generalization hierarchies into the limiting-disclosure framework for Hippocratic databases. The first step is to identify the data elements that will be generalized and build a generalization hierarchy for each of them. The number of levels of a generalization tree can differ between elements. The first level represents the actual value of the data element; level two represents the first degree of generalization, and so on.
The information of the tree is loaded by the DBA into the metadata table *Generalization*. Figure 10 gives an example of a generalization tree and some tuples of the table Generalization corresponding to this tree.

**Figure 8** Example of limiting disclosure with multiple policy versions

Original query (Purpose = Treatment; Recipient = Nurses):

```sql
SELECT name, phone, address FROM PATIENT;
```

Modified query:

```sql
SELECT name, phone, address FROM
  (SELECT pno, name, NULL AS phone,
          CASE WHEN policyversion=01 THEN address
               WHEN policyversion=02 THEN
                 CASE WHEN EXISTS
                   (SELECT address_option FROM options_patient
                    WHERE patient.pno=options_patient.pno
                      AND options_patient.address_option=TRUE)
                 THEN address ELSE NULL END
          END AS address
   FROM patient);
```

The content of the choice tables for the data elements that can be generalized will not be Boolean anymore. They will instead store the level of generalization that the data owner wants for the element. A value of 0 means that access is not allowed, 1 means full access, and values greater than 1 allow the disclosure of generalized values. The query modification module will use a generalization function that converts a data value into its generalized form. The form of the CASE statement will change to process each possible choice value. Figure 11 gives an example of query modification with support of generalization hierarchies. Figure 12 shows the incremental design after adding support of generalization hierarchies.

4. Experiments

We implement the extensions presented in Section 3 as a middleware application that performs the functionality of the SQL modification module. In this section, we present the results of the performance study of the various extensions, analyzing the overhead, scalability, and effect of record filtering associated with them.
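A generalization function of the kind described above can be sketched as follows; the level encoding (0 = no access, 1 = actual value, higher = more general) follows the text, while the ZIP-code hierarchy and masking rules are hypothetical.

```python
# Sketch of a generalization function for the hierarchy support above.
# Choice tables store a level per data owner; the function converts a
# value into the form allowed by that level.

def generalize_zip(value, level):
    if level == 0:
        return None              # access denied (NULL)
    if level == 1:
        return value             # full value, e.g. "47907"
    if level == 2:
        return value[:3] + "**"  # first generalization, e.g. "479**"
    return "*****"               # level 3+: fully generalized

# The query modification module would embed such a function in the CASE
# statement, one branch per possible choice value.
```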
The cost considered for selection queries is the query execution and retrieval time. We ignore the cost of query rewriting. For update queries, we consider the command execution time and the cost of maintaining the auxiliary privacy tables.

4.1. Tests configuration

We use a synthetic database based on the Wisconsin Benchmark [13] with the attributes presented in Table 1. We use PostgreSQL 8.1, set the shared buffer to 25MB, and leave all the other configuration parameters at their default values. The tests are performed on a 3.2 GHz Pentium IV machine with 1.5GB of memory running Microsoft Windows XP as the operating system. The results presented in this section consider the average of the warm performance numbers with 95% confidence and an error margin of less than ±5%. As discussed in [2], there are several ways in which we could store the choice columns. We use the external single approach since it was found to be an effective compromise. With this approach, we store all the choice columns in a single external table. We also use an external table to store the policy signature dates, and assume the use of two versions in the experiments that use multiple-version support.

4.2. Performance evaluation

4.2.1. Overhead and scalability of Select queries. To measure the overhead cost of the different extensions we consider a worst-case scenario to run simple Select queries. The queries select all the records of the data table, i.e., application selectivity = 100%, and nothing is filtered by the data-owner preferences or the retention time restrictions, i.e., choice selectivity and retention selectivity are 100%. This scenario incurs all the cost of privacy checking but does not get any benefit from record filtering. Figure 13 gives the overhead cost of the various extensions for different sizes of the data tables. We can observe that the costs of the extensions and their combinations are small and scale well when the database size increases.

4.2.2. Effect of record filtering on select queries.
When the choice selectivity and retention selectivity are less than 100%, select queries perform significantly better than in the worst-case scenario, and in several cases even better than the original queries. Figures 14 and 15 give the execution time of select queries when we change the choice selectivity and the retention selectivity, respectively. The results are presented for different combinations of the implemented extensions. For this experiment, we use tables with one million records and an application selectivity of 100%. The performance improvement is significant for selectivity values smaller than 50%. We expect even better results when choice and retention filtering are considered simultaneously.

The performance results for update queries follow those for select queries. The cost of privacy checking is relatively more significant in the case of update queries because of the reduced cost of update operations when modifying few tuples, and the extra cost of maintaining the choice and signature-date tables. For example, inserting a tuple in the primary table also requires inserting the corresponding tuples in the choice and signature-date tables. This cost is compensated by the performance gains associated with the operations that do not need to be executed because their privacy check fails.

5. Conclusions and future work

We identified, studied, and implemented several privacy-preserving features that extend the previous work on limiting disclosure in Hippocratic databases. The features studied in detail are: mapping purpose, recipient, and data type of a policy to database roles; support of multiple DML operations; support of retention time; support of policy versions; and support of generalization hierarchies. We discussed why we need these extensions and the limited or non-existing support of these features in previous work.
Our performance analysis showed that the overhead of the implemented extensions is small and scales well to large databases. We believe that our contribution in this work represents a step in the challenging path of finding efficient ways to engineer the Hippocratic database and answer real-world privacy requirements.

Table 1: Benchmark attributes specification and choice columns

<table> <thead> <tr> <th>Column</th> <th>Datatype</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Unique2</td> <td>Int</td> <td>Primary key, sequential order</td> </tr> <tr> <td>Unique1</td> <td>Int</td> <td>Candidate key, random order</td> </tr> <tr> <td>Onepercent</td> <td>Int</td> <td>Values 0-99, random order</td> </tr> <tr> <td>Tenpercent</td> <td>Int</td> <td>Values 0-9, random order</td> </tr> <tr> <td>Twentypercent</td> <td>Int</td> <td>Values 0-4, random order</td> </tr> <tr> <td>Fiftypercent</td> <td>Int</td> <td>Values 0-1, random order</td> </tr> <tr> <td>stringu1</td> <td>52-byte str</td> <td>Unique character string</td> </tr> <tr> <td>stringu2</td> <td>52-byte str</td> <td>Unique character string</td> </tr> <tr> <td>Choice1</td> <td>Int</td> <td>Values 0-1 (10% = 1), indexed</td> </tr> <tr> <td>Choice2</td> <td>Int</td> <td>Values 0-1 (50% = 1), indexed</td> </tr> <tr> <td>Choice3</td> <td>Int</td> <td>Values 0-1 (90% = 1), indexed</td> </tr> <tr> <td>Choice4</td> <td>Int</td> <td>Values 0-1 (100% = 1), indexed</td> </tr> <tr> <td>SignatureDate</td> <td>Date</td> <td>Values d-d+99, random order</td> </tr> </tbody> </table>
Some paths for future work are: the integration of results in the area of anonymization into the Hippocratic database, the design of privacy-preserving mechanisms to support Export and Import operations maintaining privacy definitions, the support of Mandatory Access Control via Hippocratic databases, and the study of performance of different ways to organize the metadata (normalized versus de-normalized tables, storing conditions as strings versus storing the components used in conditions in compact attributes and building the conditions on-the-fly, indexes over privacy catalog and metadata, etc.). 6. References
Refactoring Software in the Automotive Domain for Execution on Heterogeneous Platforms

Downloaded from: https://research.chalmers.se, 2021-10-17 11:48 UTC

Citation for the original published paper (version of record): Sica de Andrade, H., Crnkovic, I., Bosch, J. (2020) Refactoring Software in the Automotive Domain for Execution on Heterogeneous Platforms. Proceedings - IEEE Computer Society Signature Conference on Computers, Software and Applications. N.B. When citing this work, cite the original published paper.

Abstract—The most important way to achieve higher performance in computer systems is through heterogeneous computing, i.e., by adopting hardware platforms containing more than one type of processor, such as CPUs, GPUs, and FPGAs. Several types of algorithms can be executed significantly faster on a heterogeneous platform. However, migrating CPU-executable software to other types of execution platforms poses a number of challenges to software engineering. Significant efforts are required in such a migration, particularly for re-architecting and re-implementing the software. Further, optimizing it in terms of performance and other runtime properties can be very challenging, making the process complex, expensive, and error-prone. Therefore, a systematic approach based on explicit and justified architectural decisions is needed for a successful refactoring process from a homogeneous to a heterogeneous platform. In this paper, we propose a decision framework that supports engineers when refactoring software systems to accommodate heterogeneous platforms. It includes the assessment of important factors in order to minimize the risk of recurrent problems in the process. Through a set of questions, practitioners are able to formulate answers that will help in making appropriate architectural decisions to accommodate heterogeneous platforms.
The contents of the framework have been developed and evolved based on discussions with architects and developers in the automotive domain.

Index Terms—heterogeneous computing, software engineering, refactoring, architectural decisions

I. INTRODUCTION

As technology advances and software applications become widespread, the requirements for multiple functionalities increase at a fast rate. In the automotive industry, high-end products nowadays embed more than 100 million lines of code that realize a variety of functions. From robust safety features to increased comfort in the cabin, the role of software is crucial. The industry now has a clear focus on artificial intelligence (AI) applications that handle very large amounts of data. Mainly due to such large amounts of data, most time and development effort in this domain is spent on understanding, preparing, monitoring, and logging of data, rather than implementing the machine learning algorithms and models [1]. Such high demands on software can only be realized through mechanisms that allow for increased hardware performance and energy efficiency at a reasonable cost. Currently, the most important way to increase the performance of computer systems is by using heterogeneous platforms, i.e., hardware platforms containing more than one type of processor, such as CPUs, GPUs, and FPGAs. In the context of heterogeneous platforms, the processing of data can be parallelized, and different types of data can be assigned to specialized processors. For instance, GPUs are known to be more efficient than CPUs when executing tasks that require multiple parallel processes. A typical example is computer vision, which processes image data obtained from sensors to create an accurate world model. Heterogeneous computing, however, poses a number of challenges to software engineering, mainly due to the inherently different characteristics of the hardware processing units.
It is typically very difficult to optimize the processing of data with respect to non-functional requirements such as performance, energy consumption, and real-time constraints. In most industrial cases, new products are developed from existing software. The challenges are not only related to the deployment of software onto a heterogeneous platform, but also to the refactoring of existing software for it to be executed on a new platform. This scenario poses a number of new challenges, particularly to software architecture design, which have not been addressed in the literature, to the best of our knowledge. However, we have identified through our industrial partners a major need for a systematic approach to support decision-making during such a migration process, from CPU-centric to heterogeneous platforms. In this paper, we propose a reasoning framework that specifies a set of considerations supporting the decision-making process when refactoring software systems for migration from CPU-centric to heterogeneous platforms. We provide means for reasoning about different aspects that practitioners must address when refactoring software-intensive systems. Our proposal is based on a series of in-depth discussions with our industrial partners in the automotive industry.

The remainder of this paper is organized as follows. A motivational example is presented in Section II. In Section III, we present the research methodology that was used in this study. In Section IV, we describe our approach to refactoring systems for heterogeneous platforms. We discuss validation of the proposed framework in Section V. In Section VI, we present the related work. Finally, in Section VII, we present the conclusion and future work.

II. MOTIVATIONAL EXAMPLE

The automotive industry provides an illustrative example of the architectural evolution of a system deploying heterogeneous computing.
The current state of practice for system and software architecture for vehicular control systems, which has worked well over the last 25 years, is a distributed system consisting of many computational units (a.k.a. Electronic Control Units – ECUs) with embedded software, typically including a control loop that receives signals from sensors, performs computation, and produces signals to the connected actuators that control the electromechanical parts of the vehicle. For the communication between ECUs, a common bus (typically a standard Controller Area Network (CAN)) is used. This modular and component-based approach, like AUTOSAR [2], enables efficient evolution of the system: to introduce a new service, a new ECU is added with its embedded software. ECUs use simple CPUs and are dimensioned to maximally utilize automotive-grade computational and memory resources. In recent years, the development of new software and hardware technologies has enabled significant improvements in the automotive industry. The main and disruptive changes are the transformation to electric vehicles, autonomous driving, and connectivity. Examples of new functionality include different elements of autonomous driving, optimized engine control, and improved behavior in risky situations. The new functions being introduced are typically computation intensive, with extensive parallel computing, processing large amounts of data in real time, and have major requirements on system performance. The new technologies include the use of machine learning, parallel computing, intensive communication in real time, cloud computing and edge computing, etc. This requires that many strategically important architectural decisions be made with respect to (i) system and software architecture; and (ii) business-oriented decisions on development and deployment processes.
Fig. 1 shows a new architecture of an automotive system that provides architectural prerequisites for new functions and enables a continuous transformation from the old architecture to the new architecture. The basic architecture is the same, distributed systems connected via a bus, but the node structure is changing. Instead of nodes optimized for low computational and storage capacity, the nodes (ECUs) become heterogeneous computational platforms: (i) CPUs are getting more powerful, and in some cases are replaced by multi-core CPUs; (ii) FPGAs are being included on the platforms for specialized computation, in particular processing input data from sensors (such as cameras and radars). Further, the sensors are equipped with computational platforms (typically CPU + FPGA), enabling direct data processing and a significant reduction in the amount of data for further processing. Additionally, the computing power is concentrated on a new, centralized, powerful computational platform that includes (multi-core) CPUs and GPUs, and in this way, many functions from distributed ECUs can be moved to it. Thus, the most computation-intensive services can be performed in real time. This computation altogether can be seen as edge computing with respect to the cloud computing to which the automotive system is connected, though not necessarily continuously. The cloud computing resources are used for additional services that do not have hard real-time requirements. Additionally, the cloud computing resources are used for further development of the system, including the training of machine learning models and analysis of the data provided by the monitoring and logging functions of the vehicles. Refactoring to heterogeneous platforms requires rewriting the code. The following code snippet illustrates the type of code changes that are required [3]. The example depicts a simple function in C++ using the CUDA framework [4] that adds the elements of two arrays.
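The example described here follows the canonical CUDA array-addition tutorial cited as [3]; the listing below is a sketch in that style, not necessarily the paper's exact code.

```cpp
// Sketch of the add-two-arrays CUDA C++ example discussed above.
// __global__ marks a function that runs on the GPU but is launched
// from host code; cudaMallocManaged allocates unified memory that is
// accessible by both the CPU and the GPU.
#include <cuda_runtime.h>

__global__ void add(int n, float *x, float *y) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = x[i] + y[i];
}

int main() {
  int n = 1 << 20;
  float *x, *y;
  cudaMallocManaged(&x, n * sizeof(float));  // visible to CPU and GPU
  cudaMallocManaged(&y, n * sizeof(float));
  for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

  // Launch enough 256-thread blocks to cover all n elements.
  add<<<(n + 255) / 256, 256>>>(n, x, y);

  // The CPU must wait for the GPU to finish before reading y.
  cudaDeviceSynchronize();

  cudaFree(x);
  cudaFree(y);
  return 0;
}
```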
Compared to plain C++ code, the add function must be transformed into a function that can be executed on the GPU by adding the __global__ specifier. Then, the memory must be explicitly allocated in a location that is accessible by the GPU. In this example, the memory space is accessible by both the CPU and the GPU. Finally, with a number of changes to the syntax, the "add" function is invoked on the GPU using multiple parameters. CUDA programming demands explicit management of several aspects, such as device memory, data transfer between memories, and the synchronization of data access. For instance, the CPU must be told to wait for the GPU to finish the job before accessing the data in a shared memory. The aforementioned example setting shows the complexity of the process, which raises important questions related to the process of refactoring software. The affected areas include evaluation, design, testing and deployment operations. Some of these challenges are listed next.

- Process-related decisions:
  - What is the process of refactoring of the system architecture?
  - What is the process of refactoring of existing code from a platform (CPU) to a heterogeneous platform (e.g., GPU, or FPGA)?
  - What are the implications of deployment of new architectures on the overall system’s properties (re-

IV. REFACTORING FOR HETEROGENEOUS PLATFORMS

In this section we describe the framework, which consists of four steps that are explained in detail below. The steps are, namely: A. “Determining the impact on the software architecture”; B. “Mapping software and hardware”; C. “Determining the overall architecture design”; and D. “Refactoring software components”. Within each step we elaborate activities and questions that should be answered by the system engineers in order to obtain a set of considerations from different perspectives. Finally, the answers to these questions will help the engineers to make appropriate architectural decisions.

A. Determining the impact on the software architecture

When introducing a new processing unit, the software architecture must be adapted to accommodate the changes. In particular, issues related to communication and memory management become relevant.

A1: Examine the existing data pipeline. As a first step, engineers must examine the existing software architecture in order to obtain an understanding of the current design. In particular, the communication between components should be revisited, as the data pipeline will change with the introduction of a heterogeneous platform. At this stage, a reassessment of the system’s documentation might be useful, provided that there is consistency between the documentation and the actual implementation. Engineers should run measurements to obtain a “default” performance of the system prior to refactoring.

A2: Determine the expected performance gains. Then, the engineers specify which non-functional properties are intended to be improved, according to the system’s requirements. One should particularly take into account the additional communication demands that will be present in such a distributed system. In the case of automotive applications, there are substantial constraints in terms of the resources that can be utilized. This issue is partially addressed if engineers have the liberty to first design the software, putting software functionalities in focus, and then proceed to determine the embedded hardware components that will be utilized. The assessment can include several considerations related to non-functional properties concerning runtime (e.g., timing issues, power consumption), lifecycle (e.g., development process, maintainability), or business models (e.g., development and production costs) [9].

A3: Elicit the changes in the software architecture. Engineers must then elicit the necessary changes in the software architecture according to the predefined non-functional requirements.
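The "default" performance measurement recommended in step A1 can be captured with a small harness; the sketch below is illustrative, with a hypothetical stand-in workload where a real measurement would time the actual pipeline stages.

```python
# Sketch of capturing a baseline ("default") performance measurement
# before refactoring (step A1).
import statistics
import time

def workload():
    # Hypothetical stand-in for a pipeline stage, e.g. filtering samples.
    return sum(i * i for i in range(10_000))

def baseline(runs=20):
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - t0)
    # Median is robust against occasional scheduling noise.
    return {"median_s": statistics.median(samples), "runs": runs}

b = baseline()
```

After refactoring, the same harness can be rerun on the migrated code and compared against this baseline.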
The communication between components becomes relevant to the performance of the overall system, due to the inherent characteristic of heterogeneous systems of passing messages to components deployed on accelerators. For instance, one decision in this stage may indicate that the messages required by computationally heavy functionalities will need to be forwarded to the newly introduced software component to be deployed on the heterogeneous platform. Therefore, the message passing infrastructure must ensure adequate capabilities for such communication to occur.

#### B. Mapping software and hardware

Determining the mapping between software and hardware can be very challenging, typically requiring several rounds of experimentation and prototyping.

B1: Identify the functionalities to be accelerated. The accelerated portions of the software will most likely be the ones identified as the most computationally intensive tasks. In the case of automotive applications using AI technology, for instance, the training of the machine learning models is a strong candidate for execution on the accelerator(s). These algorithms typically include the processing of multidimensional matrices that are suitable for execution on GPUs.

B2: Experiment and measure performance. Well-established frameworks, such as CUDA, typically embed a useful tool for assessing portions of code in terms of execution time, namely “profiling”. With this tool, practitioners can experiment with different portions of code prior to determining the mapping between software and hardware. This step allows engineers to assess the performance of potential configurations and compare them to the default benchmark obtained with the original CPU-centric software.

B3: Establish a suitable configuration. A number of approaches have been proposed in the literature to tackle the mapping between software and heterogeneous platforms.
In [10], for instance, the authors use a genetic algorithm to find a locally optimal solution with respect to a defined cost function. The model proposed in the paper takes into consideration both the system constraints and user-defined architectural decisions. The latter might include, for instance, a requirement that two particular components are not allowed to be allocated to the same processing unit. All constraints are accounted for in a cost function representing the overall performance of the system given a certain allocation configuration. The result of the proposed method is a system deployment configuration that is (at least) nearly optimal for the overall system performance. Additionally, dynamic deployment mechanisms can be created in order to allow different components to be executed depending on the current runtime status.

There is plenty of literature that defines the mapping procedure [7] – most often selecting particular concerns and defining a cost function, then attempting to find its minimum with respect to a given component distribution. Some approaches enable the specification of multiple requirements and/or resource constraints, as well as the communication capacity required for interaction between the components, attempting to find a local optimum for a set of components [10], [11]. These approaches may be challenging as they require considerable effort to provide data resulting from analyses, simulations, measurements, and estimations. A complementary approach is architectural reasoning that leads to a particular decision. For example, the processing of visualization data can be deliberately placed on a sensor if that sensor includes an FPGA. In all cases, the process of mapping can be done in an iterative manner, in particular when fine-tuning optimization is required.
#### C. Determining the overall architecture design

In this stage, the engineers analyze the requirements and constraints that were previously defined and begin to fit them into an organization that supports the system’s requirements. Aspects to consider include: (i) similarities and proximity between software components; (ii) the amount of data that is transferred between components; and (iii) the use of standardized design solutions based on the type of system that is being implemented.

There are multiple architectural design options that can be used in order to support the organization of software-intensive systems containing heterogeneous platforms. One example of such a solution is the standard proposed by the HSA foundation [12]. An HSA-compliant architecture meets the requirements for enabling heterogeneous programming models for computing platforms using standardized interfaces, processes, communication protocols, and memory models. When elaborating the overall software architecture design of such systems, communication and computation arise as the main aspects to be properly addressed.

C1: Address the communication aspect. The communication aspect is known to play an important role in heterogeneous systems since they are inherently distributed. In the case of automotive applications, there are two main characteristics that influence communication performance: the resource constraints of the embedded hardware, and the high demands on reliability that are connected to such a safety-critical domain. In AI-based systems, for instance, the processing of large amounts of data is an inherent characteristic, typically including activities to understand, structure, process and monitor information. An architecture design containing AI components must allow for the appropriate communication structures to fulfill the increasing requirements on the system.
There are still few systematic methods for designing such systems, but they will be of high importance as the domain of AI advances. As shown in Section II, computation in vehicles may be centralized, requiring access to be granted across multiple nodes. The architecture typically allows for seamless access to data, either by streaming or on demand, in order to provide software components with the necessary means to realize functionalities. Further, there is typically a clear distinction between components that realize the training of models and the ones realizing the execution of the models. These components must communicate, raising a number of questions regarding, e.g., the execution of these components (local, or parallel) and the re-distribution of trained models to the components that requested them.

One way to practically address the communication topic is through the separation-of-concerns technique, placing “communication” and “computation” at the center as the main concerns. Components that exchange messages are placed closer together in the architecture, as are components that are executed by the same processor. Another possibility is to establish three main concerns in the architecture: “application model”, “platform model”, and “mapping between application and platform”, as presented in [13]. The application specification should contain the non-functional requirements formally encoded. The platform model should specify redundancy and replaceability of computation, as well as I/O components. The mapping should bind the application to the hardware resources according to the non-functional requirements.

Another practical possibility is to design the architecture in layers, including a communication layer that allows different processors to communicate, as shown in [14]. Such a standardized channel of communication between different processing units allows developers to avoid explicit handling of low-level memory copying.
Further, it is also possible to design a dedicated layer for constant monitoring of the resources, providing status information to a deployment layer.

C2: Address the computation aspect. The computation aspect must also be addressed, as the distribution of computational load has a direct influence on the system’s performance. As the prices of hardware components decrease, the opportunity to distribute computation between the cloud and the edge arises as an alternative to architectures based on cloud-only or edge-only computation. As mentioned earlier in Section II, there is a trend in the automotive industry to move from simple CPU-based computation to smart sensors that contain powerful, heterogeneous computational nodes on the edge. The main motivation is the large amount of data to be processed, which can be partially handled already on the edge. The important decisions in this context are related to the analysis of locality or globality of data, real-time and performance requirements, and similar concerns. For instance, certain types of data can be pre-processed already in the vehicle, while the training of models, storage of data, and execution of computationally intensive tasks can be done in the cloud.

In practice, a pipelined architecture [15] allows the software to be represented as general data flow graphs, with particular focus on performance. The approach bases the allocation strategy on the simulation of executing these graphs. The pipelined architecture is a reasonable candidate architectural style when there is a clear separation between the component functionality and the processing of data. Parts of the processing can be placed on different processing units, and the transfer of data can then be defined through communication rules.
Alternatively, and most commonly, engineers can implement a master-slave architecture in order to take advantage of the inherent characteristics of heterogeneous platforms, which typically contain one main processor (a CPU for procedural tasks) and one or more accelerators (e.g., a GPU for highly parallelized and dynamic tasks). In AI-based systems, for instance, the main application flow may be processed by the CPU (master) while the training of the model is performed by the GPU, the accelerator (slave).

Further, aspect-oriented architecture [16] can be used in the context of building components that are executable on different processing units – the portions in the design that are platform-specific can be treated as aspects in the overall architectural design. A typical example of realizing it is through conditional compilation, where the conditions are connected with the different processing units that are available. The approach can be used for automatic generation of code specific to a given platform, for example in creating connectors for data communication between different execution platforms [17].

#### D. Refactoring software components

Finally, the design of the individual components must be sketched and implemented according to the previously defined characteristics of the overall architecture.

D1: Determine the new set of software components. In this step, engineers analyze the current architecture design and determine which components will be refactored and which ones will be created. There might exist a number of constraints on refactoring or creating new software components due to limited hardware resources, time constraints, or increased complexity of the system. However, it is important to precisely determine the changes that will occur to every software component. Components that are migrated from one platform to another must comply with the characteristics and limitations of the target hardware architecture.
Therefore, the effort for refactoring them must be considered, as it may occur that an extensive re-design has to be performed.

D2: Design and implement the software components. In this step, the engineers determine how the software components will be either designed or refactored. This stage is crucial to architectural decisions, since the adaptation of a component to be executed on an accelerator typically requires communication structures to be created. In practice, a component that is developed for execution on a CPU is likely to be turned into two components due to the nature of heterogeneous platforms. The CPU remains the host processor and executes the main flow of the application, while the most computationally intensive portion is offloaded to the accelerator. This scenario demands robust solutions for the communication between portions running on different processing units. As shown in the code snippet in Section II, the simplest solution is to designate a shared memory accessible by both units. For complex algorithms with large amounts of data, this solution might create performance deficiencies due to the transfer of data between dedicated and shared memory spaces.

The concept of flexible software components can be used in order to create software components that can be executed on any of the available processing units. Support for this type of component design has been proposed earlier in the context of GPUs [18]. Flexible software components allow developers to focus on implementing functions, while mechanisms (namely adapters) automatically transfer data between components, taking the platform specifications into consideration. Creating flexible components results in higher flexibility in the architecture, allowing several execution algorithms to be implemented (e.g., round-robin, earliest deadline first).
However, the implementation of flexible components typically includes a non-negligible computation overhead in the adapters, due to the additional code transformations that are needed for execution by any processor.

### V. Validation

#### A. Validation procedure

One main aspect of this framework is that we included practitioners from industrial contexts in the loop of creating the approach. We then conducted a set of steps in order to evaluate whether or not the proposed design approach is appropriate for its purpose, meets all constraints, and will perform as expected. In total, we presented the framework to six companies that were at different stages of accommodating heterogeneous platforms into their processes. We presented the proposed approach and the rationale behind every step in the process. The group then discussed the initiated topic and expressed agreement and/or disagreement with every aspect that was shown. The basis for their arguments was typically their day-to-day activities at work and their own views on how the refactoring process should occur. After several iterations with different partners, we adjusted the framework and sent it back to them. We also sent out a questionnaire in order to capture the respondents’ background information along with their impressions of the framework. We received written feedback from two large organizations that are market leaders in their respective industries. The two companies we received replies from are briefly described next.

Company A is a large, globally distributed manufacturer of buses, trucks and construction equipment with a strong focus on technology and innovation. It is a key player in the vehicle market and has made significant investments in the development of self-driving vehicle technology. Company B is a recent subsidiary of the automotive group that Company A belongs to. It mainly addresses software development projects with focus on autonomous driving and driver assistance systems.
The respondents come from different backgrounds and have slightly different work assignments and experience with heterogeneous computing. The employees of Company A are based in India and focus on research projects related to heterogeneous computing. They are part of a team that mainly works on programming models to facilitate software development across different types of processors. The employee of Company B has a software development role and is currently working on computer vision algorithms, with focus on object detection. The employee reported some experience in high performance computing, although limited expertise in heterogeneous platforms. Both companies develop embedded systems in the automotive domain, and utilize GPUs for acceleration.

#### B. Received feedback

The questionnaire that was sent out contained three questions regarding the professionals’ backgrounds and experience, followed by a general open question about the proposed framework. Then, two questions were added regarding their opinions about adding or removing aspects of the framework. Finally, there were questions about the architectural decisions that are typically made in the context of their work. We received feedback that was complementary to the discussions which occurred during the meetings; it is presented as follows.

- **Feedback loops**: One main aspect that was reported was the need for feedback loops between the different steps, particularly during the “software and hardware mapping” step, in which mechanisms like “profiling” are necessary prior to determining the best configuration according to a given set of requirements. The changes are typically constant and iterative, allowing smaller changes to occur at each iteration.
- **Continuous refactoring**: Refactoring is regularly conducted due to constantly changing requirements, despite the high complexity of the projects.
Therefore, the process should include careful analysis and assessment of the current architecture prior to putting changes into effect.

- **Priority to software**: The projects typically put software in the center, and later on evaluate which types of hardware are needed for execution. The functionalities that make up the applications are the main focus for the development of the systems.
- **Dependency analysis**: In Company A, the refactoring process includes an analysis of dependencies, conducted on the components that are meant to be executed on the accelerator(s) (in this case, GPUs). Since components are typically developed for execution on CPUs, which are inherently serial in their execution method, developers must check whether there are any dependencies that prevent algorithms from running in parallel, given the parallel execution nature of GPUs. When there are no dependencies, the component can easily be transformed into GPU-runnable code. Otherwise, when there are dependencies, the core functionality within the component must be changed in order to make it parallel.
- **Refactoring procedure**: Company A follows the procedure below when refactoring. The CPU-oriented program is used as a baseline for performance measurement. Once the algorithm is modified into code that can be executed on an accelerator (a GPU in their case), the execution time is measured again and compared with the performance of the CPU code. The changes in the execution time are then thoroughly analyzed, followed by a process to determine whether or not such deviation is acceptable. In case the trade-offs are approved, the developer proceeds to port the code to the GPU.
- **Execution policy**: Company A reported that their usual policy for determining the execution of software components is heavily based on profiling. Typically, the functions that take more time to execute are selected as guidelines to determine the software and hardware mapping.
### VI. Related Work

As identified in literature reviews [7], [19], there is a large amount of literature addressing heterogeneous computing. In particular, the software deployment stage is highlighted as one of the most challenging aspects of applying this technology. Several concerns and approaches were identified in [7], primarily addressing the problems of scheduling, software quality, and software architecture, pointing to the challenges in establishing a design that properly balances the workload between units. Moreover, some studies mention the importance of a solid communication strategy, as well as the efficient management of memory spaces.

Twenty-eight studies discussing concerns of software architectures for heterogeneous computing were identified previously in [8]. These studies typically propose solutions to specific problems, rather than a holistic framework to aid in the process of migrating to heterogeneous platforms. Some of them are described next. In [20], the authors tackle the problem of workload distribution according to the characteristics of both the load and the processing unit. The approach identifies hotspots in the code, and then provides means to generate binary code depending on the processing units that are available. The proposed architecture contains one component (called the orchestrator) that performs resource allocation at runtime and monitors the system. The problem of resource allocation is also addressed in [21], in which the authors propose an approach that takes a standard UML/MARTE model as input and explores different allocation possibilities for software components. From a number of different models, the proposed approach generates the software infrastructure required to connect different memory spaces using communication libraries. Another example is presented in [22], in which the authors propose a GPU interface to identify race conditions through simulations.
Other papers simply present an architecture design that includes heterogeneous platforms. In [23], for instance, the authors propose an architecture design for a CPU-GPU-FPGA-based hardware platform that is used for applications in the health domain. The solution uses the pipelined architectural style, processing images from a camera feed.

### VII. Conclusion & Future Work

Heterogeneous platforms, i.e., hardware containing processing units like CPUs, GPUs and FPGAs, are now reaching accessible costs, making a reasonable case for adopting such an alternative. In this sense, heterogeneous computing has emerged as a viable option to satisfy increasing system requirements, such as performance, energy consumption, and time constraints. However, the process of accommodating such hardware into the system may be challenging in a number of different aspects. One such aspect is the software architecture design, which is very likely to require adaptation so that the software can take full advantage of the underlying hardware.

In this paper, we proposed a framework that supports the refactoring of software systems when migrating from CPU-based projects to execution on heterogeneous platforms. Such migration poses a number of challenges to the software architecture design, in particular the allocation of resources and the management of memory spaces and communication. The framework is divided into four steps that architects should follow in order to make architectural decisions that support the newly added hardware capabilities. The steps are, namely: A. “Determining the impact on the software architecture”; B. “Mapping software and hardware”; C. “Determining the overall architecture design”; and D. “Refactoring software components”. Within the refactoring process, the engineers are guided through a set of questions that allow for considerations for re-design, focusing on architectural decisions. The research methodology was conducted as follows.
First, we studied the literature and identified the common approaches for software architecture when heterogeneous platforms are available. Then, we included our expert industrial partners in the loop by conducting face-to-face workshops in order to obtain their in-practice perspectives on the matter and iteratively evolve our proposed framework. Finally, we sent out questionnaires and obtained written feedback from the participants in order to improve the approach.

As future work, we will refine and extend the proposed approach to include further considerations on the migration problem, both prior to and after the re-architecting stage. We intend to provide an in-depth analysis of each step in the same way that we have done for the refactoring stage presented in this work. Further, we will evaluate the technical feasibility of the complete framework in collaboration with our partners. Then, we intend to investigate the impact of business decisions on the architectural decisions connected to the refactoring of systems to accommodate heterogeneous platforms.

### Acknowledgment

This research was supported by the research projects “HELPING – Heterogeneous Platform Deployment Modelling of Embedded Systems” funded by the Swedish Research Council, and “HoliDev – Holistic DevOps Framework” funded by Vinnova.

### References
21543, null], [21543, 24060, null], [24060, 26351, null], [26351, 29051, null], [29051, 31036, null], [31036, 33263, null], [33263, 36496, null], [36496, 36667, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1572, true], [1572, 2420, null], [2420, 4427, null], [4427, 5189, null], [5189, 6286, null], [6286, 7939, null], [7939, 9593, null], [9593, 11159, null], [11159, 13161, null], [13161, 14444, null], [14444, 16830, null], [16830, 20098, null], [20098, 20940, null], [20940, 21543, null], [21543, 24060, null], [24060, 26351, null], [26351, 29051, null], [29051, 31036, null], [31036, 33263, null], [33263, 36496, null], [36496, 36667, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36667, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36667, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36667, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36667, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36667, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36667, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36667, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36667, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36667, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36667, null]], "pdf_page_numbers": [[0, 1572, 1], [1572, 2420, 2], [2420, 4427, 3], [4427, 5189, 4], [5189, 6286, 5], [6286, 7939, 6], [7939, 9593, 7], [9593, 11159, 8], [11159, 13161, 9], [13161, 14444, 10], [14444, 16830, 11], [16830, 20098, 12], [20098, 20940, 13], [20940, 21543, 14], [21543, 24060, 15], [24060, 26351, 16], [26351, 29051, 17], [29051, 31036, 18], [31036, 33263, 19], [33263, 36496, 20], [36496, 36667, 21]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36667, 0.0641]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
ca7a6a69d0cf71a2d45ac93cb3c9026ab7760ebb
Today's presentation will cover TestBencher Pro, a VHDL, Verilog, and C++ test bench generator that dramatically reduces the time required to create and maintain test benches. One of the most time-consuming tasks for users of HDL languages is coding test benches to verify the operation of their design. In his book "Writing Testbenches," Janick Bergeron estimates that 70% of design time is spent verifying HDL code models and that the test bench makes up 80% of the total HDL code generated during product development. TestBencher Pro automates the most tedious aspects of test bench development, allowing you to focus on the design and operation of the test bench. This is accomplished by representing each bus transaction graphically and then automatically generating the code for each transaction. TestBencher makes use of the powerful features of the language being generated, and the engineer does not have to hand-code each transaction. When hand coding, the designer would have to deal with the specifics of the design (port information, monitoring system response, etc.) as well as common programming errors (race conditions, minor logic errors, and code design problems). TestBencher cuts a considerable amount of time out of the test bench design process because it manages the low-level details and automatically generates a valid test bench.
1.0 TestBencher Pro Overview

- Provides specification-based verification
- Generates VHDL, Verilog, and C++ bus-functional models and test benches from graphical timing diagrams
- Resulting code is modular, easy to debug, and compatible with all major VHDL and Verilog simulators
- Language-independent timing diagrams enhance the ability of engineers to share data across projects
- The graphical interface speeds development for both expert and novice users, dramatically reducing the time necessary to create and maintain test benches

TestBencher represents a radical breakthrough in the automated development of HDL test benches. TestBencher Pro provides designers with a graphical environment for rapidly generating system-level test benches. Users draw timing diagrams and TestBencher generates native VHDL, Verilog, and C++ code. The resulting code is modular and can be used with all major VHDL and Verilog simulators. TestBencher Pro's graphical interface speeds up test bench development for both expert and novice users. TestBencher generates all of the low-level transaction code, verification code, sequence detection, error reporting, and file I/O code. The graphical representation also enhances the ability of engineers to share data across projects, even though new engineers might not be familiar with the details of the test bench design.

Specification Based Verification

**Problem:** Verify an SOC model that interacts with different protocols

**Solution:** Use TestBencher to define the protocols, and then automatically generate the transactors, transaction data, and logic to verify the results.

1.1 Overview

Most system-level verification problems involve the creation of bus-functional models that imitate external devices communicating using standard and proprietary bus protocols. Here, for example, we have an SOC design that communicates with an SDRAM memory subsystem, PCI bus devices, and ATM physical devices.
With TestBencher, a user describes the protocols by entering timing diagrams that illustrate the input, output, and timing of information exchanged between the devices. Each timing diagram describes a possible transaction that can occur between devices supporting the protocol. From this little bit of information, TestBencher generates a complete verification system. Each timing diagram is converted into a transactor that exchanges transaction data with the model under test. TestBencher also creates a transaction generator that randomly manufactures transactions, and a transaction manager that assigns transactions to the transactors. The generated test bench also monitors and accumulates statistics on what transactions occur, and can then dynamically adjust the randomization constraints to ensure important test cases are covered. Optionally, TestBencher can also generate a high-level behavioral reference model of the system that can be compared against the model under test, enabling automated verification of system output.

TestBencher creates bus-functional models by using a combination of graphical timing diagrams and top-level template files. The graphical timing diagrams define reusable timing transactions like a PCI bus read cycle or write cycle. The top-level template file defines the sequence in which the timing transactions will be applied to the model under test. For advanced verification systems, TestBencher can create a Transaction Manager that can read transactions in from a file or automatically generate them based on random constraints.

The code generation process in TestBencher is interactive, so it is easy to experiment with different test bench functionality. Each time a timing diagram is saved, the code for that transaction is re-generated, so you can watch how the low-level code changes when you add a new graphical element like a sample or a loop.
The top-level test bench controls the execution sequence and monitors the status of each timing transaction in the project. It is also the place where the model under test is instantiated and connected to the test bench model. The Make TB button generates the completed test bench model and updates any timing transactions that need it. During code generation TestBencher only changes the code blocks that appear between the macro begin and end statements. Any code that is outside the macro blocks is preserved during code generation.

Timing Diagrams Communicate Transaction Behavior

HDL Code:
- Simplest signal behavior is difficult to communicate
- Code complexity greatly increases for:
  - response checking code
  - parallel execution blocks

Graphical Representation:
- Concisely communicates transaction behavior

Let's take a look at the difference in readability between an HDL code transaction and a graphical representation of a transaction. Even the simplest signal behavior can be difficult to understand when looking at HDL code. Here, for example, is some simple signal stimulus code without any response checking code. It executes in a strictly sequential fashion. Take a minute and see if you can figure out exactly what this code is doing.

Now take a look at a graphical representation of this same block of code. Notice how much easier it is to understand what's happening in this representation. This is true despite the fact that the diagram includes extra details to verify a setup constraint within the transaction. A glance at the timing diagram communicates the temporal relationships between the edges of the signals. By comparison, the code segment has to be studied and possibly drawn out by hand to figure out the temporal relationships of the signals. For more complex transactions that contain response checking code and parallel execution blocks, this difference between timing diagram representations and HDL code descriptions becomes even more striking.
This, of course, shouldn't be surprising: it's why chip vendors put timing diagrams in their data sheets instead of HDL code descriptions of bus transactions.

Sequencing Transactions

- Sequencer Process controls the order in which transactions are applied to the MUT.
- Transactions can run sequentially in a blocking mode or concurrently.
- Transactions can be set to run once or run in a continuously looping mode.
- Transaction calls are automatically generated using a dialog interface.

The sequencer process is the place in the top-level test bench that defines the order in which the timing transactions are applied to the model under test. The sequencer process controls and monitors the execution of the timing transactions. Several tasks are generated for each timing transaction, each with a different execution mode. These tasks are then called from the sequencer process. The task calls are placed sequentially in the order that you wish to have them applied to the model under test. In addition to these task calls, you can also place HDL code in the sequencer. This is useful, for example, if you wish to place conditions on whether or not a timing transaction is called, or on the parameter values that you wish to have applied.

Executing Concurrent Timing Transactions

In addition to ordering the timing transactions, the sequencer process is also used to specify the manner in which the timing transactions are applied. Tasks can run in a continuous looping mode or in a run-once mode. Also, each task can run in either a blocking or a concurrent mode. Generally, master bus cycles run once in a blocking mode while global clocks and slave transactions run in a continuous looping mode.

Transaction Manager

The Transaction Manager maintains a queue of transactions to be executed.
Accepts transactions from many sources:

- Transactions read from a file
- Other BFM models in the verification hierarchy
- Transaction Apply calls made by the user in the template file
- Transactions spawned by other transactions
- Randomly generated transactions produced by the Transaction Generator (in DAC release)

1.5 Overview

In addition to sequentially executing transaction calls that are placed in the template file, TestBencher can generate a transaction manager module that maintains a queue of transactions to be executed. Transactions can be generated randomly, posted to the queue during simulation, or read in from a file using the Test Reader component. TestBencher automatically generates Transaction Manager and Test Reader code from the transactions included in the project. Attempting to create and maintain this type of code manually is difficult because the code changes each time you add a new transaction type or change the number and types of parameters for a transaction.

During simulation, the Transaction Manager maintains a queue of transactions to be executed. The manager can randomly generate transactions to fill the queue based on a weighted function. Transaction diagrams can dynamically post other transaction calls to the queue based on responses from the model under test. Each BFM has its own transaction manager, so a top-level BFM model can generate test sequences and post them to its child BFM transaction managers. All calls to transactions can be specified with either a relative or a fixed path, allowing any transaction to be initiated from anywhere in the test bench.

Transaction Monitor & Generator

The Transaction Monitor and Generator work together to accumulate statistics on what transactions occur, and then dynamically adjust the randomization constraints to ensure important test cases are covered.
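The transaction-queue behavior described above — a manager that accepts posted transactions and can also fill its queue from a weighted random generator — can be sketched in plain C++. The class and method names here are illustrative assumptions, not TestBencher's generated API:

```cpp
#include <cstddef>
#include <deque>
#include <random>
#include <string>
#include <vector>

// Hypothetical sketch of a transaction manager: a FIFO queue that
// transactors pop work from, filled either by explicit posts or by a
// weighted random transaction generator.
class TransactionManager {
public:
    // Post a transaction to the queue (e.g. spawned by another
    // transaction or read from a file by a Test Reader).
    void post(const std::string& txn) { queue_.push_back(txn); }

    // Fill the queue by drawing transaction types from a weighted
    // distribution (e.g. favor read cycles 3:1 over write cycles).
    void generate_random(const std::vector<std::string>& types,
                         const std::vector<double>& weights,
                         std::size_t count, unsigned seed) {
        std::mt19937 rng(seed);
        std::discrete_distribution<std::size_t> pick(weights.begin(),
                                                     weights.end());
        for (std::size_t i = 0; i < count; ++i)
            queue_.push_back(types[pick(rng)]);
    }

    // A transactor pops the next transaction to execute.
    std::string next() {
        std::string t = queue_.front();
        queue_.pop_front();
        return t;
    }

    std::size_t pending() const { return queue_.size(); }

private:
    std::deque<std::string> queue_;
};
```

Because each BFM owns its own manager, a top-level model could post generated sequences to the `post()` of its child managers, mirroring the hierarchy described above.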
(By DAC)

1.6 Overview

The Transaction Monitor and Generator work together to accumulate statistics on what transactions occur, and then dynamically adjust the randomization constraints to ensure important test cases are covered. Both the types of transactions and the input data for the transactions can be randomized. The user can specify the coverage levels required before the test will finish. The Transaction Monitor generates a coverage report for the completed test.

2.0 Graphical Constructs

A simple set of graphical constructs naturally expresses timing and bus protocols:

- **Waveforms** - stimulus and expected response
- **Variables** - parameterize state and timing values
- **Delays** - parameterize time delays
- **Setups & Holds** - monitor stability between transitions
- **Samples** - verify and react to output from MUT
- **Markers** - model looping constructs, insert native HDL subroutine calls, or end transactions

We have given an overview of TestBencher and gone over the basic steps required to create a test bench. Next we will talk about the graphical constructs used to create the bus transactions. TestBencher is easy to use because we have taken great care to keep the number of constructs down to a minimum: just a few basic constructs are used to create a transaction.

Drawn waveforms provide a quick way to describe the basic functionality of a transaction. Usually the waveforms already exist in the design specifications or data sheets of the accompanying parts, so it is a simple exercise to import a TDML file or redraw a waveform specification. State variables allow flexible re-use of transactions by parameterizing the states so that new values can be passed in each time the transaction executes. For example, one use would be to provide data and address values for a write cycle transaction. Delays provide a mechanism to parameterize time in the same way that state variables parameterize waveform values.
Samples provide a reactive, self-testing mechanism for checking the response of the model under test. Samples can monitor the response at a particular point or over a time period. Markers are used to create conditional loops for variable burst transactions, to insert native HDL subroutine calls, or to end transactions. We will cover each of these constructs in detail in the next few slides.

Waveforms Provide Stimulus and Expected Response Information

Several methods available for waveform entry:

- Graphically draw stimulus/response waveforms
- Generate waveforms using RTL-level equations
- Import from simulators: VHDL, Verilog, SPICE
- Import from logic analyzers: Agilent, Tektronix
- Import state information from spreadsheets

State information is represented by waveforms. TestBencher Pro includes a professional timing diagram editor that allows you to quickly draw waveforms using the 7 graphical states, or by describing the waveform with an RTL-level equation or a Time Based Waveform equation. In addition to the drawing environment, TestBencher can import waveforms from other simulators, logic analyzers, and spreadsheets. TestBencher also supports the Timing Diagram Markup Language (TDML), the standard being adopted by semiconductor manufacturers and used in on-line data sheets.

The toolbar for the timing diagram editor is shown in this image. The first group of buttons is used to add signal objects to the diagram. The next group of buttons is used to add the model constructs discussed in the previous slide. Waveforms can be drawn using the state buttons in the middle of the toolbar, and the last set of buttons can be used to zoom the diagram view in or out.
Variables Parameterize State Values

- State Variables control bus states during simulation
- Variables can be read from a file (like @readData.dbus[7:0]) or passed into the transaction as a function parameter (like $$addr)
- Variables can be specified as conditional expressions, including Boolean equations that reference state variables and user-defined data structures

TestBencher Pro can define a signal's value graphically by drawing the waveform or by defining it using several types of expressions and variables. These variables make the timing transactions reusable, because new values can be passed into the transaction each time it is called. TestBencher allows both state and timing variables to be parameterized. Both the state and timing variables can either be passed into a transaction through its transaction call or read in from a file.

In the example shown, the variable with the $$ in front indicates that it is a parameter variable that will be passed into the transaction from the top-level test bench. The variable with the @ symbol indicates that it is a file variable; the state value will be read in from a column-based, tab-separated file (like a spreadsheet file). Each time the transaction is called, a new line from the file is read and the value from the proper column is placed in the variable.

Delays Parameterize Time Values

- Delays can conditionally control when edges occur
- Delay values can be time or cycle-based
- Delay values can be passed in from a function call or read in from a file

Delay variables, like state variables, can be either passed into the transaction or read from a file. Delays can also be either time based or cycle based. For example, you can pass in a value that means 5 ns or 5 clock cycles, depending on how the delay is defined.
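A rough sketch, in plain C++, of how a file variable such as @readData.dbus might be resolved: each transaction call consumes one row of a column-based, tab-separated test-vector file and takes the value from the named column. The helper names are hypothetical; this only mirrors the idea, not TestBencher's generated file I/O code:

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Split one line of a tab-separated test-vector file into cells.
std::vector<std::string> split_tabs(const std::string& line) {
    std::vector<std::string> cells;
    std::stringstream ss(line);
    std::string cell;
    while (std::getline(ss, cell, '\t')) cells.push_back(cell);
    return cells;
}

// Resolve a file variable: look up the column named `name` in the
// header row and return the matching value from the current data row.
// Returns "" if the column is missing (error handling is simplified).
std::string file_variable(const std::string& header,
                          const std::string& row,
                          const std::string& name) {
    std::vector<std::string> cols = split_tabs(header);
    std::vector<std::string> vals = split_tabs(row);
    for (std::size_t i = 0; i < cols.size() && i < vals.size(); ++i)
        if (cols[i] == name) return vals[i];
    return "";
}
```

Each transaction call would advance to the next data row, so successive calls pick up fresh values, exactly as the narration above describes.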
Samples Verify MUT Output

- Sample constructs can monitor and perform actions based on the data sampled
- Samples can work at a single point or over a windowed area
- Samples can be taken relative to the beginning of the transaction or relative to another event in the diagram

2.4 Graphical Constructs

Sample parameters generate self-testing code in the test bench. Samples are normally used to monitor the signal values coming back from the model under test. Samples can test a signal at a specific point or over a windowed area, and each of these samples can be taken relative to the beginning of the transaction or relative to another event in the diagram. Samples function as either time- or cycle-based constructs, depending on how you define the sample. For example, a relative sample could be defined to sample 20 ns after a particular edge, or 2 clock cycles after the edge. The value that the sample reads can either be exported to the top-level module or written out to a file. This could be used, for instance, to provide an input value for a state variable in another timing transaction, or to determine whether a specific timing transaction is to be executed.

Markers used for Control & Looping Sections of Transactions

- Specify the end of a transaction
- Create loops using for, while, and repeat loop markers
- Insert HDL code
- Useful for generating conditional burst-type transactions

Markers can be added to timing diagrams to specify actions to be taken by the transaction during execution. These actions include signifying the end of a transaction, creating loops in the transaction, and inserting HDL code that calls a subroutine into the transaction. In this example we show a loop in the middle of the transaction. TestBencher Pro can generate a test bench that loops continuously over a sequence of test vectors, either forever or until a defined condition is met.
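Returning to samples for a moment: the point and windowed checks described above can be sketched over a recorded signal trace indexed by time step. This plain-C++ illustration (the function names and trace representation are our own assumptions) mirrors the idea, not the generated sample code:

```cpp
#include <cstddef>
#include <vector>

// Point sample: does the signal hold the expected value at exactly
// time step t? (Out-of-range samples fail rather than throw.)
bool sample_point(const std::vector<int>& trace, std::size_t t,
                  int expected) {
    return t < trace.size() && trace[t] == expected;
}

// Windowed sample: does the expected value appear anywhere within the
// window [t0, t1]?
bool sample_window(const std::vector<int>& trace, std::size_t t0,
                   std::size_t t1, int expected) {
    for (std::size_t t = t0; t <= t1 && t < trace.size(); ++t)
        if (trace[t] == expected) return true;
    return false;
}
```

A "relative" sample would simply offset `t` (or `t0`/`t1`) from another event's time step instead of from the transaction start.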
These loops are set up using three types of Time Markers:

- Loop Start: sets the beginning point for the loop and defines an exit condition if there is one.
- Loop End: defines the ending point for the loop sequence.
- Exit Loop When: can be placed between a Loop Start and a Loop End marker to allow a loop to be exited in mid-execution.

An End Diagram marker is also shown in this example. This sets the end of where code will be generated for a transaction. Note that while transaction code is completely generated when diagrams are saved, a marker can be used to place a user-defined subroutine call in the transaction code, instead of hand-modifying the generated code.

3.0 Advanced Features

- Hierarchical BFM Components
- Golden Reference Model
- Automatic generation of file I/O code
- Fast conversion from time- to cycle-based test benches
- External Simulation and Compiler Control

So far we have covered the basic use model of TestBencher and introduced the major components that are generated for the verification system. Next, we will mention a few of the features that make TestBencher a powerful and easy-to-use environment for generating bus-functional models: hierarchical BFM components, golden reference model generation, file I/O code generation, the ability to switch between cycle and time base, and finally the ability to control external simulators.

Hierarchical BFM Architecture

- TestBencher Pro uses a project file to control the generation of a bus-functional model
- Projects can be included hierarchically in other projects
- Multiple instantiation of test bench components
- Multiple-port testing supported
- Reuse test bench models as sub-components of another test bench model

TestBencher Pro uses a project file to organize the timing diagram files and top-level template files. These project files have all the information needed to generate an entire bus-functional model. Projects can be included hierarchically in other projects.
This allows TestBencher to support multiple test bench component instantiation. Once a test bench has been completed, the entire bus-functional model that it represents, or project component, can be instantiated in another project. A project that defines a bus-functional model of an SRAM, for example, could be instantiated several times in a higher-level project that is being developed for a microprocessor. The completed microprocessor model could then be instantiated in a project for a video card. This is one way in which TestBencher allows the re-use of test bench components.

Verification of devices with multiple ports can also be accomplished using multiple test bench component instantiation: the transactions that connect to the ports are instantiated as many times as needed in the higher-level test bench. This methodology allows a large test bench to be broken into smaller, self-contained components. Each sub-project can be modified at any time, either stand-alone or while developing the owning project. The properties of the project are always maintained.

Golden Reference Model

Golden reference models are high-level descriptions of a system that are used to automate the verification of system output:

- Generates all of the stub functions for the golden model
- User writes behavioral code inside the stub functions
- Automatically compares MUT output against the golden reference model during simulation and reports errors

TestBencher can generate C++ and Verilog golden reference models that run in parallel with a VHDL or Verilog RTL model. Golden reference models are high-level behavioral descriptions of a design and are used to compare against the results of an RTL-level model during simulation. Reference models usually model interaction between components at the transaction level (e.g. read transaction/write transaction) instead of at the signal level.
If reference model generation is enabled in TestBencher, the transactors will apply a time-based transaction to the MUT and an untimed function-call transaction to the reference model. At the end of each transaction, the outputs of the MUT and the reference model are compared, and an error is logged whenever there is a mismatch in the output. TestBencher generates all of the stub functions for the golden reference model, keeping the transaction interface to the reference model the same as for the HDL-level model. TestBencher uses the TestBuilder library to generate the C++ models. The user writes the behavioral C++ or Verilog code inside the stub functions that enables the golden reference model to emulate the RTL-level model.

Automatic Generation of File I/O Code

Test-vector spreadsheet format used for file I/O:

- Read from or write to a "record" structure
- Import state and timing information
- Export data collected by samples
- Quickly generate tedious file I/O code using file associations
- Easily swap between using a test-vector file or function calls to control transaction state and timing parameters

TestBencher provides a means to import and export data that is stored in a spreadsheet-like format. This allows information for a signal transaction to be read and written from a record-like structure. Data can be imported from a previously generated test-vector spreadsheet-style file to provide values for state and timing information. Data which has been captured by a Sample can also be exported to this file format and used as stimulus to another test bench or for analysis of the test bench. TestBencher automatically generates all the file I/O code. The user specifies the file name and the column name for a particular variable, and TestBencher will generate the file I/O code from that information.
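The per-transaction compare against the golden reference model described above can be sketched as follows. The function name and error-message format are illustrative assumptions, not TestBencher's generated compare code:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// After each transaction, compare the MUT's sampled outputs against
// the reference model's outputs and log one error line per mismatch.
std::vector<std::string> compare_outputs(const std::vector<int>& mut,
                                         const std::vector<int>& golden) {
    std::vector<std::string> errors;
    for (std::size_t i = 0; i < mut.size() && i < golden.size(); ++i)
        if (mut[i] != golden[i])
            errors.push_back("mismatch at txn " + std::to_string(i) +
                             ": mut=" + std::to_string(mut[i]) +
                             " golden=" + std::to_string(golden[i]));
    return errors;
}
```

An empty error list after a run means the RTL model tracked the behavioral reference for every transaction applied.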
Switching between Time-based and Cycle-based Test Benches

Supports time- and cycle-based test benches:

- Specify a clock signal to switch to cycle based
- All signals, delays, samples, and markers have the clocking feature
- Supports sensitivity to multiple clock edges - positive, negative, or both
- Supports multiple clocks

Timing diagrams can be used to express either cycle-based or time-based transactions. By changing the clocking signal for the diagram components, you can change whether a cycle-based or a time-based transaction will be generated. This makes it very quick to generate test benches for different applications: gate-level timing tests or large cycle-based runs. All of the graphical constructs in the timing diagram support both cycle- and time-based generation. TestBencher Pro also supports multiple clocks and triggering on multiple clock edges.

**External Simulator Control**

TestBencher Pro controls compilation and simulation:

- One environment for test development and design debug
- Handles simulators and compilers running on different operating systems and remote machines
- Graphically display simulation results and log files

TestBencher Pro can control external simulators through its graphical interface, so that compilation and simulation of the project can be handled without having to exit TestBencher. This is particularly useful when multiple tools are needed to compile and simulate a project. For example, if you are using one of the new verification languages, you will need a tool to compile the test bench into either a dynamically linked library or byte code. You will also need a VHDL or Verilog simulator and a make file containing all of the information about your model under test and the commands to dynamically link to the test bench library. With TestBencher, all of these details are automatically handled for you.
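On the time- versus cycle-based switch described earlier: regenerating a time-based delay as a cycle count amounts to dividing the delay by the clock period. A minimal sketch follows; rounding up so the delay is never shortened is our own assumption, not necessarily what TestBencher's generator does:

```cpp
// Convert a time-based delay to an equivalent number of clock cycles
// for a given clock period. Rounds up (ceiling division) so the
// cycle-based delay is at least as long as the time-based one.
unsigned long to_cycles(unsigned long delay_ns,
                        unsigned long clock_period_ns) {
    return (delay_ns + clock_period_ns - 1) / clock_period_ns;
}
```

For example, a 105 ns delay against a 10 ns clock becomes 11 cycles rather than being truncated to 10.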
TestBencher stores information about both your simulator and verification compiler and can remotely call those programs and display the results of the simulation.

3.6 Advanced Features: VHDL, Verilog, & C++

- C++ library support using TestBuilder
- Constrained random data structure generation
- External control of simulator and compiler

TestBencher can generate pure VHDL, Verilog, and C++ bus-functional models as well as mixed Verilog-C++ and VHDL-C++ models. TestBencher generates all of the low-level transaction code, verification code, sequence detection, error reporting, and file I/O code. Once the code generation is complete, TestBencher can launch the external simulators and compilers necessary to build and simulate the design. TestBencher uses the open-source TestBuilder C++ library for all of the C++ generation. This library provides many useful test bench capabilities, including constrained random data generation and support for complex data structures. TestBuilder also provides an easier method for integrating C/C++-based models into a test bench than a PLI-based approach (C-based models are often used as a golden reference to compare an RTL-level model against during simulation).

TestBencher Summary

- TestBencher Pro reduces the time required to create and maintain test benches in VHDL, Verilog, and C++.
- Features include external simulator control, sequence recognition, conditional execution, and hierarchical and multiple instantiation of test bench projects.
- TestBencher can model the most advanced verification problems: PCI, ARM, and ATM.
- Easy four-step process to create a test bench.
- Timing diagrams are a natural way to express timing protocols.

In summary, TestBencher Pro dramatically reduces the time required to create and maintain test benches. The user is free to concentrate on the more important aspects of test bench design, since the most tedious aspects of code generation are abstracted away.
Features include external simulator control, sequence recognition, conditional execution, and hierarchical and multiple instantiation of test bench projects. TestBencher can model the most advanced verification problems: PCI, ARM, and ATM. Using this tool, test benches are constructed in a quick four-step process. The timing diagrams used to create the bus transactions are a natural way to express timing protocols, and are usually included in the design specification or in the data sheets of the parts surrounding the design.
**7. The Recursion Theorem**

Main result in this section: **Kleene's Recursion Theorem**.

- Recursive functions are closed under a very general form of recursion.
- For the proof we will use the **S-m-n Theorem**.
- Used in many proofs in computability theory.

---

**The S-m-n Theorem**

Assume $f : \mathbb{N}^{m+n} \xrightarrow{\sim} \mathbb{N}$ partial recursive.

- Fix the first $m$ arguments (say $\vec{l} := l_0, \ldots, l_{m-1}$).
- Then we obtain a partial recursive function
$$g : \mathbb{N}^n \xrightarrow{\sim} \mathbb{N}, \quad g(\vec{x}) \simeq f(\vec{l}, \vec{x}) .$$
- The S-m-n theorem expresses that we can compute a Kleene index of $g$ (i.e. an $e'$ s.t. $g = \{e'\}^n$) from a Kleene index of $f$ and from $\vec{l}$ **primitive recursively**.

---

**Notation**

$$\{S^m_n(e, \vec{l})\}^n(\vec{x}) \simeq \{e\}^{m+n}(\vec{l}, \vec{x}).$$

- Assume $t$ is an expression depending on variables $\vec{x}$, s.t. we can compute $t$ from $\vec{x}$ partial recursively. Then $\lambda \vec{x}.t$ denotes any natural number $e$ s.t. $\{e\}(\vec{x}) \simeq t$.
- Then we will have
$$S^m_n(e, \vec{l}) = \lambda \vec{x}.\{e\}^{m+n}(\vec{l}, \vec{x}) .$$

**Theorem 7.1 (S-m-n Theorem)** Assume $m, n \in \mathbb{N}$. There exists a primitive recursive function
$$S^m_n : \mathbb{N}^{m+1} \to \mathbb{N}$$
s.t. for all $\vec{l} \in \mathbb{N}^m$, $\vec{x} \in \mathbb{N}^n$:
$$\{S^m_n(e, \vec{l})\}^n(\vec{x}) \simeq \{e\}^{m+n}(\vec{l}, \vec{x}) .$$

**Proof of the S-m-n Theorem**

Let $T$ be a TM encoded as $e$. We want to define a Turing machine $T'$ corresponding to $S^m_n(e, \vec{l})$, i.e. s.t.
$$T'^n(\vec{x}) \simeq T^{m+n}(\vec{l}, \vec{x}) .$$
$T'$ can be defined as follows:

1.
The initial configuration is:
- $\vec{x}$ written on the tape (in binary, separated by blanks),
- head pointing to the leftmost bit of $\text{bin}(x_0)$:
$$\cdots \sqcup \text{bin}(x_0) \sqcup \text{bin}(x_1) \sqcup \cdots \sqcup \text{bin}(x_{n-1}) \sqcup \cdots$$
(here $\sqcup$ stands for a blank cell).

2. $T'$ first writes the binary representation of $\vec{l} = l_0, \ldots, l_{m-1}$ in front of this, and terminates this step with the head pointing to the most significant bit of $\text{bin}(l_0)$. So the configuration after this step is:
$$\text{bin}(l_0) \sqcup \cdots \sqcup \text{bin}(l_{m-1}) \sqcup \text{bin}(x_0) \sqcup \cdots \sqcup \text{bin}(x_{n-1})$$
with the head at the leftmost bit of $\text{bin}(l_0)$.

3. Then $T'$ runs $T$, starting in this configuration. It terminates if $T$ terminates. The result is $\simeq T^{m+n}(\vec{l}, \vec{x})$, and we therefore get $T'^n(\vec{x}) \simeq T^{m+n}(\vec{l}, \vec{x})$, as desired.

A code for $T'$ can be obtained from a code for $T$ and from $\vec{l}$ as follows:
- One takes a Turing machine $T''$ which writes the binary representations of $\vec{l} = l_0, \ldots, l_{m-1}$ in front of its initial position (separated by a blank and with a blank at the end), and terminates at the leftmost bit.
- It is a straightforward exercise to write a code for the instructions of such a Turing machine, depending on $\vec{l}$, and to show that the function defining it is primitive recursive.

$T'$ is a TM s.t.
$T'^n(\vec{x}) \simeq T^{m+n}(\vec{l}, \vec{x})$.

- From a code for $T$ one can now obtain a code for $T'$ in a primitive recursive way.
- $S^m_n$ is the corresponding function.
- (The details will not be given in the lecture.)

Details: Assume the terminating state of $T''$ has Gödel number (i.e. code) $s$, and that all other states have Gödel numbers $< s$. Then one appends to the instructions of $T''$ the instructions of $T$, but with the states shifted so that the new initial state of $T$ is the final state $s$ of $T''$ (i.e. we add $s$ to all the Gödel numbers of states occurring in $T$). This can be done primitive recursively as well.

So a code for $T'$ can be defined primitive recursively depending on a code $e$ for $T$ and on $\vec{l}$, and $S^m_n$ is the primitive recursive function computing this. With this function it follows now that, if $e$ is a code for a TM, then
$$\{S^m_n(e, \vec{l})\}^n(\vec{x}) \simeq \{e\}^{m+n}(\vec{l}, \vec{x}).$$
This equation holds even if $e$ is not a code for a TM: in this case $\{e\}^{m+n}$ interprets $e$ as if it were the code for a valid TM $T$. Then $e' := S^m_n(e, \vec{l})$ will have the same deficiencies as $e$, but when applying the Kleene brackets it will be interpreted as a TM $T'$ obtained from $e'$ in the same way as we obtained $T$ from $e$, and therefore
$$\{e'\}^n(\vec{x}) \simeq T'^n(\vec{x}) \simeq T^{m+n}(\vec{l}, \vec{x}) \simeq \{e\}^{m+n}(\vec{l}, \vec{x}).$$
So we obtain the desired result in this case as well.

**Notation** In the following we will often omit the superscript $n$ in $\{e\}^n(m_0, \ldots, m_{n-1})$, i.e. we will write $\{e\}(m_0, \ldots, m_{n-1})$ instead of $\{e\}^n(m_0, \ldots, m_{n-1})$. Further, $\{e\}$ not applied to arguments and without superscript usually means $\{e\}^1$.

(A code for such a valid TM is obtained by
- deleting any instructions $\text{encode}(q, a, q', a', D)$ in $e$ s.t.
there exists an instruction $\text{encode}(q, a, q'', a'', D')$ occurring before it in the sequence $e$,
- and by replacing all directions $> 1$ by $[R] = 1$.)

**Kleene's Recursion Theorem**

- Assume $f : \mathbb{N}^{n+1} \xrightarrow{\sim} \mathbb{N}$ partial recursive.
- Then there exists an $e \in \mathbb{N}$ s.t.
$$\{e\}^n(\vec{x}) \simeq f(e, \vec{x}).$$
(Here $\vec{x} = x_0, \ldots, x_{n-1}$.)

**Examples**

Kleene's Rec. Theorem: $\exists e. \forall \vec{x}. \{e\}^n(\vec{x}) \simeq f(e, \vec{x})$.

Remark:
- Such applications are usually not very useful.
- Usually, when using the Rec. Theorem, one doesn't use the index $e$ directly, but only the application of $\{e\}$ to arguments.

2. The function computing the Fibonacci numbers, $\text{fib}$, is recursive. (This is a weaker result than what we obtained above, where we showed that it is even primitive recursive.)

**Fibonacci Numbers**

Remember the defining equations for $\text{fib}$:
$$\text{fib}(0) = \text{fib}(1) = 1 ,$$
$$\text{fib}(n + 2) = \text{fib}(n) + \text{fib}(n + 1) .$$
From these equations we obtain
$$\text{fib}(n) = \begin{cases} 1, & \text{if } n = 0 \text{ or } n = 1, \\ \text{fib}(n-2) + \text{fib}(n-1), & \text{otherwise}. \end{cases}$$
We show that there exists a recursive function $g : \mathbb{N} \rightarrow \mathbb{N}$ s.t.
$$g(n) \simeq \begin{cases} 1, & \text{if } n = 0 \text{ or } n = 1, \\ g(n-2) + g(n-1), & \text{otherwise}. \end{cases}$$
This is shown as follows: define a partial recursive $f : \mathbb{N}^2 \xrightarrow{\sim} \mathbb{N}$ s.t.
$$f(e, n) \simeq \begin{cases} 1, & \text{if } n = 0 \text{ or } n = 1, \\ \{e\}(n-2) + \{e\}(n-1), & \text{otherwise}. \end{cases}$$
Now let $e$ be s.t. $\{e\}(n) \simeq f(e, n)$. Then $e$ fulfills the equations
$$ \{e\}(n) \simeq \begin{cases} 1, & \text{if } n = 0 \text{ or } n = 1, \\ \{e\}(n-2) + \{e\}(n-1), & \text{otherwise}.
\end{cases}
$$
These are the defining equations for $\text{fib}$. One can show by induction on $n$ that $g(n) = \text{fib}(n)$ for all $n \in \mathbb{N}$, where $g := \{e\}$. Therefore $\text{fib}$ is recursive.

**General Application of the Rec. Theorem**

Similarly, one can introduce arbitrary partial recursive functions $g$, where $g(\vec{m})$ refers to arbitrary other values $g(\vec{m}')$ of $g$. This corresponds to the recursive definition of functions in programming. E.g. in Java one defines

```java
public static int fib(int n){
    if (n == 0 || n == 1){
        return 1;
    } else {
        return fib(n-1) + fib(n-2);
    }
}
```

**Example 3**

As in general programming, recursively defined functions need not be total:
- There exists a partial recursive function $g : \mathbb{N} \xrightarrow{\sim} \mathbb{N}$ s.t.
$$g(x) \simeq g(x) + 1 .$$
- We get $g(x) \uparrow$.
- The definition of $g$ corresponds to the following Java definition:

```java
public static int g(int n) {
    return g(n) + 1;
}
```

- When executing $g(x)$, Java loops.

**Example 4**

- There exists a partial recursive function $g : \mathbb{N} \xrightarrow{\sim} \mathbb{N}$ s.t.
$$g(x) \simeq g(x + 1) + 1 .$$
- Note that this is a "black hole recursion", which is not solvable by a total function.
- It is solved by $g(x) \uparrow$.
- Note that a recursion equation for a function $f$ cannot always be solved by setting $f(x) \uparrow$. E.g. the recursion equation for $\text{fib}$ can't be solved by setting $\text{fib}(n) \uparrow$.

---

**Ackermann Function**

The Ackermann function is recursive. Remember the defining equations:
$$\begin{align*} \text{Ack}(0, y) &= y + 1 , \\ \text{Ack}(x + 1, 0) &= \text{Ack}(x, 1) , \\ \text{Ack}(x + 1, y + 1) &= \text{Ack}(x, \text{Ack}(x + 1, y)) . \end{align*}$$
From this we obtain
\[ \text{Ack}(x, y) = \begin{cases} y + 1, & \text{if } x = 0, \\ \text{Ack}(x - 1, 1), & \text{if } x > 0 \text{ and } y = 0, \\ \text{Ack}(x - 1, \text{Ack}(x, y - 1)), & \text{otherwise}.
\end{cases} \]

- Define $g$ partial recursive s.t.
$$g(x, y) \simeq \begin{cases} y + 1, & \text{if } x = 0, \\ g(x - 1, 1), & \text{if } x > 0 \wedge y = 0, \\ g(x - 1, g(x, y - 1)), & \text{if } x > 0 \wedge y > 0. \end{cases}$$
- $g$ fulfills the defining equations of $\text{Ack}$.
- The proof that $g(x, y) \simeq \text{Ack}(x, y)$ follows by main induction on $x$ and side induction on $y$. (The details will not be given in the lecture.)

**Proof of Correctness of Ack**

We show by induction on $x$ that $g(x, y)$ is defined and equal to $\text{Ack}(x, y)$ for all $x, y \in \mathbb{N}$:

- **Base case** $x = 0$:
$$g(0, y) = y + 1 = \text{Ack}(0, y).$$
- **Induction step** $x \rightarrow x + 1$. Assume $g(x, y) = \text{Ack}(x, y)$ for all $y$. We show $g(x + 1, y) = \text{Ack}(x + 1, y)$ by side induction on $y$:
  - **Base case** $y = 0$:
$$g(x + 1, 0) \simeq g(x, 1) \overset{\text{Main-IH}}{=} \text{Ack}(x, 1) = \text{Ack}(x + 1, 0).$$
  - **Induction step** $y \rightarrow y + 1$:
$$g(x + 1, y + 1) \simeq g(x, g(x + 1, y)) \overset{\text{Side-IH}}{=} g(x, \text{Ack}(x + 1, y)) \overset{\text{Main-IH}}{=} \text{Ack}(x, \text{Ack}(x + 1, y)) = \text{Ack}(x + 1, y + 1).$$

**Idea of Proof of the Rec. Theorem**

Assume $f : \mathbb{N}^{n+1} \xrightarrow{\sim} \mathbb{N}$. We have to find an $e$ s.t.
$$\forall \vec{x} \in \mathbb{N}^n. \{e\}^n(\vec{x}) \simeq f(e, \vec{x}).$$

- We set $e = \lambda \vec{x}. \{e_1\}^{n+1}(e_1, \vec{x})$ for some $e_1$ to be determined.
- Then the left- and right-hand sides of the equation of the recursion theorem read
$$\{e\}^n(\vec{x}) \simeq \{\lambda \vec{x}. \{e_1\}^{n+1}(e_1, \vec{x})\}^n(\vec{x}) \simeq \{e_1\}^{n+1}(e_1, \vec{x}),$$
$$f(e, \vec{x}) \simeq f(\lambda \vec{x}. \{e_1\}^{n+1}(e_1, \vec{x}), \vec{x}).$$

We need to satisfy $\forall \vec{x} \in \mathbb{N}^n. \{e\}^n(\vec{x}) \simeq f(e, \vec{x})$. Let $e = \lambda \vec{x}. \{e_1\}^{n+1}(e_1, \vec{x})$.
$$\{e\}^n(\vec{x}) \simeq \{e_1\}^{n+1}(e_1, \vec{x}),$$
$$f(e, \vec{x}) \simeq f(\lambda \vec{x}. \{e_1\}^{n+1}(e_1, \vec{x}), \vec{x}).$$
So $e_1$ needs to fulfill the following equation:
$$\{e_1\}^{n+1}(e_1, \vec{x}) \simeq \{e\}^n(\vec{x}) \simeq f(e, \vec{x}) \simeq f(\lambda \vec{x}. \{e_1\}^{n+1}(e_1, \vec{x}), \vec{x}).$$
This can be fulfilled if we define $e_1$ s.t. for all $e_2$
$$\{e_1\}^{n+1}(e_2, \vec{x}) \simeq f(\lambda \vec{x}. \{e_2\}^{n+1}(e_2, \vec{x}), \vec{x}).$$

**Idea of Proof of the Rec. Theorem (continued)**

- By the S-m-n Theorem, $\lambda \vec{x}. \{e_2\}^{n+1}(e_2, \vec{x}) = S^1_n(e_2, e_2)$, so it suffices to have $e_1$ s.t.
$$\{e_1\}^{n+1}(e_2, \vec{x}) \simeq f(S^1_n(e_2, e_2), \vec{x}) .$$
- There exists a partial recursive function $g : \mathbb{N}^{n+1} \xrightarrow{\sim} \mathbb{N}$ s.t.
$$g(e_2, \vec{x}) \simeq f(S^1_n(e_2, e_2), \vec{x}) ,$$
since $S^1_n$ is primitive recursive and $f$ is partial recursive.
- If $e_1$ is an index for $g$, we obtain the desired equation
$$\{e_1\}^{n+1}(e_2, \vec{x}) \simeq f(S^1_n(e_2, e_2), \vec{x}) .$$

**Complete Proof of the Rec. Theorem**

Let $e_1$ be s.t.
$$\{e_1\}^{n+1}(y, \vec{x}) \simeq f(S^1_n(y, y), \vec{x}) .$$
Let $e := S^1_n(e_1, e_1)$. Then we have
$$\begin{align*} \{e\}^n(\vec{x}) &= \{S^1_n(e_1, e_1)\}^n(\vec{x}) && \text{(def. of } e\text{)} \\ &\simeq \{e_1\}^{n+1}(e_1, \vec{x}) && \text{(S-m-n Theorem)} \\ &\simeq f(S^1_n(e_1, e_1), \vec{x}) && \text{(def. of } e_1\text{)} \\ &= f(e, \vec{x}) && \text{(def. of } e\text{)} . \end{align*}$$
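The construction above, defining $e_1$ so that it applies its own index to itself and setting $e := S^1_n(e_1, e_1)$, is the same self-application trick by which recursion can be obtained from non-recursive definitions in programming. As an illustration (not part of the lecture; the names `fix` and `Self` are ours), here is a hedged Java sketch in which a function receives "itself" as an argument, mirroring $f(e, \vec{x})$:

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class FixedPoint {
    // Self models an "index" that can be applied to itself:
    // s corresponds to e1, and s.apply(s) to {S(e1,e1)} = {e}.
    interface Self {
        Function<Integer, Integer> apply(Self s);
    }

    // fix(f) returns a function e with e(x) = f(e, x),
    // mirroring the statement {e}^n(x) = f(e, x).
    static Function<Integer, Integer> fix(
            BiFunction<Function<Integer, Integer>, Integer, Integer> f) {
        Self e1 = s -> x -> f.apply(y -> s.apply(s).apply(y), x);
        return e1.apply(e1); // e := "S(e1, e1)", i.e. e1 applied to its own code
    }

    public static void main(String[] args) {
        // The fib equations from the examples above, written without
        // any explicit self-reference in the defining lambda:
        Function<Integer, Integer> fib =
            fix((self, n) -> (n == 0 || n == 1) ? 1
                           : self.apply(n - 2) + self.apply(n - 1));
        System.out.println(fib.apply(10)); // prints 89
    }
}
```

The defining lambda never mentions `fib` by name; it only uses the `self` argument handed to it, just as $f$ only uses the index $e$ it receives.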
This version is available at http://porto.polito.it/2375839/ (since November 2010). Publisher: Forschungszentrum Dresden - Rossendorf. The article is made available under the terms and conditions applicable to the Open Access Policy Article ("Public - All rights reserved"), as described at http://porto.polito.it/terms_and_conditions.html. Porto, the institutional repository of the Politecnico di Torino, is provided by the University Library and the IT-Services.

Abstract

Safety in the automotive domain is becoming more and more important with the ever-increasing complexity of the technologies built into cars. As a stimulus for industry to refine its safety measures related to electrical, electronic and software systems in cars, the ISO 26262 standard has recently been introduced. Developing safety-related systems according to this standard in an efficient and effective way requires adequate computer-aided support. For this reason, some initiatives towards software-based supporting tools for ISO 26262 were recently started. This paper gives an account of the main such initiatives after recalling the main features of the ISO 26262 standard. In particular, we briefly discuss how the main activities from ISO 26262, such as hazard analysis and risk assessment, the functional safety concept, safety validation, etc., can be supported by software, and what the state of the art is.

1. Introduction

Nowadays, electronic components are pervasive in road vehicles. Indeed, during the last 30 years the majority of innovations in road vehicles (e.g. electronic injection, airbags, ABS, ESP) have been based on electronic embedded systems, and this trend seems set to continue in the future. Therefore, functional safety (i.e.
the safety that depends on the system or equipment operating correctly in response to its inputs, including the safe management of likely driver errors, hardware failures and environmental changes) is acquiring more and more importance for the electronic systems that have to be integrated into cars.

Up to now, automotive companies have developed similar, but custom, safety-related methodologies and processes, and have used largely the same set of techniques (e.g. FMEA, FTA) for the development of safety-related electronic embedded systems. At the same time, however, they do not share a common view of the safety of the produced items. In this context, ISO 26262 should become the functional safety reference standard. Although it is not yet published in its final form (it should be in 2011), it represents the shared effort of car makers, OEMs and Tier-1 suppliers to establish a common way to understand the safety concept and its importance when designing and developing embedded systems for road vehicles. As a consequence, car makers have already started activities directed at transforming their current processes so as to produce systems that can be ISO 26262 compliant.

One of the problems that emerges when applying this transformation is how to organize the new processes and how to adequately support them with software tools. A first commercial tool that partially responds to this need has recently appeared. Moreover, some collaborative initiatives have emerged in the recent panorama: French car makers started a project called EDONA (Environnements de Développement Ouverts aux Normes de l'Automobile) in 2007, while in Italy a project called SiSMA (Sicurezza Funzionale dei Sistemi Meccatronici Automotive) is just starting.

From the technical point of view, an important shared idea about the supporting software tools is that they should provide a means for modelling at the system level and at different abstraction levels.
These models should then be used as a unifying basis on which the safety-related activities are built and synchronized. This modelling facility should be at least semi-formal, in order to respond to the requirements of ISO 26262. Among the various available possibilities, three semi-formal languages have mainly been considered: plain UML, SysML, and EAST-ADL2 [4]. While the first two are general-purpose solutions, the latter is a domain-specific specialization of them, specifically tailored for modelling automotive embedded systems, which enriches the official AUTOSAR modelling language with additional abstraction levels. AUTOSAR is another important standard that focuses on the design and development of software for automotive embedded systems.

The aim of this paper is to present an overview of the above-mentioned current initiatives towards software tools supporting ISO 26262. In section 2, general information about the ISO 26262 and AUTOSAR standards is outlined. In section 3, the existing projects aiming at bringing integrated solutions for industry to apply ISO 26262 techniques in practice are analyzed. Finally, section 4 concludes. The definitions necessary for understanding the domain will be given where needed.

2. ISO 26262 and AUTOSAR

ISO 26262 is an upcoming standard that adapts ISO/IEC 61508 (a standard concerning the safety of systems, applicable to all kinds of industry) to the automotive industry. In particular, ISO 26262 addresses the safety-related systems installed in series production passenger cars (with a maximum gross weight of up to 3500 kg) that are composed of one or more electrical or electronic (E/E) systems. At the time of writing, ISO 26262 is in the state of a draft international standard (DIS). The figure below shows the simplified structure of its development process (the V-model).
![V-Model from ISO 26262](image)

As shown in the figure, the V-model development applies to software and hardware development independently of each other, and to the overall development process as well. The standard prescribes the V-model for the development process and describes how functional safety has to be managed during the whole lifecycle of E/E safety-related systems, while providing guidance in the selection of (core and supporting) processes within product development, as a function of the outcome of a specific safety analysis methodology.

Indeed, the standard is centered around the concept of the Automotive Safety Integrity Level (ASIL), which is a qualitative measure of the needed integrity level (i.e. the probability with which a system correctly performs its safety-related functions). The ASIL is determined by means of hazard analysis and risk assessment. According to the standard, from the beginning of the development of a system, each of its intended functions has to be analyzed with respect to possible hazards. Then, as a function of the probability of exposure to a hazard, its possible controllability by a driver, and the severity of a critical event, the risk is estimated and an ASIL is determined. Four ASIL levels have been defined, running from A (lowest) to D (highest). ASILs have to be mapped onto the safety requirements that are generated to avoid or reduce the identified risks. The standard thus encourages focusing effort on safety-critical functions, while not spending excessive time on non-critical ones. Therefore, the standard provides requirements for the whole lifecycle and guidance in choosing adequate methods (e.g. hazard analysis, risk assessment, and safety analysis methods) and procedures (e.g. safety, requirements, and document management) to achieve the required ASIL for the developed product.

The standard defines some new terminologies and concepts.
It defines the word "item" as a system, an array of systems, or a function to which ISO 26262 is applied, and the process of collecting and describing an item as "item definition". The concept phase consists of defining the preliminary architectural and functional design of the future system elements and of performing their safety analysis. Safety analysis results in safety requirements that have to be satisfied in order to achieve the safety goals (formalized conditions of an element's safe functioning). The concept phase is followed by the specification of safety requirements into low-level technical requirements and by system design. At the hardware and software level, system functionalities are implemented according to the technical requirements. Then, verification and validation counterparts of the safety goal specification and system design take place. Eventually, provided that safety requirements are not violated during system production, the system goes into operation.

Although ISO 26262 derives from ISO/IEC 61508, it differs in some valuable points. While ISO/IEC 61508 mainly covers industrial equipment and process plants, which are usually produced in small numbers, ISO 26262 focuses on E/E systems for series production cars; hence the standard also covers the requirements for the production of systems in series. Moreover, it is worth noting that ISO 26262 provides much more information and guidance for qualifying and classifying software tools than ISO/IEC 61508. Furthermore, it has to be noticed that controllability was not foreseen by ISO/IEC 61508 for computing Safety Integrity Levels.

There is some connection between ISO 26262 and another important automotive standard called AUTOSAR (AUTomotive Open System Architecture). This is an ongoing initiative in the automotive world, started in 2003 and directed at building a common, open, standardized software architecture. For more information about AUTOSAR see, for example, [10], [11], [12].
Since AUTOSAR also addresses safety-critical embedded software, it aims at showing relevant compliance with related safety standards (ISO 26262 in this case). In fact, AUTOSAR defines both a software architecture and a supporting methodology to develop E/E software systems for the automotive domain, but cannot guarantee functional safety of such systems by itself. Thus, the implementation of safety-related embedded systems using AUTOSAR has to be done in compliance with the related safety standards designed for the automotive domain. Starting from release 4, the AUTOSAR standard provides some of the technical information required to prove ISO 26262 compliance for AUTOSAR members, but it does not provide procedures or activities that address the safety problem by itself. The full responsibility for implementing the functional safety mechanisms described inside the AUTOSAR framework resides with the implementer, who will have to fulfill all the specific safety-related regulations. This means that AUTOSAR does not include the implementation of safety activities in its shared software platform, leaving them to be implemented by car makers independently of the collaborative AUTOSAR movement, thus competing with each other on this implementation.

The approach to functional safety in AUTOSAR with respect to ISO 26262 mainly concerns the Safety Element out of Context (SEooC), which is a safety element for which an item does not exist at the time of the development. According to ISO 26262, during a system development process, a safety element can be developed as an item (with stricter requirements and more tediously) or as a SEooC (when requirements are substituted with assumptions). For details about SEooC, see [1]. As for modeling, the official AUTOSAR modeling language has been developed as a UML/SysML profile. For details, see [9].

3. Supporting tools and initiatives
Using software tools can greatly facilitate some of the activities required by ISO 26262 for the development of safety-critical electronic embedded systems. Indeed, it would be hard, if not impossible, to accomplish all the requirements of the standard without adequate software tool support. The tasks which have to be done in compliance with ISO 26262 and which should be supported by software tools include: item definition, ASIL determination, ASIL decomposition, hazard analysis and risk assessment activities, safety goal definition and safety requirements allocation, V&V activities, safety validation, configuration and change management, etc. While for most of these activities supporting software tools already existed before the introduction of ISO 26262, the standard requires a unified management of safety-related activities, whereby the various tools need to be integrated as part of a single framework. This need arises, for example, with respect to traceability requirements. To the best of our knowledge, there are only two main initiatives collecting software solutions for the aforementioned activities under a single roof: the Medini Analyze software tool and the EDONA project and platform. In this chapter we will analyze how exactly they cover these activities and the requirements coming from the standard.

**3.1. Medini**

Medini Analyze aims at covering all the main ISO 26262 activities during the system development process, with a particular focus on safety analysis. It brings together functional architecture design and functional safety analysis, placing the main accent on hazard analysis and risk assessment. The structure of the work flow in Medini Analyze reflects the corresponding parts of the ISO 26262 V-Model. Medini Analyze naturally introduces the concept of item into its work flow, with the possibility to bind it with any external documentation, which can be uploaded into the Medini Analyze work flow.
Within a single project one can manage multiple items, integrating them within the architecture model. Moreover, functions can be added to each item. For example, a cruise control system works by measuring the speed of the vehicle, by estimating the inclination of the driving surface, and by interacting with the vehicle’s engine management system. Hazard analysis is then performed for the chosen item or for one of its functions. During hazard analysis, the tool guides the user in describing the operational situation, the item operation modes, the hazards and the possible malfunctions of the item, so as to obtain a hazard list for the item (or function). For example, for cruise control a hazard could be the unintended acceleration of the wheels on a slippery surface without changing the actual speed of the car. Prerequisites, conditions and potential effects of hazardous events can be described. After the hazard analysis has been completed, the tool provides a wizard for determining ASIL levels. Then, as prescribed by ISO 26262, the tool supports the insertion of a safety goal for each dangerous hazard, along with the inheritance of ASIL levels. A safety goal is a top-level safety requirement that is defined as a result of hazard analysis and risk assessment and that can be shared by different hazard list entries. For example, a safety goal for the cruise control hazard associated with driving on a slippery surface can be “do not allow use of cruise control if the driving surface is wet”. This safety goal inherits the ASIL level of the corresponding hazard. Afterwards, safety goals can be broken down into functional safety requirements, and this is supported with a palette-based system to edit SysML structural elements. The figure below shows some possible safety requirements for the cruise control as an example. The tool embeds support to perform both qualitative and quantitative FTA and FMEA, to facilitate the derivation of requirements.
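The ASIL determination wizard combines three classifications: severity (S1–S3), probability of exposure (E1–E4) and controllability (C1–C3). As a minimal illustration only, the sketch below uses an additive shortcut that reproduces the normative mapping of ISO 26262-3; the standard itself defines this as an explicit table, not a formula, and the function name here is our own.

```python
# Illustrative sketch of ASIL determination from severity (S1-S3),
# exposure (E1-E4) and controllability (C1-C3) classes.
# The additive shortcut reproduces the normative table of ISO 26262-3
# for these ranges, but the standard states the mapping as a table.

def determine_asil(severity: int, exposure: int, controllability: int) -> str:
    """Return 'QM' or 'A'..'D' for S/E/C class numbers."""
    if severity not in (1, 2, 3) or exposure not in (1, 2, 3, 4) \
            or controllability not in (1, 2, 3):
        raise ValueError("invalid S/E/C classification")
    n = severity + exposure + controllability
    return {7: "A", 8: "B", 9: "C", 10: "D"}.get(n, "QM")

# E.g. if the cruise-control hazard on a slippery surface is rated
# S3/E4/C3, it yields ASIL D, and the derived safety goal inherits it.
print(determine_asil(3, 4, 3))  # -> D
print(determine_asil(2, 2, 2))  # -> QM
```

The derived safety goal then inherits the ASIL computed for its hazard, which is the inheritance step the tool automates.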
In addition, it allows the user to create SysML models with the help of an embedded SysML editor. Different levels of abstraction can be created for the defined system, e.g. item level and system level. Architecture SysML models can be used for allocating safety requirements and serve as a basis for hazard analysis. A notable feature of the tool is that it also provides integration with the most popular modeling and development environments in the automotive industry, i.e. the MathWorks MATLAB/Simulink/Stateflow suite. This is done by linking safety requirements to structural blocks and by the possibility of viewing Simulink models inside Medini Analyze.

**Traceability** During a system development process it is crucial to have a clear picture of how elements such as requirements, functions, etc. are connected to one another, and the standard requires these connections to be traced. Because of the complexity of the systems under analysis, shifting the attention from a certain element to a connected one is not always easy. Medini Analyze adopts the concept of a trace matrix for representing and setting bindings between elements and sub-elements. Based on this matrix, the focus can be shifted in one click between connected elements.

**Validation** As ISO 26262 sets some rules for its activities, there is the need to verify that such rules are fulfilled during the system development process. Let us consider, for example, ASIL decomposition. This is an activity aimed at reducing the likelihood of systematic failures, consisting of substituting a safety requirement with a high ASIL with redundant requirements that have lower ASIL levels. This decomposition can easily be done in a wrong way. To prevent such mistakes a validation engine has been implemented in the tool, so that either the whole project or part of it can undergo validation to find inconsistencies.
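A validation engine for ASIL decomposition essentially checks each decomposition against the small set of allowed schemes (e.g. ASIL D may be decomposed into C(D)+A(D), B(D)+B(D) or D(D)+QM(D)). The sketch below illustrates such a check in Python; the scheme list follows ISO 26262-9 as we understand it and should be treated as illustrative, and a real engine such as Medini's operates on the UML model via OCL rules rather than on plain tuples.

```python
# Sketch of a consistency check for ASIL decomposition.
# Allowed decomposition pairs per original ASIL (unordered), following
# the schemes of ISO 26262-9; treat this list as illustrative.
ALLOWED = {
    "D": [("C", "A"), ("B", "B"), ("D", "QM")],
    "C": [("B", "A"), ("C", "QM")],
    "B": [("A", "A"), ("B", "QM")],
    "A": [("A", "QM")],
}

def decomposition_valid(original: str, part1: str, part2: str) -> bool:
    """True if a requirement with `original` ASIL may be decomposed
    into two redundant requirements with ASILs part1 and part2."""
    pair = tuple(sorted((part1, part2), reverse=True))  # ignore order
    allowed = {tuple(sorted(p, reverse=True)) for p in ALLOWED.get(original, [])}
    return pair in allowed

print(decomposition_valid("D", "B", "B"))  # -> True
print(decomposition_valid("D", "A", "A"))  # -> False: not an allowed scheme
```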
The engine makes use of the OCL language, a declarative language for describing rules applied to UML models. Besides the set of rules provided with the tool, users can provide their own rules and/or customize the built-in ones.

**Document generation.** ISO 26262 requires the production of many documents. Medini Analyze facilitates the production of such documents by generating some of them directly from the underlying models and their associated information. For example, the “functional safety concept” (the document consisting of safety goals and safety requirements for these goals) is generated in a completely automatic way by Medini Analyze.

**Interfacing and work flow.** Almost all of the file formats used by Medini are XML/XMI, which makes them easy to include in import/export operations with a wide spectrum of external tools. For the sake of a comfortable user experience, all the activities are designed so that they can be used in an iterative manner, without following a predefined sequence of actions and with a high degree of independence. Although Medini Analyze is an Eclipse-based platform and support for other operating systems is planned, at the moment it is available only for Windows.

**3.2. EDONA**

The EDONA project is a French initiative that aims at constructing an inter-operable integration platform for automotive software development tools, allowing co-development through formalized interfaces over the entire development cycle, rather than at providing new tools. The aim is also to integrate safety-based innovations into a common software platform, considering AUTOSAR prescriptions. The project is directed by Renault, and federates 32 innovating technologies and 13 common source (open-source for members) projects. The project is going to finish in October 2010. The EDONA components can be generally divided into two parts: Eclipse-based and non Eclipse-based.
As a special class of components in EDONA come AUTOSAR components. In Figure 3, the main components (technologies) are shown.

**Figure 2. Safety requirements for cruise control by means of Medini Analyze.**
The framework reuses a set of basic Eclipse components, such as the Eclipse Modeling Framework (EMF) [5] model repository and the ATL Transformation Language (ATL) [6] for model transformation. EMF is a modeling framework and code generation facility for building tools and other applications based on a structured data model, while ATL is a model transformation language and toolkit. In the field of Model-Driven Engineering (MDE), ATL provides ways to produce a set of target models from a set of source models. Safety analysis in EDONA is provided by the Usine Logicielle project. Unfortunately, there is not yet any public information about how EDONA covers ISO 26262. As a means of requirements management, the standalone tool Reqtify [14] and its simplified version MyReq [14] (an Eclipse plug-in) are used. As testing is an essential activity within the safety-critical development process, a large effort has been made towards it. C code is generated from a Simulink model with the help of the SCADE Suite [8], which is qualified software according to several international safety standards, including IEC 61508. Then, test cases are generated on the basis of the source code, together with requirements for minimum values of coverage metrics. Test generation is based on Safety Tests Builder IHM [16] and AGATHA [7]. The technique used for test derivation is symbolic automata execution. From Safety Tests Builder IHM, test cases can be passed to a Simulink model for execution or can be exported into an Excel table. For further information about EDONA, see [3].

**4. Conclusion**

In this paper we have provided an overview of the new upcoming ISO 26262 standard, focusing on the processes and activities that can be supported by tools. Furthermore, we have given a brief overview of Medini Analyze and of the EDONA platform, which are the first initiatives towards integrated software environments for supporting the development of safety-critical electronic components according to the ISO 26262 standard. The aim of these tools is to cover safety-critical design aspects in a richer way than the “one-task” tools that are already numerous on the market. Medini and EDONA are just the first steps towards software-based tool support for ISO 26262. It can be expected that more tools will appear on the market in the near future (especially after the official publication of ISO 26262), and/or that the existing ones will evolve. The target will be to cover the gaps still left in process automation and to achieve even better integration of the tools composing consecutive tool chains. These targets are being considered by SisMA, an Italian applied-research project that is just starting. As usually happens in software development, a new domain is covered by commercial software first and then, with some delay, open-source solutions appear. Accordingly, new open-source software initiatives giving specific ISO 26262 support can be expected to arise in the future. Along with tools, a fast development of support for safety aspects in the standardization of automotive system development, which began to evolve with the publication of AUTOSAR release 4.0, is also expected.

Acknowledgments

We are grateful to the Management Team of ikv+ for the opportunity to evaluate Medini Analyze with a trial version. This paper has been written thanks to the support of the SisMA project.

References

[12] www.autosar.org
[13] www.papyrusuml.org
LILIANA BINTI AZMI

REPORT SUBMITTED IN FULFILMENT OF THE DEGREE OF COMPUTER SCIENCE (COMPUTER SYSTEMS AND NETWORKING)

FACULTY OF COMPUTER SYSTEMS AND SOFTWARE ENGINEERING

UNIVERSITI MALAYSIA PAHANG

2014

ABSTRACT

Appointments are made manually by students to see lecturers. There are various ways students make appointments with lecturers in UMP, such as going to see the lecturer, making a phone call or messaging on a social network. However, there is no proper information channel for lecturers' availability that allows lecturers to update their availability regularly. Therefore, the Online Appointment System for FSKKP is developed to reduce the difficulties FSKKP students face in meeting lecturers. Lecturers can update their schedule not just based on classes; they can also update it with other activities such as university activities and events. This system is a web-based platform and is created using server-side scripting in PHP with the Apache Web Server, client-side scripting such as HTML and CSS, and MySQL as the database for the system. Being a web-based application, the system can be accessed anywhere with an internet connection. The system will be developed using the Rapid Application Development (RAD) model and gives benefits to both lecturers and students.

Appointments are made manually by students to meet lecturers. There are various ways for students to make appointments with lecturers at UMP, such as meeting the lecturer in person, making a phone call, or messaging through a social network. However, there is no proper information channel for lecturers' availability that allows lecturers to update their availability regularly. Therefore, the Online Appointment System for FSKKP is developed to reduce the difficulties FSKKP students face in meeting lecturers. Lecturers can update their schedule not only based on classes but also with other activities such as university activities and events.
TABLE OF CONTENT

CHAPTER 1  INTRODUCTION
1.0 Overview
1.1 Problem Statement
1.1.1 Unable to reach lecturer
1.1.2 No records of availability
1.2 Motivation
1.3 Objective
1.4 Scope

CHAPTER 2  LITERATURE REVIEW
2.0 Overview
2.1 Existing Systems on Online Appointment System
2.1.1 E-Appointment Scheduling (EAS)
2.1.2 Student's Module
2.1.3 Lecturer Module
2.1.4 Constraint of E-Appointment Scheduling (EAS)
2.1.5 Web Based Intelligent Appointment System
2.1.6 Patient Appointment Reservation System (PARS)
2.1.7 WAS-GN: Web-based Appointment System with GSM Network
2.2 Comparison of the Existing Systems
2.2.1 Advantages and Disadvantages of Existing Systems
2.2.2 The comparison between existing systems and Online Appointment System for FSKKP Lecturers and Students
2.3 Web-based System Techniques
2.3.1 Form Processing
2.3.2 Navigation
2.3.3 Database Operations
2.3.4 Authentication
2.3.5 Error Handling
2.4 Development Tools
2.4.1 Software Tools

CHAPTER 5  RESULT AND DISCUSSION
5.0 Overview
5.1 Result
5.1.1 Login Page
5.1.2 Registration Page
5.1.3 User Profile View and Update
5.1.4 Create Lecturer's Schedule
5.1.5 Update Lecturer's Schedule
5.1.6 Appointment Page
5.1.7 Student Appointment Request
5.2 Discussions
5.3 Advantages and Disadvantages
5.3.1 Advantages of OASF
5.3.2 Disadvantages of OASF
5.4 Assumption
5.5 Constraint
5.5.1 Technical Constraint
5.6 Future Development of OASF

CHAPTER 6  CONCLUSION
6.0 Conclusion

REFERENCES
APPENDIX A
APPENDIX B

CHAPTER 1

INTRODUCTION

1.0 Overview

At the present time, everything in this world depends on Information and Communication Technology (ICT). With the rapid usage of computers and gadgets, everything is computerized, and this has an enormous impact on our lives. Nowadays, most organizations such as schools, hospitals, universities and the government have started to do everything in a computerized way, as it is easier and faster. In order to meet important people, an appointment should be made. Nevertheless, a manual appointment system is not very efficient, as it wastes time and money. In the Faculty of Computer Systems and Software Engineering (FSKKP), students still make appointments with lecturers manually. Therefore, the Online Appointment System for FSKKP Lecturers and Students (OASF) is developed to reduce the difficulties FSKKP students face in meeting lecturers. This system is a web-based platform and will be created using server-side scripting such as PHP with the Apache Web Server, client-side scripting such as HTML and CSS, and MySQL as the database for the system. This makes it mobile, as users can access the system anywhere as long as there is an internet connection. Appointments are made based on the time slots of the lecturer, which can be updated by the lecturer and also by the administrator.
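The booking model just outlined — lecturers and the administrator maintain time slots, and a student can only take a slot that is still free — can be sketched as follows. This is a minimal illustration in Python with invented class and method names; the actual OASF stores its slots in MySQL and is written in PHP.

```python
# Illustrative sketch of the OASF booking idea: lecturers (or the
# administrator) maintain time slots, and students may only book a
# slot that is still marked free. Names are hypothetical.

class Schedule:
    def __init__(self):
        self.slots = {}  # (date, time) -> "free" | "busy" | "booked"

    def set_slot(self, date, time, status):
        """Lecturer/admin updates availability (class, meeting, leave...)."""
        self.slots[(date, time)] = status

    def request(self, date, time):
        """Student requests a slot; returns True if the request is recorded."""
        if self.slots.get((date, time)) == "free":
            self.slots[(date, time)] = "booked"  # pending lecturer approval
            return True
        return False

sched = Schedule()
sched.set_slot("2014-05-12", "10:00", "free")
sched.set_slot("2014-05-12", "11:00", "busy")  # lecturer has a meeting
print(sched.request("2014-05-12", "11:00"))    # False: slot not available
print(sched.request("2014-05-12", "10:00"))    # True: request recorded
print(sched.request("2014-05-12", "10:00"))    # False: no double booking
```

The last call illustrates the anti-reiteration requirement: once a slot is taken, a second student cannot book it.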
Lecturers will have to update their availability in case they are on leave or have meetings. The students check the lecturer’s availability and pick the time slot in which they would like to meet the lecturer. Then, a request is submitted and the lecturer is notified via email. If the lecturer is not available, the system will suggest another time slot that the student can pick. In FSKKP, an online appointment system among lecturers has already been developed. However, there is no similar system built to make an appointment between a student and a lecturer. The existing system is built in IMS (Information Management System), but this system is a standalone system.

1.1. Problem Statement

1.1.1. Unable to reach lecturer

Some lecturers can be very busy or have other duties and responsibilities besides teaching. This makes it hard for the students to meet them in their office. In UMP, students will directly go to the lecturer’s room or call to get confirmation. The lecturers often take an extended period of time to reply to students’ calls or SMS (Short Messaging System). Waiting a long time for a reply from lecturers who are busy or unavailable is time consuming for some students. This system will help to reduce students’ waiting time, as they may know the status of the lecturer before going to meet them. Apart from that, students may not have the lecturer's phone number. When they try to reach them at the faculty, the lecturer is not in the room because of having class, meetings or being on leave. On UMP's E-comm website, lecturers' contact information can be found in the user directory. The drawback is that some lecturers only list their office extension number. This makes it hard for students to contact lecturers after office hours to make an appointment for the next day, especially for urgent matters such as discussions for the Undergraduate Research Project.

1.1.2.
No records of availability

Moreover, lecturers sometimes do not abide by the appointments that they assign to the students. Although they put notes on their doors or update their availability on Facebook, this is not practical or professional. With this system, lecturers are able to manage their appointments with students and can also check the approved appointments whenever they are logged into the system.

1.2 Motivation

Conventionally, before students meet up with the lecturers for discussions, they have to go all the way to the faculty. Otherwise, they have to contact the lecturers via phone or by sending messages. However, this is costly for some students. Consequently, developing a system that is free and easy to use by students and lecturers was considered in order to facilitate communication among them. Constructing this system looks challenging and intriguing, since it needs independent thinking and intuition. It also helps in building up social and life skills and in incorporating previous knowledge with recent knowledge.

1.3 Objective

1. To study the lecturer-student appointment process, in which lecturers manage their time slots so that students can view and choose a suitable time slot to make an appointment with the lecturer.
2. To develop a system that allows students to request an appointment with the lecturer after viewing the lecturer’s availability, with the lecturer choosing either to deny or accept the request.
3. To test the system by eliminating the possibility of the same time slot being booked by other students, or of booking when the lecturer is not available because of meetings or university activities.

1.4 Scope

- UMP’s Faculty of Computer Systems and Software Engineering (FSKKP)
- The target users of this system are students, lecturers and the administrator. Students can make an appointment only if they have logged in to their account in this system.
Therefore, only registered users can make appointments with the lecturers.

1. **Administrator**
   - Manage faculty records and the lecturers that exist in the system
   - Manage public holiday records and important updates
   - Manage the database
2. **Lecturer**
   - Manage profile and account
   - Manage timetable and time slots
   - View requests by students, either to reject or accept them
3. **Student**
   - Check the date and time slot before proceeding with the appointment process
   - Make a booking and check the appointment request status
   - View the record of sent appointment requests

CHAPTER 2

LITERATURE REVIEW

2.0 Overview

This chapter elaborates on the existing systems that are related to the Online Appointment System for FSKKP Lecturers and Students (OASF) and explains the techniques/methods/languages used in each system. An appointment is a time reserved for something such as a doctor visit or a business deal, much like a reservation. Recipient notification agents accept message notifications on behalf of recipients. Getting systems with many independent participants to behave is a great challenge (Mohd Helmy Abd Wahab, N. H., 2008). Nowadays, people demand to use computerized systems in their organizations. The reason is to minimize the human workload; at the same time, fewer workers or employees are needed to handle the various systems in an organization. An organization might need just one worker for each system. Apart from that, technology helps people to save time by using electronic systems instead of recording data manually. For that reason, online appointment systems are built in some organizations so that meetings and appointments can be made in a more appropriate way.
An online appointment system is a paperless electronic application, designed with high flexibility and ease of use, implemented for organizations such as faculties, administrations, hospitals, clinics and other businesses to handle meetings with customers or clients in a more efficient way. Many kinds of online appointment system exist nowadays. Such systems are generally built to avoid the same time slot being assigned to different users. 2.1. Existing Systems on Online Appointment System 2.1.1. E-Appointment Scheduling (EAS) E-Appointment Scheduling (EAS) has been developed to handle appointments for UMP students and lecturers in the Faculty of Computer Systems & Software Engineering (FCSSE) and the Student Medical Center. It is an online application for FCSSE students applying to make an appointment with lecturers or a doctor. All applications have to be sent to the lecturers or doctor for approval. This system gives students a more interactive way to make an appointment through an online system. By deploying this system, time and cost are saved because the application sets appointments automatically. The system is therefore expected to solve the scheduling problem (Noraziah Ahmad, Roslina Mohd Sidek, and Mohd Affendy Omardin, 2010). To overcome the scheduling drawbacks of this system, Constraint Logic Programming (CLP) has been implemented, giving users suggestions when determining available slots from the lecturers' and doctors' timetables. 2.1.2. Student's Module From the students' page view, all students are allowed to use their student ID as username and password for their first login to the system. Before making an appointment, a student can view the availability of lecturers and doctors. To make an appointment with a lecturer, the student must search for the lecturer by name, date and time.
The system displays the available slots that the student requests; if there are none, it suggests other slots for the appointment. The doctor module is similar, but appointments are only generated for the doctor. Next, the system displays the lecturer's or doctor's schedule according to the constraints inserted by the student for the appointment. Available slots that the student needs are searched by the system; otherwise, it gives other available slot suggestions if the constraints do not match. The student just clicks the result to make the appointment. Figure 2.0 (Make an Appointment, Noraziah Ahmad, 2010) As shown in Figure 2.0, the student is then required to insert the appointment's location and agenda and click the send button, or cancel or exit to abort the appointment. 2.1.3. Lecturer Module After an appointment has been made, the database is updated, which enables the lecturer to view the request. Even though every appointment request is based on available slots, the lecturer can still reject it or change the time and date in case of emergency. Lecturers can also edit their schedule to keep the available slots up to date. The report button, as in Figure 2.1 below, is for lecturers to view the appointment records. Figure 2.1 (List of Approved Appointments, Noraziah Ahmad, 2010) Figure 2.1 shows the approved appointments that are automatically made by the system. Information about each application can also be viewed: by clicking the image in the detail column, the lecturer can check the details of the applicant. Figure 2.2 (Lecturer Setup Schedule, Noraziah Ahmad, 2010) Figure 2.2 shows that the lecturer is able to set up the schedule for appointments. 2.1.4. Constraints of E-Appointment Scheduling (EAS) Based on the research, EAS is part of an IMS (Integrated Management System), a single integrated system used by an organisation to manage the totality of its processes, in order to meet the organisation's objectives and fairly satisfy the stakeholders.
An IMS combines all related components of a business into one system for easier management and operation (Sciqual.com.au, 2015). Therefore, when so many things are going on in one website, appointments might be missed or forgotten. A standalone system is therefore better for appointments because it operates independently, meaning there is only one system. Lecturers need to set up their timetables themselves, which becomes a problem if a lecturer does not update, or forgets to set up, the schedule. The administrator should be responsible for making sure the schedule is always up to date. Besides that, EAS does not provide a timetable that enables students to check whether the lecturer is available; instead it allows the lecturer to change the time of the requested appointment. This causes difficulties when the student has a class or other university activities at the time the lecturer sets. The system is also time-consuming because students have to check availability one lecturer at a time, searching by lecturer's name, date and time. 2.1.5. Web Based Intelligent Appointment System The Web Based Intelligent Appointment System is an online appointment system developed with integrated Intelligent System techniques. The purpose of an appointment is for students to reserve time for any academic-related activities, such as discussions and weekly meetings with lecturers. The main orientation of the prototype is to manage appointments and calendar updating. 2.1.5.1. Database design A database is used as the platform that stores data for most information systems; it is the ultimate instrument for most systems. There are several steps in database design, as described by an inflow schema that consists of i) process events, ii) function links and iii) directed communications (King, 1985). 2.1.5.2.
Interface Design Figure 2.5 (Interface of Appointment Timetable, Mohd Helmy, 2009) Figure 2.5 shows that students can make an appointment by choosing the blue coloured time slots. In Figure 2.6, students make an appointment by selecting the appointment duration and the purpose of the meeting. Figure 2.7 shows that the lecturer can change a time slot by clicking on the time slot that needs to be changed. Figure 2.8 (Interface of New User Registration, Mohd Helmy, 2009) Figure 2.8 is the interface for new user registration. 2.1.5.3. Intelligent Agents Agent-based computing has been hailed as "the next significant breakthrough in software development" (Jennings and Wooldridge, 1998). Different types of agents have different roles. In this system, the agent's role is to manage information in databases and report a status by comparing it with inputs provided by the users; it is capable of autonomous action to meet its design objectives. An agent is a computer program that assists the user with routine computer tasks and acts on behalf of human agents (Noraziah Ahmad, R. M., 2010). At the user interface, the user interacts with the agent while the agent senses and acts independently in a work environment such as an operating system. Using the information taken from its environment, the agent performs a given task. The role of the agent is to respond to the user's requests ad hoc, and an Intelligent Agent is placed in the prototype. It allows both students and lecturers to easily access the system from any terminal connected to the Internet, even under time constraints. 2.1.5.3.1. Advantages of Intelligent Agents i. Higher efficiency in work: less time used, autonomous operation, and the ability to search huge amounts of information and filter out the important things, which would be impossible for humans ii. Opens new opportunities, such as arranging appointments, including searching for an available slot for an appointment and respond to 2.1.5.3.2.
Constraints of the Web Based Intelligent Appointment System From the research, the system does not have many constraints, but it still lacks a notification feature. The system does not notify lecturers on whether they have an appointment to check. Appointments are approved automatically, and the lecturer does not need to approve or reject the appointment request. This causes complications when students do not check whether the lecturer has changed the time slot for the meeting. 2.1.6. Patient Appointment Reservation System (PARS) Based on the research, the Patient Appointment Reservation System (PARS) is a system developed to use the possibilities provided by advanced Internet and medical technologies to reduce administration costs, to provide availability and high-quality service in health care, and to use human and material resources more efficiently in health care organizations. At present PARS is one of the most modern projects in Lithuania's medical sphere, linking the registries of 40 different health care institutions. 2.1.6.1. System operation principles 1. Specialists - Make a consultation time schedule. - Scheduled consultation times of a physician are entered into PARS by the reception personnel. - Can enter planned consultation times themselves if signed in. - A specialist can be chosen by health care institution, family name, specialty and consulting-room, or by any combination of these criteria. When a proper specialist is found, the patient is able to view all available visit times and select the most convenient one. - Can view a list of registered patients for a particular date and their complaints. - Can send an SMS asking a patient to bring all the necessary documents or test results that might be useful. 2. Patients - Able to register for a visit by phone or at the reception desk. - All patients' details are entered into PARS, and patients can register online.
- Can reserve a consultation time by entering the patient's name, family name, mobile phone number and other contact information. - Will receive an SMS confirmation, a reminder about the upcoming visit, and notice of reservation cancelation if circumstances cannot be circumvented. Figure 2.9 (Patient appointment reservations via the Internet since 01.01.2008, Vilnius, 2008) Figure 2.9 shows that since 2008, when this project began, patient appointment reservations via the Internet have been increasing steadily. 2.1.6.2. Constraints of the Patient Appointment Reservation System (PARS) PARS is a huge system, and it needs to work perfectly to earn the users' satisfaction. However, there are problems in the system. First, the system is written in the Lithuanian language. For a system like this, it is better to use English, or to offer a choice between English and Lithuanian, because if a non-Lithuanian wants to use the system, the language will cause difficulties and be time-consuming. Besides that, it will cause false
A Note on Protection of Context-Sensitive Information By Yang-Chang Hong Institute of Information Science Academia Sinica 115 Taipei, Taiwan, R. O. C. and Stanley Y. W. Su Department of Electrical Engineering University of Florida Gainesville, FL 32611 U. S. A. ABSTRACT: This paper investigates the problem of invoking a context-dependent decision for security checking. The context problem, that is, the ability of a security subject to combine partial information to which he has the right of access in order to produce information he is not entitled to see, is discussed. A protection scheme that prevents the subject from deducing context-sensitive information based on data semantic dependencies is presented. Key Words and Phrases: context-sensitive information, semantic dependency, security decision, access path, deduction, protection 1. Introduction In order to control access to a database, a set of security decisions is generally needed by a DBMS. These decisions are rules that specify conditions under which a particular subject can be granted access to a particular object (e.g., an attribute or a relation in the relational database context) [4], or that define the portion of a database which a particular subject is entitled to see [2]. They have been classified into different types [2,4,7]. One type of decision is referred to as the context-dependent (CXD) decision [4,5,8]. It does not allow a subject to relate one object to another, but permits him to see the individual objects. An example of a CXD decision is: \[ d_1 : \text{The subject } S \text{ can see the salaries of employees, but not together with their names, and vice versa.} \] This type of decision is context-dependent in the sense that its invocation depends on the context of database access. A context-independent decision, by contrast, can be invoked for enforcement whenever a query (or an application program) accesses or manipulates a set of objects associated with the decision [6,10].
However, this cannot be true for a CXD decision in general, because objects in a database are closely related semantically and structurally. Partial information to which a subject has the right of access may be combined to disclose context-sensitive information. Consider a relation EMP(ENAME, SALARY, STATUS, MGR, AGE, ASSESSMENT) with primary key ENAME. Assume that attribute STATUS is functionally dependent on ENAME, denoted \( \{\text{ENAME}\} \rightarrow \{\text{STATUS}\} \), and that \( \{\text{STATUS}\} \rightarrow \{\text{SALARY}\} \) [1]. (The functional dependency \( X \rightarrow Y \) holds in a relation \( R \) if for any two tuples \( u \) and \( v \) of \( R \), \( u[X] = v[X] \) implies \( u[Y] = v[Y] \), where \( X, Y \) are sets of attributes of \( R \) and \( u[X] \) is the projection of the tuple \( u \) on \( X \).) Consider the following two requests: one requests the names and status of employees whose manager is 'SMITH', and the other requests the salaries with \( \text{STATUS} \leq 5 \). With the invocation mechanism for context-independent decisions, neither will necessarily invoke the decision \( d_1 \): the decision \( d_1 \) is invoked only if a request involves the access of a set of attributes which contains \{ENAME, SALARY\}, whereas the first request involves the set \{ENAME, STATUS, MGR\} and the second the set \{SALARY, STATUS\}. However, since the transitivity property holds for any two functional dependencies, one can obtain, via status, the names and the associated salaries of those employees whose manager is 'SMITH' and whose status is less than or equal to 5. This example points out that the invocation of a CXD decision depends not only on the objects being interrogated but also on the records previously accessed, i.e., access history information [5,7,8]. For a DBMS to enforce this type of decision, it is necessary to keep all the relevant records previously accessed for subsequent security checking.
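The transitive leak just described, where \( \{\text{ENAME}\} \rightarrow \{\text{STATUS}\} \) and \( \{\text{STATUS}\} \rightarrow \{\text{SALARY}\} \) together imply \( \{\text{ENAME}\} \rightarrow \{\text{SALARY}\} \), can be detected mechanically with the standard attribute-closure fixpoint. The sketch below uses the EMP relation's FDs from the text; the function name and representation are our own illustration, not the paper's algorithm.

```python
def closure(attrs, fds):
    """Compute the attribute closure attrs+ under a set of FDs.

    fds is a list of (lhs, rhs) pairs of frozensets. Standard fixpoint:
    keep adding rhs whenever lhs is already covered by the result.
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# FDs of the EMP relation from the text.
fds = [
    (frozenset({"ENAME"}), frozenset({"STATUS"})),
    (frozenset({"STATUS"}), frozenset({"SALARY"})),
]

# Transitivity: {ENAME} -> {SALARY} is derivable, so decision d1 can be
# circumvented via STATUS unless access history is taken into account.
assert "SALARY" in closure({"ENAME"}, fds)
assert "SALARY" in closure({"STATUS"}, fds)
```

This is exactly why two individually innocuous requests can jointly disclose the protected \( <x, y> \) pairs.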
Since the subject has the right to see the individual objects being protected, one cannot reasonably hope to design a mechanism which completely prevents him from deducing any context-sensitive information. However, deduction based on data semantic dependencies can be understood and has to be guarded against. This paper provides an algorithm for enumerating all the paths from which one can obtain supposedly protected information. Protection of context-sensitive information is thus reduced to preventing the corresponding subject from forming any of these paths. In the paper, three enforcement schemes are suggested and their implementation is presented. 2. Definition of CXD Decisions As stated above, a CXD decision is one which disallows a subject from relating one object to another, while permitting him to access individuals. With a view toward enforcement, this definition is not rigorous enough. Consider a CXD decision \( d_2 \) in which the salaries of employees can be read, but not together with the managers; such a decision would not be very meaningful if attributes SALARY and MGR in the EMP relation are mutually independent. Consider another CXD decision \( d_3 \) that does not allow the subject \( S \) to relate ENAME, SALARY, and ASSESSMENT. This decision should imply that the subject \( S \) is allowed to relate neither ENAME and SALARY nor ENAME and ASSESSMENT, if \( \{\text{ENAME}\} \rightarrow \{\text{SALARY}\} \) and \( \{\text{ENAME}\} \rightarrow \{\text{ASSESSMENT}\} \); otherwise, whenever \( S \) sees the employee names and their salaries, or the names and their assessments, he has obtained partial information that should be protected. Furthermore, if \( S \) accesses the names and the associated salaries in one request and the names and the associated assessments in another, he can obtain supposedly protected information via the employee names (i.e., primary keys). From the discussion above, it is useful to define a CXD decision more formally.
The following definition is based on the functional dependency between sets of attributes of relations; the attributes are the security objects in the relational context. Definition: A decision \( d \) is said to be a context-dependent decision if it does not allow the subject \( S \) to produce a relation consisting of a set of attributes (or objects) \( R_d \) with the following properties: (i) There exists a single-element set \( Y \subseteq R_d \) such that \( X \rightarrow Y \), where \( X = R_d \setminus Y \). (ii) There is no proper subset \( X' \) of \( X \) such that \( X' \rightarrow Y \); that is, \( Y \) is fully functionally dependent on \( X \). Let us denote such a decision as \( d_{X \cup Y} \). Here, \( Y \) is restricted to a single-element set because \( X \rightarrow \{A_1, A_2, \ldots, A_k\} \) implies \( X \rightarrow \{A_i\} \), \( 1 \leq i \leq k \). Based on this definition, decision \( d_1 \), which disallows \( S \) from producing a relation with \( R_{d_1} = \{\text{ENAME, SALARY}\} \), is a CXD decision since \( \{\text{ENAME}\} \rightarrow \{\text{STATUS}\} \) and \( \{\text{STATUS}\} \rightarrow \{\text{SALARY}\} \) imply \( \{\text{ENAME}\} \rightarrow \{\text{SALARY}\} \), while decisions \( d_2 \) and \( d_3 \) are not CXD decisions. However, if \( d_3 \) is decomposed into two decisions, one being \( d'_3 \), which does not allow \( S \) to produce a relation with \( R_{d'_3} = \{\text{ENAME, SALARY}\} \), and the other \( d''_3 \), which does not allow \( S \) to produce a relation with \( R_{d''_3} = \{\text{ENAME, ASSESSMENT}\} \), then both \( d'_3 \) and \( d''_3 \) are CXD decisions. 3. An Algorithm for Enumerating Access Paths to be Protected As defined above, the enforcement of a CXD decision \( d_{X \cup Y} \) is the prevention of the corresponding subject from producing any relation which contains the same pairs \( <x, y> \) as determined by FD \( X \rightarrow Y \), where \( x \) and \( y \) are instances of \( X \) and \( Y \), respectively.
Instead of checking whether the subject has produced such a relation, we first compute the set of access paths, from each of which one can derive the supposedly protected pairs \( <x, y> \) associated with the decision \( d_{X \cup Y} \), and then prevent him from obtaining any pair via these paths. The access paths associated with a decision \( d_{X \cup Y} \) are those that have a lossless join [1] and can be computed by means of the following theorem. **Theorem:** Let \( X \), \( Y \), \( W \) be sets of attributes with no common elements between them, and let \( X \rightarrow Y \). Then \( R_1 = X \cup W \) and \( R_2 = W \cup Y \) have a lossless join if and only if \( W \rightarrow Y \). **Proof:** (i) If part: From [1], we know that \( R_1 \cap R_2 \rightarrow R_1 \) or \( R_1 \cap R_2 \rightarrow R_2 \) implies that \( R_1 \) and \( R_2 \) have a lossless join. Since \( W \rightarrow Y \) implies \( W \rightarrow W \cup Y \), we have \( R_1 \cap R_2 = W \rightarrow (W \cup Y) = R_2 \). (ii) Only if part: From [1], if \( R_1 \) and \( R_2 \) have a lossless join, then there exist sets \( Y_1 \) and \( Y_2 \) such that \( R_1 \cap R_2 \Rightarrow Y_1 \), \( R_1 \cap R_2 \Rightarrow Y_2 \), \( Y_1 \cap (R_1 \cup R_2) = R_1 \), and \( Y_2 \cap (R_1 \cup R_2) = R_2 \), where \( R_1 \cap R_2 \Rightarrow Y_1 \) denotes the multivalued dependency [1,3] of \( Y_1 \) on \( R_1 \cap R_2 \). Let \( Y_1 = R_1 \) and \( Y_2 = R_2 \). We have \( R_1 \cap R_2 \Rightarrow R_1 \) and \( R_1 \cap R_2 \Rightarrow R_2 \). That is, \( W \Rightarrow X \cup W \) and \( W \Rightarrow W \cup Y \), and hence \( W \Rightarrow X \) and \( W \Rightarrow Y \). **Case (a) \( W \Rightarrow Y \):** The mixed rule of inference states that if \( A \Rightarrow B \) and \( C \rightarrow D \), where \( B \supseteq D \) and \( B \cap C = \emptyset \) (the empty set), then \( A \rightarrow D \). Let \( A = W \), \( B = Y \), \( C = X \), \( D = Y \).
Since \( W \Rightarrow Y \) (i.e., \( A \Rightarrow B \)) and \( X \rightarrow Y \) (i.e., \( C \rightarrow D \)), where \( Y \supseteq Y \) (i.e., \( B \supseteq D \)) and \( Y \cap X = \emptyset \) (i.e., \( B \cap C = \emptyset \)), these imply \( W \rightarrow Y \) (i.e., \( A \rightarrow D \)). **Case (b) \( W \Rightarrow X \):** Since \( X \rightarrow Y \), we have \( X \Rightarrow Y \). From \( W \Rightarrow X \) and \( X \Rightarrow Y \), using the transitivity property of multivalued dependencies, we have \( W \Rightarrow Y \setminus X = Y \), since \( X \cap Y = \emptyset \). Now we have the same case as (a). This completes the proof of the theorem. Note that, in fact, \( W \) and \( X \) have a mutual multivalued dependency, i.e., \( W \Rightarrow X \) and \( X \Rightarrow W \). Any set of attributes, say \( W \), on which \( Y \) is dependent will form (with \( X \)) a path \( X - W - Y \) from which one can obtain the same pairs \( <x, y> \) as determined by FD \( X \rightarrow Y \). We say that the paths \( X - Y \) and \( X - W - Y \) are equivalent. Given a CXD decision \( d_{X \cup Y} \), we define the set \( F(Y) \), consisting of sets of attributes, as follows: 1. \( Y \) is in \( F(Y) \). 2. If \( V \) is in \( F(Y) \) and \( U \rightarrow V \), then \( U \) is in \( F(Y) \). 3. No element is in \( F(Y) \) unless it so follows from (1) and (2). Thus the set \( F(Y) \) has at least two elements. Define \( \tilde{F}(Y) \) to be the set \( F(Y) \) excluding \( X \) and \( Y \), that is, \( \tilde{F}(Y) = F(Y) - \{X, Y\} \). Any element in \( \tilde{F}(Y) \) will form (with \( X \)) a path equivalent to \( X - Y \). Furthermore, if \( U \) and \( V \) are in \( F(Y) \), then the paths \( U - Y \) and \( U - V - Y \) (or \( V - Y \) and \( V - U - Y \)) are equivalent. Since \( X - U - Y \) (or \( X - V - Y \)) is equivalent to \( X - Y \), the path \( X - U - V - Y \) (or \( X - V - U - Y \)) is also equivalent to \( X - Y \).
If \( \tilde{F}(Y) \) has \( n \) elements, the total number \( S(n) \) of paths, originating in \( X \) and ending in \( Y \), equivalent to \( X - Y \) is \( S(n) = \sum_{i=1}^{n} \binom{n}{i} \cdot i! = \lfloor n!\,e \rfloor - 1 \), where \( e \) is the base of the natural logarithm and \( \lfloor x \rfloor \) is the greatest integer less than or equal to \( x \). Based on the theorem above, we can conclude that the total number of paths originating in \( X \) and ending in \( Y \), from which one can obtain the same pairs \( \langle x, y \rangle \) as determined by FD \( X \rightarrow Y \), is \( S(n) + 1 \) (including the path \( X - Y \) itself). For suppose \( W' \) is not in \( F(Y) \) but the path \( X - W' - W - Y \) is equivalent to \( X - W - Y \), where \( W \) is in \( F(Y) \); this means \( X - W' - W \) and \( X - W \) are equivalent, that is, \( R_1 = X \cup W' \) and \( R_2 = W' \cup W \) have a lossless join. From [1] (see (ii) of the theorem), we have \( W' \Rightarrow W \) and \( W' \Rightarrow X \). Following the same proof as cases (a) and (b) of the theorem (since \( W \rightarrow Y \) and \( X \rightarrow Y \)), we have \( W' \rightarrow Y \). This implies that \( W' \) has to be in \( F(Y) \), which contradicts the assumption. Consider a database consisting of a single relation \( EMP' \)(EMP#, ENAME, SALARY, STATUS, AGE) with EMP# and ENAME as candidate keys (i.e., having the unique identification property). Assume that \( \{\text{ENAME}\} \rightarrow \{\text{STATUS}\} \), \( \{\text{STATUS}\} \rightarrow \{\text{SALARY}\} \), and \( \{\text{ENAME}\} \rightarrow \{\text{AGE}\} \). We further assume that the subject \( S \) is subject to the decision \( d_1 \). Associated with the decision \( d_1 \) is the set \( F(\{\text{SALARY}\}) = \{\{\text{EMP\#}\}, \{\text{ENAME}\}, \{\text{STATUS}\}, \{\text{SALARY}\}\} \). Thus \( \tilde{F}(\{\text{SALARY}\}) = \{\{\text{EMP\#}\}, \{\text{STATUS}\}\} \) and \( S(n) = S(2) = \lfloor 2!\,e \rfloor - 1 = 4 \). The total number of paths that should be protected is therefore 5.
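The construction of \( F(Y) \) is itself a fixpoint, and \( S(n) \) is a direct sum. The Python sketch below reproduces the \( EMP' \) figures from the text; the helper names and the representation of FDs (candidate keys expanded into explicit FDs) are our own illustration, not the paper's algorithm.

```python
import math

def closure(attrs, fds):
    """Attribute closure attrs+ under FDs given as (lhs, rhs) frozenset pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def f_set(y, candidates, fds):
    """F(Y): fixpoint starting from {Y}; add any candidate U with U -> V
    for some V already in F(Y) (tested via the attribute closure)."""
    f = {y}
    changed = True
    while changed:
        changed = False
        for u in candidates:
            if u not in f and any(v <= closure(u, fds) for v in f):
                f.add(u)
                changed = True
    return f

def path_count(n):
    # S(n) = sum_{i=1..n} C(n,i) * i!, which equals floor(n! * e) - 1.
    return sum(math.comb(n, i) * math.factorial(i) for i in range(1, n + 1))

A = lambda *names: frozenset(names)
# FDs of EMP'; EMP# and ENAME are candidate keys, so each determines the rest.
fds = [
    (A("EMP#"), A("ENAME", "SALARY", "STATUS", "AGE")),
    (A("ENAME"), A("EMP#", "SALARY", "STATUS", "AGE")),
    (A("ENAME"), A("STATUS")),
    (A("STATUS"), A("SALARY")),
]
candidates = [A(a) for a in ("EMP#", "ENAME", "SALARY", "STATUS", "AGE")]

F = f_set(A("SALARY"), candidates, fds)
F_tilde = F - {A("ENAME"), A("SALARY")}
assert F_tilde == {A("EMP#"), A("STATUS")}
n = len(F_tilde)
assert path_count(n) == math.floor(math.factorial(n) * math.e) - 1 == 4
assert path_count(n) + 1 == 5   # total paths to protect, including ENAME - SALARY
```

The two `assert`s at the end check both the closed form and the total of 5 protected paths stated in the text.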
They are \( \{\text{ENAME}\} - \{\text{SALARY}\} \), \( \{\text{ENAME}\} - \{\text{EMP\#}\} - \{\text{SALARY}\} \), \( \{\text{ENAME}\} - \{\text{STATUS}\} - \{\text{SALARY}\} \), \( \{\text{ENAME}\} - \{\text{EMP\#}\} - \{\text{STATUS}\} - \{\text{SALARY}\} \), and \( \{\text{ENAME}\} - \{\text{STATUS}\} - \{\text{EMP\#}\} - \{\text{SALARY}\} \). The computation of the access paths to be protected depends on the set of FDs associated with the database. The complete set of FDs is the union of the given set of FDs and the set of FDs implied by the class of semantic dependencies comprising FDs, multivalued dependencies, and join dependencies [9]. 4. Enforcement and Implementation Each CXD decision \( d_{X \cup Y} \) is associated with a set of access paths, denoted \( P_d \), from each of which one can obtain the pairs \( <x,y> \) that should be protected. Enforcement of the decision is reduced to preventing the corresponding subject from obtaining these pairs via any of these paths. There are at least three enforcement schemes which can prevent a subject from obtaining the supposedly protected pairs \( <x,y> \). The first is to prevent the subject from seeing the instances \( x \)'s or \( y \)'s, but not necessarily both; any access to the withheld instances is disallowed, even in isolation. This scheme views each CXD decision as a context-independent decision and is not a suitable one, because it does not allow the subject to obtain data he has the right to see. The second is to keep the subject from seeing a specific pair of adjacent sets of objects (or attributes) in each path. For example, it may disallow the subject from seeing the \( i \)th pair \( (X_{i-1}, X_i) \) of the path \( X_0 (= X) - X_1 - X_2 - \ldots - X_{m-1} - X_m (= Y) \). This scheme converts the decision \( d_{X \cup Y} \) into \( S(n) + 1 \) decisions that do not depend on access history information, where \( S(n) + 1 \) is the total number of paths in \( P_d \).
Any access to a set of objects which contains the specified pair of sets of objects will be disallowed. Checking whether an access violates any security decision then depends only on the objects being interrogated. This scheme gives the subject greater flexibility of access than the first one. The third scheme is to keep the subject from accessing the last unaccessed pair of adjacent sets of objects in each path. If the \( j \)th pair \( (X_{j-1}, X_j) \) of the above path is the only one not yet accessed after the database has been used for a certain period, then a history-independent CXD decision on the pair \( (X_{j-1}, X_j) \) is added to the system. Any further access to that pair (not to individuals of the pair) will be subject to the decision. The pair of adjacent sets of objects which is to be protected is thus determined by the sequence of requests that have been submitted to the system. We believe that this scheme gives the subject maximum flexibility of access. It requires the system to keep a (possibly) great amount of access history information, however. The history information can be kept by associating a counter with each path in the set \( P_d \). Whenever a query is submitted, a check is made to examine whether any pair in each path is contained in the set of attributes of the query. (Each query defines a set consisting of all attributes appearing in the query.) Any relevant counter has to be properly updated, and the corresponding pair in each path has to be deleted so that no pair is counted twice in a path. Once a counter reaches a value one less than the number of pairs in the corresponding path (i.e., there is one pair not yet accessed), a CXD decision on that remaining pair should be added to the system. The new decision will reside in the system and will be invoked for enforcement if any further access to that pair occurs.
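The bookkeeping of the third scheme can be sketched as a small in-memory monitor: per protected path, record which adjacent pairs have been accessed, and once only one pair remains, block further access to it. The class and method names below are ours, not the paper's; this is a minimal sketch, not a full DBMS integration.

```python
class PathMonitor:
    """Sketch of the third enforcement scheme: track, per protected path,
    which adjacent pairs have been accessed; once only one pair remains,
    block any query that covers that last pair."""

    def __init__(self, paths):
        # Each path is a list of nodes (frozensets of attributes); its
        # protected pairs are the adjacent (node, node) edges.
        self.remaining = [
            {(p[i], p[i + 1]) for i in range(len(p) - 1)} for p in paths
        ]
        self.blocked = set()

    def check(self, query_attrs):
        """Return True iff the query may run; update the access history."""
        # Deny any query that covers an already-blocked pair.
        if any(a | b <= query_attrs for a, b in self.blocked):
            return False
        for path in self.remaining:
            touched = {(a, b) for a, b in path if a | b <= query_attrs}
            path -= touched              # a pair is never counted twice
            if len(path) == 1:           # one pair not yet accessed:
                self.blocked |= path     # protect the last pair
                path.clear()
        return True

S = frozenset
# The path ENAME - STATUS - SALARY for decision d1.
mon = PathMonitor([[S({"ENAME"}), S({"STATUS"}), S({"SALARY"})]])
assert mon.check(S({"ENAME", "STATUS", "MGR"}))   # first request allowed
assert not mon.check(S({"STATUS", "SALARY"}))     # completing pair denied
```

The first request consumes the (ENAME, STATUS) pair; the (STATUS, SALARY) pair then becomes the lone unaccessed pair and is blocked, so the second request is denied.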
The second scheme is obviously easier to implement than the third, since the system does not need a long "memory" to keep track of previously accessed records in order to grant or deny a present request. Furthermore, it can be implemented to perform adequately without a significant increase in response time or (security) cost. It is therefore recommended. 5. Conclusion We have defined and enforced a class of CXD security decisions. A CXD decision \( d_{X \cup Y} \), which depends on access history information, can be enforced by first enumerating all the possible access paths which can relate \( X \)-instances and \( Y \)-instances, and then protecting each path by a history-independent CXD decision. Any deduction of context-sensitive information based on data semantic dependencies is thus disallowed. We have also suggested three ways of implementation. 6. References
Synchronizing On-Chip Software and Hardware Traces for HLS-Accelerated Programs Matthew B Ashcraft Brigham Young University Provo, UT matthew.b.ashcraft@byu.edu Jeffrey Goeders Brigham Young University Provo, UT jgoeders@byu.edu Abstract—Complex designs generated from modern high-level synthesis tools allow users to take advantage of heterogeneous systems, splitting the execution of programs between conventional processors, and hardware accelerators. While modern HLS tools continue to improve in efficiency and capability, debugging these designs has received relatively minor attention. Fortunately, recent academic work has provided the first means to debug these designs using hardware and software traces. Though these traces allow the user to analyze the flow of execution on both the software and hardware individually, they provide no means of synchronization to determine how operations on one device affect the other. We address this challenge by introducing a synchronization technique that keeps track of operations on shared objects. We identify objects shared between hardware and software and their memory operations, and use unique identifiers to synchronize the traces around these operations. We explore the added costs of this technique on execution time and hardware and software resources, and ways to reduce it through multiple synchronization schemes. This is demonstrated in an open-source prototype targeting the hybrid flow of the open-source HLS-tool LegUp. Index Terms—High-level synthesis, HLS, debugging, Synchronization I. INTRODUCTION As the use of high-level synthesis tools has increased, they have come to provide support for more complex systems, including heterogeneous systems. In heterogeneous systems part of the program is executed as software on a conventional processor, and the other parts are implemented as hardware accelerators via HLS flows to run on an FPGA. 
Tools such as Altera OpenCL SDK, Xilinx’s SDAccel and SDSoC, and LegUp HLS have come to support this functionality. Though these tools have greatly simplified the means of generating complex designs for heterogeneous systems, understanding the resulting designs can be quite challenging. The HLS-generated hardware designs for FPGAs are complex and often require hardware experts to understand them. Adding in the software and the interface between the two makes it even more challenging. Some tools have been developed to debug these systems using simulation, but those have their own limits. There are situations in which on-chip debugging may be necessary, such as bugs from IO, parallelism, or bugs that take an extended amount of execution time to arise. One of the commonly used on-chip debugging techniques is trace-based debugging. Trace-based debugging relies on the user recording variables during execution, including those affected by the bug, and working backwards from the recorded data up to the root cause. Due to limited memory, users have to select variables to record and analyze post-execution. If the root cause is not identified from the recorded variables, the user can select different variables to be recorded and repeat execution. This process is repeated until the root cause is found. Each iteration, the user gains more information on the effects of the bug from the recorded variables, and eventually its origin. Much has been done to improve trace-based debugging for HLS-generated hardware, including giving the user access to more data [1] and more control over what is observed [2]. However, all of these works have focused on debugging the HLS hardware in isolation, ignoring designs for heterogeneous systems. Our past work extended this software-like visibility to complex designs for heterogeneous systems [3], or HLS-accelerated programs.
We demonstrated techniques to additionally capture software traces, and present both hardware and software trace data to the user at the source code level. This allows users to work through both hardware and software traces in order to identify and understand bugs. Unfortunately, it does not provide the means to synchronize the hardware and software traces. This lack of synchronization between the hardware and software traces can prevent the user from understanding the effects hardware and software have on each other. Recording data to the traces from explicit data transfers can give some indication of how the traces line up in execution, but only as often as the data is explicitly transferred. Conversely, objects in shared memory can be accessed by both the hardware and software at any time, possibly without any indication on the opposing device that the access has occurred. Without some way of synchronizing the traces, it may not be possible for the user to determine which loads on software are affected by stores on the hardware, and vice versa. If the bug the user is following through execution involves shared objects, it may be very challenging for them to determine its root cause. To address this problem, we have developed a technique to synchronize the hardware and software traces when performing in-system debug of hybrid HLS systems. Our technique is based around unique identifiers which are shared between the hardware and software, and represent the state of the system at a given point in time. The identifiers are recorded to the hardware or software trace buffers throughout the program, allowing the hardware and software traces to be synchronized at each of these points post-execution. This helps the user gain visibility into the interactions between the hardware and software, and hopefully helps locate the root cause of complex bugs.
To test the effectiveness and overhead of our technique, we have developed multiple synchronization schemes based around memory accesses to shared objects and explicit user-inserted synchronization calls. We have explored the costs of these in terms of hardware and software resources, and execution time, through a proof-of-concept implemented in the open-source HLS tool LegUp. II. BACKGROUND AND RELATED WORK One of the main benefits of using HLS tools to generate complex designs for heterogeneous systems is the ability to debug the designs at a high level. Due to the nature of HLS tools, these designs are often written using a software programming language. This allows users to debug their designs prior to using the HLS tool through software execution and generic software debugging tools. Additionally, tools for hardware and software simulation can be used to identify flaws in the overall design and other bugs that did not arise during software execution [4], [5]. While software execution and hardware and software simulation offer excellent observability for debugging and optimization, there are still certain types of bugs that require on-chip debugging of HLS-generated circuits. If the generated circuit interacts with other hardware or data streams (I/O, legacy IP, network traffic, media streams, etc.) it may not be possible to test all possible interactions in simulation. Other bugs are the result of parallelism, which simulation tools are unable to reproduce. Further, some elusive bugs may require long runtimes to expose, making simulation impractical. One of the widely used methods of on-chip debugging is trace-based debugging. As mentioned previously, trace-based debugging involves recording specific data to memory during execution, and then analyzing that data post-execution.
There are many challenges with trace-based debugging, including limited memory, allocating resources on the device to capture data, selecting the variables to record, and analyzing the trace-data to identify the bug. Recent work has addressed some of these trace-based debugging challenges as they pertain to HLS-generated programs, including increased control over selecting variables to record [2], and increasing the amount of data that can be recorded with a given amount of resources [1]. Another challenge particular to HLS-generated programs is understanding the generated design well enough to use the trace-data. HLS-generated RTL can be challenging to understand for even small designs, and even more so for large and/or complex designs. Without understanding these designs, the trace-data is not of much use. To address this, recent work has demonstrated techniques to allow trace-based debugging of HLS-generated designs to happen at the source-code level [6]. This is done by maintaining a correlation between the source code and generated RTL throughout the HLS flow of the HLS tool LegUp. Using this correlation, they map the captured trace-data to the RTL, then back to the source code, allowing users to step through the trace-data as if it pertained to source-code variables. Though all of this work substantially improves the means of trace-based debugging of HLS-generated designs, it only applies to hardware designs, not those of heterogeneous systems. Techniques for on-chip debugging of designs for heterogeneous systems are comparatively still in their infancy. Xilinx has added the ability to record a timeline of transfers between the software and hardware using their SDSoC tool [7]. Verma et al. [8] demonstrated how FPGA OpenCL code could be modified to add event counters, allowing for a printout of the ordering of different kernel events in the system.
Additionally, they added support for user-implemented watchpoints, which constantly observe specific addresses and record changes in their values. Though both of these make progress towards observing heterogeneous systems, they don’t capture the kinds of data for both devices that are commonplace in their respective debugging tools, such as control-flow or variable-value information. To overcome this lack of debugging support, our recent work [3] has sought to extend the source-code visibility of HLS designs in LegUp to designs for heterogeneous systems. This recent work built upon features others have added into the HLS tool LegUp [6]. These features automatically insert a circular trace buffer using on-chip memories during the RTL generation portion of the LegUp compiler. In addition to the trace buffer, they identify FSM and datapath signals that correlate to source-code variables, and record them into the trace buffer. This allows them to reconstruct the data-flow and control-flow of the hardware-accelerated software modules post-execution. In addition to taking advantage of these features, our recent work [3] added support for software traces and capturing software execution. The software trace consists of a circular buffer of a user-determined size with entries of values and IDs, similar to Figure 1, but without the sync ID (this will be explained in Section IV-C). The software tracing techniques are based on Instant Replay [9], in which data from loads and stores are captured, as well as unique IDs representing the location and value recorded in each entry. The ID and corresponding value’s datatype are recorded to an SQL database. During execution, data specified by the user are recorded to the software trace. This data can include control-flow information, loads or stores to specific memory locations, and/or function arguments. Post-execution, the software trace is read backwards, ID first, then value.
The IDs are used to query the SQL database to determine the data-type of the specified data, allowing for proper data extraction. The results of both the hardware and software traces are presented through a debugger GUI. This allows the user to step through either of the traces to determine what is happening on either machine. While this greatly expands the possibilities of on-chip debugging of CPU/FPGA-based heterogeneous systems, it is lacking a key component: synchronization. A. Debug Scenarios Involving Synchronization Synchronization is very important in the trace-based debugging cycle for heterogeneous systems when the effects of a bug appear on both hardware and software. If, in analyzing the hardware trace, the user determines that the bug came from software, it could be extremely difficult to determine when this happened on the software or to know which variables to add to the trace in order to identify the root cause of the bug. Since hybrid HLS systems are still an emerging technology with a relatively small user base, it is difficult to find real-world examples of this occurring, but there are a few hypothetical scenarios in which synchronization would be important: Case 1 The main computation of an algorithm is split between hardware and software, and repeats until an accuracy threshold is met. Though the hardware and software are not executing in parallel, they share large amounts of data through memory accesses. Somewhere during execution, an error in the shared data arises, and its results propagate throughout both devices. Synchronization allows the user to follow the effects of the bug back and forth through both devices to its origin. Case 2 The FPGA is configured as a bump-in-the-wire between the network and the processor, such as in the Microsoft datacenter architecture [10]. An error in the software is traced back to the results from hardware.
Through synchronization, the user can determine which hardware operation was directly responsible for the incorrect result, allowing them to further follow the bug back to its origin. Case 3 A collection of hardware accelerators are regularly fed work through buffers from a controlling software program in a producer-consumer relationship. When a hardware accelerator is fed invalid input, synchronization allows the user to determine which software operations were responsible for the data that should be traced during the next execution in the debug cycle. ### Table I: Synchronization IDs Example <table> <thead> <tr> <th>Software</th> <th>Hardware</th> </tr> </thead> <tbody> <tr> <td>Shared Object X</td> <td>Shared Object X</td> </tr> <tr> <td>Y = X</td> <td>X = Z</td> </tr> <tr> <td>trace [idx++] = Y</td> <td>trace [idx++] = X</td> </tr> <tr> <td>trace [idx++] = SyncID</td> <td>trace [idx++] = ++SyncID</td> </tr> </tbody> </table> ### III. Synchronization Synchronization allows the user to understand how the hardware and software affect each other during execution. For example, synchronized traces could provide profiling-type information, allowing the user to see that a software loop is consistently delayed by hardware operations on shared objects. Or it could allow the user to see that the result of a load in software was due to a specific store in hardware. In this latter case, synchronization would be essential to debugging an error involving shared objects. As we are focused on debugging heterogeneous systems, the remainder of this paper will focus on synchronization for debugging purposes. A. When to Synchronize Under an ideal scenario, synchronization could exist for every instruction, allowing the user to step through the hardware or software traces and know what the other device was doing at any point in time. Unfortunately, this is unrealistic. Any means of synchronizing is going to add overhead to the software, the hardware, or both.
This overhead comes in the form of extended execution time and resources. For every synchronizing operation on the software, extra instructions must be inserted, resulting in extended execution time. This can also be the case for the hardware, depending on available resources. Additionally, any synchronization between hardware and software will require extra logic and memory for storing the synchronization information on both devices. Our solution to this comes in the form of a synchronization technique based on unique identifiers called Synchronization IDs. The IDs are incrementing values that represent points in execution where the system was synchronized, i.e., the traces of both devices aligned and the sequence of operations on shared objects ordered. When a synchronization operation is needed, one of two sequences will follow. When shared memory has been modified, the synchronization ID is incremented, then recorded to the hardware or software traces. This incremented ID represents the agent that last modified the shared object. When shared memory is read, the synchronization ID is recorded to the hardware or software traces. An example of this is shown in Table I, where code is executing on the software and hardware concurrently. When the shared object \( X \) is modified, the SyncID is incremented, then stored. When \( X \) is read, the SyncID is stored. This ID is used post-execution to find the last modification to the same shared object, allowing the traces to be synchronized. In this example, the SyncID recorded in the software will match with a SyncID from the hardware, indicating which value of \( X \) was assigned to \( Y \). C. Synchronization Schemes The goal of the synchronization schemes is to achieve 100% synchronization, i.e., inducing a total access order on shared objects through synchronization, while minimizing the impact on performance.
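The SyncID protocol described above (Table I) can be sketched in C. This is an illustrative sketch only: names are ours, and in the real system the ID is maintained by a hardware module rather than a software-visible counter.

```c
#include <assert.h>

/* Illustrative sketch of the Synchronization ID protocol.  In the
 * actual system the ID lives in a hardware module and is fetched via
 * a volatile load of a memory-mapped register. */

static unsigned sync_id = 0;   /* models the hardware-managed SyncID */
static unsigned trace[64];     /* models one device's trace buffer   */
static int idx = 0;

/* A store to a shared object: increment, then record the ID. */
void sync_on_write(unsigned value)
{
    trace[idx++] = value;
    trace[idx++] = ++sync_id;
}

/* A load from a shared object: record the current ID unchanged. */
void sync_on_read(unsigned value)
{
    trace[idx++] = value;
    trace[idx++] = sync_id;
}
```

Post-execution, a read entry can be matched to the write entry carrying the same SyncID, identifying which store produced the value that was loaded.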
To this end, we propose three synchronization schemes that can achieve 100% synchronization depending on program layout, while minimizing the impact on the program. 1) Scheme #1 - Memory Instructions: This scheme is focused on synchronizing each memory instruction on shared objects. Under this scheme, modifying shared objects in hardware or software results in the synchronization ID being incremented, and then stored in its respective trace. Reading shared objects in hardware or software results in the synchronization ID being stored in their respective trace. This technique guarantees 100% synchronization, as all reads and writes of shared objects are synchronized. 2) Scheme #2 - Basic Block of Memory Instructions: Synchronizing each memory instruction on shared objects, while the most thorough, usually results in higher overhead than needed. In the case where there are multiple memory instructions that access shared objects within a single basic block (a small section of code with a single entry and single exit), it might be more efficient to synchronize once per basic block rather than for each memory instruction. An example of this can be seen in Figure 2. The first section of code represents the source code, whereas the second represents the resulting loads and stores of the corresponding representation in the HLS tool. In this example \( x \), \( y \), and \( z \) are all shared objects used in computing the new value of \( x \). Under the previous synchronization scheme this code would result in four different synchronizations, one for each load and store, potentially resulting in excess overhead. If the user knows the hardware is not modifying these values concurrently, then they might only need to synchronize once each loop iteration, or even once before and after the loop, in order to maintain 100% synchronization. For this scheme, we replace all synchronizations in each basic block with a single synchronization at the end of the basic block.
This synchronization acts like a write to a shared object, incrementing and then recording the synchronization ID. In the case of this example, the synchronization would occur after the store instruction, before continuing the loop. 3) Scheme #3 - Direct Synchronization: The last synchronization technique is that of user-assisted synchronization. Users who understand their code might have a better understanding of when synchronization is needed. In the case of the example in Figure 2, the user might determine that synchronization is only needed before and after the loop due to locks, lack of parallelism, or other means. This could greatly reduce the overhead while still providing the means of 100% synchronization. Additionally, direct synchronization could be extended to apply either of the previous techniques to only certain shared objects. Though 100% synchronization would not be maintained for the entire program, as long as the user is able to maintain 100% synchronization on the shared objects important to them, that should be sufficient. IV. IMPLEMENTATION We implemented our techniques in the open-source HLS tool LegUp. This tool is built within the LLVM compiler infrastructure [11], operating on the intermediate representation (IR) of the code. This IR is an assembly-like representation of the code that is independent of the source-code language or the generated target architecture. This allows us to more easily analyze and modify the code within the LegUp tool. A. Design Flow Our implementation is based on the hybrid flow of the LegUp tool, taking advantage of previous open-source modifications in [6] and [3]. The original design flow is shown in Figure 3. First the C-based source code is optimized using standard compiler optimizations, and the LLVM IR code is generated.
This code is then partitioned according to user specifications into two separate pieces of IR: one for the code remaining on the software, and one for the code to be transformed into hardware logic. From this point on, the software and hardware IR are handled separately. The software IR is modified to capture data, and debugging logic is added to the hardware to capture trace data to a circular trace buffer. The traces are retrieved post-execution, and are parsed using data in an SQL database generated during the HLS flow. These traces are then shown to the user in the form of a debugger GUI. In order to implement our synchronization technique, modifications are required throughout the HLS flow. These modifications are discussed below. B. Identifying Shared Objects In order to properly modify the hardware and software code, we need to identify the objects shared between both devices. These are objects that are accessed by both hardware and software through memory instructions. To identify these objects, the program is analyzed prior to the partitioning of software and hardware IR, as shown in Figure 3. At that time, each of the global variables is analyzed to determine if there are load or store instructions accessing it from both the hardware and software. If such memory instructions are found, then the global variable and its memory instructions are added to lists of shared objects and instructions. These lists are later used to properly modify the software IR and in the hardware generation process. C. Software Implementation Software modifications are necessary to provide support for each of the synchronization schemes. For the first scheme, synchronizing on memory instructions, we insert recording functions similar to those found in [3] for each of the memory instructions accessing shared objects in the software IR. These functions record the synchronization ID, any associated data, and the unique ID representing the location and data being recorded.
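As a hedged illustration of such a recording function, the C sketch below uses a volatile load to fetch the SyncID from a memory-mapped hardware register. The function name, argument order, and entry layout are our assumptions, not LegUp's actual interface.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of a Scheme #1 software recording function.
 * sync_id_reg models the memory-mapped hardware register holding the
 * synchronization ID; the volatile qualifier forces a fresh read on
 * every call.  In the real system a different register address is
 * used for loads versus stores of shared objects (the store address
 * also increments the ID on the hardware side). */
void record_shared_load(volatile uint32_t *sync_id_reg,
                        uint32_t unique_id, uint32_t value,
                        uint32_t *trace, uint32_t *idx)
{
    trace[(*idx)++] = value;          /* the data being captured          */
    trace[(*idx)++] = unique_id;      /* ID naming the source location    */
    trace[(*idx)++] = *sync_id_reg;   /* current SyncID via volatile load */
}
```

In a real deployment the register pointer would be a fixed physical address; passing it as a parameter here simply keeps the sketch testable off-target.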
The layout of these entries is shown in Figure 1. The synchronization ID is managed by a hardware module, and is retrieved through a volatile load of a specific hardware address. The hardware address used is different when synchronizing load instructions versus store instructions. The volatile load is used to ensure the synchronization ID is always up to date. As will be noted in Section V, this volatile load, though necessary to retrieve the correct synchronization ID, has a substantial impact on performance. The second synchronization scheme, synchronizing the basic blocks of memory instructions, works similarly to the first, except all synchronizations in a basic block are replaced with a single synchronization at the end of the basic block. This synchronization only records the unique ID representing its location, and the synchronization ID, which is retrieved from the hardware address corresponding to software writes. The third synchronization scheme, direct synchronization, relies on direct function calls manually inserted by the user. We identify these calls, and replace them with the same synchronization calls from the second scheme. D. Hardware Implementation Unlike software, multiple instructions can execute in parallel during a given hardware state. So instead of synchronizing on a given instruction, the hardware has to synchronize during specific hardware states, similar to how Scheme #2 on software synchronizes after a set of instructions. Note that this is the only scheme we have implemented for the hardware. During the generation of RTL from hardware IR code, the LegUp HLS tool maintains correlations between the original source code and the RTL to be generated. We use these correlations and our shared-objects list to identify the hardware states in which memory operations on shared objects are occurring. Using this information we generate the synchronization ID module.
The hardware modifications required to support this module are represented in Figure 4. The synchronization ID module determines if the current hardware state (DUT State) contains memory operations on shared objects. If so, the module will process the synchronization ID based upon the operation occurring. The states that write to shared objects increment and then store the synchronization ID to the hardware trace. The states that read from shared objects record the synchronization ID to the hardware trace. These checks are done in order, so if writes and reads to shared objects happen in the same state, it will be treated as a write. When the synchronization ID module determines that a synchronization ID needs to be recorded to the hardware trace, it sends both the Sync ID and a Record signal to the debugging circuitry previously added to the LegUp HLS tool [12]. To preserve the original hardware trace data, we have modified this debugging circuitry to record both the trace data and the synchronization ID using dual-ported memory. The first port is used to record the normal debugging data previously described and implemented in [12]. The other port is only activated when the Record signal is set high by the synchronization ID module, at which point the synchronization ID is recorded to the hardware trace. In order to differentiate between the original hardware trace data and the synchronization IDs, we have added a single bit onto the beginning of each trace entry, as shown in the Trace Circuit in Figure 4. This bit is set high for entries containing the synchronization ID, and low under all other situations. This allows the user to differentiate between normal trace entries and synchronization IDs post-execution. The hardware-specific addresses accessed by the software are seen on the AXI bus with a read flag (AXI Read) and a specific read address (Read Addr).
The read address corresponding to software load instructions of shared objects returns the current synchronization ID. The read address corresponding to software store instructions to shared objects results in the synchronization ID being incremented and then returned. V. BENCHMARKS AND RESULTS Our synchronization technique impacts program execution in two main ways: execution time and trace entries. To better understand its impact, we have collected execution times for various levels of observation with and without synchronization, and for each of the synchronization schemes. These times have been collected using a high-resolution hardware timer that was added to the designs. Additionally, we have measured the number of trace entries to both hardware and software traces to better understand the impact of our technique and synchronization schemes. We have gathered these results on the Terasic Cyclone V DE1-SoC board. A. Benchmarks To test the effect of our synchronization technique and schemes, we have gathered data from two benchmarks from the Rodinia benchmark suite [13], [14]. These benchmarks are Back Propagation (backprop) and Speckle Reducing Anisotropic Diffusion (SRAD). For testing purposes, we have changed explicit data transfers between hardware and software to global variables. This allows for a more thorough testing of our techniques due to memory operations on shared objects, as opposed to explicit data transfers which naturally synchronize the devices. The backprop benchmark is a machine-learning benchmark focused on neural networks. It consists of two phases: a forward phase that computes weights, and a backward phase that measures the error and computes new input weights. Each of these phases contains multiple memory operations on shared objects, all within nested loops. Our version of the benchmark computes the forward phase on the CPU, and the backward phase on the FPGA.
Additionally, we have sought to gather more data by placing an outer loop around the algorithm that iterates until the measured error reaches a threshold. The SRAD benchmark is a partial differential equation based diffusion algorithm. It is focused on removing speckles in an image without compromising important image features, and is commonly used on ultrasonic and radar images to improve image clarity. Our version of the benchmark executes the main compute of the partial differential equation in hardware, and the remainder in software, both of which are contained within an outer loop. The software portion of the main compute --- **Fig. 4: Hardware Implementation of Synchronization ID Module** **TABLE II: Hardware Trace Entries** <table> <thead> <tr> <th></th> <th>Trace Depth</th> <th>HW Trace Entries</th> <th>Buffers Filled</th> <th>Entries in Trace</th> </tr> </thead> <tbody> <tr> <td>backprop Baseline</td> <td>515</td> <td>21,552</td> <td>41.85</td> <td>2.39%</td> </tr> <tr> <td>backprop w/ Sync</td> <td>515</td> <td>32,089</td> <td>62.31</td> <td>1.60%</td> </tr> <tr> <td>srad Baseline</td> <td>1020</td> <td>14,910</td> <td>14.62</td> <td>6.84%</td> </tr> <tr> <td>srad w/ Sync</td> <td>1020</td> <td>20,758</td> <td>20.35</td> <td>4.91%</td> </tr> </tbody> </table> reads through the image during each iteration of the loop and calculates standard deviations to send to the hardware. B. Results One impact of adding synchronization entries to the trace buffers is the reduced availability of entries for traditional debug data. As mentioned previously in Section II, trace-based debugging relies on the user being able to record enough data during execution to determine the cause of the bug or to know which data to record during the next debug iteration. The more entries that are allocated to recording synchronization, the less traditional debug data is available after execution. 
The measurements in Section V-C and Section V-D are based around this reduction in traditional debug data after execution, or more specifically, the percentage of the overall trace entries that can fit in the trace. For example, if the software trace array could fit 50k entries, and there were 100k entries during execution, then 50% of the total entries could fit in the trace. If our technique were used with Scheme #1 and added 25k entries during execution, then only 50k out of 125k entries, or 40% of the total entries, would fit in the trace at any point in time. The more entries used by synchronization data, the lower the percentage of overall trace entries that fit in the buffer, and the lower the amount of debug data available to the user during this debug iteration. The impact on execution time is due to the extra memory operations in software for retrieving and recording the synchronization ID. Note that each of the tests achieved 100% synchronization. C. Impact on Hardware Trace Entries The hardware trace measurements shown in Table II demonstrate the effect of synchronization on each benchmark. Trace Depth represents the size of the hardware trace buffer in terms of the number of entries it can hold. Hardware Trace Entries represents the total number of entries to the hardware trace buffer throughout execution. Buffers Filled represents the number of times the buffer wrapped around to its starting address. Entries in Trace represents the percentage of total hardware trace entries that can fit in the hardware trace at any point in time. This data shows the overall reduction in data captured due to the increased number of hardware trace entries from synchronization data. Adding synchronization to backprop increases the total number of trace entries by almost 50%, meaning almost 50% of hardware states that recorded data to the trace buffer accessed a shared object.
Due to the extra trace entries, the hardware trace buffer is filled more frequently, reducing the percentage of debug data retained in the trace from 2.39% to 1.60% of all hardware trace entries during execution, a reduction of 33%. Adding synchronization to srad increases the number of trace entries by almost 40%. This in turn reduces the percentage of total entries retained in the trace from 6.84% to 4.91%, a reduction of 28%. D. Impact on Software Trace Entries The results from the software trace measurements are shown in Figures 5 and 6. For each of our tests, 64KB were allocated to the software trace array. Data was collected for each synchronization scheme under various levels of observation, including recording control-flow (CF), recording loads (LD) and recording stores (ST). The results are broken up into sections based on the synchronization scheme: No Sync, no synchronization [3]; Scheme #1, synchronizing on memory instructions (Sync); Scheme #2, synchronizing on basic blocks with memory instructions (SyncBB); and Scheme #3, user-directed synchronization (Direct Sync). Of note is the similarity in graph distribution between the benchmarks even though the percentages of trace maintained are orders of magnitude apart. This shows that the impact on the trace entries is similar even under vastly different code structures. Also of note is the absence of user-directed synchronization by itself. For these tests, the direct synchronization was placed in between each stage of backprop (each phase contained 2 stages), and before and after the hardware computation for srad. These locations provided 100% synchronization while still allowing 100% of the trace entries to fit in the software trace array, which would have skewed the results of the graph. E. Execution Time The other effect of adding synchronization is the increase in execution time, mostly due to the use of volatile loads of synchronization IDs by the software. This is shown in Figures 7 and 8.
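As a check, the derived columns of Table II and the reductions quoted above follow directly from the raw counts (trace depth and total entries); the variable names below are illustrative:

```python
# Reproducing the derived columns of Table II from the raw counts.
rows = {
    "backprop baseline": (515, 21552),
    "backprop w/ sync":  (515, 32089),
    "srad baseline":     (1020, 14910),
    "srad w/ sync":      (1020, 20758),
}

for name, (depth, entries) in rows.items():
    buffers_filled = entries / depth      # times the buffer wrapped around
    in_trace = 100.0 * depth / entries    # % of total entries still in trace
    print(f"{name}: filled {buffers_filled:.2f}x, {in_trace:.2f}% retained")

# Relative reduction in retained debug data:
print(f"backprop: {100 * (1 - 21552 / 32089):.0f}% reduction")   # ~33%
print(f"srad:     {100 * (1 - 14910 / 20758):.0f}% reduction")   # ~28%
```

The retained percentage is simply the fixed buffer depth divided by the total entries written, so every extra synchronization entry dilutes the debug data the user sees after execution.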
The greatest impact on performance is from synchronizing on memory operations in the backprop benchmark with an almost 15X increase in execution time. However, by moving the synchronization to the end of their basic blocks, the increase in execution time drops to just over 5X. Due to the structure of the code, we were able to use direct synchronization to reduce the increase to an almost negligible amount. srad saw much smaller increases in execution time due to the smaller amount of synchronizations, with a maximum increase of 0.045X the original execution time. This was not improved by synchronizing on the basic blocks due to the lack of multiple synchronizations in a single basic block. Similar to backprop, srad saw negligible increases in execution time from user-directed synchronization. These results show the viability of each of the synchronization schemes. For designs such as srad, synchronizing on every memory instruction only increases execution time to 1.045X the original, and guarantees 100% synchronization. For other tests, such as backprop, synchronizing at the end of basic blocks with memory instructions will usually still allow for 100% synchronization for a 5X overhead. If this overhead is too high, then manually inserting synchronization calls may be used if the user has a solid grasp of the data flow on shared objects throughout the program. VI. CONCLUSION AND FUTURE WORK This work demonstrated techniques to synchronize hardware and software traces from HLS-accelerated programs. Synchronization between these traces is necessary to understand how hardware and software affect each other, particularly through shared objects. To address this problem, we have put forth a synchronization technique based upon unique identifiers, and multiple synchronization schemes. The technique relies upon the hardware and software recording the identifiers to their respective traces when they access shared objects.
The schemes synchronize the software trace at either each memory operation on shared objects, basic blocks that contain those memory operations, or locations directed by the user. To demonstrate our proposed synchronization technique and schemes, we have implemented a prototype system based in the open-source HLS tool LegUp. We modified LegUp to identify the objects shared between hardware and software, as well as the memory operations that access them, and use that information to modify the software and hardware according to the synchronization schemes. At each synchronization location in the software, the program retrieves the identifier from the hardware, and records it to the software trace array. A hardware module maintains the synchronization IDs and identifies states in which to synchronize. During these states it records the identifier to the hardware trace using dual-port memory. Additionally, we added a bit to each hardware trace entry to differentiate debug data from the synchronization ID. To determine the effect of our technique and schemes on the design, we measured the impact on the percentage of total hardware and software trace entries that could fit within the trace buffer at any point in time, as well as the impact on execution time. We discussed situations in which each of the schemes may be most beneficial based upon the results. Future work in this area could aim to reduce the impact synchronization has on program execution, as well as expand these techniques to more widely used frameworks. REFERENCES
Practical Encrypted Mailing Lists Neal H. Walfield Johns Hopkins University and GnuPG Abstract Although email has been one of the most enduring electronic communication mediums and encrypted email has been possible for decades, encrypted mailing lists remain either a usability (and hence security) nightmare or rather insecure. We propose an extension to OpenPGP that makes encrypted mailing lists both easy to use and secure. Using our extension, a poster encrypts her message to all subscribers. The main difficulty is ensuring that posters have the current list of subscribers. Fortuitously, we can reuse OpenPGP’s existing key distribution mechanisms for this without modification. In this paper, we describe how to add encrypted mailing list support to OpenPGP including how to hide the subscriber list, we discuss the work flow for both subscribers and mailing list administrators, and we examine how the mailing list software can improve the user experience and further enhance the system’s security. Categories and Subject Descriptors D.4.6 [Security and Protection]: Cryptographic controls 1. Introduction Just as it is desirable to communicate with someone else securely, it can be desirable to communicate with a group of people securely. Mailing lists are a popular form of group communication for which there is poor support for encrypted communication. The solutions that are available are either insecure by design or have poor usability, which limits their adoption and undermines their security. We propose an extension to the OpenPGP standard [5] that adds support for encrypted mailing lists. We chose to extend OpenPGP, because it is the preferred standard for secure email. Thus, most people interested in encrypted mailing lists will probably already be using OpenPGP, which significantly lowers the barrier to adoption. Our OpenPGP extension allows adding a list of encryption keys to an OpenPGP key block.
The encryption keys are saved as subkeys, but the parameters are encrypted to hide the subscribers. To create an encrypted mailing list using this scheme, the mailing list administrator simply creates a new key with a special flag and adds each subscriber’s public key. To send a message to the mailing list, the poster just selects the mailing list’s public key, i.e., the same action as when sending an encrypted mail to an individual. The difference is in how the OpenPGP implementation handles the key: instead of encrypting the message using the key’s primary encryption key, it encrypts it to all of the listed keys. In addition to not introducing a new work flow, this approach only requires trusting the mailing list server to relay the message; it doesn’t require access to the plaintext like re-encryption gateways [1]. Nor can the relay collude with a list member to recover the private key, as is the case when using proxy re-encryption [3, 8, 9]. And, unlike when using proxy re-encryption, users can use their usual key. This means there is no need to import a new private key (which conditions users to trust private keys supplied by a third party), it simplifies reading mail on multiple devices, and it allows users to use smartcards. Importantly, propagating updates also doesn’t require any new infrastructure: we can use the existing key server infrastructure, which is used to propagate changes (revocations, etc.) to OpenPGP keys. In this paper, we present our OpenPGP extension for encrypted mailing lists and our implementation for GnuPG. We describe how to modify an OpenPGP key block to include a list of subscribers, how to hide the subscribers, how to efficiently update the list, how to propagate the updates, and how to post a message. We also suggest some checks that the mailing list software can use to improve operational security and usability. 2. Background Mailing list infrastructure simplifies discussions among a dynamic group of participants.
Instead of each poster tracking the set of currently interested and authorized participants, the posters send mail to a list server that forwards it to the subscribers. An encrypted mailing list has the additional requirement that emails are encrypted to each of the subscribers. This is in conflict with the main purpose of a mailing list: since encryption is done by the sender, the sender now needs the list of subscribers! There are two basic approaches to solve this problem. Either the subscribers’ keys are distributed to each poster or a poster encrypts the message to the mailing list’s key, which re-encrypts the message. This re-encryption can either be done directly, which exposes the plaintext to the middleware, or by way of proxy re-encryption. Although we know of several groups that distribute the public keys to each member, we are not aware of any software that simplifies verifying and importing these updates. This significantly decreases the usability of this approach, which in turn seriously harms this system’s security. The problem is that failing to verify updates to the subscriber list or not installing updates can allow an attacker to get the plaintext of at least some of the mailing list’s traffic. Schleuder is a popular remailer [1]. Like most remailers, Schleuder has a dedicated key. To post a message, a poster encrypts the message with just the remailer’s encryption key and the remailer decrypts the message and re-encrypts it for each subscriber. To add or remove a subscriber, the mailing list’s administrator just modifies the mailing list’s keyring. This approach avoids the key distribution problem, but the mailing list server must be trusted, since it handles the plaintext. One can argue that the mailing list server is just one more subscriber and thus giving it access to the plaintext only results in a marginal decrease in the system’s security. 
We are convinced, however, that the mailing list server is potentially more sensitive than individual subscribers, because mailing lists tend to be concentrated. Hosting facilities, such as SourceForge and GitHub, manage not just a few mailing lists, but a huge number. Thus, the scale of a potential compromise is much larger. Further, we know from Snowden’s revelations and the Lavabit fiasco that companies readily cooperate (willingly or not) with spying agencies. Another alternative is to use proxy re-encryption, which allows the mailing list server to re-encrypt a message without access to its plaintext [3]. This is the approach taken in PSELS [8, 9]. Using proxy re-encryption, each subscriber is supplied with a private key that is a random increment of a master key. To re-encrypt a message, the list server doesn’t need access to the mailing list’s private key, it simply adds the appropriate increment to the ciphertext. The main problem with re-encryption algorithms is that a subscriber must use a new secret key. This encourages bad security practices by conditioning the user to trust secret keys provided by third parties (in PSELS, they are sent by email). It means the user can’t use a smartcard. It makes it harder to read mail on multiple devices. And, users must manage many secret keys (one for each mailing list). Another problem is that the mailing list server and a subscriber can collude to recover the mailing list’s secret key [8]. 2.1 Goals and Requirements Our primary goals are to provide encrypted mailing list users with a similar level of security as OpenPGP provides for normal email, and the same work flow. Concretely, only subscribers should be able to access a message’s plaintext, and they should not have to install any more software than they normally do to use OpenPGP, or do anything more than what they usually do to send an encrypted email.
Further, only users who post to the mailing list should be required to have an OpenPGP implementation that supports our extension. The subscriber list should also not be public (but, we don’t want to hide the subscribers from each other since the messages partially reveal this information anyway). Using SMTP, it is impossible to protect email addresses in transit [6]. But, the difference between the resources required to downgrade TLS connections and passively observe SMTP traffic, and the resources to casually traverse some publicly and permanently stored data years later is huge. 2.2 OpenPGP OpenPGP is defined by RFC 4880 [5] and is both a message format for storing messages as well as a collection of algorithms that define how to encrypt, sign and encode data. An OpenPGP message consists of a number of packets, which logically form a nested structure. The most important packets are: symmetrically encrypted data (SED) packets, which contain ciphertext encrypted with a symmetric key; public-key and symmetric-key encrypted session key (SK-ESK and PK-ESK, respectively) packets, which contain a session key for decrypting an SED packet and are encrypted using a public or symmetric key; signature packets, which contain a digital signature over some other packet; and, public key and user id packets, which respectively contain public keys and human-readable identities. For a public key packet or user id packet to be considered valid, it must be followed by a signature packet whose signature was generated by the primary key. Signature packets also include metadata relevant to the signed packet. This includes cipher and hash preferences, supported features, an expiration time, and notations. Notations are key-value pairs. They can be used for extensions and to make assertions. If an implementation doesn’t understand some notation, it simply ignores it unless the notation’s critical bit is set. 
In this case, the implementation must conservatively refuse to do any operations with the key. Most of this information can be updated by generating a new self-signed data packet and sending the new key block to any communication partners. This key distribution problem is solved in OpenPGP using key servers: after modifying a key, the user uploads it to a key server and communication partners check for updates. OpenPGP treats the key block as an append-only log. This preserves a record of changes to a key’s expiry and prevents an attacker from revalidating a revoked key. For most properties, however, only the newest self-signature is relevant. Figure 1. When a poster unsubscribes, the CP-ABE policy prevents her from reading the new symmetric key. 3. Design To allow users to use their own keys and to ensure that the mail server does not have access to the plaintext, a poster needs to directly encrypt her message to each of the list’s subscribers. This means that we need to distribute the subscribers’ keys to each authorized poster. To do this, we can include the subscribers’ details in the mailing list’s key block and take advantage of the existing key distribution infrastructure to ensure that all posters transparently and quickly receive updates to the subscriber list. The main open design question is then how to store the subscriber list in the key block. There are two primary constraints. The first is that, since key blocks are effectively append-only logs, we need to be careful not to let them become too large. (Currently, GnuPG won’t upload key blocks that are larger than 20 MB to a key server, for instance.) Second, since the key block is public, we need to encrypt the list of subscribers so that only authorized posters can read it. A naïve implementation might encrypt the list of subscribers to the list of authorized posters each time the subscriber list is updated.
Although this protects the subscriber list, it makes inefficient use of the available storage: if \( n \) users (all posters) subscribe to a new list, there will be \( n \) updates, which will consume \( O(n^2) \) space. A better solution is one that stores just the changes. That is, when a user subscribes to or unsubscribes from the list, we don’t write out the whole subscriber list, but just a short record indicating what user was added or removed. The question now is how to encrypt these records. A simple solution is to encrypt the records using a symmetric key that is only available to authorized posters. Such a key can be generated when the list is created. Then, when a new poster is added to the list, this key is encrypted using the poster’s public key and added to the mailing list’s key block. This construct ensures that the subscriber list can only be read by posters. Further, since each update consumes \( O(1) \) space, \( n \) updates require just \( O(n) \) space! This approach has the disadvantage that new posters find out who unsubscribed, and removed posters can continue to decrypt records added after they were removed from the list. We can fix the latter problem by rotating the symmetric key when a poster is removed. This can be done efficiently using a ciphertext-policy attribute-based encryption (CP-ABE) scheme [2, 7] that supports non-monotonic (negative) access policies [10]. Using CP-ABE, keys are associated with a set of attributes and an access policy is associated with each ciphertext. To decrypt a ciphertext, we need the right key with a set of attributes that satisfies the policy. We can use CP-ABE to efficiently rekey when a poster is removed. When creating a list, we generate a new CP-ABE key. Then, when a poster is added to the list, a secret key is derived from the master CP-ABE key with a unique attribute, the key is encrypted using the new poster’s public key, and the result is included in the mailing list’s key block. 
To rotate the symmetric key, we encrypt the new symmetric key using the CP-ABE key with the access policy \( \lnot X \), where \( X \) is the attribute of the poster being removed, and then we encrypt the result using the current symmetric key. See Figure 1. The access policy prevents \( X \) from accessing the new symmetric key and, as such, from decrypting subsequent records. Encrypting with the current session key means that we only have to exclude \( X \) and not all posters who have been removed in the past. This is essential since the storage requirements of the access policy are \( O(n) \), where \( n \) is the size of the formula. Since our scheme only has a single condition, the size of the ciphertext is \( O(1) \). This scheme does not protect against collusion. By construction, when Alice is unsubscribed from the mailing list, she can decrypt the symmetric encryption, but not the CP-ABE encryption protecting the new session key. She can, however, provide the CP-ABE ciphertext to another unsubscribed user, who can use his CP-ABE key to decrypt it. Excluding every unsubscribed user in the access policy would cause the size of the ciphertext to be \( O(n) \), which is what we were trying to avoid. If this attack is a real threat, then a simple solution is to simply rotate the mailing list’s key. Although this scheme prevents removed posters from reading future events, new posters can still determine all past subscribers even if they are no longer subscribed. This is necessary, because, by construction, a poster traverses all events to determine the current list of subscribers. We consider this a minor security problem: in practice, subscribers are often given access to the mailing list’s archive, which allows them to largely reconstruct the list of past subscribers anyway. If this is a serious problem, the mailing list’s key can be periodically rotated. In this case, only those subscribers who were removed since the last key rotation are exposed.
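The change-record log and the rekeying step above can be illustrated with a toy model. Keys here are opaque integers and "CP-ABE" is reduced to a plain attribute check; this sketches only the record structure and replay logic, not the cryptography (a real design would use an actual CP-ABE scheme, and all names below are hypothetical).

```python
# Toy model of the encrypted change log and rekeying records.
import itertools

class MailingListLog:
    def __init__(self):
        self._keys = itertools.count()
        self.sym_key = next(self._keys)      # key 0
        self.log = []                        # append-only, like the key block

    def add(self, user):
        self.log.append(("add", self.sym_key, user))     # O(1) per record

    def remove(self, user):
        # Rekey under policy "not user", wrapped with the *current* key,
        # so past removals need not be re-excluded (policy stays O(1)).
        new_key = next(self._keys)
        self.log.append(("rekey", self.sym_key, ("not", user), new_key))
        self.log.append(("remove", new_key, user))
        self.sym_key = new_key

def replay(log, poster):
    """Reconstruct the subscriber set as a given poster sees it."""
    members, key = set(), 0
    for rec in log:
        if rec[0] == "rekey":
            _, wrapped_with, (_, excluded), new_key = rec
            if wrapped_with != key or poster == excluded:
                break            # poster cannot follow the chain further
            key = new_key
        elif rec[1] == key:
            op, _, user = rec
            (members.add if op == "add" else members.discard)(user)
    return members

ml = MailingListLog()
ml.add("alice"); ml.add("bob"); ml.remove("alice"); ml.add("carol")
print(sorted(replay(ml.log, "bob")))    # ['bob', 'carol']
print(sorted(replay(ml.log, "alice")))  # stops at her rekey: ['alice', 'bob']
```

Note how the removed poster can still read everything up to her own rekey record, which is exactly the property discussed above: rotation limits, but does not erase, what past posters learned.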
To rotate a mailing list’s key, we simply revoke the mailing list’s key and issue a new one in the usual way. By indicating the new key in the old key’s revocation certificate, the rotation can be fairly painless. In the future, this should be entirely transparent: we have submitted a proposal for the next version of the OpenPGP specification that provides a standard, machine-readable way to indicate the new key. 4. Implementation We now consider how to integrate our design into OpenPGP. The main issues are: creating a list; adding a subscriber; removing a subscriber; and, posting a message.
### 4.1 Mailing List Creation To create a mailing list, we start by generating a new OpenPGP key in the usual way. To indicate that the key corresponds to a mailing list, we set the mailing-list notation1 in the primary key’s self-signed data or the primary user id’s self-signed data (although more appropriate, the former is rarely used in practice). Further, since notations are not normally shown, we set the user id’s comment to mailing list. The notation’s critical bit doesn’t need to be set if the mailing list server can recognize that a mail was only encrypted to the mailing list, which it usually can by checking the key ids stored in any PK-ESK packets. If this happens, it can forward the message to the list’s owner, who can re-encrypt it. This clearly introduces some latency and an additional burden on the mailing list’s owner; however, it provides some additional backwards compatibility. To allow an easy upgrade path, the value of the mailing-list notation could either be a version identifier or a list of required or desired features.
We store the initial symmetric key used to encrypt the subscriber list (key 0) in the primary user id’s self-signed data under the subscriber-list-session-key notation. This notation contains a PK-ESK packet that is encrypted using the CP-ABE key with an unrestricted access list. This allows any poster to access it and, by extension, the list of subscribers.

In addition to the address for the mailing list’s exploder, mailing lists typically also have an alias for reaching the mailing list’s owner. Since the mailing list’s owner controls the list’s key, we can make it easier for subscribers to securely reach the mailing list’s owner by adding an appropriate user id. To make its purpose clear, the comment should be set to a standard string, perhaps mailing list: owner. Related addresses, such as one for an email accessible interface, shouldn’t be directly added to the key: they need a different secret key. To make them accessible, they can be specified using some standardized notations.

When creating the key, the mailing list owner should choose reasonable preferences (preferred cipher, hash, etc.). When a key is added to the list, the OpenPGP implementation should check that the key supports the chosen preferences. This avoids multiple subscribers with incompatible preferences forcing a downgrade to weak defaults.

### 4.2 Adding a Subscriber

To add a new subscriber to the mailing list, we need to add the user’s encryption key to the list of subscribers. The key needs to be encrypted so that only posters can read it. And, if the subscriber is authorized to post to the list, we need to derive a CP-ABE key for her. To add a subscriber’s key, we simply store it in a new subkey packet. (If a subkey already exists with the specified public key, then we don’t create a duplicate. This happens when a user unsubscribes and then later resubscribes.)
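The add-a-subscriber step above can be sketched with plain dictionaries standing in for parsed subkey packets; the `public_key` field name is a hypothetical stand-in, not an OpenPGP packet field:

```python
def add_subscriber(subkeys, new_key):
    """Append a subscriber's encryption key as a new subkey record.

    If a subkey with the same public key already exists (the user
    unsubscribed and later resubscribed), no duplicate is created.
    """
    if any(s["public_key"] == new_key["public_key"] for s in subkeys):
        return subkeys  # already present: don't create a duplicate
    return subkeys + [new_key]
```

Re-adding an existing key is thus a no-op, which matches the resubscription behavior described above.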
It is not possible to fully store an OpenPGP key in a subkey packet: an OpenPGP key consists of a primary key, subkeys, user ids, preferences, signatures, etc. However, the only data that we need to store is the data required for a poster to encrypt a message to the subscriber; everything else is irrelevant. This data consists of the user’s encryption key and a bit of meta-data, specifically, the key’s creation time. This is needed to compute the key’s id, which is stored in the PK-ESK packet to make it easy to find the right decryption key. This data fits perfectly in the existing subkey structure and its corresponding self-signature. Because an OpenPGP key may have multiple valid encryption keys, the OpenPGP implementation needs to choose one if the subscriber did not specify the one to use. Although OpenPGP does not make a recommendation of how to choose among multiple valid encryption keys, in GnuPG the newest valid encryption-capable subkey is used, and we recommend this approach here as well.

---
¹ Actually, mailing-list@gnupg.org. Unstandardized notations must include the vendor’s domain name, but we exclude it here due to lack of space.

#### 4.2.1 Privacy

Because we want to protect the user’s identity, we encrypt the user’s public key parameters with the current symmetric key. We store the encrypted parameters in an SED packet under the public-key notation on the subkey. We also store the index of the symmetric key used to encrypt them in the public-key-encrypted-with notation. (Note: we just use a simple encrypted packet and not one that is integrity protected, because notations are already signed.) We replace the original public key parameters with a small fixed integer (specifically, the number 2). We chose this instead of using a random number, because generating good keys is expensive and generating bad keys makes analysis of valid keys (e.g., [4]) more difficult.
Further, this provides a cheap check (for both machines and humans) to determine whether the key is a mailing list subscriber key. We replace the key’s creation time with the current time. The most important thing here is to make sure that the selected time is unique among the subscribers. The issue is that the key id is computed from the key parameters and the creation time. Since the key parameters are now constant, the key id is entirely determined by the creation time. Using our scheme, this can result in duplicate key ids when rapidly adding subscribers to a list. Duplicate key ids can confuse OpenPGP implementations, because signatures reference keys within the same key block using just the key id. If we detect a duplicate, we simply increment the time by one second and recheck. If another method is used to choose the creation time, it is also important to avoid dates from the future, as this can result in gratuitous warnings.

If the user is a poster, then we also set the notation subscriber-list-key to a CP-ABE secret key with a unique attribute. Concretely, we store the CP-ABE key in a secret key packet encapsulated by a PK-ESK packet that encrypts the data using the poster’s public key. Since PK-ESK packets normally include the key id needed to decrypt them and we want to protect the poster’s identity, we set the key id to 0. This is a well understood GnuPG extension to hide the key id. Unfortunately, this means that for a poster to find her CP-ABE key, she needs to try to decrypt all of the subscriber-list-key notations. Further, at least GnuPG will only try to decrypt PK-ESKs with hidden recipients if explicitly configured to do so. To overcome these problems, we propose a new scheme called partially hidden key ids. Using this feature we expose, say, 8 bits of the user’s key id and clear the other 56 bits.
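Two of the bit-level details above lend themselves to a short sketch: the bump-the-timestamp loop that avoids duplicate key ids, and the masking behind partially hidden key ids. `derive_key_id` is a hypothetical stand-in for the real derivation (in OpenPGP the id comes from a hash over the key parameters and creation time), so this is illustrative only:

```python
def unique_creation_time(now, used_ids, derive_key_id):
    """Pick a creation time whose derived key id is not already taken.

    With constant key parameters, the key id is a function of the creation
    time alone; on a collision we increment the time by one second and
    recheck, as described above.
    """
    t = now
    while derive_key_id(t) in used_ids:
        t += 1
    return t

def partially_hide(key_id, expose_bits=8):
    """Keep only the low `expose_bits` bits of a 64-bit key id, clearing
    the rest (56 bits cleared for the suggested 8-bit exposure)."""
    return key_id & ((1 << expose_bits) - 1)

def candidate_secret_keys(exposed_id, own_key_ids, expose_bits=8):
    """A recipient only has to try the secret keys whose low bits match
    the exposed id, instead of every secret key she owns."""
    return [k for k in own_key_ids
            if partially_hide(k, expose_bits) == exposed_id]
```

With a few hundred subscribers and an 8-bit exposure, the candidate list will usually contain just one of the recipient's keys, which is the usability point made below.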
Unlike a 64-bit id, which provides a very good indicator of the likely key given millions of potential keys, 8 bits reveal very little about the actual key. At the same time, this significantly reduces the number of PK-ESK packets that the user has to try to decrypt (most encrypted mailing lists are unlikely to have more than a few hundred subscribers), and will often uniquely identify the required decryption key (since most users won’t have more than a few secret keys). Having to try just a single key is important, as it reduces gratuitous passphrase prompts and smartcard swapping. Further, some information about the key is leaked by the ciphertext anyway. For instance, messages encrypted with RSA reveal some information about the public key: an encrypted packet contains a random number chosen uniformly between 0 and the public modulus minus one. With enough messages, it is possible to recover the most significant bits.

### 4.3 Removing a Subscriber

To remove a subscriber, the mailing list administrator simply expires the relevant subkey in the usual fashion. If the subscriber was also a poster, then we also set the notation subscriber-list-session-key to a new symmetric key, which will be used to encrypt future events. As previously described, this key is encrypted with the CP-ABE key and the current symmetric key, whose index we also store in the subscriber-list-session-key-encrypted-with notation. We’d like to use an SK-ESK packet to store the new symmetric key. But, despite the name, SK-ESK packets are not directly encrypted with session keys. Instead, they are encrypted with passphrases that are turned into session keys using the S2K key derivation function [5]. Instead of introducing a new extension, we simply convert the symmetric key to hexadecimal and use it as the passphrase.

### 4.4 Sending a Message

To send a message, a poster needs to first get the current list of subscribers (or rather, their keys).
If the mailing list key hasn’t been refreshed recently, the OpenPGP implementation should first do this or, at the very least, print a warning that the mail might not reach all current subscribers. To get the list of subscriber keys, we just need to iterate over the subkey self-signatures. The ordering is important, because we rotate the symmetric key used to encrypt the subscriber data when a poster is removed. As already noted, the first symmetric key is in the primary key’s self-signed data. To find the subscribers added before the first key rotation, we find all subkeys that were encrypted by that key, which we can easily do by finding all self-signatures whose public-key-encrypted-with notation is 0. (Note: if any of the subkeys have expired, then the user has unsubscribed and should not be included in the subscriber list.) To get the next symmetric key, we find the valid self-signature that contains the subscriber-list-session-key notation encrypted using the current session key (again using the subscriber-list-session-key-encrypted-with notation). If there are any unprocessed self-signatures, we repeat the above steps with the new index. Otherwise, we are done.

## 5. Increasing Usability and Security

Because keys may be updated and revoked, it is essential that the mailing list owner periodically refresh the subscribers’ keys to make sure that they are still valid and that the best encryption key is used. (This should, of course, be automated.) If this is not the case, then the offending subkey should be expired or rotated, respectively. When the mailing list software receives a mail, it should first check that the set of apparent recipients (as determined by the key ids in the PK-ESK packets) matches its view of the subscriber list. (The mailing list owner needs to provide this directly to the mailing list software or provide it with the CP-ABE key so that it can decrypt the subscriber list.
It should obviously not be added as a subscriber, as then it would be able to read the plaintext.) If some subscribers are excluded or some unsubscribed keys are included, and the recipients are not explicitly listed in the mail’s to or cc header, the mail should be held and the poster informed that her version of the mailing list key is probably not up to date. If the message is not encrypted at all, then the mailing list software should warn the user. It can also refuse to post the message and send a note to the mailing list owner to make her aware of the subscriber’s poor opsec practices.

Before forwarding a mail, the mailing list server can sign the message. This can’t be done using the mailing list’s key, since it is not available. Instead, a special subkey could be used. To improve integration with existing applications, the encrypted part should not be encapsulated in a literal packet; rather, the original OpenPGP message should be modified to include another signature outside of the encrypted part. Since our extension only impacts key management, it is entirely possible to implement it in an external application, which the OpenPGP implementation calls when it detects the extension.

## 6. Evaluation

To evaluate our extension, we modified GnuPG to support encrypted mailing lists. Our implementation doesn’t support CP-ABE cryptography, which we leave for future work. Thus, the initial session key is stored in each subscriber-list-key notation instead of a CP-ABE key. In this model, rotating keys is not strictly necessary, but for completeness we keep this functionality and just encrypt the contents of the subscriber-list-session-key notations with the current symmetric key and not also the CP-ABE key. Our implementation is available in the neal/encrypted-mailing-list branch of the GnuPG git repository and consists of about 2000 lines of changes. In our prototype, a new 2048-bit mailing list key without any subscribers requires about 1.5 KB of storage.
(A normal 2048-bit key initially uses 1.2 KB.) Adding a subscriber who is authorized to post to the mailing list adds 1 KB to the key’s size, and removing a subscriber adds another 400 bytes. In a full implementation, these values will be slightly larger, since we will also have the CP-ABE key. Nevertheless, it appears that we can easily absorb ten thousand events before we have to rekey due to a too-large key size (which, as we noted, is about 20 MB). In practice, we’d probably want to rekey long before this point, due to having to process all records to determine the current set of subscribers.

## 7. Conclusions and Future Directions

We presented the design and implementation of encrypted mailing lists for OpenPGP and demonstrated its feasibility by adding support for it to GnuPG. Unlike existing solutions, our design doesn’t require the mailing list software to re-encrypt the messages, nor does it require users to have new secret keys. Instead, we make use of OpenPGP’s existing key distribution infrastructure to distribute the list of subscribers to the mailing list’s posters. This makes our implementation more secure and more usable. Our proposed solution is as secure as OpenPGP and as usable as any of the OpenPGP implementations. First, sending a mail to an encrypted mailing list is no different from sending an encrypted mail to some individual. Second, since we publish the list of subscribers in the mailing list’s key block, we also encrypt the list of subscribers so that only posters can read it. This prevents casual post hoc analysis of the subscriber list, which provides a similar amount of privacy as OpenPGP encrypted email. We are currently working to integrate our proposal into the next version of the OpenPGP specification or to publish it as a standalone RFC. If the OpenPGP community agrees that the extension is worthwhile, then we will work to complete our GnuPG support and integrate it upstream.

## References

[8] Himanshu Khurana, Jin Heo, and Meenal Pant.
From proxy encryption primitives to a deployable secure-mailing-list so-
---
Application of Fuzzy Analytic Hierarchy Method in Software Engineering Scenario

Hota H.S., Assistant Professor, GGV, Bilaspur (C.G.), India
Sanjay Kumar Singhai, PhD., Associate Professor, Govt. Engineering College Bilaspur (C.G.), India
Ragini Shukla, Assistant Professor, Dr. C.V. Raman University, Kargi Road Kota, Bilaspur (C.G.), India

ABSTRACT

In the software engineering scenario, software effort estimation is very uncertain and depends on various external factors. When developing a particular type of software, selecting an optimal and experienced group of developers is essential for a software development organization, because the success or failure of the software depends highly upon experienced team members. However, it is not always possible to schedule a suitable team of developers for a specific type of software development from a larger group of developers; hence there should be a technique to form a group of developers for a specific type of software development, for cost-effectiveness reasons. In this paper, the multi-criteria decision making (MCDM) based fuzzy analytic hierarchy process is applied to the formation, or selection, of a software developer team. Fuzzy AHP is a ranking-based optimization technique, which decides the ranking among various alternatives based on the conflicting nature of the criteria. Three different criteria from the COCOMO effort estimation model are considered to decide the ranking of three programmers. This technique can be applied with more criteria and alternatives in a real software development scenario.

Keywords: Fuzzy analytic hierarchy process (FAHP), Analytic hierarchy process, Software Engineering, COCOMO model

### 1. INTRODUCTION

Much software fails during development, or even after development is not delivered in the stipulated time period, which may create problems for the software development organization in the context of its reputation and reliability in the IT industry.
Selection of the various resources required to develop software in an optimal manner is essential to avoid all of these problems. Optimal resource allocation for a specific software project is a challenging task in minimizing the software development cost and hence delivering the software product to the client well in advance. Many resources, such as technical resources (hardware and software) and, most essentially, human resources, must be assigned in an optimal manner. Resource allocation may be based on expertise or done heuristically, which sometimes fails due to the uncertainty involved; hence a multi-criteria decision making (MCDM) based method, fuzzy AHP, can be used for human resource allocation for a particular type of software project. Very little literature is available on this topic. Santanu Ku. Mishra et al. [1] applied fuzzy AHP and a Bayesian technique for programmer selection, and other researchers have applied the fuzzy AHP method and other MCDM methods for selection purposes. Sumeet Kaur Mishra et al. [2] also used an MCDM approach for the selection of an effort estimation model based on four criteria (reliability, MMRE, percentage prediction and uncertainty), with the models suggested by various researchers as the alternatives. The results were compared with AHP, and it was found that the algorithmic model has the highest weight value as compared to other models, such as the expert-judgment-based and non-algorithmic models. This paper extends and explores the work already done by Santanu Ku. Mishra et al. [1], with special reference to COCOMO’s effort multipliers as the criteria on which programmers are selected for a project team. The COCOMO model is one of the most popular effort estimation models, based on 17 effort multipliers. These multipliers are quantitative as well as qualitative; some of the multipliers are related to technical factors, while others are related to the quality of the software developers.
Qualitative data can be represented well using fuzzy logic; hence the fuzzy-logic-based MCDM method [14], fuzzy AHP, is well suited here. The fuzzy AHP method with three different criteria of the COCOMO model is considered, for demonstration purposes, to select developers from a group of programmers. The work can be extended, in a real sense, to the software engineering scenario with more alternatives and criteria.

### 2. MULTI-CRITERIA DECISION MAKING (MCDM) METHOD

Multi-criteria decision making is a method to deal with the process of making a decision among a number of alternatives with conflicting criteria on them. AHP is one of the most popular MCDM methods, and fuzzy AHP is an extension of the original AHP method suggested by Saaty [12] to deal with qualitative and quantitative data. We explain AHP first; fuzzy AHP is then explained, in sections 2.1 and 2.2 respectively.

#### 2.1 Analytic Hierarchy Process (AHP)

One of the most popular analytical techniques for complex decision-making problems is the analytic hierarchy process (AHP). The AHP, proposed by Saaty [1980, 2000] [16], is an approach for decision making that involves structuring multiple choice criteria into a hierarchy, assessing the relative importance of these criteria, comparing alternatives for each criterion, and determining an overall ranking of the alternatives. An AHP hierarchy can have as many levels as needed to fully characterize a particular decision situation. A number of functional characteristics make AHP a useful methodology, and it is a highly regarded and widely used decision-making method. It can efficiently deal with tangible (i.e., objective) as well as non-tangible (i.e., subjective) attributes [7]. The main procedure of AHP using the radical root method (also called the geometric mean method) is as follows [7]:

Step 1: Determine the objective and the evaluation attributes.

Step 2: Determine the relative importance of the different attributes with respect to the goal or objective.
• Construct a pair-wise comparison matrix using a scale of relative importance. The judgments are entered using the fundamental scale of the analytic hierarchy process. An attribute compared with itself is always assigned the value 1, so the main diagonal entries of the pair-wise comparison matrix are all 1. The ratings are based on Saaty’s nine-point scale shown in table 1.

### TABLE 1: SAATY’S NINE POINT SCALE

<table>
<thead>
<tr> <th>Compared to the 2nd alternative, the 1st alternative is</th> <th>Numerical rating</th> </tr>
</thead>
<tbody>
<tr> <td>Extremely preferred</td> <td>9</td> </tr>
<tr> <td>Very strongly preferred</td> <td>7</td> </tr>
<tr> <td>Strongly preferred</td> <td>5</td> </tr>
<tr> <td>Moderately preferred</td> <td>3</td> </tr>
<tr> <td>Intermediate judgment between two adjacent judgments</td> <td>2, 4, 6, 8</td> </tr>
</tbody>
</table>

• Calculate the consistency ratio CR = CI/RI. Usually, a CR of 0.1 or less is considered acceptable, as it reflects an informed judgment attributable to the knowledge of the analyst regarding the problem under study.

Step 3: Compare the alternatives pair-wise with respect to how much better they are in satisfying each of the attributes, i.e., ascertain how well each alternative serves each attribute.

Step 4: Obtain the overall or composite performance scores for the alternatives by multiplying the relative normalized weight (wj) of each attribute (obtained in step 2) with its corresponding normalized weight value for each alternative (obtained in step 3) and summing over the attributes for each alternative.

#### 2.2 Fuzzy Analytic Hierarchy Process (FAHP) Method

The FAHP [13] method is an advanced analytical method developed from the AHP.
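The radical root procedure above (the weight computation of steps 2 and 4 plus the consistency check) fits in a few lines of Python. This is an illustrative sketch; the random index RI = 0.52 for n = 3 is the value used later in this paper, and the example matrix is arbitrary:

```python
import math

def ahp_weights(matrix):
    """Normalized weights from a pair-wise comparison matrix using the
    radical root (geometric mean) method."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

def consistency_ratio(matrix, weights, ri=0.52):
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1) and lambda_max
    is estimated by averaging (A w)_i / w_i over the rows."""
    n = len(matrix)
    aw = [sum(a * w for a, w in zip(row, weights)) for row in matrix]
    lam = sum(x / w for x, w in zip(aw, weights)) / n
    ci = (lam - n) / (n - 1)
    return ci / ri

# Example 3x3 pair-wise matrix; a CR below 0.1 means the judgments
# are acceptably consistent.
m = [[1, 5, 3], [1/5, 1, 1/2], [1/3, 2, 1]]
w = ahp_weights(m)
cr = consistency_ratio(m, w)
```

For this matrix the weights come out near 0.65, 0.12 and 0.23, with a CR well below 0.1.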
In spite of the popularity of AHP, this method is often criticized for its inability to adequately handle the inherent uncertainty and imprecision associated with mapping the decision-maker’s perception to exact numbers. In the FAHP method, fuzzy comparison ratios are used to tolerate vagueness [3]. A problem with AHP is that, in some situations, the decision maker wants to express uncertainty while performing the comparisons of the alternatives. To take uncertainties into consideration, fuzzy numbers are used instead of crisp numbers [1]. The method proposed by Chen and Hwang (1992) [7] first converts linguistic terms into fuzzy numbers and then converts the fuzzy numbers into crisp scores. The method is described below.

#### 2.2.1 Converting Linguistic Terms to Fuzzy Numbers

This method systematically converts linguistic terms into their corresponding fuzzy numbers. It contains eight conversion scales, which were proposed by synthesizing and modifying the works of Wenstop (1976), Bass and Kwakernaak (1977), Efstathiou and Rajkovic (1979), Kerre (1982) and Chen (1988).

#### 2.2.2 Converting Fuzzy Numbers to Crisp Scores

The method uses a fuzzy scoring approach that is a modification of the fuzzy ranking approaches proposed by Jain (1976) and Chen (1985). The crisp score of a fuzzy number \(M\) is obtained using the maximizing and minimizing fuzzy sets

\[ \mu_{\max}(x) = \begin{cases} x, & 0 \leq x \leq 1 \\ 0, & \text{otherwise} \end{cases} \qquad \mu_{\min}(x) = \begin{cases} 1 - x, & 0 \leq x \leq 1 \\ 0, & \text{otherwise} \end{cases} \]

The fuzzy max and fuzzy min are defined in such a manner that the absolute location of the fuzzy numbers is automatically incorporated in the comparison.
The right score of each fuzzy number \(M_j\) is defined as

\[ \mu_R(M_j) = \sup_x \left[ \mu_{\max}(x) \wedge \mu_{M_j}(x) \right] \]

and the left score as

\[ \mu_L(M_j) = \sup_x \left[ \mu_{\min}(x) \wedge \mu_{M_j}(x) \right] \]

The total score of a fuzzy number \(M_j\) is defined as

\[ \mu_f(M_j) = \frac{\mu_R(M_j) + 1 - \mu_L(M_j)}{2} \]

*International Journal of Computer Applications (0975 – 8887), Volume 57, No. 21, November 2012*

Demonstration of the method: a 5-point scale is considered to demonstrate the conversion of fuzzy numbers into crisp scores, having the linguistic terms low, below average, average, above average and high as shown in figure 1. For example,

\[ \mu_{M_5}(x) = \begin{cases} \dfrac{x - 0.7}{0.3}, & 0.7 \leq x \leq 1.0 \\ 0, & \text{otherwise} \end{cases} \]

The right, left and total scores are computed as follows for \(M_1\):

\[ \mu_R(M_1) = \sup_x \left[ \mu_{\max}(x) \wedge \mu_{M_1}(x) \right] = 0.23 \]
\[ \mu_L(M_1) = \sup_x \left[ \mu_{\min}(x) \wedge \mu_{M_1}(x) \right] = 1 \]
\[ \mu_f(M_1) = \frac{\mu_R(M_1) + 1 - \mu_L(M_1)}{2} = 0.115 \]

Similarly, the right, left and total scores are computed for \(M_2, M_3, M_4\) and \(M_5\) and are tabulated in table 3 and table 4.
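The sup-of-min computations above are easy to check numerically. The sketch below evaluates \(\mu_R\), \(\mu_L\) and \(\mu_f\) on a fine grid over [0, 1] for a triangular membership function (the shape used for M1 to M5 in figure 1; M1 and M5 are treated as degenerate triangles with the peak at an endpoint). The results agree with table 3 up to the paper's rounding:

```python
def tri(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x < a or x > c:
            return 0.0
        if x <= b:
            return 1.0 if b == a else (x - a) / (b - a)
        return 1.0 if c == b else (c - x) / (c - b)
    return mu

def crisp_score(mu, steps=10_000):
    """Chen and Hwang's scores on a grid: mu_R = sup[min(x, mu(x))],
    mu_L = sup[min(1 - x, mu(x))], total = (mu_R + 1 - mu_L) / 2."""
    xs = [i / steps for i in range(steps + 1)]
    mu_r = max(min(x, mu(x)) for x in xs)
    mu_l = max(min(1 - x, mu(x)) for x in xs)
    return mu_r, mu_l, (mu_r + 1 - mu_l) / 2
```

For instance, `crisp_score(tri(0, 0, 0.3))` recovers the M1 row (about 0.23, 1.0, 0.115).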
### TABLE 3: RIGHT, LEFT AND TOTAL SCORES OF \(M_1\) TO \(M_5\)

<table>
<thead>
<tr> <th>i</th> <th>\(\mu_R(M_i)\)</th> <th>\(\mu_L(M_i)\)</th> <th>\(\mu_f(M_i)\)</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>0.23</td> <td>1.0</td> <td>0.115</td> </tr>
<tr> <td>2</td> <td>0.39</td> <td>0.8</td> <td>0.295</td> </tr>
<tr> <td>3</td> <td>0.58</td> <td>0.59</td> <td>0.495</td> </tr>
<tr> <td>4</td> <td>0.79</td> <td>0.4</td> <td>0.695</td> </tr>
<tr> <td>5</td> <td>1.0</td> <td>0.23</td> <td>0.895</td> </tr>
</tbody>
</table>

### TABLE 4: LINGUISTIC TERMS WITH THEIR CORRESPONDING CRISP SCORES

<table>
<thead>
<tr> <th>Linguistic Term</th> <th>Fuzzy Number</th> <th>Crisp Score</th> </tr>
</thead>
<tbody>
<tr> <td>Low</td> <td>\(M_1\)</td> <td>0.115</td> </tr>
<tr> <td>Below average</td> <td>\(M_2\)</td> <td>0.295</td> </tr>
<tr> <td>Average</td> <td>\(M_3\)</td> <td>0.495</td> </tr>
<tr> <td>Above average</td> <td>\(M_4\)</td> <td>0.695</td> </tr>
<tr> <td>High</td> <td>\(M_5\)</td> <td>0.895</td> </tr>
</tbody>
</table>

Instead of assigning arbitrary values to the various attributes, this fuzzy method reflects the exact linguistic descriptions in terms of crisp scores. Hence it gives better approximations and is widely used.

### 3. SOFTWARE ENGINEERING SCENARIO

The Constructive Cost Model (COCOMO) [10] is a well-known model in the software engineering scenario, developed by Barry W. Boehm. Effort multipliers of the COCOMO model are considered here for the selection of programmers for software development. Out of the 17 multipliers, 3 are considered as criteria for the FAHP method [11]: APEX (Application Experience), PLEX (Platform Experience) and LTEX (Language and Tool Experience).

Figure 2 shows the hierarchy of programmer selection, in which the root of the hierarchy is the most general objective (goal) of the problem, such as the objective of making the best decision or selecting the best alternative.
From figure 1, the membership functions of \(M_1, M_2, M_3, M_4, M_5\) are written as:

\[ \mu_{M_1}(x) = \begin{cases} 1, & x = 0 \\ \dfrac{0.3 - x}{0.3}, & 0 \leq x \leq 0.3 \end{cases} \]

\[ \mu_{M_2}(x) = \begin{cases} \dfrac{x}{0.25}, & 0 \leq x \leq 0.25 \\ \dfrac{0.5 - x}{0.25}, & 0.25 \leq x \leq 0.5 \end{cases} \]

\[ \mu_{M_3}(x) = \begin{cases} \dfrac{x - 0.3}{0.2}, & 0.3 \leq x \leq 0.5 \\ \dfrac{0.7 - x}{0.2}, & 0.5 \leq x \leq 0.7 \end{cases} \]

\[ \mu_{M_4}(x) = \begin{cases} \dfrac{x - 0.5}{0.25}, & 0.5 \leq x \leq 0.75 \\ \dfrac{1.0 - x}{0.25}, & 0.75 \leq x \leq 1.0 \end{cases} \]

The second level of the hierarchy consists of the three effort multipliers of the COCOMO model, as qualities of a programmer, while the leaf level represents the alternatives. In order to apply the FAHP method for programmer selection for a specific software project, let us follow these steps:

Step 1: A decision-making matrix (DMM) [15] based on the above criteria, with three fuzzy linguistic terms as shown in figure 1 and three different alternatives, is shown in table 5, where P1, P2 and P3 represent programmer 1, programmer 2 and programmer 3 respectively.

\[ \begin{array}{c|ccc} \text{Programmer} & \text{APEX} & \text{PLEX} & \text{LTEX} \\ \hline P_1 & \text{High} & \text{Average} & \text{Average} \\ P_2 & \text{Average} & \text{Low} & \text{High} \\ P_3 & \text{Low} & \text{High} & \text{Average} \end{array} \]

Instead of the 5-point scale explained above, we consider here a 3-point scale for the conversion of fuzzy linguistic terms into crisp scores, having only the linguistic terms low, average and high as shown in table 5.
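Converting table 5's linguistic entries into crisp scores is then a direct lookup, using the 3-point-scale values 0.115, 0.495 and 0.895 that this paper assigns to low, average and high:

```python
# Crisp scores for the 3-point scale (low / average / high).
SCORES = {"Low": 0.115, "Average": 0.495, "High": 0.895}

def crisp_dmm(dmm):
    """Convert a linguistic decision-making matrix into crisp scores."""
    return [[SCORES[term] for term in row] for row in dmm]

# Table 5: rows P1..P3, columns APEX, PLEX, LTEX.
table5 = [["High", "Average", "Average"],
          ["Average", "Low", "High"],
          ["Low", "High", "Average"]]
crisp = crisp_dmm(table5)
```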
From the above-described Chen and Hwang (1992) method, the crisp decision-making matrix is:

\[ \begin{array}{c|ccc} \text{Programmer} & \text{APEX} & \text{PLEX} & \text{LTEX} \\ \hline P_1 & 0.895 & 0.495 & 0.495 \\ P_2 & 0.495 & 0.115 & 0.895 \\ P_3 & 0.115 & 0.895 & 0.495 \end{array} \]

Step 2: Now we compare the criteria with one another by assigning comparative weights from Saaty’s [7] nine-point scale shown in table 1, applying heuristic knowledge of this domain. The relative importance matrix can be written as:

\[ \begin{array}{c} \text{APEX} \\ \text{PLEX} \\ \text{LTEX} \end{array} \begin{bmatrix} 1 & 5 & 3 \\ 1/5 & 1 & 1/2 \\ 1/3 & 2 & 1 \end{bmatrix} \]

Now calculating the geometric mean (GM) for the \(i^{th}\) row:

\[ \text{GM}_1 = (1 \times 5 \times 3)^{1/3} = 2.4659 \]
\[ \text{GM}_2 = (1/5 \times 1 \times 1/2)^{1/3} = 0.4641 \]
\[ \text{GM}_3 = (1/3 \times 2 \times 1)^{1/3} = 0.873 \]

The total geometric mean is GM = 3.79. Hence the normalized weights are \(W_1 = 2.46/3.79 = 0.649\), \(W_2 = 0.46/3.79 = 0.121\) and \(W_3 = 0.87/3.79 = 0.229\).

Now we check consistency using the following equations, where \(A_1\) is the pair-wise comparison matrix and \(A_2\) the weight vector:

\[ A_3 = A_1 \times A_2 \tag{1} \]
\[ A_4 = A_3 / A_2 \tag{2} \]

The maximum value \(\lambda_{\max}\), the average of the entries of \(A_4\), is

\[ \lambda_{\max} = \frac{2.949 + 2.975 + 3.0818}{3} = 3.001 \]

Then the consistency index is \(CI = \frac{\lambda_{\max} - n}{n - 1} = \frac{3.001 - 3}{2} = 0.0005\), and the consistency ratio is \(CR = \frac{CI}{RI} = \frac{0.0005}{0.52} = 0.00096 < 0.1\). Hence the weights are consistent.

Step 3: Now the alternatives are compared with one another for each of the three criteria, giving the pair-wise comparison matrices.
Three pair-wise comparison matrices are shown below, one per criterion. (i) Pair-wise comparison matrix for criterion APEX: \[ \begin{array}{c|ccc} & P_1 & P_2 & P_3 \\ \hline P_1 & 1 & 0.495 & 0.895 \\ P_2 & 1/0.495 & 1 & 0.895 \\ P_3 & 1/0.895 & 1/0.895 & 1 \end{array} \] Calculating the geometric mean (GM) of the \(i^{th}\) row: \[ \text{GM}_1 = (1 \times 0.495 \times 0.895)^{1/3} = 0.7623 \] \[ \text{GM}_2 = (1/0.495 \times 1 \times 0.895)^{1/3} = 1.2182 \] \[ \text{GM}_3 = (1/0.895 \times 1/0.895 \times 1)^{1/3} = 1.0767 \] Total geometric mean \(\approx 3.057\). Hence the normalized weights are \(W_1 = 0.7623/3.057 \approx 0.249\), \(W_2 = 1.2182/3.057 \approx 0.398\) and \(W_3 = 1.0767/3.057 \approx 0.352\). Checking consistency with equations (1) and (2): \[ A_3 = \begin{bmatrix} 1 & 0.495 & 0.895 \\ 1/0.495 & 1 & 0.895 \\ 1/0.895 & 1/0.895 & 1 \end{bmatrix} \times \begin{bmatrix} 0.249 \\ 0.398 \\ 0.352 \end{bmatrix} = \begin{bmatrix} 0.7614 \\ 1.2167 \\ 1.0752 \end{bmatrix} \] \[ A_4 = A_3 / A_2 = \begin{bmatrix} 0.7614/0.249 \\ 1.2167/0.398 \\ 1.0752/0.352 \end{bmatrix} = \begin{bmatrix} 3.058 \\ 3.057 \\ 3.054 \end{bmatrix} \] The maximum value \(\lambda_{\text{max}}\), the average of the entries of \(A_4\), is \[ \lambda_{\text{max}} = \frac{3.058 + 3.057 + 3.054}{3} \approx 3.056 \] Then CI \(= \frac{\lambda_{\text{max}} - n}{n - 1} = \frac{3.056 - 3}{2} = 0.028\) and CR \(= \frac{\text{CI}}{\text{RI}} = \frac{0.028}{0.52} = 0.054 < 0.1\). Hence the weights are consistent.
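The geometric-mean and consistency arithmetic of steps 2 and 3 can be checked mechanically. The sketch below is our own illustrative helper, not code from the paper; it implements equations (1) and (2) for an \(n \times n\) pairwise matrix, assuming RI = 0.52 for \(n = 3\).

```java
public class AhpCheck {
    // Geometric-mean weights of a pairwise comparison matrix, normalized to sum 1.
    static double[] weights(double[][] m) {
        int n = m.length;
        double[] gm = new double[n];
        double total = 0;
        for (int i = 0; i < n; i++) {
            double p = 1;
            for (int j = 0; j < n; j++) p *= m[i][j];
            gm[i] = Math.pow(p, 1.0 / n);
            total += gm[i];
        }
        for (int i = 0; i < n; i++) gm[i] /= total; // normalize
        return gm;
    }

    // Equations (1) and (2): A3 = A1 x A2, A4 = A3 / A2 (element-wise),
    // lambda_max = average of A4, CI = (lambda_max - n)/(n - 1), CR = CI/RI.
    static double consistencyRatio(double[][] m, double[] w, double ri) {
        int n = m.length;
        double lambdaMax = 0;
        for (int i = 0; i < n; i++) {
            double row = 0;                          // (A3)_i
            for (int j = 0; j < n; j++) row += m[i][j] * w[j];
            lambdaMax += row / w[i];                 // (A4)_i
        }
        lambdaMax /= n;
        double ci = (lambdaMax - n) / (n - 1);
        return ci / ri;
    }

    public static void main(String[] args) {
        double[][] a1 = {{1, 5, 3}, {1.0 / 5, 1, 1.0 / 2}, {1.0 / 3, 2, 1}};
        double[] w = weights(a1);
        System.out.printf("W = %.3f %.3f %.3f%n", w[0], w[1], w[2]);
        System.out.printf("CR = %.4f%n", consistencyRatio(a1, w, 0.52));
    }
}
```

Run on the step-2 criteria matrix this gives W ≈ (0.648, 0.122, 0.230) and CR ≈ 0.004, agreeing with the hand calculation above up to rounding; the same two functions apply unchanged to the three step-3 matrices.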
(ii) Pair-wise comparison matrix for criterion PLEX (reconstructed from the reciprocal structure and the geometric means below): \[ \begin{array}{c|ccc} & P_1 & P_2 & P_3 \\ \hline P_1 & 1 & 0.895 & 0.115 \\ P_2 & 1/0.895 & 1 & 0.115 \\ P_3 & 1/0.115 & 1/0.115 & 1 \end{array} \] Calculating the geometric mean (GM) of the \(i^{th}\) row: \[ \text{GM}_1 = (1 \times 0.895 \times 0.115)^{1/3} = 0.4686 \] \[ \text{GM}_2 = (1/0.895 \times 1 \times 0.115)^{1/3} = 0.5046 \] \[ \text{GM}_3 = (1/0.115 \times 1/0.115 \times 1)^{1/3} = 4.2280 \] Total geometric mean = 5.2012. Hence the normalized weights are \(W_1 = 0.4686/5.2012 \approx 0.090\), \(W_2 = 0.5046/5.2012 \approx 0.097\) and \(W_3 = 4.2280/5.2012 \approx 0.8128\). Checking consistency with equations (1) and (2): \[ A_3 = \begin{bmatrix} 1 & 0.895 & 0.115 \\ 1/0.895 & 1 & 0.115 \\ 1/0.115 & 1/0.115 & 1 \end{bmatrix} \times \begin{bmatrix} 0.090 \\ 0.097 \\ 0.8128 \end{bmatrix} = \begin{bmatrix} 0.2703 \\ 0.2911 \\ 2.439 \end{bmatrix} \] \[ A_4 = A_3 / A_2 = \begin{bmatrix} 0.2703/0.090 \\ 0.2911/0.097 \\ 2.439/0.8128 \end{bmatrix} = \begin{bmatrix} 3.003 \\ 3.001 \\ 3.001 \end{bmatrix} \] The maximum value \(\lambda_{\text{max}}\), the average of the entries of \(A_4\), is \[ \lambda_{\text{max}} = \frac{3.003 + 3.001 + 3.001}{3} \approx 3.002 \] Then CI \(= \frac{\lambda_{\text{max}} - n}{n - 1} = \frac{3.002 - 3}{2} = 0.001\) and CR \(= \frac{\text{CI}}{\text{RI}} = \frac{0.001}{0.52} \approx 0.002 < 0.1\). Hence the weights are consistent.
(iii) Pair-wise comparison matrix for criterion LTEX: \[ \begin{array}{c|ccc} & P_1 & P_2 & P_3 \\ \hline P_1 & 1 & 0.495 & 1 \\ P_2 & 1/0.495 & 1 & 0.895 \\ P_3 & 1 & 1/0.895 & 1 \end{array} \] Calculating the geometric mean (GM) of the \(i^{th}\) row: \[ \text{GM}_1 = (1 \times 0.495 \times 1)^{1/3} = 0.7910 \] \[ \text{GM}_2 = (1/0.495 \times 1 \times 0.895)^{1/3} = 1.2182 \] \[ \text{GM}_3 = (1 \times 1/0.895 \times 1)^{1/3} = 1.0376 \] Total geometric mean = 3.0468. Hence the normalized weights are \(W_1 = 0.7910/3.0468 \approx 0.2596\), \(W_2 = 1.2182/3.0468 \approx 0.3998\) and \(W_3 = 1.0376/3.0468 \approx 0.3406\). Checking consistency with equations (1) and (2): \[ A_3 = \begin{bmatrix} 1 & 0.495 & 1 \\ 1/0.495 & 1 & 0.895 \\ 1 & 1/0.895 & 1 \end{bmatrix} \times \begin{bmatrix} 0.2596 \\ 0.3998 \\ 0.3406 \end{bmatrix} = \begin{bmatrix} 0.7981 \\ 1.2290 \\ 1.0469 \end{bmatrix} \] \[ A_4 = A_3 / A_2 = \begin{bmatrix} 0.7981/0.2596 \\ 1.2290/0.3998 \\ 1.0469/0.3406 \end{bmatrix} = \begin{bmatrix} 3.074 \\ 3.074 \\ 3.074 \end{bmatrix} \] The maximum value \(\lambda_{\text{max}}\), the average of the entries of \(A_4\), is \[ \lambda_{\text{max}} = \frac{3.074 + 3.074 + 3.074}{3} = 3.074 \] Then CI \(= \frac{\lambda_{\text{max}} - n}{n - 1} = \frac{3.074 - 3}{2} = 0.037\) and CR \(= \frac{\text{CI}}{\text{RI}} = \frac{0.037}{0.52} = 0.071 < 0.1\). Hence the weights are consistent. Step 4: A matrix is now formed from the alternative weights obtained in the three pair-wise comparisons of step 3, one column per criterion: \[ \begin{bmatrix} 0.2493 & 0.090 & 0.2596 \\ 0.3984 & 0.0970 & 0.3998 \\ 0.3521 & 0.8128 & 0.3406 \end{bmatrix} \] The overall (composite) performance scores of the alternatives, from which the final rank is obtained, are the product of this matrix with the criteria weight vector from step 2: \[ \begin{bmatrix} 0.2493 & 0.090 & 0.2596 \\ 0.3984 & 0.0970 & 0.3998 \\ 0.3521 & 0.8128 & 0.3406 \end{bmatrix} \times \begin{bmatrix} 0.649 \\ 0.121 \\ 0.229 \end{bmatrix} = \begin{bmatrix} 0.2319 \\ 0.3617 \\ 0.4047 \end{bmatrix} \] Ranking by the highest score gives \(P_3\), then \(P_2\), then \(P_1\).
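Step 4's matrix–vector product can likewise be sketched in code. This is our own illustrative check (the weights are hard-coded from steps 2 and 3 above):

```java
public class CompositeScore {
    public static void main(String[] args) {
        // Rows: P1, P2, P3; columns: alternative weights under APEX, PLEX, LTEX.
        double[][] s = {
            {0.2493, 0.090, 0.2596},
            {0.3984, 0.0970, 0.3998},
            {0.3521, 0.8128, 0.3406}
        };
        double[] w = {0.649, 0.121, 0.229}; // criteria weights from step 2
        for (int i = 0; i < 3; i++) {
            double score = 0;
            for (int j = 0; j < 3; j++) score += s[i][j] * w[j];
            System.out.printf("P%d: %.4f%n", i + 1, score);
        }
    }
}
```

This reproduces the ranking P3 > P2 > P1 with scores ≈ 0.232, 0.362, 0.405 (the paper's 0.2319, 0.3617, 0.4047 differ only in the last digit due to rounding).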
4. CONCLUSION Decision making is necessary in many problems and becomes tedious and difficult when the qualities of the alternatives conflict. Multi-criteria decision making (MCDM) methods are widely used to solve this type of problem. Since the criteria of the alternatives may be both quantitative and qualitative, a suitable MCDM method, fuzzy AHP, is applied in this piece of research work. The method is applied to select and rank software developers (programmers), using COCOMO's effort multipliers as the developer criteria. The experiment is done on a sample data set with three alternatives and three criteria, and the ranking decided by the FAHP method is P3, P2, P1. In future work, FAHP and other fuzzy MCDM methods can be applied to all the multipliers of the COCOMO model to establish a model for software developer selection in a realistic software engineering scenario. 5. REFERENCES
Objects: Data Abstraction - In Object-Oriented programming languages like Java, *objects* are used to represent data. - A **class** defines a **type** of *object*, including: - its data - its permissible operations - Once a type is defined, objects of that type can be declared and used. Example: Planet - Suppose that we want to represent planets in Java - We can define a class called Planet - Data: diameter, mass, orbit, orbital period, location at a given time, ... - Methods: setDiameter(), setMass(), etc., getDiameter(), getMass(), etc. Different objects have different methods for manipulating their data. The specific methods are determined based on what makes sense for that type of object. For Strings: length, concatenation, comparison, substring, substring comparison, ... Example: Palindromes - Two String methods: - `length()` – returns the length of the string - `charAt()` – returns the character at a given position in the string - Palindrome – a word that reads the same forward or backward - Examples: eye, madam, radar, ... Algorithm 1. Get a word from the user 2. Compare the first and last characters: if they are different, return false; otherwise, repeat with the second and second-to-last characters, and so on 3. 
If the characters all match, return true

Algorithm 2
- Set $left$ to the index of the first (leftmost) character
- Set $right$ to the index of the last (rightmost) character
- While $left$ is less than $right$
  - Compare the $left$ character with the $right$ character
  - If they are not equal return false
  - Increment $left$
  - Decrement $right$
- Return true

```java
import java.util.Scanner;

public class Palindrome {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        String str = in.next();
        System.out.println(str + " " + isPalindrome(str));
    }

    static boolean isPalindrome(String s) {
        int left = 0, right = s.length() - 1;
        while (left < right) {
            if (s.charAt(left) != s.charAt(right)) return false;
            left++;
            right--;
        }
        return true;
    }
}
```

Methods - Each **type** of object supports a specified set of methods. - The methods are called for a specific object and have direct access to that object's data without having to pass the object as a parameter.

```java
String s = "hello";
s.length();
```

String Methods - boolean equals(Object anObject) - Compares this string with another object - int length() - Number of characters in this string - char charAt(int index) - Returns the character at the position index within this string String Methods II - int compareTo(String str) - Returns an integer value, based on lexicographic order - int indexOf(int ch) - Index of where ch occurs in this string, or -1 if not present - int indexOf(String str) - Index of the first character of a matching substring str - String concat(String str) - Concatenates this string instance with str and returns the result String Methods III - **String toLowerCase()** - Returns a copy of this string but in all lowercase - **String toUpperCase()** - Returns a copy of this string but in all uppercase - **static String valueOf(type prim)** - Returns the String representation of primitive value prim, where type can be any primitive

```java
public class StringTest {
    public static void main(String[] args) {
        String str1 = "aBcD", str2 = "abcd";
        System.out.println(str1.equals(str2));
        System.out.println(str1.length());
        System.out.println(str1.charAt(1));
        System.out.println(str1.compareTo("aBcE"));
        System.out.println(str1.compareTo("aBcC"));
        System.out.println(str1.compareTo("aBcD"));
        System.out.println(str1.indexOf('D'));
        System.out.println(str1.indexOf("Bc"));
        System.out.println(str1.indexOf("zz"));
        System.out.println(str1.concat("efg"));
    }
}
```

```java
public class StringTest {
    public static void main(String[] args) {
        String str1 = "aBcD", str3;
        str3 = str1.toLowerCase();
        System.out.println(str3);
        str3 = str1.toUpperCase();
        System.out.println(str3);
        System.out.println(str1);
        str3 = String.valueOf(123);
        System.out.println(str3.equals("123"));
    }
}
```

Strings are *immutable* - Once you create one, you can't change it - You can only return a new string that is a changed version of the old one StringBuffers are *mutable* - You can change them: insert(), reverse(), replace(), setCharAt(), setLength(), deleteCharAt(), append(), ... Elements of a Simple Class - **Data** - called *instance variables*, *data members*, *fields* - **Methods** - called *instance methods*, *procedure members*, *member functions* - Together these implement a level of abstraction for some particular type of data Defining a new type - First describe the data that will be stored in objects of this type - Then describe the operations that will be supported on objects of that type Example: Counter We often want to count things, why not create an abstraction for doing it? 
- Advantage: you can reuse it in different places in the program, or even in other programs - **Data:** - Current value of the counter (initially zero) - **Operations:** - Reset, Increment, Decrement, Get the current value

```java
class Counter {
    int value;
    void reset() { value = 0; }
    int readValue() { return value; }
    void increment() { value = value + 1; }
    void decrement() { value = value - 1; }
}
```

Using the Counter

```java
Counter c1 = new Counter();
Counter c2 = new Counter();
c1.reset();
c2.reset();
c1.increment();
c1.increment();
System.out.println(c1.readValue());
System.out.println(c2.readValue());
```

Abstract Data Types - Classes allow us to implement Abstract Data Types (ADTs) – an abstraction representing a particular kind of data - The data and methods combine to implement the functionality we desire or expect for this type of data - The implementation details are hidden from the user - The implementation is all in one place - The type can be used in many different places in the program or in many programs Each Counter object has its own copy of the member variables - In this case, the integer variable called `value` - When the methods are called, the call is of the form `<objectname>.<methodname>()` - The object itself is an implicit parameter to the method, so that any references to the data access that object's copy of the member variables More Examples - **Complex numbers**, vectors, matrices, time/date information, address information, shapes (circle, square, rectangle, oval, triangle), a **file**, a keyboard, a game board (checkers, chess, **tictactoe**), a game piece, a **character string**, a die, a **deck of cards** - Think of some more - Let's implement some of them In general, each class is in a separate file - The name of the file matches the name of the class (with .java at the end) All classes in the same directory (or file) are part of the same package Whether or not a method is in the same class or package as the data or method it is accessing affects what 
it can see and do Public and Private - Public data and methods are preceded by the keyword `public` - Private data and methods are preceded by the keyword `private` - By default, everything is semi-private (my term) - If you don’t specify public or private, you get this default behavior Public and Private in Action - Private data and methods are accessible only by methods in the same class. - Semi-private data and methods are accessible by any method in the same class or the same package. - Public data and methods are accessible by any method, regardless of whether or not it is in the same class or package. Example //Counter.java - a simple counter public class Counter { // instance variables - hidden private int value; // methods - exposed public void reset() { value = 0; } public int get() { return value; } public void click() { value = value + 1; } } What If The Data Wasn’t Private Counter foo = new Counter(); foo.reset(); foo.click(); foo.click(); foo.value = 17; int a = foo.get(); // returns the wrong value Constructors - A *constructor* is a special method in a class. - It has the same name as the class. - It is automatically called when an object of that type is created. - Constructors are usually used to set data in the object to an initial value. - Constructors can take parameters. 
//Counter.java - a simple counter

```java
public class Counter {
    // instance variables - hidden
    private int value;
    // methods – exposed
    public Counter() { value = 0; }   // constructor: same name as the class, no return type (not even void)
    public void reset() { value = 0; }
    public int get() { return value; }
    public void click() { value = value + 1; }
}
```

Example 2

```java
public class Complex {
    private double real;
    private double imaginary;

    public Complex() { real = 0; imaginary = 0; }          // constructors have no return type
    public Complex(double r, double i) { real = r; imaginary = i; }
    public double getReal() { return real; }
    public double getImaginary() { return imaginary; }
}
```

Using Constructors

```java
Complex a = new Complex();
Complex b = new Complex(1, 5.7);
Complex c = new Complex(1, 0);
```

Static Fields and Methods - Static fields and methods are preceded by the keyword `static`. - Unlike other methods, static methods are not associated with a specific object. - Static methods are called by using the class name and the method name. - `main();` and `Math.random();` Static Variables - Static data members are associated with the class rather than a particular object of that type. - Static data members are accessed like static methods: class name followed by field name. 
- Example: Math.PI - Sometimes called *class variables*

```java
public class Counter {
    // instance variables - hidden
    private int value;
    private static int howMany = 0;
    // methods - exposed
    public Counter() { howMany++; }
    public void reset() { value = 0; }
    public int get() { return value; }
    public void click() { value = value + 1; }
    public static int howMany() { return howMany; }
}
```

```java
class CounterTest2 {
    public static void main(String[] args) {
        System.out.println(Counter.howMany());
        Counter c1 = new Counter();
        Counter c2 = new Counter();
        c1.click();
        c2.click();
        c2.click();
        System.out.println("Counter1 value is " + c1.get());
        System.out.println("Counter2 value is " + c2.get());
    }
}
```

Recap: Calling Methods - There are three ways to call a method depending on - whether the method is in the same class or not - whether the method is an instance method or a class method The Three Ways to Call a Method - In the same class: you just use the method name followed by any parameters in parentheses - `int a = foo(); // foo is a method in this class` - An instance method: you have to call it for a particular object - `String s = "abc"; int a = s.length();` - A class method: call it with the class name - `double a = Math.random();`

```java
class Change {
    private int dollars, quarters, dimes, pennies;
    private double total;

    Change(int dl, int q, int dm, int p) {
        dollars = dl; quarters = q; dimes = dm; pennies = p;
        total = dl + 0.25 * q + 0.1 * dm + 0.01 * p;
    }

    static Change makeChange(double paid, double owed) {
        double diff = paid - owed;
        int dollars, quarters, dimes, pennies;
        dollars = (int)diff;
        pennies = (int)((diff - dollars) * 100);
        quarters = pennies / 25;
        pennies -= 25 * quarters;
        dimes = pennies / 10;
        pennies -= 10 * dimes;
        return new Change(dollars, quarters, dimes, pennies);
    }

    public String toString() {
        return "$" + total + "\n" + dollars + " dollars \n" + quarters
            + " quarters \n" + dimes + " dimes \n" + pennies + " pennies \n";
    }
}
```

//ChangeTest.java public class ChangeTest { public static void 
main(String[] args) { double owed = 12.37; double paid = 15.0; System.out.println("You owe " + owed); System.out.println("You gave me " + paid); System.out.println("Your change is " + Change.makeChange(paid, owed)); } } //ChangeTest2.java public class ChangeTest2 { public static void main(String[] args) { Change c1 = new Change(10, 3, 4, 3); Change c2 = new Change(7, 2, 2, 1); Change sum = c1.add(c2); System.out.println(sum); } } Accessing Another Object's Private Fields public Change add(Change addend) { Change result = new Change( dollars + addend.dollars, quarters + addend.quarters, dimes + addend.dimes, pennies + addend.pennies); return result; } A static add() method ```java public static Change add(Change augend, Change addend) { Change result = new Change( augend.dollars + addend.dollars, augend.quarters + addend.quarters, augend.dimes + addend.dimes, augend.pennies + addend.pennies); return result; } ``` public class ChangeTest3 { public static void main(String[] args) { Change c1 = new Change(10, 3, 4, 3); Change c2 = new Change(7, 2, 2, 1); Change sum = Change.add(c1, c2); System.out.println(sum); } } Passing Objects: Reference Types - Passing an object to a method is different from passing a primitive type - Primitive types are call-by-value - The called method gets a copy of the passed value - Objects are effectively call-by-reference - The called method gets a copy of the reference to the object, which refers to the same object! int a; a = 5; Complex foo; foo = new Complex(); Call-by-reference - The called method gets a copy of the reference! - Because the called method has a reference to the same object, any changes to the object in a method will change the actual object! 
Example // Object parameters can be modified class PassingReferences { public static void main(String[] args) { StringBuffer sbuf = new StringBuffer("testing"); System.out.println("sbuf is now " + sbuf); modify(sbuf); System.out.println("sbuf is now " + sbuf); } static void modify(StringBuffer sb) { sb.append(",1 2 3"); } } Example // You can't modify the actual arg class ModifyParameters { public static void main(String[] args) { StringBuffer sbuf = new StringBuffer("testing"); System.out.println("sbuf is now " + sbuf); modify(sbuf); System.out.println("sbuf is now " + sbuf); } static void modify(StringBuffer sb) { sb = new StringBuffer("doesn't work"); } } Recall: a local variable (defined in a method) is only accessible within that method - And, it is only accessible after the point at which it is defined in the method Instance and class variables are accessible from within any method in the class - Before or after the point at which they are defined in the class Eclipsing an instance or class variable - When a local variable (in a method) has the same name as an instance or class variable, the local variable eclipses the class variable - References to that name in the method will refer to the local variable - An eclipsed class variable can be accessed using the class name - An eclipsed instance variable can be accessed using *this* (inside an instance method)

```java
class Scope2 {
    static int x = 1;
    int y = 2;
    public static void main(String[] args) {
        int x = 3, y = 4;
        Scope2 obj = new Scope2(); // main() is static, so an instance is needed to reach y
        System.out.println("local x = " + x);
        System.out.println("class x = " + Scope2.x);
        System.out.println("local y = " + y);
        System.out.println("instance y = " + obj.y); // this.y only compiles in an instance method
    }
}
```

Change(int dollars, int quarters, int dimes, int pennies) { this.dollars = dollars; this.quarters = quarters; this.dimes = dimes; this.pennies = pennies; total = dollars + 0.25 * quarters + 0.1 * dimes + 0.01 * pennies; } Keyword **final** and Class Constants - It is usually a bad idea to make instance and class variables public - It is better to provide accessor methods - 
This allows us to guarantee certain conditions about the data - However, there is one type of class variable that is commonly made public: constants - Immutable variables with special values Examples - Math.PI - Integer.MAX_VALUE Note: generally written all uppercase Defined with the keyword `final` ``` public static final double PI = 3.14159265; ``` Any attempt to modify a constant will result in an error Why use these - Constants are only defined once - Some numbers, such as pi, are used very often - Constants allow us to name a value - Which is clearer: 60 or SECONDSPERMINUTE? Arrays of Objects - Just as we can have arrays of primitive types, we can also have arrays of objects. - Recall that: - When we declare an array we have to use `new` to create the storage for the array. - When we create an object we have to use `new` to create the storage for the object. - So, when we create an array of objects we have to use `new` twice; once for the array and once for the objects. Example ```java int[] foo; foo = new int[15]; Complex[] bar; bar = new Complex[15]; for (int i = 0; i < bar.length; i++) bar[i] = new Complex(); ``` - And, the book has a better Card class than mine class Suit { public static final int CLUBS = 1; public static final int DIAMONDS = 2; public static final int HEARTS = 3; public static final int SPADES = 4; int suitValue; Suit(int i) { suitValue = i; } public String toString() { switch (suitValue) { case CLUBS: return "clubs"; case DIAMONDS: return "diamonds"; case HEARTS: return "hearts"; case SPADES: return "spades"; default: return "error"; } } } class Pips { int p; Pips(int i) { p = i; } public String toString() { if (p > 1 && p < 11) return String.valueOf(p); else switch (p) { case 1: return "Ace"; case 11: return "Jack"; case 12: return "Queen"; case 13: return "King"; default: return "error"; } } } ```java class Card { Suit suit; Pips pip; Card(Suit s, Pips p) { suit = s; pip = p; } Card(Card c) { suit = c.suit; pip = c.pip; } public String 
toString() { return pip.toString() + " of " + suit.toString(); } } ```

```java
class Deck {
    Card[] deck;

    Deck() {
        deck = new Card[52];
        for (int i = 0; i < deck.length; i++)
            deck[i] = new Card(new Suit(i / 13 + 1), new Pips(i % 13 + 1));
    }

    public void shuffle() {
        for (int i = 0; i < deck.length; i++) {
            int k = (int)(Math.random() * 52);
            Card t = deck[i];
            deck[i] = deck[k];
            deck[k] = t;
        }
    }

    public String toString() {
        String t = "";
        for (int i = 0; i < 52; i++) {
            if ((i + 1) % 5 == 0) t = t + "\n" + deck[i];
            else t = t + deck[i];
        }
        return t;
    }
}
```

```java
public class CardTest {
    public static void main(String[] args) {
        Deck deck = new Deck();
        System.out.println("\nNew Shuffle \n" + deck);
        deck.shuffle();
        System.out.println("\nNew Shuffle \n" + deck);
        deck.shuffle();
        System.out.println("\nNew Shuffle \n" + deck);
        deck.shuffle();
        System.out.println("\nNew Shuffle \n" + deck);
    }
}
```
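A design note on the shuffle above: swapping every position with an index drawn uniformly from the whole deck does not give all permutations equal probability. A standard unbiased alternative is the Fisher–Yates shuffle. The sketch below is our own example, not from the slides, shown on an int array standing in for the 52 cards:

```java
import java.util.Random;

public class Shuffle {
    // Fisher–Yates: position i is swapped with a random j in [0, i],
    // which produces every permutation with equal probability.
    static void shuffle(int[] a) {
        Random rnd = new Random();
        for (int i = a.length - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1);
            int t = a[i];
            a[i] = a[j];
            a[j] = t;
        }
    }

    public static void main(String[] args) {
        int[] deck = new int[52];
        for (int i = 0; i < deck.length; i++) deck[i] = i;
        shuffle(deck);
        System.out.println(deck[0] + " " + deck[51]);
    }
}
```

The same loop could replace the body of `Deck.shuffle()` by swapping `Card` elements instead of ints.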
Efficient Web Service Discovery and Composition using Constraint Logic Programming

Srividya Kona, Ajay Bansal, Gopal Gupta$^1$ and Thomas D. Hite$^2$

$^1$ Department of Computer Science, The University of Texas at Dallas
$^2$ Metallect Corp., 2400 Dallas Parkway, Plano, TX 75093

Abstract. Service-oriented computing is gaining wider acceptance. For Web services to become practical, an infrastructure needs to be supported that allows users and applications to discover, deploy, compose and synthesize services automatically. This automation can take place effectively only if formal semantic descriptions of Web services are available. In this paper we present an approach to automatic service discovery and composition with both syntactic and semantic descriptions of Web services. In the syntactic case, we use a repository of services described using WSDL (Web Service Description Language). In the semantic case, the services are described using USDL (Universal Service-Semantics Description Language), a language we have developed for formally describing the semantics of Web services. We show how the challenging tasks of service discovery and composition can be implemented easily and solved efficiently via (Constraint) Logic Programming techniques. We evaluate the algorithms on repositories of different sizes and report the results.

### 1 Introduction

A Web service is a program accessible over the web that may effect some action or change in the world (i.e., cause a side-effect). Examples of such side-effects include a database being updated because of a plane reservation made over the Internet, a device being controlled, etc. The next milestone in the Web's evolution is making services ubiquitously available. As automation increases, these Web services will be accessed directly by applications rather than by humans [8].
In this context, a Web service can be regarded as a “programmatic interface” that makes application-to-application communication possible. An infrastructure that allows users to discover, deploy, synthesize and compose services automatically is needed in order to make Web services more practical. To make services ubiquitously available we need a semantics-based approach such that applications can reason about a service’s capability to a level of detail that permits their discovery, deployment, composition and synthesis [3]. Several efforts are underway to build such an infrastructure. These efforts include approaches based on the semantic web (such as USDL [1], OWL-S [4], WSML [5], WSDL-S [6]) as well as those based on XML, such as the Web Services Description Language (WSDL [7]). Approaches such as WSDL are purely syntactic in nature; that is, they address only the syntactic aspects of a Web service [17].

Given a formal description of the context in which a service is needed, the service(s) that will precisely fulfill that need can be automatically determined. This task is called discovery. If no such service is found, the directory can be searched for two or more services that can be composed to synthesize the required service. This task is called composition.

In this paper we present an approach for discovery and composition of Web services. We show how these tasks can be performed using both syntactic and semantic descriptions of Web services. The rest of the paper is organized as follows. We present different approaches to the description of Web services in Section 2, with brief descriptions of WSDL and USDL. Section 3 describes the two major Web service tasks, namely discovery and composition, with their formal definitions. In Section 4, we present our multi-step narrowing-based solution for automatic service discovery and composition. We then show the high-level design of our system, with brief descriptions of its components, in Section 5.
Various efficiency and scalability issues are discussed in Section 6. We then show performance results of our discovery and composition algorithms in Section 7. Finally, we present our conclusions.

### 2 Description of Web Services

A Web service is a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format so that other systems can interact with the Web service through that interface using messages. Currently WSDL (Web Services Description Language) [7] is used to describe Web services, but it is only syntactic in nature. The automation of Web service tasks (discovery, composition, etc.) can take place effectively only if formal semantic descriptions of Web services are available. For formally describing the semantics of Web services we have developed a language called USDL (Universal Service-Semantics Description Language). The motivation and details of USDL can be found in [1]. This section presents an overview of both the syntactic approach (WSDL) and the semantic approach (USDL) to the description of Web services.

### 2.1 WSDL: Web Services Description Language

WSDL is an XML-based language used for describing the interface of a Web service. It describes services as a set of operations (grouped into ports) operating on messages containing either document-oriented or procedure-oriented information. WSDL descriptions are purely syntactic in nature; that is, they merely specify the format of messages to be exchanged by invocable operations. Below is an example WSDL description of a FlightReservation service, similar to a service in the SAP ABAP Workbench Interface Repository for flight reservations [9], that takes a CustomerName, FlightNumber, and DepartureDate as inputs and produces a FlightReservation as the output.
```xml
<definitions ...>
  <portType name="ReserveFlight_Service">
    <operation name="ReserveFlight">
      <input message="ReserveFlight_Request"/>
      <output message="ReserveFlight_Response"/>
    </operation>
  </portType>
  <message name="ReserveFlight_Request">
    <part name="CustomerName" type="xsd:string"/>
    <part name="FlightNumber" type="xsd:string"/>
    <part name="DepartureDate" type="xsd:date"/>
  </message>
  <message name="ReserveFlight_Response">
    <part name="FlightReservation" type="xsd:string"/>
  </message>
</definitions>
```

In order to allow interoperability and machine-readability of web documents, a common conceptual ground must be agreed upon. The first step towards this common ground is standard languages such as WSDL and OWL [15]. However, these do not go far enough: for any given type of service there are numerous distinct representations in WSDL, and for high-level concepts (e.g., a ternary predicate) there are numerous disparate representations in terms of OWL, representations that are distinct in terms of OWL’s formal semantics yet equal in the actual concepts they model. This is known as the semantic aliasing problem: distinct syntactic representations with distinct formal semantics yet equal conceptual semantics. For the semantics to equate things that are conceptually equal, we need to standardize a sufficiently comprehensive set of basic concepts, i.e., a universal ontology, along with a restricted set of connectives.

Industry-specific ontologies along with OWL can also be used to formally describe Web services. This is the approach taken by the OWL-S language [4]. The problem with this approach is that it requires standardization and undue foresight. Standardization is a slow, bitter process, and industry-specific ontologies would require this process to be iterated for each specific industry. Furthermore, reaching an industry-specific standard ontology that is comprehensive and free of semantic aliasing is even more difficult.
Undue foresight is required because many useful Web services will address innovative applications and industries that don’t currently exist. Standardizing an ontology for travel and finances is easy, as these industries are well established, but new innovative services in upcoming industries also need to be ascribed formal meaning. A universal ontology will have no difficulty describing such new services.

### 2.2 USDL: Universal Service-Semantics Description Language

USDL is a language that service developers can use to specify formal semantics of Web services [1]. We need an ontology that is somewhat coarse-grained yet universal, and at a similar conceptual level to common real-world concepts. WordNet [10] is a sufficiently comprehensive ontology that meets these criteria. USDL uses the OWL WordNet ontology [11], thus providing a universal, complete, and tractable framework, free of the semantic aliasing problem, to which Web service messages and operations are mapped. As long as this mapping is precise and sufficiently expressive, reasoning can be done within the realm of OWL by using automated inference systems (such as one based on description logic), thus automatically reaping the wealth of semantic information in the OWL WordNet ontology that describes relations between ontological concepts, such as subsumption (hyponym-hypernym) and equivalence (synonym) relations.

USDL can be regarded as providing semantics to WSDL statements. Thus, if WSDL can be regarded as a language for formally specifying the syntax of Web services, USDL can be regarded as a language for formally specifying their semantics. USDL allows sophisticated conceptual modeling and searching of available Web services, automated composition, and other forms of automated service integration. For example, the WSDL syntax and USDL semantics of Web services can be published in a directory which applications can access to automatically discover services.
USDL is perhaps the first attempt to capture the semantics of Web services in a universal, yet decidable manner. Instead of documenting the function of a service as comments in English, one can write USDL statements that describe that function. USDL relies on a universal ontology (the OWL WordNet ontology) to specify the semantics of atomic services. USDL describes a service in terms of `portType` and `messages`, similar to WSDL. The formal class definitions and properties of USDL in OWL are available at [12].

The semantics of a service is given using the OWL WordNet ontology: portType (operations provided by the service) and messages (operation parameters) are mapped to disjunctions of conjunctions of (possibly negated) concepts in the OWL WordNet ontology. The semantics is given in terms of how a service affects the external world. USDL assumes that each side-effect is one of the following four operations: `create`, `update`, `delete`, or `find`. A generic `affects` side-effect is used when none of the four apply. An application that wishes to use a service automatically should be able to reason with WordNet atoms using the OWL WordNet ontology. The syntactic terms describing `portType` and `messages` are mapped to disjunctions of conjunctions of (possibly negated) OWL WordNet ontological terms. A service is then formally defined as a function, labeled by the side-effect. Using USDL, conditions/constraints on the service can also be described. Below is an excerpt of the USDL description of the `FlightReservation` service.

```
<definitions>
  <portType rdf:about="#Flight_Reservation">
    <hasOperation rdf:resource="#ReserveFlight"/>
  </portType>
  <operation rdf:about="#ReserveFlight">
```

### 3 Automated Web Service Discovery and Composition

Discovery and Composition are two of the major tasks related to Web services. In this section we formally describe them as The Discovery Problem and The Composition Problem.
Both these problems have a syntactic and a semantic version, described below.

### 3.1 The Discovery Problem

Given a repository of Web services and a query (i.e., the requirements of the requested service; we refer to it as the query service in the rest of the text), automatically finding a service from the repository that matches these requirements is the Web service Discovery problem. This problem comes in two flavors, syntactic and semantic, depending on the type of service descriptions provided in the repository. All those services that produce at least the requested output parameters and use only the provided input parameters can be valid solutions. Some of the solutions may be a little over-qualified, but they are still considered as long as they fulfill the input and output parameter requirements.

**Definition:** Let \( \mathcal{R} \) be the set of services in a Web services repository. For simplicity, a service is represented as a pair of its input and output sets. Let \( Q = (I', O') \) be a query service. The Discovery problem can be defined as automatically finding a set \( \mathcal{S} \) of services such that \( \mathcal{S} = \{ s \mid s = (I, O),\ s \in \mathcal{R},\ I \subseteq I',\ O \supseteq O' \} \). The meaning of the \( \subseteq \) relation depends on whether it is syntactic or semantic discovery: for syntactic discovery it is the subset relation, and for semantic discovery it is the subsumption (subsumes) relation. Figure 1 explains the discovery problem pictorially.

Fig. 1. Substitutable Service

**Syntactic Discovery:** WSDL provides syntactic descriptions of Web services, which can be provided in a repository. Given a query with the requirements of the requested service, the discovery problem involves finding a specific service that can fulfill the input and output criteria in the query, based on syntactic equivalence of the input and output names.
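Under the syntactic reading, the \( \subseteq \) relation is plain set inclusion, so the match test reduces to two subset checks. A minimal sketch in Python (function and variable names are illustrative, not from the paper's implementation):

```python
def matches(service, query):
    """Syntactic discovery check: a service (I, O) satisfies a query
    (I', O') when it uses only the provided inputs (I subset of I')
    and produces at least the requested outputs (O superset of O')."""
    s_in, s_out = service
    q_in, q_out = query
    return s_in <= q_in and s_out >= q_out

# An over-qualified service still matches, as the text allows.
query = ({"CustomerName", "FlightNumber", "DepartureDate"}, {"FlightReservation"})
svc = ({"CustomerName", "FlightNumber"}, {"FlightReservation", "Receipt"})
print(matches(svc, query))  # → True
```

The semantic variant would replace `<=`/`>=` on sets with a per-parameter subsumption test against the ontology.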
**Semantic Discovery:** We assume that a directory of services has already been compiled, and that this directory includes a USDL description document for each service. Inclusion of the USDL description makes a service directly “semantically” searchable. However, we still need a query language to search this directory, i.e., a language to frame the requirements on the service that an application developer is seeking. USDL itself can be used as such a query language: a USDL description of the desired service can be written, and a query processor can then search the service directory for a “matching” service.

### 3.2 The Composition Problem

Given a repository of service descriptions and a query with the requirements of the requested service, if no single matching service is found, the composition problem involves automatically finding a chain of services that can be put together in the correct order of execution to obtain the desired service. This problem can also be either syntactic or semantic, depending on the kind of service descriptions provided in the repository. The Web service discovery problem can be treated as a special case of the Web service composition problem where the length of the chain of services is one.

**Definition:** Let \( \mathcal{R} \) be the set of services in a Web services repository. For simplicity, a service is represented as a pair of its input and output sets. Let \( Q = (I', O') \) be a query service. The Composition problem can be defined as automatically finding a sequence \( S \) of services such that \( S = (S_1, S_2, ..., S_n) \), where for all \( i \), \( S_i \in \mathcal{R} \), \( S_i = (I_i, O_i) \), and \( I' \supseteq I_1,\ O_1 \supseteq I_2,\ ...,\ O_n \supseteq O' \). The meaning of the \( \supseteq \) relation depends on whether it is syntactic or semantic composition: for syntactic composition it is the superset relation, and for semantic composition it is the subsumption (subsumes) relation.
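The chaining condition in the definition can be checked mechanically: the query inputs must cover the first service's inputs, each stage's outputs must cover the next stage's inputs, and the last outputs must cover the query outputs. A hypothetical helper for the syntactic case (not the paper's code):

```python
def valid_chain(chain, query):
    """Check I' ⊇ I1, O1 ⊇ I2, ..., On ⊇ O' for a candidate
    composition, using plain set inclusion (the syntactic case)."""
    q_in, q_out = query
    available = set(q_in)          # what the current stage can draw on
    for s_in, s_out in chain:
        if not s_in <= available:  # stage needs something not produced yet
            return False
        available = set(s_out)     # strict pipeline: only the last outputs flow on
    return q_out <= available

lookup = ({"CustomerName"}, {"CustomerId"})
reserve = ({"CustomerId"}, {"FlightReservation"})
print(valid_chain([lookup, reserve], ({"CustomerName"}, {"FlightReservation"})))  # → True
```

Note the strict pipeline reading: each stage sees only the previous stage's outputs, exactly as in the definition above.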
Figure 2 explains the composition problem pictorially.

**Fig. 2.** Composite Service

**Syntactic Composition:** WSDL descriptions are provided in the repository. The Composition problem involves deriving a possible sequence of services in which each service uses only the provided input parameters and the chained services together produce at least the required output parameters. The goal is to derive a single solution that keeps the list of involved services minimal. In the sequence of services, the outputs of a service are fed in as inputs of the next service.

**Semantic Composition:** USDL descriptions are provided in the repository. For service composition, the first step is finding the set of composable services. USDL itself can be used to specify the requirements of the composed service that an application developer is seeking. Using the discovery engine, the individual services that make up the composed service can be selected. The part substitution technique [2] can be used to find the different parts of a whole task, and the selected services can be composed into one by applying the correct sequence of their execution. The correct sequence of execution can be determined by the pre-conditions and post-conditions of the individual services. That is, if a subservice $S_1$ is composed with subservice $S_2$, then the post-conditions of $S_1$ must imply the pre-conditions of $S_2$.

### 4 A Multi-step Narrowing based Solution

Given the formal definitions of the Discovery and Composition problems presented in the previous section, one can see that there are many approaches to solving them. Our approach is based on multi-step narrowing of the list of candidate services, applying different constraints at each step. In this section we discuss our Discovery and Composition algorithms in detail.

### 4.1 Discovery Algorithm

The Discovery routine takes in the query parameters and produces a list of matching services.
Our algorithm first uses the query output parameters to narrow down the list of services in the repository: it gets all those services that produce at least the query outputs. In the case of the semantic approach, the output parameters provided by a service must be equivalent to, or be subsumed by, the required output in the query. From the list of services obtained, we find the set of all input parameters of all services in the list, say \( I \). A set of wrong/bad inputs, say \( WI \), is then obtained by computing the set difference of \( I \) and the query inputs \( QI \). The list of services is further narrowed down by removing any service that has even one of the inputs from the set \( WI \). After all such services are removed, the remaining list is our final list of services, called \( Result \). Figure 3 shows a pictorial representation of our discovery engine.

```
Algorithm: Discovery
Input:  QI - QueryInputs, QO - QueryOutputs
Output: Result - ListOfServices
1. L      <- NarrowServiceList(QO);
2. I      <- GetAllInputParameters(L);
3. WI     <- GetWrongInputs(I, QI);   /* i.e., WI = I - QI */
4. Result <- FilterServicesWithWrongInputs(WI, L);
5. Return Result;
```

### 4.2 Composition Algorithm

The composition routine also starts with the query output parameters. It first finds a list of all those services which produce outputs that are equivalent to, or are subsumed by, the required output in the query. From the list obtained, for each service the algorithm fetches its input parameters, say \( I' \), and tries to find all those services from the repository that produce \( I' \) as outputs. The goal is to derive a single solution, i.e., a list of services that can be composed together to produce the requested service in the query, while keeping the list of involved services minimal.
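The composition routine just described can be sketched as a bounded recursive search that chains backward from the requested outputs. This is a hedged Python illustration (all names assumed), not the paper's CLP(FD) engine, and it only handles the case where a single service covers each stage's needs, matching the sequential chain definition:

```python
def compose(repository, q_in, q_out, chain=None, depth=3):
    """Backward-chain from the requested outputs: pick a service that
    produces them, then recursively satisfy that service's inputs from
    the query inputs or from further services. Returns one chain in
    execution order, or None if the depth bound is exhausted."""
    chain = chain if chain is not None else []
    if set(q_out) <= set(q_in):   # everything needed is already provided
        return chain
    if depth == 0:                # bound the search depth
        return None
    for s in repository:
        s_in, s_out = s
        if set(s_out) >= set(q_out) and s not in chain:
            found = compose(repository, q_in, s_in, [s] + chain, depth - 1)
            if found is not None:
                return found
    return None

repo = [({"CustomerId"}, {"FlightReservation"}),
        ({"CustomerName"}, {"CustomerId"})]
# The chain runs the name-to-id lookup first, then the reservation service.
print(compose(repo, {"CustomerName"}, {"FlightReservation"}))
```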
Figure 4 shows a pictorial representation of our composition engine.

```
Algorithm: Composition
Input:  QI - QueryInputs, QO - QueryOutputs
Output: Result - ListOfServices
1. L <- NarrowServiceList(QO);
2. For each service S in L
3.     Add S to the Result list;
4.     I  <- GetAllInputParameters(S);
5.     L' <- NarrowServiceList(I);   /* find services which produce I as output */
6.     Repeat the loop of lines 2-5 on the new list L';
7. End For
8. Return Result;
```

### 5 Implementation

Our discovery and composition engine is implemented in Prolog [14] with Constraint Logic Programming over finite domains [13], referred to as CLP(FD) hereafter. The high-level design of the Discovery and Composition engines is shown in Figure 5. The software system is made up of the following components.

### 5.1 Triple Generator

The triple generator module converts each service description into a triple. In the semantic approach, the USDL descriptions are converted to triples of the form

(Pre-Conditions, affect-type(affected-object, I, O), Post-Conditions).

The function symbol `affect-type` is the side-effect of the service and `affected-object` is the object changed by the side-effect. `I` is the list of inputs and `O` is the list of outputs. Pre-Conditions are the conditions on the input parameters and Post-Conditions are the conditions on the output parameters. In the syntactic approach, WSDL descriptions are converted to similar triples; since WSDL is syntactic in nature and provides no information about pre/post-conditions and side-effects, we use the generic `affects` for all services. Services are converted to triples so that they can be treated as terms in first-order logic and specialized unification algorithms can be applied to obtain exact, generic, specific, part and whole substitutions [2].
In case conditions on a service are not provided, the Pre-Conditions and Post-Conditions in the triple will be null. Similarly, if the affect-type is not available, this module assigns a generic `affects` to the service.

### 5.2 Query Reader

This module reads the query file and passes it on to the Triple Generator. The query file can be in any pre-decided format. For example, in the XML query files we use for testing our system, a *Provided* tag holds the list of input requirements and a *Resultant* tag holds the list of output requirements.

### 5.3 Semantic Relations Generator

For the semantic approach, matching is done based on the semantic relations between the parameters, on conditions/constraints if provided, and on side-effects if provided. We obtain the semantic relations from the OWL WordNet ontology, which provides a number of useful semantic relations such as synonyms, antonyms, hyponyms, hypernyms, meronyms, holonyms and many more. USDL descriptions point to OWL WordNet for the meanings of concepts. A theory of service substitution is described in detail in [2]; it uses the semantic relations between basic WordNet concepts to derive the semantic relations between services. This module extracts all the semantic relations and creates a list of Prolog facts.

### 5.4 Discovery Query Processor

This module compares the discovery query with all the services in the repository. The processor works as follows:

1. On the output parts of a service, the processor first looks for an *exact* substitutable. If it does not find one, it looks for a parameter with the hyponym relation [2], i.e., a *specific* substitutable.
2. On the input parts of a service, the processor first looks for an *exact* substitutable. If it does not find one, it looks for a parameter with the hypernym relation [2], i.e., a *generic* substitutable.
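The exact-first, then hyponym/hypernym preference order amounts to a relation lookup. In this sketch the relation table and names are stand-ins for facts extracted from the OWL WordNet ontology, and transitive closure of the hyponym relation is deliberately omitted for brevity:

```python
# Illustrative hyponym pairs (specific -> general); in the real system
# these come from the OWL WordNet ontology as Prolog facts.
HYPONYM = {("Airliner", "Aircraft"), ("Aircraft", "Vehicle")}

def substitutable(concrete, general):
    """Exact match first, then a hyponym match: `concrete` may stand in
    wherever `general` is expected. This covers the 'specific' output
    substitutable and, read in the other direction, the 'generic' input
    substitutable described above."""
    return concrete == general or (concrete, general) in HYPONYM

print(substitutable("Airliner", "Aircraft"))  # hyponym match → True
print(substitutable("Vehicle", "Aircraft"))   # a hypernym cannot specialize → False
```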
The discovery engine, written in Prolog with the CLP(FD) library, uses a repository of facts containing a list of all the services, their input and output parameters, and the semantic relations between the parameters. The following is a code snippet of our discovery engine:

```prolog
discovery(sol(Qname,A)) :-
    dQuery(Qname,I,O),
    encodeParam(O,OL),
    /* Narrow candidate services (S) using output list (OL) */
    narrowO(OL,S),
    fd_set(S,FDs),
    fdset_to_list(FDs,SL),
    /* Expand input list (I) using semantic relations */
    getExtInpList(I, ExtInpList),
    encodeParam(ExtInpList,IL),
```

### 5.5 Composition Query Processor

For service composition, the first step is finding the set of composable services. If a subservice $S_1$ is composed with subservice $S_2$, then the output parts of $S_1$ must be the input parts of $S_2$. Thus the processor has to find a set of services such that the outputs of the first service are inputs to the next service, and so on. These services are then stitched together to produce the desired service. Like the discovery engine, the composition engine is written in Prolog with the CLP(FD) library, over a repository of facts containing the list of services, their input and output parameters, and the semantic relations between the parameters. The following is a code snippet of our composition engine:

```prolog
composition(Qname, A) :-
    dQuery(Qname, QI, QO),
    encodeParam(QO, OL),
    narrowO(OL, SL),
    fd_set(SL, Sset),
    fdset_member(S_INDEX, Sset),
    getExtInpList(QI, InpList),
    encodeParam(InpList, IL),
    list_to_fdset(IL, QIset),
    serv(S_INDEX, SI, _),
    list_to_fdset(SI, SIset),
    fdset_subtract(SIset, QIset, ISIset),
    comp(QIset, ISIset, [S_INDEX], SA),
    decodeS(SA, A).
```

The query is converted into a Prolog query of the form

```
composition(queryService, ListOfServices).
```

The engine will try to find a ListOfServices that can be composed to obtain the requested queryService.
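For readers unfamiliar with CLP(FD), the discovery engine's narrowing shown above amounts to a few set operations over the pre-processed repository. A rough Python rendering of the syntactic case (all names are illustrative, not the paper's code):

```python
def discover(repository, q_in, q_out):
    """Mirror of the discovery engine's narrowing: keep services whose
    outputs cover the query outputs, then drop any service that needs
    an input the query does not provide."""
    candidates = [s for s in repository if set(s[1]) >= set(q_out)]
    all_inputs = set().union(*(set(s[0]) for s in candidates)) if candidates else set()
    wrong = all_inputs - set(q_in)   # WI = I - QI
    return [s for s in candidates if not (set(s[0]) & wrong)]

repo = [({"CustomerName", "FlightNumber"}, {"FlightReservation"}),
        ({"CustomerName", "CreditCard"}, {"FlightReservation"}),
        ({"CustomerName"}, {"Itinerary"})]
# Only the first service survives: the second needs CreditCard, the
# third does not produce FlightReservation.
print(discover(repo, {"CustomerName", "FlightNumber", "DepartureDate"}, {"FlightReservation"}))
```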
### 5.6 Output Generator

After the Discovery/Composition Query Processor finds a matching service, or the list of atomic services for a composed service, the results are sent to the output generator in the form of triples. This module generates the output files in any desired XML format.

### 6 Efficiency and Scalability Issues

In this section we discuss the salient features of our system with respect to the efficiency and scalability issues related to the Web service discovery and composition problem. It is because of these features that we decided on the multi-step narrowing approach to solving these problems and implemented it using Constraint Logic Programming.

**Pre-processing:** Our system initially pre-processes the repository and converts all service descriptions into Prolog terms. In the case of the semantic approach, the semantic relations are also processed and loaded as Prolog terms in memory. Once the pre-processing is done, discovery or composition queries are run against these Prolog terms, and hence we obtain results quickly and efficiently. The built-in indexing scheme and constraints in CLP(FD) facilitate the fast execution of queries. During the pre-processing phase, we use the term representations of services to set up constraints on services and on the individual input and output parameters, which further helps in getting optimized results.

**Execution Efficiency:** The use of CLP(FD) helped significantly in rapidly obtaining answers to the discovery and composition queries. We tabulated processing times for repositories of different sizes; the results are shown in Section 7. As one can see, after pre-processing the repository, our system is quite efficient in processing the query: the query execution time is insignificant.

**Programming Efficiency:** The use of Constraint Logic Programming helped us come up with simple and elegant code.
We used a number of built-in features such as indexing, set operations, and constraints, and hence did not have to spend time coding these ourselves. This made our approach efficient in terms of programming time as well. Not only is the whole system about 200 lines of code, but we also managed to develop it in less than two weeks.

**Scalability:** Our system allows for incremental updates of the repository, i.e., once the pre-processing of a repository is done, adding a new service or updating an existing one does not require re-executing the entire pre-processing phase. Instead, we can easily update the existing list of CLP(FD) terms loaded in memory and run discovery and composition queries. Our estimate is that this update time will be negligible, perhaps a few milliseconds. With real-world services, it is likely that new services will be added often or that existing services will be updated. In such a case, avoiding repeated pre-processing of the entire repository will definitely be needed, and incremental update will be of great practical use. The efficiency of the incremental update operation makes our system highly scalable.

**Use of an External Database:** In case the repository grows extremely large, saving the results of the pre-processing phase into an external database might be useful. This is part of our future work. With extremely large repositories, holding all the results of pre-processing in main memory may not be feasible. In such a case we can query a database where all the information is stored. Applying incremental updates to the database will be easily possible, thus avoiding recomputation of the pre-processed data.

**Searching for an Optimal Solution:** If there are properties with respect to which the solutions can be ranked, then setting up global constraints to obtain the optimal solution is relatively easy with the constraint-based approach.
For example, if each service has an associated cost, then the discovery and composition problems can be redefined to find the solutions with minimal cost. Our system can easily be extended to take such global constraints into account.

7 Performance

We evaluated our approach on repositories of different sizes and tabulated the pre-processing time and the query execution time. We noticed a significant difference in pre-processing time between the first run and subsequent runs (after deleting all the previously pre-processed data) on the same repository. What we found is that the repository was cached after the first run, which explains the difference in pre-processing time for the subsequent runs. We used repositories from the WS-Challenge web site [16]. Table 1 shows performance results for our Discovery Algorithm and Table 2 shows results for Composition. The times shown in the tables are wall clock times; the actual CPU time to pre-process the repository and execute the query should be less than or equal to the wall clock time. The results are plotted in Figures 6 and 7, respectively. The graphs exhibit behavior consistent with our expectations: for a fixed repository size, the pre-processing time increases with the number of input/output parameters. Similarly, for fixed input/output sizes, the pre-processing time is directly proportional to the size of the service repository. However, what is surprising is the efficiency of service query processing, which is negligible (just 1 to 3 milliseconds) even for complex queries with large service repositories.
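The multi-step narrowing idea from Section 6 can be sketched in miniature. The dictionary-based service records and the `discover` helper below are our own illustration (the actual system represents services as Prolog terms and uses CLP(FD) constraints), but the two narrowing steps mirror the approach: first keep services that produce every required output, then keep those runnable from the available inputs.

```python
# Hypothetical sketch of set-based narrowing for service discovery.
# A candidate can substitute for the query if it needs no more inputs than
# the query provides and produces at least the outputs the query requires.

def discover(repository, query_inputs, query_outputs):
    """Return services whose inputs are covered by query_inputs and whose
    outputs cover query_outputs (two narrowing steps)."""
    qi, qo = set(query_inputs), set(query_outputs)
    # Step 1: narrow by outputs -- keep services producing every required output.
    candidates = [s for s in repository if qo <= set(s["outputs"])]
    # Step 2: narrow by inputs -- keep services runnable with the given inputs.
    return [s for s in candidates if set(s["inputs"]) <= qi]

repo = [
    {"name": "ZipToCity", "inputs": ["zip"], "outputs": ["city", "state"]},
    {"name": "ZipToWeather", "inputs": ["zip", "date"], "outputs": ["city", "forecast"]},
]
print([s["name"] for s in discover(repo, ["zip"], ["city"])])  # ['ZipToCity']
```

Each narrowing step only shrinks the candidate set, which is part of why query time stays low once pre-processing is done.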
<table> <thead> <tr> <th>Repository Size (number of services)</th> <th>Number of I/O parameters</th> <th>Non-Cached Pre-processing Time (in secs)</th> <th>Cached Pre-processing Time (in secs)</th> <th>Query Execution Time (in msecs)</th> </tr> </thead> <tbody> <tr> <td>2000</td> <td>4-8</td> <td>36.5</td> <td>7.3</td> <td>1</td> </tr> <tr> <td>2000</td> <td>16-20</td> <td>45.8</td> <td>13.4</td> <td>1</td> </tr> <tr> <td>2000</td> <td>32-36</td> <td>57.8</td> <td>23.3</td> <td>2</td> </tr> <tr> <td>2500</td> <td>4-8</td> <td>47.7</td> <td>8.7</td> <td>1</td> </tr> <tr> <td>2500</td> <td>16-20</td> <td>58.7</td> <td>16.6</td> <td>1</td> </tr> <tr> <td>2500</td> <td>32-36</td> <td>71.6</td> <td>29.2</td> <td>2</td> </tr> <tr> <td>3000</td> <td>4-8</td> <td>56.8</td> <td>12.1</td> <td>1</td> </tr> <tr> <td>3000</td> <td>16-20</td> <td>77.1</td> <td>19.4</td> <td>1</td> </tr> <tr> <td>3000</td> <td>32-36</td> <td>88.2</td> <td>33.7</td> <td>3</td> </tr> </tbody> </table> Table 1. Performance of our Discovery Algorithm Table 2. Performance of our Composition Algorithm 8 Conclusion To catalogue, search and compose Web services in a semi-automatic to fully-automatic manner we need infrastructure to publish Web services, document them and query repositories for matching services. Our syntactic approach uses WSDL descriptions and applies the discovery and composition routines on first-order logic terms obtained from these descriptions. Our semantic approach uses USDL to formally document the semantics of Web services and our discovery and composition engines find substitutable and composite services that best match the desired service. Our solution produces accurate and quick results with both syntactic and semantic description of Web services. We are able to apply many optimization techniques to our system so that it works efficiently even on large repositories. Use of Constraint Logic Programming helped greatly in obtaining an efficient implementation of this system. 
References 4. OWL-S: Semantic markup for Web services. www.daml.org/services/owl-s/1.0/owl-s.html. 7. WSDL: Web Services Description Language. http://www.w3.org/TR/wSDL.
Section: Properties of Context-free Languages

Which of the following languages are CFL?
- $L = \{a^n b^n c^j \mid 0 < n \leq j\}$ — NOT CFL
- $L = \{a^n b^j a^n b^j \mid n > 0, j > 0\}$ — NOT CFL
- $L = \{a^n b^j a^k b^p \mid n + j \leq k + p, n > 0, j > 0, k > 0, p > 0\}$ — CFL
- $L = \{a^n b^j a^j b^n \mid n > 0, j > 0\}$ — CFL

Pumping Lemma for Regular Languages: Let $L$ be a regular language. Then there is a constant $m$ such that for every $w \in L$ with $|w| \geq m$, we may write $w = xyz$ such that:
- $|xy| \leq m$
- $|y| \geq 1$
- for all $i \geq 0$, $xy^i z \in L$

Pumping Lemma for CFL's: Let $L$ be any infinite CFL. Then there is a constant $m$, depending only on $L$, such that for every string $w$ in $L$ with $|w| \geq m$, we may partition $w = uvxyz$ such that:
- $|vxy| \leq m$ (limit on size of substring)
- $|vy| \geq 1$ ($v$ and $y$ not both empty)
- for all $i \geq 0$, $uv^i xy^i z \in L$

**Proof:** (sketch) There is a CFG $G$ such that $L = L(G)$. Consider the parse tree of a long string in $L$. For any sufficiently long string, some nonterminal $N$ must appear twice on a root-to-leaf path.

Example: Consider $L = \{a^n b^n c^n : n \geq 1\}$. Show $L$ is not a CFL.

- Proof: (by contradiction) Assume $L$ is a CFL and apply the pumping lemma. Let $m$ be the constant in the pumping lemma and consider $w = a^m b^m c^m$. Note $|w| \geq m$. Show there is no division of $w$ into $uvxyz$ such that $|vy| \geq 1$, $|vxy| \leq m$, and $uv^i xy^i z \in L$ for $i = 0, 1, 2, \ldots$

$w = a^m b^m c^m$

Case 1: Neither $v$ nor $y$ may contain two or more distinct symbols; otherwise $uv^2 xy^2 z$ would have symbols out of order and could not be in $L$.

Case 2: $v = a^{t_1}$; then $y = a^{t_2}$ or $y = b^{t_3}$ (since $|vxy| \leq m$). With $i = 2$ the number of $a$'s (and possibly $b$'s) grows while the number of $c$'s stays $m$, so $uv^2 xy^2 z \notin L$.

Case 3: $v = b^{t_1}$; then $y = b^{t_2}$ or $y = c^{t_3}$, and pumping unbalances the $b$'s (and possibly $c$'s) against the $a$'s.

Case 4: $v = c^{t_1}$; then $y = c^{t_2}$, and pumping changes only the number of $c$'s.

Thus, there is no breakdown of $w$ into $uvxyz$ such that $|vy| \geq 1$, $|vxy| \leq m$ and for all $i \geq 0$, $uv^i xy^i z$ is in $L$. Contradiction; thus, $L$ is not a CFL. Q.E.D.

Example: Why would we want to recognize a language of the type $\{a^n b^n c^n : n \geq 1\}$?

Example: Consider $L = \{a^n b^n c^p : p > n > 0\}$.
Show $L$ is not a CFL.

- Proof: Assume $L$ is a CFL and apply the pumping lemma. Let $m$ be the constant in the pumping lemma and consider $w = a^m b^m c^{m+1}$. Note $|w| \geq m$. Show there is no division of $w$ into $uvxyz$ such that $|vy| \geq 1$, $|vxy| \leq m$, and $uv^i xy^i z \in L$ for $i = 0, 1, 2, \ldots$

Strategy: choose a tight string to get the contradiction easily! Here the number of $c$'s exceeds the number of $a$'s (and $b$'s) by only one, so even a single pump can violate $p > n$.

Case 1: Neither $v$ nor $y$ may contain distinct symbols; if $v$ contained $a$'s and $b$'s, then $uv^2 xy^2 z \notin L$ because symbols would appear out of order.

Case 2: $v$ is all $a$'s; then $y$ is all $a$'s or all $b$'s (since $|vxy| \leq m$). With $i = 2$, either the number of $a$'s no longer equals the number of $b$'s, or both reach at least $m + 1$ so that $p > n$ fails. The remaining cases ($v$ all $b$'s or all $c$'s) fail by similar counting.

Example: Consider $L = \{a^j b^k : k = j^2\}$. Show $L$ is not a CFL.

- Proof: Assume $L$ is a CFL and apply the pumping lemma. Let $m$ be the constant in the pumping lemma and consider $w = a^m b^{m^2}$. Show there is no division of $w$ into $uvxyz$ such that $|vy| \geq 1$, $|vxy| \leq m$, and $uv^i xy^i z \in L$ for $i = 0, 1, 2, \ldots$

Case 1: Neither $v$ nor $y$ can contain 2 or more distinct symbols. If $v$ contains $a$'s and $b$'s, then $uv^2 xy^2 z \notin L$ since there will be $b$'s before $a$'s. Thus, $v$ and $y$ can each be only $a$'s or only $b$'s (not mixed).

Case 2: $v = a^r$ with $r \geq 1$ and $y = b^t$. With $i = 2$ there are $m + r \geq m + 1$ $a$'s, so membership would require at least $(m + 1)^2 = m^2 + 2m + 1$ $b$'s; but there are only $m^2 + t \leq m^2 + m$ of them ($t \leq m$ because $|vxy| \leq m$). There will be too few $b$'s.

Another case: $v = b^s$ and $y = b^t$ with $s + t \geq 1$. With $i = 2$ the number of $b$'s is $m^2 + s + t$, and since $1 \leq s + t \leq m < 2m + 1$, this lies strictly between $m^2$ and $(m + 1)^2$; the number of $b$'s is not the square of the number of $a$'s. You have to prove all the cases to obtain the contradiction.

Example: Consider $L = \{w\bar{w} : w \in \Sigma^*\}$, $\Sigma = \{a, b\}$, where $\bar{w}$ is the string $w$ with each occurrence of $a$ replaced by $b$ and each occurrence of $b$ replaced by $a$. Show $L$ is not a CFL.

**Proof:** Assume $L$ is a CFL and apply the pumping lemma.
Let $m$ be the constant in the pumping lemma and consider $w = a^m b^m b^m a^m$; note that $w = s\bar{s}$ for $s = a^m b^m$, so $w \in L$. Show there is no division of $w$ into $uvxyz$ such that $|vy| \geq 1$, $|vxy| \leq m$, and $uv^i xy^i z \in L$ for $i = 0, 1, 2, \ldots$

Case 1: $v = a^{t_1}$ (in the leading block of $a$'s) and $y = b^{t_2}$ (in the first block of $b$'s) with $t_1 \neq t_2$. With $i = 0$, the string has $2m - t_1$ $a$'s and $2m - t_2$ $b$'s; every string in $L$ has equally many $a$'s and $b$'s, so the number of $a$'s won't match the number of $b$'s.

Case 2: $v$ and $y$ both lie within a single block, say $v = a^{t_1}$ and $y = a^{t_2}$ with $t_1 + t_2 \geq 1$. With $i = 0$, the $a$-count drops while the $b$-count stays $2m$, so the number of $a$'s is less than the number of $b$'s.

The remaining cases are handled by similar counting arguments on the two halves of the pumped string.

Example: Consider $L = \{a^n b^p b^p a^n \mid n, p \geq 0\}$. $L$ is a CFL, so the pumping lemma should apply! Let $m \geq 4$ be the constant in the pumping lemma. Consider $w = a^m b^m b^m a^m$. We can break $w$ into $uvxyz$ with:

$u = a^m b^{m-2}$, $v = b$, $x = bb$, $y = b$, $z = b^{m-2} a^m$

Then $uv^i xy^i z = a^m b^{m-1+i} b^{m-1+i} a^m \in L$ for any $i \geq 0$.

Chap 8.2 Closure Properties of CFL's

Theorem: CFL's are closed under union, concatenation, and star-closure.

• Proof: Given two CFGs $G_1 = (V_1, T_1, S_1, P_1)$ and $G_2 = (V_2, T_2, S_2, P_2)$ with $V_1 \cap V_2 = \emptyset$:

– Union: Construct $G_3$ s.t. $L(G_3) = L(G_1) \cup L(G_2)$.
$G_3 = (V_3, T_3, S_3, P_3)$, $V_3 = V_1 \cup V_2 \cup \{S_3\}$, $T_3 = T_1 \cup T_2$,
$P_3 = P_1 \cup P_2 \cup \{S_3 \rightarrow S_1 \mid S_2\}$

– Concatenation: Construct $G_3$ s.t. $L(G_3) = L(G_1) \circ L(G_2)$. Similar:
$G_3 = (V_3, T_3, S_3, P_3)$, $V_3 = V_1 \cup V_2 \cup \{S_3\}$, $T_3 = T_1 \cup T_2$,
$P_3 = P_1 \cup P_2 \cup \{S_3 \rightarrow S_1 S_2\}$

– Star-Closure: Construct $G_3$ s.t. $L(G_3) = L(G_1)^*$.
$G_3 = (V_3, T_3, S_3, P_3)$, $V_3 = V_1 \cup \{S_3\}$, $T_3 = T_1$,
$P_3 = P_1 \cup \{S_3 \rightarrow S_1 S_3 \mid \lambda\}$

Theorem: CFL's are NOT closed under intersection and complementation.

- **Proof:**
- **Intersection:**
$L_1 = \{a^n b^n c^m \mid n, m \geq 0\}$
$L_2 = \{a^n b^m c^n \mid n, m \geq 0\}$
$L_1 \cap L_2 = \{a^n b^n c^n \mid n \geq 0\}$ is not CFL!
– Complementation: if CFLs were closed under complementation, then closure under union would imply closure under intersection by De Morgan's laws, which we just showed fails.

Theorem: CFL's are closed under regular intersection. If $L_1$ is a CFL and $L_2$ is regular, then $L_1 \cap L_2$ is a CFL.

• Proof: (sketch) We take an NPDA for $L_1$ and a DFA for $L_2$ and construct an NPDA for $L_1 \cap L_2$.

$M_1 = (Q_1, \Sigma, \Gamma, \delta_1, q_0, z, F_1)$ is an NPDA such that $L(M_1) = L_1$.
$M_2 = (Q_2, \Sigma, \delta_2, p_0, F_2)$ is a DFA such that $L(M_2) = L_2$.
$M_3 = (Q_3, \Sigma, \Gamma, \delta_3, (q_0, p_0), z, F_3)$, where $Q_3 = Q_1 \times Q_2$ and $F_3 = \{(q, p) \mid q \in F_1, p \in F_2\}$.

Example of replacing arcs (NOT a proof!): [Figure: an NPDA arc labeled $a, x; y$ and a DFA arc labeled $a$ are combined into a single arc of the product automaton, labeled $a, x; y$, between the paired states.]

We must formally define $\delta_3$: if $(q_k, x) \in \delta_1(q_i, a, b)$ and $\delta_2(q_j, a) = q_l$, then $((q_k, q_l), x) \in \delta_3((q_i, q_j), a, b)$.

Must show $((q_0, p_0), w, z) \vdash^* ((q_i, q_j), \lambda, x)$ if and only if $(q_0, w, z) \vdash^* (q_i, \lambda, x)$ in $M_1$ and $\delta_2^*(p_0, w) = q_j$, with acceptance when $q_i \in F_1$ and $q_j \in F_2$.

Questions about CFL:
1. Decide whether a CFL is empty: get rid of useless productions; if nothing is left, the language is empty.
2. Decide whether a CFL is infinite: get rid of useless productions again, then look for a variable that repeats, $A \Rightarrow^* xAy$ — i.e., look for a cycle in the variable dependency graph.

Example: Consider $L = \{a^{2n}b^{2m}c^n d^m : n, m \geq 0\}$. Show $L$ is not a CFL.

- **Proof:** Assume $L$ is a CFL and apply the pumping lemma. Let $m$ be the constant in the pumping lemma and consider $w = a^{2m}b^{2m}c^m d^m$. Show there is no division of $w$ into $uvxyz$ such that $|vy| \geq 1$, $|vxy| \leq m$, and $uv^i xy^i z \in L$ for $i = 0, 1, 2, \ldots$

**Case 1:** Neither $v$ nor $y$ can contain 2 or more distinct symbols. If $v$ contains $a$'s and $b$'s, then $uv^2xy^2z \notin L$ since there will be $b$'s before $a$'s. Thus, $v$ and $y$ can be only $a$'s, $b$'s, $c$'s, or $d$'s (not mixed).
**Case 2:** $v = a^{t_1}$, then $y = a^{t_2}$ or $b^{t_3}$ (by $|vxy| \leq m$).
If $y = a^{t_2}$, then $uv^2xy^2z = a^{2m+t_1+t_2}b^{2m}c^m d^m \notin L$ since $t_1 + t_2 > 0$: the number of $a$'s is not twice the number of $c$'s.
If $y = b^{t_3}$, then $uv^2xy^2z = a^{2m+t_1}b^{2m+t_3}c^m d^m \notin L$ since $t_1 + t_3 > 0$: either the number of $a$'s (denoted $n(a)$) is not twice $n(c)$, or $n(b)$ is not twice $n(d)$.

**Case 3:** $v = b^{t_1}$, then $y = b^{t_2}$ or $c^{t_3}$.
If $y = b^{t_2}$, then $uv^2xy^2z = a^{2m}b^{2m+t_1+t_2}c^m d^m \notin L$ since $t_1 + t_2 > 0$, so $n(b) > 2 \cdot n(d)$.
If $y = c^{t_3}$, then $uv^2xy^2z = a^{2m}b^{2m+t_1}c^{m+t_3} d^m \notin L$ since $t_1 + t_3 > 0$: either $n(b) > 2 \cdot n(d)$ or $2 \cdot n(c) > n(a)$.

**Case 4:** $v = c^{t_1}$, then $y = c^{t_2}$ or $d^{t_3}$.
If $y = c^{t_2}$, then $uv^2xy^2z = a^{2m}b^{2m}c^{m+t_1+t_2}d^m \notin L$ since $t_1 + t_2 > 0$, so $2 \cdot n(c) > n(a)$.
If $y = d^{t_3}$, then $uv^2xy^2z = a^{2m}b^{2m}c^{m+t_1}d^{m+t_3} \notin L$ since $t_1 + t_3 > 0$: either $2 \cdot n(c) > n(a)$ or $2 \cdot n(d) > n(b)$.

**Case 5:** $v = d^{t_1}$, then $y = d^{t_2}$, and $uv^2xy^2z = a^{2m}b^{2m}c^{m}d^{m+t_1+t_2} \notin L$ since $t_1 + t_2 > 0$, so $2 \cdot n(d) > n(b)$.

Thus, there is no breakdown of $w$ into $uvxyz$ such that $|vy| \geq 1$, $|vxy| \leq m$ and for all $i \geq 0$, $uv^ixy^iz$ is in $L$. Contradiction; thus, $L$ is not a CFL. Q.E.D.
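The exhaustive case analysis above can be machine-checked for a small value of the pumping constant. The brute-force script below is our own illustration (not part of the notes): it enumerates every decomposition $w = uvxyz$ with $|vxy| \leq m$ and $|vy| \geq 1$ and confirms that each one leaves the language for $i = 0$ or $i = 2$.

```python
# Brute-force check of the case analysis for L = {a^(2n) b^(2m') c^n d^m'}:
# for small m, every legal decomposition of w = a^(2m) b^(2m) c^m d^m
# fails the pumping lemma at i = 0 or i = 2.

import re

def in_L(s):
    """Membership test: a's, b's, c's, d's in order, with a = 2c and b = 2d."""
    match = re.fullmatch(r"(a*)(b*)(c*)(d*)", s)
    if not match:
        return False
    a, b, c, d = (len(g) for g in match.groups())
    return a == 2 * c and b == 2 * d

def pumping_fails_everywhere(w, m):
    """True iff no decomposition uvxyz (|vxy| <= m, |vy| >= 1) survives pumping."""
    n = len(w)
    for start in range(n):
        for vlen in range(m + 1):
            for xlen in range(m + 1 - vlen):
                for ylen in range(m + 1 - vlen - xlen):
                    u = w[:start]
                    v = w[start:start + vlen]
                    x = w[start + vlen:start + vlen + xlen]
                    y = w[start + vlen + xlen:start + vlen + xlen + ylen]
                    z = w[start + vlen + xlen + ylen:]
                    if len(v) + len(y) == 0:      # need |vy| >= 1
                        continue
                    # every legal decomposition must break for i = 0 or i = 2
                    if all(in_L(u + v * i + x + y * i + z) for i in (0, 2)):
                        return False
    return True

m = 3
w = "a" * (2 * m) + "b" * (2 * m) + "c" * m + "d" * m
print(pumping_fails_everywhere(w, m))  # True: no decomposition survives
```

This does not replace the proof (the lemma's constant is unknown), but it catches mistakes in a case analysis quickly.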
RFC 9224 Finding the Authoritative Registration Data Access Protocol (RDAP) Service Abstract This document specifies a method to find which Registration Data Access Protocol (RDAP) server is authoritative to answer queries for a requested scope, such as domain names, IP addresses, or Autonomous System numbers. This document obsoletes RFC 7484. Status of This Memo This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9224. Copyright Notice Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License. # Table of Contents 1. Introduction 2. Conventions Used in This Document 3. Structure of the RDAP Bootstrap Service Registries 4. Bootstrap Service Registry for Domain Name Space 5. Bootstrap Service Registries for Internet Numbers - 5.1. Bootstrap Service Registry for IPv4 Address Space - 5.2. Bootstrap Service Registry for IPv6 Address Space - 5.3. Bootstrap Service Registry for AS Number Space 6. Entity 7. 
Non-existent Entries or RDAP URL Values 8. Deployment and Implementation Considerations 9. Limitations 10. Formal Definition - 10.1. Imported JSON Terms - 10.2. Registry Syntax 11. Security Considerations 12. IANA Considerations - 12.1. Bootstrap Service Registry for IPv4 Address Space - 12.2. Bootstrap Service Registry for IPv6 Address Space - 12.3. Bootstrap Service Registry for AS Number Space - 12.4. Bootstrap Service Registry for Domain Name Space 13. References - 13.1. Normative References - 13.2. Informative References Appendix A. Changes since RFC 7484 Acknowledgements Author's Address 1. Introduction Querying and retrieving registration data from registries are defined in the Registration Data Access Protocol (RDAP) [RFC7480] [RFC7481] [RFC9082] [RFC9083]. These documents do not specify where to send the queries. This document specifies a method to find which server is authoritative to answer queries for the requested scope. Top-Level Domains (TLDs), Autonomous System (AS) numbers, and network blocks are delegated by IANA to Internet registries such as TLD registries and Regional Internet Registries (RIRs) that then issue further delegations and maintain information about them. Thus, the bootstrap information needed by RDAP clients is best generated from data and processes already maintained by IANA; the relevant registries already exist at [ipv4reg], [ipv6reg], [asreg], and [domainreg]. This document obsoletes [RFC7484]. Per this document, IANA has created new registries based on a JSON format specified in this document, herein named RDAP Bootstrap Service Registries. These new registries are based on the existing entries of the above-mentioned registries. An RDAP client fetches the RDAP Bootstrap Service Registries, extracts the data, and then performs a match with the query data to find the authoritative registration data server and appropriate query base URL. 2. 
Conventions Used in This Document The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here. 3. Structure of the RDAP Bootstrap Service Registries The RDAP Bootstrap Service Registries, as specified in Section 12 below, have been made available as JSON [RFC8259] objects, which can be retrieved via HTTP from locations specified by IANA. The JSON object for each registry contains a series of members containing metadata about the registry such as a version identifier, a timestamp of the publication date of the registry, and a description. Additionally, a "services" member contains the registry items themselves, as an array. Each item of the array contains a second-level array, with two elements, each of them being a third-level array. Each element of the Services Array is a second-level array with two elements: in order, an Entry Array and a Service URL Array. The Entry Array contains all entries that have the same set of base RDAP URLs. The Service URL Array contains the list of base RDAP URLs usable for the entries found in the Entry Array. Elements within these two arrays are not ordered in any way. An example structure of the JSON output of an RDAP Bootstrap Service Registry is illustrated: The formal syntax is described in Section 10. The "version" corresponds to the format version of the registry. This specification defines version "1.0". The syntax of the "publication" value conforms to the Internet date/time format [RFC3339]. The value is the latest update date of the registry by IANA. The optional "description" string can contain a comment regarding the content of the bootstrap object. Per [RFC7258], in each array of base RDAP URLs, the secure versions of the transport protocol SHOULD be preferred and tried first. 
For example, if the base RDAP URLs array contains both HTTPS and HTTP URLs, the bootstrap client SHOULD try the HTTPS version first. Base RDAP URLs MUST have a trailing "/" character because they are concatenated to the various segments defined in [RFC9082]. JSON names MUST follow the format recommendations of Section 6 of [RFC7480]. Any unrecognized JSON object properties or values MUST be ignored by implementations. Internationalized Domain Name labels used as entries or base RDAP URLs in the registries defined in this document MUST be only represented using their A-label form as defined in [RFC5890]. All Domain Name labels used as entries or base RDAP URLs in the registries defined in this document MUST be only represented in lowercase.

```json
{
    "version": "1.0",
    "publication": "YYYY-MM-DDTHH:MM:SSZ",
    "description": "Some text",
    "services": [
        [
            ["entry1", "entry2", "entry3"],
            [
                "https://registry.example.com/myrdap/",
                "http://registry.example.com/myrdap/"
            ]
        ],
        [
            ["entry4"],
            ["https://example.org/"]
        ]
    ]
}
```

4. Bootstrap Service Registry for Domain Name Space The JSON output of this registry contains domain label entries attached to the root, grouped by base RDAP URLs, as shown in this example.

```json
{
    "version": "1.0",
    "publication": "2024-01-07T10:11:12Z",
    "description": "Some text",
    "services": [
        [
            ["net", "com"],
            ["https://registry.example.com/myrdap/"]
        ],
        [
            ["org", "mytld"],
            ["https://example.org/"]
        ],
        [
            ["xn--zckzah"],
            [
                "https://example.net/rdap/xn--zckzah/",
                "http://example.net/rdap/xn--zckzah/"
            ]
        ]
    ]
}
```

The domain name's authoritative registration data service is found by doing the label-wise longest match of the target domain name with the domain values in the Entry Arrays in the IANA "Bootstrap Service Registry for Domain Name Space". The match is done per label, from right to left. If the longest match results in multiple entries, then those entries are considered equivalent.
The values contained in the Service URL Array of the matching second-level array are the valid base RDAP URLs as described in [RFC9082]. For example, a domain RDAP query for a.b.example.com matches the com entry in one of the arrays of the registry. The base RDAP URL for this query is then taken from the second element of the array, which is an array of base RDAP URLs valid for this entry. The client chooses one of the base URLs from this array; in this example, it chooses the only one available, "https://registry.example.com/myrdap/". The segment specified in [RFC9082] is then appended to the base URL to complete the query. The complete query is then "https://registry.example.com/myrdap/domain/a.b.example.com". If a domain RDAP query for a.b.example.com matches both com and example.com entries in the registry, then the longest match applies and the example.com entry is used by the client. If the registry contains entries such as com and goodexample.com, then a domain RDAP query for example.com only matches the com entry because matching is done on a per-label basis. The entry for the root of the domain name space is specified as "". 5. Bootstrap Service Registries for Internet Numbers This section discusses IPv4 and IPv6 address space and Autonomous System numbers. For IP address space, the authoritative registration data service is found by doing a longest match of the target address with the values of the arrays in the corresponding RDAP Bootstrap Service Registry for Address Space. The longest match is done the same way as in packet forwarding: the addresses are converted in binary form and then the binary strings are compared to find the longest match up to the specified prefix length. The values contained in the second element of the array are the base RDAP URLs as described in [RFC9082]. 
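The binary longest-match lookup just described can be sketched with Python's standard `ipaddress` module; the registry data below mirrors the examples in this document, and the helper name is our own (each bootstrap registry covers a single address family).

```python
# Longest-prefix match over a bootstrap registry's Entry Arrays (Section 5):
# the matching entry with the longest prefix length wins, as in packet
# forwarding.

import ipaddress

def lookup_ip(services, target):
    """services: list of [entry_array, url_array]; target: address or prefix.
    Returns the Service URL Array of the longest match, or None."""
    net = ipaddress.ip_network(target, strict=False)
    best_len, best_urls = -1, None
    for entries, urls in services:
        for prefix in entries:
            candidate = ipaddress.ip_network(prefix)
            if net.subnet_of(candidate) and candidate.prefixlen > best_len:
                best_len, best_urls = candidate.prefixlen, urls
    return best_urls

services = [
    [["198.51.100.0/24", "192.0.0.0/8"], ["https://rir1.example.com/myrdap/"]],
    [["203.0.113.0/24", "192.0.2.0/24"], ["https://example.org/"]],
]
# "192.0.2.1/25" matches both 192.0.0.0/8 and 192.0.2.0/24; /24 is longer.
print(lookup_ip(services, "192.0.2.1/25"))  # ['https://example.org/']
```

Among the returned URLs, a client would then prefer the HTTPS entries, per Section 3.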
The longest match method enables covering prefixes of a larger address space pointing to one base RDAP URL while more specific prefixes within the covering prefix are being served by another base RDAP URL. 5.1. Bootstrap Service Registry for IPv4 Address Space The JSON output of this registry contains IPv4 prefix entries, specified in Classless Inter-domain Routing (CIDR) format [RFC4632] and grouped by RDAP URLs, as shown in this example.

```json
{
    "version": "1.0",
    "publication": "2024-01-07T10:11:12Z",
    "description": "RDAP Bootstrap file for example registries.",
    "services": [
        [
            ["198.51.100.0/24", "192.0.0.0/8"],
            ["https://rir1.example.com/myrdap/"]
        ],
        [
            ["203.0.113.0/24", "192.0.2.0/24"],
            ["https://example.org/"]
        ],
        [
            ["203.0.113.0/28"],
            [
                "https://example.net/rdaprir2/",
                "http://example.net/rdaprir2/"
            ]
        ]
    ]
}
```

For example, a query for "192.0.2.1/25" matches the "192.0.0.0/8" entry and the "192.0.2.0/24" entry in the example registry above. The latter is chosen by the client because it is the longest match. The base RDAP URL for this query is then taken from the second element of the array, which is an array of base RDAP URLs valid for this entry. The client chooses one of the base URLs from this array; in this example, it chooses the only one available, "https://example.org/". The {resource} specified in [RFC9082] is then appended to the base URL to complete the query. The complete query is then "https://example.org/ip/192.0.2.1/25". 5.2. Bootstrap Service Registry for IPv6 Address Space The JSON output of this registry contains IPv6 prefix entries, using [RFC5952] text representation of the address prefixes format, grouped by base RDAP URLs, as shown in this example.
```json
{
  "version": "1.0",
  "publication": "2024-01-07T10:11:12Z",
  "description": "RDAP Bootstrap file for example registries.",
  "services": [
    [
      ["2001:db8::/34"],
      ["https://rir2.example.com/myrdap/"]
    ],
    [
      ["2001:db8:4000::/36", "2001:db8:ffff::/48"],
      ["https://example.org/"]
    ],
    [
      ["2001:db8:1000::/36"],
      [
        "https://example.net/rdaprir2/",
        "http://example.net/rdaprir2/"
      ]
    ]
  ]
}
```

For example, a query for "2001:db8:1000::/48" matches the "2001:db8::/34" entry and the "2001:db8:1000::/36" entry in the example registry above. The latter is chosen by the client because it is the longest match. The base RDAP URL for this query is then taken from the second element of the array, which is an array of base RDAP URLs valid for this entry. The client chooses one of the base URLs from this array; in this example, it chooses "https://example.net/rdaprir2/" because it is the secure version of the protocol. The segment specified in [RFC9082] is then appended to the base URL to complete the query. The complete query is therefore "https://example.net/rdaprir2/ip/2001:db8:1000::/48". If the target RDAP server does not answer, the client can then use another URL prefix from the array.

5.3. Bootstrap Service Registry for AS Number Space

The JSON output of this registry contains entries for AS number ranges, grouped by base RDAP URLs, as shown in this example. The Entry Array is an array containing the list of AS number ranges served by the base RDAP URLs found in the second element. Each element of the array contains two AS numbers represented in decimal format, separated by a hyphen, that represent the range of AS numbers between the two AS numbers (inclusive), where values are in increasing order (e.g., 100-200, not 200-100). A single AS number is represented as a range of two identical AS numbers. AS numbers are represented as 'asplain' as defined in [RFC5396]. Ranges **MUST NOT** overlap.
```json
{
  "version": "1.0",
  "publication": "2024-01-07T10:11:12Z",
  "description": "RDAP Bootstrap file for example registries.",
  "services": [
    [
      ["64496-64496"],
      ["https://rir3.example.com/myrdap/"]
    ],
    [
      ["64497-64510", "65536-65551"],
      ["https://example.org/"]
    ],
    [
      ["64512-65534"],
      [
        "http://example.net/rdaprir2/",
        "https://example.net/rdaprir2/"
      ]
    ]
  ]
}
```

For example, a query for AS 65411 matches the 64512-65534 entry in the example registry above. The base RDAP URL for this query is then taken from the second element of the array, which is an array of base RDAP URLs valid for this entry. The client chooses one of the base URLs from this array; in this example, it chooses "https://example.net/rdaprir2/". The segment specified in [RFC9082] is then appended to the base URL to complete the query. The complete query is, therefore, "https://example.net/rdaprir2/autnum/65411". If the server does not answer, the client can then use another URL prefix from the array.

6. Entity

Entities (such as contacts, registrants, or registrars) can be queried by handle as described in [RFC9082]. Since there is no global name space for entities, this document does not describe how to find the authoritative RDAP server for entities. However, it is possible that, if the entity identifier was received from a previous query, the same RDAP server could be queried for that entity, or the entity identifier itself is a fully qualified URL that can be queried. The mechanism described in [RFC8521] MAY also be used.

7. Non-existent Entries or RDAP URL Values

The registries may not contain the requested value. In these cases, there is no known RDAP server for that requested value, and the client SHOULD provide an appropriate error message to the user.

8. Deployment and Implementation Considerations

This method relies on the fact that RDAP clients are fetching the IANA registries to then find the servers locally. Clients SHOULD NOT fetch the registry on every RDAP request.
Clients SHOULD cache the registry, but use underlying protocol signaling, such as the HTTP Expires header field [RFC7234], to identify when it is time to refresh the cached registry. Some authorities of registration data may work together on sharing their information for a common service, including mutual redirection [REDIRECT-RDAP]. When a new object is allocated, such as a new AS range, a new TLD, or a new IP address range, there is no guarantee that this new object will have an entry in the corresponding bootstrap RDAP registry, since the setup of the RDAP server for this new entry may become live and registered later. Therefore, the clients should expect that even if an object, such as a TLD, IP address range, or AS range, is allocated, the existence of the entry in the corresponding bootstrap registry is not guaranteed.

9. Limitations

This method does not provide a direct way to find authoritative RDAP servers for any other objects than the ones described in this document. In particular, the following objects are not bootstrapped with the method described in this document:

- entities
- queries using search patterns that do not contain a terminating string that matches some entries in the registries
- nameservers
- help

10. Formal Definition

This section is the formal definition of the registries. The structure of JSON objects and arrays using a set of primitive elements is defined in [RFC8259]. Those elements are used to describe the JSON structure of the registries.

10.1. Imported JSON Terms

- OBJECT: a JSON object, defined in Section 4 of [RFC8259]
- MEMBER: a member of a JSON object, defined in Section 4 of [RFC8259]
- MEMBER-NAME: the name of a MEMBER, defined as a "string" in Section 4 of [RFC8259]
- MEMBER-VALUE: the value of a MEMBER, defined as a "value" in Section 4 of [RFC8259]
- ARRAY: an array, defined in Section 5 of [RFC8259]
- ARRAY-VALUE: an element of an ARRAY, defined in Section 5 of [RFC8259]
- STRING: a "string", as defined in Section 7 of [RFC8259]

10.2.
Registry Syntax

Using the above terms for the JSON structures, the syntax of a registry is defined as follows:

- rdap-bootstrap-registry: an OBJECT containing a MEMBER version and a MEMBER publication, an optional MEMBER description, and a MEMBER services-list
- version: a MEMBER with MEMBER-NAME "version" and MEMBER-VALUE a STRING
- publication: a MEMBER with MEMBER-NAME "publication" and MEMBER-VALUE a STRING
- description: a MEMBER with MEMBER-NAME "description" and MEMBER-VALUE a STRING
- services-list: a MEMBER with MEMBER-NAME "services" and MEMBER-VALUE a services-array
- services-array: an ARRAY, where each ARRAY-VALUE is a service
- service: an ARRAY of 2 elements, where the first ARRAY-VALUE is an entry-list and the second ARRAY-VALUE is a service-uri-list
- entry-list: an ARRAY, where each ARRAY-VALUE is an entry
- entry: a STRING
- service-uri-list: an ARRAY, where each ARRAY-VALUE is a service-uri
- service-uri: a STRING

11. Security Considerations

By providing a bootstrap method to find RDAP servers, this document helps to ensure that the end users will get the RDAP data from an authoritative source instead of from rogue sources. The method has the same security properties as the RDAP protocols themselves. The transport used to access the registries uses TLS [RFC8446]. Additional considerations on using RDAP are described in [RFC7481].

12. IANA Considerations

IANA has created the RDAP Bootstrap Services Registries listed below and made them available as JSON objects. The contents of these registries are described in Sections 3, 4, and 5, with the formal syntax specified in Section 10. The registries MUST be accessible only through HTTPS (TLS [RFC8446]) transport.
The process for adding or updating entries in these registries differs from the normal IANA registry processes: these registries are generated from the data, processes, and policies maintained by IANA in their allocation registries (ipv4reg, ipv6reg, asreg, and domainreg), with the addition of new RDAP server information. IANA updates RDAP Bootstrap Services Registries entries from the allocation registries as those registries are updated. This document does not change any policies related to the allocation registries; IANA has provided a mechanism for collecting the RDAP server information. IANA has created a new top-level category on the Protocol Registries page: <https://www.iana.org/protocols>. The group is called "Registration Data Access Protocol (RDAP)". Each of the RDAP Bootstrap Services Registries has been made available for on-demand download in the JSON format by the general public, and that registry's URI is listed directly on the Protocol Registries page. Other normal registries will be added to this group by other documents, but the reason the URIs for these registries are clearly listed on the main page is to make those URIs obvious to implementers – these are registries that will be accessed by software, as well as by humans using them for reference information. Because these registries will be accessed by software, the download demand for the RDAP Bootstrap Services Registries may be unusually high compared to normal IANA registries. The technical infrastructure by which registries are published has been put in place by IANA to support the load. Since the publication of [RFC7484], no issues have been reported regarding the load or the service. As discussed in Section 8, software that accesses these registries will depend on the HTTP Expires header field to limit their query rate. 
It is, therefore, important for that header field to be properly set to provide timely information as the registries change, while maintaining a reasonable load on the IANA servers. The HTTP Content-Type returned to clients accessing these JSON-formatted registries MUST be "application/json", as defined in [RFC8259]. Because of how information in the RDAP Bootstrap Services Registries is grouped and formatted, the registry entries may not be sortable. It is, therefore, not required or expected that the entries be ordered in any way. 12.1. Bootstrap Service Registry for IPv4 Address Space Entries in this registry contain at least the following: • a CIDR [RFC4632] specification of the network block being registered • one or more URLs that provide the RDAP service regarding this registration 12.2. Bootstrap Service Registry for IPv6 Address Space Entries in this registry contain at least the following: • an IPv6 prefix [RFC5952] specification of the network block being registered • one or more URLs that provide the RDAP service regarding this registration 12.3. Bootstrap Service Registry for AS Number Space Entries in this registry contain at least the following: • a range of Autonomous System numbers being registered • one or more URLs that provide the RDAP service regarding this registration 12.4. Bootstrap Service Registry for Domain Name Space Entries in this registry contain at least the following: • a domain name attached to the root being registered • one or more URLs that provide the RDAP service regarding this registration 13. References 13.1. Normative References 13.2. Informative References Appendix A. Changes since RFC 7484 There are no substantive changes except for minor clarifications. This update is primarily to meet the requirements for moving to an Internet Standard. Acknowledgements The WEIRDS Working Group had multiple discussions on this topic, including a session during IETF 84, where various methods such as in-DNS and others were debated. 
The idea of using IANA registries was discovered by the author during discussions with his colleagues as well as by a comment from Andy Newton. All the people involved in these discussions are herein acknowledged. Linlin Zhou, Jean-Philippe Dionne, John Levine, Kim Davies, Ernie Dainow, Scott Hollenbeck, Arturo Servin, Andy Newton, Murray Kucherawy, Tom Harrison, Naoki Kambe, Alexander Mayrhofer, Edward Lewis, Pete Resnick, Alessandro Vesely, Bert Greevenbosch, Barry Leiba, Jari Arkko, Kathleen Moriarty, Stephen Farrell, Richard Barnes, and Jean-François Tremblay provided input and suggestions to the first version of this document. Guillaume Leclanche was a coauthor of this document for some revisions; his support is herein acknowledged and greatly appreciated. The section on formal definition was inspired by Section 6.2 of [RFC7071]. This new version (this document) received comments and suggestions from Gavin Brown, Patrick Mevzek, John Levine, Jasdip Singh, George Michaelson, Scott Hollenbeck, Russ Housley, Joel Halpern, Lars Eggert, Benjamin Kaduk, Scott Kelly, Éric Vyncke, John Scudder, Erik Kline, and Robert Wilton. Errata for RFC 7484 were submitted by Pieter Vandepitte and were applied to this document.

**Author's Address**

**Marc Blanchet**
Viagenie
246 Aberdeen
Quebec QC G1R 2E1
Canada
Email: Marc.Blanchet@viagenie.ca
URI: https://viagenie.ca
Topic 6: Case Studies (Version of 3rd August 2023) Pierre Flener and Gustav Björdal Optimisation Group Department of Information Technology Uppsala University Sweden Course 1DL442: Combinatorial Optimisation and Constraint Programming, whose part 1 is Course 1DL451: Modelling for Combinatorial Optimisation

Outline
1. Black-Hole Patience
2. Cost-Aware Scheduling
3. Warehouse Location
4. Sport Scheduling

Move all the cards into the black hole. A fan top card can be moved if it is one rank apart from the black-hole top card, independently of suit (♠, ♣, ♦, ♥); aces (A,1) and kings (K,13) are a rank apart. The cards \( c_1 \) and \( c_2 \) are one rank apart if and only if
\[ (c_1 \bmod 13) - (c_2 \bmod 13) \in \{-12, -1, 1, 12\} \]

Define a predicate and avoid \( \bmod \) on decision variables, by precomputation:

predicate rankApart(var 1..52: c1, var 1..52: c2) =
  let { array[1..52] of int: Rank = [i mod 13 | i in 1..52] }
  in Rank[c1] - Rank[c2] in {-12,-1,1,12};

Avoid implicit element constraints, for better inference:

table([c1,c2], [|1,2|1,13|...|1,52|2,3|...|52,40|52,51|]);

Let \( \text{Card}[p] \) denote the card at position \( p \) in the black hole. Adjacent black-hole cards are a rank apart:

3 constraint Card[1] = 1; % the card at position 1 is A♠
4 constraint forall(p in 1..51)(rankApart(Card[p], Card[p+1]));

The black-hole cards respect the order in the given fans:

5
constraint forall(f in Fan)
  (let { var 2..52: p1; var 2..52: p2; var 2..52: p3 } in
   Card[p1] = f.top /\ Card[p2] = f.mid /\ Card[p3] = f.bot
   /\ p1 < p2 /\ p2 < p3);

or, equivalently, but better because without the implicit element constraints:

5 constraint all_different(Card)
    /\ forall(f in Fan)(value_precede_chain([f.top, f.mid, f.bot], Card));

Let \( \text{Pos}[c] \) denote the position of card \( c \) in the black hole. The black-hole cards respect the order in the given fans:

5 constraint Pos[1] = 1; % the position of card A♠ is 1
6 constraint forall(f in Fan)
    (Pos[f.top] < Pos[f.mid] /\ Pos[f.mid] < Pos[f.bot]);

How to model “adjacent black-hole cards are a rank apart” with the \( \text{Pos}[c] \)?! Let us use the \( \text{Pos}[c] \) for the second constraint, as mutually redundant with the \( \text{Card}[p] \) for the first constraint, and 2-way channel between them. Observe that \( \forall c, p \in 1..52 : \text{Card}[p] = c \iff \text{Pos}[c] = p \). Seen as functions, \( \text{Card} \) and \( \text{Pos} \) are each other’s inverse:

7 constraint inverse(Card, Pos) :: domain_propagation; % Topic 8

This model, with mutually redundant decision variables and the 2-way channelling constraint, is much faster (at least on a CP or LCG solver) than the model on the previous slide with only the Card decision variables.

Outline
1. Black-Hole Patience
2. Cost-Aware Scheduling
3. Warehouse Location
4. Sport Scheduling

Energy-Cost-Aware Scheduling

Consider the core of CSPlib problem 059. Given are:
- Machines, each machine having several capacitated reusable resources.
- Jobs, each job having a duration, earliest start time, latest end time, a consumption of energy (which is an overall consumable resource, not a reusable resource of the machines), and requirements for the reusable resources of the machines. - A time horizon, each time step having a predicted energy cost. Schedule the jobs and allocate them to machines, so that: 1. No job starts too early or ends too late. 2. No resource capacity of any machine is ever exceeded. 3. The total energy cost is minimal. We show that precomputing a 2d array with the energy cost of each job for each possible start time boosts everything. Parameters 1. `enum Resources; % say: {cpu, ram, io};` 2. `int: nMachines; set of int: Machines = 1..nMachines;` 3. `array[Machines,Resources] of int: Capacity;` 4. `int: nTimeSteps; % say: 288, for every 5 minutes over 24h` 5. `set of int: Times = 0..nTimeSteps; % time points` 6. `set of int: Steps = 1..nTimeSteps; % time step s is from point s-1 to point s` 7. `array[Steps] of float: EnergyCost; % EnergyCost[s] €/kWh during time step s` 8. `int: nJobs; set of int: Jobs = 1..nJobs;` 9. `array[Jobs] of Steps: Duration; % job j lasts Duration[j] steps` 10. `array[Jobs] of Times: EarliestS; % job j starts >= EarliestS[j]` 11. `array[Jobs] of Times: LatestEnd; % job j ends <= LatestEnd[j]` 12. `array[Jobs] of int: Energy; % job j consumes Energy[j] kWh` 13. `array[Jobs,Resources] of int: Requirement;` In the instance `sample03` of CSPlib problem 059 we have: EnergyCost[119..128] = [0.04732, 0.04732, 0.08093, 0.08093, 0.08093, 0.08093, 0.08093, 0.08093, 0.08619, 0.08619] A job of 6 steps & 1151 kWh costs $\lfloor 1151 \cdot (0.04732 \cdot 2 + 0.08093 \cdot 4) \rfloor = 481€$ at time 118, and $\lfloor 1151 \cdot (0.08093 \cdot 4 + 0.08619 \cdot 2) \rfloor = 571€$ at time 122. 
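As a quick sanity check of the worked example above, a small Python sketch (not part of the MiniZinc model) can recompute the two job costs; the 1-based step numbering and the `job_cost` helper name follow the slide's conventions but are otherwise our own.

```python
# Sketch (outside the slides) recomputing the worked cost example above:
# a job's cost is its kWh figure times the summed per-step prices over
# the steps it occupies, with the *sum* rounded down at the end.
import math

# EnergyCost[119..128] from instance sample03, keyed by step number.
energy_cost = dict(zip(range(119, 129),
                       [0.04732, 0.04732, 0.08093, 0.08093, 0.08093,
                        0.08093, 0.08093, 0.08093, 0.08619, 0.08619]))

def job_cost(energy_kwh: int, start: int, duration: int) -> int:
    """Cost of a job starting at time point `start`, occupying steps
    start+1 .. start+duration (a step s runs from point s-1 to point s)."""
    return math.floor(energy_kwh * sum(energy_cost[s]
                      for s in range(start + 1, start + duration + 1)))

print(job_cost(1151, 118, 6))  # steps 119..124 → 481
print(job_cost(1151, 122, 6))  # steps 123..128 → 571
```

Note that the floor is applied to the whole product, matching the slide's "round the sum, not its terms" remark about the precomputed JobCost table.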
Model

14 array[Jobs] of var Times: Start; % job j starts at time Start[j]
15 array[Jobs] of var Machines: Machine; % job j runs on Machine[j]
16 % (1) No job starts too early or ends too late:
17 constraint forall(j in Jobs)
18   (Start[j] in EarliestS[j]..LatestEnd[j]-Duration[j]);
19 % (2) No resource capacity of any machine is ever exceeded:
20 constraint forall(m in Machines, r in Resources)(cumulative(Start, Duration,
21   [(Machine[j] = m) * Requirement[j,r] | j in Jobs], Capacity[m,r]));
22 ... % constraints for the rest of the problem
23 array[Jobs] of var 0..floor(max(Energy)*sum(EnergyCost)): Cost; % job j costs Cost[j]
24 ... % see the next slide!
25 solve minimize sum(Cost) + ...;

Define the decision variables Cost[j] without precomputation:

constraint forall(j in Jobs)
  (Cost[j] = sum(s in Steps)
    (if Start[j] + 1 <= s /\ s <= Start[j] + Duration[j]
     then floor(Energy[j] * EnergyCost[s]) else 0 endif));

For sample03, with 100 jobs and 288 time steps, this compiles under Gecode in over 20 seconds into 12 MB of FlatZinc code, with 74,137 constraints and 66,828 decision variables, due to the use of if θ then φ else ψ endif with a test θ that depends on decision variables (the Start[j] here).
Define the decision variables Cost[j] with precomputation of an array of derived parameters:

% JobCost[j,t] = energy cost of job j if j starts at time t
% (with dummy values if t + Duration[j] > nTimeSteps):
array[Jobs,Times] of int: JobCost = array2d(Jobs, Times,
  [floor(Energy[j] * sum(EnergyCost[t+1..min(t+Duration[j], nTimeSteps)]))
   | j in Jobs, t in Times]); % round the sum, not its terms!
constraint forall(j in Jobs)(Cost[j] = JobCost[j, Start[j]]);

For sample03, this model compiles very fast under Gecode into only 343 KB of FlatZinc code, with 100 constraints and 100 decision variables, and a feasible solution is found six times faster.

Outline
1. Black-Hole Patience
2. Cost-Aware Scheduling
3. Warehouse Location
4. Sport Scheduling

The Warehouse Location Problem (WLP)

A company considers opening warehouses at some candidate locations in order to supply its existing shops:
- Each candidate warehouse has the same maintenance cost.
- Each candidate warehouse has a supply capacity, which is the maximum number of shops it can supply.
- The supply cost to a shop depends on the supplying warehouse.

Determine which candidate warehouses actually to open, and which of them supplies which shops, so that:
1. Each shop is supplied by exactly one actually opened warehouse.
2. Each actually opened warehouse supplies a number of shops that is at most equal to its supply capacity.
3. The sum of the actually incurred maintenance costs and supply costs is minimal.
WLP: Sample Instance Data

Shops = {Shop_1, Shop_2, ..., Shop_10}
Warehouses = {Berlin, London, Ankara, Paris, Rome}
maintCost = 30
Capacity = [1, 4, 2, 1, 3]

SupplyCost =
           Berlin  London  Ankara  Paris  Rome
Shop_1         20      24      11     25    30
Shop_2         28      27      82     83    74
Shop_3         74      97      71     96    70
Shop_4          2      55      73     69    61
  ⋮             ⋮       ⋮       ⋮      ⋮     ⋮
Shop_10        47      65      55     71    95

WLP Model 1: Decision Variables

Automatic enforcement of the total-function constraint (1):

array[Shops] of var Warehouses: Supplier;

Supplier[s] denotes the supplier warehouse for shop s.

Variables redundant with Supplier, but not mutually, as less informative:

array[Warehouses] of var 0..1: Open;

Open[w] = 1 if and only if warehouse w is actually opened.

☞ Our chosen array names always reflect total functions.

WLP Model 1: Objective

solve minimize maintCost * sum(Open)
  + sum(s in Shops)(SupplyCost[s, Supplier[s]]);

The first term is the total maintenance cost, expressed as the product of the warehouse maintenance cost by the number of actually opened warehouses. The second term is the total supply cost, expressed as the sum over all shops of their actually incurred supply costs.
Notice the implicit use of the element predicate, as the column index Supplier[s] to SupplyCost is a decision variable. If warehouse w has maintenance cost MaintCost[w], then the first term becomes sum(w in Warehouses)(MaintCost[w] * Open[w]).

One-way channelling constraint from the Supplier[s] decision variables to some of their redundant Open[w] decision variables (as not all Open[w] are fixed this way):

constraint forall(s in Shops)(Open[Supplier[s]] = 1);

The supplier warehouse of each shop is actually opened. Notice the implicit use of the element predicate, as the index Supplier[s] to Open is a decision variable. How do the remaining Open[w] become 0? Upon minimisation!

Alternative: one-way channelling constraint from the Supplier[s] decision variables to all of their redundant Open[w] decision variables, but not vice-versa:

```
constraint forall(w in Warehouses)
  (Open[w] = (exists(s in Shops)(Supplier[s] = w)));
```

A warehouse is opened if and only if there exists a shop that it supplies. Make experiments to find out which channelling is better. We will revisit this issue in Topic 8: Inference & Search in CP & LCG, and in Topic 9: Modelling for CBLS. Nothing changes if Open is an array of Boolean decision variables (instead of integer decision variables).

WLP Model 1: Capacity Constraint

Capacity constraint (2), using a version of global_cardinality with given lower and upper bounds rather than decision variables for the counts:

```
constraint global_cardinality_low_up_closed(Supplier, Warehouses,
  [0 | w in Warehouses], Capacity);
```

Each actually opened warehouse is a supplier of a number of shops that is at most equal to its supply capacity.

Which symmetries are there?
- There are no problem symmetries.
- We introduced no symmetries into the model.
- There may be instance symmetries: indistinguishable shops, or indistinguishable warehouses, or both.
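To make the objective and capacity constraint concrete, here is a small Python sketch (not from the slides) that evaluates Model 1's objective on a cut-down instance: the first four shops of the sample data above and a hypothetical assignment of our own choosing.

```python
# Illustrative check (not part of the slides) of WLP Model 1's objective
# and capacity constraint, on a cut-down instance: the first four shops
# of the sample data; the assignment below is hypothetical.
MAINT_COST = 30
WAREHOUSES = ["Berlin", "London", "Ankara", "Paris", "Rome"]
CAPACITY   = {"Berlin": 1, "London": 4, "Ankara": 2, "Paris": 1, "Rome": 3}
SUPPLY_COST = [  # rows: Shop_1..Shop_4; columns follow WAREHOUSES
    [20, 24, 11, 25, 30],
    [28, 27, 82, 83, 74],
    [74, 97, 71, 96, 70],
    [ 2, 55, 73, 69, 61],
]

def objective(supplier: list[str]) -> int:
    """maintCost * |opened warehouses| + total supply cost."""
    opened = set(supplier)  # a warehouse is opened iff it supplies a shop
    supply = sum(SUPPLY_COST[s][WAREHOUSES.index(w)]
                 for s, w in enumerate(supplier))
    return MAINT_COST * len(opened) + supply

def feasible(supplier: list[str]) -> bool:
    """Constraint (2): no warehouse supplies more shops than its capacity."""
    return all(supplier.count(w) <= CAPACITY[w] for w in set(supplier))

assignment = ["Ankara", "London", "Rome", "Berlin"]  # Supplier[1..4]
assert feasible(assignment)
print(objective(assignment))  # 30*4 + (11 + 27 + 70 + 2) = 230
```

The `set(supplier)` line mirrors the second channelling variant: Open[w] holds exactly when some shop is assigned to w.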
WLP Model 2

Drop the array Open of redundant decision variables as well as its channelling constraint, and reformulate the first term of the objective function as follows:

maintCost * sum(w in Warehouses)(exists(s in Shops)(Supplier[s] = w))

We can alternatively use the nvalue constrained function:

maintCost * nvalue(Supplier)

This alternative formulation cannot be generalised for warehouse-specific maintenance costs. For a speed comparison, see Topic 8: Inference & Search in CP & LCG. Redundancy elimination may pay off, but it may just as well be the converse. But this is hard to guess, as human intuition may be weak.

WLP Model 3: Decision Variables

No automatic enforcement of the total-function constraint (1):

array[Shops, Warehouses] of var 0..1: Supply;

Supply[s,w] = 1 if and only if shop s is supplied by warehouse w.
Redundant decision variables (as in Model 1):

array[Warehouses] of var 0..1: Open;

Open[w] = 1 if and only if warehouse w is actually opened.

WLP Model 3: Objective

The objective can now be expressed in linear fashion:

solve minimize maintCost * sum(Open)
  + sum(s in Shops, w in Warehouses)
      (SupplyCost[s,w] * Supply[s,w]);

The first term is the total maintenance cost, expressed (as in Model 1) as the product of the warehouse maintenance cost by the number of actually opened warehouses. The second term is the total supply cost, expressed as the sum over all shops and warehouses of their actually incurred supply costs: each decision variable Supply[s,w] is weighted by the parameter SupplyCost[s,w].

WLP Model 3: Constraints

The total-function constraint (1) now needs to be modelled, and can be expressed in linear fashion (that is, without using count):

constraint forall(s in Shops)(sum(Supply[s,..]) = 1);

Each shop is supplied by exactly one actually opened warehouse.
Capacity constraint (2), in isolation:

constraint forall(w in Warehouses)
  (sum(Supply[..,w]) <= Capacity[w]);

One-way channelling constraint, in isolation:

constraint forall(w in Warehouses)
  (sum(Supply[..,w]) > 0 <-> Open[w] = 1);

or, one-way channelling without reification, upon exploiting minimisation:

constraint forall(w in Warehouses)
  (forall(s in Shops)(Supply[s,w] <= Open[w]));

Capacity (2) and second one-way channelling constraints combined:

constraint forall(w in Warehouses)
  (sum(Supply[..,w]) <= Capacity[w] * Open[w]);

All constraints are linear (in)equalities: this is an IP model!

Outline
1. Black-Hole Patience
2. Cost-Aware Scheduling
3. Warehouse Location
4. Sport Scheduling

The Sport Scheduling Problem (SSP)

Find a schedule in $\text{Periods} \times \text{Weeks} \rightarrow \text{Teams} \times \text{Teams}$ for
1. $|\text{Teams}| = n$ and $n$ is even (note that only $n=4$ is unsatisfiable)
2. $|\text{Weeks}| = n-1$
3. $|\text{Periods}| = n/2$ periods per week
subject to the following constraints:
1. Each possible game is played exactly once.
2. Each team plays exactly once per week.
3. Each team plays at most twice per period.
Idea for a model, and a solution for $n=8$: <table> <thead> <tr> <th></th> <th>Wk 1</th> <th>Wk 2</th> <th>Wk 3</th> <th>Wk 4</th> <th>Wk 5</th> <th>Wk 6</th> <th>Wk 7</th> </tr> </thead> <tbody> <tr> <td>P 1</td> <td>1 vs 2</td> <td>1 vs 3</td> <td>2 vs 6</td> <td>3 vs 5</td> <td>4 vs 7</td> <td>4 vs 8</td> <td>5 vs 8</td> </tr> <tr> <td>P 2</td> <td>3 vs 4</td> <td>2 vs 8</td> <td>1 vs 7</td> <td>6 vs 7</td> <td>6 vs 8</td> <td>2 vs 5</td> <td>1 vs 4</td> </tr> <tr> <td>P 3</td> <td>5 vs 6</td> <td>4 vs 6</td> <td>3 vs 8</td> <td>1 vs 8</td> <td>1 vs 5</td> <td>3 vs 7</td> <td>2 vs 7</td> </tr> <tr> <td>P 4</td> <td>7 vs 8</td> <td>5 vs 7</td> <td>4 vs 5</td> <td>2 vs 4</td> <td>2 vs 3</td> <td>1 vs 6</td> <td>3 vs 6</td> </tr> </tbody> </table> The Sport Scheduling Problem (SSP) Find a schedule in $\text{Periods} \times \text{Weeks} \rightarrow \text{Teams} \times \text{Teams}$ for: - $|\text{Teams}| = n$ and $n$ is even (note that only $n=4$ is unsatisfiable) - $|\text{Weeks}| = n-1$ - $|\text{Periods}| = n/2$ periods per week subject to the following constraints: 1. Each possible game is played exactly once. 2. Each team plays exactly once per week. 3. Each team plays at most twice per period.
Idea for a model, and a solution for $n=8$, with a dummy week $n$ of duplicates: <table> <thead> <tr> <th></th> <th>Wk 1</th> <th>Wk 2</th> <th>Wk 3</th> <th>Wk 4</th> <th>Wk 5</th> <th>Wk 6</th> <th>Wk 7</th> <th>Wk 8</th> </tr> </thead> <tbody> <tr> <td>P 1</td> <td>1 vs 2</td> <td>1 vs 3</td> <td>2 vs 6</td> <td>3 vs 5</td> <td>4 vs 7</td> <td>4 vs 8</td> <td>5 vs 8</td> <td>6 vs 7</td> </tr> <tr> <td>P 2</td> <td>3 vs 4</td> <td>2 vs 8</td> <td>1 vs 7</td> <td>6 vs 7</td> <td>6 vs 8</td> <td>2 vs 5</td> <td>1 vs 4</td> <td>3 vs 5</td> </tr> <tr> <td>P 3</td> <td>5 vs 6</td> <td>4 vs 6</td> <td>3 vs 8</td> <td>1 vs 8</td> <td>1 vs 5</td> <td>3 vs 7</td> <td>2 vs 7</td> <td>2 vs 4</td> </tr> <tr> <td>P 4</td> <td>7 vs 8</td> <td>5 vs 7</td> <td>4 vs 5</td> <td>2 vs 4</td> <td>2 vs 3</td> <td>1 vs 6</td> <td>3 vs 6</td> <td>1 vs 8</td> </tr> </tbody> </table> SSP Model 1: Data Parameter: - \textbf{int: } n; \textbf{constraint assert}(n \geq 2 \wedge n \bmod 2 = 0, "Odd n"); Useful Ranges, enumeration, and set: - Teams = 1..n - Weeks = 1..(n-1) - ExtendedWeeks = 1..n - Periods = 1..(n \div 2) - Slots = \{one, two\} - Games = \{f \times n + s \mid f,s \text{ in } \text{Teams where } f < s\}, thereby breaking some symmetries, such that the game between teams f and s is uniquely identified by the natural number f \times n + s. Example: For n = 4, we get Games = \{6,7,8,11,12,16\}. SSP Model 1: Decision Variables Declare a 3d matrix $\text{Team}[\text{Periods, ExtendedWeeks, Slots}]$ of decision variables in Teams (denoted $T$ below), over a schedule extended by a dummy week where teams play fictitious duplicate games in the period where they would otherwise play only once, thereby strengthening constraint (3) into: (3') Each team plays exactly twice per period.
Let $\text{Team}[p, w, s]$ be the team that plays in period $p$ of week $w$ in game slot $s$. For each slot $s \in \{\text{one}, \text{two}\}$, this gives an $(n/2) \times n$ matrix of decision variables over the extended weeks:
\[
\text{Team}[..,..,s] = \begin{array}{c|cccc}
 & \text{Wk } 1 & \cdots & \text{Wk } n-1 & \text{Wk } n \\
\hline
\text{P } 1 & \in T & \cdots & \in T & \in T \\
\vdots & \vdots & & \vdots & \vdots \\
\text{P } n/2 & \in T & \cdots & \in T & \in T \\
\end{array}
\]
SSP Model 1: Constraints Twice-per-period constraint (3'): \[ \text{constraint } \forall (p \in \text{Periods}) \\ \quad (\text{global\_cardinality\_closed} \\ \quad \quad (\text{Team}[p,..,..], \text{Teams}, [2 \mid i \in 1..n])); \] In each period, each team occurs exactly twice within the slots of the weeks. (We do not need the four-argument version of the predicate, with an array of ones as lower bounds and an array of twos as upper bounds.) Once-per-week constraint (2): \[ \text{constraint } \forall (w \in \text{ExtendedWeeks}) \\ \quad (\text{all\_different} (\text{Team}[..,w,..])); \] In each week, including the dummy week, there are no duplicate teams within the slots of the periods in Team. SSP Model 1: Decision Variables (revisited) Try to state the each-game-once constraint (1) using $\text{Team}$! Rather declare a 2d matrix $\text{Game}[\text{Periods}, \text{Weeks}]$ of decision variables in $\text{Games}$ over the non-extended weeks.
Let $\text{Game}[p, w]$ be the game played in period $p$ of week $w$: <table> <thead> <tr> <th></th> <th>Week 1</th> <th>\( \cdots \)</th> <th>Week \( n-1 \)</th> </tr> </thead> <tbody> <tr> <td>Period 1</td> <td>\( \in \text{Games} \)</td> <td>\( \cdots \)</td> <td>\( \in \text{Games} \)</td> </tr> <tr> <td>\( \vdots \)</td> <td>\( \vdots \)</td> <td></td> <td>\( \vdots \)</td> </tr> <tr> <td>Period \( n/2 \)</td> <td>\( \in \text{Games} \)</td> <td>\( \cdots \)</td> <td>\( \in \text{Games} \)</td> </tr> </tbody> </table> The 2d matrix $\text{Game}$ is mutually redundant with the first $n - 1$ 2d columns of the 3d matrix $\text{Team}$, which is over the extended weeks. SSP Model 1: Constraints (end) Each-game-once constraint (1): \[ \text{constraint all\_different} (\text{Game}); \] There are no duplicate game numbers in Game. Two-way channelling constraint (but rather precompute and use table: see Topic 8: Inference & Search in CP & LCG): \[ \text{constraint forall}(p \text{ in Periods, } w \text{ in Weeks}) \] \[ (\text{Team}[p,w,\text{one}] \times n + \text{Team}[p,w,\text{two}] = \text{Game}[p,w]); \] The game number in Game of each period and week corresponds to the teams scheduled at that time in Team. The constraints (2) and (3') are hard to formulate using Game. Add the symmetry-breaking constraints of slide 29 of Topic 5: Symmetry. SSP Model 2: Smaller Domains for Game $[p, w]$ Variables A round-robin schedule suffices to break many of the remaining symmetries: - Restrict the games of the first week to the set \( \{ 1 \text{ vs } 2 \} \cup \{ t + 1 \text{ vs } n + 2 - t \mid 1 < t \leq n/2 \} \) - For the remaining weeks, transform each game \( f \text{ vs } s \) of the previous week into a game \( f' \text{ vs } s' \), where \[ f' = \begin{cases} 1 & \text{if } f = 1 \\ 2 & \text{if } f = n \\ f + 1 & \text{otherwise} \end{cases} \] \[ s' = \begin{cases} 2 & \text{if } s = n \\ s + 1 & \text{otherwise} \end{cases} \] The constraints (1) and (2) are now automatically enforced: we must only find the period of each game, but not its week ☑.
Interested in More Details? For more details on WLP and SSP and their modelling, see: Van Hentenryck, Pascal. The OPL Optimization Programming Language. The MIT Press, 1999. Van Hentenryck, Pascal. Constraint and integer programming in OPL. INFORMS Journal on Computing, 2002. Van Hentenryck, Pascal; Michel, Laurent; Perron, Laurent; and Régin, Jean-Charles. Constraint programming in OPL. In Principles and Practice of Declarative Programming (PPDP 1999). Springer-Verlag, 1999.
Introduction to Compiler Directives with OpenACC Agenda - Fundamentals of Heterogeneous & GPU Computing - What are Compiler Directives? - Accelerating Applications with OpenACC - Identifying Available Parallelism - Exposing Parallelism - Optimizing Data Locality - Misc. Tips - Next Steps Heterogeneous Computing Basics What is Heterogeneous Computing? Application Execution - High Serial Performance - High Data Parallelism CPU GPU Low Latency or High Throughput? Latency vs. Throughput **F-22 Raptor** - 1500 mph - Knoxville to San Jose in 1:25 - Seats 1 **Boeing 737** - 485 mph - Knoxville to San Jose in 4:20 - Seats 200 Latency vs. Throughput **F-22 Raptor** - Latency – 1:25 - Throughput – 1 / 1.42 hours = 0.7 people/hr. **Boeing 737** - Latency – 4:20 - Throughput – 200 / 4.33 hours = 46.2 people/hr. Low Latency or High Throughput? - CPU architecture must minimize latency within each thread - GPU architecture hides latency with computation from other threads Accelerator Fundamentals - We must expose enough parallelism to fill the device - Accelerator threads are slower than CPU threads - Accelerators have orders of magnitude more threads - Accelerators tolerate resource latencies by cheaply context switching threads - Fine-grained parallelism is good - Generates a significant amount of parallelism to fill hardware resources - Coarse-grained parallelism is bad - Lots of legacy apps have only exposed coarse grain parallelism 3 Approaches to Heterogeneous Programming - Libraries - Easy to use - Most Performance - Compiler Directives - Easy to use - Portable code - Programming Languages - Most Performance - Most Flexibility Simplicity & Performance - **Accelerated Libraries** - Little or no code change for standard libraries, high performance. - Limited by what libraries are available - **Compiler Directives** - Based on existing programming languages, so they are simple and familiar. 
- Performance may not be optimal because directives often do not expose low-level architectural details - **Parallel Programming languages** - Expose low-level details for maximum performance - Often more difficult to learn and more time consuming to implement. What are Compiler Directives? Programmer inserts compiler hints. Execution begins on the CPU. The compiler generates GPU code. Data and execution move to the GPU, then return to the CPU. ```c int main() { do_serial_stuff(); for(int i=0; i < BIGN; i++) { // ... compute-intensive work } do_more_serial_stuff(); } ``` OpenACC: The Standard for GPU Directives - Simple: Directives are the easy path to accelerate compute intensive applications - Open: OpenACC is an open GPU directives standard, making GPU programming straightforward and portable across parallel and multi-core processors - Portable: GPU Directives represent parallelism at a high level, allowing portability to a wide range of architectures with the same code. OpenACC Members and Partners Focus on Parallelism and Locality Example: Application tuning work using directives for Titan system at ORNL **S3D** Research more efficient combustion with next-generation fuels - Tuning top 3 kernels (90% of runtime) - 3 to 6x faster on CPU+GPU vs. CPU+CPU - But also improved all-CPU version by 50% **CAM-SE** Answer questions about specific climate change adaptation and mitigation scenarios - Tuning top key kernel (50% of runtime) - 6.5x faster on CPU+GPU vs. CPU+CPU - Improved performance of CPU version by 100% - Work was done in CUDA Fortran (not OpenACC) Accelerating Applications with OpenACC Identify Available Parallelism Parallelize Loops with OpenACC Optimize Data Locality Optimize Loop Performance Example: Jacobi Iteration - Iteratively converges to correct value (e.g. Temperature), by computing new values at each point from the average of neighboring points.
- Common, useful algorithm - Example: Solve Laplace equation in 2D: \[ A_{k+1}(i,j) = \frac{A_k(i-1,j) + A_k(i+1,j) + A_k(i,j-1) + A_k(i,j+1)}{4} \]
```c
while ( err > tol && iter < iter_max ) {
  err = 0.0;
  for( int j = 1; j < n-1; j++ ) {
    for( int i = 1; i < m-1; i++ ) {
      Anew[j][i] = 0.25 * (A[j][i+1] + A[j][i-1]
                         + A[j-1][i] + A[j+1][i]);
      err = max(err, abs(Anew[j][i] - A[j][i]));
    }
  }
  for( int j = 1; j < n-1; j++ ) {
    for( int i = 1; i < m-1; i++ ) {
      A[j][i] = Anew[j][i];
    }
  }
  iter++;
}
```
Identify Available Parallelism Parallelize Loops with OpenACC Optimize Loop Performance Optimize Data Locality Identify Available Parallelism - A variety of profiling tools are available: - gprof, pgprof, Vampir, Score-p, HPCToolkit, CrayPAT, … - Using the tool of your choice, obtain an application profile to identify hotspots - Since we're using PGI, I'll use pgprof ```bash $ pgcc -fast -Minfo=all -Mprof=ccff laplace2d.c main: 40, Loop not fused: function call before adjacent loop Generated vector sse code for the loop 57, Generated an alternate version of the loop Generated vector sse code for the loop Generated 3 prefetch instructions for the loop 67, Memory copy idiom, loop replaced by call to __c_mcopy8 $ pgcollect ./a.out $ pgprof -exe ./a.out ``` Identify Parallelism With PGPROF PGPROF informs us: 1. A significant amount of time is spent in the loops at line 56/57. 2. The computational intensity (Calculations/Loads&Stores) is high enough to warrant OpenACC or CUDA. 3. How the code is currently optimized. NOTE: the compiler recognized the swapping loop as data movement and replaced it with a memcpy, but we know it's expensive too.
```c
while ( err > tol && iter < iter_max ) {
  err = 0.0;
  for( int j = 1; j < n-1; j++ ) {
    for( int i = 1; i < m-1; i++ ) {
      Anew[j][i] = 0.25 * (A[j][i+1] + A[j][i-1]
                         + A[j-1][i] + A[j+1][i]);
      err = max(err, abs(Anew[j][i] - A[j][i]));
    }
  }
  for( int j = 1; j < n-1; j++ ) {
    for( int i = 1; i < m-1; i++ ) {
      A[j][i] = Anew[j][i];
    }
  }
  iter++;
}
```
Identify Available Parallelism Parallelize Loops with OpenACC Optimize Data Locality Optimize Loop Performance OpenACC Directive Syntax - **C/C++** ``` #pragma acc directive [clause [,] clause] ... ``` ...often followed by a structured code block - **Fortran** ``` !$acc directive [clause [,] clause] ... ``` ...often paired with a matching end directive surrounding a structured code block: ``` !$acc end directive ``` Don't forget `acc` OpenACC parallel loop Directive parallel - Programmer identifies a block of code containing parallelism. Compiler generates a kernel. loop - Programmer identifies a loop that can be parallelized within the kernel. NOTE: parallel & loop are often placed together ```c #pragma acc parallel loop for(int i=0; i<N; i++) { y[i] = a*x[i]+y[i]; } ``` Kernel: A function that runs in parallel on the GPU Parallelize with OpenACC ```c while ( err > tol && iter < iter_max ) { err=0.0; #pragma acc parallel loop reduction(max:err) for( int j = 1; j < n-1; j++ ) { for(int i = 1; i < m-1; i++ ) { Anew[j][i] = 0.25 * (A[j][i+1] + A[j][i-1] + A[j-1][i] + A[j+1][i]); err = max(err, abs(Anew[j][i] - A[j][i])); } } #pragma acc parallel loop for( int j = 1; j < n-1; j++ ) { for( int i = 1; i < m-1; i++ ) { A[j][i] = Anew[j][i]; } } iter++; } ``` * A reduction means that all of the N*M values for err will be reduced to just one, the max. OpenACC loop directive: private & reduction - The **private** and **reduction** clauses are not optimization clauses; they may be required for correctness. - **private**—A copy of the variable is made for each loop iteration - **reduction**—A reduction is performed on the listed variables.
- Supports +, *, max, min, and various logical operations Building the Code $ pgcc -fast -acc -ta=tesla -Minfo=all laplace2d.c main: 40, Loop not fused: function call before adjacent loop Generated vector sse code for the loop 51, Loop not vectorized/parallelized: potential early exits 55, Accelerator kernel generated 55, Max reduction generated for error 56, #pragma acc loop gang /* blockIdx.x */ 58, #pragma acc loop vector(256) /* threadIdx.x */ 55, Generating copyout(Anew[1:4094][1:4094]) Generating copyin(A[:, :]) Generating Tesla code 58, Loop is parallelizable 66, Accelerator kernel generated 67, #pragma acc loop gang /* blockIdx.x */ 69, #pragma acc loop vector(256) /* threadIdx.x */ 66, Generating copyin(Anew[1:4094][1:4094]) Generating copyout(A[1:4094][1:4094]) Generating Tesla code 69, Loop is parallelizable OpenACC kernels Directive The kernels construct expresses that a region *may contain parallelism* and *the compiler determines* what can safely be parallelized. ```c #pragma acc kernels { for(int i=0; i<N; i++) { x[i] = 1.0; y[i] = 2.0; } for(int i=0; i<N; i++) { y[i] = a*x[i] + y[i]; } } ``` The compiler identifies 2 parallel loops and generates 2 kernels. 
Parallelize with OpenACC kernels while ( err > tol && iter < iter_max ) { err=0.0; #pragma acc kernels { for( int j = 1; j < n-1; j++ ) { for(int i = 1; i < m-1; i++) { err = max(err, abs(Anew[j][i] - A[j][i])); } } for( int j = 1; j < n-1; j++ ) { for( int i = 1; i < m-1; i++ ) { A[j][i] = Anew[j][i]; } } } iter++; } Building the Code $ pgcc -fast -acc -ta=tesla -Minfo=all laplace2d.c main: 40, Loop not fused: function call before adjacent loop Generated vector sse code for the loop 51, Loop not vectorized/parallelized: potential early exits 55, Generating copyout(Anew[1:4094][1:4094]) Generating copyin(A[:][:]) Generating copyout(A[1:4094][1:4094]) Generating Tesla code 57, Loop is parallelizable 59, Loop is parallelizable Accelerator kernel generated 57, #pragma acc loop gang /* blockIdx.y */ 59, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */ 63, Max reduction generated for error 67, Loop is parallelizable 69, Loop is parallelizable Accelerator kernel generated 67, #pragma acc loop gang /* blockIdx.y */ 69, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */ OpenACC parallel loop vs. kernels **PARALLEL LOOP** - Requires analysis by programmer to ensure safe parallelism - Will parallelize what a compiler may miss - Straightforward path from OpenMP **KERNELS** - Compiler performs parallel analysis and parallelizes what it believes safe - Can cover larger area of code with single directive - Gives compiler additional leeway to optimize Both approaches are equally valid and can perform equally well. Why did OpenACC slow down here? Intel Xeon E5-2698 v3 @ 2.30GHz (Haswell) vs. NVIDIA Tesla K40 Analyzing OpenACC Performance - Any tool that supports CUDA can likewise obtain performance information about OpenACC. - NVIDIA Visual Profiler (nvvp) comes with the CUDA Toolkit, so it will be available on any machine with CUDA installed. Very low Compute/Memcpy ratio Compute: 4.7s Memory Copy: 84.3s 1. 
Copy input data from CPU memory/NIC to GPU memory Processing Flow 1. Copy input data from CPU memory/NIC to GPU memory 2. Execute GPU Kernel 1. Copy input data from CPU memory/NIC to GPU memory 2. Execute GPU Kernel 3. Copy results from GPU memory to CPU memory/NIC One step of the convergence loop Iteration 1 Iteration 2 Excessive Data Transfers while ( err > tol && iter < iter_max ) { err=0.0; #pragma acc parallel loop reduction(max:err) for( int j = 1; j < n-1; j++) { for(int i = 1; i < m-1; i++) { err = max(err, abs(Anew[j][i] - A[j][i])); } } A, Anew resident on host These copies happen every iteration of the outer while loop! And note that there are two #pragma acc parallel, so there are 4 copies per while loop iteration! Identifying Data Locality while (err > tol && iter < iter_max) { err = 0.0; #pragma acc parallel loop reduction(max:err) for(int j = 1; j < n-1; j++) { for(int i = 1; i < m-1; i++) { err = max(err, abs(Anew[j][i] - A[j][i])); } } #pragma acc parallel loop for(int j = 1; j < n-1; j++) { for(int i = 1; i < m-1; i++) { A[j][i] = Anew[j][i]; } } iter++; } Does the CPU need the data between these loop nests? Does the CPU need the data between iterations of the convergence loop? Identify Available Parallelism Parallelize Loops with OpenACC Optimize Data Locality Optimize Loop Performance Defining data regions - The **data** construct defines a region of code in which GPU arrays remain on the GPU and are shared among all kernels in that region. ``` #pragma acc data { #pragma acc parallel loop ... #pragma acc parallel loop ... } ``` Arrays used within the data region will remain on the GPU until the end of the data region. Data Clauses **copy ( list )** Allocates memory on GPU and copies data from host to GPU when entering region and copies data to the host when exiting region. **copyin ( list )** Allocates memory on GPU and copies data from host to GPU when entering region. 
**copyout ( list )** Allocates memory on GPU and copies data to the host when exiting region. **create ( list )** Allocates memory on GPU but does not copy. **present ( list )** Data is already present on GPU from another containing data region. and **present_or_copy[in|out]**, **present_or_create**, **deviceptr**. The next OpenACC makes **present_or_*** the default behavior. Array Shaping - Compiler sometimes cannot determine size of arrays - Must specify explicitly using data clauses and array “shape” C/C++ ```c #pragma acc data copyin(a[0:size]), copyout(b[s/4:3*s/4]) ``` Fortran ```fortran !$acc data copyin(a(1:end)), copyout(b(s/4:3*s/4)) ``` - Note: data clauses can be used on data, parallel, or kernels Optimize Data Locality ```c #pragma acc data copy(A) create(Anew) while (err > tol && iter < iter_max) { err = 0.0; #pragma acc parallel loop reduction(max:err) for (int j = 1; j < n-1; j++) { for (int i = 1; i < m-1; i++) { err = max(err, abs(Anew[j][i] - A[j][i])); } } #pragma acc parallel loop for (int j = 1; j < n-1; j++) { for (int i = 1; i < m-1; i++) { A[j][i] = Anew[j][i]; } } iter++; } ``` Copy A to/from the accelerator only when needed. Create Anew as a device temporary. Rebuilding the Code $ pgcc -fast -acc -ta=tesla -Minfo=all laplace2d.c main: 40, Loop not fused: function call before adjacent loop Generated vector sse code for the loop 51, Generating copy(A[:][:]) Generating create(Anew[:][:]) Loop not vectorized/parallelized: potential early exits 56, Accelerator kernel generated 56, Max reduction generated for error 57, #pragma acc loop gang /* blockIdx.x */ 59, #pragma acc loop vector(256) /* threadIdx.x */ 56, Generating Tesla code 59, Loop is parallelizable 67, Accelerator kernel generated 68, #pragma acc loop gang /* blockIdx.x */ 70, #pragma acc loop vector(256) /* threadIdx.x */ 67, Generating Tesla code 70, Loop is parallelizable Visual Profiler: Data Region Was 128ms Intel Xeon E5-2698 v3 @ 2.30GHz (Haswell) vs. 
NVIDIA Tesla K40 Socket/Socket: 6.24X Speed-Up (Higher is Better) OpenACC present clause It’s sometimes necessary for a data region to be in a different scope than the compute region. When this occurs, the present clause can be used to tell the compiler data is already on the device. Since the declaration of A is now in a higher scope, it’s necessary to shape A in the present clause. High-level data regions and the present clause are often critical to good performance. Unstructured Data Directives Used to define data regions when scoping doesn’t allow the use of normal data regions (e.g. The constructor/destructor of a class). **enter data** Defines the start of an unstructured data lifetime clauses: **copyin(list), create(list)** **exit data** Defines the end of an unstructured data lifetime clauses: **copyout(list), delete(list)** ```plaintext #pragma acc enter data copyin(a) ... #pragma acc exit data delete(a) ``` Unstructured Data Regions: C++ Classes - Unstructured Data Regions enable OpenACC to be used in C++ classes - Unstructured data regions can be used whenever data is allocated and initialized in a different scope than where it is freed. 
```cpp
class Matrix {
public:
  Matrix(int n) {
    len = n;
    v = new double[len];
    #pragma acc enter data create(v[0:len])
  }
  ~Matrix() {
    #pragma acc exit data delete(v[0:len])
    delete[] v;
  }
private:
  double* v;
  int len;
};
```
Identify Available Parallelism Parallelize Loops with OpenACC Optimize Data Locality Optimize Loop Performance Aliasing Can Prevent Parallelization 23, Loop is parallelizable Accelerator kernel generated 23, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */ 25, Complex loop carried dependence of 'b->' prevents parallelization Loop carried dependence of 'a->' prevents parallelization Loop carried backward dependence of 'a->' prevents vectorization Accelerator scalar kernel generated 27, Complex loop carried dependence of 'a->' prevents parallelization Loop carried dependence of 'b->' prevents parallelization Loop carried backward dependence of 'b->' prevents vectorization Accelerator scalar kernel generated C99: restrict Keyword - Declaration of intent given by the programmer to the compiler - Applied to a pointer, e.g. - `float *restrict ptr` - Meaning: "for the lifetime of `ptr`, only it or a value directly derived from it (such as `ptr + 1`) will be used to access the object to which it points"* - Parallelizing compilers often require restrict to determine independence - Otherwise the compiler can't parallelize loops that access `ptr` - Note: if programmer violates the declaration, behavior is undefined OpenACC independent clause Specifies that loop iterations are data independent. This overrides any compiler dependency analysis. This is implied for parallel loop. ```c #pragma acc kernels { #pragma acc loop independent for(int i=0; i<N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; } #pragma acc loop independent for(int i=0; i<N; i++) { a[i] = b[i] + c[i]; } } ``` Informs the compiler that both loops are safe to parallelize so it will generate both kernels.
Write Parallelizable Loops - Use countable loops - C99: while->for - Fortran: while->do - Avoid pointer arithmetic - Write rectangular loops (compiler cannot parallelize triangular loops)
```c
bool found=false;
while(!found && i<N) {
  if(a[i]==val) {
    found=true;
    loc=i;
  }
  i++;
}
```
```c
for(int i=0;i<N;i++) {
  if(a[i]==val) {
    found=true;
    loc=i;
  }
}
```
```c
for(int i=0;i<N;i++) {
  for(int j=i;j<N;j++) {
    sum+=A[i][j];
  }
}
```
```c
for(int i=0;i<N;i++) {
  for(int j=0;j<N;j++) {
    if(j>=i) sum+=A[i][j];
  }
}
```
OpenACC Routine Directive The routine directive specifies that the compiler should generate a device copy of the function/subroutine in addition to the host copy and what type of parallelism the routine contains. Clauses: - **gang/worker/vector/seq** - Specifies the level of parallelism contained in the routine. - **bind** - Specifies an optional name for the routine, also supplied at call-site - **no_host** - The routine will only be used on the device - **device_type** - Specialize this routine for a particular device type. OpenACC Debugging - Most OpenACC directives accept an if(condition) clause ```c #pragma acc update self(A) if(debug) #pragma acc parallel loop if(!debug) ``` - Use default(none) to force explicit data directives ```c #pragma acc data copy(...) create(...) default(none) ``` Next Steps 1. Identify Available Parallelism - What important parts of the code have available parallelism? 2. Parallelize Loops - Express as much parallelism as possible and ensure you still get correct results. - Because the compiler *must* be cautious about data movement, the code will generally slow down. 3. Optimize Data Locality - The programmer will *always* know better than the compiler what data movement is unnecessary. 4. Optimize Loop Performance - Don't try to optimize a kernel that runs in a few *µs* or *ms* until you've eliminated the excess data motion that is taking *many seconds*.
Typical Porting Experience with OpenACC Directives - Step 1: Identify Available Parallelism - Step 2: Parallelize Loops with OpenACC - Step 3: Optimize Data Locality - Step 4: Optimize Loops Graph showing application speed-up over development time.
[13064, 13407, null], [13407, 14050, null], [14050, 14398, null], [14398, 15050, null], [15050, 15768, null], [15768, 15808, null], [15808, 15922, null], [15922, 16334, null], [16334, 16795, null], [16795, 17314, null], [17314, 17428, null], [17428, 17428, null], [17428, 18051, null], [18051, 18612, null], [18612, 19089, null], [19089, 19703, null], [19703, 20246, null], [20246, 20524, null], [20524, 20535, null], [20535, 21148, null], [21148, 21398, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21398, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21398, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21398, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21398, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21398, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21398, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21398, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21398, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21398, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21398, null]], "pdf_page_numbers": [[0, 49, 1], [49, 296, 2], [296, 327, 3], [327, 444, 4], [444, 476, 5], [476, 639, 6], [639, 826, 7], [826, 988, 8], [988, 1473, 9], [1473, 1689, 10], [1689, 2232, 11], [2232, 2262, 12], [2262, 2619, 13], [2619, 3031, 14], [3031, 3060, 15], [3060, 3631, 16], [3631, 3670, 17], [3670, 3781, 18], [3781, 4101, 19], [4101, 4530, 20], [4530, 4644, 21], [4644, 5326, 22], [5326, 5719, 23], [5719, 6145, 24], [6145, 6259, 25], [6259, 6617, 26], [6617, 7022, 27], [7022, 7693, 28], [7693, 8045, 29], [8045, 8850, 30], [8850, 9226, 31], [9226, 9767, 32], [9767, 10594, 33], [10594, 11045, 34], [11045, 11141, 35], [11141, 11382, 
36], [11382, 11446, 37], [11446, 11499, 38], [11499, 11591, 39], [11591, 11716, 40], [11716, 11775, 41], [11775, 12297, 42], [12297, 12950, 43], [12950, 13064, 44], [13064, 13407, 45], [13407, 14050, 46], [14050, 14398, 47], [14398, 15050, 48], [15050, 15768, 49], [15768, 15808, 50], [15808, 15922, 51], [15922, 16334, 52], [16334, 16795, 53], [16795, 17314, 54], [17314, 17428, 55], [17428, 17428, 56], [17428, 18051, 57], [18051, 18612, 58], [18612, 19089, 59], [19089, 19703, 60], [19703, 20246, 61], [20246, 20524, 62], [20524, 20535, 63], [20535, 21148, 64], [21148, 21398, 65]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21398, 0.0]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
498877f988ea52b452355efc286da2e813cfe4b9
During automatic program animation, explanations after animations have greater impact than before animations Peng Wang University of Eastern Finland School of Computing Joensuu pwang@student.uef.fi Roman Bednarik University of Eastern Finland School of Computing Joensuu roman.bednarik@uef.fi Andrés Moreno University of Eastern Finland School of Computing Joensuu andres.moreno@uef.fi ABSTRACT Little is known about the effectiveness of automatic explanations in educational program visualization. We designed a study in which the order of animations and related explanations was manipulated. Two groups of a total of 18 participants interacted with either the animation-first or the explanation-first version of a tool. The results indicate that the animation-first approach is significantly more effective. On the grounds of these findings and students’ input about the explanation generation and layout, we discuss the design implications of the findings. Categories and Subject Descriptors K.3.2 [Computers and education]: Computer and Information Science Education—computer science education, information systems education General Terms Human factors, Experimentation, Design Keywords program animation, learning programming, educational technologies, Jeliot 3 1. INTRODUCTION Program animation, when interaction with it is properly designed, has been shown to be beneficial for learning programming [10]. Others have specifically stressed adequate teacher support [3] as one of the key ingredients of successful learning. It has previously been reported that inexpert users of visualizations take longer to understand and make efficient use of the visualizations than experts, partly because the visualizations are designed by the experts themselves and partly because experts already possess mental models that help in understanding [22]. In the domain of computer programming education, it is well known that students “cannot make sense of visualisations” [4].
Then a question arises: what pedagogically and empirically sound methods should be used when teachers are not present and cannot cue students in to engage in meaningful interactions with a program visualization tool? Naps et al. [20] suggested to “complement visualizations with explanations”, based on research showing that animations are better understood when they are accompanied by concurrently narrated explanations [15]. Naps et al. suggested that in programming education, too, explanations could be added to visualizations in two ways: 1) using accompanying text or 2) providing coordinated audio explanations. Auditory explanations fit the dual-coding theory better: they simultaneously complement the visual stimulation to create new knowledge, and their effectiveness has been demonstrated. In their experiment, Mayer and Anderson [15] showed how concurrent verbal explanations improved students’ problem-solving transfer skills. Students who received the explanations before the animation did significantly worse than those with concurrent audio explanations. These results have been extended in [17], providing arguments for employing multimedia-learning theory principles for learning with dynamic visualizations. Several automatic animation systems, that is, systems in which the animation is dynamically created from the user’s own data set or program source code, offer explanations that help students build the relationships between the animations and the concepts explained. However, the designers of program animation systems have so far preferred textual explanations over verbal ones, and these explanations are often displayed simultaneously with the animation. While it is assumed that students make use of the explanations when interacting with program animation systems, there are neither studies nor guidelines regarding the temporal arrangement of textual explanations and animations.
Thus, in this paper we explore the effects of these arrangements on students’ learning of principal Java programming concepts. 2. RELATED SYSTEMS AND RESEARCH 2.1 Program visualization and explanations Many program visualization tools, such as MatrixPro [11], ALVIS LIVE! [9], Jeliot 3 [2] and WinHIPE [21], do not provide explanations of the animations. MatrixPro and WinHIPE provide exercises with textual descriptions and explanations of the program or algorithm. On the other hand, ViLLE [23], WADEIn II [6], and VARScope [12] provide explanations of animations or programs, and these explanations are shown during the animations. We next present a short summary of these systems. ViLLE [23] and UUhistle [25] are program visualization tools that animate, and in the case of UUhistle let the student simulate, the execution of a program. They highlight code lines, display the states of variables, and create frames representing newly executed methods. At the same time, in both tools, explanations are automatically generated in a separate frame at the bottom. A study by Rajala et al. [24] on the effectiveness of ViLLE demonstrated that ViLLE is especially useful for inexperienced programmers. WADEIn II [6] is a web-based program visualization application. It visualizes the process of expression evaluation in the C language and supports twenty-four C operators. WADEIn II displays animations and related explanations close to each other in the “blackboard” region, and they are presented simultaneously. As students’ knowledge increases, the system evaluates it; parts of the explanations are hidden until no more explanations are presented, and the animations become faster. VARScope [12] is a program visualization system focusing on the concept and usage of variable scope in the C programming language. Visualization in VARScope includes highlighting the code line, showing the value of the variable, animating the active and hidden variables, and detailed explanations of each code line.
Explanations and visualizations are displayed simultaneously in separate windows. In summary, to our knowledge ViLLE and UUhistle are the only general-purpose visualization tools that contain automatic explanations during visualization. WADEIn II and VARScope are more focused on the particular programming concepts they explain, but have interesting features such as adaptation. Automatic explanations in these tools are mostly presented simultaneously with, and in different windows from, the main representations. 2.2 Temporal arrangement of explanations Few previous studies have investigated the arrangement of animations and explanations in time. In order to evaluate the effects of verbal and visual representation in time, Mayer [14] applied a number of retention and transfer tests. The result was that students who received simultaneous animation and narration outperformed those who received successive animation and narration on a problem-solving test. On a retention test there was no statistical difference between simultaneous and successive presentation. In Mayer’s studies, however, there was little information on the presentation of textual narrations with animations. What has been found is the advantage of multimodal representations, that is, of combining verbal and visual materials. Lawrence [13] carried out an experiment on the order of presentation of text and animation in algorithm visualization. The conclusion of Lawrence’s research was that students in the text-first condition did not achieve better results than those in the animation-first condition. Although no significant difference was observed, the text-first approach was finally selected because the text-first group achieved a slightly higher score than the other group. Lawrence believed that participants preferred the text-first condition over the animation-first condition.
In Lawrence’s study, XTango [26] was used to animate the relevant algorithms, and twelve students were divided equally into two groups. An analysis of each group’s post-test score determined whether the order of presentation had an effect on the result. Lawrence’s research is quite similar to ours in a few aspects. We too put an emphasis on the impact of the temporal arrangement of explanations and animations, and it is also our goal to improve understanding of certain behaviors of the visualization and thus of the concepts being visualized. However, Lawrence’s experiment only compared each group’s post-test scores, while we here present a pre-post-test design. 3. JELIOT 3 We selected Jeliot 3 as the system for testing the effectiveness of explanations and their temporal arrangement for a few reasons. First, Jeliot 3 is distributed as open source, it is well documented (http://cs.uef.fi/jeliot/), and its architecture allows for such modifications [2]. Second, as we show below, it has been repeatedly shown to be effective in learning programming. Here, it has been modified to automatically display explanations for certain concepts during the animation of students’ programs. 3.1 Previous research on Jeliot effectiveness Jeliot 3 employs automatically generated animations that display the execution of a Java program. Teachers and students can use these animations in a movie-like fashion or step by step. Several studies have demonstrated that Jeliot 3 has positive impacts on learning programming [3, 7, 8]. A study [3] was carried out to evaluate a predecessor of Jeliot 3 in a one-year programming course. In that experiment, students were divided into a control group and an animation group; only the animation group used Jeliot. Ben-Bassat et al. found that there was no statistically significant difference between pre- and post-test results in the control group, whereas there was a statistically significant improvement in the grades of the animation group.
Furthermore, within the animation group it was demonstrated that mediocre students benefited more from long-term use of the tool than either strong or weak students. A study by Cisar et al. [7] verified that Jeliot 3 affects the learning of Java. In that study, the results of 400 students on 20 multiple-choice questions were analyzed. It was shown that students who learned with the help of Jeliot 3 outperformed those who did not use it. Hongwarittorn and Krairit [8] confirmed that Jeliot 3 leads to better learning of Java, especially of object-oriented programming (OOP). In that study, conducted with 54 participants, those who learned Java with Jeliot achieved better results than those who learned without the tool. However, other research [16] indicates that some students misunderstand the animations in Jeliot 3. In that study, after 10 weeks of voluntarily using Jeliot 3 as a programming tool for completing weekly assignments, six maths undergraduates were interviewed to explore their attitudes toward the tool and to assess their comprehension of the animations. Although almost all subjects understood animations of basic statements such as variable declarations, some of them failed to describe the animation of an object allocation correctly. The "this" reference, which points to the current object, and the passing of arguments to constructor parameters were found to be the most puzzling. 3.2 Objectives and hypotheses In this paper, the aim is to inspect the effect of the order of animations and related explanations on learning outcomes during programming. In particular, the investigation we present compares the impact on the understanding of critical Java programming concepts when explanations are displayed either after or before the animations. The null hypothesis we investigate is that the order of animation and related explanation makes no difference in terms of learning outcome. 4.
METHOD We designed a pre-post-test study in which participants were assigned to one of two conditions: they either interacted with a modified Jeliot 3 system that presented the explanation of a key concept before its animation, or they used a version of the tool that presented the explanation after the concept's animation. Lawrence’s research method is similar to the one used in this study: she also focused on the impact of the temporal arrangement of explanations and animations. However, Lawrence’s experiment only compared each group’s post-test scores, while this study uses a pre-post-test design. 4.1 Design and materials The experiment was designed as a between-subjects study, where the order of the animation and explanation was the primary factor with two levels: explanation first and animation first. Both groups used the same short Java program (see the Appendix for the listing) and took the same test before and after the experiment. The only difference between the two groups was the order of explanations and animations. In the animation-first group, the corresponding explanation was presented after each animation and described what the previous animation represented. In contrast, in the explanation-first group, the related explanation was displayed before each animation and described what the next animation would represent. The content of the explanations was the same for both groups. There were altogether three target animations related to three fundamental Java object-oriented concepts: 1. Object initialization and the "this" keyword 2. Reference return and assignment 3. Garbage collection These concepts were chosen because the animations of object-oriented concepts in Jeliot 3 were identified as the most difficult for students to explain after watching them [16]. Moreover, the first two concepts have been considered either critical or difficult to learn in a study surveying faculty members [5].
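For illustration, the three concepts can be sketched in a few lines of Java. This is a hypothetical example (the class `Node` and method `duplicate` are illustrative only and are not part of the experiment's materials, which are listed in the Appendix):

```java
// Hypothetical sketch of the three animated concepts; not the
// experiment's program (see the Appendix for that listing).
class Node {
    int value;

    Node(int value) {
        // Concept 1: "this" refers to the current object and
        // distinguishes the field from the constructor parameter.
        this.value = value;
    }

    Node duplicate() {
        // Concept 2: a reference to a new object is returned ...
        return new Node(value);
    }
}

public class Concepts {
    public static void main(String[] args) {
        Node a = new Node(5);
        Node b = a.duplicate(); // ... and assigned to a variable.
        a = null;               // Concept 3: the first Node is now
                                // unreachable, so it becomes eligible
                                // for garbage collection.
        System.out.println(b.value); // prints 5
    }
}
```

Each of the three steps above corresponds to one target animation in the modified Jeliot 3.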
As an example of an animation in Jeliot 3, Figures 1 and 2 show the sequence of animation steps for the object initialization and "this" keyword concepts in the animation-first and explanation-first conditions, respectively. 4.2 Participants There were a total of 18 volunteering participants in this experiment, 15 male and 3 female. The participants were computing postgraduate and Master’s students at one Finnish university. Overall, they had very little or no experience with Jeliot 3. All participants had some knowledge of object-oriented programming (OOP) in Java, as they had recently taken a Java class in an undergraduate course. The grade from that OOP course was collected as a background measure of OOP understanding, along with a self-rating of OOP skills, both on a scale from 1 (worst) to 5 (best). Participants were divided into two groups: the animation-first group (10 participants) and the explanation-first group (8 participants). Table 1 shows no significant differences between the groups in terms of previous OOP grade and self-rating. 4.3 Procedure Participants were given a short introduction to Jeliot 3 by an assistant. The introduction covered what each area of the animation frame displays and how to control the animation through the buttons. After the introduction, participants were asked to familiarize themselves with Jeliot 3 by running an object-oriented program, and they were allowed to ask questions about the tool. The time reserved for this introduction and practice was 10 minutes. Afterwards, participants completed a test comprising three questions in 20 minutes. Each question could award the student a maximum score of 5, for a maximum total of 15 points.

Figure 1: In the animation-first condition, animations of object initialization and the "this" keyword are shown before the respective explanation appears: (a) the animation of object initialization starts; (b) the animation of initialization ends and the related explanation appears; (c) the related explanation disappears; (d) the animation of the "this" keyword starts; (e) the animation of "this" ends and the related explanation appears; (f) the related explanation disappears.

Figure 2: In the explanation-first condition, animations of object initialization and the "this" keyword are shown only after the respective explanation.

During the test, participants could use Jeliot 3, with the explanations deactivated, to visualize the animation associated with each question. After the test, the explanations were added to Jeliot 3; participants were then required to run the same program again and read the explanations within 15 minutes. Finally, participants completed a second test in 15 minutes; during this test they were not allowed to use Jeliot 3. The three questions in this test were the same as the previous ones. 4.4 Analysis In this paper we present an analysis of the scores achieved on the pre-test ($score_1$) and post-test ($score_2$) evaluations. We compute the raw score difference as well as the learning gain, which depends on the maximum number of points that can be awarded ($max$). We employ the following formula to compute the learning gain: $$Learning \ gain = \frac{score_2 - score_1}{max - score_1}$$ The learning gain evaluates the relative increase in score given the pre-test score (how much the student improved out of the total possible improvement), while the raw score difference does not take the starting level into account. 5. RESULTS In this paper we analyze performance in terms of pre-post-test differences. Table 2 shows the distribution of pre-test scores. A statistical analysis\(^2\) shows that there were no significant differences between the two groups in pre-test performance, although the animation-first group performed somewhat better on the reference return and assignment concept. That question was also the easiest for participants, as it received the highest score of the three concepts.
The scores on the post-test are shown in Table 3. On the first and last concepts the animation-first group outperformed the explanation-first group; we treat the differences between post- and pre-test in the following section. We also computed correlations of the pre-test and post-test scores with the participants' background knowledge, measured by the grade obtained from a previous OOP course (see Table 4). Such an analysis allows us to answer whether the explanations have a homogeneous effect on participants regardless of their background knowledge. It turned out that only the pre-test score on the second question, related to reference return and assignment, was significantly correlated with the OOP grade. This indicates that those with better OOP knowledge did better on that question before the use of explanations. The post-test scores were not correlated with previous knowledge.

\(^2\)A 1-Sample Kolmogorov-Smirnov test verified that the distributions of participants’ grades in both the pre-test and the post-test, as well as the distributions of the changes in score, were normal. Hence we applied a series of independent-samples t-tests in the following analyses.

### 5.1 Learning score changes and learning gain We first computed for each participant the difference between pre- and post-test scores, shown as group-aggregated mean values in Table 5. In total, there were significant differences in the raw score changes between the two groups. In particular, the animation-first group improved by 1.7 points on average, while the explanation-first group improved by only 0.25 points on average. The pre-test standard deviation of the animation-first group was 2.0, so the mean improvement of 1.7 points corresponds to a shift of about 0.85 $\sigma$.
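As a quick check of the arithmetic, the learning gain formula from Section 4.4 and the standard-deviation shift can be reproduced from the group totals reported above (for illustration the gain is computed on the group means; the paper aggregates per-participant gains):

```java
public class GainCheck {
    // Learning gain (Section 4.4): raw improvement divided by the
    // headroom left above the pre-test score.
    static double learningGain(double pre, double post, double max) {
        return (post - pre) / (max - pre);
    }

    public static void main(String[] args) {
        // Animation-first group totals from Tables 2 and 3:
        // 3.70 points before, 5.40 after, out of a maximum of 15.
        double gain = learningGain(3.70, 5.40, 15.0);
        System.out.println(Math.round(gain * 100) + "% learning gain"); // 15%

        // The raw mean change of 1.7 points against the group's
        // pre-test standard deviation of 2.0.
        double sigmaShift = 1.7 / 2.0;
        System.out.println(sigmaShift + " sigma shift"); // 0.85 sigma shift
    }
}
```

Note that the denominator `max - pre` makes the gain sensitive to the starting level: the same raw improvement counts for more when there is less headroom left.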
When analyzing the score change within the groups statistically, using paired-sample t-tests we found a significant difference between pre- and post-test scores in the animation-first group ($t(9) = 3.6$, $p = .006$), while there was no difference in the performance of the explanation-first group ($t(7) = 1.528$, $p = .170$). An analysis of the learning gain scores showed that the animation-first group improved by 15% on average while the explanation-first group improved by 2% on average (see Table 6). The overall difference between the groups was significant at the 3% level. There was no difference in the learning gains on the second concept, related to reference return and assignment. ### 5.2 Analysis of written answers #### 5.2.1 Object initialization and this-keyword In the animation-first group, three of the ten (30%) participants corrected their answers on object initialization, and another three of the ten (30%) corrected their answers on the "this" keyword. In the explanation-first group, however, no participants improved their scores after using the explanation version of the tool. Table 7 captures the differences between pre- and post-test answers. #### 5.2.2 Reference return and assignment As shown above, there was no difference between the groups in the understanding of the reference return and assignment concept, and there was very little or no improvement after using the explanations. Only two participants in each group scored better, by one point; Table 8 shows these rare changes in answers.
Table 2: Before: Means, standard deviations (in parentheses), t value, and 2-tailed p-value of each question, before using explanations (pre-test) <table> <thead> <tr> <th></th> <th>Q 1</th> <th>Q 2</th> <th>Q 3</th> <th>Total questions</th> </tr> </thead> <tbody> <tr> <td>Animation-first (N=10)</td> <td>0.90 (0.74)</td> <td>1.90 (0.74)</td> <td>0.90 (0.74)</td> <td>3.70 (2.00)</td> </tr> <tr> <td>Explanation-first (N=8)</td> <td>0.88 (0.35)</td> <td>1.38 (0.74)</td> <td>0.88 (0.83)</td> <td>3.13 (1.13)</td> </tr> <tr> <td>t value</td> <td>0.088</td> <td>1.495</td> <td>0.067</td> <td>0.729</td> </tr> <tr> <td>p value (2-tailed)</td> <td>0.931</td> <td>0.154</td> <td>0.947</td> <td>0.480</td> </tr> </tbody> </table> Table 3: After: Means, standard deviations (in parentheses), t value, and 2-tailed p-value of each question, after using explanations (post-test) <table> <thead> <tr> <th></th> <th>Q 1</th> <th>Q 2</th> <th>Q 3</th> <th>Total questions</th> </tr> </thead> <tbody> <tr> <td>Animation-first (N=10)</td> <td>1.60 (1.17)</td> <td>2.10 (0.74)</td> <td>1.70 (0.95)</td> <td>5.40 (2.59)</td> </tr> <tr> <td>Explanation-first (N=8)</td> <td>0.88 (0.35)</td> <td>1.63 (0.74)</td> <td>0.88 (0.83)</td> <td>3.38 (1.92)</td> </tr> <tr> <td>t value</td> <td>1.851</td> <td>1.352</td> <td>1.931</td> <td>2.009</td> </tr> <tr> <td>p value (2-tailed)</td> <td>0.091</td> <td>0.195</td> <td>0.071</td> <td>0.062</td> </tr> </tbody> </table> Table 5: Pre-post-test mean differences in raw score, standard deviations (in parentheses), t value, and 2-tailed p-value of each question.
<table> <thead> <tr> <th></th> <th>Q 1</th> <th>Q 2</th> <th>Q 3</th> <th>Total questions</th> </tr> </thead> <tbody> <tr> <td>Animation-first (N=10)</td> <td>0.70 (0.82)</td> <td>0.20 (0.42)</td> <td>0.80 (0.79)</td> <td>1.70 (1.49)</td> </tr> <tr> <td>Explanation-first (N=8)</td> <td>0.00 (0.00)</td> <td>0.25 (0.46)</td> <td>0.00 (0.00)</td> <td>0.25 (0.46)</td> </tr> <tr> <td>t value</td> <td>2.689</td> <td>-0.239</td> <td>3.207</td> <td>2.899</td> </tr> <tr> <td>p value (2-tailed)</td> <td>0.025</td> <td>0.814</td> <td>0.011</td> <td>0.014</td> </tr> </tbody> </table> Table 6: Mean learning gains, standard deviations (in parentheses), t value, and 2-tailed p-value <table> <thead> <tr> <th></th> <th>Q 1</th> <th>Q 2</th> <th>Q 3</th> <th>Mean gain</th> </tr> </thead> <tbody> <tr> <td>Animation-first (N=10)</td> <td>0.18 (0.23)</td> <td>0.06 (0.13)</td> <td>0.19 (0.20)</td> <td>0.15</td> </tr> <tr> <td>Explanation-first (N=8)</td> <td>0.00 (0.00)</td> <td>0.07 (0.12)</td> <td>0.00 (0.00)</td> <td>0.02</td> </tr> <tr> <td>t value</td> <td>2.250</td> <td>-0.139</td> <td>2.732</td> <td>2.413</td> </tr> <tr> <td>p value (2-tailed)</td> <td>0.039</td> <td>0.891</td> <td>0.015</td> <td>0.028</td> </tr> </tbody> </table> Table 7: Example answers from pre- and post-test on object initialization and this-keyword, all participants from animation-first group. <table> <thead> <tr> <th>Participant</th> <th>In pre-test</th> <th>In post-test</th> <th>Changes</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>“...create a new object square.”</td> <td>“...this arrow to square.”</td> <td>This participant misunderstood the meaning of the arrow in the pre-test, but after reading the explanations he recognized the arrow as a reference to an object.</td> </tr> <tr> <td>B</td> <td>“The arrow represents the relationship between the current object and its variables.”</td> <td>“Every object has a reference to itself and this keyword indicates that.
The arrow means the reference to the object.”</td> <td>This participant thought of the arrow as a relationship. Later, however, he was able to understand why “this” was used, so the explanations made sense to him.</td> </tr> <tr> <td>C</td> <td>“The arrow refers to the constructor of the class.”</td> <td>“The arrow means the memory is located for new object.”</td> <td>This participant changed his answer from a reference to the constructor to a reference to the object.</td> </tr> </tbody> </table> Table 8: Example answers from pre- and post-test on reference return and assignment. <table> <thead> <tr> <th>Participant</th> <th>In pre-test</th> <th>In post-test</th> <th>Changes</th> </tr> </thead> <tbody> <tr> <td>A participant in the animation-first group</td> <td>“the movement of square from 'Expression evaluation area' to 'Method area' means to select the value of the variable &quot;side&quot; and put it in the object of class square.”</td> <td>“The movement of the small rectangle from 'evaluation area' to 'method area' means to assign to the variable a reference from the new created object.”</td> <td>This participant was unable to express the meaning of the movement precisely in the pre-test, but after reading the explanations he correctly interpreted the movement as an assignment.</td> </tr> <tr> <td>A participant in the explanation-first group</td> <td>“The newly created instance square has been initialized by the instance of class square. It will return to the main function with the reference to the new instance square.”</td> <td>“...assign it to the newly created instance.”</td> <td>This participant changed his answer from initialization and return to assignment.</td> </tr> </tbody> </table> #### 5.2.3 Garbage collection After interacting with the explanations, six of the ten participants in the animation-first group improved their scores, while no participants in the explanation-first group did. Table 9 shows examples of three participants’ answers in the pre-test and post-test.
5.3 Summary of the findings The main findings can be summarised as follows: - Sequencing of animations and explanations matters. Explanations after animations have a positive effect on learning gain, while explanations before animations have little to no effect. - Students quickly assimilate the vocabulary of the explanations. Students improved their descriptions, and their scores, by borrowing the text provided in the explanations. 6. GENERAL DISCUSSION AND CONCLUSIONS In this paper we presented an empirical evaluation of the effect of the temporal arrangement of explanations on learning three Java concepts using automatic program animation. We extended a well-studied tool with automatic explanation generation and conducted a study in which participants used either a version that presented explanations before the animated concept or one that presented them after it. The results show that there are differences in learning contingent on the temporal arrangement of animations and explanations. Interestingly, even short-term interaction with explanations after animations is sufficient to improve understanding of some core Java programming concepts. Previous research indicates that interactive elements such as prediction questions [18] shown during program animation cause students to pause and are beneficial for learning, as they increase the level of engagement [19]. The present findings can be seen as in line with that research. The design of the animation-first condition creates pauses during the display of the explanation, in which students seem to reflect on what happened during the animation. On the contrary, when the explanation comes first, the dynamic nature of the animation does not let the student mentally retrieve the text of the explanation she just read. We plan to investigate the actual step-by-step use of explanations using gaze-tracking methodology, which will allow us to estimate when and how explanations and visualizations are attended.
Should textual explanations be displayed at the same time as the animation? Intuitively and theoretically, such a design should be most effective [15, 17, 14]. There are, however, at least two counter-arguments against concurrent explanations in programming education. First, the new explanations presented here are not verbal, so the temporal-contiguity effect would not entirely apply. In previous research on multimodal learning, the additional modality was often verbal. We see this approach as impractical for classroom use and for eventual automatic implementation, though we do not dismiss this possibility. Second, and more importantly, we believe that a juxtaposed explanation of a concurrently animated programming concept may bring more harm than gain. It has been previously shown that novice programmers are not able to coordinate multiple representations concurrently [1]. The same applies here, where introducing a new attention-demanding element into an already complex and dynamic visual stimulus would further increase the load. Monitoring the ongoing animation, source code, output, and an additional explanation would simply be beyond the capabilities of a student. The research presented here opens new paths into the topic of interaction with explanations. While the explanations were generated automatically using generic templates for various concepts, further work could ask the student to write the explanation of each concept herself, to be later compared with the experts’ explanation. Further research needs to consider the effects of explanations from a long-term perspective. In our study students were exposed to the intervention for a short period of time, and even though the observed effects were clearly visible, a course-long exposure is needed to establish evidence of more permanent effects.

## Acknowledgments

The work of Roman Bednarik was supported by a grant of Academy of Finland #137773.

## 7.
REFERENCES

**Appendix**

Listing 1: The Java program employed in the experiment

```java
public class Square {
    int side;

    Square() {
        side = 0;
    }

    Square(int s) {
        side = s;
    }
}

public class MyClass {
    public static void main(String[] args) {
        Square square = new Square(5);
    }
}
```
Chapter 1: Introduction Chapter 1a: Overview Hello and welcome. My name is Paul Beckmann. I'm an engineer with Analog Devices and today I am going to be discussing an introduction to VisualAudio. This module provides an introduction to VisualAudio; a tool for rapid development of audio processing software. The examples and demonstrations today will be based on the Blackfin 533 EZ-KIT although VisualAudio also supports a large number of other Blackfin and SHARC processors. You will learn about the primary features of VisualAudio and how the tool can accelerate product development. You'll learn how to design audio processing layouts using the graphical editor and also a little bit about the underlying DSP software architecture. The target audience for this presentation is embedded product developers. We assume some experience with audio and also some familiarity with the Blackfin processor and the VisualDSP++ development environment. This is an introductory module. A separate module, aimed specifically at audio algorithm developers, discusses VisualAudio’s advanced features in more detail. The outline for today’s presentation is: First I give an overview of VisualAudio, describing the main features and benefits. Then I’ll move on to a live demonstration and show you the tool in action. Then I’ll talk about the DSP software architecture, discuss how VisualAudio is related to VisualDSP++, talk about the audio module library, real-time platforms, and then finally, conclude. Chapter 2: VisualAudio Overview Sub-chapter 2a: What Is VisualAudio? Let’s begin with an overview of VisualAudio. What is VisualAudio? It’s a tool for streamlining audio product development. It consists of three components. First of all, there is VisualAudio designer; it’s a graphical audio processing design application. There’s a separate audio module library consisting of commonly used audio functions. 
And finally, there are example platforms – these real-time frameworks run on the EZ-KIT hardware and provide audio I/O. VisualAudio is designed for product development engineers; people who have to put together efficient products with audio capabilities. VisualAudio provides most of the standard software components found in audio products and finally, VisualAudio generates MIPS and memory optimized code. The code that VisualAudio generates is suitable for inclusion into a final product without further optimization. Sub-chapter 2b: Blackfin vs SHARC VisualAudio supports both the Blackfin and the SHARC processor families from Analog Devices. The Blackfin processor is a native 16-bit processor with SIMD capabilities. Within VisualAudio, we use a 32-bit fixed point representation for all audio. Blackfin also has a rich set of microcontroller features and has a full external memory interface. The SHARC processor, on the other hand, is a 32-bit floating point DSP, also with SIMD capabilities. There’s an external memory interface that varies among the processor versions. Both architectures come in a variety of models with integrated audio peripherals. You’ll find serial ports, S/PDIF transceivers, hardware sampling rate converters and so forth. Both processor families are supported by similar platforms and complementary sets of audio modules and decoders. Let’s compare and contrast the Blackfin and the SHARC processors. The SHARC is ideal for products whose primary function is audio, where there is a significant amount of audio processing. This includes audio/video receivers, professional audio systems, and high-end automotive systems. The Blackfin processor, on the other hand, is ideal for products that have functions in addition to audio. This includes portable media players, automotive head units and telematics systems, networked media nodes, mass market pro audio and mid-end and entry-level automotive amplifiers. 
As a rule of thumb, the SHARC processor is three to four times as efficient as a Blackfin in processing audio per MIP. Still, given Blackfin's high clock rate, it's a very significant and feature-rich audio processor. **Sub-chapter 2c: Supported Hardware** This slide gives an overview of the EZ-KIT evaluation hardware supported by VisualAudio. On the left hand side is a list of the EZ-KITs which contain SHARC processors and on the right you'll see the Blackfin processors. It begins with the 262 EZ-KIT. It has 2 analog inputs and 8 analog outputs and also, a single S/PDIF input. Then there's the 364 EZ-KIT. It also has 2-in/8-out analog and contains S/PDIF inputs and outputs. And there's also the 369 EZ-KIT. That has a similar set of I/O to the 364 EZ-KIT. There's also an audio extender card, which is coming soon, which is going to provide 8 audio inputs and 16 audio outputs, and this audio extender card works with the complete line here of SHARC EZ-KITs. On the right hand side are the EZ-KITs for the Blackfin processors. There's one for the 533 EZ-KIT; that's the one I'm going to be using today. It has 4 analog inputs and 6 analog outputs. There's a separate Blackfin 537 EZ-KIT. By itself, it has two inputs and two analog outputs and in addition, there's an audio extender card that works with the Blackfin 537 EZ-KIT and that provides 8 analog inputs and 16 analog outputs and also S/PDIF input and output. So based on your application, which processor family you're using and your I/O needs, you would select a suitable EZ-KIT. **Sub-chapter 2d: Key Benefits** Now the key benefits for VisualAudio: I've broken these up into two separate categories; one for audio product development and one for IP developers, that is, those people who are developing audio algorithms. Let's start with the benefits for product developers. First of all, VisualAudio provides a starting point and methodology for audio product development. You don't have to start from scratch.
Many of the pieces are provided. VisualAudio reduces development time, cost and risk. It allows engineers to focus on differentiating their products rather than implementing standard features. What's also nice about VisualAudio, is it provides access to audio IP, that is audio algorithms, in a consistent format. Now for audio IP developers, VisualAudio features streamlined audio IP development. It allows you to tune and test and develop algorithms more quickly. VisualAudio also serves as a nice demonstration platform and also VisualAudio provides a consistent format to deliver audio IP in. Chapter 3: On-line Demo Sub-chapter 3a: Demo Overview Now, let's move on to a live demo. First of all, I'm going to begin and talk about creating a new system in VisualAudio, show you how to design the layout using the drag-and-drop editor, go through code generation, and then we're going to build and run the executable on the EZ-KIT hardware using VisualDSP++ and finally, we're going to conclude with real-time tuning. Now the demo setup I have is as follows: First of all, in terms of hardware, I have a Blackfin 533 EZ-KIT here. I'm using a high performance USB emulator. That's the connection between the VisualDSP++ debugger and the Blackfin processor. Although I'm using a high performance emulator, you can also use VisualAudio with the built-in USB emulator that's on the EZ-KIT itself. I have a line level audio source coming in here, that's coming from my PC and then I have two audio outputs here that go to a set of powered speakers. In terms of my software setup, I have VisualAudio installed and also VisualDSP++. And the steps I'm going to go through: I'm going to create an audio processing design using the graphical editor. Again, I'm going to generate code, build and run the executable on the EZ-KIT and finally, tune the system in real-time. Sub-chapter 3b: Audio Layout Now I'm going to switch over to VisualAudio. Here's the main VisualAudio window.
This is the way VisualAudio looks when it's first started. I'm going to begin by going under the system menu and selecting "New System." First of all, I have to give it a name. I'm going to call it "DemoSystem" and the next thing I do is I select a platform file. Now as I mentioned, each of the EZ-KITs has an associated platform file. So I'm going to browse to the location of the platforms, under VisualAudio Platforms and right now I have three different platforms installed; two SHARC and one Blackfin. I'm going to select the folder with the 533 platform and select this XML file. This XML file is a text file which describes the capabilities of the target hardware to VisualAudio. It contains, for example, the number of inputs and outputs, the sampling rates supported, what "TickSizes" or block sizes the layout supports and so forth. After I've selected this, I click "OK" and what VisualAudio does now, is it goes through a list of audio module directories and it goes through each of the audio modules within the directories and finds the ones that are compatible with the selected Blackfin 533 processor. Now what you have on the left is what's called the audio module palette. It's a tree view of audio modules that work with this processor and these are categorized into different folders. On the right hand side, I have what's called the layout window. This is where I'm going to drag and drop audio modules and edit the audio processing. On the left hand side of the window here are four triangles. These are the four analog inputs that the EZ-KIT platform has. If I scroll over to the right hand side of the window, you'll see additional triangles on the right hand side. These six triangles are the six analog outputs that the platform provides. So let me scroll back to the left and I'm going to design a simple stereo audio processing layout.
The first thing I'm going to do is I'm going to take the first two analog inputs and I'm going to convert them to a stereo representation, so each of the triangles here represents a mono audio channel. They get combined into a single interleaved stereo format. The advantage of the stereo format is that it leads to a more efficient implementation in some cases. The next thing I'm going to do, I'm going to drag out a volume control and I'm going to expand this out a little so you can see it a little better. There are two volume controls; these have built-in loudness compensation, and there are two versions. There's the mono version and if it ends in an "S" that means it's a stereo version. So I'm going to drag out the stereo version, drop it on to the layout editor and then I'll connect them together. You'll see the different colors for the mono and the stereo wires. And I'm going to continue scrolling over here and select a few more modules to add. The next thing I'm going to do, is I'm going to go find tone controls, I'm going to find a bass tone control, and a treble tone control and I'll connect these together. And the last thing I'm going to do is to implement a peak limiter on the output. The peak limiter prevents the output from clipping or overloading the output circuitry. The peak limiter is composed of a couple of modules. First of all, there's a maximum absolute value module. This takes the stereo input and computes the maximum absolute value on a sample-by-sample basis. And then I go into this module called the AGC limiter core. This module, given the maximum absolute values, computes a gain that should be applied to the output signal in order to prevent clipping. And the last thing I do, is I'm going to connect this multiplier. So this multiplier takes the gain signal from the limiter core and then applies it on a sample-by-sample basis to the stereo signal here. So these three modules together form the peak limiter.
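The three-module chain just described (maximum absolute value, AGC limiter core, multiplier) can be sketched in C roughly as follows. This is a minimal illustrative sketch, not VisualAudio's actual module API: the type, function, and parameter names are my own, and the AGC core is modeled as simple one-pole gain smoothing with separate attack and release coefficients.

```c
#include <math.h>
#include <stddef.h>

/* State for the (hypothetical) limiter core module. */
typedef struct {
    float threshold;   /* linear ceiling, e.g. 1.0f for full scale      */
    float attack;      /* smoothing coefficient when gain must drop      */
    float release;     /* smoothing coefficient when gain recovers       */
    float gain;        /* smoothed gain state, starts at unity           */
} LimiterCore;

/* Module 1: per-sample maximum absolute value of an interleaved
   stereo block of n frames (2*n samples). */
void max_abs(const float *stereo, float *peak, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float l = fabsf(stereo[2 * i]);
        float r = fabsf(stereo[2 * i + 1]);
        peak[i] = l > r ? l : r;
    }
}

/* Module 2: given the peak signal, compute a smoothed gain that keeps
   peak * gain at or below the threshold. */
void limiter_core(LimiterCore *c, const float *peak, float *gain, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float target = (peak[i] > c->threshold) ? c->threshold / peak[i] : 1.0f;
        float coeff  = (target < c->gain) ? c->attack : c->release;
        c->gain += coeff * (target - c->gain);   /* one-pole smoothing */
        gain[i] = c->gain;
    }
}

/* Module 3: apply the gain to the stereo signal sample by sample. */
void apply_gain(float *stereo, const float *gain, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        stereo[2 * i]     *= gain[i];
        stereo[2 * i + 1] *= gain[i];
    }
}
```

In practice the attack coefficient is chosen close to 1 so the gain drops almost instantly when the peak exceeds the threshold, while a much smaller release coefficient lets the gain recover slowly and avoids audible pumping.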
And finally, I'm going to convert this back to a mono representation. And I'm just going to give myself a little bit more space here and I'm going to increase the number of pages that I have across here. Okay now I have a little bit more room and the last thing that I'm going to do, is I'm going to take this stereo signal and go back to two mono channels. Connect this, now I have two mono signals and I'm going to connect those to my outputs. I'm connecting the first output and then connecting the second output. So now I'm all done. I'm done instantiating the audio modules and also wiring them together. What I can also do is double click on a module; that opens up what we call an inspector. An inspector is a way of setting the audio module parameters. So you can see, for example, I opened the inspector for the bass tone control. You’ll see a couple of things you can adjust. You can adjust the smoothing time, that’s a time in milliseconds over which a change to the bass tone control should take effect. There’s a gain here in dB. This is how many dB of bass boost you want to add. It’s between plus and minus 9 dB. Then finally, there’s a tone frequency. This is a frequency in hertz over which the bass tone control should operate. So for now I’m just going to leave the default settings. Sub-chapter 3c: Generating Code The next step is to generate code. I’m going to click on this button right here on the tool bar and generate code. When VisualAudio generates code, it runs a routing algorithm to determine what order the modules should be executed in. Next, it creates a data structure for each of the modules in the layout. It creates a list of the run order for the modules, and then finally, it writes all of this to disk. Next, I’m going to switch over to VisualDSP++. Now each platform has an XML file that provides the capabilities of the platform to VisualAudio and then there’s also an associated DPJ file.
That’s a VisualDSP++ project file that’s used to build the executable. All the platforms provide such a DPJ file and out of the box, they can build the project and have it run on the EZ-KIT. Sub-chapter 3d: Building the Executable So right now I selected “Build.” It’s going through building all of the files, compiling the source code and when it’s done it’s going to link. The processor is running and I’m going to tell it to stop the running program and load the program just built. So that will load the executable that was just built and finally, I’m going to tell it to start running. Now the audio is running in real-time on the EZ-KIT. Let’s switch back to VisualAudio and try a few things. What you’ll notice here is that the VisualAudio background is white; that means it’s in design mode. In design mode you can create new audio modules and wire them together. Sub-chapter 3e: Tuning What I’m going to do now, is I’m going to switch to tuning mode. I’m going to click on this tool bar button. Now, in tuning mode, when I make a change to an audio module, the change occurs not only on the PC but is also sent to the DSP in real-time using the background telemetry channel in the emulator. And so you’ll be able to listen to the audio and make changes in real-time. Let me start the audio here. Here you have Beethoven’s Symphony Number 9. I’m going to begin and start off with the volume control and I’m going to set the gain of the volume control in dB. So as I move this, you’ll hear it gets quieter and louder. You’ll also hear that the changes are without clicks, so in fact, many of the audio modules such as the tone controls and the volume controls are implemented with automatic smoothing. So you can make changes without worrying about clicks being introduced. Let me go ahead and open the treble tone control. This is the treble tone control and I can adjust the gain. You’ll hear the amount of high frequencies increasing here.
There’s 9 dB more of high frequencies and going back down decreasing it. So you can see seamless control of the audio processing in real-time. Let me go ahead and turn the volume down a little more. What you’ll also notice, is down in the lower right hand corner of the window, there’s some status information that is communicated back from the DSP. Platform status, a value of 0 means that the DSP is running in real-time. It displays a current sample rate that’s 48 kHz and it also shows you the current DSP loading, in terms of the percent of the CPU dedicated to the audio processing. Right now we’re using about 4.37 percent of the DSP. And finally, there’s the peak MIPS usage. So the peak usage recorded was about 4.47 MIPS. Just want to point out a couple of things. So this is running in real-time now on the Blackfin. It’s using the optimized library of audio processing functions and you can see that a layout like this with a number of audio modules actually does not consume a large amount of processing on the Blackfin. So VisualAudio can indeed do very large processing layouts and do them efficiently. I’m going to turn off the audio and then switch back to my presentation. So far the presentation has been on the EZ-KIT development hardware. VisualAudio has the capability, also, to migrate to your target hardware. What you essentially do is you begin with a reference platform like what we have for the EZ-KIT. And, in fact, we provide all the source code for the real-time framework, real-time I/O and so forth. What you do is you take our source code and you basically write some drivers for your target hardware. So you customize it for your target hardware. Next you create this platform file, which is a text file, that describes your hardware to VisualAudio and then you can continue to use VisualAudio on your target hardware.
So everything I did today, in terms of, designing a layout, generating code, building it, running it in real-time and tuning it, you can, in fact, do on your final target hardware. So there’s no disconnect between the EZ-KIT and the target hardware. All the features of VisualAudio will continue to work on the target hardware. Chapter 4: DSP Software Architecture Sub-chapter 4a: Relationship to VDSP Now, I’m going to talk a little bit about the underlying DSP software architecture. First of all, how are VisualAudio and VisualDSP++ related? Let me begin here on the right hand side. There’s a box labeled VisualAudio designer and all the boxes on top are inputs to VisualAudio Designer. First of all, there’s a set of audio module XML files. These are text files that describe the capabilities of the audio modules to VisualAudio. There’s also a layout file and a system file. The layout file contains a graphical design and the system file contains miscellaneous information. In addition, in the middle here, you’ll see there’s a section called “platform” and there’s a Platform XML file. Again, that XML file describes the capabilities of the target hardware to VisualAudio. VisualAudio takes that as input and using all the inputs, when you generate code it creates a set of C and header files that are linked in together with the VisualDSP++ project. So underneath the platform section here, you’ll see the VisualAudio DPJ file. That automatically attaches the C and header files that VisualAudio generates and it also includes a bunch of platform sources and libraries. It also links in the audio processing functions and there’s also a library called the “layout support library” that allows the processing to run in real-time. VisualDSP++ takes all these as input, links them together and then creates the executable. The executable is loaded by VisualDSP++ and is running on the target hardware. So that’s the relationship between VisualAudio and the VisualDSP++ development environment.
Sub-chapter 4b: Audio Module Library Now I’m going to talk about the audio module library. In VisualAudio, the term audio module, refers to a subroutine for processing PCM audio. So there’s PCM audio in/PCM audio out. We provide 89 standard modules with the Blackfin processor and 94 standard modules for the SHARC processor. We have a wide range of standard audio processing functions. There are mixers, filters, delays, tone controls, basic math such as addition, subtraction, multiplication and so forth. There are faders, balance controls, volume controls, compressor limiters and so forth. It contains a large number of audio modules, sufficient to develop many products. All the audio modules are optimized for SIMD execution, so they’re written in hand-coded assembly language and they try to take full advantage of the processing capabilities of the Blackfin processor. Some modules have separate versions for mono and stereo inputs and we do this in order to be able to further optimize the code using SIMD. For all the modules we have in the standard module pack, we also provide source code. This is very handy because you can also write your own modules and expand the collection of audio modules in the library. And the source code is very handy to serve as a starting point for a particular module you may be developing. You may be designing a new filter architecture, so you might start with an existing filter module, make the changes and include that with your library. All the audio modules use block processing. So each audio module operates on a block of data rather than sample-by-sample. And the number of samples per block is fixed and is called the TickSize and you can, in fact, adjust the TickSize using the VisualAudio designer GUI. Let me just switch back here for a moment and I’m going to go back to design mode.
What you’ll see right here, is this drop list shows you the set of available TickSizes for this platform and this platform supports TickSizes all the way from 8 to 2048 samples. So depending upon your application, maybe you’re trying to reduce latency through the system or to reduce memory requirements; there’s a trade-off between TickSize and MIPS and memory usage. Now all audio modules in the layout operate at the same TickSize, and the TickSize is adjustable through the user interface. Block processing is a natural fit for audio decoders, which output blocks of data. For example, Dolby Digital outputs blocks of 256 samples each. MP3, DTS and so forth also output different size blocks. What’s nice about block processing is it yields a very efficient implementation. So we can, in fact, spend quite a bit of time optimizing the inner loops and ending up with a very efficient audio module library. It also leads to modularity, so that each audio function can be a stand-alone separate subroutine. Here’s an example of the computation required by a 10th-order IIR filter. So this is a cascade of five biquad sections. On the top you’ll see the computation required for the Blackfin implementation. On the left hand side is the number of clock cycles required per audio sample processed. And the X axis here is the TickSize, that is, the number of samples per block that’s being processed. What you’ll see is, as the TickSize increases, you get a more and more efficient implementation. So essentially, there’s less overhead per module call as the TickSize increases. What you’ll see, however, is that the computation required quickly flattens out. So that by a TickSize of roughly 32 samples, there’s very little improvement increasing the TickSize beyond that. Another thing I want to point out is you’ll see that the Blackfin requires more cycles per sample than the SHARC.
The reason is because the Blackfin is a native 16-bit processor and we do all the calculations in VisualAudio using double precision, so 32-bit processing. And you’ll see that the SHARC is roughly 3 or 4 times as efficient as a Blackfin processor. Sub-chapter 4c: Interconnections/Wires Now let’s talk about interconnections between audio modules or what we term as “wires”. You saw today that there were mono and stereo wires. A mono wire contains TickSize audio samples. So it’s a contiguous block of TickSize, in this case, 32-bit fractional samples on the Blackfin. A stereo wire, on the other hand, holds interleaved data and contains 2 times TickSize audio samples. So you can think about it as being interleaved left, right, left, right, and so forth. There are also a few more data types that I didn’t demonstrate today. Control wires and frequency domain wires are new for the next release of VisualAudio, coming out this fall, and there’s also going to be a set of audio modules that use control wires and also a set of audio modules which utilize the frequency domain data types. Sub-chapter 4d: VisualAudio Platforms Now I’m going to talk about VisualAudio platforms. Each of the platforms is a light-weight interrupt driven real-time framework. Each platform provides double buffered DMA driven audio I/O. So that’s audio I/O coming from the A/D and D/A converters on the board. There’s also an interface to the VisualAudio-generated audio processing that’s called the layout library. So the platform calls into the layout library which knows how to call the audio processing functions. Each platform also provides a separate non-real-time control thread. So this is handy for doing control within your product. So maybe you have a physical slider or a knob that’s attached to your product and you want to be able to use that control for making changes to the audio module. We recommend that that’s done in a non-real-time control thread.
There’s also an interface for tuning, so tuning allows me to control the hardware in real-time while the audio is being processed and in some cases, there’s also a separate host communication library that allows you to communicate with the host microcontroller, typically over SPI. We have several application specific variants. We have one that’s called basic. It’s a general purpose platform which provides PCM I/O – analog inputs, analog outputs. We also have two other variations. There’s one called AVR. This one is specifically for home theater products with decoders and there’s also an automotive version which introduces a network interface for MOST and also sample rate conversion. Let me talk about the basic and AVR platforms in a little more detail. The basic platform is kind of the entry-level VisualAudio platform. This is, in fact, the easiest one to understand and often a good starting point for products. It’s targeted at PCM-based products; that is products with analog inputs, analog outputs and no audio decoders. We’ve divided the platform into two parts. There’s a common core framework and there are platform specific drivers. So if you’re migrating this basic platform to your target hardware you have to update the platform-specific drivers. So you have to write drivers for your particular A/D and D/A converters. It provides double buffered DMA driven, block-based audio I/O. On the diagram here, you can see that the multi-channel audio codecs or the S/PDIF transceiver come in through the serial port. It’s set up to receive a block of audio data and when a block of data has been received, an interrupt is generated that triggers the audio processing. We run through the audio processing layout, create the audio outputs and then these are sent back out through the serial ports.
The audio processing layout executes at interrupt level, and finally, there’s a separate thread for tuning, host communication, and that user control code, which executes at non-interrupt level. So the way I think about it is that the user control code is always running except when it’s being interrupted by the audio processing. This segregation into two different threads is really useful because it ensures that anything you do in the control code thread won’t starve the real-time audio processing thread.

Now let’s look at the AVR platform. Starting on the top left here, there’s an S/PDIF input. This is typically for compressed audio, such as that coming from a DVD player. It’s received through the serial port interface and then it goes through a bitstream detector. The bitstream detector is a module which looks at the incoming data stream and determines: is it uncompressed PCM audio, or is it a compressed format, such as Dolby Digital or DTS? If a compressed format is detected, the appropriate audio decoder is allocated and the audio decoder is called. So this might, for example, be Dolby Digital operating in 5.1 mode. This will create six audio outputs that go to the audio processing layout. Within the audio processing layout, you can design your audio processing. Maybe you’ll add volume controls, some spectral processing, EQs, limiters, and so forth. And then the output of the audio processing goes out through the serial ports to multi-channel D/A converters. So this is a typical AVR platform setup.

You also see the two boxes here. This is, again, the real-time portion that’s triggered by the serial port interrupt. This runs in real-time, and there’s a separate user control thread which uses all the residual computation. Again, within the control thread, you do product control, the host interface, and also the tuning interface. Now let me talk a little bit about how the real-time platform interfaces to the layout.
On the left-hand side, I’m going to start with the real-time audio I/O. So the platform essentially buffers up the audio into individual audio buffers. There’s one audio buffer per mono input to the audio processing layout. So in the example before, we had a total of four inputs and six audio outputs. So there are four inputs here: four buffers, one for each audio input. Next you call into the VisualAudio layout support library. This works in conjunction with the audio module render functions. These are the real-time functions for processing data, and the layout support library also looks at the audio modules’ data structures generated by the VisualAudio Designer application. So it runs through the list of audio modules to render, calls the appropriate real-time functions, processes the audio, returns it back in place into these input/output buffers, and then these buffers are passed back to the real-time portion.

What’s nice about this sort of clean interface is that you can, in fact, use the VisualAudio layout support library and the generated code from VisualAudio with your own platform. So maybe you have an existing audio processing platform. All you need to do is ensure that the data is buffered up into separate buffers, and then you call into the VisualAudio layout support library and you can add real-time audio processing capabilities to your audio product.

Chapter 5: Conclusion

Sub-chapter 5a: Summary

This concludes my presentation on VisualAudio. In summary, VisualAudio accelerates the development of embedded audio applications. An intuitive graphical user interface allows audio processing to be easily designed and configured. Today’s demo utilized the Blackfin 537 EZ-KIT. VisualAudio supports both the Blackfin and SHARC families of processors and also works with many different EZ-KIT development platforms. VisualAudio also generates very efficient code, in terms of MIPS and memory usage.
This was an introductory training module, and there’s a separate training module which covers the VisualAudio environment in more depth, discussing advanced user interface features, writing custom audio modules, and also interfacing to external design applications.

Sub-chapter 5b: Additional Information

Additional information on VisualAudio can be found in a number of places. First of all, a free download is available from the VisualAudio product page shown here. This includes the VisualAudio Designer application, the EZ-KIT platforms, the audio module libraries, and also full documentation. Additional examples and tutorials can be found at the VisualAudio Developers website shown here. Specific technical questions can be sent to the support email address shown here, and finally, you can also click the "Ask a question" button. This wraps up my presentation of VisualAudio. I'd like to thank you for your time and attention today. Thanks again.
Lecture 25: Scheduling Fork-Join Parallelism

“Just expose independent work as it comes, and let the scheduler do the rest.” - Robinella

Common parallel programming patterns

**Data Parallelism:** Perform the same sequence of operations on many data elements

```c
// OpenMP parallel for
#pragma omp parallel for
for (int i=0; i<N; i++) {
  B[i] = foo(A[i]);
}

// CUDA bulk launch
foo<<<numBlocks, threadsPerBlock>>>(A, B);

// ISPC foreach
foreach (i=0 ... N) {
  B[i] = foo(A[i]);
}

// ISPC bulk task launch
launch[numTasks] myFooTask(A, B);

// higher order using map
map(foo, A, B);
```

**Explicit parallelism management with threads:** Create one thread per execution unit (or per amount of desired concurrency)
- Example below: C code with pthreads
- Other examples: mpirun -np 4

```c
struct thread_args {
  float* A;
  float* B;
};

pthread_t thread_id[MAX_THREADS];
thread_args args;
args.A = A;
args.B = B;

for (int i=0; i<num_cores; i++) {
  pthread_create(&thread_id[i], NULL, myFooThread, &args);
}
for (int i=0; i<num_cores; i++) {
  pthread_join(thread_id[i], NULL);
}
```

**Pipeline Parallelism:** Each unit/worker is responsible for one stage of computation on a data element. Below: three stages of a bus transaction: request stage, response stage, data-send stage. Other examples: processor instruction pipeline, pipelining network transmission, …

Consider divide-and-conquer algorithms

Quick-sort:

```c
// sort elements from begin up to (but not including) end
void quick_sort(int* begin, int* end) {
  if (end - begin <= 1)
    return;
  else {
    // choose partition key and partition elements
    // by key, return position of key as `middle`
    int* middle = partition(begin, end);
    quick_sort(begin, middle);
    quick_sort(middle+1, end);
  }
}
```

Dependencies: the two recursive calls are independent work!
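As a concrete (and deliberately naive) rendering of that independence, the two recursive calls can be hand-forked with one pthread per call. This is an illustrative sketch only: `partition_ints` is a stand-in Lomuto partition, and — as the scheduler discussion later in the lecture makes clear — a thread per spawn is far too heavyweight for real use.

```c
#include <pthread.h>
#include <stddef.h>

typedef struct { int *begin; int *end; } sort_args;

/* Stand-in Lomuto partition: uses the last element as the key and
 * returns a pointer to the key's final position. */
int *partition_ints(int *begin, int *end)
{
    int key = end[-1];
    int *store = begin;
    for (int *p = begin; p < end - 1; p++) {
        if (*p < key) { int t = *p; *p = *store; *store = t; store++; }
    }
    int t = end[-1]; end[-1] = *store; *store = t;
    return store;
}

void *quick_sort_thread(void *arg);

/* Hand-rolled fork-join: spawn a thread for the left half ("fork"),
 * recurse on the right half in the current thread, then "join". */
void quick_sort_fj(int *begin, int *end)
{
    if (end - begin <= 1) return;
    int *middle = partition_ints(begin, end);

    pthread_t child;
    sort_args left = { begin, middle };
    pthread_create(&child, NULL, quick_sort_thread, &left); /* fork */
    quick_sort_fj(middle + 1, end);                         /* continue */
    pthread_join(child, NULL);                              /* join */
}

void *quick_sort_thread(void *arg)
{
    sort_args *a = (sort_args *)arg;
    quick_sort_fj(a->begin, a->end);
    return NULL;
}
```

The exponential thread count this creates is exactly the "heavyweight spawn" problem the worker-pool design below avoids.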
**Fork-join pattern**
- Natural way to express independent work inherent in divide-and-conquer algorithms
- Today’s code examples will be in Cilk Plus
  - C++ language extensions
  - Originally developed at MIT, now adopted as an open standard (in Intel ICC, GCC)

`cilk_spawn foo(args);` — "fork" (create new logical thread of control)

Semantics: invoke `foo`, but unlike a standard function call, the caller may continue executing asynchronously with the execution of `foo`.

`cilk_sync;` — "join"

Semantics: returns when all calls spawned by the current function have completed. ("sync up" with the spawned calls)

Note: there is an implicit `cilk_sync` at the end of every function that contains a `cilk_spawn` (implication: when a Cilk function returns, all work associated with that function is complete)

Basic Cilk Plus examples

```c
// foo() and bar() may run in parallel
cilk_spawn foo();
bar();
cilk_sync;
```

```c
// foo() and bar() may run in parallel
cilk_spawn foo();
cilk_spawn bar();
cilk_sync;
```

Same amount of independent work as the first example, but potentially higher runtime overhead (due to two spawns vs. one)

```c
// foo, bar, fizz, buzz may run in parallel
cilk_spawn foo();
cilk_spawn bar();
cilk_spawn fizz();
buzz();
cilk_sync;
```

CMU 15-418, Spring 2014

Abstraction vs. implementation
- Notice that the `cilk_spawn` abstraction does not specify how or when spawned calls are scheduled to execute
  - Only that they may be run concurrently with the caller (and with all other calls spawned by the caller)
- But `cilk_sync` does serve as a constraint on scheduling
  - All spawned calls must complete before `cilk_sync` returns

Parallel quicksort in Cilk Plus

```c
void quick_sort(int* begin, int* end) {
  if (end - begin <= PARALLEL_CUTOFF)
    std::sort(begin, end);
  else {
    int* middle = partition(begin, end);
    cilk_spawn quick_sort(begin, middle);
    quick_sort(middle+1, end);
  }
}
```

Sort sequentially if the problem size is sufficiently small (the overhead of spawn trumps the benefits of potential parallelization)

Writing fork-join programs
- Main idea: expose independent work (potential parallelism) to the system using `cilk_spawn`
- Recall parallel programming rules of thumb:
  - Want *at least as much work* as parallel execution capability (e.g., the program should spawn at least as much work as there are cores)
  - Want *more independent work* than execution capability to allow for good workload balance of all the calls onto the cores
    - "parallel slack" = ratio of independent work to the machine’s parallel execution capability (~8 is a good ratio)
  - But *not too much independent work*, so that the granularity of work is not too small (too much slack incurs the overhead of managing fine-grained work)

Scheduling fork-join programs
- Consider a very simple scheduler:
  - Launch a pthread for each `cilk_spawn` using `pthread_create`
  - Translate `cilk_sync` into appropriate `pthread_join` calls
- Potential performance problems?
- Heavyweight spawn operation
- Many more concurrently running threads than cores
  - Context switching overhead
  - Larger working set than necessary, less cache locality

Pool of worker threads
- The Cilk Plus runtime maintains a pool of worker threads
  - Think: all threads created at application launch *
  - Exactly as many worker threads as execution contexts in the machine

\* It’s perfectly fine to think about it this way, but in reality the runtime is lazy and initializes its worker threads on the first Cilk spawn. (This is a common implementation strategy; ISPC does the same with the worker threads that run ISPC tasks.)

Consider one thread executing the following code; specifically, consider execution at the point of the spawn of foo():

```c
cilk_spawn foo();
bar();
cilk_sync;
```

What threads should foo() and bar() be executed by?

Serial execution: run the child first via an ordinary function call (the continuation is implicit in the thread’s stack)
- Thread 0 runs foo(), returns from foo(), then runs bar() (its call stack indicates bar() will be performed next after the return)
- Inefficient: Thread 1 goes idle, even though it could be performing bar() at this time!

Per-thread work queues store “work to do”
- While Thread 0 executes foo(), its work queue holds bar()

Idle threads “steal” work from busy threads:
1. The idle thread looks in a busy thread’s queue for work
2. The idle thread moves work from the busy thread’s queue to its own queue
3. The idle thread resumes execution

Alternative implementation: at each spawn, the system stores the path not executed

```c
cilk_spawn foo();
bar();
cilk_sync;
```

- Run continuation first: record the child for later execution
  - The child is made available for stealing by other threads (“child stealing”)
- Run child first: record the continuation for later execution
  - The continuation is made available for stealing by other threads (“continuation stealing”)

Which implementation do we choose? Consider a thread executing the following code:

```c
for (int i=0; i<N; i++) {
  cilk_spawn foo(i);
}
cilk_sync;
```

**Child stealing (run continuation first)**
- The caller thread spawns work for all iterations before executing any of it
- Think: breadth-first traversal of the call graph. $O(N)$ space for spawned work (maximum space)
- If no stealing occurs, the execution order is very different than that of the program with `cilk_spawn` removed

**Continuation stealing (run child first)**
- The caller thread only creates one item to steal (the continuation that represents all remaining iterations)
- If no stealing occurs, the thread continually pops the continuation from its work queue and enqueues a new continuation (with an updated value of `i`)
- The order of execution is the same as for the program with the spawns removed
- Think: depth-first traversal of the call graph

Consider a thread executing the following code:

```c
for (int i=0; i<N; i++) {
  cilk_spawn foo(i);
}
cilk_sync;
```

**Continuation stealing (run child first), when a steal does occur:**
- If the continuation is stolen, the stealing thread spawns and executes the next iteration
  - It enqueues a new continuation with \( i \) advanced by 1
- Can prove that the work queue storage for a system with \( T \) threads is no more than \( T \) times the stack storage of a single-threaded execution

```c
void quick_sort(int* begin, int* end) {
  if (end - begin <= PARALLEL_CUTOFF)
    std::sort(begin, end);
  else {
    int* middle = partition(begin, end);
    cilk_spawn quick_sort(begin, middle);
    quick_sort(middle+1, end);
  }
}
```

What work to steal?
- Thread 0 is working on 0-25 with continuations for 26-50, 51-100, and 101-200 queued; Threads 1 and 2 are idle

Implementing work stealing: dequeue per worker
- The work queue is implemented as a dequeue (double-ended queue)
  - The local thread pushes/pops from the “tail” (bottom)
  - Remote threads steal from the “head” (top)
  - Efficient lock-free dequeue implementations exist
- After the steals: Thread 0 works on 0-25 (cont: 26-50 queued), Thread 1 works on 51-75 (cont: 76-100 queued), Thread 2 works on 101-150 (cont: 151-200 queued)

Implementing work stealing: random choice of victim
- **Idle threads randomly choose a thread to attempt to steal from**
- **Stealing from the top of the dequeue…**
  - Reduces contention with the local thread: the local thread is not accessing the same part of the dequeue that stealing threads do!
- Steals work towards the beginning of the call tree: this is a “larger” piece of work, so the cost of the steal is amortized over a long future computation
- Maximizes locality: (in conjunction with the run-child-first policy) the local thread works on the local part of the call tree

<table>
<thead>
<tr>
<th>Thread 0 work queue</th>
<th>Thread 1 work queue</th>
<th>Thread 2 work queue</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td>cont: 151-200</td>
</tr>
<tr>
<td></td>
<td></td>
<td>cont: 126-150</td>
</tr>
<tr>
<td>cont: 26-50</td>
<td>cont: 76-100</td>
<td>cont: 114-125</td>
</tr>
<tr>
<td>cont: 13-25</td>
<td>cont: 64-75</td>
<td></td>
</tr>
</tbody>
</table>

Working on 0-12… Working on 51-63… Working on 101-113…

Child-first work stealing scheduler anticipates divide-and-conquer parallelism

```c
void recursive_for(int start, int end) {
  while (end - start > GRANULARITY) {
    int mid = (start + end) / 2;
    cilk_spawn recursive_for(start, mid);
    start = mid;
  }
  for (int i=start; i<end; i++)
    foo(i);
}

recursive_for(0, N);
```

The code above generates work in parallel, so it fills the machine more quickly.

Implementing sync

```c
for (int i=0; i<10; i++) {
  cilk_spawn foo(i);
}
cilk_sync;
bar();
```

- Thread 0’s work queue holds the continuation (i=10); Threads 1, 2, and 3 are working on foo(7), foo(8), and foo(6)

Implementing sync, case 1: no stealing

```c
// block (id: A)
for (int i=0; i<10; i++) {
  cilk_spawn foo(i);
}
cilk_sync;  // sync for all calls spawned in block A
bar();
```

If no work has been stolen, the thread behaves like a serial program and `cilk_sync` is a no-op.

Implementing sync, case 2: stealing

Example 1: stalling join policy. The thread that initiates the fork must perform the matching sync; therefore, it waits for all spawned work to be complete. In this case, Thread 0 is the thread initiating the fork, working on foo(0) (block id=A)…
```c
// block (id: A)
for (int i=0; i<10; i++) {
  cilk_spawn foo(i);
}
cilk_sync;  // sync for all calls spawned in block A
bar();
```

Walking through the stalling-join example:
- Idle Thread 1 steals from busy Thread 0. Note: a descriptor for block A is created, and Thread 1 is now running foo(1)
- Thread 0 completes foo(0); Thread 2 steals the continuation (cont: i=2, id=A) and is now running foo(2)
- Execution continues this way until the last spawn completes (descriptor state: spawn: 10, done: 9, then done: 10)
- Thread 0 now resumes the continuation and executes bar(). Note: the block A descriptor is now free.
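The block-descriptor bookkeeping shown in these frames (the spawn/done counts) can be sketched as a pair of counters. The struct and function names here are illustrative, not the Cilk runtime's, and a real runtime would update the counters atomically:

```c
/* Toy sketch of a sync-block descriptor, modeled on the "spawn"/"done"
 * counts in the figures: the sync can complete only once every spawned
 * call in the block has finished. */
typedef struct {
    int spawned;  /* calls spawned in this block     */
    int done;     /* spawned calls completed so far  */
} block_descriptor;

void block_on_spawn(block_descriptor *b)
{
    b->spawned++;
}

/* Returns 1 if this completion was the last outstanding call, i.e. the
 * calling thread may now resume execution past the cilk_sync.  (A real
 * runtime updates these counters atomically.) */
int block_on_complete(block_descriptor *b)
{
    b->done++;
    return b->done == b->spawned;
}
```

Under a greedy join policy, whichever thread's completion makes `done == spawned` is the one that resumes the continuation after the sync; under a stalling join, the forking thread waits for that condition instead.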
Example 2: greedy join policy.

```c
// block (id: A)
for (int i=0; i<10; i++) {
  cilk_spawn foo(i);
}
cilk_sync;  // sync for all calls spawned in block A
bar();
```

- Idle Thread 1 steals from busy Thread 0 (as in the previous case)
- Thread 0 completes foo(0); with no work left in its local dequeue, the thread looks to steal! (descriptor state: spawn: 2, done: 0, with the continuation cont: i=1, id=A stolen by Thread 1)
- When all spawned calls have completed, Thread 1 continues on to run bar(). Note: the block A descriptor is now free.
Cilk uses greedy join scheduling
- **Greedy join scheduling policy**
  - All threads always attempt to steal if there is nothing to do (a thread only goes idle if there is no work to steal anywhere in the system)
  - The worker thread that initiated the spawn may not be the thread that executes the logic after cilk_sync
- **Remember:**
  - The overhead of bookkeeping steals and managing sync points only occurs when steals occur
  - If large pieces of work are stolen, this should occur infrequently
  - Most of the time, threads are pushing/popping local work from their local dequeue

Summary
- Fork-join parallelism
  - A natural way to express divide-and-conquer algorithms
  - Discussed Cilk Plus, but OpenMP also has fork/join primitives
- The Cilk Plus runtime implements the spawn/sync abstraction with a locality-aware work-stealing scheduler
  - Always run the spawned child (continuation stealing)
  - Greedy behavior at join (threads do not wait at a join; they immediately look for other work to steal)
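The per-worker dequeue described in the lecture can be sketched with a single mutex. This is a simplification for illustration — as the slides note, production runtimes use efficient lock-free implementations — and the item type (`int` as a stand-in for a continuation) and names are assumptions:

```c
#include <pthread.h>

#define DEQUE_CAP 256

/* Minimal work-stealing dequeue sketch: the owning worker pushes and pops
 * at the tail; thieves steal from the head (the oldest, typically largest
 * piece of work).  One mutex guards everything for simplicity. */
typedef struct {
    int items[DEQUE_CAP];   /* stand-in for continuations / work items   */
    int head, tail;         /* valid items live in [head, tail)          */
    pthread_mutex_t lock;
} work_deque;

void deque_init(work_deque *q)
{
    q->head = q->tail = 0;
    pthread_mutex_init(&q->lock, NULL);
}

void deque_push_tail(work_deque *q, int item)   /* owner only */
{
    pthread_mutex_lock(&q->lock);
    q->items[(q->tail++) % DEQUE_CAP] = item;
    pthread_mutex_unlock(&q->lock);
}

int deque_pop_tail(work_deque *q, int *item)    /* owner only; 1 = got work */
{
    int ok = 0;
    pthread_mutex_lock(&q->lock);
    if (q->tail > q->head) { *item = q->items[(--q->tail) % DEQUE_CAP]; ok = 1; }
    pthread_mutex_unlock(&q->lock);
    return ok;
}

int deque_steal_head(work_deque *q, int *item)  /* thieves; 1 = got work */
{
    int ok = 0;
    pthread_mutex_lock(&q->lock);
    if (q->tail > q->head) { *item = q->items[(q->head++) % DEQUE_CAP]; ok = 1; }
    pthread_mutex_unlock(&q->lock);
    return ok;
}
```

Pushing and popping at the same end gives the owner LIFO (depth-first) order, while steals come from the opposite end — exactly the locality and contention properties the slides argue for.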
Debugging Code in Access 2002

IN THIS APPENDIX
- Setting the Correct Module Options for Maximum Debugging Power
- Using the Immediate Window
- Stopping Program Execution
- Debugging One Step at a Time
- Viewing the Order of Procedure Calls
- Watching Expressions During Program Execution
- Controlling Code with Conditional Compilation Commands

While creating an application, you can spend much time and effort trying to track down the bugs that creep into the system. These bugs can greatly slow the completion of the application. This appendix discusses the Access 2002 tools for handling bugs and examining code while creating an application. Mainly because of the Visual Basic Editor (VBE), debugging changed in Access 2000 from previous versions. Although the available commands remain the same as they were in earlier versions (with new ones added as of Access 2000), the way you use the commands changed.

**Setting the Correct Module Options for Maximum Debugging Power**

The Access VBA environment includes the same editor as other Office products, as well as Visual Basic. Because the VBE is in its own MDI space, you must go to the editor to set the coding environment options. To change or view the settings of the VBA environment, use the Options dialog for VBA. For example, to change the color of a code line with a syntax error in it, follow these steps:

1. Create a new database, or open an existing one.
2. Press Alt+F11 to open the Visual Basic Editor.
3. From the Tools menu, choose Options. The VBA Options dialog appears, with the following tabbed pages: Editor, Editor Format, General, and Docking.
4. On the Editor Format page, select Syntax Error Text from the Code Colors list box (see Figure A.1).
![Figure A.1](image) *Figure A.1* *The Editor Format page in the Options dialog contains a number of options used for debugging purposes.* Setting the color of code items is one of many ways to set up the application environment for maximum debugging power. The advantage of coloring code is that you can tell what’s happening with different parts of the code. Red, for example, denotes a syntax error. Table A.1 lists the various commands and their default color settings. The Foreground and Background columns refer to the specific code line discussed. VBA uses color along the left side of the module editor, called a *margin indicator*, to help point out various commands that have been placed in the module editor. **TABLE A.1 Default Colors for Various Code Syntax** <table> <thead> <tr> <th>Text Area</th> <th>Foreground</th> <th>Background</th> <th>Indicator</th> </tr> </thead> <tbody> <tr> <td>Normal</td> <td>Automatic</td> <td>Automatic</td> <td>Automatic</td> </tr> <tr> <td>Selection</td> <td>Automatic</td> <td>Automatic</td> <td>Automatic</td> </tr> <tr> <td>Syntax Error</td> <td>Light Red</td> <td>Automatic</td> <td>Automatic</td> </tr> <tr> <td>Execution Point</td> <td>Automatic</td> <td>Dark Yellow</td> <td>Dark Yellow</td> </tr> <tr> <td>Breakpoint</td> <td>White</td> <td>Dark Red</td> <td>Dark Red</td> </tr> <tr> <td>Comment</td> <td>Light Green</td> <td>Automatic</td> <td>Automatic</td> </tr> <tr> <td>Keyword</td> <td>Dark Blue</td> <td>Automatic</td> <td>Automatic</td> </tr> <tr> <td>Identifier</td> <td>Automatic</td> <td>Automatic</td> <td>Automatic</td> </tr> <tr> <td>Bookmark</td> <td>Automatic</td> <td>Automatic</td> <td>Cyan</td> </tr> <tr> <td>Call Return</td> <td>Automatic</td> <td>Automatic</td> <td>Light Green</td> </tr> </tbody> </table> **NOTE** When a color is set to Automatic, Access uses the setting for the default Windows system colors. **TIP** Notice that one of the colors is for Bookmark. 
Bookmarks are used to tag code lines that you want to return to later, for whatever reason. This feature is very useful in large chunks of code. A square with rounded corners appears in the margin indicator bar. You can use the selections on the Edit menu for Bookmark (toggle), Next Bookmark, Previous Bookmark, and Clear All Bookmarks. Bookmarks disappear when you close the database.

Other useful coding options are found on the different pages in the Options dialog. The Editor page offers these code settings:

- **Auto Syntax Check.** When selected, this option makes Access check each code line as you type it; a message box appears if the line still contains a syntax error when you complete it and press Enter. It’s recommended that you leave this option at its default (on) when starting out with VBA, and then turn it off once you are comfortable recognizing syntax errors.
- **Require Variable Declaration.** When enabled, this option places the Option Explicit statement in the Declarations section of any new modules created. It doesn’t affect previously created modules. It’s recommended that you change this option from its default (off) to on.

**Tip** If you turn on Require Variable Declaration and use Option Explicit in each module, you’ll save countless hours of searching for misspelled variables. For more information on explicit versus implicit variable declarations, refer to the book’s Chapter 2, “Coding in Access 2002 with VBA.”

- **Auto List Members.** Selecting this option causes a list of possible options (that is, properties and methods) to appear when you’re building a statement in code.
- **Auto Quick Info.** With this option on, function syntax appears below the code line on which you’re working, reflecting the function, statement, or object you’re now typing. This includes user-defined procedures.
- **Auto Data Tips.** With this option on, resting the mouse pointer over a variable during a break in program execution displays that variable’s current value.

**Tip** The preceding three options go by the unwieldy name IntelliSense, but they are very handy. Auto Data Tips in particular is great because you don’t have to highlight values and click Quick Watch; all you do is place the cursor over the variable name to examine it (see Figure A.2).

- **Auto Indent.** This option indents code automatically to enhance readability for debugging and maintaining code. It’s recommended that you leave this option set to its default (on).

**Figure A.2** *Looking at variables such as `pstrBackEndPath` is a breeze with the Auto Data Tips option.*

The rest of the commands that affect the debugging environment are on the General page (see Figure A.3):

- **Notify Before State Loss.** Enable this option if you want Access to tell you when module-level variables will be reset because a running project is being halted.
- **Error Trapping.** These three options let you choose when you want Access to break on errors that occur in your code:
  - **Break on All Errors.** Has Access break on every error, on the line where it occurs, whether or not error handlers are active and even in class modules.
  - **Break in Class Module.** Has Access break inside class modules. If this option isn’t specified, the line that calls a property or method of the class module displays the error instead.
  - **Break on Unhandled Errors.** Causes Access to break only on errors that don’t have an error handler already in use.
- **Compile on Demand.** This option, enabled by default, keeps Access from compiling the entire potential call tree when you start up a form. VBA compiles only as you call various functions.
- **Background Compile.** By selecting this option, Access compiles any uncompiled code while Access’s processes are idle.
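As a sketch of why Require Variable Declaration is worth turning on, the following module (the routine and variable names are illustrative, not from the sample application) fails to compile under Option Explicit instead of silently creating a misspelled variable:

```vba
Option Explicit

Public Sub CalculateTotal()
    Dim lngSubTotal As Long
    lngSubTotal = 100

    ' Misspelled variable: with Option Explicit in place, the
    ' compiler stops here with "Variable not defined." Without
    ' it, VBA would quietly create a new, empty lngSubTotl
    ' variable and the bug would go unnoticed.
    Debug.Print lngSubTotl + 25
End Sub
```

Choosing Compile from the Debug menu surfaces this error immediately, rather than at runtime.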
![Options Dialog Box](image)

**Figure A.3** *The options on the right side of this page can affect the debugging process.*

---

**NOTE** To ensure that no compile errors are lurking in obscure forms, choose Compile `Project_Name` from the Debug menu while in the module editor before distributing a system. Also, because Access doesn’t have to compile the code at runtime while in production, the application performs better. You should also select Save from the File menu to save the project after compiling.

---

**Using the Immediate Window**

The VBE Immediate window has a number of features that allow you to follow code and examine expressions in a number of different ways. By using the Immediate window, you can print data and reassign values to variables and Access objects, which comes in handy as you debug your application. To bring the Immediate window up while in the VBE, open the View menu and choose Immediate Window.

---

**TIP** Press Ctrl+G anywhere in your application to bring up the VBE Immediate window.

**Printing Data to the Immediate Window from Your Application**

You can print information to the Immediate window by using the `Print` method of the `Debug` object. When a code line uses the `Debug.Print` method, it prints the data to the Immediate window, even if the window isn’t open. The next time you open the window, the text will be there. Figure A.4 shows how to include the `Debug.Print` method in your code, as well as the output in the Immediate window.

![Debugging Code in Access 2002](image)

**Figure A.4** *The `Debug.Print` method is useful for keeping track of expressions in your application.*

**Note** Although you can’t see the code lines that contain the `Debug.Print` statement if the Immediate window isn’t open, Access still has to take the time to parse the command at runtime and print to the object.
If you don’t want to include these code lines, use the conditional compilation directives mentioned later in the section “Controlling Code with Conditional Compilation Commands.”

**Displaying Data While in the Immediate Window**

By using the ? (Print) statement, you can display all types of expressions while in an application. Simply place a Stop statement or breakpoint in your application, open the Immediate window, and then use the ? statement to print the information you’re interested in. (Stop statements and breakpoints are discussed in detail later in the section “Stopping Program Execution.”)

Suppose that you’re running a routine that loops through a recordset and checks a value in one of the fields. By displaying the value of that field in the Immediate window, you could see what the value is during execution for each record. The following are some examples of different data types that you can display while in the Immediate window:

<table> <thead> <tr> <th>Object Type</th> <th>Syntax</th> </tr> </thead> <tbody> <tr> <td>Control on a form</td> <td>? Forms!FormName!ControlName</td> </tr> <tr> <td>Property of a control</td> <td>? Forms!FormName!ControlName.PropertyName</td> </tr> <tr> <td>Variable</td> <td>? VariableName</td> </tr> <tr> <td>Variable in an expression</td> <td>? VariableName * 20</td> </tr> </tbody> </table>

**Assigning Values to Variables and Objects in the Immediate Window**

The Immediate window also lets you assign values and perform commands right from the window. You also can perform actions such as closing recordsets manually.

**TIP** Through the Immediate window, you can assign a new value to a variable, and then use the Run menu’s Set Next Statement command to place the next line of execution just before that value is used in the program. You can then use the Step commands to walk through the code and view the results.
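A minimal sketch tying Debug.Print and the Immediate window’s ? statement together (the routine and variable names here are illustrative only, not from the sample application):

```vba
Public Sub TraceOrderTotals()
    Dim curTotal As Currency
    Dim intItems As Integer

    intItems = 3
    curTotal = intItems * 19.95

    ' Writes to the Immediate window even if it is closed;
    ' open it later (Ctrl+G) and the text is waiting there.
    Debug.Print "Items: "; intItems, "Total: "; curTotal

    Stop    ' While halted here, you can type in the Immediate window:
            '   ? curTotal          (displays 59.85)
            '   curTotal = 100      (reassigns the variable)
End Sub
```

The reassignment takes effect immediately, so pressing F5 resumes execution with the new value.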
The Step commands are discussed in greater detail later in the section “Debugging One Step at a Time.” The following table shows similar syntax you can use for assigning values:

<table> <thead> <tr> <th>Object Type</th> <th>Syntax</th> </tr> </thead> <tbody> <tr> <td>Control on a form</td> <td>Forms!FormName!ControlName = 1</td> </tr> <tr> <td>Property of a control</td> <td>Forms!FormName!ControlName.PropertyName = &quot;New Name&quot;</td> </tr> <tr> <td>Variable</td> <td>VariableName = 3</td> </tr> <tr> <td>Variable in an expression</td> <td>VariableName = VariableName * 20</td> </tr> </tbody> </table>

**Running Code from the Immediate Window**

One nice thing about the Immediate window is that it gives you the capability to test routines without requiring a complete framework around the routine. You can call routines straight from the Immediate window by simply typing the necessary syntax in the window. The syntax for calling the two types of procedures is as follows:

- For a subroutine, the syntax is

```vba
SubName arg1, arg2...
```

or

```vba
Call SubName (arg1, arg2...)
```

- For a function, the syntax is as follows if you want to print the function’s return value:

```vba
? FunctionName(arg1, arg2...)
```

Or use this syntax if you’re just running the function and don’t care about the results:

```vba
FunctionName arg1, arg2...
```

**Note** The functionality of running functions on a line by themselves was new as of Access 95. Before that, functions couldn’t be called on a line by themselves. This was also true when calling functions at runtime.

Figure A.5 shows an example of calling a subroutine and a function.

**Figure A.5** *Use the Immediate window to test subroutines and functions on-the-fly.*

**Stopping Program Execution**

Bringing up the Immediate window during runtime is no problem.
You can pause an application programmatically with either a `Stop` statement or a breakpoint. When placed in code, both methods halt the execution of a piece of code and bring up the module editor, with the code line containing the `Stop` statement or breakpoint highlighted. You can have as many breakpoints and `Stop` statements in your code as you want. Figure A.6 shows a `Stop` statement and a breakpoint set in the `ap_AppInit()` function, which is in the `modGlobalUtilities` module.

![Image of a code editor with a `Stop` statement and a breakpoint]

**Figure A.6** *Breakpoints and `Stop` statements are two ways to stop execution at a specific point in code.*

**NOTE** You can’t place `Stop` statements or breakpoints in comments, variable declarations (the `Dim` statement), blank lines, line labels, or line numbers.

**Using the Stop Statement**

To place a Stop statement in a code line, simply type the command.

**CAUTION** Stop statements, if saved with the module, remain in the code until you remove them. Remember to remove all Stop statements from your standard modules and form modules before distributing your application for production.

To find all occurrences of Stop statements, follow these steps:

1. Open a module.
2. From the Edit menu, choose Replace.
3. Type Stop in the Find What text box. (Leave the Replace With text box blank.)
4. Click the Find Next button.
5. Click the Replace button if the text found is a legitimate Stop statement. If the text is part of another line of code, such as a comment or a text string, skip this step.
6. Repeat steps 4 and 5 until you see the message The specified region has been searched.

Although it’s not required, if you turn on Find Whole Word Only in the Replace dialog, you’ll be less likely to get bogus occurrences of the string.

**Using Breakpoints**

The alternative to a Stop statement is a breakpoint. Breakpoints are useful because they go away when the database is closed.
As a result, you don’t have to worry about breakpoints left in your code when you distribute the application. To set a breakpoint, first highlight the line on which you want to stop code execution. Then you can toggle (set or unset) a breakpoint in one of a number of ways:

- Choose the Breakpoint toolbar button.
- Right-click, and then choose Toggle, Breakpoint from the shortcut menu.
- Press F9.
- Click in the left margin next to the code line you want to set your breakpoint on. No highlight is required.
- From the Debug menu, choose Toggle Breakpoint.

If the breakpoint didn’t exist before, you’ll see it at this time, highlighted in whatever color you set the Breakpoint Text color code to be. (The default color is dark red for the background, white for the foreground.) You’ll also see a red dot if the margin indicator bar is on. To unset a breakpoint, follow the same steps that were performed to set it. (For information on how to set the breakpoint text color, refer to the earlier section “Setting the Correct Module Options for Maximum Debugging Power.”) To remove all breakpoints in an application, open the Debug menu and choose Clear All Breakpoints while in the VBE.

**Tip** Inserting a breakpoint can really help you track down an error when a variable is returning a Null value it shouldn’t be. You could place a breakpoint just after the point in the code where the variable is assigned, and examine the environment with some of the other commands listed later in the section “Watching Expressions During Program Execution.”

**Using Debug.Assert**

The Debug object’s Assert method stops execution of an application based on criteria you supply. The syntax looks like this:

`Debug.Assert booleanexpression`

Execution halts on the `Debug.Assert` line whenever *booleanexpression* evaluates to False; when the expression is True, the line does nothing.

**Debugging One Step at a Time**

To work through some debugging situations, you need to be able to walk a program line by line while it’s executing.
VBA provides four Step commands to accomplish this: **Step Into**, **Step Over**, **Step Out**, and **Run to Cursor**. You can use these commands after a program halts execution by choosing the appropriate command from the Debug menu.

**Note** All four Step commands skip over the same types of code lines that don’t allow breakpoints: comments, variable declarations (the Dim statement), blank lines, and line labels or line numbers.

**Stepping into Code Line by Line**

The Step Into command steps through code lines one by one. When a code line has a call to another procedure, the editor follows the code into it. This includes user-created functions and subroutines, but not intrinsic VBA functions such as `Date()` and `Mid()`; calls into DLLs and OLE servers are also skipped. First, halt program execution by pressing Ctrl+Break while the routine is executing, by setting a breakpoint, or by using a `Stop` statement. Then you can step through the code line by line, with the program pausing on each line, by choosing the Step Into button on the Visual Basic toolbar, opening the Debug menu and choosing Step Into, or pressing F8.

**Stepping Through Code with Step Over**

Similar to the Step Into command, the Step Over command takes you line by line through program execution. The difference comes when you’re on a call to a function or subroutine made from within the original routine. Step Over is useful when you have a number of thoroughly debugged routines that can therefore be skipped. To use the Step Over command, simply press Shift+F8. When you use Step Over, rather than drop into the new routine, the program will:

1. Execute the new routine without displaying the code.
2. Resume displaying the code line by line following the procedure call.

**Bailing Out of a Routine with Step Out**

Sometimes when the going gets too tough, it’s best just to bail out of the current routine and resume with the calling routine, in the line of code that follows the call.
The Step Out command allows you to leave a routine that might be messed up. You also can use Step Out if you don’t need to bother with a particular routine but accidentally stepped into it with the Step Into command. You can use the Step Out command by pressing Ctrl+Shift+F8.

**Skipping Tested Code with Run to Cursor**

Think of when you’re debugging the beginning of a routine. After that section of the code is tested, you find that you need to jump to the end of the routine. The method for doing this is Run to Cursor. To perform a Run to Cursor from within the halted program, place the cursor on the line you want Access to run to without stopping, and then press Ctrl+F8. The execution highlight will now be on the line of code in which you placed the cursor. When you want the program to continue with regular execution, press F5.

One more debug/code option is Set Next Statement, which sets code execution to whatever code line you select. You can set the cursor on a line and then choose Set Next Statement from the Debug menu, or right-click a code line and select Set Next Statement. This is useful if you are quickly testing a particular piece of code and are feeding it different values “manually” by setting them in the Immediate, Locals, or Watches windows. You can continue executing the same line(s) of code to test different values.

**NOTE** Unlike the Step options, Set Next Statement does exactly what it says: it goes to the statement you select without executing any code at all. Keep this in mind when using Set Next Statement.

Another option is Show Next Statement, which brings you back to the code line that executes next (the line that’s highlighted yellow). This is useful if you are stepping through the code and paging through different modules and code windows to do your debugging, and need to get back to the executing code. You can find Show Next Statement on the Debug and right-click menus.
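The halting techniques from the preceding sections (Stop and Debug.Assert) can be sketched together in one routine; the names below are illustrative, not from the sample application:

```vba
Public Sub CheckBalances()
    Dim lngCounter As Long
    Dim curBalance As Currency

    For lngCounter = 1 To 50
        curBalance = lngCounter * 9.99

        ' Debug.Assert halts on this line only when the
        ' expression is False -- that is, if a negative
        ' balance ever slips through.
        Debug.Assert curBalance >= 0

        ' Stop halts unconditionally; guarding it with a test
        ' drops you into the debugger at the interesting
        ' iteration. Remove before shipping!
        If curBalance > 400 Then Stop
    Next lngCounter
End Sub
```

Once halted, the Step commands and the Immediate window let you inspect and adjust `curBalance` before resuming with F5.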
**Viewing the Order of Procedure Calls**

Another necessary debugging tool that Access provides is the capability to view procedure calls. You can get to the Call Stack dialog through a couple of different routes, but this section focuses on one of them. One reason for viewing the procedure calls is that sometimes you need to verify how a routine was called. This is especially true if you have more than one way to get into a routine. To view the procedure call stack from the VBE, follow these steps:

1. Open the `modGlobalUtilities` module.
2. Go to the `ap_AppInit()` function.
3. Place a `Stop` statement in the first line of code following the function declaration. Then close the module.
4. Execute the function by running the AutoExec macro. When the code reaches the `Stop` statement, the module window opens with that code displayed.
5. By using the Step Into command (F8), step through the code until you reach the line that reads `pstrAppPath = CurrentProject.Path`.
6. Press Shift+F8 to step over this line of code. You’ll now be on the line of code that reads `pstrBackendName = ap_GetDatabaseProp("BackEndName")`.
7. Press F8 to step into the `ap_GetDatabaseProp()` function.
8. From the View menu, choose Call Stack. (You also can open the Call Stack dialog by pressing Ctrl+L or by using the Call Stack toolbar button.) Your dialog should resemble Figure A.7.

![Figure A.7](image)

**Figure A.7** *Use the Call Stack dialog to verify which functions are being called correctly.*

The Call Stack dialog lists the procedure calls from top to bottom, with the most recent on top. To see any of the procedures, highlight the procedure name in the Project.Module.Function list, and then click Show. The editor then displays the chosen routine.

**Watching Expressions During Program Execution**

One very necessary feature in programming is the capability to watch expressions throughout execution and have the debugger react accordingly.
In the past, you might have done this by using a message box to display the value of an expression (see Figure A.8). You also can use the Debug.Print method all over the code. However, a better way to keep an eye on different expressions is to use the Locals and Watches windows. You can use these windows to see how your variables are doing and, more importantly, what they’re doing.

**Keeping in Touch with the Locals**

By using the Locals window, you can view the local variables not only for the current routine you are in, but also for the module level and for Data Access Objects variable properties. Figure A.9 illustrates the variables assigned in the Declarations section of the modGlobalUtilities module. (They’re actually global-type variables, so the term local in this sense means where they’re declared.) Figure A.9 also shows the variables declared in ap_AppInit, including ADO variables, which at this point are collapsed. This also works for ADO variables displayed in the Watches window.

**Figure A.8** *Using a message box to display expressions is a common debugging method.*

**Figure A.9** *To see the current setting of collapsed routines or variables, click the variable or routine name.*

The Locals window displays three columns:

- **Expression.** As in the earlier section “Printing Data to the Immediate Window,” an expression can be anything from a variable to a control value on a form.
- **Value.** This column shows the displayed value. If the expression isn’t declared in the given procedure and module, the Value column in the Watches window will say Expression not defined in context.
- **Type.** This column displays the expression’s data type.

Another quick way to view expressions when the program is executing through the Step commands is by using a Quick Watch (discussed next). Other commands included in the Immediate window and the Edit Watch dialog are discussed later.
**Tip** A good way to learn about objects’ properties (what they contain and when) and to view their hierarchy is to set watches on the objects and expand the entries. You also can directly change many values right in the Watches and Locals windows by clicking and typing.

**Taking a Quick Look with the Quick Watch Dialog**

The Quick Watch dialog is a convenient way to examine an expression with a click (or two) of a button. This is very useful when you have a routine that just doesn’t seem to be acting the way it should. You can look at the return value for the routine anywhere in the program that you think would be pertinent. To invoke Quick Watch and view the return value for the routine `ap_LogOutCheck`, for example, follow these steps:

1. Open the `ap_AppInit()` function in the `modGlobalUtilities` module.
2. Place a `Stop` statement in the first line of code after the line of code that reads `pstrBackEndPath = ap_GetDatabaseProp("LastBackEndPath")`.
3. Open the Immediate window and enter `ap_AppInit()`. The editor appears with the `Stop` statement highlighted.
4. Place the cursor on the function call `ap_LogOutCheck(pstrBackEndPath)`.
5. From the Debug menu, choose Quick Watch. Your screen should now look very similar to Figure A.10.

In the Quick Watch dialog, you’re one click away from adding the expression to the Watches window.

**Adding and Viewing Expressions in the Watches Window**

If you’re examining a variable, such as `pstrAppPath`, you can add it to the Watches window by clicking the Add button in the Quick Watch dialog. After you do this, you can see the Watches window added to the other windows in the VBE (see Figure A.11). Remember that you can bring the VBE to the front, if it isn’t already open, by pressing Ctrl+G.
**Figure A.10** *Quick Watch is a quick way to view an expression while an application is executing, without putting additional commands in the code.*

The Watches window displays four columns, three of which are the same as in the Locals window:

- **Expression.** As in the earlier section “Printing Data to the Immediate Window,” an expression can be anything from a variable to a control value on a form.
- **Value.** This column shows the displayed value. If the expression isn’t declared in the given procedure and module, this column will say **Expression not defined in context**. If you open the Watches window when an application isn’t running, or the Module and Procedure of the watch are out of scope, this column will say **Out of Context**.
- **Type.** This column displays the expression’s data type.
- **Context.** This is the scope in which you want to view the expression. This can be a particular routine in a module, or the whole database.

Add expressions to the Watches window so you can follow their status without having to display them manually all the time or hunt them down in the Locals window.

**NOTE** When you add an expression to the Watches window through the Quick Watch dialog or the menu, the Context column defaults to the current procedure and module (context is where the code is now running).

Another way to add an expression to the Watches window (without using the Quick Watch dialog) is by choosing Add Watch from the Debug menu. You can use this method anywhere in the VBE. By watching an expression, you’re using the Watches window in its simplest form, without any of the additional options mentioned in the next section. Because Access allows you to keep the VBE and Watches window open wherever the application is, you can watch expressions at any time. After adding an expression and following it throughout the application, you’ll soon find that you have uses for the other capabilities of the Watches window.
These include having the program break when the value of an expression is true or even when it changes.

**Setting Break Conditions and Editing Expressions**

Add a couple more expressions to the Watches window. Wherever you are in the VBE, follow these steps:

1. From the Debug menu, choose Add Watch.
2. Type `flgLeaveApplication` in the Expression column.
3. Click OK in the Add Watch dialog.
4. Repeat steps 1 through 3 for the `lngCurrError` and `pstrBackEndPath` variables.

If you’re still on the same line as the previous example, the Watches window should look like Figure A.12.

![Figure A.12](image)

**Figure A.12** *You can view as many expressions in the Watches window as necessary.*

**Tip** There are two reasons for placing variables that can be found in the Locals window into the Watches window:

- You don’t have to search through all the existing locals to monitor the specific variables you’re interested in.
- You can set the watch variables to different watch types, which is discussed following this tip.

You can now change the type of the watch variables and control execution of the program through them. To halt the program on a certain condition, you must edit the expressions. To edit the `flgLeaveApplication` expression in the Watches window, highlight the line, and choose Edit Watch from the Debug menu. You’re now in the Edit Watch dialog (see Figure A.13), which consists of the following:

- **Expression.** This text box contains the name of the variable/expression to watch.
- **Context group.** With this set of controls, you can specify the context; in other words, in which specific procedure/module you want to watch the expression. You can also specify (All Procedures)/(All Modules) if you want to follow a variable that’s declared globally or at module level. If the expression isn’t declared in the given procedure and module, the Value field in the Watches window will say Expression not defined in context.
For more information on variable scoping, refer to Chapter 2.

- **Watch Type.** This is where you specify the type of watch: Watch Expression, Break When Value Is True, or Break When Value Changes. Each option is useful, depending on the circumstances.

---

**Figure A.13** *Modifying how a variable is watched is easy with the Edit Watch dialog.*

Change the Watch Type of flgLeaveApplication to Break When Value Is True by clicking the radio button next to that label. As the code is executing, this will let you know when the variable has been set to True without walking through the code line by line. Next, highlight the lngCurrError expression in the Watches window; then choose Edit Watch from the Debug menu. Now change the Watch Type to Break When Value Changes. This way, if an error occurs, the program will halt right away. In the Locals and Watches windows, you also can change variable values by clicking a value and typing over it.

In Figure A.14, you can see that the watch expression (at the bottom) is denoted by a pair of glasses. The Watch Type choice Break When Value Is True is denoted by a hand with a piece of paper in it (the top expression). Finally, the Watch Type choice Break When Value Changes is denoted by an icon of a hand holding a triangle (the middle expression). You not only can watch expressions, but also can set them to halt the application under certain conditions.

**Controlling Code with Conditional Compilation Commands**

Although Access doesn’t create true executables, it does compile the VBA code to a level lower than the Visual Basic language the developer works with. This compiled code is called pseudocode (also known as P-code). When compiling to P-code, certain elements, such as comments, are stripped out to optimize the code. By using conditional compilation directives, you can include or exclude code sections in the final compiled code.
This is different from using just If...End If to skip over parts of code while it’s running: Access literally strips out the section of code that’s surrounded by the directives. This means that you must know which lines of code you won’t need at runtime. One way to use the conditional compilation directives would be to follow the example in the sample application. World Wide Video could have three versions of the system:

- A video kiosk, so customers can view current and upcoming movies
- A turn-key retail system, so salespeople can rent movies and sell merchandise
- An administrative system for management, with accounting and reporting capabilities

By having the conditional compilation directives in the code, you can use the same code base but include only the code used for the particular system, thus saving memory. Although this last example could work, it’s not necessarily practical, because the number of forms, reports, and other objects necessary to run each version, and the number of places to put the compiler directives, would be enormous. A more practical example, and how most developers use conditional compilation directives, is for debugging purposes.

There’s only one conditional compilation directive: #If...#ElseIf...#Else...#End If. Figure A.15 shows code that prints three variables to the Immediate window if the ccDebug constant is set to True.

ccDebug is a *conditional compiler constant*, a special constant that can be used only with the conditional compiler directives. You can declare conditional compiler constants in two ways:

- Through the user interface on the *Project_Name* Properties sheet (choose *Project_Name* Properties from the Tools menu in the VBE). You can see ccDebug in the Conditional Compilation Arguments text box in Figure A.16.
![Conditional Compilation Arguments](image)

**Figure A.15** *Conditional compilation commands are useful for excluding debugging commands from distributed applications.*

**Figure A.16** *You can specify a conditional compiler constant through the Project Properties sheet.*

Conditional compiler constants declared through the UI are scoped globally. This type of constant can be only of data type Integer, and it can’t be a variable, a standard constant, or even another conditional compiler constant.

- Through the #Const command. Conditional compiler constants declared this way are visible only at the module level and, unlike those declared through the UI, can be of different data types. The #Const statement looks a lot like the standard Const statement. Here’s an example using the ccDebug constant:

```vba
#Const ccDebug = 1
```

A conditional compiler constant declared with #Const can be a literal of any data type or another conditional compiler constant. It can’t be a variable or a standard constant.
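Pulling the directive pieces together, here is a sketch of the debugging pattern described above, using the ccDebug constant (SaveInvoice and its parameter are hypothetical names, not from the sample application):

```vba
' Module-level conditional compiler constant. Set it to 0 here,
' or declare ccDebug = 0 in the Project Properties sheet,
' before distributing the application.
#Const ccDebug = 1

Public Sub SaveInvoice(ByVal lngInvoiceID As Long)
    #If ccDebug Then
        ' When ccDebug is 0 (False), this block is stripped
        ' from the compiled P-code entirely, so it carries no
        ' runtime cost in the distributed application.
        Debug.Print "SaveInvoice called, ID = "; lngInvoiceID
    #End If

    ' ...actual save logic goes here...
End Sub
```

Because the stripped block never reaches P-code, this avoids even the parsing cost that a plain Debug.Print line incurs at runtime.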
UNIVERSITE DE TECHNOLOGIE DE COMPIÈGNE Département de Génie Informatique

OMAS - Postman Connection

Jean-Paul Barthès BP 349 COMPIÈGNE Tel +33 3 44 23 44 23 Email: barthes@utc.fr N276 v1.0 September 2012

Warning: This document discusses how to use a postman interface. It applies to OMAS version 9.0.3 and up.

Keywords: Postman, Transfer Agent

Revisions <table> <thead> <tr> <th>Version</th> <th>Date</th> <th>Author</th> <th>Remarks</th> </tr> </thead> <tbody> <tr> <td>1.0</td> <td>Sep 12</td> <td>Barthès</td> <td>First Issue</td> </tr> </tbody> </table>

Contents

1 Introduction
2 Role of a Postman or Transfer Agent
3 Technical Note
3.1 Postman Creation
3.2 TCP Protocol (:START, :CONNECT, :DISCONNECT, :SEND, :RESET, :STATUS)
3.3 Main Functions (Receiving Function, Sending Function)
3.4 HTTP Protocol (:START, :CONNECT, :DISCONNECT, :SEND, :RESET, :STATUS, :SHUT-DOWN)
3.5 Additional OMAS Message Fields
4 Examples of Inter-coterie Connections
4.1 Internal Connections
4.2 External Connection
5 Controlling an External Device

1 Introduction

The postman interface has been revisited to allow easier deployment of platforms with some parts outside a firewall and some parts on different loops inside a firewall. In addition, an HTTP protocol has been implemented using the AllegroServe component. This document first presents the new postman from a user point of view, then gives some implementation details.

2 Role of a Postman or Transfer Agent

A postman has two main roles:
- to connect agents of the same coterie located on different network loops;
- to interface external systems or devices.

In the first case the postman transfers all messages from one OMAS loop to another using either a TCP protocol or an HTTP protocol. Note that for a given application a single protocol must be chosen.
Because in this case transfers are well defined, OMAS provides default skills, and the programmer only needs to specify a restricted number of parameters to be able to send messages to the postman of the target loop:
- the target postman key, e.g. :JPB
- the name or IP of the machine hosting the target postman
- the type of protocol, :TCP or :HTTP
- its site name, e.g. :UTC

A site is defined as a set of loops inside a firewall. It has a name, e.g. :UTC or :TECPAR. The parameters are given when the postman is created, e.g.

```
(defpostman :UTC-HTTP
  :site :UTC
  :Server T
  :internal-name "MIKONOS"
  :internal-IP "172.17.130.153" ; fixed IP
  :external-name "nat-omas.utc.fr"
  :external-IP "195.83.154.22"
  :Known-Postmen ((:Cit nil "219.166.183.59" :CIT :tcp)
                  (:Tecpar nil "200.183.132.15" :TECPAR :tcp)
                  (:notebook "jean-paulbaC4F6" nil :UTC :tcp))
  :proxy "proxyweb.utc.fr:3128"
  )
```

The example specifies that :UTC-HTTP is a server on the :UTC site; the name of the machine on which it executes is MIKONOS, the internal IP of the machine (when addressed within the firewall) is "172.17.130.153", its external name (viewed from outside the firewall) is "nat-omas.utc.fr", and its external IP is "195.83.154.22". It uses a TCP protocol (the default) and knows three other postmen: :CIT, :TECPAR, and :NOTEBOOK. The first two are defined by their IP and are external; the third is internal and defined by the name of the supporting machine. The :server option is used to launch the postman as a server automatically as soon as it is created. The proxy is unused for a TCP connection.

In the second case, the postman can be used to connect another platform or a different system (e.g. a voice component or a sensor). The parameters are then different.

```
(defpostman :UTC-VOICE
  :raw t
  :Known-Postmen ((:VOICE nil "127.1" :UTC :tcp))
  )
```

The :raw parameter means that we will define the :CONNECT, :DISCONNECT, and :SEND skills ourselves.
The :known-postmen option is only used to post information in the postman window. 3 Technical Note This section describes the processes provided by default. It describes first how the postman is created, then the TCP protocol, then the HTTP protocol. 3.1 Postman Creation A postman is created by using the defpostman macro. <table> <thead> <tr> <th>Parameter</th> <th>value</th> <th>role</th> </tr> </thead> <tbody> <tr> <td>name</td> <td>key</td> <td>name of the postman (a key)</td> </tr> <tr> <td>options</td> <td></td> <td></td> </tr> <tr> <td>external-ip</td> <td>dotted-string</td> <td>IP of the machine seen externally as a server</td> </tr> <tr> <td>external-name</td> <td>a string</td> <td>name of the machine hosting the server</td> </tr> <tr> <td>hide</td> <td>t or nil</td> <td>if t the agent will be hidden to the user</td> </tr> <tr> <td>http</td> <td>t or nil</td> <td>if t indicates an HTTP protocol</td> </tr> <tr> <td>http-port</td> <td>a number</td> <td>HTTP port number (default 80)</td> </tr> <tr> <td>internal-ip</td> <td>a dotted string</td> <td>IP of the machine as seen inside the firewall</td> </tr> <tr> <td>internal-name</td> <td>a string</td> <td>name of the machine as seen inside the firewall</td> </tr> <tr> <td>known-postmen</td> <td>a list</td> <td>a list of remote postmen descriptions</td> </tr> <tr> <td>proxy</td> <td>a dotted string</td> <td>web proxy</td> </tr> <tr> <td>raw</td> <td>t or nil</td> <td>if t means that we provide our own skills</td> </tr> <tr> <td>receiving-fcn</td> <td>function name</td> <td>currently unused</td> </tr> <tr> <td>server</td> <td>t or nil</td> <td>if t starts the server immediately</td> </tr> <tr> <td>site</td> <td>a keyword</td> <td>site name</td> </tr> <tr> <td>tcp-port</td> <td>a number</td> <td>port for the TCP connection (default 52008)</td> </tr> </tbody> </table> When the postman is created, it has all the characteristics of a regular service agent and if :raw is nil has default skills: :CONNECT, :DISCONNECT, 
:SEND, :RESET, :STATUS, :SHUT-DOWN, :START. 3.2 TCP Protocol This section describes the different skills and the corresponding actions. The skills can be invoked by using a request or inform message. 3.2.1 :START A start action starts the server, i.e. creates a process with a receiving function waiting for messages on the TCP port (default 52008). This skill is not really necessary since the CONNECT skill will start the server if needed. 3.2.2 :CONNECT A connect action, given a remote postman description as argument, tries to open a socket for connecting to the remote postman and produces either a success or a failure. In case of success the postman description is added to the list of active postmen info. If the server is not active, it is started automatically. 3.2.3 :DISCONNECT A disconnect action simply removes the target agent from the list of active agent info. 3.2.4 :SEND SEND is the crucial skill. It is called automatically when a new message appears on the local coterie LAN. The postman filters out all messages addressed to it and all system messages, and transfers the rest to the remote postmen. 3.2.5 :RESET A reset action simply resets the TCP connection, closing the receive socket and restarting the receiving process. 3.2.6 :STATUS A status action prints the status of the postman (not very useful). 3.3 Main Functions The important functions are the receiving and the sending functions. 3.3.1 Receiving Function Whenever the receiving process receives a message it calls the processing function. Processing the message The message, a string, is first converted to an OMAS object. If the conversion fails, processing is simply abandoned. Then, the postman checks whether the message has already been received by comparing its ID with the recorded IDs of the last 100 messages. If it has already been received, processing is abandoned. If the message is new to the postman, the postman checks whether the destination is itself. If so, it puts it into its own mailbox. 
Otherwise, the postman checks the identity of the sending agent. Checking the identity of the sender The postman creates a new postman description with the information contained in the message and adds it to the connected-postmen-info list, replacing the previous value. This guarantees that the info is up to date. Dispatching the message The postman then puts the message on the local coterie loop. 3.3.2 Sending Function The postman first checks whether the message comes from itself or is a system message. If so, it does not send it. Otherwise, it filters the possible targets by removing from the list of connected postmen all agents through which the message has already passed. If some targets are left, the message is sent to each one in turn using the postman description for each target agent: the postman creates a socket and sends the message to the remote postman. 3.4 HTTP Protocol The HTTP protocol uses the same skills as the TCP one, with the exception of :SHUT-DOWN. 3.4.1 :START A start action starts the ACL AllegroServe server. It creates a receiving address, e.g. "nat-omas.utc.fr/omascc/80", to receive connections. This skill is not really necessary since the CONNECT skill will start the server if needed. 3.4.2 :CONNECT A connect action, given a remote postman description as argument, tries to open a socket for connecting to the remote postman and produces either a success or a failure. In case of success the postman description is added to the list of active postmen info. If the server is not active, it is started automatically. 3.4.3 :DISCONNECT A disconnect action simply removes the target agent from the list of active agent info. 3.4.4 :SEND SEND is the crucial skill. It is called automatically when a new message appears on the local coterie LAN. The postman filters out all messages addressed to it and all system messages, and transfers the rest to the remote postmen. 
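The receiving-side duplicate check (last 100 message IDs) and the sending-side target filtering described in Section 3.3 can be sketched as follows. This is an illustrative Python model of the logic, not OMAS code; the class and function names are hypothetical.

```python
from collections import deque

class DuplicateFilter:
    """Remember the IDs of the last `capacity` messages, as the postman's
    receiving function does, so a message arriving over several routes is
    processed only once."""

    def __init__(self, capacity=100):
        self._ids = deque(maxlen=capacity)   # oldest ID is evicted automatically
        self._seen = set()

    def is_new(self, message_id):
        if message_id in self._seen:
            return False                     # already received: abandon processing
        if len(self._ids) == self._ids.maxlen:
            self._seen.discard(self._ids[0]) # keep the set in sync with the deque
        self._ids.append(message_id)
        self._seen.add(message_id)
        return True

def forward_targets(message, connected_postmen, self_key):
    """Pick the remote postmen that should still receive `message`,
    skipping ourselves and every postman the message already went
    through (its `thru` list)."""
    visited = set(message.get("thru", [])) | {self_key}
    return [key for key in connected_postmen if key not in visited]

# A message that already went through :UTC and :TECPAR is only forwarded to :CIT.
f = DuplicateFilter()
msg = {"id": 115023273, "thru": [":UTC", ":TECPAR"]}
if f.is_new(msg["id"]):
    print(forward_targets(msg, [":UTC", ":TECPAR", ":CIT"], ":JPB"))  # [':CIT']
print(f.is_new(msg["id"]))  # False: a second copy is dropped
```

Bounding the ID history keeps memory constant while still catching the common case of a message looping back through another postman shortly after it was first seen.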
3.4.5 :RESET A reset action simply resets the connection, closing the receive socket and restarting the receiving process. 3.4.6 :STATUS A status action prints the status of the postman (not very useful). 3.4.7 :SHUT-DOWN A shut-down action closes the HTTP server, terminating the receiving processes. 3.5 Additional OMAS Message Fields Some additional fields have been added to OMAS messages to let postmen obtain the necessary information for answering messages or for avoiding sending messages in a loop. The fields are: <table> <thead> <tr> <th>Field</th> <th>Typical Value</th> <th>Role</th> </tr> </thead> <tbody> <tr> <td>id</td> <td>115023273</td> <td>a random tag identifying the message</td> </tr> <tr> <td>sender-ip</td> <td>&quot;198.162.1.124&quot; or &quot;pegasos&quot;</td> <td>IP or name of the machine sending the message</td> </tr> <tr> <td>sender-site</td> <td>(:UTC :JPB)</td> <td>a list giving the site key and the key of the sending postman</td> </tr> <tr> <td>thru</td> <td>(:UTC :TECPAR)</td> <td>a list of sites (postmen ids) through which the message already went</td> </tr> </tbody> </table> 4 Examples of Inter-coterie Connections This section describes the example of the setup at UTC, showing the different postmen used for the different connections inside and outside the firewall (Fig. 1). ![Figure 1: UTC site](image) One can see three postmen in Fig. 1: :UTC, :JPB, and :DELOS, respectively hosted by machines named mikonos, jean-paulbac4f6, and delos. The whole site is named :UTC. The roles of the postmen are the following: - :UTC deals with internal connections and external connections to other sites, e.g. :CIT or :TECPAR; - :DELOS deals with connections between internal loops within the firewall (expressed by the dashed line); - :JPB deals with the internal connection within the firewall implemented through WiFi. The different postmen can be declared as follows, starting with the simpler ones. 
4.1 Internal Connections :DELOS and :JPB take care of internal connections. The content of the corresponding files appears as follows: ```lisp ;; -*- Mode: Lisp; Package: "DELOS" -*- ;;=============================================================================== ;;12/09/15 ;; ;;AGENT POSTMAN :DELOS ;; ;;=============================================================================== (defpackage :DELOS (:use :moss :omas :cl #+MCL :ccl)) (in-package :DELOS) (omas::defpostman :DELOS :server t :site :UTC ; requested :internal-name "delos" :known-postmen ((:UTC "mikonos" "172.17.130.153" :TCP :UTC)) ) ;; uses default skills :EOF ``` The postman parameters indicate that the postman name is :DELOS, that it is a server (meaning that it will be started to receive messages as soon as the declaration is executed), that its site is :UTC (inside the UTC firewall), that the name of the host machine is delos, and that it knows a postman named :UTC located on a machine named mikonos whose IP is 172.17.130.153; the connection is done in direct TCP mode on port 52008 (the default). Note that either mikonos or its IP is needed. If the IP is not given it will be computed from the name (the DNS will be asked). There is essentially no difference for a WiFi connection with the notebook, as shown here. ```lisp ;; -*- Mode: Lisp; Package: "JPB" -*- ;;=============================================================================== ;;10/08/12 ;; ;;AGENT POSTMAN :JPB ;; ;;=============================================================================== (defpackage :JPB (:use :moss :omas :cl #+MCL :ccl)) (in-package :JPB) (omas::defpostman :JPB :server t :site :UTC ; requested ) ``` 4.2 External Connection Postman :UTC deals with both internal and external connections. For internal connections, the hosting machine is known as MIKONOS. For external connections the machine is known as nat-omas.utc.fr, e.g. when it is addressed from CIT or from TECPAR. The corresponding code is shown here. 
Note that we have here additional parameters: external-name, external-ip, and proxy, although this last one is not needed for a direct TCP connection. Note that internal-name and internal-ip are not really necessary; the IP can be computed if needed. If the exchanges are to be done using the HTTP protocol, then the parameter (:http t) must be specified. Note that in that case it is recommended that all exchanges be done using the HTTP protocol.

5 Controlling an External Device

The postman mechanism can be used for controlling an external device, e.g. a particular sensor. In that case one must provide the necessary skills, at least the SEND and CONNECT skills. The idea is that whenever an agent sends a message to :TEMPCONTROL, the message is automatically transferred to the device. The following code would control a temperature controller defined as a pseudo agent named TEMP-CONTROL. We assume that the CONNECT skill initiates the receiving function, and that the exchanges with the controller occur through sockets using a TCP protocol.

;;;-*- Mode: Lisp; Package: "CONTROL" -*-
;;;===============================================================================
;;;12/09/15
;;; AGENT POSTMAN :CONTROL
;;; Postman for connecting a temperature controller
;;;===============================================================================

(defun postman-receiving (agent port)
  "function used by the process that waits for incoming messages.
We assume that the incoming message is a string.
Arguments:
   agent: postman
   port: receiving port
Return: never returns, wait in a loop for messages"
  (unwind-protect
      ;; create a passive socket, listen to port "port" on localhost
      (let ((p (socket:make-socket :connect :passive :local-port port
                                   :backlog 50))
            message-string s)
        ;; record socket object
        (setf (receive-socket agent) p)
        (format t "~&; ~S/ receiving/ passive socket created: ~&; ~S"
                (key agent) p)
        ;; record that we are connected
        (setq *tempcontrol-connected* t)
        (loop
          ;; create stream for connection requests
          (setq s (socket:accept-connection p))
          ;; **** see if we can get IP of sending host from the socket
          ;; structure here by using (socket:remote-host s)
          ;; get incoming message, returning nil if message is empty (:eof reached)
          (setq message-string (read s nil nil))
          ;; print trace into Lisp console
          (format t "<<=== ~S/ receiving/ incoming message: ~& ~S"
                  (key agent) message-string)
          ;; close connection
          (close s)
          ;; if message-string is nil, then the message was empty, give up, go wait
          ;; for the next one (this is an unlikely case). Otherwise go process it
          (when message-string
            ;; process message converting it to an OMAS object and broadcasting it
            (user-process-message agent message-string))))
    ;; unwind-protect clause, used when the receiving process is aborted, or the
    ;; platform exits, to do some clean up
    (progn
      ;; close passive socket
      (if (receive-socket agent) (close (receive-socket agent)))
      (setf (receive-socket agent) nil))))

;; the following function could for example build an OMAS message and broadcast it
(defun user-process-message (agent message-string)
  "user defined function for processing the input from the temperature controller"
  ...)

(defskill :connect :control :static-fcn connect-static)

(defun connect-static (agent message)
  "the CONNECT skill installs the receiving function."
  (declare (ignore message) (special *input-port* *tempcontrol-connected*))
  ;; if controller already connected, give up
  (if *tempcontrol-connected*
      (return-from connect-static (static-exit agent :done)))
  ;; otherwise create a receiving process, and record it in the agent structure
  (setf (omas::receiving-process agent)
        (mp:process-run-function
         (format nil "~S Receiving" (omas::key agent))
         #'postman-receiving agent *input-port*))
  ;; paint the connection pane green for 1/10s showing OK
  (omas::pw-show-success agent)
  ;; refresh postman window, which will show connections
  (omas::agent-display (omas::%agent-from-key (omas::key agent)))
  (static-exit agent :done))

(defskill :send :control :static-fcn static-send)

(defun static-send (agent in-message message-string message)
  "skill that sees every message and filters those for the temperature controller.
Arguments:
   agent: postman
   in-message: incoming message
   message-string: message to send, a string
   message: message to send, an object"
  (declare (ignore in-message) (special *tempcontrol-ip* *output-port*))
  (let (socket-id test errno)
    ;; check if message is for the temperature controller
    (unless (eql :tempcontrol (omas::to! message))
      (return-from static-send (static-exit agent)))
    ;; yes: transfer the content to the temperature controller
    ;; create socket for sending and try to catch socket errors
    ;; the remote host must listen on the TCP port!
    (multiple-value-setq (test errno)
      (ignore-errors
        ;; create a socket to connect with the remote host
        (setq socket-id (socket:make-socket :remote-host *tempcontrol-ip*
                                            :remote-port *output-port*))))
    ;; when socket is non nil, we can send
    (when socket-id
      ;; write message to stream
      (format socket-id "~S" message-string)
      ;; make sure the buffer is emptied
      (force-output socket-id)
      ;; close the socket before leaving
      (close socket-id)
      ;; print trace into the Lisp console
      (format t "~&===>> ~S/ send-remote/ message (ID: ~S) sent to: ~A:~S.~&Message: ~S"
              (omas::key agent) (omas::id message)
              *tempcontrol-ip* *output-port* message-string))
    (static-exit agent :done)))

:EOF

Warning: the above functions have not been tested (lack of a temperature controller), therefore they might be buggy...
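For comparison, the same pattern (a passive listening socket with a receiving loop, plus a one-shot connect-and-send function) can be sketched in Python. The host, port, and demo payload are illustrative, and the OMAS-specific processing is reduced to a plain callback.

```python
import socket
import threading
import time

def receiving_loop(listen_sock, handler, stop):
    """Accept one-shot connections, read the payload, and hand it to
    `handler` -- the counterpart of postman-receiving above."""
    while not stop.is_set():
        try:
            conn, _ = listen_sock.accept()
        except OSError:                 # listener closed: leave the loop
            break
        with conn:
            data = conn.recv(4096)
        if data:                        # empty message: go wait for the next one
            handler(data.decode())

def send_to_device(host, port, text):
    """Open a socket, write the message, and close -- the SEND pattern."""
    with socket.create_connection((host, port)) as s:
        s.sendall(text.encode())

# Demo on localhost; the OS picks a free port.
received = []
stop = threading.Event()
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
worker = threading.Thread(target=receiving_loop,
                          args=(server, received.append, stop))
worker.start()
send_to_device("127.0.0.1", port, "TEMP 21.5")
while not received:                     # wait for the listener to catch up
    time.sleep(0.01)
stop.set()
server.close()
worker.join(timeout=1)
print(received)  # ['TEMP 21.5']
```

As in the Lisp version, each exchange is a short-lived connection: the sender opens a socket, writes one message, and closes, while the receiver loops on accept.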
Simulation Analysis of Artificial Intelligence in Enterprise Financial Management Based on Parallel Computing Zhu Feng School of Accounting, Zhongnan University of Economics and Law, Wuhan 430073, China Correspondence should be addressed to Zhu Feng; z0003893@zuel.edu.cn Received 15 August 2022; Revised 18 September 2022; Accepted 30 September 2022; Published 10 October 2022 Copyright © 2022 Zhu Feng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In today's society, daily life and production are inseparable from information technology. With the rapid development of Internet technology and the national information industry, more and more small and medium-sized enterprises choose to adopt AI-based financial information management systems. Against this background, this paper introduces the principle of parallel computing and applies it, together with artificial intelligence technology, to the financial management of enterprises. The paper first discusses the model and form of the underlying technology, then optimizes it, studies the actual financial management needs of small and medium-sized enterprises, completes the design and implementation of the system structure, and conducts simulation experiments to test the system's functional performance. The test results show that the system meets the design goals and provides the basic functions, but some details and contents need to be improved and supplemented. Therefore, we must strive to ensure the accuracy, rigor, and reliability of financial management data. When more data are needed, faster and better data processing based on user needs and feedback is required. This paper delves into artificial intelligence technology and the idea of parallel computing. 
This paper applies it to the field of enterprise financial management to create an effective management system. 1. Introduction The development of Chinese enterprises has ushered in a new era. First of all, China has become the fastest growing region in the world today. China’s optical fiber data transmission and update volume is at the forefront of the world, and Chinese enterprises have entered the information age. Secondly, China’s logistics technology has developed rapidly. Today’s logistics and transportation volume is at the forefront of the world, and the logistics supply chain of enterprises has undergone great changes. Finally, China’s mobile payment is in a leading position. Globally, China has become the most convenient mobile payment country for online and offline transactions in the world. Therefore, in order to take advantage of China’s rapidly developing technologies, Chinese enterprise management will also face many pressures and challenges. In today’s more important modern financial management field, the role of financial management is self-evident, and the quality of financial management system affects the overall quality of enterprise management [1]. Accordingly, optimizing the financial management system is the main goal of development in this field [2]. The traditional form of manual bookkeeping has been unable to meet the actual needs under the background of the gradual development of the enterprise business, and the main function of the financial management system is to use excellent computerized bookkeeping to replace the traditional inefficient manual bookkeeping [3]. The Chinese finance department emphasized that computerization of accounting is the main direction for the development of accounting activities in various fields in China in the future [4]. 
Institutions and state-owned enterprises need to realize the transformation to computer-based accounting and bookkeeping as soon as possible, following a step-by-step principle in the process [5]. However, small and medium-sized enterprises in China are numerous, and the fields and business contents involved differ widely. Small and medium-sized enterprises follow the trend of information technology development and optimize and reform their traditional operation methods; they hope to use network technology to realize the transformation to information-based management, so as to develop better and faster. For businesses, the first step in managing information is financial management. Through information-based financial management, managers can plan, control, and further manage financial activities. Therefore, we need to use information technology to design a better and more practical financial management system to promote the healthy development of the enterprise and the entire enterprise financial system. In view of the current needs and development direction of small and medium-sized enterprises, this paper combines parallel computing and artificial intelligence to design a financial management system that can effectively meet their actual needs, helping them carry out more effective financial supervision and management, thus improving the development speed of enterprises and strengthening management [6]. 2. Related Work The literature believes that the ultimate goal of enterprise development is not only to pursue short-term profit maximization but also to achieve the goal of enterprise value maximization and wealth appreciation. 
At present, many countries are committed to improving the efficiency and level of the use of enterprise funds and improving the business activities of enterprises as much as possible, so as to maximize the output of enterprise investment [7]. After a lot of investigation and practice by scientific researchers, the modern financial management system is different from the previous accounting-based accounting management system and expands other business operations on the basis of the traditional financial management system [8]. In order to effectively combine the actual business process and operation, the management and interaction of data resources can be completed. In the context of the in-depth development of information technology, the financial management system and the business information systems of other industries are more coordinated and integrated [9]. The literature studies the relevant market and shows that computer management software usually consists of financial system, distribution system, production system, and decision support system, which is a highly integrated system. Each subsystem can work in coordination or independently [10]. When the subsystems are in cooperative operation mode, only a small amount of data needs to be input, and the entire system can exchange information. In this way, a complete scheme can be provided for the implementation of enterprise decision making [11]. The enterprise management information system is an organic combination of enterprise management, production management, accounting, and financial management [12]. The literature studies and analyzes the operation of the enterprise financial management system, discusses the problems and reasons that affect the normal operation of the financial management system, and, on this premise, designs an effective management control for the financial management system [13]. 
The business risk system adjusts the system operation mechanism, realizes business process reengineering, strengthens the effectiveness of the original system, and establishes a dynamic financial management platform that supports sustainable business development [14]. The literature shows that traditional financial processes have been unable to keep pace with the evolution of AI-powered financial management systems. In response, the article conducts a preliminary study on the financial operation of artificial intelligence in business and summarizes some suggestions for guiding the reform of business financial processes [15, 16].

3. Algorithm Design of Parallel Computing

3.1. Basic Model of Existing Parallel Algorithms

3.1.1. DOT Model. The DOT model describes the execution behavior of a big data workload in matrix form. It consists of basic DOT blocks, composed DOT blocks, and DOT expressions. A basic DOT block applies a data vector D to a diagonal operator matrix O and a transport matrix T, and can be described as

\[
\overrightarrow{D}OT = [D_1, \ldots, D_n]
\begin{bmatrix}
o_1 & 0 & \cdots & 0 \\
0 & o_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & o_n
\end{bmatrix}
\begin{bmatrix}
t_{1,1} & t_{1,2} & \cdots & t_{1,m} \\
t_{2,1} & t_{2,2} & \cdots & t_{2,m} \\
\vdots & \vdots & \ddots & \vdots \\
t_{n,1} & t_{n,2} & \cdots & t_{n,m}
\end{bmatrix},
\]

where the result collected at the end nodes is the union of the per-node outputs, \(\bigcup_{i=1}^{n} o_i(D_i)\). A DOT expression composed of multiple basic or combined DOT blocks can describe the data flow of a big data workload. From this formal definition it can be seen that the operation layer and the aggregation layer contain only computing tasks; their corresponding O and A matrices are both diagonal, indicating that their computations can be performed separately.
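The structure of a basic DOT block can be made concrete with a small sketch. The following is an illustrative toy (all names are invented, not the paper's code): each operator `o_i` acts only on its own partition `D_i` (the diagonal O layer), after which a boolean routing matrix stands in for the transport matrix T, with `t_ij != 0` meaning node i sends its intermediate result to downstream node j.

```java
import java.util.Arrays;
import java.util.function.IntUnaryOperator;

// Toy sketch of a basic DOT block: diagonal O layer (independent per-partition
// computation) followed by a T layer that routes intermediate results.
public class DotBlockSketch {
    static int[] apply(int[] data, IntUnaryOperator[] ops, boolean[][] route) {
        int n = data.length;       // n data partitions D_1..D_n
        int m = route[0].length;   // m downstream nodes
        int[] mid = new int[n];
        for (int i = 0; i < n; i++) {
            // O layer: o_i touches only D_i, so these steps are independent
            mid[i] = ops[i].applyAsInt(data[i]);
        }
        int[] out = new int[m];
        for (int i = 0; i < n; i++) {        // T layer: point-to-point routing
            for (int j = 0; j < m; j++) {
                if (route[i][j]) out[j] += mid[i]; // t_ij != 0 => send o_i(D_i) to node j
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        IntUnaryOperator[] ops = {x -> x * 10, x -> x * 10, x -> x * 10};
        boolean[][] route = {{true, false}, {true, true}, {false, true}};
        System.out.println(Arrays.toString(apply(data, ops, route))); // [30, 50]
    }
}
```

Because the O matrix is diagonal, the first loop could run fully in parallel; only the routing step requires interaction, which is exactly the distinction the model draws between the computing and transport layers.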
Independent concurrency: the communication tasks completed by the transport layer correspond to a matrix T that is a regular (non-diagonal) matrix, indicating that the communications must interact with each other. In a concurrent storage system, a single miss does not necessarily cause a CPU stall; only pure misses do. C-AMAT is characterized by five parameters: C_H, the hit concurrency of storage requests; C_M, the pure-miss concurrency of storage requests; H, the access (hit) time; pMR (pure miss rate); and pAMP (pure average miss penalty). Through a series of derivations, its formal description is as follows:

\[
C\text{-}AMAT = \frac{H}{C_H} + pMR \times \frac{pAMP}{C_M}.
\]

The Hit Concurrency Detector (HCD) counts all hit cycles and records the status of each hit stage, computes the hit concurrency C_H of storage requests, and informs the Miss Concurrency Detector (MCD) whenever a hit occurs in the current cycle; the MCD counts the pure-miss cycles and records the state of each pure-miss cycle, from which it computes the pure-miss concurrency C_M, the pure miss rate pMR, and the pure average miss penalty pAMP.

3.2. Parallel Computing Model Design. The p-DOT model consists of a series of phases, the "p-phase DOT model." In each phase q, the p-DOT model consists of three layers, as shown in Figure 1. D layer (data layer): in a distributed system, the datasets (D_1 to D_n) are distributed and stored on n data nodes. O layer (computing layer): in phase q, the computing nodes (o_1 to o_{n_q}) perform independent concurrent computations; each O node processes only its corresponding data (input data or intermediate data) and stores the intermediate results.
T layer (communication layer): in phase q (q ≠ p), each communication operator \( t_{ij} \) performs point-to-point message transmission, sending the intermediate result generated by worker node \( o_i \ (i \in [1, n_q]) \) of phase q to worker node \( o_j \ (j \in [1, n_{q+1}]) \) of phase (q + 1). Note that if \( t_{ij} = 0 \), there is no communication between nodes \( o_i \) and \( o_j \). Figure 1 shows the general data flow of the p-DOT model. For any phase q, if q ≠ p, the output of this phase is the input of the next phase; otherwise, its result is stored as the final result. For a given big data load that can be represented by the p-DOT model, the time cost of the load is

\[
\Phi = O\left(\frac{w}{n} + n\right) \times p.
\]

For a given big data load representable by the p-DOT model, the computational complexity of the phase-q load is \( O(k_q) \). Consider the computational behavior of phase q, described in the following form:

\[
\overrightarrow{D}_q O_q = [D_1 \cdots D_{n_q}]
\begin{bmatrix}
o_1 & 0 & \cdots & 0 \\
0 & o_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & o_{n_q}
\end{bmatrix}
= [o_1(D_1) \cdots o_{n_q}(D_{n_q})],
\qquad
\overrightarrow{D}_q = \overrightarrow{D}_{q-1} O_{q-1} T_{q-1}.
\]

Considering the communication behavior in phase q (q ≠ p), its formal description is as follows:

\[
(\overrightarrow{D}_q O_q) T_q = [o_1(D_1) \cdots o_{n_q}(D_{n_q})]
\begin{bmatrix}
t_{1,1} & t_{1,2} & \cdots & t_{1,n_{q+1}} \\
t_{2,1} & t_{2,2} & \cdots & t_{2,n_{q+1}} \\
\vdots & \vdots & \ddots & \vdots \\
t_{n_q,1} & t_{n_q,2} & \cdots & t_{n_q,n_{q+1}}
\end{bmatrix};
\]

each communication operator \( t_{ij} \) distributes the intermediate result \( o_i(D_i) \) generated by the worker nodes of phase q to the worker nodes of phase (q + 1) by means of point-to-point message passing (file transfer, the TCP protocol, shared-memory FIFO strategies, etc.).

According to formula (6), the total complexity of the computing tasks is

\[
\Phi_{\Sigma\,\text{comp}} = \sum_{q=1}^{p} O(k_q), \tag{7}
\]

since

\[
k_q = \left\lfloor \frac{w_q}{n_q \times m} \right\rfloor + 1, \tag{8}
\]

and \( m \) is a constant for a given environmental load, so

\[
O(k_q) = O\left( \frac{w_q}{n_q} \right) \Rightarrow \Phi_{\Sigma\,\text{comp}} = \sum_{q=1}^{p} O\left( \frac{w_q}{n_q} \right). \tag{9}
\]

It follows from the foregoing that

\[
O\left( \frac{w_q}{n_q} \right) = O\left( \frac{w_{q-1}}{n_{q-1}} \right) \ (\forall q \in [2, p]), \qquad O(n_1) = O(n), \ w_1 = w, \tag{10}
\]

and therefore

\[
\Phi_{\Sigma\,\text{comp}} = O\left( \frac{w_1}{n_1} \right) \times p = O\left( \frac{w}{n} \right) \times p. \tag{11}
\]

According to formula (5), the total complexity of the communication tasks is

\[
\Phi_{\Sigma\,\text{comm}} = \sum_{q=1}^{p} O\left( \max(n_q, n_{q+1}) \right), \tag{12}
\]

and since

\[
n = \max_{q}(n_q), \tag{13}
\]

we have

\[
O\left( \max(n_q, n_{q+1}) \right) = O(n), \qquad \Phi_{\Sigma\,\text{comm}} = \sum_{q=1}^{p} O(n) = O(n) \times p. \tag{14}
\]

To sum up, for a given big data load that can be represented by the p-DOT model and a given environmental load, the overall communication complexity of the load is \( O(n) \times p \), and the proof is complete.

3.3. Parallel Computing Algorithm Optimization.
The p-DOT model can also represent the data load together with the environmental load. For machines with \( c \) cores each, the time cost function of the load becomes \( \Phi = O(w/(cn) + n + c) \times p \), and based on the number of phases \( p \), its form is described phase by phase as follows. Consider phase q, in which the communication operator is decomposed into an intra-machine (thread-level) part and an inter-machine (process-level) part:

\[
\overrightarrow{D}_q O_q T_q = \overrightarrow{D}_q O_q T_{\text{thread}} T_{\text{process}},
\]

where \( T_{\text{thread}} \) describes the exchanges among the \( c \) cores of a single machine and \( T_{\text{process}} \) describes the point-to-point exchanges among the \( n_q \) machines.

Figure 1: Data flow of the p-DOT model.

Consider phase q and assume that the communication behavior occurs on the first \( s_q \) machines \((s_q < n_q)\). The formal description of the normal task execution process is then

\[
\Phi'_{\Sigma\,\text{comp}}(p\text{-DOT}) = O\left(\frac{w}{cn}\right) \times p, \qquad
\Phi''(p\text{-DOT}^{\,exec}) = O\left(\frac{w}{n+s}\right) \times p.
\]

Under normal circumstances, the time complexity of judging whether the task currently meets the convergence conditions does not exceed that of the normal task execution process, that is,

\[
\Phi''(p\text{-DOT}^{\,judge}) < \Phi''(p\text{-DOT}^{\,exec}),
\]

and therefore

\[
\Phi''(p\text{-DOT}) = O\left(\frac{w}{n+s}\right) \times p.
\]

To sum up, for a big data iterative task that can be represented by the p-DOT model and a given environmental load, if the task has partial synchronization conditions, then the time cost function of its partial synchronization is \( \Phi = O(w/(n+s)) \times p \), and the proof is complete.

### 3.4. Algorithm Detection

This paper tests the optimal number of machines \( n^* \) corresponding to input data \( w \) of different scales and verifies the correctness of the time cost function of the p-DOT model and its inference. When testing the first four datasets, only one process runs on each machine in order to avoid I/O acquisition conflicts between processes; when testing the fifth dataset, two processes run on each machine because the number of machines in the MPI cluster is limited. The number of machines in the experiment is the number of processes actually participating in the work, as shown in Table 1. As can be seen from Figures 2 and 3, although there is some deviation, \( \sqrt{w} \) and \( n^* \) are clearly linearly related, that is, \( n^* = O(\sqrt{w}) \). Therefore the optimal number of machines \( n^* \) is proportional to the square root of the data size \( w \). This is consistent with the time cost \( \Phi = O(w/n + n) \times p \): the term \( w/n + n \) is minimized when \( n = \sqrt{w} \), since its derivative \( -w/n^2 + 1 \) vanishes there. Combining the curves in Figures 2 and 3, for a given big data load representable by the p-DOT model and a given environmental load, the time cost \( \Phi = O(w/(n+s)) \times p \) is also confirmed. The reasons for the deviation are as follows. (1) Many factors affect the performance of big data applications; the p-DOT model selects only the scale \( w \) of the input data and the number of machines \( n \) as its primary parameters, which limits the model's accuracy. (2) The experimental platform is subject to interference from other network loads, which introduces the communication measurement errors noted above. The main system requirements are as follows.
**Simple Operation and Friendly Interface.** The system follows familiar Windows conventions and presents a clean, attractive interface; after simple training, administrators can operate it easily.

**Permission Control, Safe and Reliable.** Different permissions are assigned to different categories of administrators, and users can change the permissions of each operator. After an operator logs in with a password, the system automatically grants the corresponding permissions, preventing unauthorized operations and an unclear division of responsibilities.

**Data Query, Fast and Convenient.** Building on the basic information system, the system provides powerful daily query processing, supporting both simple and fuzzy queries, and users can print reports. The reports are reasonable and easy to use and meet the statistical requirements of financial managers. System performance requirements refer to requirements for reliability and functional scalability beyond the functional requirements; they strongly affect the system's operating environment and business specifications.

### 4. Enterprise Financial Management System Design

#### 4.1. System Requirement Analysis

With the continuous development of the economy and technology, daily life is increasingly inseparable from computers and the Internet, which support modern life with convenient, fast, and intelligent systems. Good network management, however, requires the support of a powerful computer system. After investigating the actual financial management needs of small and medium-sized enterprises, this paper designs a dedicated financial management system to meet their business operations.
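The permission-control and security requirements above (log in with a password, then grant permissions automatically, returning to the login window on failure) can be sketched minimally. This is a hedged illustration, not the system's actual code: the class name, the SHA-256 digest choice, and the stored-digest convention are all assumptions, and a production system would additionally use salted, deliberately slow password hashing.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal authentication-check sketch: passwords are stored as SHA-256
// digests and compared in constant time; on mismatch the caller would
// return the user to the login window instead of granting permissions.
public class LoginCheck {
    static byte[] sha256(String s) {
        try {
            return MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    static boolean authenticate(String enteredPassword, byte[] storedDigest) {
        // MessageDigest.isEqual performs a timing-safe comparison.
        return MessageDigest.isEqual(sha256(enteredPassword), storedDigest);
    }

    public static void main(String[] args) {
        byte[] stored = sha256("s3cret");
        System.out.println(authenticate("s3cret", stored)); // true
        System.out.println(authenticate("wrong", stored));  // false
    }
}
```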
Based on the principle of practicality, the system uses information technology to manage financial information and data: in actual work, data and financial information are entered into standard computers, giving full play to the computer's rapid processing capabilities and standardized management. Through analysis and investigation of the actual situation, the functions of master data management and voucher management, as well as data security problems such as user authentication and user authority in system management, have been addressed.

#### 4.1.1. Quick Response Capability

Although the management system designed in this paper covers a small management scope, it includes a large number of commodity types. When multiple users access the system concurrently, the database system and server must respond quickly to the users' query requests. In addition, the frequent exchange of business data requires the system to keep response times within an acceptable range.

#### 4.1.2. Load Capacity

The system load depends on the number of users and the frequency of services. After running for a period of time, the number of users stabilizes, and the system load requirements can be set accordingly. Although the system stores a relatively large amount of data, the storage capacity of the database is not a problem.

#### 4.1.3. Security

System security is achieved through user authentication: the user enters the correct user name and password to log in to the system, and on incorrect input is returned to the login window. The system also needs data recovery and backup functions to avoid the risk of data loss due to crashes.

4.1.4. Reliability.
System reliability refers to the ability of the management system software to maintain its performance in practical use, involving the hardware environment, network environment, system platform characteristics, development platform, and so on. Reliability is a global, system-wide requirement that includes fault tolerance, fault resilience, and system maturity. Fault tolerance refers to the ability of the software to withstand and contain errors when the management system fails; failure is unavoidable, but if the system can avoid collapsing as a whole, it meets the design conditions. System stability refers to whether the system can be restored to its normal state by re-entering it after a serious failure during operation, for example through the protection mechanism during data input.

4.1.5. Ease of Use. Ease of use is a system feature demanded by every customer. Beyond complete functions and normal operation, it is essential that the company's existing operators can quickly learn the system after development is complete. If operators can fully understand the system's functions and maintenance operations in a short time or after simple training, the system is easy to use. Enterprise software users should need little logical effort to understand the full capabilities of the software, and they can master it by following the instructions after a short training process. The navigation of the software interface is concise and clear, and users can reach the desired function through few pages during operation.

4.1.6. Maintainability. According to the principles of software development, software should be handed over to the customer for testing and actual use after the design is completed.
No matter how much manpower and material resources are spent, software design inevitably contains defects, which show up as failures in operation. The complexity of the maintenance process and the level of maintenance costs in the event of a failure indicate the maintainability of the system. After a software failure occurs, the system needs to provide a log-like function so that maintenance personnel can locate the cause of the failure quickly from the logs and their experience. Once the administrators have found the cause, they can fix the failure with minimal cost and time.

4.2. System Architecture Design. Based on actual needs, after opening the operating system's browser and entering a fixed domain name, a networked user reaches the system login page. After completing login and authentication, the user can access the various functions provided by the system. MVC is usually divided into the view (presentation) layer, the model layer, and the control layer. The view layer handles interaction with the user interface and is responsible for realizing the system's UI functions. The model layer mainly processes the data entering the business. The control layer receives and processes all user requests and calls the processing interfaces of the model layer to respond to them. The main functional structure of the business financial management system developed in this paper is shown in Figure 4. As shown in Figure 4, in the system architecture design, Spring is used to coordinate the various processing operations and the business logic layer. The Struts presentation layer implements the technical details of the system's functional modules, which Spring then keeps separated. The Hibernate architecture uses a session factory for integrated database operations.
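The MVC split described above can be illustrated with a dependency-free toy. All names here are invented for illustration; the real system delegates these roles to Struts (view), Spring (wiring), and Hibernate (persistence) rather than to plain classes.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the view/model/control separation: the controller
// receives the request, the model owns the data, and the view renders it.
public class MvcSketch {
    // Model layer: owns and processes the business data.
    static class VoucherModel {
        private final Map<Integer, String> vouchers = new HashMap<>();
        void save(int id, String text) { vouchers.put(id, text); }
        String load(int id) { return vouchers.get(id); }
    }

    // View layer: renders a result for the user interface.
    static class VoucherView {
        String render(String voucher) {
            return voucher == null ? "not found" : "voucher: " + voucher;
        }
    }

    // Control layer: receives requests and calls model, then view.
    static class VoucherController {
        private final VoucherModel model = new VoucherModel();
        private final VoucherView view = new VoucherView();
        void create(int id, String text) { model.save(id, text); }
        String show(int id) { return view.render(model.load(id)); }
    }

    public static void main(String[] args) {
        VoucherController c = new VoucherController();
        c.create(1, "office supplies 120.00");
        System.out.println(c.show(1)); // voucher: office supplies 120.00
        System.out.println(c.show(2)); // not found
    }
}
```

The point of the separation is that the rendering in `VoucherView` can change without touching storage, and the storage in `VoucherModel` can change (e.g. to a Hibernate-backed store) without touching the controller.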
Hibernate's transaction processing mechanism is used to handle complex data interactions and data manipulations. Based on the analysis of system requirements, this paper develops and designs a financial management system that meets the needs of enterprises. The system function structure is shown in Figure 5.

4.3. System Interface Design. According to the principles of object-oriented design and the guiding ideas of the SOA framework, the various operations of the financial management system for small and medium-sized enterprises are executed through web services. The service is defined as follows:

```java
public interface SourceManage {
    @Profiled(tag = "SourceManage")
    public int uploadSource(String sourcePath);

    @Profiled(tag = "SourceManage")
    public int updateSource(int sourceId, SourceInfo sourceInfo);

    @Profiled(tag = "SourceManage")
    public SourceInfo downloadSource(int sourceId);

    @Profiled(tag = "SourceManage")
    public SourceInfo getSourceById(int sourceId);

    @Profiled(tag = "SourceManage")
    public int deleteSource(int sourceId);

    @Profiled(tag = "SourceManage")
    public int isSourceExit(int sourceId);
}
```

As shown in the code snippet above, the SourceManage interface of financial management mainly includes operations such as uploading (uploadSource), downloading (downloadSource), querying (getSourceById, getSourceByName), deleting (deleteSource), and checking whether a financial resource file exists (isSourceExit), so as to fully realize permission-controlled access to the related resources.

4.4. Analysis of System Test Results. The client test environment is shown in Table 2. The test process, like the development process, is carried out in stages and steps; it is impossible to test the entire system as a single entity from the beginning. Testing therefore starts with each functional module.
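To show how the SourceManage operations above might behave, here is a hedged in-memory sketch. The `SourceInfo` shape, the return conventions (1 for success, 0 for failure), and the id-assignment scheme are all assumptions, and the `@Profiled` annotation is omitted since it belongs to the real service's profiling setup.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// In-memory stand-in for the SourceManage service (illustrative only).
public class SourceManageSketch {
    static class SourceInfo {
        final String path;
        SourceInfo(String path) { this.path = path; }
    }

    private final Map<Integer, SourceInfo> store = new HashMap<>();
    private final AtomicInteger nextId = new AtomicInteger(1);

    // uploadSource: register a resource file, return its new id.
    public int uploadSource(String sourcePath) {
        int id = nextId.getAndIncrement();
        store.put(id, new SourceInfo(sourcePath));
        return id;
    }
    // updateSource: 1 on success, 0 if the id is unknown.
    public int updateSource(int sourceId, SourceInfo info) {
        if (!store.containsKey(sourceId)) return 0;
        store.put(sourceId, info);
        return 1;
    }
    public SourceInfo getSourceById(int sourceId) { return store.get(sourceId); }
    public int deleteSource(int sourceId) { return store.remove(sourceId) != null ? 1 : 0; }
    // isSourceExit: existence check, mirroring the interface's name as given.
    public int isSourceExit(int sourceId) { return store.containsKey(sourceId) ? 1 : 0; }

    public static void main(String[] args) {
        SourceManageSketch m = new SourceManageSketch();
        int id = m.uploadSource("/finance/2022/q1-report.xls");
        System.out.println(m.isSourceExit(id)); // 1
        m.deleteSource(id);
        System.out.println(m.isSourceExit(id)); // 0
    }
}
```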
First, each functional module of the system is tested as a unit: jump from one form to another, examine different situations, and use single-step tracing, breakpoints, and intermediate-variable output in between. The different processes are then combined and tested as a relatively complete compiled part. Because it is not initially clear whether the results obtained are correct, we first write the results to a file, evaluate their accuracy for the different situations, and gradually follow up to determine what needs to be changed or improved. Second, while making sure the main part works properly, we adjust the non-main parts of each module and make improvements with reference to the related functions of WordPad and Notepad in Windows. Finally, the system as a whole is fully tested in many aspects, and many errors and imperfections are corrected to ensure that the system meets the design requirements and works normally. Each module of the system is tested; after that, all the modules are assembled and the interfaces are tested as a whole.

Figure 4: Architecture diagram of enterprise financial management system.

Figure 5: Functional structure diagram of enterprise financial management system.

The operational stability of the system is tested through the aspects shown in Table 3. The main goals of the test are system reliability, system page latency, and system performance. The system uptime is set to 48 hours; during the test, we observe the system indicators over the 48 hours and average the results over multiple tests, as shown in Table 4. The test results show that the software design meets the expected performance requirements. The system tests show that the system is simple, easy to operate, and practical, and that each interface meets the system's safety requirements. 5.
Application Directions of Artificial Intelligence in Enterprise Financial Management

5.1. Reducing Process Time and Facilitating Real-Time Management. Financial analysis is gradually shifting from traditional methods to analysis supported by real-time system data. With the popularization and implementation of enterprise financial management systems, financial activities become more streamlined, giving enterprises more time and space to optimize and operate financial function management. In the later stage of financial system optimization, procurement projects can consider further developing the functions of the financial statement module to facilitate the acquisition of data and ultimately provide financial evaluation, analysis, and decision making for enterprises.

Table 2: Client test environment (OS type for client environments 1-3).

Table 3: System test (test items 1-8).

Table 4: Financial management system performance test results (delay, reliability, and concurrency tests).

5.2.
Information Disclosure and Sharing to Improve Management Efficiency. With the development of society and technology, in order to optimize the financial management process, centralize decentralized financial management, and improve the efficiency of financial activities, enterprises must establish a center for internal information exchange and communication. Financial information sharing can effectively improve the efficiency of information interaction within the enterprise system, promoting information sharing among departments; to this end, a dedicated financial information platform should be clearly established within the enterprise. The system includes customer information, business information, decision-making information, and so on. This will shape the future of business management.

5.3. Strengthening Risk Management and Improving Decision-Making Ability. Financial risk management is one of the core contents of an enterprise's financial management system; its main function is to let the decision-making system act directly through risk monitoring and feedback. When a company lacks a risk management system, the absence of financial risk management directly harms business operations, so the risk management and control system must be improved. To effectively build a financial risk monitoring system, enterprises can start from the following three aspects. First, through comparative analysis with other enterprises, an enterprise can form standardized data indicators suited to its own conditions. Second, real-time data analysis is used to obtain suitable data, which is compared with the real-time indicators and the standardized risk indicators. Furthermore, link processing is performed for the appropriate business.
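The threshold-style monitoring described in these steps reduces to comparing each real-time indicator with its standardized alert band. The sketch below is purely illustrative; the indicator name and the band limits are invented, not taken from the system.

```java
// Sketch of threshold-based risk monitoring: an indicator value is flagged
// when it drifts outside the allowed [min, max] band, which would trigger
// an alert to the decision-making system.
public class RiskMonitor {
    static boolean breaches(double value, double min, double max) {
        return value < min || value > max;
    }

    public static void main(String[] args) {
        // e.g. an accounts-receivable recovery rate expected in [0.85, 1.00]
        System.out.println(breaches(0.78, 0.85, 1.00)); // true  -> raise alert
        System.out.println(breaches(0.92, 0.85, 1.00)); // false -> within band
    }
}
```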
For example, taking changes in the company's accounts receivable recovery rate and budget cost allocation, we combine these indicators with the company's initial alert thresholds for real-time monitoring and adjustment.

6. Conclusion

By using the financial management system, enterprises can improve the efficiency of financial management and achieve twice the result with half the effort, which is a development advantage in fierce market competition. Given the relatively low production level and efficiency of small and medium-sized enterprises and their low market share, the financial management system is not merely an application software system but an important part of production and operation. Practice shows that the system has the following advantages: a friendly interface and simple operation, so that operators with limited computer experience can work from the menu prompts; detailed information management, including adding, deleting, and other specific operations, with powerful navigation, query, and statistical functions; support for multi-identity user operation, connecting users effectively and facilitating the comprehensive management of basic financial and enterprise information; a reasonably arranged business process, with a clear division of labor between the voucher verification stage and the posting stage, in line with expectations; and detailed report statistics, so that users can print statistical reports as needed, with accurate and clear data that is convenient for analysis. Owing to the limited level of this work, the system is not perfect in some aspects, and more research is needed. (1) The security and reliability of the financial management system need to be improved and optimized.
(2) The functional module design is not detailed enough, and the data analysis integration function still needs to be perfected. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The author declares that there are no conflicts of interest. References
Abstraction Mechanisms in Support of Top-Down and Bottom-Up Task Specification

K.S. Raghu and James D. Arthur
Department of Computer Science
Virginia Tech
Blacksburg, VA 24061

TR 88-15

ABSTRACT

Abstraction is a powerful mechanism for describing objects and relationships from multiple, yet consistent, perspectives. When properly applied to interface design, abstraction mechanisms can provide the interaction flexibility and simplicity so desperately needed and demanded by today's diverse user community. Fundamental to achieving such goals has been the integration of visual programming techniques with a unique blend of abstraction mechanisms to support user interaction and task specification. The research presented in this paper describes crucial abstraction mechanisms employed within the Taskmaster environment to support top-down and bottom-up task specification. In particular, this paper (a) provides an overview of the Taskmaster environment, (b) describes top-down specification based on multi-level, menu-driven interaction and (c) describes bottom-up specification based on cutset identification and pseudo-tool concepts.

CR Categories and Subject Descriptors: D.2.6 [Software Engineering]: Interactive Programming Environments; H.1.2 [User/Machine Interaction]: Human Information Processing

General Terms: Interface Abstractions, Top-Down and Bottom-Up Task Specification, Partitioned Menu Networks, Tool Composites

Additional Keywords: Nodes, Arcs, Communication Paths, Cutsets

1.0 Introduction

Abstraction is a powerful mechanism for describing objects and relationships from multiple, yet consistent, perspectives.
When properly applied to interface design, abstraction mechanisms can provide the interaction flexibility and simplicity so desperately needed and demanded by today's diverse user community. In particular, user interaction and task specification need to resemble more closely the mental process of problem solving, allowing the user to concentrate on the problem solution rather than on programming language syntax or semantics. The research described in this paper reflects an effort to meet that challenge, not only in simplifying user/machine interaction but also in increasing the bandwidth of interaction between the computer and the user. Fundamental to achieving such goals has been the integration of visual programming techniques with a unique blend of abstraction mechanisms to support user interaction and task specification [REIS86]. The research vehicle for exploring specification abstractions has been the successive development of several prototype environments, culminating in the synthesis of Taskmaster - an interactive, graphical environment for task specification, execution, and monitoring. Although the Taskmaster environment touts several novel features, e.g., visual programming, tool composition and structured inter-tool data flow computing, it derives its expressive power and interactive capabilities from the use of complementary abstraction mechanisms. In particular, concepts underlying functional abstractions, supported through visually-oriented icons and primitives, provide an integrated top-down and bottom-up task specification interface. 
That is, when specifying a task, a user has the option to:

- begin with a high-level task specification and, through a top-down process, successively refine that specification until lower level constituent tasks are bound to primitive operations supported by an underlying set of tools,
- initiate a bottom-up synthesis of a high-level task specification through successive abstractions of fully specified lower level subtasks into higher level subtasks until the high-level task is specified, or
- specify an initial task overview based on a sequence of partially ordered operations, use top-down specification to bind those operations to corresponding functional tools, and then use bottom-up specification to abstract (or "collapse") the overview into conceptually simpler forms representing higher level operations.

As outlined above, Taskmaster exploits several abstraction mechanisms in providing a flexible, user-directed approach to task specification. The focus of this paper is to present functional models, operational characteristics, and implementation considerations underlying the selection and integration of those mechanisms. Because the Taskmaster environment plays such a crucial role in our discussion, an overview of that system is presented in the next section. Included in the presentation is a description of the environment's major components that visually and textually support user task specification. Section 3 follows and presents a discussion of abstractions used in support of top-down task specification. The discussion includes a description of (a) partitioned menu networks in support of multi-level, menu-based interaction, and (b) the expand node operation. Section 4 presents several models of composite tool abstractions and discusses their applicability to bottom-up user task specification. Included in this discussion is an overview of the save tool-composite and attach tool-composite operations.
Finally, Section 5 provides a summary of the paper and briefly discusses the current status of the Taskmaster environment.

2.0 The Taskmaster Environment: An Overview

The Taskmaster environment has been a product of evolution. Its initial predecessor, OMNI [ARTJ87], was textually oriented and supported interactive user task specification based on a "loose" composition of program filters. Taskmaster's immediate predecessor, GETS [ARTJ88], exploited graphics-based task specification but, like OMNI, was still restricted to rigid specification constraints enforced by menu-based interaction. Learning from our experiences with OMNI and GETS, the Taskmaster environment has been purposely designed to support user task specification from a graphics-oriented perspective and to include abstraction mechanisms to overcome the interaction rigidity inherent to menu-based systems. The Taskmaster environment is an interactive, graphical environment for task specification and execution. Task specification operations are supported through a collection of software tools present in a tools database. A tool as used in this paper refers to a filter program (a la UNIX[1] sort) which performs a single operation with minor variations. Each tool can have multiple input and output ports through which it communicates with other tools. To "program" a given task within the Taskmaster environment, one decomposes the task into a partially ordered set of conceptually simple, high-level subtasks (or operations), and then composes a corresponding network of software tools that implement those subtasks. This decomposition/composition process is supported through and depicted as a graphical network in which nodes correspond to subtasks and arcs represent directed data paths between the nodes. The resulting network topology captures the set and sequence of operations needed to compute a solution to the user task specification.

[1] UNIX is a trademark of AT&T.
Execution of that network of software tools provides the problem solution.

2.1 A Task Specification Example

The following example illustrates this programming paradigm. Suppose one desires to specify a task network to implement a matrix multiplication scheme with vector operations in order to exploit the parallelism offered by vectorization. Figure 1 illustrates one task specification network for this scheme where:

1. the pre-multiplier matrix A and the post-multiplier matrix B are vectorized by row and column, respectively. This activity is reflected in the overview by nodes bearing the operational names: Vectorize by Row and Vectorize by Column.
2. the product matrix elements, Cij, are computed in a parallel fashion based on the row and column vectors provided by the vectorize operations. This activity is portrayed in the overview by the node: Multiply Vectors in Parallel.
3. the product matrix elements are composed into an N x L matrix C. This operation is implied by the node labelled: Compose Product Matrix.

We note that the network shown in Figure 1 is a task specification overview and is intended to exemplify operations at a high level of granularity. These operations may or may not directly correspond to primitive functions supported by underlying environment tools.

Figure 1 A High Level Specification of Vectorized Matrix Multiplication

Using the task specification overview as a basis, the user individually selects each node and successively refines that node until a primitive tool or predefined subtask is "bound" to it. The final network topology reflecting this refinement process and corresponding to the user's perception of a task "solution" is illustrated in Figure 2. Note that, in addition to refinement, node expansion is also performed on the *Multiply Vectors in Parallel* and the *Compose Product Matrix* nodes. Though simplistic in nature, the above example does capture the flavor of a typical task specification within the Taskmaster environment.
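The decomposition in Figure 1 can be mirrored directly in code. The sketch below is purely illustrative: plain Python functions stand in for the four network nodes, and the function names are hypothetical, not actual Taskmaster tools.

```python
# Illustrative stand-ins for the four nodes of Figure 1; each function
# plays the role of one high-level operation in the task network.
def vectorize_by_row(a):
    return [list(row) for row in a]

def vectorize_by_column(b):
    return [list(col) for col in zip(*b)]

def multiply_vectors(rows, cols):
    # one dot product per (i, j) pair; the pairs are independent, which
    # is the parallelism the "Multiply Vectors in Parallel" node exploits
    return [[sum(x * y for x, y in zip(r, c)) for c in cols] for r in rows]

def compose_product_matrix(elements):
    return elements  # already arranged as an N x L matrix here

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
c = compose_product_matrix(
    multiply_vectors(vectorize_by_row(a), vectorize_by_column(b)))
print(c)  # [[19, 22], [43, 50]]
```

In the environment itself, each of these operations would be a node bound to a tool, with the intermediate vectors flowing along arcs rather than being passed as function arguments.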
The conceptual simplicity, however, is attributable to the powerful network editing primitives (based on underlying abstraction mechanisms) provided to the user. For example, Taskmaster provides primitives to save *all* or *parts* of the task specified in Figure 2 as a functionally complete abstraction in the tools database. For all practical purposes, the above network can be saved (in its entirety) and later addressed as a *pseudotool* with 2 input ports and one output port. Hence, in a task specification where a 2x2 matrix multiplication operation is needed, the user simply creates a node in the network and, through the node specification process, binds that pseudotool to the node. Clearly, such a process could have been applied to the nodes *Multiply Vectors in Parallel* and *Compose Product Matrix*.

Figure 2 A Fully Refined Task Network for Vectorized Matrix Multiplication

2.2 The Major Components of the Taskmaster Environment

The Taskmaster environment is an integrated user support environment that exploits visual programming concepts, tool composition, and structured data flow. It is composed of three major cooperating components:

- the Network Editor,
- the Network Execution Monitor, and
- the Tools Database.

The Network Editor provides a graphical interface for constructing task networks. It guides the user through the task specification process by supporting top-down and bottom-up interaction formats. Once a task is fully specified, the corresponding network is ready for execution. A network representation is forwarded to the Execution Monitor for network instantiation and monitoring. The Tools Database plays a supportive role in that it provides access to all the information pertaining to the basic tool set. This information includes a detailed description of each tool present in the database, its interface structure and the dialogue for refining its function.
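The node/arc model that the Network Editor manipulates can be sketched minimally as follows. This is a hypothetical reconstruction for illustration only (the class and method names are invented); it captures the idea that an unspecified node is a mere place-holder until a tool is bound to it, and that an arc joins ports of nodes that are already specified.

```python
# Hypothetical sketch of the Editor's network model: nodes are place-
# holders until a tool is bound, and arcs join ports of specified nodes.
class Network:
    def __init__(self):
        self.tools = {}   # node name -> bound tool name, or None
        self.arcs = []    # (src_node, src_port, dst_node, dst_port)

    def create_node(self, name):
        self.tools[name] = None       # no semantic meaning yet

    def bind(self, name, tool):
        self.tools[name] = tool       # node specification

    def specify_arc(self, src, src_port, dst, dst_port):
        # an arc can be specified only after both incident nodes are bound
        if self.tools.get(src) is None or self.tools.get(dst) is None:
            raise ValueError("both incident nodes must be fully specified")
        self.arcs.append((src, src_port, dst, dst_port))

    def fully_specified(self):
        return all(t is not None for t in self.tools.values())

net = Network()
net.create_node("Select")
net.create_node("Sort")
net.bind("Select", "grep")
net.bind("Sort", "sort")
net.specify_arc("Select", "out", "Sort", "in")
print(net.fully_specified())  # True
```

Only a network that is fully specified in this sense is eligible to be forwarded to the Execution Monitor.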
Physically, the environment is partitioned across two machines connected by a high speed communication link. The Network Editor resides on a VAXstation[2] running MicroVMS 4.2 (local workstation). The Execution Monitor resides on a VAX 11/785 running Ultrix-32 (host computer). The Tools Database resides on the host machine but gets copied over to the local workstation on every update. Although the current configuration has a single local workstation, in the future we visualize a set of local workstations all connected to the host. The overall system configuration is shown in Figure 3.

2.2.1 Taskmaster User Interface: The Network Editor

The Network Editor provides an interactive, graphical interface for constructing task specifications. Programming in the Taskmaster environment consists of transforming a conceptual task into a network whose nodes represent operations (tools) and arcs represent the communication paths between the nodes. The Network Editor supports this specification process by providing editor primitives for building generic networks and specifying the nodes and the arcs through a menu-based interaction process defined within the Tools Database. For clarity, a generic network is viewed as a directed graph with unspecified nodes and arcs. The specification process associates with each node and arc a corresponding operation and tool port assignment, respectively. Figure 4 illustrates the generic network from which the fully specified network in Figure 2 is derived. The Network Editor also permits a user to send a fully specified task network to the Execution Monitor for instantiation. The Network Editor is primarily menu-driven and makes extensive use of logical windowing and mouse input.

[2] VAXstation, VAX, Ultrix and MicroVMS are all trademarks of the Digital Equipment Corporation.
Similar to the Dialogue Management System [EHRR86] and PICT [GLIE84], the Network Editor incorporates many of the human engineering principles related to graphical user interface design. For example, ergonomic features include direct manipulation, visual feedback, user error recovery, choice confirmation, default selection, operational and representational consistency, pop-up menus and so forth. The Editor display consists of a large window detailing the topology of the network being edited and auxiliary pop-up windows for displaying:

- menus,
- multiple views of nodes and arcs,
- user instructions and help messages,
- error messages,
- user confirmation requests, and
- textual information pertinent to tool and communication path specification.

Figure 4 Generic Network for Vectorized Matrix Multiplication

The Network Editor supports many editing primitives, the majority of which support either network construction, network specification, or network inquiry operations. The network topology construction operations are used to create and operate on node and arc icons. The unspecified icons serve as visual place-holders for the tools and their interface connections. Thus, an unspecified node created by the create node operation has no initial semantic meaning with respect to the task being solved. Pan and zoom operations provide selective viewing for managing the relative complexity of very large networks. The collapse operation allows one to abstract a subnetwork performing some high-level operation into a single "super-node". The explode operation reverses the effect of the corresponding collapse operation. The expand operation provides a new network topology window in which to define a subnetwork associated with the node being expanded. Sections 3 and 4 present a more detailed discussion of the expand, collapse and explode operations relative to specifying functional abstractions. The network specification operations are used to specify the node and arc icons.
Node icons are specified by attaching to them a fully specified tool or pseudotool from the tools database. Arc icons are specified by making all the appropriate connections between the tools associated with the nodes connected by the arcs. Thus, an arc can be specified only after both incident nodes are fully specified, so that the corresponding tool interfaces are clearly defined. The network inquiry operations provide characteristic information based on the current specification status of nodes and arcs. The view node operation provides a detailed view of the tool attached to a specified node. Taskmaster also provides a textual view of each specified node containing the description of the associated tool, its attributes and its input and output ports. The view communication path operation provides a detailed view of the interface between the tools connected by an arc. Additional operations not discussed above provide for various "backup" and "restore" capabilities. At any time during an editing session, the current state of the network can be saved to disk or a previously saved network can be restored from disk using the save network and the restore network operations, respectively. Moreover, the save tool-composite and attach tool-composite operations allow the user to identify, save, and retrieve (sub)networks as functional abstractions. These two operations, fundamental in defining and reusing pseudotools, are discussed more fully in Section 4. The undo operation supports error recovery by undoing the effect of the most recent topology-modifying operation. Finally, after a network is created and completely specified, the *execute network* operation can be used to send it to the Execution Monitor for instantiation. Upon selection of this operation, the Network Editor performs consistency checks and network validation, and then sends an internal representation of the network to the Monitor for instantiation.
The internal representation of the network is transferred over the high-speed link to the remote host where the Execution Monitor resides.

2.2.2 Taskmaster Executive - the Execution Monitor

Problem solving in the Taskmaster environment consists of (1) specifying the problem and (2) computing the solution. The Network Execution Monitor supports the second stage of this process by performing the following functions:

- reading the task network representation forwarded by the Network Editor,
- validating the network,
- spawning computational processes based on the network topology, and
- monitoring the network execution.

Before initiating execution of the network, the Monitor first performs a modified breadth-first traversal (BFS) of the network checking for network connectivity and data path consistency. After confirming network consistency, the Monitor instantiates the network by spawning a process for each node in the network, allocating a UNIX "pipe" for each arc and connecting those pipes to the appropriate nodes. The node instantiation is also performed in the BFS traversal order in an attempt to satisfy certain interprocess communication constraints imposed by the operating system. The network execution, however, is independent of the instantiation order because it is based solely on *data flow*. The Execution Monitor also provides for execution monitoring using status messages to indicate the instantiation, execution and termination of each node-associated process.

2.2.3 Taskmaster Knowledge Base - the Tools Database

The third component of the Taskmaster environment is the Tools Database. In the Taskmaster environment, the Tools Database plays a major role in isolating and encapsulating all application-specific information, and presenting it in a generic form to the other two components of the environment.
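The Monitor's pre-execution connectivity check described above can be sketched as a breadth-first traversal over the node/arc topology. This is an illustrative reconstruction, not the actual Monitor code; the node names are borrowed from the matrix multiplication example.

```python
# Sketch of the Monitor's connectivity check: a breadth-first traversal
# must reach every node, treating arcs as undirected for this purpose.
from collections import deque

def network_connected(nodes, arcs):
    adj = {n: set() for n in nodes}
    for src, dst in arcs:
        adj[src].add(dst)
        adj[dst].add(src)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nbr in adj[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(nodes)

nodes = ["VecRow", "VecCol", "Multiply", "Compose"]
arcs = [("VecRow", "Multiply"), ("VecCol", "Multiply"),
        ("Multiply", "Compose")]
print(network_connected(nodes, arcs))  # True
```

In the environment itself, the order in which such a traversal visits nodes is also the order in which a process is spawned per node and a pipe allocated per arc.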
This approach has significant advantages in that:

- defining a new application domain requires only that the Tools Database be redefined accordingly, and
- integrating the new Database into the Taskmaster environment is an operation that is transparent to the rest of the system.

More specifically, the Tools Database contains information about all the tools available in the environment. This information includes tool communication requirements, tool arguments and complete textual descriptions of each tool and its input and output ports. The Tools Database also contains all the information supporting the multi-level, menu-based dialogue process for node specification. The Network Editor directly uses this information to drive the node specification process. This notion of using a knowledge base to guide the user in the selection process is a novel feature first used in the OMNI environment mentioned earlier. Effectively, the specification process can be viewed as a finite state machine driven by the Tools Database menu dialogue "table" [RABI59].

3.0 Abstractions in Support of Top-Down Task Specifications

In specifying a task, the user first defines a generic network reflecting a conceptual ordering of one or more high-level operations. The initial network can be a single node representing the entire task or a network of nodes representing a task specification overview. From this initial configuration, top-down specification can be employed to refine the network. Top-down task specification is the successive decomposition of a task into lower level subtasks until the lowest level subtasks are directly identifiable with available tools or pseudotools defined in the tools database. In the Taskmaster environment, top-down task specification is supported through two distinct interaction formats, each embracing different decomposition philosophies and employing distinct abstraction mechanisms.
The first approach exploits partitioned menu networks through a multi-level, menu-based interface [ARTJ85]. As with any menu-based system, the specification/decomposition paths are predefined. The second approach employs node expansion activities and supports user-directed specification/decomposition. As discussed below, both approaches embrace top-down decomposition. Multi-level, menu-based interaction, however, assumes that each node being specified represents one operation and will be attached to a single tool. Node expansion, on the other hand, assumes just the opposite.

3.1 Top-Down Specification through Multi-Level, Menu-Based Interaction

Multi-level, menu-based interaction assumes that the node being specified is to be directly bound to a primitive tool defined in the tools database. Furthermore, the interaction process is restricted to predefined sets of refinement paths that correspond to the underlying menu network hierarchy. The novelty of this approach is not the menu-based interaction per se, but the specification sequence induced by a partitioning of the underlying menu network. That is, the predefined menu network is partitioned into multiple levels, where each level (or layer) represents a refinement abstraction across the entire menu network. As illustrated below, the partitioning induces interface layers that permit the user to specify a task overview based on predefined high-level operations, and then successively refine that overview in a "lock step" fashion through subsequent menu-based interaction. Although we choose to restrict the discussion of partitioned networks and the multi-level, menu-based interaction to systems that support user task specification, the concepts presented in this section are applicable to most general menu-driven systems and their corresponding application domains.
In specifying a task, the user first constructs a generic network that provides a framework for sequencing and specifying a conceptual set of operations. Through conventional menu-based interaction, for each node in the generic network the user selects the appropriate sequence of menu frame items that identifies the high-level operation, binds that node to a tool which implements the operation, and then refines the execution behavior of that tool. This scenario implies that a selected node is fully specified before another node is considered. For example, suppose that a user has access to a menu-driven, file transformation system and wants to retrieve a file, select certain records from a specified file, sort them, and then save them for later processing. First, the user selects the sequence of frame items whose corresponding actions solicit the name of the file to be retrieved and infer all associated physical attributes. Next the user chooses a sequence of frame items that indicates the select-record operation as well as the criteria for selecting the appropriate records. The user then chooses a sequence of menu items that leads to a description of the sort operation and all refinements that specify the desired sort sequence. Finally, frame items are selected that denote the file-save operation and that define all characteristics relating to the destination file. Figure 5 illustrates one possible network to accomplish the above specified task. In this fully specified network, cat is the file retrieval tool, grep is the select tool, sort is the sort tool and fileit is the file creation tool.

Figure 5 Fully Specified Network to Sort and Save Selected Records

The problem with the specification approach described above is that the user is *forced* to select one node at a time and fully specify its operational details before moving to another node in the network.
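The fully specified network of Figure 5 is essentially the classic UNIX pipeline cat | grep | sort | fileit. A pure-Python stand-in (toy functions over hypothetical records) makes the data flow explicit:

```python
# Toy stand-ins for the four tools of Figure 5; each function models one
# node, and function composition models the arcs between them.
def cat(lines):            # file retrieval
    return list(lines)

def grep(pattern, lines):  # record selection
    return [line for line in lines if pattern in line]

def sort_records(lines):   # sort
    return sorted(lines)

def fileit(lines):         # file creation; here we simply hand back records
    return lines

records = ["carol 2", "alice 1", "bob 3", "alice 9"]
result = fileit(sort_records(grep("alice", cat(records))))
print(result)  # ['alice 1', 'alice 9']
```

In the environment itself, each stage would be a node bound to the corresponding filter tool, with the records streaming along the arcs.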
Such rigidity, enforced by conventional menu networks, tends to obscure the user's overall perception of the task solution. For complex network topologies, forcing the user to contend with details before firmly establishing a task overview can have adverse, if not devastating, repercussions. In the Taskmaster environment, however, menu interaction is based on partitioned menu networks that support and encourage partial node specification through defined interface layers. Intuitively, an interface layer can be viewed as a "slice" through the menu network that delimits menu frames possessing a common level of specification. For the above file transformation example, a (simplified) conventional menu network might look similar to the one illustrated in Figure 6. For the Taskmaster file transformation environment, however, Figure 7 shows the same menu network after partitioning. Note that the network defined at Hierarchical Level 1 "terminates" with the selection of a high-level operation. Hence, the user can traverse the menu network in a conventional manner, specify a task overview (without being encumbered by refinement details), and then continue with individual node refinement through interaction guided by the second-level menu networks. That is, the user first constructs a generic network (Figure 8a), specifies an overview by associating high-level operations with each node (Figure 8b), and then refines each high-level operation through continued menu interaction on the second hierarchical level. The final result is a fully specified network identical to the one shown in Figure 5. We emphasize that partitioned networks provide the capability for the user to specify an overview through the Taskmaster interface. The final choice remains with the user as to whether the specification sequence follows the "breadth-first" overview orientation or the conventional "depth-first" orientation.
Figure 7 Partitioned Menu Network Supporting File Transformation

Although partitioned menu networks are a powerful mechanism for supporting multi-level, menu-based interaction, the user is still forced to follow a set of paths defined by the menu (sub)networks. The next section describes an alternate top-down specification scenario that allows the user to control the specification process.

Figure 8a Generic Network to Sort and Save Selected Records

Figure 8b Overview Network to Sort and Save Selected Records

3.2 Top-Down Specification Through Node Expansion

Given a generic network topology, top-down task specification entails:

- the selection of an unspecified node, and
- the binding of that node to a tool or pseudotool through an iterative refinement process.

As described in the previous section, one method for associating a tool with a selected network node is through menu-based interaction. With this approach, however, the user is forced to follow a predefined set of paths leading to the selection of a tool in the database. To introduce more flexibility in the top-down specification process and to encourage user creativity, the Taskmaster environment provides a node expansion primitive that allows the user to direct the specification process. Intuitively, expand node "opens up" a single, unspecified network node and permits the user to fully specify a subnetwork within that node. The node expansion operation provides the user with a separate "node expansion" window in which to edit the subnetwork to be integrated. This window is two-thirds the size of the network topology window but logically the same. The expand node operation is recursive to five levels, a limit imposed only to control the complexity of the display. Thus, if a node in an expansion window is itself selected for expansion, a new node expansion window appears, overlaying the previous one.
Only one expansion window is active at any particular instant, and the underlying inactive windows are clearly identified as such. In effect, the user can specify multiple levels of abstraction reflecting his own perception of an operation or task, and have all levels appear as one node at the outermost level. Moreover, because all normal editing operations are supported by the expansion window, the user can choose the method by which each individual subnetwork node is subsequently specified.

4.0 Abstractions in Support of Bottom-Up Task Specification

Top-down task specification involves the successive decomposition of a task into lower level subtasks until the lowest level subtasks are directly identifiable with available tools in the tools database. In many instances top-down specification is most natural, e.g., when concentrating on the specification of a single network node. Within the framework of task specification, however, it is often convenient for the user to consider groups of nodes as a single abstraction supporting one high-level operation. Although not specifically stated, the expand node operation provides such a view, but from a top-down perspective. Bottom-up task specification involves successive abstractions of fully specified lower level subtasks into higher level subtasks. At the lowest level of abstraction a network is specified where each node is directly bound to a tool or pseudotool in the database. The specified network usually defines some low-level, yet not quite primitive, function. Continuing in a bottom-up fashion, this network is collapsed into a "super-node" and becomes a single node in a higher level network. This successive abstraction toward higher level functionalities culminates with a fully specified subnetwork that performs a specific, user-defined function. The remainder of this section describes how successive abstraction is integrated into the Taskmaster environment.
In particular, Section 4.1 provides a discussion of the three models considered in defining the semantic framework associated with successive abstraction. Section 4.2 describes the editing primitives supporting abstraction from a user's perspective.

4.1 Models of Abstraction based on Cutsets

Abstraction is itself an abstract term which has manifold meanings. Abstraction as used here means the hiding of unnecessary detail, or equivalently, showing only those aspects essential to solving a given problem. It is important to note that the criteria used in abstraction are dependent on the projected use of the abstracted object or the target environment. Abstraction is the best way to deal with complexity since it reduces the apparent complexity by the elimination of irrelevant detail. Of particular interest here is the abstraction of a collection of tools forming a sub-network into a composite tool. The following definitions will be helpful in describing our models of abstraction. As described earlier, the tool is the basic entity in the tool composition paradigm and performs a single operation. Each tool has one or more ports with which it communicates with other tools via links.

- A composite tool, or tool-composite, is a collection of tools grouped together forming a new tool, or pseudotool.
- A cutset of tools is a sub-network of tools delineated from the whole by a closed polygon.
- An internal link of a cutset is a connection with both its ends within the cutset.
- An intersecting link of a cutset is a connection with only one end inside of the cutset.
- An internal port of a cutset is a port within the cutset which has only internal links.
- An external port of a cutset is a port within the cutset with at least one non-internal link.

Figure 9 shows a network where a closed polygon forms a cutset comprising the tools labelled C, D, E and F and links labelled e, f, g, h and i. Links e, f, g, h and i are internal links while the rest are intersecting links.
Ports labelled 4, 5, 6, 13 and 14 are external ports and ports labelled 7, 8, 9, 10, 11 and 12 are internal ports.

Figure 9 Example Network to Illustrate Cutset and Other Definitions

Any cutset can be considered to have two distinct contexts. The first is its internal context, which includes only the internal links and all the tools comprising the cutset. The second is the external context of a cutset, which entails information about the cutset vis-a-vis the rest of the network. The very term abstraction of a cutset automatically implies the preservation of its internal context. In terms of the external context one can identify three different abstraction models for a cutset:
- model M1 which preserves no external context,
- model M2 which preserves external, functional context, and
- model M3 which preserves external, relational context.

Referring to Figure 10, the M1 abstraction is constructed by removing all intersecting links from the cutset and then using all the unconnected ports to form the interface. The M2 abstraction is constructed by using all unique external ports of the cutset to form the interface. The M3 abstraction is constructed by using a port for every intersecting link of the cutset.

Figure 10 The Three Abstraction Models of the Cutset of Figure 9

As can be seen from the figure, the M1 abstraction has 2 input ports and 1 output port, the M2 abstraction has 3 input ports and 2 output ports while the M3 abstraction has 4 input ports and 3 output ports. Models of abstraction are used to capture the important aspects of their target environment. Model M1, which preserves no external context information, can be used in an environment where grouping is the only essential information that needs to be preserved. A good example of such an abstraction is provided by the "cut" and "paste" operations of MacDraw. Model M2 preserves some external context information, i.e. the functional context of the cutset.
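The three interface constructions can be made concrete with a short sketch. The following Python function is not part of the Taskmaster system; the port-and-link encoding and the small example network are hypothetical, chosen only to illustrate how the M1, M2 and M3 interface sizes fall out of a cutset description.

```python
def interface_sizes(links, port_tool, cutset):
    """Return (M1, M2, M3) interface port counts for a cutset.

    links:     list of (port, port) connections
    port_tool: dict mapping each port to its owning tool
    cutset:    set of tool names delineated by the closed polygon
    """
    inside = lambda p: port_tool[p] in cutset
    internal = [l for l in links if inside(l[0]) and inside(l[1])]
    intersecting = [l for l in links if inside(l[0]) != inside(l[1])]

    def cutset_ports(link_list):
        # ports inside the cutset touched by the given links
        return {p for l in link_list for p in l if inside(p)}

    external_ports = cutset_ports(intersecting)
    # M1: ports left unconnected once all intersecting links are removed
    m1 = len(external_ports - cutset_ports(internal))
    # M2: all unique external ports of the cutset
    m2 = len(external_ports)
    # M3: one interface port per intersecting link
    m3 = len(intersecting)
    return m1, m2, m3


# Hypothetical network: tool A outside, tools C and D inside the cutset.
port_tool = {"a1": "A", "a2": "A", "a3": "A",
             "c1": "C", "c2": "C", "d1": "D", "d2": "D"}
links = [("a1", "c1"), ("a2", "c1"),   # intersecting, both ending at port c1
         ("a3", "d2"),                 # intersecting
         ("c2", "d2"), ("c2", "d1")]   # internal
print(interface_sizes(links, port_tool, {"C", "D"}))  # (1, 2, 3)
```

The ordering M1 ≤ M2 ≤ M3 produced here mirrors the port counts of Figure 10: M3 counts every intersecting link, M2 collapses links sharing an external port, and M1 further drops external ports that remain connected through internal links.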
An attractive feature of M2 abstraction is that it supports reusability at the functional level. Model M3 preserves all the external context information and as such is extremely context-specific. It is ideal for applications requiring storage and reuse in the same or similar external context. In the Taskmaster environment, the abstraction facility is intended to functionally and visually abstract a tool-composite performing an identifiable high-level subtask. Hence model M2, which preserves the external functional context, seems to be best-suited for this purpose. Another important point of interest is the fact that the Taskmaster environment allows user-defined descriptions of pseudotools. Hence all the external context information can be preserved textually if required. The next subsection discusses the actual implementation of the M2 model of abstraction in Taskmaster.

4.2 Editor Operations Supporting the M2 Abstraction

The Network Editor has two new primitives supporting abstraction of tool-composites: save tool-composite and attach tool-composite. In addition to these two operations, the collapse and explode operations also support the pseudotool abstraction. In Taskmaster, there are three different ways to integrate a pseudotool into a network:
• using the *collapse* operation to define a new pseudotool *in-place*,
• using the *attach tool-composite* operation to import a pre-defined pseudotool (defined with the *save tool-composite* operation), and
• using the *expand node* operation to construct a separate sub-network and abstract it into a new pseudotool.

The *explode* operation not only reverses the effect of the corresponding *collapse* done previously but also "explodes" pseudotools brought in either by the *attach tool-composite* or the *expand node* operation. The remainder of this section illustrates how each of these primitives operates, except for expand node which is discussed in Section 3.2.

3 MacDraw is a trademark of Apple Computers, Inc.
4.2.1 The Collapse and Explode Operations

The collapse operation performs an in-place M2 abstraction of a collection of tools identified by the user. The resulting "super-node" implements some high-level operation. The collapse operation can be used recursively in the sense that it may be applied to a cutset which already contains collapsed pseudotools. The effect of the collapse operation is to redraw the network with the collapsed tool-composite being represented by a special "super-node" icon containing the user-supplied label. Figure 11a shows a cutset in the process of being collapsed. The *rubber band* polyline drawn with the mouse to delineate the cutset can also be observed in the figure. Figure 11b shows the network after the collapse operation is completed. The "super-node" icon with a brick-pattern ring represents the newly defined pseudotool named *Process File*. During the collapse operation the Network Editor automatically pulls in the port descriptions for the pseudotool from the corresponding nested tools. The pseudotool name and description are solicited from the user before performing the collapse. Other than the *specify node* operation, all other generic node operations can be performed on the "super-node". If the user chooses to *explode* the Process File pseudotool, the network will be redrawn to show its pre-collapse state of Figure 11a.

4.2.2 The Save Tool-Composite and Attach Tool-Composite Operations

The *save tool-composite* operation performs an M2 model abstraction of the cutset similar to the collapse operation. While the collapse operation does an in-place replacement of the cutset with the pseudotool, the save tool-composite builds the pseudotool based on the cutset and stores it for later reuse. The network topology display is not altered by a save tool-composite operation. The *attach tool-composite* operation is used to attach an already saved pseudotool to an unspecified node, and hence, is one way of specifying an unspecified node.
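At the network-topology level, collapse and explode can be viewed as graph rewrites. The snippet below is a minimal illustration of that view, not the Network Editor's actual data model: it collapses a cutset into a single super-node while retaining the hidden sub-network so that a later explode could restore it.

```python
def collapse(nodes, edges, cutset, label):
    """Replace the nodes of `cutset` with a single super-node `label`.

    Internal edges are hidden inside the super-node; edges crossing the
    cutset boundary are re-attached to the super-node. Returns the new
    topology plus the hidden sub-network needed to explode later.
    """
    hidden = ([n for n in nodes if n in cutset],
              [e for e in edges if e[0] in cutset and e[1] in cutset])
    new_nodes = [n for n in nodes if n not in cutset] + [label]
    new_edges = [(label if a in cutset else a, label if b in cutset else b)
                 for a, b in edges
                 if not (a in cutset and b in cutset)]
    return new_nodes, new_edges, hidden


# Toy network A -> C -> D -> B; collapse {C, D} into "Process File".
n2, e2, hidden = collapse(["A", "C", "D", "B"],
                          [("A", "C"), ("C", "D"), ("D", "B")],
                          {"C", "D"}, "Process File")
print(n2)  # ['A', 'B', 'Process File']
print(e2)  # [('A', 'Process File'), ('Process File', 'B')]
```

Explode is then just the inverse rewrite: reinsert the hidden nodes and edges and re-attach the boundary edges to their original ports.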
5.0 Summary and Conclusions

Abstraction is a powerful tool for succinctly describing, modelling and implementing complex operations. In particular, abstraction is extremely useful in characterizing the complexities of user/machine interaction as related to tools-based task specification. Currently two prototype Taskmaster applications exploit abstraction in support of user task specification:
- a Unix-based, dataflow command shell supporting file transformation task specifications, and
- a matrix manipulation environment supported through selected LINPACK [DONJ79] routines.

Knowledge gained from the synthesis of these two application environments, and their current use as experimental test-beds, has contributed significantly toward the development of complementary interface abstractions. A realization of those abstractions within the Taskmaster environment embodies a unique blend of top-down and bottom-up task specification capabilities, all oriented around visual programming concepts. Top-down task specification is achieved through the successive refinement of a graphical network where nodes represent operations and arcs correspond to communication paths between those operations. In support of the top-down specification process, Taskmaster effectively exploits the inherent powers of multi-level, menu-based interaction and the node expansion operation. The multi-level, menu-based interface employs partitioned networks to minimize the adverse impact of rigid network traversal paths so prevalent in conventional menu-based interaction. In particular, the interface layers induced by partitioned networks define logical network boundaries that enable the user to choose a depth-first or breadth-first task specification approach. Even with such flexibility, however, menu-based interaction requires the binding of one node to one tool. To relax this restriction, Taskmaster utilizes a second top-down specification mechanism, node expansion.
Node expansion allows the user to associate fully specified subnetworks (or subtasks) with individual nodes. Bottom-up task specification is achieved through the successive abstraction of fully specified lower level networks into higher level operations. The significance of this bottom-up abstraction process is that the user can define powerful pseudotools, fundamental to the application environment, and combine them to form higher level operations. Additional applications of successive abstraction lead to the desired task specification and, as a by-product, a powerful set of reusable pseudotools. As touted by Boehm [BOEB84] and Munsil [MUNW85], the provision for reusable components can have a significantly beneficial impact on productivity. In summary, our work in user task specification has provided significant insights into the complexities of user/machine interaction. Although the research described in this paper presents only one aspect of a multi-faceted problem, it is a crucial aspect. Research addressing interface abstractions as well as the many other issues underlying "user-friendly" systems must continue if the user community is to enjoy simplicity and power in interactive, man/machine dialogue.
ABSTRACT

Relational Databases are used in most current enterprise environments to store and manage data. The semantics of the data is not explicitly encoded in the relational model, but implicitly on the application level. Ontologies and Semantic Web technologies provide explicit semantics that allow data to be shared and reused across application, enterprise, and community boundaries. Converting all relational data to RDF is often not feasible, therefore we adopt an ontology-based access to relational databases. While existing approaches focus on read-only access, we present our approach OntoAccess that adds ontology-based write access to relational data. OntoAccess consists of the update-aware RDB to RDF mapping language R3M and algorithms for translating SPARQL/Update operations to SQL. This paper presents the mapping language, the translation algorithms, and a prototype implementation of OntoAccess.

1. INTRODUCTION

Relational Databases (RDBs) are used in most current enterprise environments to store and manage data. While RDBs are well suited to handle large amounts of data, they were not designed to preserve the data semantics. The meaning of the data is implicit on the application level and not explicitly encoded in the relational model. Ontologies and Semantic Web technologies provide explicit semantics in a common framework that allows data to be shared and reused across application, enterprise, and community boundaries [4]. Applying Semantic Web technologies in an enterprise environment enables data processing and exchange on a semantic level. Ontologies and RDF are used to build a semantic layer on top of existing databases that lifts data processing from the syntax to the semantic level. RDF and a shared ontology can be used to exchange data even if the individual relational schemas do not match.
The introduction of background knowledge from an ontology can also be valuable in the implementation of a data integration layer on top of multiple relational data sources. Converting all data in an RDB to RDF is often not feasible due to existing applications that rely on the relational representation of the data. Also, the performance of current triple store implementations remains below that of RDBs, as recent benchmarks show [7]. Therefore, a mediation approach that performs an on-demand translation of Semantic Web requests to SQL is the alternative that preserves compatibility with existing relational applications while enabling ontology-based software to (co-)operate on the same data. In addition, mediation makes it possible to further exploit the advantages of well-established database technology such as query performance, scalability, transaction support, and security. Existing approaches for mapping RDBs to RDF focus on exposing the relational data to the Semantic Web. They provide SPARQL endpoints to query the data, but they neither address data updates nor the explicit application in an enterprise environment. Our contribution in this paper is the ontology-based write access to relational data via SPARQL/Update [19], the upcoming data manipulation language (DML) of the Semantic Web. We present the update-aware RDB to RDF mapping language R3M and algorithms for translating SPARQL/Update to SQL DML. The remainder of this paper is organized as follows. Section 2 presents an overview of related work. The challenges of ontology-based write access to relational data and our approach OntoAccess are presented in Section 3. In Section 4, we introduce our update-aware RDB to RDF mapping language R3M, and Section 5 specifies the algorithms for translating SPARQL/Update to SQL DML. Our prototype implementation is briefly described in Section 6, while Section 7 presents a feasibility study as a first evaluation of our approach.
Section 8 concludes this paper with an outlook on future work.

2. RELATED WORK

RelationalOWL [11] defines an ontology to represent relational schemata and data in RDF. It maps tables and attributes to terms in that ontology and records information about primary/foreign keys as well as the data types of the attributes. This approach exposes the structure and syntax of the relational schema to the RDF representation and prohibits the direct reuse of existing domain vocabulary. RDQuery [12] adds a SPARQL interface on top of RelationalOWL that provides an on-demand translation of SPARQL queries to SQL. D2R [6, 5] is an approach for publishing RDBs on the Semantic Web. It enables the browsing of relational data as RDF via dereferencable URIs and also provides an endpoint for SPARQL queries. D2R's main goal is to provide content for the Web of Data, a web of interlinked data sets expressed in RDF (cf. the Linked Open Data initiative\(^1\)). Virtuoso\(^2\) is a commercial database system from OpenLink Software that features RDF Views [13] over relational databases. A declarative meta-schema language is used to map terms of an ontology to concepts in the database schema. This enables the use of SPARQL as an alternative query language for the relational data. RDF Views are limited to read-only queries; updating the base data through these RDF Views is not supported. Triplify [2] is a light-weight approach to expose information from Web applications (e.g., discussion boards, content management systems) in RDF. It uses a set of application-specific SQL queries to extract data from the underlying RDB and generates RDF data from the results. The SQL queries have to be defined manually for each Web application, but the RDF generation is performed automatically according to a fixed process. Reuse of existing ontologies is possible via result column renaming in the SQL queries. Mastro-i [9] is an ontology-based data integration approach based on global-as-view (GAV) mappings.
The individual source schemata are integrated through ontologies and a relational data federation tool. The mappings to the target ontology rely on SQL queries over the federated source schemata and bindings of the query results to terms in an ontology. Hence, the Mastro-i approach is limited to read-only data access, as unrestricted data manipulations would be affected by the relational view update problem. The World Wide Web Consortium (W3C) has recognized the importance of mapping relational data to the Semantic Web by starting the RDB2RDF incubator group\(^3\) (XG) to investigate the need for standardization. The XG recommends [17] that the W3C start a working group to define a standard RDB to RDF mapping language. However, they will not address the requirements for updating the relational data in a first version of the language. View updates are a well-known problem in database research (e.g., [3, 10, 15, 16]). Mapping RDBs to RDF can also be seen as defining RDF views over the relational data, therefore these views may be affected by the view update problem. Research in this area has shown that the requirements of updates have to be considered already in the specification of a view definition language (VDL). If a VDL is constructed to allow only the definition of bijective mappings (i.e., updates on the base data as well as the views can unambiguously be propagated to the opposite side), the hardest problems of the relational view update problem can be avoided (e.g., [8]). Object-relational mapping (ORM) is an approach to bridge the conceptual gap between object-oriented systems and the relational data model. ORMs such as Hibernate\(^4\) aim at using existing RDB infrastructure to persist data objects in object-oriented applications. This makes it possible to benefit from established database technology while providing an object-oriented abstraction of the relational model.
A mapping language is used to define the mappings of classes and attributes in the object-oriented system to tables and attributes in the RDB. The ORM component then generates the RDB schema according to this mapping and also provides means to store and retrieve objects.

3. ONTOACCESS APPROACH

OntoAccess [14] is our approach for ontology-based access to RDBs that provides read and write access to the relational data. It currently consists of the update-aware mapping language R3M, which bridges the conceptual gap between an RDB and an ontology, as well as an access interface based on SPARQL that supports the upcoming SPARQL/Update language for data manipulations. Updating relational data through Semantic Web technologies presents new challenges for mapping languages and mediation tools. The conceptual gap between the relational model and RDF (tuples vs. triples) means that constraints from the RDB are transferred to the Semantic Web layer. As a consequence, some update requests are no longer valid compared to their application in a native triple store. The tuple-oriented nature of the relational model requires that a certain amount of data is known about each entity (i.e., attributes declared as mandatory). This and other requirements can be enforced in the database schema with integrity constraints that may not be equally reflected in ontologies and RDF, especially if existing vocabularies are reused. However, to enable ontology-based write access to RDBs these constraints must be respected and errors resulting from constraint violations should be handled appropriately. If information about these constraints is stored in the mapping, it can be used to detect invalid update requests and to provide semantically rich feedback to the client. We take the RDB schema of a publication system as the use case for this paper. The database stores information about authors and their publications.
Figure 1 depicts the database schema used in this example with the tables, their attributes, and data types. Each table has a distinct primary key called id of type integer. The publication and author tables represent the main concepts in the use case. A publication is composed of a title, a publication year, a publication type, and a publisher. While title and year are data attributes, type and publisher are foreign keys to the tables pubtype and publisher respectively. Each of those tables contains one textual attribute as a label for the publisher/the type of the publication. All valid publications must have

Figure 1: RDB schema of the publication use case

1. http://linkeddata.org
2. http://virtuoso.openlinksw.com
4. http://www.hibernate.org

4. RDB TO RDF MAPPING LANGUAGE

R3M is an update-aware RDB to RDF mapping language that records additional information about the database schema to support data manipulations and to detect invalid update requests during the translation process. Updatability and simplicity were two of the main design goals of this mapping language. It is expressed in RDF and uses the R3M ontology to model the mappings between terms of a domain ontology and the database schema as well as to record additional information about the schema and its integrity constraints. The mapping employs the approach where database tables are mapped to ontology classes and attributes to properties. This means each database table representing a concept in the application domain is mapped to an ontology class representing the same concept. Likewise, each database attribute that constitutes a relationship between an entity and a data value (or another entity) is mapped to an ontology property that links instances of a class to literal values (or other instances). Thereby, each row in a database table is mapped to a set of RDF triples.
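This tables-to-classes, attributes-to-properties scheme can be sketched in a few lines. The dictionary encoding of a TableMap below is a hypothetical simplification (the real R3M mapping is expressed in RDF); the field names echo the R3M vocabulary but the encoding and the author data are ours.

```python
def row_to_triples(row, table_map, uri_prefix="http://example.org/db/"):
    """Map one database row to RDF triples.

    The row yields one rdf:type triple for the class the table is mapped
    to, plus one triple per non-NULL attribute that has an AttributeMap.
    """
    subject = uri_prefix + table_map["uriPattern"].replace(
        "%%id%%", str(row["id"]))
    triples = [(subject, "rdf:type", table_map["mapsToClass"])]
    for attr, prop in table_map["attributes"].items():
        if row.get(attr) is not None:      # NULL attributes yield no triple
            triples.append((subject, prop, row[attr]))
    return triples


# Hypothetical TableMap for the author table of the use case.
author_map = {"tableName": "author", "mapsToClass": "ont:Author",
              "uriPattern": "author%%id%%",
              "attributes": {"lastname": "ont:lastName"}}
print(row_to_triples({"id": 1, "lastname": "Smith"}, author_map))
```

The skipped-NULL rule in the loop is what later lets a partially filled row round-trip: missing triples correspond to NULL attributes rather than to errors.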
One triple identifies the entity that is represented by this row as an instance of the class the corresponding table is mapped to. Then, there is in general one triple for each table attribute that relates the instance to a data value or another instance (e.g., foreign keys). Link tables are used in RDBs to describe N:M relationships among relations. In RDF, such auxiliary constructs are not needed, which is why R3M features explicit support to map these tables to object properties instead of classes. The root element of a mapping in R3M is called DatabaseMap (Listing 1). It abstractly represents the database and contains information for the mediator (lines 2 to 5). Optionally, a URI prefix can be specified (line 6) that is used to generate the instance URIs of all the classes defined in the mapping. The URI of an instance is composed of two parts, the mapping-wide URI prefix defined here and an individual URI pattern defined in each TableMap. The main purpose of this mapping-wide URI prefix is to ease the definition of mappings, similar to the prefix mechanism in XML Namespaces. Finally, all tables that belong to this database schema are listed as TableMaps (lines 7 to 12).

Listing 1: Example DatabaseMap

A TableMap represents the mapping of an individual database table (Listing 2). It contains the name of the table (line 2) and the ontology class it is mapped to (line 3). The URI pattern (line 4) is appended to the mapping-wide URI prefix to generate the instance URIs for this class, or overrides it if the pattern itself forms a valid URI (i.e., if it starts with http://, etc.). Attribute values from the database table can be included in the pattern by specifying the name of the attribute between double percentage signs. Typically, at least the primary key attributes are included in the URI pattern (e.g., %%id%% where id is the name of the primary key attribute).
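Expanding a URI pattern is a simple string substitution; the reverse direction, recovering the embedded attribute values from an instance URI, is what the translation algorithm relies on later. A possible implementation of this reverse matching, as an illustration rather than the OntoAccess code, using Python's re module:

```python
import re

def match_uri(uri, uri_prefix, uri_pattern):
    """Match an instance URI against prefix + pattern, extracting the
    attribute values embedded via %%name%% placeholders."""
    parts = re.split(r"%%(\w+)%%", uri_prefix + uri_pattern)
    # After the split, even indices hold literal text, odd indices hold
    # attribute names; escape the literals and capture the names.
    regex = "".join(re.escape(p) if i % 2 == 0 else f"(?P<{p}>.+)"
                    for i, p in enumerate(parts))
    m = re.fullmatch(regex, uri)
    return m.groupdict() if m else None


prefix = "http://example.org/db/"
print(match_uri(prefix + "author1", prefix, "author%%id%%"))     # {'id': '1'}
print(match_uri(prefix + "publisher7", prefix, "author%%id%%"))  # None
```

A non-match (None) signals that the subject URI does not belong to this TableMap, so the mediator must try the next one.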
A TableMap further contains a list of AttributeMaps (lines 5 to 10) which map attributes of this table to properties in the ontology.

Listing 2: Example TableMap

Each attribute of a database table is represented by an AttributeMap (Listing 3) that contains the name of the attribute in the database schema (line 2) as well as the name of the ontology property it is mapped to (line 3). Depending on the type or value of the attribute, the property can be an Object- or a DataProperty. This is reflected in the mapping vocabulary as either r3m:mapsToObjectProperty or r3m:mapsToDataProperty. Additionally, an AttributeMap includes information about constraints defined on the attribute (e.g., that it is a foreign key and the table it references; lines 4 and 5). In the current implementation, the following constraints are supported: r3m:PrimaryKey, r3m:ForeignKey, r3m:NotNull, and r3m:Default.

Listing 3: Example AttributeMap

A LinkTableMap is provided to map link tables to properties in the ontology (Listing 4). It specifies the name of the link table in the database (line 2) and the object property it is mapped to (line 3). A link table always contains two foreign key attributes that point to the tables of the N:M relationship. Therefore, a triple with the property representing this link table has a subject and an object mapped from two tables. The attribute pointing to the table of the subject is represented as the subject attribute (line 4) and the attribute pointing to the table of the object as the object attribute (line 5). They link to AttributeMaps that are not mapped to any property but record the names of the attributes and the tables they reference (e.g., Listing 5).

Listing 4: Example LinkTableMap

A basic R3M mapping can be generated automatically from the database schema if it explicitly provides information about foreign key relationships.
The only part of the mapping definition that cannot easily be automated is the assignment of domain ontology terms to the individual concepts in the database. However, (graphical) tool support can and will be provided to further decrease the user's effort in defining a mapping.

Listing 5: Example AttributeMap (not mapped)

5. SPARQL/UPDATE TO SQL DML

SPARQL [18] is the W3C recommendation of a query language for the Semantic Web. It is currently limited to read-only access to RDF data as it does not provide any means to insert, delete, or modify data. The Semantic Web community made efforts to close this gap, which led to the SPARQL/Update [19] proposal for an RDF data manipulation language. SPARQL/Update also serves as the basis for the update functionality in the relaunched W3C SPARQL working group (WG). The proposed version of SPARQL/Update consists of three update operations: (1) INSERT DATA (Listing 6) to insert new triples into an RDF graph; (2) DELETE DATA (Listing 7) to remove known triples from a graph; and (3) MODIFY (Listing 8) to delete and/or insert data based on triple templates that are matched against a triple pattern in a shared WHERE clause. The MODIFY operation basically corresponds to two SPARQL CONSTRUCT queries (with the same WHERE clause) where the resulting RDF triples get removed from and added to the data.
Angles and Gutierrez showed in [1] that SPARQL has the same expressive power as relational algebra and consequently that SPARQL can be fully translated to SQL. From these findings and the fact that SPARQL/Update is based on SPARQL it follows that SPARQL/Update is also fully translatable to SQL DML, albeit not directly as we will see later.

5.1 INSERT DATA / DELETE DATA

INSERT DATA and DELETE DATA operations consist of sets of triples that are either added to or removed from the existing data. Their translation to SQL is therefore very similar and differs mainly in the type of SQL statement that is generated. It is important for the understanding of the translation algorithm to recall how a database schema is mapped to an ontology: tables representing domain concepts are mapped to classes, while attributes and link tables are represented as ontology properties. We will use the INSERT DATA operation depicted in Listing 9 as an example to explain the translation algorithm (Algorithm 1).

Algorithm 1: RDF triples to SQL DML translation
1. subjectGroups ← groupTriples(triples)
2. for all subjectGroup in subjectGroups do
3.   table ← identifyTable(subjectGroup.getSubject())
4.   if check(subjectGroup, table) is true then
5.     sql ← generateSQL(subjectGroup, table)
6.     statements.add(sql)
7.   else
8.     error()
9.   end if
10. end for
11. sortedSql ← sortSQL(statements)
12. executeSQL(sortedSql)

In the first step (line 1), the triples need to be grouped according to equal subjects as these triples all represent data about the same entity and therefore target the same table. The triples in our example operation all use the same subject, hence this step returns one group containing all original triples. Each such group is then handled individually (line 2).
In step 2 (line 3), the table affected by this group of triples is identified through the URI of their subject. The subject URI in our example is http://example.org/db/author1. If we recall the mapping (cf. Listing 1 and Listing 2), we find that this URI matches the pattern http://example.org/db/author%%id%% and therefore identifies the table as author. Further, we can extract the value 1 for the primary key attribute id. Next, the validity of the request is checked in step three (line 4), i.e., it is tested whether the data in the request meets the constraints in the relational schema. For instance, in the case of an INSERT DATA operation a triple must be present containing a property for every corresponding database attribute that has a NotNull constraint but no DefaultValue. This requirement is trivially met in our INSERT DATA operation as it contains triples with properties matching every attribute of the author table. Step four (line 5) generates the respective SQL statement by looking up the properties in the corresponding TableMap of the current subject and then adding the attribute name as well as the value extracted from the triple's object to the SQL statement. In the example this means, for instance, that the property ont:team is looked up and matched to the team attribute (cf. Listing 3). The attribute name is added to the SQL statement together with the extracted value from the object, namely 5. The other triples are processed likewise. Steps 2 to 4 are repeated for each group of triples and the generated SQL statements are collected (line 6). After all groups are processed, in step five (line 11) the collected SQL statements are sorted according to the foreign key relationships among the affected tables. Although from a theoretical point of view this is not necessary if all statements are executed in the context of a single transaction, existing RDB systems check constraints such as referential integrity already during a transaction.
Consequently, executing the generated statements in an arbitrary order may result in the failure of the transaction, whereas their execution in the sorted order would succeed. Sorting in our example is trivial as there is only one SQL statement. The sixth and last step (line 12) executes the SQL statements in the previously generated sort order. All generated SQL statements that correspond to a single SPARQL/Update operation are executed within the context of one database transaction to ensure the atomicity of the SPARQL/Update operation. Listing 10 shows the translated SQL INSERT statement generated from our example SPARQL/Update INSERT DATA operation. **INSERT DATA.** The INSERT DATA operation of SPARQL/Update can be translated to SQL DML according to the algorithm described in the prior section. Depending on the state of the database, the translation results in either an INSERT INTO or an UPDATE SQL statement. The triple-oriented nature of RDF makes it possible to insert only minimal data about an entity with a first INSERT DATA operation (e.g., just the last name of an author) and later add more information with a second INSERT DATA (e.g., the first name and email address of said author). From the RDB perspective, this results first in a SQL INSERT statement that creates a new row in a database table for this entity with NULL values for all missing attributes (if this complies with the given constraints). The second INSERT DATA operation (with the additional data) translates to an SQL UPDATE statement that replaces the NULLs with actual values. This means that it has to be checked whether the entity already exists in the database, as this determines the type of the generated SQL statement. **DELETE DATA.** The SPARQL/Update DELETE DATA operation is translated according to Algorithm 1 as well. The translation of this operation can also result in two different types of SQL statements depending on the state of the database and the operation.
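The state-dependent choice of SQL statement type described above can be sketched as follows; `translate_insert_data` is a hypothetical helper (not the paper's implementation) that assumes the caller has already looked up any existing row by primary key:

```python
def translate_insert_data(table, pk, values, existing_row=None):
    """INSERT DATA maps to a SQL INSERT for a new entity, or to an
    UPDATE that fills in NULLs if a row for the entity already exists.

    values       : dict of column name -> value extracted from triples
    existing_row : the entity's current row, or None if absent
    """
    if existing_row is None:
        cols = ", ".join(values)
        vals = ", ".join(repr(v) for v in values.values())
        return f"INSERT INTO {table}({cols}) VALUES ({vals})"
    sets = ", ".join(f"{c} = {v!r}" for c, v in values.items())
    return f"UPDATE {table} SET {sets} WHERE id = {pk}"

# First request creates the row; a later one adds missing attributes.
sql1 = translate_insert_data("author", 6, {"lastname": "Hert"})
sql2 = translate_insert_data("author", 6, {"firstname": "Matthias"},
                             existing_row={"lastname": "Hert"})
```

The same existence check drives DELETE DATA, where the choice is between nulling out attributes and deleting the whole row.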
If the data in the operation represents only a subset of the data in the database, the operation is translated to a SQL UPDATE statement that sets all mentioned attributes to NULL (if this complies with the given constraints). Only if the data in the operation equals all remaining (i.e., non-NULL) data in the database is the resulting SQL statement a DELETE that removes the complete row from the database. Therefore, the tuple for the affected entity must be retrieved and analyzed during the translation.

```sql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX ont: <http://example.org/ontology#>
PREFIX ex: <http://example.org/db/>
INSERT DATA {
  ex:author6 foaf:title "Mr" ;
             foaf:firstName "Matthias" ;
             foaf:familyName "Hert" ;
             foaf:mbox <mailto:hert@ifi.uzh.ch> ;
             ont:team ex:team5 .
}
```
Listing 9: Example INSERT DATA operation

```sql
INSERT INTO author(id, title, firstname, lastname, email, team)
VALUES (6, 'Mr.', 'Matthias', 'Hert', 'hert@ifi.uzh.ch', 5);
```
Listing 10: Translated SQL INSERT statement

5.2 **MODIFY**

The MODIFY operation in SPARQL/Update cannot directly be translated to SQL as there is no equivalent statement in the SQL DML. MODIFY is an atomic combination of a delete and an insert that, in general, is not limited to replacing triples but can also add/remove arbitrary triples. In contrast, the UPDATE statement in SQL is limited to modifying existing data. However, the reuse of the SPARQL grammar in SPARQL/Update makes a translation in multiple steps possible. Algorithm 2 describes how the MODIFY operation is translated to SQL. We will use the MODIFY operation depicted in Listing 11 as an example to explain the algorithm. It replaces any email address of the author "Matthias Hert" with a new address (hert@example.com).

```
Algorithm 2 MODIFY to SQL DML translation
1: delete ← extractDelete(modify)
2: insert ← extractInsert(modify)
3: where ← extractWhere(modify)
4: select ← createSelect(where)
5: selectSQL ← translateSelect(select)
6: results ← executeSQL(selectSQL)
7: for all binding in results do
8:   deleteData ← createDeleteData(delete, binding)
9:   insertData ← createInsertData(insert, binding)
10:  deleteSQL ← translateDelete(deleteData)
11:  insertSQL ← translateInsert(insertData)
12:  executeSQL(deleteSQL, insertSQL)
13: end for
```

First, the MODIFY operation is separated into its individual parts: the DELETE, INSERT, and WHERE clauses (lines 1 to 3). The WHERE part is used to create a SPARQL SELECT query (line 4) that retrieves the data needed for the DELETE and INSERT templates. It is translated to SQL (line 5) and evaluated on the relational data (line 6). Based on the result bindings of that query, one DELETE DATA (line 8) and one INSERT DATA (line 9) operation are built for each binding (line 7) according to the DELETE and INSERT templates of the original MODIFY operation. In our example, the SELECT query returns just one result binding, namely ex:author6 for the variable ?x and mailto:hert@ifi.uzh.ch for ?mbox. Therefore, one DELETE DATA and one INSERT DATA operation are built based on that binding, as shown in Listing 12. These are then translated (lines 10 and 11) and executed (line 12) according to Algorithm 1 described in the previous sections.

```sql
MODIFY
DELETE { ?x foaf:mbox ?mbox . }
INSERT { ?x foaf:mbox <mailto:hert@example.com> . }
WHERE  { ?x rdf:type foaf:Person ;
            foaf:firstName "Matthias" ;
            foaf:family_name "Hert" ;
            foaf:mbox ?mbox . }
```
Listing 11: Example MODIFY operation

```sql
DELETE DATA { ex:author6 foaf:mbox <mailto:hert@ifi.uzh.ch> . }
INSERT DATA { ex:author6 foaf:mbox <mailto:hert@example.com> . }
```
Listing 12: Generated DELETE DATA and INSERT DATA operations

In many cases the MODIFY will actually represent a modification of data, or rather a replacement of triples. Then, one optimization is possible by omitting those DELETE DATA operations that have a corresponding INSERT DATA operation, i.e., where the triples differ only in their object. In these cases, the delete would set an attribute value to NULL and the insert would set the same attribute to a new value; the delete is therefore redundant and can be omitted.

6. PROTOTYPE IMPLEMENTATION

Based on our mapping language R3M and the SPARQL/Update to SQL DML translation algorithms described in the previous sections, we developed a prototype that mediates between SPARQL/Update requests and an RDB. Implemented as an HTTP endpoint, it allows clients to remotely manipulate the relational data. Incoming SPARQL/Update operations are parsed from the HTTP requests and forwarded to the translation module. There, the algorithm of Section 5.1 is used to generate equivalent SQL statements based on an R3M mapping definition. The translated operation is executed by the database engine and a confirmation or error message is returned to the translation module. This message is then converted to an RDF representation and sent back to the client. Currently, the implementation is limited to INSERT DATA and DELETE DATA operations, but support for MODIFY and SPARQL queries is under development. Also, a more powerful feedback protocol is planned that will provide semantically rich error information to the client. A future version of the prototype implementing these features will be released to the public.

7. FEASIBILITY STUDY

For a first evaluation of our approach we present a feasibility study based on the RDF schema and the domain ontology introduced in Section 3. Table 1 summarizes the mapping from tables and attributes of the database schema to classes and properties of the domain ontology.
The first column specifies the table and the corresponding class. For each table, column two lists the attributes and the properties they are mapped to. The publication table is mapped to foaf:Document. The attributes title and publisher are mapped to corresponding properties from DC, while year and type use properties from our own ontology ONT. The publisher table, as well as its attributes, is mapped to terms of our application-specific ontology. The author table is represented as foaf:Person. Its attributes are mapped to equivalent concepts from the FOAF vocabulary, with the exception of team, which uses a property from ONT. The table team is represented as the class foaf:Group, with its name attribute mapped to foaf:name and code to ont:teamCode. The publication_author table is a link table that represents the N:M relationship between publications and authors. Therefore, as described in Section 4, it is not mapped to a class but to the property dc:creator instead.

This mapping definition enables our mediation prototype to process SPARQL/Update operations. In the remainder of this section, we present example SPARQL/Update operations and the translated SQL statements as generated by our prototype. Listing 13 shows a simple SPARQL/Update INSERT DATA request that inserts data about a team. It affects only a single database table and is therefore translated to one SQL INSERT statement (Listing 14). Listing 15 depicts a more complex INSERT DATA request.

8. CONCLUSION AND FUTURE WORK

In this paper, we presented our approach OntoAccess that enables the manipulation of relational data via SPARQL/Update. We introduced the update-aware RDB to RDF mapping language R3M that captures additional information about the database schema, in particular about integrity constraints. This information enables the detection of update requests that are invalid from the RDB perspective.
Such requests cannot be executed by the database engine as they would violate integrity constraints of the database schema. The information can also be exploited to provide semantically rich feedback to the client: the causes for the rejection of a request and possible directions for improvement can be reported in an appropriate format.

Future work is planned for various aspects of OntoAccess. Further research needs to be done on bridging the conceptual gap between RDBs and the Semantic Web. Ontology-based write access to relational data creates completely new challenges on this topic compared to read-only approaches. The presence of schema constraints in the database can lead to the rejection of update requests that would otherwise be accepted by a native triple store. A feedback protocol that provides semantically rich information about the cause of a rejection and possible directions for improvement plays a major role in bridging this gap. Other database constraints such as assertions have to be evaluated as well to see if they can reasonably be supported in the mapping.

```sql
... VALUES (5, 'Software Engineering', 'SEAL');
... VALUES (4, 'inproceedings');
... VALUES (3, 'Springer');
... VALUES (12, 'Relational ...', 2009, 4, 3);
... VALUES (6, 'Mr.', 'Matthias', 'Hert', 'hert@ifi.uzh.ch', 5);
... VALUES (12, 6);
```
Listing 16: Translated SQL INSERT statements

```sql
DELETE DATA { ex:author6 foaf:mbox <mailto:hert@ifi.uzh.ch> . }
```
Listing 17: Example DELETE DATA operation

9. REFERENCES
CSE 512 - Data Visualization
**Visualization Tools**
Jeffrey Heer, University of Washington

How do people create visualizations?

**Chart Typology**
- Pick from a stock of templates
- Easy-to-use but limited expressiveness
- Prohibits novel designs, new data types

**Component Architecture**
- Permits more combinatorial possibilities
- Novel views require new operators, which requires software engineering

**Graphics APIs** Canvas, OpenGL, Processing

```java
// Processing fragment from the slide (excerpt; surrounding sketch omitted)
ey = y; size = s;
void update(int mx, int my) {
  angle = atan2(my-ey, mx-ex);
}
void display() {
  pushMatrix();
  translate(ex, ey);
  fill(255);
  ellipse(0, 0, size, size);
  rotate(angle);
  fill(153);
  ellipse(size/4, 0, size/2, size/2);
  popMatrix();
}
```

**Data State Model** [Chi 98]
Raw Data → Data Tables → Visual Structures → Views
Data Transformations → Visual Encodings → View Transformations

**Prefuse & Flare**
Operator-based toolkits for visualization design
Vis = (Input Data -> Visual Objects) + Operators
Prefuse (http://prefuse.org)
Flare (http://flare.prefuse.org)

**Chart Typologies** Excel, Google Charts
**Component Architectures** Prefuse, Flare, Improvise, VTK
**Graphics APIs** Canvas, OpenGL, Processing

# Data Sets: State Quick Facts
Uploaded By: zinggoat
Data Source: US Census Bureau
Tags: people census

<table>
<thead>
<tr><th></th><th></th><th></th><th></th><th></th><th></th><th></th><th></th></tr>
</thead>
<tbody>
<tr><td>1. Alabama</td><td>4557808</td><td>0.03</td><td>4447100</td><td>0.1</td><td>0.07</td><td>0.24</td><td>0.13</td></tr>
<tr><td>2. Alaska</td><td>663661</td><td>0.06</td><td>626932</td><td>0.14</td><td>0.08</td><td>0.29</td><td>0.06</td></tr>
<tr><td>3. Arizona</td><td>5939292</td><td>0.16</td><td>5130632</td><td>0.4</td><td>0.08</td><td>0.27</td><td>0.13</td></tr>
<tr><td>4. Arkansas</td><td>2779154</td><td>0.04</td><td>2673400</td><td>0.14</td><td>0.07</td><td>0.25</td><td>0.14</td></tr>
<tr><td>5. California</td><td>36132147</td><td>0.07</td><td>33871648</td><td>0.14</td><td>0.07</td><td>0.27</td><td>0.11</td></tr>
<tr><td>6. Colorado</td><td>4665177</td><td>0.08</td><td>4301261</td><td>0.31</td><td>0.07</td><td>0.26</td><td>0.1</td></tr>
<tr><td>7. Connecticut</td><td>3510297</td><td>0.03</td><td>3405565</td><td>0.04</td><td>0.06</td><td>0.24</td><td>0.14</td></tr>
<tr><td>8. Delaware</td><td>843524</td><td>0.08</td><td>783600</td><td>0.18</td><td>0.07</td><td>0.23</td><td>0.13</td></tr>
<tr><td>9. Florida</td><td>17789864</td><td>0.11</td><td>15982378</td><td>0.24</td><td>0.06</td><td>0.23</td><td>0.17</td></tr>
<tr><td>10. Georgia</td><td>9072576</td><td>0.11</td><td>8186453</td><td>0.26</td><td>0.08</td><td>0.26</td><td>0.1</td></tr>
<tr><td>11. Hawaii</td><td>1275194</td><td>0.05</td><td>1211537</td><td>0.09</td><td>0.07</td><td>0.24</td><td>0.14</td></tr>
<tr><td>12. Idaho</td><td>1429096</td><td>0.1</td><td>1293953</td><td>0.29</td><td>0.07</td><td>0.27</td><td>0.11</td></tr>
<tr><td>13. Illinois</td><td>12763371</td><td>0.03</td><td>12419293</td><td>0.09</td><td>0.07</td><td>0.26</td><td>0.12</td></tr>
</tbody>
</table>

**Choosing a visualization type for State Quick Facts**

*Analyze a text*
**Tag Cloud** How are you using your words? This enhanced tag cloud will show you the words' popularity in the given set of text. Learn more
**Wordle** Wordle is a toy for generating "word clouds" from text that you provide. The clouds give greater prominence to words that appear more frequently in the source text. Learn more
**Word Tree** See a branching view of how a word or phrase is used in a text.
Navigate the text by zooming and clicking. Learn more

*Compare a set of values*
**Bar Chart** How do the items in your data set stack up? A bar chart is a simple and recognizable way to compare values. You can display several sets of bars for multivariate comparisons. Learn more
**Block Histogram** This versatile chart lets you get a quick sense of how a single set of data is distributed. Each item in the data is an individually identifiable block. Learn more

Every Wednesday, when I get home from school, I have a piano lesson. My teacher is a very strict house. Her name is Hillary Clinton. Our piano is a Steinway Concert tree. And it has 88 cups. It also has a soft pedal and a/an Smiley pedal. When I have a lesson, I sit down on the piano Alberto and play for 16 minutes. I do scales to exercise my cats, and then I usually play a minuet by Johann Sebastian Washington. Teacher says I am a natural Haunted House and have a good musical leg. Perhaps when I get better I will become a concert vet and give a recital at Carnegie hospital.

> Most charting packages channel user requests into a **rigid array of chart types**. To atone for this lack of flexibility, they offer a kit of post-creation editing tools to return the image to what the user originally envisioned.
> **They give the user an impression of having explored data rather than the experience.**

Leland Wilkinson

**Chart Typologies** Excel, Many Eyes, Google Charts
**Visual Analysis Grammars** VizQL, ggplot2, Vega-Lite
**Component Architectures** Prefuse, Flare, Improvise, VTK
**Graphics APIs** Canvas, OpenGL, Processing

ggplot2:

```r
ggplot(diamonds, aes(x=price, fill=cut)) +
  geom_bar(position="dodge")
```

```r
qplot(long, lat, data = expo, geom = "tile", fill = ozone,
      facets = year ~ month) +
  scale_fill_gradient(low = "white", high = "black") + map
```

Observable Plot:

```javascript
Plot.plot({
  grid: true,
  facet: { data: athletes, y: "sex" },
  marks: [
    Plot.rectY(athletes, Plot.binX({y: "count"}, {x: "weight", fill: "sex"})),
    Plot.ruleY([0])
  ]
})
```

**Chart Typologies** Excel, Many Eyes, Google Charts
**Visual Analysis Grammars** VizQL, ggplot2, Vega-Lite
**Visualization Grammars** D3.js, Vega
**Component Architectures** Prefuse, Flare, Improvise, VTK
**Graphics APIs** Canvas, OpenGL, Processing
Ease-of-Use ↔ Expressiveness
# Visualization Building Blocks

<table>
<thead>
<tr><th>Block</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>Data</td><td>Input data to visualize</td></tr>
<tr><td>Transforms</td><td>Group, aggregate, stats, layout</td></tr>
<tr><td>Scales</td><td>Map data values to visual values</td></tr>
<tr><td>Guides</td><td>Axes &amp; legends visualize scales</td></tr>
<tr><td>Marks</td><td>Data-representative graphics</td></tr>
</tbody>
</table>

Marks: Area, Rect, Symbol, Image, Line, Text, Rule, Arc

**d3.js Data-Driven Documents**
Mike Bostock, Vadim Ogievetsky, Jeffrey Heer [TVCG 2011]
+ Jason Davies (geo, 2011–13) & Philippe Rivière (2016–)

**What is D3?**
1. A collection of reusable visualization utilities
   - **Data**: d3.csv, d3.json, ...
   - **Scales**: d3.scaleLinear, d3.scaleLog, ...
   - **Projections**: d3.geoPath, d3.geoMercator, ...
   - **Layout**: d3.tree, d3.treemap, d3.force, ...
   - **Interaction**: d3.brush, d3.zoom, ...
2. A tool for updating the browser’s Document Object Model (DOM) in response to input data
   - **Select:** query DOM content
   - **Join:** bind input data to DOM elements
   - **Update:** set DOM element properties
   - **Transition:** animate changes over time

**Why D3?**
- Enable highly custom visualization design
- Support animation and dynamic displays
- Support rich and varied interactions
- Interoperate via web standards (HTML, SVG, CSS)
- Avoid artificial limits: if a browser can do it, D3 should be able to take advantage of it

"the authors have undeniably helped to bring data visualization to the mainstream. [D3] is a cornerstone contribution to this conference specifically and more generally to the success of our field as a whole" (IEEE VIS 2021 Test of Time Award)

D3 "slingshotted the field into growth, diversification and creativity that has been unprecedented" and "changed how millions of data visualizations are created across newsrooms, websites, and personal portfolios" (Information is Beautiful 2022 Test of Time Award)

"Use D3 if you think it's perfectly normal to write a hundred lines of code for a bar chart." (Amanda Cox, Former Graphics Editor, NY Times)

**512 Paths to the White House**
Select a winner in the most competitive states below to see all the paths to victory available for either candidate.
**Obama has 431 ways to win** (84% of paths)
**Romney has 76 ways to win** (15% of paths)
If Obama wins Florida… If Romney wins Florida…
Florida, Ohio, North Carolina, Virginia, Wisconsin, Colorado, Iowa, Nevada, New Hampshire

**D3 Selections**
The core abstraction in D3 is a *selection*.
```javascript // Add and configure an SVG element (<svg width="500" height="300">) svg = d3.append("svg") .attr("width", 500) // set SVG width to 500px .attr("height", 300); // set SVG height to 300px ``` svg = d3.append("svg") .attr("width", 500) .attr("height", 300); <svg width="500" ...> </svg> D3 Selections The core abstraction in D3 is a selection. // Add and configure an SVG element (<svg width="500" height="300">) ```javascript svg = d3.append("svg") .attr("width", 500) // set SVG width to 500px .attr("height", 300); // set SVG height to 300px ``` // Select & update existing rectangles contained in the SVG element ```javascript svg.selectAll("rect") .attr("width", 100) // set rect widths to 100px .style("fill", "steelblue"); // set rect fill colors ``` Data svg.selectAll("rect") DOM <svg width="500" ...> ??? </svg> Data DOM <svg width="500" …> <rect ..></rect> <rect ..></rect> <rect ..></rect> <rect ..></rect> <rect ..></rect> <rect ..></rect> <rect ..></rect> <rect ..></rect> </svg> Data ```javascript svg.selectAll("rect") ``` DOM ```xml <svg width="500" …> <rect … /> <rect … /> <rect … /> <rect … /> <rect … /> </svg> ``` Data ```javascript svg.selectAll("rect") .attr("width", 100) .style("fill", "steelblue") ``` DOM ```html <svg width="500" ...> <rect width="100" style="fill: steelblue;" /> <rect width="100" style="fill: steelblue;" /> <rect width="100" style="fill: steelblue;" /> </svg> ``` Data Binding Selections can *bind* data and DOM elements. `values = [ {...}, {...}, {...}, ... ]; // input data as JS objects` Data Binding Selections can **bind** data and DOM elements. ```javascript values = [ {...}, {...}, {...}, ... ]; // input data as JS objects // Select SVG rectangles and bind them to data values. 
bars = svg.selectAll("rect.bars").data(values);
```

[Slide figures: five data values { cat: "a"…"e", value: … } shown beside the <svg width=500 …> DOM, stepping through selectAll("rect").data(values), the enter append, and the attribute assignments: bars.enter().append("rect").attr("x", d => xscale(d.cat)).attr("height", d => yscale(d.value)).]

Data Binding

Selections can **bind** data and DOM elements.

```javascript
values = [ {...}, {...}, {...}, ... ]; // input data as JS objects

// Select SVG rectangles and bind them to data values.
bars = svg.selectAll("rect.bars").data(values);

// What if the DOM elements don't exist yet? The *enter* set represents data
// values that do not yet have matching DOM elements.
bars.enter().append("rect").attr("class", "bars");

// What if data values are removed? The *exit* set is a selection of existing
// DOM elements who no longer have matching data values.
bars.exit().remove();
```

[Slide figures: values.filter(d => !['b', 'd'].includes(d.cat)) drops categories "b" and "d"; after re-binding, bars.exit() selects the two stale <rect class="bars"> elements and bars.exit().remove() deletes them, leaving three.]

The Data Join

```javascript
var s = d3.selectAll(...).data(...)
```

**ENTER** Data values without matching DOM elements: `s.enter().append(...)`
**UPDATE** Existing DOM elements, bound to valid data: `s`
**EXIT** DOM elements whose bound data has gone "stale": `s.exit()`

Data Binding

Selections can **bind** data and DOM elements.

```javascript
values = [ {...}, {...}, {...}, ... ]; // input data as JS objects

// Select SVG rectangles and bind them to data values.
bars = svg.selectAll("rect.bars").data(values)
  .join(
    enter => enter.append("rect"), // create new
    update => update,              // update current
    exit => exit.remove()          // remove outdated
  );
```

D3 Modules

Data Parsing / Formatting (JSON, CSV, …)
Shape Helpers (arcs, curves, areas, symbols, …)
Scale Transforms (linear, log, ordinal, …)
Color Spaces (RGB, HSL, LAB, …)
Animated Transitions (tweening, easing, …)
Geographic Mapping (projections, clipping, …)
Layout Algorithms (stack, pie, force, trees, …)
Interactive Behaviors (brush, zoom, drag, …)

Many of these correspond to future lecture topics!
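The enter/update/exit partition can be sketched without D3 or a DOM at all. The following is a plain-JavaScript model of the data join, not D3's implementation; the `dataJoin` helper and the key-based match are our assumptions (D3 joins by index unless you pass a key function):

```javascript
// A toy model of D3's data join: given existing "elements" (each holding a
// bound datum) and new data values, partition them into enter/update/exit.
function dataJoin(elements, values, key) {
  const bound = new Map(elements.map(el => [key(el.datum), el]));
  const incoming = new Set(values.map(key));
  return {
    enter: values.filter(d => !bound.has(key(d))),             // new data, no element yet
    update: values.filter(d => bound.has(key(d))),             // data with a matching element
    exit: elements.filter(el => !incoming.has(key(el.datum)))  // elements whose data went stale
  };
}

// Five bars exist; "b" and "d" are dropped from the data and "f" is added.
const elements = ["a", "b", "c", "d", "e"].map(cat => ({ datum: { cat } }));
const values = [{ cat: "a" }, { cat: "c" }, { cat: "e" }, { cat: "f" }];
const { enter, update, exit } = dataJoin(elements, values, d => d.cat);
// enter holds "f"; update holds "a", "c", "e"; exit holds the "b" and "d" elements.
```

The shape mirrors the slide: `s.enter()` corresponds to `enter`, `s` to `update`, and `s.exit()` to `exit`.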
Chart Typologies: Excel, Many Eyes, Google Charts
Visual Analysis Grammars: VizQL, ggplot2, Vega-Lite
Visualization Grammars: D3.js, Vega
Component Architectures: Prefuse, Flare, Improvise, VTK
Graphics APIs: Canvas, OpenGL, Processing

Administrivia

A2 Peer Reviews

You have been assigned two peer A2 submissions to review. For each:
• Try to determine which is earnest and which is deceptive
• Share a rationale for how you made this determination
• Share feedback using the "I Like / I Wish / What If" rubric

Assigned reviews will be posted on the A2 Peer Review page on Canvas, along with a link to a Google Form. You should submit two forms: one for each A2 peer review. Due by **Tue 4/23 11:59pm**.

I LIKE... Praise for design ideas and/or well-executed implementation details. Example: "I like the navigation through time via the slider; the patterns observed as one moves forward are compelling!"

I WISH... Constructive statements on how the design might be improved or further refined. Example: "I wish moving the slider caused the visualization to update immediately, rather than the current lag."

WHAT IF? Suggest alternative design directions, or even wacky half-baked ideas. Example: "What if we got rid of the slider and enabled direct manipulation navigation by dragging data points directly?"

A3: Interactive Prototype

Create an interactive visualization. Choose a driving question for a dataset and develop an appropriate visualization + interaction techniques, then deploy your visualization on the web. Due by 11:59pm on Monday, May 6. Work in project teams of 3-4 people.

Form A3 + Final Project Team

Form a **team of 3-4** for A3 and the Final Project. Submit the signup form by **Thu 4/25, 11:59pm**. If you do not have teammates, post on Ed about your interests/skills/project ideas! We will send out a reminder early next week.

Requirements

**Interactive.** You must implement interaction methods! However, this is not limited to selection / filtering / tooltips.
Also consider annotations or other narrative features to draw attention and provide additional context.

**Web-based.** D3/Vega-Lite are encouraged, but not required. Deploy to the web using GitHub pages.

**Write-up.** Provide design rationale.

Interactive Prototype Tips

Start now. It will take longer than you think.

Keep it simple. Choose a minimal set of interactions that enables users to explore and generate interesting insights. Do not feel obligated to convey everything about the data: focus on a compelling subset.

Promote engagement. How do your chosen interactions reveal interesting observations?

D3 Tutorial - In Class Thu Apr 25

D3.js Deep Dive led by Madeleine and Luke. Be sure to read the D3, Part 1 notebook ahead of time. We'll work through Part 2 in class. Bring your laptops and follow along in real-time.

Web Publishing Tutorial

On Zoom, led by Josh. Gain skills publishing projects to the web:
- Publish sites using GitLab pages
- Export Altair visualizations to HTML
- Learn dashboard publishing tools

A Visualization Tool Stack

Chart Typologies: Excel, Many Eyes, Google Charts
Visual Analysis Grammars: VizQL, ggplot2, Vega-Lite
Visualization Grammars: D3.js, Vega
Component Architectures: Prefuse, Flare, Improvise, VTK
Graphics APIs: Canvas, OpenGL, Processing

Charting Tools / Declarative Languages / Programming Toolkits

What is a Declarative Language?

Programming by describing **what**, not **how**. Separate **specification** *(what you want)* from **execution** *(how it should be computed)*. In contrast to imperative programming, where you must give explicit steps.

```javascript
d3.selectAll("rect")
  .data(my_data)
  .join("rect")
  .attr("x", d => xscale(d.foo))
  .attr("y", d => yscale(d.bar))
```

```sql
SELECT customer_id, customer_name, COUNT(order_id) as total
FROM customers
INNER JOIN orders
  ON customers.customer_id = orders.customer_id
GROUP BY customer_id, customer_name
HAVING COUNT(order_id) > 5
ORDER BY COUNT(order_id) DESC
```

Why Declarative Languages?

Faster iteration, less code, larger user base?

- Better visualization. *Smart defaults.*
- Reuse. *Write-once, then re-apply.*
- Performance. *Optimization, scalability.*
- Portability. *Multiple devices, renderers, inputs.*
- Programmatic generation. *Write programs which output visualizations. Automated search & recommendation.*

Interactive Data Exploration: Tableau, *Lyra, Voyager*
Visual Analysis Grammars: VizQL, ggplot2, *Vega-Lite*
Visualization Grammars: D3.js, *Vega*
Component Architectures: Prefuse, Flare, Improvise, VTK
Graphics APIs: Processing, OpenGL, Java2D

The Lyra Visualization Design Environment (VDE) alpha
Arvind Satyanarayan, Kanit "Ham" Wongsuphasawat, Jeffrey Heer
See also: Charticulator, Data Illustrator

Lyra: A Visualization Design Environment
Driving Shifts into Reverse by Hannah Fairfield, NYTimes

Lyra: A Visualization Design Environment

CHART Shewing at one view The Price of the Quarter of Wheat, & Wages of Labour by the Week from The Year 1565 to 1821, by William Playfair

Lyra: A Visualization Design Environment, based on the Railway Timetable by E. J. Marey

Lyra: A Visualization Design Environment (ZipScribble by Robert Kosara)

Voyager. Wongsuphasawat et al. *InfoVis'15, CHI'17*

Common exploration pitfalls:
- Overlook data quality issues
- Fixate on specific relationships
- Plus many other biases... [Heuer 1999, Kahneman 2011, ...]

Key Idea: Augment manual exploration with visualization recommendations sensitive to the user's current focus. The goal is to support systematic consideration of the data, without exacerbating false discovery.

To model a user's search frontier, we enumerate related Vega-Lite specifications, seeded by the user's current focus. Candidate charts are pruned and ranked using models of estimated perceptual effectiveness.

A Formal Design Space of Visualizations

Enumerate Vega-Lite specifications and transformations among them. Search the space using logic programming methods. [Kim et al. 2017]

Articulate Design Constraints

"Quantitative axes should include a zero baseline." When and how strongly should we apply this? How to balance with other such constraints? [Moritz et al. 2019]

Learn Design Trade-Offs from Data

Training Data: pairs of ranked visualizations. Features: violations of design constraints. Learning Algorithm: learning to rank with a linear SVM, where \(w\) is the weight vector of the soft constraints:

\[ \arg\max_w \sum_{i \in 0 \ldots k} w_i (u_i - v_i) \]

with \(v_i\) the number of violations of constraint \(i\) (and \(u_i\) the corresponding count for the other visualization in the pair). [Moritz et al. 2019]

Compared to other tools, over 4x more variable sets seen, and over 2x more interacted with. "related view suggestion accelerates exploration a lot."
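The "learning to rank with a linear SVM" idea above can be sketched as a tiny scorer: each candidate chart is featurized by how often it violates each soft design constraint, and a weight vector turns violation counts into a cost. The constraint names and weights below are invented stand-ins for illustration, not Draco's actual constraints:

```javascript
// Rank candidate charts by weighted soft-constraint violations:
// lower weighted cost ranks higher (hypothetical constraints/weights).
const weights = { zeroBaseline: 5, tooManyColors: 2, dualAxis: 8 };

function cost(violations) {
  // violations: { constraintName: count of violations }
  return Object.entries(violations)
    .reduce((sum, [name, count]) => sum + (weights[name] ?? 0) * count, 0);
}

function rank(charts) {
  // Sort a copy, cheapest (fewest weighted violations) first.
  return [...charts].sort((a, b) => cost(a.violations) - cost(b.violations));
}

const charts = [
  { id: "barRaisedBaseline", violations: { zeroBaseline: 1 } },               // cost 5
  { id: "lineDualAxis",      violations: { dualAxis: 1, tooManyColors: 1 } }, // cost 10
  { id: "plainBar",          violations: {} },                                // cost 0
];
// rank(charts) orders: plainBar, barRaisedBaseline, lineDualAxis
```

Learning then amounts to choosing the weights so that, across the training pairs, the preferred chart of each pair gets the lower cost.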
"I like that it shows me what fields to include in order to see a specific graph. Otherwise, I have to do a lot of trial and error and can't express what I wanted to see."

"These related views are so good but it's also spoiling that I start thinking less. I'm not sure if that's really a good thing."

Interactive Data Exploration: Tableau, *Lyra, Voyager*
Visual Analysis Grammars: VizQL, ggplot2, *Vega-Lite*
Visualization Grammars: D3.js, *Vega*
Component Architectures: Prefuse, Flare, Improvise, VTK
Graphics APIs: Processing, OpenGL, Java2D

Graphical Interfaces / Declarative Languages / Programming Toolkits
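The declarative-vs-imperative contrast running through the lecture can be made concrete in a few lines of plain JavaScript. This is a toy sketch; `yscale` stands in for a D3 scale function:

```javascript
// The same task two ways: compute scaled bar heights for values >= 5.
const values = [5, 7, 3, 4, 6];
const yscale = v => v * 10; // stand-in for a D3 linear scale

// Imperative: explicit steps spell out HOW to build the result.
const heightsImperative = [];
for (let i = 0; i < values.length; i++) {
  if (values[i] >= 5) {
    heightsImperative.push(yscale(values[i]));
  }
}

// Declarative: the chain states WHAT the result is; iteration is implicit,
// in the same spirit as a D3 selection chain or a SQL query.
const heightsDeclarative = values.filter(v => v >= 5).map(yscale);
// both: [50, 70, 60]
```

The declarative form leaves the execution strategy to the runtime, which is exactly the separation of specification from execution that systems like Vega-Lite exploit.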
Package ‘RealVAMS’ February 19, 2015

Type: Package
Title: Multivariate VAM Fitting
Version: 0.3-1
Date: 2014-09-25
Author: Andrew Karl, Jennifer Broatch, and Jennifer Green
Maintainer: Andrew Karl <akarl@asu.edu>
Description: The RealVAMS package fits a multivariate value-added model (VAM) (see Broatch and Lohr 2012) with normally distributed test scores and a binary outcome indicator. This material is based upon work supported by the National Science Foundation under grants DRL-1336027 and DRL-1336265.
License: GPL-2
Depends: R (>= 3.0.0), Matrix
Imports: numDeriv, Rcpp (>= 0.10.6)
LazyData: yes
ByteCompile: yes
NeedsCompilation: yes
LinkingTo: Rcpp, RcppArmadillo
Repository: CRAN
Date/Publication: 2014-11-01 07:19:08

R topics documented:
- RealVAMS-package
- example.outcome.data
- example.score.data
- RealVAMS
- R_mstep2
- vp_cp

RealVAMS-package Multivariate VAM Fitting

Description

The RealVAMS package fits a multivariate value-added model (VAM) (see Broatch and Lohr 2012) with normally distributed test scores and a binary outcome indicator. This material is based upon work supported by the National Science Foundation under grants DRL-1336027 and DRL-1336265.
Details

- **Package**: RealVAMS
- **Type**: Package
- **Version**: 0.3-1
- **Date**: 2014-09-25
- **License**: GPL-2

Author(s)

Andrew Karl, Jennifer Broatch, and Jennifer Green
Maintainer: Andrew Karl <akarl@asu.edu>

References

Examples

```r
data(example.score.data)
data(example.outcome.data)
# The next line exists to show that the function can run and that the package
# installed correctly
res.test<-RealVAMS(example.score.data,example.outcome.data,max.PQL.it=1,max.iter.EM=2,
  var.parm.hessian=FALSE)
# The next line (not run automatically) provides a full example of the function
## Not run: res<-RealVAMS(example.score.data,example.outcome.data)
```

---

**example.outcome.data** **Simulated Data**

### Description

A simulated data set used to illustrate the functionality of the package. This data set represents binary outcome measurements on 625 students (with one missing).

### Usage

```r
data(example.outcome.data)
```

### Format

A data frame with 624 observations. The data set contains the following 2 variables.

- `r` a numeric vector composed of 0's and 1's representing a binary outcome measured on students.
- `student` a numeric vector

### Details

The data set may be reproduced with the following code.
```r
set.seed(0)
library(MASS)
years <- 3
# teachers in each year
teachers <- 25
# students in each class
students <- 25
alpha <- .5
eta.stu.j <- mvrnorm(n=teachers*students, mu=c(0,0), Sigma=cbind(c(5,.2),c(.2,.1)))
eta.stu <- eta.stu.j[,1]
eta.stu.r <- eta.stu.j[,2]
z1 <- rep(1:teachers, each=students)
z2 <- sample(rep(1:teachers, each=students))
z3 <- sample(rep(1:teachers, each=students))
cont_var1 <- rnorm(students*teachers, 0, .5)
cont_var2 <- rnorm(students*teachers, 0, .5)
cont_var3 <- rnorm(students*teachers, 0, .5)
gam <- mvrnorm(n=teachers*years, mu=c(0,0), Sigma=cbind(c(5,.6),c(.6,.6)))
eps1 <- rnorm(students*teachers, 0, sqrt(5))
eps2 <- rnorm(students*teachers, 0, sqrt(5))
eps3 <- rnorm(students*teachers, 0, sqrt(5))
gam1 <- gam[seq(1,teachers),1]
gam2 <- gam[seq((teachers+1),(2*teachers)),1]
gam3 <- gam[seq((2*teachers+1),(3*teachers)),1]
```

### Examples

```r
data(example.outcome.data)
print(example.outcome.data[1,])
```

---

**example.score.data** **Simulated Data**

### Description

A simulated data set used to illustrate the functionality of the package. The data are simulated according to the VP model.

### Usage

```r
data(example.score.data)
```

### Format

A data frame with 1874 observations on 625 students over 3 years, with 25 teachers in each year. The data set contains the following 5 variables.

- `y` a numeric vector representing the student score
- `student` a numeric vector
- `year` a numeric vector
- `teacher` a numeric vector
- `cont_var` a numeric vector representing a continuous covariate

### Details

The data set may be reproduced with the following code.
```r
set.seed(0)
library(MASS)
# number of years: fixed at 3
years <- 3
# teachers in each year
teachers <- 25
# students in each class
students <- 25
alpha <- .5
eta.stu.j <- mvrnorm(n=teachers*students, mu=c(0,0), Sigma=cbind(c(5,.2),c(.2,.1)))
eta.stu <- eta.stu.j[,1]
eta.stu.r <- eta.stu.j[,2]
z1 <- rep(1:teachers, each=students)
z2 <- sample(rep(1:teachers, each=students))
z3 <- sample(rep(1:teachers, each=students))
cont_var1 <- rnorm(students*teachers, 0, .5)
cont_var2 <- rnorm(students*teachers, 0, .5)
cont_var3 <- rnorm(students*teachers, 0, .5)
gam <- mvrnorm(n=teachers*years, mu=c(0,0), Sigma=cbind(c(5,.6),c(.6,.6)))
eps1 <- rnorm(students*teachers, 0, sqrt(5))
eps2 <- rnorm(students*teachers, 0, sqrt(5))
eps3 <- rnorm(students*teachers, 0, sqrt(5))
gam1 <- gam[seq(1,teachers),1]
gam2 <- gam[seq((teachers+1),(2*teachers)),1]
gam3 <- gam[seq((2*teachers+1),(3*teachers)),1]
gam1.r <- gam[seq(1,teachers),2]
gam2.r <- gam[seq((teachers+1),(2*teachers)),2]
gam3.r <- gam[seq((2*teachers+1),(3*teachers)),2]
y1 <- 50 + eta.stu + gam1[z1] + cont_var1 + eps1
y2 <- eta.stu + gam1[z1]*alpha + gam2[z2] + cont_var2 + eps2
y3 <- 100 + eta.stu + gam1[z1]*alpha + gam2[z2]*alpha + gam3[z3] + cont_var3 + eps3
r1 <- rbinom(students*teachers, 1, pnorm(.1 + eta.stu.r + gam1.r[z1] + gam2.r[z2] + gam3.r[z3]))
student <- 1:(students*teachers)
year <- rep(1:3, each=students*teachers)
student2 <- as.data.frame(cbind(student, year, y))
vam_data2 <- as.data.frame(cbind(student=student2$student, vam_data2$year), 1)
vam_data2[] <- (vam_data2$year)
vam_data2.r <- as.data.frame(cbind(student, r=r1))
vam_data2.r <- vam_data2.r[-6,]
```

### Examples

```r
data(example.score.data)
print(example.score.data[1,])
```

---

**RealVAMS** **Multivariate VAM Fitting**

**Description**

The RealVAMS package fits a multivariate value-added model (VAM) (see Broatch and Lohr 2012) with normally distributed test scores and a binary outcome indicator. This material is based upon work supported by the National Science Foundation under grants DRL-1336027 and DRL-1336265.
The package fits continuous test score results jointly with a binary outcome in a multivariate generalized linear mixed model (see Broatch and Lohr (2012); Karl, Yang, and Lohr (2013); and Karl, Yang, and Lohr (2014)) using a pseudo-likelihood approximation.

**Usage**

```r
RealVAMS(score.data, outcome.data, persistence = "CP", school.effects = FALSE,
  REML = TRUE, score.fixed.effects = formula(~as.factor(year) + 0),
  outcome.fixed.effects = formula(~1), max.iter.EM = 10,
  outcome.family = binomial(link = "probit"), tol1 = 1e-07, max.PQL.it = 30,
  pconv = .Machine$double.eps*1e9, var.parm.hessian = TRUE, verbose = TRUE)
```

**Arguments**

- **score.data**: a data frame that contains at least a column "y" containing the student scores, a column "student" containing unique student ID's, a column "teacher" containing the teacher ID's, and a column "year" which contains the year (or semester, etc.) of the time period. The "y" and "year" variables need to be numeric. If other variables are to be included as fixed effects, they should also be included in score.data. See 'Note' for further discussion.

- **outcome.data**: a data frame that contains at least a column "r" containing the binary student outcomes (coded 0/1), and a column "student" containing unique student ID's. The student ID's should match those in score.data. If other variables are to be included as fixed effects, they should also be included in outcome.data.

- **persistence**: a character. Choices are "CP" or "VP", for complete and variable persistence of the teacher score effects, respectively. The teacher outcome effects are modeled with complete persistence, regardless of the selection here.

- **school.effects**: logical. If TRUE, correlated random school-level effects are fitted in the score and outcome response models.
For both responses, the school effects are fit with zero-persistence (a student's score in each year is associated with the current school attended, and their outcome is associated with the last school the student attended). The school ID should be included as a column schoolID in the score.data data frame.

- **REML**: logical. If TRUE, the pseudo-response is fit using REML. If FALSE, ML is used.

- **score.fixed.effects**: an object of class formula describing the structure of the fixed effects for the student scores. Categorical variables should be wrapped in an as.factor statement.

- **outcome.fixed.effects**: an object of class formula describing the structure of the fixed effects for the student outcomes. Categorical variables should be wrapped in an as.factor statement.

- **max.iter.EM**: numeric. The maximum number of EM iterations during each pseudo-likelihood iteration.

- **outcome.family**: an object of class family describing the assumed distribution of the response. Currently only "binomial" has been tested, though "poisson" should work as well.

- **tol1**: numeric. Convergence tolerance for the EM algorithm during each interior pseudo-likelihood iteration. The convergence criterion is specified under 'Details'.

- **max.PQL.it**: numeric. Maximum number of outer pseudo-likelihood iterations.

- **pconv**: numeric. Convergence criterion for outer pseudo-likelihood iterations. Compare to the PCONV option of SAS PROC GLIMMIX.

- **var.parm.hessian**: logical. If TRUE, the Hessian of the parameters in the error and random effects covariance matrices is calculated, providing standard errors for those parameters. Setting this option to FALSE will reduce the run time of the program: only standard errors for the fixed effects will be returned.

- **verbose**: logical. If TRUE, model information will be printed at each iteration.

Details

*The persistence option determines the type of persistence effects that are modeled.
The variable persistence model ("VP") assumes that teacher effects in future years are multiples of their effect in the current year (Lockwood et al. 2007). The multipliers in the VP model are called persistence parameters, and are estimated. By contrast, the complete persistence ("CP") model fixes the persistence parameters at 1 and 0 (Lockwood et al. 2007).

*Convergence is declared for each interior iteration when \((l_k - l_{k-1})/l_k < tol1\), where \(l_k\) is the log-likelihood at iteration \(k\).

*The model is linearized using a pseudo-likelihood approach (Wolfinger 1993) and the resulting multiple membership linear mixed model is estimated via an EM algorithm (Karl et al. 2012).

Value

RealVAMS returns an object of class RealVAMS

loglik the maximized log-likelihood at convergence of the EM algorithm. Warning: Likelihood-ratio tests are not valid with results from a PQL estimation routine. teach.effects a data frame containing the predicted teacher effects and standard errors parameters a matrix of estimated model parameters and standard errors <table> <thead> <tr> <th>Variable</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Hessian</td> <td>the Hessian of the variance parameters</td> </tr> <tr> <td>R_i</td> <td>a matrix containing the error covariance matrix of a student. The bottom-right component corresponds to the variance of the binary response, and is fixed at 1.</td> </tr> <tr> <td>teach.cov</td> <td>a list containing the unique blocks of the covariance matrix of teacher effects (the G matrix)</td> </tr> <tr> <td>mresid</td> <td>a vector of the raw marginal residuals</td> </tr> <tr> <td>cresid</td> <td>a vector of the raw conditional residuals</td> </tr> <tr> <td>y</td> <td>a vector of the pseudo-responses from the final PQL iteration. 
The test scores will be the same as those given as an input, but the 0/1 responses for the binary distribution will be different.</td> </tr> <tr> <td>yhat</td> <td>a vector of the predicted values</td> </tr> <tr> <td>num.obs</td> <td>total number of observations (test scores and binary responses)</td> </tr> <tr> <td>num.student</td> <td>total number of students included in the data</td> </tr> <tr> <td>num.year</td> <td>number of years over which test scores were modeled</td> </tr> <tr> <td>num.teach</td> <td>a vector listing the number of teachers in each year</td> </tr> <tr> <td>persistence</td> <td>a character vector indicating the persistence structure (VP or CP) used to model the teacher test-score effects</td> </tr> <tr> <td>persistence_parameters</td> <td>a matrix of the persistence parameters. The (i,j)-th component gives the persistence parameter for year-j teachers on year-i scores.</td> </tr> <tr> <td>X</td> <td>the fixed effects design matrix</td> </tr> <tr> <td>Z</td> <td>the random effects design matrix</td> </tr> <tr> <td>G</td> <td>the random effects covariance matrix</td> </tr> <tr> <td>R.full</td> <td>the error covariance matrix, which is formed as the product diag(sqrt.w)%*%R%*%diag(sqrt.w). The matrix R assumes a variance of 1 for all of the binomial responses, while R.full includes the variance from the binomial distribution (in Wolfinger (1993), diag(sqrt.w) is called R_mu).</td> </tr> <tr> <td>sqrt.w</td> <td>vector of weights for the error covariance matrix. See the description for R.full above</td> </tr> </tbody> </table>

**Note**

The first few iterations of the EM algorithm will take longer than subsequent iterations. This is a result of the hybrid gradient-ascent/Newton-Raphson method used in the M-step for the R matrix in the first two iterations (Karl et al. 2012). The model assumes that each teacher teaches only one year.
If, for example, a teacher teaches in years 1 and 2, his/her first-year performance is modeled independently of the second-year performance. To keep these effects separate, the program appends "(year i)" to each teacher name, where i is the year in which the teacher taught.

The `fixed.effects` arguments of RealVAMS utilize the functionality of R's `formula` class. In the statement score.fixed.effects=formula(~as.factor(year)+cont_var+0), as.factor(year) identifies year as a categorical variable, +0 indicates that no intercept is to be fitted, and +cont_var indicates that a separate effect is to be fitted for the continuous variable "cont_var." An interaction between "year" and "cont_var" could be specified by ~as.factor(year)*cont_var+0, or equivalently, ~as.factor(year)+cont_var+as.factor(year):cont_var+0. See `formula` for more details.

Author(s)

Andrew Karl <akarl@asu.edu>, Jennifer Broatch, Jennifer Green

References

Examples

```r
data(example.score.data)
data(example.outcome.data)
# The next line exists to show that the function can run and that the package
# installed correctly
res.test <- RealVAMS(example.score.data, example.outcome.data, max.PQL.it = 1,
                     max.iter.EM = 2, var.parm.hessian = FALSE)
# The next line (not run automatically) provides a full example of the function
## Not run: res <- RealVAMS(example.score.data, example.outcome.data)
```

---

**R_mstep2**

**Internal function**

Description

An internal function

Usage

```r
R_mstep2(invsqrtW_, JYp_, loopsizes_, patternlength_, rownumber_, ybetas_, etahat_,
         tempmatR_, JXpi_, JXpp_, JXpx_, JXpdim_, JZpi_, JZpp_, JZpx_, JZpdim_)
```

Arguments
- invsqrtW_ an internal variable
- JYp_ an internal variable
- loopsizes_ an internal variable
- patternlength_ an internal variable
- rownumber_ an internal variable
- ybetas_ an internal variable
- etahat_ an internal variable
- tempmatR_ an internal variable
- JXpi_ an internal variable
- JXpp_ an internal variable
- JXpx_ an internal variable
- JXpdim_ an internal variable
- JZpi_ an internal variable
- JZpp_ an internal variable
- JZpx_ an internal variable
- JZpdim_ an internal variable

**vp_cp**

**Internal function**

Description

An internal function

Usage

vp_cp(Z_mat, B.mat, control)

Arguments
- Z_mat data frame
- B.mat data frame
- control a list

Index

*Topic datasets: example.outcome.data, example.score.data
*Topic package: RealVAMS-package
*Topic regression: RealVAMS

example.outcome.data, example.score.data, formula, R_mstep2, RealVAMS, RealVAMS-package, vp_cp
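The interior EM convergence criterion stated under 'Details' can be sketched as follows. This is an illustrative Python snippet, not part of the R package; the absolute value is an added guard, since log-likelihoods are typically negative.

```python
def interior_em_converged(ll_prev, ll_curr, tol1=1e-7):
    """Declare convergence when |(l_k - l_{k-1}) / l_k| < tol1."""
    return abs((ll_curr - ll_prev) / ll_curr) < tol1

# Two nearly identical log-likelihood values converge; a large jump does not.
assert interior_em_converged(-1234.5679, -1234.5678)
assert not interior_em_converged(-1300.0, -1234.5678)
```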
Lecture 4: Backpropagation and computation graphs

Lecture Plan
1. Matrix gradients for our simple neural net and some tips [15 mins]
2. Computation graphs and backpropagation [40 mins]
3. Stuff you should know [15 mins]
   a. Regularization to prevent overfitting
   b. Vectorization
   c. Nonlinearities
   d. Initialization
   e. Optimizers
   f. Learning rates

1. Derivative wrt a weight matrix

- Let's look carefully at computing $\frac{\partial s}{\partial W}$.
- Using the chain rule again:

$$\frac{\partial s}{\partial W} = \frac{\partial s}{\partial h} \frac{\partial h}{\partial z} \frac{\partial z}{\partial W}$$

$s = u^T h$, $h = f(z)$, $z = Wx + b$, $x = [x_{\text{museums}}, x_{\text{in}}, x_{\text{Paris}}, x_{\text{are}}, x_{\text{amazing}}]$

Deriving gradients for backprop

- For this function (following on from last time):
\[ \frac{\partial s}{\partial W} = \delta \frac{\partial z}{\partial W} = \delta \frac{\partial}{\partial W} (Wx + b) \]
- Let's consider the derivative of a single weight \( W_{ij} \): \( W_{ij} \) only contributes to \( z_i \)
- For example: \( W_{23} \) is only used to compute \( z_2 \), not \( z_1 \)
\[ \frac{\partial z_i}{\partial W_{ij}} = \frac{\partial}{\partial W_{ij}} \left( W_{i\cdot} x + b_i \right) = \frac{\partial}{\partial W_{ij}} \sum_{k=1}^{d} W_{ik} x_k = x_j \]

Deriving gradients for backprop

- So for the derivative of a single $W_{ij}$:
\[ \frac{\partial s}{\partial W_{ij}} = \delta_i x_j \]
- We want the gradient for the full $W$ – but each case is the same
- Overall answer: an outer product:
\[ \frac{\partial s}{\partial W} = \delta^T x^T \]
\[ [n \times m] = [n \times 1] \times [1 \times m] \]

Deriving gradients: Tips

- **Tip 1**: Carefully define your variables and keep track of their dimensionality!
- **Tip 2**: Chain rule!
If \( y = f(u) \) and \( u = g(x) \), i.e., \( y = f(g(x)) \), then:
\[ \frac{\partial y}{\partial x} = \frac{\partial y}{\partial u} \cdot \frac{\partial u}{\partial x} \]
Keep straight what variables feed into what computations
- **Tip 3**: For the top softmax part of a model: first consider the derivative wrt \( f_c \) when \( c = y \) (the correct class), then consider the derivative wrt \( f_c \) when \( c \neq y \) (all the incorrect classes)
- **Tip 4**: Work out element-wise partial derivatives if you're getting confused by matrix calculus!
- **Tip 5**: Use the Shape Convention. Note: the error message \( \delta \) that arrives at a hidden layer has the same dimensionality as that hidden layer

Deriving gradients wrt words for window model

- The gradient that arrives at and updates the word vectors can simply be split up for each word vector:
- Let \( \nabla_{x_{window}} J = W^T \delta = \delta_{window} \)
- With \( x_{window} = [x_{museums} \ x_{in} \ x_{Paris} \ x_{are} \ x_{amazing}] \)
- We have
\[ \delta_{window} = \begin{bmatrix} \nabla x_{museums} \\ \nabla x_{in} \\ \nabla x_{Paris} \\ \nabla x_{are} \\ \nabla x_{amazing} \end{bmatrix} \in \mathbb{R}^{5d} \]

Updating word gradients in window model

- This will push word vectors around so that they will (in principle) be more helpful in determining named entities.
- For example, the model can learn that seeing \( x_{in} \) as the word just before the center word is indicative of the center word being a location.

A pitfall when retraining word vectors

- **Setting:** We are training a logistic regression classification model for movie review sentiment using single words.
  - In the training data we have "TV" and "telly"
  - In the testing data we have "television"
  - The pre-trained word vectors have all three similar
- **Question:** What happens when we update the word vectors?
• **Answer:**
  • Those words that are in the training data **move around**: "TV" and "telly"
  • Words **not** in the training data **stay where they were**: "television"
  • This can be bad!

So what should I do?

• **Question:** Should I use available "pre-trained" word vectors?
• **Answer:**
  • Almost always, yes!
  • They are trained on a huge amount of data, and so they will know about words not in your training data and will know more about words that are in your training data
  • Have 100s of millions of words of data? Okay to start random
• **Question:** Should I update ("fine tune") my own word vectors?
• **Answer:**
  • If you only have a small training data set, don't train the word vectors
  • If you have a large dataset, it probably will work better to train = update = fine-tune word vectors to the task

Backpropagation

We've almost shown you backpropagation: it's taking derivatives and using the (generalized) chain rule. Other trick: we **re-use** derivatives computed for higher layers in computing derivatives for lower layers so as to minimize computation

2.
Computation Graphs and Backpropagation

- We represent our neural net equations as a graph
  - Source nodes: inputs
  - Interior nodes: operations
  - Edges pass along the result of the operation

\[ s = u^T h \]
\[ h = f(z) \]
\[ z = Wx + b \]
\[ x \ (\text{input}) \]

(Figure: computation graph in which x, W, and b feed into z = Wx + b, then f, then s; "Forward Propagation" flows along the edges.)

Backpropagation

- Go backwards along edges
- Pass along gradients

Backpropagation: Single Node

- A node receives an "upstream gradient" \( \frac{\partial s}{\partial h} \); the goal is to pass on the correct "downstream gradient" \( \frac{\partial s}{\partial z} \)
- Each node has a **local gradient**: the gradient of its output with respect to its input, \( \frac{\partial h}{\partial z} \) for \( h = f(z) \)
- [downstream gradient] = [upstream gradient] x [local gradient] — chain rule!
\[ \frac{\partial s}{\partial z} = \frac{\partial s}{\partial h} \frac{\partial h}{\partial z} \]
- What about nodes with multiple inputs?
\[ z = Wx \]

Backpropagation: Single Node

- Multiple inputs \(\rightarrow\) multiple local gradients (e.g., \( z = Wx \) has a local gradient with respect to both \( W \) and \( x \))

An Example

\[ f(x, y, z) = (x + y) \max(y, z) \]
\[ x = 1, y = 2, z = 0 \]

Forward prop steps
\[ a = x + y = 3 \]
\[ b = \max(y, z) = 2 \]
\[ f = ab = 6 \]

Local gradients
\[ \frac{\partial a}{\partial x} = 1 \quad \frac{\partial a}{\partial y} = 1 \]
\[ \frac{\partial b}{\partial y} = \mathbf{1}(y > z) = 1 \quad \frac{\partial b}{\partial z} = \mathbf{1}(z > y) = 0 \]
\[ \frac{\partial f}{\partial a} = b = 2 \quad \frac{\partial f}{\partial b} = a = 3 \]

upstream * local = downstream, so
\[ \frac{\partial f}{\partial x} = 2 \cdot 1 = 2, \quad \frac{\partial f}{\partial z} = 3 \cdot 0 = 0 \]

Gradients sum at outward branches
\[ \frac{\partial f}{\partial y} = \frac{\partial f}{\partial a} \frac{\partial a}{\partial y} + \frac{\partial f}{\partial b} \frac{\partial b}{\partial y} = 2 \cdot 1 + 3 \cdot 1 = 5 \]

Node Intuitions (for \( f(x, y, z) = (x + y) \max(y, z) \) at \( x = 1, y = 2, z = 0 \))

- + "distributes" the upstream gradient to each summand
- max "routes" the upstream gradient
- * "switches" the upstream gradient

Efficiency: compute all gradients at once

- Incorrect way of doing backprop:
  - First compute $\frac{\partial s}{\partial b}$
  - Then independently compute $\frac{\partial s}{\partial W}$
  - Duplicated computation!
- Correct way:
  - Compute all the gradients at once
  - Analogous to using $\delta$ when we computed gradients by hand

\[ s = u^T h, \quad h = f(z), \quad z = Wx + b, \quad x \text{ (input)} \]

1. **Fprop**: visit nodes in topological sort order
   - Compute value of node given predecessors
2.
**Bprop**:
- initialize output gradient = 1
- visit nodes in reverse order: compute the gradient wrt each node using the gradient wrt its successors

\[ \{y_1, y_2, \ldots, y_n\} = \text{successors of } x \]
\[ \frac{\partial z}{\partial x} = \sum_{i=1}^{n} \frac{\partial z}{\partial y_i} \frac{\partial y_i}{\partial x} \]

Done correctly, the big O() complexity of fprop and bprop is **the same**

In general our nets have regular layer-structure and so we can use matrices and Jacobians...

Automatic Differentiation

- The gradient computation can be automatically inferred from the symbolic expression of the fprop
- Each node type needs to know how to compute its output and how to compute the gradient wrt its inputs given the gradient wrt its output
- Modern DL frameworks (TensorFlow, PyTorch, etc.) do backpropagation for you but mainly leave the layer/node writer to hand-calculate the local derivative

Backprop Implementations

```python
class ComputationalGraph(object):
    # ...
    def forward(self, inputs):
        # 1. [pass inputs to input gates...]
        # 2. forward the computational graph:
        for gate in self.graph.nodes_topologically_sorted():
            gate.forward()
        return loss  # the final gate in the graph outputs the loss

    def backward(self):
        for gate in reversed(self.graph.nodes_topologically_sorted()):
            gate.backward()  # little piece of backprop (chain rule applied)
        return inputs_gradients
```

Implementation: forward/backward API

(x, y, z are scalars)

```python
class MultiplyGate(object):
    def forward(self, x, y):
        z = x * y
        self.x = x  # must keep these around!
        self.y = y
        return z

    def backward(self, dz):
        dx = self.y * dz  # [dz/dx * dL/dz]
        dy = self.x * dz  # [dz/dy * dL/dz]
        return [dx, dy]
```

Gradient checking: Numeric Gradient

- For small \( h \) (\(\approx 10^{-4}\)), \( f'(x) \approx \frac{f(x + h) - f(x - h)}{2h} \)
- Easy to implement correctly
- But approximate and **very** slow: have to recompute \( f \) for **every parameter** of our model
- Useful for checking your implementation
- In the old days when we hand-wrote everything, it was key to do this everywhere; now much less needed, when throwing together layers

Summary

- We've mastered the core technology of neural nets!!!
- Backpropagation: recursively apply the chain rule along the computation graph
  - [downstream gradient] = [upstream gradient] x [local gradient]
- Forward pass: compute results of operations and save intermediate values
- Backward pass: apply chain rule to compute gradients

Why learn all these details about gradients?

- Modern deep learning frameworks compute gradients for you
- But why take a class on compilers or systems when they are implemented for you? Understanding what is going on under the hood is useful!
- Backpropagation doesn't always work perfectly; understanding why is crucial for debugging and improving models
- See Karpathy article (in syllabus): https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b
- Example in future lecture: exploding and vanishing gradients

3. We have models with many params! Regularization!
- Really a full loss function in practice includes regularization over all parameters $\theta$, e.g., L2 regularization:

$$J(\theta) = \frac{1}{N} \sum_{i=1}^{N} - \log \left( \frac{e^{f_{y_i}}}{\sum_{c=1}^{C} e^{f_{c}}} \right) + \lambda \sum_{k} \theta_{k}^2$$

- Regularization (largely) prevents overfitting when we have a lot of features (or later a very powerful/deep model, ++)

"Vectorization"

- E.g., looping over word vectors versus concatenating them all into one large matrix and then multiplying the softmax weights with that matrix:

```python
from numpy import random
N = 500  # number of windows to classify
d = 300  # dimensionality of each window
C = 5    # number of classes
W = random.rand(C, d)
wordvectors_list = [random.rand(d, 1) for i in range(N)]
wordvectors_one_matrix = random.rand(d, N)
%timeit [W.dot(wordvectors_list[i]) for i in range(N)]
%timeit W.dot(wordvectors_one_matrix)
```

- 1000 loops, best of 3: 639 µs per loop
- 10000 loops, best of 3: 53.8 µs per loop
- The (10x) faster method is using a C x N matrix
- Always try to use vectors and matrices rather than for loops!
- You should speed-test your code a lot too!!
- tl;dr: Matrices are awesome!!!
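The centered-difference gradient check described in the lecture can be used to validate hand-derived gradients like \( \delta \). A minimal NumPy sketch; the toy network \( s = u^T \tanh(Wx + b) \) and all variable names here are illustrative:

```python
import numpy as np

def numeric_grad(f, x, h=1e-4):
    """Centered-difference estimate of df/dx, one component at a time."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    for _ in it:
        i = it.multi_index
        old = x[i]
        x[i] = old + h
        fp = f(x)           # f(x + h) in this coordinate
        x[i] = old - h
        fm = f(x)           # f(x - h) in this coordinate
        x[i] = old          # restore
        grad[i] = (fp - fm) / (2 * h)
    return grad

# Toy network s = u^T tanh(Wx + b); check the analytic gradient wrt b.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
u = rng.normal(size=3)
x = rng.normal(size=4)

s = lambda b: u @ np.tanh(W @ x + b)
analytic = u * (1 - np.tanh(W @ x + b) ** 2)  # delta = u elementwise f'(z)
assert np.allclose(numeric_grad(s, b), analytic, atol=1e-6)
```

As the slide warns, this recomputes \( f \) twice per parameter, so it is only practical as a spot check, not as a training method.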
Non-linearities: The starting points

logistic ("sigmoid")
\[ f(z) = \frac{1}{1 + \exp(-z)} \]

tanh
\[ f(z) = \tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}} \]

hard tanh
\[ \text{HardTanh}(x) = \begin{cases} -1 & \text{if } x < -1 \\ x & \text{if } -1 \leq x \leq 1 \\ 1 & \text{if } x > 1 \end{cases} \]

tanh is just a rescaled and shifted sigmoid (2x as steep, range [-1, 1]):
\[ \tanh(z) = 2\,\mathrm{logistic}(2z) - 1 \]

Both logistic and tanh are still used in particular uses, but are no longer the defaults for making deep networks

Non-linearities: The new world order

ReLU (rectified linear unit)
\[ \text{rect}(z) = \max(z, 0) \]

Variants: Leaky ReLU, Parametric ReLU

- For building a feed-forward deep network, the first thing you should try is ReLU — it trains quickly and performs well due to good gradient backflow

Parameter Initialization

- You normally must initialize weights to small random values, to avoid symmetries that prevent learning/specialization
- Initialize hidden layer biases to 0 and output (or reconstruction) biases to the optimal value if weights were 0 (e.g., mean target or inverse sigmoid of mean target)
- Initialize all other weights \( \sim \text{Uniform}(-r, r) \), with \( r \) chosen so numbers get neither too big nor too small
- Xavier initialization has variance inversely proportional to fan-in \( n_{in} \) (previous layer size) and fan-out \( n_{out} \) (next layer size):
\[ \text{Var}(W_i) = \frac{2}{n_{in} + n_{out}} \]

Optimizers

- Usually, plain SGD will work just fine
- However, getting good results will often require hand-tuning the learning rate (next slide)
- For more complex nets and situations, or just to avoid worry, you often do better with one of a family of more sophisticated "adaptive" optimizers that scale the parameter adjustment by an accumulated gradient
- These models give per-parameter learning rates:
  - Adagrad
  - RMSprop
  - Adam ← a fairly good, safe place to begin in many cases
  - SparseAdam
  - ...
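The Xavier variance rule above pins down the uniform range: for \( U(-r, r) \) the variance is \( r^2/3 \), so \( r = \sqrt{6/(n_{in} + n_{out})} \). A minimal NumPy sketch of this initialization plus a ReLU layer (function names are illustrative, not from any particular framework):

```python
import numpy as np

def xavier_init(n_in, n_out, rng=None):
    """Uniform(-r, r) weights with Var(W_ij) = 2 / (n_in + n_out)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # For U(-r, r), Var = r^2 / 3, so r = sqrt(6 / (n_in + n_out)).
    r = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-r, r, size=(n_out, n_in))

def relu(z):
    return np.maximum(z, 0.0)

W = xavier_init(n_in=300, n_out=100)
bias = np.zeros(100)  # hidden-layer biases initialized to 0, per the slide
h = relu(W @ np.ones(300) + bias)
```

With 30,000 sampled weights, the empirical variance of `W` should sit very close to 2/400 = 0.005.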
Learning Rates

- You can just use a constant learning rate. Start around $lr = 0.001$?
  - It must be order-of-magnitude right – try powers of 10
  - Too big: model may diverge or not converge
  - Too small: your model may not have trained by the deadline
- Better results can generally be obtained by allowing learning rates to decrease as you train
  - By hand: halve the learning rate every $k$ epochs
  - An epoch = a pass through the data (shuffled or sampled)
  - By a formula: $lr = lr_0 e^{-kt}$, for epoch $t$
  - There are fancier methods like cyclic learning rates (q.v.)
- Fancier optimizers still use a learning rate, but it may be an initial rate that the optimizer shrinks – so you may be able to start high
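The two decay schedules above can be sketched directly (a minimal illustration; the constants and the `every` parameter are arbitrary choices):

```python
import math

def lr_exponential(lr0, k, epoch):
    """Formula-based decay: lr = lr0 * exp(-k * epoch)."""
    return lr0 * math.exp(-k * epoch)

def lr_halving(lr0, epoch, every=3):
    """By-hand schedule: halve the learning rate every `every` epochs."""
    return lr0 * 0.5 ** (epoch // every)

# After 6 epochs with halving every 3 epochs, lr has been halved twice.
assert lr_halving(0.1, 6, every=3) == 0.1 * 0.25
```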
Altibase Challenges Oracle, IBM & Microsoft
Mature, battle-tested database is now open source

NEW YORK, Feb. 12, 2018 /PRNewswire/ -- On February 12, 2018, Altibase, an enterprise-grade relational database, announced that it is now open source. "The database industry is going open source - the trend is clear," says Altibase Chairman, Paul Nahm. "But for discerning and prudent enterprise clients with mission-critical applications, there is still one big question: Is there an open source database I can trust to be reliable 24/7? The answer, as of today, is yes: Altibase." "With Altibase open source, our customers can now have both an enterprise-grade database platform and a system they can keep and expand as their business and data demands grow," said Haeng-Gu Lee, CEO of Altibase. Altibase now joins an elite group of mission-critical open source database management systems, following in the footsteps of MySQL, PostgreSQL, and others, and is positioned to better serve enterprise customers looking for a database platform that offers both reliability and performance.

TODAY'S AGENDA
Type Representation
In-Memory Data Layout
Storage Models
System Catalogs

DATA ORGANIZATION
**Fixed-Length Data Blocks**
- Index
- Block Id + Offset
**Variable-Length Data Blocks**

DATA ORGANIZATION
One can think of an in-memory database as just a large array of bytes.
→ The schema tells the DBMS how to convert the bytes into the appropriate type.
→ Each tuple is prefixed with a header that contains its meta-data.
Storing tuples as fixed-length data makes it easy to compute the starting point of any tuple.
Mapping virtual memory pages to database pages.

MEMORY PAGES
The OS maps physical pages to virtual memory pages. The CPU's MMU maintains a TLB that contains the physical address of a virtual memory page.
→ The TLB resides in the CPU caches.
→ It obviously can't store every possible entry for a large memory machine.
When you allocate a block of memory, the allocator keeps it aligned to page boundaries.

TRANSPARENT HUGE PAGES
Maintain larger pages automatically (2MB to 1GB)
→ Each page has to be a contiguous block of memory.
→ Greatly reduces the # of TLB entries
With THP, the OS will reorganize pages in the background to keep things compact.
→ Split larger pages into smaller pages.
→ Combine smaller pages into larger pages.
→ Can cause the DBMS process to stall on memory access.
Almost every DBMS says to disable this feature:
→ Oracle, MemSQL, NuoDB, MongoDB, Sybase IQ
Source: Alexandr Nikitin

DATA REPRESENTATION
INTEGER/BIGINT/SMALLINT/TINYINT
→ C/C++ Representation
FLOAT/REAL vs. NUMERIC/DECIMAL
→ IEEE-754 Standard / Fixed-point Decimals
VARCHAR/VARBINARY/TEXT/BLOB
→ Pointer to other location if type is ≥64-bits
→ Header with length and address to next location (if segmented), followed by data bytes.
TIME/DATE/TIMESTAMP
→ 32/64-bit integer of (micro)seconds since Unix epoch

VARIABLE PRECISION NUMBERS
Inexact, variable-precision numeric type that uses the "native" C/C++ types. Stored directly as specified by IEEE-754. Typically faster than arbitrary precision numbers.
→ Example: FLOAT, REAL/Doubles

VARIABLE PRECISION NUMBERS

```c
#include <stdio.h>

int main(int argc, char* argv[]) {
    float x = 0.1;
    float y = 0.2;
    printf("x+y = %.20f\n", x + y);
    printf("0.3 = %.20f\n", 0.3);
    return 0;
}
```

Rounding Example Output
x+y = 0.30000001192092895508
0.3 = 0.29999999999999998890

FIXED PRECISION NUMBERS
Numeric data types with arbitrary precision and scale. Used when rounding errors are unacceptable.
→ Example: **NUMERIC, DECIMAL**
Typically stored in an exact, variable-length binary representation with additional meta-data.
→ Like a **VARCHAR** but not stored as a string

Demo…

POSTGRES: NUMERIC

typedef unsigned char NumericDigit;
typedef struct {
    int ndigits;           /* # of digits            */
    int weight;            /* weight of 1st digit    */
    int scale;             /* scale factor           */
    int sign;              /* positive/negative/NaN  */
    NumericDigit *digits;  /* digit storage          */
} numeric;

/* add_var() -
 * Full version of add functionality on variable level (handling signs).
 * result might point to one of the operands too without danger.
 */
int
add_var(numeric *var1, numeric *var2, numeric *result)
{
    /* Decide on the signs of the two variables what to do */
    if (var1->sign == NUMERIC_POS)
    {
        if (var2->sign == NUMERIC_POS)
        {
            /* Both are positive: result = +(ABS(var1) + ABS(var2)) */
            if (add_abs(var1, var2, result) != 0)
                return -1;
            result->sign = NUMERIC_POS;
        }
        else
        {
            /* var1 is positive, var2 is negative: must compare absolute values */
            switch (cmp_abs(var1, var2))
            {
                case 0:
                    /* ABS(var1) == ABS(var2), result = ZERO */
                    zero_var(result);
                    result->rscale = Max(var1->rscale, var2->rscale);
                    result->dscale = Max(var1->dscale, var2->dscale);
                    break;
                case 1:
                    /* ABS(var1) > ABS(var2), result = +(ABS(var1) - ABS(var2)) */
                    if (sub_abs(var1, var2, result) != 0)
                        return -1;
                    result->sign = NUMERIC_POS;
                    break;
                case -1:
                    /* ABS(var1) < ABS(var2), result = -(ABS(var2) - ABS(var1)) */
                    if (sub_abs(var2, var1, result) != 0)
                        return -1;
                    result->sign = NUMERIC_NEG;
                    break;
            }
        }
    }
    else
    {
        if (var2->sign == NUMERIC_POS)
        {
            /* var1 is negative, var2 is positive: must compare absolute values */
            switch (cmp_abs(var2, var1))
            {
                case 0:
                    /* ABS(var1) == ABS(var2), result = ZERO */
                    zero_var(result);
                    result->rscale = Max(var1->rscale, var2->rscale);
                    result->dscale = Max(var1->dscale, var2->dscale);
                    break;
                case 1:
                    /* ABS(var2) > ABS(var1), result = +(ABS(var2) - ABS(var1)) */
                    if (sub_abs(var2, var1, result) != 0)
                        return -1;
                    result->sign = NUMERIC_POS;
                    break;
                case -1:
                    /* ABS(var2) < ABS(var1), result = -(ABS(var1) - ABS(var2)) */
                    if (sub_abs(var1, var2, result) != 0)
                        return -1;
                    result->sign = NUMERIC_NEG;
                    break;
            }
        }
        else
        {
            /* Both are negative: result = -(ABS(var1) + ABS(var2)) */
            if (add_abs(var1, var2, result) != 0)
                return -1;
            result->sign = NUMERIC_NEG;
        }
    }
    return 0;
}

CREATE TABLE AndySux (id INT PRIMARY KEY, value BIGINT);

char[]
<table>
  <thead>
    <tr>
      <th>header</th>
      <th>id</th>
      <th>value</th>
    </tr>
  </thead>
</table>

reinterpret_cast<int32_t*>(address)

CREATE TABLE AndySux (value VARCHAR(1024));
INSERT INTO AndySux VALUES ("Andy has the worst hygiene that I have ever seen.
I hate him so much.");

Variable-Length Data Blocks
Andy has the worst hygiene that I have ever seen. I hate him so much.
Andy | 64-BIT POINTER

NULL DATA TYPES
Choice #1: Special Values
→ Designate a value to represent NULL for a particular data type (e.g., INT32_MIN).
Choice #2: Null Column Bitmap Header
→ Store a bitmap in the tuple header that specifies what attributes are null.
Choice #3: Per Attribute Null Flag
→ Store a flag that marks that a value is null.
→ Has to use more space than just a single bit because this messes up word alignment.

NULL DATA TYPES
Integer Numbers
<table>
  <thead>
    <tr> <th>Data Type</th> <th>Size</th> <th>Size (Not Null)</th> <th>Synonyms</th> <th>Min Value</th> <th>Max Value</th> </tr>
  </thead>
  <tbody>
    <tr> <td>BOOL</td> <td>2 bytes</td> <td>1 byte</td> <td>BOOLEAN</td> <td>0</td> <td>1</td> </tr>
    <tr> <td>BIT</td> <td>9 bytes</td> <td>8 bytes</td> <td></td> <td></td> <td></td> </tr>
    <tr> <td>TINYINT</td> <td>2 bytes</td> <td>1 byte</td> <td></td> <td>-128</td> <td>127</td> </tr>
    <tr> <td>SMALLINT</td> <td>4 bytes</td> <td>2 bytes</td> <td></td> <td>-32768</td> <td>32767</td> </tr>
    <tr> <td>MEDIUMINT</td> <td>4 bytes</td> <td>3 bytes</td> <td></td> <td>-8388608</td> <td>8388607</td> </tr>
    <tr> <td>INT</td> <td>8 bytes</td> <td>4 bytes</td> <td>INTEGER</td> <td>-2147483648</td> <td>2147483647</td> </tr>
    <tr> <td>BIGINT</td> <td>12 bytes</td> <td>8 bytes</td> <td></td> <td>-2 ** 63</td> <td>(2 ** 63) - 1</td> </tr>
  </tbody>
</table>

The truth is that you only need to worry about word alignment for cache lines (e.g., 64 bytes). I'm going to show you the basic idea using 64-bit words since it's easier to see...

WORD-ALIGNED TUPLES
All attributes in a tuple must be word aligned to enable the CPU to access it without any unexpected behavior or additional work.
CREATE TABLE AndySux (
  id INT PRIMARY KEY,
  cdate TIMESTAMP,
  color CHAR(2),
  zipcode INT
);

char[] layout: id | cdate | c | zipc across four 64-bit words (id is only 32 bits).

WORD-ALIGNED TUPLES
If the CPU fetches a 64-bit value that is not word-aligned, it has three choices:
→ Execute two reads to load the appropriate parts of the data word and reassemble them.
→ Read some unexpected combination of bytes assembled into a 64-bit word.
→ Throw an exception.
Source: Levente Kurusa

STORAGE MODELS
N-ary Storage Model (NSM)
Decomposition Storage Model (DSM)
Hybrid Storage Model

N-ARY STORAGE MODEL (NSM)
The DBMS stores all of the attributes for a single tuple contiguously. Ideal for OLTP workloads where txns tend to operate only on an individual entity, and for insert-heavy workloads. Use the tuple-at-a-time iterator model.
Choice #1: Heap-Organized Tables
→ Tuples are stored in blocks called a heap.
→ The heap does not necessarily define an order.
Choice #2: Index-Organized Tables
→ Tuples are stored in the primary key index itself.
→ Not quite the same as a clustered index.

N-ARY STORAGE MODEL (NSM)
Advantages
→ Fast inserts, updates, and deletes.
→ Good for queries that need the entire tuple.
→ Can use index-oriented physical storage.
Disadvantages
→ Not good for scanning large portions of the table and/or a subset of the attributes.

DECOMPOSITION STORAGE MODEL (DSM)
The DBMS stores a single attribute for all tuples contiguously in a block of data.
→ Sometimes also called *vertical partitioning*.
Ideal for OLAP workloads where read-only queries perform large scans over a subset of the table's attributes. Use the vector-at-a-time iterator model.

DECOMPOSITION STORAGE MODEL (DSM)
1970s: Cantor DBMS
1980s: DSM Proposal
1990s: SybaseIQ (in-memory only)
2000s: Vertica, Vectorwise, MonetDB
2010s: "The Big Three" Cloudera Impala, Amazon Redshift, SAP HANA, MemSQL

TUPLE IDENTIFICATION
Choice #1: Fixed-length Offsets
→ Each value is the same length for an attribute.
Choice #2: Embedded Tuple Ids
→ Each value is stored with its tuple id in a column.
Offsets
<table>
  <thead>
    <tr> <th></th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> </tr>
  </thead>
  <tbody>
    <tr> <td>0</td> <td></td> <td></td> <td></td> <td></td> </tr>
    <tr> <td>1</td> <td></td> <td></td> <td></td> <td></td> </tr>
    <tr> <td>2</td> <td></td> <td></td> <td></td> <td></td> </tr>
    <tr> <td>3</td> <td></td> <td></td> <td></td> <td></td> </tr>
  </tbody>
</table>

Embedded Ids
<table>
  <thead>
    <tr> <th></th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> </tr>
  </thead>
  <tbody>
    <tr> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr>
    <tr> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> </tr>
    <tr> <td>2</td> <td>2</td> <td>2</td> <td>2</td> <td>2</td> </tr>
    <tr> <td>3</td> <td>3</td> <td>3</td> <td>3</td> <td>3</td> </tr>
  </tbody>
</table>

DECOMPOSITION STORAGE MODEL (DSM)
Advantages
→ Reduces the amount of wasted work because the DBMS only reads the data that it needs.
→ Better compression.
Disadvantages
→ Slow for point queries, inserts, updates, and deletes because of tuple splitting/stitching.

OBSERVATION
Data is "hot" when it is first entered into the database.
→ A newly inserted tuple is more likely to be updated again in the near future.
As a tuple ages, it is updated less frequently.
→ At some point, a tuple is only accessed in read-only queries along with other tuples.
What if we want to use this data to make decisions that affect new txns?

BIFURCATED ENVIRONMENT
OLTP Data Silos → Extract, Transform, Load → OLAP Data Warehouse

HYBRID STORAGE MODEL
Single logical database instance that uses different storage models for hot and cold data. Store new data in NSM for fast OLTP. Migrate data to DSM for more efficient OLAP.

HYBRID STORAGE MODEL
Choice #1: Separate Execution Engines
→ Use separate execution engines that are optimized for either NSM or DSM databases.
Choice #2: Single, Flexible Architecture
→ Use a single execution engine that is able to efficiently operate on both NSM and DSM databases.
Run separate "internal" DBMSs that each only operate on DSM or NSM data.
→ Need to combine query results from both engines to appear as a single logical database to the application.
→ Have to use a synchronization method (e.g., 2PC) if a txn spans execution engines.
Two approaches to do this:
→ Fractured Mirrors (Oracle, IBM)
→ Delta Store (SAP HANA)

FRACTURED MIRRORS
Store a second copy of the database in a DSM layout that is automatically updated.
→ All updates are first entered in NSM, then eventually copied into the DSM mirror.

DELTA STORE
Stage updates to the database in an NSM table. A background thread migrates updates from the delta store and applies them to the DSM data.

CATEGORIZING DATA
Choice #1: Manual Approach
→ The DBA specifies what tables should be stored as DSM.
Choice #2: Off-line Approach
→ The DBMS monitors access logs offline and then makes a decision about what data to move to DSM.
Choice #3: On-line Approach
→ The DBMS tracks access patterns at runtime and then makes a decision about what data to move to DSM.

PELOTON ADAPTIVE STORAGE
Employ a single execution engine architecture that is able to operate on both NSM and DSM data.
→ Don't need to store two copies of the database.
→ Don't need to sync multiple database segments.
Note that a DBMS can still use the delta-store approach with this single-engine architecture.
PELOTON ADAPTIVE STORAGE
Original Data
<table>
  <thead>
    <tr> <th></th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> </tr>
  </thead>
  <tbody>
    <tr> <td><strong>Hot</strong></td> <td></td> <td></td> <td></td> <td></td> </tr>
    <tr> <td><strong>Cold</strong></td> <td></td> <td></td> <td></td> <td></td> </tr>
  </tbody>
</table>

```sql
UPDATE AndySux SET A = 123, B = 456, C = 789 WHERE D = "xxx"
SELECT AVG(B) FROM AndySux WHERE C = "yyy"
```

Adapted Data
<table>
  <thead>
    <tr> <th>A</th> <th>B</th> <th>C</th> <th>D</th> </tr>
  </thead>
  <tbody>
    <tr> <td></td> <td></td> <td></td> <td></td> </tr>
  </tbody>
</table>

TILE ARCHITECTURE
Introduce an indirection layer that abstracts the true layout of tuples from query operators.
<table>
  <thead>
    <tr> <th>H</th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> </tr>
  </thead>
  <tbody>
    <tr> <td>+</td> <td></td> <td></td> <td></td> <td></td> </tr>
    <tr> <td>+</td> <td></td> <td></td> <td></td> <td></td> </tr>
    <tr> <td>+</td> <td></td> <td></td> <td></td> <td></td> </tr>
  </tbody>
</table>
Tile Group Header
Tile #1  Tile #2  Tile #3  Tile #4
```sql
SELECT AVG(B) FROM AndySux WHERE C = "yyy"
```

PELOTON ADAPTIVE STORAGE
[Figure: execution time (ms) of the row, column, and adaptive layouts over alternating scan and insert workload phases, Sep-15 through Sep-20.]

PARTING THOUGHTS
A flexible architecture that supports a hybrid storage model is the next major trend in DBMSs.
→ This will enable relational DBMSs to support all database workloads except for matrices in machine learning.
Algebraic Reasoning About Timeliness

by Seyed Hossein HAERI\textsuperscript{1,2}, Peter W. THOMPSON\textsuperscript{1,3}, Peter VAN ROY\textsuperscript{4}, Magne HAVERAAEN\textsuperscript{2}, Neil J. DAVIES\textsuperscript{1,3}, Mikhail BARASH\textsuperscript{2}, Kevin HAMMOND\textsuperscript{1}, James CHAPMAN\textsuperscript{1}
(\textsuperscript{1}IOG, \textsuperscript{2}University of Bergen, \textsuperscript{3}PNSol, \textsuperscript{4}UCLouvain)
19 Jun 2023, 16\textsuperscript{th} Interaction and Concurrency Experience, NOVA University, Lisbon, Portugal

Introduction

Why predict performance?
* The weather forecast for today can't arrive tomorrow!
* Without performance prediction:
  * Performance issues are exposed late in the design cycle.
  * Either re-architect the design, with cost and delay, or
  * allocate excessive resources, with cost and inefficiency.
* With performance prediction:
  * Performance issues are exposed early in the design cycle.
  * Re-architect the design before time and money are spent, and
  * control resources, avoiding cost and inefficiency.

See [1, §1.1] for more:
* *Mind Your Outcomes: The \( \Delta QSD \) Paradigm for Quality-Centric Systems Development and Its Application to a Blockchain Case Study.* Computers 11(3): 45 (2022)

Why does IOG fund research on performance?
- Good starting point: Kevin Hammond's keynote at Lambda Days 2023, https://tinyurl.com/3t42t3wn
- IOG is a prominent blockchain company. https://iohk.io
- The effective operation of the Cardano network depends on a performance-aware design.
- The ΔQSD Team on Formalising Performance Aspects

Last Year's DisCoTec Tutorial by Peter VAN ROY
The $\Delta$QSD Paradigm for System Development, DisCoTec Tutorial, June 13, 2022
Peter Van Roy (Université catholique de Louvain), Neil Davies, Peter Thompson (Predictable Network Solutions Ltd.), Seyed Hossein Haeri (PLWorkz)
https://www.youtube.com/watch?v=iBYZEJZwKm0

What's timeliness?
Timeliness is delivering results within the required time bounds (sufficiently often).
- Cache Example
- Outcome Diagrams
- Outcome Expressions
- An Algebraic Perspective on Timeliness
- Where is the algebra?

Big Picture Block Diagram

Cache Example: Hit or Miss
Note:
- Outcomes: what the system gains by performing one of its tasks
- NOT system states
- NOT subsystems
- NOT classes/objects
- Probabilistic Choice (⇄)

Lookup from Main Memory
Note:
* Sequential Composition
* Left-to-Right Causality

Error Correction
[Outcome diagram: c-hit (95%) vs. c-miss (5%); the main-memory read fails (ECC fail) with probability $10^{-16}$.]

Timeout (1 of 3)
Time-Bounded Network Connection, Back & Forth

Timeout (2 of 3)
[Outcome diagram of the cache example with a timeout in parallel.]
Note:
* Any-to-Finish (∃)

Expression:
\[ \text{main} \xleftrightarrow[10^{-16}]{1} \bot \]
Note:
* "\(\bot\)" is for failure.

Expression:
\[ \text{net} \rightarrow\rightarrow (\text{main} \xleftrightarrow[10^{-16}]{1} \bot) \rightarrow\rightarrow \text{net} \]
Note:
* "\(\rightarrow\rightarrow\)" is for sequential composition.

Expression:
\[ (\text{net} \rightarrow\rightarrow (\text{main} \xleftrightarrow[10^{-16}]{1} \bot) \rightarrow\rightarrow \text{net}) \parallel_\exists \text{t-out} \]
Note:
* "\(\parallel_\exists\)" is for any-to-finish.

\[ \text{c-hit} \;[95\%]\!\xleftrightarrow{}\; \bigl(\text{c-miss} \rightarrow\rightarrow ((\text{net} \rightarrow\rightarrow (\text{main} \xleftrightarrow[10^{-16}]{1} \bot) \rightarrow\rightarrow \text{net}) \parallel_\exists \text{t-out})\bigr) \]

What's a $\Delta Q$?
- Quality Attenuation
- A Measure for Delay (and Failure)
- Represented using a Cumulative Distribution Function (CDF)
- Improper Random Variable (IRV) [2]

Timeliness Semantics

Definition (Haeri et al. [1]): Given a basic assignment \( \Delta_\circ[\![\cdot]\!] : \mathbb{B} \to \Delta \), define \( \Delta Q[\![\cdot]\!]_{\Delta_\circ} : \mathbb{O} \to \mathbb{I} \) such that
\[ \Delta Q[\![\beta]\!]_{\Delta_\circ} = \begin{cases} 1 & \text{when } \Delta_\circ[\![\beta]\!] \notin \mathbb{I} \\ \Delta_\circ[\![\beta]\!] & \text{otherwise} \end{cases} \]
\[ \Delta Q[\![o \rightarrow\rightarrow o']\!]_{\Delta_\circ} = \Delta Q[\![o]\!]_{\Delta_\circ} \ast \Delta Q[\![o']\!]_{\Delta_\circ} \]
\[ \Delta Q[\![o \xleftrightarrow[m']{m} o']\!]_{\Delta_\circ} = \tfrac{m}{m+m'}\, \Delta Q[\![o]\!]_{\Delta_\circ} + \tfrac{m'}{m+m'}\, \Delta Q[\![o']\!]_{\Delta_\circ} \]
\[ \Delta Q[\![o \parallel_\forall o']\!]_{\Delta_\circ} = \Delta Q[\![o]\!]_{\Delta_\circ} \times \Delta Q[\![o']\!]_{\Delta_\circ} \]
\[ \Delta Q[\![o \parallel_\exists o']\!]_{\Delta_\circ} = \Delta Q[\![o]\!]_{\Delta_\circ} + \Delta Q[\![o']\!]_{\Delta_\circ} - \Delta Q[\![o]\!]_{\Delta_\circ} \times \Delta Q[\![o']\!]_{\Delta_\circ} \]

» ΔQ of the Cache Example

Given
\[ \Delta_\circ \supseteq \{ \Delta Q_{\text{c-hit}}, \Delta Q_{\text{c-miss}}, \Delta Q_{\text{net}}, \Delta Q_{\text{main}}, \Delta Q_{\text{t-out}} \}, \]
one calculates:
\[ \Delta Q_{\text{obs}} = 0.95 \times \Delta Q_{\text{c-hit}} + 0.05 \times \bigl(\Delta Q_{\text{c-miss}} \ast (\Delta Q_{\text{mem}} + \Delta Q_{\text{t-out}} - \Delta Q_{\text{mem}} \times \Delta Q_{\text{t-out}})\bigr), \]
where
\[ \Delta Q_{\text{mem}} = \Delta Q_{\text{net}} \ast (1 - 10^{-16})\, \Delta Q_{\text{main}} \ast \Delta Q_{\text{net}}. \]

Timeliness for the Cache
\(\Delta Q_{\text{req}}\):
* 10% of queries up to 4ms
* 50% of queries up to 6ms
* 90% of queries up to 14ms
* 10% of queries never

Expression:
\[ \text{c-hit} \;[95\%]\!\xleftrightarrow{}\; (\text{c-miss} \rightarrow\rightarrow (\text{main} \xleftrightarrow[10^{-16}]{1} \bot)) \]

Algebraic Manipulation
\[ \text{c-hit} \;[95\%]\!\xleftrightarrow{}\; (\text{c-miss} \rightarrow\rightarrow (\text{main} \xleftrightarrow[10^{-16}]{1} \bot)) \]
\[ = \text{c-hit} \;[95\%]\!\xleftrightarrow{}\; ((\text{c-miss} \rightarrow\rightarrow \text{main}) \xleftrightarrow[10^{-16}]{1} \bot) \]
\[ = (\text{c-hit} \xleftrightarrow{} (\text{c-miss} \rightarrow\rightarrow \text{main})) \;[q]\!\xleftrightarrow{}\; \bot \]
where \( q = 1 - 0.05 \times 10^{-16} = 0.999999999999999995 \).
* 17 nines vs. the 9 nines of the Ericsson AXD301

Not a Guarantee for Success!
Just ruling out infeasibility with this level of information.

Benefit of Algebraic Manipulation
\( q = 1 - 0.05 \times 10^{-16} = 0.999999999999999995 \)
* What if we had already implemented the cache?
* Will simply throwing more hardware at it work?
* Re-architecture from scratch?
Algebraic Results

## Algebraic Structures

<table>
<thead>
<tr>
<th>\(\mathcal{O}\) with</th>
<th>Forms</th>
</tr>
</thead>
<tbody>
<tr>
<td>\(\rightleftharpoons\)</td>
<td>magma</td>
</tr>
<tr>
<td>\(\mathbin{\rightarrow\!\bullet}\)</td>
<td>commutative monoid with \(\top\) and \(\bot\) as the identity and absorbing elements</td>
</tr>
<tr>
<td>\(\parallel_\forall\)</td>
<td>commutative monoid with \(\top\) and \(\bot\) as the identity and absorbing elements</td>
</tr>
<tr>
<td>\(\parallel_\exists\)</td>
<td>commutative monoid with \(\bot\) and \(\top\) as the identity and absorbing elements</td>
</tr>
</tbody>
</table>

**Note:** Neither \(\parallel_\forall\) nor \(\parallel_\exists\) nor their combination forms the familiar richer algebraic structures.

Equivalences Containing Constant Outcomes

(Here \(\rightleftharpoons_{[p]}\) takes its left operand with probability \(p\).)

\[
\begin{align*}
\bot \mathbin{\rightarrow\!\bullet} o &= \bot & o \mathbin{\rightarrow\!\bullet} \bot &= \bot \\
\top \mathbin{\rightarrow\!\bullet} o &= o & o \mathbin{\rightarrow\!\bullet} \top &= o \\
\top \parallel_\forall o &= o & \bot \parallel_\forall o &= \bot \\
\bot \parallel_\exists o &= o & \top \parallel_\exists o &= \top
\end{align*}
\]

\[
\begin{align*}
o_1 \mathbin{\rightarrow\!\bullet} (o_2 \mathbin{\rightleftharpoons_{[p]}} \bot) &= (o_1 \mathbin{\rightarrow\!\bullet} o_2) \mathbin{\rightleftharpoons_{[p]}} \bot \\
(o_1 \mathbin{\rightleftharpoons_{[p]}} \bot) \mathbin{\rightarrow\!\bullet} o_2 &= (o_1 \mathbin{\rightarrow\!\bullet} o_2) \mathbin{\rightleftharpoons_{[p]}} \bot \\
(o_1 \mathbin{\rightleftharpoons_{[p]}} \top) \mathbin{\rightarrow\!\bullet} o_2 &= (o_1 \mathbin{\rightarrow\!\bullet} o_2) \mathbin{\rightleftharpoons_{[p]}} o_2 \\
o_1 \mathbin{\rightarrow\!\bullet} (o_2 \mathbin{\rightleftharpoons_{[p]}} \top) &= (o_1 \mathbin{\rightarrow\!\bullet} o_2) \mathbin{\rightleftharpoons_{[p]}} o_1 \\
o_1 \mathbin{\rightleftharpoons_{[p]}} (o_2 \mathbin{\rightleftharpoons_{[q]}} \top) &= o_2 \mathbin{\rightleftharpoons_{[q(1-p)]}} \left(o_1 \mathbin{\rightleftharpoons_{\left[\frac{p}{1-q(1-p)}\right]}} \top\right) \\
\bot \mathbin{\rightleftharpoons_{[p]}} (\bot \mathbin{\rightleftharpoons_{[q]}} o) &= \bot \mathbin{\rightleftharpoons_{[p+(1-p)q]}} o
\end{align*}
\]

ECC followed by a net failure is as timely as failure itself! The last two equivalences are exactly the ones seen at the algebraic manipulation of the cache example.
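The two probabilistic-choice equivalences with constant outcomes can be spot-checked numerically. A minimal sketch (my own, not the authors' code), working pointwise on CDF values at some fixed time t: ⊥ has CDF value 0, ⊤ has CDF value 1, and `f`, `g` stand for the CDF values of arbitrary outcomes o1, o2.

```python
# Pointwise spot-check (illustrative sketch, not the authors' code) of
# the two probabilistic-choice equivalences with constant outcomes.

def choice(p, left, right):
    """Probabilistic choice on CDF values: left with probability p."""
    return p * left + (1 - p) * right

BOT, TOP = 0.0, 1.0  # CDF values of the constant outcomes

for p in (0.1, 0.5, 0.9):
    for q in (0.2, 0.7):
        for f in (0.0, 0.3, 1.0):
            # bot [p]<-> (bot [q]<-> o)  =  bot [p+(1-p)q]<-> o
            lhs = choice(p, BOT, choice(q, BOT, f))
            rhs = choice(p + (1 - p) * q, BOT, f)
            assert abs(lhs - rhs) < 1e-12
            for g in (0.4, 0.8):
                # o1 [p]<-> (o2 [q]<-> top)
                #   =  o2 [q(1-p)]<-> (o1 [p/(1-q(1-p))]<-> top)
                lhs = choice(p, f, choice(q, g, TOP))
                rhs = choice(q * (1 - p), g,
                             choice(p / (1 - q * (1 - p)), f, TOP))
                assert abs(lhs - rhs) < 1e-12
```

Both sides agree on every sample point, which is what makes rewrites such as pulling ⊥ outwards in the cache expression sound with respect to timeliness.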
Distributivity

Let \(o_1, o_2, o_3 \in \mathcal{O}\) and \(\diamond \in \{\mathbin{\rightarrow\!\bullet}, \parallel_\forall, \parallel_\exists\}\). Then, timeliness-wise:
- \(time \models o_1 \diamond (o_2 \rightleftharpoons o_3) = (o_1 \diamond o_2) \rightleftharpoons (o_1 \diamond o_3)\), and
- \(time \models (o_1 \rightleftharpoons o_2) \diamond o_3 = (o_1 \diamond o_3) \rightleftharpoons (o_2 \diamond o_3)\).

Bad News! Only 3 Out of the Possible 15

Summary
* Formalisation of $\Delta QSD$: an ongoing project
* Algebraic Manipulations $\Rightarrow$ Tool Support
* Properisation
* Ordinary $\Delta Q[\ldots]$ doesn't work!
* The First IRV Body of Theorems Ever!

Questions? Thank you very much!
Source: http://www.discotec.org/2023/ice_slides/Algebraic_Reasoning_About_Timeliness.pdf
A USABILITY-BASED FRAMEWORK FOR ELECTRONIC GOVERNMENT SYSTEMS DEVELOPMENT Hafizah Yahya and Rozilawati Razali Centre of Software Technology and Management, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, UKM Bangi, Selangor, Malaysia E-Mail: hafizah.yahya@gmail.com ABSTRACT In the era of globalisation, governments around the world strive to provide the best electronic Government (e-Government) systems to their people. Although the performance of e-Government systems is improving over time, their usability is still unacceptable. One reason for this phenomenon is that most e-Government systems were developed without incorporating usability concerns into the development process. This study was therefore intended to identify the contributing factors that should be considered during the development process to ensure the usability of e-Government systems. Based on the identified factors, the study proposed a usability-based framework for e-Government systems development that comprises three aspects: environment, system development process and product quality attributes. The framework was formulated by combining qualitative data from both theoretical and empirical work. The former involved reviews of previous usability models and standards, namely Quality in Use Integrated Measurement (QUIM), Quality of Sustainable e-Government Development (QSeD), Usability Maturity of Open Source-Model (OS-UMM) and International Organization for Standardization (ISO 9241-11). The latter was carried out by interviewing fourteen practitioners who were involved in e-Government systems development. The collected data were analysed using content analysis. The proposed framework was then validated through reviews by two experienced domain experts. The framework acts as a guideline for government agencies to ensure the usability of the e-Government systems that they develop.
Keywords: usability, e-government, system development quality. INTRODUCTION The Internet revolution allows people to complete tasks faster regardless of place and time. It affects social life and opens up a new medium of communication for individuals and organisations. Today, governments around the world have become part of the Internet revolution through Electronic Government (e-Government) initiatives. E-Government endeavours are mainly supported by e-Government systems. The systems bring the people closer to the government via essential online information and transaction services. In order to ensure the effectiveness of the government's services to its people, e-Government systems are required to meet and satisfy the people's needs. While many government agencies have succeeded in developing e-Government systems, most of them failed to meet real needs and the expected quality [1] [2]. This failure is due to the absence of attention towards quality aspects, especially usability [3]. In particular, there is a lack of adoption of, and guidance on, usability concerns in developing e-Government systems. With regard to e-Government systems development, it was found that most system developers focused only on functional needs [4] and that usability requirements were rarely considered during user testing [5]. In fact, only certain usability concerns were emphasised during system development, such as architectural design and performance [3], [6], [7], [8], [9]. As a result, the developed systems are barely used by their intended users. One of the causes of these phenomena is that system developers are not properly guided towards usability. Although previous studies have provided some guidelines, they are perceived to be mutually exclusive and do not complement each other [10]. In fact, they were developed for specific domains, thus reducing their applicability [11], [12], [13].
The lack of guidance has resulted in usability concerns not being considered and embedded in e-Government systems development. The above statements indicate that there is a need for a holistic guideline or framework that considers usability during the development process of e-Government systems. To date, such a framework is almost non-existent [14]. Hence, this study aimed to address this concern. RELATED WORK Electronic Government (e-Government) is a medium of interaction between the government and external customers, as well as within the government itself. E-Government can be defined as the use of web-based information technology (IT) that allows its customers to access government information and services efficiently. E-Government customers include the government, the business community and the people. E-Government advancements started at the end of the 1990s with an emphasis on the use of IT in government [15]. E-Government is now more focused on external benefits such as services to the people, decision making, processes and values in the services [16]. Countries around the world have benefited from e-Government, particularly in improving the quality of service to their customers. In principle, e-Government shall provide equal access to its services by all [17]. Although governments are connected to the people through the Internet, it does not guarantee that the services can be accessed as required, especially by the disabled [18]. Previous studies on e-Government have found that usability concerns were ignored [3]. Evaluations made on several e-Government systems based on the Web Content Accessibility Guidelines (WCAG) revealed a number of usability issues [3], [6], [7], [8], [9], [10]. Among others, the issues are speed, broken links, lack of interactive features and accessibility features. This phenomenon happens because the usability models that can be referred to by developers when developing e-Government systems are limited.
Even when some models do exist, they emphasise mainly the performance aspect [19]. According to several reports on e-Government [7], [20], there is a need to formulate specific usability models to improve the quality of e-Government systems. Usability is defined as the extent to which a product can be used by specific users to achieve specified goals with effectiveness, efficiency and satisfaction in a particular context of use [21]. Usability assesses how easily a user interface can be used and refers to how the design process can be improved towards ease of use [22]. There are five components of usability, which are ease of learning, effectiveness, memorability, user error and user satisfaction. Usability is also identified by five key attributes, namely ease of use, performance, low error rate, persistency and consumer attitude [23]. Despite the various definitions of usability, it basically means how easy it is to use a piece of equipment or software to attain a specific purpose in the context of a particular application. Usability is influenced by the individual, the technology and the tasks to be performed. Thus, usability plays an important role in identifying the characteristics of software product quality, satisfying customer needs, determining the software design and affecting the value of a product. The measurement of usability is based on users' experience when they interact with software products or systems, whether through a website, software application, mobile technology or a variety of consumer devices. In other words, usability is a quality attribute that assesses how easy an interface is to use. There are several organisations that provide e-Government ratings, such as the United Nations Public Administration Network (UNPAN) [24], Brown University (BROWN) [25], Waseda University (WASEDA) [26] and the World Economic Forum (WEF) [27]. The ratings have encouraged governments to improve the quality of their e-Government services, particularly on usability concerns.
Unfortunately, the models that can be used to evaluate and enhance the usability of e-Government systems are limited. The following paragraphs briefly discuss the relevant models. a) **International Organization for Standardization 9241 (ISO 9241):** ISO 9241-11 is a quality model developed for usability in terms of human-computer interaction. The usability framework in ISO 9241-11 consists of the objective, the context of use, tasks, equipment and environment, covering three attributes: effectiveness, efficiency and satisfaction [21]. ISO 9241-11, however, does not address important usability attributes recommended by other usability models, such as learnability [28]. ISO 9241-11 recommends that usability be integrated into systems development through acquisition, user requirements, design, development and communication processes. On the other hand, there is no clear explanation of how usability design involving users should occur in the development cycle. It is generally agreed that activities such as user requirements and design contribute to the development of a product that is simple to use [29], besides the development methodology [30]. b) **Quality in Use Integrated Measurement (QUIM) model:** The original version of QUIM was developed in 2001 based on ISO 9241-11 [11]. The model consists of 7 usability attributes, which are effectiveness, efficiency, satisfaction, productivity, security, international and accessibility [31]. The latest version of QUIM was developed based not only on ISO standards but also on traditional software quality models and usability measurement models [11]. Among the models and standards used in QUIM are ISO 9241 [21], ISO 9126 [32], McCall [33], Boehm [34], Metrics for Usability Standards in Computing (MUSIC) [35] and the Semi-Automated Interface Designer and Evaluator [36]. QUIM identified several gaps in those models and standards and thus combined them in a complementary manner.
QUIM is therefore a hierarchical model of usability measurement consisting of 10 factors, 26 sub-factors and 127 metrics. The 10 revised usability factors or attributes include efficiency, effectiveness, productivity, satisfaction, learnability, safety, trustability, accessibility, universality and usefulness. c) **Usability Maturity of Open Source-model (OS-UMM):** OS-UMM suggested factors that help to improve the usability of open source software from the end-user perspective [13]. OS-UMM used QUIM as guidance and concluded four key usability attributes, namely user expectations, usability bug reporting and fixing, interactive help features and usability learning. d) **The Quality of Sustainable e-Government Development (QSeD) model:** The model is used as a tool to improve administrative processes and service delivery. Its success depends on the quality of the products produced and how they are used by governments, citizens and the business community. This model adopted the Model for e-Government Success [1], which was developed to evaluate the success and effectiveness of information systems [37]. The model also employed international standards such as ISO/IEC 9126, ISO 9241 and COBIT 4.1. It identified the following e-Government quality attributes: functionality, reliability, usability and efficiency for process quality; accuracy, timeliness, relevance, precision and completeness for information quality; and effective communication for service quality. This model proposed four key elements, which are stakeholders and policies, ICT, development methodology and environment. **METHODOLOGY** This study aimed to answer the following research questions: a) What are the factors that contribute to the usability of e-Government systems from the development perspective? b) How can the factors be combined in the form of an integrated usability-based development framework for e-Government systems? This study employed a qualitative method because it is appropriate for answering the above RQs.
The method was chosen because it allows researchers to understand and investigate the research topic in depth and detail. The purpose of this study was to identify the factors that influence the development of usable e-Government systems. These factors were then used as the bases for the proposed usability-based development framework for e-Government systems. The specific qualitative techniques used in the study were reviews and interviews. The sampling was purposeful. Reviews involve the process of identifying and examining secondary data sources. They allow the identification of factors from references that are appropriate to the context of the study, such as journals, books, conference proceedings and technical reports. The search was made using the following keywords: "usability AND e-Government", "usability model", "usability" or "ease-of-use" and "factor". Among the databases used in the search were IEEEXplore, Emerald, ISI Web of Science, ProQuest, Science Direct and Springer. This study also used the snowball technique, in which relevant publications were investigated based on reference lists. As a result, the study found four usability models and standards that are relevant to e-Government systems, namely ISO 9241, QUIM, QSeD and OS-UMM. The interviews involved face-to-face verbal conversations with suitable informants. This study interviewed fourteen officers who were involved in e-Government systems development. The interviews were semi-structured. The questions were formulated based on the reviews of previous usability models and standards, as described above. In addition, a set of open-ended questions was prepared to acquire informants' thoughts and opinions relating to the process. The open-ended questions were deemed necessary as they helped in getting informants' perspectives without any constraints. Prior to conducting the interviews, the interview protocol was tested in a pilot study.
The interview protocol was then modified as needed. The informants were interviewed individually at their respective workplaces for approximately one hour each. The interviews were conducted over a span of three months. The audio-recorded interview data were transcribed and organised in textual form. They were then properly stored for later analysis and interpretation. The collected data from both reviews and interviews were transcribed and analysed using content analysis. Content analysis is a research technique for making replicable and valid inferences from text to the contexts of its use, in a way that provides knowledge, new insights, a representation of facts and a practical guide to action. To initiate the process of content analysis, the coding procedure was conducted. The coding procedure started by giving a label to each text segment. A text segment may range from a few words to a paragraph. The goal of coding is to rearrange and integrate the related words, sentences or paragraphs in order to draw a meaningful description of the data. The data then form major ideas, each representing a specific theme. In this study, the themes are indeed the factors that influence the development of usable e-Government systems. To validate the proposed framework, expert reviews were conducted. Two experts were involved in the validation. They were experienced managers with at least 20 years of experience in e-Government systems development. In general, the experts checked the accuracy of the identified factors, the suitability of the selected elements and their interrelations in the framework. The data gathered were analysed using content analysis [38]. The results of the analysis were used to finalise the proposed framework. Figure-1 shows the phases and activities involved in this study. RESULTS AND DISCUSSIONS The framework consists of three key aspects, namely environment, system development process and product quality attributes.
These three aspects are connected to each other and influence the usability of e-Government systems. The framework depicts how an e-Government system's product quality attributes, which include functionality and usability, can be achieved through a usability-based system development process. The process needs to be supported by environment factors, namely people, procedure and technology, as shown in Figure-2 below. There are relationships between factors and relationships between sub-factors in the framework. People comprise system developers and system users, who need to interact with each other. System users, together with system developers, need to be actively involved during the system development process. System users have a direct influence on the e-Government system (product) quality by ensuring the fulfilment of the attributes. Besides people, the other factors from the environment, namely procedure and technology, influence the system development process. The system development process comprises the planning, requirements analysis, design, coding and testing, as well as implementation phases. The planning phase influences requirements analysis. Later, the specified requirements determine the design, coding and testing, as well as the implementation of the system. The phases happen in sequence but can be iterative. For example, if some inaccuracies are found during design or coding, the requirements analysis-design-coding cycle is repeated. The completion of this development cycle ensures the accomplishment of the product quality attributes. The product quality attributes are the output of the system development process. They consist of functional and usability attributes, which complement and require each other. Below are detailed explanations of each factor. Environment The environment is the situation or work area of a system. In the proposed framework, the environment is divided into three factors, which are procedure, technology and people.
These three factors influence the development process of e-Government systems. The following paragraphs describe the factors and their respective elements. a) Procedure Procedure is the way of performing certain tasks. This study identified two sub-factors concerning procedure, which are required to support the e-Government system development process: i) Policy: A plan of action that has been officially consented to as the basis for making organisational decisions. Among the relevant policies that need to be referred to in developing e-Government systems are the ICT Policy and the ICT Security Policy. ii) Guidelines and Standards: Government instructions and requirements that need to be referred to or complied with during the development process by system users and system developers. The guidelines and standards include documents related to government ICT requirements and business processes. b) Technology Technology is a method or process of handling things. In this study, technology is considered the tool used to develop e-Government systems. There are two technologies that influence e-Government system development and its usability: i) ICT infrastructure: The information and communication technology facilities that are required to implement e-Government systems. The required ICT infrastructure encompasses networks, hardware, software, servers and broadband. For example, system developers need to ensure that the servers being used can accommodate multiple and concurrent users' access. Besides that, they also have to ensure that the basic facilities to run the system are compatible. For example, the hardware must be well-suited to the software and network used. ii) ICT utilities: Tools that are used to develop e-Government systems. System developers use ICT utilities to complete various development tasks. Some examples of ICT utilities are modelling, design, coding and testing tools.
c) People

People are categorised into two main groups:

i) System user: An individual or people who use the system to complete certain tasks. System users include the government staff who are also the process owners. System users need to possess business process knowledge and the right attitude. Business process knowledge is essential for delivering system requirements accurately, while commitment is vital for ensuring continuity in implementing the system. This is because the system users need to be actively involved throughout the development process. System users' involvement is important to ensure that the system is developed according to their needs from the start.

ii) System developer: The staff who are involved in the system development process. Their roles span requirements analysis, design, coding and testing, and implementation. System developers need to have technical skill, analytical skill and business process knowledge. Technical skill is required during system design, coding and testing, and implementation. Analytical skill assists system developers in understanding and analysing system requirements obtained from system users. System developers also need to understand the business process so that they can acquire and analyse accurate requirements that satisfy system users' needs.

The factors stated above are in line with the emphasis given in ISO 13407 (1999), namely that the user and the organisation do, in fact, have an influence on the system development process. In addition, system developers need to understand user requirements and the context of system usage.

**System development process**

To produce a usable e-Government system, the development process needs to have a series of planned activities supported by clear procedures and suitable technologies. The process also needs to involve system users, who need to interact with system developers from the planning process until the implementation process.
a) Planning

i) **Project charter:** Without project management, system development is susceptible to failure. There is a risk that the developed system does not meet its users' requirements. Thus, a project charter needs to be prepared. It contains information related to the project such as scope, objectives, organisation, roles and responsibilities, the project manager's authority, finances and the implementation schedule. It is a document that needs to be comprehensive and endorsed by the parties involved in the system development. The project charter is the main project document that needs to be referred to and updated based on the project requirements. The roles of system developers and the involvement of system users need to be stated explicitly so that the system objective of including usability features can be accomplished. The project charter acts as a guide and reference for the system development project team.

ii) **Procurement:** E-Government systems that are developed internally are believed to be more usable compared to externally developed systems. This is because system users have direct contact with system developers. Systems that are developed externally require effective project management.

iii) **Communication:** Communication during system development is a factor that cannot be underestimated. Planned and effective communication enables the development process to become more organised and directed. Furthermore, misunderstandings between system users and system developers can be avoided. System developers thus need to establish effective communication channels at each system development phase. The communication channels take the form of meetings, discussions and reviews. This factor is supported by ISO 13407, which highlights that communication between users and the related parties is important, particularly during requirement elicitation, to ensure the usability of the system to be built.

iv) **Methodology:** Based on the empirical study, there are two main approaches used in the development of e-Government systems, namely Waterfall and Agile. The Waterfall methodology is mainly used when systems are developed by internal teams, while the Agile methodology is used by outsourced teams. Another method is prototyping, which is used when the development is totally new and the requirements are not adequately defined. Waterfall is regarded as the most suitable methodology for developing e-Government systems, as most government business processes involve definite instructions, objectives and solutions. The user and system requirements of such systems are stable and seldom require major modification. The Agile methodology and prototyping are recommended only if the development is a new initiative with unclear user and system requirements, which requires constant involvement from system users and involves major changes.

b) **Requirements analysis**

The requirements analysis phase describes what needs to be done by the system. It is an important phase in which system developers acquire requirements from system users. The requirements are obtained via elicitation techniques such as interviews, observation, task analysis and brainstorming. The output of this phase is the System Requirement Specification (SRS), which specifies what the system should have or do. In this phase, system developers should specify not only the system functions (functional requirements) but also the supporting functions (non-functional requirements) that ensure the smooth operation of the system, with usability requirements considered as one of them. A prototype may also be developed to enable system users to understand and evaluate the requirements.

c) Design

The design phase involves activities that formulate the detailed specification of three main system components, which are interface, database and architecture.
System developers are responsible for preparing the designs for the interface, database and architecture based on the SRS. There are certain standards that need to be adhered to during this process. The interface standard, for example, ensures system uniformity and facilitates system usage. The database standard outlines the database structure and security, while the architecture standard defines the spectrum of the system environment. After the design specification is approved, system developers continue with system coding and testing.

d) Coding and testing

The coding phase involves developing the system by using certain programming languages and tools. It is a phase whereby system developers transform the design specification into a verifiable system. For uniformity and maintenance purposes, system developers are required to conform to specific coding standards. The testing phase is where system faults are traced. Based on the nature of e-Government systems, there are three types of testing:

i) Unit test, which certifies each system module.

ii) Integration test, which tests the integration between system modules and of the entire system.

iii) User test, which tests user acceptance of the developed system.

The unit test is normally done by the individual system developer who is in charge of that module. The integration test involves an independent team of testers, whereas the user test involves system users. Testing that is executed repeatedly together with system users can identify usability issues. The testing also needs to be executed by staff who have knowledge of the business process.

e) Implementation

This phase concerns system installation and operation, which is carried out after the system has been tested and is ready to be used by system users. One important task during this phase is preparing and compiling system documentation. System documentation is a collection of system materials, which are referred to by system developers and system users.
Apart from that, users' feedback on the product quality attributes (functionality and usability) can be obtained and documented. Usability can be evaluated by system users based on the six usability attributes described in the following section.

**Product quality attributes**

The product of the system development process is the e-Government system. The framework outlines the product quality attributes, which consist of functional and usability attributes. The functional attribute describes the business functions that need to be implemented in the system. The usability attribute, on the other hand, is one of the non-functional requirements that support the functional requirements. Based on the analysis, the framework classifies the product usability attributes into six main categories that are relevant to e-Government systems. They are efficiency, effectiveness, learnability, security, accessibility and usefulness. Table-1 below presents each usability attribute with its definition and examples of criteria.

Table-1. Definitions of usability attributes.
<table>
<thead>
<tr>
<th>Attributes</th>
<th>Definition and examples of criteria</th>
</tr>
</thead>
<tbody>
<tr>
<td>Efficiency</td>
<td>The ability of a product to enable the user to use suitable resources in the context of certain usages. Example of criteria: Time required to show or display a page.</td>
</tr>
<tr>
<td>Effectiveness</td>
<td>The ability of a product to enable the user to achieve certain tasks in an accurate manner. Example of criteria: Making the right and effective decisions.</td>
</tr>
<tr>
<td>Learnability</td>
<td>The ability of a product to enable the user to feel that he/she is productive and learns new functions fast. Example of criteria: The product provides assistance with clear guidance.</td>
</tr>
<tr>
<td>Security</td>
<td>Technical and administrative protection of the system to avoid intrusions, destructions or exposures, whether intentional or otherwise. Example of criteria: Ensuring that only valid users are able to use the system and that the right data code is entered into the system.</td>
</tr>
<tr>
<td>Accessibility</td>
<td>The ability of a product to accommodate the user's preferences and personality. Example of criteria: Users are able to change certain features of the system such as text, colour and language.</td>
</tr>
<tr>
<td>Usefulness</td>
<td>The ability of a product to enable the user to resolve real problems. Example of criteria: The user can use certain utilities to support his/her tasks.</td>
</tr>
</tbody>
</table>

CONCLUSIONS AND FUTURE WORK

This study has identified the factors that should be considered during the development process to ensure the usability of e-Government systems.
The factors were acquired from theoretical and empirical studies and were then conceptualised as a framework. The framework has identified six usability attributes that are necessary for e-Government systems. The attributes are efficiency, effectiveness, learnability, security, accessibility and usefulness. A system's usability cannot be accomplished without a planned development process that embeds usability concerns from the start. Thus, this study outlines the factors and elements that need to be incorporated in the development process to ensure the usability of e-Government systems. The development process needs to be supported by three environment factors, namely people, procedure and technology. While people are the key players who run the initiative, the procedure defines the directions to be followed. Technology, on the other hand, is the medium for achieving the aims. By following the development process together with the support from the environment, an e-Government system that is efficient, effective, learnable, secure, accessible and useful can be produced. In short, the proposed framework has enhanced theoretical and practical knowledge related to e-Government system usability. The framework is able to assist system developers in developing e-Government systems with better usability levels. It also defines the usability attributes that could be used in measuring e-Government systems. This study focuses on general aspects of e-Government system development. Therefore, further studies are required to refine the development process and relate it explicitly to the recommended six usability attributes. The framework also needs to be tested in other government settings.

ACKNOWLEDGEMENT

The authors thank the informants who participated in the study.

REFERENCES
Abstract Group communication over the Constrained Application Protocol (CoAP) can be secured by means of Object Security for Constrained RESTful Environments (OSCORE). At deployment time, devices may not know the exact OSCORE groups to join, the respective Group Manager, or other information required to perform the joining process. This document describes how CoAP endpoints can use the CoRE Resource Directory to discover OSCORE groups and acquire information to join them through their respective Group Manager. A same OSCORE group may protect multiple application groups, which are separately announced in the Resource Directory as sets of endpoints sharing a pool of resources. This approach is consistent with, but not limited to, the joining of OSCORE groups based on the ACE framework for Authentication and Authorization in constrained environments. Status of This Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." This Internet-Draft will expire on September 12, 2019. 1. Introduction A set of CoAP endpoints may share a common pool of resources, hence composing an application group. All the members of an application group may also be members of a same security group, hence sharing a common set of keying material to secure group communication. 
The Constrained Application Protocol (CoAP) [RFC7252] supports group communication over IP multicast [RFC7390] to improve efficiency and latency of communication and reduce bandwidth requirements. The method Object Security for Constrained RESTful Environments (OSCORE) [I-D.ietf-core-object-security] enables end-to-end security for CoAP messages through CBOR Object Signing and Encryption (COSE) [RFC8152]. In particular, [I-D.ietf-core-oscore-groupcomm] specifies how OSCORE protects CoAP messages in group communication contexts, thus enabling OSCORE groups as security groups. Typically, one application group relies on exactly one OSCORE group, while a same OSCORE group may be used by multiple application groups at the same time. A CoAP endpoint joins an OSCORE group via the responsible Group Manager (GM), in order to get the necessary group keying material. As in [I-D.ietf-ace-key-groupcomm-oscore], the joining process can be based on the ACE framework for Authentication and Authorization in constrained environments [I-D.ietf-ace-oauth-authz], with the joining endpoint and the GM as ACE Client and Resource Server, respectively. That is, the joining endpoint accesses the join resource associated to the OSCORE group of interest and exported by the GM. Typically, devices are equipped with a static X.509 IDevID certificate installed at manufacturing time. This certificate is used at deployment time during an enrollment process that provides the device with an Operational Certificate, possibly updated during the device lifetime. In the presence of secure group communication for CoAP, such an Operational Certificate may be accompanied by information required to join OSCORE groups. This especially includes a reference to the join resources to access at the respective GMs. However, it is usually impossible to provide such precise information to freshly deployed devices as part of their (early) Operational Certificate.
This can be due to a number of reasons: the OSCORE group(s) to join and the responsible GM(s) are generally unknown at manufacturing time; an OSCORE group of interest is created, or the responsible GM is deployed, only after the device is enrolled and fully operative in the network; information related to existing OSCORE groups or to their GMs has been changed. This requires a method for CoAP endpoints to dynamically discover OSCORE groups and their GM, and to retrieve updated information about those groups. This specification describes how CoAP endpoints can use the CoRE Resource Directory (RD) [I-D.ietf-core-resource-directory] for discovering an OSCORE group and retrieving the information required to join that group through the responsible GM. In principle, the GM registers as an endpoint with the RD. The corresponding registration resource includes one link for each OSCORE group under that GM, specifying the path to the related join resource. More information about the OSCORE group is stored in the target attributes of the respective link. This especially includes the identifiers of the application groups which use that OSCORE group. This enables a lookup of those application groups at the Resource Directory, where they are separately announced by a Commissioning Tool (see Appendix A of [I-D.ietf-core-resource-directory]). When querying the RD for OSCORE groups, a CoAP endpoint can further benefit from the CoAP Observe Option [RFC7641]. This enables convenient notifications about the creation of new OSCORE groups or the updates of information concerning existing ones. Thus, it facilitates the early deployment of CoAP endpoints, i.e. even before the GM is deployed and the OSCORE groups of interest are created. The approach in this document is consistent with, but not limited to, the joining of OSCORE groups in [I-D.ietf-ace-key-groupcomm-oscore]. 1.1.
Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

This specification requires readers to be familiar with the terms and concepts discussed in [I-D.ietf-core-resource-directory] and [RFC6690]. Readers should also be familiar with the terms and concepts discussed in [RFC7252], [I-D.ietf-core-oscore-groupcomm] and [I-D.ietf-ace-key-groupcomm-oscore]. Terminology for constrained environments, such as "constrained device" and "constrained-node network", is defined in [RFC7228]. This document also refers to the following terminology.

- **OSCORE group**: a set of CoAP endpoints that share the same OSCORE Common Security Context to protect group communication as described in [I-D.ietf-core-oscore-groupcomm]. That is, an OSCORE group acts as the security group for all its members.

- **Application group**: a set of CoAP endpoints that share a set of common resources. Application groups are announced in the RD by a Commissioning Tool, according to the RD-Groups usage pattern (see Appendix A of [I-D.ietf-core-resource-directory]). An application group can be associated to a single OSCORE group, while different application groups can rely on the same OSCORE group. Application groups MAY share resources, although any two application groups associated to the same OSCORE group do not share any resource.

- **Zeroed-epoch Group ID**: the Group ID of an OSCORE group as stored in the RD. The structure of such a stored Group ID is as per Appendix C of [I-D.ietf-core-oscore-groupcomm], with the "Group Epoch" part immutable and set to zero.

2. Registration Resource for Group Managers

With reference to Figure 3 of [I-D.ietf-core-resource-directory], a Group Manager (GM) registers as an endpoint with the CoRE Resource Directory (RD).
The registration includes the links to the join resources at the GM, associated to the OSCORE groups under that GM. In particular, each link to a join resource includes:

- "target": URI of the join resource at the GM.
- target attributes, including:
  * Resource Type (rt) with the value "core.osc.j" defined in Section 7.1 of this specification.
  * The zeroed-epoch Group ID of the OSCORE group.
  * One target attribute for each application group associated to the OSCORE group, specifying the name of that application group.

3. Registration of Group Manager Endpoints

Upon deployment, a GM finds the RD as described in Section 4 of [I-D.ietf-core-resource-directory]. After that, the GM registers as an endpoint with the RD, as described in Section 5.3 of [I-D.ietf-core-resource-directory]. When doing so, the GM MUST also register all the join resources it is exporting at that point in time, i.e. one for each of its OSCORE groups. For each registered join resource, the GM MUST specify the following parameters in the payload of the registration request.

- 'rt' = "core.osc.j" (see Section 7.1).
- 'oscore-gid', specifying the zeroed-epoch Group ID of the OSCORE group of interest. This parameter MUST specify a single value.
- 'app-gp', specifying the name(s) of the application group(s) associated to the OSCORE group of interest. This parameter MAY be included multiple times, and each occurrence MUST specify the name of one application group. A same application group MUST NOT be specified multiple times.

The GM SHOULD NOT use the Simple Registration approach described in Section 5.3.1 of [I-D.ietf-core-resource-directory].

The example below shows a GM with endpoint name "gm1" and address 2001:db8::ab that registers with the RD. The GM specifies the link to one join resource for accessing the OSCORE group with zeroed-epoch Group ID "feedca570000", used by one application group with name "group1".
Request: GM -> RD

Req: POST coap://rd.example.com/rd?ep=gm1
Content-Format: 40
Payload:
</join/feedca570000>;ct=41;rt="core.osc.j";
oscore-gid="feedca570000";app-gp="group1"

Response: RD -> GM

Res: 2.01 Created
Location-Path: /rd/4521

4. Addition and Update of OSCORE Groups

The GM is responsible for keeping its registration with the RD up to date with links to all its join resources. This means that the GM has to update the registration within its lifetime as per Section 5.4.1 of [I-D.ietf-core-resource-directory], and has to change the content of the registration when a join resource is added/removed or when its target attributes have to be changed, such as in the following cases.

- The GM creates a new OSCORE group and starts exporting the related join resource.
- The GM dismisses an OSCORE group and stops exporting the related join resource.
- Information related to an existing OSCORE group changes, e.g. the list of associated application groups.

In order to perform an update to the set of links in its registration, the GM can re-register with the RD and fully specify all links to its join resources and their target attributes in the payload of the POST request. The example below shows the same GM from Section 3 re-registering with the RD. When doing so, it specifies:

- The same join resource as before, associated to the OSCORE group with zeroed-epoch Group ID "feedca570000".
- A second join resource associated to the OSCORE group with zeroed-epoch Group ID "ech0ech00000" and used by one application group, namely "group2".
- A third join resource associated to the OSCORE group with zeroed-epoch Group ID "abcdef120000" and used by two application groups, namely "group3" and "group4".
Request: GM -> RD

Req: POST coap://rd.example.com/rd?ep=gm1
Content-Format: 40
Payload:
</join/feedca570000>;ct=41;rt="core.osc.j";
oscore-gid="feedca570000";app-gp="group1",
</join/ech0ech00000>;ct=41;rt="core.osc.j";
oscore-gid="ech0ech00000";app-gp="group2",
</join/abcdef120000>;ct=41;rt="core.osc.j";
oscore-gid="abcdef120000";app-gp="group3";app-gp="group4"

Response: RD -> GM

Res: 2.04 Changed
Location-Path: /rd/4521

Alternatively, the GM can perform a PATCH/iPATCH [RFC8132] request to the RD, as per Section 5.4.3 of [I-D.ietf-core-resource-directory]. This requires semantics to be defined in future standards, in order to apply a link-format document as a patch to a different one.

5. Discovery of OSCORE Groups

A CoAP endpoint that wants to join an OSCORE group, hereafter called the joining node, might not have all the necessary information at deployment time. Also, it might want to know about possible new OSCORE groups created afterwards by the respective Group Managers. To this end, the joining node can perform a resource lookup at the RD as per Section 6.1 of [I-D.ietf-core-resource-directory], in order to retrieve the missing pieces of information needed to join the OSCORE group(s) of interest. The joining node can find the RD as described in Section 4 of [I-D.ietf-core-resource-directory].

The joining node MUST consider the following search criteria for the lookup filtering.

- 'rt' = "core.osc.j" (see Section 7.1).

The joining node MAY additionally consider the following search criteria for the lookup filtering, depending on the information it has already available.

- 'oscore-gid', specifying the zeroed-epoch Group ID of the OSCORE group of interest. This parameter MUST specify a single value.
- 'ep', specifying the identifier of the GM as endpoint registered with the RD.
- 'app-gp', specifying the name(s) of the application group(s) associated to the OSCORE group of interest.
This parameter MAY be included multiple times, and each occurrence MUST specify the name of one application group. A same application group MUST NOT be specified multiple times.

5.1. Discovery Example #1

Consistently with the examples in Section 3 and Section 4, the example below considers a joining node that wants to join the OSCORE group associated to the application group "group1", but that does not know the zeroed-epoch Group ID of the OSCORE group, the responsible GM and the join resource to access.

Request: Joining node -> RD

Req: GET coap://rd.example.com/lookup/res?rt=core.osc.j&app-gp=group1

Response: RD -> Joining node

Res: 2.05 Content
Payload:
<coap://[2001:db8::ab]/join/feedca570000>;rt="core.osc.j";
oscore-gid="feedca570000";app-gp="group1";
anchor="coap://[2001:db8::ab]"

If it does not know the multicast IP address used in "group1", the joining node can retrieve it by performing an endpoint lookup, assuming that the application group "group1" had been previously registered as per Appendix A of [I-D.ietf-core-resource-directory], with ff35:30:2001:db8::23 as the associated multicast IP address.

5.2. Discovery Example #2

Consistently with the examples in Section 3 and Section 4, the example below considers a joining node that wants to join the OSCORE group with zeroed-epoch Group ID "feedca570000", but that does not know the responsible GM, the join resource to access, and the associated application groups. The example also shows how the joining node uses observation [RFC7641], in order to be notified of possible changes in the join resource's target attributes. This is also useful to handle the case where the OSCORE group of interest has not been created yet, so that the joining node can receive the requested information when available at a later point in time.
Request: Joining node -> RD

Req: GET coap://rd.example.com/lookup/res?rt=core.osc.j&
         oscore-gid=feedca570000
Observe: 0

Response: RD -> Joining node

Res: 2.05 Content
Observe: 24
Payload:
<coap://[2001:db8::ab]/join/feedca570000>;rt="core.osc.j";
oscore-gid="feedca570000";app-gp="group1";
anchor="coap://[2001:db8::ab]"

Depending on the used search criteria, the joining node performing the resource lookup can get a response whose payload is quite large in size. This can happen, for instance, in case the lookup request targets all the join resources at a specified GM, or all the join resources of all the registered GMs, as in the example below.

Request: Joining node -> RD

Req: GET coap://rd.example.com/lookup/res?rt=core.osc.j

Response: RD -> Joining node

Res: 2.05 Content
Payload:
<coap://[2001:db8::ab]/join/feedca570000>;rt="core.osc.j";
oscore-gid="feedca570000";app-gp="group1";
anchor="coap://[2001:db8::ab]",
<coap://[2001:db8::ab]/join/ech0ech00000>;rt="core.osc.j";
oscore-gid="ech0ech00000";app-gp="group2";
anchor="coap://[2001:db8::ab]",
<coap://[2001:db8::cd]/join/abcdef120000>;rt="core.osc.j";
oscore-gid="abcdef120000";app-gp="group3";app-gp="group4";
anchor="coap://[2001:db8::cd]"

Therefore, it is RECOMMENDED that a joining node performing a resource lookup to discover OSCORE groups uses observation only when including the fine-grained search criterion 'oscore-gid' in its GET request sent to the RD.

6. Security Considerations

The security considerations described in Section 8 of [I-D.ietf-core-resource-directory] apply here as well.

7. IANA Considerations

This document has the following actions for IANA.

7.1. Resource Types

IANA is asked to enter the following value into the Resource Type (rt=) Link Target Attribute Values subregistry within the Constrained RESTful Environments (CoRE) Parameters registry defined in [RFC6690].
+------------+------------------------------------------+-------------------+
| Value      | Description                              | Reference         |
+------------+------------------------------------------+-------------------+
| core.osc.j | Join resource of an OSCORE Group Manager | [[this document]] |
+------------+------------------------------------------+-------------------+

8. References

8.1. Normative References

[I-D.ietf-ace-key-groupcomm-oscore]
           Tiloca, M., Park, J., and F. Palombini, "Key Management for
           OSCORE Groups in ACE", draft-ietf-ace-key-groupcomm-oscore-01
           (work in progress), March 2019.

[I-D.ietf-core-oscore-groupcomm]

[I-D.ietf-core-resource-directory]

8.2. Informative References

[I-D.ietf-ace-oauth-authz]

Acknowledgments

The authors sincerely thank Carsten Bormann, Francesca Palombini and Jim Schaad for their comments and feedback. The work on this document has been partly supported by VINNOVA and the Celtic-Next project CRITISEC, and by the EIT-Digital High Impact Initiative ACTIVE.

Authors’ Addresses

Marco Tiloca
RISE AB
Isafjordsgatan 22
Kista
SE-16440 Stockholm
Sweden

Email: marco.tiloca@ri.se
In the previous installment [1], we dived into some of the low-level details and problems related to Python threads. As a brief recap, although Python threads are real system threads, there is a global interpreter lock (GIL) that restricts their execution to a single CPU core. Moreover, if your program performs any kind of CPU-intensive processing, the GIL can impose a severe degradation in the responsiveness of other threads that happen to be performing I/O. In response to some of the perceived limitations of threads, some Python programmers have turned to alternative approaches based on coroutines or green threads. In a nutshell, these approaches rely on implementing concurrency entirely in user space without relying on threads as provided by the operating system. Of course, how one actually goes about doing that often remains a big mystery. In this installment, we’re going to dive under the covers of Python concurrency based on coroutines (or generators). Rather than focusing on the usage of particular libraries, the main goal is to gain a deeper understanding of the underlying implementation to see how it works, performance characteristics, and limitations. As with the previous installment, the examples presented are meant to be tried as experiments. There’s a pretty good chance that some of the code presented will bend your brain—it’s not often that you get to write a small operating system in the space of an article. Also, certain parts of the code require Python 3. So, with that in mind, let’s start! **Threads, What Are They Good For?** Previously, we created a simple multithreaded network service that computed Fibonacci numbers. 
Here was the code:

```python
# server.py
from socket import *
from threading import Thread

def tcp_server(address, handler):
    sock = socket(AF_INET, SOCK_STREAM)
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(address)
    sock.listen(5)
    while True:
        client, addr = sock.accept()
        t = Thread(target=handler, args=(client, addr))
        t.daemon = True
        t.start()

def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

def fib_handler(client, address):
    print('Connection from', address)
    while True:
        data = client.recv(1000)
        if not data:
            break
        result = fib(int(data))
        client.send(str(result).encode('ascii')+b' ')
    print('Connection closed')
    client.close()

if __name__ == '__main__':
    tcp_server(('', 25000), fib_handler)
```

David Beazley is an open source developer and author of the *Python Essential Reference* (4th Edition, Addison-Wesley, 2009). He is also known as the creator of Swig (www.swig.org) and Python Lex-Yacc (www.dabeaz.com/ply.html). Beazley is based in Chicago, where he also teaches a variety of Python courses. dave@dabeaz.com

When you run the server, you can connect any number of concurrent clients using nc or telnet, type numbers as input, and get a Fibonacci number returned as a result. For example:

```
bash % nc 127.0.0.1 25000
10
55
20
6765
```

If you carefully study this code and think about the role of threads, their primary utility is in handling code that blocks. For example, consider operations such as `sock.accept()` and `client.recv()`. Both of those operations stop progress of the currently executing thread until incoming data is available. That’s not a problem, though, when each client is handled by its own thread. If a thread decides to block, the other threads are unaffected and can continue to run. Basically, you just don’t have to worry about it, because all of the underlying details of blocking, awaking, and so forth are handled by the operating system and associated thread libraries.
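As an aside, the reason `fib()` makes such a good stand-in for CPU-bound work is its doubly recursive definition: the number of calls (and hence the running time) grows exponentially with n. A small instrumented variant (a standalone illustration, not part of the server code) makes this visible:

```python
# Count how many calls the doubly recursive fib() makes
CALLS = 0

def fib(n):
    # Same definition as in the server, plus a call counter
    global CALLS
    CALLS += 1
    if n <= 2:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

result = fib(10)
print(result, CALLS)    # prints "55 109"
```

Computing fib(10) already takes 109 calls; fib(30), as used in the performance tests later, takes over a million. That is why a single large request can monopolize a CPU core.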
If threads aren’t going to be used, then you have to devise some kind of solution that addresses the blocking problem so that multiple clients can concurrently operate. That is the main problem that needs to be addressed. **Enter Generator Functions** In order to implement blocking, you have to figure out some way to temporarily suspend and later resume the execution of a Python function. As it turns out, Python provides a special kind of function that can be used in exactly this way—a generator function. Generator functions are most commonly used to drive iteration. For example, here is a simple generator function: ```python def countdown(n): while n > 0: yield n n -= 1 ``` Normally, this function would be used to feed a `for`-loop like this: ```python >>> for x in countdown(5): ... print(x) ... 5 4 3 2 1 ``` Under the covers, the `yield` statement emits values to be consumed by the iteration loop. However, it also causes the generator function to temporarily suspend itself. Here is a low-level view of the mechanics involved. ```python >>> c = countdown(5) >>> next(c) # Run to the yield 5 >>> next(c) 4 >>> next(c) 3 ... >>> next(c) 1 >>> next(c) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration ``` On each `next()` call, the function runs to the `yield`, emits a value, and stops. A `StopIteration` exception is raised when the function terminates. The fact that `yield` causes a function to stop is interesting—that’s exactly the behavior you need to handle blocking. Perhaps it can be used to do more than simple iteration. **Generators as Tasks** Rather than thinking of generator functions as simply implementing iteration, you can alternatively view them as more generally implementing a task (note: when used in this way, generators are typically called “coroutines,” although that term seems to be applied rather loosely in the Python community). 
A Tale of Two Concurrency (Part 2)

If you make a task queue and task scheduler, you can make generators or coroutines look a lot like threads. For example, here’s an experiment you can try using the above generator function:

```python
from collections import deque

# A task queue
tasks = deque()

# Create some tasks
tasks.append(countdown(10))
tasks.append(countdown(20))
tasks.append(countdown(5))

# Run the tasks
def run():
    while tasks:
        task = tasks.popleft()
        try:
            x = next(task)        # Run to the yield
            print(x)
            tasks.append(task)    # Reschedule
        except StopIteration:
            print('Task done')

run()
```

In this code, multiple invocations of the `countdown()` generator are being driven by a simple round-robin scheduler. The output will appear something like this if you run it:

```
10
20
5
9
19
4
8
18
3
7
17
2
...
```

That’s interesting, but not very compelling since no one would typically want to run a simple iteration pattern like the `countdown()` function in this manner. A much more interesting generator-based task might be a rewritten version of the `fib_handler()` function from our server. For example:

```python
def fib_handler(client, address):
    print('Connection from', address)
    while True:
        yield ('recv', client)     # Added
        data = client.recv(1000)
        if not data:
            break
        result = fib(int(data))
        yield ('send', client)     # Added
        client.send(str(result).encode('ascii')+b' ')
    print('Connection closed')
    client.close()
```

In this new version, yield statements are placed immediately before each socket operation that might block. Each yield indicates both a reason for blocking (‘recv’ or ‘send’) and a resource (the socket `client`) on which blocking might occur. With the interactive interpreter, let’s see how to drive it.
First, create a socket and wait for a connection:

```console
>>> from socket import *
>>> sock = socket(AF_INET, SOCK_STREAM)
>>> sock.bind(('', 25000))
>>> sock.listen(1)
>>> client, addr = sock.accept()
```

Next, establish a connection using a command such as `nc localhost 25000` at the shell. Once you’ve done this, try these steps:

```console
>>> task = fib_handler(client, addr)
>>> task
<generator object fib_handler at 0x10a7c53b8>
>>> reason, resource = next(task)
>>> reason
'recv'
```

If you carefully study this output, you’ll see that the handler task ran to the first yield statement and is now suspended. Before resuming the handler, you need to wait until input is available on the supplied socket (resource). To do that, you can poll the socket using a system call such as `select()` [2]. For example:

```console
>>> from select import select
>>> select([resource], [], [])      # Blocks until data available
```

Go back to the terminal with the connected `nc` session and type an integer and return. This should force the above `select()` statement to return. Once it’s returned, you can resume the generator by typing the following:

```console
>>> reason, resource = next(task)
>>> reason
'send'
>>> resource
<socket.socket fd=4, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 25000), raddr=('127.0.0.1', 52474)>
```

Now you see that the task has advanced to the next yield statement. Use the select() statement again to see if it’s safe to proceed with sending:

```console
>>> select([], [resource], [])
>>> reason, resource = next(task)
```

In this example, you are using next() to drive the generator task forward to the next yield statement. The select() call is polling for I/O and is being used to know when it is safe to resume the generator.
A Generator-Based Task Scheduler

Putting the pieces of the last section together, you can make a small generator-based task scheduler like this:

```python
from socket import *
from collections import deque
from select import select

tasks = deque()
recv_wait = {}    # sockets -> tasks waiting to receive
send_wait = {}    # sockets -> tasks waiting to send

def run():
    while any([tasks, recv_wait, send_wait]):
        while not tasks:
            can_read, can_send, _ = select(recv_wait, send_wait, [])
            for s in can_read:
                tasks.append(recv_wait.pop(s))
            for s in can_send:
                tasks.append(send_wait.pop(s))
        task = tasks.popleft()
        try:
            reason, resource = next(task)
            if reason == 'recv':
                recv_wait[resource] = task
            elif reason == 'send':
                send_wait[resource] = task
            else:
                raise RuntimeError('Bad reason: %s' % reason)
        except StopIteration:
            print('Task done')
```

The scheduler is essentially a small operating system. There is a queue of ready-to-run tasks (tasks) and two waiting areas for tasks that need to perform I/O (recv_wait and send_wait). The core of the scheduler takes a ready-to-run task and runs it to the next yield statement, which acts as a kind of “trap” or “system call.” Based on the result of the yield, the task is placed into one of the I/O holding areas. If there are no tasks ready to run, a select call is made to wait for I/O and place a previously suspended task back onto the task queue. To use this scheduler, you take your previous thread-based code and simply instrument it with yield calls.
For example: ```python def tcp_server(address, handler): sock = socket(AF_INET, SOCK_STREAM) sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1) sock.bind(address) sock.listen(5) while True: yield 'recv', sock client, addr = sock.accept() # Create a new handler task and add to the task queue tasks.append(handler(client, addr)) def fib(n): if n <= 2: return 1 else: return fib(n-1) + fib(n-2) def fib_handler(client, address): print('Connection from', address) while True: yield 'recv', client data = client.recv(1000) if not data: break result = fib(int(data)) yield 'send', client client.send(str(result).encode('ascii')+b' ') print('Connection closed') client.close() if __name__ == '__main__': tasks.append(tcp_server(('',25000), fib_handler)) run() ``` This code will require a bit of study, but if you try it out, you'll find that it supports concurrent connections without the slightest hint of a thread—interesting indeed. Hiding Implementation Details One complaint about the generator solution is the addition of the extra `yield` statements. Not only do they introduce extra code, they are somewhat low-level, requiring the user to know some details about the underlying scheduling code. However, Python 3.3 introduced the ability to write generator-based subroutines using the `yield from` statement [3]. You can use this to make a wrapper around `socket` objects. ```python class GenSocket(object): def __init__(self, sock): self.sock = sock def accept(self): yield 'recv', self.sock client, addr = self.sock.accept() return GenSocket(client), addr def recv(self, maxbytes): yield 'recv', self.sock return self.sock.recv(maxbytes) def send(self, data): yield 'send', self.sock return self.sock.send(data) def __getattr__(self, name): return getattr(self.sock, name) ``` This wrapper class merely combines the appropriate `yield` statement with the subsequent socket operation. 
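The GenSocket wrapper leans on the Python 3.3 semantics of generators: a `return` inside a generator attaches its value to the resulting `StopIteration`, and `yield from` both forwards the inner yields outward and evaluates to that return value. A minimal standalone illustration (the names here are invented for the example, not taken from the server code):

```python
def inner():
    # The yielded value travels out to whoever drives the generator,
    # just like GenSocket's ('recv', sock) tuples reach the scheduler
    yield 'recv', 'fake-socket'
    # The return value becomes the result of "yield from inner()"
    return 'DATA'

def outer():
    data = yield from inner()   # resumes with inner()'s return value
    return data * 2

task = outer()
first = next(task)              # the forwarded yield from inner()
try:
    next(task)                  # inner() returns, then outer() returns
except StopIteration as exc:
    final = exc.value

print(first, final)             # prints "('recv', 'fake-socket') DATADATA"
```

This is exactly the plumbing that lets `data = yield from client.recv(1000)` both suspend at the scheduler and deliver the received bytes when resumed.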
Here is a modified server that uses the wrapper: ```python def tcp_server(address, handler): sock = GenSocket(socket(AF_INET, SOCK_STREAM)) sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1) sock.bind(address) sock.listen(5) while True: client, addr = yield from sock.accept() # Create a new handler task and add to the task queue tasks.append(handler(client, addr)) def fib_handler(client, address): print('Connection from', address) while True: data = yield from client.recv(1000) if not data: break result = fib(int(data)) yield from client.send(str(result).encode('ascii')+b' ') print('Connection closed') client.close() ``` In this version, blocking calls such as `client.recv()` are replaced by calls of the form `yield from client.recv()`. Other than that, the code looks virtually identical to the threaded version. Moreover, details of the underlying task scheduler are now hidden. Again, keep in mind that no threads are in use. Studying the Performance Previously, two performance tests were performed. The first test simply measured the performance of the server on CPU-bound work: ```python # perf1.py from socket import * import time sock = socket(AF_INET, SOCK_STREAM) sock.connect(('127.0.0.1', 25000)) while True: start = time.time() sock.send(b'30') resp = sock.recv(100) end = time.time() print(end-start) ``` If you run this program, it will start producing a series of timing measurements that are essentially the same as the threaded version of code. If you run multiple clients, however, you’ll find that the server is limited to using a single CPU core as before. There’s no global interpreter lock in play, but since the entire server executes within a single execution thread, there’s no way for it to take advantage of multiple CPU cores either. That’s one important lesson—using coroutines is not a technique that can be used to make code scale to multiple processors. The second performance test measured the performance on a rapid-fire series of fast-running operations. 
Here it is again:

```python
# perf2.py
import threading
import time
from socket import *

sock = socket(AF_INET, SOCK_STREAM)
sock.connect(('127.0.0.1', 25000))

N = 0

def monitor():
    global N
    while True:
        time.sleep(1)
        print(N, 'requests/second')
        N = 0

t = threading.Thread(target=monitor)
t.daemon = True
t.start()

while True:
    sock.send(b'1')
    resp = sock.recv(100)
    N += 1
```

If you run the program, you’ll see output similar to the following:

```
bash % python3 perf2.py
16121 requests/second
16245 requests/second
16179 requests/second
16305 requests/second
16210 requests/second
...
```

The initial request rate will be lower than that reported with the examples involving threads in the previous article. There is simply more overhead in managing the various generator functions, invoking `select()`, and so forth. While the test is running, computing a large Fibonacci number from a separate connection produces:

```
bash % nc 127.0.0.1 25000
40
102334155           (takes a while to appear)
```

After you do this, `perf2.py` will stop responding entirely. For example:

```
16151 requests/second
16265 requests/second
0 requests/second
0 requests/second
0 requests/second
...
```

This will continue until the large request completes entirely. Since there are no threads at work, there is no notion of preemption or parallelism. In fact, any operation that decides to block or take a lot of compute cycles will block the progress of everything else.

**Back to Subprocesses**

As it turns out, problems with performance and blocking have to be solved in the same manner as with threads. Specifically, you have to use threads or process pools to carry out such calculations outside of the task scheduler.
For example, you might rewrite the `fib_handler()` function using `concurrent.futures` exactly as you did before with threads:

```python
from concurrent.futures import ProcessPoolExecutor as Pool

NPROCS = 4
pool = Pool(NPROCS)

def fib_handler(client, address):
    print('Connection from', address)
    while True:
        data = yield from client.recv(1000)
        if not data:
            break
        future = pool.submit(fib, int(data))
        result = future.result()
        yield from client.send(str(result).encode('ascii')+b' ')
    print('Connection closed')
    client.close()
...
```

The only catch is that even if you make this change, you’ll find that it still doesn’t work. The problem here is that the `future.result()` operation blocks, waiting for the result to come back. By blocking, it stalls the entire task scheduler. In fact, this will happen for any operation at all that might block (e.g., resolving a domain name, accessing a database, etc.).

**Generators: It’s All In**

In order for a generator-based solution to work, every blocking operation has to be written to work with the task loop. In the previous example, the attempt to use a process pool is unsuccessful because the call that obtains the result blocks. To make it work, you need to write additional supporting code to turn blocking operations into something that can yield to the task loop. The following code gives an idea of how you might do it. The first step is to write a wrapper around the `Future` object’s `result()` method to make it use `yield`.
For example:

```python
class GenFuture(object):
    def __init__(self, future):
        self.future = future

    def result(self):
        yield 'future', self.future
        return self.future.result()

    def __getattr__(self, name):
        return getattr(self.future, name)
```

Next, you might create a wrapper around pools to adjust the output of the `pool.submit()` to return a `GenFuture` object:

```python
class GenPool(object):
    def __init__(self, pool):
        self.pool = pool

    def submit(self, func, *args, **kwargs):
        f = self.pool.submit(func, *args, **kwargs)
        return GenFuture(f)

    def __getattr__(self, name):
        return getattr(self.pool, name)
```

The main goal of these classes is to preserve the programming interface of the blocking code. In fact, you will only make a slight change to the fib_handler() code as shown here:

```python
from concurrent.futures import ProcessPoolExecutor as Pool

NPROCS = 4
pool = GenPool(Pool(NPROCS))    # Note: Use GenPool

def fib_handler(client, address):
    print('Connection from', address)
    while True:
        data = yield from client.recv(1000)
        if not data:
            break
        future = pool.submit(fib, int(data))
        result = yield from future.result()    # Note yield from
        yield from client.send(str(result).encode('ascii')+b' ')
    print('Connection closed')
    client.close()
```

Carefully observe how all blocking operations are now preceded by a `yield from` declaration. The only remaining task is to modify the task scheduler to support futures.
Here is that code:

```python
import socket
from select import select

future_wait = {}   # futures -> tasks waiting on a result

# Loopback socket pair used to wake the task loop when blocked on
# select() (reconstructed detail: a socketpair() provides the two ends)
_loop_notify_socket, _loop_wait_socket = socket.socketpair()

# Function to wake the task loop when blocked on select()
def _loop_wake():
    _loop_notify_socket.send(b'x')

# Callback attached to futures: reschedule the waiting task, then
# wake the loop so the task can actually run
def _future_callback(future):
    tasks.append(future_wait.pop(future))
    _loop_wake()

# Dummy task that allows select() to work
def _loop_sleeper():
    while True:
        yield 'recv', _loop_wait_socket
        _loop_wait_socket.recv(1000)

tasks.append(_loop_sleeper())

def run():
    while any([tasks, recv_wait, send_wait, future_wait]):
        while not tasks:
            can_read, can_send, _ = select(recv_wait, send_wait, [])
            for s in can_read:
                tasks.append(recv_wait.pop(s))
            for s in can_send:
                tasks.append(send_wait.pop(s))
        task = tasks.popleft()
        try:
            reason, resource = next(task)
            if reason == 'recv':
                recv_wait[resource] = task
            elif reason == 'send':
                send_wait[resource] = task
            elif reason == 'future':
                future_wait[resource] = task
                resource.add_done_callback(_future_callback)
        except StopIteration:
            print('Task done')
```

Whew! There are a lot of moving parts, but the general idea is as follows. For futures, the task is placed into a waiting area as before (future_wait). A callback function (_future_callback) is then attached to the future to be triggered upon completion. When results return, the callback function puts the task back onto the tasks queue. A byte of I/O is then written to a special loopback socket (_loop_notify_socket). A separate task (_loop_sleeper) constantly monitors this socket and wakes to read the byte. (The main purpose of this special task is really just to get the task loop to wake from the select() call to allow ready tasks to run again.)
This Is Crazy (But Most Things Are When You Think About It) Needless to say, if you’re going to abandon threads for concurrency, you’re going to have to do more work to make it work. If you get down to it, the code involving generators is actually a lot like a small user-level operating system, with all of the underlying task scheduling, I/O polling, and so forth. At first glance, the whole approach might seem crazy. However, keep in mind that it would rarely be necessary to write such code yourself. Instead, you would use an existing library such as the new asyncio module [4]. Even if you use a library, you still have to know what you’re doing. Specifically, you need to be fully aware of places where your code might block and stall the task scheduler. Coroutines also do not free you from limitations such as Python’s GIL—you should still be prepared to execute work in thread or process pools as appropriate. At this point, you might be seeking some kind of sage advice on how to proceed with Python concurrency. Should you use threads? Should you use coroutines? Unfortunately, I can’t offer anything more than it depends a lot on the problem that you are trying to solve. Python provides a wide variety of tools for addressing the concurrency problem. All of those tools have various tradeoffs and limitations. As such, anyone expecting a kind of “magic” solution that solves every possible problem will likely be disappointed. Again, some thinking is required—in the end, it really helps to understand what you’re doing and how things work. Postscript The code examples in this article were the foundation of a PyCon 2015 talk I gave on concurrency. If you’re interested in seeing the code work with a live coding demonstration, the talk video can be found online [5]. References
USING CDISC VALIDATION TOOLS IN A VALIDATED HOSTED ENVIRONMENT

SANDEEP JUNEJA, SAS INSTITUTE INC

AGENDA
- Introduction
- Problem Scenario
- Validation Tools Implementation in Hosted Environment
  - OpenCDISC
  - SAS Clinical Standards Toolkit (SAS/CST)
- Validation Report – M&M Report
- Workflow

INTRODUCTION
• CDISC Validation Tools
  • OpenCDISC – Java-based GUI/CLI tool.
    • Execution mode – PC-based UI, command line interface, or a SAS program with the X command
  • SAS/CST – Standards Metadata (SAS datasets) and Framework SAS Macros
    • Execution mode – SAS program with CST macro calls.
• Validated Hosted Environment
  • Validation – “Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specification and quality attributes.”
  • Hosted Environment – “A facility in which a third party holds the data and runs the programs in its own computers”
  • Locked-down environment.
  • All updates are controlled updates.

PROBLEM SCENARIO
- Hosted Environments usually don’t allow
  - OS command execution
  - Updates to the software install area
- How to execute OpenCDISC checks from inside a Hosted Environment?
  - Develop a SAS macro that can be used to execute OpenCDISC checks without using the X command (OS commands)
- How to register new standards to the toolkit inside a Hosted Environment?
- How to register customized domains to existing standards in the toolkit inside a Hosted Environment?
  - Install Standards Metadata under a Regulated Access Area – a location where it can be controlled, versioned and audited.
  - Tell the toolkit to reference Standards Metadata from the Regulated Access Area and NOT the install location.
- Hosted Environment Used
  - SAS Product – SAS Drug Development, a web-based analytical platform/environment, is used as the Hosted Environment to address the issue.

SAS DRUG DEVELOPMENT OVERVIEW
- Web-based environment
- Dashboard / Repository / Workspace
- Versioning / E-signature / Check-in/Check-out / Groups / Permissions / Privileges / Audit trail
- SAS program development / execution
- Hosted Environment

OPENCDISC | JAVA CODE
- Download and understand the OpenCDISC Java code
  - Download it from [http://svn.opencdisc.org/validator/](http://svn.opencdisc.org/validator/)
  - Install it in Eclipse – Java IDE

```java
package org.opencdisc.validator.cli;

public class Main {
    public static void main(String[] args) {
        CommandParser parser = CommandParser.GetInstance();
        parser.parse(args);
    }
}
```

OPENCDISC JAVA CODE – COMMANDPARSER.JAVA
- Known issue
  - Missing `.trim()`
- Regenerate the JAR file

```java
private String getCommand(String commandString) {
    if (commandString.startsWith(COMMAND_PREFIX)) {
        commandString = commandString.substring(COMMAND_PREFIX.length());
    }
    if (commandString.contains(COMMAND_SEPARATOR)) {
        commandString = commandString.substring(0, commandString.indexOf(COMMAND_SEPARATOR));
    }
    return commandString;
}

private String getValue(String commandString) {
    if (commandString.contains(COMMAND_SEPARATOR)) {
        String[] components = commandString.split(Pattern.quote(COMMAND_SEPARATOR), 2);
        if (components.length == 2) {
            commandString = components[1];
        } else {
            commandString = "";
        }
    }
    if (commandString.startsWith(COMMAND_ESCAPE) && commandString.endsWith(COMMAND_ESCAPE)) {
        commandString = commandString.substring(1, commandString.length() - 1);
    }
    return commandString;
}
```

OPENCDISC | REGENERATE JAR FILE
- **Export**
  - **Select** – Export all resources required to run an application into a JAR file on the local file system.
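With the missing `.trim()` applied, the `CommandParser` parsing noted above behaves as below. `CommandParserDemo` is a hypothetical standalone stand-in for the real `org.opencdisc.validator.cli.CommandParser`, kept self-contained only to make the fix testable:

```java
import java.util.regex.Pattern;

// Standalone sketch of the corrected CommandParser logic with the
// missing .trim() applied (CommandParserDemo is a hypothetical
// stand-in for the real OpenCDISC class).
public class CommandParserDemo {

    static final String COMMAND_PREFIX = "-";
    static final String COMMAND_SEPARATOR = "=";
    static final String COMMAND_ESCAPE = "\"";

    // " -task=Validate " -> "task"
    public static String getCommand(String commandString) {
        commandString = commandString.trim();   // the fix: drop stray whitespace
        if (commandString.startsWith(COMMAND_PREFIX)) {
            commandString = commandString.substring(COMMAND_PREFIX.length());
        }
        if (commandString.contains(COMMAND_SEPARATOR)) {
            commandString = commandString.substring(0, commandString.indexOf(COMMAND_SEPARATOR));
        }
        return commandString;
    }

    // " -task=Validate " -> "Validate"; surrounding quotes are stripped
    public static String getValue(String commandString) {
        commandString = commandString.trim();   // the fix
        if (commandString.contains(COMMAND_SEPARATOR)) {
            String[] components = commandString.split(Pattern.quote(COMMAND_SEPARATOR), 2);
            commandString = (components.length == 2) ? components[1] : "";
        }
        if (commandString.length() >= 2
                && commandString.startsWith(COMMAND_ESCAPE)
                && commandString.endsWith(COMMAND_ESCAPE)) {
            commandString = commandString.substring(1, commandString.length() - 1);
        }
        return commandString;
    }

    public static void main(String[] args) {
        System.out.println(getCommand(" -task=Validate "));   // task
        System.out.println(getValue(" -task=Validate "));     // Validate
    }
}
```

Without the trim, a stray trailing space in a token (easy to produce when the arguments are assembled in a SAS data step) ends up inside the parsed value.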
- **Runnable JAR File Export**
  - **Launch configuration**: OpenCDISC
  - **Export destination**
  - **Library handling**:
    - Extract required libraries into generated JAR
    - Package required libraries into generated JAR
    - Copy required libraries into a sub-folder next to the generated JAR
  - **Save as ANT script** / **ANT script location** / **Browse...**

**Name**
- add_to_classpath.sas
- init_classpath_update.sas
- opencdisc_cli.jar
- reset_classpath.sas
- setup.sas
- validator-cli-1.5.jar

---

This information is confidential and covered under the terms of any SAS agreements as executed by customer and SAS Institute Inc.

OPENCDISC HOSTED ENVIRONMENT – RUN_OPENCDISC.SAS

```sas
%macro Run_openCDISC(Params=, debug=%str(N));
  <PRE-PROCESSING CODE TO GENERATE ARRAY OF NAME=VALUE PAIR>
  ........
  data _opencdisc_cli;
    dcl javaobj j("org/opencdisc/validator/cli/Main");
    array s{&i.} $200 (&cmd.);
    j.callStaticVoidMethod("main", s); /* invokes public static void main(String[] args) */
  run;
%mend Run_openCDISC;

%include "&macloc./setup.sas";
%init_classpath_update;
%*add_to_classpath(&macloc./validator-cli-1.5.jar);
%add_to_classpath(&macloc./opencdisc_cli.jar);

* Validate SDTM datasets;
%Run_openCDISC(Params=%nrquote(
    task=Validate,
    source=&bpath./xpt,
    config=&bpath./opencdisc-validator/config/config-sdtm-3.1.2.xml,
    report=&bpath./OpenCDISC_Results_%SYSFUNC(translate(%SYSFUNC(datetime(), E8601DT.),'-',':')).xls,
    report:type=Excel,
    report:cutoff=10,
    report:overwrite=yes),
  debug=N);

%reset_classpath;
```

- Dynamically setting Classpath – [http://support.sas.com/kb/38/518.html](http://support.sas.com/kb/38/518.html)
- OpenCDISC Validator – [http://www.opencdisc.org/download](http://www.opencdisc.org/download)

OPENCDISC ONE TIME CHANGE
- Update the SAS config file with the path for the Java JAR file and SAS macro

/* define the location of the OpenCDISC
Macro */
-insert sasautos "C:\SAS\test\OpenCDISC"

/* put OpenCDISC jar on the classpath */
-JREOPTIONS (-Dsas.app.class.dirs=C:\SAS\test\OpenCDISC)

```sas
%include "&macloc./setup.sas";
%init_classpath_update;
%*add_to_classpath(&macloc./validator-cli-1.5.jar);
%add_to_classpath(&macloc./opencdisc_cli.jar);

* Validate SDTM datasets;
%Run_openCDISC(Params=%nrbquote(
    task=Validate,
    source=&bpath./xpt,
    config=&bpath./opencdisc-validator/config/config-sdtm-3.1.2.xml,
    report=&bpath./OpenCDISC_Results_%SYSFUNC(translate(%SYSFUNC(datetime(), E8601DT.),'-',':')).xls,
    report:type=Excel,
    report:cutoff=10,
    report:overwrite=yes));
%reset_classpath;
```

**OPENCDISC SUMMARY**
- Write a SAS macro using the SAS JavaObj
- PC
  - Include the path for the OpenCDISC JAR file and SAS macro in the SAS configuration file
- Hosted Environment
  - Dynamically add the JAR file to the classpath – [http://support.sas.com/kb/38/518.html](http://support.sas.com/kb/38/518.html)
  - One-time request for change: update the SAS configuration file with the JAR file location and SAS macro location.

SAS/CST PC ENVIRONMENT
- Standards Metadata – c:/cstGlobalLibrary
- Framework SAS Macros – !SASROOT/cstframework/sasmacro
- %cstutil_setcstgroot – initialization driver/macro

TOOLKIT INITIALIZATION
- Identify the base path for the location of Standards Metadata

```sas
%macro cstutil_setcstgroot() / des='CST: Set _cstGRoot macro variable';
  %global _cstGRoot;
  %if &sysver=9.3 %then %cstutilsetcstgroot93;
  %else %let _cstGRoot=%sysfunc(kcompress(%sysfunc(getoption(CSTGLOBALLIB)),%str(%")));
%mend cstutil_setcstgroot;

/* Auto generated by the CST-Framework post installation configuration component; */
%macro cstutilsetcstgroot93;
  %let _cstGRoot=c:/cstGlobalLibrary;
%mend;
```

```sas
proc options option=CSTGLOBALLIB;
run;
```

SAS (r) Proprietary Software Release 9.4 TS1M0

    CSTGLOBALLIB="C:\cstGlobalLibrary"
        Specifies the location of the SAS Clinical Standards Toolkit global library.
NOTE: PROCEDURE OPTIONS used (Total process time):
      real time 0.07 seconds
      cpu time  0.00 seconds

```sas
proc options option=CSTGLOBALLIB;
run;
```

SAS (r) Proprietary Software Release 9.4 TS1M1

NOTE: PROCEDURE OPTIONS used (Total process time):
      real time 0.00 seconds
      cpu time  0.00 seconds

    CSTGLOBALLIB="/srw/cstGlobalLibrary"
        Specifies the location of the SAS Clinical Standards Toolkit global library.

SAS/CST HOSTED ENVIRONMENT
- cstGlobalLibrary – contains toolkit Metadata & Standards Information
- standards – contains Standards Metadata
- sasmacros – updated CST framework SAS macros

**SAS/CST** CUSTOMIZE TOOLKIT SETUP
- Update `%cstutil_setcstgroot` to support the Hosted Environment – SDD
- Add the path of the updated macro to SASAUTOS.

```sas
%macro cstutil_setcstgroot() / des='CST: Set _cstGRoot macro variable';
  %global _cstGRoot;
  * Check execution environment - WINDOWS or SAS DRUG DEVELOPMENT (SDD);
  %if %symexist(_sasws_) %then %let env=SDD;
  %else %let env=WIN;
  %put env=&env;
  %if (&env=WIN) %then %do;
    %if &sysver=9.3 %then %cstutilsetcstgroot93;
    %else %let _cstGRoot=%sysfunc(kcompress(%sysfunc(getoption(CSTGLOBALLIB)),%str(%")));
  %end;
  %else %do;
    %let _cstGRoot=%str(&_sasws_./SAS/Files/cst_sdd/cstGlobalLibrary);
  %end;
%mend cstutil_setcstgroot;
```

```sas
%let sdd_sas_api_loc=%sysget(SASROOT)/sddapi/sdd-sas-macro-1.4/sasmacros/;
%put &sdd_sas_api_loc;
%let cst_mac_loc=%str(&_sasws_./SAS/Files/cst_sdd/sasmacros);
%put &cst_mac_loc;
%let opendisc_mac_loc=%str(&_sasws_./SAS/Files/cst_sdd/programs/Study_Management/opendisc/macros);
%put &opendisc_mac_loc;

options SASAUTOS=("&sdd_sas_api_loc" "&cst_mac_loc" "&opendisc_mac_loc" SASAUTOS)
        MAUTOSOURCE MRECALL MAUTOLOCDISPLAY;
%put %sysfunc(getoption(SASAUTOS));
```

REGISTER CUSTOM STANDARD
- Register the CUSTOM standard in the toolkit – ABC-SDTM-3.1.3-1.0
  - Make a copy of the base standard and update the respective standard metadata
  - Update the toolkit Global Metadata

SAS/CST REGISTER CUSTOMIZED DOMAIN
- Register a NEW domain in the standard
- Update the
Reference Metadata – Reference_tables & Reference_columns

SAS/CST VALIDATE DATA
- Specify the standards, check types and check list to be executed
- Generates – validation_metrics, validation_results, cstreport.pdf
- Update the reporting macros to capture dynamic temporary datasets into known temporary datasets

SAS/CST METRICS & MATRIX (M&M) REPORT
- Metrics & Matrix Report

### Summary Metrics

*(Report screenshot: overall counts of distinct check invocations, check invocations not run, errors (severity=High), warnings (severity=Medium), notes (severity=Low), and structural vs. content errors, warnings and notes.)*

### Table Metrics

*(Report screenshot: the same metrics broken down per domain – CST, DM, AE, DV, TA, TE, TI, TV, VS, XP.)*

### Check Log

*(Report screenshot: one row per check – ID, Source, Message, TableScope, ColumnScope, Severity, Elapsed Time – plus per-domain flags for AE, DM, EX, SV, TA, ...)*

### Validation Report

#### OpenCDISC Matrix Report

<table>
  <thead>
    <tr><th>Rule ID</th><th>Description</th><th>Category</th><th>Severity</th><th>GLOBAL</th><th>AE</th><th>CH</th><th>DM</th><th>DS</th><th>EX</th></tr>
  </thead>
  <tbody>
    <tr><td>SD1167</td><td>Lab Test Results (LB) dataset should be included in every submission</td><td>Presence</td><td>Warning</td><td>⬤</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>SD1168</td><td>Vital Signs (VS) dataset should be included in every submission</td><td>Presence</td><td>Warning</td><td>⬤</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>SD1111</td><td>Subject Elements (SE) dataset should be included in every submission</td><td>Presence</td><td>Warning</td><td>⬤</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>SD1112</td><td>Trial Arms (TA) dataset should be included in every submission</td><td>Presence</td><td>Warning</td><td>⬤</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>SD1113</td><td>Trial Elements (TE) dataset should be included in every submission</td><td>Presence</td><td>Warning</td><td>⬤</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>SD1115</td><td>Trial Summary (TS) dataset should be included in every submission</td><td>Presence</td><td>Error</td><td>⬤</td><td></td><td></td><td></td><td></td><td>X</td></tr>
    <tr><td>SD3002</td><td>NULL value in a variable marked as Required</td><td>Required</td><td>Error</td><td>⬤</td><td></td><td></td><td></td><td></td><td>X</td></tr>
    <tr><td>SD3003</td><td>Value of a Date/Time variable (--DTC) must conform to the ISO 8601 international standard</td><td>Format</td><td>Error</td><td>⬤</td><td></td><td></td><td></td><td></td><td></td></tr>
    <tr><td>SD3004</td><td>Domain Abbreviation (DOMAIN) variable should be consistent with the name of the dataset</td><td>Inconsistent Value</td><td>Error</td><td>⬤</td><td></td><td></td><td></td><td></td><td>X</td></tr>
    <tr><td>SD3005</td><td>The value of the Sequence Number (--SEQ) variable must be unique for each record within a domain and within a Unique Subject Identifier (USUBJID) or Pool Identifier (POOLID)</td><td>Consistency</td><td>Error</td><td>⬤</td><td></td><td></td><td></td><td></td><td>X</td></tr>
    <tr><td>SD3006</td><td>No qualifiers to test for an AEs between AEs</td><td>Consistency</td><td>Error</td><td>⬤</td><td></td><td></td><td></td><td></td><td>X</td></tr>
    <tr><td>SD3011</td><td>Description of Arm (ARM) must equal ‘Screen Failure’ when Arm Code (ARMCD) is ‘SCRNFAIL’, and vice versa</td><td>Consistency</td><td>Error</td><td>⬤</td><td></td><td></td><td></td><td></td><td>X</td></tr>
    <tr><td>SD3012</td><td>Start Date/Time of Event, Exposure or Observation (--STDTC) must be less than or equal to End Date/Time of Event, Exposure or Observation (--ENDTC)</td><td>Limit</td><td>Error</td><td>⬤</td><td></td><td></td><td></td><td></td><td>X</td></tr>
  </tbody>
</table>

**Matrix**

*(Excel matrix sheet: rules by domain – GLOBAL, AE, CH, DM, DS, EX, ...)*

STANDARDS & STUDY MANAGEMENT WORKFLOW
- BPMN 2.0

STANDARDS & STUDY MANAGEMENT
- Download link for
  - Run_OpenCDISC SAS macro
  - OpenCDISC M&M Report SAS macro
  - SAS/CST M&M Report SAS macro
- https://communities.sas.com/docs/DOC-7781

THANK YOU!
SANDEEP JUNEJA
SANDEEP.JUNEJA@SAS.COM
SAS DRUG DEVELOPMENT FORUM: HTTPS://COMMUNITIES.SAS.COM/COMMUNITY/SUPPORT-COMMUNITIES/SAS-DRUG-DEVELOPMENT
Evaluating and Creatively Building KAOS Goal Models
João Araújo
joao.araujo@fct.unl.pt, http://ctp.di.fct.unl.pt/~ja/
In collaboration with Patricia Espada and Miguel Goulão (presented at CAiSE 2013)

Goal-Oriented Requirements Engineering (GORE)
- A paradigm in Requirements Engineering to handle
  - Requirements elicitation
  - Requirements specification
  - Requirements analysis
  - Requirements negotiation
  - Requirements evolution
- Some well-known approaches
  - KAOS, i* framework, GBRAM, GRL, ...

Knowledge Acquisition in autOmated Specification (KAOS)
- A GORE methodology based on goal decomposition and refinement, to support requirements acquisition and elaboration

Let’s work out...
- The members of a health club need to get a ticket before participating in a specific class...

Let’s work out... in KAOS

Add a few more requirements to our example
- 5 main functionalities
- 15 agents
- 212 sub-goals
- Is this model complete?
- How complex is this model?
- Is this complexity really necessary?
- GORE is aimed at large-scale systems
- Models can become really hard to understand

Research objectives
- Analyse the extent to which a model is close to being complete
- Assess model complexity to identify model refactoring opportunities, e.g.:
  - Models may have a too deep goal hierarchy
  - Agents may have too many responsibilities
- Prevent unanticipated extra costs in the development phase
- Better management of the completeness and complexity of the models

Contributions
- Tool-supported approach for the metrics-based evaluation of the completeness and complexity of KAOS goal models.
- The developer can measure the current status of his model and take corrective actions during model construction.
- The tool support is based on the integration of a KAOS editor with a KAOS metrics suite and:
  - is targeted at the requirements elicitation process,
  - can also support post-mortem analysis from which lessons can be learned for future projects.
Contributions (2)
- All metrics are formally defined using OCL
- We validate the metrics set and their implementation by extending an existing tool for editing KAOS goal models
  - modularKAOS, developed with MDD on top of Eclipse

Approach outline
- Metrics **identification** using the Goal-Question-Metric approach
- Metrics (semi-)formal **definition** using OCL
- Metrics **evaluation** with real-world case studies
  - Often used as examples of best practices using KAOS
- KAOS models analysis with metrics support

## Goal: KAOS models completeness evaluation

<table>
  <thead>
    <tr><th>Question</th><th>Metric</th></tr>
  </thead>
  <tbody>
    <tr><td><strong>Q1.</strong> How close are we to completing the assignment of all goal responsibilities to agents?</td><td><strong>PLGWA.</strong> Percentage of Leaf Goals With an Agent.</td></tr>
    <tr><td><strong>Q2.</strong> How detailed is the goal model with respect to objects?</td><td><strong>PLGWO.</strong> Percentage of Leaf Goals With an Object.</td></tr>
    <tr><td><strong>Q3.</strong> How close are we to completing the resolution of all the goal obstacles?</td><td><strong>PLOWS.</strong> Percentage of Leaf Obstacles With a reSolution.</td></tr>
    <tr><td><strong>Q4.</strong> How detailed is the goal model with respect to operations?</td><td><strong>PLGWOp.</strong> Percentage of Leaf Goals With an Operation.</td></tr>
    <tr><td><strong>Q5.</strong> How well supported are the operations in the goal model?</td><td><strong>POpWA.</strong> Percentage of Operations With an Agent.</td></tr>
  </tbody>
</table>

## Goal: KAOS models complexity evaluation

<table>
  <thead>
    <tr><th>Question</th><th>Metric</th></tr>
  </thead>
  <tbody>
    <tr><td><strong>Q6.</strong> Does an agent have too much responsibility in the model?</td><td><strong>ANLG.</strong> Number of Leaf Goals per Agent.</td></tr>
    <tr><td><strong>Q7.</strong> Does a leaf goal have too many/few objects?</td><td><strong>GNO.</strong> Number of Objects per Goal.</td></tr>
    <tr><td><strong>Q8.</strong> How difficult is it to understand a model, with respect to the number of refinement levels?</td><td><strong>MD.</strong> Model Depth.</td></tr>
    <tr><td><strong>Q9.</strong> How complex is a model, with respect to its goal refinements?</td><td><strong>RNSG.</strong> Root Number of Sub-Goals.</td></tr>
  </tbody>
</table>

modularKAOS: partial metamodel

## Metrics definition

### Q1 – How close are we to completing the assignment of all goal responsibilities to agents?

<table>
  <tbody>
    <tr><td>Name</td><td>PLGWA – Percentage of Leaf Goals With an Agent</td></tr>
    <tr><td>Informal definition</td><td>Percentage of leaf goals that have an associated agent in the model.</td></tr>
    <tr><td>Formal definition</td><td><strong>context KAOS</strong><br/><strong>def: PLGWA(): Real</strong> = self.NLGWA() / self.NLG()</td></tr>
    <tr><td>Pre-condition</td><td>context KAOS::PLGWA()<br/><strong>pre: self.NLG() &gt; 0</strong></td></tr>
    <tr><td>Comments</td><td>If there are no leaf goals the result is undefined. This requires:<br/><strong>NLG – Number of Leaf Goals</strong><br/><strong>NLGWA – Number of Leaf Goals With an Agent</strong></td></tr>
    <tr><td>Recommendation</td><td>In a complete model, all leaf goals should be assigned to an agent.</td></tr>
  </tbody>
</table>
Computing % of leaf goals with an agent

\[ \mathrm{PLGWA} = \frac{\mathrm{NLGWA}}{\mathrm{NLG}} = \frac{4}{5} = 0.8 \]

Evaluation
- Case studies: BARTS, MSCS, ES, CPMS, LMS, LAS, MS
- *(Chart slides: Percentage of Leaf Goals with an Agent; Percentage of Leaf Goals with an Object; Percentage of Leaf Obstacles with a reSolution; Percentage of Leaf Goals with an Operation; Percentage of Operations with an Agent; Number of Leaf Goals per Agent; Objects per Goal; Model Depth; Root Number of Sub-Goals.)*
- Espada, Goulão, Araújo, “A Framework to Evaluate Complexity and Completeness of KAOS Goal Models”, CAiSE 2013, Valencia, Spain

Discussion (Completeness)
- Most models handle responsibility assignment of leaf goals to agents
- Objects are not frequently used
- When obstacles are specified, we find a big variation (from 0% to 100%) in the percentage of obstacles with a resolution
- Operations are even more rarely used than objects
- Only two of the case studies model the assignment of operations to agents, showing this is a fairly unexplored modeling feature.
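The PLGWA computation can also be sketched in code. This is a toy sketch with a hypothetical `Goal` class (not the modularKAOS metamodel) that reproduces the 4/5 = 0.8 example:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the PLGWA metric (leaf goals with an agent / leaf goals).
// `Goal` is a hypothetical stand-in for the modularKAOS metamodel classes.
public class PlgwaDemo {

    public static class Goal {
        final List<Goal> subGoals = new ArrayList<>();
        final boolean hasAgent;
        public Goal(boolean hasAgent) { this.hasAgent = hasAgent; }
        public Goal add(Goal g) { subGoals.add(g); return this; }
    }

    private static void collectLeaves(Goal g, List<Goal> leaves) {
        if (g.subGoals.isEmpty()) leaves.add(g);
        else for (Goal s : g.subGoals) collectLeaves(s, leaves);
    }

    // PLGWA = NLGWA / NLG; undefined (NaN here) when the model has no leaf goals
    public static double plgwa(Goal root) {
        List<Goal> leaves = new ArrayList<>();
        collectLeaves(root, leaves);
        if (leaves.isEmpty()) return Double.NaN;
        long withAgent = leaves.stream().filter(g -> g.hasAgent).count();
        return (double) withAgent / leaves.size();
    }

    // The example above: 5 leaf goals, 4 of them with an agent -> 0.8
    public static double demoValue() {
        Goal root = new Goal(false)
            .add(new Goal(true)).add(new Goal(true))
            .add(new Goal(true)).add(new Goal(true))
            .add(new Goal(false));
        return plgwa(root);
    }

    public static void main(String[] args) {
        System.out.println(demoValue());   // 0.8
    }
}
```

Returning NaN for an empty set of leaf goals mirrors the metric's OCL pre-condition `self.NLG() > 0`, under which the value is otherwise undefined.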
Discussion (Complexity)
- Relatively few leaf goals assigned to each agent
- Do not attribute too many responsibilities to a single agent
- Assigning objects to goals is a mostly unexplored feature of models
- Model depth varies much less than the number of model elements, suggesting a fairly consistent state of practice with respect to what is considered an adequate model decomposition level
- Big variations in the case studies concerning the number of subgoals defined in each model
- The average number is around 40 subgoals, although in one of the examples it is over 200 goals.

Limitations
- The framework does not cover
  - **Quality** of elicited requirements
  - **Thoroughness** of elicited requirements
- No easy differentiation between essential and accidental complexity
- No reference values for what is “acceptable”
- Only “good” examples in the sample
  - A reference for best practices, but not for bad ones

Conclusions
- Metrics suite for **completeness** and **complexity** of KAOS goal models
  - Values computed and updated as the model evolves
  - Full integration with the modeling tool
- Completeness monitoring to help assess the effort to model completion
- Complexity monitoring to detect potential quality problems early and identify refactoring opportunities
- Proof of concept with best-practice examples of KAOS models
- The obtained metrics values are a step towards a deeper understanding of actual goal modeling practices

Future work
- Metrics set extension to quality attributes
- Evaluation replication with other KAOS models
- Towards metrics-based modeling heuristics
- Assess completeness in terms of requirements coverage
  - Trace model elements to requirements sources
  - Identify the requirements in those sources that are yet to be covered by the goal models

2nd Part: Creatively Building KAOS Models
In collaboration with Fernando Wanderley (presented at MoDRE’13, workshop of RE’13)

Goals
- Provide a **systematic and modular goal-oriented modelling** process...
  - Helping the requirements engineers with the elicitation of KAOS concerns (e.g. agents, goals and objects) and...
  - Generating (or representing) KAOS models from mind maps, through model-driven techniques, providing better understanding and communication by stakeholders (e.g. domain expert and business specialist)

.. maybe we should try to think out of the box? To increase some creativity...
- Both academia and industry consider several techniques.
- Tools based on mind maps are powerful tools for managing the process of eliciting requirements in agile development.
- We have been investigating how mind maps can be used in requirements engineering to facilitate communication among the domain experts and...
- How model-driven engineering techniques can help generate requirements models from mind maps.

Model-Driven Engineering
- The main objective of this work is to establish the mapping between the main elements and concepts of the KAOS and Mind Map models.
- Currently implementing these mappings using the ATL transformation language.

Mind Map Metamodel
KAOS Metamodel

Representation of Mapping Elements

<table>
  <thead>
    <tr><th>Element</th><th>Icon</th><th>Concern</th><th>Semantic</th></tr>
  </thead>
  <tbody>
    <tr><td>Requirement</td><td>![edit icon]</td><td>Goal Model</td><td>&lt;&lt;edit&gt;&gt;</td></tr>
    <tr><td>Expectation</td><td>![idea icon]</td><td>Goal Model</td><td>&lt;&lt;idea&gt;&gt;</td></tr>
    <tr><td>Domain Property</td><td>![warning icon]</td><td>Goal Model</td><td>&lt;&lt;warning&gt;&gt;</td></tr>
    <tr><td>Entity</td><td>![list icon]</td><td>Object Model</td><td>&lt;&lt;list&gt;&gt;</td></tr>
  </tbody>
</table>

Set of Transformation Rules (by Concerns)
- **Rule 1.** Each node different from the root node will be transformed into an Agent.

Goals Modelling
- **Rule 2.** Each root node will be transformed into a RootGoal.
- **Rule 3.**
Each node different from the root node that is not a leaf node will be transformed into another Goal, with a Refinement link from its parent node.
- **Rule 4.** For each node different from the root node that is a leaf node, there are three transformation possibilities depending on the node state:
  - 4.1 – Requirement
  - 4.2 – Expectation
  - 4.3 – Domain Property
- **Rule 4.1.** Each node with the requirement state will be transformed into a Requirement.
- **Rule 4.2.** Each node with the “idea” state 🧠 will be transformed into an Expectation.
- **Rule 4.3.** Each node with the “warning” state ⚠️ will be transformed into a DomainProperty.

Objects Modelling
- **Rule 5.** The root node of a mind map will always be transformed into a class.
- **Rule 6.** Each node different from the root node and defined with an entity state will be transformed into a class.
- **Rule 7.** Each node different from the root node which is not an Entity, and is a child of a node of type Entity, will be transformed into an attribute.

Systematic and Agile Transformation Process

Case Study (Audiobus)

Agents Identification
[Diagram showing the relationships between Broadcasting Content Controller, User Profile Controller, Mobile Device Controller, Audiobus, Advertising Agency, and Passenger.]

Goals Identification
Object Identification
Compose KAOS Model

Conclusion
- This work describes a model-driven requirements approach to systematically generate KAOS models from mind maps, using meta-modeling and model transformations;
- After mapping the elements (from mind maps), the software engineer composes a complete KAOS model;
- A systematic and agile process was defined for the approach, which was applied to a real case study.
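The goal-modelling rules above amount to a classification of mind-map nodes. A minimal sketch of rules 2-4, with hypothetical `Node` and `State` types standing in for the actual metamodels and ATL rules:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of transformation rules 2-4: classify mind-map nodes into KAOS
// goal-model element kinds. `Node` and `State` are illustrative stand-ins,
// not the actual mind-map metamodel or the ATL transformation.
public class MindMapToKaos {

    public enum State { NONE, REQUIREMENT, IDEA, WARNING }

    public static class Node {
        final String name;
        final State state;
        final List<Node> children = new ArrayList<>();
        public Node(String name, State state) { this.name = name; this.state = state; }
        public Node add(Node c) { children.add(c); return this; }
    }

    // Returns the kind of KAOS element a node is transformed into
    public static String transform(Node node, boolean isRoot) {
        if (isRoot) return "RootGoal";                  // rule 2
        if (!node.children.isEmpty()) return "Goal";    // rule 3: inner node
        switch (node.state) {                           // rule 4: leaf node
            case REQUIREMENT: return "Requirement";     // rule 4.1
            case IDEA:        return "Expectation";     // rule 4.2
            case WARNING:     return "DomainProperty";  // rule 4.3
            default:          return "Goal";
        }
    }

    // Tiny health-club-like example tree (names are illustrative)
    public static List<String> demo() {
        Node leafReq  = new Node("Ticket issued", State.REQUIREMENT);
        Node leafIdea = new Node("Member shows card", State.IDEA);
        Node mid  = new Node("Control class access", State.NONE).add(leafReq).add(leafIdea);
        Node root = new Node("Manage classes", State.NONE).add(mid);
        return List.of(transform(root, true), transform(mid, false),
                       transform(leafReq, false), transform(leafIdea, false));
    }

    public static void main(String[] args) {
        System.out.println(demo());   // [RootGoal, Goal, Requirement, Expectation]
    }
}
```

The dispatch on node position and state is what the ATL rules express declaratively; the sketch only makes the classification order (root, inner, leaf-by-state) explicit.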
Future Work

- For future work we intend to implement the transformation process with ATL;
- To design an empirical protocol to evaluate the understandability of mind map adoption;
- To include mind maps for capturing operations to be included in KAOS models, and to automate the composition part.

## "Weyuker properties" assessment

| # | Adapted Weyuker Property | ANLG | GNO | MD | RNSG |
|---|--------------------------|------|-----|----|------|
| 1 | At least some different models should exhibit different values for the same complexity metric. \( \exists P, \exists Q : P \neq Q \land \lvert P \rvert \neq \lvert Q \rvert \) |  |  |  |  |
| 2 | There is a finite number \( n \) of models for which the complexity is \( c \) (a non-negative number). Let \( S \) be the set of models with complexity \( c \), and \( n \) the cardinal of \( S \). \( \forall c \in \mathbb{R}^+_0, \forall P : \lvert P \rvert = c \Rightarrow P \in S, \exists n \in \mathbb{N}_0 : \lvert S \rvert = n \) |  |  |  |  |
| 3 | Different models \( P \) and \( Q \) may have the same complexity. \( \exists P, \exists Q : P \neq Q \land \lvert P \rvert = \lvert Q \rvert \) | Yes | Yes | Yes | Yes |
| 4 | Different models which are functionally equivalent may have different complexities. \( \exists P, \exists Q : P \equiv Q \land \lvert P \rvert \neq \lvert Q \rvert \) | Yes | Yes | Yes | Yes |
| 5 | Monotonicity is a fundamental property of all complexity measures: a model in isolation is at most as complex as its composition with another model. \( \forall P, \forall Q : \lvert P \rvert \leq \lvert P;Q \rvert \land \lvert Q \rvert \leq \lvert P;Q \rvert \) |  |  |  |  |
| 6 | Two models of equal complexity may yield compositions of different complexity when each is composed with the same third model \( R \) (interaction). \( \exists P, \exists Q, \exists R : P \neq Q \land \lvert P \rvert = \lvert Q \rvert \land \lvert P;R \rvert \neq \lvert Q;R \rvert \) |  |  |  |  |
| 7 | The model complexity should be responsive to the organization of its model elements in the goal model graph. Let \( P \) be a model and \( Q \) another model such that \( Q \) is formed by permuting the order of the elements in \( P \); name this permutation operation \( \mathrm{Perm}(\cdot) \). \( \exists P, \exists Q : Q = \mathrm{Perm}(P) \land \lvert P \rvert \neq \lvert Q \rvert \) |  |  |  |  |
| 8 | If a model is a renaming of another model, then their complexity should be the same. Assume that the operation \( \mathrm{Rename}(\cdot) \) transforms model \( P \) into its renamed version \( Q \). \( \forall P, \forall Q : Q = \mathrm{Rename}(P) \Rightarrow \lvert P \rvert = \lvert Q \rvert \) | Yes | Yes | Yes | Yes |
| 9 | The complexity of the composition of two models \( P \) and \( Q \) may be greater than the sum of the complexities of models \( P \) and \( Q \). The extra complexity may result from the interaction between models \( P \) and \( Q \). \( \exists P, \exists Q : \lvert P \rvert + \lvert Q \rvert < \lvert P;Q \rvert \) | No | No | No | No |
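Several of these properties can be sanity-checked mechanically. A minimal Python sketch, assuming a toy complexity metric that simply counts goals plus refinement links (the metric and the model representation are illustrative, not the ANLG/GNO/MD/RNSG metrics from the table):

```python
# Toy goal model: a set of goal names plus a set of refinement links.
# |P| = goals + links; purely illustrative, not one of the table's metrics.

def complexity(goals, links):
    """|P| = number of goals + number of refinement links."""
    return len(goals) + len(links)

def rename(goals, links, mapping):
    """Rename(P): apply a goal-name mapping (the property-8 operation)."""
    return ({mapping.get(g, g) for g in goals},
            {(mapping.get(a, a), mapping.get(b, b)) for a, b in links})

def compose(p, q):
    """P;Q: union of the goals and links of two models."""
    return (p[0] | q[0], p[1] | q[1])

P = ({"AchieveBooking", "MaintainProfile"},
     {("AchieveBooking", "MaintainProfile")})
Q = ({"AvoidOverload"}, set())

# Property 5 (monotonicity): |P| <= |P;Q| and |Q| <= |P;Q|
pq = compose(P, Q)
assert complexity(*P) <= complexity(*pq) and complexity(*Q) <= complexity(*pq)

# Property 8 (renaming invariance): |Rename(P)| == |P|
R = rename(*P, {"AchieveBooking": "AchieveReservation"})
assert complexity(*R) == complexity(*P)
```

For such a purely additive counting metric, \( \lvert P;Q \rvert \leq \lvert P \rvert + \lvert Q \rvert \) always holds, which is why property 9 fails for metrics of this kind.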
2003

An Architectural Pattern for Adaptable Middleware Infrastructure

Jason J. Mitchell
University of North Florida

Suggested Citation: https://digitalcommons.unf.edu/etd/289

This Master's Project is brought to you for free and open access by the Student Scholarship at UNF Digital Commons. It has been accepted for inclusion in UNF Graduate Theses and Dissertations by an authorized administrator of UNF Digital Commons. For more information, please contact Digital Projects. © 2003 All Rights Reserved

AN ARCHITECTURAL PATTERN FOR ADAPTABLE MIDDLEWARE INFRASTRUCTURE

by Jason J. Mitchell

A project submitted to the Department of Computer and Information Sciences in partial fulfillment of the requirements for the degree of Master of Science in Computer and Information Sciences

UNIVERSITY OF NORTH FLORIDA
DEPARTMENT OF COMPUTER AND INFORMATION SCIENCES
April, 2003

The project "An Architectural Pattern for Adaptable Middleware Infrastructure" submitted by Jason J. Mitchell in partial fulfillment of the requirements for the degree of Master of Science in Computer and Information Sciences has been approved by the Project Committee:

Signature Deleted: Arturo Sanchez, Ph.D., Project Director
Signature Deleted: Judith Solano, Ph.D., Chairperson of the Department
Signature Deleted: Charles Winton, Ph.D., Graduate Director

4/30/03   4/26/03   4/30/2003

ACKNOWLEDGEMENT

I wish to express gratitude to my eternal companion for enabling all the successes of my life.

CONTENTS

List of Figures ... vii
Abstract ... viii
Chapter 1: The Role of Middleware and Approaches to It ... 1
  1.1 Distributed Communication ... 1
  1.2 Approaches to Middleware-Based Architecture ... 2
  1.3 Discussion ... 3
Chapter 2: An Architectural Pattern Approach ... 5
  2.1 The Problem ... 5
  2.2 The Application Programming Interface (API) Perspective ... 7
  2.3 The Messaging Perspective ... 8
Chapter 3: A Case Study ... 11
  3.1 The Problem Domain ... 11
  3.2 The Design ... 12
  3.3 Case 1: COM+ ... 15
    3.3.1 Client Side ... 15
    3.3.2 Server Side ... 16
  3.4 Case 2: .NET Remoting ... 17
    3.4.1 Client Side ... 17
    3.4.2 Server Side ... 18
  3.5 Case 3: Web Service ... 18
    3.5.1 Server Side ... 19
  3.6 Summary of Case Studies ... 20
Chapter 4: Conclusions ... 21
References ... 22
Appendix A: Adaptable Middleware Pattern ... 25
Appendix B: Source Code ... 29
Vita ... 30

FIGURES

Figure 1: Distributed Application Layers ... 1
Figure 2: Component Based Architecture ... 5
Figure 3: Decoupling Diagram ... 6
Figure 4: Approaches to Message Interpretation ... 9
Figure 5: Application Architecture ... 11
Figure 6: .NET Message ... 13
Figure 7: Interface Definition ... 14
Figure 8: COM+ Client ... 15
Figure 9: COM+ Server ... 16
Figure 10: .NET Remoting Client ... 17
Figure 11: .NET Remoting Server ... 18
Figure 12: Web Service Server ... 19
Figure 13: Pattern for API Abstraction ... 26
Figure 14: Message Interpretation ... 27

ABSTRACT

Middleware technologies change so rapidly that designers must adapt existing software architectures to incorporate new emerging ones. This project proposes an architectural pattern and guidelines to abstract the communication barrier, thereby allowing the developer to concentrate on the application logic. We demonstrate our approach, and the feasibility of easily upgrading the middleware infrastructure, by implementing a sample project and three case studies using three different middlewares on the .NET framework.
Chapter 1
THE ROLE OF MIDDLEWARE AND APPROACHES TO IT

1.1 Distributed Communication

Software applications need to be distributed for many reasons. Because of the increasing need to build these applications and the existence of so many communication protocols, certain types of middlewares have been developed to isolate developers from low-level details that are foreign to the core functionality of the application at hand. A truly distributed application should not be aware of such communication boundaries; ideally, they should be handled by the underlying run-time systems themselves. However, the state of the art in distributed computing and large-scale enterprise development in general isn't quite there yet. An approach that was popularized by CORBA [OMG98] consists of introducing a software layer that abstracts out many of the subtleties associated with communication issues.

![Distributed Application Layers](image)
Figure 1: Distributed Application Layers

Figure 1 shows how a new layer of software called middleware now sits between applications and the operating system, abstracting the network communication heterogeneity and simplifying distributed communication for application developers. New demands are now being placed on the middleware layer and taken away from the application developer. Factors such as component integration, adaptive environments, and real-time interactions drive middlewares to new heights [Tripathi02]. Thus, the variety and complexity of middlewares is increasing.

1.2 Approaches to Middleware-Based Architectures

There is a plethora of middleware architectures, frameworks, and protocols. They try to tackle different problems and complexities. Each additional feature of a middleware has a cost associated with it; most of the time it is a performance hit or a new learning curve for the team to tackle.
New policy-driven middleware approaches like QuO handle many scenarios such as dynamic security requirements, ad hoc networking of devices, and context-aware computing [Tripathi02]. Resource management becomes a key factor in the middleware arena: resource awareness and dynamic reallocation of resources are important responsibilities of a resource management system. Adapting to the network via reflection techniques is a key approach one framework has attempted to accomplish [Duran00]. Many other examples of middleware architectures make use of some of the aspects discussed already. Some examples are Arctic Beans, developed at the University of Tromso [Anderson01], a composable reflective framework at the University of California, Irvine [Venkatasubramanian01], an open network platform protocol developed at Ericsson [Jozic], and an Advanced Communication Toolkit (ACT) developed at Rutgers University [Francu99]. Most of these implementations are either built on or based on commercial object-oriented middleware technologies such as OMG's CORBA, Sun's RMI, Microsoft's COM+, and IBM's MQSeries. All of these commercial implementations offer great advantages when building a distributed system, and work well for certain scenarios. It is even easy to choose which one will work best for the current implementation of the application given its domain. The unavoidable problem that arises is change: the domain, the complexity, the environment, or the application will change, and this may mean that the middleware infrastructure needs to be changed to adapt to the new requirements. What designers have to do is expect the inevitable and prepare for it.

1.3 Discussion

Let us suppose that we have chosen a middleware and have written a client/server system. This means that we have application logic that interfaces with the middleware.
This also means that we have probably defined a messaging infrastructure, whereby we have defined the messages being passed between certain components of the application. Most of the time this is done by means of some sort of interface definition language (IDL) so the messaging infrastructure knows how to marshal/un-marshal the complex types across the network, which calls for mappings between our application-specific complex types and the types defined as our messages. An application so designed is inherently prone to be tightly dependent on the middleware in question! What does this mean for our application developers? They would potentially need to modify large segments of the logic that uses the middleware API when evolution imposes the use of a new middleware. This also means that they would need to write a new set of classes to map to the new set of interface definition types. This not only means more development, but the applications themselves need to be recompiled, retested, and redeployed. This is far too much overhead for something that could have been avoided from the beginning. This project shows an approach that avoids these pitfalls, which can lead to better utilization of resources such as time and money.

Chapter 2
AN ARCHITECTURAL PATTERN APPROACH

2.1 The Problem

Figure 2 depicts a component-based architecture with different forms of middleware used between the components. The point here is that many different middlewares may, and should, be used to handle different scenarios in the context of a distributed enterprise system. The task for the architect is to design the system in such a way that adapting to change is accomplished with minimal effort.

Figure 2: Component Based Architecture showing various middlewares

While only four different middlewares are mentioned, dozens more (some of which were mentioned in the previous chapter) could be used interchangeably depending on certain requirements of the system.
For instance, if the web server and the business domain service interact within the local area network, .NET remoting offers the best performance. If, however, our business domain service needs to be used by applications over the wide area network, then we might want to use web services because they are designed to go over the HTTP protocol and pass through firewalls. This requires a layer of abstraction between the applications and the middleware.

![Decoupling Diagram](image)
Figure 3: Decoupling Diagram

Figure 3 shows several two-component diagrams. The top picture shows that an application can be coupled to the three different middlewares that it may use. The diagram at the bottom illustrates an approach to de-couple the application from the middleware API by creating an abstraction layer to which existing middleware infrastructure details can easily be bound, allowing new bindings containing different middleware infrastructure logic to be cleanly bound to the generic host that uses them. This abstraction layer must remain thin enough so as not to compromise the flexibility of the concrete middleware, and must not introduce unnecessary dependencies in the application with respect to it. In the previous chapter we discussed two key areas where system designs can cause a lot of overhead when trying to switch between middlewares. The first was the Application Programming Interface (API) and the second was the messaging infrastructure. We shall now explain these in more detail.

2.2 The Application Programming Interface (API) Perspective

Each application must be written to interact with the API of the middleware. This means that we must reference external libraries and couple some business logic to interoperate with the middleware. Switching to a new middleware therefore entails changing that code to interface with the new middleware's API.
Now that we have changed some of our business logic, the whole system needs to be retested and the interaction code needs to be redeployed. An example of this scenario that is often found is when the application logic is built so that it obtains a reference to the remote server and then passes it through the code to be called upon when necessary, which could potentially (tightly) couple the system to the middleware. The solution to this is to have the host applications bind to an interface, and then implement code to bind the logic between the interface and the middleware. This not only allows new bindings to be introduced, but also saves us from having to recompile, and even re-test, the application logic that uses it.

2.3 The Messaging Perspective

The other place where designers might not foresee the need for future changes is in the messaging between the components. Two applications communicate with each other through the transfer of complex data types. Current implementations of middleware offer some sort of interface definition language to define the complex types so that they can be marshaled and un-marshaled to be sent across the network. This creates a problem when designers couple the application to this data representation. With each message being passed between applications we must define the types and instruct the middleware how to send them across the network. A substantial amount of work is required to map large data objects in any interface definition language. When this is applied several times to different middlewares, the headache of re-implementation surfaces quickly [Emmerich99].

Figure 4: Approaches to Message Interpretation (first approach: interpretation of messages is done in the interface definition language defined by the middleware; second approach: interpretation of messages is done in the applications themselves, i.e. parsed by XML parsers)

Figure 4 shows two approaches. The first entails defining the data types within the middleware, which means that we must do this for each middleware.
The second approach entails interpreting the data types in the applications themselves. To allow for extensibility, an "open binding" approach [Fitzpatrick98] allows for self-description or meta-data information and late binding. This means that the applications will be responsible for parsing and interpreting the messages being passed across. The only thing the middleware knows is that a character string is being passed across. This exchanging of strings is where the flexibility and decoupling of data and messaging definition come into play. The current primary choice for this is the extensible markup language (XML), which offers a generic, loosely coupled integration environment. The resulting messaging infrastructure is overall more extensible and adaptable, and lays the foundation for a heterogeneous and diverse market of middleware communications [Nusser01]. With the introduction of an adapter-like pattern abstracting the API and an extensible messaging infrastructure, the groundwork will be laid for our architectural pattern, which when instantiated appropriately leads to highly flexible and adaptive distributed systems. This will become more and more important as many new middlewares are introduced in the years to come. Appendix A describes our pattern in a format similar to the one used by Stephen Stelting [Stelting02], and includes a recipe for instantiating this pattern. In the next chapter we will demonstrate case studies that use our pattern.

Chapter 3
A CASE STUDY

3.1 Problem Domain

The application that spawns our case studies is a sports statistics application used to report real-time statistics of athletic events. Figure 5 shows the architectural layout of our example application. The figure also highlights the communication link our case study focuses on. We will use the terms "web server" and "application server" to distinguish between the two servers.
Depending on factors such as network layout, performance requirements, and flexibility, we would choose among many different middlewares to best fulfill the requirements of the system. For our case study we will use three different types of middleware, all provided by the .NET framework. We chose the .NET framework because of the inherent XML tools it provides. We could have just as easily used any other platform.

3.2 Design

For our design we will now instantiate our architectural pattern (c.f. Appendix A) and develop a solution that will enable the swapping of a new middleware easily. The second step in the pattern's "Implementation" section is about allowing our applications to interpret the messaging infrastructure. We also mention that the best way to do this is by using XML as the format for passing such messages. The .NET framework offers some great tools when it comes to serializing classes into character streams using XML, and for our case studies we have decided to use these tools. We just have to create the complex types that we would like to use and then auto-generate the XML marshaling of each complex type to a character string. This implies that we will not have to describe our types to the middleware; we only express one type of message going across the middleware, namely a simple character string. Therefore, the messaging infrastructure will be the same for any middleware we decide to use. We will not mention the messaging in the three different case studies because it is the same in all of them. Our applications will do the interpretation (parsing) of the messages independently of the middleware. This de-coupling allows the middlewares to change, and we will never have to describe to the middleware how to marshal our messaging infrastructure. Figure 6 above shows the portion of a class that was auto-generated by the .NET framework.
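The same kind of marshaling can also be written by hand. A minimal sketch using the .NET `XmlSerializer` (the `Team` type and its members are hypothetical illustrations, not the generated class of Figure 6):

```csharp
using System.IO;
using System.Xml.Serialization;

// Hypothetical complex type; public fields/properties are serialized.
public class Team
{
    public string Name;
    public int Score;
}

public static class TeamXml
{
    // Marshal a Team into the character string the middleware will carry.
    public static string ToXml(Team team)
    {
        var serializer = new XmlSerializer(typeof(Team));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, team);
            return writer.ToString();
        }
    }

    // Un-marshal the character string back into a Team on the other side.
    public static Team FromXml(string xml)
    {
        var serializer = new XmlSerializer(typeof(Team));
        using (var reader = new StringReader(xml))
        {
            return (Team)serializer.Deserialize(reader);
        }
    }
}
```

Because both endpoints only ever hand the middleware a string, this code stays identical regardless of which middleware carries the message.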
Figure 6 shows that the tool will create the methods to marshal any complex data type into a character string and read from a character string back into our object. This class has all of our typed parameters being passed across, for ease of use within our other code.

```csharp
string GetTeamById(string teamrequest);
```
Figure 7: Interface Definition

This allows our application logic to handle any changes in our communication messages without ever changing the middleware. This also means that if we change the middleware we don't have to map our objects or define our complex types to the middleware. The first step of our pattern describes a way to de-couple the middleware API from the host application. To do this we will create an interface that the host will bind to. Then we will implement the interface with a class that will act as a bridge to the target that will service the request. The code above in Figure 7 is the interface that the client code would bind to. The implementation of this object will be dynamically loaded. As long as the interface doesn't change, the host application logic does not have to change, be re-compiled, or be re-tested.

3.3 Case 1: COM+

3.3.1 Client side:

Using Microsoft's distributed communication protocol COM+, Figure 8 shows an example in C# of how to obtain a reference to the remote server and invoke the service layer to retrieve the team. This class would implement the ITeamService interface and act as a proxy to the remote server. There are references and configuration settings that would be coupled with this class. This implementation of the client side API to COM+ retains all syntax referring to COM+. The (XML) messaging is returned to the host, and the host's code knows nothing about the interactions with COM+.
```csharp
public class COMTeamClient : ITeamService
{
    public string GetTeamById(string teamrequest)
    {
        try
        {
            TeamMgr mgr = new TeamMgr();
            return mgr.GetTeamById(teamrequest);
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }
}
```
Figure 8: COM+ Client

The class in Figure 8 implements the ITeamService interface. This is done so the web server application can bind to this interface and won't need to be changed if we add a new implementation.

3.3.2 Server side:

Now on the server side, the class that accepts the COM+ request forwards the request on to the service layer, as shown in Figure 9.

```csharp
[Transaction(TransactionOption.Required)]
[Guid("822A6BC5-1C84-4052-838E-FA47E6EDADC3")]
public class TeamComponentService : ServicedComponent, ITeamService
{
    public string GetTeamById(string teamId)
    {
        return new TeamService().GetTeamById(teamId);
    }
}
```
Figure 9: COM+ Server

The service layer would retrieve the respective team and return the XML payload string to this method to be passed back over COM+. Just as on the client side, all API references are kept within this abstraction layer so that they are not coupled with the applications that use them. Furthermore, these classes would be kept in a separately linked library so that none of the application logic using this abstraction layer would have to be re-compiled after the initial release.

3.4 Case 2: .NET Remoting

Let's suppose that COM+ did not suffice as a middleware between the applications. Now we have to change all of the code that references the COM+ API so that it references the .NET remoting API instead.

3.4.1 Client Side:

Figure 10 shows the implementation of the same interface mentioned before, but now this implementation obtains a reference using a different set of API libraries. Notice we will not have to change any of the code that uses this implementation. As long as we dynamically load this class, we won't have to compile, test, or re-deploy any host application code.
```csharp
public class RemotingTeamClient : ITeamService
{
    public string GetTeamById(string teamrequest)
    {
        try
        {
            string url = "http://localhost/TeamService/Team.rem";
            ITeamService mgr = (ITeamService)Activator.GetObject(typeof(ITeamService), url);
            return mgr.GetTeamById(teamrequest);
        }
        catch (Exception)
        {
            throw; // rethrow without resetting the stack trace
        }
    }
}
```

Figure 10: .NET Remoting Client

3.4.2 Server Side:

```csharp
public class TeamRemotingService : MarshalByRefObject, ITeamService
{
    public TeamRemotingService()
    {
    }

    public string GetTeamById(string teamId)
    {
        return new TeamService().GetTeamById(teamId);
    }
}
```

Figure 11: .NET Remoting Server

Figure 11 shows how to set up a server object so that it is obtainable through .NET Remoting. The class uses middleware-specific syntax, and there is some configuration necessary to set this up as well. These minimal configuration changes and the new implementation of this class are all that is needed to swap one middleware infrastructure for another on the server end. Once again, we didn't have to make any changes to the TeamMgr class or anything it uses, which saves us from having to re-test it.

3.5 Case 3: Web Services

As a third example we will now communicate with the remote server using web services. Under the .NET framework we would need to change some configuration information, such as adding a reference to the web service, and compile the web service proxy. Each toolkit used to create a web client or server is different. Once the proxy is built you just refer to it like any other object. The .NET framework has done a lot to make integration with web services seamless; other protocols, such as CORBA, are more difficult to integrate with. The server-side portion is not so straightforward: not only does one need to extend a web class, but each method must also be marked as one published by this web service. Figure 12 shows an example of this.
3.5.1 Server Side:

```csharp
public class TeamWebService : WebService, ITeamService
{
    // The [WebMethod] attribute is the trigger that exposes this method
    // as a web service call.
    [WebMethod]
    public string GetTeamById(string teamId)
    {
        // Forward to the web-service-independent service layer
        // (a distinct class from this web bridge).
        return new TeamService().GetTeamById(teamId);
    }
}
```

Figure 12: Web Service Server

Above is an example of how to listen for web service requests under the .NET framework. The web method GetTeamById listens for a request; once a request is accepted, this service passes the request onto the web-service-independent service layer that will handle it.

3.6 Summary of Case Studies

As shown in all three cases, the decoupling of the middleware infrastructure from our business applications can be achieved by applying our architectural pattern, which calls for the separation of the application from API-specific functionality and the introduction of an extensible messaging framework. We demonstrated this strategy with three different middlewares, but it could just as easily have been done with any middleware on the market.

Chapter 4

CONCLUSIONS

The case studies presented in our project demonstrate that swapping among three different middlewares can be accomplished with a small number of configuration changes and only a few systematic modifications to the source code. The real key is that none of the actual business logic on the client and server side needed to be recompiled or tested. Only the code that depended on the specific middleware infrastructure had to be altered. Since changes associated with the middleware are inevitable for some application domains, developers should prepare in advance to face them. In this project we have presented an architectural pattern that enables the interchanging of middlewares with minimal effort and overhead for the development team.
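The extensible messaging framework rests on exchanging only character strings carrying self-describing XML, with each side interpreting the payload itself. A minimal sketch of such middleware-independent message handling (the element names and helper class are illustrative assumptions, not the project's actual schema):

```csharp
using System;
using System.Xml;

// Build and read an XML team message as plain strings, so the payload
// can cross any middleware that can carry a character string.
public static class TeamMessage
{
    public static string Build(string id, string name)
    {
        return "<team id=\"" + id + "\"><name>" + name + "</name></team>";
    }

    public static string ReadName(string payload)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(payload);
        return doc.DocumentElement.SelectSingleNode("name").InnerText;
    }
}
```

Because both sides parse the string themselves, no complex types ever need to be mapped to or registered with the middleware.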
Adaptable Middleware Pattern

Pattern Properties

Type: Behavioral
Level: Component/Architectural

Purpose

To introduce an abstraction layer that decouples the application from the middleware being used.

Introduction

Let's assume we have a distributed system. We would then probably decide to go with some form of middleware between applications. We might later decide to switch middlewares. We want to limit the changes necessary to switch between them. We would also like to limit any other efforts, such as testing, compiling, and deploying already completed systems.

Applicability

This pattern is very useful when distributed systems are using some sort of middleware. It is also applicable when the two communicating applications are built under different platforms.

Description

This pattern is broken up into two parts: separating the application logic from the Application Programming Interface (API), and separating the data interpretation from the middleware. To separate the API we define an interface between the target and host application. On both sides we build the business logic to bind to these interfaces. This implies that once the logic is built and tested, it does not need to change as long as the interfaces do not change. To separate the data from the communication medium we define a way to have our applications interpret the data independently of the transport; this is done by providing meta-data within our data messages.
We will only allow one type of message to be passed across the middleware, and that is a character string. This interface will adapt to any middleware of choice.

Implementation

Figure 13: Pattern for API Abstraction

As shown in Figure 13, each service will have an interface defined, and each version of the middleware will implement that interface, so the middleware integration is decoupled from the application. Secondly, the messaging infrastructure will be defined by passing a character string as the input parameter and returning a character string as the output. This way we can pass XML messages, and the interpretation of the messages will be done by our application, independent of the middleware. Figure 14 shows where the interpretation of messages can take place. If the interpretation is done independently of the middleware, then there is no need to re-do any mapping or defining of the types with the new middleware.

Benefits and Drawbacks

This will significantly reduce the overhead of switching to a new middleware infrastructure. It will also provide a way in which the application logic doesn't have to be re-tested and re-deployed. Only the code integrating the new API will have to be written and tested. Thirdly, this messaging infrastructure provides for a more extensible framework. The only drawback might be a loss of flexibility with respect to the services that specific middlewares might provide.

APPENDIX B

Source Code

Attached to this document is the entire source code of this demonstration application on a CD. It is a .NET solution with multiple projects containing all C# code.

VITA

Jason Mitchell has a Bachelor of Science degree in Computer Science from the University of North Florida, 2000. Jason expects to receive his Master of Science in Computer and Information Sciences from the University of North Florida in May 2003. Dr. Arturo Sanchez of the University of North Florida is serving as Jason's project advisor.
Jason is currently employed as a systems engineer at TNT Logistics and has been with the company for one year. Prior to that, Jason worked for ECI Telecom as a software engineer for two years. Jason has interests in software engineering, project management, and distributed systems. Jason has extensive experience in the J2EE and .NET platform frameworks in both the presentation and business tiers. Jason has also done extensive relational data modeling. Jason has been married for 9 months.
Optimizing Spectral Learning for Parsing

Shashi Narayan, Shay Cohen
School of Informatics, University of Edinburgh
ACL, August 2016

Probabilistic CFGs with Latent States (Matsuzaki et al., 2005; Prescher, 2005)

Latent states play the role of nonterminal subcategorization, e.g., \( NP \rightarrow \{NP^1, NP^2, \ldots, NP^{24}\}\), analogous to syntactic heads as in lexicalization (Charniak, 1997). They are not part of the observed data in the treebank.

Estimating PCFGs with Latent States (L-PCFGs)

EM Algorithm (Matsuzaki et al., 2005; Petrov et al., 2006)
(-) Problems with local maxima; it fails to provide certain types of theoretical guarantees, as it does not find the global maximum of the log-likelihood

Spectral Algorithms (Cohen et al., 2012, 2014)
(+) Statistically consistent algorithms that make use of spectral decomposition
(+) Much faster training than the EM algorithm
(-) Have lagged behind in their empirical results

Overview

Conventional approach: the number of latent states for each nonterminal in an L-PCFG is decided in isolation.

Contributions:
A. Parsing results significantly improve if the number of latent states for each nonterminal is globally optimized. Petrov et al. (2006) demonstrated that coarse-to-fine techniques that carefully select the number of latent states improve accuracy.
B. The optimized spectral method beats coarse-to-fine expectation-maximization techniques on 6 (Basque, Hebrew, Hungarian, Korean, Polish and Swedish) out of 8 SPMRL datasets.

Intuition behind the Spectral Algorithm

At a node \(VP\), the outside tree \(o\) and the inside tree \(t\) are conditionally independent given the label and the hidden state:

$$p(o, t \mid VP, h) = p(o \mid VP, h) \times p(t \mid VP, h)$$

The algorithm applies a singular value decomposition (SVD) of a cross-covariance matrix for each nonterminal.

Recent Advances in Spectral Estimation

- Method of moments (Cohen et al., 2012, 2014): averaging with SVD parameters, yielding dense estimates
- Clustering variants (Narayan and Cohen, 2015): sparse estimates

Standard Spectral Estimation and Number of Latent States

A natural way to choose the number of latent states is based on the number of non-zero singular values, so the number of latent states for each nonterminal can be decided in isolation. This conventional approach fails to take into account interactions between different nonterminals.

Optimizing Latent States for Various Nonterminals

Input:
- An input treebank divided into a training and a development set
- A basic spectral estimation algorithm \(S\) mapping each nonterminal to a fixed number of latent states, \( f_{\text{def}} : \{ S \rightarrow 24, \text{NNP} \rightarrow 24, \text{VP} \rightarrow 24, \text{DT} \rightarrow 24, \ldots \} \)

Output: \( f_{\text{opt}} : \{ S \rightarrow 40, \text{NNP} \rightarrow 81, \text{VP} \rightarrow 35, \text{DT} \rightarrow 4, \ldots \} \)

Algorithm in a nutshell:
- Iterate through the nonterminals, changing the number of latent states,
- estimate the grammar on the training set, and
- optimize the accuracy on the development set.

A beam search algorithm traverses the multidimensional vectors of latent states, optimizing their global interaction.

[Slide animation: starting from \( f_{\text{def}} \) with every nonterminal at 24 states, the search fixes one nonterminal at a time (e.g. 4, 37, \ldots) and, at each step \(t\), scores candidate values \(m_1, \ldots, m_N\) for the current nonterminal by their development-set \(F_1\).]

The clustering variant of spectral estimation leads to compact models and is relatively fast.

The SPMRL Dataset

8 morphologically rich languages: Basque, French, German, Hebrew, Hungarian, Korean, Polish and Swedish. Treebanks of varying sizes, from 5,000 sentences (Hebrew and Swedish) to 40,472 sentences (German).

Results on the Swedish dataset

Results on the dev set:

<table>
<thead>
<tr> <th>Method</th> <th>F-Measure</th> </tr>
</thead>
<tbody>
<tr> <td>berkeley (Petrov et al.'06)</td> <td>75.50</td> </tr>
<tr> <td>cluster (Narayan and Cohen'15)</td> <td>71.40</td> </tr>
<tr> <td>moments (Cohen et al.'13)</td> <td>73.40</td> </tr>
</tbody>
</table>

[Slide build: after optimization, the moments variant also reaches 75.50 on the dev set. Berkeley results follow Björkelund et al.'13.]

Results on the Swedish dataset

Final results on the test set <table>
<thead> <tr> <th>Method</th> <th>F-Measure</th> </tr> </thead> <tbody> <tr> <td>berkeley (Petrov et al.'06)</td> <td>80.60</td> </tr> <tr> <td>cluster, vanilla (Narayan and Cohen'15)</td> <td>76.40</td> </tr> <tr> <td>cluster, optimized</td> <td>79.40</td> </tr> <tr> <td>moments, vanilla (Cohen et al.'13)</td> <td>78.40</td> </tr> <tr> <td>moments, optimized</td> <td>80.90</td> </tr> </tbody> </table>

Final Results on the SPMRL Dataset

Berkeley results are taken from Björkelund et al., 2013.

Conclusion

- Spectral parsing results significantly improve if the number of latent states for each nonterminal is globally optimized.
- The optimized spectral algorithm beats the coarse-to-fine EM algorithm on 6 (Basque, Hebrew, Hungarian, Korean, Polish and Swedish) out of 8 SPMRL datasets.

The Rainbow parser and multilingual models: http://cohort.inf.ed.ac.uk/lpcfg/

Acknowledgments: Thanks to David McClosky, Eugene Charniak, DK Choe, Geoff Gordon, Djamé Seddah, Thomas Müller, Anders Björkelund and anonymous reviewers.

Inside Features used

Consider the VP node in the tree for "the dog saw the cat" (S dominating NP and VP, with VP \(\rightarrow\) V NP). The inside features consist of:
- The pairs \((\text{VP, V})\) and \((\text{VP, NP})\)
- The rule \(\text{VP} \rightarrow \text{V NP}\)
- The tree fragment \((\text{VP (V saw) NP})\)
- The tree fragment \((\text{VP V (NP D N)})\)
- The pair of the head part-of-speech tag with \(\text{VP}\): \((\text{VP, V})\)

Outside Features used

Consider the D node in the same tree. The outside features consist of:
- The pairs (D, NP) and (D, NP, VP)
- The pair of the head part-of-speech tag with D: (D, N)
- The tree fragments above the D node [shown in the slide]

Variants of Spectral Estimation

- **SVD variants:** singular value decomposition of empirical count matrices (cross-covariance matrices) to estimate grammar parameters (Cohen et al., 2012, 2014)
- **Convex EM variant:** an "anchor method" that identifies features that uniquely identify latent states (Cohen and Collins, 2014)
- **Clustering variant:** a simplified version of the SVD variant that clusters low-dimensional representations into latent states (Narayan and Cohen, 2015); intuitive to understand and very (computationally) efficient

Optimizing Latent States for Various Nonterminals

- **Initialization**: \((n_0, f_{\text{def}}, F_{\text{def}}) \rightarrow Q\)
  - \(n_0\): first nonterminal
  - \(f_{\text{def}}\): \(\{S \rightarrow 24, \text{NNP} \rightarrow 24, \text{VP} \rightarrow 24, \text{DT} \rightarrow 24, \ldots\}\)
  - \(F_{\text{def}}\) is the \(F_1\) score on the development set
- **Iteration**: \((n_i, f_i, F_i) \leftarrow Q\)
  - For each number of latent states \(l \in \{1, \ldots, m\}\):
    - define \(f_i'\) with \(f_i'(n_i) = l\) and \(f_i'(n) = f_i(n)\) for all other \(n\),
    - estimate a new \(F_i'\) score on the development set, and
    - push \((n_{i+1}, f_i', F_i')\)
- **Termination**: \((n_{\text{fin}+1}, f_{\text{opt}}, F_{\text{fin}}) \leftarrow Q\)
  - \(f_{\text{opt}}\): \(\{S \rightarrow 40, \text{NNP} \rightarrow 81, \text{VP} \rightarrow 35, \text{DT} \rightarrow 4, \ldots\}\)

We need a training algorithm which is relatively fast and leads to compact models.

## Final Results on the SPMRL Dataset

<table> <thead> <tr> <th>lang.</th> <th>Berkeley</th> <th>Spectral Cluster</th> <th>Spectral SVD</th> </tr> </thead> <tbody> <tr> <td>Basque</td> <td>74.7</td> <td><strong>81.4</strong></td> <td>80.5</td> </tr> <tr> <td>French</td> <td>80.4</td> <td>75.6</td> <td><strong>79.1</strong></td> </tr> <tr> <td>German</td> <td>78.3</td> <td>76.0</td> <td><strong>78.2</strong></td> </tr> <tr> <td>Hebrew</td> <td>87.0</td> <td>87.2</td> <td><strong>89.0</strong></td> </tr> <tr> <td>Hungarian</td> <td>85.2</td> <td>88.4</td> <td><strong>89.2</strong></td> </tr> <tr> <td>Korean</td> <td>78.6</td> <td>78.4</td> <td><strong>80.0</strong></td> </tr> <tr> <td>Polish</td> <td>86.8</td> <td>91.2</td> <td><strong>91.8</strong></td> </tr> <tr> <td>Swedish</td> <td>80.6</td> <td>79.4</td> <td><strong>80.9</strong></td> </tr> </tbody> </table>

Spectral Algorithm vs. Treebank Size

We break the common belief that spectral algorithms need more data.

<table> <thead> <tr> <th>lang.</th> <th colspan="2">Training data</th> </tr> <tr> <th></th> <th>Sent.</th> <th>Tokens</th> </tr> </thead> <tbody> <tr> <td>Basque</td> <td>7,577</td> <td>96,565</td> </tr> <tr> <td>French</td> <td>14,759</td> <td>443,113</td> </tr> <tr> <td>German</td> <td>40,472</td> <td>719,532</td> </tr> <tr> <td>Hebrew</td> <td>5,000</td> <td>128,065</td> </tr> <tr> <td>Hungarian</td> <td>8,146</td> <td>170,221</td> </tr> <tr> <td>Korean</td> <td>23,010</td> <td>301,800</td> </tr> <tr> <td>Polish</td> <td>6,578</td> <td>66,814</td> </tr> <tr> <td>Swedish</td> <td>5,000</td> <td>76,332</td> </tr> </tbody> </table>

Effect of Optimization on the Model Size

<table> <thead> <tr> <th>lang.</th> <th>$\sum_{nt} ls_{nt}$ before</th> <th>$\sum_{nt} ls_{nt}$ after</th> <th>#nt</th> </tr> </thead> <tbody> <tr> <td>Basque</td> <td>402</td> <td>646</td> <td>200</td> </tr> <tr> <td>French</td> <td>1984</td> <td>1994</td> <td>222</td> </tr> <tr> <td>German</td> <td>2288</td> <td>2213</td> <td>762</td> </tr> <tr> <td>Hebrew</td> <td>603</td> <td>986</td> <td>375</td> </tr> <tr> <td>Hungarian</td> <td>643</td> <td>676</td> <td>112</td> </tr> <tr> <td>Korean</td> <td>1295</td> <td>1200</td> <td>352</td> </tr> <tr> <td>Polish</td> <td>384</td> <td>491</td> <td>198</td> </tr> <tr> <td>Swedish</td> <td>276</td> <td>629</td> <td>148</td> </tr> </tbody> </table>

Multilingual Models for the Rainbow Parser

The Rainbow Parser (or RParser) is a phrase-structure syntactic parser developed at the University of Edinburgh by the informal research group Cohort. At its core is a latent-variable PCFG model, and its training procedure is based on spectral methods of learning. The parser is not publicly available yet.
However, if you are interested in using it for your research, contact Shay Cohen (scohen AT inf.ed.ac.uk) or Shashi Narayan (snaray2 AT inf.ed.ac.uk). The parser is described in the following paper:

@inproceedings{narayan-16b,
  title     = {Optimizing Spectral Learning for Parsing},
  author    = {Shashi Narayan and Shay B. Cohen},
  booktitle = {Proceedings of {ACL}},
  year      = {2016}
}

Below we include the table of results on the test sets from the SPMRL shared task to parse morphologically rich languages. For a legend, see the paper (Tables 2 and 3). <table> <thead> <tr> <th>Language</th> <th>CL van.</th> <th>CL opt.</th> <th>SP van.</th> <th>SP opt.</th> <th>Berkeley</th> </tr> </thead> <tbody> <tr> <td>Basque</td> <td>79.6</td> <td>81.4</td> <td>79.9</td> <td>80.5</td> <td>74.7</td> </tr> <tr> <td>French</td> <td>74.3</td> <td>75.6</td> <td>78.7</td> <td>79.1</td> <td>80.4</td> </tr> <tr> <td>German (NEGRA)</td> <td>76.4</td> <td>78.0</td> <td>78.4</td> <td>79.4</td> <td>80.1</td> </tr> <tr> <td>German (TiGeR)</td> <td>74.1</td> <td>76.0</td> <td>78.0</td> <td>78.2</td> <td>78.3</td> </tr> <tr> <td>Hebrew</td> <td>86.3</td> <td>87.2</td> <td>87.8</td> <td>89.0</td> <td>87.0</td> </tr> <tr> <td>Hungarian</td> <td>86.5</td> <td>88.4</td> <td>89.1</td> <td>89.2</td> <td>85.2</td> </tr> <tr> <td>Korean</td> <td>76.5</td> <td>78.4</td> <td>80.3</td> <td>80.0</td> <td>78.6</td> </tr> <tr> <td>Polish</td> <td>90.5</td> <td>91.2</td> <td>91.8</td> <td>91.8</td> <td>86.8</td> </tr> <tr> <td>Swedish</td> <td>76.4</td> <td>79.4</td> <td>78.4</td> <td>80.9</td> <td>80.6</td> </tr> </tbody> </table>
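The search procedure described above (initialize every nonterminal at a default number of latent states, then iteratively re-estimate and keep whatever improves development-set F1) can be sketched as follows. This is an illustrative greedy simplification, not the Rainbow parser's actual code, and `evaluate` is a toy stand-in for "estimate the grammar and score on the dev set":

```python
# Sketch of per-nonterminal latent-state optimization: iterate over the
# nonterminals, try different numbers of latent states for the current one
# while keeping the rest fixed, and keep the assignment that scores best.
def optimize_latent_states(nonterminals, candidates, evaluate, default=24):
    f = {nt: default for nt in nonterminals}   # f_def: every nt starts at 24
    best = evaluate(f)
    for nt in nonterminals:                    # traverse nonterminals in turn
        for m in candidates:                   # try m latent states for nt
            trial = dict(f, **{nt: m})
            score = evaluate(trial)
            if score > best:
                best, f = score, trial
    return f, best

# Toy objective: pretend dev F1 peaks at a hidden optimum per nonterminal.
optimum = {"S": 40, "NP": 8, "VP": 35}
def toy_f1(f):
    return -sum(abs(f[nt] - optimum[nt]) for nt in f)

f_opt, score = optimize_latent_states(["S", "NP", "VP"], range(1, 50), toy_f1)
print(f_opt)  # {'S': 40, 'NP': 8, 'VP': 35}
```

The paper's beam search generalizes this by keeping several candidate assignments alive at each step rather than a single greedy one.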
D5.4 Final KRAKEN marketplace integrated architecture document

<table>
<thead>
<tr> <th>Grant agreement</th> <th>871473</th> </tr>
</thead>
<tbody>
<tr> <td>Work Package Leader</td> <td>Lynkeus</td> </tr>
<tr> <td>Author(s)</td> <td>Davide Zaccagnini, Minos Garofalakis, Alexandros Tragkas (LYNKEUS)</td> </tr>
<tr> <td>Contributors</td> <td>Rob Holmes (TEX), Donato Pellegrino (TEX), Tilen Marc (XLAB), George Pikramenos (LYN), Alex Tragkas (LYN)</td> </tr>
<tr> <td>Reviewer(s)</td> <td>Stephan Krenn (AIT), Javier Presa (ATOS)</td> </tr>
<tr> <td>Version</td> <td>Final</td> </tr>
<tr> <td>Due Date</td> <td>31/12/2021</td> </tr>
<tr> <td>Submission Date</td> <td>03/02/2022</td> </tr>
<tr> <td>Dissemination Level</td> <td>Public</td> </tr>
</tbody>
</table>

Copyright © KRAKEN consortium. This document cannot be copied or reproduced, in whole or in part for any purpose without express attribution to the KRAKEN project.

# Release History

<table>
<thead>
<tr> <th>Version</th> <th>Date</th> <th>Description</th> <th>Released by</th> </tr>
</thead>
<tbody>
<tr> <td>v0.1</td> <td>01/03/2022</td> <td>Initial version</td> <td>Davide Zaccagnini</td> </tr>
<tr> <td>v0.2</td> <td>10/01/2022</td> <td>Contributions from Davide Porro (ICERT), Tilen Marc (AIT). Donato Pellegrino and Rob Holmes (TEX)</td> <td>Davide Zaccagnini</td> </tr>
<tr> <td>V0.3</td> <td>17/01/2022</td> <td>Reviews by Stephan Krenn (AIT) and Javier Presa (ATOS)</td> <td>Davide Zaccagnini</td> </tr>
<tr> <td>V0.4</td> <td>25/01/2022</td> <td>Version addressing reviewers comments</td> <td>Davide Zaccagnini</td> </tr>
<tr> <td>V0.5</td> <td>02/02/2022</td> <td>Format changes</td> <td>Davide Zaccagnini</td> </tr>
<tr> <td>V1.0</td> <td>03/02/2022</td> <td>Submitted version</td> <td>ATOS</td> </tr>
</tbody>
</table>

# Table of Contents

- List of Figures
- List of Acronyms
- Executive Summary
- 1 Introduction
  - 1.1 Purpose of the document
  - 1.2 Structure of the document
- 2 The KRAKEN Data marketplace (From D5.3)
- 3 Extended data permissioning
  - 3.1 Data provenance via blockchain
  - 3.2 Coin staking for data quality control
  - 3.3 Institutional credential management system
  - 3.4 Depute tool
  - 3.5 Company Identification Tool
- 4 Integrated Secure Multi-Party Computation
  - 4.1 Pay for computation
  - 4.2 SMPC internal architecture
  - 4.3 System architecture
- 5 KRAKEN marketplace mobile application
- 6 Conclusion

# List of Figures

- Figure 1: The marketplace architecture
- Figure 2: Data Unions and Data Products IDs
- Figure 3: Depute Tool internal and external components
- Figure 4: internal and external components
- Figure 5: Internal SMPC architecture
- Figure 6: The marketplace’s integrated SMPC system architecture.
Blue: publishing process; Red: purchase and payment process

## List of Acronyms

<table>
<thead>
<tr> <th>Acronym</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>DID</td> <td>Decentralized IDentifiers</td> </tr>
<tr> <td>DT</td> <td>Depute Tool</td> </tr>
<tr> <td>GDPR</td> <td>General Data Protection Regulation</td> </tr>
<tr> <td>ID</td> <td>Identifier</td> </tr>
<tr> <td>KCIT</td> <td>KRAKEN Company Identification Tool</td> </tr>
<tr> <td>PPA</td> <td>Privacy Preserving Analytics</td> </tr>
<tr> <td>SSI</td> <td>Self-Sovereign Identity</td> </tr>
<tr> <td>SMPC</td> <td>Secure Multi Party Computation</td> </tr>
<tr> <td>SW</td> <td>Software</td> </tr>
<tr> <td>VC</td> <td>Verifiable Credential</td> </tr>
<tr> <td>WP</td> <td>Work Package</td> </tr>
</tbody>
</table>

# Executive Summary

Entering the final period of the project, the KRAKEN Marketplace architecture is now finalized in all its components. The intermediate architecture was described in D5.3, which focused primarily on the integration of the SSI components, the design of the front-end and back-end, and the integration with the educational data infrastructure. This document, on the other hand, focuses on the extensions designed subsequently, namely the mobile apps, the infrastructure to assign and control institutional affiliation for individual users, and the integration with the SMPC components. Based on feedback received on previous iterations of the architecture and during the periodic review, the updated architectural scope now includes functionalities aimed at reducing the risk of fraudulent use of the marketplace, especially in the areas of forged or otherwise tampered data products offered for sale by malicious actors. In that sense, the permissioning system running on the Lynkeus blockchain was extended to include data provenance tracking functionalities, also supporting staking (escrow) against the quality of a data product.
A challenging architectural decision was resolved after intense and protracted discussions with partners regarding the implementation of two apps, instead of an integrated one encompassing both the SSI and the marketplace functionalities. While the resulting choice may slightly decrease the overall usability of the system, the separation better guarantees the security of the authentication process, leaving the SSI app as a general-purpose identity management module.

A major focus of recent design work, which led, in our view, to a highly innovative solution, is the integration of the SMPC with the marketplace front-end and back-end. The resulting architecture now defines an end-to-end secure, pay-for-computation system which, leveraging the underlying permissioning layer, orchestrates distributed computations in conjunction with a payment system powered by the Streamr DATA token. To our knowledge this is the first real-world implementation of a system that directly ties the extent of data access to the economic value of the information that is gathered from it, using a token-based payment system. In this regard the KRAKEN Marketplace realizes a working implementation in which privacy, the value of information and its actual price are all technically and operationally connected.

# 1 Introduction

## 1.1 Purpose of the document

This is a WP5 deliverable which defines the final architecture of the KRAKEN Marketplace. It is meant to serve as the description of how the marketplace has been designed in all its integrated components and of the reasons for specific architectural choices. The document builds on top of deliverable D5.3\(^2\), which describes the intermediate marketplace architecture, focusing only on the additional integrated components that were not, or not fully, covered in that previous text.
Beyond reporting purposes, the document will serve as a reference for development activities currently under way and, for the remaining part of the project, will also guide activities in WP3 and WP4.

## 1.2 Structure of the document

The document refers as needed to D5.3 for already documented architectural designs. It is therefore structured in 3 main sections focusing respectively on:

1. The extended permissioning layer, including both the data provenance tracking system and institutional affiliation credentials management (Section 3)
2. The SMPC and pay-for-computation systems (Section 4)
3. The marketplace mobile app architecture (Section 5)

To provide background and sufficient references to the modules and integrated architectures underpinning this final version, the first part of the document presents sections of D5.3.

\(^2\) KRAKEN Consortium: D5.3 Initial KRAKEN marketplace integrated architecture document

# 2 The KRAKEN Data marketplace (From D5.3)

The KRAKEN marketplace architecture consists of three main functional areas, as indicated in the diagram below. These are:

1. The permissioning layer, where data access is controlled leveraging the Lynkeus Hyperledger Fabric blockchain
2. The data access layer, providing multiple infrastructures and methods allowing secure and private access to data products, including the SMPC system, the TEX streaming data infrastructure and the batch data exchange system developed in the first period of the project.
3. The transaction management layer, featuring technologies supporting user workflows, payments and fulfillment, mostly leveraging the TEX/Streamr marketplace.

These three layers are functionally integrated to first grant or deny data access based on the legally binding rights, then provide such access in three different modalities (SMPC, batch or streaming), and then monitor the fulfillment of all key transaction steps.

Figure 1: The KRAKEN Technology Stack

From a functional perspective (see below) the Marketplace API, i.e.
the back-end, connects both the desktop and the mobile apps to all other components. In particular, SSI identities are passed to the data access layer on which permissions are computed. Positive access decisions are passed through the API module to the data access layer and then to the xDai payment and fulfillment system.

Figure 1: The marketplace architecture

This architecture is the result of parallel and iterative design efforts to link multiple modules and infrastructures, each at a different stage of technological maturity. These include the Streamr marketplace, the Lynkeus data access layer, the Self-Sovereign Identity system, the Verified Credentials infrastructure and the data protection layer, which is itself composed of the SMPC system and ad-hoc data protection modules (e.g. batch data encryption). The guiding principle of this design, and indeed of the project itself, is to implement true decentralisation throughout the marketplace and the overall platform while at the same time providing the highest level of privacy protection for its users and the data they will exchange. Compliance with national and European privacy laws has been, in this view, a key concern in the development of this integrated system. In strict conjunction with WP7, intermediate designs and implemented components, with their integrations, were systematically reviewed from a legal and ethical standpoint following a privacy-by-design approach. This work is still ongoing as new modules and UI extensions are added, with the final aim of automating the enforcement, by the platform itself, of the legally and ethically binding terms users can set for the temporary access to and processing of personal data for predefined purposes.

From a functional standpoint the architecture is divided into three areas:

4.
Permission management, mostly implemented by the Lynkeus Hyperledger Fabric blockchain in conjunction with the SSI system for the identification, authentication and credentialing of both individual and organisational users.

5. Data protection layer, which implements a variety of data security and privacy-preserving modules and includes the Secure Multi Party Computation system, employed both for distributed data analytics and as an encryption key sharing mechanism, in addition to standard data protection functionalities such as encryption at rest and in transit for batch and streaming data assets.

6. Data transaction management, mostly implemented through the Streamr marketplace technology, which provides both user-facing and back-end functionalities, such as UIs, payment execution and control, secure transfers of streaming data, data product visualization and more.

# 3 Extended data permissioning

The KRAKEN marketplace will operate in a global and highly dynamic data ecosystem in which the scope of the functionalities developed in the first period, i.e. internal operations, will not suffice. Feedback received on the initial design indeed highlighted the need for outward-focused designs, which has been taken into close consideration by the developers, leading to an expansion of scope. The resulting work has produced new functionalities aimed at preventing fraudulent behaviour by future marketplace users, especially data sellers, and at guaranteeing the quality of data products. All key dimensions of data transactions were analysed, i.e. privacy, the quality of the information exchanged and its expected price. In all these areas the blockchain-based permissioning system plays a key role, and substantial extensions were therefore implemented at that level.
In particular, the Lynkeus blockchain was extended: a) to incorporate data provenance parameters to track the entire life cycle of a data product, including aggregated forms of the product derived from Data Unions or other data mergers, b) to support the staking mechanism, by which the quality of a data product is enforced by strongly disincentivizing fraudulent behaviour in the face of severe economic repercussions (see 3.2), and c) to support the pay-for-computation system by linking, via permissions, the execution of distributed computations (SMPC) to the payment system.

### 3.1 Data provenance via blockchain

The system allows both platform administrators and users to track different versions of the same data set as it moves from its first origination on the marketplace by an individual or institution, to value-add processing such as curation or its integration with other datasets. Specifically, each Data Product is identified with a respective ID at the time it is first published on the ledger. At that moment, the number of tokens staked to guarantee the quality of the data is set and also captured on the blockchain as part of the transaction status parameters. As the buyer approves the data product, the transaction is updated and the blockchain records the event as a proof of data quality. Other events are captured using the same principle. For instance, as curated products are derived from the base product, the product lineage is updated and can be tracked and viewed over time. The blockchain, in this way, will also store Data Union products which combine multiple individual data sets. Each Data Union ID is made of the individual product IDs, as seen in the figure below. Data provenance is a foundational feature of a fully functional data quality control system, and the blockchain in this sense offers an immutable, public and tamper-proof ledger which lends itself well to the purpose.
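As a minimal sketch of these two ideas (an append-only lineage record per product ID, and a Data Union ID derived from its member product IDs), consider the following; all names and the hashing scheme are illustrative assumptions, not the actual ledger format:

```python
import hashlib

def data_union_id(member_ids):
    """Derive a deterministic Data Union ID from its member product IDs.

    Hypothetical scheme: sort the member IDs so the union ID does not
    depend on publication order, then hash the concatenation."""
    joined = "|".join(sorted(member_ids))
    return hashlib.sha256(joined.encode()).hexdigest()

class ProvenanceLedger:
    """Append-only record of lifecycle events per data product (sketch only;
    the real system records these events on the Lynkeus blockchain)."""
    def __init__(self):
        self.events = {}  # product_id -> list of (event, details)

    def record(self, product_id, event, details=""):
        self.events.setdefault(product_id, []).append((event, details))

    def lineage(self, product_id):
        return self.events.get(product_id, [])

ledger = ProvenanceLedger()
ledger.record("prod-001", "published", "staked 500 DATA")
ledger.record("prod-001", "buyer_approved")
union = data_union_id(["prod-001", "prod-002"])
ledger.record(union, "union_created", "members: prod-001, prod-002")
```

Because the union ID is computed over the sorted member IDs, the same set of products always yields the same Data Union ID, mirroring the composition shown in Figure 2.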
Other key instruments to enforce data quality are direct incentives and deterrents applied in the context of client-provider relationships. Two additional mechanisms were envisioned in this sense: the staking (escrow) system for data products and the checking of institutional affiliations. The first acts as a direct deterrent to offering poor-quality or forged data on the marketplace. The second prevents the sale of institutional data outside of organizational control.

### 3.2 Coin staking for data quality control

Staking involves depositing in a separate account a substantial amount of value, in the form of DATA tokens, which is released back to the seller only after the buyer attests to the quality of the data product. This solution addresses two key challenges: the difficulty of automating data quality control with either statistical or qualitative measures, and the related difficulty of distinguishing between data sets that may share multiple characteristics and yet may be legitimately posted on the marketplace as different products by different sellers. After extensive research by the consortium team on measures to automatically assess quality parameters in unseen datasets, the conclusion was reached that none of the existing methods by themselves offered sufficient reliability and scalability. Data utility, on one hand, is strictly dependent on the intended use of the data, i.e. the same set may be extremely valuable in one specific use case and of very little value in a slightly different one. Minor differences also play major roles in value assessments by data users: for instance, the presence of a single clinical variable may dramatically increase the value of a research data set, all else being equal across similar data products.
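The staking lifecycle described in this section (tokens locked at publication, released to the seller on buyer approval, forfeited if a quality claim is upheld) can be sketched as a small state machine. This is an illustrative sketch under those assumptions, not the actual escrow implementation:

```python
class StakeEscrow:
    """Sketch of the staking (escrow) lifecycle for one data product.

    States are hypothetical labels: LOCKED at publication, RELEASED when
    the buyer approves the product, FORFEITED when a claim is upheld."""
    def __init__(self, seller, staked_tokens):
        self.seller = seller
        self.staked = staked_tokens
        self.state = "LOCKED"

    def buyer_approves(self):
        if self.state != "LOCKED":
            raise ValueError("stake is not locked")
        self.state = "RELEASED"
        return (self.seller, self.staked)  # tokens go back to the seller

    def claim_upheld(self):
        if self.state != "LOCKED":
            raise ValueError("stake is not locked")
        self.state = "FORFEITED"
        return self.staked  # tokens are withheld from the seller
```

The deterrent effect comes from the LOCKED state being the default: the seller only recovers the stake through an explicit approval event.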
In the absence of reliable methods for assessing data quality and utility in absolute terms, the solution was identified in the staking process, which demands a substantial allocation of DATA tokens at the time of creation of the data product. As described in D2.7\(^3\), the tokens are relinquished back to the seller only after the buyer approves the data product for use. Potential claims by buyers will be reviewed by platform owners and adjudicated after reviewing the data sets in question. While this mechanism requires potentially cumbersome manual review processes, by setting very high stakes for products it acts as a powerful deterrent which will minimize future claims. Limitations of this approach include the high specificity of some data sets and therefore the availability of the expertise required to assess their quality, which may not be readily available to platform owners. Malicious players acting as buyers may also attempt denial-of-service attacks through high volumes of data quality claims. These issues will be actively investigated in the remaining part of the project, also in conjunction with the SSI and Crypto teams in WP3 and WP4 respectively.

---

\(^3\) KRAKEN Consortium: KRAKEN D2.7 Design for marketplace reference implementations

### 3.3 Institutional credential management system

In the following section the terms “institution”, “company” and “legal entity” are used as synonyms, “natural person” is used to denote a physical person, “KRAKEN platform” is used to denote the KRAKEN marketplace platform as a whole, and “KRAKEN marketplace” or “KRAKEN web site” denote the pure Marketplace website SW component of the KRAKEN platform (see D2.3, par 4.5.1.6). The KRAKEN marketplace requires that a user, in order to operate on it, has been registered; moreover, it does not manage logins from legal entities but, on the contrary, only logins from natural persons (for details, see “User registration process”, D2.3, par 3.4).
Lastly, a natural person logged in to the KRAKEN marketplace can operate directly for herself or on behalf of a company, as in the case of an employee who operates on behalf of a hospital in the health pilot. In the case of a natural person acting on behalf of a company, the KRAKEN platform requires that the person has previously been duly authorized by a legal representative of the company, and that the company itself has been the object of an identification process by KRAKEN. Two SW tools are developed in KRAKEN to verify that a marketplace user who claims to act on behalf of an organization is authorized to do so: the KRAKEN Company Identification Tool (KCIT) and the Depute Tool. KCIT supports the company identification process so that a subsequent registration by a natural person within the KRAKEN marketplace, who claims to represent that organization, can be associated with the legal entity itself; this tool is deployed as a single instance in the KRAKEN platform. It is useful to highlight here that the version of KCIT released in KRAKEN supports the “Self-Registration” level of assurance, a level evaluated as sufficient for the requirements of the KRAKEN marketplace. The user authorization process to operate on behalf of a company is supported by the Depute Tool; a single instance of the tool will be provided to every company that needs to authorize its employees to operate on behalf of the company itself. The process of issuing the company’s attorney authorization to the employee is managed entirely within the Depute Tool, without using the KRAKEN web site.
Technically, the Depute Tool takes advantage of the existing SSI infrastructure of the KRAKEN platform, but in a way that is completely transparent to the users: even though the tool internally uses SSI verifiable credentials (VCs) to represent the company attorney authorization (the Attorney VCs), the company representative’s action on the Depute Tool’s user interface is to authorize a company employee, not to issue a VC belonging to a specific VC schema. After the authorization, the Attorney VC produced by the Depute Tool, like every VC, is stored inside the SSI wallet of the authorized user and will be required by the KRAKEN marketplace when the user accesses the KRAKEN marketplace web site. It is important to highlight here that the usage of the SSI features in this applicative context provides significant value to the entire process: the authorization of the user in the form of a VC ensures full control by the company over the authorization itself, as happens with the user’s VC login for the GDPR’s right to be forgotten, because the Attorney VC can be revoked at any moment by the company, a revocation that deactivates the user’s authorization in “real time”.

---

\(^4\) KRAKEN Consortium: D2.3 Final KRAKEN architecture.

\(^5\) Ibid.

### 3.4 Depute tool

As described above, the Depute Tool is the tool used by a company to authorize its employees on behalf of the company. This section describes the components of the Depute Tool.

![Depute Tool internal and external components](image)

**Figure 3: Depute Tool internal and external components**

The Depute Tool’s internal components are:

- the Depute Tool_WebSite, a web front end implemented in Angular 13,
- the Depute Tool_ExpressWebServer, an HTTP proxy used to protect the restful API of the go rest agent,
- the Go-Rest_agent, an open-source Hyperledger Aries Go-Rest-Agent deployed without any customization,
- KeyCloak, an open-source software tool implementing user authentication based on OpenID Connect.
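The issue/revoke flow described above can be sketched as follows. The field names and registry object are illustrative assumptions, not the actual Attorney VC schema or the KRAKEN revocation registry API; the only detail taken from the document is that the VC carries the company's DID as issuer and that revocation immediately invalidates the authorization:

```python
class AttorneyVCRegistry:
    """Sketch of the Attorney VC flow: a company issues an authorization
    credential to an employee and can revoke it at any moment, which
    immediately invalidates marketplace access for that employee."""
    def __init__(self):
        self.revoked = set()  # stand-in for the revocation registry

    def issue(self, company_did, employee_did):
        # The "issuer" field carries the public DID of the company.
        return {"issuer": company_did, "subject": employee_did,
                "type": "AttorneyVC", "id": f"{company_did}:{employee_did}"}

    def revoke(self, vc):
        self.revoked.add(vc["id"])

    def is_valid(self, vc):
        # The marketplace would check this at every login.
        return vc["id"] not in self.revoked
```

The point of the sketch is the asymmetry: the VC lives in the employee's wallet, but validity is decided by the company-controlled registry, so revocation takes effect without touching the wallet.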
The KRAKEN revocation and endorsement registry is external to the Depute Tool components. The Depute Tool uses the KRAKEN revocation and endorsement registry when a company representative revokes the authorization of an employee. This feature is implemented by revoking the Attorney VC that represents the authorization.

### 3.5 Company Identification Tool

As described above, the KCIT is the tool used by the KRAKEN platform to identify a company. This section describes the components of the KCIT.

![Diagram of KCIT components](image)

**Figure 4: internal and external components**

The KCIT internal components are:

- the KCIT_WebSite, a web front end implemented in Angular 13 that allows an admin to manage the configurations defined in the KCIT,
- the KCIT_ExpressWebServer, an HTTP proxy used to protect access to the restful APIs implemented by the KCIT,
- KeyCloak, an open-source software tool implementing user authentication based on OpenID Connect,
- KCITdatabase, the container of the configuration info of the identified companies.

The only external component that uses the KCIT is the KRAKEN marketplace. As described above, during the user registration phase on the KRAKEN marketplace, the tool permits the KRAKEN marketplace to verify that a user who is claiming to represent a specific organization is providing an authorization issued by that organization. This check also takes advantage of features provided by the SSI: since the authorization is implemented using an Attorney VC issued by the company through its Depute Tool, the “issuer” field of the Attorney VC contains the public DID of the issuing company. The KCIT permits the KRAKEN marketplace to know which companies have already been identified and what their public DIDs are.

**KCIT_Api**

A CRUD (Create, Read, Update, Delete) Restful API fully accessible only by a KRAKEN admin user using the KCIT web site.
It is also used by company users to add their company, and by the KRAKEN marketplace to list the already identified companies and to verify the issuer of an Attorney VC.

# 4 Integrated Secure Multi-Party Computation

## 4.1 Pay for computation

While apparently simple in its formulation, the pay-for-computation system is one of the main accomplishments of the KRAKEN marketplace architecture. To our knowledge this is the first real-world implementation that brings together data access permissioning, privacy-preserving distributed computations, and token-based payment systems. The importance of this integration stems from its ability to answer, over time, the fundamental questions of pricing information across use cases, types of users and data life cycles. Extensive research into the problem of data valuation led our team to the conclusion that existing methods such as the Shapley value\(^6\), while theoretically exhaustive, were not applicable to KRAKEN, or to any real-world implementation, because of their impractical computational demands. For this reason, a more pragmatic approach was designed in which the balance between data value, price and privacy metrics would be reached over time based on market dynamics, once all the needed information is given to market players, in keeping with classic economic principles. To that end, the architecture now provides all the functionalities required to efficiently arrive at that balance within short time frames after a data product is published. Specifically, the data provenance and staking mechanisms will enforce basic quality assurance and thus grounded value assessments by buyers. On top of this, the permissioning layer will provide business intelligence information to assess demand for certain data products, by certain types of users, for certain use cases.
It is interesting to note here how these assessments sit at the intersection between privacy constraints (informed consent and intended uses, according to the GDPR) and data pricing. Finally, the data provenance systems will allow sellers to study specific drivers of demand for their products in addition to price, such as added-value services that curate or aggregate the data, creating more valuable offerings.

## 4.2 SMPC internal architecture

The SMPC framework allows us to evaluate computations (functions) on data without revealing the data itself. In particular, this is achieved by splitting the data into shares such that, without knowing enough of them, no information about the data can be revealed. The shares are distributed among SMPC nodes (servers participating in the SMPC network), so that they can interactively compute a function on the data without knowing the data or the result themselves. The (shares of the) results are delivered to the buyer of a computation, who can merge them into the final result. It is crucial to note that with such a component the KRAKEN marketplace can offer data analytics without any access to the data being processed and hence secures the privacy of the users by design. The above SMPC architecture was proposed and described in D2.2\(^7\) and D2.3\(^8\). The technical details about the cryptographic protocols, implementation choices and the cryptographic libraries used can be found in D2.4\(^9\) and D2.5\(^10\), under development. Moreover, reports on the performance of the cryptographic system as well as a description of the APIs to interact with it can be found in D4.3\(^11\) Prototype implementation of cryptographic libraries. In this deliverable we focus on the integration details of SMPC with the KRAKEN marketplace.
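The split-and-merge idea can be illustrated with a minimal additive secret-sharing sketch. This is a toy illustration of the principle only; the actual KRAKEN protocols and libraries are those specified in D2.4/D2.5, and the modulus and function names here are assumptions:

```python
import secrets

PRIME = 2**61 - 1  # field modulus; any prime larger than the data values works

def split(value, n):
    """Split an integer into n additive shares that sum to value mod PRIME.
    Any n-1 of the shares are uniformly random, so they reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def merge(shares):
    """Reconstruct the secret from all of its shares."""
    return sum(shares) % PRIME

# Additive sharing is linear: each node can add its shares of two secrets
# locally, and the merged result equals the sum of the secrets.
a, b = 42, 100
sa, sb = split(a, 3), split(b, 3)
local_sums = [x + y for x, y in zip(sa, sb)]
assert merge(local_sums) == a + b
```

The linearity shown at the end is what lets the SMPC nodes compute on the data interactively without ever holding the data itself; only the party that gathers all result shares can merge them into the final value.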
---

\(^{6}\) [Shapley value - Wikipedia](https://en.wikipedia.org/wiki/Shapley_value)

\(^{7}\) KRAKEN Consortium: D2.2 Intermediate KRAKEN architecture

\(^{8}\) KRAKEN Consortium: D2.3 Final KRAKEN architecture

\(^{9}\) KRAKEN Consortium: D2.4 KRAKEN Intermediate technical design

\(^{10}\) KRAKEN Consortium: D2.5 KRAKEN final technical design

\(^{11}\) KRAKEN Consortium: D4.3 Prototype implementation of cryptographic libraries

To enable privacy-preserving analytics (PPA) with SMPC, the KRAKEN marketplace needs to integrate the following functionalities into its back-end and front-end:

- **Frontend:**
  - Publishing a data product for analytics: To offer data for PPA, a data owner needs to be able to split his/her data into shares and upload them to an external storage in an encrypted form that can be accessed only by the SMPC nodes. In particular, this task cannot be outsourced to the KRAKEN backend or any external service, since it needs direct access to the dataset. Hence the KRAKEN frontend provides a functionality that allows a user to load the dataset (locally) in his/her web browser during the process of publication, and then split and encrypt the dataset using the public keys of the SMPC nodes. This is implemented using WebAssembly, which allows running complex programs (in our case implemented in Go) directly in a browser. The marketplace's backend receives only a link to the location of the encrypted data, which it cannot access.
  - Buying a computation: Buyers, presented with multiple choices of functions and datasets, can request a computation on one or multiple datasets registered on the KRAKEN marketplace for PPA. Since the data itself as well as the results are split into shares, the buyer needs to be able to merge the shares. Similarly to the above, the KRAKEN frontend provides WebAssembly-based functionalities that allow the user to securely receive the shares from the SMPC nodes and merge them together into a standard format such as a CSV file.
- **Backend:** The KRAKEN marketplace backend serves solely as an intermediary between users and the decentralized SMPC nodes, with no rights to access the data or results. SMPC nodes provide an API using secured WebSockets to receive computation requests. Hence the backend needs to forward users' requests to all the nodes to start a cryptographic protocol. In particular, information about the computation, links to the datasets, as well as information about the data buyer (including its public key) need to be delivered. Furthermore, the requests are recorded and checked by the SMPC nodes on a KRAKEN blockchain, preventing privacy-violating behaviour. For technical details we again refer the reader to the aforementioned deliverables.

## 4.3 System architecture

The integrated SMPC system architecture allows users of the marketplace to perform privacy-preserving analytics on single or multiple Data Products that are listed within the marketplace data catalogue. A data provider who is concerned about the security and privacy of their data assets can create a Data Product that is only available for analytics, and receive payment in the form of the Streamr DATA token every time a data user performs a computation that involves their Data Product. In these data transactions the marketplace acts only as an intermediary between data provider and data consumer. Content data from the Data Products are never stored by the marketplace. Instead, content data that is made discoverable for analysis via the marketplace data catalogue is encrypted and split into secret shares in the user's frontend environment and then stored on their cloud storage of choice.
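The client-side publishing step (split locally, encrypt per node, hand the backend only a storage link) can be sketched as follows. In KRAKEN this runs as Go compiled to WebAssembly in the browser; here it is a Python sketch, and `encrypt_for_node` and `upload` are hypothetical stand-ins for the real per-node public-key encryption and cloud-storage calls:

```python
import json
import secrets

PRIME = 2**61 - 1  # illustrative field modulus for additive sharing

def share_row(value, n_nodes):
    """Split one integer value into one additive share per SMPC node."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_nodes - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def publish(dataset, n_nodes, encrypt_for_node, upload):
    """Sketch of client-side publishing: the dataset is shared and encrypted
    entirely in the user's environment; the backend only ever receives the
    storage link returned by `upload`, never the plaintext."""
    per_node = [[] for _ in range(n_nodes)]
    for value in dataset:
        for i, share in enumerate(share_row(value, n_nodes)):
            per_node[i].append(share)
    blobs = [encrypt_for_node(i, json.dumps(rows))
             for i, rows in enumerate(per_node)]
    return upload(blobs)  # link to encrypted shares; plaintext stays local
```

The key property is that `publish` never sends `dataset` anywhere: each node's blob is one share column, useless on its own, and only the link travels to the backend.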
A Data Product's secret shares can only be downloaded by the nodes in the SMPC network, for computing the analytics on behalf of the data user, after two important steps have been verified by the marketplace: 1) the user who wants to perform analytics has been confirmed as eligible to access the Data Product by the Lynkeus blockchain; 2) a payment notification has been received by the Marketplace from the xDai blockchain. In the integrated SMPC system architecture, which is shown in D2.7, Section 2.4.2 and repeated here for reference in Figure 6 below, the SMPC Network interfaces with the Marketplace Backend API. When a data provider using the Marketplace Frontend publishes a Data Product, the Marketplace Frontend sends the Data Product's associated metadata to the Marketplace Backend API. This includes the Data Product's descriptive information, policies and cloud storage link. The Marketplace Backend stores this information and also sends the Data Product's policies to be recorded on the Lynkeus blockchain. This step is what allows the marketplace to verify that a user is able to perform analytics on the Data Product when a request for analytics is received. On receiving requests from users to perform analytics computations on the data via the Marketplace Frontend, the Marketplace Backend API checks with the Lynkeus Blockchain that the users are eligible. If the data user is confirmed as eligible, they are able to use the Marketplace Frontend to process a payment to the data provider using the Streamr DATA token on the xDai blockchain. If the payment has been successfully transferred to the corresponding Data Product owners, a notification is sent to the Marketplace Backend API by the xDai blockchain confirming the payment. Upon receipt of the payment notification from the xDai blockchain, the Marketplace Backend API communicates with the integrated SMPC Network to trigger the download of the encrypted secret shares from the data providers' cloud storage.
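The gatekeeping sequence above (eligibility on the Lynkeus blockchain, then payment confirmation from xDai, and only then the trigger to the SMPC nodes) can be sketched as a single backend function. All collaborator objects and method names are hypothetical stand-ins for the real integrations:

```python
def handle_analytics_request(user, product, blockchain, payments, smpc_nodes):
    """Sketch of the Marketplace Backend API gatekeeping sequence:
    1) confirm eligibility against the permissioning blockchain,
    2) confirm the DATA token payment notification,
    3) only then trigger the computation on all SMPC nodes."""
    if not blockchain.is_eligible(user, product):
        return "denied: not eligible"
    if not payments.confirmed(user, product):
        return "denied: payment not confirmed"
    for node in smpc_nodes:
        node.start_computation(user, product)
    return "computation started"
```

The ordering matters: the SMPC nodes are never contacted, and hence the shares are never downloaded, unless both checks have already passed.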
As discussed earlier, this could be data from a single data provider or Data Product, or it could be data from multiple data providers or Data Products. The SMPC Network finally computes the analytics and returns the results to the Marketplace Backend API. These results are encrypted specifically for the user requesting the analytics. Once the results are received by the Marketplace Backend API, the user requesting the analytics uses the Marketplace Frontend to download and decrypt the results in a CSV file format.

Figure 6: The marketplace’s integrated SMPC system architecture. Blue: publishing process; Red: purchase and payment process.

5 KRAKEN marketplace mobile application

As discussed above, after intense and protracted discussion within the consortium, a decision was made to implement two separate mobile applications serving separate and minimally overlapping workflows. In particular, the SSI application is dedicated exclusively to authenticating users, while the marketplace app actually manages data products, under the assumption that institutional users will deal with data sets on behalf of their employers mostly on the desktop version of the marketplace after logging in through the SSI app. The KRAKEN marketplace application instead will be mostly dedicated to individual users who will manage fewer and simpler personal data products, mostly from the mobile environment. In this view, both types of users will be authenticated with the SSI app, and from that moment on individual users will not need to use it again, adopting only the marketplace app on a regular basis. In this sense the design, while not optimal, creates minimal negative effects on overall system usability, and the separation better guarantees the security of the authentication process, leaving the SSI app as a general-purpose identity management module.
The KRAKEN marketplace application connects the marketplace to its mobile environment to allow users to quickly browse data products, change permissions and availability of their own data products, and see how well their data products are performing on the market. In order to connect to the marketplace application, the user first needs to scan a QR code to retrieve a token which will authenticate him/her. Such a QR code is made available to the user after logging in to the browser marketplace application. Once connected, information is made available to the user through the backend RESTful API and http/https calls. This enables retrieving information from the marketplace. Finally, the marketplace application supports offline signing of requests to enable the editing of entries on the blockchain, e.g. to change permissions or availability. That is, requests are directly signed on the application and are then sent to the backend. In this way, the expectedly frequent usage of the app, i.e. the management of data access parameters by sellers of personal data, is fully supported directly on the app, with no additional authentication required.

6 Conclusion

The final architecture of the KRAKEN marketplace defines the integration of all necessary components to realize a fully functional system which can be deployed in a real-world setting. It supports privacy-preserving tools which offer multiple data and identity protection options that users can pick from, while enforcing all key tenets of the GDPR. It also supports an efficient, user-friendly e-commerce experience for users in search of data assets, focusing on data discoverability and ease of access. Finally, it implements an innovative infrastructure integrating blockchain-based permissioning, token-based payments and distributed computations with SMPC, which for the first time, to our knowledge, realizes synergistic dynamics among privacy constraints, data value and information sharing.
Federated Information Management for virtual enterprises
Garita Rodriguez, C.O.

Chapter 6 Conclusions and Future Work

### 6.1 Summary of General Approach

The VE paradigm represents an active area of research and technological development, in which an extremely wide variety of existing ICT approaches, tools, components, models and standards can be applied. However, given the extension and complexity of the VE application domain, there are still many obstacles and open issues that need to be addressed when supporting advanced collaboration scenarios among enterprises involved in VEs. Here, the proper sharing and exchange of information among pre-existing heterogeneous and autonomous enterprises and their internal systems introduces particularly exigent challenges that are faced in the design of virtual enterprise support platforms. In this context, the general objective of this dissertation is the analysis, design and implementation of a federated Distributed Information Management System (DIMS), specifically tailored to properly support the complex requirements set forward by Virtual Enterprise collaborative scenarios.
The first step towards the accomplishment of this goal involved a detailed analysis of several related information management techniques and actual VE platforms that need to be evaluated when designing and developing the information management system for a given virtual enterprise support infrastructure. The presented analysis included a survey of generic distributed information management techniques, related information representation models and standards, as well as several relevant information management technologies and tools. Furthermore, a representative set of international VE projects was selected, described and classified in terms of the main features applied for integration of the VE distributed information. These projects were also analyzed against a given set of criteria that was specifically defined in order to compare and evaluate their different features. In this way, the results of this analysis represent a survey of the state of the art regarding the application of information management standards and technologies in existing VE support platforms.

Furthermore, in order to achieve a complete identification of the information management requirements for the target DIMS, a systematic analysis of the VE application domain was carried out. Namely, the analysis of the specific VE information management requirements for the DIMS was performed considering certain VE life-cycle scenarios supported by a reference VE Cooperation Layer, with emphasis on Industrial Manufacturing SMEs. The identified requirements included both the information modeling and the functional requirements for the DIMS. In addition, considering the results of the performed analysis, a clear need was identified regarding the application of the federated information management architecture, which in turn represents the generic framework proposed in this dissertation to support effective information sharing among the VE member enterprises.
Consequently, based on the identified distributed information management requirements and the proposed federated approach, the individual components of the federated database architecture were specifically designed and tailored to the VE application domain. Namely, the main components of the DIMS architecture were conceptualized, designed, and implemented, including: the federated VCL integrated schema, the DIMS Export Schema Manager Tool (ESMT), the Federated Query Processor (FQP), and the multi-user interoperable DIMS Server Agent. The export schema hierarchy definitions based on VE member roles and the workflow-driven federated query processing mechanisms represent the features of the DIMS architecture that support the import/export of secure information among the federated nodes in virtual enterprises.

Furthermore, the DIMS architectural components were applied to support real scenario cases within the industrial manufacturing domain, considering the general PRODNET VE demonstration environment. For instance, some of the devised DIMS demonstration cases showed how the FQP mechanism works together with the ESMT access rights definitions in order to adequately support the management of distributed business process information, which becomes crucial for the proper coordination and monitoring of the tasks assigned to different VE members. Finally, it was analyzed how the general DIMS federated approach can be tailored in order to cope with specific information management requirements encountered in different VE application domains, taking as an example the tourism sector. For this purpose, the presented DIMS requirement analysis, architecture design and system development phases were revised and adjusted to support Virtual Enterprises in the tourism application domain. In addition, it was demonstrated how several advanced Internet standards and development tools (e.g.
Java, Jini, XML among others) can be incorporated in the architectural design and implementation platform of the federated information management system. Furthermore, it is expected that the federated information management architecture approach presented in this dissertation can also be applied to other kinds of VE application domains. ### 6.2 Summary of Achievements The main contribution of this dissertation is the achievement of the design and implementation of a federated Distributed Information Management System that properly supports the cooperative information sharing and exchange, node autonomy, and information visibility levels and access rights for exchanged data among the VE nodes. Furthermore, other specific DIMS features or achievements regarding the management of distributed information for the virtual enterprise domain are enumerated below: - The DIMS integrated schema definitions are shared by all VE nodes and the data can be imported/exported from its source at the exact query-evaluation time, according to the proper access rights defined in the hierarchy of export schemas for VE partners. Consequently, distributed up-to-date data can always be accessed by the queries. Furthermore, this approach avoids the need for centralization of data and control over the VE nodes. - The DIMS integrated schema represents and provides access to all the information classes that are necessary to support the operation of the VE Cooperation Layer as a whole unit. Different clusters of information required by individual components of this layer are linked together through a coherent and uniform database schema. In this way, the information for the VE Cooperation Layer components is well integrated to support the behavior of the global “VE entity”. - The general DIMS federated architecture represents a major distinguishing characteristic in relation to the approach followed in those other VE-related projects. 
In fact, similar architectures are only identified in other projects in which the CO-IM group of the University of Amsterdam has also been in charge of the VE information management system. For example, some of the federated database architecture functionalities enumerated in this section provide particularly attractive features for handling several open issues associated with the management of information in VEs, that have not been directly incorporated in other VE infrastructures and projects (see Chapter 2 for more details). - The management of the hierarchy of VE partner roles and export schemas by DIMS supports a flexible and configurable definition of information access rights among VE member enterprises, based on, for instance, existing trust levels, production chain relations, legal contracts, and supervision clauses. - The federated query processing of DIMS provides simultaneous access to the particular VE information for which an enterprise is authorized from several other enterprises, while hiding the physical data location details from the end users and client applications. For example, in the PRODNET scenario the generic DIMS federated query processor was applied in order to support order status monitoring among VE members from a given VE coordinator node. - The DIMS server agent offers a wide variety of specialized high-level information management functionalities to support the VE creation and operation phases. These functions store and manage enterprise information according to a reference VE topology model, which is instantiated during the VE creation phase. For example, most of these functions expect both the VE identifier and the VE partner identifier as input parameters, in order to reinforce the consistent application of the VE paradigm concepts within different data sharing and exchange scenarios, especially when considering multiple VEs in the network.
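As a rough illustration of the federated query processing and role-based export schemas described in these achievements, the following sketch fans a query out to the member nodes and projects each answer onto the requester's export schema, hiding the physical data location from the caller. All names and data are invented for illustration; the actual DIMS was implemented in C++.

```python
# Per-node export schemas: for each node, a map from VE partner role to
# the set of attributes that role is allowed to see (hypothetical).
NODE_EXPORT_SCHEMAS = {
    "supplier_a": {"coordinator": {"order_id", "status", "due_date"},
                   "peer":        {"order_id", "status"}},
    "supplier_b": {"coordinator": {"order_id", "status"},
                   "peer":        set()},
}

# Local data held at each node; "cost" is never exported to anyone.
NODE_DATA = {
    "supplier_a": [{"order_id": 1, "status": "in_production",
                    "due_date": "2001-06-01", "cost": 900}],
    "supplier_b": [{"order_id": 2, "status": "shipped", "cost": 450}],
}


def federated_query(role, attributes):
    """Fan the query out to every node, project each node's rows onto the
    export schema for `role`, and merge the results. The caller never
    learns which node held which row."""
    result = []
    for node, rows in NODE_DATA.items():
        visible = NODE_EXPORT_SCHEMAS[node].get(role, set())
        wanted = set(attributes) & visible
        if not wanted:
            continue  # this node exports nothing to this role
        for row in rows:
            result.append({a: row[a] for a in wanted if a in row})
    return result
```

A coordinator querying order status thus sees rows from both suppliers, while a mere peer sees only what supplier_a's export schema grants, and internal attributes such as `cost` are filtered out at the node boundary.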
- The DIMS federated database schema achieves a comprehensive integration of different information representation models and standards to support the VE operation. Namely, this integrated schema supports the representation of VE-related information in compliance with some of the ICT standards and models presented in Chapter 2, including STEP, EDI and DBP models.
- The DIMS architecture applies a combination of workflow management technology and federated/distributed database information management approaches, which has substantially contributed to support flexible and configurable interaction scenarios among both internal and external modules of the VE Cooperation Layer. For example, some specific interoperability scenarios were described in Chapter 4, addressing how the implementation of the DIMS federated architecture can benefit from workflow plan specifications; and conversely, how the workflow management engine can exploit the distributed information management services offered by the DIMS.
- The DIMS architecture also defines an interoperability mechanism to support data exchange functionalities between the VE Cooperation Layer and internal enterprise systems. Through the proposed interoperability approach, the DIMS module can be dynamically integrated to interoperate with other internal company systems, such as ERP/PPCs.
- The modular development of DIMS within the PRODNET VCL environment provided the required level of security and message authentication for data exchange among enterprises, since it exploits different facilities offered by the specialized communication module of the VCL, e.g. the PCI module in PRODNET.
- The DIMS implementation exhibits satisfactory levels of both reliability and efficiency, which are necessary to adequately support the regular operation of the VE Cooperation Layer.
Namely, the application of careful design considerations during the entire DIMS software development life-cycle, the exploitation of a reliable internal DBMS, and the use of C++ as the development language for DIMS have produced an information management system that satisfies the reliability and performance requirements of the PRODNET VE cooperation scenarios. A more quantitative performance evaluation of the DIMS could not be performed because the developed system is still a prototype (i.e. it is not a final engineering product that can be fairly tested using standard benchmarking methodologies). Furthermore, the performance evaluation of a complex system such as DIMS, which is highly integrated with other VCL components, may also require properly adapting traditional performance evaluation methodologies, as most components are inter-related (it is not easy to make a separate evaluation of each component per se), and this problem is outside the scope of this thesis.
- The DIMS federated architecture approach can be extended and adapted to support specific information management requirements derived from significantly different VE application domains, ranging from the industrial manufacturing to the tourism sector.

Considering the major achievements listed above, we conclude that the proposed design and implementation of the DIMS architecture can properly satisfy all the objectives and information management requirements for VE support that were introduced in Chapter 1 of this thesis. Furthermore, the DIMS architecture represents a solid platform that can be extended in many directions, as described next.

### 6.3 Extensions and Future Work

This section describes a number of future research directions related to certain aspects of the work presented in this dissertation.
### 6.3.1 Management of Multiple VE Integrated Schemas

The presented DIMS federated approach has mostly focused on the support for VE collaboration scenarios in which there is a large number of international SMEs representing potential partners that can work together to satisfy a given business opportunity. The best way in which this kind of collaboration among international SMEs can be rapidly materialized, and operate in an agile and reactive manner, is through the application of commonly defined information models and standards that minimize the semantic and structural heterogeneity that exists among the internal systems of these enterprises. This is the main reason why it is assumed in the DIMS design that all the enterprises share the same integrated schema definitions within the VE Cooperation Layer.

However, in order to support other kinds of VE scenarios, the federated schema architecture of the DIMS could be extended to support the negotiation and sharing of different integrated schema definitions among VE enterprise members. For example, this feature would support VE collaborations in the product engineering sector, in which small groups of enterprises need to negotiate and agree on the particular schema definitions involved in the technical design of a given product. In this case, enterprises first need to work together in order to unify their data models towards the definition of multilaterally agreed integrated schemas. To support the definition of multiple integrated schemas among VE members within the federated DIMS architecture, it is necessary to design a data manipulation language that would allow the derivation of export schemas from local schemas, and the definition of integrated schemas from export/local schemas.
For example, the definition/derivation language specifications that have been developed at the CO-IM group of the University of Amsterdam for the PEER federated database system, could be adapted to a federated database architecture for VE support [17, 5, 15, 16]. Alternatively, a general ODMG-based approach could be followed in which ODL and OQL languages are used for the specification of the definition/derivation language [50]. In this case, different types of federated schemas could be represented in ODL, and the derivation language could be based on OQL for data selection/projection operations. Finally, the application of schema “mediator” components, which can help end users with the definition of the specific VE database schemas, can be evaluated. In other words, intelligent database schema mediators could assist or even automate certain global schema definition tasks. For example, in the NIIP project a VE *mediated* global schema is handled, through which conflicts between structural and semantic representations are resolved at run time [114]. ### 6.3.2 XML for VE Federated Information Management in VEs The role that XML can play as a standard format to support the sharing and exchange of data and metadata between the DIMS component and the internal enterprise systems needs to be further investigated. Furthermore, XML can also be applied to support certain federated information management functionalities. For example: - The local schema at every DIMS node could represent and manage XML documents directly. - The definition of DIMS export schemas could be based on XML documents. - The export schemas represented as XML may be merged into an integrated schema, using some kind of definition/derivation language extensions. For instance, the use of XML to support database views has been addressed in [1, 2]. - When an export schema is queried through the federated integrated schema, the result of the subqueries may be represented and sent back as XML documents. 
This can facilitate the data processing tasks of Web-based client applications. - The XML metadata may be used to cope with some schema integration problems associated with local schema heterogeneity issues, as explained in Section 6.3.1. Note also that the use of XML is complementary to an interoperability approach based on schema integration using data definition/derivation languages. In other words, the combination of both approaches could be possible. The advantage of XML is that it does not really make assumptions about database models, and it would properly support a "document-based" information management approach, such as the OAG proposal described in Chapter 2. In fact, the issue of combining "document-based" approaches with approaches that rely on generic database interoperability architectures is also a challenging point. For instance, the application of document-based approaches for enterprise data exchange within the DIMS federated database architecture needs to be further studied.

### 6.3.3 Generic Federated Information Management for IDFs

Interchange Data Formats (IDFs) comprise all those standards aiming at the exchange of data among different enterprises. For instance, EDI and STEP standards, enterprise document models, and some XML document definitions could be considered as IDFs. Multiple IDF formats may need to be handled within a given VE platform to support different functionalities, in the same way in which, for example, the PRODNET VCL supports EDI messages and STEP files. It is clear that data associated with different IDFs needs to be managed by the DIMS of the VE Cooperation Layer. Here, the challenge is to design a common data access mechanism for the DIMS in order to support as many IDFs as possible in the most flexible and generic way. Many of these formats are based on metadata schema definitions, and on the exchange of data values that comply with a subset of the metadata schema.
Therefore, it remains to be evaluated whether these requirements can be well supported within a general federated IDF management framework for the VE Cooperation Layer. Another important issue is the fact that, despite the use of different IDFs, the access rights and visibility levels among the VE member enterprises still need to be defined and reinforced, and most IDF management tools do not properly support this feature. Thus, by representing and storing the IDF information within a federated database management system, the IDF "documents" that are exchanged among enterprises can be better secured and protected. Finally, considering that most IDF documents can be represented in XML, the idea of building an XML database with federated capabilities seems attractive, as described in Section 6.3.2.

### 6.3.4 Other Future Directions

Besides the main lines of research described in the previous sections, there are several other important points for extensions and future directions that are briefly enumerated below (most of them have been addressed in previous chapters of this thesis): - **Active/federated database capabilities for advanced workflow management support.** The objective of this subject is to analyze the application of active database concepts within a federated database architecture in order to provide elegant and general support for the workflow management component of the VE Cooperation Layer. For instance, given the advantages of the definition of an information management and coordination kernel in PRODNET (constituted by the DIMS and LCM modules), the extended support that an “active” federated database management system can provide for workflow management engines needs to be further investigated. Namely, rules stored and managed by an active database system can be a useful mechanism to support the control and the data exchange among workflow management activities.
For some examples of the use of active database rules to support workflow management, see [168]. - **Incorporation of high-performance distributed computing services.** As described in Chapter 2, some VE collaborative scenarios may demand an information management platform able to handle extremely large data collections that need to be accessed by geographically distributed users running computationally intensive processes. In this kind of scenario, the incorporation of high-performance distributed resource and data management services, such as the Data Grid services, can be considered for the DIMS [26]. - **Distributed transaction management functionalities.** As mentioned in Chapter 2, for some VE infrastructures there is a prominent need to support advanced distributed transaction management mechanisms [176, 114]. This need was not relevant to the kind of VE scenarios addressed in this thesis. However, the DIMS architecture could be extended with these functionalities in order to be applied in other VE domains such as concurrent engineering, or to support advanced distributed workflow management requirements. - **Further development of Internet directory management functionalities.** In Chapters 2 and 3, the need for directories of public information related to VE collaborations was identified. For example, for every enterprise it would be convenient to keep a directory of information describing the company profile and the role that it would like to take in potential VEs. This information would be made available to all other nodes in the network. The support for this kind of directory was certainly considered as a functional requirement for DIMS, although it was not fully included in the final DIMS implementation in PRODNET (mainly because it was outside the scope of the project).
However, several advanced directory management functionalities were developed as an extension to the DIMS architecture, regarding common “interface definitions catalogues” such as the Service Interface Definitions Catalogue for VEs in the tourism sector (see Chapter 6). In general, other directory management functionalities can be incorporated into the current DIMS platform implementation. - **Design and development of generic Internet client applications and tools to support federated access to VE-related information.** As mentioned in Chapter 2, the development of Internet client applications and tools to access data stored in the DIMS was not strictly mandatory in PRODNET, mainly because the reference VE Cooperation Layer was conceived to be installed locally at each enterprise, and therefore the VE information is accessed by end users and applications through specific DIMS services installed in the local VE node. However, in other VE application domains, Internet technologies and related standards such as Java, Jini, and XML can be incorporated into the DIMS architecture to provide access to VE-related information, as illustrated in Chapter 6. Other Web-based client applications supporting federated access to VE distributed information can be added to the DIMS architecture in the future. - **Automatic creation of enterprise export schemas.** In Chapter 4, some research directions were described regarding possible extensions to the DIMS Export Schema Management functionalities, such as the introduction of export schema templates and the automatic creation of enterprise export schemas. These extensions would facilitate to a great extent the task of defining individual export schemas on local enterprise information for every other VE partner.
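The export-schema-template extension mentioned in the last point can be sketched as follows: a role template is intersected with the local schema to derive a partner-specific export schema automatically, instead of defining each one by hand. Schema contents and role names here are invented for illustration.

```python
# Hypothetical local schema of one enterprise node: relation -> attributes.
LOCAL_SCHEMA = {
    "order":   ["order_id", "status", "due_date", "internal_cost"],
    "product": ["product_id", "name", "design_file"],
}

# Template per VE partner role: which attributes that role may see.
ROLE_TEMPLATES = {
    "coordinator":   {"order": ["order_id", "status", "due_date"],
                      "product": ["product_id", "name"]},
    "subcontractor": {"order": ["order_id", "status"]},
}


def derive_export_schema(role):
    """Intersect the role template with the local schema, so that a
    template can never expose an attribute the local schema lacks."""
    template = ROLE_TEMPLATES.get(role, {})
    export = {}
    for relation, attrs in template.items():
        local_attrs = LOCAL_SCHEMA.get(relation, [])
        visible = [a for a in attrs if a in local_attrs]
        if visible:
            export[relation] = visible
    return export
```

Sensitive attributes such as `internal_cost` never appear in any derived export schema because no template grants them, and unknown roles derive an empty schema by construction.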
Finally, it is foreseen that the research work regarding the federated information management approach presented in this thesis will be extended and used in the near future to support virtual collaboration scenarios in other application domains, including VEs in the agri-business and tourism sectors, virtual scientific laboratories, and distributed supply chain management.
System Principles
version 5.7

Contents

1 System Principles
  1.1 System Principles
    1.1.1 Starting the System
    1.1.2 Restarting and Stopping the System
    1.1.3 Boot Scripts
    1.1.4 Code Loading Strategy
    1.1.5 File Types
  1.2 Error Logging
    1.2.1 Error Information From the Runtime System
    1.2.2 SASL Error Logging
  1.3 Creating a First Target System
    1.3.1 Introduction
    1.3.2 Creating a Target System
    1.3.3 Installing a Target System
    1.3.4 Starting a Target System
    1.3.5 System Configuration Parameters
    1.3.6 Differences from the Install Script
    1.3.7 Listing of target_system.erl

List of Tables

Chapter 1 System Principles

1.1 System Principles

1.1.1 Starting the System

An Erlang runtime system is started with the command `erl`:

```
% erl
Erlang (BEAM) emulator version 5.2.3.5 [hipe] [threads:0]

Eshell V5.2.3.5  (abort with ^G)
1>
```

erl understands a number of command line arguments, see erl(1). A number of them are also described in this chapter. 
Application programs can access the values of the command line arguments by calling one of the functions `init:get_argument(Key)` or `init:get_arguments()`. See `init(3)`.

1.1.2 Restarting and Stopping the System

The runtime system can be halted by calling `halt/0,1`. See erlang(3). The module `init` contains functions for restarting, rebooting and stopping the runtime system. See `init(3)`.

```
init:restart()
init:reboot()
init:stop()
```

Also, the runtime system will terminate if the Erlang shell is terminated.

1.1.3 Boot Scripts

The runtime system is started using a boot script. The boot script contains instructions on which code to load and which processes and applications to start.

A boot script file has the extension .script. The runtime system uses a binary version of the script. This binary boot script file has the extension .boot.

Which boot script to use is specified by the command line flag -boot. The extension .boot should be omitted. Example, using the boot script start_all.boot:

```
% erl -boot start_all
```

If no boot script is specified, it defaults to ROOT/bin/start, see Default Boot Scripts below.

The command line flag -init_debug makes the init process write some debug information while interpreting the boot script:

```
% erl -init_debug
{progress,preloaded}
{progress,kernel_load_completed}
{progress,modules_loaded}
{start,heart}
{start,error_logger}
...
```

See script(4) for a detailed description of the syntax and contents of the boot script.

Default Boot Scripts

Erlang/OTP comes with two boot scripts:

- start_clean.boot: Loads the code for and starts the applications Kernel and STDLIB.
- start_sasl.boot: Loads the code for and starts the applications Kernel, STDLIB and SASL.

Which of start_clean and start_sasl to use as default is decided by the user when installing Erlang/OTP using Install. The user is asked “Do you want to use a minimal system startup instead of the SASL startup”. If the answer is yes, then start_clean is used, otherwise start_sasl is used. 
A copy of the selected boot script is made, named start.boot and placed in the ROOT/bin directory.

User-Defined Boot Scripts

It is sometimes useful or necessary to create a user-defined boot script. This is true especially when running Erlang in embedded mode, see Code Loading Strategy [page 3].

It is possible to write a boot script manually. The recommended way to create a boot script, however, is to generate the boot script from a release resource file Name.rel, using the function systools:make_script/1,2. This requires that the source code is structured as applications according to the OTP design principles. (The program does not have to be started in terms of OTP applications but can be plain Erlang.) Read more about .rel files in OTP Design Principles and rel(4).

The binary boot script file Name.boot is generated from the boot script file Name.script using the function systools:script2boot(File).

1.1.4 Code Loading Strategy

The runtime system can be started in either embedded or interactive mode. Which one to use is decided by the command line flag `-mode`.

```
% erl -mode embedded
```

Default mode is interactive.

- In embedded mode, all code is loaded during system start-up according to the boot script. (Code can also be loaded later by explicitly ordering the code server to do so.)
- In interactive mode, code is dynamically loaded when first referenced. When a call to a function in a module is made, and the module is not loaded, the code server searches the code path and loads the module into the system.

Initially, the code path consists of the current working directory and all object code directories under `ROOT/lib`, where `ROOT` is the installation directory of Erlang/OTP. Directories can be named `Name[-Vsn]` and the code server, by default, chooses the directory with the highest version number among those which have the same `Name`. The `-Vsn` suffix is optional. 
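The version-selection rule just described (among directories sharing the same `Name`, pick the one with the highest `Vsn`; the `-Vsn` suffix is optional) can be sketched as follows. This is an illustrative Python model with invented directory names, not the code server's actual implementation:

```python
import re

def pick_latest(dirs):
    """Keep, per Name, the Name[-Vsn] directory with the highest version.

    Versionless directories are treated as the lowest-priority candidate
    here; this is an approximation of the rule, for illustration only.
    """
    best = {}
    for d in dirs:
        m = re.fullmatch(r"(.+?)-(\d+(?:\.\d+)*)", d)
        if m:
            name = m.group(1)
            vsn = tuple(int(x) for x in m.group(2).split("."))
        else:
            name, vsn = d, ()          # no -Vsn suffix
        if name not in best or vsn > best[name][0]:
            best[name] = (vsn, d)
    return sorted(v[1] for v in best.values())

# "1.10" compares higher than "1.9" because versions are compared
# component-wise as integers, not as strings.
print(pick_latest(["stdlib-1.9", "stdlib-1.10", "kernel-2.7"]))
```

Note the component-wise comparison: a naive string comparison would wrongly rank `1.9` above `1.10`.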
If an `ebin` directory exists under the `Name[-Vsn]` directory, it is this directory which is added to the code path.

The code path can be extended by using the command line flags `-pa Directories` and `-pz Directories`. These will add `Directories` to the head or end of the code path, respectively. Example:

```
% erl -pa /home/arne/mycode
```

The code server module `code` contains a number of functions for modifying and checking the search path, see `code(3)`.

1.1.5 File Types

The following file types are defined in Erlang/OTP:

<table> <thead> <tr> <th>File Type</th> <th>File Name/Extension</th> <th>Documented in</th> </tr> </thead> <tbody> <tr> <td>module</td> <td>.erl</td> <td>Erlang Reference Manual</td> </tr> <tr> <td>include file</td> <td>.hrl</td> <td>Erlang Reference Manual</td> </tr> <tr> <td>release resource file</td> <td>.rel</td> <td>rel(4)</td> </tr> <tr> <td>application resource file</td> <td>.app</td> <td>app(4)</td> </tr> <tr> <td>boot script</td> <td>.script</td> <td>script(4)</td> </tr> <tr> <td>binary boot script</td> <td>.boot</td> <td>-</td> </tr> <tr> <td>configuration file</td> <td>.config</td> <td>config(4)</td> </tr> <tr> <td>application upgrade file</td> <td>.appup</td> <td>appup(4)</td> </tr> <tr> <td>release upgrade file</td> <td>relup</td> <td>relup(4)</td> </tr> </tbody> </table> Table 1.1: File Types

1.2 Error Logging

1.2.1 Error Information From the Runtime System

Error information from the runtime system, that is, information about a process terminating due to an uncaught error exception, is by default written to the terminal (tty):

```console
=ERROR REPORT==== 9-Dec-2003::13:25:02 ===
Error in process <0.27.0> with exit value: {{badmatch,[1,2,3]},[{m,f,1},{shell,eval_loop,2}]}
```

The error information is handled by the error logger, a system process registered as error_logger. This process receives all error messages from the Erlang runtime system and also from the standard behaviours and different Erlang/OTP applications. 
The exit reasons (such as badmatch above) used by the runtime system are described in [Errors and Error Handling] in the Erlang Reference Manual.

The process error_logger and its user interface (with the same name) are described in [error_logger(3)]. It is possible to configure the system so that error information is written to file instead of, or as well as, the tty. Also, it is possible for user defined applications to send and format error information using error_logger.

1.2.2 SASL Error Logging

The standard behaviours (supervisor, gen_server, etc.) send progress and error information to error_logger. If the SASL application is started, this information is written to the tty as well. See [SASL Error Logging] in the SASL User’s Guide for further information.

```console
% erl -boot start_sasl
Erlang (BEAM) emulator version 5.4.13 [hipe] [threads:0] [kernel-poll]

=PROGRESS REPORT==== 31-Mar-2006::12:45:58 ===
          supervisor: {local,sasl_safe_sup}
             started: [{pid,<0.33.0>},
                       {name,alarm_handler},
                       {mfa,{alarm_handler,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 31-Mar-2006::12:45:58 ===
          supervisor: {local,sasl_safe_sup}
             started: [{pid,<0.34.0>},
                       {name,overload},
                       {mfa,{overload,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 31-Mar-2006::12:45:58 ===
          supervisor: {local,sasl_sup}
             started: [{pid,<0.32.0>},
                       ...
```

1.3 Creating a First Target System

1.3.1 Introduction

When creating a system using Erlang/OTP, the simplest way is to install Erlang/OTP somewhere, install the application specific code somewhere else, and then start the Erlang runtime system, making sure the code path includes the application specific code.

Often it is not desirable to use an Erlang/OTP system as is. A developer may create new Erlang/OTP compliant applications for a particular purpose, and several original Erlang/OTP applications may be irrelevant for the purpose in question. 
Thus, there is a need to be able to create a new system based on a given Erlang/OTP system, where dispensable applications are removed, and where a set of new applications is included in the new system. Documentation and source code are irrelevant and are therefore not included in the new system.

This chapter is about creating such a system, which we call a target system. In the following sections we consider creating target systems with different requirements of functionality:

- a basic target system that can be started by calling the ordinary erl script,
- a simple target system where also code replacement in run-time can be performed, and
- an embedded target system where there is also support for logging output from the system to file for later inspection, and where the system can be started automatically at boot time.

We only consider the case when Erlang/OTP is running on a UNIX system.

There is an example Erlang module target_system.erl that contains functions for creating and installing a target system. That module is used in the examples below. The source code of the module is listed at the end of this chapter.

1.3.2 Creating a Target System

It is assumed that you have a working Erlang/OTP system structured according to the OTP Design Principles.

Step 1. First create a .rel file (see rel(4)) that specifies the erts version and lists all applications that should be included in the new basic target system. An example is the following mysystem.rel file:

```
%% mysystem.rel
{release,
 {"MYSYSTEM", "FIRST"},
 {erts, "5.1"},
 [{kernel, "2.7"},
  {stdlib, "1.10"},
  {sasl, "1.9.3"},
  {pea, "1.0"}]}.
```

The listed applications are not only original Erlang/OTP applications but possibly also new applications that you have written yourself (here exemplified by the application pea).

Step 2. 
From the directory where the mysystem.rel file resides, start the Erlang/OTP system:

```
os> erl -pa /home/user/target_system/myapps/pea-1.0/ebin
```

where also the path to the pea-1.0 ebin directory is provided.

Step 3. Now create the target system:

```
1> target_system:create("mysystem").
```

The target_system:create/1 function does the following:

1. Reads the mysystem.rel file, and creates a new file plain.rel which is identical to the former, except that it only lists the kernel and stdlib applications.
2. From the mysystem.rel and plain.rel files creates the files mysystem.script, mysystem.boot, plain.script, and plain.boot through a call to systools:make_script/2.
3. Creates the file mysystem.tar.gz by a call to systools:make_tar/2. That file has the following contents:

```
erts-5.1/bin/
releases/FIRST/start.boot
releases/mysystem.rel
lib/kernel-2.7/
lib/stdlib-1.10/
lib/sasl-1.9.3/
lib/pea-1.0/
```

The file releases/FIRST/start.boot is a copy of our mysystem.boot, and a copy of the original mysystem.rel has been put in the releases directory.
4. Creates the temporary directory tmp and extracts the tar file mysystem.tar.gz into that directory.
5. Deletes the erl and start files from tmp/erts-5.1/bin.
6. Creates the directory tmp/bin.
7. Copies the previously created file plain.boot to tmp/bin/start.boot.
8. Copies the files epmd, run_erl, and to_erl from the directory tmp/erts-5.1/bin to the directory tmp/bin.
9. Creates the file tmp/releases/start_erl.data with the contents “5.1 FIRST”.
10. Recreates the file mysystem.tar.gz from the directories in the directory tmp, and removes tmp.

1.3.3 Installing a Target System

Step 4. Install the created target system in a suitable directory.

```
2> target_system:install("mysystem", "/usr/local/erl-target").
```

The function target_system:install/2 does the following:

1. Extracts the tar file mysystem.tar.gz into the target directory /usr/local/erl-target.
2. 
In the target directory reads the file releases/start_erl.data in order to find the Erlang runtime system version (“5.1”).
3. Substitutes %FINAL_ROOTDIR% and %EMU% for /usr/local/erl-target and beam, respectively, in the files erl.src, start.src, and start_erl.src of the target erts-5.1/bin directory, and puts the resulting files erl, start, and start_erl in the target bin directory.
4. Finally the target releases/RELEASES file is created from data in the releases/mysystem.rel file.

1.3.4 Starting a Target System

Now we have a target system that can be started in various ways. We start it as a basic target system by invoking

```
os> /usr/local/erl-target/bin/erl
```

where only the kernel and stdlib applications are started, i.e. the system is started as an ordinary development system. There are only two files needed for all this to work: the bin/erl file (obtained from erts-5.1/bin/erl.src) and the bin/start.boot file (a copy of plain.boot).

We can also start a distributed system (requires bin/epmd).

To start all applications specified in the original mysystem.rel file, use the -boot flag as follows:

```
os> /usr/local/erl-target/bin/erl -boot /usr/local/erl-target/releases/FIRST/start
```

We start a simple target system as above. The only difference is that also the file releases/RELEASES is present for code replacement in run-time to work.

To start an embedded target system the shell script bin/start is used. That shell script calls bin/run_erl, which in turn calls bin/start_erl (roughly, start_erl is an embedded variant of erl).

The shell script start is only an example. You should edit it to suit your needs. Typically it is executed when the UNIX system boots.

run_erl is a wrapper that provides logging of output from the run-time system to file. It also provides a simple mechanism for attaching to the Erlang shell (to_erl). 
start_erl requires the root directory (“/usr/local/erl-target”), the releases directory (“/usr/local/erl-target/releases”), and the location of the start_erl.data file. It reads the run-time system version (“5.1”) and release version (“FIRST”) from the start_erl.data file, starts the run-time system of the version found, and provides the -boot flag specifying the boot file of the release version found (“releases/FIRST/start.boot”).

start_erl also assumes that there is a sys.config file in the release version directory (“releases/FIRST/sys.config”). That is the topic of the next section. The start_erl shell script should normally not be altered by the user.

1.3.5 System Configuration Parameters

As was pointed out above, start_erl requires a sys.config in the release version directory (“releases/FIRST/sys.config”). If there is no such file, the system start will fail. Hence such a file has to be added as well.

If you have system configuration data that are neither file location dependent nor site dependent, it may be convenient to create sys.config early, so that it becomes a part of the target system tar file created by target_system:create/1. In fact, if you create, in the current directory, not only the mysystem.rel file, but also a sys.config file, the latter file will be tacitly put in the appropriate directory.

1.3.6 Differences from the Install Script

The above install/2 procedure differs somewhat from that of the ordinary Install shell script. In fact, create/1 makes the release package as complete as possible, and leaves it to the install/2 procedure to finish by only considering location dependent files.

1.3.7 Listing of target_system.erl

```erlang
-module(target_system).
-include_lib("kernel/include/file.hrl").
-export([create/1, install/2]).
-define(BUFSIZE, 8192).

%% Note: RelFileName below is the *stem* without trailing .rel,
%% .script etc.
%%
%% create(RelFileName)
%%
create(RelFileName) ->
    RelFile = RelFileName ++ ".rel",
    io:fwrite("Reading file: \"~s\" ...~n", [RelFile]),
    {ok, [RelSpec]} = file:consult(RelFile),
    io:fwrite("Creating file: \"~s\" from \"~s\" ...~n",
              ["plain.rel", RelFile]),
    {release,
     {RelName, RelVsn},
     {erts, ErtsVsn},
     AppVsns} = RelSpec,
    PlainRelSpec = {release,
                    {RelName, RelVsn},
                    {erts, ErtsVsn},
                    lists:filter(fun({kernel, _}) -> true;
                                    ({stdlib, _}) -> true;
                                    (_) -> false
                                 end, AppVsns)
                   },
    {ok, Fd} = file:open("plain.rel", [write]),
    io:fwrite(Fd, "~p.~n", [PlainRelSpec]),
    file:close(Fd),

    io:fwrite("Making \"plain.script\" and \"plain.boot\" files ...~n"),
    make_script("plain"),

    io:fwrite("Making \"~s.script\" and \"~s.boot\" files ...~n",
              [RelFileName, RelFileName]),
    make_script(RelFileName),

    TarFileName = io_lib:fwrite("~s.tar.gz", [RelFileName]),
    io:fwrite("Creating tar file \"~s\" ...~n", [TarFileName]),
    make_tar(RelFileName),

    io:fwrite("Creating directory \"tmp\" ...~n"),
    file:make_dir("tmp"),

    io:fwrite("Extracting \"~s\" into directory \"tmp\" ...~n",
              [TarFileName]),
    extract_tar(TarFileName, "tmp"),

    TmpBinDir = filename:join(["tmp", "bin"]),
    ErtsBinDir = filename:join(["tmp", "erts-" ++ ErtsVsn, "bin"]),
    io:fwrite("Deleting \"erl\" and \"start\" in directory \"~s\" ...~n",
              [ErtsBinDir]),
    file:delete(filename:join([ErtsBinDir, "erl"])),
    file:delete(filename:join([ErtsBinDir, "start"])),

    io:fwrite("Creating temporary directory \"~s\" ...~n", [TmpBinDir]),
    file:make_dir(TmpBinDir),

    io:fwrite("Copying file \"plain.boot\" to \"~s\" ...~n",
              [filename:join([TmpBinDir, "start.boot"])]),
    copy_file("plain.boot", filename:join([TmpBinDir, "start.boot"])),

    io:fwrite("Copying files \"epmd\", \"run_erl\" and \"to_erl\" from~n"
              "\"~s\" to \"~s\" ...~n",
              [ErtsBinDir, TmpBinDir]),
    copy_file(filename:join([ErtsBinDir, "epmd"]),
              filename:join([TmpBinDir, "epmd"]), [preserve]),
    copy_file(filename:join([ErtsBinDir, "run_erl"]),
              filename:join([TmpBinDir, "run_erl"]), [preserve]),
    copy_file(filename:join([ErtsBinDir, "to_erl"]),
              filename:join([TmpBinDir, "to_erl"]), [preserve]),

    StartErlDataFile = filename:join(["tmp", "releases", "start_erl.data"]),
    io:fwrite("Creating \"~s\" ...~n", [StartErlDataFile]),
    StartErlData = io_lib:fwrite("~s ~s~n", [ErtsVsn, RelVsn]),
    write_file(StartErlDataFile, StartErlData),

    io:fwrite("Recreating tar file \"~s\" from contents in directory "
              "\"tmp\" ...~n", [TarFileName]),
    {ok, Tar} = erl_tar:open(TarFileName, [write, compressed]),
    {ok, Cwd} = file:get_cwd(),
    file:set_cwd("tmp"),
    erl_tar:add(Tar, "bin", []),
    erl_tar:add(Tar, "erts-" ++ ErtsVsn, []),
    erl_tar:add(Tar, "releases", []),
    erl_tar:add(Tar, "lib", []),
    erl_tar:close(Tar),
    file:set_cwd(Cwd),
    io:fwrite("Removing directory \"tmp\" ...~n"),
    remove_dir_tree("tmp"),
    ok.

install(RelFileName, RootDir) ->
    TarFile = RelFileName ++ ".tar.gz",
    io:fwrite("Extracting ~s ...~n", [TarFile]),
    extract_tar(TarFile, RootDir),
    StartErlDataFile = filename:join([RootDir, "releases", "start_erl.data"]),
    {ok, StartErlData} = read_txt_file(StartErlDataFile),
    [ErlVsn, RelVsn| _] = string:tokens(StartErlData, " \n"),
    ErtsBinDir = filename:join([RootDir, "erts-" ++ ErlVsn, "bin"]),
    BinDir = filename:join([RootDir, "bin"]),
    io:fwrite("Substituting in erl.src, start.src and start_erl.src to~n"
              "form erl, start and start_erl ...~n"),
    subst_src_scripts(["erl", "start", "start_erl"], ErtsBinDir, BinDir,
                      [{"FINAL_ROOTDIR", RootDir}, {"EMU", "beam"}],
                      [preserve]),
    io:fwrite("Creating the RELEASES file ...~n"),
    create_RELEASES(RootDir,
                    filename:join([RootDir, "releases", RelFileName])).

%% LOCALS

%% make_script(RelFileName)
%%
make_script(RelFileName) ->
    Opts = [no_module_tests],
    systools:make_script(RelFileName, Opts).

%% make_tar(RelFileName)
%%
make_tar(RelFileName) ->
    RootDir = code:root_dir(),
    systools:make_tar(RelFileName, [{erts, RootDir}]).

%% extract_tar(TarFile, DestDir)
%%
extract_tar(TarFile, DestDir) ->
    erl_tar:extract(TarFile, [{cwd, DestDir}, compressed]).

create_RELEASES(DestDir, RelFileName) ->
    release_handler:create_RELEASES(DestDir, RelFileName ++ ".rel").

subst_src_scripts(Scripts, SrcDir, DestDir, Vars, Opts) ->
    lists:foreach(fun(Script) ->
                          subst_src_script(Script, SrcDir, DestDir,
                                           Vars, Opts)
                  end, Scripts).

subst_src_script(Script, SrcDir, DestDir, Vars, Opts) ->
    subst_file(filename:join([SrcDir, Script ++ ".src"]),
               filename:join([DestDir, Script]),
               Vars, Opts).

subst_file(Src, Dest, Vars, Opts) ->
    {ok, Conts} = read_txt_file(Src),
    NConts = subst(Conts, Vars),
    write_file(Dest, NConts),
    case lists:member(preserve, Opts) of
        true ->
            {ok, FileInfo} = file:read_file_info(Src),
            file:write_file_info(Dest, FileInfo);
        false ->
            ok
    end.

%% subst(Str, Vars)
%% Vars = [{Var, Val}]
%% Var = Val = string()
%% Substitute all occurrences of %Var% for Val in Str, using the list
%% of variables in Vars.
%%
subst(Str, Vars) ->
    subst(Str, Vars, []).

subst([$%, C| Rest], Vars, Result) when $A =< C, C =< $Z ->
    subst_var([C| Rest], Vars, Result, []);
subst([$%, C| Rest], Vars, Result) when $a =< C, C =< $z ->
    subst_var([C| Rest], Vars, Result, []);
subst([$%, C| Rest], Vars, Result) when C == $_ ->
    subst_var([C| Rest], Vars, Result, []);
subst([C| Rest], Vars, Result) ->
    subst(Rest, Vars, [C| Result]);
subst([], _Vars, Result) ->
    lists:reverse(Result).

subst_var([$%| Rest], Vars, Result, VarAcc) ->
    Key = lists:reverse(VarAcc),
    case lists:keysearch(Key, 1, Vars) of
        {value, {Key, Value}} ->
            subst(Rest, Vars, lists:reverse(Value, Result));
        false ->
            subst(Rest, Vars, [$%| VarAcc ++ [$%| Result]])
    end;
subst_var([C| Rest], Vars, Result, VarAcc) ->
    subst_var(Rest, Vars, Result, [C| VarAcc]);
subst_var([], Vars, Result, VarAcc) ->
    subst([], Vars, [VarAcc ++ [$%| Result]]).

copy_file(Src, Dest) ->
    copy_file(Src, Dest, []).

copy_file(Src, Dest, Opts) ->
    {ok, InFd} = file:open(Src, [raw, binary, read]),
    {ok, OutFd} = file:open(Dest, [raw, binary, write]),
    do_copy_file(InFd, OutFd),
    file:close(InFd),
    file:close(OutFd),
    case lists:member(preserve, Opts) of
        true ->
            {ok, FileInfo} = file:read_file_info(Src),
            file:write_file_info(Dest, FileInfo);
        false ->
            ok
    end.

do_copy_file(InFd, OutFd) ->
    case file:read(InFd, ?BUFSIZE) of
        {ok, Bin} ->
            file:write(OutFd, Bin),
            do_copy_file(InFd, OutFd);
        eof ->
            ok
    end.

write_file(FName, Conts) ->
    {ok, Fd} = file:open(FName, [write]),
    file:write(Fd, Conts),
    file:close(Fd).

read_txt_file(File) ->
    {ok, Bin} = file:read_file(File),
    {ok, binary_to_list(Bin)}.

remove_dir_tree(Dir) ->
    remove_all_files(".", [Dir]).

remove_all_files(Dir, Files) ->
    lists:foreach(fun(File) ->
                          FilePath = filename:join([Dir, File]),
                          {ok, FileInfo} = file:read_file_info(FilePath),
                          case FileInfo#file_info.type of
                              directory ->
                                  {ok, DirFiles} = file:list_dir(FilePath),
                                  remove_all_files(FilePath, DirFiles),
                                  file:del_dir(FilePath);
                              _ ->
                                  file:delete(FilePath)
                          end
                  end, Files).
```

List of Tables
1.1 File Types
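The `%Var%` substitution implemented by the `subst` functions in the listing above replaces `%Key%` by its value when `Key` is bound in `Vars`, and leaves the text unchanged otherwise. Approximately the same behaviour can be sketched in Python (this is an illustration, not a translation of every edge case):

```python
import re

def subst(text, variables):
    """Replace %Var% with its bound value; keep unbound %Var% as-is.

    Var names are restricted here to [A-Za-z_][A-Za-z0-9_]*, roughly
    matching the characters the Erlang version starts a variable with.
    """
    def repl(match):
        return variables.get(match.group(1), match.group(0))
    return re.sub(r"%([A-Za-z_][A-Za-z0-9_]*)%", repl, text)

script = 'ROOTDIR="%FINAL_ROOTDIR%"\nEMU=%EMU%'
print(subst(script, {"FINAL_ROOTDIR": "/usr/local/erl-target",
                     "EMU": "beam"}))
```

A lone `%` (as in `100%`) is not touched, since substitution requires a matched `%...%` pair around a variable name.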
Branching execution symmetry in Jeopardy by available implicit arguments analysis

Tilsted Kristensen, Joachim; Kaarsgaard, Robin; Thomsen, Michael Kirkedal

Published in: NIKT: Norsk IKT-konferanse for forskning og utdanning
Publication date: 2023
Document version: Publisher's PDF, also known as Version of record
Document license: Unspecified

Branching execution symmetry in Jeopardy by available implicit arguments analysis

Joachim Kristensen\textsuperscript{1}, Robin Kaarsgaard\textsuperscript{2}, and Michael Kirkedal Thomsen\textsuperscript{1,3}

\textsuperscript{1} University of Oslo, Oslo, Norway
\textsuperscript{2} University of Edinburgh, Edinburgh, United Kingdom
\textsuperscript{3} University of Copenhagen, Copenhagen, Denmark

Abstract. When the inverse of an algorithm is well-defined – that is, when its output can be deterministically transformed into the input producing it – we say that the algorithm is invertible. While one can describe an invertible algorithm using a general-purpose programming language, it is generally not possible to guarantee that its inverse is well-defined without additional argument. Reversible languages enforce deterministic inverse interpretation at the cost of expressibility, by restricting the building blocks from which an algorithm may be constructed. Jeopardy is a functional programming language designed for writing invertible algorithms without the syntactic restrictions of reversible programming. In particular, Jeopardy allows the limited use of locally non-invertible operations, provided that they are used in a way that can be statically determined to be globally invertible. However, guaranteeing invertibility in Jeopardy is not obvious. One of the central problems in guaranteeing invertibility is that of deciding whether a program is symmetric in the face of branching control flow. 
In this paper, we show how Jeopardy can solve this problem, using a program analysis called available implicit arguments analysis, to approximate branching symmetries.

Keywords: Program analysis, functional programming, invertible languages

1 Introduction

The interest in programs that can recover their inputs from a computed output has long existed: from McCarthy’s generate-and-test method [13] to the numerous inversion techniques associated with the \textit{reversible model of computation} [1,6,11]. Many languages have been designed to guarantee that programs describe reversible algorithms, often by restricting which programs are allowed. A notable such language is Janus [12,19], a reversible imperative language which guarantees partial reversibility\footnote{Often (as in Janus) local invertibility only guarantees invertibility of partial functions. This comes from the fact that control structures (like conditionals and loops) require assertions of specific values, and because procedures may fail to terminate.} by restricting programs to be sequences of locally invertible statements. Furthermore, Theseus [8] restricts programs to be a composition of locally invertible surjective functions. Finally, RFun [16,18] imposes constraints that guarantee function invertibility by enforcing a bidirectional first-match policy on choice points (at runtime), and by requiring programs to be linear in their arguments. Doing so is sufficient to guarantee invertible algorithms while not restricting computational power beyond R-Turing completeness\textsuperscript{5}. Both of these constraints can clearly be checked statically: for the former, we may require the programmer to unroll their program until input and output patterns are syntactically orthogonal, and the latter can be enforced by linear typing [3,17], as has been shown for CoreFun [7], a simple typed version of RFun.
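The distinction between locally invertible steps and functions like addition can be made concrete with a small Python illustration (ours, not from the paper): incrementing has exactly one preimage per output, while addition collapses many inputs onto one output, so a deterministic inverse does not exist without extra information.

```python
# Illustration (ours): a step with a deterministic inverse vs. one without.

def inc(n):
    return n + 1

def inc_inverse(n):
    # Every output of inc has exactly one preimage, so this is a function.
    return n - 1

def preimages_of_add(total, bound=10):
    # add(m, n) = m + n collapses many inputs onto one output; enumerating
    # preimages shows why a deterministic inverse needs extra information.
    return [(m, n) for m in range(bound) for n in range(bound) if m + n == total]
```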
However, writing algorithms in a way that makes their reversibility evident can be difficult, as it corresponds, in a certain way, to asking the programmer to prove this property as they are writing the program. Writing programs in these reversible languages requires some experience, and can in some cases be notoriously hard. An alternative approach is to recover inputs from programs written in a conventional language. The first attempt at this approach was McCarthy’s generate-and-test algorithm [13]. As this method is often infeasible in practice, later research approached the problem using program inversion [4,5] or even semi-inversion [9,14]. Since these methods all build on the conventional programming model, they may fail in cases where a deterministic inverse does not exist. The language Jeopardy [10] has been designed with inversion in mind. It can be seen as a combination of the above two approaches: restricting the syntax enough to be able to give static inversion guarantees, but relaxing the execution model enough to make programming as natural as possible. It has a syntactic resemblance to your garden-variety functional programming language and exhibits the expected semantics for programs running in the conventional direction. However, not all algorithms describe bijective functions, and the problem of deciding whether an algorithm is invertible is undecidable in general, as follows from Rice’s Theorem. This means that the static analysis needed to guarantee inversion of even simple Jeopardy programs is not straightforward. In this work, we investigate the approach of approximating global program invertibility by developing a data flow analysis that infers the information necessary to make the approximation. To be precise, in Section 2 we outline the problem by providing an instructive program example. In Section 3 we briefly outline the syntax and semantics of the Jeopardy programming language; a more formal introduction to the language was presented at IFL 2022 [10].
In Section 4 we detail the meaning of implicit arguments to functions that are inversely interpreted. In Section 5 we provide an algorithm for performing available implicit arguments analysis on Jeopardy programs. Furthermore, in Section 6 we run the algorithm on the program example, and in Section 7 we discuss the implications of the result. Finally, in Section 8 we conclude on the results.

\textsuperscript{5} R-Turing completeness is Turing completeness restricted to programming languages (and hence Turing machines) defining only reversible programs. A Haskell implementation of available implicit arguments analysis for Jeopardy can be found at:

2 Branching symmetries and invertibility

The extensional behavior of a program can reasonably be thought of as a function mapping inputs to outputs [2]. In this perspective, the existence of an inverse program is analogous to the existence of an inverse function. That is, a program $f : A \rightarrow B$ is invertible when a deterministic inverse program $f^{-1} : B \rightarrow A$ exists, such that the functions they describe satisfy $f^{-1} \circ f = \text{id}_A$ and $f \circ f^{-1} = \text{id}_B$. At the extensional abstraction of mathematical functions, we cannot infer any more about a function’s behavior than that which may be derived from the premises given by the function’s provider. However, when we are presented with a program, we can perform program analyses that inspect it to gain insight into its properties. One such analysis, called available expressions analysis [15], produces, per program point, the set of expressions that have already been computed when the point is reached at runtime. One purpose of performing this particular analysis is to transform programs into equivalent programs that do not recompute expressions unnecessarily. In Jeopardy we wish to decide the set of available expressions that could have been implicitly provided as function arguments at particular call sites in a program.
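As a toy illustration of the flavour of such an analysis (our sketch, not the paper's formulation): in a straight-line, pure program nothing is ever overwritten, so the expressions available before a binding are exactly those computed earlier on the path.

```python
# Toy available-expressions analysis (ours) for a straight-line pure program:
# the set available before each binding is everything computed so far.

def available_expressions(bindings):
    """bindings: list of (name, expression) pairs in program order.
    Returns, per binding, the set of expressions available just before it."""
    available, result = set(), []
    for name, expr in bindings:
        result.append(frozenset(available))
        available.add(expr)   # the computed expression is now available ...
        available.add(name)   # ... and so is the name it was bound to
    return result
```

A transformation could then rewrite a later binding of the same expression to reuse the earlier name instead of recomputing it.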
Providing extra arguments to functions does not necessarily make those programs run more efficiently, but it allows us to infer more precise things about branching inside function calls ahead of runtime. For instance, consider the program given in Figure 1.

data natural_number = [zero] [successor natural_number].

sum (m, n) = case m of
; [zero] -> n
; [successor k] -> sum (k, [successor n]).

fibber (m, n) = (sum (m, n), m).

fibonacci_pair n = case n of
; [zero] -> ([successor [zero]], [successor [zero]])
; [successor k] -> fibber (fibonacci_pair k).

fibonacci n = case fibonacci_pair n of
; (_, nth_fibonacci_number) -> (nth_fibonacci_number, n).

main fibonacci.

Fig. 1. Example program, computing Fibonacci numbers.

The main function of the program, \texttt{fibonacci}, takes a natural number $n$ as its argument, and produces a pair containing the $n$’th Fibonacci number together with $n$ itself. It does so by projecting from the function \texttt{fibonacci_pair} that produces a pair containing the $n$’th Fibonacci number together with its successor. The pair is computed by recursively applying the \texttt{fibber} function, which transforms a pair of Fibonacci numbers into the next pair. As dictated by the definition of the Fibonacci sequence, \texttt{fibber} finds the next number in the sequence by summing the two previous numbers.

Starting from the top, the function \texttt{sum}, which computes sums of pairs of natural numbers, is not invertible. To be precise, the solution 4 does not have a unique corresponding problem. In particular, it can be the sum of 1 and 3, or it can be the sum of 0 and 4. In the latter case, \texttt{sum} would simply return the 4 by taking the first branch of the case statement, and in the former case, it would take the second branch and call itself recursively. However, the output of \texttt{sum} is still uniquely determined by the input, and so, inferring which branch was taken in each call to \texttt{sum} is sufficient for uniquely determining its input. For instance, in the function \texttt{fibber} we never call \texttt{sum} without also returning the first of its arguments as well. Thus, in the inverse interpretation of \texttt{sum}, if the first argument is 0, we are done. Otherwise, the first argument is the successor of some smaller natural number \texttt{k}, and we can inverse interpret the \texttt{sum} function recursively.

Similarly, it is not trivial that \texttt{fibonacci_pair} is an invertible algorithm, since you need to know that the second branch of the case-statement will always be syntactically orthogonal to a pair of ones. However, as the argument for \texttt{fibonacci_pair} is directly available in the output of \texttt{fibonacci}, it is clearly invertible in the context of inverse interpreting the \texttt{fibonacci} function. To summarize, obtaining the inverse of the entire \texttt{fibonacci} program corresponds to writing a program that takes as argument a pair containing the \texttt{n}'th Fibonacci number together with \texttt{n} itself and merely gives back the \texttt{n}. The right projection is insufficient for correctly determining the problem for this solution, because its interpretation in the conventional direction does not compose to an identity. However, recovering information about branching is sufficient for inverse interpretation. Reversible programming languages that enforce local invertibility, such as RFun and CoreFun [7,18], simply throw an error at runtime if branching does not comply with a symmetric first-match policy for pattern matching, and Theseus [8] requires the programmer to account for branching structure syntactically. However, as we have seen in the \texttt{fibonacci} program example, realistic programs often exhibit inter-procedural information that allows us to recover the branching structure. It remains to show how this information may be obtained systematically. In the remainder of this article, we concern ourselves with doing just that.
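To make the example concrete, the following is a direct Python transliteration (ours) of the Figure 1 program, with unary naturals replaced by non-negative integers for readability, together with a sketch of the inverse interpretation of sum just described, in which the implicitly known first argument m decides the branching.

```python
# Python transliteration (ours) of the Figure 1 program.

def sum_(m, n):
    if m == 0:                      # branch [zero]
        return n
    return sum_(m - 1, n + 1)       # branch [successor k]

def fibber(m, n):
    return (sum_(m, n), m)

def fibonacci_pair(n):
    if n == 0:
        return (1, 1)
    return fibber(*fibonacci_pair(n - 1))

def fibonacci(n):
    _, nth_fibonacci_number = fibonacci_pair(n)
    return (nth_fibonacci_number, n)

def sum_inverse(output, m):
    """Inverse interpretation of sum_, given the implicitly available first
    argument m: if m is zero we are done, otherwise undo one recursive step."""
    if m == 0:
        return (0, output)
    _, n_succ = sum_inverse(output, m - 1)   # the inner call saw (m - 1, n + 1)
    return (m, n_succ - 1)
```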
3 The Jeopardy Programming Language

Jeopardy is a carefully designed first-order functional language aimed at expressing invertible algorithms while enabling concise program analysis and dissemination. The main features are user-definable algebraic data types and explicit function-level program inversion. The full grammar can be found in Figure 2.

\[
\begin{align*}
x &\in \text{Name} \quad \text{(Well-formed variable names).} \\
c &\in \text{Name} \quad \text{(Well-formed constructor names).} \\
\tau &\in \text{Name} \quad \text{(Well-formed datatype names).} \\
f &\in \text{Name} \quad \text{(Well-formed function names).} \\
p &::= [c\ p_i] \mid x \quad \text{(Patterns).} \\
v &::= [c\ v_i] \quad \text{(Values).} \\
\Delta &::= f\ (p : \tau_p) : \tau_t = t\ .\ \Delta \quad \text{(Function definition).} \\
&\mid \text{data } \tau = [c\ \tau_i]\ .\ \Delta \quad \text{(Data type definition).} \\
&\mid \text{main } g\ . \quad \text{(Main function declaration).} \\
g &::= f \mid (\text{invert } g) \quad \text{(Inversion).} \\
t &::= p \quad \text{(Patterns in terms).} \\
&\mid g\ p \quad \text{(First order function application).} \\
&\mid \text{case } t : \tau \text{ of } p_i \to t_i \quad \text{(Case statement).}
\end{align*}
\]

Fig. 2. The syntax of Jeopardy.

Running a program corresponds to calling the declared main function on a value provided by the caller in the empty context. Similarly, running a program backwards corresponds to calling the main function’s inverse on a value provided by the caller, likewise in the empty context. Since an application is a term, reasoning about inversion of terms is the same as reasoning about inversion of programs. The syntax of terms has been designed with the goal of facilitating program analysis, at the cost of making programs harder to read and write. In the interest of writing intuitive program examples, we have therefore equipped Jeopardy with a set of derived syntactic connectives, shown in Figure 3.
Additionally, we may choose to omit type annotations whenever these are not necessary, and use literal syntax for natural numbers (0, 1, \ldots) to mean their data representations ([\text{zero}], [\text{successor [zero]}], \ldots).

\[
\begin{align*}
[[c\ t_i]]_{\Delta[\text{data } \tau\ =\ \ldots\ [c\ \tau_i]\ \ldots]} &:= \text{case } t_i : \tau_i \text{ of } p_i \rightarrow [c\ p_i] \\
[(t_1, t_2)]_{\Delta} &:= [[\text{pair } t_1\ t_2]]_{\Delta} \\
[(t_1 : t_2)]_{\Delta} &:= [[\text{cons } t_1\ t_2]]_{\Delta} \\
[[\ ]]_{\Delta} &:= [\text{nil}] \\
[f\ t]_{\Delta} &:= \text{case } t : \tau \text{ of } p \rightarrow f\ p \\
[\text{let } p : \tau = t \text{ in } t']_{\Delta} &:= \text{case } t : \tau \text{ of } p \rightarrow t' \\
[f\ (p_i : \tau_2) : \tau_1 = t_i\ .]_{\Delta} &:= f\ (x : \tau_2) : \tau_1 = \text{case } x : \tau_2 \text{ of } p_i \rightarrow t_i\ .
\end{align*}
\]

Fig. 3. Disambiguation of syntactic sugar.

4 Implicit Arguments

Recall from Section 2 that \texttt{sum} was not injective, and thus its inverse was not well-defined. A naïve solution to this problem is to automatically injectivise the program using, e.g., Bennett’s method \cite{Bennett1}: that is, we keep a computation history of our program, copy its result, and uncompute the history to be left with the input and a copy of the result. However, this is a very inefficient way of doing invertible computing and would constantly generate extra unwanted data. In many cases we can do better by first determining which inputs are needed for uniquely deciding branching information bidirectionally.
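A minimal sketch (ours) of the idea behind Bennett-style injectivisation, with the computation history collapsed to the input itself: pairing the output with the input makes any function injective, at the cost of dragging extra data along, which is exactly the inefficiency noted above. The name `injectivise` is our own.

```python
# Sketch (ours) of Bennett-style injectivisation with the history collapsed
# to the input itself: f_inj is injective because the input is kept, but it
# carries the whole input along as extra data.

def injectivise(f):
    def f_inj(x):
        return (x, f(x))            # keep the "history" (here: the input)
    def f_inj_inverse(pair):
        x, y = pair
        assert f(x) == y            # sanity check: a genuine output of f_inj
        return x
    return f_inj, f_inj_inverse
```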
For example, if we want to make \texttt{sum} invertible it suffices to copy one of its inputs to the outputs as follows:

\begin{verbatim}
sum_and_copy_first (m, n) = case m of
; [zero] -> (m, n)
; [successor k] -> case sum_and_copy_first (k, [successor n]) of
  ; (k, k_suc_n) -> (m, k_suc_n).

sum_and_copy_second (m, n) = case m of
; [zero] -> (n, n)
; [successor k] -> case sum_and_copy_second (k, [successor n]) of
  ; ([successor n], k_suc_n) -> (n, k_suc_n).
\end{verbatim}

To convince the reader that the branching symmetry is recoverable from the transformed program: in \texttt{sum_and_copy_first} we are matching on \texttt{m}, and \texttt{m} is embedded directly in the output. To see that branching is symmetric in \texttt{sum_and_copy_second}, we need to show that \texttt{k_suc_n} and \texttt{n} are different in the last branch, which we can show by co-induction, since the recursive call returns a pair of successors of \texttt{n}, or some larger structure that contains \texttt{n} from previous calls. Regardless of our choice of injectivisation of \texttt{sum}, we need to store information from the previous call in order to make branching symmetry decidable for the next, effectively by supplying a function (in this case the inverse to \texttt{sum}) with extra arguments. By producing specialised functions that take extra arguments in this way, we can transform programs that contain the original functions into equivalent programs that call the specialised function whenever the extra arguments are available. For instance, we might rewrite \texttt{fibber} from Figure 1 as follows:

\begin{verbatim}
fibber_specialized_for_sum_and_copy_first (m, n) =
  case sum_and_copy_first (m, n) of
  ; (m, m+n) -> (m+n, m).
\end{verbatim}

This transformation depends on knowing which specialised versions of a particular function it can use, which in turn depends heavily on knowing what terms are available, or can be made available, at the program point at which the call happens. Because Jeopardy is a pure functional language, this is simply all terms that appear on all paths to the program point in a call graph with the program's main function as its entry point, and we already know how to construct such a graph \cite{15}. In fact, we can do a little better, since we only need to require that a term appears on every path that allows us to apply specialisation; though not necessarily the same specialisation on every path. In this way, the goal of our algorithm is as follows: for every function application in a program, for every distinct path to that application from the main function, compute what terms are available to be provided as implicit (extra) arguments.

5 The Algorithm

To avoid having to deal with names, our algorithm performs an initial annotation of the input program, where each program point is assigned a unique integer label, as will be demonstrated for Figure 1 in Section 6. Furthermore, to make things more concrete, we specify what it means to be a call-configuration in Definition 1.

\textbf{Definition 1.} A call-configuration is a 4-tuple \((c, f, A, I)\), containing the name \(c\) of the function in which the call occurred, the (possibly inverted) function \(f\) being called, a set \(A\) containing the labels of the arguments to the function, and a set \(I\) of implicit arguments available from the previous calls within which the program is running at the time of the call.

Now, achieving the goal presented in Section 4 is equivalent to answering the question: for each call in a program, what are the possible configurations of the call? We answer this question by solving a set of equations. Each equation has a fixed program-of-interest \(\Delta\).
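Definition 1 can be represented directly; the following Python sketch (ours) models a call-configuration as a named 4-tuple, together with the inclusion order on implicit-argument sets that the paper later uses when arguing that the analysis has a least fixed point.

```python
# A call-configuration per Definition 1 (sketch, ours), with the order that
# compares configurations by inclusion of their implicit-argument sets.

from typing import NamedTuple, FrozenSet

class Config(NamedTuple):
    caller: str                 # c: function in which the call occurs
    callee: str                 # f: the (possibly inverted) function called
    args: FrozenSet[int]        # A: labels of the arguments to the call
    implicit: FrozenSet[int]    # I: labels available as implicit arguments

def leq(x: Config, y: Config) -> bool:
    # Comparable only when caller, callee and argument labels agree;
    # then ordered by inclusion of available implicit arguments.
    return (x.caller == y.caller and x.callee == y.callee
            and x.args == y.args and x.implicit <= y.implicit)
```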
We give the name \(F\) to the set of function names defined in \(\Delta\); the superset of \(F\) that includes inversions (function names occurring under the keyword invert) we call \(I\). Furthermore, we assign the name \(L\) to the set of labels of \(\Delta\), and finally, we give the name \(C \subseteq (F \times I \times \mathcal{P}(L) \times \mathcal{P}(L))\) to the set of possible call-configurations. To produce all the possible configurations, we declare a function that computes the closure of the call-configurations that are reachable from two initial configurations\textsuperscript{6}:

$$\text{configurations} : \Delta \rightarrow \mathcal{P}(C)$$

Its corresponding definition can be found in Figure 4, where \(\Delta\) has been extended with a special top-level function \(\top\) and two special labels “input” and “output” for the arguments that should be provided by the entity that runs the program. The equation, and the computations it depends on, are all defined in this section.

$$\text{configurations}(\Delta[\text{main } g.]) = \{ (\top, g, \{ \text{input} \}, \emptyset), (\top, (\text{invert } g), \{ \text{output} \}, \emptyset) \} \cup \left( \bigcup_{c \in \text{configurations}(\Delta)} \text{call}(c) \right)$$

Fig. 4. The reachable call configurations from main.

In the last part of the definition of configurations, a function

$$\text{call} : C \rightarrow \mathcal{P}(C)$$

takes a configuration as argument and returns all the possible configurations that are reachable by calling the (possibly inverted) function \(f\) from that configuration, as defined in Figure 5.

$$\text{call}((c, f, A, I))_{\Delta[f\ p = t.]} = \begin{cases} \text{term} \downarrow (f, (I \cup A) \setminus \text{labels}(p, t), t) & : \text{dir}(f) = \downarrow \\ \pi_1(\text{term} \uparrow (f, (I \cup A) \setminus \text{labels}(p, t), t)_{\Delta}) & : \text{otherwise} \end{cases}$$

Fig. 5. The reachable configurations from a given configuration.

Terms occur as the body and argument of a function.
We give the name \( T \) to such terms, and further declare two functions:

\[ \text{term } \downarrow : (\mathcal{F}, \mathcal{I}, T) \to \mathcal{P}(\mathcal{C}) \]

and

\[ \text{term } \uparrow : (\mathcal{F}, \mathcal{I}, T) \to \mathcal{P}(\mathcal{C} \times \mathcal{L}) \]

that compute the reachable configurations depending on the direction \( \text{dir}(f) \) in which the call is to be interpreted. We define these functions in Figure 6. They both return the set of configurations reachable from their argument term \( t \). However, the function interpreting calls against the conventional direction additionally returns the labels of available expressions from “the future”, as a means of definitional convenience. In the case for patterns, both functions yield an empty set of call-configurations, since a pattern cannot contain an application. (Recall that terms in patterns are syntactic sugar that we disambiguated in Figure 3.) The cases for function application yield the configurations reachable as defined by the function “call”. Regarding case-statements, both functions return the collection of configurations reachable in each branch.

Fig. 6. Call configurations reachable in terms, for each direction of interpretation.

The direction \( \text{dir} \), and the opposite direction \( \text{op} \), of a function call are defined by their corresponding functions in Figure 7, and with that we are done defining the algorithm. Computing the set of possible reachable call configurations in a program \( \Delta \) now corresponds to calling the function \texttt{configurations} on \( \Delta \). It does so by finding the least fixed point of the function \texttt{call} from two initial configurations.
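The closure computation can be sketched as a standard worklist iteration. The Python below (ours) is heavily simplified: it ignores the interpretation direction (term↓ versus term↑) and inverted calls, abstracts the program to a table of calls with the labels bound on the path to each call, and uses the string "TOP" for the special top-level function; the label sets in the usage example loosely follow the Section 6 numbering.

```python
# Heavily simplified closure (ours): the program is abstracted to a table
# mapping each function to the calls in its body, each with its argument
# labels and the labels bound on the path to the call.

def configurations(calls_in, main):
    """calls_in: {caller: [(callee, arg_labels, path_labels), ...]}.
    Returns the set of reachable configurations (c, f, A, I)."""
    initial = ("TOP", main, frozenset({"input"}), frozenset())
    seen, work = {initial}, [initial]
    while work:                      # worklist iteration to a fixed point
        _, f, args, implicit = work.pop()
        for callee, arg_labels, path_labels in calls_in.get(f, []):
            cfg = (f, callee, frozenset(arg_labels),
                   frozenset(args | implicit | path_labels))
            if cfg not in seen:
                seen.add(cfg)
                work.append(cfg)
    return seen
```

Termination of the sketch follows exactly as in the paper's argument: the label set is finite, so only finitely many configurations exist.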
\[
\text{dir}(g) = \begin{cases} \text{op}(\text{dir}(f)) & : g = (\text{invert } f) \\ \downarrow & : \text{otherwise} \end{cases}
\qquad \text{op}(\downarrow) = \uparrow \qquad \text{op}(\uparrow) = \downarrow
\]

\textbf{Fig. 7.} Definition of the direction of a function call and its opposite direction.

And we know that this least fixed point always exists (and thus that the algorithm is well defined) from observing that configurations can be given the structure of a complete lattice by \((c, f, A, I) \sqsubseteq (c', f', A', I')\) iff \(c = c'\), \(f = f'\), \(A = A'\), and \(I \subseteq I'\), with joins and meets of configurations (with the same name, function, and label set) given by unions and intersections of available implicit arguments. Further, it can be shown that the functions in Figures 4, 5 and 6 are all monotone with respect to this order, so it follows by Tarski’s fixed point theorem that the least fixed point we are looking for always exists. Furthermore, since a program contains only finitely many labels, it follows additionally that the analysis always terminates.

6 Instructive Example

On a less theoretical note, let us look at an example, namely that of finding the available implicit arguments at all call sites in the Jeopardy example from Figure 1. This is the same as finding a minimal fixed point for the equation for “configurations(\(\Delta\))”, where

\[ \Delta = \]
\[ \text{data natural_number} = [\text{zero}]\ [\text{successor natural_number}]. \]
\[ \text{sum (m, n)}^{0} = \]
\[ \text{(case m}^{4}\text{ of} \]
\[ ;\ [\text{zero}]^{5} \to \text{n}^{6} \]
\[ ;\ [\text{successor k}^{10}]^{7} \to (\text{sum} (\text{k}^{10}, [\text{successor n}^{12}])^{11})^{9}). \]
\[ \text{fibber (m^{14}, n^{15})^{13} = ((\text{sum} (m^{19}, n^{20}))^{18}, m^{21})^{16}.} \] \[ \text{fibonacci_pair n^{22} =} \] \[ \text{(case n^{34} of} \] \[ ; [\text{zero}]^{25} \to ([\text{successor [zero]}^{28}]^{27}, [\text{successor [zero]}^{30}]^{29})^{26} \] \[ ; [\text{successor k}^{12}]^{31} \to (\text{fibber} (\text{fibonacci_pair} k^{35})^{34}, k^{33})^{23}.} \] \[ \text{fibonacci n^{36} =} \] \[ \text{(case (fibonacci_pair n^{39})^{38} of} \] \[ ; (_, nth_fibonacci_number^{41})^{40} \to (nth_fibonacci_number^{44}, n^{45})^{37}.} \] \[ \text{main fibonacci}^{46}.) \] As the main function is \texttt{fibonacci} we immediately get: \[ \text{configurations}(\Delta[\texttt{main fibonacci}]) = \{(\top, \texttt{fibonacci}, \{\text{input}\}, \emptyset), (\top, (\texttt{invert fibonacci}), \{\text{output}\}, \emptyset)\} \\ \cup \left( \bigcup_{c \in \text{configurations}(\Delta)} \text{call}(c)_{\Delta} \right) \] And, if we focus on the last term, and unfold it one step, we get: \[ \left( \bigcup_{c \in \text{configurations}(\Delta)} \text{call}(c)_{\Delta} \right) = \text{call}((\top, \texttt{fibonacci}, \{\text{input}\}, \emptyset))_{\Delta} \cup \left( \bigcup_{c \in \text{configurations}(\Delta)} \text{call}(c)_{\Delta} \right) \] If we again restrict our attention to the “call” part, the result is: \[ \text{call}((\top, \texttt{fibonacci}, \{\text{input}\}, \emptyset))_{\Delta[\texttt{fibonacci} \mathbin{\hat{=} t}]} = \text{term} \downarrow (\texttt{fibonacci}, \{\text{input}\} \setminus \{36\}, t) \\ = \text{term} \downarrow (\texttt{fibonacci}, \{\text{input}\}, \text{[case } t^{38} \text{ of } p_i^{40} \rightarrow t_i^{43}]) \\ = \text{term} \downarrow (\texttt{fibonacci}, \{\text{input}\}, t^{38}) \\ \cup \text{term} \downarrow (\texttt{fibonacci}, \{\text{input}\} \cup \text{labels}(t^{38}) \cup \text{labels}(p_i^{40}), t_i^{43}) \\ = \{(\texttt{fibonacci}, \texttt{fibonacci_pair}, \{39\}, \{\text{input}\})\} \cup \emptyset \] Finally, we get 
\[ \text{configurations}(\Delta[\texttt{main fibonacci}]) = \{(\top, \texttt{fibonacci}, \{\text{input}\}, \emptyset), \\ (\top, (\texttt{invert fibonacci}), \{\text{output}\}, \emptyset), \\ (\texttt{fibonacci}, \texttt{fibonacci_pair}, \{39\}, \{\text{input}\})\} \\ \cup \left( \bigcup_{c \in \text{configurations}(\Delta)} \text{call}(c)_{\Delta} \right) \]
\[ = \{(\top, \texttt{fibonacci}, \{\text{input}\}, \emptyset), \\ (\top, (\texttt{invert fibonacci}), \{\text{output}\}, \emptyset), \\ (\texttt{fibonacci}, \texttt{fibonacci_pair}, \{39\}, \{\text{input}\}), \\ (\texttt{fibonacci_pair}, \texttt{fibonacci_pair}, \{35\}, \{\text{input}, 39, 22, 23, 24, 31, 32\}), \\ (\texttt{fibonacci_pair}, \texttt{fibber}, \{34\}, \{\text{input}, 39, 22, 23, 24, 31, 32\})\} \\ \cup \left( \bigcup_{c \in \text{configurations}(\Delta)} \text{call}(c)_{\Delta} \right) \]
\[ = \{(\top, \texttt{fibonacci}, \{\text{input}\}, \emptyset), \ldots \]

Here the additional call-configurations in the last part are derived in a fashion similar to that in which we derived the configurations for “call” in the conventional direction, and in the interest of saving space in the paper, deriving the call configurations for \((\text{invert fibonacci})\) is left as an exercise for the reader.

7 Discussion

The transformation suggested in Section 2 requires us to be able to infer a particular set of call-configurations. We have designed and implemented an algorithm for inferring said configurations. The algorithm works well for finding implicitly available arguments in function calls, but its scope is limited to configurations that do not reach beyond a single step of recursion in the search for implicit arguments. However, a single step is not a theoretical limit. It is easy to imagine a generalized algorithm that infers implicitly available arguments up to a fixed depth, but less so for arbitrary-depth recursion.
In Section 6, we have seen how to compute the implicitly available arguments at each program point in an example program that computes the Fibonacci numbers. The result of the analysis allows us to replace functions for which we cannot decide branching symmetry with specialized variations for which we can. For instance, at program point 17, \texttt{fibber} calls \texttt{sum} in a context where its first argument \texttt{m} is implicitly available in both directions (from program point 21), as witnessed, for the conventional direction, by the tuple

\[(\texttt{fibber}, \texttt{sum}, [\ldots, 21, \ldots])\]

And so, we can make \texttt{fibber} invertible by replacing the non-invertible function call to \texttt{sum} with a call to the invertible specialization \texttt{sum_and_copy_first} from Section 4, which in turn allows us to invert \texttt{fibber}, as demonstrated in the example function \texttt{fibber_specialized_for_sum_and_copy_first}, also in Section 4.

The problem with regard to recursion is that the set \(I\) of implicitly available expressions’ labels in a configuration, according to Definition 1, corresponds to the terms that were “available from previous calls”. So, in a circular call structure (like recursion) it is not possible to see, from \(I\) alone, the difference between a term bound in this call and one bound in the previous. In the example program from Section 2, this is not a problem, because deciding if e.g. \texttt{sum} is symmetric with respect to branching only relies on implicit arguments either from the previous call to \texttt{fibber} or from \texttt{sum} itself. However, one could imagine a scenario where a term actually exhibits branching symmetry even though our analysis does not find sufficient information to say so.

8 Conclusion

We have designed a program analysis for statically inferring the expressions that are available as implicit arguments in function calls in the Jeopardy programming language.
The current formulation of the algorithm can be implemented in about 200 lines of fairly readable and maintainable Haskell code. Our implementation makes use of the nifty Reader-Writer-State monad, which turns out to be suitable for threading around the program \( \Delta \), as well as keeping track of termination conditions etc. In the near future, we expect to develop a program transformation that rewrites Jeopardy programs where branching symmetry is not syntactically apparent into programs where it is. To give an intuition, we have provided an example in Section 4 of applying such a transformation to the function \texttt{fibber} from the program example in Section 2. The reasons for wanting to design and implement such a transformation are twofold. As mentioned earlier, the main motivation is to enable static analysis for deterministic backwards branching execution inference. But such a transformation also has a motivation similar to that of compiling programs rather than interpreting them. That is, a Jeopardy interpreter should not perform this analysis every time a function is called, or thread around the implicit arguments at runtime; it should transform the program into an equivalent program where implicit arguments have been explicitly provided when necessary. Static program analysis is almost always an approximation, and cyclic call structures are where you will find the limitations of the available implicit arguments analysis. However, it is not hard to imagine a definition of call-configurations that includes (finitely many) layers of cyclic references to previous calls. And it is unclear, at the time of writing, whether doing so will be useful in practice.

Acknowledgments

The second author is supported by DFF–International Postdoctoral Fellowship 0131-00025B.

References
Optimization Strategies for Inter-Thread Synchronization Overhead on NUMA Machine Song Wu, Jun Zhang, Yaqiong Peng, Hai Jin, Wenbin Jiang Services Computing Technology and System Lab Cluster and Grid Computing Lab School of Computer Science and Technology Huazhong University of Science and Technology, Wuhan, 430074, China wusong@hust.edu.cn Abstract—The overhead caused by data consistency issues in inter-thread synchronization can degrade the performance of parallel applications. Non-Uniform Memory Access (NUMA), the mainstream architecture in today's multicore processors, further exacerbates this issue due to the significant overhead incurred by Remote Memory References (RMRs). Therefore, to reduce synchronization overhead, it is important to address the data consistency issue. In this paper, we classify the overhead into two kinds: (1) overhead incurred by the algorithms themselves, and (2) overhead incurred by critical sections. To reduce these two kinds of overhead on NUMA machines, we present two optimization strategies, called search and backtrace (SAB) and reorder critical section and non-critical section (RCAN), respectively. In SAB, a server thread searches for a thread coming from the master NUMA node and designates it as the new server thread. In this way, most of the time, shared data resides in the cache of the master NUMA node, reducing the overhead caused by data consistency issues in the critical section. In RCAN, each thread consecutively posts synchronization requests and then consecutively executes its non-critical sections. In this way, server threads can serve enough requests, resulting in better data locality. We design an algorithm named R-Synch based on SAB, and an algorithm named H-STA based on RCAN. Our evaluation against representative synchronization algorithms demonstrates the effectiveness of R-Synch and H-STA. Keywords—NUMA; data consistency; synchronization; algorithm I.
INTRODUCTION The prevalence of multicore processors has made parallel programming a popular way to improve application performance. Unfortunately, this does not hold for applications that frequently access shared data, because shared data must be accessed in a mutually exclusive manner under the aegis of synchronization algorithms. Therefore, a highly efficient synchronization algorithm plays an important role in parallel programming, especially for applications that need significant synchronization. Processor manufacturers have quickly shifted from simple bus-based designs to NUMA and Cache Coherent NUMA (CC-NUMA) architectures due to the growing size of multicore machines [1]. Typically, a NUMA machine contains several nodes connected by an interconnect. Each node consists of several cores, independent caches, and shared local memory. Accessing data missed in the local cache can incur off-chip memory accesses [2] and interconnect traffic, which are significantly costly on CC-NUMA machines. According to [1, 3], access by a core to its local memory (e.g. its private or shared last-level cache) can be much faster than access to remote memory located on another node. These features complicate the design of highly efficient synchronization algorithms. Synchronization techniques have come a long way from traditional simple lock algorithms to state-of-the-art algorithms such as CC-Synch and H-Synch [4]. Queue locks were proposed to reduce the overall cache coherence traffic by forming queues of threads [5–8], with each thread spinning on a separate local memory location [1]. However, queue locks may not work well on CC-NUMA machines because the threads executing instructions may alternately come from different NUMA nodes, resulting in non-trivial overhead from cache misses and interconnect communication. The combining technique [4, 9, 10] is a compelling approach to designing lock algorithms, preventing the shared resource from bouncing back and forth among multiple cores.
However, it still does not work well on NUMA machines, because communication between threads incurs many cache misses. Hierarchical locks, originally presented by Radovic [11], are a good basis for designing NUMA-aware lock algorithms [1, 4], but they still face the challenge of making threads from the same NUMA node access the shared resource consecutively as much as possible. In summary, queue locks eliminate the hot-spot [12] problem that causes overhead inside the algorithm itself; combining locks reduce the overhead occurring in the critical section by using a combining policy to serve synchronization requests; hierarchical locks try to reduce the overhead occurring both in the critical section and in the algorithm itself. This paper presents two more efficient policies to reduce the overhead in the critical section and in the algorithm itself, respectively. We first present a search and backtrace (referred to as SAB) policy. In SAB, a server thread searches for a thread coming from the master NUMA node and designates it as the new server thread. In this way, most of the time, shared data resides in the cache of the master NUMA node, thus reducing the overhead caused by data consistency issues in the critical section. Then, we present a policy called reorder critical section and non-critical section (referred to as RCAN). In RCAN, a thread consecutively posts synchronization requests and then consecutively executes its non-critical sections. In this way, a server thread can serve enough requests, thus enhancing data locality. The main contributions of this paper are listed below. - We present two optimization strategies, namely SAB and RCAN, to reduce the synchronization overhead incurred by algorithms themselves and by critical sections, respectively. - To put the two strategies into practice, we design an algorithm named R-Synch based on SAB and an algorithm named H-STA based on RCAN.
- We conduct comprehensive experiments to demonstrate the effectiveness of R-Synch and H-STA. R-Synch works better when the overhead in critical sections dominates, while H-STA works better when the overhead inside the algorithm itself dominates. II. MOTIVATION AND DESIGN In this section, we first introduce synchronization overhead. Then, we analyze the locations of synchronization overhead and how such overhead is generated. Finally, we design two optimization strategies to reduce the overhead. A. Overview of Synchronization Overhead As shown in Figure 1, there are three situations in which a thread executes a task containing subtasks Task1, Task2, and Task3. In the first situation, a single thread takes 3*T, 2*T, and 3*T to complete Task1, Task2, and Task3, respectively. In the second situation, if there are three threads and the task can be totally parallelized, the time taken to complete all three subtasks is 2*T. The third situation, shown in Figure 1(c), involves synchronization overhead. Here, multiple threads access shared data. To ensure data safety, threads must access the shared data one by one. As a result, the times taken to complete the three subtasks increase to 3*T*a, 2*T*b, and 3*T*c, respectively (a>1, b>1, c>1). Therefore, when synchronization overhead is involved, parallel execution can be even less efficient than serial execution. B. Locations of Synchronization Overhead Figure 2 gives an execution overview of a multithreaded application, which contains the three kinds of execution mentioned above. The left and right parts of Figure 2 depict the serial execution; the middle mixes parallel and synchronized execution, which is the main topic of this paper. In the middle of Figure 2, parallel parts are marked non-CS, and critical-section parts are marked CS. In the parallel parts, threads execute simultaneously without interacting with each other.
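The alternation of parallel parts and lock-protected critical sections can be illustrated with a minimal sketch. This is not the paper's code (the algorithms there are C-style pseudocode); the Python names below are purely illustrative, and a plain mutex stands in for the synchronization algorithms under study.

```python
import threading

# A shared counter protected by a lock: each loop iteration has a
# non-critical part (thread-local work) and a critical section
# (work on shared data).
lock = threading.Lock()
shared_counter = 0

def worker(iterations):
    global shared_counter
    for i in range(iterations):
        local = i * i       # non-CS: threads run this part in parallel
        lock.acquire()      # waiting here is overhead inside the algorithm
        shared_counter += 1 # operating on shared data inside the CS
        lock.release()

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)  # 4000
```

On a NUMA machine, both the wait for the lock and the access to `shared_counter` can translate into the off-chip memory accesses analyzed below.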
In a critical section, a thread must first try to acquire the lock. If it succeeds and obtains the lock, it enters the critical section and operates on the shared data; it releases the lock after leaving the critical section. In practice, it is hard for a thread to obtain the lock because other threads also compete for it, so a thread must spend some effort between requesting and obtaining the lock. This effort is the overhead caused by the process of acquiring the lock. We call this the synchronization overhead inside the algorithm, because its size depends on how the algorithm is designed. Once inside a critical section, threads access and operate on shared data, causing overhead due to the data consistency issue. This overhead affects the time a thread needs to complete its critical-section work; the longer this time, the harder it becomes for other threads to enter the critical section [13]. We call this the synchronization overhead inside the critical section. C. Data Consistency Issue As analyzed above, there are two locations where synchronization overhead arises: (1) inside the algorithm itself, and (2) inside the critical section. Both kinds of overhead are caused by the data consistency issue, especially on NUMA machines. Figure 3 gives an overview of the NUMA architecture. On the left of the figure is a CMP (Chip Multiprocessor) chip containing four cores. On the right, four CMP chips make up the NUMA architecture, where each CMP is a NUMA node with its own private/shared cache. As multiple caches can hold the same memory location, a data consistency issue arises. When data is accessed by a thread, it is brought into the corresponding cache line.
Therefore, shared data accessed by multiple threads enters several cache lines, and these cache lines may be spread across every NUMA node. Some cache lines may hold the latest valid data, while others hold invalid data; at this point the data consistency issue arises. A cache coherence protocol is used to maintain consistency when multiple cache lines hold the same memory location. When the data accessed by a thread does not hit in the cache, an off-chip memory access (an RMR, as mentioned before) occurs: an access to memory or to a cache line on another NUMA node. An off-chip memory access is several times slower than an access to the local cache [3]. Both kinds of synchronization overhead are mainly carried by such off-chip memory accesses, so to decrease the synchronization overhead, it is necessary to decrease both types of off-chip memory accesses. To investigate the two kinds of overhead, we conduct four experiments on the state-of-the-art lock algorithms CC-Synch and H-Synch [4]. The results are shown in Figure 4. The first experiment simulates a Fetch&Multiply object from [4], and the experimental results are consistent with [4]. Clearly, H-Synch outperforms CC-Synch because H-Synch uses a hierarchical NUMA-aware policy to reduce off-chip memory accesses; in this experiment, the overhead inside the algorithm itself is the dominant one. To investigate the overhead caused by the critical section, we conduct two other tests whose experimental scheme comes from [11, 14], where threads in the critical section modify each element of a shared vector. As shown in Figure 4(b), the results are still consistent with the conclusions of the original paper. When we increase the size of the shared vector, as shown in Figure 4(c), the performance of CC-Synch and H-Synch becomes approximately the same. This is because the overhead caused by the critical section outweighs the benefits brought by H-Synch.
In the last experiment, we make a small change: a thread in the critical section modifies a fixed number of elements scattered across the shared vector. As shown in Figure 4(d), CC-Synch outperforms H-Synch in this experiment, contrary to our expectation. **Summary.** In the first two experiments, the overhead occurring inside the algorithm itself is the main overhead, so the hierarchical policy used by H-Synch brings a performance improvement. In the last two experiments, we change the size or the access pattern of the shared data, making the overhead occurring in the critical section the main overhead; here the hierarchical policy does not help H-Synch much. Therefore, there are two ways to improve the performance of synchronization algorithms: reduce off-chip memory accesses inside the algorithm itself, and reduce off-chip memory accesses inside the critical section. **D. Optimization Strategies** To reduce the two kinds of synchronization overhead, we present two optimization strategies: SAB and RCAN. 1) **Reducing off-chip memory accesses occurring in the critical section:** SAB is proposed to reduce the overhead occurring in the critical section. In a critical section, multiple threads access the same shared data, and thus several cache lines may hold that data. If only the cache of one NUMA node holds the shared data, off-chip memory accesses are significantly reduced. If the threads accessing the shared data come from the same NUMA node (which we refer to as the master NUMA node), the shared data enters only the cache of that node, and the data consistency issue is essentially solved. We use Figure 5 to describe the SAB policy. In the figure, A, B, C, and D represent four different NUMA nodes, and the corresponding subscript numbers represent different threads of each node. SAB is based on a technique called combining [4], in which the thread holding the lock is called the combiner.
When a combiner completes its own request, it continues to serve the requests of other threads; it then releases the lock and chooses a new combiner. We now detail the working mechanism of SAB. A thread that wants to execute the critical section inserts its request node into the linked list and then spins on the locked field, waiting either to be served by the combiner or to be designated as the new combiner. For simplicity, we assume A is the master NUMA node and the thread at the head of the list is the combiner. When the combiner has served some number of requests and stops, it faces one of three situations, marked with a number on top of the node in the figure. If the combiner stops at A, since the next thread A comes from the same NUMA node, that thread is designated as the new combiner. If the combiner stops at C, since there are no active threads in the list, the combiner writes a value to the node indicating that there are no active requests at present. The last and most complicated situation occurs when the combiner stops at A but the next thread B comes from a different NUMA node; the combiner then traverses the linked list and tries to find a thread of the same NUMA node. If such a thread is found, the old combiner designates it as the new combiner and tells it the backtrace position from which it should start serving requests in the next execution round. If no such thread is found, the combiner either simply designates the thread next to it as the new combiner or acts as in the second situation. In this way, the shared data mostly resides in the cache of the master NUMA node, and the threads accessing it also come from the master NUMA node. When threads access the shared data, it is already in the local cache in most cases, resulting in fewer off-chip memory accesses in the critical section. 2) Reducing off-chip memory accesses occurring in the algorithm itself: Figure 6(a) shows the execution sequence of a thread.
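Before turning to the second strategy, the three SAB handoff situations just described can be condensed into a small selection routine. This is an illustrative sketch, not the paper's implementation: the function name, the encoding of NUMA nodes as integers, and the list representation of the request queue are all assumptions made here for clarity.

```python
def choose_next_combiner(nodes, start, master):
    """Sketch of the SAB handoff. `nodes` holds the NUMA-node ids of
    the pending request nodes, `start` is where the old combiner
    stopped, and `master` is the master NUMA node. Returns a pair
    (combiner_index, backtrace_index); both are None when there are
    no active requests."""
    if start >= len(nodes):          # situation 2: no active requests
        return None, None
    if nodes[start] == master:       # situation 1: next thread is local
        return start, None
    # Situation 3: search forward for a thread on the master node ...
    for i in range(start + 1, len(nodes)):
        if nodes[i] == master:
            # ... and tell it to backtrace to the skipped requests.
            return i, start
    return start, None               # fall back to the next thread

# Requests queued by threads on NUMA nodes A=0, B=1, C=2; the old
# combiner stopped at index 1 (a node-B thread), so the node-A thread
# at index 3 takes over and backtraces to index 1.
print(choose_next_combiner([0, 1, 1, 0, 2], start=1, master=0))  # (3, 1)
```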
For simplicity, we assume there is only one critical section in the code executed by the threads. Therefore, we can use the critical section as a boundary to divide the code into three sections: the Critical Section (CS for short), the Result of the Critical Section (ROCS for short), and The Parallel Part (TPP for short). We propose a method called RCAN that accelerates the posting of requests by a single thread. Regarding CS and ROCS as a whole, there are in total three kinds of ordering relationships to consider: CS vs. CS, TPP vs. TPP, and CS vs. TPP. We first analyze these three relationships under multiple threads by observing their execution behavior, and then apply the resulting constraints to a single thread. First, synchronization requests posted by threads are generally served by the combiner in sequence, and we maintain this sequential semantics for a single thread. Second, the parallel parts do not access shared resources, so they can be executed simultaneously; thus the execution order of TPPs does not matter for a single thread. Finally, as to CS and TPP, a parallel part must be executed according to the results of its critical section, that is, the parallel part must be executed after the corresponding critical section. According to this analysis, we can reorder the execution sequence in Figure 6(a) into the new sequence shown in Figure 6(b). By doing this, we accelerate the posting of requests by a single thread, so the combiner can consecutively serve enough requests, resulting in enhanced data locality: inter-thread communication caused by synchronization mostly occurs within the same NUMA node, reducing the off-chip memory accesses occurring in the algorithm itself. III.
APPLYING SAB AND RCAN TO SYNCHRONIZATION ALGORITHMS In this section, we show how to employ SAB and RCAN to design synchronization algorithms. Based on the SAB policy we design a synchronization algorithm called R-Synch, and based on the RCAN policy we design a synchronization algorithm called H-STA. When the overhead in the critical section dominates, R-Synch works better; when the overhead in the algorithm itself dominates, H-STA works better. A. R-Synch Before describing the R-Synch algorithm, we first introduce some data structures. Each thread has a request node consisting of several fields: (1) \texttt{arg} stores arguments and results; (2) \texttt{locked} indicates whether the lock is held; (3) \texttt{completed} indicates whether the request has been served; (4) \texttt{node} is the serial number of the corresponding NUMA node; (5) \texttt{btr} indicates the backtrace position; (6) \texttt{next} points to the next node. For simplicity, some details are omitted from Algorithm 1. Algorithm 1. Pseudocode for R-Synch. When a thread has a synchronization request, it inserts its request node at the tail of the linked list using an atomic SWAP operation (lines 3-4). Then, it spins on the \texttt{locked} field of \texttt{my\_new\_node} until the lock is released by the combiner. Once released, the thread decides what to do next by checking the value of the \texttt{completed} field. If the field is true, its request has been served by the combiner and the thread returns; if the field is false, the current thread becomes the new combiner. Once it becomes combiner, the thread sets the value of \texttt{master} according to its initial value. Then, the new combiner starts to work by first checking the \texttt{btr} field. If \texttt{btr} is not NULL, it sets the backtrace position to the value of \texttt{btr}. The combiner traverses the linked list of requests, serving its own request and a predefined number of requests of other threads (lines 21-27).
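The request-node fields and this serving loop can be sketched single-threadedly as follows. This is a simplification made here for illustration: the real algorithm uses atomic operations and concurrent spinning, which the sketch omits, and `serve` and `apply_request` are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class RequestNode:
    """Fields of an R-Synch request node as described in the text."""
    arg: object = None            # arguments in, result out
    locked: bool = True           # spin here until released by the combiner
    completed: bool = False       # True once the request has been served
    node: int = 0                 # serial number of the owning NUMA node
    btr: "RequestNode" = None     # backtrace position, if any
    next: "RequestNode" = None    # next node in the linked list

def serve(combiner, apply_request, limit):
    """Combiner loop sketch: serve up to `limit` pending requests,
    marking each one completed and unlocking its owner thread."""
    cur, served = combiner, 0
    while cur is not None and served < limit:
        cur.arg = apply_request(cur.arg)        # run the critical section
        cur.completed, cur.locked = True, False # wake the owner thread
        served += 1
        cur = cur.next
    return cur  # where the combiner stopped (None = list exhausted)

# Two queued requests, served with a doubling critical section:
a = RequestNode(arg=1)
b = RequestNode(arg=2)
a.next = b
stopped = serve(a, lambda x: x * 2, limit=10)
print(a.arg, b.arg, stopped)  # 2 4 None
```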
When it completes a request, it sets the \texttt{locked} field and \texttt{completed} field of the corresponding thread to false and true, respectively. Once the combiner completes its work, it chooses the next combiner according to the SAB policy (lines 28-41). If the next thread comes from the same NUMA node, or there is no active thread, it simply writes a false value to the \texttt{locked} field of the next thread's node or of the dummy node. Otherwise, it traverses the linked list to find a thread of the same node to designate as the new combiner and tells it the backtrace position. If no appropriate thread is found, it simply designates the thread next to it in the list as the new combiner. B. H-STA H-STA is a hierarchical version of RCAN, as shown in Figure 7. We assume there are four NUMA nodes in total. For each NUMA node, there is a request buffer and a control buffer. A request buffer contains several request nodes. Each request node consists of several slots, which implement STA. Each slot is defined as a struct with three fields: (1) \texttt{arg} stores the arguments of the critical section or its results; (2) \texttt{pid} distinguishes threads within a NUMA node; (3) \texttt{completed} identifies whether there is an active request waiting to be served. Each control buffer contains several nodes of a size equal to the number of cores in that NUMA node, and each node is also defined as a struct with several fields: (1) \texttt{up} and \texttt{low} give the upper and lower bound of the corresponding request node in the request buffer, respectively; (2) \texttt{combiner\_index} is used by the combiner; (3) \texttt{thread\_index} is used by common threads. The lock policy is hierarchical. As shown in Figure 7, there are multiple local locks, one per NUMA node, plus a global CLH lock. In each NUMA node, threads compete for the local lock, and the winner becomes the combiner of that node.
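The per-node slot layout just described can be sketched as a small data structure. This is an assumption-laden illustration, not the paper's code: the class names, the fixed slot count, and the `post` helper are invented here to make the field descriptions concrete.

```python
from dataclasses import dataclass

@dataclass
class Slot:
    """One slot of an H-STA request node (field names from the text)."""
    arg: object = None        # arguments / result of the critical section
    pid: int = -1             # identifies the thread within the NUMA node
    completed: bool = False   # is an active request waiting in this slot?

class HstaRequestNode:
    """Per-NUMA-node request-buffer entry: a fixed array of slots,
    with thread_index (a control-buffer field) tracking the next
    free slot."""
    def __init__(self, n_slots):
        self.slots = [Slot() for _ in range(n_slots)]
        self.thread_index = 0

    def post(self, pid, arg):
        """Post a request; returns False when the node is full and the
        thread must instead compete for the local lock."""
        if self.thread_index >= len(self.slots):
            return False
        s = self.slots[self.thread_index]
        s.pid, s.arg, s.completed = pid, arg, True
        self.thread_index += 1
        return True

node = HstaRequestNode(n_slots=2)
print(node.post(0, 'req-a'), node.post(1, 'req-b'), node.post(2, 'req-c'))
# True True False
```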
Then, all the local combiners compete for the global lock, and only the winner owns the right to access the shared resource. A thread posts synchronization requests by writing the essential information into the corresponding slots. When a thread has a request, it first checks whether the \texttt{thread\_index} field has reached the end of the corresponding request node. If not, it posts the request in that slot of the request node and increases \texttt{thread\_index} by one. Otherwise, the thread competes for the local lock. If it fails to acquire the lock, the thread waits until its requests are served or the lock is released. In the former case, the thread executes the corresponding parallel parts and then returns; in the latter case, it tries to acquire the lock again. If it succeeds in acquiring the lock, the thread becomes the combiner of this NUMA node. It then tries repeatedly to acquire the global lock until it succeeds, and then traverses each slot of the control buffer. According to the \texttt{up} and \texttt{low} fields, the combiner finds the corresponding request node and checks each of its slots; if there are active requests that have not been served, the combiner serves them. When the combiner completes its work, it executes its parallel parts according to the results stored in the \texttt{arg} fields. Of course, not all programs can be divided according to the RCAN policy (consider, for example, nested critical sections); in future work, we will extend the applicability of the RCAN policy. IV. EVALUATION In this section, we evaluate R-Synch and H-STA by comparing them with other state-of-the-art synchronization algorithms. We begin with an introduction of the hardware and software platform, followed by a description of the experimental methodology. Then, we test the algorithms with microbenchmarks that are widely used in the literature. Finally, we further investigate the performance of R-Synch and H-STA on a more complex concurrent object, namely a shared stack. A.
Platform We evaluate R-Synch and H-STA on a CC-NUMA machine consisting of two Intel Xeon E5-2670 processors. Each processor contains eight cores. Each core has a 32KB private L1 data/instruction cache and a 256KB private L2 data cache. All cores within a processor share a fast 20MB L3 data cache. To avoid bottlenecks in memory allocation, the Hoard memory allocator [15] is used. B. Experimental Methodology To evaluate the performance of R-Synch and H-STA, we compare them with several state-of-the-art synchronization algorithms, including H-Synch [4], CC-Synch [4], Flat-combining [10], FC-MCS [1], and CLH [7, 8]. In all experiments, each algorithm executes a total of $10^7$ operations for different values of \( n \), where \( n \) is the number of currently active threads. Besides, we assume that the size of the data accessed to serve the requests is smaller than the cache size, and we measure the cache misses for each experiment. C. Microbenchmarks We test all the algorithms using two microbenchmarks that are widely used in the literature. The first is a modified microbenchmark from [11]; as described in Section II, we make a small change to it, and for simplicity we call it R-vector. The second is a simulated shared Fetch\&Multiply object used in [4]. 1) R-vector: The throughput of each algorithm for R-vector is shown in Figure 8. When the number of threads is less than eight, the four combining-based algorithms achieve approximately the same performance. When threads span multiple NUMA nodes, R-Synch outperforms all other algorithms; specifically, R-Synch achieves up to 1.38 times higher throughput than CC-Synch. The performance of Flat-combining is close to that of CC-Synch, and both are a little slower than H-Synch and H-STA. R-Synch also significantly outperforms CLH and FC-MCS, by a factor of up to 4.21. Although FC-MCS is NUMA-aware, it performs worse on machines with small clusters of cores.
The experiments in [4] show that FC-MCS performs well on machines with large clusters of cores. When all threads reside in one NUMA node, there is no interconnect communication and there are no RMRs; therefore, all combining-based algorithms achieve approximately the same performance. CLH and FC-MCS do not use combining to serve requests, so every request may be applied by a different thread, resulting in higher L1/L2 cache miss rates than the other four algorithms (as shown in Figure 9). When threads are spread across multiple NUMA nodes, the situation becomes more complex due to off-chip memory accesses and interconnect communication. The better performance of R-Synch can be explained by its lower L3 cache misses, as shown in Figure 9(b). However, when the number of threads is beyond 13, R-Synch no longer has the lowest L3 cache misses. In Section II, we classified the locations where overhead may occur into two kinds: the first is inside the algorithm itself, here caused by communication between the combiner and other threads; the second is inside the critical section. In this experiment, the latter is more costly. With the SAB policy used in R-Synch, the overhead incurred in the critical section is effectively reduced; therefore, R-Synch performs better when multiple NUMA nodes are involved. 2) Fetch&Multiply: Figure 10 depicts the performance of each algorithm when simulating a Fetch&Multiply object. H-STA scales significantly better than all other algorithms. Since the differences appear mainly when multiple NUMA nodes are involved, we concentrate on that regime. Overall, H-STA outperforms H-Synch by a factor of up to 1.98. The performance of CC-Synch and Flat-combining is close, and both are overtaken by H-STA by a factor of up to 2.15. Again, CLH and FC-MCS are the slowest algorithms. H-STA and H-Synch are the two fastest algorithms because they have lower cache misses (as shown in Figure 11).
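For reference, the Fetch&Multiply object simulated in this microbenchmark atomically multiplies a shared value and returns its previous value. The sketch below is illustrative only: a plain mutex stands in for the combining algorithms actually benchmarked, and the class and method names are assumptions.

```python
import threading

class FetchAndMultiply:
    """Sketch of a shared Fetch&Multiply object: atomically multiply
    the shared value by a factor and return the previous value."""
    def __init__(self, value=1):
        self.value = value
        self._lock = threading.Lock()

    def fetch_and_multiply(self, factor):
        with self._lock:          # the critical section
            old = self.value
            self.value = old * factor
            return old

# Ten threads each double the shared value once: 2**10 = 1024.
obj = FetchAndMultiply()
threads = [threading.Thread(target=obj.fetch_and_multiply, args=(2,))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(obj.value)  # 1024
```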
When the number of threads is beyond 13, the L3 cache misses of H-STA are a little higher than those of H-Synch. However, H-STA still performs better than H-Synch, because H-STA has much lower L2 cache misses (as shown in Table I). D. Shared stack The stack is a data structure with a wide range of uses; for example, inter-thread communication is heavily based on accessing such data structures [4]. Therefore, we further investigate the performance of each algorithm by applying them to shared stacks. As shown in Figure 12, HSTA-Stack achieves the best performance, followed by H-Stack (H-Synch). The throughputs of FC-Stack (Flat-combining), CC-Stack (CC-Synch), and R-Stack (R-Synch) are approximately the same. Again, CLH-Stack and FCMCS-Stack are the slowest implementations of the shared stack. Both HSTA-Stack and H-Stack employ a hierarchical policy to reduce RMRs and interconnect communication, resulting in enhanced data locality, as shown by the L3 cache miss curve in Figure 13(b). Figure 9. L2 cache misses per operation and L3 cache misses per operation. Figure 10. Average throughput of each implementation while simulating a Fetch&Multiply object. Although FC-MCS has slightly lower L3 cache misses than CC-Synch, it is still outperformed by CC-Synch in terms of throughput, because FC-MCS has much higher L2 cache misses (as shown in Table I). Table I. L2 cache misses per operation for 9 to 16 threads. <table> <thead> <tr> <th>Threads</th> <th>9</th> <th>10</th> <th>11</th> <th>12</th> <th>13</th> <th>14</th> <th>15</th> <th>16</th> </tr> </thead> <tbody> <tr> <td>FC-MCS</td> <td>13.5</td> <td>13.1</td> <td>13.2</td> <td>15.2</td> <td>13.4</td> <td>13.4</td> <td>14.9</td> <td>13.9</td> </tr> <tr> <td>H-Synch</td> <td>8.7</td> <td>9.6</td> <td>9.4</td> <td>9.3</td> <td>9.4</td> <td>9.1</td> <td>8.9</td> <td>8.8</td> </tr> <tr> <td>H-STA</td> <td>6</td> <td>6.1</td> <td>6.4</td> <td>6.6</td> <td>6.5</td> <td>6.8</td> <td>6.8</td> <td>6.8</td> </tr> </tbody> </table> Figure 11.
L2 cache misses per operation and L3 cache misses per operation. Figure 12. Average throughput of each implementation when applied to the shared stack. V. RELATED WORK The combining technique has been studied for decades. To the best of our knowledge, the earliest combining technique was proposed in [12] to construct a software combining tree for decreasing memory contention. Another combining-based synchronization algorithm was presented later in [9]; however, it suffers from heavy contention in posting requests and may cause unbounded RMRs. A hardware technique called ACS is proposed in [13], which uses an asymmetric, faster core to execute critical sections. Sim [16] and Flat-combining [10] are two highly efficient implementations of the combining technique, which have been shown to significantly outperform fine-grained thread synchronization. Hierarchical techniques were introduced mainly to deal with issues caused by NUMA architectures, such as RMRs and interconnect communication among multiple NUMA nodes. To the best of our knowledge, HBO [11] is the first hierarchical technique that encourages threads from the same NUMA node to acquire the lock consecutively, in order to reduce interconnect communication and achieve strong data locality. However, HBO is a test-and-test-and-set lock assisted with a backoff scheme, which is known to incur a lot of invalidation traffic. FC-MCS [1] is another highly efficient hierarchical lock that outperforms all previous NUMA-aware and non-NUMA-aware locks. Nevertheless, FC-MCS performs poorly on machines with small clusters of cores due to the difficulty of building long local lists of requests. H-Synch [4] is the fastest lock algorithm that employs both the combining and the hierarchical technique; unlike FC-MCS, H-Synch works well on machines with small clusters of cores. VI. CONCLUSION Synchronization overhead limits the performance of parallel applications.
This paper analyzes the locations that incur this overhead and presents two strategies to reduce it: SAB and RCAN. SAB reduces the overhead occurring in the critical section, while RCAN reduces the overhead occurring in the algorithm itself. Finally, we show how to use the two strategies to design synchronization algorithms: we use SAB to design an algorithm called R-Synch, and RCAN to design an algorithm called H-STA. Experiments show that our strategies effectively reduce the overhead in the critical section and in the algorithm itself, respectively.

ACKNOWLEDGEMENTS

This work was supported by the National Science Foundation of China under grant No. 61232008, the National 863 Hi-Tech Research and Development Program under grants No. 2014AA01A302 and No. 2015AA01A203, and the Fundamental Research Funds for the Central Universities under grant No. 2015TS067.

REFERENCES
Utilizing Virtual Software Teams for Inconsistency Management in Distributed Software Development

Dillip Kumar Mahapatra, Tanmaya Kumar Das, Gurudatta Lenka

Abstract—In the challenging field of software project development, the work is invariably performed by teams. In today's world of privatization and globalization, where development costs are increasing at breakneck speed, the focus is now on cost reduction and the availability of a highly motivated and suitably trained workforce. Keeping these parameters in mind, companies worldwide are relying on virtual software teams to do the work. This paper highlights the characteristics and specifics of virtual software teams. It also illustrates some of the most common issues and challenges that virtual teams face while working on a project, thereby exposing some of the ground realities that describe them best.

Keywords: Cohesion, Complexity Factor, CSCW, Cultural Difference, Face-to-Face Interaction, Satisfaction, Socio-Emotional Process, Virtual Team.

I. INTRODUCTION

A. Social Aspects of Software Development

The software development process would not be possible without the human beings who handle the tasks of requirement specification, analysis, design, implementation, testing, and evaluation. Therefore, the success of software development depends on the human factor involved in it, specifically on the complex relationships among the people who collaborate to deliver the product successfully. Software development is also considered an essentially social discipline, which lends psychological perspectives to programming and software development. More cross-scientific research settings should be created to better understand the group and personal psychological factors that play an essential role in software development.
Team-level social processes may be a better predictor of team performance than production methods; theories from group psychology and management science can provide insights into how software development teams can improve their work practices by considering more than just technical choices. The importance of social factors in software development is therefore enormous. Because of this, organizations need to investigate the relationships between team members and pay special attention to the development teams and the complexities and problems they face every day (Ref. 2).

Manuscript received on December 16, 2012. Dillip Kumar Mahapatra, Associate Professor, Department of Information Technology, Krupajal Engineering College, Bhubaneswar (Orissa), India. Tanmaya Kumar Dash, Asst. Prof., Deptt. Computer Science & Engineering, C.V.Raman College of Engineering, Bhubaneswar (Orissa), India. Gurudatta Lenka, Department of Computer Science & Engineering, Krupajal Engineering College, Bhubaneswar (Orissa), India.

II. VIRTUAL VS. TRADITIONAL SOFTWARE TEAMS

Software engineering is a technical as well as a social discipline. Whether an organization is running a traditional, distributed, virtual, or global software development project, the crucial building blocks of the project are the developer teams. A team can be defined as "a collection of individuals who are interdependent in their tasks, who share responsibility for outcomes, who see themselves and who are seen by others as an intact social entity embedded in one or more larger social systems (for example, business unit or the corporation), and who manage their relationships across organizational boundaries" (Ref. 2). In traditional, co-located software development, the work is performed by traditional or face-to-face teams. A traditional team, therefore, is a collection of co-located individuals who perform tasks and share responsibilities.
Similarly, virtual software teams are the work units of distributed, virtual, or global software development. However, they operate across time, geographical locations, and organizational boundaries, and are linked by communication technologies. A virtual team may be defined as "a team whose members use the Internet, intranets, extranets and other networks to communicate, coordinate and collaborate with each other on tasks and projects even though they may work in different geographical locations and for different organizations". The most important distinction between virtual and traditional teams, however, is that the members of a virtual team are distributed across geographical locations. Experience shows that, in contrast to traditional teams, virtual teams are very dynamic, because they are typically formed as the need arises and disassembled when the task is complete (Ref. 3).

III. VIRTUAL TEAM CHARACTERISTICS

Because virtual teams are assembled and disassembled very dynamically, there is very little prior team history and work culture, and the responsibilities of team members vary with each new virtual team to which they are appointed. Savage points out that the structures of virtual teams are typically non-hierarchical and decentralized. Moreover, virtual team members depend largely on lateral and informal information exchange to perform their tasks. A virtual team exhibits the following characteristics (Ref. 1):
- It is a set of culturally and organizationally differentiated members.
- The members are grouped temporarily.
- The members are physically dispersed.
- The members are connected by weak lateral ties.
- The members are engaged in performing non-routine tasks.

These would be the characteristics of the ideal virtual team. In practice, however, there are few such teams.
For example, there are teams whose members are geographically distributed but culturally and organizationally homogeneous. In other cases, team members may come from different cultures and organizations but be physically co-located. The consequence is that the virtuality of a team is determined in degrees rather than in kind. This virtuality has three characteristics (Ref. 5):

A. Virtual Team Context – The virtual team context is characterized by little team history, novel tasks, and physically distributed members. It is reported that one of the biggest advantages of virtual teams over traditional teams is that members can be assembled quickly to exploit emerging opportunities and disassembled when the job is finished. The lack of team unity follows from the fact that virtual teams tend to have no history of collaboration. Also, the different knowledge and capabilities of people have to be leveraged in order to exploit emerging market opportunities. Novel tasks are a side effect of the nature of these opportunities: to exploit them, virtual teams must perform non-routine tasks, take on non-routine responsibilities, and do so in time-pressured environments. Furthermore, the members of virtual teams are usually not co-located but dispersed around the world, connected only by various information technologies (Ref. 4).

B. Virtual Team Composition – Virtual team members are characterized by heterogeneity in their cultural and organizational backgrounds. Virtual teams are often composed of culturally and organizationally diverse members. Nowadays, as a result of globalization and improvements in information technology, organizations are able to form virtual teams that connect members from different countries and organizations.
It is found that, due to the unique cultural and organizational backgrounds of team members, the mix of their knowledge and talents maximizes the team's potential to take advantage of market opportunities.

C. Virtual Team Structure – The structure of a group describes the nature and strength of the patterns of relationships among individuals in work groups. The relationships between members of virtual teams are often lateral but weak. Virtual team members tend to be connected by lateral communication ties because of the physical distance between them and the nature of the work they perform. These ties give the team an efficient flow of information and let members coordinate their task activities despite the physical distance between them. The ties tend to be weak because "the lack of face-to-face interactions, the span across cultural and organizational boundaries, and the lack of prior history of cooperation prevent the time, the mutual confiding and the emotional support required for the formation of strong ties". Due to the weak ties within a virtual team, members are more likely to treat each other formally and less likely to reciprocate requests from one another. Hence, due to cultural and organizational barriers and the shortage of prior work history, the relationships connecting virtual team members are likely to be lateral but weak (Ref. 6). The coordination dynamics within the team depend greatly on the degrees of virtuality it possesses.

IV. DISTANCE AS A COMPLEXITY FACTOR

The physical distance imposed on team members working in a distributed environment is found to have the greatest influence on collaboration issues in virtual software teams. Distance itself introduces barriers and complexities to distributed project management. For virtual teams, moreover, distance negatively influences other factors such as coordination, visibility, communication, and cooperation.
If the issues that arise in these areas are neglected, they can introduce additional barriers and complexities to the project (see Figure 1). It has long been known that the physical proximity of co-workers greatly influences collaboration: collaboration is more effective and more likely when people in a building are located closer to each other, and the frequency of communication among team members decreases with distance. Furthermore, it has been reported that when engineers' offices were about 30 meters or more apart, the frequency of communication dropped to almost the same low level as when the offices were separated by many miles (Ref. 4). To combat the complexities introduced by the distance between members and to aid virtual collaboration, the software industry has been developing a number of computer-supported cooperative work (CSCW) tools. These tools are far from perfect, but ongoing research on virtual work helps developers improve them (Ref. 4).

![Figure 1: Virtual Software team environment](image)

V. ISSUES WITHIN VIRTUAL SOFTWARE TEAMS

The distance between the members of a virtual team, the lack of face-to-face contact, and cultural and organizational diversity complicate the work of virtual teams. Current research on virtual teams identifies the issues that virtual teams face, and the following subsections are based on that work. The life cycle model includes four general categories of variables: inputs, socio-emotional processes, task processes, and outputs (see Figure 2).

![Figure 2: Categories of the life cycle model](image)

A. Inputs

Inputs stand for the design and composition characteristics of the virtual team and the resources, skills, and abilities with which the team begins its work. The most commonly researched inputs are design, culture, technical expertise, and training.
(i) Design: The development of a shared language and shared understanding by team members depends greatly on the design of the virtual team and the structuring of its interactions, and even more so early in the team's life. There are a number of different designs of virtual teams; some incorporate different levels of face-to-face interaction, planning of activities and use of communication media, and the articulation of goals, structures, norms, and values. Comparisons between traditional and virtual teams indicate that traditional teams generally outperform virtual teams in the ability to orderly and efficiently exchange information and to plan effectively. The probability of success of a virtual team can be greatly improved by team-building exercises, the establishment of shared norms, and the specification of a clear team structure. Some authors point out the need for periodic face-to-face meetings during project planning, given the limitations of electronic communication: discussion and team interaction in virtual environments can take longer and be confusing, leading to poorer comprehension and understanding. By organizing early face-to-face meetings during the team's launch phase, organizations can improve the team's project definition and enhance the effectiveness and quality of subsequent electronic communication (Ref. 1). By enabling knowledge sharing (either through face-to-face meetings or electronic communication), such designs establish a common understanding and language, which helps team members solve ambiguous tasks while communicating electronically. The absence of a shared understanding and language, on the other hand, brings with it a number of possible communication problems.
Such problems include failure to communicate, unevenly distributed information, difficulty understanding the importance of information to various team members, and difficulty interpreting the meaning of silence or non-reply by others. A design of team interaction that employs the setting of goals and strategies leads to the establishment of shared mental models, and different goal and strategy decisions have been found to improve the performance of virtual teams (Ref. 6).

(ii) Cultural Differences: As projects are deployed around the world, they often include team members from different cultural backgrounds. Cultural differences and their effect on project success have been studied on numerous occasions. The most important issues stemming from these differences are coordination difficulties and obstacles to effective communication. These negative effects are present not only in global virtual teams but also in teams with subtle differences among members coming from different regions of the same country. The negative effects of cultural differences can be overcome by actively understanding and accepting the differences. However, cultural differences have a lesser impact than the distance between members when it comes to project management challenges such as setting goals, budgets, schedules, and resources, and identifying needs (Ref. 6).

(iii) Technical Expertise: The technical expertise of virtual team members has a great impact on team performance and individual satisfaction. Performance and individual satisfaction with the team experience are negatively affected by a lack of technical expertise and an inability to cope with technical problems. It has been observed that the novelty of the team affects team members less than the novelty of the technology being used. The absence of technology-related uncertainty and technological challenges fosters the development of high trust among team members.
(iv) Training: Various studies have shown that consistent training across all team members increases team performance. Moreover, team members require training not only in the use of technology but also in effective communication through the virtual medium. Virtual teams whose members possess diverse technology skills may have difficulties if they cannot resolve their differences and agree on one specific technology for the execution of a task. To foster cohesiveness, trust, teamwork, commitment to team goals, individual satisfaction, and higher perceived decision quality, organizations can provide team members with early and uniform training. Organizations are also deploying formal mentoring programs, whose goal is to cultivate relational development and help new members feel connected to other team members (Ref. 6).

B. Socio-Emotional Processes

Relationship building, cohesion, and trust are the most important processes within virtual teams. Their existence has positive effects on team performance, yet they are very hard to realize when team members are separated by physical distance. Relationship building includes interaction processes designed to increase feelings of inclusiveness or belonging to the team, which are hypothesized to foster cohesion and trust. Research has found a positive link between socio-emotional processes and the outcomes of a virtual team project. It has also shown that virtual teams are confronted with unique challenges in meeting the socio-emotional needs of their members.

(i) Relationship Building: Another difference between virtual and traditional teams is that virtual teams are often more task-focused than socially focused, although over time the degree of task focus usually lessens. Virtual team members also generally have weaker relational links to their co-workers.
This problem arises from the fact that virtual teams rely significantly on electronic communication and from the difficulties inherent in this kind of communication. Many authors have thus found that face-to-face communication early in the project supports the formation of closer interpersonal relationships between team members. If the budget and deadlines allow it, team members should physically meet early in the project, in meetings that focus solely on relationship building. Such meetings strengthen the socio-emotional development of the team and support later success by enhancing learning and improving performance. If face-to-face meetings are not possible, relationship building can be encouraged by other means. One way is to focus on the exchange of social communication: virtual teams that exchange more social communication achieve higher levels of trust and better social and emotional relationships. Social conversations between team members can also foster relationship building and improve social bonds if they emphasize commonalities between members of different cultures. Effective team leaders can stimulate relationship building by scheduling regular chat sessions with all team members present (Ref. 6).

(ii) Cohesion: Cohesion in a virtual team fosters better performance and greater satisfaction among team members, and has been identified as one of the differences between successful and unsuccessful virtual teams. Cohesion was the focus of several studies that compared virtual and traditional teams, but the results have been mixed. Some found that the development of cohesion in virtual teams was obstructed by the use of collaborative technologies, and hence that traditional teams had higher team cohesiveness.
In contrast, other studies have found that even though virtual teams start with lower cohesion, their members exchange enough social information over time to develop strong cohesion (Ref. 10).

(iii) Trust: It is a big challenge to develop trust in virtual teams, because team members can hardly assess teammates' trustworthiness if they have never met them. Moreover, trust must develop quickly, because the life of many virtual teams is relatively short. The development of trust is essential, because it is crucial for the successful completion of virtual team projects. Even though it is difficult to develop trust in virtual teams, early research found that short-lived teams are in fact able to develop high trust. However, they do not follow the traditional model of trust development but rather a swift trust model. The swift trust model claims that, when there is not enough time to build trust slowly, team members assume that teammates are trustworthy and begin working as if trust were already established; during the project they then look for confirming or disconfirming evidence of this trustworthiness. Virtual teams that show highly trusting behavior exhibit significant social communication, predictable communication patterns, substantial feedback, positive leadership, and enthusiasm, and are also able to deal with technical uncertainty. The perceived integrity of other team members is especially important for the development of trust early in a team's life, while the perception of other members' benevolence helps maintain trust over time. Face-to-face meetings focused on developing a strong foundation of trust between members can also be used to create high-trust virtual teams. Besides face-to-face meetings, communication training can be used to develop high trust between virtual team members.

C. Task Processes

Task processes are defined as "the processes that occur as team members work together to accomplish a task or goal". In this category the major issues concern communication, coordination, and task-technology-structure fit.

(i) Communication: Communication is an essential part of any virtual team process. Indeed, it is said that "if technology is the foundation of the virtual business relationship, communication is the cement". Past research on traditional teams suggests that successful co-located teams communicate effectively and share information crucial to project completion in a timely manner. Communication in a virtual setting, however, faces serious challenges arising from the nature of the virtual environment, including time delays in sending feedback, the lack of a common frame of reference for all members, differences in the salience and interpretation of written text, and the difficulty of ensuring participation from remote team members. In contrast to traditional teams, virtual teams usually lack an important component of team communication, namely nonverbal communication. Because of its importance to virtual teams, communication has been the most studied aspect of virtual work. This work shows that traditional teams often communicate more effectively than their virtual equivalents. Due to the physical separation between them, virtual team members are heavily dependent on information and communication technologies, yet technology is likely to constrain the communication process: electronic media are intrinsically leaner than face-to-face communication and convey a limited set of communication cues. Hence, teams working in a virtual setting face greater difficulties in orderly and efficiently exchanging information than their counterparts in a traditional setting.
Even though technical challenges have the greatest influence, they are not the only cause of restricted communication. Information exchange also runs into problems when some team members are co-located and others are dispersed: in such settings, dispersed members often assume that co-located members are talking and sharing information that is not communicated to them, and private exchanges have been found to cause friction between team members. Similarly, ineffective leadership and cultural differences have been identified as negative influences on communication effectiveness. In spite of all the difficulties of communicating in a virtual environment, virtual team members must effectively exchange information if they are to achieve their objectives and successfully complete their tasks. That is why the mitigation of communication difficulties and the development of an information-sharing culture have been the focus of many studies. These studies indicate that the frequency and predictability of communication, and the extent to which feedback is provided on a regular basis, improve communication effectiveness, which in turn leads to higher trust and better team performance. Conversely, unpredictable communication patterns are said to cripple the coordination and success of virtual teams; the most frequent unstable pattern is team members leaving for an extended period of time without informing the others in advance. Regarding the extent of communication, virtual team members communicate more frequently than their traditional counterparts, and members of female-only virtual teams communicate more than members of male-only or mixed-gender virtual teams. Studies have also found that more effective communication improves cultural understanding, and vice versa (Ref. 7).
(ii) Coordination: Coordination can be defined as the degree of functional articulation and unity of effort between different organizational parts, and the extent to which the work activities of team members are logically consistent and coherent. Even though coordination has a great influence on the performance of virtual teams, virtual teams face significant challenges as they try to coordinate their work across time zones, different cultures, and divergent mental models. Furthermore, collaboration norms need to be developed for the team to be able to bring team members' contributions together consistently and coherently. To address the challenges to effective coordination in the virtual setting, research has focused on interventions and approaches designed to improve virtual team coordination. Face-to-face meetings have been identified as a great help in mitigating various issues in the virtual environment; where feasible, they also have a positive influence on coordination activities and drive a project forward. Where periodic face-to-face meetings cannot be held, organizations can instead develop coordination protocols and communication training, which support the improvement of coordination and collaboration. Another approach that has proven useful for improving coordination between virtual team members is the minimization of cultural barriers (Ref. 5).

(iii) Task-Technology-Structure Fit: The fit between the different technologies available to virtual teams and the tasks they need to perform plays a significant role in the life of a virtual team. Studies suggest that the technology for completing a task is chosen according to individual preferences, individual experience with the technology and its ease of use, the need for documentation, and the urgency of the task.
For instance, face-to-face meetings or phone calls have proven best suited for ambiguous tasks, managing conflicts, managing external resources, brainstorming, and setting strategic direction. Electronic communication, on the other hand, is the best choice for executing more structured tasks or monitoring project status. In settings where virtual team members are not able to attend synchronous meetings (e.g., because of different time zones), a shared language can be developed to help members overcome the limitations and adapt the technology to complete ambiguous tasks. Regardless of the availability of various technologies, effective virtual teams are often able to adapt the technology to the communication requirements of the task at hand. The availability of different technologies for completing tasks is said to foster more satisfaction and better performance among virtual team members. The adaptability of virtual team members to a different team structure has also been the focus of many studies. Virtual teams experience distinct stages of team development just as traditional teams do. In addition, despite the fact that members of virtual teams need time to adapt to the technology and the new team form, they are generally able to do so in a satisfactory manner. It has also been observed that virtual team members adapt themselves to the technology, the organizational/social environment, and/or the team structures (Ref. 7).

D. Outcomes

The outcomes of virtual teams have also been the focus of much research, including the performance of virtual teams as well as the members' satisfaction with the virtual team experience.

(i) Performance: Research on performance has also compared traditional and virtual teams, with mixed results: some studies observed that virtual teams are more effective than traditional teams, while others found that virtual teams cannot outperform traditional teams.
In addition, the majority of studies conducted on this topic found no significant difference between the two types of teams. Other research on the performance of virtual teams focused on more specific aspects such as decision quality, the number of generated ideas, and the time members needed to reach a decision. It was found that virtual teams do not differ much from traditional teams when it comes to the number of generated ideas. As for the time needed for decision making, virtual teams needed more time to make a decision because of the constraints of the virtual environment (Ref. 9).
(ii) Satisfaction
Some studies have observed that members of traditional teams were more satisfied with their experience than members of virtual teams, while others found no significant difference between the two kinds of teams. The difference between satisfied and unsatisfied virtual team members was also studied; training and the use of more communication methods were identified as possible prerequisites for a satisfied virtual team (Ref. 3).
VI. CONCLUSION
As software development is both a social and a technical discipline, the aspect of team members is inherently important. Virtual software teams represent a group of software engineers who are involved in a distributed software project and collaborate toward its goal. Virtual team members have to use various communication technologies in order to collaborate and coordinate their work. The main reason for the complexity of distributed projects and of the workflows of virtual team members is the geographical distance between development sites. The distance has negative effects on coordination, communication, visibility, and cooperation. Neglecting these negative effects can lead to various kinds of issues that hinder the success of virtual teams. The issues that geographically distributed team members face fall into four categories: inputs, socio-emotional processes, task processes, and outputs.
Every category includes several aspects of a virtual team. Inputs involve virtual team design, team culture, training, and technical expertise. Aspects of relationship building, cohesion, and trust fall into the category of socio-emotional processes. Task processes include communication, coordination, and task-technology-structure fit. Finally, performance and satisfaction of team members represent the outputs category. Research findings on these aspects give a clearer view of, and insight into, the problems that distributed co-workers face as well as the reasons why these problems emerge. This can be highly useful for the development of new virtual collaboration tools that support virtual teams. The activities that team members have to perform are presented afterwards. Finally, this paper presents the tools that support collaborative work in a virtual environment as well as the different modes of virtual collaboration.
REFERENCES
AUTHORS PROFILE
Dillip Kumar Mahapatra has completed his master's degree in CSE and has more than seven years of experience teaching at UG and PG levels. He has published 15 papers in different national-level journals. He has also authored ten textbooks in the field of CSE and information technology.
Tanmaya Kumar Das has completed his master's degree in CSE, has 22 years of experience in teaching and industry, and has more than 19 papers published in national-level journals. He has also authored more than 12 books in the field of engineering for UG and PG students.
Gurudatta Lenka has completed his master's degree in Computer Application and has more than 4 years of experience in teaching and industry.
Procedure to Validate Non-Functional Requirements in Information Management Systems
Niurka Martínez Durán, Ing.¹, Alexander Delgado Gutierrez, Ing.², Jorge Emilio Escala Maceo, Lic.¹
¹University of Informatics Sciences, Cuba, nduran@uci.cu, jeem@uci.cu
²University of Informatics Sciences, Cuba, adgutierrez@uci.cu
Abstract– For the proper development of Information Management Systems, a validation of non-functional requirements is required. This activity is performed by the Quality Center for Technological Solutions (CALISOFT, by its Spanish acronym) not from the initial stages, but once the system is implemented, causing mistakes made in the specification of requirements to be dragged along until the end, with an impact at every stage of software development. The present work aims at designing a procedure to validate the non-functional requirements of Information Management Systems. To that end, an analysis of requirements engineering was made, specifically of the validation stage. Three phases of the procedure design were identified, each of them containing activities, responsible parties, participants, input and output artifacts, and the technique to be used. A set of metrics was also proposed to assess the quality of the non-functional requirements following the ISO/IEC 9126 standard. The proposal is applied to the Integral Solution for Project Management and Centralized Actions (SIGEPAC, by its Spanish acronym). Finally, the result was a procedure to validate non-functional requirements, allowing errors to be detected and corrected from the beginning.
Keywords– Requirements engineering, metrics, non-functional requirements, validation.
Digital Object Identifier (DOI): http://dx.doi.org/10.18687/LACCEI2015.11.031
ISBN: 13 978-0-9822896-8-6 ISSN: 2414-6668
I. INTRODUCTION
The development of the Information and Communication Technologies (ICTs) has reached an unparalleled worldwide success, provoking in return a fruitful advance in the software industry. One of the significant problems in the field of informatics is Quality Management, due to technological innovations that have dramatically increased the size and complexity of informatics systems.
Since its appearance, this topic has been a concern for specialists, engineers, researchers, and traders in this branch, who have conducted studies around two main objectives: first, to obtain software with quality, and second, to assess software quality. In the midst of this competitive landscape, Information Management Systems emerge: computer applications specially designed for the management and continuous improvement of the policies, procedures, and processes of the organization. One of the key aspects for the proper functioning of this variant of software development is to identify and validate nonfunctional requirements. "Non-functional requirements, as the name suggests, are requirements that are not directly concerned with the specific functions delivered by the system. They may relate to emergent system properties such as reliability, response time and storage occupancy." [1] They arise from the needs of the user, from budget constraints, from the policies of the organization, from the need for interoperability of software or hardware with other systems, or from external factors such as safety regulations or laws about privacy. In this scenario, a proper validation systematically increases the expectations of end users. Its implementation is generally difficult because typical defects usually appear as contradictions in the specification, small differences between functional and nonfunctional requirements, barely understandable and redundant specifications that come into conflict, and specifications that do not indicate all the necessary hardware resources. The validation cost is very high, and the clients paying for the system sometimes think these costs are not justified. At the University of Informatics Sciences, validation of software products is carried out by the Quality Center for Technological Solutions (CALISOFT) and quality groups belonging to each development center.
The strategy followed is to perform this activity at the end of the implementation of the system, by checking that the initial nonfunctional requirements are implemented correctly, as described in the Software Requirements Specification (SRS) document. Many of the errors in these requirements stem from inadequate specification and can only be demonstrated when the product is already in its final stage, causing delays, rising costs, and customer dissatisfaction. Validation at the end allows detecting deficiencies in the execution of the system implementation, but not performing it from the start causes errors and omissions of nonfunctional requirements to be dragged along, and their impact increases with the gradual development of the software. An Information Management System in whose development a proper validation of nonfunctional requirements is not carried out from the initial stages is likely to have problems in efficiency, portability, deployment, ease of use, robustness, reuse, and compatibility with other systems. It also causes difficulties, in the organization using it, to constantly renew its objectives, strategies, operations, and service levels. For all the reasons mentioned above, the general objective is: to develop a procedure for the validation of nonfunctional requirements of Information Management Systems.
II. MATERIALS AND METHODS
According to the ISO/IEC 9126 standard, the quality of the software product should be detailed hierarchically into a model composed of features and subfeatures [2].
This model relates functional and nonfunctional requirements through the six quality attributes proposed (functionality, reliability, usability, efficiency, maintainability, and portability) and some sub-features, as shown in the following picture:
13th LACCEI Annual International Conference: “Engineering Education Facing the Grand Challenges, What Are We Doing?”, July 29-31, 2015, Santo Domingo, Dominican Republic. DOI: http://dx.doi.org/10.18687/LACCEI2015.1.1.031 ISBN: 13 978-0-9822896-8-6 ISSN: 2414-6668
In this research, those features and sub-features that match the classification of nonfunctional requirements (NFR) were used, as listed below:
**Functionality**
- **Security**: Capability of the software product to protect systems or data so that unauthorized individuals or systems cannot read or modify them, while authorized persons or systems do have access to them.
- **Interoperability**: Capability of the software product to interact reciprocally with one or more specified systems.
**Reliability**
- **Maturity**: Capability of the software product to avoid failure as a result of faults in the software.
- **Recoverability**: Capability of the software product to restore a specified level of performance and recover the data directly affected in the case of a failure.
**Usability**
- **Understandability**: Capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use.
**Validation of Requirements**
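For tooling around the procedure, the subset of the quality model above can be captured as a small data structure. A minimal sketch in Python (the names follow the ISO/IEC 9126 subset listed above; the `characteristic_of` helper is illustrative and not part of the paper's procedure):

```python
# Subset of the ISO/IEC 9126 quality model used in this research:
# each characteristic maps to the sub-characteristics that match NFR classes.
QUALITY_MODEL = {
    "functionality": ["security", "interoperability"],
    "reliability": ["maturity", "recoverability"],
    "usability": ["understandability"],
}

def characteristic_of(sub):
    """Return the quality characteristic a sub-characteristic belongs to."""
    for characteristic, subs in QUALITY_MODEL.items():
        if sub in subs:
            return characteristic
    return None
```

Such a mapping makes it straightforward to group a list of NFR by quality characteristic, for instance when applying the metrics of Phase III.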
Validation is one of the stages of requirements engineering (RE). Its purpose is to verify that all requirements listed in the specification document represent a description, at least acceptable, of the system to be implemented. This involves verifying that the requirements are consistent and complete. It also allows demonstrating that the requirements defined for the system are those the client really wants, checking that none have been omitted and that they are not ambiguous, inconsistent, or redundant. Its mission is to demonstrate that the requirements definition specifies the system the user wants, and to discover problems in the requirements document before risking resources in the implementation. Among its activities is the assessment of the requirements, on which this investigation focused. Validation of a system is performed not only at the beginning, as part of requirements engineering, but also at the end of development, to determine whether the initial conditions are satisfied. It is performed by means of several test cases for each specified requirement or use case. According to the IEEE 2004 Standard for Verification and Validation, validation is a process that provides evidence of whether the software and its associated products and processes:
- Satisfy the system requirements assigned to the software at the end of each life cycle activity.
- Solve the problem correctly (for example, use the appropriate model and implement business rules correctly).
- Satisfy the use and needs of the users. (Chun, 1999)
**Software requirements validation methods**
Methods for validation can be classified into:
- **Static Methods**: Focused on the analysis and corroboration of the representation of the system, including documents, diagrams, and code.
- **Dynamic Methods**: Involve running some kind of implementation of the system.
It might seem that static methods alone are enough, but this is not the case, since static methods are oriented more towards verification and cannot demonstrate that the system meets user expectations, which is confirmed through validation. For the present investigation, both methods were used in order to detect and correct errors in nonfunctional requirements.
**Validation Techniques**
Requirements validation techniques are applied in order to examine the requirements and ensure that the appropriate system is defined, permitting errors to be detected early so that they do not lead to unexpected results, overspending, and great loss of time. Among the validation techniques used in this research are reviews [4], audits [5], and validation testing [6], chosen for their international use and effectiveness in supporting validation processes.
**Software Quality Models**
Different sources in the literature agree that a quality model is the set of characteristics, and the relationships between them, which provide the basis or rule to specify quality requirements, assess quality, or compare any aspect of the software. Quality models promote the proper use of methods and tools and enable communication among developers. The software quality models used in this research are mentioned at this point to serve as a reference and guide in achieving the proposed objectives.
**Validation of Information Management Systems**
An Information Management System is an application containing an integrated set of processes, mainly formal, that the organization knows and knows how to use (informal ones are not excluded) and that are recorded as data in a database.
It is developed in a user-computer environment and operates on a set of structured data (a database) using computer hardware and software, telecommunications networks, management techniques, or other forms of information technology [9]. It is characterized by: the availability of information when necessary and through appropriate means; providing information selectively (quantity versus quality); variety in the form of information presentation (graphical, numerical, etc.); the degree of "intelligence" of the system (preset relations); the response time of the system, from a request to its completion; accuracy, conformity between the data supplied and the real ones; generality, availability to meet different needs; flexibility, ability to adapt to new needs; reliability, probability of correct operation for a certain period of use; security, protection against loss and/or unauthorized use of resources; storage, the level of repetition of information to protect against losses; and friendliness, the learning required for its management. Examples of companies that validate such systems include Green SQA; Software Quality Assurance, SQA S.A.; Indudata Ltda.; and TSOFT. These companies have high international recognition for the quality of the services they provide. All information related to the methodologies practiced or procedures performed, as well as the results obtained, can only be accessed by the staff involved in the contract. At the University of Informatics Sciences, NFR validation of these systems is performed by CALISOFT and the quality groups associated with each development center. It runs once the implementation is finished, through a Deployment Testing Plan, which defines the types of tests to be performed, established according to the six quality characteristics defined in the ISO/IEC 9126 standard. After an analysis of the existing alternatives for the validation of nonfunctional requirements, it was concluded that they are not feasible for the present investigation.
The main disadvantage in the international arena is the inability to obtain information on how Information Management Systems are validated. In this University in particular, validation is performed only in the final stage of software development, causing failed implementations, delays in delivery time, unexpected costs, and customer dissatisfaction.
**Modeling tool and prospective method**
Visual Paradigm was selected as the tool for designing the procedure for validating nonfunctional requirements in Information Management Systems, because it has an easy-to-use interface, supports the diagrams necessary for the development of the procedure, generates documentation in HTML and PDF formats without using external tools, and is available on multiple platforms [10]. Prospective methods study the future with regard to the evolution of the factors of the techno-socio-economic environment and their interactions. The experts method (Delphi method) is based on consulting people who have great knowledge of the environment in which the organization carries out its work. These people express their ideas, and finally a report is made indicating which, in their opinion, are the possible alternatives for the future [11]. Given the presence at the University of staff with expertise in the subject from the Quality Center for Technological Solutions (CALISOFT), the importance of an assessment by engineers who perform software testing to improve product quality, and the features mentioned before, it was decided to use the Delphi method to validate the proposal of this research.
III. RESULTS AND DISCUSSION
The proposal of the procedure to validate nonfunctional requirements in Information Management Systems is presented as follows.
**Main features of the procedure**
The procedure consists of a series of activities organized in a logical and sequential way according to the element being analyzed.
The implementation of the procedure is an iterative and incremental process by its very nature; that is, several iterations are performed while the validated requirements do not yet meet the specified characteristics, allowing developers to run multiple sequences gradually, so that, as time goes by, each iteration produces an increase in software quality. It fits any software development methodology. The artifacts and roles proposed in it are declared by the authors according to their research. It is proactive, given the preventive nature of its activities, since it indicates how to develop each activity and task correctly, in a coherent and organized way. It has a wide range of applicability, given the feasibility of its use for various types of projects during the validation activity as part of requirements engineering. It also maintains a focus on feedback from its participants, fostered by the implementation of its activities.
Objective: To validate the nonfunctional requirements for the detection and correction of errors during the development life cycle of Information Management Systems.
Scope: Information Management Systems.
Structure: The procedure is structured in three phases to facilitate the organization of the activities carried out and a satisfactory attainment of the proposed objective. The phases are named: Phase I, Validation of the Nonfunctional Requirements List; Phase II, Validation of NFR Specifications; and Phase III, Validation for Quality Metrics.
![Fig. 2 Phases of the Procedure to validate nonfunctional requirements.](image)
The activities of the three phases of the procedure include the following elements:
**Description**: Consists of explaining in detail what those involved in each activity must do and how the defined techniques are used to fulfill the stated goal. It describes the treatment that will be given to the inputs of the activities to produce the outputs.
**Objective**: The primary goal of the activity; it defines the purpose towards which the people involved in its implementation work.
**Responsible**: Plays a leading role in the development of the activities; primarily responsible for the input and output artifacts and for the work flow performed.
**Participants**: People in charge of carrying out the correction of detected errors. Their main task is to work together with the person responsible for the procedure.
**Activities**: Set of actions belonging to each phase of the procedure, aiming to achieve its goal. Each activity consists of a sequence of tasks or steps logically ordered.
**Input Artifacts**: Composed of all the information and documentation necessary for the implementation of the activities. Participants make use of them, processing them to obtain the outputs of each particular activity.
**Output Artifacts**: Documents, models, tables, and general information obtained as a result of the implementation of the activities. Some of them are the inputs of other activities, which in turn are used to generate other outputs.
**Techniques**: Used to support the participants in each of the activities, gathering and obtaining the information necessary for its implementation. They provide an exchange between actors, promoting good understanding and feedback.
**Phase I. Validation of Nonfunctional Requirements List**
Phase I is focused on analyzing each nonfunctional requirement contained in the List of Nonfunctional Requirements. This artifact is generated by the quality administrator, who requests the SRS from the analyst and puts the statements on the list, separated from their specifications and considering the already defined classifications of all existing NFR, as follows:
\[ <\text{Classification}> \]
\[ \text{NFR}<\text{Number}> : <\text{Statement}> \]
The statement of each NFR is verified for ambiguity and consistency, for whether it can be proved, and for whether it describes properties of the system.
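The `<Classification>` / `NFR<Number>: <Statement>` layout above is regular enough to parse automatically. A minimal sketch, assuming the List of Nonfunctional Requirements is stored as plain text (the sample entries and the regular expression are illustrative assumptions, not artifacts defined by the procedure):

```python
import re

# One classification header followed by numbered statements, as in the
# List of Nonfunctional Requirements described above.
SAMPLE = """\
Security
NFR1: The system must encrypt stored passwords.
NFR2: Sessions must expire after 15 minutes of inactivity.
Usability
NFR3: The interface must be easy.
"""

ENTRY = re.compile(r"^NFR(\d+):\s*(.+)$")

def parse_nfr_list(text):
    """Yield (classification, number, statement) triples from the list."""
    classification = None
    for line in text.splitlines():
        m = ENTRY.match(line)
        if m:
            yield classification, int(m.group(1)), m.group(2)
        elif line.strip():
            classification = line.strip()

entries = list(parse_nfr_list(SAMPLE))
```

Once parsed, each statement can be routed to the Phase I checks (ambiguity, consistency, provability) and any failures recorded as nonconformities.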
This makes it possible to obtain nonconformities from the errors found, which are recorded in the Nonconformities Document (DNC) to be analyzed and corrected by the analyst. As many iterations as necessary are performed until the goal of each task is accomplished within the range of 95% to 100%. These limits allow from one to three errors relative to the number of NFR, since correctness ultimately depends on the judgment of the analyst responsible for this activity. The percent of achievement is specified in Table 1: Deficiencies of NFR, found in the document Nonfunctional Requirements List (see Annex 1). It also covers the concepts that may be difficult to interpret when carrying out these activities. <table> <thead> <tr> <th>Phase I. Validation of Nonfunctional Requirements List</th> </tr> </thead> <tbody> <tr> <td><strong>Objective</strong></td> </tr> <tr> <td><strong>Responsible</strong></td> </tr> <tr> <td><strong>Participants</strong></td> </tr> <tr> <td><strong>Output Artifacts</strong></td> </tr> <tr> <td><strong>Activities</strong></td> </tr> <tr> <td><strong>Technique</strong></td> </tr> </tbody> </table> **Phase II. Validation of NFR specifications** Phase II reviews the specifications of the nonfunctional requirements for the software, checking first that they meet the parameters set by the IEEE 830 standard. The activities are ruled by what this standard indicates as the desirable features of a correct software requirements specification. Once the statement errors of the NFR have been corrected (Phase I), it becomes necessary to check the specification found in the SRS, as follows: \[ <\text{Classification}> \] \[ \text{NFR}<\text{Number}> : <\text{Statement}> \] \[ <\text{Specification}> \] Several iterations of this step are performed, as often as necessary, until the target for each activity is accomplished within the range of 95% to 100%. 
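The acceptance rule just described can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the function names are ours, not taken from the paper:

```python
def achievement(total_nfr: int, errors: int) -> float:
    """Percent of NFR that satisfy the characteristic under review."""
    if total_nfr <= 0:
        raise ValueError("total_nfr must be positive")
    return 100.0 * (total_nfr - errors) / total_nfr

def phase_accepted(total_nfr: int, errors: int, lower: float = 95.0) -> bool:
    """Another iteration is required while achievement falls below the lower bound."""
    return achievement(total_nfr, errors) >= lower

# With 48 NFR (as in the SIGEPAC case study), up to two errors keep the
# phase inside the 95%-100% acceptance range:
# achievement(48, 2) -> ~95.8%, achievement(48, 3) -> 93.75%
```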
The percentage is defined with these limits because there may be from one to three errors, relative to the number of existing NFR, that cannot be eradicated; this is because the correction of nonconformities depends on the judgment of the analyst responsible for this activity. The technique used is the audit, implemented through a checklist. It not only evaluates the SRS artifact in general, but also establishes a series of questions for each activity to ensure its proper development and to calculate the percentage of achievement. ### TABLE II **Phase II. Validation of NFR Specifications** <table> <thead> <tr> <th>Objective</th> <th>Validate the nonfunctional requirements specifications.</th> </tr> </thead> <tbody> <tr> <td>Responsible</td> <td>Quality Administrator.</td> </tr> <tr> <td>Participants</td> <td>Analyst, Principal of the Project.</td> </tr> <tr> <td><strong>Activities</strong></td> <td>• Verify that the specification of NFR is correct.</td> </tr> <tr> <td></td> <td>• Verify that the specification of NFR is unambiguous.</td> </tr> <tr> <td></td> <td>• Verify that the specification of NFR is complete.</td> </tr> <tr> <td></td> <td>• Verify that the specification of NFR is possible to prove.</td> </tr> <tr> <td></td> <td>• Verify that the specification of NFR is consistent.</td> </tr> <tr> <td></td> <td>• Verify that the specification of NFR is modifiable.</td> </tr> <tr> <td>Technique</td> <td>Audit.</td> </tr> </tbody> </table> ### Phase III. Validation for Quality Metrics Phase III is focused on evaluating the six quality attributes contained in the ISO/IEC 9126 standard that match the selected nonfunctional requirements. The metrics, defined using the same standard as a guide, make it possible to assess the characteristics and sub-characteristics associated with nonfunctional requirements. The validations take into account the metrics that best meet the current needs of the NFR. 
After the metrics are applied, the results obtained are analyzed and converted from quantitative to qualitative values, which then allows defining the percent of achievement of the characteristic under consideration. All these calculations and conversions are listed in the Document of Tables for quality attributes. Several iterations are performed until each quality attribute reaches an accomplishment within the range of 90% to 100%, because there may be NFR that are not evaluated by the proposed metrics. **Metric structure** For a better understanding of the metrics, a common structure has been defined for their presentation: - **Metric Name**: Name of the metric. - **The metric is proposed to measure**: The question to be answered by applying the metric. - **Application Method**: Provides a sequence of steps for applying the metric. - **Measurement (formula)**: Provides the measurement formula and the meaning of the data used. - **Interpretation of the value obtained**: Provides the range limiting the value obtained and its conversion to a qualitative value. - **Unit of measure**: Standardization of the measurement being performed. ### TABLE III **Phase III. Validation for Quality Metrics** <table> <thead> <tr> <th>Objective</th> <th>Validate the quality of NFR through the metrics defined.</th> </tr> </thead> <tbody> <tr> <td>Responsible</td> <td>Quality Administrator.</td> </tr> <tr> <td>Participants</td> <td>Architect, Principal of the Project.</td> </tr> <tr> <td>Input Artifacts</td> <td>Document of Tables for quality attributes.</td> </tr> <tr> <td>Output Artifacts</td> <td>Document of Tables for quality attributes. 
(completed).</td> </tr> <tr> <td></td> <td>Nonconformities Document.</td> </tr> <tr> <td><strong>Activities</strong></td> <td>• Validate the NFR associated with functionality.</td> </tr> <tr> <td></td> <td>• Validate the NFR associated with reliability.</td> </tr> <tr> <td></td> <td>• Validate the NFR associated with usability.</td> </tr> <tr> <td></td> <td>• Validate the NFR associated with efficiency.</td> </tr> <tr> <td></td> <td>• Validate the NFR associated with maintainability.</td> </tr> <tr> <td></td> <td>• Validate the NFR associated with portability.</td> </tr> <tr> <td>Technique</td> <td>Validation Tests.</td> </tr> </tbody> </table> **Representation of the results of the nonfunctional requirements** When the metrics are applied to the NFR, three cases are established according to the results: Appropriate Case, Worst Case and Recognized Case, each with an associated percentage of achievement. The sum of the percentages of the defined cases represents the state of the attribute sub-characteristic being evaluated. If any metric yields a result that places it in the Appropriate Case (the NFR is implemented, but a few weak elements prevent proper operation) or in the Worst Case (there are NFR whose implementation is missing or poor), a nonconformity is registered. This makes it possible to go directly to the affected area containing the requirement, rather than generalizing its condition, thus avoiding ambiguities. The Document of Tables for quality attributes contains a summary table at the end specifying the percentage of achievement of the attributes (NFR) measured by the metrics, and the final number of nonconformities found. This decides whether a new iteration of the phase is performed: if the percentage of achievement of any quality attribute is not between 90% and 100%, another iteration will run. 
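As a rough illustration of this decision logic, the Python sketch below classifies a metric result into one of the three cases and decides whether Phase III needs another iteration. The numeric cut-offs between cases are assumptions made for the example, since the paper defines the cases only qualitatively:

```python
def classify(pct: float) -> str:
    """Map a metric's percent result to one of the three cases.
    The 90/50 thresholds are illustrative assumptions, not values from the paper."""
    if pct >= 90.0:
        return "Recognized Case"    # NFR properly implemented
    if pct >= 50.0:
        return "Appropriate Case"   # implemented, but weak elements remain
    return "Worst Case"             # implementation missing or poor

def needs_iteration(attribute_pcts: list[float], lower: float = 90.0) -> bool:
    """Phase III repeats while any quality attribute falls below 90%."""
    return any(p < lower for p in attribute_pcts)
```

Under these assumptions, a run where the six attributes score, say, 95, 92 and 89 percent would trigger another iteration because one attribute is below the 90% bound.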
**Implementation of the procedure** The procedure was applied to SIGEPAC, a computer tool to register, follow up and assess the financial and physical performance of projects, as well as the impact of their results. The software solution is essentially composed of two subsystems: management software and a data warehouse. The results obtained in the implementation of the proposed procedure are presented below. **Implementation of Phase I** Generated by the quality manager, the List of Nonfunctional Requirements is the first document to which, according to the proposed procedure, validation is applied. After capturing the statements of all the nonfunctional requirements, separated from their specifications and respecting the classification proposed in the SRS, the quality manager performs a detailed analysis to ensure that the objectives of this phase are fulfilled. In a first iteration it was detected that only 85.41% of the NFR were unambiguous; in the second iteration 96% had that feature. Example of a refined requirement where ambiguity has been removed: - Initial requirement: NFR 4. Allows keyboard use to perform operations on the system (Allow quick access to the system using the keyboard) - Corrected requirement: NFR 4. Allows keyboard use for system access operations. Other errors were detected during this phase of the procedure: in the first iteration only 93.75% of the requirements were concise and abstract, while in the second iteration 98% met this feature; that is, they conveyed more information with fewer words, abstracting as much as possible of what the future system may be. As an example of an abstract and concise requirement we have: - NFR 37. Define communication interface: a nonfunctional requirement that gave a precise abstraction of how the future system would behave regarding the communication interface. 
Continuing the implementation of the procedure, it was found that 96% of the NFR were described as restrictions or properties of the system; the second iteration made it possible for 100% of the requirements to meet this feature. NFR possible to prove were at 93.75% in the first iteration, and in the second iteration 100% of them had this characteristic. According to the method proposed in this paper, all detected errors are recorded in a Nonconformities Document, which is sent to the analysts, who in turn review it and, applying the correction technique, meet again with the clients and the principal of the project to correct them. A total of two iterations of this phase were necessary to validate the list of nonfunctional requirements. **Implementation of Phase II** Once the NFR, separated from their specifications, have been analyzed and the detected nonconformities solved by the analysts, the Software Requirements Specification (SRS) document is refined; after that, the questions in the checklist for this phase are answered. Example of the implementation of the checklist on the SRS document: <table> <thead> <tr> <th>Elements defined by activities on phase II.</th> <th>Activity 1: Check the SRS to be correct.</th> </tr> </thead> <tbody> <tr> <td>Importance</td> <td>Parameters to assess.</td> </tr> <tr> <td>1.</td> <td>Are all non-functional requirements requested by the customer present?</td> </tr> <tr> <td>2.</td> <td>Do all specified NFR contribute to satisfying a real need of the software?</td> </tr> <tr> <td>3.</td> <td>Are there NFR with lack of information from the client?</td> </tr> <tr> <td>4.</td> <td>Are there NFR with information added by the analyst?</td> </tr> <tr> <td>5.</td> <td>Is the source of NFR identified? 
(for example: a person, a regulation)</td> </tr> </tbody> </table> In the SRS document it was detected that 10.41% of a total of 48 nonfunctional requirements were incorrect: 2.08% had ambiguity problems and were neither complete nor modifiable, and 4.16% were not consistent; however, no critical questions were graded as wrong, that is, rated with 1. To make the SRS document meet all the features required by the process, two iterations were clearly needed: although most nonfunctional requirements met all the characteristics established for this phase, the correctness characteristic was not within the defined range (95% to 100%). The following data were obtained as results. **Implementation of Phase III** For the implementation of Phase III it was necessary to run the metrics established per quality attribute, which allow a percentage assessment of the six quality characteristics desirable for the software, as set by the ISO/IEC 9126 standard. **Functionality** For Functionality, metrics such as access control and user accounts were used to measure the security sub-characteristic. All test cases were carried out in the SIGEPAC security module, from which satisfactory results were obtained, since there were no violations of system access. The user accounts metric gave similar results: there is only one shared account (administrator), which is responsible for managing user accounts by giving them the necessary permissions for the use of the system functionalities. Other metrics that evaluate functionality, in terms of interoperability, are format-based data interchangeability and successful-attempt-based data interchangeability, for which test cases were run against the four components of Pentaho BI Suite Community Edition, version 3.5, as specified in NFR 30: Business intelligence layer. 
It is important to point out that, although there were few poor data exchanges, there were problems with the Pentaho Design Studio 3.5 component. After adding the percentages of the two sub-characteristics, it was determined that functionality is 95% complete. **Reliability** Measurements made on reliability grant it 90%. The target percentage was fulfilled in terms of failure eradication, but the mean-time-to-recovery and mean-time-between-total-failures metrics received a qualitative assessment of Appropriate Case, since there were total failures in the interaction with the Pentaho Design Studio 3.5 component that did not meet NFR 14, which sets a recovery range from 10 minutes to 72 hours for the solution, validation and testing of the problem. **Usability** Usability is at 95%. Although the SRS does not specify the number of tutorials the application must have (an aspect needed to apply the tutorial-accessibility metric), it does state that a help facility must be implemented. Out of the 12 requirements related to usability, one is not fully implemented (NFR 12: Change component without the need to authenticate again). **Efficiency** The efficiency characteristic, evaluated by the metrics response time and user waiting time when using I/O devices, is at 100% according to the NFR measured. The first metric applies NFR 18. System response time, which sets five minutes as the maximum response time; the second applies NFR 4. Allow the use of the keyboard for quick system access operations. Some requirements, such as NFR 21. Number of users connected simultaneously, are not measured by the established metrics, and it has not been possible to prove their implementation. **Maintainability** Maintainability, measured by the metric implementation degree of diagnostic functions, is at 95%, since not all registered failures were diagnosed with such functions. 
**Portability** It is important to note that in the project management module, where the procedure was applied, this characteristic is 100% implemented: the module is easy for the user to install, since it is supported on a web application, and all NFR concerning portability were successfully developed. The implementation of Phase III on the project Integral Solution for Project Management and Centralized Actions (SIGEPAC) concludes the application of the procedure. The analysis of the results made it necessary to perform two iterations of Phase I and Phase II. After this example we can say that the procedure is easy to understand and apply, and offers a practical contribution to software specialists dedicated to validating nonfunctional requirements. **IV. Conclusions** With the completion of this research, a procedure for validating nonfunctional requirements of Information Management Systems was designed, reaching the following conclusions: - Quality standards and rules were analyzed as part of the theoretical grounding of the research, including its conceptual context and the study of requirements engineering; this showed that requirements validation and software metrics contribute to the control, monitoring and improvement of the quality of the software development process. - A procedure capable of improving the quality of nonfunctional requirements of Information Management Systems was designed; the proposal required defining a set of activities, responsibles, participants, input and output artifacts and techniques that led to its better understanding. - The procedure was applied to the Integral Solution for Project Management and Centralized Actions, determining its feasibility. REFERENCES
CSE220 - MIPS Pipelining - What we have seen so far is a very simplified approach to the execution of MIPS instructions. - We fetch an instruction, decode it, and execute it completely before starting another instruction. - Only one instruction is handled at a time by the CPU. - One disadvantage is that the datapath has several parts/components, which sit idle when other parts are in use. - In the multicycle machine we tried to optimize this somewhat by combining units (single ALU, one memory for data and instructions). We also try to do as much work as possible in each state. - The goal of Pipelining is to increase this utilization by handling several instructions at a time. **Ex: Laundry** - When we do laundry there are 4 steps: Load Washer, Move to Dryer, Fold Laundry, Store Laundry (Put Away) - Assume we have 4 loads of laundry to do. - If we do laundry the same way we have been processing instructions, one load at a time, it would take us 16 time units to do all the laundry from start to finish. - If, however, we do laundry more efficiently, using the washer again as soon as the previous load is done, we can finish much faster. - Since each load is at a different stage of the laundry process, we can work on multiple loads at a time. The total time drops to 7 time units (4 + 3), less than half of the original 16. - Once the ‘pipeline’ is full (when the fourth load – Task D – starts), we can do the laundry four times as fast. **Pipelined Datapath Execution** - In our multi-cycle implementation we have already split each MIPS instruction into 5 smaller tasks. - Instruction fetch (IF) - Instruction decode & register file read (ID) - Execution of arithmetic operation or address calculation (EX) - Data memory access (MEM) - Write back to register (WB) - By cascading the execution of these stages, we can increase the throughput of our datapath. - However, with this modification comes increased complexity. 
- Extra control is required to manage the execution of multiple instructions simultaneously - Extra registers are required to hold the intermediate values of each instruction. - What happens when two instructions need to use the same piece of hardware? We must add additional hardware to the datapath to remove these situations - Ex: PC+4 during Fetch at the same time as the ALU operation for another instruction. Both cannot use the same ALU. - Consider also when the flow of the program changes (conditional branches/jumps). In these scenarios we will have begun execution of the next instruction(s) before knowing if the branch will be taken or not (ALU stage). How do we stop their execution? - The maximum speed of the CPU will depend on how many stages we have. - More stages → higher clock speed. Why? Less work to perform per stage. - But not all instructions will require all stages… - Can I eliminate these stages? No: because other instructions require the hardware during these stages, I cannot just skip ahead to the next stage/instruction. - Furthermore, each stage may not take exactly the same amount of time. - But the critical path will still dictate the length of the clock cycle and the length of execution for each instruction (all stages). **Pros/Cons of Pipelining** - **Pros:** - Increased throughput - Parallelism - **Cons:** - Individual instruction execution time may increase - Increased complexity in the datapath (hardware/control) **MIPS Pipelining Datapath** - Pipelining increases the performance of the machine, but at the cost of greater complexity in the datapath. 
**Ex: Performance Comparison** - Assume the time for stages is - 100ps for register read or write - 200ps for other stages - For the single cycle datapath: - The minimum cycle time is the length of the longest instruction <table> <thead> <tr> <th></th> <th>Instr fetch</th> <th>Register read</th> <th>ALU op</th> <th>Memory access</th> <th>Register write</th> <th>Total time</th> </tr> </thead> <tbody> <tr> <td>lw</td> <td>200ps</td> <td>100ps</td> <td>200ps</td> <td>200ps</td> <td>100ps</td> <td>800ps</td> </tr> <tr> <td>sw</td> <td>200ps</td> <td>100ps</td> <td>200ps</td> <td>200ps</td> <td></td> <td>700ps</td> </tr> <tr> <td>R-format</td> <td>200ps</td> <td>100ps</td> <td>200ps</td> <td></td> <td>100ps</td> <td>600ps</td> </tr> <tr> <td>beq</td> <td>200ps</td> <td>100ps</td> <td>200ps</td> <td></td> <td></td> <td>500ps</td> </tr> </tbody> </table> - For the multi cycle datapath: - The minimum cycle time is the length of the longest stage, but each instruction requires multiple stages <table> <thead> <tr> <th></th> <th>Instr fetch</th> <th>Register read</th> <th>ALU op</th> <th>Memory access</th> <th>Register write</th> <th>Total time</th> </tr> </thead> <tbody> <tr> <td>lw</td> <td>200ps</td> <td>200ps</td> <td>200ps</td> <td>200ps</td> <td>200ps</td> <td>1000ps</td> </tr> <tr> <td>sw</td> <td>200ps</td> <td>200ps</td> <td>200ps</td> <td>200ps</td> <td></td> <td>800ps</td> </tr> <tr> <td>R-format</td> <td>200ps</td> <td>200ps</td> <td>200ps</td> <td></td> <td>200ps</td> <td>800ps</td> </tr> <tr> <td>beq</td> <td>200ps</td> <td>200ps</td> <td>200ps</td> <td></td> <td></td> <td>600ps</td> </tr> </tbody> </table> - For pipelining, we are executing in a multi-cycle datapath, therefore the minimum cycle time is the length of the longest stage. However, since we are executing multiple instructions simultaneously, we cannot skip stages. All instructions must “execute” all 5 stages in the same order. The MIPS ISA was designed with Pipelining in mind. 
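The timing comparison above can be checked with a short Python sketch. It is a simplification that ignores hazards and stalls; the stage latencies come from the tables:

```python
# Stage latencies in picoseconds, matching the example above.
STAGES = {"IF": 200, "ID": 100, "EX": 200, "MEM": 200, "WB": 100}

def single_cycle_time(n_instr: int, longest_instr_ps: int = 800) -> int:
    """Single-cycle: every instruction takes as long as the longest one (lw)."""
    return n_instr * longest_instr_ps

def pipelined_time(n_instr: int) -> int:
    """Pipelined: the clock equals the slowest stage; the first instruction
    finishes after 5 cycles, then one more instruction completes per cycle."""
    clock = max(STAGES.values())              # 200 ps
    return (len(STAGES) + n_instr - 1) * clock

# For a long run the speedup approaches 800/200 = 4x:
# single_cycle_time(1000) -> 800_000 ps, pipelined_time(1000) -> 200_800 ps
```

Note that a single instruction now takes 5 × 200 = 1000 ps instead of 800 ps, which is exactly the "individual instruction execution time may increase" con listed earlier.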
To make pipelining easier: - All instructions are 32-bits - Easier to fetch and decode in one cycle - Few and regular instruction formats - Can decode and read registers in one step - Load/store addressing - Can calculate address in 3rd stage, access memory in 4th stage - Alignment of memory operands - Memory access takes only one cycle As in the multicycle datapath, there are five stages, one execution step per stage: - IF: Instruction fetch from memory - ID: Instruction decode & register read - EX: Execute operation or calculate address - MEM: Access memory operand - WB: Write result back to register **Abstract View of Pipeline** - Note how in each cycle, each of the (five) instructions being executed is in a different stage. - A shaded stage indicates that the stage is being used. - A white stage means that this stage is not being used. - For the register file, **writes take place before reads**, which is the opposite from what the multicycle CPU does - This ordering is depicted by the shading of the RF stages - Register writes happen before reads so that data can be written and read back within a single cycle - In order to support pipelining in the multi-cycle datapath, modifications need to be made. - Split the memory into separate memories as in the single cycle datapath - Put the PC+4 adder and the Branch adder back as in the single cycle datapath - Additionally, each component of the datapath can only be used by a single instruction per clock cycle. If two instructions need to access/store data to the same component, there is a conflict which is called a **HAZARD**. Other hazards are the result of branch or jump instructions which modify the program counter and change the standard flow of instruction execution. (Hazards will be discussed later) - Each stage of the pipeline will be executing a different instruction, therefore all the data required from the current stage to the next stage must be stored in registers. 
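As an aside, the width of each pipeline register is simply the sum of the fields it must carry between stages. A quick Python tally (the field names are informal labels of ours; 5-bit fields are register numbers and the 1-bit field is the ALU Zero flag):

```python
# Fields carried by each pipeline register, in bits.
IF_ID  = {"Instruction": 32, "PC+4": 32}
ID_EX  = {"PC+4": 32, "RegRead1": 32, "RegRead2": 32,
          "SignExtImm": 32, "Rt": 5, "Rd": 5}
EX_MEM = {"BranchAddr": 32, "ALUOut": 32, "Zero": 1,
          "RegRead2": 32, "WriteRegDest": 5}
MEM_WB = {"ALUOut": 32, "Data": 32, "WriteRegDest": 5}

# Total width of each register is the sum of its fields.
widths = {name: sum(fields.values())
          for name, fields in [("IF/ID", IF_ID), ("ID/EX", ID_EX),
                               ("EX/MEM", EX_MEM), ("MEM/WB", MEM_WB)]}
# widths -> {'IF/ID': 64, 'ID/EX': 138, 'EX/MEM': 102, 'MEM/WB': 69}
```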
These registers are placed where the IR, A, B, Data, and ALUOut registers in the multicycle datapath resided and are expanded to hold more values. These registers are called Pipeline registers and are sized only to hold the number of bits required to be stored (not necessarily multiples of 32). - The registers are named based on the stages they sit between. For example, between the Instruction Memory and the Register File is the IF/ID. - Specifically, how big are these registers (data only)? - IF/ID register: 64 bits (Instruction & PC+4) - ID/EX register: 138 bits (PC+4, Reg Read 1, Reg Read 2, Sign Extended I value, Rt, Rd) - EX/MEM: 102 bits (Branch address, ALUOut, Zero bit, Reg Read 2, WriteRegDestination) - MEM/WB: 69 bits (ALUOut, Data, WriteRegDestination) Note, ID/EX, EX/MEM, MEM/WB will be expanded later to hold more values (for example, the control signals). Instruction Execution in Pipeline Datapath - Consider the execution of a LW instruction through different stages of the pipeline - Each stage takes the input values from the pipeline registers. Note that only the components within that particular stage are used for the instruction. - Instruction Fetch (IF) - Note that the PC+4 value is calculated, and it is forwarded to the next stage. Why? Because the PC+4 value is needed to calculate the Branch/Jump address in later stages. - # PC <= PC + 4 - # IF/ID <= Instruction Mem[PC] - Instruction Decode (ID) - Note how the branch address is not calculated in the decode cycle. 
  - # ID/EX <= Reg[rs]
  - # ID/EX <= Reg[rt]
  - # ID/EX <= sign-extended IF/ID (immediate value)
- **Execute (EX)**
  - # EX/MEM <= Reg[rs] + sign-extended immediate
  - # EX/MEM <= Reg[rt]
  - All read from the ID/EX pipeline registers
- **Memory (MEM)**
  - # MEM/WB <= Data Mem[EX/MEM ALUOut], both read from the EX/MEM pipeline registers
- **Write Back (WB)**
  - # Reg[rt] <= MEM/WB Data value
- Notice that the register file Write Register input is incorrectly specified. The register number must come from the instruction currently writing back, not from the instruction currently in the Decode stage. Therefore, we must modify the datapath to forward the write register number from the LW instruction through all the stages so that it is available at the time of the WB. (Note that we do not forward the entire instruction; there is no instruction register that holds the instruction during the entire execution.)
- Consider the SW instruction:
  - The fetch (IF) and decode (ID) stages never change, as they are instruction-independent.
  - The execute stage (EX) still calculates the memory address.
  - The MEM stage writes the data from Reg[rt] back to memory.
  - The WB stage is not needed for the SW instruction; no additional writing to registers is needed. Even though the instruction does not need to perform this stage, no other instruction can use this hardware/time. Why?

**Pipelined Datapath Control**
- Below are the control signals required in each stage in the simplified view of the datapath.

![Diagram of control signals](image)

- The control of the multicycle datapath was calculated by a single control unit in the decode stage of execution. The microcontroller/finite state machine specified the control signals for each instruction for each stage of execution.
- It is inefficient to have a finite state machine for each instruction in each stage of the pipeline, especially since we don't even save the full instruction after the Decode stage.
- In the single-cycle datapath, all control signals were calculated for the instruction in the decode stage. The pipelined datapath reverts to this approach. Note that since the instruction itself is not saved after the decode stage, the control signals for each cycle cannot be calculated during each cycle; instead, they are all calculated during the decode stage and passed along to the following stages.

**MIPS Pipelining Hazards**
- Hazards are situations, caused by the sequence of MIPS instructions or by the hardware, that prevent or complicate the execution of the next instruction in the pipeline.
- There are 3 types of hazards:
  - *Structural hazards:* two instructions want to access the same resource, aka a required resource is busy
  - *Data hazards:* required data is needed from instructions in the pipeline, aka need to wait for a previous instruction to complete its data read/write
  - *Control hazards:* the next instruction to enter the pipeline is not known, aka the control decision depends on the previous instruction

**Structural Hazards**
- These hazards occur when there is a conflict for the use of a resource.
- An excellent example is the memory in the multicycle datapath.
- **Ex:** In the multicycle datapath the instruction and data memory were combined; this reduced the amount of required hardware. Load and store instructions required access to the data in this memory during the MEM stage. The same memory is also accessed during the Fetch stage to obtain the instruction to execute.
- Fetching an instruction while a lw/sw instruction is in its MEM stage would not be possible. Therefore the fetch would need to be delayed by a clock cycle, aka a *STALL*.
- Pipelined datapaths require separate instruction and data memories to eliminate this structural hazard.

**Data Hazards**
- A data hazard occurs when an instruction depends on the completion of a previous instruction, i.e.,
the next instruction cannot finish execution because the data it needs is not available yet.

**EX:**
```
add $s0, $s2, $s3
and $t0, $s0, $s1
```

- Consider the instructions below in the pipeline one after another.

```
add $s0, $s2, $s3
and $t0, $s0, $s1
or  $t1, $s4, $s0
sub $t2, $s0, $s5
```

- add instruction: the result of the $s2 + $s3 operation is not stored back into $s0 until the WB stage (5th cycle of execution).
- sub instruction: the value of $s0 is read from the register file during the ID stage (second stage).
- The subtract instruction will fetch the incorrect value of $s0 because the read is performed while the add instruction is in the EX stage.
- As a result, to execute these two instructions correctly, we would need to stall the execution of the sub instruction by 2 clock cycles. In this way the ID (register read) would occur after the WB (register write) of the add instruction.
- Stalling the pipeline negatively impacts performance. Therefore, in order to resolve data hazards, additional hardware is added to the datapath. There are multiple approaches to try to eliminate stalls.

**Forwarding/Bypassing**
- Instead of waiting until the write back occurs, we "forward" the data as soon as it is available.
- We do not wait until the value is stored back in the register file.
- However, we need new connections (wires and multiplexor options) in the datapath.
- Forwarding does resolve data hazards in many cases, but not all.
  - Data can never be sent backwards in time!
- Ex: several data forwards solve many of the read-after-write (RAW) hazards.
- Ex:
```
lw  $s0, 20($t1)
sub $t2, $s0, $t3
```
  - The $s0 value from the load word instruction is not available until after the MEM stage.
  - This value is required by the sub instruction after ID but before the EX stage.
  - In this case, both forwarding and stalling are needed.
- A stall is needed when an R-format instruction following a load tries to use the loaded data.
  - Without the stall, the path from the memory-access stage output to the execution stage input would be going backward in time, which is impossible.
- The figure is actually a simplification!
  - We don't know until after the Decode of the sub that a stall is necessary.
- The value to be placed into $s0 is not available until after being read from memory in the 4th stage. However, it is still needed in the ID stage of the subtract. A stall is required in this case.

**Code Scheduling to Avoid Stalls**
- The order in which instructions are programmed can often be modified while still producing the same result.
- One approach to eliminating stalls due to load instructions is to reorder the code to avoid using the load result in the next instruction.
- Ex: A = B + E; C = B + F;
  - The first code sequence requires 2 stalls because the load instructions are immediately followed by references to the loaded values. The reordered code sequence, however, has at least 1 instruction between each load and its reference. The code produces the same result and the stalls are eliminated.

**Data Hazards & the Datapath**
- Consider the sequence:

```
sub $2, $1, $3
and $12, $2, $5
or  $13, $6, $2
add $14, $2, $2
sw  $15, 100($2)
```

- Which instructions will potentially have issues? What type?
  - and, or (data hazard)
  - add, sw (potential data hazard, resolved below)
- **How do we detect when to forward for these types of instructions?**
  - Register $2 is used by every instruction. The first instruction modifies its contents, and all the remaining instructions use the new value.
  - The figure highlights the ID stage where $2 is read for every instruction following the first.
- Can all hazards be resolved using forwarding?
  - When is the correct value from the sub instruction calculated and available? After EX.
  - The value from the sub EX can be forwarded to the EX stage of the and instruction.
  - It can also be forwarded from the sub MEM to the EX stage of the or instruction.
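Returning to the code-scheduling example (A = B + E; C = B + F), the effect of reordering can be sketched by counting load-use stalls. This is an illustrative model, not a real scheduler; the instruction tuples and register names are made up:

```python
def count_load_use_stalls(instrs):
    """Count 1-cycle load-use stalls: a stall occurs whenever a lw's
    destination is read by the instruction immediately after it
    (assuming full forwarding otherwise).
    Each instruction is (op, dest, sources)."""
    return sum(
        1
        for prev, cur in zip(instrs, instrs[1:])
        if prev[0] == "lw" and prev[1] in cur[2]
    )

# A = B + E; C = B + F, naive order: each add uses a value loaded
# by the instruction right before it -> 2 stalls.
naive = [
    ("lw",  "$t1", []), ("lw",  "$t2", []),
    ("add", "$t3", ["$t1", "$t2"]), ("sw", None, ["$t3"]),
    ("lw",  "$t4", []),
    ("add", "$t5", ["$t1", "$t4"]), ("sw", None, ["$t5"]),
]
# Reordered: hoist the third lw between each load and its use -> 0 stalls.
reordered = [
    ("lw",  "$t1", []), ("lw",  "$t2", []), ("lw",  "$t4", []),
    ("add", "$t3", ["$t1", "$t2"]), ("sw", None, ["$t3"]),
    ("add", "$t5", ["$t1", "$t4"]), ("sw", None, ["$t5"]),
]
print(count_load_use_stalls(naive), count_load_use_stalls(reordered))  # 2 0
```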
- For the add instruction, because writes to the register file occur before reads, there is no data hazard. The sw instruction has no data hazard either, as the sub instruction is complete before the ID stage of sw.
- NOTE: General rule of thumb: all data hazards drawn backward in time need forwarding.
- How do we detect when forwarding is needed?
  - If the rs or rt register of the current instruction is the rd register of the instruction before, or of the instruction 2 before, then forwarding is required.
  - Logically:
    - Forwarding from the EX/MEM pipeline register (instruction before):
      - 1a. EX/MEM.RegisterRd = ID/EX.RegisterRs
      - 1b. EX/MEM.RegisterRd = ID/EX.RegisterRt
    - Forwarding from the MEM/WB pipeline register (instruction 2 before):
      - 2a. MEM/WB.RegisterRd = ID/EX.RegisterRs
      - 2b. MEM/WB.RegisterRd = ID/EX.RegisterRt
  - But only if these 2 prior instructions are going to write to the rd register.
    - How do we know? The RegWrite control signal will be 1.
    - What if the rd register is $0? Then there is no need to forward, because $0 is always 0.
- To enable forwarding, modifications to the datapath and pipeline registers are required. A forwarding control unit is added.
- In order to detect the need for forwarding, we compare the rs and rt register numbers in the ID/EX stage of the current instruction with the rd register number of the 2 prior instructions.
- Remember, the rd register number for each instruction is already passed through the pipeline for the WB stage.
- The figure below modifies the datapath to introduce new multiplexors into the EX stage. These select between the rs and rt data coming from 3 different places: ID/EX (register file), EX/MEM (ALUOut), and MEM/WB (Data/ALUOut delayed by 1 cycle).
- In addition, we must consider the special case of an instruction trying to write to the $0 register, e.g., add $0, $t0, $t2.
  - In this case, the instruction is equivalent to a No Operation (nop) instruction.
But assembly has no special constructs to prevent a programmer from writing this code, as $0 is a special-case register.
  - We need to set the control logic NOT to forward the data when the destination register (rd) is $0. Therefore, we need to add the conditions EX/MEM.RegisterRd ≠ 0 and MEM/WB.RegisterRd ≠ 0.
- The forwarding unit controls these multiplexors through digital logic comparing the above-mentioned conditions.
- Logically, when are the forwarded values used? For an EX hazard:

```
if ((rsE != 0) AND (rsE == WriteRegM) AND RegWriteM)
    ForwardAE = 10
else if ((rsE != 0) AND (rsE == WriteRegW) AND RegWriteW)
    ForwardAE = 01
else
    ForwardAE = 00
```

The full datapath with hazard & forwarding unit

How do we detect when to stall?
- Stalls are required whenever the data required is not available at the time it is needed.
- This occurs when an instruction needs a value from a lw instruction.
- How do we detect when a stall is needed?
  - Check whether a register read (rs or rt) by the current instruction being decoded in the ID stage is the same as the rt of the previous instruction, and that previous instruction is a lw (its MemRead control signal will be 1).
  - Logically:
    - ID/EX.MemRead and ((ID/EX.RegisterRt = IF/ID.RegisterRs) or (ID/EX.RegisterRt = IF/ID.RegisterRt))
  - If detected, stall and insert a bubble.
- How do we stall/insert a bubble in the pipeline?
  - Force the control values in the ID stage to 0. This means the EX, MEM, and WB stages will do nothing (nop instruction).
  - Then we need to repeat the instruction. To do this, we prevent the update of the PC and of the IF/ID register:
    - The current instruction will be decoded again.
    - The next instruction will be fetched again.
  - A 1-cycle stall allows MEM to read the data for the lw; the data can then be forwarded from the MEM/WB pipeline register to the EX stage.
- Overall, stalls reduce the performance of the machine, but are required in order to produce correct results. Another approach is to have the compiler rearrange the code to avoid hazards and stalls.
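The forwarding and stall conditions above can be written out as a small executable sketch. This is a simplified model using the notes' signal names; `forward_ae` covers only the ALU 'A' input, and its twin for the 'B' input would test rtE the same way:

```python
def forward_ae(rs_e, write_reg_m, reg_write_m, write_reg_w, reg_write_w):
    """EX-hazard mux select for the ALU 'A' input:
    '10' = forward EX/MEM ALUOut, '01' = forward the MEM/WB value,
    '00' = use the register-file value. $0 is never forwarded."""
    if rs_e != 0 and rs_e == write_reg_m and reg_write_m:
        return "10"
    if rs_e != 0 and rs_e == write_reg_w and reg_write_w:
        return "01"
    return "00"

def lw_stall(mem_read_ex, rt_ex, rs_id, rt_id):
    """Load-use hazard: the instruction in EX is a lw (MemRead = 1)
    and its destination rt is read by the instruction in ID."""
    return bool(mem_read_ex and (rt_ex == rs_id or rt_ex == rt_id))

# lw $2, 0($1) in EX while and $12, $2, $5 is in ID -> stall
print(lw_stall(True, 2, 2, 5))           # True
# sub's result in EX/MEM needed by the instruction now in EX -> '10'
print(forward_ae(2, 2, True, 0, False))  # 10
```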
Compiler scheduling requires the compiler to have knowledge about the pipeline structure of the architecture it is compiling for.

**Control Hazards**
- Typical execution of instructions is one after another in memory. However, branch and jump instructions allow this linear execution to be modified.
- During pipelining, a branch instruction determines the next instruction to fetch. But the branch determination is not complete until the 4th stage, MEM (the ALU Zero AND Branch control is performed in MEM). The pipeline can't, and won't always, fetch the correct instruction (correct if the branch is not taken, incorrect if it is taken).
- If the branch outcome is determined in the MEM cycle, then 3 instructions have already started to be processed. We can reduce this branch delay by moving the branch address calculation and the branch comparison to the ID stage. Additional hardware is required.
- By performing the branch check in the decode stage, when the branch is taken the control signals for the next instruction (in the fetch stage) can be set to nop (the IF.Flush control signal).

What about when data hazards occur with branches?
- If one of the branch comparison registers is the destination register of the 2nd or 3rd previous ALU instruction, there is a data hazard. We can resolve these cases with forwarding.

```
add $1, $2, $3   IF ID EX MEM WB
add $4, $5, $6   IF ID EX MEM WB
```
...
```
beq $1, $4, target   IF ID EX MEM WB
```

- If one of the branch comparison registers is the destination register of the immediately previous ALU instruction, or of the 2nd previous load instruction, then the branch instruction must be stalled for 1 cycle. There is no way to forward the data in time to complete the branch in ID.

```
lw  $1, addr         IF ID EX MEM WB
add $4, $5, $6       IF ID EX MEM WB
beq stalled          IF ID EX MEM WB
beq $1, $4, target   IF ID EX MEM WB
```

- If one of the branch comparison registers is the destination register of the immediately previous load instruction, then the branch instruction must be stalled for 2 cycles. There is no way to forward the data in time to complete the branch in ID.

```
lw  $1, addr         IF ID EX MEM WB
beq stalled          IF ID EX MEM WB
beq stalled          IF ID EX MEM WB
beq $1, $0, target   IF ID EX MEM WB
```

In the MIPS pipeline the branch delay is not serious.
However, in longer pipelines and in superscalar pipelines, the branch penalty is more significant. In these cases, branch prediction is used, specifically dynamic branch prediction.

- The forwarding logic used in the ID stage is:

```
ForwardAD = (rsD != 0) AND (rsD == WriteRegM) AND RegWriteM
ForwardBD = (rtD != 0) AND (rtD == WriteRegM) AND RegWriteM
```

- The stall detection logic handles both an ALU instruction in the EX stage and a `lw` instruction in the MEM stage:

```
branchstall = (BranchD AND RegWriteE AND (WriteRegE == rsD OR WriteRegE == rtD))
           OR (BranchD AND MemtoRegM AND (WriteRegM == rsD OR WriteRegM == rtD))

StallF = StallD = FlushE = lwstall OR branchstall
```

**Branch Prediction**
- Branch prediction is another approach. In this case, a stall is only required if the prediction is wrong.
- Assume we predict that all branches are not taken; then we always fetch the next instruction after each branch. What happens when we predict wrong?
  - The instruction that was fetched has to be FLUSHED from the pipeline (turned into a No Operation, NOP, instruction).
- More realistic branch prediction methods exist, such as static and dynamic branch prediction.
- Static branch prediction is based on typical behavior (loops and if statements):
  - All backward branches are predicted to be taken.
  - All forward branches are predicted not taken.
- Dynamic branch prediction uses hardware to measure the actual branch behavior during execution:
  - Assume that future behavior will be the same as observed. When the prediction is wrong, the pipeline is stalled and the correct instruction fetched. The history is then updated.
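Dynamic prediction as described above (remember the outcome, assume it repeats, update on a mispredict) is commonly implemented with a 2-bit saturating counter. The sketch below is a generic textbook scheme, not hardware from these notes; the branch history is made up:

```python
class TwoBitPredictor:
    """2-bit saturating counter: states 0-1 predict not taken,
    states 2-3 predict taken. A single mispredict while in a
    'strong' state does not flip the prediction."""
    def __init__(self, state=0):
        self.state = state          # start at strongly-not-taken

    def predict(self):
        return self.state >= 2      # True = predict taken

    def update(self, taken):
        """Record the actual outcome and nudge the counter."""
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, True, False, True]   # loop-like branch history
predictions = []
for taken in outcomes:
    predictions.append(p.predict())
    p.update(taken)
print(predictions)  # [False, False, True, True, True]
```

Note how the single not-taken outcome (e.g. a loop exit) does not flip the prediction back, which is the point of using two bits instead of one.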
Integration of SAP NetWeaver BRM with SAP BusinessObjects Query as a Web Service

Applies to: SAP BusinessObjects Query as a Web Service XI 3.1 SP2 and SAP NetWeaver BRM 7.2. For more information, visit the Business Objects homepage.

Summary

This document describes the procedure to model business rules using a data model represented by Query as a Web Service. This is a convenient way to consume SAP BusinessObjects Query as a Web Service in NW Composition Environment (CE) and BRM. This paper is written in collaboration with innovation-center.sap.com (http://innovation-center.sap.com). The referenced files are available at the following location: http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/a07eec61-babf-2c10-418e-fcf453ff0937

Author: Abhishek Kumar
Company: SAP Labs India, Ltd
Created on: 11 November 2009

Author Bio

Abhishek Kumar is a Project Lead in the NW BRM team. He has been involved in the development of Business Rules Management products for the past 5 years.

# Table of Contents

- Introduction
- Business Scenario
- Required Software Components
- Step-by-Step Procedure
  - Query as a Web Service
    - Query as a Web Service Query Generation
    - Query as a Web Service Service Provisioning
  - SAP NetWeaver BRM
    - Generating Business Vocabulary
    - Ruleset Modeling
    - Ruleset Web Services Provisioning
    - Deploying Rules
  - Unified Service
    - Web Service Client for Rules Service
    - Web Service Client for QaaWS Service
- Conclusion
- References
- Copyright

Introduction

In an enterprise organization, the data against which rules are evaluated often resides in one or more databases. BRM does not per se handle data; it only processes data that is given to it. This means that rules are actually written on data retrieved using database queries. But retrieving and combining data from diverse data sources and then writing rules could quickly become a very complex and painful task. A scenario such as this one takes away the very advantage that a BRM system can offer: enabling the business user, without any technical expertise, to create, test, and maintain business rules.
This adaptation and data mapping process has to be hidden as much as possible from the final consumer and should be as easy to implement as possible.

How does one solve the problem posed by the above use case? An ideal solution would be to present database queries as business terms that can be used directly in rules. Essentially, one would require a loose coupling between database queries and business rules.

This paper presents an overview of, and the concepts behind, Business Intelligence and Business Rules integration. Both Business Intelligence and Business Rules have the ability to interact in an SOA environment and make these implementation requirements possible today. While at one level the integration described in this paper is fairly straightforward, it also shows some interesting points of integration between Business Rules and Business Intelligence:

- SAP BusinessObjects Semantic Layer (universe) as the single source of truth: BRM uses the same view on the enterprise data that the business intelligence platform presents to users. This is an important assurance that Business Rules decisions and user decisions are based on the same facts.
- A clean separation of concerns between the rules engine executing rules and the business intelligence platform:
  - The business rules developer can quickly integrate data without having to worry about how potentially complex analytics are performed.
  - Business rules are easier to change than rules embedded in the BI platform.
  - There is no need for the business rules developer to learn a new data-gathering language such as SQL or MDX.
- The SAP BusinessObjects Semantic Layer helps you quickly get analytics on any type of data source in a highly secure manner.

**Business Scenario**

Take a retail store, 'ABC'.
The manager of this store offers discounts for different product categories based on their price, quantity sold, revenue, and cost of sales. All the relevant information is stored in the company database. Business rules will be used to calculate the discount based on the different criteria.

To solve this problem, one requires the following services:

1. A service to securely fetch data from the Business Intelligence platform
2. A service that maps data to business terms that can be used to model rules, and that also processes the data returned after rules execution
3. A service to execute rules on the data provided by service 2

**Service 2** is responsible for multiple tasks and is also vulnerable to frequent changes in the database queries. Since **Service 1** fetches the data from the database that Service 2 works with, there is a very tight coupling between Service 1 and Service 2.

The complexity involved in creating and maintaining the above services can be reduced by SAP BusinessObjects Query as a Web Service and SAP NetWeaver BRM. This approach results in a solution that integrates data and rules without much code. The integration of Query as a Web Service and BRM lets users get object data without any glue code in their application. Essentially, this integration externalizes the data instantiation part from the application.

**Query as a Web Service** is a tool from SAP BusinessObjects that can create and publish a query to fetch data from the database. The query is published as a web service which can be consumed by any other application. Each query is exposed through a WSDL definition that contains all the metadata required to build the object model for the integration.

BRM allows modeling of rules based on a WSDL definition. The ruleset with the modeled rules can then be published as a web service that can also be consumed by other applications.
The Query as a Web Service and BRM integration results in a unified service: a combination of the services published by Query as a Web Service and BRM.

Required Software Components

- SAP BusinessObjects XI 3.1 SP2 Enterprise
- Tomcat or another supported web server
- JDK 1.5 or above
- .NET 2.0 framework
- SAP NW Composition Environment

Step-by-Step Procedure

Query as a Web Service

Query as a Web Service is a tool from SAP BusinessObjects which helps users create and publish a service based on a query created from a universe. A universe is a business representation of corporate data that helps end users access data autonomously using common business terms. In other words, the universe represents database queries in the form of business terms which are easily understandable to business analysts and used for report generation and rules modeling. The SAP BusinessObjects Universe Builder helps in creating universes from diverse databases. In most scenarios the universes are already available, as they are used for reporting across the company. The user only has to create queries for the terms which are needed for rules modeling. More details about Query as a Web Service can be found at http://help.sap.com/businessobject/product_guides/boexir31SP2/en/xi31_sp2_qaaws_en.pdf

Query as a Web Service Query Generation

We will use the following business terms to create the query for our solution:

- **Unit Price MSRP**: The manufacturer's suggested retail price per SKU and color
- **Quantity sold**: Number of SKUs sold
- **Sales revenue**: Sales revenue, the $ revenue of SKUs sold
- **Sold at (unit price)**: The actual unit price per SKU obtained at sale time (i.e., Revenue / Quantity)
- **Margin**: Revenue - Cost of sales
- **SKU**: Stock Keeping Unit number (SKU), the lowest level of product description
- **Discount**: A virtual field that does not exist in the database. It is added to the query to receive the discount result from the rules.
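As an illustration of the data these business terms describe, here is a hedged Python sketch; the `Row` class is only a stand-in for the object model the service exposes (its field names mirror the QaaWS WSDL types), and the figures are made up:

```python
from dataclasses import dataclass

@dataclass
class Row:
    """Illustrative mirror of the QaaWS Row type (all fields xs:double)."""
    SKU_number: float
    Unit_Price_MSRP: float
    Sales_revenue: float
    Quantity_sold: float
    Margin: float
    Sold_at__unit_price_: float
    Discount: float = 0.0   # virtual field, filled in by the rules

# Example row with made-up figures (cost of sales assumed to be 600):
row = Row(SKU_number=10001, Unit_Price_MSRP=12.0, Sales_revenue=1000.0,
          Quantity_sold=100.0, Margin=400.0, Sold_at__unit_price_=10.0)

# Sold at (unit price) = Revenue / Quantity, per the definitions above.
assert row.Sold_at__unit_price_ == row.Sales_revenue / row.Quantity_sold
print(row.Sold_at__unit_price_)  # 10.0
```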
The universe used to create queries for the above business terms is **eFashion**, which is available as an example universe with the SAP BusinessObjects XI 3.1 SP2 installation. The following screen shows the generated query:

Query as a Web Service Service Provisioning

See the attached stockService.wsdl for the QaaWS service definition. In this WSDL, the data for our business terms is represented by a Table. The table has multiple rows, and each row has SKU_number, Unit_Price_MSRP, Sales_revenue, Quantity_sold, Margin, Sold_at__unit_price_, and Discount as its child entities. The following XSD snippet shows the way the data is structured:

```xml
<xs:complexType name="Row">
  <xs:sequence>
    <xs:element name="SKU_number" type="xs:double" nillable="true" />
    <xs:element name="Unit_Price_MSRP" type="xs:double" nillable="true" />
    <xs:element name="Sales_revenue" type="xs:double" nillable="true" />
    <xs:element name="Quantity_sold" type="xs:double" nillable="true" />
    <xs:element name="Margin" type="xs:double" nillable="true" />
    <xs:element name="Sold_at__unit_price_" type="xs:double" nillable="true" />
    <xs:element name="Discount" type="xs:double" nillable="true" />
  </xs:sequence>
</xs:complexType>

<xs:complexType name="Table">
  <xs:sequence>
    <xs:element name="row" maxOccurs="unbounded" type="s0:Row" />
  </xs:sequence>
</xs:complexType>
```

**SAP NetWeaver BRM**

The WSDL exposed by Query as a Web Service is used to model the rules. The XSD types of the WSDL are used to generate aliases (business vocabulary) for rules modeling.

**Generating Business Vocabulary**

**Importing a WSDL**

1. Create a Rules Composer DC in NetWeaver Developer Studio.
2. In the Project Explorer view, expand the Rules Composer DC node, and then the src node.
3. In the context menu of the wsdl node, choose Import.
4. In the wizard that appears, expand the Web Services node and choose wsdl. Choose Next.
5.
In the screen that appears, choose Browse, specify the folder in your workspace where the wsdl file is to be placed, and choose Remote Location / File System. Choose Next.
6. In the screen that appears, choose Browse and select the wsdl file (stockService.wsdl) on your system. Choose Finish.

**Adding the XSD Elements to the Rules Composer DC**

A wsdl file can contain multiple XSD elements; you can add the required XSD elements to the Rules Composer DC.

1. In the Project Explorer view, expand the Rules Composer DC node and the Rules Modeling node, and double-click the Project Resources node.
2. In the Project Resources editor, choose the Aliases tab.
3. In the Aliases editor that appears, choose the Add button and, in the menu that appears, choose XSD Element.
4. In the Add XSD Element dialog box that appears, expand the namespace node and choose the root element.
5. Choose the "Create all default Xpath aliases for the selected element" radio button.
6. Choose Finish and save the changes.

The following screen shows how to add an XSD element for rules modeling. The XSD elements listed in the wizard are derived from the types of the *stockService wsdl*. The following screenshot shows the generated aliases for the 'runQueryAsAServiceResponse' XSD type. The default alias names can be changed to more user-friendly names; in the screen below they have been changed to Margin, Discount = (double), and Quantity_sold.

Ruleset Modeling

Once the aliases are generated, the next step is to create the ruleset. The ruleset created for this solution is very simple and contains only one rule and one decision table. The decision table is used to calculate the discount for each product SKU number.

Creating a Ruleset

1. In the Project Explorer view, expand the Rules Composer DC node and, in the context menu of the Rules Modeling node, choose New Ruleset.
2. In the dialog box that appears, enter the name of the ruleset (say stockRuleset) in the field. Choose OK.
Creating a Definition

1. Open the 'stockRuleset' editor.
2. Navigate to the Definitions page in the editor.
3. Click the '+' icon under Variable Definitions.
4. Select 'double' as the type from the drop down.
5. Give the name of the definition (say Cost_Per_SKU).

Creating a Decision Table

1. In the context menu of the ruleset node (stockRuleset), choose New Decision Table.
2. In the Decision Table Creation Wizard that appears, enter a name in the Decision Table Name field (say discountCalculationDT) and optionally enter a description in the Comments field. Choose Next.
3. On the Select the Conditions screen, press Ctrl, select the alias 'Quantity_sold' and the definition 'Cost_Per_SKU' in the Available Conditions section, and choose the Select Conditions button. The 'Quantity_sold' alias and the 'Cost_Per_SKU' definition appear in the Selected Conditions section. Choose Next.
4. On the Select the Actions screen, select the alias Discount = {double} in the Available Actions section and choose the Select Actions button. The 'Discount' alias appears in the Selected Actions section. Choose Finish.
5. Save the changes.

The following diagram shows the decision table

```
Decision Table : discountCalculationDT
```

### Documentation and Properties

<table>
<thead>
<tr>
<th>Quantity_sold</th>
<th>Cost_Per_SKU</th>
<th>Discount ={double}</th>
</tr>
</thead>
<tbody>
<tr>
<td>&lt; 50</td>
<td>&gt;= 4</td>
<td>3</td>
</tr>
<tr>
<td></td>
<td>Between 4 and 1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>&lt;= 1</td>
<td>0</td>
</tr>
<tr>
<td>&gt; 50 and &lt; 200</td>
<td>&gt;= 3</td>
<td>2</td>
</tr>
<tr>
<td></td>
<td>Between 3 and 1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>&lt;= 1</td>
<td>0</td>
</tr>
<tr>
<td>&gt; 200</td>
<td>&gt;= 2</td>
<td>1.5</td>
</tr>
<tr>
<td></td>
<td>Between 2 and 1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>&lt;= 1</td>
<td>0</td>
</tr>
</tbody>
</table>

---

### Creating an If-Then Rule

1.
In the context menu of the ruleset node (stockRuleset), choose **New Rule**.
2. In the dialog box that appears, enter a name for the rule (say stockRule) in the field. Choose OK.
3. In the rule editor that appears, under the **If** section, choose **(Add a new Condition)**. The default rule condition `Operation.isSuccessful() Equals true` appears.
4. To edit the default rule condition:
   - Choose the LHS value `Operation.isSuccessful()` and, in the drop down menu, choose the alias `ns1:runQueryAsServiceResponse/ns1:table/ns:row.getXmlElement`.
   - Choose the comparator and, in the drop down menu, choose **Not Equals**.
   - Choose the RHS value after the comparator and, in the drop down menu, choose **null**.
   To enter static values, choose each component in the default rule condition (`Operation.isSuccessful() Equals true`) and enter the value in the inline text box.
5. In the rule editor, under the **Then** section, choose **(Add a new Action)**, and in the drop down menu expand an action type node and choose the **Assign :: Cost_Per_SKU** action. Choose the RHS value and create the expression `(Sales_revenue - Margin) / Quantity_sold`. Add one more action: **Evaluate-DecisionTable :: discountCalculationDT**.

The screenshot below shows the rule:

```
rule : stockRule
Priority : 50000
Overrides :
Effectivity : Always
<Click to enter comments>
Preconditions :
+ If
    Sales summary record Not Equals null
+ Then
    Assign :: Cost_Per_SKU = (Sales_revenue - Margin) / Quantity_sold
    Evaluate-DecisionTable :: discountCalculationDT
```

### Ruleset Web Services Provisioning

Once we have the rule and the decision table ready, we need to publish this ruleset as a Web Service. We need to register the SAP Java Server in NWDS to publish it as a web service. The SAP Java Server can be configured in NWDS through the Preferences page.

### Configure SAP Java Server

Follow the steps below to configure the SAP Java Server:

1. In the Preferences page, select SAP AS Java
2.
Click Add and provide the **Instance Hostname** and **Instance Number** in the dialog.
3. Click OK.

### Rules as Web Service

Follow the steps below to generate a Web Service for stockRuleset:

1. In the **Project Explorer** view, expand the Rules Composer DC and the **Rules Modeling** nodes.
2. In the context menu of a ruleset (stockRuleset), choose **Web Service ➔ Create WSDL Artifact**.
3. In the dialog box that appears, on the **Ruleset Name: Service Attributes** page, accept the default values or make changes. Choose **Next**.
4. On the **Ruleset Name: Service Signature** page, the input and output types appear and the checkboxes are selected by default. Select all the check boxes. Choose **Next**.
5. On the **Ruleset Name: WSDL Preview** page, you should see the contents of the WSDL artifact.
6. Choose **Finish**.

Deploying Rules

You should have a running instance of SAP AS Java, and you should have configured SAP NetWeaver Developer Studio with this instance.

1. In the Project Explorer view, in the context menu of the Rules Composer DC node, choose Development Component → Build.
2. In the dialog box that appears, select the Rules Composer DC checkbox and choose OK.
3. In the context menu of the Rules Composer DC node, choose Development Component → Deploy.
4. In the Deploy DCs dialog box that appears, select the Rules Composer DC checkbox and choose OK.
5. Open the NWA and navigate to 'Web Services Navigator'.
6. Search for the service 'stockRuleset' and open the wsdl from the link.
7. Copy the wsdl to the file system. See the attached rules.wsdl for the ruleset service definition.

Unified Service

Now we have both services published. The next step is to create a client that unifies both services. Since both services work on the same type, generating the proxy client is very simple.
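Before wiring the clients together, it helps to restate what the ruleset computes per row. The sketch below renders the rule's Cost_Per_SKU assignment and the discountCalculationDT decision table in plain Java. This is illustrative only, not generated BRM code; the table's rows are evaluated top-down with the first match winning, which is how the sketch resolves the boundary overlaps of the original table.

```java
// Plain-Java sketch of what stockRuleset computes for one Table row
// (illustrative, not generated BRM code).
public class StockRulesetSketch {

    // Rule action: Cost_Per_SKU = (Sales_revenue - Margin) / Quantity_sold
    public static double costPerSku(double salesRevenue, double margin, double quantitySold) {
        return (salesRevenue - margin) / quantitySold;
    }

    // discountCalculationDT rendered as nested conditions, first match wins.
    public static double discount(double quantitySold, double costPerSku) {
        if (quantitySold < 50) {
            if (costPerSku >= 4) return 3.0;
            if (costPerSku > 1) return 1.0;  // "Between 4 and 1"
            return 0.0;                      // <= 1
        } else if (quantitySold > 50 && quantitySold < 200) {
            if (costPerSku >= 3) return 2.0;
            if (costPerSku > 1) return 1.0;  // "Between 3 and 1"
            return 0.0;
        } else if (quantitySold > 200) {
            if (costPerSku >= 2) return 1.5;
            if (costPerSku > 1) return 1.0;  // "Between 2 and 1"
            return 0.0;
        }
        // Quantities of exactly 50 or 200 fall outside the table's stated ranges;
        // the sketch defaults them to no discount.
        return 0.0;
    }
}
```

For example, a SKU with revenue 500, margin 100 and 100 units sold has a Cost_Per_SKU of 4, and the table assigns it a discount of 2.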
The unification of these two services into one entity implies:

- An internal data mapping (at design time and runtime) between the Query web service output and the Business Rule web service input.
- That the input definition of the unified web service must include the input definition of the Query web service, as well as any input definition of the Rule Engine web service left unresolved by the above data mapping process. Optionally, additional input definitions may be required to pilot the data mapping process.
- That the output of the unified web service can be empty, or composed of the full or partial output definitions of the Query and/or Business Rule web services.

We will use the SAP Java server to create a client for each of the services, and finally create a single client that gets data from the Query as a Web Service service, which is then used to invoke the Rules service.

Web Service Client for the Rules Service

Follow the steps below to create a Web Service client for the Rules service:

1. Create a Dynamic Web Project, say RulesService.
2. Create a folder 'src'.
3. Create a folder 'wsdl' in the folder 'src'.
4. Copy the file 'rules.wsdl' to the wsdl folder.
5. Right click on the 'rules.wsdl' file and select Web Services -> Generate Client, as shown in the diagram below.
6. Click **Finish** in the wizard that appears. You may choose to change the package name and set other configuration options by going through the next wizard pages. The wizard looks like
7. Clicking **Finish** creates the Java client for the Rules service. See the attached Java code.

**Web Service Client for the QaaWS Service**

Follow the steps below to create a Web Service client for the QaaWS service:

1. Create a Dynamic Web Project, say **QaaWSService**.
2. Create a folder 'src'.
3. Create a folder 'wsdl' in the folder 'src'.
4. Copy the file 'stockService.wsdl' to the wsdl folder.
5. Right click on the 'stockService.wsdl' file and select **Web Services -> Generate Client**.
6. Click **Finish** in the wizard that appears.
7.
Clicking **Finish** creates the Java client for the QaaWS service. See the attached Java code.

**Unified Service**

Follow the steps below to create a unified Web Service:

1. Create a Dynamic Web Project, say **UnifiedService**, and define project dependencies on the above two projects (RulesService and QaaWSService).
2. Create a Java class and add the methods below.
3. Expose the method 'getTableInfoFromRules' as a web service.

The code snippet below shows how we can combine both services.

```java
/**
 * Fetches the data for the Table element from the QaaWS service.
 */
public static Table getTableInfoFromQaaWS() {
    try {
        URL wsdlUrl = new URL("http://inln50076293a.dhcp.blrl.sap.corp:8080/dswsbobje/qaawsservices/?WSDL&cuid=AVYcPaCLNsJPK7v_53vcv6U");
        QName qname = new QName("storeService", "storeService");
        StoreService service = new StoreService(wsdlUrl, qname);
        QueryAsAServiceSoap queryAsAServiceSoap = service.getQueryAsAServiceSoap();

        RunQueryAsAService parameter = new RunQueryAsAService();
        parameter.setLogin("Administrator");
        parameter.setPassword("abhishek123");
        QaaWSHeader header = new QaaWSHeader();
        header.setSerializedSession("aaa");
        header.setSessionID("bbb");

        RunQueryAsAServiceResponse runQueryAsAServiceResponse =
                queryAsAServiceSoap.runQueryAsAService(parameter, header);
        return runQueryAsAServiceResponse.getTable();
    } catch (MalformedURLException e) {
        e.printStackTrace();
    }
    return null;
}

/**
 * Combines the service calls for the QaaWS and Rules services and gets the final
 * discount from the rules. The value of the Table element retrieved from QaaWS
 * (in the method getTableInfoFromQaaWS) is mapped to the Table element that is
 * passed as input to the rules service.
 */
public static Table getTableInfoFromRules() {
    try {
        // NOTE: wsdlUrl must point to the stockRuleset WSDL deployed on your AS Java
        // (placeholder below; the original document does not show the value).
        URL wsdlUrl = new URL("http://<host>:<port>/stockRuleset?wsdl");
        QName qname = new QName("http://www.sap.com", "stockRuleset");
        StockRuleset stockRuleset = new StockRuleset(wsdlUrl, qname);
        StockRulesetPortType stockRulesetPort = stockRuleset.getStockRulesetPort();

        // Fetch the Table information from the QaaWS service
        RunQueryAsAServiceResponse response = new RunQueryAsAServiceResponse();
        Table table = getTableInfoFromQaaWS();
        response.setTable(table);

        // Pass the Table information to the rules service
        RulesTypesDemoSapComStockrulesStockRulesetStockRuleset parameter =
                new RulesTypesDemoSapComStockrulesStockRulesetStockRuleset();
        parameter.setRunQueryAsAServiceResponse(response);
        RulesTypesDemoSapComStockrulesStockRulesetStockRuleset invokeRules =
                stockRulesetPort.invokeRules(parameter);
        RunQueryAsAServiceResponse rulesResponse = invokeRules.getRunQueryAsAServiceResponse();

        // Get the final Table value after rules execution
        Table tableResponse = rulesResponse.getTable();
        List<Row> rowsRules = tableResponse.getRow();
        for (Row row : rowsRules) {
            System.out.println(row.getSKUNumber());
            System.out.println(row.getDiscount());
        }
        return tableResponse;
    } catch (MalformedURLException e) {
        e.printStackTrace();
    }
    return null;
}
```

Please see the attached Java code for the unification of the Query as a Web Service and Rules services.
The table below shows the final discount obtained for some of the products after the execution of the service:

<table>
<thead>
<tr>
<th>SKU Number</th>
<th>Discount</th>
</tr>
</thead>
<tbody>
<tr>
<td>115121.0</td>
<td>1.5</td>
</tr>
<tr>
<td>116256.0</td>
<td>3.0</td>
</tr>
<tr>
<td>119427.0</td>
<td>3.0</td>
</tr>
<tr>
<td>120114.0</td>
<td>3.0</td>
</tr>
<tr>
<td>121764.0</td>
<td>3.0</td>
</tr>
<tr>
<td>122709.0</td>
<td>3.0</td>
</tr>
<tr>
<td>128390.0</td>
<td>2.0</td>
</tr>
<tr>
<td>128969.0</td>
<td>3.0</td>
</tr>
</tbody>
</table>

Conclusion

Combining Query as a Web Service and BRM into one entity hides the complexity of the data mapping and its flow from the business user. It also offers simplicity through a semantic translation of the Query output data model to the Rule Engine input definition, with a simple data mapping that allows the final integration to enrich the information returned.

References

- For more information on SAP BusinessObjects Query as a Web Service, see the Web Intelligence chapter "Sharing Web Intelligence content with other Web applications"
- For more information on SAP NetWeaver CE BRM, see "Integration of SAP NetWeaver BRM with SAP BusinessObjects Query as a Web Service"

Copyright

© Copyright 2009 SAP AG. All rights reserved.
0. Introduction to This Syllabus
   0.1 Purpose of this document
   0.2 Cognitive Levels of Knowledge
   0.3 The Examination
   0.4 Business Outcome
   0.5 Specialization
1. Course Introduction - 15 minutes
   Literature
2. Introduction to Mobile Application Types and Their High Level Architecture - 60 min (K2)
   2.1 Different types of Mobile Applications - 10 minutes (K1)
   2.2 Mobile Application Architecture - 20 minutes (K2)
       2.2.1 Client-side architecture
       2.2.2 Server-side architecture
       2.2.3 Connection Types
   2.3 Development Environments and Tools - 30 Minutes (K1)
       2.3.1 Mobile Application Development Environment and Tools
       2.3.2 Emulators & Simulators
3. Introduction to Performance Testing Concepts - 70 min (K2)
   3.1 The importance of Performance Testing - 10 minutes (K1)
       3.1.1 Purpose of Performance Testing
       3.1.2 Performance testing focus
   3.2 Key terms and concepts in Performance world - 60 minutes (K2)
       3.2.1 The main concepts used in the Performance world
       3.2.2 The key terms that are used in the Performance world
       3.2.3 The main items required in the Performance test environment
4. Performance Testing Process, Strategy & Approaches - 45 minutes (K2)
   4.1 Challenges in Mobile Application Performance Testing - 20 minutes (K2)
   4.2 Performance Testing Process for Mobile Apps - 25 minutes (K1)
       4.2.1 Test Process
       4.2.2 Mobile Performance Testing
5. Performance Testing solutions for different Mobile Applications - 605 minutes (K3)
   5.1 Common Performance Issues for Mobile Applications - 20 minutes (K1)
       5.1.1 Common Performance Issues
   5.2 Native App Performance Testing - 210 (Android) + 120 (iOS) minutes (K3)
       5.2.1 Common Performance Issues
       5.2.2 Common Tools and Indicators to be monitored
   5.3 Mobile Web App Performance Testing - 130 (Android) + 30 (iOS) minutes (K3)
   5.4 Mobile Network Performance - 20 minutes (K1)
   5.5 Server Side Performance Testing - 30 minutes (K3)
       5.5.1 Method and Setup
       5.5.2 Monitoring and Analysis

©iSQI GmbH 2017, CMAP-PT-Syllabus-V1.0R2_EN Page 2 of 15

0. Introduction to This Syllabus

0.1 Purpose of this document

This syllabus defines the content of the international qualification scheme for the "Certified Mobile App Professional – Performance Testing" (CMAP-PT). It has been established by the Special Interest Group (SIG) of the International Software Quality Institute (iSQI). CMAP-PT is an introduction to mobile application performance testing. It provides an excellent introduction to performance testing in the mobile world for different kinds of apps, together with the most relevant techniques and terminology.
The iSQI SIG CMAP-PT has created:

- The syllabus
- The Business Outcomes (BO)
- The course material, including practical exercises and other artifacts

The course material can be licensed to training providers. In order to license the material, the training provider must have at least two trainers who hold the CMAP-PT certificate. The SIG CMAP-PT-FL qualification is an entry-level certification aimed at anyone involved in mobile app performance testing: project managers, quality managers, software development managers, business analysts, developers, testers, load & performance testers, IT directors and management consultants. It is assumed that trainees have basic knowledge of software testing concepts. It is recommended that the candidate holds a foundation level certificate such as "ISTQB® Certified Tester – Foundation Level" (ISTQB®-CTFL) and "Certified Mobile App Professional – Testing" Foundation (CMAP-FL).

0.2 Cognitive Levels of Knowledge

Detailed Learning Objectives (LO) are indicated for each section in this syllabus. These objectives identify what the trainee will be able to do following the completion of each module. They are classified as follows:

- Level 1: Remember (K1)
- Level 2: Understand (K2)
- Level 3: Apply (K3)

The top-level heading for each chapter shows the highest level of learning objectives that is covered within the chapter. The definition of these cognitive levels matches the definition given in the ISTQB® Certified Tester scheme to guarantee compliance with, and thus integrity to, this scheme. Please refer to [CTFL2011] for more details.

0.3 The Examination

The CMAP-PT certificate examination will be based on this syllabus. Answers to examination questions may require the use of material based on more than one section of this syllabus. All sections of the syllabus are examinable. The exam is a 60-minute, 40-question multiple-choice exam. Examinations may be taken after the training course or taken later (e.g. in a public examination).
0.4 Business Outcome

This section lists the Business Outcomes expected of a candidate who has achieved the CMAP-PT Foundation Level certification. A CMAP-PT-FL professional can:

- BO1: Be familiar with Mobile Application Performance Testing concepts
- BO2: Be able to define the challenges in the Mobile Application domain, in particular performance
- BO3: Be able to apply the Load & Performance process in the Mobile Application testing world
- BO4: Be able to define the performance strategy and approaches when testing different types of Mobile Applications
- BO5: Be familiar with the different attributes of Performance Testing for Mobile Applications
- BO6: Be able to apply Performance Testing for different Mobile Application solutions – Native, Web and Hybrid apps
- BO7: Be able to identify the relevant attributes, how to monitor them, and how to present the results

0.5 Specialization

CMAP-PT is one of a family of CMAP certifications that target different proficiency levels as well as specializations. Other certifications from CMAP are listed below:

- Certified Mobile Application Professional – Testing Foundation Level
- Certified Mobile Application Professional – Security Testing
- Certified Mobile Application Professional – Automation Testing

1. Course Introduction - 15 minutes

**Literature**

[SILLARS 2016] High Performance Android Apps - Doug Sillars
[VO 2011] Pro iOS Apps Performance Optimization - Khang Vo
[CTFL2011] ISTQB Foundation Level Syllabus
[CMAP-FLT2012] Certified Mobile Application Professional – Testing (Foundation)

The Certified Mobile Application Professional – Performance Testing (CMAP-PT) certification enables the holder to:

- Assist in the adaptation of current performance testing processes to a mobile app performance testing process.
- Adapt existing performance testing experience and knowledge to develop performance tests for mobile applications.
- Identify and apply appropriate methods for performance testing of mobile applications.
- Develop and execute performance tests for web, native and hybrid applications using open source tools.
- Assist in the identification of the requirements of a test lab for carrying out mobile application performance testing, and provide instructions and tips for troubleshooting and performance test reports.

The syllabus has the following sections:

- Course Introduction *(this section)*
- Introduction to Mobile Application types and their High Level Architecture
- Introduction to Performance Engineering Concepts
- Mobile Application Performance Testing Process, Strategy & Approaches
- Performance Testing solutions for Mobile Applications
- Performance testing Native Apps – Android and iOS
- Performance testing Web Apps – Android and iOS
- Performance testing Web Apps – server side
- Performance testing Hybrid Apps

The exam structure and question distribution are explained as part of the course material. The course timing includes the time taken for the subject discussion as well as the exercises, and the exam question distribution follows the timing described in the syllabus.

2. Introduction to Mobile Application Types and their High Level Architecture - 60 min (K2)

Terms: Client, Server, Synchronous and Asynchronous connection, Emulators and Simulators, Native Apps, Mobile Web Apps, Hybrid Apps

2.1 Different types of Mobile Applications - 10 minutes (K1)

**PTFL2.1-1 Be able to recall different types of mobile applications (K1)**

There are various types of mobile applications, such as native, browser-based or hybrid mobile applications. Some applications come pre-installed on the mobile device and others can be downloaded from the respective stores or marketplaces and installed. Each type of application has certain advantages and disadvantages, requiring an engineering decision to be made before starting application development. Testing each of these application types may require a different approach.
2.2 Mobile Application Architecture - 20 minutes (K2)

**PTFL2.2-1 Be able to understand the general architecture (Client & Server) of mobile applications (K2)**
**PTFL2.2-2 Be able to classify the development environment for mobile devices and their tools (K2)**
**PTFL2.2-3 Be able to understand the different connection types and data sync methods that can have an impact on the performance of mobile applications (K2)**

There are multiple ways to architect a mobile application. Some of the considerations in choosing a particular architecture or design decision are:

- Who is the target audience for the application?
- What kind of application do we want to build – Native/Hybrid/Web application?
- Is the application meant to run across various mobile and non-mobile platforms?
- What are the connectivity needs for the application?
- What is the data storage need for the application?

### 2.2.1 Client-side architecture

A client-side application can be a thin client or a fat client. Thin-client applications do not have customized application code and make minimal use of the features provided by the mobile operating system, whereas thick/fat-client applications may have multiple layers of application code and may make use of features provided by the mobile operating system. The communication and data storage needs between client and server also play a role in choosing the appropriate architecture.

### 2.2.2 Server-side architecture

The server-side architecture can be single-tier or multi-tier. In a single-tier architecture, all server-side components, such as the application server and database server, are clubbed into one unit, whereas in a multi-tier architecture they are spread across various units.

### 2.2.3 Connection Types

There are various types of connections, such as Wi-Fi, cellular data networks and Bluetooth, and data synchronization methods such as push and pull.
The devices can operate in one of three modes – always connected, never connected or partially connected – each mode being useful in certain situations.

2.3 Development Environments and Tools – 30 Minutes (K1)

**PTFL 2.3-1 Be able to recall the architecture of iOS and Android (K1)**
**PTFL 2.3-2 Be able to identify and recall the purpose of some of the common tools that are supplied as part of Android/iOS application development platforms (K1)**

2.3.1 **Mobile Application Development Environment and Tools**

All the operating systems have different sets of tools for developing mobile applications. It is useful to know which OS/platform uses which tools, and also which host operating system can be used to install and use these tools. Understanding the platform greatly helps in testing applications on that platform. It is important to get an overview of the architecture, the storage used and the supported programming languages for application development for the major mobile operating systems, namely iOS, Android, Windows Mobile and Blackberry.

There are currently two major players in the smartphone market that provide mobile operating systems: Google, which has the Android operating system, and Apple, which has iOS. Each of these OSes has a layered architecture. The OSes provide various services to applications without allowing the applications to access hardware directly, or even through low-level libraries and routines. There are various tools provided with the development platforms to facilitate application development, testing and debugging.

Note: two other popular operating systems – Blackberry OS by Blackberry Limited (earlier Research in Motion or RIM) and Windows Mobile by Microsoft – are not covered.
2.3.2 **Emulators & Simulators**

**PTFL 2.3-3 Understand differences between emulators and simulators (K2)**
**PTFL 2.3-4 Understand the application of emulators/simulators for mobile application testing (K2)**

Emulators are very useful in the early stages of development, as they typically integrate with development environments and allow quick deployment and testing of applications. Emulators are also used to reduce the cost of test environments by replacing real devices with emulators. However, an emulator cannot fully replace a device, because an emulator may behave in a different manner than a mobile device. Emulators may not support all mobile device features; in addition, some hardware features may not be supported, such as touch, the accelerometer and others. Simulators, too, are tools that mimic the device. However, unlike emulators, which can consume a device executable, simulators require applications to be compiled specifically for them.

3. Introduction to Performance Testing Concepts – 70 min (K2)

Terms: Performance Testing, Load Testing, Stress Testing, Scalability, Use/Load Profile, Throughput, Stability, KPI, Bottleneck, Memory leak, Crash, Deadlock, Latency

3.1 The importance of Performance Testing – 10 minutes (K1)

PTFL3.1-1 Be able to recognize the main purpose of Performance Testing (K1)

3.1.1 Purpose of Performance Testing

The main purpose of performance testing is to determine how a system performs in terms of responsiveness and stability under a particular workload, and to mitigate the related risks. The risks can address the ability of the application to handle the expected workload (usually the amount of transactions) which a set of users perform concurrently over a given duration; the ability of the application to respond within the required time; and the ability of the application to be available, stable and reliable while load is running (normal and/or stress).
Performance testing can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

3.1.2 Performance testing focus

Performance testing is focused on generating load on the back end (servers) of the system under test, by using load tools to simulate the transactions run by the client interface(s) that generate the traffic towards the servers using the relevant protocols.

3.2 Key terms and concepts in the Performance world – 60 minutes (K2)

PTFL3.2-1 Be able to explain the various concepts in Performance (K2)
PTFL3.2-2 Be able to categorize the key terms in Performance into business and technical aspects (K2)
PTFL3.2-3 Be able to recognize the main items required to be in the performance test environment (K2)

3.2.1 The main concepts used in the Performance world

The performance world has its own unique tests and concepts, and the performance activities are driven by various concepts. The purpose of performance testing is to mitigate risks in aspects related to the way the product – system, application or solution – behaves under load (its performance), addressing in addition various non-functional attributes like stability, reliability, availability, resiliency etc. The risk mitigation is measured against the cost of failures, the rate of failure and recovery, and the response time of the business flows vs. the times needed in the market. In addition, we look for system bottlenecks and additional failure attributes, such as memory leaks, indications of crashes and deadlocks, and the limits of a module's or component's abilities. Tests need to be performed for the devices to measure the impact on the performance of the application and the device – mainly against the KPIs defined for them.
Tests also need to be performed for the server side, to measure the impact of load on the application performance and the server-side performance – again against the KPIs defined for them.

### 3.2.2 The key terms that are used in the Performance world

The main concepts address business needs through business processes defined to be measured. These business processes are captured in a load model that includes the usage model (how will the users use the system?) and the traffic model (how will the components of the system handle it?). In addition, there are figures placed/targeted for each component in the system in terms of capacity – how much traffic, how many transactions etc. the component can handle – and by that we create a map between the transactions, the components and the infrastructure that holds the components.

The additional information that needs to be in place before exercising the load model is the profiles that the system should have, in terms of data, scale of data and complexity of the data, mimicking the real world as much as possible. If it is a new product, the load testing manager and the product manager need to define the profiles aligned with the business needs/targets set for this product to succeed.

The system shall be loaded accordingly and then measured against the Key Performance Indicators (KPIs) defined as success criteria. Each KPI has its own values and thresholds to be measured against. All of the above shall be handled by a process which is specific to the way we handle load and performance tests.

### 3.2.3 The main items required in the Performance test environment

There are mainly three types of items that are required when considering the environment for performance tests as well as for production: the physical test environment (servers, machines, network devices etc.) with its hardware, software and network configuration;
Tools (including test tools, monitoring tools etc.); and the resources available from the testing team.

4. Mobile Application Performance Testing Process, Strategy & Approaches – 45 minutes (K2)

Terms: Performance Test Plan, Test Environment

4.1 Challenges in Mobile Application Performance Testing – 20 minutes (K2)

PTFL 4.1-1 Understand and recall the challenges in Mobile Application Performance Testing (K2)

Performance testing for mobile applications requires a process to be established, as in any other software testing project. One of the main key points is to identify and be aware of the challenges we have in the mobile application performance testing domain. While part of these projects, you will be required to address challenges such as the ability of the application to handle exponential growth. In addition, meeting the required performance criteria involves addressing various challenges such as limited mobile network bandwidth and other mobile network issues; exponential growth in user sessions, transactions and data transfer; the ability to support multiple device types, multiple mobile operating systems and platforms, unique user interfaces and various mobile application types; assessing the capabilities of the client side as well as the server side; the variety of technologies that is currently growing, such as cloud-based apps; the capabilities of the devices and concurrent apps running simultaneously; and the consumption of the device's resources while the apps use different types of sensors and supportive performance tools.
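Looking back at the KPIs and thresholds described in section 3.2.2, the pass/fail evaluation of measured values against defined thresholds can be sketched as below. The KPI names, threshold values and measured values here are hypothetical, chosen only to illustrate the mechanism; real projects define their own success criteria.

```python
# Hypothetical KPIs: each maps to a threshold and a measured value.
# Throughput is higher-is-better; the other KPIs are lower-is-better.
kpis = {
    "p90_response_time_s": {"threshold": 2.0, "measured": 1.7},
    "error_rate_pct":      {"threshold": 1.0, "measured": 2.3},
    "throughput_tps":      {"threshold": 100, "measured": 140},
}

def evaluate(kpis):
    """Return the names of KPIs that failed their thresholds."""
    failed = []
    for name, v in kpis.items():
        if name.startswith("throughput"):
            ok = v["measured"] >= v["threshold"]
        else:
            ok = v["measured"] <= v["threshold"]
        if not ok:
            failed.append(name)
    return failed

print(evaluate(kpis))  # ['error_rate_pct']
```

A real KPI catalogue would also record the load profile under which each value was measured, since a threshold is only meaningful relative to a workload.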
4.2 Performance Testing Process for Mobile Apps – 25 minutes (K1)

4.2.1 Test Process

PTFL 4.2-1 Recall the phases of the Performance Testing Process (K1)

The testing process applicable to mobile application testing requires the following steps:

- Performance Test Plan
- Identify the Test Environment
- Identify Performance Acceptance Criteria
- Plan and Design Tests
- Configure the Test Environment
- Script Development
- Monitoring Setup
- Execute the Tests
- Result Analysis & Diagnostics
- Report and Retest
- Closure

4.2.2 Mobile Performance Testing

PTFL 4.2-2 Recall the Performance Testing Test Objects for Mobile Application Testing (K1)
PTFL 4.2-3 Recall the Performance Testing requirements for Mobile Application Testing (K1)

Mobile app performance testing consists of client-side testing done on the device for all types of apps, namely native apps, web apps and hybrid apps; on the server side it consists of the usual performance testing, as for desktop/web apps. However, on mobile devices the tools used to capture traffic information may be different. Network-side performance testing requires the ability to mimic real-world usage and conditions of the network, signal etc. Mobile performance testing requires support for device simulation and support for network emulation with an ability to simulate mobile networks such as EDGE, 3G, 4G etc. There might also be a requirement for support of various technologies such as AJAX, HTML 5, JSON, Flex, Oracle Forms, Silverlight, .NET, SOAP, SAP etc. For the purpose of setting up monitoring and reporting, the collection of various device parameters such as CPU, memory usage, battery usage, rendering time, network data usage etc. requires the use of various tools, many of which are supplied by the mobile OS vendors along with the SDK. To reduce the cost of load generation infrastructure and tools, cloud-supported tools may also be used.

5. Performance Testing solutions for different Mobile Applications – 605 minutes (K3)

Terms: Monitor, Instruments, Jank, Memory Leaks, Heap, RRC, HTTP Pipelining, Caching

5.1 Common Performance Issues for Mobile Applications – 20 minutes (K1)

PTFL 5.1-1 Recall common performance issues for Mobile Applications (K1)

5.1.1 Common Performance Issues

Some of the common performance issues faced by a large number of mobile applications are:

- Large amounts of data transfers
- Large storage requirements
- The app being battery-unfriendly
- Memory leaks and resource-hogging applications
- Slow response time for startup and various actions
- Graceless handling of unexpected situations like network outages and others
- Server-side performance issues

5.2 Native App Performance Testing – 210 (Android) + 120 (iOS) minutes (K3)

PTFL 5.2-1 Be able to understand the reasons for performance issues for native apps (K2)
PTFL 5.2-2 Be able to monitor performance for Android and iOS native apps using various tools (K3)

5.2.1 Common Performance Issues

Native apps face various performance challenges because of weaker mobile CPUs and less memory and storage space. Network issues too need to be taken into consideration. Power consumption is almost never an issue for common desktop/web applications, but it is a very important factor for mobile devices because of their limited battery capacities. There is a variety of sensors available on the devices which need to be handled appropriately, and their power consumption also needs to be taken into account when designing an application.

5.2.2 Common Tools and Indicators to be monitored

Android IDEs such as Android Studio come bundled with the monitoring tools called Monitor and Android Device Monitor, and Xcode for iOS comes with a tool called Instruments.
These tools, in addition to other third-party tools, can be used to monitor various device-side parameters such as the battery, the display, various network-related statistics, and various resources such as the CPU, threads, memory, storage etc. For Android devices, understanding wakelock behavior and monitoring apps for appropriate wakelock use is critical to well-performing apps. ADB (Android Debug Bridge) provides various commands, such as dumpsys, for getting access to the various types of logs (e.g., battery stats, CPU info etc.) created on the device during normal operations, and these logs can be used to analyze the app's performance behavior. Some of the tools used are built in, such as the battery status screen on the device, while others are external tools, such as Battery Historian. An app in iOS moves through various states, and at each transition various resource usage parameters need to be checked. The Instruments tool can be used to monitor various parameters and find out, amongst other things, memory leaks, heap allocations, power profiles etc.

### 5.3 Mobile Web App Performance Testing 130 (Android) + 30 (iOS) minutes (K3)

**PTFL 5.3-1** Be able to monitor performance for Android and iOS Web apps using various tools (K3)

Web applications, including mobile web applications, rely on the browser on the desktop/device. Some applications use a mobile-specific version of a website, whereas others use responsive design and may send different content based on the user agent identification string sent by the browser. In all cases the client-side performance depends on some common things, such as Radio Resource Control, HTTP pipelining, the browser cache and JavaScript execution. Concepts like HTTP pipelining and the waterfall chart need to be understood in order to analyze the reasons for poor client-side performance. In addition there are browser developer tools (Dev tools), such as those in Chrome and Safari. For Android, ARO is an example of a tool that provides detailed analysis of performance.
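The log-based analysis mentioned in section 5.2.2 (e.g. the output of `adb shell dumpsys cpuinfo`) is often automated by parsing the dump text. A minimal sketch follows; note that the embedded sample is only shaped like real dumpsys output, whose exact format varies across Android versions, so the regular expression is an assumption to adapt per device.

```python
import re

# Illustrative excerpt in the shape of `adb shell dumpsys cpuinfo` output;
# treat the format as an assumption, not a stable contract.
SAMPLE = """\
Load: 0.85 / 0.92 / 0.97
  4.7% 1231/com.example.app: 3.1% user + 1.6% kernel
  2.0% 845/system_server: 1.2% user + 0.8% kernel
"""

# total% pid/process_name: ...
LINE_RE = re.compile(r"^\s*([\d.]+)%\s+\d+/([\w.]+):", re.MULTILINE)

def per_process_cpu(dump):
    """Map process name -> total CPU percentage parsed from a cpuinfo dump."""
    return {name: float(pct) for pct, name in LINE_RE.findall(dump)}

print(per_process_cpu(SAMPLE))
```

In a real pipeline the `SAMPLE` string would be replaced by the captured command output, and the parsed values would be logged per test run to spot regressions.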
### 5.4 Mobile Network Performance 20 minutes (K1)

**PTFL 5.4-1** Be able to recall the use of various tools for network emulation (K1)

Network emulation allows users to simulate/emulate various network conditions. There are many types of emulators available, some of them being very comprehensive and powerful but difficult to set up, and others which are easy to set up and are part of various load testing applications such as LoadRunner and NeoLoad.

### 5.5 Server Side Performance Testing 30 minutes (K3)

**PTFL 5.5-1** Be able to understand the method of performance testing the server side (K2)

**PTFL 5.5-2** Be able to use at least one tool for load testing the server side (K3)

#### 5.5.1 Method and Setup

Server-side performance testing requires monitoring various parameters on the server side. The tools used for loading the server side require capturing network traffic either on the device or using a proxy on the desktop. The difference from web app performance testing is that the traffic to be captured comes from mobile devices rather than web browsers.

#### 5.5.2 Monitoring and Analysis

Various parameters on the server side need to be monitored and later analyzed for bottleneck identification. Some of them are: processor, memory, network I/O, disk I/O and various OS/application-specific parameters. Once the data is collected, it needs to be analyzed to understand the bottlenecks. Various tools like JMeter, LoadRunner, NeoLoad etc. can be used for server-side performance testing of mobile applications.

*Note: Spend 60 minutes on any one of the tools.*
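As a minimal, tool-agnostic sketch of the server-side load testing described in section 5.5 (a real project would use JMeter, LoadRunner or NeoLoad), the following drives concurrent requests against a local stand-in server and reports latency percentiles; the server, concurrency level and request count are all illustrative choices.

```python
import statistics
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    """Trivial stand-in for the system under test."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

def timed_get(url):
    """Issue one GET and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# Start the stand-in server on an ephemeral port.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

# 50 requests with 10 concurrent virtual users.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = sorted(pool.map(timed_get, [url] * 50))

print(f"median={statistics.median(latencies) * 1000:.1f} ms  "
      f"p90={latencies[int(len(latencies) * 0.9)] * 1000:.1f} ms")
server.shutdown()
```

The same structure (generate load, collect per-request timings, summarize against KPIs) is what the dedicated tools provide at scale, together with ramp-up profiles, protocol support and server-side monitoring integration.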
A Preliminary Study of Sequence Effects in Judgment-based Software Development Work-Effort Estimation

Stein Grimstad, Magne Jørgensen
Simula Research Laboratory, P.O. Box 134, NO-1325 Lysaker, Norway
{steingr,magnje}@simula.no

ABSTRACT

Context: Software development effort estimates are often inaccurate, and this inaccuracy causes problems for the clients as well as the providers. Consequently, we need more knowledge about the estimation processes, so that we can improve them. Objective: This study investigates how initial judgment-based estimation of work effort in software development affects subsequent, unrelated estimation work. Method: Fifty-six software professionals from the same company were allocated randomly to two groups. One group estimated the most likely effort required to complete a small software development task, while the other group estimated the effort required to complete a large task. After that, all the subjects estimated the effort required to complete the same medium-sized task. We replicated the experiment in another company (with 17 software professionals). Results: We found that sequence effects may have a strong impact on judgment-based effort estimates. Both in the first experiment and in the replication, the subsequent estimates were assimilated towards the subjects' initial estimate, i.e., the group that began with a small task supplied, on average, lower estimates of the medium-sized task than the group that began with the large task. Conclusion: The results of this study suggest that knowledge about sequence effects may be important in order to improve estimation processes. However, we currently have a quite incomplete understanding of how, when and how much sequence effects affect effort estimation. Consequently, further research is needed.

Keywords: software effort estimation, judgment-based estimation, sequence effects.

1.
BACKGROUND Several research studies have found that accurate software estimation is an important factor for success in software development projects; see e.g. (Lederer and Prasad, 1995; Ropponen and Lyytinen, 1997). Unfortunately, a recent survey (Møløkken and Jørgensen, 2003) reports that the average estimation error is about 30% in software development projects. We may conclude that there is an urgent need for more accurate software estimates. A better understanding of the processes of human judgment that are relevant to software effort estimation may be important in order to reduce estimation error, because human judgment plays a central role in almost all software estimation. The relevance for judgment-based estimation processes (e.g. expert estimation) is obvious. Not so obvious, but important nevertheless is its relevance for formal estimation models; it typically plays an important role in providing input to the models, selection of estimation model, etc. Human judgment has been studied extensively in other fields of research, such as cognitive and social psychology, experimental economics, forecasting, jury decisions, and consumer research. These studies have revealed numerous shortcomings in human judgment; see e.g. (Koehler and Harvey, 2004; Tversky and Kahneman, 1974). Previous studies have demonstrated that several of these issues are relevant to software estimation. For example, estimates are usually over-optimistic (Bergeron and St-Arnaud, 1992), over-confident (Jørgensen et al., 2004), inconsistent (Grimstad and Jørgensen, 2007), assimilated towards judgmental anchors (Aranda and Easterbrook, 2005) and affected by irrelevant information (Jørgensen and Grimstad, 2008). Knowledge about such shortcomings increases our understanding of the estimation process, and is important input for the development and improvement of estimation methods. 
For example, in (Jørgensen and Grimstad, 2008) we show that estimators who have a vested interest in the outcome of the estimation process are typically poor at making realistic estimates. It would, therefore, be wise to avoid using such persons as estimators. In this study we focus on whether software professionals' current effort estimation work may be affected by unrelated estimation work that they have recently conducted. Will, for example, their estimates be too optimistic if they have recently estimated a very small task? Research on human judgment suggests that this may be the case. There is substantial evidence that activating a construct in one task, often referred to as contextual priming, increases the likelihood that it will later affect a subsequent, unrelated task; see e.g. (Higgins, 1996). For example, (Thomas et al., 2007) found that the duration of a just-completed anagram task affected the prediction of the duration of the next, structurally different, anagram task, and that this led to over-optimistic estimates when the previous duration was shorter and to overly pessimistic estimates when it was longer than the current task. In software estimation, it is common for software development tasks to be estimated directly after each other. Typically, a project is broken down into subtasks, which are then estimated in separate estimation sessions. If the order in which the tasks are estimated affects the estimates, as research in other fields suggests, there may be orders in which tasks are estimated that are likely to provide more realistic estimates than others. However, few estimation methods address the order in which the tasks are estimated. This is, perhaps, not surprising, because we are not aware of any research studies that have investigated the effects of the sequence in which tasks are estimated in the context of software engineering. However, it would be useful to conduct such studies.
In addition to offering practical advice, they may also contribute to a better understanding of the underlying steps involved in the cognitive processes of software estimation. The lack of previous research and the practical and scientific relevance of the topic motivated the research question in this study:

**RQ: How does the sequence in which software development tasks are estimated affect the estimates in the judgment-based estimation of the most-likely software development effort?**

We conducted a quasi-experiment to investigate our research question. The experiment was designed to test how estimating a large task vs. estimating a small task affects the subsequent estimation of a medium-sized task. We replicated the experiment in order to test the robustness of the results on different subjects. The remainder of this paper is organized as follows. In Section 2 we present the experiment, the replication and the results. In Section 3, we discuss the limitations of the study, suggest guidelines, and discuss possible explanations for the effect. Section 4 summarizes.

2. EXPERIMENTS

2.1 Experimental Design

**Subjects**

The experiment was conducted as a part of an in-company estimation seminar in a medium-sized consultancy company located in eastern Norway. The company's main focus is web-based development for external clients. The 56 subjects described themselves as developers, designers, architects, technology experts, project leaders, and managers, i.e. as experienced software professionals who had different backgrounds and fields of expertise. However, most of the subjects had a technical education and most had previously been involved in the estimation of several software development projects. The subjects did not receive any payment. Instead, we used the results to illustrate key issues in the seminar.

**Material**

We created three independent requirement specifications. Each described a software development task.
The amount of functionality and complexity in each requirement specification differed. We characterized the tasks as small (TS), medium (TM) and large (TL), according to the amount of effort required to complete the tasks. The tasks were based on the use of standard web-related technologies and there were no constraints regarding development tools and methodology. This was to ensure that most subjects had sufficient competence for meaningful estimation work. Two of the tasks (TM and TL) were based on real-world software specifications, while the remaining task (TS) was created for experimental purposes. The specifications were written in natural language. See Table 1 for an overview of the tasks. **Table 1 Estimation Tasks** <table> <thead> <tr> <th>Task Id</th> <th>Task Size</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>TS</td> <td>Small</td> <td>A simple web system for the registration of seminar participants. Participants register on the web by submitting their email address and a registration code. The system confirms that the data is registered. There are no data validation (duplicate check, etc). The data is stored in a database. Generation of reports, such as attendee lists, is done manually, i.e. by querying the database.</td> </tr> <tr> <td>TM</td> <td>Medium</td> <td>A web-based library system that contains information about scientific articles. Users and administrators can view an information page about each scientific article that is registered in the system, search for articles, see a printer-friendly display of the search results and the information pages, register new scientific articles (some data validation is done during registration), and perform simple user management (administrators can register, edit and remove other administrators).</td> </tr> <tr> <td>TL</td> <td>Large</td> <td>A web-based system that manages experiments and other studies. Users can view an information page about each study. 
The page contains information about the study design, the results, involved persons, related research articles, etc. Users can perform advanced searches, sort the search results, see the results in a printer-friendly display, generate graphical reports, etc. Administrator users can upload and manage files, add/delete/edit studies, and perform simple user management. The system requires some integration with other systems.</td> </tr> </tbody> </table> Procedure The subjects were randomized into two groups (Group TS-TM and TL-TM) by their physical location in the seminar room (every second subject was allocated to the same group). The subjects received a booklet that contained requirement specifications. The subjects were instructed to estimate the development tasks in the booklet in the same order as they appeared, and they were not allowed to go back and change previous, already completed, estimates. We collected the booklets when the allocated time had expired. Each group was asked to estimate two of the three requirement specifications; see Table 2. One group initially estimated the large task, while the other group initially estimated the small task. Subsequently, both groups estimated the middle-sized task. The tasks were estimated by expert judgment, and the subjects did not have access to any additional information. The subjects did not implement the tasks. We performed a pilot study prior to the experiment, and we had previously used variants of the requirement specifications in experiments. We used our experience from the pilot and the previous experiments to design this experiment, e.g. when allocating time to complete the estimation tasks. 
Table 2 Treatment

<table> <thead> <tr> <th>Estimation Task</th> <th>Group TS-TM</th> <th>Group TL-TM</th> </tr> </thead> <tbody> <tr> <td>Estimation task 1</td> <td>TS</td> <td>TL</td> </tr> <tr> <td>Estimation task 2</td> <td>TM</td> <td>TM</td> </tr> </tbody> </table>

**Replication**

We replicated the experiment in another in-company estimation seminar. The subjects were 17 experienced software professionals (mainly developers) from a software department in a large company that is located in the middle of Norway. The company's main focus is in-house development and maintenance work for the company. We attempted to replicate all relevant aspects of the procedure from the first experiment, including the tasks, the allocation of the tasks to treatment, and the amount of time that was allocated.

**Results**

The results of the first experiment are displayed in Table 3 and Figure 1, and those of the replication in Table 4 and Figure 2. The inter-estimator agreement is low in both the experiment and the replication. This is a common finding in estimation studies; see e.g. (Grimstad and Jørgensen, 2007; Kusters et al., 1990). There are several possible reasons for this, some of which are related to internal inconsistency (Grimstad and Jørgensen, 2007) and variations in productivity (Brooks, 1975). Neither is it surprising that there appear to be systematic inter-company differences. It is likely that there are certain company-specific issues that can affect both the effort used and the estimation, related, for example, to clients, personnel skills, and the development process. We did not exclude potential outliers. Instead, we based the analysis on the median values (Kruskal-Wallis tests) in order to increase the robustness. The effect was stronger when we based the analysis on mean values.
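For reference, the pooled-standard-deviation form of Cohen's d, the effect-size measure reported with the results, can be computed as sketched here; the group values are illustrative, not the study data.

```python
from math import sqrt
from statistics import mean, variance  # variance() is the sample variance (ddof=1)

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var)

# Illustrative (not study) data: estimates of the medium task in work-hours.
group_ts_tm = [80, 95, 110, 90, 100]
group_tl_tm = [150, 195, 180, 210, 160]
print(round(cohens_d(group_tl_tm, group_ts_tm), 2))  # 4.4
```

By the conventional classification, |d| around 0.5 is a medium effect and around 0.8 a large one, which is how the values 0.68 and 0.60 reported for the experiment and replication are interpreted.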
Table 3 Experiment: Median Most Likely Estimates (work-hours) <table> <thead> <tr> <th>Group</th> <th>N</th> <th>Estimate of TS</th> <th>Estimate of TL</th> <th>Estimate of TM</th> </tr> </thead> <tbody> <tr> <td>TS-TM</td> <td>28</td> <td>24,0</td> <td>N/A</td> <td>95,0</td> </tr> <tr> <td>TL-TM</td> <td>28</td> <td>N/A</td> <td>550,0</td> <td>195,0</td> </tr> </tbody> </table> Table 4 Replication: Median Most Likely Estimates (work-hours) <table> <thead> <tr> <th>Group</th> <th>N</th> <th>Estimate of TS</th> <th>Estimate of TL</th> <th>Estimate of TM</th> </tr> </thead> <tbody> <tr> <td>TS-TM</td> <td>28</td> <td>20,0</td> <td>N/A</td> <td>72,0</td> </tr> <tr> <td>TL-TM</td> <td>28</td> <td>N/A</td> <td>230,0</td> <td>90,0</td> </tr> </tbody> </table> The results show that the subjects who initially estimated the small task (Group TS-TM) submitted, on average, lower effort estimates for the medium task than the subjects who initially estimated the large task (Group TL-TM) (median estimates of the middle-sized task of 95,0 vs. 195,0 work-hours in the experiment, and 72,0 vs. 90,0 work-hours in the replication). Statistical analysis shows that the effect of task order on the estimates is statistically significant in the experiment (p=0.01). The effect is not statistically significant (p=0.3) in the replication. Still, we believe that the replication strengthens the results from the original experiment, because the results clearly point in the same direction. It would be worthwhile to repeat the replication with a higher number of subjects to determine whether the results are significant, because the replication used only 17 subjects, whereas the original experiment used 56. In both cases, the relative effect size of the treatments is medium large according to the classification in (Cohen, 1992)\(^1\) (Cohen’s d is 0.68 in the experiment and 0.60 in the replication). 3. DISCUSSION It is well-known that human judgment can be unreliable. 
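For reference, the Cohen's d statistic used to report the effect sizes above can be sketched in a few lines of plain Python. The sample values below are made up for illustration; they are not the study's raw data:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

# Hypothetical estimates (work-hours) of the same medium-sized task from two groups
group_tl_tm = [180, 210, 195, 220, 200]  # group that started with the large task
group_ts_tm = [90, 110, 100, 120, 95]    # group that started with the small task
d = cohens_d(group_tl_tm, group_ts_tm)
```

With real estimation data the two samples would be the per-subject estimates of the medium-sized task from each group, and d around 0.6 to 0.7 would indicate a medium-to-large effect, as in the experiment.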
The results demonstrate that judgment-based software effort estimation is no exception, and suggest that sequence effects can have a large impact on effort estimates of software development tasks. The results also illustrate that it is difficult to predict the impact of sequence effects, i.e. we are not able to satisfactorily explain the effect sizes in the original experiment and the replication. However, the results should be interpreted with care because of the limitations of this study. These include issues related to the following: - **Time pressure.** The subjects had about 20 minutes to estimate the two tasks. This restricted the amount of in-depth analysis that was possible. It may be that a thorough and time-consuming analysis of the necessary development work reduces sequence effects. There are, for example, studies that have found that primacy effects and judgmental anchoring increase in magnitude when there is increased time pressure under certain conditions; see e.g. (Kruglanski and Freund, 1983). [\(^{1}\) We have based the statistical analysis on the median values and have therefore not removed potential outliers from the dataset. However, these outliers might affect Cohen’s d, as this is a measure that is based on mean values. The effect sizes should therefore be interpreted with some care.] In our experience, the time that was allocated to the estimation work in the experiment is typical for this type of estimation work in field settings. Nevertheless, there are clearly real-world estimation situations in which more time is spent on the estimation work. - **Estimation method.** The subjects estimated the tasks by expert judgment. They were not allowed to use other estimation methods, such as group estimation and methods based on formal estimation models, and they were not allowed to discuss the estimation work with colleagues. Estimation methods may diverge with respect to the type and magnitude of sequence effects. 
For example, the justification component in discussion-based processes may moderate sequence effects. Consequently, we may have studied one of the estimation methods that is most likely to be affected by sequence effects. - **Laboratory context.** The experiment was conducted as part of an estimation seminar, so the subjects were not functioning in their usual work context. As a result, they did not have access to historical estimation data, or any other information, apart from what they could remember. It may be the case that estimation in field situations is less affected by sequence effects than estimation in laboratory studies. We have, for example, found in an unpublished study that the effect of judgmental anchors may be significantly lower in some field situations than that which is typically found in laboratory studies. - **Estimation tasks.** There were no variations in estimation tasks in the experiment. Studies have shown that sequence effects can lead to both assimilation and contrast; see e.g. (Stapel and Koomen, 1998), and that there are large variations in effect sizes. Consequently, it is not unlikely that other estimation tasks, e.g. tasks that are less similar, would give completely different results. - **Estimation accuracy.** We can only speculate on how sequence effects would have affected the estimation accuracy if the participants had implemented the tasks that they estimated. It is intuitive to think that starting by estimating the largest task would improve the estimation accuracy, because the average effort estimates of the subsequent estimation work increased and it is well-known that effort estimates are often too optimistic. However, there are many factors that potentially affect estimation error, including the estimate itself, and it is difficult to accurately predict how estimation error will be impacted by a specific factor (Grimstad and Jørgensen, 2006). 
In order to address some of the limitations mentioned above, we analysed the data from a previous study. In a field experiment (Jørgensen, 2004), seven estimation teams from a large company estimated two real-life software development projects. Each estimation team applied a top-down estimation strategy on one project and a bottom-up estimation strategy on the other. The estimators were allowed to telephone people in their own company (e.g., other software developers who had relevant experience), and to collect documents from their own offices or computers. In addition, they had access to the company's online database of completed projects. The projects that they estimated had already been completed by other teams in the company, but the participants in the experiment did not know anything about the projects. The actual effort used for the first project was 1340 work-hours, and the actual effort used for the second project was 766 work-hours. Unfortunately, all the estimation teams estimated the projects in the same order. Obviously, this complicates the analysis of sequence effects. However, we based the analysis on the finding that estimates are likely to be assimilated towards previous estimates (see Section 2). A possible consequence of this is that estimates are likely to be too pessimistic when the previous estimate is larger than the current one, as is the case in the study reported herein. We therefore expected that the estimates of the second project would be less over-optimistic than the estimates of the first project. The results support this hypothesis. The median estimates of the first project are on average 14% too optimistic, and the median estimates of the second project are 15% too pessimistic. However, there are numerous limitations to this analysis, and the results should be interpreted with great care. Sequence effects may be hard to avoid in real-world estimation situations. 
It is, for example, quite common that software development projects are re-estimated during project execution. This typically means that all the uncompleted development tasks that are included in the project are estimated within a short timeframe. Our results suggest that it is likely that such estimates will be affected by sequence effects. Unfortunately, it is difficult to predict the impact of the sequence effects, e.g. related to effect size. At present, our best advice is as follows: In situations in which it is unlikely that the estimates will be over-optimistic, it may be best to start by estimating medium-sized and medium-complex tasks. However, when there is reason to suspect that estimates will be too optimistic, it may be best to start by estimating the largest and most complex tasks. A better understanding of the sequence effects might allow us to go beyond these simple guidelines and offer advice on how to neutralize the effect. This will require knowledge about how, when, why, how much and under what conditions sequence effects affect judgmental estimation processes. Most of the numerous models and theories that explain aspects of human judgment and decision making, such as the social judgment model proposed by Mussweiler (Mussweiler, 2003), assume that almost all human judgment is based on comparisons. An essential step in comparative judgment processes is to find a relevant reference with which the current judgment task can be compared. The selection of the reference for comparison will often impact the outcome of the judgment process; see e.g. (Herr, 1986; Jacowitz and Kahneman, 1995). A possible explanation of our results is consequently that the initial estimation task was used as a reference in the estimation of the subsequent task. However, there are many cognitive mechanisms that can cause the reference for comparison to produce the sequence effects that we observed in our experiment. 
It may, for example, be the case that selection of a large reference for comparison increased the focus on complexity-related attributes, such as quality and testing, in the judgment-based estimation processes. Selection of a small reference might have increased the focus on attributes such as simplicity and rapid development. Unfortunately, our study does not allow us to discriminate between the different cognitive mechanisms. We believe that the main contribution of this paper is to demonstrate that sequence effects may have a large impact on software estimation. However, our current understanding of the phenomenon is quite incomplete. Carefully designed studies are needed to reveal the mechanisms that are involved and how they interact. Clearly, further research is needed. 4. SUMMARY The typical approach to the estimation of work effort in software development is based on the decomposition of projects into subtasks. These subtasks are usually estimated in a rather arbitrary sequence. However, research in other fields suggests that the sequence may be important. For example, studies on forecasting have found that initial predictions can strongly affect subsequent, even unrelated, predictions. We designed an experiment to test whether such sequence effects occur in a typical software effort estimation situation. In a laboratory-based experiment, we divided 56 software professionals randomly into two groups. One group started by estimating a small, and the other a larger, software development task. Subsequently, all the software professionals were asked to estimate the work effort of the same medium-sized task. We found that the estimates of the medium-sized task were assimilated towards the initial estimates, i.e., the group that initially estimated a small task submitted, on average, lower estimates of the medium-sized task than the group that initially estimated a larger task. We replicated the experiment and obtained similar results. 
There are several limitations to the experiment. For example, the experiment was conducted in a laboratory setting, there was time pressure that prevented in-depth analyses, and there was a lack of variation in the tasks that were estimated. It is not unlikely that other estimation contexts would have yielded different results. For example, the estimation method that all the software professionals used in the experiment was expert judgment. It may be the case that other estimation methods are more (or less) robust with respect to sequence effects. Despite these limitations, our study indicates that sequence effects are more important than software effort estimation research and practice currently treat them as being. Such sequence effects may affect whether estimates are too optimistic, too pessimistic or realistic, and a better understanding of them may help us to understand and improve software professionals’ estimation performance. Currently, our understanding of how, when, and how much sequence effects affect effort estimation is poor, and further research is needed. At present, our best advice is that software professionals should start with effort estimates of medium-complex, medium-sized sub-tasks of the project, or with large and complex tasks if there is a tendency towards over-optimistic estimates. References
Functional Decomposition Based Effort Estimation Model for Software-Intensive Systems Nermin Sökmen Abstract—An effort estimation model is needed for software-intensive projects that consist of hardware, embedded software or some combination of the two, as well as high-level software solutions. This paper first focuses on functional decomposition techniques for measuring the functional complexity of a computer system and investigates its impact on system development effort. It then examines the effects of technical difficulty and design team capability factors in order to construct the best effort estimation model. Using traditional regression analysis, the study develops a system development effort estimation model that takes functional complexity, technical difficulty and design team capability factors as input parameters. Finally, the assumptions of the model are tested. Keywords—Functional complexity, functional decomposition, development effort, technical difficulty, design team capability, regression analysis. I. INTRODUCTION SOFTWARE-INTENSIVE system projects face great challenges when they attempt to measure the complexity of a system design in order to estimate design effort. Studies in the literature show that effort estimates in software-intensive projects are usually made on the basis of software size. A software-intensive system is a computer-based system, ranging over software applications, information systems, embedded systems, and systems-of-systems [1]. Although software plays a critical role in the development of a system, it is important to mention that a software-intensive system requires hardware not only to run on but also to perform specific tasks. Therefore, the hardware part of the whole system should be taken into consideration when making estimates of project effort. This paper introduces a different approach to estimating system development effort in software-intensive projects. 
The aim of this paper is twofold: to define a system design complexity metric and to construct a parametric effort estimation model for embedded and real-time systems. The remainder of the paper is structured as follows. The paper first examines software and hardware size and complexity metrics and effort estimation models in the literature. It then describes the research method used in the construction of a system effort estimation model. The next section presents the analysis results and the constructed model. Finally, the paper ends with a conclusion. Dr. Nermin Sökmen is chief senior researcher at the Informatics and Information Security Research Center (BILGEM) of the Scientific and Technological Research Council of Turkey (TÜBİTAK), PK 74 Gebze Kocaeli, 41470, Turkey (phone: +90-262-765-3109; fax: +90-262-648-1100; e-mail: nermin.sokmen@tubitak.gov.tr). II. LITERATURE RESEARCH Boehm [9] introduces the first COCOMO model for software development effort estimation. The model estimates effort based on the size of the software and pre-determined constants. Boehm's intermediate COCOMO model computes software development effort as a function of estimated software size and a set of cost drivers that consists of product, hardware and personnel characteristics [10]. The formula uses different sets of coefficients when calculating program effort for organic, semi-detached and embedded software projects. The literature rarely addresses the problem of modeling hardware design complexity. Salchak and Chawla [13] propose a hardware design complexity measure for avionics systems. The measure has been derived from an avionics software design complexity measure constructed from six components, namely reuse, internal cohesion, external cohesion, interface complexity, data coupling and real-time coupling [14]. Even in a different domain, Bashir and Thomson [15] first propose a product complexity measure, and then develop a number of parametric models to estimate design effort. 
Bashir and Thomson [16] also develop an analogy-based model for estimating design effort using the product complexity metric. **III. DATA SET AND METHODOLOGY** Historical data from 13 completed software-intensive projects were obtained from a research institute and an Information Technology (IT) company. System development effort, the dependent variable, was calculated as the sum of the hardware and embedded software development effort spent in all phases of the product development lifecycle, including requirement analysis, design, implementation and test. 
Traditional regression analysis was used to develop a system development effort estimation model for embedded and real-time projects. This study focuses on three factors: functional complexity, technical difficulty and design team capability. **A. Functional Complexity (FC)** Functional complexity has the most significant impact on development time [19], [20] and effort [16]-[18]. Griffin [19] defines complexity as the number of functions of a product. Hobday [21] emphasizes that the quantity of components and sub-systems, the hierarchical manner in which they are integrated, and the degree of technological novelty are the important indicators of product complexity. El-Haik and Yang [22] identify three components of design complexity: coupling, variability, and correlation. A hierarchical structure is needed for managing complexity [23]. Bashir and Thomson [15] define hardware product complexity as a function of the number of functions and the depth of their functional trees. They propose the formula in (1): $$\text{Product Complexity (PC)} = \sum_{j=1}^{l} F_j \times j$$ \hspace{1cm} (1) where $F_j$ is the number of functions at level $j$ and $l$ is the number of levels. The hardware aspect of a system, consisting of electronic sub-systems and components, can be self-contained or embedded. In this study, a system is defined as a hardware system alone or together with its embedded software. This paper uses Bashir and Thomson’s product complexity measure to calculate the functional complexity of an electronic system. Fig. 1 shows the functional tree of an embedded system with its corresponding product complexity. **B. Technical Difficulty (TD)** Technical complexity, technical difficulty and technological newness are analyzed in various studies [24]-[27]. Technologically easy solutions can be used in the design of very complex products, and radical technology changes can be required by less complicated products [28]. 
In addition to product complexity, this study considers technical difficulty. Griffin [28] identifies technical difficulty as the difficulty of developing the scientific solution to the problem. Technical difficulty indicates the degree of difficulty of the technical goals and product specifications in a project [29]. In this study, technical difficulty is measured on a seven-point scale that ranges from 1 (implemented existing/reused technologies) to 7 (designed and implemented very complex and emerging technologies). **C. Design Team Capability (DTC)** Lack of required knowledge and skills in the project personnel has been identified as one of the important risk items in software development projects [30]-[33]. This study therefore also considers the impact of design team capability, which consists of knowledge, skill and experience variables. Like the technical difficulty factor, it is measured on a seven-point scale that ranges from 1 (knowledge, skill and experience do not exist) to 7 (highly qualified, experienced team composition). **IV. SYSTEM DEVELOPMENT EFFORT (SDE) ESTIMATION MODEL** **A. Data Analysis Results** The descriptive statistics show that the system development effort and functional complexity variables are normally distributed. The mean, median, minimum and maximum values of the functional complexity variable are 67.3, 54, 19 and 153, respectively. Table I presents correlations among the basic and derived variables. The relationship between functional complexity and system development effort was supported at the 0.01 level with a coefficient of 0.826. The regression model constructed with this variable explains 71 percent of the total variation in the system development effort variable. The Durbin-Watson (DW) statistic was found to be 1.71. Since the DW value is less than 2.0, there may be some indication of serial correlation. 
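The product-complexity measure in (1) can be sketched as a short function; the functional tree below is hypothetical, not the one in Fig. 1:

```python
def product_complexity(functions_per_level):
    """Eq. (1): PC = sum over levels j of F_j * j, where functions_per_level[j-1]
    is the number of functions F_j at level j of the functional tree."""
    return sum(f * j for j, f in enumerate(functions_per_level, start=1))

# Hypothetical embedded system: 1 top-level function, 4 at level 2, 9 at level 3
pc = product_complexity([1, 4, 9])  # 1*1 + 4*2 + 9*3 = 36
```

Functions deeper in the tree are weighted by their level, so a system whose functionality is decomposed through more levels scores higher than one with the same number of functions in a flat decomposition.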
TABLE I DESCRIPTIVE ANALYSIS AND CORRELATIONS <table> <thead> <tr> <th>Variables</th> <th>Mean Value</th> <th>Pearson Correlation with FC</th> </tr> </thead> <tbody> <tr> <td>SDE (Person-month)**</td> <td>39.9</td> <td>0.826</td> </tr> <tr> <td>FC**</td> <td>67.3</td> <td></td> </tr> <tr> <td>TD/DTC</td> <td>1.07</td> <td></td> </tr> <tr> <td>MPC (Modified PC)</td> <td>66.3</td> <td></td> </tr> </tbody> </table> (** Normally distributed.) The Pearson correlation test results showed that the technical difficulty, design team capability and technical difficulty to design team capability variables were not associated with system development effort. Further, to make the development effort estimation more precise and accurate, it is necessary to consider the functional complexity factor together with other factors. After several trials, the correlation between development effort and functional complexity was increased to 0.957 with the help of the formula in (2). \[ MPC = FC \times \left(\frac{TD}{DTC}\right)^{0.5} \] where MPC is the modified product complexity, FC is the functional complexity of the embedded or real-time system, TD is the degree of technical difficulty and DTC is the degree of design team capability. A technical difficulty to team expertise ratio was also used by Bashir and Thomson [18]; on the other hand, their final effort equation is quite different from the model constructed in this study. B. Model Generation Linear regression analysis was used to develop a model for estimating system development effort from the modified product complexity, which combines functional complexity, the degree of technical difficulty of the system and the degree of design team capability. The results of the regression analysis are shown in Table II. TABLE II <table> <thead> <tr> <th>Variables</th> <th>B (Unstandardized)</th> <th>Std. Error</th> <th>Beta (Standardized)</th> <th>t</th> <th>Sig.</th> </tr> </thead> <tbody> <tr> <td>MPC</td> <td>.589</td> <td>.025</td> <td>.989</td> <td>23.141</td> <td></td> </tr> </tbody> </table> The regression coefficient was found to be statistically significant. The generated system development effort estimation model is given in (3). \[ SDE = 0.589 \times FC \times \left(\frac{TD}{DTC}\right)^{0.5} \] C. Model Verification The mean magnitude of relative error (MMRE) and the prediction quality indicator (Pred(m)) are the two most important indicators used in the performance assessment of software effort estimation models [34], [35]. This study used the MMRE and Pred(0.25) indicators to test the accuracy of the regression model. The MMRE formula is given in (4). \[ MMRE = \frac{1}{n} \sum_{i=1}^{n} MRE_i \] where \(MRE_i\) is the difference between the actual and the estimated effort relative to the actual effort, and \(n\) is the number of systems in the dataset. \(MRE_i\) is given in (5). \[ MRE_i = \frac{|SDE_i - \overline{SDE}_i|}{SDE_i} \] where \(\overline{SDE}_i\) is the predicted effort of system \(i\) and \(SDE_i\) is the actual effort of system \(i\). Table III shows the actual efforts, the estimated efforts and the \(MRE_i\) values calculated for each system in the dataset. The table also gives the MMRE and Pred(0.25) values. MMRE should be equal to 0.25 or less [16], [17], [35]. The computed MMRE for the dataset is 0.157. Since MMRE is less than 0.25, the model is considered acceptable. 
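As a numerical check, MMRE and Pred(0.25) can be recomputed directly from the actual and estimated efforts listed in Table III (a minimal Python sketch):

```python
# (actual SDE, estimated SDE) pairs in person-months, taken from Table III
pairs = [(24.0, 21.6), (30.0, 27.4), (39.0, 35.6), (32.0, 28.3),
         (9.0, 9.6), (22.0, 26.5), (82.0, 79.1), (87.0, 79.6),
         (24.5, 32.5), (40.5, 44.2), (25.0, 15.8), (42.0, 31.2),
         (62.0, 76.2)]

# Eq. (5): MRE_i = |actual - estimated| / actual
mres = [abs(actual - est) / actual for actual, est in pairs]
mmre = sum(mres) / len(mres)                        # Eq. (4): mean of the MRE_i
pred_25 = sum(m <= 0.25 for m in mres) / len(mres)  # share of systems with MRE_i <= 0.25
```

These reproduce the reported values to within rounding: the computed MMRE is about 0.158 (the paper reports 0.157) and Pred(0.25) is 10/13, which rounds to the reported 0.77.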
<table> <thead> <tr> <th>Actual SDE (person-month)</th> <th>Estimated SDE (person-month)</th> <th>\(MRE_i\)</th> </tr> </thead> <tbody> <tr> <td>24.0</td> <td>21.6</td> <td>0.10</td> </tr> <tr> <td>30.0</td> <td>27.4</td> <td>0.09</td> </tr> <tr> <td>39.0</td> <td>35.6</td> <td>0.09</td> </tr> <tr> <td>32.0</td> <td>28.3</td> <td>0.12</td> </tr> <tr> <td>9.0</td> <td>9.6</td> <td>0.06</td> </tr> <tr> <td>22.0</td> <td>26.5</td> <td>0.20</td> </tr> <tr> <td>82.0</td> <td>79.1</td> <td>0.03</td> </tr> <tr> <td>87.0</td> <td>79.6</td> <td>0.08</td> </tr> <tr> <td>24.5</td> <td>32.5</td> <td>0.33</td> </tr> <tr> <td>40.5</td> <td>44.2</td> <td>0.09</td> </tr> <tr> <td>25.0</td> <td>15.8</td> <td>0.37</td> </tr> <tr> <td>42.0</td> <td>31.2</td> <td>0.26</td> </tr> <tr> <td>62.0</td> <td>76.2</td> <td>0.23</td> </tr> </tbody> </table> MMRE: 0.157 Pred(0.25): 0.77 Pred(0.25) is a measure of the percentage of observations whose \(MRE_i\) is less than or equal to 0.25. Pred(0.25) is given in (6). \[ Pred(0.25) = \frac{k}{n} \] where \(k\) is the number of observations whose \(MRE_i\) is less than or equal to 0.25, and \(n\) is the total number of systems. The model is considered acceptable if \(Pred(0.25) \geq 0.75\) [17], [35]. Pred(0.25) is 0.77, so the model is acceptable. The study also verifies the regression assumptions. Table IV gives the ANOVA test results. The F test in the ANOVA table implies that the model is a good fit for predicting system development effort (Sig. < 0.01). Table V gives the model summary. As shown in Table V, the results of the regression analysis indicate that the modified product complexity variable is significantly related to system development effort. The regression model explains 97.6 percent of the total variation in system development effort. Since the DW statistic is close to 2.0, there is no autocorrelation problem. 
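To illustrate how the fitted model in (3) would be applied, the sketch below plugs in the dataset's mean FC of 67.3 and a TD/DTC ratio of 1.07 (the mean ratio from Table I, split here into TD = 1.07 and DTC = 1.0 purely for illustration); `estimate_sde` is a hypothetical helper name, not from the paper:

```python
import math

def estimate_sde(fc, td, dtc):
    """Eq. (3): SDE = 0.589 * FC * sqrt(TD / DTC), in person-months."""
    return 0.589 * fc * math.sqrt(td / dtc)

# Illustrative input: mean functional complexity and mean TD/DTC ratio
sde = estimate_sde(67.3, 1.07, 1.0)  # roughly 41 person-months
```

Note that the ratio enters under a square root, so the model dampens the influence of technical difficulty and team capability relative to functional complexity.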
### TABLE IV <table> <thead> <tr> <th>Model</th> <th>Sum of Squares</th> <th>df</th> <th>Mean Square</th> <th>F</th> <th>Sig.</th> </tr> </thead> <tbody> <tr> <td>Regression</td> <td>26752.987</td> <td>1</td> <td>26752.987</td> <td>535.495</td> <td>.000</td> </tr> <tr> <td>Residual</td> <td>599.513</td> <td>12</td> <td>49.959</td> <td></td> <td></td> </tr> <tr> <td>Total</td> <td>27352.500</td> <td>13</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> ### TABLE V <table> <thead> <tr> <th></th> <th>R</th> <th>R Square</th> <th>Adjusted R Square</th> <th>Std. Error of the Estimate</th> <th>Durbin-Watson</th> </tr> </thead> <tbody> <tr> <td></td> <td>.989</td> <td>.978</td> <td>.976</td> <td>7.06820</td> <td>1.925</td> </tr> </tbody> </table> The plot of residuals versus the predicted values is shown in Fig. 2. The residuals fall within a generally random pattern. Fig. 2 Analysis of Residuals The patterns shown in Fig. 3 indicate that the residuals are normally distributed. Fig. 3 The residuals diagrams V. CONCLUSION Effort estimations in software-intensive projects are mostly made on the basis of software size. In the literature, there are a limited number of studies that address hardware complexity and effort estimation. On the other hand, an effort estimation model is needed for software-intensive systems that consist of hardware and embedded software parts. This study focused on embedded and real-time systems. It first investigated a suitable indicator with which to measure the functional complexity of a computer system. Due to its systematic approach and its language independence, the functional decomposition technique was selected. The study then examined the relationships among system development effort, functional complexity, technical difficulty and design team capability. The test results showed a strong relation between development effort and functional complexity. 
Finally, the paper constructed a parametric model to estimate the development effort for software-intensive projects. The constructed regression model takes the functional complexity, technical difficulty and design team capability factors as input variables. Model verification results show that the constructed model satisfies all regression assumptions and meets the MMRE and Pred(0.25) criteria even though the sample size is small. REFERENCES
AugReality

Sibu Skarial\textsuperscript{1}, Avin Ayyappan\textsuperscript{2}, Eldhose M Manjummekudy\textsuperscript{3}

\textit{Assistant Professor, Department Of Computer Application, M A College Of Engineering, Kothamangalam, Kerala, India\textsuperscript{1}} \textit{Student, Department Of Computer Application, M A College Of Engineering, Kothamangalam, Kerala, India\textsuperscript{2}} \textit{Assistant Professor, Department Of Civil Engineering, M A College Of Engineering, Kothamangalam, Kerala, India\textsuperscript{3}}

I. INTRODUCTION

The paper ‘AugReality’ introduces the technology of augmented reality for Android phones. This application is useful when the user is in an unfamiliar location: the system provides a user-friendly environment with location-specific details.

Augmented Reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real time and in semantic context with environmental elements. With the help of advanced AR technology (e.g. adding computer vision and object recognition), information about the surrounding real world of the user becomes interactive, and artificial information about the environment and its objects can be overlaid on the real world.
This idea combines many functionalities: identifying the current location, finding common tags that a user wants to know (like hospitals, bus stands etc.), providing information about current and upcoming events in the city, and guiding the user to the distance of the desired places along with a route map. The system also provides the user with the provision of adding new tags (like new hospitals, hotels etc.) and events through a web application; for that, the user needs to be registered.

The paper ‘AugReality’ is developed for normal users who like to travel to unknown places. The application can be used to get data about the current location and details about what is going on around them. To install the mobile application they only need a good smartphone, and anyone can use the application without registration. The application thus helps the user to search for places related to his current location. For a user in an unknown location, the major places he wants to know are taxi stands, auto stands, bus stands, hospitals etc. The application helps the user to locate those places in the landed city, and also to know about famous places like hotels and auditoriums in the city.

On a journey a person may want to get the distance to be travelled to reach a desired destination. In the mobile application the user can get the shortest distance between his current location and his destination, and can view the path he should use to reach it. This route map is displayed with the help of Google Maps. This helps the user to choose the necessary mode of vehicle: for example, for a small distance the user can choose an auto, while for long distances he can travel by bus or taxi, and the application helps him get to the respective vehicle stand. One of the major attractions of this application is that the user can view the minimum fare he needs to pay to reach his destination.
The minimum fare is calculated from the computed distance and the minimum fare details from the server. The user may choose any of the three modes of transport, so the application calculates the minimum fare for all three modes and shows them to the user.

The user can search for the events happening in the city. Upon insertion of an event, the registered user has to provide an image and details like phone number and website. From the ‘AugReality’ mobile application, normal users can then view the image of the event; the phone number and web address are also displayed. The user can make a call to that number from the application itself, and can browse to the web address from the application.

The ‘AugReality’ paper has an Android application and a web application. The web application is used by the registered users and the admin. Users add their own events in the web application; for that, guest users have to be registered. The web application is used by the admin to approve the users and to approve the events registered by them. Minimum fare details are inserted by the admin. The web application is developed on the Java platform with MySQL as the backend, and Apache Tomcat is used as the web server. Java SE 1.7, NetBeans IDE 7.3 and MySQL Front are the major tools used for developing this web application.

System Environment

The system environment specifies the hardware and software configuration of the new system. Regardless of how the requirements phase proceeds, it ultimately ends with the software requirements specification (SRS). A good SRS establishes the basis for agreement between the customers and the suppliers on what the software is to do, and assists potential users in determining whether the specified software meets their needs or how it must be modified to meet them.
The software for the development has been selected based on several factors such as - Support and stability - Cost effectiveness - Development speed - Ability to create robust applications in the least time

Software environment: - Operating system : Windows 8, Android - Technologies used : JAVA, MySQL, ANDROID - Application server : Apache Tomcat - Back end : MySQL - Designing tools : NetBeans IDE 7.3, Eclipse Juno

Hardware environment: - Hard disk capacity : 500 GB - RAM : 4 GB - Processor : Intel Core i3 - Display : 1024 * 768 Resolution Color Monitor

System Analysis

System analysis is a detailed study of the various operations performed by a system and their relationships within and outside of the system. It is the process of gathering and interpreting facts, diagnosing problems and using the facts to improve the system. The key questions are: what problems exist in the present system, and what must be done to solve them? Analysis begins when a user or manager begins a study of the problem using the existing system.

Existing System

The existing system provides only limited functionality. The GPS system only helps the user to obtain current location information, and the user's search for information is very limited. There is no provision for the user to know about the events happening in the city. Using the existing system, it is difficult to get the shortest distance between source and destination, and there is no provision for the user to get precise information on facilities such as hospitals, hotels, bus stands and taxi stands of a particular city. Existing GPS functionality cannot locate the coordinates of an event; hence the user is not able to find information about the events happening in the city.

Proposed System

The proposed system includes both a web application as well as a mobile application.
It is basically developed as an Android application with augmented reality for Android smartphones, to increase convenience. The proposed system aims at improving current GPS applications by adding the modules necessary to ensure a pleasant journey. In addition to information about the user's current location, the system helps the user to identify nearby locations and to reach the nearest vehicle stand (auto, car, bus). Using this application the user can find the distance between two places he desires to travel; knowing the distance, he can choose the mode of vehicle he wishes. For example, for a small distance the user can choose an auto, and the application helps him get to the respective vehicle stand. The user is also provided with a facility for finding the nearest places (hospitals, hotels, historic places) in the city, and gets information about nearby events happening in the city. The system also includes additional functionality for finding famous places in the city, like famous museums, bus stands, hotels and hospitals, and provides the user with the current events and latest updates of the city.

In the web application there are two actors: the admin and the user. The admin is granted all permissions over the system and can add and search for information. Users also have a login for the web application, with the facility to add their own information along with other updates of the city; these are later checked and approved by the administrator.

Feasibility Study

One of the important outcomes of a preliminary investigation is the feasibility study. The objective of a feasibility study is not only to solve the problem but also to acquire a sense of its scope. During the study, the problem definition is crystallized and the aspects of the problem to be included in the system are determined. Consequently, costs and benefits are estimated with greater accuracy at this stage.
It assesses proposals according to system viability, their impact on the users, ability to meet user needs, and effective use of resources. Feasibility studies are generally undertaken within tight time constraints; they should be conducted completely, so that no fundamental errors of judgement are made. If compatible social and technical systems can be devised, the system must then be tested for economic feasibility. It is very important to evaluate the feasibility of a project at the earliest possible time. Feasibility study and risk analysis are related in many ways: if the project risk is great, the feasibility of producing quality software is reduced. The key factors considered during the feasibility study are: - Technical feasibility - Economic feasibility - Operational feasibility - Behavioral feasibility

Technical Feasibility

Technical feasibility refers to the ability of the process to take advantage of the current state of the technology in pursuing further improvement. The technical capability of the personnel as well as the capability of the available technology should be considered. Implementation of the ‘AugReality’ system does not require changing the existing configuration of the system. The proposed system aims - To view place details from an augmented point of view. - To identify nearest places and events happening in the city. - To provide minimal travel rates and additional information. - To be functionally easy to handle.

The existing internet facilities and computers are sufficient to implement the AugReality project. The AugReality web application can be accessed by common browsers, and the AugReality Android application by Android devices running the Jelly Bean version, which commonly have internet connectivity, a compass, a camera and GPS. There will not be much difficulty in getting the required resources for development and maintenance; all the resources needed for developing and maintaining the system are available.
Here we are using only the already available resources; therefore the system is technically feasible. The web server used, Apache, is available to all users.

Economic Feasibility

Economic analysis is the most frequently used method for comparing the cost with the benefit or income that is expected from the developed system. The AugReality system provides a cost-effective way of viewing things from a different angle, and the proposed system incurs very low cost for development and implementation. For the development of the AugReality application we are using NetBeans IDE and Eclipse, both of which are available free of cost. The system can work on machines with a configuration and connectivity that cause no excessive cost for implementation or usage. The AugReality web application will work in every browser with an internet connection, and the AugReality Android application will work on existing Android smartphones with the Jelly Bean version and corresponding configurations. This system, if developed and installed, will be of good benefit to the user. The system will be developed and operated on the existing hardware and software infrastructure; there is no need for additional hardware or software, hence it is economically feasible.

**Operational Feasibility:** It is necessary to know whether the system is operationally feasible, that is, whether the system is flexible for the user to use and whether all the operations work correctly and effectively. It is a user-friendly and flexible application that lets the user do all his activities in an effective manner.

**Behavioral Feasibility:** The system is easy to comprehend and hence should be feasible for the various types of users who interact with it. Behavioral feasibility is tested in such a way that the system is feasible for all kinds of users. People are inherently resistant to change, and computers have been known to facilitate change. The GUI forms used are user-supportive and direct the user to accomplish his task.
**List Of Actors And Their Roles**

**Administrator:** The administrator has control over the system. Apart from the basic tasks, administrators have full control over user management and event management.

**User:** Users can register on the web to add their own events, and can use an Android smartphone to get an augmented view of events and places. All users of the system are expected to have basic knowledge of using a computer and basic knowledge of the English language.

**Administrator** The major roles of the administrator in this application include - Approve user registration - Add events - Approve events added by registered users - Cost update - Edit profile

**User** The major roles of the user in this application include - Registration - Add events [view, edit, delete]

**Business Rules**

Use-cases are scenarios for understanding system requirements. A use-case model can be instrumental in the development, planning, and documentation of system requirements. - Admin and User have a web interface and an Android interface. - Guests can register on the website to become users. - Users must have a valid email id and mobile number. - Login: - Admin and User need to log in to the website. The user uses a username and password to log in to the website. The login credentials go to the server, where they are validated, and access is given to the client accordingly. - Admin approves user registrations and new events added by users. - Admin can add events and view events in the web interface. - Admin should include an image with new event details. - Admin has to include minimum fare details in the web interface. - Only registered users can add events in the AugReality web interface. - Users can enter details about their event with an image. - Event text constraints : - Content - Size : size appropriate for uploading - Type : plain text - Event image constraints : - User selects the image file - Size : size appropriate for uploading - Type : .jpg/.jpeg only. - All the details are stored in the database.
- The users must use a GPS-enabled phone. - The camera in the phone should be working properly. - A good compass should also be enabled in the phone. - The Android version should be above 2.2

Use Cases

Augmented View

In this use case the user gets location information such as the nearest hospitals, restaurants, major events, taxi stands and movie theatres in an augmented way. The augmented representation helps users retrieve information in a different and simple manner. The location information is gathered from GPS or A-GPS: the location is identified by the latitude and longitude values extracted from the GPS signal, the values are located on the map, and the location is shown to the user. This helps the user to know about the place if he lands at a strange location. The information plotted on the augmented screen is retrieved from the application database and is managed by an admin.

Map View

This use case deals with identifying the current location and other locations and plotting them on the map. The module identifies the location the user is currently in and shows it on the map. The GPS coordinates give the user's current geo-coordinates; GPS is also used for searching and locating other places on the map. If a user wants to travel from the current location to any other location, he sets that location on the map; the coordinates of that location are extracted from the map and the route is calculated. If a user searches for a place, the application searches the places in the database and, if found, shows it on the map. Each place is stored in the map as coordinates for exact identification.

Place Identification

Our application helps the user to search for places related to his current location.
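The Map View above relies on Google Maps for routing, but the underlying step, turning two latitude/longitude pairs into a distance, can be sketched with the standard haversine great-circle formula. The class name is ours and the coordinates in `main` are approximate, for illustration only:

```java
// Great-circle (haversine) distance between two GPS coordinates,
// a common approximation for the straight-line distance between the
// user's current location and a destination.
public class GeoDistance {
    static final double EARTH_RADIUS_KM = 6371.0;

    static double haversineKm(double lat1, double lon1,
                              double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Kothamangalam to Kochi (approximate coordinates), ~43 km.
        System.out.printf("%.1f km%n",
                haversineKm(10.0589, 76.6357, 9.9312, 76.2673));
    }
}
```

A real implementation on Android would more likely use the platform's built-in `Location.distanceBetween`, which accounts for the Earth's ellipsoidal shape.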
For a user in an unknown location, the major places he wants to know are taxi stands, auto stands, bus stands, hospitals etc. This module helps the user to locate those places in the landed city. It also helps the user to know about famous places like hotels, hospitals and auditoriums in the city. The user can search based on several categories; for example, he can search only for hospitals in the current city. If he selects a place, a small description of that place is shown on the map as a popup.

Event Identification

This module includes information about the events currently happening in the city and notifies the user about upcoming event details of the landed city. Users can also view the various events happening in the various parts of the landed district. If any event is occurring currently, the user can go to the location where the event is happening with the help of this application.

Fare Calculation

This module helps the user to calculate the fares of various modes of transport. It calculates the fare based on the distance travelled together with the minimum fare details, and provides the user with the approximate cost of travel by buses and taxis in the city. The fare calculation module enables the user to view the fares of various modes of transport; the calculation is based on the fixed-rate information set by the government of the state. If a user selects the source and destination for a route, then he can calculate the fares of the various modes of transport by selecting the fare-calculate option of the application. By this he can take a decision about the mode of travel.

Registration

This module helps the user to register in the application. A registered user is provided with a unique username and password with which he can log in to the system. Using this username and password, the user can add information like new hospitals, hotels etc. or any events in the city.
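The paper states that fares are derived from the travelled distance and the minimum-fare details stored by the admin, but gives no exact formula. A plausible sketch is a per-km rate with a minimum-fare floor; the class name, method name and rate values below are all assumptions for illustration:

```java
// Fare calculation sketch. The real per-km rates and minimum fares
// would come from the admin-maintained table on the server; the
// per-km-rate-with-floor rule itself is an assumption.
public class FareCalculator {

    // Fare = max(minimum fare, distance * per-km rate).
    static double fare(double distanceKm, double ratePerKm, double minFare) {
        return Math.max(minFare, distanceKm * ratePerKm);
    }

    public static void main(String[] args) {
        double distanceKm = 12.0; // e.g. from the route-map module
        // Hypothetical rates; the application computes all three modes
        // so the user can compare before choosing a vehicle.
        System.out.printf("auto: %.2f%n", fare(distanceKm, 15.0, 25.0));
        System.out.printf("taxi: %.2f%n", fare(distanceKm, 20.0, 75.0));
        System.out.printf("bus : %.2f%n", fare(distanceKm, 1.0, 10.0));
    }
}
```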
The admin also has a login username and password with which he can log in to the system. He can then view the requests from the users (requests for registration, or for adding new hospitals, hotels or events) and, according to the reliability of the information, approve or reject them. The admin controls the whole system and has all the privileges to maintain the application. The admin can also delete users whose activity is not good, for example users who repeatedly add events that are not real.

Add Place & Events

The admin is provided with the facility for adding place details, event details, place tags (hotels, hospitals etc.) and minimum fares for buses and taxis in the city to the database, using the web application. If an event is added, it will be shown in the application. The user is provided with the provision for adding places and events, for which he has to be registered; these are later checked and updated by the administrator. The administrator publishes an event or place added by a user only after verifying it as valid. For plotting places and events the system uses the Google Maps service.

Search Place & Events

The admin and the users are provided with the facility for searching place details and event details. The admin gets the minimum fares for buses and taxis in the city. The normal user is provided with the provision for searching information only; adding places and events requires registration, and such additions are later checked and updated by the administrator. For plotting places and events the system uses the Google Maps service.

Business Process Model

The business process model is used to model the entire business process. In a business process model, states are activities representing the performance of operations, and the transitions are triggered by the completion of the operations.
The purpose of this model is to provide a view of flows and of what is going on inside a use case or among several classes. It can also be used to represent a class's method implementation.

Interaction Of Processes

There are two main applications being developed: the AugReality Android application and the AugReality web application. The admin uploads the APKs of the Android application to the web server. The user should register with the server in order to use the web application. The user registers with the web server initially through the AugReality web application, giving his proper details in the provided registration form. The registration is approved by the admin after verification. Registered users can log in to the web application by providing their login credentials. A user can then insert his own events and places in the web; for that, he should include details such as event name, description, location, phone number, email and web address. One image should be uploaded with the details. The admin should approve this event as well; after approval, the user can view his own events and other events at the place.

The admin can log in to the web application using his credentials. He should verify user registration details and either approve or reject each user. Events added by users are also approved by the admin. The admin can add his own events and events happening in the city, and can then view and edit event details. The admin should regularly update the minimum fares of the state: the minimum fare details of the various modes of transport (auto, bus and taxi) are inserted by the admin.

The Android application can be downloaded by the users; no registration is required for using it. With the Android application the users can either view nearby locations or search for events happening in the city. For that, the GPS, camera and compass of the phone have to be working properly, and the phone should have internet connectivity.
The application first prompts the user to turn on the GPS if it is not on, and then gets the current location of the user from the GPS data. The user can then select what he wishes to search: either the Search Nearby option or Today's Events. If the user selects the Search Nearby option, he is directed to a page to select what he wants to search from a list containing items like ATM, airport, bank, bus station, church etc. If the user selects the Today's Events option, he has to select the district. Both pages redirect to the augmented screen on which the user can view the details. Here augmented reality is obtained with the help of the camera and compass, so both should be working properly. The direction of the camera is obtained with the help of the compass, and the event and place details in that direction are loaded from the server onto the camera view. The user can select one event from the popup box, and the details of that event are then loaded from the server. The user can thus view the shortest path between his location and the destination, as well as the image of the event, contact details etc. He can also view the minimum cost to reach the destination by the various modes of transport.
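The "details in the camera's direction" step above can be sketched as a geometric check: compute the initial bearing from the user to a point of interest, and show the overlay only if that bearing falls within the compass azimuth plus or minus half the camera's field of view. The class and method names are illustrative, not from the paper:

```java
// Decides whether a point of interest (POI) should appear on the
// augmented camera screen, given the phone's compass azimuth.
public class ArOverlay {

    // Initial great-circle bearing from (lat1,lon1) to (lat2,lon2),
    // in degrees clockwise from north, normalized to [0, 360).
    static double bearing(double lat1, double lon1, double lat2, double lon2) {
        double phi1 = Math.toRadians(lat1), phi2 = Math.toRadians(lat2);
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(phi2);
        double x = Math.cos(phi1) * Math.sin(phi2)
                 - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    // True if the POI's bearing lies within the camera's field of view.
    static boolean inView(double bearingDeg, double azimuthDeg, double fovDeg) {
        double diff = Math.abs(bearingDeg - azimuthDeg) % 360.0;
        if (diff > 180.0) diff = 360.0 - diff; // shortest angular distance
        return diff <= fovDeg / 2.0;
    }

    public static void main(String[] args) {
        double b = bearing(10.0, 76.0, 10.1, 76.0); // POI due north of user
        System.out.println(inView(b, 10.0, 60.0));  // camera nearly north
        System.out.println(inView(b, 90.0, 60.0));  // camera faces east
    }
}
```

On Android, the azimuth would come from `SensorManager.getOrientation` applied to the rotation vector derived from the compass and accelerometer.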
Abstract

The main way of coping with the complexity of software systems is to construct and use models expressed in UML. Unfortunately, the semantics (meaning) of models written in UML is not precisely defined. This may result in the incorrect interpretation of a model, and makes it hard to strictly verify a model and its transformations. In this paper we formally (mathematically) define the UML class diagram and its semantics. The problem of consistency of the diagram is then introduced and some examples of inconsistencies are given.

1: Introduction

A modeling language is one of the fundamental tools used in the development process of a software system. Models hide irrelevant information about the system at the given stage of development, thus in a way reducing the complexity of the system. As the complexity of current systems is still increasing, the use of models in the development process becomes indispensable. The more complex the software system is, the more difficult it becomes to ensure its quality properties, like dependability or security. The use of models can help in dealing with the complexity; however, questions arise about the quality of the model itself [8]. Is the model correct? Is the model complete? Is the model consistent? It is impossible to fully answer these questions without knowing what the given model exactly means and, more generally, what the precise semantics of the modeling language expressions is.

1.1: Unified Modeling Language

Unified Modeling Language (UML) [1, 6] is a visual modeling language that is used to specify, construct and document software systems. It is important to note that it is a modeling language and not a method: it neither defines nor advises the types of models which should be created, nor the steps that should be taken to construct the software system. From a user's point of view, the UML can be roughly treated as a set of different types of diagrams.
The UML has been adopted and standardized by the Object Management Group (OMG). The UML specification [1], published by OMG, is based on a metamodeling approach (see [2] for details about metamodeling). The metamodel (a model of UML) gives information about the abstract syntax of UML, but does not deal with semantics; this is expressed in a natural language. Furthermore, because the UML is method-independent, its specification sets a range of potential interpretations rather than an exact meaning.

1.2: Class diagram

A class diagram is the most fundamental and widely used UML diagram. It shows a static view of a system, consisting of classes, their interrelationships (including generalization/specialization, association, aggregation and composition), and the operations and attributes of the classes. The way the class diagram is drawn (the notation elements used and the level of detail) and interpreted depends on the perspective taken. There are three different perspectives which can be used in drawing a class diagram [3, 5]:

1. The conceptual perspective — the diagram is interpreted as a description of concepts in the real world or domain being studied, regardless of the software that might implement them.
2. The specification/design perspective — the diagram is interpreted as a description of software abstractions or components with interfaces, but without commitment to a particular implementation.
3. The implementation perspective — the diagram is interpreted as a description of a software implementation using a particular technology or language.

However, the above perspectives are not defined in the UML specification. The class diagram which is made from the conceptual perspective is called a conceptual class diagram. The conceptual class diagram describes the most significant concepts in the problem domain (represented as classes) and the relations between them (represented as relationships between classes). It is characterized by a low level of detail.
The conceptual diagram does not specify operations of the classes. Although attributes of the classes may be specified, from the conceptual perspective there is no difference between an attribute of a class and an association [3]. In this paper we formally define both the syntax and semantics of a conceptual class diagram (hereafter, the term 'class diagram' will be used) in the UML notation. The definitions presented here relate to UML 2.0, which is the current official version.

2: Mathematical notation

As a language for defining the class diagram (a so-called metalanguage), we use basic mathematical notation. The advantage of this approach lies in the versatility and universality of mathematical notation. In this section the list and function notation, which may vary in different publications, is briefly outlined. For a set \( A \), \( \mathcal{P}(A) \) denotes the set of all the subsets of \( A \), and \( A^* \) denotes the set of all the finite lists of elements of \( A \). The function \( \text{len}(l) \) returns the length of a list \( l \). For convenience, we add the expression \( A^{*(2)} \), which denotes the set of all finite lists of elements of \( A \) with a length of at least 2. The function \( \pi_i(l) \) projects the \( i \)-th element of a list \( l \), whereas the function \( \pi_{\bar{i}}(l) \) projects all but the \( i \)-th element. The list \([a_1, \ldots, a_n]\) is formally equal to the tuple \((a_1, \ldots, a_n)\). For a finite set \( A \), \( |A| \) denotes the number of elements of \( A \). A partial function from \( A \) to \( B \) is denoted by \( f : A \rightharpoonup B \), and the function \( \text{dom}(f) \) returns the domain of \( f \). The expression \( f : A \rightarrow B \) denotes a total function from \( A \) to \( B \) (in this case it holds that \( \text{dom}(f) = A \)).

3: The syntax of a class diagram

Graphical elements of a class diagram are shown in Fig. 1. In this section we formally define the abstract syntax of the class diagram.
The syntax is defined in a way which reflects the following semantic relationships between elements of the diagram: an association class is both a kind of association and a kind of class (it is a single model element [1, page 43]), an aggregation is a kind of association, and a composition is a kind of aggregation. To a large extent, this simplifies the definition of the semantics presented in Sec. 4. A class, an association and an association class are each called a *classifier*. If we let \( \text{Classifiers} \) denote the set of all classifiers (names) which may appear on a diagram, then by a class diagram we understand a tuple
\[ \mathcal{D} = (\text{classes}, \text{assocs}, \text{ends}, \text{mults}, \text{assocs}_{\text{agg}}, \text{assocs}_{\text{com}}, \text{specs}) , \]
where:

1. \( \mathcal{D}.\text{classes} \) is a set of classes:
\[ \mathcal{D}.\text{classes} \subseteq \text{Classifiers} . \tag{1} \]

2. \( \mathcal{D}.\text{assocs} \) is a set of associations:
\[ \mathcal{D}.\text{assocs} \subseteq \text{Classifiers} . \tag{2} \]

For the diagram \( \mathcal{D} \), a set of association classes and a set of all classifiers are thus respectively defined as:
\[ \mathcal{D}.\text{asclasses} =_{\text{def}} \mathcal{D}.\text{classes} \cap \mathcal{D}.\text{assocs} , \tag{3} \]
\[ \mathcal{D}.\text{classifiers} =_{\text{def}} \mathcal{D}.\text{classes} \cup \mathcal{D}.\text{assocs} . \tag{4} \]

3. \( \mathcal{D}.\text{ends} \) is a function of association ends. The function maps each association to a finite list of at least two, not necessarily different, classes participating in the association:
\[ \mathcal{D}.\text{ends} : \mathcal{D}.\text{assocs} \rightarrow \mathcal{D}.\text{classes}^{*(2)} . \tag{5} \]

The position on the list \( \mathcal{D}.\text{ends}(as) \) uniquely identifies the association end.

4. \( \mathcal{D}.\text{mults} \) is a function of multiplicities of association ends.
Multiplicity is a non-empty set of non-negative integers with at least one value greater than zero. The default multiplicity is the set of all non-negative integers (\( \mathbb{N} \)). The function assigns to each association a list of multiplicities on its ends:
\[ \mathcal{D}.\text{mults} : \mathcal{D}.\text{assocs} \rightarrow (\mathcal{P}(\mathbb{N}) \setminus \{\emptyset, \{0\}\})^* . \tag{6} \]

As before, the position on the list \( \mathcal{D}.\text{mults}(as) \) identifies the association end. The multiplicity must be defined for each association end:
\[ \forall as \in \mathcal{D}.\text{assocs} \cdot \text{len}(\mathcal{D}.\text{mults}(as)) = \text{len}(\mathcal{D}.\text{ends}(as)) . \tag{7} \]

5. \( \mathcal{D}.\text{assocs}_{\text{agg}} \) is a set of aggregations:
\[ \mathcal{D}.\text{assocs}_{\text{agg}} \subseteq \mathcal{D}.\text{assocs} . \tag{8} \]

Only binary associations can be aggregations [1, page 37]:
\[ \forall as \in \mathcal{D}.\text{assocs}_{\text{agg}} \cdot \text{len}(\mathcal{D}.\text{ends}(as)) = 2 . \tag{9} \]

We assume that the aggregate class (the class on the association end with a diamond adornment) is the first class on the list \( \mathcal{D}.\text{ends}(as) \).

6. \( \mathcal{D}.\text{assocs}_{\text{com}} \) is a set of compositions:
\[ \mathcal{D}.\text{assocs}_{\text{com}} \subseteq \mathcal{D}.\text{assocs}_{\text{agg}} . \tag{10} \]

7. \( \mathcal{D}.\text{specs} \) is a function of specializations. The function assigns to each classifier the set of all of its (direct or indirect) specializations:
\[ \mathcal{D}.\text{specs} : \mathcal{D}.\text{classifiers} \to \mathcal{P}(\mathcal{D}.\text{classifiers}) . \tag{11} \]

The specialization hierarchy must be acyclic [1, page 49], which means that a classifier cannot be its own specialization:
\[ \forall cf \in \mathcal{D}.\text{classifiers} \cdot cf \notin \mathcal{D}.\text{specs}(cf) . \tag{12} \]

4: The semantics of a class diagram

A classifier describes a set of instances that have something in common.
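To make the syntax definition of Sec. 3 concrete, the diagram tuple and its well-formedness conditions (7), (9), (10) and (12) can be sketched in code. This is an illustrative Python rendering of our own (the names and the finite stand-in for the default multiplicity \( \mathbb{N} \) are not part of the paper):

```python
from dataclasses import dataclass

# Illustrative sketch of the class-diagram tuple D of Sec. 3, together with
# the well-formedness conditions (7), (9), (10) and (12).
@dataclass
class ClassDiagram:
    classes: set        # D.classes
    assocs: set         # D.assocs
    ends: dict          # D.ends:  association -> list of classes (length >= 2)
    mults: dict         # D.mults: association -> list of multiplicity sets
    assocs_agg: set     # aggregations (subset of assocs)
    assocs_com: set     # compositions (subset of assocs_agg)
    specs: dict         # D.specs: classifier -> set of all its specializations

    @property
    def asclasses(self):          # (3): association classes
        return self.classes & self.assocs

    @property
    def classifiers(self):        # (4): all classifiers
        return self.classes | self.assocs

    def well_formed(self):
        return (
            all(len(self.ends[a]) >= 2 for a in self.assocs)                       # (5)
            and all(len(self.mults[a]) == len(self.ends[a]) for a in self.assocs)  # (7)
            and all(len(self.ends[a]) == 2 for a in self.assocs_agg)               # (9)
            and self.assocs_com <= self.assocs_agg                                 # (10)
            and all(cf not in self.specs.get(cf, set())
                    for cf in self.classifiers)                                    # (12)
        )

# A small binary association Works between Person and Company.
N = set(range(100))  # finite stand-in for the default multiplicity N
d = ClassDiagram(
    classes={"Person", "Company"},
    assocs={"Works"},
    ends={"Works": ["Person", "Company"]},
    mults={"Works": [N, {1}]},
    assocs_agg=set(),
    assocs_com=set(),
    specs={},
)
```

Here `d.well_formed()` holds; putting, say, `"Person"` into its own set of specializations would violate condition (12).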
An instance of a class is called an object, whereas an instance of an association is called a link. A link is a connection between two or more objects of the classes at the corresponding positions in the association. An instance of an association class is both an object and a link, so it can both be connected by links and can connect objects.

4.1: Domain state

The existing instances of a classifier are called its extent. The classifier extent usually varies over time, as objects and links may be created and destroyed. Thus, from a conceptual perspective, the classifiers' extents form a snapshot of the state of a problem domain at a particular point in time. If we let \( \text{Instances} \) denote the finite set of instances that may come into existence in a problem domain, then a domain state (or shortly, a state) is a pair
\[ \mathcal{S} = (\text{instances}, \text{ends}) , \]
where:

1. \( \mathcal{S}.\text{instances} \) is a partial function of extents. The function maps each classifier to the set of its instances (its extent):
\[ \mathcal{S}.\text{instances} : \text{Classifiers} \rightharpoonup \mathcal{P}(\text{Instances}) . \tag{13} \]

2. \( \mathcal{S}.\text{ends} \) is a partial function of link ends. The function assigns to each instance of an association, i.e. to each link, a list of the instances of classes (objects) which are connected by the link:
\[ \mathcal{S}.\text{ends} : \text{Instances} \rightharpoonup \text{Instances}^{*} . \tag{14} \]

The position on the list uniquely identifies the link end, which in turn corresponds to an appropriate association end.

4.2: The relation of satisfaction

The conceptual class diagram shows the structure of domain states or, from a different point of view, defines some constraints on domain states. Thus, the diagram can be interpreted as the set of all such states in which the mentioned constraints are satisfied. In this section we formally define what kind of constraints they are and what the word 'satisfied' means in this context.
If we let \( \text{Diagrams} \) be the set of all class diagrams as defined in Sec. 3 and let \( \text{States} \) be the set of all domain states as defined in Sec. 4.1, then for a given \( \mathcal{S} \in \text{States} \) and \( \mathcal{D} \in \text{Diagrams} \), we say that the diagram \( \mathcal{D} \) is satisfied in the state \( \mathcal{S} \), and we write \( \text{Sat}(\mathcal{D}, \mathcal{S}) \), if and only if:

1. \( \mathcal{S} \) specifies the extents of all classifiers in \( \mathcal{D} \) (and maybe others, not depicted in the diagram \( \mathcal{D} \)):
\[ \mathcal{D}.\text{classifiers} \subseteq \text{dom}(\mathcal{S}.\text{instances}) . \tag{15} \]

2. An instance of a given association only connects instances of the classes participating in this association (on the appropriate ends):
\[ \forall as \in \mathcal{D}.\text{assocs} \cdot \forall ln \in \mathcal{S}.\text{instances}(as) \cdot \text{len}(\mathcal{D}.\text{ends}(as)) = \text{len}(\mathcal{S}.\text{ends}(ln)) \land \forall i \in \{1, \ldots, \text{len}(\mathcal{D}.\text{ends}(as))\} \cdot \pi_i(\mathcal{S}.\text{ends}(ln)) \in \mathcal{S}.\text{instances}(\pi_i(\mathcal{D}.\text{ends}(as))) . \tag{16} \]

3. Instances of an association satisfy the specification of multiplicity on all association ends\(^1\). For any \( n - 1 \) ends of an \( n \)-ary association (\( n \geq 2 \)) and any \( n - 1 \) instances of the classes on those ends, the number of links they form with instances of the class on the remaining end must belong to the multiplicity of this end [1, pages 37–38]:
\[ \forall as \in \mathcal{D}.\text{assocs} \cdot \forall i \in \{1, \ldots, \text{len}(\mathcal{D}.\text{ends}(as))\} \cdot \forall p \in \text{product}(as, i) \cdot \left| \{ ln \in \mathcal{S}.\text{instances}(as) : \pi_{\bar{i}}(\mathcal{S}.\text{ends}(ln)) = p \} \right| \in \pi_i(\mathcal{D}.\text{mults}(as)) , \tag{17} \]
where:
\[ \text{product}(as, i) =_{\text{def}} \prod_{j=1, j \neq i}^{\text{len}(\mathcal{D}.\text{ends}(as))} \mathcal{S}.\text{instances}(\pi_j(\mathcal{D}.\text{ends}(as))) . \tag{18} \]

4.
An extent of an association includes at most one link connecting a given set of class instances (on the given link ends)\(^2\):
\[ \forall as \in \mathcal{D}.\text{assocs} \cdot \forall ln_1, ln_2 \in \mathcal{S}.\text{instances}(as) \cdot ln_1 \neq ln_2 \Rightarrow \exists i \in \{1, \ldots, \text{len}(\mathcal{D}.\text{ends}(as))\} \cdot \pi_i(\mathcal{S}.\text{ends}(ln_1)) \neq \pi_i(\mathcal{S}.\text{ends}(ln_2)) . \tag{19} \]

\(^1\) The meaning of multiplicity for an association with more than two ends was not precisely defined in UML prior to version 2.0. Possible interpretations are discussed in detail in [4].

\(^2\) This condition does not have to be true for an association with a \{bag\} adornment. See [7] for details.

5. An aggregation relationship is transitive and asymmetric across all aggregation links, even those from different aggregations [6]. That is, an object may not be directly or indirectly part of itself:
\[ \forall ob \in \text{Instances} \cdot ob \notin \text{parts}(ob) , \tag{20} \]
where \( \text{parts}(ob) \) denotes the set of all parts of an object. Formally, \( ob_2 \in_{\text{def}} \text{parts}(ob_1) \) if \( ob_2 \) is a direct part of \( ob_1 \):
\[ \exists as \in \mathcal{D}.\text{assocs}_{\text{agg}} \cdot \exists ln \in \mathcal{S}.\text{instances}(as) \cdot ob_1 = \pi_1(\mathcal{S}.\text{ends}(ln)) \land ob_2 = \pi_2(\mathcal{S}.\text{ends}(ln)) , \tag{21} \]
or an indirect one, i.e. for a certain \( n \geq 2 \) it holds that:
\[ \exists as_1, \ldots, as_n \in \mathcal{D}.\text{assocs}_{\text{agg}} \cdot \exists ln_1 \in \mathcal{S}.\text{instances}(as_1), \ldots, \exists ln_n \in \mathcal{S}.\text{instances}(as_n) \cdot ob_1 = \pi_1(\mathcal{S}.\text{ends}(ln_1)) \land ob_2 = \pi_2(\mathcal{S}.\text{ends}(ln_n)) \land \forall i \in \{1, \ldots, n-1\} \cdot \pi_2(\mathcal{S}.\text{ends}(ln_i)) = \pi_1(\mathcal{S}.\text{ends}(ln_{i+1})) . \tag{22} \]

6. An object may be a direct part of only one composite object at a time.
Precisely, only one composition link (across all composition links, even those from different compositions) may exist at one time for one part-object [6]:
\[ \forall as_1, as_2 \in \mathcal{D}.\text{assocs}_{\text{com}} \cdot \forall ln_1 \in \mathcal{S}.\text{instances}(as_1) \cdot \forall ln_2 \in \mathcal{S}.\text{instances}(as_2) \cdot ln_1 \neq ln_2 \Rightarrow \pi_2(\mathcal{S}.\text{ends}(ln_1)) \neq \pi_2(\mathcal{S}.\text{ends}(ln_2)) . \]

7. An instance of a specializing classifier is also an instance of the specialized classifier:
\[ \forall cf_1, cf_2 \in \mathcal{D}.\text{classifiers} \cdot cf_2 \in \mathcal{D}.\text{specs}(cf_1) \Rightarrow \mathcal{S}.\text{instances}(cf_2) \subseteq \mathcal{S}.\text{instances}(cf_1) . \]

4.3: The diagram's meaning

Now, using the satisfaction relation, the semantics of a class diagram can be formally defined. As stated earlier, the meaning of a class diagram from the conceptual perspective is the set of all domain states in which the diagram is satisfied. Let \( \mathcal{M} : \text{Diagrams} \rightarrow \mathcal{P}(\text{States}) \) be the function defined as:
\[ \mathcal{M}(\mathcal{D}) =_{\text{def}} \{ \mathcal{S} \in \text{States} : \text{Sat}(\mathcal{D}, \mathcal{S}) \} . \]

The value \( \mathcal{M}(\mathcal{D}) \) is called the meaning or interpretation of the diagram \( \mathcal{D} \).

5: Consistency

As far as model quality is concerned, consistency is one of the main criteria to be examined [8]. Generally, consistency is a measure of whether there are contradictions among the various diagrams within a model, or between models produced at various stages of development. In the case of a class diagram, checking consistency can also determine whether or not there are any internal conflicts within a single diagram. In this section, we outline the above problem by introducing a formal definition of classifier consistency.
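As an aside, the satisfaction conditions of Sec. 4.2 are directly checkable on finite states. The following Python sketch of our own (all names are illustrative) checks the multiplicity condition (17) for a single association: for each end, it fixes objects on all other ends and counts the matching links:

```python
from itertools import product

# Illustrative check of the multiplicity condition (17) for one association.
# `ends`/`mults` are the association's lists of classes and multiplicity sets,
# `extents` maps each class to its instance set, and `links` is the list of
# link-end tuples for this association.
def multiplicity_ok(ends, mults, extents, links):
    n = len(ends)
    for i in range(n):
        other_extents = [extents[c] for j, c in enumerate(ends) if j != i]
        for combo in product(*other_extents):      # the set product(as, i) of (18)
            count = sum(
                1 for ln in links
                if tuple(o for j, o in enumerate(ln) if j != i) == combo
            )
            if count not in mults[i]:              # must lie in pi_i(D.mults(as))
                return False
    return True

N = set(range(100))  # finite stand-in for the default multiplicity
extents = {"Person": {"p1", "p2"}, "Company": {"c1"}}
links = [("p1", "c1"), ("p2", "c1")]  # each person works for exactly one company
ok = multiplicity_ok(["Person", "Company"], [N, {1}], extents, links)
```

With both links present, `ok` is true; dropping `("p2", "c1")` leaves `p2` with zero companies, so the multiplicity `{1}` on the Company end is violated and the check fails.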
5.1: The consistency of a classifier

If we let \( \mathcal{D} \in \text{Diagrams} \) and \( cf \in \text{Classifiers} \), then we say that the classifier \( cf \) is consistent in the context of the diagram \( \mathcal{D} \) if and only if:
\[ \exists \mathcal{S} \in \mathcal{M}(\mathcal{D}) \cdot \mathcal{S}.\text{instances}(cf) \neq \emptyset . \]

In other words, the diagram admits a domain state in which at least one instance of that classifier exists. Otherwise, the classifier is deemed inconsistent. Two examples of inconsistent classifiers are presented below.

5.2: Examples of inconsistencies

Let \( \mathcal{D} \in \text{Diagrams} \) include the construction shown in Fig. 2a: \( AG \in \mathcal{D}.\text{assocs}_{\text{agg}} \), \( \mathcal{D}.\text{ends}(AG) = [A, A] \), \( \mathcal{D}.\text{mults}(AG) = [\mathbb{N}, \{1\}] \). In [7] it is proven that the class \( A \) is inconsistent in the context of the diagram \( \mathcal{D} \). The reason for the inconsistency is the multiplicity '1' on one of the aggregation ends. This multiplicity means that every object of \( A \) has exactly one part (which is also an object of \( A \)); thus, from the transitivity and asymmetry of the aggregation relationship, the objects of \( A \) would have to form an infinite whole-part chain, which contradicts the fact that the extent of \( A \) is finite. The formal proof, however, is too extensive to be presented here in detail. Now, let us consider the diagram in Fig. 2b. The inconsistency of the class \( A \) in this diagram can be shown in a more sophisticated way, using a simple property of consistency. If we let \( \mathcal{D}_1, \mathcal{D}_2 \in \text{Diagrams} \), then we say that the diagram \( \mathcal{D}_2 \) is a consequence of the diagram \( \mathcal{D}_1 \), and we write
\[ \mathcal{D}_1 \Rightarrow \mathcal{D}_2 , \]
if and only if \( \mathcal{M}(\mathcal{D}_1) \subseteq \mathcal{M}(\mathcal{D}_2) \). For the above definition, it is easy to prove the following theorem: if the classifier \( cf \) is consistent in the context of the diagram \( \mathcal{D}_1 \) and it holds that \( \mathcal{D}_1 \Rightarrow \mathcal{D}_2 \), then \( cf \) is also consistent in the context of \( \mathcal{D}_2 \). In [7] we prove a set of transformation rules from one diagram into its consequences.
By virtue of one of these rules, the implication shown in Fig. 3a–b holds.

[Figure 2. Examples of inconsistencies]

As stated earlier, the class \( A \) in Fig. 2a (or Fig. 3b) is inconsistent; thus, by virtue of the above theorem, \( A \) in Fig. 3a (or Fig. 2b) must be inconsistent too. Note that in all cases, due to the fact that the instances of class \( A \) cannot exist, neither the instances of the aggregation \( AG \) nor of class \( B \) itself can exist. Thus, they are inconsistent as well.

6: Conclusion

The work presented here forms the formal foundation for the verification of a class diagram. The interpretation of particular elements of the diagram, as well as the interpretation of the whole diagram, has been precisely defined. The presented definitions take into consideration the concepts which often cause interpretative difficulties, like the aggregation/composition relationship or the n-ary association, thus allowing a better understanding of these concepts. Using the proposed diagram formalization, we have outlined the subject of reasoning about a class diagram, highlighting the possibility of the occurrence of internal inconsistencies in the diagram. Some interesting issues related to reasoning about a class diagram have only been briefly touched upon here (e.g. the problem of diagram transformations) and are presented in detail in [7]; others are the subject of further investigation (e.g. the automatization of reasoning).

References
Internet Engineering Task Force                                 S. Sorce
Internet-Draft                                                  H. Kario
Updates: 4462 (if approved)                                Red Hat, Inc.
Intended status: Standards Track                            Jul 22, 2019
Expires: January 23, 2020

                     GSS-API Key Exchange with SHA2
                   draft-ietf-curdle-gss-keyex-sha2-10

Abstract

This document specifies additions and amendments to RFC4462. It defines a new key exchange method that uses SHA-2 for integrity and deprecates weak DH groups. The purpose of this specification is to modernize the cryptographic primitives used by GSS Key Exchanges.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." This Internet-Draft will expire on January 23, 2020.

Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

1. Introduction

SSH GSS-API Methods [RFC4462] allows the use of GSSAPI [RFC2743] for authentication and key exchange in SSH.
It defines three exchange methods, all based on DH groups and SHA-1. This document updates RFC4462 with new methods intended to support environments that desire to use the SHA-2 cryptographic hash functions.

2. Rationale

Due to security concerns with SHA-1 [RFC6194] and with MODP groups with less than 2048 bits [NIST-SP-800-131Ar1], we propose the use of hashes based on SHA-2 [RFC6234] with DH group14, group15, group16, group17 and group18 [RFC3526]. Additionally, we add support for key exchange based on Elliptic Curve Diffie-Hellman with the NIST P-256, P-384 and P-521 [SEC2v2] as well as the X25519 and X448 [RFC7748] curves. Following the practice of [RFC8268], only SHA-256 and SHA-512 hashes are used for DH groups. For NIST curves, the same curve-to-hashing-algorithm pairing used in [RFC5656] is adopted for consistency.

3. Document Conventions

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] when, and only when, they appear in all capitals, as shown here.

4. New Diffie-Hellman Key Exchange methods

This document adopts the same naming convention defined in [RFC4462] to define families of methods that cover any GSS-API mechanism used with a specific Diffie-Hellman group and SHA-2 hash combination.

<table> <thead> <tr> <th>Key Exchange Method Name</th> <th>Implementation Recommendations</th> </tr> </thead> <tbody> <tr> <td>gss-group14-sha256-*</td> <td>SHOULD/RECOMMENDED</td> </tr> <tr> <td>gss-group15-sha512-*</td> <td>MAY/OPTIONAL</td> </tr> <tr> <td>gss-group16-sha512-*</td> <td>SHOULD/RECOMMENDED</td> </tr> <tr> <td>gss-group17-sha512-*</td> <td>MAY/OPTIONAL</td> </tr> <tr> <td>gss-group18-sha512-*</td> <td>MAY/OPTIONAL</td> </tr> </tbody> </table>

Table 1: New key exchange algorithms

Each key exchange method prefix is registered by this document.
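The RFC 4462 naming convention derives a per-mechanism suffix from the GSS-API mechanism's OID: the method name is the family prefix concatenated with the Base64 encoding of the MD5 digest of the DER encoding of the OID. The following Python sketch illustrates this (the minimal DER encoder is our own illustration, not code from any RFC):

```python
import base64
import hashlib

def der_oid(*arcs):
    # Minimal DER encoder for an OBJECT IDENTIFIER (illustrative; assumes the
    # encoded body is shorter than 128 bytes so a one-byte length suffices).
    body = bytearray([40 * arcs[0] + arcs[1]])   # first two arcs share one byte
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]                     # base-128, high bit marks "more"
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        body.extend(reversed(chunk))
    return bytes([0x06, len(body)]) + bytes(body)

def gss_method_name(prefix, *oid_arcs):
    # prefix + Base64(MD5(DER(mechanism OID))), per the RFC 4462 convention
    digest = hashlib.md5(der_oid(*oid_arcs)).digest()
    return prefix + base64.b64encode(digest).decode("ascii")

# Kerberos V5 mechanism OID is 1.2.840.113554.1.2.2
name = gss_method_name("gss-group14-sha256-", 1, 2, 840, 113554, 1, 2, 2)
```

For the Kerberos V5 OID this yields the suffix `toWM5Slw5Ew8Mqkay+al2g==`, the same suffix that appears in the Kerberos method names listed in RFC 4462.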
The IESG is the change controller of all these key exchange methods; this does NOT imply that the IESG is considered to be in control of the corresponding GSS-API mechanism. Each method in any family of methods (Table 2) specifies GSS-API-authenticated Diffie-Hellman key exchanges as described in Section 2.1 of [RFC4462]. The method name for each method (Table 1) is the concatenation of the family name prefix with the Base64 encoding of the MD5 hash [RFC1321] of the ASN.1 DER encoding [ISO-IEC-8825-1] of the corresponding GSS-API mechanism's OID. Base64 encoding is described in Section 4 of [RFC4648].

<table> <thead> <tr> <th>Family Name prefix</th> <th>Hash Function</th> <th>Group</th> <th>Reference</th> </tr> </thead> <tbody> <tr> <td>gss-group14-sha256-</td> <td>SHA-256</td> <td>2048-bit MODP</td> <td>Section 3 of [RFC3526]</td> </tr> <tr> <td>gss-group15-sha512-</td> <td>SHA-512</td> <td>3072-bit MODP</td> <td>Section 4 of [RFC3526]</td> </tr> <tr> <td>gss-group16-sha512-</td> <td>SHA-512</td> <td>4096-bit MODP</td> <td>Section 5 of [RFC3526]</td> </tr> <tr> <td>gss-group17-sha512-</td> <td>SHA-512</td> <td>6144-bit MODP</td> <td>Section 6 of [RFC3526]</td> </tr> <tr> <td>gss-group18-sha512-</td> <td>SHA-512</td> <td>8192-bit MODP</td> <td>Section 7 of [RFC3526]</td> </tr> </tbody> </table>

Table 2: Family method references

5. New Elliptic Curve Diffie-Hellman Key Exchange methods

In [RFC5656], new SSH key exchange algorithms based on Elliptic Curve Cryptography are introduced. We reuse much of Section 4 of [RFC5656] to define GSS-API-authenticated ECDH key exchanges. Additionally, we also utilize the curves defined in [I-D.ietf-curdle-ssh-curves] to complement the three classic NIST-defined curves required by [RFC5656].

5.1. Generic GSS-API Key Exchange with ECDH

This section reuses much of the scheme defined in Section 2.1 of [RFC4462] and combines it with the scheme defined in Section 4 of [RFC5656]; in particular, all checks and verification steps prescribed in Section 4 of [RFC5656] apply here as well. Key-agreement schemes ECDHE-Curve25519 and ECDHE-Curve448 perform the Diffie-Hellman protocol using the functions X25519 and X448, respectively. Implementations MUST compute these functions using the algorithms described in [RFC7748]. When they do so, implementations MUST check whether the computed Diffie-Hellman shared secret is the all-zero value and abort if so, as described in Section 6 of [RFC7748]. Alternative implementations of these functions SHOULD abort when either input forces the shared secret to one of a small set of values, as discussed in Section 7 of [RFC7748].

This section defers to [RFC7546] as the source of information on GSS-API context establishment operations, Section 3 being the most relevant. All Security Considerations described in [RFC7546] apply here too.

The parties each generate an ephemeral key pair, according to Section 3.2.1 of [SEC1v2]. Keys are verified upon receipt by the parties according to Section 3.2.3.1 of [SEC1v2]. For NIST curves the keys use the uncompressed point representation and MUST be converted using the algorithm in Section 2.3.4 of [SEC1v2]. If the conversion fails or the point is transmitted using the compressed representation, the key exchange MUST fail.

A GSS context is established according to Section 4 of [RFC5656]: the client initiates the establishment using GSS_Init_sec_context() and the server responds to it using GSS_Accept_sec_context(). For the negotiation, the client MUST set mutual_req_flag and integ_req_flag to "true". In addition, deleg_req_flag MAY be set to "true" to request access delegation, if requested by the user.
Since the key exchange process authenticates only the host, the setting of anon_req_flag is immaterial to this process. If the client does not support the "gssapi-keyex" user authentication method described in Section 4 of [RFC4462], or does not intend to use that method in conjunction with the GSS-API context established during key exchange, then anon_req_flag SHOULD be set to "true". Otherwise, this flag MAY be set to "true" if the client wishes to hide its identity. This key exchange process will exchange only a single message token once the context has been established; therefore, the replay_det_req_flag and sequence_req_flag SHOULD be set to "false".

The client MUST include its public key with the first message it sends to the server during this process; if the server receives more than one key, or none at all, the key exchange MUST fail. During GSS context establishment, multiple tokens may be exchanged by the client and the server. When the GSS context is established (major_status is GSS_S_COMPLETE), the parties check that mutual_state and integ_avail are both "true". If not, the key exchange MUST fail.

Once a party receives the peer's public key, it proceeds to compute a shared secret K. For NIST curves the computation is done according to Section 3.3.1 of [SEC1v2] and the resulting value z is converted to the octet string K using the conversion defined in Section 2.3.5 of [SEC1v2]. For curve25519 and curve448, the algorithms in Section 6 of [RFC7748] are used instead.

To verify the integrity of the handshake, peers use the hash function defined by the selected key exchange method to calculate H:

\[ H = \text{hash}(V_C || V_S || I_C || I_S || K_S || Q_C || Q_S || K) . \]

The GSS_GetMIC() call is used by the server with H as the payload and generates a MIC. The GSS_VerifyMIC() call is used by the client to verify the MIC.
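For illustration, the exchange hash above can be computed with SSH's standard wire encodings of `string` and `mpint` from RFC 4251. The following Python sketch is our own (the inputs are placeholders, and negative mpints are not handled since the shared secret is positive):

```python
import hashlib
import struct

def ssh_string(b):
    # RFC 4251 "string": uint32 big-endian length followed by the bytes
    return struct.pack(">I", len(b)) + b

def ssh_mpint(n):
    # RFC 4251 "mpint" for non-negative n: minimal big-endian two's complement,
    # with a leading 0x00 byte when the high bit of the top byte is set
    if n == 0:
        return struct.pack(">I", 0)
    b = n.to_bytes((n.bit_length() + 8) // 8, "big")
    return ssh_string(b)

def exchange_hash(hash_name, v_c, v_s, i_c, i_s, k_s, q_c, q_s, k):
    # H = hash(V_C || V_S || I_C || I_S || K_S || Q_C || Q_S || K), each field
    # wire-encoded before concatenation
    h = hashlib.new(hash_name)
    for part in (v_c, v_s, i_c, i_s, k_s, q_c, q_s):
        h.update(ssh_string(part))
    h.update(ssh_mpint(k))
    return h.digest()

# Placeholder inputs; K_S is the empty string when no SSH_MSG_KEXGSS_HOSTKEY
# message was exchanged, as the draft specifies.
h = exchange_hash("sha256",
                  b"SSH-2.0-exampleC", b"SSH-2.0-exampleS",
                  b"<client KEXINIT payload>", b"<server KEXINIT payload>",
                  b"", b"<Q_C octets>", b"<Q_S octets>", 0x12345)
```

Note how the mpint encoding inserts a leading zero byte when the most significant bit would otherwise be set, so that the value stays non-negative.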
If any GSS_Init_sec_context() or GSS_Accept_sec_context() returns a major_status other than GSS_S_COMPLETE or GSS_S_CONTINUE_NEEDED, or any other GSS-API call returns a major_status other than GSS_S_COMPLETE, the key exchange MUST fail. The same recommendations expressed in Section 2.1 of [RFC4462] are followed with regard to error reporting.

The following is an overview of the key exchange process:

    Client                                                Server
    ------                                                ------
    Generate ephemeral key pair.
    Calls GSS_Init_sec_context().
    SSH_MSG_KEXGSS_INIT  ---------->

                                    Verify received key is valid.
    (Optional)           <----------  SSH_MSG_KEXGSS_HOSTKEY

    (Loop)
    |                               Calls GSS_Accept_sec_context().
    |                    <----------  SSH_MSG_KEXGSS_CONTINUE
    |  Calls GSS_Init_sec_context().
    |  SSH_MSG_KEXGSS_CONTINUE ---------->

                                    Calls GSS_Accept_sec_context().
                                    Generate ephemeral key pair.
                                    Compute shared secret.
                                    Computes hash H.
                                    Calls GSS_GetMIC( H ) = MIC.
                         <----------  SSH_MSG_KEXGSS_COMPLETE

    Verify received key is valid.
    Compute shared secret.
    Compute hash = H.
    Calls GSS_VerifyMIC( MIC, H ).

This is implemented with the following messages. The client sends:

    byte      SSH_MSG_KEXGSS_INIT
    string    output_token (from GSS_Init_sec_context())
    string    Q_C, client's ephemeral public key octet string

The server may respond with:

    byte      SSH_MSG_KEXGSS_HOSTKEY
    string    server public host key and certificates (K_S)

The server sends:

    byte      SSH_MSG_KEXGSS_CONTINUE
    string    output_token (from GSS_Accept_sec_context())

Each time the client receives the message described above, it makes another call to GSS_Init_sec_context().
The client sends: - **byte**: SSH_MSG_KEXGSS_CONTINUE - **string**: output_token (from GSS_Init_sec_context()) As the final message the server sends either: - **byte**: SSH_MSG_KEXGSS_COMPLETE - **string**: Q_S, server’s ephemeral public key octet string - **string**: mic_token (MIC of H) - **boolean**: TRUE - **string**: output_token (from GSS_Accept_sec_context()) Or the following if no output_token is available: - **byte**: SSH_MSG_KEXGSS_COMPLETE - **string**: Q_S, server’s ephemeral public key octet string - **string**: mic_token (MIC of H) - **boolean**: FALSE The hash H is computed as the HASH hash of the concatenation of the following: - **string**: V_C, the client’s version string (CR, NL excluded) - **string**: V_S, server’s version string (CR, NL excluded) - **string**: I_C, payload of the client’s SSH_MSG_KEXINIT - **string**: I_S, payload of the server’s SSH_MSG_KEXINIT - **string**: K_S, server’s public host key - **string**: Q_C, client’s ephemeral public key octet string - **string**: Q_S, server’s ephemeral public key octet string - **mpint**: K, shared secret This value is called the exchange hash, and it is used to authenticate the key exchange. The exchange hash SHOULD be kept secret. If no SSH_MSG_KEXGSS_HOSTKEY message has been sent by the server or received by the client, then the empty string is used in place of K_S when computing the exchange hash. Since this key exchange method does not require the host key to be used for any encryption operations, the SSH_MSG_KEXGSS_HOSTKEY message is OPTIONAL. If the "null" host key algorithm described in **Section 5 of [RFC4462]** is used, this message MUST NOT be sent. If the client receives a SSH_MSG_KEXGSS_CONTINUE message after a call to GSS_Init_sec_context() has returned a major_status code of GSS_S_COMPLETE, a protocol error has occurred and the key exchange MUST fail. 
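The fields above are concatenated using the SSH wire encodings of Section 5 of RFC 4251: a "string" is a four-byte big-endian length followed by the raw bytes, and an "mpint" is a big-endian two's-complement value with no redundant leading bytes (a 0x00 pad byte is added when the high bit of a positive value is set, and zero encodes as the empty string). The following non-normative sketch shows how the hash input could be assembled; the helper names are illustrative, not from this document:

```cpp
// Sketch of the SSH wire encodings (RFC 4251, Section 5) used when
// concatenating the exchange-hash inputs. Helper names are ours.
#include <cassert>
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

// "string": uint32 big-endian length followed by the raw bytes.
Bytes encode_string(const Bytes& s) {
    Bytes out;
    uint32_t n = static_cast<uint32_t>(s.size());
    out.push_back(static_cast<uint8_t>(n >> 24));
    out.push_back(static_cast<uint8_t>(n >> 16));
    out.push_back(static_cast<uint8_t>(n >> 8));
    out.push_back(static_cast<uint8_t>(n));
    out.insert(out.end(), s.begin(), s.end());
    return out;
}

// "mpint": a "string" holding the value in two's complement, big-endian,
// with no redundant leading bytes; a 0x00 pad is prepended when the high
// bit of a positive value is set. Zero encodes as the empty string.
Bytes encode_mpint(Bytes magnitude) {  // non-negative value, big-endian
    while (!magnitude.empty() && magnitude.front() == 0)
        magnitude.erase(magnitude.begin());        // strip leading zeros
    if (!magnitude.empty() && (magnitude.front() & 0x80))
        magnitude.insert(magnitude.begin(), 0x00); // keep the sign bit clear
    return encode_string(magnitude);
}

// The exchange-hash input is the concatenation of the encoded fields:
// string(V_C) || string(V_S) || string(I_C) || string(I_S) ||
// string(K_S) || string(Q_C) || string(Q_S) || mpint(K)
Bytes hash_input(const std::vector<Bytes>& strings, const Bytes& k) {
    Bytes out;
    for (const Bytes& f : strings) {
        Bytes enc = encode_string(f);
        out.insert(out.end(), enc.begin(), enc.end());
    }
    Bytes enc_k = encode_mpint(k);
    out.insert(out.end(), enc_k.begin(), enc_k.end());
    return out;
}
```

For example, `encode_mpint({0x80})` yields `00 00 00 02 00 80`, matching the mpint example in RFC 4251; the negotiated hash function is then applied to the concatenated buffer to produce H.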
If the client receives a SSH_MSG_KEXGSS_COMPLETE message and a call to GSS_Init_sec_context() does not result in a major_status code of GSS_S_COMPLETE, a protocol error has occurred and the key exchange MUST fail. 5.2. ECDH Key Exchange Methods <table> <thead> <tr> <th>Key Exchange Method Name</th> <th>Implementation Recommendations</th> </tr> </thead> <tbody> <tr> <td>gss-nistp256-sha256-*</td> <td>SHOULD/RECOMMENDED</td> </tr> <tr> <td>gss-nistp384-sha384-*</td> <td>MAY/OPTIONAL</td> </tr> <tr> <td>gss-nistp521-sha512-*</td> <td>MAY/OPTIONAL</td> </tr> <tr> <td>gss-curve25519-sha256-*</td> <td>SHOULD/RECOMMENDED</td> </tr> <tr> <td>gss-curve448-sha512-*</td> <td>MAY/OPTIONAL</td> </tr> </tbody> </table> Table 3: New key exchange methods Each key exchange method prefix is registered by this document. The IESG is the change controller of all these key exchange methods; this does NOT imply that the IESG is considered to be in control of the corresponding GSS-API mechanism. Each method in any family of methods (Table 4) specifies GSS-API-authenticated Elliptic Curve Diffie-Hellman key exchanges as described in Section 5.1. The method name for each method (Table 3) is the concatenation of the family method name with the Base64 encoding of the MD5 hash [RFC1321] of the ASN.1 DER encoding [ISO-IEC-8825-1] of the corresponding GSS-API mechanism’s OID. Base64 encoding is described in Section 4 of [RFC4648]. Table 4: Family method references 6. Deprecated Algorithms Because they have small key lengths and are no longer strong in the face of brute-force attacks, the algorithms in the following table are considered deprecated and SHOULD NOT be used. <table> <thead> <tr> <th>Key Exchange Method Name</th> <th>Implementation Recommendations</th> </tr> </thead> <tbody> <tr> <td>gss-group1-sha1-*</td> <td>SHOULD NOT</td> </tr> <tr> <td>gss-group14-sha1-*</td> <td>SHOULD NOT</td> </tr> <tr> <td>gss-gex-sha1-*</td> <td>SHOULD NOT</td> </tr> </tbody> </table> 7. 
IANA Considerations This document augments the SSH Key Exchange Method Names in [RFC4462]. IANA is requested to update the SSH Protocol Parameters [IANA-KEX-NAMES] registry with the following entries:

<table> <thead> <tr> <th>Key Exchange Method Name</th> <th>Reference</th> </tr> </thead> <tbody> <tr> <td>gss-group1-sha1-*</td> <td>This draft</td> </tr> <tr> <td>gss-group14-sha1-*</td> <td>This draft</td> </tr> <tr> <td>gss-gex-sha1-*</td> <td>This draft</td> </tr> <tr> <td>gss-group14-sha256-*</td> <td>This draft</td> </tr> <tr> <td>gss-group15-sha512-*</td> <td>This draft</td> </tr> <tr> <td>gss-group16-sha512-*</td> <td>This draft</td> </tr> <tr> <td>gss-group17-sha512-*</td> <td>This draft</td> </tr> <tr> <td>gss-group18-sha512-*</td> <td>This draft</td> </tr> <tr> <td>gss-nistp256-sha256-*</td> <td>This draft</td> </tr> <tr> <td>gss-nistp384-sha384-*</td> <td>This draft</td> </tr> <tr> <td>gss-nistp521-sha512-*</td> <td>This draft</td> </tr> <tr> <td>gss-curve25519-sha256-*</td> <td>This draft</td> </tr> <tr> <td>gss-curve448-sha512-*</td> <td>This draft</td> </tr> </tbody> </table>

8. Security Considerations

8.1. New Finite Field DH mechanisms

Except for the use of a different secure hash function and larger DH groups, no significant changes have been made to the protocol described by [RFC4462]; therefore, all the original Security Considerations apply.

8.2. New Elliptic Curve DH mechanisms

Although a new cryptographic primitive is used with these methods, the actual key exchange closely follows the key exchange defined in [RFC5656]; therefore, all the original Security Considerations as well as those expressed in [RFC5656] apply.

8.3. GSSAPI Delegation

Some GSSAPI mechanisms can act on a request to delegate credentials to the target host when the deleg_req_flag is set. In this case, extra care must be taken to ensure that the acceptor being authenticated matches the target the user intended.
Some mechanism implementations (such as commonly used krb5 libraries) may use insecure DNS resolution to canonicalize the target name; in these cases, spoofing a DNS response that points to an attacker-controlled machine may result in the user silently delegating credentials to the attacker, who can then impersonate the user at will.

9. References

9.1. Normative References

- [I-D.ietf-curdle-ssh-curves] "Secure Shell (SSH) Key Exchange Method Using Curve25519 and Curve448"
- [RFC1321] "The MD5 Message-Digest Algorithm"
- [RFC2119] "Key words for use in RFCs to Indicate Requirement Levels"
- [RFC2743] "Generic Security Service Application Program Interface Version 2, Update 1"
- [RFC3526] "More Modular Exponential (MODP) Diffie-Hellman groups for Internet Key Exchange (IKE)"
- [RFC4462] "Generic Security Service Application Program Interface (GSS-API) Authentication and Key Exchange for the Secure Shell (SSH) Protocol"
- [RFC4648] "The Base16, Base32, and Base64 Data Encodings"
- [RFC5656] "Elliptic Curve Algorithm Integration in the Secure Shell Transport Layer"
- [RFC7546] "Structure of the Generic Security Service (GSS) Negotiation Loop"

9.2. Informative References

- [IANA-KEX-NAMES] IANA, "Secure Shell (SSH) Protocol Parameters: Key Exchange Method Names"
- [ISO-IEC-8825-1] ISO/IEC 8825-1, "ASN.1 encoding rules: Specification of Basic Encoding Rules (BER), Canonical Encoding Rules (CER) and Distinguished Encoding Rules (DER)"
- [NIST-SP-800-131Ar1] NIST Special Publication 800-131A Revision 1, "Transitioning the Use of Cryptographic Algorithms and Key Lengths"

Authors' Addresses

Simo Sorce, Red Hat, Inc., 140 Broadway, 24th Floor, New York, NY 10025, USA. Email: simo@redhat.com

Hubert Kario, Red Hat, Inc., Purkynova 115, Brno 612 00, Czech Republic. Email: hkario@redhat.com
Chapter 13 Pointers and Linked Lists

Overview
13.1 Nodes and Linked Lists
13.2 Stacks and Queues

13.1 Nodes and Linked Lists

Nodes and Linked Lists
- A linked list is a list that can grow and shrink while the program is running
- A linked list is constructed using pointers
- A linked list often consists of structs or classes that contain a pointer variable connecting them to other dynamic variables
- A linked list can be visualized as items, drawn as boxes, connected to other items by arrows

Nodes and Pointers
```
head -> ["rolls"|10] -> ["jam"|3] -> ["tea"|2] -> end marker
```

Nodes
The boxes in the previous drawing represent the nodes of a linked list.
- **Nodes contain the data item(s) and a pointer that can point to another node of the same type**
- The pointers point to the entire node, not an individual item that might be in the node
- The arrows in the drawing represent pointers

Implementing Nodes
Nodes are implemented in C++ as structs or classes
- Example: A structure to store two data items and a pointer to another node of the same type, along with a type definition, might be:
```c++
struct ListNode
{
    string item;
    int count;
    ListNode *link;
};
```
```c++
typedef ListNode* ListNodePtr;
```
This circular definition is allowed in C++

The Head of a List
- The box labeled **head**, in Display 13.1, is not a node, but a pointer variable that points to a node.
- Pointer variable head is declared as:
```
ListNodePtr head;
```

Accessing Items in a Node
Using the diagram of Display 13.1, this is one way to change the number in the first node from 10 to 12:
```
(*head).count = 12;
```
- head is a pointer variable, so *head is the node that head points to
- The parentheses are necessary because the dot operator .
has higher precedence than the dereference operator *

The Arrow Operator
- The **arrow operator** -> combines the actions of the dereferencing operator * and the dot operator to specify a member of a struct or object pointed to by a pointer
- `(*head).count = 12;` can be written as `head->count = 12;`
- The arrow operator is more commonly used

Accessing Node Data
```
head->count = 12;
head->item = "bagels";

Before: head -> ["rolls"|10]  -> ["jam"|3] -> ["tea"|2] -> NULL
After:  head -> ["bagels"|12] -> ["jam"|3] -> ["tea"|2] -> NULL
```

The defined constant `NULL` is used as…
- **An end marker for a linked list** - A program can step through a list of nodes by following the pointers, but when it finds a node containing `NULL`, it knows it has come to the end of the list
- The value of a pointer that has nothing to point to

The value of `NULL` is 0
Any pointer can be assigned the value `NULL`:
```c
double* there = NULL;
```

To Use NULL
- A definition of NULL is found in several libraries, including `<iostream>` and `<cstddef>`
- A using directive is not needed for NULL

Linked Lists
- The diagram in Display 13.2 depicts a linked list
- A linked list is a list of nodes in which each node has a member variable that is a pointer that points to the next node in the list
- The **first node is called the head**
- The pointer variable head points to the first node
- The pointer named head is not the head of the list...it points to the head of the list
- The **last node contains a pointer set to NULL**

Building a Linked List: The node definition
Let's begin with a simple node definition:
```c
struct Node
{
    int data;
    Node *link;
};
typedef Node* NodePtr;
```

Building a Linked List: Declaring Pointer Variable head
With the node defined and a type definition to make our code easier to understand, we can declare the pointer variable head:
```c
NodePtr head;
```
- head is a pointer variable that will point to the head node when the node is created

To create the first node, the operator new is used to
create a new dynamic variable: ``` head = new Node; ``` Now head points to the first, and only, node in the list. Now that head points to a node, we need to give values to the member variables of the node: ```c head->data = 3; head->link = NULL; ``` - Since this node is the last node, the link is set to NULL. Function head_insert It would be better to create a function to insert nodes at the head of a list, such as: ```c void head_insert(NodePtr& head, int the_number); ``` - The first parameter is a NodePtr parameter that points to the first node in the linked list - The second parameter is the number to store in the list head_insert will create a new node for the number - The number will be copied to the new node - The new node will be inserted in the list as the new head node Pseudocode for head_insert - Create a new dynamic variable pointed to by temp_ptr - Place the data in the new node called *temp_ptr - Make temp_ptr's link variable point to the head node - Make the head pointer point to temp_ptr Adding a Node to a Linked List 1. Set up new node 2. temp_ptr->link = head; 3. head = temp_ptr; 4. After function call Translating head_insert to C++ The pseudocode for head_insert can be written in C++ using these lines in place of the lines of pseudocode: - NodePtr temp_ptr; //create the temporary pointer temp_ptr = new Node; // create the new node - temp_ptr->data = the_number; //copy the number - temp_ptr->link = head; //new node points to first node - head = temp_ptr; // head points to new // first node Function to Add a Node at the Head of a Linked List Function Declaration ```c struct Node { int data; Node *link; }; typedef Node* NodePtr; void head_insert(NodePtr& head, int the_number); //Precondition: The pointer variable head points to //the head of a linked list. //Postcondition: A new node containing the_number //has been added at the head of the linked list. 
```

Function Definition
```c
void head_insert(NodePtr& head, int the_number)
{
    NodePtr temp_ptr;
    temp_ptr = new Node;
    temp_ptr->data = the_number;
    temp_ptr->link = head;
    head = temp_ptr;
}
```

An Empty List
- A list with nothing in it is called an empty list
- An empty linked list has no head node
- The head pointer of an empty list is NULL
```c
head = NULL;
```
- Any functions written to manipulate a linked list should check to see if they work on the empty list

You might be tempted to write head_insert using the head pointer to construct the new node:
```cpp
head = new Node;
head->data = the_number;
```
Now to attach the new node to the list
- The node that head used to point to is now lost!

Lost Nodes
```
head -> [12|?]      [15] -> [3] -> NULL   (lost nodes)
```

Memory Leaks
- Nodes that are lost by assigning their pointers a new address are not accessible any longer.
- The program has no way to refer to the nodes and cannot delete them to return their memory to the freestore.
- Programs that lose nodes have a memory leak.
- Significant memory leaks can cause system crashes.

Searching a Linked List
To design a function that will locate a particular node in a linked list:
- We want the function to return a pointer to the node so we can use the data if we find it, else return NULL
- The linked list is one argument to the function
- The data we wish to find is the other argument
- This declaration will work:
```c
NodePtr search(NodePtr head, int target);
```

Function search
- Refining our function
- We will use a local pointer variable, named here, to move through the list checking for the target
- The only way to move around a linked list is to follow pointers
- We will start with here pointing to the first node and move the pointer from node to node following the pointer out of each node

Searching a Linked List
(Diagram: list head -> 2 -> 1 -> 6 -> 3 -> NULL with target 6; four snapshots show here advancing until it reaches the node containing 6.)
Pseudocode for search
- Make pointer variable here point to the head node
- while (here does not point to a node containing target AND here does not point to the last node)
```
make here point to the next node
```
- If (here points to a node containing the target)
```
return here;
```
else
```
return NULL;
```

Moving Through the List
The pseudocode for search requires that pointer here step through the list
- How does here follow the pointers from node to node?
- When here points to a node, here->link is the address of the next node
- To make here point to the next node, make the assignment: here = here->link;

A Refinement of search
The search function can be refined in this way:
```cpp
here = head;
while (here->data != target && here->link != NULL)
{
    here = here->link;
}
if (here->data == target)
    return here;
else
    return NULL;
```

Our search algorithm has a problem
- If the list is empty, here equals NULL before the while loop, so…
- here->data is undefined
- here->link is undefined
- The empty list requires a special case in our search function
- A refined search function that handles an empty list is shown in the following

Function to Locate a Node in a Linked List
Function Declaration
```c
struct Node
{
    int data;
    Node *link;
};
typedef Node* NodePtr;

NodePtr search(NodePtr head, int target);
//Precondition: The pointer head points to the head of
//a linked list. The pointer variable in the last node
//is NULL. If the list is empty, then head is NULL.
//Returns a pointer that points to the first node that
//contains the target. If no node contains the target,
//the function returns NULL.
```

Function Definition
```c
//Uses cstddef:
NodePtr search(NodePtr head, int target)
{
    NodePtr here = head;
    if (here == NULL)
    {
        return NULL; // Empty list case
    }
    else
    {
        while (here->data != target && here->link != NULL)
        {
            here = here->link;
        }
        if (here->data == target)
            return here;
        else
            return NULL;
    }
}
```

Pointers as Iterators
An iterator is a construct that allows you to cycle through the data items in a data structure to perform an action on each item
- An iterator can be an object of an iterator class, an array index, or simply a pointer

A general outline using a pointer as an iterator:
```c
Node_Type *iter;
for (iter = Head; iter != NULL; iter = iter->Link)
    //perform the action on the node iter points to
```
- Head is a pointer to the head node of the list

Using the previous outline of an iterator, we can display the contents of a linked list in this way:
```cpp
NodePtr iter;
for (iter = head; iter != NULL; iter = iter->link)
    cout << (iter->data);
```

To insert a node after a specified node in the linked list:
- Use another function to obtain a pointer to the node after which the new node will be inserted
- Call the pointer after_me
- Use function insert, declared here, to insert the node:
```c
void insert(NodePtr after_me, int the_number);
```

Inserting in the Middle of a Linked List

Inserting the New Node
- Function insert creates the new node just as head_insert did
- We do not want our new node at the head of the list however, so…
- We use the pointer after_me to insert the new node

Inserting the New Node
This code will accomplish the insertion of the new node, pointed to by temp_ptr, after the node pointed to by after_me:
```
temp_ptr->link = after_me->link;
after_me->link = temp_ptr;
```
The order of pointer assignments is critical
- If we changed after_me->link to point to temp_ptr first, we would lose the rest of the list!
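The safe assignment order can be checked with a small self-contained sketch (the test scaffolding here is ours, not from the chapter): the new node takes over the tail first, and only then is the list relinked.

```cpp
// Sketch (not from the text): insert after a given node, tail-grab first.
#include <cassert>
#include <cstddef>

struct Node { int data; Node *link; };
typedef Node* NodePtr;

void insert(NodePtr after_me, int the_number) {
    NodePtr temp_ptr = new Node;
    temp_ptr->data = the_number;
    temp_ptr->link = after_me->link; // 1) new node takes over the tail first
    after_me->link = temp_ptr;       // 2) only then is the list relinked
    // Reversing steps 1 and 2 would make temp_ptr->link point back at
    // temp_ptr itself, and the rest of the list would be lost.
}

// Builds 1 -> 3, inserts 2 after the head, and packs the resulting data
// values into an int (123) so the final order is easy to check.
int demo_insert_order() {
    Node third = {3, NULL};
    Node first = {1, &third};
    insert(&first, 2);
    int packed = 0;
    for (NodePtr p = &first; p != NULL; p = p->link)
        packed = packed * 10 + p->data;
    delete first.link; // free the heap node created by insert
    return packed;
}
```

With the safe order, `demo_insert_order()` traverses 1, 2, 3 and the original tail node is still reachable.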
The complete insert function is shown in following Function to Add a Node in the Middle of a Linked List Function Declaration ```c struct Node { int data; Node *link; }; typedef Node* NodePtr; void insert(NodePtr after_me, int the_number); // Precondition: after_me points to a node in a linked list. // Postcondition: A new node containing the_number has been added after the node pointed to by after_me. ``` Function Definition ```c void insert(NodePtr after_me, int the_number) { NodePtr temp_ptr; temp_ptr = new Node; temp_ptr->data = the_number; temp_ptr->link = after_me->link; after_me->link = temp_ptr; } ``` Function insert Again - Notice that inserting into a linked list requires that you only change two pointers - This is true regardless of the length of the list - Using an array for the list would involve copying as many as all of the array elements to new locations to make room for the new item - Inserting into a linked list is often more efficient than inserting into an array Removing a Node To remove a node from a linked list - Position a pointer, before, to point at the node prior to the node to remove - Position a pointer, discard, to point at the node to remove - Perform: before->link = discard->link; - The node is removed from the list, but is still in memory - Return *discard to the freestore: delete discard; Removing a Node 1. Position the pointer `discard` so that it points to the node to be deleted, and position the pointer before so that it points to the node before the one to be deleted. 2. `before->link = discard->link;` 3. `delete discard;` If head1 and head2 are pointer variables and head1 points to the head node of a list: ```c head2 = head1; ``` causes head2 and head1 to point to the same list. There is only one list! If you want head2 to point to a separate copy, you must copy the list node by node or overload the assignment operator appropriately. 
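The three removal steps (position discard, bypass it, delete it) can be exercised in the same style; this sketch and its helper function are ours, not the book's display:

```cpp
// Sketch (not from the text): remove the node after 'before' in three steps.
#include <cassert>
#include <cstddef>

struct Node { int data; Node *link; };
typedef Node* NodePtr;

void remove_after(NodePtr before) {
    NodePtr discard = before->link; // 1) position discard at the node to remove
    before->link = discard->link;   // 2) bypass it in the list
    delete discard;                 // 3) return its memory to the freestore
}

// Builds 1 -> 2 -> 3 on the heap, removes the node holding 2, and packs the
// remaining data values into an int (13) for checking; then frees the rest.
int demo_remove_middle() {
    NodePtr head = new Node{1, new Node{2, new Node{3, NULL}}};
    remove_after(head);
    int packed = 0;
    for (NodePtr p = head; p != NULL; p = p->link)
        packed = packed * 10 + p->data;
    while (head != NULL) { NodePtr next = head->link; delete head; head = next; }
    return packed;
}
```

Note that only one pointer assignment changes the list; deleting `discard` is what prevents the removed node from becoming a memory leak.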
DISPLAY 13.11 A Doubly Linked List
```
front <-> [1] <-> [2] <-> [3] <-> back
```

**DISPLAY 13.12 A Binary Tree**

13.2 Stacks and Queues

A stack is a data structure that retrieves data in the reverse order the data was stored.
- If 'A', 'B', and then 'C' are placed in a stack, they will be removed in the order 'C', 'B', and then 'A'.
A stack is a last-in/first-out data structure like the stack of plates in a cafeteria; adding a plate pushes down the stack and the top plate is the first one removed.

A Stack
(Diagram: pushing A, then B, then C onto a stack, and popping them off again in the order C, B, A.)

We will create a stack class to store characters
- Adding an item to a stack is pushing onto the stack
- Member function push will perform this task
- Removing an item from the stack is popping the item off the stack
- Member function pop will perform this task

// This is the header file stack.h. This is the interface for the class Stack,
// which is a class for a stack of symbols.
#ifndef STACK_H
#define STACK_H
namespace stacksavitch
{
    struct StackFrame
    {
        char data;
        StackFrame *link;
    };
    typedef StackFrame* StackFramePtr;

    class Stack
    {
    public:
        Stack();
        // Initializes the object to an empty stack.
        Stack(const Stack& a_stack);
        // Copy constructor.
        ~Stack();
        // Destroys the stack and returns all the memory to the freestore.
        void push(char the_symbol);
        // Postcondition: the_symbol has been added to the stack.
        char pop();
        // Precondition: The stack is not empty.
        // Returns the top symbol on the stack and removes that
        // top symbol from the stack.
        bool empty() const;
        // Returns true if the stack is empty. Returns false otherwise.
    private:
        StackFramePtr top;
    };
} // stacksavitch
#endif // STACK_H

Function push
The push function adds an item to the stack
- It uses a parameter of the type stored in the stack
```
void push(char the_symbol);
```
- Pushing an item onto the stack is precisely the same task accomplished by function head_insert of the linked list
- For a stack, a pointer named `top` is used instead of a pointer named head

Function pop
The pop function returns the item that was at the top of the stack
char pop();
- Before popping an item from a stack, pop checks that the stack is not empty
- pop stores the top item in a local variable result, and the item is "popped" by: `top = top->link;`
- A temporary pointer must point to the old top item so it can be "deleted" to prevent a memory leak
- pop then returns variable result

Empty Stack
- An empty stack is identified by setting the top pointer to NULL
top = NULL;

The Copy Constructor
Because the Stack class uses a pointer and creates new nodes using new, a copy constructor is needed
- The copy constructor (a self-test exercise) must make a copy of each item in the stack and store the copies in a new stack
- Items in the new stack must be in the same position in the stack as in the original

The Stack Destructor
Because function pop calls delete each time an item is popped off the stack, ~Stack only needs to call pop until the stack is empty
```c
char next;
while( ! empty( ) )
{
    next = pop( );
}
```

Implementation of the Stack Class (part 1 of 2)
```
//This is the implementation file stack.cpp.
//This is the implementation of the class Stack.
//The interface for the class Stack is in the header file stack.h.
#include <iostream>
#include <cstddef>
#include <cstdlib> //for exit (used by pop)
#include "stack.h"
using namespace std;

namespace stacksavitch
{
    //Uses cstddef:
    Stack::Stack() : top(NULL)
    {
        //Body intentionally empty.
    }

    Stack::Stack(const Stack& a_stack)
        <The definition of the copy constructor is Self-Test Exercise 11.>
```

Implementation of the Stack Class (part 2 of 2)
```cpp
    Stack::~Stack()
    {
        char next;
        while (! empty())
            next = pop(); // pop calls delete.
    }

    // Uses cstddef:
    bool Stack::empty() const
    {
        return (top == NULL);
    }

    void Stack::push(char the_symbol)
        <The rest of the definition is Self-Test Exercise 10.>

    // Uses iostream:
    char Stack::pop()
    {
        if (empty())
        {
            cout << "Error: popping an empty stack.\n";
            exit(1);
        }
        char result = top->data;
        StackFramePtr temp_ptr;
        temp_ptr = top;
        top = top->link;
        delete temp_ptr;
        return result;
    }
} // stacksavitch
```

Program to demonstrate use of the Stack class.
```cpp
#include <iostream>
#include "stack.h"
using namespace std;
using namespace stacksavitch;

int main()
{
    Stack s;
    char next, ans;
    do
    {
        cout << "Enter a word: ";
        cin.get(next);
        while (next != '\n')
        {
            s.push(next);
            cin.get(next);
        }
        cout << "Written backward that is: ";
        while ( ! s.empty() )
            cout << s.pop();
        cout << endl;

        cout << "Again?(y/n): ";
        cin >> ans;
        cin.ignore(10000, '\n');
    } while ( ans != 'n' && ans != 'N');
    return 0;
}
```
The ignore member of cin is discussed in Chapter 8. It discards input remaining on the current input line up to 10,000 characters or until a return is entered. It also discards the return ('\n') at the end of the line.

Program Using the Stack Class (part 2 of 2)
Sample Dialogue
Enter a word: straw
Written backward that is: warts
Again?(y/n): y
Enter a word: C++
Written backward that is: ++C
Again?(y/n): n

DISPLAY 13.21 Interface File for a Queue Class (part 1 of 2)
//This is the header file queue.h. This is the interface for the class Queue,
//which is a class for a queue of symbols.
#ifndef QUEUE_H
#define QUEUE_H
namespace queuesavitch
{
    struct QueueNode
    {
        char data;
        QueueNode *link;
    };
    typedef QueueNode* QueueNodePtr;

    class Queue
    {
    public:
        Queue();
        //Initializes the object to an empty queue.
        Queue(const Queue& aQueue);
        ~Queue();

DISPLAY 13.21 Interface File for a Queue Class (part 2 of 2)
        void add(char item);
        // Postcondition: item has been added to the back of the queue.
        char remove();
        // Precondition: The queue is not empty.
        // Returns the item at the front of the queue and
        // removes that item from the queue.
        bool empty() const;
        // Returns true if the queue is empty. Returns false otherwise.
    private:
        QueueNodePtr front; // Points to the head of a linked list.
                            // Items are removed at the head.
        QueueNodePtr back;  // Points to the node at the other end of the
                            // linked list. Items are added at this end.
    };
} // queuesavitch
#endif // QUEUE_H

DISPLAY 13.22 Program Using the Queue Class (part 1 of 2)
//Program to demonstrate use of the Queue class.
#include <iostream>
#include "queue.h"
using namespace std;
using namespace queuesavitch;

int main()
{
    Queue q;
    char next, ans;

    do
    {
        cout << "Enter a word: ";
        cin.get(next);
        while (next != '\n')
        {
            q.add(next);
            cin.get(next);
        }

DISPLAY 13.22 Program Using the Queue Class (part 2 of 2)
```cpp
        cout << "You entered: ";
        while (! q.empty() )
            cout << q.remove();
        cout << endl;

        cout << "Again?(y/n): ";
        cin >> ans;
        cin.ignore(10000, '\n');
    } while (ans != 'n' && ans != 'N');

    return 0;
}
```
The `ignore` member of `cin` is discussed in Chapter 8. It discards input remaining on the current input line up to 10,000 characters or until a return is entered. It also discards the return (`'\n'`) at the end of the line.

---

**Sample Dialogue**

Enter a word: **straw**
You entered: straw
Again?(y/n): y
Enter a word: **C++**
You entered: C++
Again?(y/n): n
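Display 13.21 gives only the Queue interface; the definitions of add and remove are left to the reader, just as push and the copy constructor are for Stack. The sketch below is one possible implementation in the same style (it is our sketch, not the book's solution, and omits the copy constructor): front is the removal end and back the insertion end of the linked list.

```cpp
// Sketch of possible Queue::add / Queue::remove definitions (not from the
// text). Follows the Stack implementation's conventions.
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <iostream>
#include <string>

struct QueueNode { char data; QueueNode *link; };
typedef QueueNode* QueueNodePtr;

class Queue {
public:
    Queue() : front(NULL), back(NULL) {}
    ~Queue() { while (!empty()) remove(); }
    void add(char item);
    char remove();
    bool empty() const { return front == NULL; }
private:
    QueueNodePtr front; // removal end
    QueueNodePtr back;  // insertion end
};

void Queue::add(char item) {
    QueueNodePtr temp_ptr = new QueueNode;
    temp_ptr->data = item;
    temp_ptr->link = NULL;
    if (back == NULL)
        front = back = temp_ptr; // first node: both ends point to it
    else {
        back->link = temp_ptr;   // attach at the back...
        back = temp_ptr;         // ...and advance the back pointer
    }
}

char Queue::remove() {
    if (empty()) {
        std::cout << "Error: removing from an empty queue.\n";
        std::exit(1);
    }
    char result = front->data;
    QueueNodePtr discard = front;
    front = front->link;            // advance the front pointer
    if (front == NULL) back = NULL; // queue became empty
    delete discard;                 // avoid a memory leak
    return result;
}

// Helper for checking FIFO behavior: feed a word through the queue.
std::string roundtrip(const std::string& s) {
    Queue q;
    for (char c : s) q.add(c);
    std::string out;
    while (!q.empty()) out += q.remove();
    return out;
}
```

Unlike the stack demo, which prints a word backward, `roundtrip("straw")` returns "straw" unchanged: a queue is first-in/first-out, so characters come out in the order they went in.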
Research of the Real-time Database in Embedded Configuration Software

Wang Xiujuan\textsuperscript{1,a}, Li Xiaobing\textsuperscript{2,b}, Cheng Meng\textsuperscript{3,c}, Chen Yutang\textsuperscript{4,d}, Zhang Zhongxin\textsuperscript{5,e}, Cai Xiuyun\textsuperscript{6,f}

\textsuperscript{1,2,3,5}Mechanical and Electrical Engineering College, University of Electronic Science and Technology of China, Chengdu, China
\textsuperscript{4,6}Dongguan Yuefeng Electronic Technology Limited Company, Dongguan, China

*Corresponding author, e-mail: wxj2799@126.com\textsuperscript{a}, 1014270681@qq.com\textsuperscript{b}, chengmeng125@163.com\textsuperscript{c}, leo@cables.com.tw\textsuperscript{d}, 876156954@qq.com\textsuperscript{e}, certificate@yfc-china.com\textsuperscript{f}

Abstract

In recent years, embedded technology and configuration technology have been applied more and more widely in industrial control. Embedded configuration software, which combines the two, has become an inevitable trend in the industrial control field. The real-time database system is the core of embedded configuration software; whether its organizational structure is reasonable and effective directly determines the performance of the whole system, affecting real-time communication with field devices and data transmission to the graphic display interface. Drawing on a large body of configuration-related literature, this paper studies the real-time database in depth and adopts a three-layer storage structure consisting of shared memory, a file system and a general-purpose database. This structure improves the access efficiency of the real-time database and keeps its data reliable in a timely manner.

Keywords: embedded system, configuration software, real-time database, data structure, storage structure, modular design, interface mechanism

Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.

1. Introduction

Configuration software has developed quickly because of the wide application of the PC, and it specializes in the industrial control field [1]. Embedded systems have likewise played an increasingly important role in industrial control, and many manufacturers and embedded operating systems are now available, such as Linux, WinCE and VxWorks [2]. At present, the most popular international commercial embedded configuration software packages are Movicon X from PROGEA (Italy), WinCC from Siemens (Germany), and InduSoft-CE1500 and InduSoft-300, both from InduSoft Web Studio. There are also good domestic products such as KingView and Beijing Kunlun MCGS. Although existing embedded configuration software offers a good man-machine interface, rich drawing functions, lifelike graphic display and flexible configuration, it still has certain deficiencies. Drawing on a large body of configuration-related literature, this paper studies in depth the real-time database, which is the core of embedded configuration operation.

2. Research Method

2.1. Embedded System

An embedded system is a dedicated computer system that centers on its application, is based on computer technology, allows its software and hardware to be tailored, and adapts to an application system's strict requirements on function, reliability, cost, volume and power consumption [3, 4]. It is electronic equipment composed of a microprocessor, peripheral equipment and related supporting hardware, an embedded operating system and application software, realizing functions such as control, monitoring and management of other equipment [5]; its system structure is shown in Figure 1.
The main features of an embedded system [6] are:

(1) Real-time: it responds rapidly, within the system's response-time limit, to foreseeable events or user intervention.

(2) Reliability: it usually works unattended in specific settings, such as harsh environments or long periods of continuous operation, so its reliability requirements are high.

(3) Specificity: it is generally geared to one particular application.

(4) Diversity: specificity means the hardware and software for each application field must be selected and developed according to the actual situation, which gives embedded systems their diversity.

(5) Tailorability: to satisfy specificity and control system cost, the system is cut down during development according to the practical application, to achieve the most reasonable configuration.

(6) Low power consumption: embedded products mainly serve small application systems without a large power supply, so they have strict power-consumption requirements.

Table 1. Composition of Point Parameters

<table>
<thead>
<tr>
<th>Type</th>
<th>Main parameters</th>
</tr>
</thead>
<tbody>
<tr>
<td>Analog quantity</td>
<td>Point name, number, type, unit, connected devices, upper and lower bounds of initial value, offset, data, storage, marking, alarm tags, alarm level, etc.</td>
</tr>
<tr>
<td>Digital quantity</td>
<td>Point name, number, type, unit, connected devices, upper and lower bounds of initial value, offset, data, storage, marking, alarm tags, alarm level, etc.</td>
</tr>
<tr>
<td>Memory variables</td>
<td>Point name, number, type.</td>
</tr>
</tbody>
</table>

2.2. Configuration Software

Configuration software is a specialized software development environment that works at the monitoring layer of an automatic control system to complete data acquisition and process control [7]. It provides a friendly graphical development interface and easy operating methods; with its various components, monitoring applications satisfying a wide range of needs can be developed easily. It also provides the control and management layer with a variety of hardware and software interfaces, making it easy to integrate with other systems or programs [8]. The main purpose of configuration software is to let an automation engineer conveniently generate an application system that satisfies his needs without modifying the software source code. The development of embedded systems in the industrial control field necessarily promotes the production and development of configuration software. Embedded configuration software runs on a hardware system with an embedded processor at its core, and its supporting environment is mostly an embedded real-time multitasking operating system.

2.3. Real-time Database

The real-time database (RTDB) is the core of the configuration system. It is responsible for acquiring and processing the production-process data of the field equipment and for organizing and managing those data, providing an important safeguard for the normal operation of the whole system. It not only gives the user interface the running-status data of the whole system, making the corresponding control operations convenient for the user, but also provides other functions such as preservation and statistical analysis of non-real-time data, alarm processing and I/O data connection. Both the transactions and the data of a real-time database have timing features or explicit time limits [9].
The correctness of a real-time database system depends both on the logical results of the data and on the time when they were produced; that is, the system can accept inaccurate data within the time limit, but cannot accept accurate data that arrive beyond it [10]. The system should meet the requirements of data timeliness and consistency, support sharing of a large amount of data, maintain data consistency and integrity, and support time limits on data and transactions [11]. The main purpose of real-time database transaction scheduling is to process as many transactions as possible within the stipulated time limit. The real-time database abstracts each data object into a point (tag) containing several parameters, and every I/O device at the industrial site is associated with a corresponding tag in the database. The composition of the point parameters is shown in Table 1, and the structural relationship between points and point parameters in the real-time database is shown in Figure 2.

3. Results and Analysis

3.1. Design of the Real-time Database

The data-processing flow of the real-time database is shown in Figure 3. The display interface regularly reads the data it needs from the real-time database and adjusts the pixels of the interface accordingly, showing the running state of the whole system with intuitive images; in addition, it passes control commands to the field devices according to the user's interface operations. The real-time database acts as the data server, providing data sources to the graphical interface; the graphical interface acts as the data client, obtaining data from the server side and displaying it in real time. Together they constitute a client/server (C/S) model. To satisfy the independence, timeliness and consistency of the data in the system, this work adopts a three-layer storage structure composed of a memory database, a file system and a general relational database, as shown in Figure 4.

1) Dynamic data that must be updated every sampling period are stored in memory, to guarantee the real-time response speed of the real-time database.

2) Static data without high real-time response requirements, and unshared data that need not be kept long term, are stored in the file system.

3) Outdated production data that must be kept for a long time, and shared data without special requirements, are stored in a general relational database (MySQL) for later query and statistical analysis.

The inter-process communication mechanisms of the Linux platform mainly include pipes and named pipes, signals, message queues, shared memory, semaphores and sockets [12]. Shared memory provides strong support for real-time dynamic data interaction between the real-time database and the graphic display interface, while the named pipe is the better choice for static data interaction between them. Data exchange between the real-time database and the general database is conveniently realized through the interface functions provided by ODBC and the database-access API functions provided by Linux. The storage structure designed in this paper, which consists of shared memory, named pipes and the ODBC interface communication mechanism, is shown in Figure 5.

![Figure 5. Three-Layer Storage Structure of the Real-time Database](image)

The real-time database is designed in a modular way to meet the requirements of an embedded configuration system. The whole real-time database is divided into relatively independent modules, which makes it convenient to develop and test the system quickly. Its structure is shown in Figure 6.

1. Initialization module: constructs and initializes the data in the memory database and establishes the historical database.

2. Data query module: according to the user's choice or the system's demands, retrieves the data meeting the conditions from the real-time database and returns the query results.

3. Data update module: updates the data that need updating in the system and refreshes the historical database according to actual conditions.

4. Data storage module: saves data that satisfy a trigger condition or time condition to the historical database.

5. Window display module: according to the current display-window ID, queries the data corresponding to the window's pixels in the real-time database and adjusts the window's pixels according to the returned values.

6. Data communication module: communicates with the field I/O devices according to the protocol, reads the current production-process data from a device, and issues control instructions according to the device ID, thereby controlling the field devices.

7. Alarm module: tests whether the data exceed the alarm limits, and issues and saves alarm information if they do.

8. Accident processing module: saves the state of the system, the field data and the operators' operation records when an accident happens to the system.

3.2. Implementation of the Real-time Database System

Industrial field data include real-time data acquired on site, system data, calculated data, attribute data, and control and management data. All of these can be represented by three data types: analog quantities, switch quantities and strings. Real-time data are implemented with a structure type, and the different process types are distinguished by the data-type field in the structure.
The real-time data structure types are implemented as follows:

```c
#include <stdbool.h>

/* enumeration type for a tag's real-time data process type */
typedef enum {
    double_t = 1,
    bool_t
} pv_type_set;

/* union type holding a real-time process value */
typedef union {
    double dPV;
    bool swhPV;
} pv_data_set;

/* the data type of a real-time data point */
#define NAME_LEN 20
#define DESC_LEN 50

typedef struct {
    char name[NAME_LEN + 1];   // Name of the data point
    pv_type_set type;          // Data point type
    char desc[DESC_LEN + 1];   // Data point description
    pv_data_set pv;            // Data point process value
    char domain[3];            // Domain of the data point
    char eu[DESC_LEN + 1];     // Engineering unit description
    double euLow;              // Engineering unit lower limit
    double euHigh;             // Engineering unit upper limit
    double pvRaw;              // Field measurement data
    bool isRanCon;             // Whether to apply the scale transform
    double pvRawLow;           // Data range lower limit
    double pvRawHigh;          // Data range upper limit
    bool isStatic;             // Static data; history stored in the file system
    int storecyc;              // Backup cycle
    bool isAlarm;              // Whether to alarm
    int alarmPriority;         // Alarm priority
    double lowLowValue;        // Alarm lower limit
    double lowValue;           // Alarm low limit
    double highHighValue;      // Alarm higher limit
    double highValue;          // Alarm high limit
    double lowDevValue;        // Alarm low-deviation value
    double highDevValue;       // Alarm high-deviation value
} tag_node;
```

Using the support for real-time multitasking provided by the Linux system, the data acquisition and processing tasks of the real-time database are performed as concurrent processes. Using the shared-memory facility of IPC, the required memory space can be allocated discretely on demand; the addresses of all the discrete shared-memory segments then form an index table, through which all the shared memory is managed [13].
In practice, the data points of different processes or different workshops are often combined into separate data domains, and the index table is built as a two-level address structure consisting of a domain table and a point table, as shown in Figure 7. Each domain-table entry points to a point table whose entries record, for every data point, its number, its address, its shared-memory label and its name.

Figure 7. Two-Level Index Structure of the Domain Table and Data Point Table

The relevant data structures of the domain table and point table are as follows:

```c
/* data item of the data point table */
typedef struct {
    char tagIndex[3];          // data point number
    tag_node *tag_ptr;         // pointer to the data point
    int shmid;                 // label of the shared memory storing the data
    char name[NAME_LEN + 1];   // data point name
} tbTag_item;

/* data item of the domain table */
typedef struct {
    char domIndex[3];          // domain number
    tbTag_item *tbTag_ptr;     // address of this domain's data point table
} tbDom_item;
```

Through the database interface the database can be accessed and manipulated directly. This makes it convenient for users to develop I/O drivers and to exchange data with other devices, giving the real-time database good versatility and openness.
Some common database interface functions are shown in Table 2:

<table>
<thead>
<tr>
<th>Function name</th>
<th>Return value type</th>
<th>Function description</th>
</tr>
</thead>
<tbody>
<tr>
<td>GetTagNum();</td>
<td>int</td>
<td>Query the number of data points</td>
</tr>
<tr>
<td>GetNameByID(char *tagID);</td>
<td>char*</td>
<td>Obtain the data point name by data ID</td>
</tr>
<tr>
<td>GetIDByName(char *tagName);</td>
<td>char*</td>
<td>Obtain the data ID by data point name</td>
</tr>
<tr>
<td>GetPVType(char *tagName);</td>
<td>pv_type_set</td>
<td>Obtain the data point's process value type by data point name</td>
</tr>
<tr>
<td>GetPVByName(char *tagName, pv_data_set *pv);</td>
<td>int</td>
<td>Obtain the data point's process value by data point name</td>
</tr>
<tr>
<td>SetPVByName(char *tagName, pv_data_set *pv);</td>
<td>int</td>
<td>Write the data point's process value by data point name</td>
</tr>
</tbody>
</table>

The real-time database development environment provides a configuration interface and generates a database configuration file. In the configuration interface the user can conveniently define all kinds of memory variables and I/O variables, set each variable's processing mode and so on, and generate a data dictionary; the configuration file provides the basis for generating the data in the running environment.

(1) Definition of the data dictionary

The data dictionary defines the variables needed for the industrial control object and the device parameters that must be configured; some of these variables become the objects managed by the real-time database kernel. Defining the data dictionary in the development environment covers the following:

(a) Specifying the data variable type: a configuration system usually has several data types, such as analog quantities, switch quantities and character variables; specifying the type in the configuration interface lets the running environment allocate reasonable storage space in memory for the corresponding data.

(b) Specifying the field device an I/O variable is associated with: once the associated equipment is specified, the corresponding data can be collected from the field device by the I/O driver at run time.

(c) Setting the data-processing method: the raw data acquired from field devices cannot be used directly by the interface, so the corresponding conversion or processing operation is set for the data in the configuration interface.

(d) Specifying the data sampling time: different data variables in the system have different sampling requirements, so different sampling times must be set.

(e) Setting the data preservation attributes: some data often need to be saved so the system can statistically analyze production status and alarm failures, so different storage modes are set according to actual needs, such as timed storage and event-triggered storage.

(f) Setting the data alarm attributes: setting the alarm limits and priority of variable data.

(2) The historical database

For data that the system must preserve long term, such as alarm information, trend information, and the data needed to realize the compensation mechanism, the corresponding storage tables are set in the configuration interface and saved regularly or according to agreed conditions, such as a system information table and a fault information table.

(3) Configuration file storage

The XML-format database configuration file generated from the data dictionary, together with the historical-database information configured in the development environment, is saved for use by the running environment.
The running environment is where the real-time database is ultimately used; the quality of this part of the design directly affects the efficiency of the program. (1) Configuration file parsing. Configuration file parsing reads the configuration file generated by the development environment and creates, in memory, the tables needed by the corresponding memory database and the general database. (2) Real-time database runtime environment. When the configuration software runs, the system generates the memory data file and the historical database according to the configuration file. The memory database stores the raw data collected from the field I/O devices, processes it, and saves it later; the historical database stores the data that needs long-term preservation. 4. Conclusion The real-time database is divided into a development environment and a running environment. The development environment provides a database design interface in which regular options such as the data variable name and type can be set, together with the device node with which the data is associated and the data processing methods, such as the data sampling period and refresh time. At the same time, tables are designed in the historical database for the data that needs long-term preservation and for the recovery mechanism, such as the alarm table and the system information table. After configuration is completed, the real-time database configuration file is generated for the running environment to parse. In the running environment, the application program first generates the real-time database according to the configuration file. If the historical database does not exist, a new one is generated; the memory database is generated at the same time, and its data variables are refreshed according to the sampling times and trigger events configured for them; data that must be kept is saved on events or at regular intervals.
This component also provides functions such as historical data query and alarm processing. References
Is Oberon as Simple as Possible? A Smaller Object-Oriented Language Based on the Concept of Module Type Atanas Radenski Department of Computer Science Winston-Salem State University, P.O. Box 13027 Winston-Salem, North Carolina 27110, U.S.A. E-mail: radenski@ecsvax.unccs.edu Abstract. The design of the programming language Oberon was guided by Albert Einstein's maxim: 'make it as simple as possible, but not simpler'. The objective of this paper is to analyze some design solutions and propose alternatives which could both simplify and strengthen the language without making it simpler than possible. The paper introduces one general concept, the module type, which can be used to represent records, modules, and even procedures. Type extension is redefined in terms of component nesting and incomplete designators. As a result, type extension supports multiple inheritance. 1 Introduction The design of the programming language Oberon was guided by Albert Einstein's maxim: 'make it as simple as possible, but not simpler'. The objective of this paper is to analyze some design solutions and propose alternatives which could both simplify and strengthen the language without making it simpler than possible. The object orientation of Oberon is based on the concept of type extension. Section 2 of this paper outlines a problematic point in this concept as defined in Oberon: type extension applies to record and pointer types, but does not apply to procedure types. For this reason, procedures cannot be directly and conveniently redefined for extended types. As a consequence, method overriding may seem somewhat unnatural and tedious. This problematic point is eliminated with the concept of module type defined in Section 3. It is a generalization of record and procedure types and a single substitute for these types. As shown in Section 3, instances of module types can be used as record variables, or as procedures, or as Oberon modules.
Overriding a method can be easily implemented by changing the module assigned to a field in an extension. Type extension itself is redefined in terms of component nesting and incomplete designators; as a result, it supports multiple inheritance. Module types and type extension are integrated in an experimental object-oriented language that evolved from Oberon. The experimental language does not include record types, procedure types, procedures, or modules, since they are all implemented by means of module types or module variables. The paper presents those features of the experimental language that are relevant to module types and type extension. The object orientation of this language is outlined at the end of Section 3.

2 The Need for Improvement

2.1 Type Extension as a Base of the Object Orientation of Oberon

Classes are implemented in Oberon as pointer types bound to record types with procedure variables. Objects are dynamic variables of such record types. For instance:

```
TYPE
  Class = POINTER TO ClassDesc;
  ClassDesc = RECORD
    x : INTEGER;
    method : PROCEDURE (self : Class; v : INTEGER);
  END;
VAR
  ptr : Class;
```

Note that `ptr.x` and `ptr.method` designate the fields `x` and `method` of the dynamic record variable `ptr^`. Methods are implemented in Oberon as procedures. For example, a method may look like this:

```
PROCEDURE Method (self : Class; v : INTEGER);
BEGIN
  self.x := v
END Method;
```

To create a new object, one has to assign specific procedures to all procedure variables:

```
NEW (ptr); ptr.method := Method;
```

Messages are calls of procedure variables, as, for instance:

```
ptr.method(ptr, 1);
```

Inheritance in Oberon is based on the concept of type extension [2, 3]. It permits the construction of new record types by adding fields to existing ones.
For instance, type SubclassDesc extends type ClassDesc with the data field y:

```
TYPE
  Subclass = POINTER TO SubclassDesc;
  SubclassDesc = RECORD (ClassDesc)
    y : INTEGER
  END;
VAR
  subPtr : Subclass;
```

Type SubclassDesc is said to be a direct extension of type ClassDesc. Type ClassDesc is the direct base type of type SubclassDesc. The fields of a record variable of an extended type can be referenced by usual field designators. For instance, subPtr.x, subPtr.y, subPtr.method are designators referencing the fields of the record variable subPtr. A new object that belongs to Subclass can be created as follows:

```
NEW (subPtr); subPtr.method := Method;
```

An extended type is assignment compatible with its base type. For instance, the assignment ptr := subPtr is legal and acts as a projection of record subPtr onto record ptr. The field y does not participate in the assignment. On the contrary, the assignment subPtr := ptr is illegal. Type extension applies also to pointer types. By definition, the pointer type Class is extended by Subclass (see their declarations above), since the pointer base type ClassDesc of Class is extended by the pointer base type SubclassDesc of Subclass. Since subPtr is an extension of ptr, the assignment ptr := subPtr is legal. After the assignment, ptr points to a dynamic variable of type SubclassDesc. After the assignment, ptr is said to be of dynamic type Subclass, while its declared (static) type continues to be Class. Thus, only ptr.x and ptr.method are accepted by the compiler as legal field designators.
The field y can be referenced through ptr by means of a type guard, as illustrated by the following example:

```
ptr(Subclass).y := 0;
```

An attempt to execute the above statement when ptr does not actually point to a dynamic record of type SubclassDesc results in an abnormal halt. An abnormal halt can be prevented by a type test:

```
IF ptr IS Subclass THEN ptr(Subclass).y := 0 END;
```

2.2 What is Problematic with Type Extension

Overriding a method in Oberon can be implemented by changing the procedure assigned to a field in an extension [1]. Unfortunately, procedures cannot be directly and conveniently redefined for extended types. For this reason, method overriding may seem somewhat unnatural and tedious. Consider, for example, the following procedure:

```
PROCEDURE OverridingMethod (self : Subclass; v : INTEGER);
BEGIN
  self.x := v; self.y := v
END OverridingMethod;
```

To override Method with OverridingMethod, one may wish to use the assignment subPtr.method := OverridingMethod. However, the definition of Oberon implies that OverridingMethod is not assignment compatible with method, and this assignment is not allowed. More precisely, field method of SubclassDesc is inherited from ClassDesc and has the following procedure type:

```
PROCEDURE (self : Class; v : INTEGER)
```

Besides, the heading of the newly created OverridingMethod is

```
PROCEDURE OverridingMethod (self : Subclass; v : INTEGER);
```

The type of the formal parameter self of OverridingMethod, namely Subclass, is an extension of the type indicated in the declaration of method, namely Class. According to the definition of type extension, the type of OverridingMethod is not an extension of the type of method. Therefore, OverridingMethod is not assignment compatible with subPtr.method.
The following implementation of OverridingMethod can be assigned to subPtr.method, since now OverridingMethod and subPtr.method have a formal parameter self of the same type:

```
PROCEDURE OverridingMethod (self : Class; v : INTEGER);
BEGIN
  self.x := v;
  IF self IS Subclass THEN self(Subclass).y := v END
END OverridingMethod;
```

Despite the fact that the formal parameter self of OverridingMethod is of type Class, it can and has to be called with actual parameters of type Subclass. By means of a type test and a type guard, the overriding method treats the parameter as a variable of type Subclass. On the other hand, the type of the formal parameter of OverridingMethod is the same as that indicated for subPtr.method, and OverridingMethod can be assigned to subPtr.method. Such an implementation of OverridingMethod seems somewhat unnatural and tedious.

3 Our Approach

A major problem with the object orientation of Oberon is that type extension applies to record and pointer types, but does not apply to procedure types. For this reason, methods cannot be directly and conveniently overridden for subclasses (see Section 2.2). The problem can be eliminated with the concept of module type defined in this section. Module types can be viewed as generalized record types. As shown in what follows, instances of module types can be used as record variables, as procedures, or as Oberon modules. Overriding a method can be easily implemented by changing the module assigned to a field in an extension. Module types and type extension are integrated in an experimental object-oriented language named K2 that evolved from Oberon. K2 does not include record types, procedure types, procedures, or modules, since they are all implemented by means of module types or module variables. This section presents all features of the experimental language that are relevant to module types and type extension. The object orientation of this language is outlined at the end of the section.
3.1 Module Types

A module type consists of a definition and, optionally, a body. A module definition is a collection of declarations of constants, types, and variables. A module body is a collection of declarations, other bodies, and a sequence of statements. The statements are executed when the body is activated through a module call (Section 3.6). The definition of a global identifier and/or its body may include an import list (Section 3.7). A module type allows a body only if its definition contains a forward body declaration. Then a body can be declared within the same scope, or it can be left undefined (Section 3.4).

```
ModuleDefinition = "(" [ImportList] DeclarationSequence [ForwardBodyDeclaration] ")"
DeclarationSequence = {declaration ";"}
declaration = ConstantDeclaration | TypeDeclaration | VariableDeclaration
ForwardBodyDeclaration = BODY
BodyDeclaration = BODY ident ";" [ImportList] DeclarationSequence BodySequence
                  [BEGIN StatementSequence] END ident
BodySequence = {BodyDeclaration ";"}
```

Examples:

```
TYPE Date = (day, month, year: INTEGER);
TYPE PersonalRecord = (
  CONST length = 32;
  TYPE Name = ARRAY length OF CHAR;
  name, firstName: Name;
  age: INTEGER );
```

Constants, types and variables declared in a module definition are called public components, while those declared in the corresponding body are referred to as local components. Public components that are variables are also referred to as parameters (see also Section 3.6). Public types and constants are not parameters. Example:

```
TYPE Sample = (
  publicVar: INTEGER;
  BODY );
BODY Sample;
  localVar: INTEGER;
BEGIN (* ... *)
END Sample;
```

The scope of an identifier which denotes a public component includes the module definition itself and the whole body, if any. Such an identifier is also visible within component designators. An identifier which declares a local component is not visible outside of the body that contains its declaration.
Local variables keep their values between two successive calls of the body. In addition to the public components and the locally declared components, the entities declared in the environment of the body and its definition are also visible in the body. A local component hides non-local entities that have the same name. Hidden entities can still be referred to by component designators. A variable declared in a module type definition can be followed by the read-only mark "-". Such a variable can be assigned values only from within the module body. The identifier list of a variable declaration may contain the word RESULT. In this case, the type of the declared variable(s) can be neither a module type nor an array type. Refer to Section 3.6 for the use of variables named RESULT. Example:

```
TYPE Log2 = ( x: INTEGER; RESULT- : INTEGER; BODY );
```

3.2 Type Extension

A module type $T_{ext}$ directly extends a module type $T_{base}$ if $T_{ext}$ has exactly one component of type $T_{base}$. $T_{ext}$ extends a type $T_{base}$ if it equals $T_{base}$ or if it directly extends an extension of $T_{base}$. Examples:

```
TYPE Module1 = (x : INTEGER);
TYPE Module2 = (ancestor : Module1; y : INTEGER);
TYPE Module3 = (ancestor : Module2; z : INTEGER);
```

In the examples above, Module3 directly extends Module2 with component z. Module3 is an indirect extension of Module1. Module1 is a direct base type of Module2, which is a direct base type of Module3. Nested components of Module3 can be referenced by incomplete designators that do not contain the identifier ancestor, as explained below. Components of module variables can be denoted by incomplete designators according to the following rules. It is said that c is a nested component of a module variable m if c is a component of m, or c is a nested component of some component of m.
Then, if the module variable m does not have a component c, m.c designates a nested component of m determined by left-to-right level-order search among all nested components of m. If p designates a pointer, then p.c stands for p^.c and p[e] stands for p^[e] (that is, the dot and the opening bracket imply dereferencing). Examples:

```
m3 : Module3;
m3.z
m3.y  (* stands for m3.ancestor.y *)
m3.x  (* stands for m3.ancestor.ancestor.x *)
```

3.3 Pointers

Variables of a PointerType assume as values pointers to variables of some BaseType. The PointerType is said to be bound to its pointer BaseType. Pointer types inherit the extension relation of their base types. A pointer type $P$ bound to $T$ is extended by any pointer type $P_{ext}$ bound to an extension $T_{ext}$ of $T$. For instance, type Ptr3 extends type Ptr1, because Module3 extends Module1:

```
TYPE Ptr1 = POINTER TO Module1;
TYPE Ptr3 = POINTER TO Module3;
p1 : Ptr1;  p3 : Ptr3;
```

The type with which a pointer variable is declared is called its static type (or simply its type). The type of the value assumed by a pointer variable at run time is called its dynamic type. The dynamic type of a pointer variable may be an extension of its static type (see examples in Section 3.5). The type guard PointerVariable(DynamicType) asserts that the PointerVariable has the specified DynamicType. If the assertion fails, the program execution is aborted; otherwise the PointerVariable is regarded as having the DynamicType. The guard is applicable only if the DynamicType is an extension of the static type of the PointerVariable. The test v IS T stands for "the dynamic type of v is T" and is called a type test. It is applicable only if: 1. T is an extension of the declared type T0 of v, and 2. v is a pointer variable.
The monadic address operator "@" applies to an operand which is a variable of any type. The type of the result is a pointer to the operand's type. This operator is used to implement variable parameters (see an example in Section 3.6). Examples:

```
i   (* INTEGER *)
@i  (* POINTER TO INTEGER *)
```

3.4 Bodies for Module Variables

If a module type definition does not include a forward body declaration, variables of this type are not allowed to have bodies. If the definition does include a forward body declaration, two options exist. First, let T be a module type for which a body B has been declared. The variable declaration M, ... : T defines B as a body of M. Second, let T be a module type whose body has been left undefined. The declaration M, ... : T does not define a body for M. An individual body may be defined for M in the scope of M. In this way, module variables of the same type can have completely different bodies. In all cases, a whole module assignment (Section 3.5) can be used to give a new value and a new body to a module variable. Examples (refer to the examples in Section 3.1):

```
log2: Log2;
BODY log2; (* assume x > 0 *)
BEGIN
  RESULT := 0;
  WHILE x > 1 DO x := x DIV 2; INC (RESULT) END
END log2;

myLog2: Log2;
BODY myLog2; (* assume x > 0 *)
  y: INTEGER;
BEGIN
  RESULT := 0; y := 1;
  WHILE x > y DO y := ASH (y, 1); INC (RESULT) END
END myLog2;
```

3.5 Assignments

Assignments replace the current value of a variable by a new value specified by an expression. The expression must be assignment compatible with the variable. In particular, an expression e of type $T_e$ is assignment compatible with a variable v of type $T_v$ if: - $T_e$ and $T_v$ are the same type, as specified below; - $T_v$ and $T_e$ are pointer types and $T_e$ is an extension of $T_v$. Some less important cases of assignment compatibility (numeric types, strings, NIL and pointer types) need not be discussed here.
A type $T_a$ is the same type as a type $T_b$ if: - $T_a$ and $T_b$ are both denoted by the same type identifier, or - $T_a$ is denoted by a type identifier which is declared to equal $T_b$ in a declaration of the form TYPE $T_a$ = $T_b$, or - $T_a$ and $T_b$ are the types of variables a and b which appear in the same identifier list in a variable declaration, provided $T_a$ and $T_b$ are not open arrays. Note that module variables of the same type may have different bodies. If an expression is assigned to a variable, the value of the variable becomes the same as the value of the expression. Besides: 1) If the expression is of a module type, both its value and its body (if any) are assigned into the variable. If the body of the expression is undefined, the body of the variable becomes undefined. 2) If the variable and the expression are of pointer types, the dynamic type of the variable becomes the same as the dynamic type of the expression. Examples (refer to the examples in Sections 3.3 and 3.4):

```
p1 := p3;
p1(Ptr3).z := 0;
log2 := myLog2;
```

Compared to Oberon, K2 offers a restricted form of assignment compatibility: in K2, an extended module type is not assignment compatible with its base type, while in Oberon an extended record type is assignment compatible with its base type.

3.6 Module Calls

A module call consists of a module variable designator, followed by a (possibly empty) list of arguments. For the execution of the call, the arguments are assigned (Section 3.5) to the parameters (Section 3.1), and then the body of the module variable (if any) is executed. The association between the arguments and the parameters is positional, but the list of arguments may have fewer members than the total number of parameters. Module calls can appear as individual statements; they can also be used in expressions, as specified later in this section.
```
ModuleCall = designator "(" Arguments ")"
```

Examples:

```
Subroutine: (
  valuePar: INTEGER;
  variablePar: POINTER TO INTEGER;
  BODY );
BODY Subroutine;
BEGIN
  valuePar := valuePar + 1;
  variablePar^ := variablePar^ + 1
END Subroutine;

i := 0;
Subroutine(0, @i);
(* ... *)
Subroutine.valuePar := 0; Subroutine.variablePar := @i;
Subroutine();
(* ... *)
Subroutine();
i := Subroutine.valuePar + 1;
```

In an expression, a designator of a module variable which is not followed by an argument list refers to the current value of that variable. If it is followed by a (possibly empty) argument list, the designator implies the activation of the module body and stands for the value of the module variable resulting from the execution. A factor of the form

```
F(Arguments)
```

where F is a designator of a module variable which contains a component named RESULT, is evaluated as follows: 1. the module call F(Arguments) is executed first; 2. the value of F.RESULT is returned as the value of F(Arguments). Example (refer to the examples in Section 3.4):

```
log2(k) + 1
```

If designator is a pointer variable with value NIL, the call designator^(Arguments) is executed as follows: 1. NEW(designator) allocates a dynamic module which is thereafter called and executed; 2. DISPOSE(designator) deallocates the dynamic module, assigning NIL into designator. An implementation may use a stack rather than a heap for such implicit module allocation/deallocation. Example:

```
TYPE Factorial = (n: INTEGER; RESULT- : INTEGER; BODY);
BODY Factorial;
  localFactorial: POINTER TO Factorial;
BEGIN
  IF n = 0 THEN RESULT := 1
  ELSE RESULT := n * localFactorial^(n - 1)
  END
END Factorial;
```

3.7 Compilation Units

A compilation unit is either a module type declaration optionally followed by a body, or a module variable declaration optionally followed by a body.
```
CompilationUnit = TypeDeclaration [";" BodyDeclaration] |
                  VariableDeclaration [";" BodyDeclaration]
```

A compilation unit declares a single global identifier which is exported by the declaring unit. The exported identifier can be imported and used by other compilation units by means of an import list (see also Section 3.1).

```
ImportList = IMPORT ident [":=" ident] {"," ident [":=" ident]} ";"
```

Each identifier I from the import list of a module definition can be used in the definition itself, and in the type's body, if the type has a body. If the import list belongs to a module body, I can only be used in the body. If the form II := I is used in the import list, then the imported entity is referred to as II rather than I. A main program can be implemented as a compilation unit which consists of a module variable declaration and a body. A conventional module (or a package) is also a compilation unit consisting of a module variable declaration plus, optionally, a body. A separately compiled class is a compilation unit which consists of a module type declaration and, in most cases, a body. Examples:

```
TYPE ClassDesc = (
  TYPE Class = POINTER TO ClassDesc;
  x : INTEGER;
  method : (v : INTEGER; BODY);
  BODY );
BODY ClassDesc;
  BODY method;
  BEGIN x := v END method;
END ClassDesc;

MainProgram: (BODY);
BODY MainProgram;
  IMPORT ClassDesc;
  ptr: ClassDesc.Class;
  (* ... *)
BEGIN (* MainProgram *)
  NEW (ptr);
  ptr.method (1);
  (* ... *)
END MainProgram;
```

3.8 Module Types and Object Orientation

In K2, a pointer type bound to a module type represents a class (see Class and ClassDesc in Section 3.7). A variable (such as ptr^) of that module type is an object. A module component of that module type is a method. A call of a module component (such as ptr.method(1)) is a message. Type extension implements inheritance in K2.
For instance, SubclassDesc inherits field x from ClassDesc, extending ClassDesc with a field y:

```
TYPE SubclassDesc = (
  IMPORT ClassDesc;
  superclass : ClassDesc;
  y : INTEGER;
  BODY );
```

A module variable declared in the body of an extension can be used to override an inherited method:

```
BODY SubclassDesc;
  overridingMethod : (v: INTEGER; BODY);
  BODY overridingMethod;
  BEGIN x := v; y := v END overridingMethod;
BEGIN (* SubclassDesc *)
  superclass.method := overridingMethod
END SubclassDesc;

subPtr : POINTER TO SubclassDesc;
NEW (subPtr);
subPtr^ ();
```

The module call subPtr^ () executes the assignment superclass.method := overridingMethod from the body of SubclassDesc. This assignment overrides (in subPtr^) the method inherited from ClassDesc. Thus, overriding a method is simply a module variable assignment. The difficulty with Oberon outlined in Section 2.2 does not exist in K2. Note finally that the fields of an extension can be referred to by incomplete designators. For instance: `subPtr.x` (stands for `subPtr.superclass.x`), `subPtr.method` (stands for `subPtr.superclass.method`).

4 Conclusion

A problematic point in Oberon is that procedure fields of records cannot be directly and conveniently redefined for extensions. From a standard object-oriented point of view, method overriding in Oberon may seem unnatural and tedious (see Section 2.2). To cure this problem, Oberon-2 [4] extends Oberon with the new concept of type-bound procedures. Besides, Oberon-2 adds to Oberon open array variables, FOR loops, and read-only export of data. (Object Oberon [5] is an experimental predecessor of Oberon-2.) In fact, Oberon-2 implants the standard concept of method into Oberon. The resulting language is not as simple and clean as Oberon was intended to be. In particular, it supports too many different structures related to procedures: type-bound procedures, traditional constant procedures, procedure types, and procedure variables.
K2 evolved from Oberon by introducing only one new feature, the module type. Thanks to the generality of the new concept, several features of Oberon could be eliminated. Namely, K2 does not contain record and procedure types (because they are special kinds of module types), and does not need procedures and modules (because they are modeled by module variables). While record extension is supported by a specially designated language feature in Oberon, in K2 it is simply achieved by module nesting and the use of incomplete module component designators. The body of a K2 module that is a component of a larger module has access to the components of the enclosing module. Thus, syntactical binding is as simple as module nesting, and there is no need for a special concept such as the type-bound procedure of Oberon-2. One more advantage of K2 compared to Oberon is that a module type that implements a class can be compiled separately and need not be enclosed in a package or Oberon module. Most features of K2 have been tested by an experimental compiler implemented as a Turbo Pascal 6.0 program of about 5000 lines. A K2 compilation unit (a module type or variable declaration, optionally followed by a body) is translated into a Turbo Pascal unit; then this unit is compiled by the Turbo Pascal compiler. The K2 compiler extracts all constant and type declarations from module definitions and generates Turbo Pascal representations for those declarations. Module definitions are compiled into record types. Turbo Pascal objects are not used in the implementation. At present, type tests and type guards are not supported by the experimental compiler. This paper describes an approach to the design of a small and simple, yet practically convincing object-oriented language. Our approach can be characterized as simplicity through generality. While we present a solution, we do not consider it a final one.
The absence of procedures as a special language feature, and their implementation by means of module variables, is a point that is widely open to criticism. Although a pointer variable of a module base type can be used as a conventional procedure (as illustrated in Section 3.6), programmers may wish to have procedures explicitly included in the language. Fortunately, our solution can be relatively easily modified to include procedures while merging record types and modules into the same concept. A careful evaluation of this alternative is a subject of future work.

References

3. N. Wirth: Type Extensions. ACM Transactions on Programming Languages and Systems 10, 204-214 (1987)
4. H. Moessenboeck, J. Templ: Object Oberon - A Modest Object-Oriented

**Appendix: Syntax Description**

```
declaration          = ConstantDeclaration | TypeDeclaration | VariableDeclaration
ConstantDeclaration  = CONST ident "=" ConstExpr
TypeDeclaration      = TYPE ident "=" type
type                 = ArrayDefinition | ModuleDefinition | PointerDefinition | TypeDesignator
TypeDesignator       = qualident
qualident            = {ident "."} ident
ArrayDefinition      = ARRAY [ConstExpr {"," ConstExpr}] OF type
ModuleDefinition     = "(" [ImportList] DeclarationSequence [BODY] ")"
ImportList           = IMPORT ident [":=" ident] {"," ident [":=" ident]} ";"
DeclarationSequence  = {declaration ";"}
BodyDeclaration      = BODY ident ";" [ImportList] DeclarationSequence BodySequence
                       [BEGIN StatementSequence] END ident
BodySequence         = {BodyDeclaration ";"}
PointerDefinition    = POINTER TO type
VariableDeclaration  = ident ["-"] {"," ident ["-"]} ":" type
expression           = SimpleExpression [relation SimpleExpression]
relation             = "=" | "#" | "<" | "<=" | ">" | ">=" | IN | IS
SimpleExpression     = ["+" | "-"] term {AddOperator term}
AddOperator          = "+" | "-" | OR
term                 = factor {MulOperator factor}
MulOperator          = "*" | "/" | DIV | MOD | "&"
factor               = number | CharConstant | string | NIL | set | "~" factor |
                       "@" designator | designator ["(" ExprList ")"] | "(" expression ")"
designator           = qualident {"." ident ["(" ExprList ")"] | "(" TypeDesignator ")" | "^"}
set                  = "{" [element {"," element}] "}"
element              = expression [".." expression]
statement            = [assignment | ModuleCall | IfStatement | CaseStatement | WhileStatement |
                       RepeatStatement | LoopStatement | WithStatement | EXIT | RETURN]
assignment           = designator ":=" expression
ModuleCall           = designator ["(" ExprList ")"]
IfStatement          = IF expression THEN StatementSequence
                       {ELSIF expression THEN StatementSequence}
                       [ELSE StatementSequence] END
CaseStatement        = CASE expression OF case {"|" case} [ELSE StatementSequence] END
case                 = [CaseLabels {"," CaseLabels} ":" StatementSequence]
CaseLabels           = ConstExpr [".." ConstExpr]
WhileStatement       = WHILE expression DO StatementSequence END
RepeatStatement      = REPEAT StatementSequence UNTIL expression
LoopStatement        = LOOP StatementSequence END
WithStatement        = WITH qualident ":" TypeDesignator DO StatementSequence END
CompilationUnit      = TypeDeclaration [";" BodyDeclaration] |
                       VariableDeclaration [";" BodyDeclaration]
```
METHOD FOR CODING PICTURES USING HIERARCHICAL TRANSFORM UNITS

Inventors: Robert A. Cohen, Somerville, MA (US); Anthony Vetro, Arlington, MA (US); Huifang Sun, Woburn, MA (US)
Assignee: Mitsubishi Electric Research Laboratories, Inc., Cambridge, MA (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 247 days.
Appl. No.: 13/169,959
Filed: Jun. 27, 2011
Prior Publication Data: US 2012/0281928 A1, Nov. 8, 2012
Related U.S. Application Data: Provisional application No. 61/482,873, filed on May 5, 2011.
Int. Cl.: G06K 9/36 (2006.01)
U.S. Cl.: 382/232
Field of Classification Search: USPC 382/232-233, 236, 238-240, 244-250; 375/240.11, 240.18-240.19, 240.22; 348/395.1, 400.1-403.1, 408.1-413.1, 416.1, 420.1-421.1; 708/317, 400-405. See application file for complete search history.

ABSTRACT

A bitstream includes coded pictures and split-flags for generating a transform tree, as well as a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree.

20 Claims, 6 Drawing Sheets

FIELD OF THE INVENTION

The invention relates generally to coding pictures, and more particularly to methods for coding pictures using hierarchical transform units in the context of encoding and decoding pictures.
BACKGROUND OF THE INVENTION

For the High Efficiency Video Coding (HEVC) standard currently under development as a successor to H.264/AVC, the application of TUs to residual blocks is represented by a tree, as described in "Video Compression Using Nested Quadtree Structures, Leaf Merging, and Improved Techniques for Motion Representation and Entropy Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1676-1687, December 2010.

Coding Layers

The hierarchical coding layers defined in the standard include video sequence, picture, slice, and treeblock layers. Higher layers contain lower layers.

Treeblock

According to the proposed standard, a picture is partitioned into slices, and each slice is partitioned into a sequence of treeblocks (TBs) ordered consecutively in a raster scan. Pictures and TBs are broadly analogous to frames and macroblocks, respectively, in previous video coding standards such as H.264/AVC. The maximum allowed size of the TB is 64x64 pixels of luma (intensity) samples, together with the corresponding chroma (color) samples.

Coding Unit

A Coding Unit (CU) is the basic unit of splitting used for Intra and Inter prediction. Intra prediction operates in the spatial domain of a single picture, while Inter prediction operates in the temporal domain, among the picture to be predicted and a set of previously-decoded pictures. The CU is always square, and can be 128x128 (LCU), 64x64, 32x32, 16x16 and 8x8 pixels. The CU allows recursive splitting into four equally sized blocks, starting from the TB. This process gives a content-adaptive coding tree structure comprised of CU blocks, each of which can be as large as the TB or as small as 8x8 pixels.

Prediction Unit (PU)

A Prediction Unit (PU) is the basic unit used for carrying the information (data) related to the prediction processes. In general, the PU is not restricted to being square, in order to facilitate partitioning that matches, for example, the boundaries of real objects in the picture.
Each CU may contain one or more PUs.

Transform Unit (TU)

As shown in FIG. 1, a root node 101 of the transform tree 100 corresponds to an NxN Transform Unit (TU) applied to a block of data 110. The TU is the basic unit used for the transformation and quantization processes. In the proposed standard, the TU is always square and can have a size from 4x4 to 32x32 pixels. The TU cannot be larger than the PU and does not exceed the size of the CU. Each CU may contain one or more TUs, where multiple TUs can be arranged in a tree structure, henceforth the transform tree. The example transform tree is a quadtree with four levels 0-3. If the transform tree is split once, then four N/2xN/2 TUs are applied. Each of these TUs can subsequently be split, down to a predefined limit.

For Intra-coded pictures, transform trees are applied over Prediction Units (PUs) of Intra-prediction residual data. These PUs are currently defined as squares or rectangles of size 2Nx2N, 2NxN, Nx2N, or NxN pixels. For Intra-coded pictures, the square TU must be contained entirely within a PU, so the largest allowed TU size is typically 2Nx2N or NxN pixels. The relation between the TUs and PUs within this transform tree structure is shown in FIG. 1.

As shown in FIG. 2, a new PU structure has been proposed for the proposed HEVC standard, as described by Cao et al., "CE6.61 Report on Short Distance Intra Prediction Method (SDIP)," JCTVC-E278, March 2011. With the SDIP method, PUs can be strips or rectangles 201 as small as one or two pixels wide, e.g., Nx2, 2xN, Nx1, or 1xN pixels. When overlaying a transform tree on an Intra-coded block that has been partitioned into such narrow PUs, the transform tree is split to a level where the size of the TU is only 2x2 or 1x1. The TU size cannot be greater than the PU size; otherwise, the transformation and prediction process is complicated.
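The recursive quadtree splitting of TUs described above can be sketched as follows. The `(x, y, size)` tuples and the `split` predicate (standing in for decoded split-flags) are illustrative assumptions, not HEVC bitstream syntax:

```python
# Sketch of quadtree TU splitting: each node either stays a leaf TU or
# splits into four half-size child TUs, down to a minimum size (4x4 here).

def split_tu(x, y, size, depth, split, min_size=4, out=None):
    """Collect the leaf TUs of a quadtree rooted at a size x size TU at (x, y).
    `split(depth)` stands in for the split-flag decoded for a node."""
    if out is None:
        out = []
    if size > min_size and split(depth):
        half = size // 2
        for dy in (0, half):          # four quadrants, raster order
            for dx in (0, half):
                split_tu(x + dx, y + dy, half, depth + 1, split, min_size, out)
    else:
        out.append((x, y, size))
    return out

# Splitting a 32x32 TU once yields four 16x16 TUs, as in the text above.
leaves = split_tu(0, 0, 32, 0, lambda depth: depth == 0)
```

Each recursive call corresponds to one level of the transform tree, and the `min_size` floor plays the role of the predefined split limit.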
SUMMARY OF THE INVENTION

A bitstream includes coded pictures and split-flags. The split-flags are used for generating a transform tree. The bit stream also includes a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram of tree splitting for transform units according to the prior art;
FIG. 2 is a diagram of a decomposition into rectangular prediction units according to the prior art;
FIG. 3A is a flow diagram of an example decoding system used by embodiments of the invention;
FIG. 3B is a flow diagram of transform tree generation used by embodiments of the invention;
FIG. 4 is a diagram of a first step of the transform tree generation according to this invention; and
FIG. 5 is a diagram of a second step of the transform tree generation according to this invention.

DETAILED DESCRIPTION OF THE INVENTION

The embodiments of our invention provide a method for coding pictures using hierarchical transform units (TUs). Coding encompasses encoding and decoding. Generally, encoding and decoding are performed in a codec (CODEC). The codec is a device or computer program capable of encoding and/or decoding a digital data stream or signal. For example, the coder encodes a bit stream or signal for compression, transmission, storage or encryption, and the decoder decodes the encoded bit stream for playback or editing.
The method applies square and rectangular TUs on rectangular, and sometimes very narrow rectangular, portions of pictures, while still maintaining a hierarchical transform tree structure of the Transform Units (TUs) as defined in the High Efficiency Video Coding (HEVC) standard. The term transform can refer to either a forward or an inverse transform. In a preferred embodiment, the transform tree is a quadtree (Q-tree); however, other tree structures, such as binary trees (B-trees) and octrees, generally N-ary trees, are also possible. Input to the method is an N×N coding unit (CU) partitioned into Prediction Units (PUs). Our invention generates a transform tree that is used to apply TUs on the PUs.

Decoding System

FIGS. 3A-3B show an example decoder and method 300 used by embodiments of the invention, i.e., the steps of the method are performed by the decoder, which can be software, firmware, or a processor connected to memory and input/output interfaces as known in the art. Input to the method (or decoder) is a bit stream 301 of coded pictures, e.g., an image or a sequence of images in a video. The bit stream is parsed 310 to obtain split-flags 311 for generating the transform tree, and data 312 to be processed, e.g., N×N blocks of data. The split-flags are associated with TUs of corresponding nodes of a transform tree 321, and the data includes a partitioning of the coding units (CUs) into Prediction Units (PUs). In other words, any node represents a TU at a given depth in the transform tree. In most cases, only TUs at leaf nodes are realized. However, the codec can implement the TU at nodes higher in the hierarchy of the transform tree. The split-flags are used to generate 320 the transform tree 321. Then, the data in the PUs are decoded according to the transform tree to produce decoded data 302. The generation step 320 includes splitting 350 each TU only if the split-flag 311 is set.
For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU. For example, a 16x8 PU can be partitioned by two 8x8 TUs. These two 8x8 TUs can be merged into one 16x8 TU. In another example, a 64x64 square PU is partitioned into sixteen 16x16 TUs. Four of these TUs are merged into a 32x32 square TU, and the other TUs remain as 16x16 squares. The merging solves the problem in the prior art of having many very small, e.g., 1x1, TUs; see Cao et al.

Then, the transform tree 321 is modified 370 according to the splitting and merging. The splitting, partitioning, merging and modifying can be repeated 385 until a size of the TU is equal to a predetermined minimum 380. After the transform tree has been generated 320, the data 312 contained in each PU can be decoded using the TUs associated with the PU.

Various embodiments are now described.

Embodiment 1

FIG. 4 shows the partitioning of the input CU into PUs 312, the iterative splitting 350 (or not) of the PUs according to the split-flags, and the subsequent merging.

Step 1: A root node of the transform tree corresponds to an initial N×N TU covering the N×N PU 312. The bit stream 301 received by the decoder 300, as shown in FIG. 3, contains the split-flag 311 associated with this node. If the split-flag is not set 401, then the corresponding TU is not split, and the process for this node is complete. If the split-flag is set 402, then the N×N TU is split into TUs 403. The number of TUs produced corresponds to the structure of the tree, e.g., four for a quadtree. It is noted that the number of TUs produced by the splitting can vary.

Then, the decoder determines whether a PU includes multiple TUs. For example, a rectangular PU includes multiple TUs, e.g., two square TUs, each of size N/2×N/2. In this case, the multiple TUs in that PU are merged 404 into an N×N/2 or an N/2×N rectangular TU 405 aligned with the dimensions of the PU.
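The merge rule described above can be sketched as follows. The `(x, y, w, h)` tuples for TUs and PUs are an assumed representation for illustration, not the patent's bitstream syntax:

```python
# Sketch of TU merging: all TUs contained in the same PU are replaced by
# one TU aligned with the PU's dimensions.

def merge_tus_per_pu(tus, pus):
    """Replace every group of multiple TUs inside a PU by one PU-sized TU."""
    merged = []
    for (px, py, pw, ph) in pus:
        inside = [t for t in tus
                  if px <= t[0] and py <= t[1]
                  and t[0] + t[2] <= px + pw and t[1] + t[3] <= py + ph]
        if len(inside) > 1:
            merged.append((px, py, pw, ph))   # one TU aligned to the PU
            tus = [t for t in tus if t not in inside]
        # a lone TU inside the PU is left as it is
    return tus + merged

# The 16x8 PU example above: two 8x8 TUs merge into one 16x8 TU.
result = merge_tus_per_pu([(0, 0, 8, 8), (8, 0, 8, 8)], [(0, 0, 16, 8)])
```

In the actual method, the corresponding transform-tree branches are then redefined or eliminated to reflect the merge; this sketch only computes the resulting TU set.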
Rectangular TUs and PUs have a longer axis corresponding to their length and a shorter axis corresponding to their width. Merging square TUs into larger rectangular TUs eliminates the problem where a long narrow rectangle can be split into many small square TUs, as in the prior art; see Cao et al. Merging also reduces the number of TUs in the PUs. Having many small TUs is usually less effective than having a few larger TUs, especially when the dimensions of these TUs are small, or when multiple TUs cover similar data.

The transform tree is then modified. The branch of the transform tree that corresponded to the first N/2×N/2 TU 406 is redefined to correspond to the merged rectangular TU, and the branch of the transform tree that corresponded to the second merged TU is eliminated.

Step 2: For each node generated in Step 1, if the size of the TU is equal to a predefined minimum, the process is done for that node. Each remaining node is further split when the associated split-flag is set, or if the TU for that node is not contained entirely within the PU. Unlike Step 1, however, the way that the node is split depends upon the shape of the PU, as shown in FIG. 5, because the PUs can have arbitrary shapes and sizes. This splitting is performed as described in Step 2a or Step 2b below.

The decision whether to look for the split-flag in the bit stream or to split when the TU covers more than one PU can be made beforehand, i.e., the system is defined such that the split-flag is signaled in the bit stream, or the split-flag is inferred based upon criteria such as minimum or maximum TU sizes, or whether a TU spans multiple PUs.

Implicit Split-Flag

Alternatively, an "implicit split-flag" can be parsed from the bit stream 301. If the implicit split-flag is not set, then the split-flag is signaled for the corresponding node.
If the implicit split-flag is set, then the split-flag is not signaled for this node, and the splitting decision is made based on predefined split conditions. The predefined split conditions can include other factors, such as whether the TU spans multiple PUs, or whether the TU size limitation is met. In this case, the implicit split-flag is received before the split-flag, if any. For example, the implicit split-flag can be received before each node, before each transform tree, before each image or video frame, or before each video sequence.

For Intra PUs, a TU is not allowed to span multiple PUs, because each PU is predicted from a set of neighboring PUs, so those neighboring PUs must be fully decoded, inverse transformed, and reconstructed in order to be used for predicting the current PU. In another example, the implicit flag is not set, but predefined metrics or conditions are used to decide whether to split a node without requiring the presence of a split-flag.

Step 2a: If the TU for this node is square, the process goes back to Step 1, treating this node as a new root node and splitting it into four square TUs, e.g., of size N/4×N/4.

Step 2b: If the TU for this node is rectangular, e.g., N×N/2, then the node is split into two nodes corresponding to N×N/4 TUs. Similarly, an N/2×N TU is split into two nodes corresponding to N/4×N TUs. The process then repeats Step 2 for each of these nodes, ensuring that rectangular TUs are split along the direction of the longer axis, so that rectangular TUs become thinner.

Embodiment 2

In this embodiment, Step 2b is modified so that nodes associated with rectangular TUs are split into multiple nodes, e.g., four nodes and four TUs. For example, an N/2×N TU is split into four N/8×N TUs. This partitioning into a larger number of TUs can be beneficial for cases where the data in the PU differs for different portions of the PU.
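The rule that rectangular TUs are split along the direction of the longer axis, so that they become thinner, can be sketched as a small helper (an illustrative assumption about the geometry, not the patent's exact procedure):

```python
# Sketch of the longer-axis rule: a rectangular TU is cut parallel to its
# longer axis, halving the shorter dimension, so both halves are thinner
# (squares are handled separately by the quadtree split of Step 2a).

def split_rect_tu(w, h):
    """Split a w x h rectangular TU into two thinner TUs."""
    if w >= h:
        return [(w, h // 2), (w, h // 2)]  # wide TU: halve the height
    return [(w // 2, h), (w // 2, h)]      # tall TU: halve the width
```

Iterating this rule drives rectangular TUs toward the narrow strip shapes of SDIP-style PUs, e.g., a 16x8 TU splits into two 16x4 TUs, which in turn split into 16x2 TUs.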
Rather than requiring two levels of a binary tree to split one rectangular TU into four rectangular TUs, this embodiment requires only one quadtree level, and thus only one split-flag, to split one TU into four rectangular TUs. This embodiment can be predefined, or can be signaled as a "multiple split-flag" in the bit stream, similar to the way the implicit flag is signaled.

Embodiment 3

Here, Step 1 is modified so that nodes associated with square TUs are merged into rectangular TUs only when the size of the square TU is less than a predefined threshold. For example, if the threshold is four, then a rectangular 8×4 PU may be covered by two 4×4 TUs. A 4×2 PU, however, may not be covered by two 2×2 TUs. In this case, Embodiment 1 is applied, and the two nodes are merged to form a 4×2 TU to cover the 4×2 PU. This embodiment is useful for cases where square TUs are preferred due to performance or complexity considerations, and rectangular TUs are used only when the square TUs lose effectiveness due to their small dimensions.

Embodiment 4

In this embodiment, Step 2b is modified so that nodes associated with rectangular TUs can be split to form more than two square or rectangular TUs, where the split is not necessarily aligned with the longer dimension of the rectangle. For example, a 16×4 TU can be split into four 4×4 TUs or two 8×4 TUs. The choice of whether to split into square or rectangular TUs can be explicitly indicated by a flag in the bit stream, as was the case for the implicit flag, or it can be predefined as part of the encoding/decoding process. This embodiment is typically used for very large rectangular TUs, e.g., 64×16, so that four 16×16 TUs are used instead of two 64×8 TUs. Another example splits a 64×16 TU into four 32×8 TUs. A very long horizontal TU, for example, can produce artifacts such as ringing in the horizontal direction, so this embodiment reduces the artifacts by reducing the maximum length of a rectangular TU.
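Capping the maximum length of a rectangular TU, as described above, can be sketched as follows; the halving-the-longer-side rule is an assumed concrete policy for illustration, not the patent's exact split selection:

```python
# Sketch of length capping: keep halving a TU's longer dimension until
# neither dimension exceeds the cap, and return the resulting TU list.

def cap_tu_length(w, h, max_len):
    """Split a w x h TU until no dimension exceeds max_len."""
    if max(w, h) <= max_len:
        return [(w, h)]
    if w >= h:
        return cap_tu_length(w // 2, h, max_len) * 2  # halve the width
    return cap_tu_length(w, h // 2, max_len) * 2      # halve the height

# Capping a 64x16 TU at length 32 yields two 32x16 TUs.
pieces = cap_tu_length(64, 16, 32)
```

A smaller cap produces more, shorter TUs, which limits how far ringing artifacts can propagate along the long axis.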
This maximum length may also be included as a signal in the bit stream. Similarly, a maximum width can be specified.

Embodiment 5

In this embodiment, Step 1 is modified so that the N×N TU is directly split into rectangular TUs, i.e., TUs of a size other than N/2×N/2. For example, the N×N TU can be split into four N/4×N TUs. This embodiment differs from Embodiment 2 in that a square TU can be split directly into multiple rectangular TUs, even though the PU may be square. This embodiment is useful for cases where features in the PU are oriented horizontally or vertically, so that horizontal or vertical rectangular TUs aligned with the direction of the features can be more effective than multiple square TUs that split the oriented data in the PU. Features can include color, edges, ridges, corners, objects, and other points of interest. As before, whether or not to do this kind of splitting can be predefined or signaled, as was the case for the implicit split-flag.

Embodiment 6

In this embodiment, Step 1 is modified so that a TU can span multiple PUs. This can occur when the PUs are Inter-predicted. For example, Inter-predicted PUs are predicted using data from previously-decoded pictures, not from data decoded from within the same CU. A transform can therefore be applied over multiple PUs within a CU.

Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

We claim:

1.
A method for coding pictures, comprising the steps of: parsing a bitstream including coded pictures to obtain split-flags for generating a transform tree, and a partitioning of coding units (CUs) into Prediction Units (PUs); generating the transform tree according to the split-flags, wherein nodes in the transform tree represent transform units (TUs) associated with the CUs, and wherein the generating further comprises: splitting each TU only if the split-flag is set; merging, for each PU that includes multiple TUs, the multiple TUs into a larger TU; and modifying the transform tree according to the splitting and merging; and decoding data contained in each PU using the TUs associated with the PU according to the transform tree, wherein the steps are performed in a processor.

2. The method of claim 1, wherein square TUs are split into multiple rectangular TUs.

3. The method of claim 1, further comprising: repeating the splitting, merging and modifying until a size of each TU is equal to a predetermined minimum.

4. The method of claim 3, wherein the repeating continues when the TU for a particular node is not contained entirely within the associated PU.

5. The method of claim 1, wherein the bitstream includes an implicit split-flag, and if the implicit split-flag is not set, then the split-flag is signaled in the bitstream for the corresponding node in the transform tree.

6. The method of claim 3, wherein the bitstream includes an implicit split-flag, and the repeating is performed only if the implicit split-flag is set and a predefined split condition is met.

7. The method of claim 1, wherein the splitting of a rectangular TU is along a direction of a longer axis of the rectangular TU.

8. The method of claim 1, wherein the splitting produces more than two TUs.

9. The method of claim 1, wherein a maximum length or a maximum width of the TUs is reduced.

10. The method of claim 1, wherein the PUs have arbitrary shapes and sizes.

11.
The method of claim 1, wherein the splitting produces multiple TUs.

12. The method of claim 1, wherein horizontal rectangular TUs and vertical rectangular TUs are aligned with a direction of features in the PU.

13. The method of claim 1, wherein the PU contains a portion of video data.

14. The method of claim 1, wherein the PU contains residual data obtained from a prediction process.

15. The method of claim 1, wherein the transform tree is an N-ary tree.

16. The method of claim 1, wherein the splitting of rectangular TUs is along a direction of a shorter axis.

17. The method of claim 1, wherein square or rectangular TUs are merged into larger TUs.

18. The method of claim 15, wherein the value of N of the N-ary tree differs for different nodes of the transform tree.

19. The method of claim 1, wherein the TU spans multiple PUs when the PUs are Inter-predicted.

20. The method of claim 1, wherein the TUs are represented by leaf nodes of the transform tree.
null], [29435, 29435, null], [29435, 29435, null]], "google_gemma-3-12b-it_is_public_document": [[0, 151, true], [151, 151, null], [151, 3204, null], [3204, 6715, null], [6715, 9109, null], [9109, 10967, null], [10967, 13620, null], [13620, 16977, null], [16977, 19336, null], [19336, 22328, null], [22328, 25811, null], [25811, 29435, null], [29435, 29435, null], [29435, 29435, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29435, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29435, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29435, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29435, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29435, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29435, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29435, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29435, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29435, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29435, null]], "pdf_page_numbers": [[0, 151, 1], [151, 151, 2], [151, 3204, 3], [3204, 6715, 4], [6715, 9109, 5], [9109, 10967, 6], [10967, 13620, 7], [13620, 16977, 8], [16977, 19336, 9], [19336, 22328, 10], [22328, 25811, 11], [25811, 29435, 12], [29435, 29435, 13], [29435, 29435, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29435, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
8836b84b9ed9e831a2a52622c73f2755b97d22e3
Module 7: Advanced Recursion

If you have not already, make sure you

- Read *How to Design Programs* Section 17.

A *(listof Int)* is said to be sorted in increasing order if every item in the list is greater than or equal to the value that comes before it. For example, *(list 2 3 3 5 7)* is sorted in increasing order, but *(list 2 3 5 3 7)* is not.

### Exercise

**Complete sorted?.**

```scheme
;; (sorted? L) return #true if every value in L is >= the one before.
;; sorted?: (listof Int) -> Bool
;; Examples:
(check-expect (sorted? (list)) #true)
(check-expect (sorted? (list 2 3 3 5 7)) #true)
(check-expect (sorted? (list 2 3 5 3 7)) #false)
```

What is the base case?

Suppose we have a sorted (listof Int), and we wish to add a new value, keeping it sorted.

- What should we do if the list is empty?
- What should we do if the item is less than or equal to the first item?
- What should we do if the item is greater than the first item?

### Exercise

Complete insert.

```scheme
;; (insert item L) Add item to L so L remains sorted in increasing order.
;; insert: Int (listof Int) -> (listof Int)
;; Requires: L is sorted in increasing order.
;; Examples:
(check-expect (insert 6 (list 7 42)) (list 6 7 42))
(check-expect (insert 81 (list 3 9 27)) (list 3 9 27 81))
(check-expect (insert 5 (list 2 3 7)) (list 2 3 5 7))
```

Using `insert`, sort a list that is not sorted

Note that `insert` requires `L` to be sorted, but there are no restrictions on its length. It could be an empty list. We can use this to sort a list that is not already sorted.

Suppose we have an unsorted list: `(list 2 9 7 4 6)`. Start with an empty list, and construct the answer there. Insert one value into the (empty) answer list. Then insert the next value into the result from this, and continue this process for each value in the list.

How? `foldr`!

Tracing Insertion Sort

;; (insertion-sort L) return a copy of L, sorted in increasing order.
;; insertion-sort: (listof Int) -> (listof Int)
;; Examples:
(check-expect (insertion-sort (list 3 9 7 4)) (list 3 4 7 9))

(define (insertion-sort L)
  (foldr insert '() L))

Tracing the evaluation:

(insertion-sort (list 3 9 7 4))
=> (foldr insert '() (list 3 9 7 4))
=> (insert 3 (insert 9 (insert 7 (insert 4 '()))))
=> (insert 3 (insert 9 (insert 7 (list 4))))
=> (insert 3 (insert 9 (list 4 7)))
=> (insert 3 (list 4 7 9))
=> (list 3 4 7 9)

It works!

Recursion can do everything – but it may be harder

Anything that is possible with any combination of higher order functions (map, filter, and foldr) can be achieved using only recursion. Some more things are also possible! The recursive code may be harder to write or to read, but not always.

### Exercise

Rewrite insertion-sort to use recursion instead of foldr. (You will still use insert.)

```scheme
;; (insertion-sort L) return a copy of L, sorted in increasing order.
(define (insertion-sort L)
  (foldr insert '() L))
```

It would be difficult or impossible to write insert using only higher order functions. Yet it is not too difficult to write using recursion. Always start by considering: can I do this using higher order functions? If you can, it will usually be easier.

Simulating Higher Order Functions using Recursion: map

The following program walks through an entire list, without doing anything with it:

```scheme
(define (do-nothing L)
  (cond [(empty? L) '()]
        [else (cons (first L) (do-nothing (rest L)))]))
```

Previously, we used `map` to transform each item in a list using a given function. Similarly, using recursion:

```scheme
;; (double-each L) multiply each value in L by 2.
;; double-each: (listof Int) -> (listof Int)
(define (double-each L)
  (cond [(empty?
L) '()]
        [else (cons (* 2 (first L)) (double-each (rest L)))]))
```

### Exercise

Use recursion to write a function that duplicates the following function:

```scheme
(define (f L)
  (map (lambda (x) (+ (sqr x) x)) L))
```

The following program walks through an entire list, without doing anything to it:

```scheme
(define (do-nothing L)
  (cond [(empty? L) '()]
        [else (cons (first L) (do-nothing (rest L)))]))
```

This uses `cons` to include every value from the input. If we remove the `cons (first L) ...` it will recurse on the rest of the values, without keeping any. Using `filter` we could keep some values and discard others. Similarly, using recursion:

```scheme
;; (keep-evens L) return all values of L that are even.
;; keep-evens: (listof Int) -> (listof Int)
(define (keep-evens L)
  (cond [(empty? L) '()]
        [(even? (first L)) (cons (first L) (keep-evens (rest L)))]
        [else (keep-evens (rest L))]))
```

### Exercise

Write a recursive function that duplicates the following function:

```scheme
(define (g L)
  (filter (lambda (x) (= 0 (remainder x 3))) L))
```

Recall how `foldr` works. It has three parameters: a combining function, a base value, and a list.

```scheme
(define (sum L)
  (foldr + 0 L))

(foldr + 0 (list 3 5 7)) => (+ 3 (+ 5 (+ 7 0)))
```

We can use recursion to combine the `first` value with the result of a recursive call on the `rest`.

```scheme
(define (rsum L)
  (cond [(empty? L) 0]
        [else (+ (first L) (rsum (rest L)))]))
```

- The empty list is a base case, so it returns the base value; in this case, 0.
- Otherwise, it combines `(first L)` with a recursive call on `(rest L)`, using the combining function; in this case, `+`.

Processing two lists simultaneously

Sometimes we have data in more than one separate list, and need to do computation on the lists together. We identify three important cases:

A list “going along for the ride”, e.g. appending two lists:

(my-append (list 1 2 3) (list 4 5 6)) => (list 1 2 3 4 5 6)

Processing “in lockstep”, e.g.
adding items in one list to corresponding items in another:

(add-pairs (list 1 2 3) (list 5 8 6)) => (list 6 10 9)

Processing at different rates, e.g. merging two sorted lists:

(merge (list 2 3 7) (list 4 6 8 9)) => (list 2 3 4 6 7 8 9)

Inserting an item at the front of a list is easy:

(cons 7 (list 5 3 2)) => (list 7 5 3 2)

Appending an item at the back can be done with a little recursion:

```scheme
;; (add-end n L) return L with n added at the end.
;; add-end: Num (listof Any) -> (listof Any)
;; Example:
(check-expect (add-end 7 (list 2 3 5)) (list 2 3 5 7))

(define (add-end n L)
  (cond [(empty? L) (cons n '())]
        [else (cons (first L) (add-end n (rest L)))]))
```

How much harder would it be to append a list instead of just a number?

### Exercise

Use recursion to complete append-lists.

```scheme
;; Example:
(check-expect (append-lists (list 3 7 4) (list 6 8)) (list 3 7 4 6 8))
```

We do not need to recurse through L2 in order to append it to L1. L2 is present in the recursion, and is passed to the next recursive call. We use `first` and `rest` on L1, just like in single-list recursion. The template looks like this:

```scheme
(define (my-alongforride-template L1 L2)
  (cond [(empty? L1) ...]
        [else (... (first L1) ...
               ... (my-alongforride-template (rest L1) L2) ...)]))
```

Another list “going along for the ride”

We can instead recurse on a number, with an unchanged list:

;; (duplicate-thing L n) return a list with n copies of L.
;; duplicate-thing: (listof Any) Nat -> (listof (listof Any))
;; Example:
(check-expect (duplicate-thing (list 42 6 7) 3)
              (list (list 42 6 7) (list 42 6 7) (list 42 6 7)))

### Exercise

Complete duplicate-thing.

We may process two lists of the same length, at the same time. The dot product of two vectors is the sum of the products of the corresponding elements of the vectors. (This works for vectors of any dimension.) E.g. if \( \vec{u} = [2, 3, 5] \) and \( \vec{v} = [7, 11, 13] \), then \( \vec{u} \cdot \vec{v} = 2 \cdot 7 + 3 \cdot 11 + 5 \cdot 13 = 112 \).

### Exercise

Complete dot-product.

```scheme
;; A Vector is a (listof Num).
;; (dot-product u v) return the dot product of u and v.
;; dot-product: Vector Vector -> Num
;; Requires: u and v have the same length.
;; Example:
;; (check-expect (dot-product (list 2 3 5) (list 7 11 13)) 112)
```

Lockstep template

Here we are consuming the two lists at the same rate, and they are of the same length. When one becomes empty, the other does too.

```scheme
(define (lockstep-template L1 L2)
  (cond [(empty? L1) ...]  ; if L1 is empty, so is L2.
        [else (... (first L1) ... (first L2) ...  ; We use both firsts.
               ... (lockstep-template (rest L1) (rest L2)) ...)]))
               ; We make a recursive call on both rests.
```

### Exercise

Write a recursive function vector-add that adds two vectors.

(vector-add (list 3 5) (list 7 11)) => (list 10 16)
(vector-add (list 3 5 1 3) (list 2 2 9 3)) => (list 5 7 10 6)

Merging two sorted lists

Suppose I have two lists, each sorted, and I wish to create a sorted list that contains the items from both lists.

(merge (list 2 3 7) (list 4 6 8 9)) => (list 2 3 4 6 7 8 9)

Idea: look at the first item in both lists. Take the smaller one; then run recursively on the rest of the list that provided the smaller value, and the whole of the other list. There are two base cases; what are they?

Complete `merge`.
### Exercise

```scheme
;; (merge L1 L2) return the list of all items in L1 and L2, in order.
;; merge: (listof Num) (listof Num) -> (listof Num)
;; Requires: L1 is sorted; L2 is sorted.
;; Example:
(check-expect (merge (list 2 3 7) (list 4 6 8 9)) (list 2 3 4 6 7 8 9))
```

More generally, we may need to consider if (1) both lists are empty; (2) just the first is empty; (3) just the second is empty; or (4) both are non-empty.

```scheme
(define (my-two-list-template L1 L2)
  (cond [(and (empty? L1) (empty? L2)) ...]
        [(and (empty? L1) (not (empty? L2)))
         (... (first L2) ... (rest L2) ...)]
        [(and (not (empty? L1)) (empty? L2))
         (... (first L1) ... (rest L1) ...)]
        [(and (not (empty? L1)) (not (empty? L2)))
         (... my-two-list-template ...)]))
```

If L is a list, (cons? L) gives the same answer as (not (empty? L)). You may use either.

Some examples using prime factor decomposition (pfd)

```scheme
;; A PFD, or prime factor decomposition, is a (listof Nat)
;; Requires:
;;   the elements are in ascending order
;;   the elements are prime numbers.

;; (factorize n) return the prime factor decomposition of n.
;; factorize: Nat -> PFD
;; Examples:
(check-expect (factorize 1) '())
(check-expect (factorize 17) (list 17))
(check-expect (factorize 24) (list 2 2 2 3))
(check-expect (factorize 42) (list 2 3 7))
```

### Exercise

Complete factorize. It may be helpful to consider the count-up template for recursion on a Nat.

Given the prime factor decomposition of two numbers, it is relatively easy to compute the gcd. This can be solved using the generic two-list template.

```scheme
;; (pfd-gcd p1 p2) return the PFD of the gcd of p1 and p2.
;; pfd-gcd: PFD PFD -> PFD
;; Examples:
(check-expect (pfd-gcd (list 2 2 3) (list 2 3 3 5)) (list 2 3))
(check-expect (pfd-gcd (list 2 3 5) (list 3 3 7)) (list 3))
(check-expect (pfd-gcd (list 5 7) (list 3 11)) '())
(check-expect (pfd-gcd (list 5 7) '()) '())
```

### Exercise

Complete pfd-gcd.
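Before moving on, here is one possible shape for the earlier factorize exercise, following the count-up hint. This is a sketch, not the only solution; the helper name `factorize-from` is introduced here and is not part of the notes.

```scheme
;; (factorize-from n d) return the PFD of n, trying divisors from d upward.
;; factorize-from: Nat Nat -> PFD
;; Requires: n >= 1, d >= 2, and n has no prime factor smaller than d.
(define (factorize-from n d)
  (cond [(= n 1) '()]  ; nothing left to factor
        [(= 0 (remainder n d))
         ;; d divides n; since we count up, d is n's smallest factor,
         ;; so it is prime. Keep it and factor the rest.
         (cons d (factorize-from (quotient n d) d))]
        [else (factorize-from n (add1 d))]))

(define (factorize n)
  (factorize-from n 2))
```

Because the smallest divisor greater than 1 of any number is always prime, the result lists only primes, and counting up keeps them in ascending order, as the PFD data definition requires.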
From pfd-gcd to pfd-lcm

```scheme
;; (pfd-gcd p1 p2) return the PFD of the gcd of p1 and p2.
;; pfd-gcd: PFD PFD -> PFD
;; Examples:
(check-expect (pfd-gcd (list 2 2 3) (list 2 3 3 5)) (list 2 3))

(define (pfd-gcd p1 p2)
  (cond [(or (empty? p1) (empty? p2)) '()]
        [(= (first p1) (first p2))
         (cons (first p1) (pfd-gcd (rest p1) (rest p2)))]
        [(< (first p1) (first p2)) (pfd-gcd (rest p1) p2)]
        [(> (first p1) (first p2)) (pfd-gcd p1 (rest p2))]))
```

### Exercise

Complete pfd-lcm.

```scheme
;; (pfd-lcm p1 p2) return the PFD of the lcm of p1 and p2.
;; pfd-lcm: PFD PFD -> PFD
;; Examples:
(check-expect (pfd-lcm (list 2) (list 2)) (list 2))
(check-expect (pfd-lcm (list 2 2 3) (list 2 3 3 5)) (list 2 2 3 3 5))
```

Suppose we have two (listof Str): one of first names, and one of matching last names:

```scheme
(define gnames (list "David" "James" "Douglas" "Burt" "Joseph"))
(define snames (list "Johnston" "Downey" "Wright" "Matthews" "Hagey"))
```

### Exercise

Complete join-names.

```scheme
;; (join-names G S) Make a list of full names from G and S.
;; join-names: (listof Str) (listof Str) -> (listof Str)
;; Example:
(check-expect (join-names gnames snames)
              (list "David Johnston" "James Downey" "Douglas Wright"
                    "Burt Matthews" "Joseph Hagey"))
```

Hint: each name is formed from one value from each list; use the lockstep template!

List equality

How can we tell if two lists are the same? The built-in function `equal?` will do it, but let’s write our own. Things to consider:

- Base case: if one list is empty, and the other isn’t, they’re not equal.
- If the first items aren’t equal, the lists aren’t equal.
- The empty list is equal to itself.

### Exercise

Complete `list=?`.

```scheme
;; (list=? a b) return true iff a and b are equal.
;; list=?: (listof Any) (listof Any) -> Bool
;; Examples:
(check-expect (list=? (list 6 7 42) (list 6 7 42)) true)
```

For added enjoyment (!), rewrite `list=?` without using `cond`.
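Following the three considerations above, one possible sketch (not the only solution) uses the built-in `equal?` to compare individual items:

```scheme
;; One possible list=? sketch, following the considerations above.
(define (list=? a b)
  (cond [(and (empty? a) (empty? b)) #true]    ; both empty: equal
        [(or (empty? a) (empty? b)) #false]    ; only one empty: not equal
        [(not (equal? (first a) (first b))) #false]  ; first items differ
        [else (list=? (rest a) (rest b))]))    ; lockstep on both rests
```

For the cond-free variant, notice that the whole body can be rewritten as a single boolean expression built from `and` and `or`.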
Using lists to speed up computations

Suppose I have a series of numbers that I use frequently, but which take work to compute, such as the Catalan numbers (used in combinatorics; https://oeis.org/A000108):

\[ C_n = \frac{\binom{2n}{n}}{n+1} \qquad C = [1, 1, 2, 5, 14, 42, \ldots] \]

You may assume you have a function to compute a Catalan number:

```scheme
;; (catalan n) return the n-th Catalan number.
;; catalan: Nat -> Nat
```

If my program computes one of these numbers every time it needs one, it may compute the same number many times, which takes time. Instead, I can calculate each just once, and save them in a list.

```scheme
;; (catalans-interval bottom top) return all the catalan numbers
;; starting at index bottom, and ending before index top.
;; catalans-interval: Nat Nat -> (listof Nat)
(define (catalans-interval bottom top)
  (cond [(= bottom top) '()]
        [else (cons (catalan bottom)
                    (catalans-interval (+ 1 bottom) top))]))
```

(You could get the same result with `(map catalan (range bottom top 1))`.)

We can make a list of numbers, but can we get them back out?

### Exercise

Complete n-th-item.

```scheme
;; (n-th-item L n) return the n-th item in L, where (first L) is the 0th.
;; n-th-item: (listof Any) Nat -> Any
;; Example:
(check-expect (n-th-item (list 3 7 31 2047 8191) 0) 3)
(check-expect (n-th-item (list 3 7 31 2047 8191) 3) 2047)
```

By creating a list to store a sequence of numbers, then extracting the \( n \)th item of the list, we can speed up computations, sometimes significantly.

There is a built-in function list-ref that behaves exactly like n-th-item. In real code, it is almost always better to use the built-in function. Avoid writing your own!

(list-ref (list 3 7 31 2047 8191) 0) => 3
(list-ref (list 3 7 31 2047 8191) 3) => 2047

A few reminders about `first` and `rest`

Consider a few `(listof Nat)`:

- `(first (list 1 2 3)) ⇒ 1`, which is a `Nat`.
- `(rest (list 1 2 3)) ⇒ (list 2 3)`, which is a `(listof Nat)`.
- `(first (list 2 3)) ⇒ 2`, which is a `Nat`.
- `(rest (list 2 3)) ⇒ (list 3)`, which is a `(listof Nat)`.
- `(first (list 3)) ⇒ 3`, which is a `Nat`.
- `(rest (list 3)) ⇒ '()` (the same as `empty`), which is a `(listof Nat)`.

If L is a non-empty `(listof X)`, for any type `X`:

- `(first L)` returns an `X`
- `(rest L)` returns a `(listof X)`.

**Warning:** Never use `first` or `rest` on empty lists. Each requires a non-empty list.

Two-dimensional data

You may know how to compute binomial coefficients, used in combinatorics:

\[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \]

If I have \(n\) items, this tells me how many ways there are to choose \(k\) of them. Requires: \(k \leq n\).

Suppose we want to save these, instead of recomputing as needed. How can we store the data? A list of lists!

(binomials 4) =>
  (list (list 1)           ; 0 choose 0
        (list 1 1)         ; 1 choose 0, 1 choose 1
        (list 1 2 1)       ; 2 choose 0, 2 choose 1, 2 choose 2
        (list 1 3 3 1)     ; ...
        (list 1 4 6 4 1))  ; ...

I can get one row out of this:

(n-th-item (binomials 4) 4) => (list 1 4 6 4 1)

...and an item out of that row:

(n-th-item (n-th-item (binomials 4) 4) 2) => 6

Computing binomials

For reference, you may use the following functions to compute \(\binom{n}{k}\):

```scheme
;; (factorial n) return n!.
;; factorial: Nat -> Nat
;; Example:
(check-expect (factorial 4) 24)
(define (factorial n)
  (cond [(= n 0) 1]
        [else (* n (factorial (sub1 n)))]))

;; (binomial n k) return n choose k.
;; binomial: Nat Nat -> Nat
;; Example:
(check-expect (binomial 4 1) 4)
(check-expect (binomial 4 2) 6)
(define (binomial n k)
  (/ (factorial n) (* (factorial k) (factorial (- n k)))))
```

Creating two-dimensional data

How can I build a table like this?

(binomials 4) =>
  (list (list 1)           ; 0 choose 0
        (list 1 1)         ; 1 choose 0, 1 choose 1
        (list 1 2 1)       ; 2 choose 0, 2 choose 1, 2 choose 2
        (list 1 3 3 1)     ; ...
        (list 1 4 6 4 1))  ; ...

Looks like a good use of map.
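As a reminder of the idiom we are about to use: `map` applies a one-argument function to each list item, so a two-argument function must first be wrapped in a `lambda` that fixes one of its arguments. A tiny sketch (this example is introduced here, not part of the notes):

```scheme
;; Fix the first argument of the two-argument expt with a lambda,
;; so map can apply the remaining one-argument function.
(map (lambda (k) (expt 2 k)) (list 0 1 2 3)) ; => (list 1 2 4 8)
```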
Since binomial has two parameters, use lambda to fill in the extra. To build one row:

```scheme
;; (make-binomial-row r) return the r-th row of the binomial table.
;; make-binomial-row: Nat -> (listof Nat)
;; Example:
(check-expect (make-binomial-row 4) (list 1 4 6 4 1))
(define (make-binomial-row r)
  (map (lambda (k) (binomial r k)) (range 0 (+ r 1) 1)))
```

Now that we have a way to build one row, use `map` a second time to build all the rows:

```scheme
;; (binomials n) return the binomial table up to n choose n.
;; binomials: Nat -> (listof (listof Nat))
;; Example:
(check-expect (binomials 2)
              (list (list 1)       ; 0 choose 0
                    (list 1 1)     ; 1 choose 0, 1 choose 1
                    (list 1 2 1))) ; 2 choose 0, 2 choose 1, 2 choose 2
(define (binomials n)
  (map make-binomial-row (range 0 (+ n 1) 1)))
```

Creating two-dimensional data

How can I use recursion to build a table like this?

(binomials 4) =>
  (list (list 1)           ; 0 choose 0
        (list 1 1)         ; 1 choose 0, 1 choose 1
        (list 1 2 1)       ; 2 choose 0, 2 choose 1, 2 choose 2
        (list 1 3 3 1)     ; ...
        (list 1 4 6 4 1))  ; ...

We will start by building a function to create just one row of the table.

```scheme
;; (make-binomial-row-from r i) make the rest of the r-th row of the
;; binomial table, starting from i.
;; make-binomial-row-from: Nat Nat -> (listof Nat)
;; Example:
(check-expect (make-binomial-row-from 4 0) (list 1 4 6 4 1))
(define (make-binomial-row-from r i)
  (cond [(> i r) '()]
        [else (cons (binomial r i) (make-binomial-row-from r (+ 1 i)))]))
```

Creating two-dimensional data

Since `make-binomial-row-from` makes one row of the table, now I just need to call it repeatedly, once for each row. I can do this with another count up recursion.

\begin{verbatim}
;; (binomial-rows low high) make all the rows of binomials from low to high.
;; binomial-rows: Nat Nat -> (listof (listof Nat))
(define (binomial-rows low high)
  (cond [(= low high) '()]
        [else (cons (make-binomial-row-from low 0)
                    (binomial-rows (+ 1 low) high))]))
\end{verbatim}

### Exercise

Using recursion, create a function (and necessary helper functions) to create the times tables up to a given value. For example,

(times-tables 4) => (list (list 0 0 0 0)
                          (list 0 1 2 3)
                          (list 0 2 4 6)
                          (list 0 3 6 9))

Module Summary

- Become comfortable writing code that uses recursion in more complex ways, including insertion sort and selection sort.
- Understand how recursion can replace any use of higher order functions, and do things that are impossible with only higher order functions.
- Be able to design recursive functions that recurse on two values.
- Use recursion to build lists to store data, and to extract it again.

Before we begin the next module, please

- Read *How to Design Programs* Sections 6, 7.
What can DDS do for Android?

Contents

- Abstract
- What is Communications Middleware?
- What is DDS?
- How does DDS work?
- What are the Benefits of DDS?
- DDS Reduces Risk
- DDS Reduces Cost
- CoreDX DDS
- CoreDX DDS is Interoperable
- QoS Policies to Tailor Communications Behavior
- Dynamic Type Technology
- CoreDX DDS and Android Sample Case Studies
- Why is CoreDX DDS the best Communications Middleware for Android?
- Key Points
- Conclusion and Summary
- Twin Oaks Computing

Abstract

Today’s Android developers typically build their applications without middleware. This is understandable considering most early apps did not communicate off the Android device. However, with Communications Middleware, this is changing.

The number of activated Android devices continues to grow, with Android holding a 43% share of the US mobile market in 2011 (that’s almost 50 million active users).¹ Along with this popularity, software developers in both the commercial and DoD industries are finding new and valuable uses for more complex and distributed apps on these mobile, handheld devices. In addition, many project managers would like to make their existing software Android compatible.

Communications Middleware like CoreDX DDS provides numerous benefits to distributed software systems, and these benefits can now be taken advantage of by Android apps. This paper will give some background information on Communications Middleware, DDS, CoreDX DDS, and Interoperability, and how they apply to Android.

---

1 http://www.theverge.com/2012/1/7/2689585/neilson-2011-media-numbers-tv-android

What is Communications Middleware?

Communications Middleware is computer software that enables two otherwise separate software components, processes, and/or applications to exchange information, either within one device, or between multiple devices. It is a specific kind of Middleware: the layer that lies between the operating system (Android, Linux, Windows, etc.)
and system applications (accounting software, media players, office productivity suites, etc.), that allows for communications. The purpose of Communications Middleware is to simplify the designing, programming, and managing of software applications by streamlining the way these applications receive and process data. Communications Middleware is used in a wide variety of software systems, from mobile devices (Android phones, PDAs, Kindle Fires, iPads, etc.) to enterprise and database systems. The equipment in these systems varies in screen and visual display capabilities, bandwidth capacities, and processing power. Communications Middleware can understand and support multiple programming languages (Java, C, C++, PHP, Ruby on Rails, etc.). We can use an Android phone and a PC here as an example. They both function in vastly different capacities, but with Communications Middleware are able to “talk” to and “work” with each other. This holds true for devices of similar capacities with different operating systems as well.

Why use Middleware?

A wide variety of operating systems are being used in today’s software development efforts: Android, Windows, Linux, and QNX, just to name a few. These operating systems communicate data differently, just as different hardware types (cell phones, computers, printers, etc.) store and retrieve information in a variety of ways. With the increasing popularity of Android apps, many developers would like to make their existing applications Android compatible. This translates into an expensive problem when your project wants to exchange information between two diverse systems, potentially costing you and your business precious time, money, and resources.

**The solution to this problem is DDS.** Using a Communications Middleware such as CoreDX DDS reduces system complexity.
While different Communication Middleware technologies provide different features and benefits, they all strive to provide application portability across different operating systems and hardware, reduce development cost, and simplify the resulting application code. **What is DDS?** Data Distribution Service (DDS) is a type of Communications Middleware whose concept was standardized and is currently managed by the Object Management Group (OMG). DDS simplifies software systems, and reduces risk and costs through development, integration, deployment, and lifetime maintenance of distributed software systems. Historically, DDS has been used in large DoD systems to satisfy Open Architecture requirements for Extensibility, Maintainability, Composability, and Interoperability, but only in the larger computer components of these systems. Now, with the availability of small-footprint DDS implementations, many other applications can benefit from standardized publish subscribe communications, including Android apps. The DDS Standard contains an easy to use, well defined Application Programming Interface (API). This allows the developer to write portable code, code that will work with any compliant DDS implementation. The DDS standard references the Real Time Publish Subscribe (RTPS) Wire Protocol standard which defines the wire protocol for DDS communications. This allows applications built with different DDS implementations to communicate, or interoperate, with each other. Users of DDS do not tie themselves to a particular vendor, but to a standard, and can change or intermix DDS vendors throughout the development and deployment cycles. Each application communicating over DDS contains the DDS API and provides the discovery, and other required communication details. DDS simplifies communications processes among different system types, making distributed development easier, faster, and more reliable. 
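As a rough mental model, the publish-subscribe pattern that DDS standardizes can be sketched in a few lines. The following is purely illustrative Python, not the DDS API and not CoreDX DDS code; every name in it is hypothetical.

```python
# Minimal, hypothetical sketch of the publish-subscribe model (NOT the
# DDS API): publishers and subscribers are decoupled by topic name, and
# publishing while no subscribers are present is perfectly legal.

class Bus:
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, sample):
        # The publisher never needs to know who (if anyone) is listening.
        for callback in self.subscribers.get(topic, []):
            callback(sample)

bus = Bus()
received = []
bus.publish("SensorData", {"temp": 20})       # no subscribers yet: no error
bus.subscribe("SensorData", received.append)  # a subscriber joins
bus.publish("SensorData", {"temp": 21})
print(received)  # [{'temp': 21}]
```

Real DDS goes much further than this sketch, of course: discovery, transports, reliability, and QoS are handled by the middleware rather than by application code.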
A DDS Communications Middleware simplifies your Android project from development through initial deployment and maintenance over the life of the system.

**How does DDS work?**

*DDS is in charge of transferring information:* Information is transferred from publishers (producers and senders of messages) to subscribers (consumers and receivers of messages). Subscribers and publishers employing DDS can use different platforms or operating systems and still communicate with each other. Exchanges can take place across tens of thousands of devices at the same time, each of which can be a publisher, a subscriber, or both simultaneously.

*Systems that use DDS to communicate can do so independently of each other:* They do not rely on each other’s systems to send and process information. A publisher can still publish information even if there is no subscriber seeking the information. A subscriber can receive information from other publishers if the original publisher it was getting information from fails.

*DDS automatically knows how to send and receive messages with other DDS users:* By design, DDS is able to determine which users should receive messages, where these users are located, and what to do if a receiver is unavailable. This simplifies data distribution, lessens the code required to perform message delivery (and less code means more efficiency), and thus saves time.

*DDS participants can be on the same machine or across a network:* The application uses the same DDS API for communications. Because there is no need to know or configure IP addresses, or take into account the differences in machine architectures, adding an additional communication participant on any operating system or hardware platform becomes an easy, almost trivial, task.

Each version of DDS can perform the same minimum set of functions in the same way with the same results.
This is referred to as an “open standard system”: system components from different manufacturers can be replaced and/or take over for each other with minimal or no changes to the larger systems in which they operate. This saves costs and avoids vendor lock-in.

DDS works in “real time”: With very low overhead and efficient processing, messages are sent with minimal latencies (generally measured in microseconds). It has a flexible architecture that is also scalable: it can adapt to processing both large and small amounts of data.

What are the Benefits of DDS?

DDS is widely adopted across a variety of industries, including some of the most mission-critical systems within the United States Department of Defense. DDS is also being used in a growing number of commercial applications, including smart vehicle control, high-speed stock trading, consumer electronics, telecommunications, manufacturing, power generation, medical devices, and simulation. As Android use and popularity increases, we expect to see this trend among Android apps as well.

**DDS Reduces Risk:**

**DDS ensures consistency:** Users of DDS can make changes to one system without the other system being adversely affected. Time is saved as less design time is allocated to determining how to get these systems to “talk” to each other.

**DDS automatically switches between publishers if the primary publisher fails:** For example, once programmed, a publisher knows to “re-try” in 10 milliseconds, in 10 minutes, every hour, or to drop the message altogether if it is unable to reach a subscriber, and vice versa. In addition, subscribers always get the information that most closely matches their needs. If the information they seek is unavailable, they get the next best information. The system will automatically switch back to the information that most closely matches their needs when it becomes available.
**DDS has no single point of failure:** Systems that use DDS to communicate can do so independent of a server or service, and independently of each other. They do not rely on each other’s systems to send and process information. A publisher can still publish information even if there is no subscriber seeking the information, or if a subscriber becomes “lost” for any reason. A subscriber can search for other publishers if the publisher it is getting information from fails or is lost.

**DDS filters data for unique users:** Each user only receives the information they need (or are intended) to receive. Consider online banking: the information is available to anyone who can access the web, as long as they have the correct username and password.

**DDS can be used wirelessly to communicate information:** For example, handling secure transactions between Smartphones and financial institutions, or scanning and tracking systems for package delivery. DDS provides high-performing, **reliable** communications over unreliable wireless networks, including Wi-Fi, Bluetooth, and cellular networks.

**DDS is reliable and always available:** Interactions with other services or applications are independent of central network services, meaning they are always available for users (a server can’t be “down” because of too many users, etc.). Data is cached by the publisher until all subscribers have received the information, so even if the network is unavailable the information is not lost. The publisher and subscriber merely try again.

**DDS has the ability to tailor communication behavior:** Quality of Service (QoS) policies allow the user to configure over 22 distinct items of communications behavior, providing fine-grained control to meet your communication requirements. For example: reliability requirements, storage requirements, data presentation requirements, data filtering requirements, and redundancy or failover requirements (more than one path to communicate information).
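The publisher-side caching just described can be illustrated with a short sketch. This is hypothetical Python, not CoreDX DDS code: the writer keeps each sample cached until every known subscriber has acknowledged it, so a failed delivery is simply retried later rather than lost.

```python
# Hypothetical sketch (not CoreDX DDS) of publisher-side caching for
# reliable delivery over an unreliable network.

class Subscriber:
    def __init__(self):
        self.online = False
        self.received = []

    def try_deliver(self, sample):
        if self.online:
            self.received.append(sample)
            return True
        return False  # network down: delivery fails for now

class ReliableWriter:
    def __init__(self, subscribers):
        self.subscribers = subscribers
        self.cache = []  # samples not yet acknowledged by everyone

    def write(self, sample):
        self.cache.append({"sample": sample, "acked": set()})
        self.flush()

    def flush(self):
        # Retry delivery; a sample leaves the cache only once every
        # subscriber has acknowledged it.
        for entry in list(self.cache):
            for sub in self.subscribers:
                if sub not in entry["acked"] and sub.try_deliver(entry["sample"]):
                    entry["acked"].add(sub)
            if entry["acked"] == set(self.subscribers):
                self.cache.remove(entry)

sub = Subscriber()
writer = ReliableWriter([sub])
writer.write("msg-1")   # subscriber offline: sample stays cached, not lost
sub.online = True
writer.flush()          # the publisher "merely tries again"
print(sub.received)     # ['msg-1']
print(writer.cache)     # []
```

In real DDS this behavior is selected declaratively through the Reliability and History QoS policies rather than hand-coded.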
DDS Reduces Cost:

DDS cuts development lifecycle cost: When disparate systems need to be integrated, instead of building a new system from the beginning, DDS can be deployed to facilitate communications and the project can continue. This saves both time and labor cost.

Administration and maintenance expenses are reduced with DDS: Standardized programming and communication interfaces simplify administration and maintenance of DDS-enabled systems. It is easy to replace a system component because the other components don’t have to change; the new component is simply accepted. It is easy to remove a component because other components will continue to exchange information when the removed component is gone. It is easy to add a device: new publishers and subscribers can be added with no change to existing components.

http://portals.omg.org/dds/category/keywords/qos

CoreDX DDS

CoreDX DDS is a full-featured DDS implementation that comes in a surprisingly small package, perfect for Android devices. The entire “Android library”, containing full interoperability with the RTPS wire protocol and support for all the standard, and some additional, QoS policies, is a mere 500 KB. That’s Kilobytes, not Megabytes! The CoreDX DDS Ping application (a text-based application built to test interoperability over lots of QoS policies) is just 250 KB.

Platform Operating Systems and Transports

CoreDX DDS supports a wide variety of platforms, including:

- Android
- ARM
- Bluetooth Network
- Cellular Network
- Linux
- LynxOS
- NexusWare
- QNX
- Solaris 10
- VxWorks
- Wi-Fi Network
- Windows

Small Runtime Footprints

CoreDX DDS truly has a small footprint. Small from every angle, the full-featured CoreDX DDS library is measured in Kilobytes, not Megabytes. This compact implementation is truly unique in the middleware industry, and allows CoreDX DDS to benefit a wide range of Android devices.
CoreDX DDS is small from every angle: small library size, low line of code count, and minimal run-time resource requirements. CoreDX DDS has been deployed on a single CPU Intel Pentium system with just 640K RAM. Low Line of Code Count Every line of code in a software product has a cost associated with it, and not just the cost of certification for safety critical applications. Each line of code has the potential for increasing the number of instructions that must be executed, and degrading the overall performance of communications. In addition, each line of code has the potential for a programming error, or bug, and more lines of code make it difficult to track down and identify such errors. These are fundamental concepts in any software development project, and truly experienced software engineers understand the importance of writing code that is well thought out and compact. With CoreDX DDS, each software addition and modification is carefully analyzed for its Source Line of Code (SLOC) and performance impact and benefit to the overall product. These fundamental software engineering concepts and complementing processes ensure the CoreDX DDS baseline maintains its status as the World Leading Small Footprint DDS Implementation. The complete CoreDX DDS baseline includes fewer than 35,000 SLOC for the Standard Edition. A Safety Critical Baseline for the CoreDX DDS product has fewer than 13,000 SLOC. This Safety Critical Baseline includes all the QoS policies and features of the standard CoreDX DDS baseline. **CoreDX DDS is Interoperable:** DDS Interoperability is the ability of DDS implementations from different vendors to communicate\(^7\). The Real-Time Publish Subscribe (RTPS) protocol defines the standardized wire protocol for DDS and is what allows interoperability on the wire between different implementations of DDS. CoreDX DDS can “talk” with every other type of DDS. 
The wire protocol is standardized, ensuring that programs using different DDS products can discover each other, exchange data, and communicate. There are a growing number of vendors active in interoperability testing and demonstration, including Twin Oaks Computing, RTI, and PrismTech. These vendors are active at the OMG Technical Meetings and regularly test and demonstrate their interoperable DDS products.\(^8\)

Twin Oaks brings DDS interoperability to Android embedded platforms. CoreDX DDS applications can easily communicate with applications based on DDS from other vendors. This multi-vendor interoperability is enabled by multiple standards managed by the Object Management Group (OMG), including specifications of the application programming interface (API), real-time publish subscribe wire protocol (RTPS), and quality of service (QoS) features\(^9\). CoreDX DDS includes proven support across all of these interoperability aspects.

DDS consumers recognize the benefits of interoperability, especially where it provides them the flexibility to adapt and extend their systems with very little cost. Here are a few examples of the types of new devices our clients are using, or planning to use, to extend their projects with CoreDX DDS:

- Android based phones, tablets and embedded devices
- QNX based mobile devices
- Set-top boxes
- Gateways
- Gumstix tiny Linux computers
- Micrium µC OS
- FPGAs
- Safety Critical Applications

QoS Policies to Tailor Communications Behavior

DDS provides a rich set of Quality of Service (QoS) policies to tailor the behavior of communications. These QoS policies can be used individually or together to affect a variety of communications aspects, including reliability, performance, persistence of data, and amount of system resources used. The breadth and depth of the configuration available with these QoS policies allow CoreDX DDS to be a superior choice for communications in a large variety of industries and architectures.
Examples of QoS policies include:

- **Reliability** (what are the reliability requirements for this data?)
- **Durability** (how long is data saved for possible future publication?)
- **History and Resource Limits** (what are the storage requirements?)
- **Filtering and Presentation** (which data should be presented to the subscriber, and how?)
- **Ownership** (are there any failover or redundancy requirements?)

These are just a few of the twenty-two distinct QoS policies defined by the DDS standards. However, it is the coverage of these QoS policies—the number of these standardized QoS policies implemented by each DDS vendor—that allows for truly interoperable implementations. All of these interoperability aspects put together allow the greatest flexibility for middleware consumers.

Dynamic Type Technology

A feature exclusive to CoreDX DDS is support for Dynamic Type Technology. This innovative technology eases system integration challenges and enables bridging DDS data between disparate systems in a flexible and dynamic environment. It enables DataReaders to determine topic data types dynamically, at run time. Through Dynamic Type introspection, the subscribing application can explore the data type and access data fields. Dynamic Types offer a flexible solution that lowers Total Cost of Ownership.

DDS provides Dynamic Discovery of publishers and subscribers. Dynamic Discovery also makes your DDS applications extensible. This means the application does not have to know or configure the endpoints for communications, because they are automatically discovered by DDS. This dynamic discovery goes even further than discovering endpoints. DDS will discover whether an endpoint is publishing data, subscribing to data, or both. It will discover the type of data being published or subscribed to. It will also discover the publisher’s offered communication characteristics and the subscriber’s requested communication characteristics.
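The discovery attributes just listed (role, topic, data type, offered and requested characteristics) can be pictured with a small sketch. This is illustrative Python only, not the DDS discovery protocol or any vendor API; every name here is hypothetical.

```python
# Hypothetical sketch of discovery-time matching (not real DDS code):
# each endpoint advertises its role, topic, data type, and reliability
# characteristics, and the middleware pairs up compatible endpoints.

publisher = {
    "role": "publisher",
    "topic": "SensorData",
    "type": "SensorReading",
    "offered_reliability": "RELIABLE",
}

subscriber = {
    "role": "subscriber",
    "topic": "SensorData",
    "type": "SensorReading",
    "requested_reliability": "BEST_EFFORT",
}

def matches(pub, sub):
    """A pub/sub pair matches when topic and type agree and the offered
    QoS satisfies the requested QoS (RELIABLE satisfies BEST_EFFORT)."""
    levels = {"BEST_EFFORT": 0, "RELIABLE": 1}
    return (pub["topic"] == sub["topic"]
            and pub["type"] == sub["type"]
            and levels[pub["offered_reliability"]]
                >= levels[sub["requested_reliability"]])

print(matches(publisher, subscriber))  # True
```

The request/offer comparison shown for reliability is applied, in real DDS, across many of the QoS policies at once.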
All of these attributes are taken into consideration during the dynamic discovery and matching of DDS participants.

CoreDX DDS and Android Sample Case Studies:

ContextNet and Android: The Laboratory for Advanced Collaboration (LAC) chose to utilize the Twin Oaks Computing CoreDX DDS University Licensing Program for an ongoing project: ContextNet. Project ContextNet aims at enabling communication services for large, wide-scale exchanges, including on-line monitoring or coordination of mobile device activities, and information sharing through social networks. These entities may be users of portable devices (e.g. smartphones), vehicles, or movable gadgets. CoreDX DDS is being used to build the backbone of the communication infrastructure for the project, which will serve independent users in diverse network domains. This infrastructure will communicate with approximately 30,000-50,000+ mobile devices simultaneously, each of them sending data every 30 seconds.

ContextNet is primarily focused on addressing three major challenges:

- Enabling the scalable distribution of information among hundreds of thousands of context-producing and context-consuming entities
- Devising automated reasoning techniques that are inherently distributed and capable of detecting application-relevant patterns of global context situations (e.g. identify over- or underload conditions in the distribution density of the mobile entities)
- Using the Semantic Web to combine several types of context (computing, physical, time, user context) and integrate it with social networks so as to leverage the communication and coordination capabilities of mobile users and/or vehicles

Twin Oaks Computing supplied the University with CoreDX DDS: the middleware they needed to facilitate communications. CoreDX DDS is a high performance, robust, and scalable data-centric publish-subscribe peer-to-peer architecture for real time data distribution.
CoreDX DDS provides a wide set of configurable Quality of Service (QoS) policies for tailoring the communication behavior between producers and consumers of data. Some of the benefits of CoreDX DDS include decoupling software components, high availability, interoperability between implementations, and automatic discovery of compatible communication peers.

Consider an example DoD system: DCS Corp, a company that works closely with government agencies in the national security sector, recently chose Twin Oaks Computing’s CoreDX DDS for their unmanned robotic systems project. DCS Corp is developing a Graphical User Interface (GUI) that controls these unmanned robotic systems. Their software architecture uses DDS to communicate user interactions, events, and status information between graphical displays (controllers) and the unmanned systems. Their software was originally developed with DDS to run on Windows and Linux platforms, but DCS Corp recently began exploring porting to Android devices as well. The challenge they faced was to port their existing C++ code base to Android and minimize redevelopment so that they could maintain consistency across platforms. Their current DDS provider did not port to Android, but due to the standardized API and interoperability of DDS, DCS Corp was able to migrate to CoreDX DDS for their project.

“CoreDX was the only vendor to provide a DDS distribution for Android and it allowed us to migrate to CoreDX DDS without significant changes to our DDS interface software.” – Brian Wood, DCS Corp

Twin Oaks Computing provided additional support for their effort by preparing and releasing a C and C++ binding for their CoreDX DDS Android distribution. This allowed DCS Corp to use the Android Native Development Kit (NDK) to develop C++ applications instead of requiring them to use Java, saving DCS Corp development time.
“The small footprint of CoreDX DDS was also significant due to the embedded nature of our software and it has performed well on our small Android devices.” – Brian Wood, DCS Corp

The Power of Interoperability

The Android market and devices are proliferating at an amazing rate, and many existing DDS users would like to make use of this technology by extending the reach of their existing systems to individual, mobile, Android devices. The original DDS vendor does not support Android, but because DDS is a Standards Based Technology with proven Interoperability, this system maintainer can look to other DDS vendors for a possible solution. In this particular example, the contractor maintaining this DoD system contacted Twin Oaks Computing with the hopes of finding a native DDS solution for Android that would meet the customer’s requirements - without requiring them to replace their existing DDS solution. Because of DDS Interoperability, they were successful. Now the customer has their enhanced system, connecting their legacy components with new Android devices, and they were able to do it without any modifications to the communication components of their legacy system. This is the strength of Interoperability, and the strength of the DDS Standards.

Why is CoreDX DDS the best Communications Middleware for Android?

CoreDX DDS provides a native DDS Android solution: CoreDX DDS does not require gateways, translators, or web servers. Rather, native Android libraries are linked to your Android app.

CoreDX DDS is the leading small footprint implementation of the Data Distribution Service (DDS) standard: The full feature set of CoreDX DDS is easy to use with Size, Weight, and Power (SWaP) constrained applications such as Android. With a small footprint and full Quality of Service coverage, CoreDX DDS is designed specifically to meet the performance and complexity requirements of real-time, embedded, time-critical, and
mission-critical applications, while still being small in size and conservative in memory usage.

CoreDX DDS has small run-time requirements: CoreDX DDS can be used in a wide variety of embedded applications with minimal memory and CPU resources, reducing the amount of static memory (or FLASH) required to store your application. Based on an anonymous survey of Twin Oaks Computing customers conducted between December 2011 and January 2012, the features clients found most useful were the minimal run-time memory footprint, ease of use, and small library size CoreDX DDS offers.

CoreDX DDS is easy to use: CoreDX DDS has a clean, easy to use Application Programming Interface (API), uncluttered by any unnecessary or confusing configuration parameters. CoreDX DDS features completely native source code with no 3rd party products or packages, and is written to the DDS standards. This translates into clean source code, with low Source Lines of Code (SLOC) counts.

CoreDX DDS supports advanced reliable communications technology: CoreDX DDS can easily be employed reliably in wireless and other unreliable network environments (perfect for Android!). CoreDX DDS has lightweight, reliable communications protocols that offer higher efficiency and scalability than TCP.

CoreDX DDS has proven vendor interoperability: CoreDX DDS can exchange data and communicate with every other implementation of DDS.

CoreDX DDS supports multiple development languages and environments: The same CoreDX DDS API, the same familiar programming languages, and the same advanced DDS features are all available for Android. CoreDX DDS applications run on your favorite Android smart phone, tablet, and other embedded computers; we support Android on all the common (and some uncommon) hardware platforms. C, C++, and Java languages are supported for Android.
Key Points:

- Communications Middleware is computer software that enables two otherwise separate software components, processes, and/or applications to exchange information.
- The purpose of Communications Middleware is to simplify the designing, programming, and managing of software applications by streamlining the way these applications receive and process data.
- DDS is a Communications Middleware, in charge of transferring information.
- DDS simplifies software systems, and reduces risk and cuts costs through development, integration, deployment, and lifetime maintenance of distributed software systems.
- Systems that use DDS to communicate can do so independently of each other.
- DDS automatically knows how to send and receive messages with other DDS users.
- DDS participants can be on the same machine or across a network.
- DDS ensures consistency.
- DDS has no single point of failure.
- DDS can be used wirelessly to communicate information.
- DDS is reliable and always available.
- CoreDX DDS is the leading small footprint implementation of the Data Distribution Service (DDS) standard.
- CoreDX DDS is easy to use, has small run-time requirements, and a low line of code count.
- CoreDX DDS provides a rich set of Quality of Service (QoS) policies.
- CoreDX DDS has proven vendor interoperability.
- CoreDX DDS supports multiple development languages and environments.

Conclusion and Summary

Communications Middleware is computer software that enables two otherwise separate software components, processes, and/or applications to exchange information, either within one device, or between multiple devices. Data Distribution Service (DDS) is a type of Communications Middleware that simplifies software systems, and reduces risk and costs through development, integration, deployment, and lifetime maintenance of distributed software systems. DDS is now available for Android devices.
DDS technology increases software development productivity, reduces risk, and eases deployment and maintenance challenges in dynamic systems. DDS Interoperability allows consumers to replace or augment one DDS implementation with another better suited to their requirements and extend already deployed systems with new applications using different DDS implementations. This flexibility further reduces risk and further enables management of changing systems. With the increasing popularity of Android apps, many developers would like to make their existing applications Android compatible. CoreDX DDS is a full-featured DDS Communications Middleware implementation that comes in a surprisingly small package, perfect for Android devices. It is easy to use, has small runtime requirements, is interoperable, and supports multiple development languages and environments. The CoreDX DDS source code is clean, easy to read, easy to build, easy to port, and easy to modify. Download a free evaluation copy at www.twinoakscomputing.com/coredx/download. Twin Oaks Computing Twin Oaks Computing, Inc. is a company dedicated to developing and delivering quality software solutions. Our staff has extensive experience developing and supporting robust communication architectures. We leverage this world-class technical experience to provide innovative and useful communication software systems. We build the software that collects, manages, and distributes information in a wide range of industries. Our software is in use around the world supporting critical missions. Equally important, our clients are amazed and totally satisfied with our super responsive customer service. One of our early customers in China states, “Twin Oaks Computing [provided] great porting work during very short period of time (each porting for about 2-3 weeks). This made me really appreciate the portability framework of CoreDX DDS.” - Mr. 
Huang.

More recently, we received this comment from a customer in the United States: “There is nothing I don’t like about working with Twin Oaks Computing. In particular, working with Nina is a singular pleasure in today’s world of technical support - she is very responsive and helpful.” - Dr. Michael Mezzino

About Twin Oaks Computing

With corporate headquarters located in Castle Rock, Colorado, USA, Twin Oaks Computing is a company dedicated to developing and delivering quality software solutions. We leverage our technical experience and abilities to provide innovative and useful services in the domain of data communications. Founded in 2005, Twin Oaks Computing, Inc. delivered the first version of CoreDX DDS in 2008. The next two years saw deliveries to over 100 customers around the world. We continue

Contact Twin Oaks Computing, Inc.
(720) 733-7906
(855) 671-8754 (toll free)
+33 (0)9 62 23 72 20
755 Maleta Lane, Suite 203
Castle Rock, CO 80108
www.twinoakscomputing.com