FORTRAN LANGUAGE

CLASSES OF DATA

Computer programs, regardless of the language in which they are written, are designed to manipulate data of some kind. The terms CONSTANT and VARIABLE are used in FORTRAN in almost the same sense as in mathematics.

CONSTANTS

A constant is a quantity whose value does not change during program execution. It may be of numeric, character, or logical type.

INTEGER CONSTANTS – An integer constant is literally an integer, i.e., a whole number without a decimal point. It may be zero or any positive or negative value.

RULES FOR FORMING INTEGER CONSTANTS
1. No decimal point is allowed.
2. No fractional values are allowed.
3. Integers are used as counters, exponents and identification numbers.
4. Results of integer arithmetic are truncated to an integer (the fractional part is discarded).
5. In general, the maximum magnitude is $2^{31} - 1$, a 10-digit number (this depends on the computer; some allow only 9 digits).
6. Remainders resulting from division are lost (truncated):
   - 14/3 value is 4
   - 5/6 value is 0
   - 19/5 value is 3
7. No commas may be included in a string of digits.
8. Spaces within a constant are allowed, but their use is discouraged as it is error prone.
9. A minus sign must precede a negative constant; a plus sign is optional, and an unsigned constant is considered positive.

Examples: 25, 0, -7, +15274

But the following are not valid integer constants:
- 18.0 Contains a decimal point
- -.284 Contains a decimal point
- 10,235 Contains a comma
- --7 Contains two signs instead of a single one
- 12345678995 Too large

**REAL CONSTANT** – Also called a floating-point constant, this is a constant written with a decimal point, which may have a fractional part.

Examples: 18.3, -163.0, 42.0, 4.0125 (all valid)

The following are invalid:
- 1,465.3 Contains a comma
- -56 Contains no decimal point

**RULES FOR FORMING REAL CONSTANTS**
1. A decimal point must be included.
2. A real value should not be used where an integer is required, e.g. as an exponent in integer arithmetic.
3. Remainders resulting from division are not lost:
- 4.0/3.0 value is 1.3333333
- 4.0/5.0 value is 0.8000000
- 17.0/5.0 value is 3.4000000
- 0.5E2/0.2E1 value is 25.000000

Other types of constants are double precision and complex, but they are not treated here.

**VARIABLES**

A variable name, or simply variable, is a name used to identify data stored in a memory location whose contents may change during program execution.

**Integer Variable Names:** A variable beginning with any of the letters I, J, K, L, M, N is assumed to be an INTEGER variable.

**Real Variable Names:** A variable beginning with any letter except I, J, K, L, M or N is assumed to be a REAL variable.

RULES FOR FORMING VARIABLE NAMES
1. The first character must be a letter.
2. Subsequent characters may be any combination of letters and digits (other characters cannot be used).
3. Variable names must not exceed six characters (for most computers).
4. Integer variable names must begin with the letters I to N only.
5. Real variable names must begin with any other letter (A–H or O–Z).
6. Blank spaces may be inserted to improve readability and do not count as one of the six allowable characters.
7. The same name must not be used for more than one variable in a given program.
8. Before a variable name can be used in a computation, it must be assigned a numerical value by an assignment statement, a READ statement, or a DATA statement.
9. Terms that are part of the FORTRAN vocabulary, e.g. READ, WRITE, REAL, IF (reserved words), should not be used as variable names.

<table>
<thead>
<tr>
<th>Integer variable names</th>
<th>Real variable names</th>
</tr>
</thead>
<tbody>
<tr>
<td>I, J, K, L, M, N</td>
<td>A to H, O to Z</td>
</tr>
<tr>
<td>NUMBER NUM NO</td>
<td>RADIUS RAD R</td>
</tr>
<tr>
<td>MAN1 MOL M</td>
<td>AREA ARC A</td>
</tr>
<tr>
<td>ITEMS JOHN J</td>
<td>CIRCUM CIR C</td>
</tr>
<tr>
<td>NUM1 LAGOS KILO2</td>
<td>TIM T1 T2</td>
</tr>
</tbody>
</table>

The following are invalid variable names:
- KONTAGORA Exceeds 6 characters
- READ FORTRAN reserved word
- KILO–1 Contains – (hyphen)
- TIME.T Contains a special character (.) that is neither a letter nor a digit

OPERATIONS

Numeric data can be manipulated using arithmetic operations, character data using concatenation and substring operations, and logical data using logical operations. When referring to a combination of variables and constants together with operation symbols, we use the term expression.

**Arithmetic Operations**

The following six arithmetic operations are available in FORTRAN:
1. Addition
2. Subtraction
3. Multiplication
4. Division
5. Exponentiation
6. Negation

The first five are binary operations in the sense that two operands are required, while the last is a unary operation because only one operand is required. E.g., to indicate the negation of a variable X, write -X or (-X).

When two constants or variables of the same type are combined using one of the four arithmetic operations (+, -, *, /), the result is of the same type as the operands. For example, the sum, difference and product of the integers 8 and 6 are the integers 14, 2 and 48 respectively. In integer division, however, 15/4 yields 3 instead of 3.75, 5/6 yields 0, and -5/2 yields -2 (in each case only the integral part of the quotient). The fractional part of the quotient is discarded simply because the result must be an integer value.

It is also possible to combine an integer quantity with a real quantity using the four arithmetic operations (+, -, *, /). Whenever one operand is an integer and the other is real, the integer is automatically converted to its real equivalent, and the result is of real type. Thus,

\[ 5. + 4 = 5. + 4. = 9. \]
\[ 5. * 3 = 5. * 3. = 15. \]
\[ 5./4 = 5./4. = 1.25 \]

Such operations involving different types of numeric operands are called mixed-mode operations. The manner in which exponentiation is performed depends upon the type of the exponent. E.g.
2**3 = 2*2*2 = 8

1.5**2 = 1.5*1.5 = 2.25

(-3.0)**2 = (-3.0)*(-3.0) = 9.0

If the exponent is an integer, exponentiation is carried out by repeated multiplication, as above. But if the exponent is a real number, the exponentiation of any quantity, integer or real, is computed using logarithms; since logarithms of negative values are not defined, a negative number raised to a real power is undefined. Thus, even though (-3.0)**2 yields the value 9.0,

\[ (-3.0)**2.0 = \exp(2.0 \log(-3.0)) \]

is undefined, since log(-3.0) is not defined.

**PRECEDENCE** – This specifies the order in which the arithmetic operations are carried out.

ORDER
1st Parentheses
2nd Functions
3rd Exponentiation
4th Multiplication/Division (whichever is encountered first)
5th Addition/Subtraction (whichever is encountered first)

Unary plus and minus are on the same level as binary addition and subtraction; that is, -2.2**2 means -(2.2**2), which yields -4.84.

Examples.

Mathematical expression: \( \frac{ab^3 + d - e}{\cos c} \)

FORTRAN expression: (A*B**3 + D - E)/COS(C)

Assuming A = 2.0, B = 3.0, C = 0.0, D = 4.0, E = 5.0, the expression is evaluated as

(2.0*3.0**3 + 4.0 - 5.0)/COS(0.0) = (54.0 + 4.0 - 5.0)/1.0 = 53.0

Mathematical expression: \( \frac{a + b}{(c + d)\sin^2(\pi/b)} \)

FORTRAN expression: (A + B)/((C + D)*SIN(PI/B)**2)

Assuming A = 4.0, B = 6.0, C = 2.0, D = 3.0, PI = 3.141593, the expression is evaluated as

(4.0 + 6.0)/((2.0 + 3.0)*SIN(3.141593/6.0)**2) = 10.0/(5.0*0.25) = 8.0

Mathematical expressions: (a) \( x^{2^3} \) (b) \( (x^2)^3 \)

FORTRAN expressions: (a) X**2**3 (b) (X**2)**3

Exponentiation is the one operation that associates from right to left, so X**2**3 means X**(2**3) = X**8, whereas (X**2)**3 = X**6. With X = 5.0, (a) yields 390625.0 and (b) yields 15625.0.

CONCATENATION

This is a binary operation.
It is denoted by the symbol // (double slash) and can be used to combine two or more character strings and/or character variables, e.g.

'UN' // 'AAB'

produces the character string UNAAB.

SUBSTRING

This is a unary operation requiring only one operand. It is performed on a character string to extract a sequence of consecutive characters from the string. Such a sequence is called a substring of the given string. The general form is

STRING(i:j)

where i and j are positive integer constants, variables, or expressions such that i ≤ j. The default values of i and j are 1 and the last position in the character string, respectively.

Examples

CHARACTER * 7 MODEL (here the 7 indicates a length of 7 characters)

If MODEL is set to 'IBM-360', then

MODEL // ' COM' // 'PUTER'

yields the character string 'IBM-360 COMPUTER'.

LOGICAL OPERATIONS

Logical variables or constants may be combined using the three basic logical operations .AND., .OR., and .NOT.

Examples:

A .LT. B .AND. C .GT. D

i.e., is A less than B and is C greater than D? The .AND. requires that both conditions be satisfied for the expression to be true.

The .OR. operator is used as follows:

E .GE. F .OR. G .LE. (H - X)

i.e., is E greater than or equal to F, or is G less than or equal to the value of H minus X? The .OR. means that when either condition is true, the expression is true.

An example of an INCORRECT logical expression is A .AND. B .LT. C, because the logical operations .AND. and .OR. relate logical expressions, not individual variable names.

Logical Expressions

<table>
<thead>
<tr>
<th>Logical Expression</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>A .LT. B</td>
<td>Is A less than B?</td>
</tr>
<tr>
<td>C .EQ. (D/E)</td>
<td>Is C equal to the quotient of D divided by E?</td>
</tr>
<tr>
<td>X .GT. (R + S)</td>
<td>Is X greater than the sum of R and S?</td>
</tr>
</tbody>
</table>

8.3 FUNCTIONS

FORTRAN provides a number of program modules, or built-in functions, that perform common mathematical functions. These built-in functions are termed INTRINSIC FUNCTIONS.
The use of an intrinsic function is specified by writing the function name followed by the expression to be operated upon (the argument) inside a set of parentheses. E.g., the FORTRAN expression for

\( x = \sqrt{y} \) is X = SQRT(Y)
\( x = |y - a| \) is X = ABS(Y - A)
\( x = e^{y+a} \) is X = EXP(Y + A)
\( x = \max(a, b, c) \) is X = AMAX1(A, B, C)

**FORTRAN STATEMENTS**

**ASSIGNMENT STATEMENT**

This is used to assign values to variables and has the form

```
variable = expression
```

where expression may be a constant, another variable to which a value has previously been assigned, or a formula which the computer can evaluate. The 'equals' sign '=' in the assignment statement is not interpreted in the same sense as in mathematics but must be read as "is assigned". Thus A = B is not the same as B = A, since the first assigns the value of B to A (leaving B unchanged), while the second assigns the value of A to B. These can be interpreted as A ← B and B ← A respectively.

Consider the assignment statement

```
SOLA = SOLA + BUNMI
```

This instructs the computer to add to the value of the variable SOLA the value of the variable BUNMI and assign the result as the new value of SOLA. If SOLA and BUNMI contain

<table>
<thead>
<tr>
<th>SOLA</th>
<th>BUNMI</th>
</tr>
</thead>
<tbody>
<tr>
<td>8.0</td>
<td>16.0</td>
</tr>
</tbody>
</table>

then

SOLA = SOLA + BUNMI
SOLA = 8.0 + 16.0
SOLA = 24.0

<table>
<thead>
<tr>
<th>SOLA</th>
</tr>
</thead>
<tbody>
<tr>
<td>24.0</td>
</tr>
</tbody>
</table>

The former content of SOLA (i.e. 8.0) is destroyed after the assignment statement, giving way to the new value 24.0. The variable to be assigned a value must appear on the left of the equals sign, and a legal expression must appear on the right. The following are invalid:

18 = M The variable M is on the right-hand side instead of the left
X + 4.3 = 3.14 A numeric expression must not appear to the left of the equals sign
STRING = 4118 The number 4118 is an illegal expression for a character variable
A = B = 7 B = 7 is an illegal expression
C = '3' * '9' '3' * '9' is an illegal expression
STRING = 10 A numeric value is assigned to a character variable
M = '10' A character constant '10' is assigned to a numeric variable

ASSIGNMENT TO NUMERIC VARIABLES

When an assignment statement is used to assign a value to a numeric variable, care must be taken over the type of the variable and the type of the value assigned. If an integer-valued expression is assigned to a real variable, the value is converted to a real constant and then assigned to the variable. E.g., if N = 10, then

Y = 3
X = (N + 5)/5.0

assign the real constant 3.0 to both Y and X.

If a real expression is assigned to an integer variable, the fractional part is truncated and the integer part is assigned to the variable. E.g., if X = 7.41 and I, J, K are all integer variables, then

I = 3.14159
J = X/3.
K = 1./3. + 1./3.

assign the integer constants 3, 2 and 0 to the variables I, J and K respectively.

ASSIGNMENT TO CHARACTER VARIABLES

Here the length associated with the variable must be taken into account. E.g.

CHARACTER*6 STRNGA, STRNGB*14, TRUN, PAD
STRNGA = 'URMILA'
STRNGB = STRNGA // ' AGRAWAL'

Here six character positions are associated with STRNGA, TRUN and PAD, while STRNGB is assigned 14 characters. The values 'URMILA' and 'URMILA AGRAWAL' are assigned to STRNGA and STRNGB respectively. When the declared length of a variable is greater than the length of the value being assigned, the value is padded on the right with blanks. Conversely, if the declared length of the variable is less than the length of the value being assigned, the value is truncated to the size of the variable and the leftmost characters are assigned.
Thus the statement

PAD = 'JOY'

will assign the value 'JOYbbb' to the variable PAD, where b denotes a blank, and the statement

TRUN = 'COME-AGAIN'

assigns the value 'COME-A' to the variable TRUN.

INPUT AND OUTPUT STATEMENTS

FORTRAN provides two types of input/output (I/O) statements: formatted and list-directed (unformatted, or format-free). In the formatted case, the programmer must specify the format in which the data is input or output. In the list-directed case, certain predetermined formats that match the types of the items in the input/output list are automatically provided by the compiler.

LIST-DIRECTED INPUT STATEMENT

It is of the form

READ *, V1, V2, V3, ..., Vn

where V1 refers to the first input data item, V2 to the second, and so on. The variable names must be separated by commas. The space between READ and * is optional, and additional spaces may be inserted between a comma and a variable name.

Examples:

READ *, ADE, OLU, JOHN
READ *, KOLA
READ *, A, B, C

or

READ *, A
READ *, B
READ *, C

Another way of coding a READ statement (without a FORMAT) is

READ (*, *) V1, V2, ..., Vn

The first asterisk indicates that the data are to be keyed in manually via the keyboard. The second asterisk indicates that the data supplied will be unstructured (without a FORMAT specification). V1, V2, ..., Vn represents the list of variable names to be read. E.g.

READ (*, *) ADE, OLU
READ (*, *) OJO

LIST-DIRECTED OUTPUT STATEMENT

It is of the form

PRINT *, V1, V2, ..., Vn

The same rules that apply to READ are also applicable to PRINT. List-directed output can also be written as

WRITE (*, *) V1, V2, ..., Vn

which is similar to READ (*, *) V1, V2, ..., Vn. The first asterisk indicates that the results are to be displayed on the screen. The second asterisk indicates that the compiler will structure the output; the programmer has no control over how the output is structured.
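As an illustrative sketch (the program name and variable names are hypothetical, assuming a FORTRAN 77 compiler), the list-directed READ and PRINT statements above can be combined into a complete program:

```fortran
C     HYPOTHETICAL EXAMPLE: LIST-DIRECTED INPUT AND OUTPUT
C     READS TWO REAL VALUES FROM THE KEYBOARD AND PRINTS THEIR SUM
      PROGRAM IOSUM
      REAL A, B, TOTAL
C     FIRST * MEANS KEYBOARD INPUT, SECOND * MEANS NO FORMAT
      READ (*, *) A, B
      TOTAL = A + B
C     PRINT *, IS EQUIVALENT HERE TO WRITE (*, *)
      PRINT *, 'THE SUM IS', TOTAL
      STOP
      END
```

Note that the compiler, not the programmer, chooses the column layout of the printed values.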
The first output column is reserved for a vertical (carriage) control character, which is always a blank with list-directed output. Integer numbers are right-justified in a 13-column field width, i.e. the number is placed as far to the right in the field as possible, with any blanks in the field shown to the left of the number. The 13-column field allows up to 10 digits, the algebraic sign (displayed only when negative), and at least two leading blanks.

Examples:

WRITE (*,*) ADE, OLU
WRITE (*,*) OJO

STOP AND END

STOP: The STOP statement is used to specify that program execution is to be brought to a halt. The statement may take the form

STOP

or

STOP constant

where constant is an integer constant with five or fewer digits, or a character constant. The use of this statement is optional in FORTRAN 77.

END: Every FORTRAN program and subprogram must conclude with an END statement, which indicates to the compiler that the end of the program unit has been reached. This statement is of the form

END

and must be the last statement of the program unit.

There are three major differences between these two keywords (STOP and END):
1. STOP is an instruction to the computer to stop the program, whereas END is an instruction to the compiler that there are no more statements in the program unit.
2. The STOP statement may be used anywhere in a program unit to stop execution at run time. The END statement is used only at the physical end of a program unit.
3. The STOP statement may include a reference number, whereas the END statement contains nothing but the word END.

RELATIONAL EXPRESSIONS

A relational expression consists of two arithmetic expressions, or character strings, connected by a relational operator that defines the nature of the condition, or relation, to be tested.

Relational Operators in FORTRAN

<table>
<thead>
<tr>
<th>Relational Operator</th>
<th>Application in FORTRAN</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>.LT.</td>
<td>A .LT. B</td>
<td>A is less than B</td>
</tr>
<tr>
<td>.LE.</td>
<td>A .LE. B</td>
<td>A is less than or equal to B</td>
</tr>
<tr>
<td>.EQ.</td>
<td>A .EQ. B</td>
<td>A is equal to B</td>
</tr>
<tr>
<td>.NE.</td>
<td>A .NE. B</td>
<td>A is not equal to B</td>
</tr>
<tr>
<td>.GT.</td>
<td>A .GT. B</td>
<td>A is greater than B</td>
</tr>
<tr>
<td>.GE.</td>
<td>A .GE. B</td>
<td>A is greater than or equal to B</td>
</tr>
</tbody>
</table>

The value of a relational expression is one of the logical values TRUE or FALSE.

NUMBER COMPARISONS

The operators can be used to compare numerical values; numerical order is used when two numbers are compared. For example, if X = 20.0, Y = 4.0 and M = 5.0:

1. X + 5 .GT. M*Y, i.e. 25.0 .GT. 20.0 (the value is true)
2. X/Y .EQ. M, i.e. 20.0/4.0 = 5.0, so 5.0 .EQ. 5.0 (the value is true)
3. 3*(X + Y)/4 .LE. 2*X + 5*Y - 42.01, i.e. 18.0 .LE. 17.99 (the value is false)

CHARACTER COMPARISONS

The ASCII (American Standard Code for Information Interchange) and EBCDIC (Extended Binary Coded Decimal Interchange Code) codes define the collating sequences for the characters. Thus, the relational expressions

'B' .GT. 'A'
'X' .LT. 'Y'

are both true, since 'B' follows 'A' and 'X' precedes 'Y' in every collating sequence. However,

'A' .GT. '1'
'*' .LT. 'C'

depend on the collating sequence used on a particular computer: in ASCII both are true (the digits and * precede the letters), while in EBCDIC the first is false, because the digits follow the letters.

'COMPUTERS' .GT. 'COMPUTER' is true, since the shorter string is padded with blanks for the comparison and a blank character (b) precedes all letters in every collating sequence.

'SHOLA' .EQ. 'SHOLA' is true.

GOTO STATEMENT (Unconditional)

Unconditional Transfer: The unconditional transfer of control is accomplished by writing the statement

GOTO n

where n is a statement number. This tells the computer to go, unconditionally, to the part of the program beginning with the statement labeled n.
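A minimal sketch of the unconditional GOTO (the label and messages are hypothetical):

```fortran
C     HYPOTHETICAL SKETCH: UNCONDITIONAL TRANSFER WITH GOTO
C     CONTROL JUMPS OVER THE FIRST PRINT TO THE STATEMENT LABELED 30
      PROGRAM JUMP
      GOTO 30
      PRINT *, 'THIS LINE IS NEVER EXECUTED'
   30 PRINT *, 'EXECUTION RESUMES HERE'
      STOP
      END
```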
Obviously, the statement labeled n must be executable.

The Computed GOTO Statement

The computed GOTO statement is a conditional transfer statement that transfers control to one of several executable statements, depending on the value of an integer numerical indicator. The general form is:

GOTO (n₁, n₂, n₃, ..., nₘ), intexp

where n₁, n₂, n₃, ..., nₘ are the first, second, third, etc., statement labels in a list of statement labels, and intexp is an integer variable or integer arithmetic expression (the numerical indicator) whose value determines to which of these labeled statements (first, second, third, etc.) control is transferred. The comma after the closing parenthesis is optional and may be omitted. If the value of intexp is less than one (zero or a negative integer) or greater than the number of statement labels in the list, the computed GOTO statement is ignored and execution continues with the next executable statement following it.

Example:

GOTO (10, 25, 50, 35), K - J

Transfer is made to statement 10 if K - J = 1; to statement 25 if K - J = 2; to statement 50 if K - J = 3; and to statement 35 if K - J = 4. Since there are four statement labels, the value of K - J must equal 1 through 4, inclusive, if the computed GOTO statement is to be executed, and values for K and J must have been assigned earlier in the program.

**IF Statements**

There are three forms of IF statement: the logical IF, the block IF, and the arithmetic IF. The block IF is preferred, except for simple selective structures, in which case the logical IF is used. The arithmetic IF is error prone and hence its use is discouraged.

**Logical IF Statement**

The logical IF statement makes possible the conditional execution of a single statement, depending upon whether a given logical expression is true.
It has the form:

```fortran
IF (logical expression) statement
```

where statement is any executable FORTRAN statement except another IF statement, an END statement, or a DO statement. When a logical IF statement is executed, the value of the logical expression is determined. If it is true, the designated statement is executed, and then (unless that statement transfers control elsewhere) execution continues with the statement following the logical IF. If the expression is false, the designated statement is not executed, and the next statement to be executed is the one following the logical IF statement.

Examples:

```fortran
IF (TOKS .LE. 105) PRINT *, TOKS
```

TOKS is printed only if it is \( \leq 105 \).

IF (SALARY .LT. 3000.0 .AND. STATUS .GT. 2.0) INC = INC + 1

Here, 1 is added to INC only if SALARY < 3000 and STATUS > 2.

IF (SCORE .GT. 90.0 .OR. GRADE .GE. 4.1) GOTO 99

Here, control is transferred to the statement labeled 99 if SCORE > 90 or GRADE ≥ 4.1.

BLOCK IF STATEMENT (IF, THEN, ELSE AND END IF)

The block IF statement allows the programmer to control the logic flow while minimizing the number of transfer statements. The general form is:

IF (logical expression) THEN
    statement set 1
ELSE
    statement set 2
END IF

Basic block IF structure. In this structure, the logical expression in the IF ... THEN statement is identical in form to the one used in the logical IF statement. Statement set 1 and statement set 2 normally consist of one or more statements to be executed. If the logical expression is true, control is transferred to the first statement of statement set 1, and the statements between the IF ... THEN statement and the ELSE statement are executed; after their completion, control is transferred to the statement immediately following the END IF statement. If the logical expression is false, the statements in statement set 1 are ignored and control is transferred to the first statement of statement set 2.
When execution of the statements within statement set 2 is complete, control is transferred to the statement immediately following the END IF statement.

Example: To calculate the square root of the difference between two numbers:

Solution

      N = 0
   10 READ (5, *) A, B
      N = N + 1
      IF (A - B .GE. 0.0) THEN
          ROOT = (A - B) ** 0.5
          WRITE (6,*) 'THE REAL SQUARE ROOT OF A - B IS', ROOT
      ELSE
          ROOT = (B - A) ** 0.5
          WRITE (6,*) 'THE IMAGINARY SQUARE ROOT OF A - B IS', ROOT
      END IF
      IF (N .NE. 5) GOTO 10
      STOP
      END

8.5 REPETITIVE STRUCTURES

This is also known as an iterative structure or program loop. In a repetitive structure, a set of program statements appears only once in the program but is executed repeatedly, which results in a substantial reduction in the number of statements. A repetitive structure consists of an entry point (including initialization of certain variables), a repetition or loop body, and an exit point. The number of repetitions in a structure can be condition-controlled or counter-controlled.

IF LOOP

The simplest of the repetitive structures is the IF loop, in which the number of repetitions can be condition-controlled or counter-controlled. It involves the use of the IF and GOTO statements. Example:

      N = 0
  100 READ *, A, B
      SUM = A + B
      PRINT *, A, B, SUM
      N = N + 1
      IF (N .LT. 10) GOTO 100
      STOP
      END

In the above program, N is used as a counter: it counts the number of iterations and terminates the loop after 10 of them.

**DO LOOP**

The explicit DO statement is an executable statement that causes a portion of the program to be repeated a given number of times, a procedure called looping. The statements repeatedly executed during the looping procedure are referred to as the DO loop. The DO loop is initiated and controlled by the DO statement and terminated by an executable statement designated by a statement label in the DO statement.
The general form is:

DO i j = k, m, n

where
- i represents a statement label identifying the terminal statement (always an integer constant)
- j represents an integer variable name called the index
- k and m represent, respectively, the initial and limiting integer values to be assigned to the index j
- n represents the integer increment of the index (other than zero)

The index j is always an integer variable name. The initial and limiting values k and m may be integer constants, integer variable names, or integer arithmetic expressions, and may evaluate positive, negative, or zero. The increment n may be an integer constant, an integer variable name, or an integer arithmetic expression, evaluating positive or negative, but not zero.

It is essential that a programmer know how many times a loop will be repeated. The number of repetitions is the difference between the limiting value m and the initial value k, divided by the increment n and truncated to integer form, plus one:

\[ NR = \frac{m - k}{n} + 1 \]

where
- NR = number of repetitions (or iterations) of the DO loop
- k = initial value specified in the DO statement
- m = limiting value specified in the DO statement
- n = increment specified in the DO statement

Example:

DO 50 J = 1, 7, 2

Here the statement label 50 defines the range of the loop, and the loop body is executed NR = (7 - 1)/2 + 1 = 4 times, with J taking the values 1, 3, 5 and 7.
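A minimal sketch of a complete DO loop, assuming a FORTRAN 77 compiler (the program and variable names are hypothetical):

```fortran
C     HYPOTHETICAL SKETCH: SUM THE ODD INTEGERS 1, 3, 5, 7 WITH A DO LOOP
C     NR = (7 - 1)/2 + 1 = 4 REPETITIONS, MATCHING THE FORMULA ABOVE
      PROGRAM ODDSUM
      INTEGER J, KSUM
      KSUM = 0
      DO 50 J = 1, 7, 2
          KSUM = KSUM + J
   50 CONTINUE
C     KSUM IS NOW 1 + 3 + 5 + 7 = 16
      PRINT *, 'SUM =', KSUM
      STOP
      END
```

Using a CONTINUE statement as the labeled terminal statement is a common convention that keeps the end of the loop's range easy to see.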
Instrumented Sensor System Architecture Mohamed Dekhil and Thomas C. Henderson Department of Computer Science University of Utah Salt Lake City, Utah 84112, USA. Abstract Sensor systems are becoming ubiquitous throughout society, yet their design, construction and operation are still more of an art than a science. In this paper, we define, develop, and apply a formal semantics for sensor systems that provides a theoretical framework for an integrated software architecture for modeling sensor-based control systems. Our goal is to develop a design framework which allows the user to model, analyze and experiment with different versions of a sensor system. This includes the ability to build and modify multisensor systems and to monitor and debug both the output of the system and the effect of any modification in terms of robustness, efficiency, and error measures. The notion of Instrumented Logical Sensor Systems (ILSS) that are derived from this modeling and design methodology is introduced. The instrumented sensor approach is based on a sensori-computational model which defines the components of the sensor system in terms of their functionality, accuracy, robustness and efficiency. This approach provides a uniform specification language to define sensor systems as a composition of smaller, predefined components. From a software engineering standpoint, this addresses the issues of modularity, reusability, and reliability for building complex systems. An example is given which compares vision and sonar techniques for the recovery of wall pose. This work was supported in part by NSF grant CDA 9024721 and a gift from Hewlett Packard Corporation. 1 Introduction In any closed-loop control system, sensors are used to provide the feedback information that represents the current status of the system and the environmental uncertainties.
Building a sensor system for a certain application is a process that includes the analysis of the system requirements, a model of the environment, the determination of system behavior under different conditions, and the selection of suitable sensors. The next step in building the sensor system is to assemble the hardware components and to develop the necessary software modules for data fusion and interpretation. Finally, the system is tested and the performance is analyzed. Once the system is built, it is difficult to monitor the different components of the system for the purpose of testing, debugging and analysis. It is also hard to evaluate the system in terms of time complexity, space complexity, robustness, and efficiency, since this requires quantitative measures for each of these criteria. In addition, designing and implementing real-time systems is becoming increasingly complex because of many added features such as graphical user interfaces (GUIs), visualization capabilities and the use of many sensors of different types. Therefore, many software engineering issues such as reusability and the use of COTS (Commercial Off-The-Shelf) components (Profeta, 1996), real-time issues (Hu et al., 1995; Schneider et al., 1994; Simon et al., 1993), sensor selection (Giraud & Jouwencel, 1994), reliability (Kapur et al., 1996; Kim & Subbaraman, 1997; Stewart & Khosla, 1997), and embedded testing (Weller et al., 1990) are now getting more attention from system developers. In a previous paper, we proposed to use formal semantics to define performance characteristics of sensor systems (Dekhil & Henderson, 1996a). In this paper, we address these and other problems related to sensor system modeling and evaluation. We start by presenting a theoretical framework for modeling and designing sensor systems based on a formal semantics in terms of a virtual sensing machine.
This framework defines an explicit tie between the specification, robustness and efficiency of the sensor system by defining several quantitative measures that characterize certain aspects of the system’s behavior. Figure 1 illustrates our proposed approach which provides static analysis (e.g., time/space complexity, error analysis) and dynamic handles that assist in monitoring and debugging the system. 1.1 Sensor Modeling Each sensor type has different characteristics and a different functional description. Therefore it is desirable to find a general model for these different types that allows modeling sensor systems independently of the physical sensors used, and enables studying the performance and robustness of such systems. There have been many attempts to provide “the” general model along with its mathematical basis and description. Some of these modeling techniques concern error analysis and fault tolerance of multisensor systems (Brooks & Iyengar, 1993; Dekhil & Sobh, 1997; Iyengar & Prasad, 1994; Nadig et al., 1993; Prasad et al., 1991; Prasad et al., 1994). Other techniques are model-based and require a priori knowledge of the scanned object and its environment (Durrant-Whyte, 1988; Groen et al., 1993; Joshi & Sanderson, 1994). These techniques help fit data to a model, but do not provide the means to compare alternatives. Task-directed sensing is another approach to devise sensing strategies (Briggs & Donald, 1994; Hager & Mintz, 1991; Hager & Mintz, 1989), but again, it does not provide measures to evaluate the sensor system in terms of robustness and efficiency. Another approach to modeling sensor systems is to define sensori-computational systems associated with each sensor to allow design, comparison, transformation, and reduction of any sensory system (Donald, 1995). In this approach the concept of information invariants is used to define some measure of information complexity. Figure 1: The proposed modeling approach.
This approach provides a very strong computational theory which allows comparing sensor systems, reducing one sensor system to another, and measuring the information complexity required to perform a certain task. However, as stated by Donald, the measures for information complexity are fundamentally different from performance measures. Also, this approach does not permit one to judge which system is "simpler," "better," or "cheaper." To that end, we introduce the notion of an *Instrumented Logical Sensor System* (ILSS) which represents our methodology for incorporating design tools and allows static and dynamic performance analysis, on-line monitoring, and embedded testing. Figure 2 shows the components of our framework. First (on the left), an Instrumented Logical Sensor Specification is defined, as well as $\mathcal{F}$, a set of functions which measure system properties of interest. This specification is derived from a mathematical model, simulation results, or from descriptions of system components. Analysis of some aspects of the ILSS is possible (e.g., worst-case complexity of algorithms). Next (the center of the figure), an implementation of the system is created; this can be done by hand or automatically generated in a compile step (note that the original Logical Sensor Specifications (Henderson & Shilcrat, 1984) could be compiled into Unix shell script or Function Equation Language (FEL), an applicative language). Either way, the monitoring, embedded testing or taps are incorporated into the system implementation. Finally (the right hand side), validation is achieved by analyzing the system response and performance measures generated during system execution. In this way, there are some semantic constraints on the values monitored which relate the system output measures to the original question posed for the specification.
Currently, an ILSS library is under development as part of an interactive graphical programming environment called "CWave" used to design and execute real-time control systems (refer to “http://easy.cs.utah.edu/cwave/index.htm” for more information about the CWave project). At present we have a theoretical framework and validation strategy with a partial implementation within CWave. CWave is a graphical program specification language that has been created to design measurement systems and has been funded by HP. CWave has been applied to a broad range of robot systems (e.g., Lego robot warehouse demos) in our software engineering projects class here at Utah. Finally, CWave is a specification language and can be linked to simulation tools, or executed in an interpreted mode, or compiled for incorporation in embedded systems. Figure 2: The Instrumented Logical Sensor System Components. 2 Performance Semantics of Sensor Systems The use of sensors in safety critical applications, such as transportation and medicine, requires a high level of reliability. However, increased robustness and reliability of a multisensor system requires increased cost through redundant components and more sensor readings and computation. In contrast, increasing the efficiency of the system means fewer redundant components, fewer sensor readings and less computation. Performance analysis is crucial to making an informed tradeoff between design alternatives. Performance analysis consists of a static analysis of a specification of the system and its parameters as well as a dynamic analysis of the system’s run-time behavior. The static analysis can be based on some formal description of the syntax and semantics of the sensor system, while the dynamic analysis requires on-line monitoring of some quantitative measures during run-time. Our goal is to achieve strong performance analysis and provide information which allows the user to make informed choices concerning system tradeoffs.
This involves a sensor system model which permits quantitative measures of time and space complexity, error, robustness, and efficiency, and which facilitates analysis, debugging and on-line monitoring. Formal semantics of programming languages provides techniques to describe the meaning of a language based on precise mathematical principles. These formal techniques should provide the following: precise machine-independent concepts, unambiguous specification techniques, and a rigorous theory to support reliable reasoning (Gordon, 1979). The main types of formal semantics are: denotational semantics which concerns designing denotations for constructs, operational semantics which concerns the specification of an abstract machine together with the machine behavior when running the program, and axiomatic semantics which concerns axioms and rules of inference for reasoning about programs. Our view is that performance semantics should allow us to compute measures of interest on program structures. Denotational semantics is the closest to our view since, according to (Ashcroft, 1982), to specify the semantics of a language denotationally means to specify a group of functions which assigns mathematical objects to the program and to parts of programs (modules) in such a way that the semantics of a module depends only on the semantics of the submodules. Thus, given a set of programs, \( \mathcal{P} \), from a language, and an operating context, \( \mathcal{C} \), the semantics is a set of functions \[ \mathcal{F} = \{ f_i \} \] where \[ f_i : \mathcal{P} \times \mathcal{C} \rightarrow \mathbb{R} \] where \( \mathbb{R} \) is the measurement domain. The static semantics defines structural measures over the syntax of \( p \in \mathcal{P} \). This includes standard measures such as maximum depth of the program graph, branching measures, data structure properties, storage estimates and standard computational complexity measures. 
Note that these can be determined without reference to \( \mathcal{C} \) (i.e., \( f : \mathcal{P} \rightarrow \mathbb{R} \)). This can be extended to include functions of the operational context \( \mathcal{C} \), including sensor models, accuracy, precision, redundancy and replacement, as well as operating system effects, communication strategies and protocols, and processor properties. The dynamic semantics include validity measures and operational characteristics. Validity measures permit the comparison of behavior models to actual run-time performance (monitors), while operational characteristics are simply measures of run-time values (taps). The values of a tap or monitor are represented as a sequence \( X = (x_n : n \in \mathcal{N}) \), where \( x_n \) is the \( n^{th} \) value produced by the tap or monitor: \[ X : \mathcal{N} \rightarrow S \] where \( S \) is the structure produced by the tap or monitor. The selection of functions in \( \mathcal{F} \) depends directly on the user’s needs, and they are defined so as to answer specific questions. Standard questions include actual running times, space requirements, bottlenecks, etc., and a complex application can be investigated in a top down manner – the user may define new measurement functions on lower level modules once information is gained at a higher level. This forces the user to identify crucial parameters and to measure their impact. For example, a computer vision application may be data dependent, say on the number of segmented objects or their distribution in the image. Thus, the user is coerced into a better understanding of the significant value regimes of these parameters and may develop monitors to ensure that the application stays within a given range, or that it dynamically switches algorithms when a particular parameter value occurs (e.g., more than 1000 segmented objects occur in the image).
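To make the tap and monitor idea concrete, here is a minimal Python sketch. It is not from the paper: the `Tap`/`Monitor` class names are invented, and the 1000-segmented-objects threshold simply reuses the example mentioned in the text.

```python
from typing import Callable, List

class Tap:
    """Records the sequence of values x_1, x_2, ... seen on an output line."""
    def __init__(self):
        self.values: List[float] = []

    def record(self, x: float) -> float:
        self.values.append(x)
        return x  # pass the value through unchanged

class Monitor(Tap):
    """A tap that also checks each value against a validity predicate."""
    def __init__(self, valid: Callable[[float], bool]):
        super().__init__()
        self.valid = valid
        self.violations: List[float] = []

    def record(self, x: float) -> float:
        if not self.valid(x):
            self.violations.append(x)  # alert condition: out-of-range value
        return super().record(x)

# Example: flag frames with more than 1000 segmented objects.
object_counts = Monitor(valid=lambda n: n <= 1000)
for n in [120, 450, 1305, 980]:
    object_counts.record(n)

print(object_counts.values)      # full run-time sequence seen by the tap
print(object_counts.violations)  # values that violated the constraint: [1305]
```

The pass-through `record` lets such a tap be spliced onto an output line without changing the module's behavior, which matches the paper's description of taps as hooks on output lines.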
The main point is that the user can construct executable versions of the \( f_i \in \mathcal{F} \) to ensure the validity of the controller as it runs. Computational complexity provides insight for worst case analysis, and average case analysis can be performed when appropriate population distribution models are available. Here, however, we propose what might be termed *empirical case analysis*, which allows the user to gain insight into the system without requiring a detailed analytical model of the entire application and its context. Very few users exploit formal complexity analysis methods; we believe that empirical case analysis is a very useful tool. 2.1 Simple Example: Time Vs. Robustness Using Sonar Readings Suppose that we want to determine how many sonar readings to use to get a robust range estimate, but would like to trade off against the time taken to sample. This simple example demonstrates the motivation of the proposed approach and how it can be used to select between alternatives. In this example we have a “classical” tradeoff between speed (time to accomplish a certain task) and robustness (a combination of accuracy and repeatability). Assume that the sonar has been calibrated to eliminate any environmental effects (e.g., wall type, audio noise, etc.). The variables in this case are the accuracy of the physical sonar sensor and the number of readings taken at the same position. Assuming the time to take one reading is $t$, the error standard deviation is $\sigma$, and the probability of a bad reading is $Pr_b$, taking one reading yields minimum time and worst accuracy. By adding a filter (e.g., averaging) and taking multiple readings, accuracy increases but time also increases. Therefore, we need quantitative measures to decide how many readings are needed to achieve the required accuracy (measured in terms of the standard deviation of the error) within a time limit.
Using the formalism presented earlier, the semantics of this problem can be defined using the set of functions $\mathcal{F} = \{time, error, repeatability\}$. In the case of a single reading these functions can be written as: $\text{time}(\text{single}) = t$ $\text{error}(\text{single}) = \frac{\sigma}{\sqrt{1 - Pr_b}}$ $\text{repeatability}(\text{single}) = 1 - Pr_b$ Now, if we take the average of $n$ readings, the semantics can be written as: \[ \text{time}(\text{average}) = nt + \tau_n \\ \text{error}(\text{average}) = \frac{\sigma}{\sqrt{n (1 - Pr_b)}} \\ \text{repeatability}(\text{average}) = 1 - Pr_b^n \] where \(\tau_n\) is the time to calculate the average of \(n\) readings, and \(\tau_1 = 0\). In this simple example we were able to get estimates of the required measures using mathematical models. However, we did not consider changes in the environment and how they affect these measures. In this case, the set of functions \(\mathcal{F}\) is a set of mappings from the cross product of the program \(\mathcal{P}\) and the operating context \(\mathcal{C}\) to the measurement domain \(\mathbb{R}\), that is \[f_i : \mathcal{P} \times \mathcal{C} \to \mathbb{R}\] To solve this problem, we either have to model the environmental effects and include them in our model, or we may need to conduct simulations if a mathematical model is not possible. Simulation is a very useful tool for approximating reality; however, in some cases even simulation is not enough to capture all the variables in the model, and real experiments with statistical analysis may be required to get more accurate results. Thus, the formal functions can be operationalized as monitors or taps in the actual system. 3 Sensor System Specification The ILSS approach is based on Logical Sensor Systems (LSS) introduced by Henderson and Shilcrat (Henderson & Shilcrat, 1984). LSS is a methodology for specifying any sensor in a way that hides its physical nature.
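The semantic functions for the averaging filter can be evaluated numerically. The sketch below uses made-up sensor parameters (the 50 ms reading time, 2 cm standard deviation, 5% bad-reading probability, and the averaging cost `tau` are all hypothetical, chosen only to show the speed/robustness tradeoff):

```python
import math

def sonar_semantics(n, t, sigma, pr_b, tau):
    """Performance measures for averaging n sonar readings.

    t: time per reading; sigma: error std dev of one reading;
    pr_b: probability of a bad reading; tau(n): time to average n readings.
    The formulas follow the semantic functions given in the text.
    """
    return {
        "time": n * t + tau(n),
        "error": sigma / math.sqrt(n * (1 - pr_b)),
        "repeatability": 1 - pr_b ** n,
    }

# Hypothetical parameters: 50 ms per reading, sigma = 2 cm, 5% bad readings,
# and an assumed linear averaging cost with tau(1) = 0.
tau = lambda n: 0.0 if n == 1 else 0.001 * n
for n in (1, 4, 16):
    m = sonar_semantics(n, t=0.05, sigma=2.0, pr_b=0.05, tau=tau)
    print(n, round(m["time"], 3), round(m["error"], 3), round(m["repeatability"], 6))
```

Scanning `n` like this makes the tradeoff explicit: error and repeatability improve monotonically with `n`, while time grows, so the smallest `n` meeting the accuracy target within the time limit can be picked directly.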
The main goal behind LSS was to develop a coherent and efficient representation of the information provided by many sensors of different types. This representation provides a means for recovery from sensor failure and also facilitates reconfiguration of the sensor system when adding or replacing sensors (Henderson et al., 1985). We define the ILSS as an extension of the LSS; it is comprised of the following components (see Figure 3): 1. **ILS Name**: uniquely identifies a module. 2. **Characteristic Output Vector (COV)**: strongly typed output structure. We have one output vector ($COV_{out}$) and zero or more input vectors ($COV_{in}$). 3. **Commands**: input commands to the module ($Commands_{in}$) and output commands to other modules ($Commands_{out}$). 4. **Select Function**: selector which detects the failure of an alternate and switches to another alternate (if possible). 5. **Alternate Subnets**: alternative ways of producing the $COV_{out}$. These are implementations of one or more algorithms that carry out the main function of the module. 6. **Control Command Interpreter (CCI)**: interpreter of the commands to the module. 7. **Embedded Tests**: self-testing routines which increase robustness and facilitate debugging. 8. **Monitors**: modules that check the validity of the resulting COVs. 9. **Taps**: hooks on the output lines to view different COV values. These components identify the system behavior and provide mechanisms for on-line monitoring and debugging. In addition, they give handles for measuring the run-time performance of the system. Monitors are validity check stations that filter the output and alert the user to any undesired results. Each monitor is equipped with a set of rules (or constraints) that governs the behavior of the COV under different situations. Embedded testing is used for on-line checking and debugging purposes.
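The component list above can be sketched as a tiny Python skeleton. This is a hypothetical illustration, not the paper's implementation: only the select function and alternate subnets are shown behaving, the other fields are placeholders, and all names (including the wall-pose alternates) are invented.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

COV = List[float]  # Characteristic Output Vector, simplified to a list of floats

@dataclass
class ILSSModule:
    """Minimal sketch of an Instrumented Logical Sensor module."""
    name: str
    alternates: List[Callable[[COV], COV]]  # alternate subnets producing COV_out
    embedded_tests: List[Callable[[], bool]] = field(default_factory=list)  # placeholder

    def compute(self, cov_in: COV) -> Optional[COV]:
        # Select function: try each alternate; on failure, switch to the next.
        for subnet in self.alternates:
            try:
                return subnet(cov_in)
            except Exception:
                continue  # this alternate failed; fall through to the next one
        return None  # all alternates failed

def sonar_pose(cov):   # primary alternate; always fails here to show switching
    raise RuntimeError("sonar unavailable")

def vision_pose(cov):  # backup alternate (toy computation)
    return [sum(cov) / len(cov)]

wall_pose = ILSSModule("wall_pose", alternates=[sonar_pose, vision_pose])
print(wall_pose.compute([1.0, 2.0, 3.0]))  # sonar fails, vision answers: [2.0]
```

The point of the sketch is the failure-recovery behavior attributed to the select function: the module's output interface stays the same regardless of which alternate subnet produced the COV.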
Weller proposed a sensor processing model with the ability to detect measurement errors and to recover from these errors (Weller et al., 1990). This method is based on providing each system module with verification tests to verify certain characteristics in the measured data and to verify the internal and output data resulting from the sensor module algorithm. The recovery strategy is based on rules that are local to the different sensor modules. We use a similar approach in our framework, called local embedded testing, in which each module is equipped with a set of tests based on the semantic definition of that module. These tests generate input data to check different aspects of the module, then examine the output of the module using a set of constraints and rules defined by the semantics. These tests can also take input data from other modules if we want to check the operation of a group of modules. Figure 4 illustrates the idea of local embedded testing. Local embedded testing increases the robustness of the system and provides the user with possible locations to tap into when there is a problem with the system. 3.1 Construction Operators In our proposed framework, a sensor system is composed of several ILSS modules connected together in a certain structure. We define operations for composing ILSS modules, and then define the semantics of these operations in terms of the performance parameters. Some of these operations are (see Figure 5): • **Serial(ILSS1, ILSS2):** two logical modules are connected in series. Here $COV_3 = COV_2$. • **Select(ILSS1, ILSS2):** $COV_3$ is equal to either $COV_1$ or $COV_2$. • **Combine(ILSS1, ILSS2):** $COV_3$ is the concatenation of $COV_1$ and $COV_2$. For these simple constructs, the semantics is defined as a set of functions that propagate the required performance measures. Several techniques can be used for propagation: best case analysis, worst case analysis, average case analysis, etc.
Selecting among these depends on the application, hence it should be user defined. As an example, the time of the resulting logical system using worst case analysis can be calculated as follows: • $time(Serial(ILSS1, ILSS2)) = time(ILSS1) + time(ILSS2)$ • $time(Select(ILSS1, ILSS2)) = \max(time(ILSS1), time(ILSS2))$ • $time(Combine(ILSS1, ILSS2)) = \max(time(ILSS1), time(ILSS2))$ Hence, the semantic functions of the composite system are defined in terms of the semantic functions of the subcomponents. Functions that propagate other performance measures can be defined in the same way. For error propagation, we use a simple approach which does not require carrying a lot of information through the system. This approach is based on the uncertainty propagation described in (Faugeras, 1993; Holman & W. J. Gajda, 1978). Assume that we have a certain module with $n$ inputs $X = (x_1, x_2, \ldots, x_n)$ and $m$ outputs $Y = (y_1, y_2, \ldots, y_m)$ such that $Y = f(X)$, and assume that the error variance associated with the input vector is $\Lambda_X = (\Lambda_{x_1}, \Lambda_{x_2}, \ldots, \Lambda_{x_n})$ (see Figure 6). Then the error variance for the output vector is calculated using the equation: $$\Lambda_Y = \left( \frac{\partial Y}{\partial X} \right) \Lambda_X \left( \frac{\partial Y}{\partial X} \right)^T$$ where $\frac{\partial Y}{\partial X}$ is the partial derivative of $Y$ with respect to $X$ evaluated at the measured value of the input vector $X$. If all the elements in $X$ are independent variables, then this equation can be written as: $$\Lambda_{y_i} = \sum_{j=1}^{n} \left( \frac{\partial y_i}{\partial x_j} \right)^2 \Lambda_{x_j}, \; i = 1, 2, \ldots, m$$ Figure 5: Some operations used for propagating the performance measures. Figure 6: A simple approach for error propagation.
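The independent-input form of the propagation equation can be sketched numerically. The helper below estimates the partial derivatives with central finite differences; the function, input values and variances in the example are made up for illustration and are not from the paper.

```python
def propagate_variance(f, x, var_x, h=1e-6):
    """First-order error propagation for independent inputs:
    Var(y_i) = sum_j (dy_i/dx_j)^2 * Var(x_j),
    with the Jacobian estimated by central finite differences at x."""
    y0 = f(x)
    m, n = len(y0), len(x)
    var_y = [0.0] * m
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        yp, ym = f(xp), f(xm)
        for i in range(m):
            dydx = (yp[i] - ym[i]) / (2 * h)   # estimate of dy_i/dx_j
            var_y[i] += dydx ** 2 * var_x[j]
    return var_y

# Toy example: area = width * height with independent measurement variances.
area = lambda x: [x[0] * x[1]]
print(propagate_variance(area, x=[2.0, 3.0], var_x=[0.01, 0.04]))
# analytic check: 3^2 * 0.01 + 2^2 * 0.04 = 0.25
```

Because the Jacobian is estimated numerically, the same helper works for any module function `f`, which matches the paper's goal of propagating error through composed modules without carrying extra information.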
Our overall goal is to provide a tightly coupled mechanism to map high-level performance measures onto an appropriate set of monitors, tests and taps so as to provide the required information. 4 Implementation The ultimate goal of this project is to utilize the proposed theoretical framework in a usable modeling and prototyping environment with tools for analysis, debugging, and monitoring sensor systems with emphasis on robot control applications. Thus, we are developing an ILSS library within a visual programming system called CWave targeted toward the development of control systems for measurement devices and hardware simulations. CWave is developed by the Component Software
{"Source-Url": "http://www.cs.utah.edu/~tch/publications/pub184.pdf", "len_cl100k_base": 5029, "olmocr-version": "0.1.50", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 19431, "total-output-tokens": 5780, "length": "2e12", "weborganizer": {"__label__adult": 0.0004773139953613281, "__label__art_design": 0.0006504058837890625, "__label__crime_law": 0.0005459785461425781, "__label__education_jobs": 0.0008416175842285156, "__label__entertainment": 7.849931716918945e-05, "__label__fashion_beauty": 0.0002211332321166992, "__label__finance_business": 0.00031495094299316406, "__label__food_dining": 0.00058746337890625, "__label__games": 0.0006237030029296875, "__label__hardware": 0.0034732818603515625, "__label__health": 0.0010013580322265625, "__label__history": 0.00039124488830566406, "__label__home_hobbies": 0.00020873546600341797, "__label__industrial": 0.0014390945434570312, "__label__literature": 0.0003056526184082031, "__label__politics": 0.00035190582275390625, "__label__religion": 0.0005817413330078125, "__label__science_tech": 0.1658935546875, "__label__social_life": 9.97781753540039e-05, "__label__software": 0.00650787353515625, "__label__software_dev": 0.8134765625, "__label__sports_fitness": 0.000484466552734375, "__label__transportation": 0.0013837814331054688, "__label__travel": 0.0002942085266113281}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23665, 0.01581]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23665, 0.76905]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23665, 0.89421]], "google_gemma-3-12b-it_contains_pii": [[0, 1673, false], [1673, 3650, null], [3650, 5550, null], [5550, 5592, null], [5592, 7785, null], [7785, 8484, null], [8484, 10432, null], [10432, 12341, null], [12341, 14077, null], [14077, 15831, null], [15831, 17449, null], [17449, 18868, null], [18868, 19687, null], 
[19687, 20640, null], [20640, 22286, null], [22286, 22359, null], [22359, 23665, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1673, true], [1673, 3650, null], [3650, 5550, null], [5550, 5592, null], [5592, 7785, null], [7785, 8484, null], [8484, 10432, null], [10432, 12341, null], [12341, 14077, null], [14077, 15831, null], [15831, 17449, null], [17449, 18868, null], [18868, 19687, null], [19687, 20640, null], [20640, 22286, null], [22286, 22359, null], [22359, 23665, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23665, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23665, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23665, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23665, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23665, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23665, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23665, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23665, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23665, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23665, null]], "pdf_page_numbers": [[0, 1673, 1], [1673, 3650, 2], [3650, 5550, 3], [5550, 5592, 4], [5592, 7785, 5], [7785, 8484, 6], [8484, 10432, 7], [10432, 12341, 8], [12341, 14077, 9], [14077, 15831, 10], [15831, 17449, 11], [17449, 18868, 12], [18868, 19687, 13], [19687, 20640, 14], [20640, 22286, 15], [22286, 22359, 16], [22359, 23665, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23665, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
9e7120d5c860c24daf2721c43a7859f7798e21be
Automation of HCI Engineering processes: System Architecture and Knowledge Representation M. N. Sheriyev* and L. B. Atymtayeva Kazakh British Technical University, Almaty, Kazakhstan Received: 3 Mar. 2015, Revised: 26 Apr. 2015, Accepted: 28 Apr. 2015 Published online: 1 May 2015 Abstract: The aim of this paper is to introduce the methods, techniques and theories concerning web usability and the automation of human-computer interaction (HCI) processes for the intelligent development of web applications based on the main principles and rules of HCI Engineering. Starting from a description of the traditional ergonomic methods and models, we arrive at suggestions for the architecture and knowledge base ontology of HCI-Engineering Intelligent Systems that may help to create good, usable web interfaces. Keywords: human computer interaction, engineering knowledge, knowledge base, frame model ontology, Intelligent System. 1 Introduction The user interface is an important part of the field of Human-Computer Interaction (HCI), which has been regularly studied since the 1960s. According to expert recommendations, at least 10% of the total budget of a software development project should be allocated to the development of usable interfaces with a sufficient level of usability. According to statistics from the USA [1], the greatest average improvement in key business indicators belongs to the market of web applications, which occupies about 83% of the software development market. This fact gives us evidence of the significant economic efficiency of interactive software. However, not all software development projects are implemented using practical and effective interaction design methods. As a result, the benchmark value for the percentage of successfully completed tasks in web applications is no more than 81% [1], and for certain categories of users it is 1.5-2 times lower.
We can identify the problem of poor use of practical HCI knowledge, which is accompanied by significant time costs for developers in searching for, interpreting and applying the relevant recommendations or ready-made design patterns (standard solutions used in interface design) [2]. To solve this problem, several solutions have been proposed based on the development of different intelligent systems (IS) that support interface design by dividing interfaces into elements and organizing recommendations (for example, the MetroWeb and BORE systems) [2]. These systems mostly provide the opportunity to automate code generation and the interface validation process. An effective combination of these proposed approaches would reduce the time required to search for appropriate design recommendations, to find usability problems, and to suggest the associated solutions for improving user interfaces. The aim of this paper is to propose possible solutions for the automation of HCI Engineering processes by developing an HCI analysis system with elements of intelligent tools for providing improved human-computer interfaces in web applications. The knowledge base [22] is the main part of any expert system. In this paper we present the ontology and main architecture of an automated intelligent system that covers as fully as possible the stages of the web application development process and takes into account the specifics of HCI in terms of practical knowledge in the relevant context. To achieve these goals we follow these steps: 1. Analyze the structure of knowledge in the HCI area and the design of the interaction process; 2. Select adequate models and tools for knowledge representation; 3. Develop and experimentally study interaction patterns in the human-machine interface to identify the profiles of users related to the various aspects of interaction; 4. Build the knowledge base and system architecture for HCI engineering tools in web applications, including mechanisms for knowledge storage and assessment of the relevant effectiveness and quality of results. * Corresponding author e-mail: madi.sheri@gmail.com © 2015 NSP Natural Sciences Publishing Cor. 2 Actual problems of HCI engineering Human-computer interaction (HCI) is a relatively young and broad scientific field within IT that currently has no settled definition. As a working definition we can say the following: Human-Computer Interaction is a discipline that studies the design, evaluation and implementation of interactive computer systems intended for human use, as well as related aspects. Since 2000, the total number of Internet users in the world has increased more than 5.5-fold (to more than 2 billion people) [3], and in Kazakhstan the average annual increase has reached 28.8%, with the number of users exceeding 8 million (43% of the adult population) [4]. The number of websites shows similarly rapid growth dynamics: the total number of websites in the world was about 1 million in 1997 and 100 million in 2006 [5]; nowadays there are more than 150 million websites on the Internet [6]. The quality of interaction is very important for web applications. The famous HCI expert J.
Nielsen gives the following expression describing the business effect (B) of e-commerce for a web site [1] \[ B \sim V * C * L, \] where B is the amount of business done by the web site, V is the number of unique visitors coming to the site, C is the conversion rate (the percentage of visitors who become customers; note that the concept of conversion applies not only to e-commerce web sites but to any web application used by different users), and L is the loyalty rate (the number of customers who return to conduct repeat business) [1]. The quality of usability is crucial for web applications in e-commerce, e-business and any other e-area, yet the level of usability in web applications is still relatively low for many systems. There is a tendency to improve this situation. Solutions to practical problems in web application development projects involve informal and empirical methods, and generally apply modern approaches in HCI engineering. We can distinguish the following factors which have an impact on usability improvement methods [18]: task analysis, iterative design, and testing with real users. The aim of task analysis is to identify the challenges that the user faces and to find existing or possible solutions. Application of this method is advisable for the later formulation of functional requirements and use cases. Iterative design implies a gradual improvement of interface quality. Using this approach, usability problems and poor design solutions may be discovered at an early stage of development, which helps minimize costs [18]. The most popular method for identifying problems of human-computer interaction is usability testing. Usability testing is a set of experimental techniques and methodologies for assessing the degree of usability of a product (web application, programming interface, document, etc.) by testing it with real or potential users.
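As a toy illustration of the proportionality B ~ V * C * L (up to a constant factor), the visitor, conversion and loyalty figures below are invented; the point is only that B scales linearly in each factor:

```python
def business_effect(visitors, conversion_pct, loyalty):
    """Nielsen's proportionality B ~ V * C * L, up to a constant factor.
    conversion_pct is the conversion rate expressed in percent."""
    return visitors * conversion_pct * loyalty

# Hypothetical site: doubling the conversion rate doubles B, all else equal.
base = business_effect(10_000, 2, 1.5)      # 2% conversion
improved = business_effect(10_000, 4, 1.5)  # 4% conversion
print(improved / base)  # 2.0
```

This multiplicative structure is what makes conversion and loyalty attractive optimization targets: a usability improvement that lifts either one lifts the whole business effect proportionally, without needing more visitors.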
One of the benefits of usability testing is that it involves a relatively small number of users. Generalizing data from a large number of IT projects, we can conclude that testing with just five users is able to identify more than 80% of usability problems [7].

## 3 HCI engineering and the software development process

In the requirements analysis phase, HCI engineering methods are aimed at collecting requirements related to interaction with the system and fixing quality targets for the interface [21]. While making design decisions about the interface, the developer can rely on his own experience, on principles described in the interaction literature, and on practical recommendations appropriate to the context of the project. Searching for existing HCI knowledge in the target context of development may create substantial labor costs, and its correct interpretation and adaptation require a highly skilled designer [19]. Software implementation usually falls outside the area of interest of HCI, because at that point the interface design is simply translated into object code. Testing the interface can serve two purposes: identifying problems arising from interaction or assessing its quality, and comparing different interface designs with each other. Some user-testing methods can be fully or partially automated, which is embodied in the numerous instruments that exist in this area. Deployment and maintenance comprise the measures for operating the established software system and supporting it: improving and developing it, adapting it to new conditions, and fixing bugs. Unlike the previous stages of the software development cycle, this activity has no scheduled finite duration and may continue as long as users operate the system. This stage consumes about 60% of the total cost of software development, and its main activity is associated not with error correction, as is sometimes supposed, but with improvement and further development [9].
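The five-user figure cited at the start of this section is commonly justified by the Nielsen and Landauer problem-discovery model; this is a hedged sketch of that model, where the per-user discovery probability p ≈ 0.31 is an average reported in the usability literature, not a value taken from this paper.

```python
def problems_found_fraction(n_users: int, p: float = 0.31) -> float:
    """Nielsen/Landauer problem-discovery model: the expected fraction of
    usability problems found by n independent test users, each of whom
    uncovers any given problem with probability p."""
    return 1.0 - (1.0 - p) ** n_users
```

With p = 0.31, five users already uncover roughly 84% of problems, which matches the "more than 80%" claim; the curve flattens quickly, so several small tests beat one large one.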
After software deployment, developers finally get potentially unlimited opportunities to analyze the behavior of users during interaction, clarify their needs, and iteratively improve the interface. The degree of success of the designed interface (its quality) is understood as a set of values of certain characteristics of the user's work with the software. The next section gives a brief overview of approaches to creating user interfaces and discusses the concept of quality of interaction.

## 4 User Interface

An interface is a system designed for human-computer interaction that includes physical (hardware) and logical (software) components. One of the main aims of HCI, both as a scientific and as a practical field, is to improve the quality of user interfaces, an important part of software product quality. Standards that describe software quality usually do not distinguish the characteristics associated with user interface quality as a separate category. The standard ISO/IEC 9126-93 [10] identifies the category of "Usability" as "a set of attributes related to the volume of work required for ... express or implied terms of users" without mentioning the interface or interaction explicitly. Part 4 of the international standard ISO/IEC TR 9126, dedicated to quality in use, gives the following set of characteristics: Effectiveness, Productivity, Safety and Satisfaction [17]. J. Nielsen gives the following five qualitative characteristics of interface usability [18]:

1. Learnability - how easily a user can start working with the system on seeing it for the first time;
2. Efficiency - how productive work can be once the user has mastered the system;
3. Memorability - how easily the user can return to effective operation of the system after a long break;
4. Errors - the error rate when operating the system, the severity of errors, and the ease of correcting them;
5. Satisfaction - how pleasant the system is to use.
Measurement of the learnability and memorability of a system is rarely done in practice, because it is the most difficult; besides, these characteristics are not relevant for all software products. Most web applications will not be specifically studied by the user and may never be used by the same user again. That is why, out of the large plurality of quantitative measures of interface usability, it is recommended to use the following [11]:

1. Success rate - the percentage of user tasks completed successfully;
2. Performance time - the time spent on tasks;
3. Error rate - the level of errors made while performing tasks;
4. Subjective satisfaction - subjective user satisfaction.

This set of indicators is widely used for describing the quality of a user interface (standard ISO/IEC 25062:2006 [12]), and our work is based on these standards. Despite the relative availability and effectiveness of these methods, providing quality of interaction is not a trivial task even for a team of developers skilled in this area. They may spend much time performing these activities, especially if the target group for the developed software system is a special category of users, or if these users are not available to the development team (which often occurs during web application development). Thus, it is important to use software tools that can support HCI engineering at various stages of the software development process.

## 5 IS architecture: input and output information

The intelligent system (IS) is designed to support HCI engineering, especially in the early stages of web application development. The system is therefore based on knowledge from different areas such as HCI, interface design, graphic design, usability, etc. The final decision regarding the interface is made by the expert who uses the system. The main component of the output of the IS is a set of recommendations in natural language. As a solution to the problem of processing requirements for a web application, the IS offers analysis of the requirements text itself.
That is why we created a Knowledge Base (KB) containing the vocabulary terms of the subject area; the system uses it for text analysis, for indexing recommendations, and for describing the design context. A KB with a controlled vocabulary lets the intelligent system describe complex objects of the subject area by comparing them against a list of basic terms. For example, for the recommendation "Site logo should have a link to home page," the knowledge engineer should match the terms Logo, Home and Hyperlink. The context of a project is described in a similar way, but its set of terms is formed by the intelligent system automatically, based on the target user and the requirements supplied as input information. By comparing these two sets the system is able to determine the relevance of each recommendation. Each recommendation is evaluated as a potential component of the "solution," i.e., the output list of recommendations. The validation and learning mechanism for the intelligent system is a special knowledge web portal that offers users the ability to assess the applicability of the knowledge contained in the IS. Recommendations from the HCI area, originally presented in natural language, can be transformed into knowledge presented as formal inference rules of a production model. The architecture of the IS for HCI engineering is shown in Fig. 1.

## 6 Knowledge representation models and a hybrid model for the IS

The core of the IS is the knowledge base and the solver, which allow the system to compare input information with the information in the knowledge base and to produce output information. For knowledge representation we propose a hybrid model that includes a frame-based ontology, which uses an object-oriented approach, and a production model, which allows implication knowledge to be formed. There are various models of knowledge representation: formal logic, production rules, frames and semantic networks.
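Before turning to representation models, the term-matching relevance mechanism described above can be sketched in a few lines. The scoring rule (fraction of a guideline's indexed terms that appear in the project context) is an illustrative assumption, not the system's actual formula.

```python
def relevance(guideline_terms: set, context_terms: set) -> float:
    """Score a recommendation by overlap between its controlled-vocabulary
    terms and the terms derived from the project context: here, the
    fraction of the guideline's terms present in the context."""
    if not guideline_terms:
        return 0.0
    return len(guideline_terms & context_terms) / len(guideline_terms)

# "Site logo should have a link to home page", indexed by a knowledge engineer:
logo_rule = {"Logo", "Home", "Hyperlink"}
context = {"Logo", "Home", "Navigation"}   # terms derived from requirements text
score = relevance(logo_rule, context)      # 2 of 3 terms match
```

Recommendations can then be ranked by this score, with a threshold deciding which ones enter the output list.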
One of the modern forms of knowledge representation is the ontology: a formal explicit description of terms and the relationships between them (as defined by T. Gruber, whose work [20] is significant for the use of ontologies in artificial intelligence). A subset of predicate logic syntax can be used to describe an ontology. Many models of knowledge representation fall under the definition of ontology: frames, semantic networks, conceptual maps, etc. In our work we use a frame ontology. A frame [13] is a knowledge structure that models human thinking and corresponds to an abstract image of some object, phenomenon, event or process. A key feature of the frame model is that initial slots can be filled with defaults: pre-prepared values that do not necessarily hold in a particular situation. A frame structure therefore contains: the name of the frame, inheritance pointers (if the frame structure is hierarchical), the names of slots, the data types of values, and the values of slots. In most frame systems the value of a slot may itself be another frame, thereby realizing various relations between concepts, or a procedure (procedural attachment), implementing the procedural component of knowledge representation. The features and benefits of the frame model make it suitable for our IS. We can also use a frame script to represent the goals of organizing user interaction with the web application. Thus, working with the IS we are able to refine the relevant attributes of the frame: the requirements for the web application, the characteristics of the target user, and the set of recommendations. The production model is one of the most common tools of knowledge representation in intelligent systems thanks to its clarity, modularity (the ease of making changes and additions) and convenience for inference [9].
In general, a production model (P) can be represented as follows:

\[ P = \langle S; L; A_p \rightarrow B_p; Q \rangle \] \hspace{0.5cm} (2)

where S is a description of the class of situations; L is the condition under which the production is activated; \( A_p \rightarrow B_p \) is the core of the production; and Q is the postcondition of the production rule. The disadvantages of the production model include:

1. its simple conjunctive structure, with no relationships between rules, which limits its use for describing complex knowledge domains;
2. the complexity of managing rules that may conflict with each other and of prioritizing their execution.

To avoid these disadvantages the production model can be used in symbiosis with other means of knowledge representation, an approach that has recently gained popularity in the formalization of knowledge [6]. In this case the production model is embedded in the declarative component of knowledge representation: frames set the structure of the knowledge domain, and the rules of the production model are used to fill the slot values of frame instances. The solver, one of the main components of the IS, must be able to work with such a hybrid model of knowledge representation, as well as to control the sequence of rule execution.

## 7 Classes related to knowledge in the HCI area and UI

Based on the architecture of the IS, the first class of the ontology is HCI engineering task, which has relationships with the classes representing input (Target user and Requirement) and output (Guideline (Recommendation) and Web interface design) information. In addition, the value of the Project context slot can be set to any class of the ontology (a child of the meta-class THING). The overall structure of the class HCI engineering task is shown in Fig. 2. Law, Principle and Guideline are ontology classes united in an abstract meta-class called HCI knowledge.
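The hybrid model from the previous section, where frames give the domain its structure and production rules (condition, action) fill the slot values of frame instances, can be sketched as follows. All class, slot and rule names are illustrative assumptions, not the system's real API.

```python
# Hypothetical sketch: frames with inheritance, plus production rules
# A_p -> B_p that fill slot values of frame instances when they fire.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, dict(slots)

    def get(self, slot):
        # Slot lookup with default inheritance along the parent pointer.
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

def run_rules(frame, rules):
    """Fire every production whose condition A_p holds on the frame;
    the action B_p fills slot values."""
    for condition, action in rules:
        if condition(frame):
            action(frame)

task = Frame("HCI_engineering_task", target_user="novice")
rules = [
    (lambda f: f.get("target_user") == "novice",
     lambda f: f.slots.setdefault("guideline",
                                  "Site logo should link to home page")),
]
run_rules(task, rules)
```

A real solver would also resolve rule conflicts and ordering, which is exactly the control problem the text assigns to it.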
It is known that the success of HCI engineering depends on a deep understanding of the interaction context and of human behavior patterns, which gives the engineer the opportunity to make an informed choice among several different, or even contradictory, practical recommendations [14]. This gives particular importance to organizing knowledge in the set of recommendations issued by the IS, which corresponds to the explanation subsystem, and makes it appropriate to provide for:

Fig. 2: HCI engineering task class structure

1. the possibility of establishing links between different levels of HCI knowledge and within one level;
2. the possibility of establishing links to frame instances;
3. the possibility of establishing a link between an instance and a context;
4. the ability to specify the "efficiency," or practical significance, of recommendations.

Based on points 1 and 2 above, we added the classes Finding, Source and Reference to the ontology, together with the corresponding relationships to all classes. All classes related to HCI knowledge have been combined into an abstract class called HCI knowledge representation class (Fig. 3).

Fig. 3: HCI knowledge representation class structure

The class Finding reflects a fact or empirical evidence obtained in research or practice in the HCI area; the class Source usually corresponds to a scientific publication, i.e., an article or book. Based on points 3 and 4 above, there are the slots tag and efficiency. The first is an attribute of knowledge at all levels (i.e., of the HCI knowledge class), while the second refers only to the Guideline class. The slot tag is used to establish a relation between an instance of knowledge and the context of the designed interaction. The value of this slot can be any class of the ontology.
The slot efficiency reflects the evaluation of the efficiency of a recommendation in HCI engineering. The end result of interaction design activity is an interface created as a result of compromise solutions aimed at meeting the requirements while considering technological and other limitations [15]. The designer can make decisions based on practical recommendations or on a higher level of knowledge existing in HCI, but there are no separate formalized rules for such decisions. Today a web application can be divided into two closely connected components: content and design. High-quality presentation of content, its user-friendly organization, and its adaptation to the "scanning" style of perception are important factors in improving the usability of web applications [16].

## 8 Conclusion

We analyzed methods and techniques for HCI engineering and built the architecture of an intelligent system supporting it. The application of an intelligent system that supports HCI engineering allows us to reduce the qualification requirements for engineers and the probability of interaction design errors at the initial stages of web application development, and also to improve the quality of interaction for specific categories of users. In further development of the topic, we are going to extend our knowledge base to represent more precise models of recommendations for the HCI engineer. We are also going to carry out a more detailed validation of the knowledge base against existing web applications.

References

© 2015 NSP Natural Sciences Publishing Corp.

Madi Sheriyev

Madi Sheriyev received the bachelor degree in Information Systems at International Information Technology University in Almaty, Kazakhstan. He has been pursuing his research under the guidance of Lyazzat Atymtayeva, Doctor of Science in Mechanics, Mathematics and Computer Science. Currently he is a master student at KBTU. His research interests include Human Computer Interaction, Artificial Intelligence and the Semantic Web.
Lyazzat Atymtayeva

Lyazzat Atymtayeva received the PhD and Doctor of Science degrees in Mechanics, Mathematics and Computer Science at al-Farabi Kazakh National University, Kazakhstan. Her research interests are in the areas of mechanics, applied mathematics and computer science, including numerical and rigorous mathematical methods and models for mechanical engineering and computer science, and intelligent and expert systems in Information Security, Project Management and HCI. She has published research papers in reputed international journals of the mathematical and computer sciences. She is a reviewer and editor of international journals in mathematics and information sciences.
Nomad is the mobile agent system integrated with eAuctionHouse, our next-generation Internet auction server. With the Nomad system, mobile agents travel to the eAuctionHouse site and participate in auctions on the user's behalf. Users can create agents using Java or can automatically generate agents from Nomad's template agent library. As the Internet moves into the mainstream, electronic commerce is becoming an important mechanism for conducting business. It helps merchants and consumers reduce business costs and enables customized delivery of goods and services. Among the current business models, electronic auctions are emerging as one of the most successful e-commerce technologies. There are several successful commercial Internet auction sites, such as eBay and Yahoo, as well as interesting academic Internet auction houses. Our motivation in developing an auction server, eAuctionHouse, was to prototype next-generation features and test their feasibility, both computationally and in terms of consumer ease of use. eAuctionHouse is to our knowledge the first, and currently only, Internet auction site that supports combinatorial auctions, bidding via quantity-price graphs, and mobile agents. eAuctionHouse acts as a third-party auction site, allowing users across the Internet to buy and sell goods and to set up markets. eAuctionHouse is available for testing at http://ecommerce.cs.wustl.edu. As in conventional Internet auctions, in eAuctionHouse a user visits the auction website to create or close an auction or to submit bids. However, eAuctionHouse supports two additional mechanisms for creating auctions, closing auctions, and bidding: a user can send a formatted text string directly through a TCP/IP connection, or use Nomad, the integrated mobile agent system. Another article presents a detailed view of eAuctionHouse. This article focuses on the Nomad system.
MOBILE AGENT SYSTEM FOR ELECTRONIC AUCTIONS

Nomad allows mobile agents to travel to the eAuctionHouse site and actively participate in auctions on the user's behalf even when the user is disconnected from the network. This reduces network traffic and latency, and the agents can respond to changes in the auction quicker than remote users could. The speed of executing a computationally intensive bidding strategy may also increase when agents execute on a powerful server. Mobile agents need not necessarily be bidding agents. They can be used for collecting information, learning price distributions, or setting up auctions. (For the advantages of using mobile agents, see the sidebar, "Mobile Agents in Internet-Based Auctions.") When multiple distributed eAuctionHouses are installed across the network, multiple Nomads help to form a virtual electronic auction site network. To compare deals across different eAuctionHouses, however, an agent does not necessarily have to migrate—from one agent dock it can download and upload information from all of the eAuctionHouses through their TCP/IP connections. The agent can make the decision to migrate dynamically based on the amount of information transmitted, latency, and so on. Also implemented in Nomad is a mobile agent control scheme. After registering itself at the server, a mobile agent can be seen and managed in its creator's user portfolio. Once an agent docks on a server, it registers itself on the server. When the user asks, the server displays all of the user's registered agents. If, for example, the user asks to kill an agent, the server sends a message to the agent. The agent then unregisters itself and the system deletes the registry entry. If a user were to program an agent that leaves the server without unregistering, the registry entry would remain.
If the user then asked to kill the agent, the system would delete the registry entry without actually killing the agent, because the system does not keep track of where agents have migrated. In summary, the current implementation does not provide automatic agent tracking beyond a single agent dock, but leaves that to the programmer of the agent. The high-level architecture of Nomad is illustrated in Figure 1. A Nomad system consists of four main components:

- an interface for specifying agents
- an agent dock
- an agent manager
- an agent database

The eAuctionHouse Web system provides the HTML interface for users to navigate within eAuctionHouse. When a user sends a request for automatically creating a mobile agent via the Web system, the request is directed to an agent generator. All other requests are forwarded to the connection manager. Figure 1 shows the connection manager receiving input from three kinds of sources: the Web system, TCP/IP connections, and agents. Internally the connection manager does not distinguish between these sources. The same TCP port is used for communication and all requests are sent as formatted text strings. When a request is accepted, the connection manager initiates a handler thread. The handler checks the request's validity, takes necessary actions, synchronizes accesses to the database, retrieves and updates the database, and returns a result. Conceptually, each request is mapped to a particular handler algorithm. The modules and functionality of the auction engine itself are presented elsewhere [2,5]. Part of the Web system is dedicated to specifying agents. Any request through that part is sent to the agent generator, which then decodes the formatted text string, creates mobile agents following those instructions, and launches them onto the agent dock. Mobile agents reside and execute on the agent dock. We use the Concordia system (http://www.meltca.com/HSL/Projects/Concordia) as the basis of our agent dock.
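The connection manager's scheme described above (one TCP port, formatted text-string requests, one handler thread per accepted connection, each request mapped to a handler) can be sketched as follows. The request verbs, reply strings and dispatch table are hypothetical, not Nomad's actual protocol.

```python
import socket
import threading

# Hypothetical dispatch table: each request verb maps to a handler routine.
HANDLERS = {
    "BID": lambda args: f"ACK BID {args}",
    "QUOTE": lambda args: f"QUOTE {args} 42.00",
}

def handle(conn):
    """Handler thread: read one text request, dispatch, reply."""
    with conn:
        request = conn.recv(1024).decode().strip()
        verb, _, args = request.partition(" ")
        reply = HANDLERS.get(verb, lambda a: "ERROR unknown request")(args)
        conn.sendall(reply.encode())

def serve(host="127.0.0.1", port=0):
    """Accept connections on a single TCP port, spawning a thread per request."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen()

    def loop():
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]   # actual port chosen by the OS
```

Because the dispatch is source-agnostic, the same code path serves the Web system, raw TCP/IP clients, and agents, matching the text's point that the connection manager does not distinguish between them.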
Concordia is a framework for developing and managing mobile agent applications [6]. It is a Java application and supports mobile agents written in Java. Concordia provides application interfaces for sending agents around the network; in Nomad these are used both for launching automatically programmed agents and for sending agents manually programmed by users. Concordia agents process data at the data source. Network transport is hidden from applications, developers, and users. Typically, a Concordia agent has an itinerary, which can be seen as a list of network destination addresses. Associated with each address is an action—a Java class method executed when the agent travels to the associated site.

---

**Mobile Agents in Internet-Based Auctions**

The use of agents in electronic auctions has several advantages for the user:

- An agent can monitor the auction events the user has deemed relevant. When such events occur, the agent can alert the user. This frees the user from having to poll the auction repeatedly.
- Compared with traditional bidding where every bid is specified parametrically (for example, by the amount the user bids), bidding agents give the user more flexibility when customizing a bidding strategy. The strategy can be a function of time, of other participants' bids, and so on.
- The agent can also be programmed to monitor external events, and to condition its bidding on those events (for example, stock prices or news). In other words, the agent can make decisions based on all available information that the bidder considers relevant.
- Prototypical bidding agents can be analyzed game-theoretically off-line so that they will bid optimally on the user's behalf in given auction settings. This puts expert bidders and amateurs on a more equal footing for e-commerce: since the bidder agent will optimally bid on the user's behalf, the user need not engage in strategic considerations when revealing preferences to the agent.
- Agents can be built to track bids in multiple auction houses, looking for the best deal and/or coordinating the user’s bids in the different auctions. For example, an agent can submit bids to multiple auction sites for the same item, but at any time allow at most one of the bids to be winning (highest within that auction). While this is possible for a human user to do without an agent, an agent saves effort, and makes such behavior viable even in settings where the user’s time is costly. - In many current Internet auctions, most of the bidding activity occurs just before the auction closes. With agents that execute on the server side, the user can avoid the network lag of getting the most current information from the auction to the user’s site, and of bids traveling from the user’s site to the auction. - Agents that execute on the server side will continue to operate as usual even when network connections are down or slow. - The user does not need to be continually connected to the network. For example, a user can connect a laptop to the network via a phone link on an airplane, launch an agent to execute on the auction server or on a home computer, and disconnect. - If the information transferred between the agent and the auction exceeds the code size of the agent, sending an agent to execute server-side uses less bandwidth than client-side execution. When the agent communicates locally at its destination rather than over the network, network traffic and latency are reduced because the amount of data transferred around the network is reduced. - In simple auctions with infrequent activity, the information downloaded from the auction to the agent (other bids, quotes) and the information that is uploaded from the agent to the server (mainly bids) might not exceed the size of the agent. However, in highly active auctions and in combinatorial auctions, the information that is transferred can easily exceed the size of the agent. 
In a combinatorial auction, bids can be submitted on combinations of items. So, if quotes are provided, they need to be provided on combinations of items, and the number of combinations is exponential (of course, one could provide quotes on select combinations only). Furthermore, a new bid on a combination generally changes the quote on a large number of combinations, further increasing the amount of quote information that an agent needs to download.

The itinerary can be altered dynamically during the agent's trip. Agents can also collaborate with the help of an event-distribution mechanism and other services. The agent manager notifies agents when the auction information they are interested in is altered. The agent database stores information about agents—such as their creators and the information they want to receive. By communicating with the agent manager, agents can not only utilize the event-distribution mechanism (which triggers an agent only if something of interest happens in the auctions, rather than requiring the agent to poll the auctions), but they can also be seen via the eAuctionHouse Web system. This gives users a convenient interface for managing their agents. With this mechanism, the agent programmer does not have to write code for the agent to handle communication with the user and act based on that communication, although the user can do this to implement additional functionality.

**Generating Mobile Agents** Nomad supports the creation of mobile agents by allowing users to program their own agents or launch parameterizable template agents that have been designed and programmed in advance.

The benefits of remote execution could be captured by either remotely executing nonmobile agents or by mobile agents. Other advantages are specific to mobile agents.

- Mobile agents can potentially take advantage of the available services distributed across the network.
For example, they could travel to and execute on powerful servers with excess CPU time and disk space. This can be pertinent for bidder agents if, for example, their bidding strategies include complex computations such as statistical analysis and projection. - The use of mobile agents can lead to more effective load balancing. A mobile agent can move to an agent dock where the load is currently not too high. This leads to faster execution of the agent's bidding strategy. - Mobile agents can react to network latencies that vary over time in different parts of the network. An agent can move to an agent dock with a less congested connection to the auction server. - Using mobile agents, the decision of local versus remote execution can be made dynamically at run time based on the volume of information being uploaded and downloaded, network latency and congestion, and the availability, speed, and cost of computation at different sites. Naturally, there are also disadvantages to using mobile agents. Most notably, the appropriate allocation of resources—such as CPUs, RAM, and disk—among the mobile agents of different users remains an open research area. Several auction sites have solutions other than mobile agents. eBay has a proxy bidder "agent" that allows the user to enter a reservation price. As long as the auction is open and the user's reservation price has not been reached, the agent bids the minimum amount necessary to become the highest bidder. However, such an agent limits the user's choice of bidding strategy. For example, when a user's valuation of an item depends on other bids, such a simple agent is no longer optimal; rather, the agent should update its valuation dynamically based on the other bids so far. This involves taking into account the effect of the winner's curse: if the bidder bids the perceived valuation of the item and wins, the bidder will know that he or she paid too much because others valued the item less. 
Furthermore, the simple proxy bidder agents offered by current Internet auctions do not allow the user to coordinate bids across multiple auction houses automatically. The Michigan Internet AuctionBot could be viewed as supporting agents in the sense that it provides a TCP/IP-level message protocol by which agents can participate in the auction. However, no support is provided for mobile agents. **Tailored Agents** Users can program their own mobile agents in Java, which allows maximal flexibility in what agents can do. The agents can use highly tailored bidding strategies that consider input from what is transpiring in the auction, in other auctions, and in the news. The agents can also use highly tailored migration schemes. The user programs the agent to communicate with eAuctionHouse through TCP/IP connections using a string format that we have specified to allow rich forms of bidding, collecting information, and setting up auctions and markets. **Template Agents** To speed agent generation and to enable nonprogrammers to create mobile agents, the Nomad system allows automated generation of agents based on HTML forms. Using the forms, the user chooses from a library of preprogrammed template agents that are recommended to the user based on the auction type in question. Nomad currently makes the following parameterizable mobile agents available for automated generation: - The information agent monitors an auction and sends e-mail to the user when specified events occur. With this agent, the user does not have to poll the auction and is notified of important events immediately. - The incrementor agent implements the dominant strategy on the user's behalf in single-item, single-unit, ascending open-cry first-price private-value auctions (that is, English auctions). It bids a small amount more than the current highest bid, and stops if the user's reservation price is reached. With this agent, the user does not have to follow the auction.
The user's dominant strategy in these settings is to report the valuation truthfully to the agent. Not accounting for the technical aspects, such as having its own thread of execution and being able to migrate, this agent provides the same functionality as current proxy bidder "agents" like those on eBay. - The N-agent is for single-item, single-unit, sealed-bid first-price auctions where the number of bidders, N, is known, and the bidders' private valuations are independently drawn from a uniform distribution. The goal in this type of auction is to win the auction but to underbid strategically so as to minimize payment. This involves guessing what others will... The symmetric Nash equilibrium strategy is to bid the user's valuation times \( (N - 1)/N \). Since this is how the \( N \)-agent bids, the user is motivated to reveal the true valuation to the agent. - The control agent submits very low noncompetitive bids. This agent is a speculator's tool that artificially increases the number of bidders so as to mislead other bidders, such as the \( N \)-agent. For example, a seller might submit control agents so that \( N \)-agents will bid higher. Of course, if control agents are present, it is no longer an \( N \)-agent's best strategy to bid the user's valuation times \( (N - 1)/N \). - The discover agent computes the expected gain from bidding a small amount more than the current highest bid according to the agent's current distribution of the user's valuation. This is intended for settings where the user has a probability distribution over the item's value rather than an exact valuation. In the future, the probability distribution could be updated by new events, or by what others have bid in nonprivate-value auctions. Such updating is part of our current research. The user specifies the parameters: user identification number, password, e-mail address used by the mobile agent for reporting, agent name for agent management, and the user's reservation price for the item.
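The bidding rules of the incrementor and N-agents described above can be sketched as follows. This is a toy illustration with our own function names, not code from the Nomad system:

```python
# Illustrative sketches of two template-agent bidding rules.
# These are simplified stand-ins, not code from Nomad.

def incrementor_bid(current_high, reservation_price, increment=1.0):
    """English-auction rule: bid a small amount more than the current
    highest bid, but stop once the user's reservation price is reached."""
    bid = current_high + increment
    return bid if bid <= reservation_price else None  # None = stop bidding

def n_agent_bid(valuation, n_bidders):
    """Symmetric Nash equilibrium bid in a sealed-bid first-price auction
    with n_bidders bidders whose private valuations are i.i.d. uniform:
    bid valuation * (N - 1) / N."""
    return valuation * (n_bidders - 1) / n_bidders
```

For example, with a valuation of 100 and four bidders, the N-agent bids 75; the incrementor stops bidding as soon as topping the current high bid would exceed the reservation price.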
When the user clicks the create button, the Java agent is automatically generated based on these parameters, travels to the agent dock (ecommerce.cs.wustl.edu in the example), docks there, and bids at eAuctionHouse. **AUTOMATED COALITION FORMATION** Our current research focuses on mobile agents for automated coalition formation in electronic auctions. Economic efficiency can sometimes be improved if bidders form coalitions. Consider, for instance, an auction in which one seller is selling one item. One buyer wants part of the item and another buyer wants the remaining part. The sum of the amounts they are willing to pay separately exceeds the highest price offered for the whole item by other bidders. By forming a coalition, the two buyers and the seller benefit. There are two main barriers for users across the Internet to form coalitions while bidding online. First, finding partners can be time-consuming. Second, bidders do not necessarily trust each other without a binding contract. Issues arising in coalition formation include who is in charge of bidding, what happens if some bidders refuse to pay after the coalition's bid wins, and how much each participant has to pay if the coalition wins. To support automated coalition formation, we propose using mobile agents. With an appropriate communication mechanism, it can be easier for an agent to locate potential partners than it is for a person sitting at a computer to do so. For example, the agent can search in a public place where bidders or agents looking to form a coalition post partial bids in the hope of being combined with others. The search is not only focused on finding a partner, but more generally, on finding a desirable coalition structure—that is, partitioning the agents into coalitions.
This is a computationally hard search problem even if all the agents are in one location and their characteristics are commonly known.9–11 Agents usually search orders of magnitude faster than humans. Furthermore, agents' time is less costly. To solve the trust problem, a third-party site might be necessary. At the third-party site, agents could sign binding contracts and check users' credit histories and reputations. Although collusion can improve economic efficiency for buyers and sellers, it also involves speculation costs and can cause revenue loss for the auctioneer. For example, bidders can coordinate to keep their bids artificially low so as to get the item at a lower price than they otherwise would. However, considering the number and diversity of users in most Internet auctions, it seems unlikely that bidders across the Internet would be able to establish such a coalition. Therefore, automated coalition formation in the Internet auction setting may contribute more to the positive aspects of coalition formation than to the negative. FUTURE WORK Future research includes developing additional prototype agents based on new game-theoretic analyses. We are also developing learning methods for updating the user's valuation distribution in settings where the user does not know the exact value of the auctioned item. The discover agent then uses that distribution to bid optimally on the user's behalf. Finally, automated coalition formation introduces new challenging problems to electronic auctions. These will be further studied in the continuing development of eAuctionHouse and Nomad. ACKNOWLEDGMENTS This work is supported by the U.S. National Science Foundation under Career Award IRI-9703122 and grants IRI-9610122 and IIS-9800994. The authors would like to thank Kate Larson for her comments on drafts of this article. Tuomas Sandholm is assistant professor of computer science at Washington University, St. Louis.
He received the PhD and MS degrees in computer science from the University of Massachusetts at Amherst, and an MS in Industrial Engineering and Management Science from the Helsinki University of Technology, Finland. His research interests include artificial intelligence, commerce, game theory, multiagent systems, auctions, automated negotiation and contracting, coalition formation, and normative models of bounded rationality. He has published over 100 technical papers and received several academic awards including the NSF CAREER award (http://www.cs.wustl.edu/~sandholm). Qianbo Huai is an engineer at Microsoft. He received an MS degree from Washington University, St. Louis, where he was involved in Sandholm's Multiagent Systems Research Group. Readers can contact Sandholm at sandholm@cs.wustl.edu and Huai at qhuai@microsoft.com.
---
Sequential Equivalence Checking EE 382M-11 The University of Texas at Austin Department of Electrical and Computer Engineering Dr. Xiushan “Shaun” Feng Formal Verification Group Leader, Samsung Austin Research Center (SARC), Austin, TX Samsung Advanced Computing Lab (ACL), San Jose, CA Samsung G2 Design Center, Gyeonggi-do, Korea Outline - Sequential Equivalence Checking (SEC) basics - Combinational EC vs. Sequential EC - Theory behind SEC - Reachability analysis on the product machine - Bounded vs. unbounded, etc. - Partitioning of the transition relation, CFG, etc. - How industry deals with SEC issues - Leverage existing CEC and model checking tools - Sequential equivalence checking tools - E.g., SLEC, JasperGold SEC, VCF-SEQ, Hector, ESP-CV (RTL vs. Spice) **Equivalence Checking - Miter** For all possible aligned inputs, check whether the outputs are always equivalent - G and R can have different state elements - G and R can have different latencies. **CEC (State Points Mapped)** - All combinational circuits are aligned -- CEC - EC over the outputs of combinational circuits, i.e., the next-state functions of state elements (an inductive proof of EC) - Good scalability, mature, extensively used - Requires: complete state mappings **Types of SEC** - Input/output alignment - Cycle-accurate equivalence - Equivalent at every cycle - Transaction-level equivalence - Compare points can have different latencies - Initial state - Safe replacement (Singhal, Pixley, Aziz, Brayton) - No assumption about the initial state - Initial state needed - From the init state, whether non-eq states can be reached forward - Or from non-eq states, whether the init state can be reached backward --- **Common SEC in Verification** - Electronic System Level (ESL) - ESL vs ESL (e.g., Matlab vs. timed SystemC) - ESL vs RTL (e.g., serial C vs. RTL, EC of HLS) - RTL vs. RTL – commonly used - Pipeline updates - Register retiming - Resource rescheduling - State recoding - Sequential clock gating verification - Xprop verification - RTL vs.
Spice (switch-level) Example: Clock Gating SEC <table> <thead> <tr> <th>CG_en==0</th> <th>CG_en==1</th> </tr> </thead> <tbody> <tr> <td>Other inputs</td> <td>Other inputs</td> </tr> <tr> <td>Instance d0 of DUT</td> <td>Instance d1 of DUT</td> </tr> <tr> <td>cg clocks always running</td> <td>cg clocks can be disabled based on internal state</td> </tr> </tbody> </table> - For all possible input combinations, there is no output mismatch up to a certain number of cycles - If possible, a full proof for all cycles - d0.out_foo == d1.out_foo, or d0.out_valid && d1.out_valid |-> d0.out_foo == d1.out_foo Example: XPROP with SEC <table> <thead> <tr> <th>All Inputs</th> <th>All Inputs</th> </tr> </thead> <tbody> <tr> <td>Instance d0 of DUT</td> <td>Instance d1 of DUT</td> </tr> <tr> <td>X assignments</td> <td>X assignments</td> </tr> <tr> <td>Uninitialized registers</td> <td>Uninitialized registers</td> </tr> <tr> <td>Array range overflow</td> <td>Array range overflow</td> </tr> <tr> <td>Multiple drivers, etc.</td> <td>Multiple drivers, etc.</td> </tr> </tbody> </table> - Xs inside RTL are don't-care spaces (synthesis/formal chooses 0 or 1) - For all possible input combinations, there is no X propagated to outputs - The reason that the two instances are not equivalent is the different assignments of Xs - d0.out_foo == d1.out_foo, or d0.out_valid && d1.out_valid |-> d0.out_foo == d1.out_foo **CEC vs SEC** (Figure: differences between CEC and SEC, taken from reference 1) <table> <thead> <tr> <th>CEC</th> <th>SEC</th> </tr> </thead> <tbody> <tr> <td>FFs match</td> <td>Re-encoding of state; serial vs. parallel interfaces; scheduling; pipelining</td> </tr> </tbody> </table> **SEC Approaches** - **Flattening** - flatten a sequential circuit into a combinational circuit - Reduce SEC into CEC --- too big to handle - **Graph
isomorphism** - Two FSMs can be translated into the same one - Rewriting rules, canonical forms - **Reachability analysis on product machine** - Turn SEC into model checking on equality assertions - State space explosion - Bounded model checking (DAC’03, Kroening, Clarke, and Yorav) - Abstraction techniques (slicing, exploiting similarity, etc.) - Redundancy removal Flattening: unroll the sequential circuit $C$ by $t$ time frames into a combinational circuit. Isomorphic state graphs: rewrite $G_{\text{min}}$ and $R$ into canonical form to check for equivalence. SEC -- Model Checking - Product machine: \( M = G \times R \) - Assertions: \( G_{\text{out}} == R_{\text{out}} \) - Reachability - For all reachable states, whether \( G_{\text{out}} == R_{\text{out}} \) - For all initial states? - Or for a given initial state? - Termination criteria: - fixpoint reached, e.g., the least fixpoint if the init state is given and no new state is visited - Upper bound hit in BMC (bounded model checking) Monolithic Transition Relation – BDD\(^*\) Example ```c /* Builds a single BDD that is the transition relation for the entire circuit. */ for (i = 0; i < state_var_count; i++) { /* Build the relation for each next-state wire: the predicate x' = f(x). */ wire_rel = Cudd_bddXnor(gbm, next_array[i]->bdd_var, next_array[i]->bdd); Cudd_Ref(wire_rel); /* AND it into the monolithic transition relation. */ temp = Cudd_bddAnd(gbm, TR, wire_rel); Cudd_Ref(temp); Cudd_RecursiveDeref(gbm, TR); Cudd_RecursiveDeref(gbm, wire_rel); TR = temp; } ``` - A monolithic TR is built - one big BDD for the TR - The BDDs for the flops are ANDed into one \(^*\)CUDD is used (http://vlsi.colorado.edu/~fabio/CUDD/) Post-Image Computation – BDD Example /* Computes the AND and quantifies out present-state and input variables */ temp = Cudd_bddAndAbstract(gbm, S, TR, ps_input_cube); Cudd_Ref(temp); /* Now, change the image BDD back to present state variables.
*/ post_S = Cudd_bddSwapVariables(gbm, temp, ps_vars, ns_vars, state_var_count); Cudd_Ref(post_S); Cudd_RecursiveDeref(gbm, temp); - Image computation over the reachable states - the reachable states are represented by one big BDD Least Fixpoint Computation – BDD Example /* Least Fixpoint (LFP) computation. The loop terminates when the LFP is reached, i.e., no new reachable state is discovered. */ S = build_initial_state_bdd(); /* a user function to set the initial state */ Cudd_Ref(S); do { post_S = image_monolithic_tr(TR, S, ps_in_cube, ps_vars, ns_vars); Cudd_Ref(post_S); temp = Cudd_bddOr(gbm, S, post_S); /* S := S ∪ post(S) */ Cudd_Ref(temp); done = (temp == S); /* BDDs are canonical, so pointer equality tests set equality */ Cudd_RecursiveDeref(gbm, post_S); Cudd_RecursiveDeref(gbm, S); S = temp; } while (!done); - The LFP loop computes the set of reachable states - Monolithic transition relation - One BDD for the reachable states State Space Explosion - SEC works on twice the design space (G + R) - Exponentially increased state space - Model checking cannot handle large designs or complicated arithmetic circuits - BDD: size - SAT: number of clauses, runtime - Abstraction techniques are needed SEC Abstractions - Simplify the models G and R - Rewriting – make G and R more alike - Redundancy removal – drop unneeded logic - Retiming – move logic across state points - Divide and conquer – decomposition - Partition transition relations, state space, flop stages, etc. - Use correspondences between G and R to simplify the product machine - Structure similarities between G and R.
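The reachability-based SEC loop above (explore the product machine's reachable states and check the output-equality assertion in each) can be sketched over explicit state sets, without BDDs. The two toy machines below are our own: a binary mod-4 counter as the golden design and a Gray-code counter as the revised one, compared on their decoded count:

```python
# Toy SEC by reachability on a product machine.  Golden design G: a binary
# mod-4 counter.  Revised design R: a Gray-code counter with a decoder.
# The assertion checked in every reachable product state is G_out == R_out.
# These machines are illustrative stand-ins, not real netlists.

BIN_NEXT = {0: 1, 1: 2, 2: 3, 3: 0}      # G: binary next-state function
GRAY_NEXT = {0: 1, 1: 3, 3: 2, 2: 0}     # R: Gray-code next-state function
GRAY_DECODE = {0: 0, 1: 1, 3: 2, 2: 3}   # R's output: Gray state -> count

def check_equivalence():
    """Least-fixpoint reachability on the product machine M = G x R.
    Returns (True, None) if the assertion holds in every reachable state,
    or (False, counterexample_state) on the first mismatch."""
    reached = set()
    frontier = {(0, 0)}                  # the given initial product state
    while frontier:                      # terminates at the least fixpoint
        reached |= frontier
        for (g, r) in frontier:
            if g != GRAY_DECODE[r]:      # output mismatch => non-equivalent
                return False, (g, r)
        # post-image of the reached set, minus already-visited states
        frontier = {(BIN_NEXT[g], GRAY_NEXT[r]) for (g, r) in reached} - reached
    return True, None
```

The fixpoint is reached when the post-image adds no new product states, mirroring the termination criterion of the BDD-based LFP loop.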
TBV: Transformation-Based Verification - Design N - Redundancy Removal Engine - Result N - Design N' - Retiming Engine - Result N' - Design N'' - Target Enlargement Engine - Result N'' - Design N''' - Result N''' Taken from reference 3 on IBM SixthSense TBV: Redundancy Removal Results (Table: number of registers for the IFU, SMM, and S6669 designs in the original design, after merging via induction, and after merging via TBV; the numeric results appear only in the figure taken from reference 3 on IBM SixthSense.) Partition the monolithic transition relation by the assignment of each variable to reduce the size of the SAT clauses. (Reference 7) Annotated Control Flow Graph - ACFG is a partitioned model checking method - Partition the software states and hardware states based on the structure of the ACFG - Reduce the state space - Use the flow graph and antecedents of the ACFG to guide and tailor the state space exploration - Idea: - Given the flow and the antecedents → what set of states can be on each edge - After the computation → the consequents are checked against the result ACFG Partition Algorithm 1 1: function ModelCheck($ACFG$, $post_c$) (* Incorporate mapped antecedents into the circuit and initialize the simulation relation *) 2: for all edges $e$ of the $ACFG$ graph do 3: $\text{ant}(e) := \text{ant}(e) \land \text{ant}(e)[I_g/I_c][O_g/O_c]$; 4: if $e$ is from the entry vertex then 5: $\text{sim}(e) := post(e)(\text{ant}(e))$; 6: add $e$ into taskQueue; 7: else 8: $\text{sim}(e) := \emptyset$; 9: end if 10: end for ACFG Partition Algorithm 2 (* Compute the simulation relation and check consequents *) 11: while taskQueue $\neq \emptyset$ do 12: remove an edge $e$ from taskQueue; 13: if $\text{sim}(e) \nRightarrow \text{cons}(e)$ then 14: return(a counter-example trace); 15: end if 16: for each successor edge
$e'$ of $e$ do 17: $\text{sim}(e') := \text{sim}(e') \lor$ $\text{post}(e)(\text{post}_c(\text{sim}(e)) \land \text{ant}(e'))$; 18: if there is a change in $\text{sim}(e')$ then 19: put $e'$ into taskQueue; 20: end if 21: end for 22: end while 23: return(succeed); ACFG: Results - We take a radix-2 SRT divider as an example (2N-bit dividend, N-bit divisor) <table> <thead> <tr> <th rowspan="2">N</th> <th colspan="2">Without ACFG partitioning</th> <th colspan="2">With ACFG partitioning</th> <th rowspan="2">Speedup</th> </tr> <tr> <th>Time(s)</th> <th>BDD size</th> <th>Time(s)</th> <th>BDD size</th> </tr> </thead> <tbody> <tr> <td>7</td> <td>12.27</td> <td>5390</td> <td>6.99</td> <td>7462</td> <td>1.76x</td> </tr> <tr> <td>8</td> <td>32.31</td> <td>10307</td> <td>18.41</td> <td>14635</td> <td>1.76x</td> </tr> <tr> <td>9</td> <td>21.89</td> <td>8982</td> <td>14.97</td> <td>9770</td> <td>1.46x</td> </tr> <tr> <td>10</td> <td>50.62</td> <td>6619</td> <td>26.26</td> <td>15566</td> <td>1.93x</td> </tr> <tr> <td>11</td> <td>392.66</td> <td>10915</td> <td>379.52</td> <td>59125</td> <td>1.03x</td> </tr> <tr> <td>12</td> <td>727.43</td> <td>19722</td> <td>484.12</td> <td>29792</td> <td>1.50x</td> </tr> <tr> <td>13</td> <td>1854.62</td> <td>63555</td> <td>877.29</td> <td>38756</td> <td>2.11x</td> </tr> <tr> <td>14</td> <td>950.25</td> <td>22262</td> <td>636.86</td> <td>44919</td> <td>1.50x</td> </tr> <tr> <td>15</td> <td>452.57</td> <td>20036</td> <td>193.19</td> <td>56982</td> <td>2.34x</td> </tr> </tbody> </table> Similarity - Signal correspondence TCADICS00, C. A. J. van Eijk - Internal state point mapping - Possible equivalent internal signals (wire, flop) - Cutpoint insertion - Text similarity IWLS’03, Matsumoto, Saito, and Fujita - If there are minor diffs between two RTLs, focus on the diffs - Control flow in ESL vs. control logic in RTL - Merging points in ESL to find RTL correspondence (software cutpoint, EMSOFT05 DAC06, Feng and Hu) Leverage Similarities - If multiple stages of flops can be proven equivalent, we can do assume-guarantee.
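The assume-guarantee idea sketched below: prove an internal stage of the two designs equivalent, then cut at the matched point and treat it as a free variable when proving the downstream logic. The two-stage designs and the exhaustive `prove()` check are hypothetical stand-ins for real netlists and a CEC engine:

```python
# Assume-guarantee over a matched internal point z.  Stage 1 of the golden
# and revised designs are proven equivalent; then z is cut (treated as a
# free variable) while proving stage 2.  All functions here are toy
# stand-ins for combinational logic cones.
from itertools import product

def stage1_g(a, b):            # golden stage 1: XOR
    return a ^ b

def stage1_r(a, b):            # revised stage 1: same XOR, different structure
    return (a or b) and not (a and b)

def stage2_g(z, c):            # golden stage 2, fed by the cutpoint z
    return z and c

def stage2_r(z, c):            # revised stage 2
    return c and z

def prove(f, g, arity):
    """Exhaustive Boolean check that f == g (a tiny stand-in for CEC)."""
    return all(bool(f(*v)) == bool(g(*v)) for v in product([0, 1], repeat=arity))

# Step 1 (assume): prove the first stages equivalent at the matched point z.
assume_ok = prove(stage1_g, stage1_r, 2)
# Step 2 (guarantee): cut at z and prove the second stages equivalent for
# every value of the now-free variable z.
guarantee_ok = prove(stage2_g, stage2_r, 2)
equivalent = assume_ok and guarantee_ok
```

If the stage-2 check failed for some value of the free variable z, that value might be unreachable at the real cutpoint, which is exactly the false non-equivalence risk that cutpoint constraints address.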
Cutpoints and Blackboxes Cutpoint Abstraction Theory - If $\forall x,\; f_1(x) \equiv g_1(x)$ and $\forall z, y,\; f_2(z, y) \equiv g_2(z, y)$, then $f_2(f_1(x), y) \equiv g_2(g_1(x), y)$, i.e., $F \equiv G$ - If $\exists z, y$ such that $f_2(z, y) \neq g_2(z, y)$, nothing can be concluded about $F \equiv G$ (the mismatch may be spurious) - Cutpoint theory is equivalent to - Uninterpreted functions in SMT and TRS (Term Rewriting Systems) - Blackboxing in CEC/SEC False non-equivalence: the tool reports not equivalent when the designs are in fact equivalent - CEC can report false non-eq due to cutpoints - Constraints are needed to remove the infeasible space Assume-Guarantee Reasoning with Cutpoints - Double-sided cutpoint - Prove $f_1(x) \equiv g_1(x)$ - Cut $F.z$ and $G.z$ - Assume $F.z \equiv G.z$ - Single-sided cutpoint - Prove $f_1(x) \equiv g_1(x)$ - Cut $G.z$ - Assume $G.z \equiv f_1(x)$ **Software Cutpoint** [DAC’06, EMSOFT’05, Feng and Hu] - Preliminary static analysis - Dependence analysis - Dataflow analysis: live variables analysis - Identify path merging points - Formal equivalence checking - Linearly unroll loops - Merge paths based on the preliminary analysis - Reduce logic blow-up - Cutpoint insertion ### Path Enumeration vs.
Linear BDD <table> <thead> <tr> <th rowspan="2">Example</th> <th colspan="2">Path Enumeration</th> <th colspan="2">Linear Building BDD</th> </tr> <tr> <th>Time(s)</th> <th>BDD size</th> <th>Time(s)</th> <th>BDD size</th> </tr> </thead> <tbody> <tr> <td>EX20-8</td> <td>241.24</td> <td>28</td> <td>0.28</td> <td>61</td> </tr> <tr> <td>EX20-16</td> <td colspan="2">time out</td> <td>89.01</td> <td>1746</td> </tr> <tr> <td>EX20-32</td> <td colspan="2">time out</td> <td colspan="2">mem out</td> </tr> <tr> <td>EX20-64</td> <td colspan="2">time out</td> <td colspan="2">mem out</td> </tr> <tr> <td>EX97-8</td> <td>4229.44</td> <td>183</td> <td>1.46</td> <td>92</td> </tr> <tr> <td>EX97-16</td> <td colspan="2">time out</td> <td>1187.72</td> <td>1800</td> </tr> <tr> <td>EX97-32</td> <td colspan="2">time out</td> <td colspan="2">mem out</td> </tr> <tr> <td>EX97-64</td> <td colspan="2">time out</td> <td colspan="2">mem out</td> </tr> <tr> <td>EX251-12</td> <td colspan="2">time out</td> <td>309.18</td> <td>1843</td> </tr> <tr> <td>EX251-16</td> <td colspan="2">time out</td> <td colspan="2">mem out</td> </tr> <tr> <td>EX251-32</td> <td colspan="2">time out</td> <td colspan="2">mem out</td> </tr> <tr> <td>EX251-64</td> <td colspan="2">time out</td> <td colspan="2">mem out</td> </tr> </tbody> </table> ### Linear BDD vs. Early Cutpoints <table> <thead> <tr> <th rowspan="2">Example</th> <th colspan="2">Linear Building BDD</th> <th colspan="2">Early Cutpoints</th> </tr> <tr> <th>Time(s)</th> <th>BDD size</th> <th>Time(s)</th> <th>BDD size</th> </tr> </thead> <tbody> <tr> <td>EX20-8</td> <td>0.28</td> <td>61</td> <td>0.11</td> <td>58</td> </tr> <tr> <td>EX20-16</td> <td>89.01</td> <td>1746</td> <td>0.24</td> <td>60</td> </tr> <tr> <td>EX20-32</td> <td colspan="2">mem out</td> <td>0.53</td> <td>64</td> </tr> <tr> <td>EX20-64</td> <td colspan="2">mem out</td> <td>1.35</td> <td>72</td> </tr> <tr> <td>EX97-8</td> <td>1.46</td> <td>92</td> <td>0.51</td> <td>64</td> </tr> <tr> <td>EX97-16</td> <td>1187.72</td> <td>1800</td> <td>1.10</td> <td>73</td> </tr> <tr> <td>EX97-32</td> <td colspan="2">mem out</td> <td>2.35</td> <td>95</td> </tr> <tr> <td>EX97-64</td> <td colspan="2">mem out</td> <td>5.41</td> <td>136</td> </tr> <tr> <td>EX251-12</td> <td>309.18</td> <td>1843</td> <td>0.64</td> <td>66</td> </tr> <tr> <td>EX251-16</td> <td colspan="2">mem out</td> <td>1.09</td> <td>71</td> </tr> <tr> <td>EX251-32</td> <td colspan="2">mem out</td> <td>7.45</td> <td>170</td> </tr> <tr> <td>EX251-64</td> <td colspan="2">mem out</td> <td>16.81</td> <td>327</td> </tr> </tbody> </table> **SEC in Industry** - **Leverage existing formal verification tools** - CEC resolves a subset by flattening/remodeling - Model checking tool +
assertions - Small block-level designs - Cycle-accurate equivalence - **Apply SEC tools** - Calypto SLEC - JasperGold SEC - IBM SixthSense - Synopsys ESP-CV (RTL vs Spice), VC Formal SEQ, Hector - In-house tools --- **SLEC Frontend Architecture** Taken from references 1 and 2 - SystemC - Sys.Verilog - Verilog - language X - CPT API - CPT - CPT to CDB xforms - CDB - CDB API - SLEC Verification Engine - Future Products - ESL Synthesis Engine - Loop Unrolling - Dependence Analysis - Flop/Mux inferencing - Constant Propagation - Dead code elimination - Smart memory modeling - Language Neutrality - Support multiple languages scalably - Language-independent transforms SLEC Verification Engine Architecture - Name-based mappings - Structural Decomposition - Orchestration - Proof Decomposition - Simulation Engine - Machine Acceleration - Sequential Analysis - Convergence Analysis - Inductive Analysis - Fixed-point Analysis - BLS - Solver - WLS - SAT - BDD - Simulation - IPBDP - WSAT Taken from references 1 and 2 SLEC: Throughput and Latency - Interface and compare-point alignment - How are the golden and revised designs synchronized? - When to compare? Fig taken from reference 2 SLEC – Setup and Operation - Specification vs. Implementation ESP-CV - SEC on RTL vs. Spice (switch-level) - Symbolic simulation (1, 0, X, Z, S) on the product machine - Bounded by simulation cycles or memory size - Terminates when the simulation cycles complete - If memory exceeds the limit, randomly picks values for symbols - Exploits symmetry inside macro (SRAM, ROM) models – efficient models for G and R to reduce the size of the symbolic expressions What We’ve Learned - SEC basics - Flattening - Interface alignment - Reachability analysis on the product state machine - Some research ideas - Redundancy removal based on TBV - Partitioning - Software cutpoint insertion - Industry tools – SLEC, ESP-CV References 1. Developing a commercially viable sequential equivalence checker.
Anmol Mathur, Calypto Design Systems.
2. SLEC user manual, Calypto.
4. ESP user manual, Synopsys.
8. C. A. J. van Eijk, "Sequential equivalence checking based on structural similarities," IEEE TCAD, July 2000.
---
**Natural Language Processing and Program Analysis for Supporting Todo Comments as Software Evolves**

Pengyu Nie, Junyi Jessy Li, Sarfraz Khurshid, Raymond Mooney, and Milos Gligoric
The University of Texas at Austin
{pynie@, jessy@austin., khurshid@ece., mooney@cs., gligoric@}utexas.edu

**Abstract**

Natural language elements (e.g., API comments, todo comments) form a substantial part of software repositories. While developers routinely use many natural language elements (e.g., todo comments) for communication, the semantic content of these elements is often neglected by software engineering techniques and tools. Additionally, as software evolves and development teams re-organize, these natural language elements are frequently forgotten, or just become outdated, imprecise, and irrelevant. We envision several techniques, which combine natural language processing and program analysis, to help developers maintain their todo comments. Specifically, we propose techniques to synthesize code from comments, make comments executable, answer questions in comments, improve comment quality, and detect dangling comments.

**Introduction**

Natural language elements form a substantial part of software repositories. These elements are used to communicate between users and developers (e.g., API comments, bug reports, and feature requests), and among developers (e.g., todo comments). Todo comments contain invaluable data that describe changes to code that can improve software maintainability, reliability, and quality. Despite occurring frequently in practice and containing valuable information, these elements, because of their informal nature, are largely not exploited by existing software engineering tools. Research on combining program analysis and natural language processing (NLP), which recently started to gain some traction, is in its infancy (Ernst 2017; Arnaoudova et al. 2015; Hindle et al. 2012; Oda et al.
2015; Allamanis, Peng, and Sutton 2016; Vasilescu, Casalnuovo, and Devanbu 2017; Raychev, Vechev, and Krause 2015; Nguyen et al. 2012), and the existing work, although novel, mostly neglected comments that are used to communicate among the developers (Storey et al. 2008; Sridhara 2016). In this position paper, we argue about the importance of content in todo comments and envision several techniques to automatically maintain and resolve those comments.

This position paper is to a large extent inspired by our extensive analysis of a large corpus of open-source projects. Specifically, we analyzed over 30k open-source projects, which are available on GitHub, totaling 585 million lines of code (not counting comments). We found that these projects include over 297 million lines of comments (~30% of the total lines). Our analysis also uncovered more than 700k todo comments in the used corpus. We manually inspected (and discussed) hundreds of comments, code and comment changes, and commit messages. In the following sections, we will frequently refer to this dataset and our findings related to it. All examples of code and comments that we provide in this paper are taken from one of the analyzed open-source projects.

This paper mostly focuses on todo comments that contain valuable information on increasing software quality, performance, maintenance, and reliability. We consider the following three categories of todo comments. First, *task comments* explain what features are currently not supported or what optimizations need to be implemented (e.g., from the Google Guava project: “For optimal performance, use a binary search when `targets.size() < size()/log(size())`”). Second, *trigger-action comments* talk about changes to the code repository that would be necessary if something else is modified by developers (e.g., from Guava: “check more preconditions (as `bufferSize >= chunkSize`) if this is ever public”).
Finally, *question comments* are concerned with alternative implementations, potential optimizations, and testing, which may be explored by developers only if time permits (e.g., from Guava: “Is this faster than `System.arraycopy()` for small arrays?”).

Regardless of the category of todo comments, as software evolves and development teams re-organize, these comments may be dangling, i.e., resolved but forgotten (Storey et al. 2008; Sridhara 2016). For example, a trigger may hold (e.g., “if this is ever public”) but the action may not be executed by developers (for a very long time or ever), and developers may never have enough time to consider alternative algorithms and fine-tune their existing implementations. With the goal to help developers increase the reliability of their software, we propose several techniques to (1) synthesize code described in task comments, (2) make trigger-action comments executable, (3) answer question comments, (4) improve the quality of all todo comments, and (5) automatically detect dangling comments.

**Techniques**

This section describes the basic idea behind each technique and the way we will approach its implementation.

**Synthesizing Error-Reporting Code**

We plan to develop lightweight synthesis techniques to generate error-reporting code for unsupported cases that are documented by developers in task comments (e.g., from Guava: “support array types”). First, we will identify comments that document unsupported cases. To this end, we will explore possible supervision signals from resolved comments and their corresponding code changes, crowdsourced annotation, and semantic parsing of the comments. Second, we will synthesize error-reporting code that follows the style used in the codebase (e.g., throw an exception or return a special value from a function).
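As a toy illustration of this second step (not the paper's technique; the `synthesize_guard` helper and its comment pattern are hypothetical), a heuristic might map a task comment such as “support array types” to an error-reporting stub in the codebase's preferred style:

```python
import re

def synthesize_guard(todo_comment, style="exception"):
    """Map a task comment describing an unsupported case to an
    error-reporting stub (toy heuristic, hypothetical helper)."""
    m = re.match(r"(?i)support\s+(.+)", todo_comment.strip())
    feature = m.group(1) if m else todo_comment.strip()
    if style == "exception":
        # codebase style: throw an exception for unsupported cases
        return f'raise NotImplementedError("{feature} not supported yet")'
    # alternative codebase style: return a special value
    return f"return None  # {feature} not supported yet"

print(synthesize_guard("support array types"))
# → raise NotImplementedError("array types not supported yet")
```

A real implementation would have to infer the prevailing error-reporting style from the surrounding code rather than take it as a parameter.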
Note that our goal is not to work on full-blown program synthesis, which would be interesting but challenging (e.g., Polikarpova, Kuraj, and Solar-Lezama (2016)), but rather to focus on the specific domain of error-reporting. Basically, our goal is to make the existing comments observable during program execution by reporting an appropriate message for unsupported cases.

**Extracting Executable Comments**

We will develop techniques to help software engineers encode their trigger-action comments as executable code statements. This will help with repository maintenance, because developers will not need to manually check their todo comments; instead, the executable statements will be automatically triggered when appropriate. We show several examples of trigger-action comments in Table 1 (the top half). We found that ~10% of all todo comments (in our corpus) belong to this comment category. While it would be infeasible to support every comment written in the trigger-action style, we plan to focus on those tasks that update the codebase (e.g., transformations of abstract syntax trees) when triggers are satisfied.

Our initial step is to develop a domain-specific language embedded in Java to be used to: (1) query the static features of the codebase, e.g., the required Java version, and (2) specify code transformations, e.g., remove a method from a class. Figure 1 shows two examples of trigger-action comments encoded in our framework (named TRIGIT); the original todo comments are crossed out and the statements for our framework are highlighted. In the first example, we use our framework to check a modifier of the current method; if the method becomes public, the code guarded by the trigger should become a part of the compiled class. In the second example, we specify that a variable should be removed if the required Java version is higher than 1.5; the required Java version can be obtained from a build script.
(Note that the statements/expressions that use the variables need to be annotated too, but we do not show this due to space limitations.) The evaluation of the triggers will be done statically (once code is compiled), as the queries should not depend on the dynamic behavior of the program. Our tool, which can be implemented as a compiler plugin, will automatically remove the triggers and perform the program transformations. Note that the user would still be able to inspect/approve the changes (e.g., by executing git diff). As the transformation engine, we will use existing open-source platforms, e.g., Eclipse, or program transformation systems, e.g., Cordy et al. (2004). The language design will be guided by examples, and we will evolve the language to support cases that we encounter in the future.

Our second step is to automatically discover trigger-action comments present in a codebase and recover the corresponding triggers and actions by mining explicit condition relations within the content of the todo comments; explicit discourse relations can be classified with adequate accuracy (Pitler et al. 2008). In the third step, we will develop automated migration from comments to the TRIGIT specifications, which will follow our recent work on language to code for if-this-then-that (IFTTT) recipes (Quirk, Mooney, and Galley 2015). Specifically, we will train a semantic parser to map trigger-action comments into executable code using supervision automatically extracted from the code changes made when a todo comment is resolved. This supervision may be noisy, since not all code changes may be directly related to resolving the todo comment, but our previous work on IFTTT shows that noisy, automatically extracted supervision from pairing comments and code can be tolerated reasonably well.

**Answering Questions From Comments**

We will develop techniques to help software engineers make informed decisions about questions that are asked in todo comments.
In our preliminary studies, we discovered that developers ask questions in todo comments more than 10% of the time; we obtained this number by counting todo comments that contain “?”. Some of these questions are shown in Table 1 (the bottom half). Many of the questions are related to code optimization, program transformation, or testing. Our plan is to focus on techniques that will address these three types of questions.

First, to answer questions related to optimizations, we will extract suggested code modifications from comments, apply those modifications, profile the code (by executing existing test suites), and evaluate the performance with profiles (on various machines). Second, to answer questions related to tests, we will develop techniques that extract test inputs from a question and generate new tests with those inputs; these new tests will be obtained by adjusting an automated test generation tool (e.g., Randoop (Pacheco et al. 2007)) or by extending existing (manually written) tests. Third, to answer questions related to code structure, we will extract suggested changes (from Guava: “Add getters returning rowKeyToIndex and columnKeyToIndex?”), perform the changes, and measure the quality of the code in terms of naturalness (Hindle et al. 2012).

Our question classification system will also learn from how todo comments are answered as software evolves (e.g., files and functions that are modified and language artifacts that are added or edited); we can also learn from actions taken by developers. As some of the questions may be open-ended, we plan to develop an interactive dialog interface, which we recently used for language to code translation (Chaurasia and Mooney 2017). We plan to use dialog systems to clarify user intent and gather information; in our case, when a question is initially asked.

**Improving Todo Comments**

We will develop techniques to help software engineers write meaningful todo comments.
While manually analyzing hundreds of todo comments, we found a number of comments that were hard to understand even after reading the code near those comments. We were also in disagreement about their meaning in several cases, and although we could understand a comment (e.g., from the Square Retrofit project: “TODO non-suck message”), it was clear that any technique would have a hard time extracting any useful data from it.

Our initial task will be to detect todo comments that are not specific enough, as well as those comments that do not follow the conventions already used in the same project. The techniques that we will develop will build on our work on text specificity (Li and Nenkova 2015) and program analysis. When we detect an unspecific comment, we will either notify a developer to provide additional clarification, highlighting a part of the comment that does not follow the style (in a similar way that spellcheckers highlight typos in comments inside IDEs), or automatically reformat the comment to be consistent with other comments in the same repository. We will also provide automated comment style checkers, where the rules can be expressed by developers; this is similar to code style checkers, which are used in practice. Having specific comments that follow the same style will enable the techniques from the prior sections.

**Detecting Dangling Todo Comments**

Prior work has shown that developers may resolve todo comments but forget to remove these comments from source code (Storey et al. 2008; Sridhara 2016); these dangling comments can waste developers’ time during program comprehension and maintenance. We are working on a technique, based on machine learning, to automatically detect dangling todo comments. Our detection technique learns from existing software repositories. As mentioned earlier, we have already collected more than 700k todo comments. This large dataset provides examples of todo comments that were removed by developers (over 20k).
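One way to turn such removals into (noisy) labels can be sketched in Python. This is an illustration only, not the paper's model: the regex and the "possibly resolved" label are simplifying assumptions, and a real pipeline would work at the commit level rather than on two file snapshots.

```python
import re

# Matches both `#`- and `//`-style todo comments (a simplifying assumption)
TODO_RE = re.compile(r"(?:#|//)\s*TODO\b.*", re.IGNORECASE)

def extract_todos(source):
    """Collect the todo-comment texts appearing in a source string."""
    return {m.group(0).strip() for m in TODO_RE.finditer(source)}

def label_resolved_todos(old_version, new_version):
    """Distant-supervision heuristic: a todo present in the old revision
    but absent from the new one is labeled as possibly resolved."""
    return extract_todos(old_version) - extract_todos(new_version)

old = "x = 1  # TODO support array types\ny = 2  # TODO tune buffer size\n"
new = "x = handle_arrays()\ny = 2  # TODO tune buffer size\n"
assert label_resolved_todos(old, new) == {"# TODO support array types"}
```

Labels produced this way are noisy (a comment may be deleted without being resolved), which is exactly why the text treats them as distant supervision rather than ground truth.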
We are using these examples as distant supervision signals, where we are exploring automatic labeling of examples (e.g., todo comments that are in the same file with removed todo comments). Our models are exploiting commit messages and static code analysis of changes. In the future, we plan to also utilize software histories to extract the necessary context from when todo comments were introduced. We will also reason about co-evolution of code and comments from when a todo comment was introduced until it was resolved by a developer. Specifically, for each code change, we will compute its distance from todo comments, word similarity with each comment, and code structure that may be described in a comment. These sources of information provide complementary views to feature development and complementary models, so we plan to build on our prior work in co-training and ensemble models.

---

**Table 1: Example todo comments in open-source projects**

<table>
<thead>
<tr>
<th>Project (on GitHub)</th>
<th>File (.java)</th>
<th>Todo Comments</th>
</tr>
</thead>
<tbody>
<tr><td>Apache/Incubator-wave</td><td>Pretty</td><td>Remove this if HtmlViewImpl implements getAttributes</td></tr>
<tr><td>Apache/Struts</td><td>FreemarkerResultMockedTest</td><td>Remove expected/SDK16 and a1 if after switching to Java 1.6</td></tr>
<tr><td>Apache/Poi</td><td>TestXSSFBugs</td><td>Delete this test case when RH60B1B and 2BK are implemented</td></tr>
<tr><td>Google/Guava</td><td>Types</td><td>Once we are on Java 8, delete this abstraction</td></tr>
<tr><td>Google/Guava</td><td>AbstractStreamingHasher</td><td>Check preconditions as bufferSize &gt;= chunkSize if this is ever public</td></tr>
<tr><td>Google/Guava</td><td>MapTest</td><td>Replace with Ascii caseInsensitiveEquivalence when it exists</td></tr>
<tr><td>KangProject/Frameworks_base</td><td>SoCertificate</td><td>If deprecated constructors are removed, this should always be available</td></tr>
<tr><td>Morristech/Gwt</td><td>DefaultFilters</td><td>This class needs to be revisited, as GWT's Ant is upgraded</td></tr>
<tr><td>Morristech/Gwt</td><td>Simplifier</td><td>If the AST were normalized, we wouldn't need this</td></tr>
<tr><td>Andyglick/Hk2-fork</td><td>AbstractRepositoryImpl</td><td>Is it allowed to call the initialize method multiple times?</td></tr>
<tr><td>Apache/Net</td><td>IMAPReply</td><td>Would lookingAt() be more efficient? If so, then drop trailing .* from patterns</td></tr>
<tr><td>Google/Guava</td><td>ArrayTable</td><td>Add getters returning rowKeyToIndex and columnKeyToIndex?</td></tr>
<tr><td>Google/Guava</td><td>EvictingQueue</td><td>Do we want to checkForNull each element in containsAll and retainAll?</td></tr>
<tr><td>Eclipse/CDT</td><td>LJvmEnvironmentVariableSupplier</td><td>Is this actually called anywhere?</td></tr>
<tr><td>Eclipse/CDT</td><td>EvalBinary</td><td>What if the composite being accessed is not an array but a structure?</td></tr>
<tr><td>Eclipse/Mwec</td><td>PluginExtensionManager</td><td>Test: what happens when a handler is not there? Exception?</td></tr>
<tr><td>JetBrains/Jdk8u_jaxp</td><td>NodeSet</td><td>What happens if index is out of range?</td></tr>
<tr><td>Square/Okhttp</td><td>HtmlViewImpl</td><td>Test case for empty continuation header</td></tr>
</tbody>
</table>

---

**Related Work**

Li et al. (2006) used text classification to validate the representativeness of their study of bug characteristics. Fluri, Wursch, and Gall (2007) empirically showed that code and comments frequently co-evolve. Padioleau, Tan, and Zhou (2009) manually studied over one thousand comments, and found that 50% of comments can be leveraged by various techniques. Haouari, Sahraoui, and Langlais (2011) introduced a taxonomy of comments and found that todo comments are the second most common type of comments. Movshovitz-Attias and Cohen (2013) used topic modeling and language models to generate comments from Java source files. Several works tackled automated generation of commit messages and mining relations from commit messages (Linares-Vásquez et al. 2015; Jiang and McMillan 2017; Andersson, Ericsson, and Wingkvist 2014; Loyola, Marrese-Taylor, and Matsuo 2017). Tan et al. (2007) detected inconsistencies between code and comments and proposed a technique to test Javadoc comments.
**Conclusion**

We argued that comments used to communicate among developers (todo comments) contain invaluable content that is currently neglected. We described several techniques – synthesizing code from comments, making comments executable, answering questions in comments, improving comment quality, and detecting dangling comments. These techniques, based on natural language processing and program analysis, have the potential to substantially simplify software maintenance and increase software reliability.

**Acknowledgments**

We thank Rishabh Rai for the initial discussion on this work. This work was partially supported by the US National Science Foundation under Grant No. CCF-1704790.

**References**

Andersson, R.; Ericsson, M.; and Wingkvist, A. 2014. Mining relations from Git commit messages: An experience report. In SLTC.

Chaurasia, S., and Mooney, R. 2017. Dialog for language to code. In IJCNLP.

Ernst, M. D. 2017. Natural language is a programming language: Applying natural language processing to software development. In SNAPL, volume 71.

Haouari, D.; Sahraoui, H.; and Langlais, P. 2011. How good is your comment? A study of comments in Java programs. In ESEM.

Li, J. J., and Nenkova, A. 2015. Fast and accurate prediction of sentence specificity. In AAAI.

Loyola, P.; Marrese-Taylor, E.; and Matsuo, Y. 2017. A neural architecture for generating natural language descriptions from source code changes. In ACL.

Padioleau, Y.; Tan, L.; and Zhou, Y. 2009. Listening to programmers’ taxonomies and characteristics of comments in operating system code. In ICSE.

Polikarpova, N.; Kuraj, I.; and Solar-Lezama, A. 2016. Program synthesis from polymorphic refinement types. In PLDI.

Quirk, C.; Mooney, R.; and Galley, M. 2015.
Language to code: Learning semantic parsers for if-this-then-that recipes. In ACL.

Raychev, V.; Vechev, M.; and Krause, A. 2015. Predicting program properties from "Big Code". In POPL.

Sridhara, G. 2016. Automatically detecting the up-to-date status of ToDo comments in Java programs. In ISEC.

Storey, M.-A.; Ryall, J.; Bull, R. I.; Myers, D.; and Singer, J. 2008. TODO or to bug. In ICSE.

Tan, L.; Yuan, D.; Krishna, G.; and Zhou, Y. 2007. /*iComment: Bugs or bad comments?*/. In SOSP.

Vasilescu, B.; Casalnuovo, C.; and Devanbu, P. T. 2017. Recovering clear, natural identifiers from obfuscated JS names. In FSE.
{"Source-Url": "http://www.cs.utexas.edu/~ai-lab/downloadPublication.php?filename=http%3A%2F%2Fwww.cs.utexas.edu%2Fusers%2Fml%2Fpapers%2Fnie.nlse18.pdf&pubid=127683", "len_cl100k_base": 4853, "olmocr-version": "0.1.48", "pdf-total-pages": 4, "total-fallback-pages": 0, "total-input-tokens": 13627, "total-output-tokens": 5544, "length": "2e12", "weborganizer": {"__label__adult": 0.0003733634948730469, "__label__art_design": 0.0002048015594482422, "__label__crime_law": 0.0003058910369873047, "__label__education_jobs": 0.0006618499755859375, "__label__entertainment": 4.649162292480469e-05, "__label__fashion_beauty": 0.00013887882232666016, "__label__finance_business": 0.0001614093780517578, "__label__food_dining": 0.0002541542053222656, "__label__games": 0.00035858154296875, "__label__hardware": 0.0004270076751708984, "__label__health": 0.00032830238342285156, "__label__history": 0.00011539459228515624, "__label__home_hobbies": 5.8710575103759766e-05, "__label__industrial": 0.00019299983978271484, "__label__literature": 0.00022840499877929688, "__label__politics": 0.00021207332611083984, "__label__religion": 0.00032806396484375, "__label__science_tech": 0.002361297607421875, "__label__social_life": 9.870529174804688e-05, "__label__software": 0.0039215087890625, "__label__software_dev": 0.98828125, "__label__sports_fitness": 0.0002372264862060547, "__label__transportation": 0.0003390312194824219, "__label__travel": 0.00015342235565185547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23371, 0.01454]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23371, 0.50433]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23371, 0.88345]], "google_gemma-3-12b-it_contains_pii": [[0, 4913, false], [4913, 10105, null], [10105, 17169, null], [17169, 23371, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4913, true], [4913, 10105, null], 
[10105, 17169, null], [17169, 23371, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23371, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23371, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23371, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23371, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23371, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23371, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23371, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23371, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23371, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23371, null]], "pdf_page_numbers": [[0, 4913, 1], [4913, 10105, 2], [10105, 17169, 3], [17169, 23371, 4]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23371, 0.19608]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
857ad917925b23ca3d9bb77194344cff5057111d
**An Introduction to CUDA/OpenCL and Manycore Graphics Processors**

Bryan Catanzaro, NVIDIA Research

**Overview**

- Terminology: Multicore, Manycore, SIMD
- The CUDA and OpenCL programming models
- Understanding how CUDA maps to NVIDIA GPUs
- Thrust

**Heterogeneous Parallel Computing**

- **Multicore CPU**: fast serial processing
- **Manycore GPU**: scalable parallel processing

**Multicore and Manycore**

- **Multicore**: yoke of oxen; each core optimized for executing a single thread
- **Manycore**: flock of chickens; cores optimized for aggregate throughput, deemphasizing individual performance (apologies to Seymour Cray)

## Multicore & Manycore, cont.

<table>
<thead>
<tr>
<th>Specifications</th>
<th>Westmere-EP</th>
<th>Fermi (Tesla C2050)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Processing Elements</td>
<td>6 cores, 2 issue, 4 way SIMD @3.46 GHz</td>
<td>14 SMs, 2 issue, 16 way SIMD @1.15 GHz</td>
</tr>
<tr>
<td>Resident Strands/Threads (max)</td>
<td>6 cores, 2 threads, 4 way SIMD: 48 strands</td>
<td>14 SMs, 48 SIMD vectors, 32 way SIMD: 21504 threads</td>
</tr>
<tr>
<td>SP GFLOP/s</td>
<td>166</td>
<td>1030</td>
</tr>
<tr>
<td>Memory Bandwidth</td>
<td>32 GB/s</td>
<td>144 GB/s</td>
</tr>
<tr>
<td>Register File</td>
<td>6 kB (?)</td>
<td>1.75 MB</td>
</tr>
<tr>
<td>Local Store/L1 Cache</td>
<td>192 kB</td>
<td>896 kB</td>
</tr>
<tr>
<td>L2 Cache</td>
<td>1536 kB</td>
<td>0.75 MB</td>
</tr>
<tr>
<td>L3 Cache</td>
<td>12 MB</td>
<td>-</td>
</tr>
</tbody>
</table>

**Westmere-EP (32nm)** vs. **Fermi (40nm)**

**Why Heterogeneity?**

- Different goals produce different designs
  - Manycore assumes the workload is highly parallel
  - Multicore must be good at everything, parallel or not
- Multicore: minimize latency experienced by one thread
  - lots of big on-chip caches
  - extremely sophisticated control
- Manycore: maximize throughput of all threads
  - lots of big ALUs
  - multithreading can hide latency ...
so skip the big caches
  - simpler control, cost amortized over ALUs via SIMD

**SIMD**

- Single Instruction Multiple Data architectures make use of data parallelism
- We care about SIMD because of area and power efficiency concerns
  - amortize control overhead over SIMD width
- Parallelism exposed to programmer & compiler
- OpenMP / Pthreads / MPI all neglect SIMD parallelism, because it is difficult for a compiler to exploit SIMD
  - How do you deal with sparse data & branches?
  - Many languages (like C) are difficult to vectorize
- Most common solution:
  - either forget about SIMD and pray the autovectorizer likes you
  - or instantiate intrinsics (assembly language), which requires a new code version for every SIMD extension

**A Brief History of x86 SIMD Extensions**

- **MMX** - 8*8 bit Int
- **SSE** - 4*32 bit FP
- **SSE2** - 2*64 bit FP
- **SSE3** - Horizontal ops
- **SSSE3**
- **SSE4.1**, **SSE4.2**
- **AVX** - 8*32 bit FP, 3 operand, 256 bit
- **AVX+FMA**
- **AVX2** - 256 bit Int ops, Gather
- **LRB** - 512 bit
- AMD branches: **3dNow!**, **SSE4.A**, **SSE5**

**What to do with SIMD?**

- Neglecting SIMD is becoming more expensive
  - AVX: 8-way SIMD, Larrabee: 16-way SIMD, NVIDIA: 32-way SIMD, ATI: 64-way SIMD
- This problem composes with thread-level parallelism
- We need a programming model which addresses both problems

**The CUDA Programming Model**

- CUDA is a programming model designed for:
  - manycore architectures
  - wide SIMD parallelism
  - scalability
- CUDA provides:
  - a thread abstraction to deal with SIMD
  - synchronization & data sharing between small groups of threads
- CUDA programs are written in C++ with minimal extensions
- OpenCL is inspired by CUDA, but HW & SW vendor neutral
  - similar programming model, C only for device code

**Hierarchy of Concurrent Threads**

- Parallel **kernels** composed of many threads
  - all threads execute the same sequential program
- Threads are grouped into **thread blocks**
  - threads in the same block can cooperate
- Threads/blocks have unique IDs

**What is a CUDA Thread?**
- Independent thread of execution
  - has its own program counter, variables (registers), processor state, etc.
  - no implication about how threads are scheduled
- CUDA threads might be **physical** threads
  - as mapped onto NVIDIA GPUs
- CUDA threads might be **virtual** threads
  - might pick 1 block = 1 physical thread on a multicore CPU

**What is a CUDA Thread Block?**

- Thread block = a (data) **parallel task**
  - all blocks in a kernel have the same entry point
  - but may execute any code they want
- Thread blocks of a kernel must be **independent** tasks
  - program valid for *any interleaving* of block executions

**CUDA Supports:**

- **Thread parallelism**: each thread is an independent thread of execution
- **Data parallelism**: across threads in a block; across blocks in a kernel
- **Task parallelism**: different blocks are independent; independent kernels executing in separate streams

**Synchronization**

- Threads within a block may synchronize with barriers

```c
... Step 1 ...
__syncthreads();
... Step 2 ...
```

- Blocks coordinate via atomic memory operations
  - e.g., increment shared queue pointer with `atomicInc()`
- Implicit barrier between dependent kernels

```c
vec_minus<<<nbblocks, blksize>>>(a, b, c);
vec_dot<<<nbblocks, blksize>>>(c, c);
```

**Blocks must be independent**

- Any possible interleaving of blocks should be valid
  - presumed to run to completion without pre-emption
  - can run in any order
  - can run concurrently OR sequentially
- Blocks may coordinate but not synchronize
  - shared queue pointer: OK
  - shared lock: BAD ... can easily deadlock
- Independence requirement gives scalability

**Scalability**

- Manycore chips exist in a diverse set of configurations
- CUDA allows one binary to target all these chips
- Thread blocks bring scalability!
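The scalability argument rests on the flat global index `i = blockIdx.x * blockDim.x + threadIdx.x` that each thread computes. A quick Python sketch (illustrative only, not part of the original slides) shows that a `<<<num_blocks, block_dim>>>` launch covers every element exactly once, regardless of the order in which blocks are scheduled:

```python
def global_indices(num_blocks, block_dim):
    """Flat index each CUDA thread would compute:
    i = blockIdx.x * blockDim.x + threadIdx.x"""
    return [b * block_dim + t
            for b in range(num_blocks)
            for t in range(block_dim)]

# A <<<4, 256>>> launch covers elements 0..1023, each exactly once,
# so any block execution order yields the same coverage.
assert sorted(global_indices(4, 256)) == list(range(1024))
```

Because coverage is independent of block order, the hardware is free to run blocks sequentially on a small chip or concurrently on a large one.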
**Hello World: Vector Addition**

```c
// Compute vector sum C = A + B
// Each thread performs one pairwise addition
__global__ void vecAdd(float* a, float* b, float* c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    c[i] = a[i] + b[i];
}

int main() {
    // Run N/256 blocks of 256 threads each
    vecAdd<<<N/256, 256>>>(d_a, d_b, d_c);
}
```

**Memory model**

- Thread → per-thread local memory
- Block → per-block shared memory
- Sequential kernels (Kernel 0, Kernel 1, ...) → per-device global memory
- Host memory ↔ Device 0 memory, Device 1 memory, via CUDA `memcpy()`

**Hello World: Managing Data**

```c
int main() {
    int N = 256 * 1024;
    float* h_a = (float*) malloc(sizeof(float) * N);
    // Similarly for h_b, h_c. Initialize h_a, h_b

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, sizeof(float) * N);
    // Similarly for d_b, d_c

    cudaMemcpy(d_a, h_a, sizeof(float) * N, cudaMemcpyHostToDevice);
    // Similarly for d_b

    // Run N/256 blocks of 256 threads each
    vecAdd<<<N/256, 256>>>(d_a, d_b, d_c);

    cudaMemcpy(h_c, d_c, sizeof(float) * N, cudaMemcpyDeviceToHost);
}
```

**CUDA: Minimal extensions to C/C++**

- Declaration specifiers to indicate where things live

```c
__global__ void KernelFunc(...);  // kernel callable from host
__device__ void DeviceFunc(...);  // function callable on device
__device__ int GlobalVar;         // variable in device memory
__shared__ int SharedVar;         // in per-block shared memory
```

- Extend function invocation syntax for parallel kernel launch

```c
KernelFunc<<<500, 128>>>(...);  // 500 blocks, 128 threads each
```

- Special variables for thread identification in kernels

```c
dim3 threadIdx; dim3 blockIdx; dim3 blockDim;
```

- Intrinsics that expose specific operations in kernel code

```c
__syncthreads();  // barrier synchronization
```

**Using per-block shared memory**

- Variables shared across block

```c
__shared__ int *begin, *end;
```

- Scratchpad memory

```c
__shared__ int scratch[BLOCKSIZE];
scratch[threadIdx.x] = begin[threadIdx.x];
// ... compute on scratch values ...
begin[threadIdx.x] = scratch[threadIdx.x]; ``` - Communicating values between threads ``` scratch[threadIdx.x] = begin[threadIdx.x]; __syncthreads(); int left = scratch[threadIdx.x - 1]; ``` - Per-block shared memory is faster than L1 cache, slower than register file - It is relatively small: register file is 2-4x larger CUDA: Features available on GPU - Double and single precision (IEEE compliant) - Standard mathematical functions - `sinf`, `powf`, `atanf`, `ceil`, `min`, `sqrtf`, etc. - Atomic memory operations - `atomicAdd`, `atomicMin`, `atomicAnd`, `atomicCAS`, etc. - These work on both global and shared memory CUDA: Runtime support - Explicit memory allocation returns pointers to GPU memory - `cudaMalloc()`, `cudaFree()` - Explicit memory copy for host ↔ device, device ↔ device - `cudaMemcpy()`, `cudaMemcpy2D()`, ... - Texture management - `cudaBindTexture()`, `cudaBindTextureToArray()`, ... - OpenGL & DirectX interoperability - `cudaGLMapBufferObject()`, `cudaD3D9MapVertexBuffer()`, ... OpenCL - OpenCL is supported by AMD \{CPUs, GPUs\} and Nvidia - Intel, Imagination Technologies (purveyor of GPUs for iPhone/OMAP/etc.) 
are also on board

- OpenCL’s data parallel execution model mirrors CUDA, but with different terminology
- OpenCL has a rich task parallelism model
  - Runtime walks a dependence DAG of kernels/memory transfers

### CUDA and OpenCL Correspondence

<table>
<thead>
<tr>
<th>CUDA</th>
<th>OpenCL</th>
</tr>
</thead>
<tbody>
<tr>
<td>Thread</td>
<td>Work-item</td>
</tr>
<tr>
<td>Thread-block</td>
<td>Work-group</td>
</tr>
<tr>
<td>Global memory</td>
<td>Global memory</td>
</tr>
<tr>
<td>Constant memory</td>
<td>Constant memory</td>
</tr>
<tr>
<td>Shared memory</td>
<td>Local memory</td>
</tr>
<tr>
<td>Local memory</td>
<td>Private memory</td>
</tr>
<tr>
<td>__global__ function</td>
<td>__kernel function</td>
</tr>
<tr>
<td>__device__ function</td>
<td>no qualification needed</td>
</tr>
<tr>
<td>__constant__ variable</td>
<td>__constant variable</td>
</tr>
<tr>
<td>__device__ variable</td>
<td>__global variable</td>
</tr>
<tr>
<td>__shared__ variable</td>
<td>__local variable</td>
</tr>
</tbody>
</table>

OpenCL and SIMD

- SIMD issues are handled separately by each runtime
- AMD GPU Runtime
  - Vectorize over 64-way SIMD, but not over 4/5-way VLIW
  - Use float4 vectors in your code
- AMD CPU Runtime
  - No vectorization
  - Use float4 vectors in your code (float8 when AVX appears?)
- Intel CPU Runtime
  - Vectorization optional, using float4/float8 vectors still a good idea
- Nvidia GPU Runtime
  - Full vectorization, like CUDA
  - Prefers scalar code per work-item

Imperatives for Efficient CUDA Code

- Expose abundant fine-grained parallelism
  - need 1000’s of threads for full utilization
- Maximize on-chip work
  - on-chip memory orders of magnitude faster
- Minimize execution divergence
  - SIMT execution of threads in 32-thread warps
- Minimize memory divergence
  - warp loads and consumes complete 128-byte cache line

Mapping CUDA to Nvidia GPUs

- CUDA is designed to be functionally forgiving
  - However, to get good performance, one must understand how CUDA is mapped to Nvidia GPUs
- Threads: each thread is a SIMD vector lane
- Warps: a SIMD instruction acts on a “warp”
  - Warp width is 32 elements: *LOGICAL* SIMD width
- Thread blocks: each thread block is scheduled onto an SM
  - Peak efficiency requires multiple thread blocks per SM

The GPU is very deeply pipelined to maximize throughput. This means that performance depends on the number of thread blocks which can be allocated on a processor. Therefore, resource usage costs performance:

- More registers => Fewer thread blocks
- More shared memory usage => Fewer thread blocks

It is often worth trying to reduce register count in order to get more thread blocks to fit on the chip:

- For Fermi, target 20 registers or less per thread for full occupancy.
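The register/shared-memory tradeoff can be made concrete with a toy occupancy estimator. This is an illustrative sketch using Fermi-era per-SM limits (8 blocks, 48 warps, 32768 registers, 49152 bytes of shared memory), not the CUDA occupancy API; `KernelCfg`, `blocks_per_sm`, and `occupancy` are hypothetical names.

```cpp
#include <algorithm>
#include <cassert>

// Per-kernel resource usage (assumes regs_per_thread > 0).
struct KernelCfg {
    int threads_per_block;  // a multiple of 32 in practice
    int regs_per_thread;
    int smem_per_block;     // bytes; 0 = none
};

// How many blocks fit on one Fermi-class SM, limited by whichever
// resource runs out first: block slots, warp slots, registers, smem.
int blocks_per_sm(const KernelCfg& k) {
    int warps_per_block = (k.threads_per_block + 31) / 32;
    int by_blocks = 8;
    int by_warps  = 48 / warps_per_block;
    int by_regs   = 32768 / (k.regs_per_thread * k.threads_per_block);
    int by_smem   = k.smem_per_block ? 49152 / k.smem_per_block : by_blocks;
    return std::min({by_blocks, by_warps, by_regs, by_smem});
}

// Occupancy = resident warps / 48, as on the next slide.
double occupancy(const KernelCfg& k) {
    int warps_per_block = (k.threads_per_block + 31) / 32;
    return std::min(blocks_per_sm(k) * warps_per_block, 48) / 48.0;
}
```

With 256-thread blocks, 20 registers per thread leaves the warp limit as the binding constraint (full occupancy); pushing to 32 registers per thread drops the SM to 4 blocks, i.e. two-thirds occupancy — which is why the slide suggests targeting ~20 registers on Fermi.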
Occupancy (Constants for Fermi)

- The runtime tries to fit as many thread blocks simultaneously as possible onto an SM
  - The number of simultaneous thread blocks (B) is ≤ 8
- The number of warps per thread block (T) ≤ 32
- Each SM has scheduler space for 48 warps (W)
  - B * T ≤ W = 48
- The number of threads per warp (V) is 32
- B * T * V * Registers per thread ≤ 32768
- B * Shared memory (bytes) per block ≤ 49152/16384
  - Depending on Shared memory/L1 cache configuration
- Occupancy is reported as B * T / W

Nvidia GPU hardware handles control flow divergence and reconvergence

- Write scalar SIMD code, the hardware schedules the SIMD execution
- One caveat: `__syncthreads()` can’t appear in a divergent path
  - This will cause programs to hang
- Good performing code will try to keep the execution convergent within a warp
  - Warp divergence only costs because of a finite instruction cache

Memory, Memory, Memory

- A many core processor ≡ A device for turning a compute bound problem into a memory bound problem
  - Kathy Yelick, Berkeley
- Lots of processors, only one socket
- Memory concerns dominate performance tuning

Memory is SIMD too

- Virtually all processors have SIMD memory subsystems (cache line width)
- This has two effects:
  - Sparse access wastes bandwidth
    - 2 words used, 8 words loaded: ¼ effective bandwidth
  - Unaligned access wastes bandwidth
    - 4 words used, 8 words loaded: ½ effective bandwidth

Coalescing

- GPUs and CPUs both perform memory transactions at a larger granularity than the program requests (“cache line”)
- GPUs have a “coalescer”, which examines memory requests dynamically and coalesces them
- To use bandwidth effectively, when threads load, they should:
  - Present a set of unit strided loads (dense accesses)
  - Keep sets of loads aligned to vector boundaries

Data Structure Padding

- Multidimensional arrays are usually stored as monolithic vectors in memory.
- Care should be taken to assure aligned memory accesses for the necessary access pattern.
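The padding arithmetic is simple to sketch. The snippet below is an illustrative model, not the CUDA API (real device code would use `cudaMallocPitch`, which performs this rounding for you); `padded_pitch` and `effective_bandwidth` are hypothetical helpers.

```cpp
#include <cassert>
#include <cstddef>

// Round each row of a 2-D array up to the 128-byte transaction size so
// every row starts on an aligned boundary.
constexpr std::size_t kLineBytes = 128;  // memory transaction granularity

constexpr std::size_t padded_pitch(std::size_t row_bytes) {
    // round up to the next multiple of the transaction size
    return (row_bytes + kLineBytes - 1) / kLineBytes * kLineBytes;
}

// Fraction of transferred data actually consumed: e.g. 2 words used out
// of an 8-word transaction gives 1/4 effective bandwidth, as above.
constexpr double effective_bandwidth(std::size_t used_words,
                                     std::size_t loaded_words) {
    return static_cast<double>(used_words) / loaded_words;
}
```

For example, a row of 500 4-byte floats (2000 bytes) would be padded to a 2048-byte pitch, wasting a little storage so that every row access can be coalesced.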
- Problem: Sparse Matrix Vector Multiplication
  - How should we represent the matrix?
  - Can we take advantage of any structure in this matrix?
- Since this matrix has nonzeros only on diagonals, let’s project the diagonals into vectors
  - Sparse representation becomes dense
  - Launch a thread per row
  - Are we done?
- The straightforward diagonal projection is not aligned

Optimized Diagonal Representation

- Skew the diagonals again
- This ensures that all memory loads from matrix are coalesced
- Don’t forget padding!

Different data access patterns may also require transposing data structures. The cost of a transpose on the data structure is often much less than the cost of uncoalesced memory accesses. Use shared memory to handle block transposes.

Certainties: Death and Taxes

- Productivity is often in tension with efficiency
- This is often called the “abstraction tax”

The Concrete Tax

- Parallel programming also gives us a “concrete tax”
- How many of you have tried to write ... which is faster than a vendor supplied library?
- Divergent parallel architectures mean performance portability is increasingly elusive
- Low-level programming models tie you to a particular piece of hardware
  - And if you’re like me, often make your code slow
  - My SGEMM isn’t as good as NVIDIA’s

The Concrete Tax: A Case Study

- OpenCL experiment on CPU and GPU
  - Two optimized reductions, one for CPU, one for GPU
- Running GPU code on CPU:
  - 40X performance loss compared to CPU optimized code
- Running CPU code on GPU:
  - ~100X performance loss compared to GPU optimized code
- Concrete code led to overspecialization

Abstraction, cont.
- Reduction is one of the simplest parallel computations
- Performance differentials are even starker as complexity increases
- There’s a need for abstractions at many levels
  - Primitive computations (BLAS, Data-parallel primitives)
  - Domain-specific languages
- These abstractions make parallel programming more efficient and more productive

A C++ template library for CUDA

- Mimics the C++ STL
- Containers
  - On host and device
- Algorithms
  - Sorting, reduction, scan, etc.

```cpp
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdlib>

int main(void)
{
    // generate 32M random numbers on the host
    thrust::host_vector<int> h_vec(32 << 20);
    thrust::generate(h_vec.begin(), h_vec.end(), rand);

    // transfer data to the device
    thrust::device_vector<int> d_vec = h_vec;

    // sort data on the device (846M keys per sec on GeForce GTX 480)
    thrust::sort(d_vec.begin(), d_vec.end());

    // transfer data back to host
    thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());

    return 0;
}
```

Objectives

- Programmer productivity
  - Build complex applications quickly
  - Encourage generic programming
  - Leverage parallel primitives
- High performance
  - Efficient mapping to hardware

**Containers**

- Concise and readable code
- Avoids common memory management errors

```cpp
// allocate host vector with two elements
thrust::host_vector<int> h_vec(2);

// copy host vector to device
thrust::device_vector<int> d_vec = h_vec;

// write device values from the host
d_vec[0] = 13;
d_vec[1] = 27;

// read device values from the host
std::cout << "sum: " << d_vec[0] + d_vec[1] << std::endl;
```

Pair of iterators defines a range

```cpp
// allocate device memory
device_vector<int> d_vec(10);

// declare iterator variables
device_vector<int>::iterator begin = d_vec.begin();
device_vector<int>::iterator end = d_vec.end();
device_vector<int>::iterator middle = begin + 5;

// sum first and second halves
int sum_half1 = reduce(begin, middle);
int sum_half2 = reduce(middle, end);

// empty range
int empty = reduce(begin, begin);
```

Iterators act like pointers

```cpp
// declare iterator variables
device_vector<int>::iterator begin = d_vec.begin();
device_vector<int>::iterator end = d_vec.end();

// pointer arithmetic
begin++;

// dereference device iterators from the host
int a = *begin;
int b = begin[3];

// compute size of range [begin, end)
int size = end - begin;
```

Iterators

- Encode memory location
- Automatic algorithm selection

```cpp
// initialize random values on host
host_vector<int> h_vec(100);
generate(h_vec.begin(), h_vec.end(), rand);

// copy values to device
device_vector<int> d_vec = h_vec;

// compute sum on host
int h_sum = reduce(h_vec.begin(), h_vec.end());

// compute sum on device
int d_sum = reduce(d_vec.begin(), d_vec.end());
```

Algorithms

- **Elementwise operations**
  - for_each, transform, gather, scatter ...
- **Reductions**
  - reduce, inner_product, reduce_by_key ...
- **Prefix-Sums**
  - inclusive_scan, inclusive_scan_by_key ...
- **Sorting**
  - sort, stable_sort, sort_by_key ...

```cpp
// allocate memory
device_vector<int> A(10);
device_vector<int> B(10);
device_vector<int> C(10);

// transform A + B -> C
transform(A.begin(), A.end(), B.begin(), C.begin(), plus<int>());

// transform A - B -> C
transform(A.begin(), A.end(), B.begin(), C.begin(), minus<int>());

// multiply reduction
int product = reduce(A.begin(), A.end(), 1, multiplies<int>());
```

Algorithms

- Standard data types

```cpp
// allocate device memory
device_vector<int> i_vec = ...
device_vector<float> f_vec = ...

// sum of integers
int i_sum = reduce(i_vec.begin(), i_vec.end());

// sum of floats
float f_sum = reduce(f_vec.begin(), f_vec.end());
```

- Custom types and operators

struct negate_float2
{
    __host__ __device__
    float2 operator()(float2 a)
    {
        return make_float2(-a.x, -a.y);
    }
};

// declare storage
device_vector<float2> input = ...
device_vector<float2> output = ...
// create function object or ‘functor’
negate_float2 func;

// negate vectors
transform(input.begin(), input.end(), output.begin(), func);

// compare x component of two float2 structures
struct compare_float2
{
    __host__ __device__
    bool operator()(float2 a, float2 b)
    {
        return a.x < b.x;
    }
};

// declare storage
device_vector<float2> vec = ...;

// create comparison functor
compare_float2 comp;

// sort elements by x component
sort(vec.begin(), vec.end(), comp);

Convert iterators to raw pointers

```cpp
// allocate device vector
thrust::device_vector<int> d_vec(4);

// obtain raw pointer to device vector's memory
int * ptr = thrust::raw_pointer_cast(&d_vec[0]);

// use ptr in a CUDA C kernel
my_kernel<<< N / 256, 256 >>>(N, ptr);

// Note: ptr cannot be dereferenced on the host!
```

Recap

- Containers manage memory
  - Help avoid common errors
- Iterators define ranges
  - Know where data lives
- Algorithms act on ranges
  - Support general types and operators

Explicit versus implicit parallelism

- CUDA is explicit
  - Programmer’s responsibility to schedule resources
- Decompose algorithm into kernels
- Decompose kernels into blocks
- Decompose blocks into threads

SAXPY in CUDA:

```c
__global__ void SAXPY(int n, float a, float * x, float * y)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

SAXPY<<<n/256, 256>>>(n, a, x, y);
```

SAXPY in Thrust:

// C++ functor replaces __global__ function
struct saxpy
{
    float a;
    saxpy(float _a) : a(_a) {}

    __host__ __device__
    float operator()(float x, float y)
    {
        return a * x + y;
    }
};

transform(x.begin(), x.end(), y.begin(), y.begin(), saxpy(a));

Implicitly Parallel
- Algorithms expose
lots of fine-grained parallelism
- Generally expose $O(N)$ independent threads of execution
- Minimal constraints on implementation details
- Programmer identifies opportunities for parallelism
- Thrust determines explicit decomposition onto hardware
- Finding parallelism in sequential code is hard
- Mapping parallel computations onto hardware is easier

Productivity Implications

Consider a serial reduction:

```c
// sum reduction
int sum = 0;
for(i = 0; i < n; ++i)
    sum += v[i];
```

```c
// product reduction
int product = 1;
for(i = 0; i < n; ++i)
    product *= v[i];
```

```cpp
// max reduction
int max = 0;
for(i = 0; i < n; ++i)
    max = std::max(max, v[i]);
```

Compare to low-level CUDA:

```c
int sum = 0;
for(i = 0; i < n; ++i)
    sum += v[i];
```

```c
__global__ void block_sum(const float *input,
                          float *per_block_results,
                          const size_t n)
{
    extern __shared__ float sdata[];

    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

    // load input into __shared__ memory
    float x = 0;
    if(i < n)
    {
        x = input[i];
    }
    ...
```

Leveraging Parallel Primitives

- Use `sort` liberally

<table>
<thead>
<tr>
<th>data type</th>
<th>std::sort</th>
<th>tbb::parallel_sort</th>
<th>thrust::sort</th>
</tr>
</thead>
<tbody>
<tr>
<td>char</td>
<td>25.1</td>
<td>68.3</td>
<td>3532.2</td>
</tr>
<tr>
<td>short</td>
<td>15.1</td>
<td>46.8</td>
<td>1741.6</td>
</tr>
<tr>
<td>int</td>
<td>10.6</td>
<td>35.1</td>
<td>804.8</td>
</tr>
<tr>
<td>long</td>
<td>10.3</td>
<td>34.5</td>
<td>291.4</td>
</tr>
<tr>
<td>float</td>
<td>8.7</td>
<td>28.4</td>
<td>819.8</td>
</tr>
<tr>
<td>double</td>
<td>8.5</td>
<td>28.2</td>
<td>358.9</td>
</tr>
</tbody>
</table>

(std::sort and tbb::parallel_sort on an Intel Core i7 950; thrust::sort on an NVIDIA GeForce 480)

Input-Sensitive Optimizations

(figure: sorting rate in Mkey/s versus key bits)

© 2011 NVIDIA Corporation

Leveraging Parallel Primitives

- Combine `sort` with `reduce_by_key`
  - Keyed reduction
  - Bring like items together, collapse
  - Poor man’s MapReduce
- Can often be faster than custom solutions
  - I wrote an image histogram routine in CUDA
  - Bit-level optimizations and shared memory atomics
  - Was 2x slower than `thrust::sort` + `thrust::reduce_by_key`

Thrust on Google Code

- Quick Start Guide
- Examples
- Documentation
- Mailing list (thrust-users)

- Manycore processors provide useful parallelism
- Programming models like CUDA and OpenCL enable productive parallel programming
- They abstract SIMD, making it easy to use wide SIMD vectors
- CUDA and OpenCL encourage SIMD friendly, highly scalable algorithm design and implementation
- Thrust is a productive C++ library for CUDA development

Questions?

Bryan Catanzaro
bcatanzaro@nvidia.com
http://research.nvidia.com
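As an appendix-style illustration of the sort-plus-keyed-reduction idiom discussed above: the sketch below is a sequential CPU model of what `thrust::sort` followed by `thrust::reduce_by_key` computes in parallel (the function name `sort_reduce_by_key` is hypothetical, not a Thrust API). Sorting brings equal keys together; one pass then collapses each run — a "poor man's MapReduce", e.g. a histogram.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Collapse (key, value) pairs into one (key, sum) pair per distinct key.
std::vector<std::pair<int, int>>
sort_reduce_by_key(std::vector<std::pair<int, int>> kv) {
    std::sort(kv.begin(), kv.end());       // like thrust::sort_by_key
    std::vector<std::pair<int, int>> out;  // like thrust::reduce_by_key
    for (const auto& [k, v] : kv) {
        if (!out.empty() && out.back().first == k)
            out.back().second += v;        // same key: extend the run
        else
            out.emplace_back(k, v);        // new key: start a new run
    }
    return out;
}
```

Feeding each pixel as a (bin, 1) pair through this pipeline yields a histogram, which is the pattern the slide reports beating a hand-tuned CUDA routine.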
As indicated by the title, this paper deals with the experience of the Computer Software Management and Information Center (COSMIC) in computer program documentation. The first part of this paper will be a brief history of COSMIC as it relates to the handling of program documentation; the second part will summarize the items that seem to be essential for good program documentation. On July 1, 1966, the University of Georgia was awarded a contract by NASA to receive computer software developed by NASA and its contractors and to supply copies of such material, on request, to all interested domestic parties through COSMIC. Originally COSMIC was to have been a clearinghouse type of operation; i.e., it would send to the requester a copy of exactly what was submitted. No checks were made on either the documentation or the program. This type of operation led to a number of dissatisfied customers. In order to insure that the user received adequate documentation and a complete, workable computer program at a minimum cost, COSMIC established documentation and program checkout procedures. Time and experience have brought about changes to the original procedure. COSMIC, today, is composed of 18 employees, 12 of whom are professionals familiar with electronic data processing and hold degrees in a variety of fields, and understand the disciplines to which the programs apply. The professional staff is divided into two groups, one concerned with the evaluation of the documentation and one concerned with the checkout of the submitted computer program. The evaluation staff checks the documentation for completeness of vital material and assigns a class code to the document. The amount of detail, the complexity of the program, and the uniqueness of the solution all play a part in determining which class code is assigned to these programs. 
The programmer staff performs a check on each program submitted to the library to determine whether all nonstandard routines are present in the program deck. There are four machine types available to our programmers, the IBM 360, the IBM 7094, the CDC 6400 at the University of Georgia, and the UNIVAC 1108 at the Georgia Institute of Technology. Programs written for any one of these machines are compiled before dissemination; however, programs written for other machines must be assumed to be executable when they are disseminated. Of some 2800 program packages submitted to COSMIC, 60 percent have been rejected for one reason or another by either the programmer or the evaluation staff. The poor quality of the documentation received is a major factor in the rejection of the program package. Many times illegible documentation has been received, and the program has therefore been rejected. Programs have also been rejected because they are too short or too special purpose to have any value to organizations other than the originator’s. Other submitted packages have not contained vital segments of the documentation, making them unusable. For example, COSMIC has received documentation that was a Xerox copy of a listing, with penciled notes on the sides. Documentation of this caliber cannot be disseminated. COSMIC has encountered a variety of problems in the content of the documentation submitted. Experience has shown that the problem is most often in the user instructions. It is assumed that a purchaser of a COSMIC program is buying the program because it will solve his problem directly or because it can be modified slightly to solve his problem. Therefore, the user knows most of the technology involved or is at least familiar with its purpose. The reason the user needs the program is to obtain the desired results without having to write the program himself. The user, therefore, needs detailed user instructions that are easy to follow. 
The following is an example of poor user instructions: A Xerox copy of the handwritten instructions, “Use standard IBM OS/360 job control setup,” was submitted as documentation. Needless to say, the documentation was rejected. Complete instructions would have contained a listing of a sample deck setup and samples of input and output format. These are needed because machine configurations differ, and what is standard to one installation may not be to another. The input and output formats are needed so the user can test his results and know what to expect of his output appearance.

Because of deadlines and overlapping projects, documentation does not always receive its fair share of the time allotted for these projects. When one works closely with a program for a period of time, certain terminology and concepts become very familiar, and when the documentation is updated, these terms and concepts might be omitted or overlooked. The potential user of the program, however, most likely will not know its routine terminology and familiar concepts; therefore, problems arise. The programmer should be aware of his users and should gear his documentation toward the novice, the user who knows very little, if anything, about the program. COSMIC’s purpose is to disseminate programs that any potential user can employ.

Certain areas of documentation are essential and shall be outlined here:

1. Program name (official name, acronym, and program title)
2. Identification number (NASA, contractor, or other number; COSMIC references programs in our library by the NASA-assigned “flash-sheet” number)
3. Installation name (name and location of the center where the program was developed)
4. Date (date on which the program was completed)
5. Author(s) and affiliation(s) (The author of the program is usually the person who does the actual programming and design work. If these tasks are separate, both names should be given.)
6. Language (the programming language in which the program was written)
7.
Computer or machine requirements (computer, minimum configuration, level of compiler, and other requirements for the execution of the program)

8. Functional abstract (approximately 300 words) including the following:
   a. Description of the program (The problem that the program is designed to solve should be presented in such a way that the reader may identify elements that are analogous to his own problem.)
   b. Method of solution (When the method is well known or documented in standard publications, it should be identified by reference. Modifications to well-known methods, new methods, or novel combinations of methods should be fully described to indicate their applicability.)
   c. Special features of the program (Processing features and options that contribute to the uniqueness of the program should be summarized. Types of input and output should be discussed in terms of their potential value in solutions of problems.)

9. User instructions
   a. Input preparation formats and options (precise definition of all variables, exact format and arrangement of input parameters, required card or tape format for all input data, and sequence of control statements)
   b. Output formats and options (These should clearly explain all output variables; some note regarding accuracy of results also should be included.)
   c. Data restrictions (The user should be provided with a full explanation of any data restrictions, such as those constituting illegal input, numerical or data-set limitations, and the number or size of the data sets that can be handled by the program.)
   d. Procedural references (manuals and detailed documentation required to use the program)

10. Sample input and output models

The documentation that COSMIC receives, in most cases, does not include all these items. Standards at COSMIC have been minimal in the past but are constantly being upgraded. (See appendix.) If the documentation is deemed insufficient, more information is requested from the originating center.
If more information is not available, the program must be rejected. On some programs, this is all that can be done. The turnover among programmers is fantastic. A programmer remaining at one job for 2 years many times will have seniority in a department. Therefore, contacting the originator becomes a difficult task. But on the programs being written now, we hope to establish standards to obtain complete documentation initially, with as much information as possible, in order to anticipate later questions.

APPENDIX—COSMIC DOCUMENTATION AND PROGRAM STANDARDS HANDBOOK

I. INTRODUCTION

COSMIC (COmputer Software Management and Information Center) was established to evaluate computer software developed by governmental agencies and then disseminate the evaluated submittals to other governmental agencies, as well as industrial, educational, and research institutions. To expedite the technical aspects of this process, it is necessary for COSMIC to receive properly prepared documentation and program packages from submitting field centers and contractors. To explicitly state COSMIC's requirements for submittal packages is the primary purpose of this handbook. COSMIC is cognizant that all documentation packages received will not meet the exact format as outlined in this pamphlet; however, it is imperative that all information requested herein be included with the package regardless of the format chosen.

It is anticipated that this volume will—
(1) establish a much needed and easily implementable standard for documentation;
(2) clarify the definition of a complete program deck;
(3) promote a better understanding among all offices and agencies involved; and thus,
(4) increase the efficiency and effectiveness of the entire project.

II. DOCUMENTATION CRITERIA

A.
General

Documentation which meets the COSMIC standards must include the amount of information necessary to inform a prospective user of the precise problem which the computer program is designed to solve and to enable a qualified programmer to input the required data, successfully run the program, and obtain the desired results. Below is a chart of documentation criteria, each of which will be defined in the following text.

DOCUMENTATION CRITERIA

SPECIFIC REQUIREMENTS
1. Description of the Problem
2. Method of Solution
3. Program Language
4. Machine Requirements
5. User Instructions
6. Operating Instructions

OPTIONAL REQUIREMENTS
1. Program Timing
2. Accuracy of Results
3. Sample Input and Output
4. Flowchart
5. Listing

B. Specific Requirements

The following information must be included in the documentation for it to meet the COSMIC standards:

1. Description of the Problem—The description must include a complete definition of the problem which the program solves. The thoroughness and sophistication of this definition is determined by the sophistication and degree of difficulty of the problem itself. For instance, a simple mathematical routine may be described in one sentence, whereas a description of a program designed to construct electronic printed circuit boards may require multiple pages.

2. Method of Solution—This requirement must include the programming techniques or methods used, supporting theory, design, and computational equations with their derivations to substantiate or illustrate the program.

3.
Program Language—A statement of program language must include all levels of languages found in the submitted deck (e.g., FORTRAN IV, MAP, OBJECT) as well as the compiler necessary to process the languages.

4. Machine Requirements—An explanation of machine requirements must encompass not only the computer system for which the program was developed but also all peripheral equipment utilized by the program (e.g., disks, drums, consoles, tape units, display devices, plotters, etc.). Also mandatory is the level of the operating system on which the program executed (e.g., IBM-360/65, Release 14; CDC-6600, Scope 3.1; etc.) as well as the amount of core a program occupies once loaded.

5. User Instructions

a. Input Instructions—These instructions must provide the user with the information necessary to prepare his data for input to the program. They should include: (1) precise definition of all variables; (2) the exact format and arrangement of all input parameters (object time variables); and (3) the required card or tape format for all input data to be processed. It must be noted if the input requirement is for a specialized format, e.g., NASA formatted telemetry tapes.

b. Output Requirements—The user instructions must also contain a description of the output data formats and types of output devices; e.g., card punch, printer, magnetic tape, etc. In addition, the instructions must include an example to illustrate both the input deck setup and the corresponding output.

c. Data Restrictions—The assumption must be made that the user knows nothing of the mechanics of the program; therefore, any data restrictions or illegal input should be specified. For example: (1) x cannot equal zero; (2) y must be less than 200; (3) x cannot equal 5 unless y is less than 4.

d. Program Structure—A list of all decks in the program, the main program as well as any subroutines called, must be included. If a routine is to be included in more than one subsection (e.g., chain, overlay, etc.)
of a program, please so indicate.

6. Operating Instructions—This information must provide the computer operator with step-by-step instructions pertinent to the execution of a program. It must include:

a. tape assignments or selection (Designate tapes required for input, working, and output for successive runs);
b. deck setup;
c. control and sequencing information; and
d. special controls and requisite operator actions (e.g., console instructions).

C. Optional Requirements

The following information, although not essential, will facilitate processing and use of a program.

1. Program Timing—Timing information should include the computer time required for a run with a certain number of data points or check points, or the computer time required for an average run.

2. Accuracy of Results—This section should include the number of decimal points or number of significant digits which can be expected in the answer. Where some inputs are based on sampling, both the accuracy of the estimates and the reliability of the output should be supplied.

3. Sample Input and Output—A description of a sample problem, an example of the input data required to run the program, and resulting output from a run of the respective input should be included.

4. Flowchart—This must be a structural flowchart of the sequential logic and decision points included in the program. Machine-produced flowcharts of the exact programming techniques cannot be used to satisfy this requirement as they merely amount to a listing of the programs and do not briefly and concisely reflect the inherent logical flow of decisions.

5. Listing—This must be a post-list of the assembled program submitted to COSMIC to be used as an in-house aid in processing programs.

III. PROGRAM CRITERIA

A. Card Deck and Tape Submittal Formats

Following is a list of requirements compiled by COSMIC in an attempt to standardize program handling processes and to eliminate misidentification of submitted programs:

1.
Card Deck Submittals—These must be clearly marked with the respective program identification numbers.

2. Tape Submittals—It is requested that 7-track tapes be used. If this is impossible, 9-track will be accepted.

a. Tapes must be recorded:
(1) at 556 or 800 bpi,
(2) in unblocked card image format (84 characters per record for BCD or 168 characters per record for binary),
(3) with a complete program package (main deck, subroutines, data, etc.) in the same file,
(4) with each complete program package separated by an End-of-File card (blank except for a 7-8 multiple punch in column 1),
(5) with multiple 7-8 cards following the final program on tape.

b. Programs must be identified by number, title, and file position sequence on tape. This may be accomplished with a cover letter or a label on the tape reel.

*Note added in proof: These conventions have been revised in line with improved computer technology. The conventions stated here are not presently in use.

B. Definition of a Complete Program

An explanation of COSMIC's definition of a complete program is pertinent at this point. To be considered complete, a program must include:

1. main program;
2. all non-standard (not included with operating system as normally installed by manufacturer) subroutines called within the main program or by other subroutines in the package; and
3. all plotting routines called (If this is impossible for proprietary reasons, submit a dummy subroutine deck with all user called entry points; also, include with the documentation complete input and output variable formats for the routines used.).

C. Mode of Submittal Programs

It is imperative that COSMIC receive source decks rather than object mode decks. It is seldom that a disseminated program can be implemented by a purchaser without modifications being necessary. To facilitate modifications and, thus, wider usability of COSMIC programs, we publish only source programs.
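The unblocked card-image convention above (a fixed 84 characters per record for BCD) lends itself to a simple mechanical check. The following is a minimal sketch, not part of the original handbook: a helper that splits a byte stream into fixed-length card-image records and rejects streams whose length is not a whole number of records. Only the record length comes from the text; the function name and interface are assumptions for illustration.

```python
# Sketch: splitting a stream into unblocked card-image records.
# BCD_RECORD_LEN follows the convention stated above (84 chars/record);
# everything else here is illustrative, not from the handbook.

BCD_RECORD_LEN = 84

def split_card_images(stream: bytes, record_len: int = BCD_RECORD_LEN):
    """Split a byte stream into fixed-length card-image records."""
    if len(stream) % record_len != 0:
        raise ValueError(
            f"stream length {len(stream)} is not a multiple of {record_len}"
        )
    return [stream[i:i + record_len]
            for i in range(0, len(stream), record_len)]

# Three 84-character records split cleanly into three card images.
records = split_card_images(b"A" * 84 + b"B" * 84 + b"C" * 84)
assert len(records) == 3
assert all(len(r) == 84 for r in records)
```

The same helper handles the binary convention by passing `record_len=168`.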
DISCUSSION

MEMBER OF THE AUDIENCE: I wish to raise the question of standards versus guidelines. My understanding is that standards are something required, and guidelines are something to be desired. It seems to me that if documentation standards are insisted upon, many programmers will simply refuse to adhere to them.

KALAR: If most users can use documentation in a certain form, then I think the best thing to do is to try to put it in that form. If the form is pretty well agreed upon, then I think that people ought to try to conform to it. Call it standards or guidelines, I cannot determine between the two. I do not think you can enforce anything.

MEMBER OF THE AUDIENCE: I would like to try to answer that. I do believe that some minimum amount of information should be available to a potential user so that he can make some choice as to whether he wants to make a substantial investment in some of the documentation, which may run into thousands of dollars. I do believe that a standard or a standard requirement or specification may be needed in this area.

MEMBER OF THE AUDIENCE: You have given us a list of things that you desire to see in documentation. Has this been disseminated to your customers?

KALAR: A partial list is in the appendix of my paper. This is a little bit different from the one that COSMIC is now disseminating to its customers.

MEMBER OF THE AUDIENCE: Is it a regular procedure to advise the customers or the people that send you programs of the problems that you see as you go along?

KALAR: Generally, most of these items are covered in the appendix. When programs are submitted, if they are deficient in a certain area, we will tell the senders what areas to send us as documentation. These exact items are not written down yet, but they should be within the next couple of months.

MEMBER OF THE AUDIENCE: To your knowledge, has COSMIC had an opportunity to review the proposed NASA NHB standards on documentation?

KALAR: I do not know.
MEMBER OF THE AUDIENCE: How do you determine the costs for the distribution of programs?

KALAR: By the number of cards in the deck for the programming. The documentation is 10 cents a page. An average cost is about $275 per program.

MEMBER OF THE AUDIENCE: You said you like to disseminate programs that any individual can employ. Well, I question this. We have taken the other tack in the nuclear field. We have said we want to disseminate programs to installations that have people competent in both computer science and nuclear science because we feel that if you really disseminate programs to anyone, you can spend a fortune trying to train them.

KALAR: I do not think we can be so choosy about our customers. Whoever wants to buy a program can buy one, and if they can understand it, they can use it.

MEMBER OF THE AUDIENCE: It seems to me you have to do a lot more work. More documentation is needed, and these people must be brought in and trained how to use these programs.

KALAR: Most of our programs now being disseminated are accounting-type programs, programs that the small businessman can use without any extensive knowledge.

MEMBER OF THE AUDIENCE: I have a question about your organization. How did a university get into disseminating programs that industry has paid fortunes to get?

MEMBER OF THE AUDIENCE: Could I answer that question? I am the COSMIC specialist at Goddard Space Flight Center with the Technology Utilization Office. COSMIC is mainly a nonprofit institution. NASA has a duty to distribute the technology that NASA develops to people in the public sector, that is, commercial, profit, and nonprofit organizations that may have a need or a desire to use any part of the technology that we develop. Computer programs are considered a part of that technology. COSMIC's function is to distribute those programs to those in the general public who may find them useful, thereby increasing the productivity and welfare of the general public.
There is no profit involved to COSMIC. The programs that industry develops under NASA contracts belong to the Government. What the Government does in this instance is make that property available to the commonweal. I hope that answers your question.

MEMBER OF THE AUDIENCE: What is the general turnover in programs and purchases at COSMIC?

KALAR: I think we sell around 60 or 80 packages per month and receive probably an average of 50.

MEMBER OF THE AUDIENCE: One problem with documentation is that we may meet the documentation requirements for a Government contract. Then the contract monitor says to submit it to COSMIC. We submit it to COSMIC and receive a different set of documentation requirements. We go back to the contracting officer and ask for the money to document the program or the system for COSMIC, but they refuse. In other words, who is going to pay for documentation?

MEMBER OF THE AUDIENCE: This is one of the things that falls in my area. I believe that most NASA software documentation requirements now incorporate the COSMIC requirements for program documentation. What often happens is that the contractor does not regard these as essential, because in the past the documentation specifications really have not been enforced. We now demand that these requirements be met.

MEMBER OF THE AUDIENCE: Do you review the request for proposal (RFP) to see that the requirements...

MEMBER OF THE AUDIENCE: I do see some of them, but I believe that most of our RFP's for documentation now include those requirements.
The purpose of this paper is to project present trends in application development into the next decades. The revolution in information technology has reduced the cost of computer hardware as well as communications. Networks will consist of three kinds of specialized components: clients, servers, and processors. Standardization of component interfaces and message structures will replace the "one size fits all" Legacy system, reduce the need for human interaction to perform a task, and reduce the cost of modifications and enhancements. Inherited data structures in Legacy systems create obstacles to enhancement and new applications that require data not accommodated in the existing structure. What is required is an environment in which systems consisting of "Best-of-Breed" components, both hardware and software, can be assembled using off-the-shelf proprietary modules. To ensure the participation of all interested parties, including institutions and vendors, consortiums must assume part of the leadership in setting and achieving goals such as: atomizing application software, standardizing component interfaces, and developing rules for peer-to-peer applications messaging. Over time, a team of component products will replace their large, complex, centralized view of data processing being delivered in Legacy software. Those who seize the opportunity to advance messaging between peer components will be forging the next generation of administrative systems. (AEF)

The Emerging Trends in Application Integration

David K. Moldoff
Applied Business Technologies
4631 West Chester Pike
Newton Square, PA 19073
dmoldoff@email.abtcampus.com

Introduction

Information Technology is Advancing Constantly

Everyone sees and feels the symptoms of revolutionary changes as innovation after innovation is introduced, and administrators and managers have been forced to adapt the way they perform tasks.
During the past 50 years, for example, we have gone from wires, diodes, and punched cards to mainframes, minicomputers, on-line terminals, networks, and PCs. Every administrative process can be divided into events or steps, and each step can generate multiple messages that reflect the status of the transaction or event. Messages can have standard actions performed before, upon receipt, at completion, or upon deletion.

Until now, users had three more or less expensive options for acquiring systems to perform these steps: home-grown systems; purchasing stand-alone (Component) systems; or purchasing integrated (Legacy) systems.

Today, we are still adapting to E-Mail, networking, the Internet, and Microsoft Windows as standards are evolving for desktop operating systems, Local Area Networks, and corporate IntraNets. Web standards and the communication media (i.e., the Internet) will shape the landscape for information system technology and office automation in the 21st century.

The purpose of this paper is to project present trends in application development into the next decades. How can application software developers reduce the cost of their software commensurate with the cost of computer and communication hardware? How can software be made more adaptable to unique user needs?

Forthcoming Innovations

The revolution in Information Technology has reduced the cost of computer hardware as well as communications, while the complexity and cost of application software have grown along with user expectations encouraged by that increasing power. The inevitable next steps must include software cost reduction by means of:

greater modularization, standardization, and interchangeability of hardware and software components and interfaces; and

development of standard message structures to request and transmit data packets between local and remote components.

Networks will consist of three kinds of specialized components: Clients, Servers, and Processors.
To accomplish a typical unit of work, a Client might send a message to a Server requesting that the Server perform sub-tasks (such as store, replace, transmit, or delete transaction data). The Server might, in order to complete its sub-task, send a message to a Processor to perform an operation (such as sorting a file). When the requested action has been completed, the Server will respond with the results.

**Standardization and Interchangeability**

Standardization of component interfaces and message structures will have the following benefits:

Facilitate mixing and matching of "Best-of-Breed" components;
Reduce the need for human interaction to perform a task;
Reduce the cost of modifications and enhancements.

Acceptance of Client/Server technology requires the migration of 'code' into small interchangeable packets that support the standard message architecture pioneered by the Web and Internet. The rocket-like acceptance of JAVA, Sun Microsystems' new hybrid language, is supporting this component concept, called applets. Future systems will consist of 'Best-of-Breed' application software components, replacing the "one size fits all" Legacy system built to serve a large family of functions within an organization. Legacy systems had a primary benefit of cost sharing, at the price of slow response to the need for change. They also lack the function-rich features of component software developed under Windows, the Web, and the workflow model supported by messaging. New software components, designed with the 'Best-of-Breed' approach and messaging, will dissolve the Legacy software system piece by piece by replicating its current functions and supporting a friendlier user interface, making productivity more important than cost and centralization. Cost, functionality, and flexibility justify migration to component-based systems.
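The Client-to-Server-to-Processor flow described above can be sketched in a few lines. This is a minimal illustration, not the paper's design: the class and method names, the "store"/"transmit" requests, and the "sort" operation are all assumptions chosen to mirror the examples in the text (storing transaction data, sorting a file).

```python
# Sketch of the unit-of-work flow: Client asks Server to store data,
# Server delegates a sort operation to a Processor, results flow back.
# All names and message shapes here are illustrative assumptions.

class Processor:
    def handle(self, operation, payload):
        if operation == "sort":            # e.g., sorting a file
            return sorted(payload)
        raise ValueError(f"unknown operation: {operation}")

class Server:
    def __init__(self, processor):
        self.processor = processor
        self.store = {}

    def handle(self, request, key, payload=None):
        if request == "store":
            # Delegate the sub-task to the Processor, then store the result.
            self.store[key] = self.processor.handle("sort", payload)
            return "stored"
        if request == "transmit":
            return self.store[key]
        raise ValueError(f"unknown request: {request}")

class Client:
    def __init__(self, server):
        self.server = server

    def run(self):
        self.server.handle("store", "grades", [70, 95, 88])
        return self.server.handle("transmit", "grades")

result = Client(Server(Processor())).run()
assert result == [70, 88, 95]
```

In a real network each `handle` call would be a message on the wire rather than a direct method call, but the division of labor among the three roles is the same.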
The next phase is for industry groups to define transaction formats to support Electronic Data Interchange (EDI), using electronic commerce as the justification to define the events that support messaging between and within Application Systems from point to point. The next step will be to design and construct communication components. Once communications have been established, there will be a surge of application development.

The Problem: Legacy Systems Are Data-Structure Bound, Complex, and Centralized

In Legacy systems, most application modules are linked by shared data structures. Sharing data structures is not necessarily complex. Complexity derives from the number of applications, the number of data files, and the interface functions needed to perform workflow procedures. Inherited data structures create obstacles to enhancements and to new applications that require data not accommodated in the existing structure. Two options exist: (1) add new files, or (2) modify the existing files, with ripples throughout application designs. Both options add redundancy and increase the cost of operations as application components are implemented from different software developers. A typical Legacy system may have hundreds of data files, which create the need for many interfaces. Complexity is added when a shared data file is altered, impacting other application modules sharing access to the changed data file. The cost to accommodate changes in this model is high due to hard-coded dependencies.

The Solution

What is required is an environment in which systems consisting of 'Best-of-Breed' components, both hardware and software, can be assembled using off-the-shelf proprietary modules. Interfaces between components must be integrated so that they appear seamless to users and managers.
To ensure the participation of all interested parties, including institutions and vendors, consortiums such as the Midwestern Higher Education Commission (MHEC) must assume part of the leadership in setting and achieving goals such as: atomizing application software, standardizing component interfaces, and developing rules for peer-to-peer application messaging. The criteria for the solution will be the ease with which it accommodates innovation and change, its inclusiveness, and its flexibility.

Application Software Atomization

Every application program can be partitioned into single-instruction atoms independent of the language or source of the program, provided the installation includes translation tables. Each instruction message therefore needs, at the minimum, to identify:

- addressee of the instruction,
- the generic instruction,
- applicable identifiers,
- operand source,
- execution time, and
- disposition of results.

Who Will Set the Standards?

Legacy systems have implied standards set by the designers. Individual application components may even have internal standards, but they are all different. For messaging to provide the bridging between independently developed applications, they must adhere to standards for: addressing and protocol; message structure and language; and functional instructions.

When Thomas Edison invented the electric light bulb, he had to confront the need to distribute electric power. He chose a direct-current option, but soon had to switch to alternating current because of the lack of DC power transmission components. Much of Edison's investment in generating and distribution plant assets went down the tubes. The challenge is different today. The information superhighway and its standards already exist. Transmission hardware and software are available. Lacking is the ability of application components to generate and interpret instructions and data comprehensible to disparate systems.
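The minimum instruction-message fields listed above map naturally onto a simple record type. The sketch below is illustrative only: the paper names the pieces of information a message must carry, while the field names, defaults, and example values here are assumptions.

```python
# Sketch of an instruction message carrying the six minimum fields
# named in the text. Field names and defaults are illustrative.
from dataclasses import dataclass, field

@dataclass
class InstructionMessage:
    addressee: str                     # which component should execute this
    instruction: str                   # the generic instruction to perform
    identifiers: dict = field(default_factory=dict)  # applicable identifiers
    operand_source: str = ""           # where the operands come from
    execution_time: str = "immediate"  # when the instruction should run
    disposition: str = "return-to-sender"  # what to do with the results

msg = InstructionMessage(
    addressee="registrar-server",
    instruction="enroll",
    identifiers={"student": "S1001", "course": "CS-101"},
)
assert msg.addressee == "registrar-server"
assert msg.execution_time == "immediate"
```

Because every field is explicit, a translation table per installation (as the text suggests) only has to map local vocabulary onto these slots.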
The situation is analogous to light bulbs made by various vendors that have different kinds of sockets and use different voltages and frequencies.

Industry Must Define Messaging Standards

Just as the industry is developing standards for reporting, it must develop standards for inter-application messaging to lower operational and acquisition costs. The longer standards are delayed, the greater will be the cost of implementation and integration of custom-built applications. Both developers and users of application software can be expected to agitate in favor of their own prototypes. Something like the FASB (Financial Accounting Standards Board) is needed to mediate and resolve the issues and define message standards. This standard-setting agency should have representatives of all of the interested parties lest it become captive of some dominating interest.

Technology - Peer-to-Peer Messaging Solves the Bottleneck Problem and Creates a More Versatile Software Platform

Each message is transmitted in a packet envelope which contains control data, such as source and destination identifiers, dates, transaction type and status, priority, selection criteria, security level, etc., and the messages, when saved, constitute an audit trail. Network messages are deemed to be local, Intranet, or wide area. Messages can be described as primary, secondary, or dependent. Messages can be secured, public, or restricted. Messages need receivers (special programs in themselves) that can handle the methods, actions, and behaviors designed into the message. This would allow application component products to be developed and deployed independent of their author and platform. Instead of reading and writing into data files, applications transmit and receive electronic messages in standard event formats. Each Server listens in on the network for messages with its address and captures those with the appropriate identifiers. (All other messages are ignored.)
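The envelope-and-filter behavior described above can be shown with plain dictionaries. This is a rough sketch under stated assumptions: the envelope field names (`source`, `destination`, `priority`, `status`) and the traffic contents are invented for illustration; only the idea that a server keeps packets addressed to it and ignores the rest comes from the text.

```python
# Sketch: each packet carries control data in its envelope, and a
# server captures only the packets whose destination matches its own
# address. All field names and values here are illustrative.

def capture(server_address, packets):
    """Return only the packets addressed to this server."""
    return [p for p in packets if p["destination"] == server_address]

traffic = [
    {"source": "client-1", "destination": "billing", "priority": 2,
     "status": "new", "body": "post cash receipt"},
    {"source": "client-2", "destination": "registrar", "priority": 1,
     "status": "new", "body": "enroll student"},
    {"source": "client-3", "destination": "billing", "priority": 1,
     "status": "new", "body": "print invoice"},
]

mine = capture("billing", traffic)
assert len(mine) == 2
assert all(p["destination"] == "billing" for p in mine)
```

Saving every envelope that passes through `capture` would give the audit trail the text mentions, since the control data records who sent what to whom and when.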
The Server places its messages in a queue and executes them in priority sequence. Upon execution of the message, a message is returned conveying the results. The network medium is extremely fast, and the transmission delay is infinitesimal.

**Messaging System Models**

Electronic communications have undergone several revolutions since Morse code was invented. The development of the Web and Internet that is fueling radical changes in the computer industry is in the tradition of Bell's telephone, Marconi's wireless, Babbage's computer, and Sarnoff's TV. Key attributes that fuel the Web opportunity are:

The simplicity of the user interface;
The message-driven model supporting the transmission of 'requests' and 'answers' from server to client;
The standard format of HTML documents;
The portability of the applications;
The ease of developing Web sites;
The links from Web site to Web site promoting cooperation and openness;
The graphics and text support making the products sizzle; and
The free-form formats that combine structure with flexibility.

As the Web and Internet expand, the application model of computing is under pressure to adapt and facilitate electronic commerce via the open Web interface. Organizations worldwide are building their Web presence by altering business practices to include Web services. Microsoft and Netscape are battling for presence on the Web and control of the tools that support the Web and integrate with the operating systems on the desktop and on the server. Just as the Web was fueled by the above attributes, the platforms for business applications that run and administer businesses are under pressure to adopt the new model of information management. The evolution of computing has been altered by market acceptance of desktop innovations, cost savings in the use of purchased tools rather than in-house developments, and increases in access to information for a wider audience of users.
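The queue-and-respond cycle at the start of this section (queue the messages, execute in priority sequence, return a result message for each) can be sketched with a standard priority queue. Everything beyond that cycle is an assumption here: the message layout, the priority convention (lower number = more urgent), and the stand-in "handler" that just echoes the work done.

```python
# Sketch of the Server's queue: messages are queued, executed in
# priority order, and a result message is returned for each.
# heapq supplies the priority ordering; message fields are invented.
import heapq

def run_queue(messages):
    """Execute queued messages in priority order, returning result messages."""
    queue = []
    for seq, msg in enumerate(messages):
        # Lower priority number = more urgent; seq breaks ties stably
        # and keeps heapq from ever comparing the dicts themselves.
        heapq.heappush(queue, (msg["priority"], seq, msg))
    results = []
    while queue:
        _, _, msg = heapq.heappop(queue)
        results.append({"to": msg["source"], "result": f"done: {msg['body']}"})
    return results

out = run_queue([
    {"source": "client-a", "priority": 3, "body": "nightly report"},
    {"source": "client-b", "priority": 1, "body": "enroll student"},
])
assert out[0]["to"] == "client-b"   # the more urgent message ran first
```

Each popped message produces a reply addressed back to its `source`, which is the "message returned conveying the results" in the text.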
These movements in the industry have been brought on by: the migration from centralized computing to decentralized support for users performing the work; decentralized functions on desktop computers that replace old ways of doing things; and the integration of office tools on the desktop serving the productivity of users. Application development lead time and cost are major reasons for the market to encourage the trend toward the new model of computing. A Web message-driven structure will reduce demands on centralized, complex Legacy systems.

**Inter-Application Traffic**

The same inducements for open systems between entities apply within administrative departments of each institution, and the market is shifting to component or object development based upon Microsoft's component object technology called OLE (Object Linking and Embedding). The concept seems complex, but is quite simple in reality. Build functions called 'clients' and 'servers' that mirror the application actions we perform as users. A 'client' requests data or services from a range of 'server' functions. These requests are enveloped in 'messages' that are transported between 'clients' and 'servers'. Windows is already based upon this technology, and so is the Web. Most PC application software today offers the same functionality. A user fills in the blanks in a screen form and accepts by using the OK button. Data are validated and sent to the database where they are processed. If an error occurs, a 'message' is returned to the function and displayed for the user in a box window giving instructions on what to do next. If the data are OK and accepted, the transaction is accepted and other functions are triggered, if necessary, to update other database tables. The user filling in the screen form is in a sense using an object to perform a function. Today, the function resides in large central systems and is called from menus.
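The fill-form / OK / validate / message-back cycle just described reduces to a small sketch. The validation rule and field names below are invented; only the flow is from the text: validate the form, then return either an error message for the box window or an acceptance that would trigger further updates.

```python
# Sketch of the screen-form cycle: validate on OK, return a message
# either way. The single required-field rule is an invented example.

def submit_form(form):
    """Validate a screen form and return the message the user would see."""
    if not form.get("name"):
        return {"status": "error", "message": "name is required"}
    # Accepted: a real system would now trigger the database updates.
    return {"status": "ok", "message": "transaction accepted"}

assert submit_form({"name": ""})["status"] == "error"
assert submit_form({"name": "Ada"})["status"] == "ok"
```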
These functions are developed with instructions in fourth-generation toolsets like Oracle, SQL Windows, PowerBuilder, or Delphi, or legacy code like COBOL or BASIC. The functions execute as 'fat clients' doing all the work, or execute on a multi-user platform. When functions are exported to standalone forms in the client/server model, they can be called at any point within the framework of the desktop OS. This way, functions can be seamless with the OS and interact with E-mail and the other supporting Office tools. This is one of the main thrusts of client/server, and it enables the developer to distribute the workload between server and client desktop.

Most existing application software does not follow the approach described above, since its design rests on a foundation of data structures and complex logic performing the functions, centralized in code developed over many years on UNIX or mainframe systems. Legacy systems have evolved into complex, expensive, and cumbersome platforms. They can be large bottlenecks to adopting procedural improvements. If we break down the complexity of a Legacy system by managing the events rather than the database transaction, we can reduce the costs of developing and deploying new functions, and at the same time give users new flexibility, control, and productivity in managing their work responsibilities.

The Transition

The transition to open systems has barely begun with the use of E-Mail and Windows. Atomizing application software into component objects, or little black boxes that perform single functions, distributed over a network and shared by many users, is the opportunity of the future. This trend has started with the acceptance of Windows as the standard desktop OS. ABT expects a major shift in application deployment to result in component objects as Legacy systems continue to lose footing in peripheral functions they perform poorly.
In the next five to ten years, many organizations will adopt this system orientation because of their exposure to Windows on the desktop. Over time, a team of component products will replace their large, complex, centralized view of data processing being delivered in Legacy software.

**The Agenda for Software Developers**

A new model for computing will evolve because:

- The industry will be freed of the software design restraints of record and table structures called relational database models.
- Organizations are building and implementing data warehouses to foster the integrated view of data while separating the operational layer from the data model. This will reduce the likelihood that one vendor or one product will support all operational functions.
- Users will demand greater flexibility, simplicity, and satisfaction of their needs and expectations.
- Institutions will demand faster response to their needs for changes at lower cost.
- Data warehousing, messaging, and software atomization will facilitate modification and enhancement of application software.
- Object-oriented development, already used by Borland and Microsoft, will free code-intensive applications from structural paralysis.
- Messaging via the Web and Internet will liberate users from the constraints of geography and system structure.

For example, student registration represents a sale to a student. The event consists of a course request, debiting the course offering, crediting the course-section list, and retaining the transaction as an audit trail. The process requires user-based forms for on-screen interaction and message-based functions that send and receive data from the server holding the database structure. The Web has led the way in popularizing a new method to address how applications can be designed piece by piece, offering a much more flexible solution for application development.
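The registration "sale" just described decomposes naturally into one event driving three updates. A minimal sketch, with invented data structures standing in for the database tables:

```python
# Illustrative decomposition of the registration event described above:
# one course request debits the course offering, credits the
# course-section list, and retains the transaction as an audit trail.
# Table and field names are hypothetical.

offering_seats = {"CS101": 30}      # stands in for the course-offering table
section_roster = {"CS101": []}      # stands in for the course-section list
audit_trail = []                    # retained transactions

def register(student, course):
    """Process one registration event; returns False if no seats remain."""
    if offering_seats.get(course, 0) <= 0:
        return False
    offering_seats[course] -= 1               # debit the course offering
    section_roster[course].append(student)    # credit the course-section list
    audit_trail.append((student, course))     # retain the transaction
    return True

register("S-1001", "CS101")
```

In a message-driven design, each of the three updates could be a separate component reacting to the same event rather than three steps inside one monolithic transaction.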
In the new scheme, non-technical decision-making managers and administrators become users with the same ability to submit inquiries and receive pre-programmed responses without encountering the delays inherent in acquiring formal reports through the chain of command. The new environment encourages competition on the basis of quality and responsiveness. Institutions have the option to assemble networks of 'Best-of-Breed' components, since no vendor (not even IBM or ATT) has all the resources to develop all the functions needed in the real world to satisfy every user in every organization. Expectations will continue to march forward, and change is the only action that can be predicted. File/record imports and exports from function to function today support product integration. For example, when a prospect changes status to an applicant, a record is moved to a new structure. Or, when Cash Receipts are entered, the student's balance must be posted. The terms batch and posting are often used to describe a computer process that ties two or more functional areas together. This type of process adds extra steps and complexity to procedures. Moving to a dynamic message process that updates across components in real time would eliminate the need for these steps.

**Criteria for Success**

The arrival of high-speed data packet technology has opened the window of opportunity for system integration by means of messaging between applications on the desktop and server. This has made the linking of applications a common expectation never dreamed of a few years back. Those who seize the opportunity to advance messaging between peer components will be forging the next generation of administrative systems. Those who bring out the 'Best-of-Breed' solutions will win the race. These will be selected by astute educational institutions to assemble effective networks of application software.
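The replacement of batch posting by real-time messages can be sketched with a simple publish/subscribe pattern. This is an assumed design, not taken from any named product: entering a cash receipt publishes an event, and a subscriber posts the student's balance immediately, removing the separate posting run.

```python
# A minimal publish/subscribe sketch (assumed design): a cash-receipt
# event is published, and a subscribing handler posts the student's
# balance at once instead of waiting for a batch posting step.

subscribers = []

def subscribe(handler):
    subscribers.append(handler)

def publish(event):
    for handler in subscribers:
        handler(event)

balances = {"S-1001": 500.0}   # stands in for the student balance table

def post_balance(event):
    if event["type"] == "cash_receipt":
        balances[event["student"]] -= event["amount"]

subscribe(post_balance)
publish({"type": "cash_receipt", "student": "S-1001", "amount": 200.0})
# the balance is updated immediately; no overnight batch run is needed
```

Other functional areas (receivables, reporting, audit) would simply subscribe to the same event stream.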
As component products are built and employed, users will be able to employ suites of components to satisfy their functional needs and supplement many functions lacking in, or limited by the design of, older Legacy systems. 'Best-of-Breed' components are defined as those that are user-friendly; that are easy to implement, operate, support, and maintain; that have the functionality demanded by the users; and that perform effectively in networks with applications built by competitive vendors. As organizations begin to adapt and work with EDI (Electronic Data Interchange) based applications, there will be a pull to adopt standard operating software supporting a message-driven model independent of the database. This will provide a whole new level of computing. It will eliminate the application barriers present in large complex databases by breaking down the complex work process, providing, in essence, a component that does the work in a black box and delivering a service or a solution in the message sent to a user. This may be in the form of a report, a message box in a window, or an e-mail message reminding a user to do something. These developments will promote a new approach to administrative systems for colleges and universities and will force market consolidation. Vendors who continue to develop and support integrated administrative systems will need to cope with the growing expectations of their installed base of clients while attempting to serve a continuously changing prospect pool who aspire to adopt new technologies rather than established products based on obsolete technology. Many organizations have a great deal invested in present Legacy systems designed to maintain the organizations' data processing functions. The infrastructure of the Legacy systems is complex and expensive. Moving from Legacy systems to networks of application components requires a significant investment in money and energy.
Risk and stress are the price of advancing from one level of technology to the next, so many institutions are loath to replace their Legacy systems.
State Machine Modeling: From Synch States to Synchronized State Machines

Dominikus Herzberg*, Ericsson Eurolab Deutschland GmbH, Ericsson Allee 1, 52134 Herzogenrath, Germany, Dominikus.Herzberg@eed.ericsson.se

André Marburger, Aachen University of Technology, Department of Computer Science III, 52074 Aachen, Germany, marand@cs.rwth-aachen.de

Abstract: To synchronize concurrent regions of a state machine, the Unified Modeling Language (UML) provides the concept of so-called "synch states". Synch states insure that one region leaves a particular state or states before another region can enter a particular state or states. For some application areas, it is beneficial to synchronize not only regions but also state machines. For example, in data and telecommunications, a pure black box specification of communication interfaces via statechart diagrams gives no adequate means to describe their coordination and synchronization. To circumvent the limitations of the UML, this paper presents the concepts of Trigger Detection Points (TDP) and Trigger Initiation Points (TIP), which allow a modeler to couple state machines. The approach is generic, easy to extend, and fits smoothly into the event model of the UML; it could also substitute for the more specific concept of synch states.

1 Introduction

The problem of synchronizing concurrent state machines arose as an issue in a research project at Ericsson [9]. Concerned with architectural modeling of telecommunication systems, we developed a ROOM (Real-Time Object-Oriented Modeling) [20] like notation (see [8]) but were soon confronted with the question of coupling interfaces: How do we model the interaction between interfaces (or ports) of a single component without referring to its internals? The intention was to describe an architecture in a black box manner, while still being able to understand and simulate interface coordination and synchronization.
*This work is being funded by Ericsson and is run in cooperation with the Department of Computer Science III, Aachen University of Technology, Germany.

In other words, the question was how to properly couple the individual state machines, which specify the interface behavior of a single component. Independent of this investigation, an Ericsson-internal study on the use of modeling languages for service and protocol specifications points out exactly the same problem. It shows that the coupling problem is of theoretical as well as practical relevance. It is also one of the reasons why modeling languages like the UML (Unified Modeling Language) [17] have not yet successfully penetrated the systems engineering domain. System designers of data and telecommunication systems do not find reasonable support in today's modeling languages for their problem domain [7]. In the following two subsections, the telecommunication background is introduced and the problem is described in more detail. Subsequent sections discuss the proposed solution: In section 2 we elaborate on the model presented in subsection 1.1 in the form of a case study. There, we study the TCP (Transmission Control Protocol) layer of a data communication system and show how the external interfaces can each be described by a Finite State Machine (FSM). Section 3 discusses the coupling problem from different perspectives and demonstrates how FSMs can be synchronized via Trigger Detection Points (TDP) and Trigger Initiation Points (TIP). The implementation of a prototype, verifying the TDP/TIP concept, indicates how the UML could incorporate the TIPs and TDPs as extensions and supersede synch states; this is the subject of section 4. Finally, section 5 closes with some observations and conclusions.

1.1 Background

On an architectural level, any data or telecommunication system can be structured according to two different directions of communication, "vertically" and "horizontally".
"Vertical" communication refers to the exchange of information between layers. The "point" at which a layer publishes its services for access by an "upper" layer is called a Service Access Point (SAP). "Horizontal" communication, by contrast, refers to the exchange of information between remote peers. Remote peers are physically distributed, they reside in different nodes, and they communicate with each other according to a protocol. We call the "point" describing the protocol interface a Connection Endpoint (CEP). Note that the concept of a protocol is well known and generally defines a set of messages and rules (see e.g. [2, p.191]); however, it has a special meaning in data and telecommunications. Whereas software engineers associate a reliable, indestructible communication relation with the term "protocol", data and telecommunication engineers are faced with the "real" world: They have to add error correction, connection control, flow control and so on as an integral part of the protocol. A communication relation between remote peers can always break, be subject to noise, suffer congestion, etc. This is the reason why communication engineers introduced protocol stacks, with each protocol level comprising a dedicated set of functionality, thereby "stackwise" abstracting the communication service. These stacks naturally give means to "vertically" divide a node into layers. Consequently, three main interfaces completely describe the behavior of a node layer from an outer perspective, each interface covering a specific aspect of the communication relation, see figure 1. The SAP, denoted by a filled diamond symbol, provides layer \((N)\) services by means of so-called service primitives to a service user, the upper layer \((N + 1)\). Service primitives can be implemented as procedure calls, library calls, signals, methods etc., which is a design decision. The CEP, symbolized by a filled circle, describes the "horizontal" relation to another remote peer entity.
A CEP holds the specification of a communication protocol such as the Transmission Control Protocol (TCP) [19] or the Internet Protocol (IP) [18]. In fact, we will exemplify the topic of discussion on TCP. Be aware that the CEP is purely virtual and represents a logical interface only. All protocol messages are transmitted using the services of a lower layer. This interface function is given by the inverse SAP \((SAP^{-1})\), which uses services from a lower layer \((N - 1)\) by accessing the \((N - 1)\)-SAP; it is depicted by an "empty" diamond symbol. The model described is based on the OSI (Open Systems Interconnection) Reference Model [12], which has laid a solid foundation for understanding distributed system intercommunication [3]. The notation used for the SAP and \(SAP^{-1}\) is an extension to ROOM; for a thorough discussion see [8].

1.2 The Problem

Given the model presented, one faces some important problems in modeling the behavior of a layer in a communication system. There are in principle two alternatives for specifying layer \((N)\). For this discussion, we assume that Finite State Machines (FSM) according to the Unified Modeling Language (UML) [17] are the primary means to describe behavioral aspects. FSMs are a common tool for specifying protocols [11].

1. **Black box view:** Specifying a layer in a *black box* manner means that we give a complete description of the behavior of each and every external interface. In that case, the CEP, the SAP, and the SAP\(^{-1}\) are each specified by an FSM, which is a precise description of the remote peer protocol and the two interface protocols. Even though this view is ideal from a modeling point of view, the problem is that such a black box model can neither simulate nor explain the interface interaction without being wired with the internal behavior.
2. **White box view:** Specifying a layer in a *white box* manner means that we define a more or less huge and complex FSM that gives a complete specification of the internals driving the external behavior. As a result, the communication at an external interface cannot be understood without looking inside the layer; at best, a list of messages (or service primitives) going in and out at the external interface can be declared. This corresponds to the notion of an interface in UML, see e.g. [4, p.155ff.]. Here, the problem is that the FSM is difficult to structure in a way that at least internally makes the behavioral aspects of the external interfaces clear.

What both problems have in common is that different views change scope and redefine how states or state machines are coupled with each other. In the case of white box specifications, the UML offers the concept of composite states, which can be decomposed into two or more concurrent substates, also called *regions*. In order to enable synchronization and coordination of regions, the UML introduced *synch states*. However, synch states do not provide sufficient synchronization means, as the case study presented below shows, nor do they solve the problem of synchronizing states of distinct state machines. Driven by a black box view, we propose the idea of Trigger Detection Points (TDP) to enable FSM separation but smooth coupling. TDPs together with Trigger Initiation Points (TIP) are introduced as a concept extending state machine modeling; they were motivated by the concept of detection points in [1].

## 2 Case Study: The TCP Communication Layer

The TCP protocol serves as an excellent example for discussing layer design and specification problems. It is simple to understand, easy to read (the technical standard [19] sums up to less than one hundred pages\(^1\)), publicly available, and, most important, it is widespread and one of the most widely used protocols world-wide.
Together with IP, the TCP/IP protocol suite forms the backbone of the Internet architecture. Looking at how the TCP standard [19] specifies the protocol unveils a typical problem: It presents the whole layer by a state machine and does not clearly separate the TCP protocol from its user (or application) interface. Both are combined, see figure 2; it is the result of a white box view. The figure uses a compact notation and shows both the server FSM and the client FSM. It reads as follows: When a user in his role as a server submits a LISTEN command, the state changes from CLOSED to LISTEN. If, on the other side, the client user submits a CONNECT, the TCP protocol sends out a message with the synchronization bit SYN set to one, and the client's state changes to SYN SENT. On receipt of the TCP message with SYN equal to one, the server sends out a TCP message with SYN and ACK (the acknowledgment bit) set to one and changes to state SYN RCVD. When the three-way handshake completes successfully, both parties end up in state ESTABLISHED and are ready to send and receive data respectively. This short description of figure 2 neglects a lot of details of TCP (e.g. timeouts, which are important to resolve deadlocks and failures) but is sufficient for the purpose of our discussion. The interested reader may consult [21] for more information. In order to structure TCP according to its interface functions (figure 1) the FSM in figure 2 needs to be partitioned. The result of this step is shown in figure 3 in UML notation. The left hand side of figure 3 displays the FSM, which corresponds in functionality to the SAP.

\(^1\)Clarifications and bug fixes are detailed in [5], extensions are given in [14].

Figure 2: The TCP FSM figure is derived from [21, p.532]. The heavy solid line is the normal path for a client. The heavy dashed line is the normal path for a server. The light lines are unusual events. User commands are given in bold font.
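The handshake transitions just described can be captured in a small table-driven sketch. This is a simplification of figure 2 for illustration only; timeouts, resets, and the teardown states are omitted, and the event names are informal shorthand rather than standard notation.

```python
# Table-driven sketch of the TCP three-way handshake transitions
# described above (simplified; failure paths and timeouts omitted).

transitions = {
    ("CLOSED",   "LISTEN"):      "LISTEN",       # server user command
    ("CLOSED",   "CONNECT"):     "SYN SENT",     # client user command
    ("LISTEN",   "rcv SYN"):     "SYN RCVD",     # server receives SYN
    ("SYN SENT", "rcv SYN+ACK"): "ESTABLISHED",  # client completes handshake
    ("SYN RCVD", "rcv ACK"):     "ESTABLISHED",  # server completes handshake
}

def step(state, event):
    """Advance one FSM step; unknown events leave the state unchanged."""
    return transitions.get((state, event), state)

client = step("CLOSED", "CONNECT")                   # client: SYN SENT
server = step(step("CLOSED", "LISTEN"), "rcv SYN")   # server: SYN RCVD
client = step(client, "rcv SYN+ACK")                 # client: ESTABLISHED
server = step(server, "rcv ACK")                     # server: ESTABLISHED
```

Representing the FSM as a transition table is also how the partitioned SAP and CEP machines of figure 3 can be encoded separately.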
Instead of using the TCP service commands LISTEN, CONNECT, SEND etc., the commands have been converted to service primitives, which are more narrative. Again, the client and the server side are combined in a single SAP FSM. From a user's viewpoint the communication with the client/server SAPs looks as follows: When a user requests a connection (Conn.req), the client's SAP changes to state C-PENDG. The server gets notified of the connection request via a connection indication (Conn.ind) and may respond with Conn.res, accepting the request. This is confirmed to the client via Conn.con and finally, the SAPs end up in state DATA. Note that neither the user of the client SAP nor the user of the server SAP sees the underlying TCP protocol being used. They only see the SAP interface; the layer and its use of TCP is hidden. The logical CEP holds the protocol specification of TCP, see the right hand side of figure 3. Since we have not introduced any coupling yet, the CEP FSM is strictly separated from the SAP FSM. That is why there is, for example, no indication what might have triggered the transition from CLOSED to SYN SENT at the client's side; but when the transition is triggered, no matter how it happened, it sends out a TCP message with the SYN bit set. Otherwise, figure 3b is similar to figure 2; just all the numerous SAP related details have been stripped off. To reduce complexity, we slightly simplified the TCP protocol specification and added transitions to the data transfer state ESTABLISHED. As described, TCP calls on a lower level protocol module to actually send and receive information over a network. This lower level protocol module usually is IP and is accessed via the inverse TCP SAP\textsuperscript{−1}.

Figure 3: The FSMs of the SAP and the CEP of the TCP layer. The shortcuts stand for connect and disconnect; the postfixes stand for request, confirmation, indication, and response.
To avoid cluttering up the discussion and distracting the reader with too many FSMs, we intentionally left out an example figure. For the sake of brevity, the SAP\textsuperscript{−1} is not considered and is supposed not to exist; that is, we assume the logical connection between the CEPs to be real. Consequently, we can restrict the discussion to the interaction between the SAP and the CEP; this simplifies and eases the topic under discussion.

3 The Concept of Coupled State Machines

We managed to partition TCP according to its layer interfaces, which already is an achievement. All further details of TCP like flow control and buffering, congestion control, fragmentation, error control, window flow control etc. are hidden and subject of a refined view. As was mentioned above: If we prefer a white box view, the two state machines could be interpreted as concurrent regions in a "higher level" statechart and synchronized via synch states. If we, by contrast, demand a rigid black box view (as is often the case for architectural modeling), the SAP and the CEP are described by two separate FSMs specifying the "horizontal" and "vertical" communication behavior; there are no coupling capabilities. However, for model understanding it would be beneficial to show how the different interfaces of the communication layer interact with each other without referring to any internals. As was shown by Ericsson's language study, it is usually the "inside" which drives the "outside". We are looking for a way that allows the modeler to keep a purely external view. One way to couple the individual FSMs is by the usual event messaging mechanism provided by UML, that is, by signals and/or call events. The drawback of this approach is that one would again tightly connect the FSMs. For example, the Conn.req transition of the SAP (see figure 3a) needs to have an activity attached that sends a signal to the CEP (see figure 3b).
This signal would then represent the CLOSED/SYN SENT transition that triggers the tcp.out message. As a result, the FSM of the CEP would more or less turn out to be the original TCP FSM and finally look like figure 2. In other words, the modeler would not be better off, and the splitting of the TCP FSM seems to be an academic exercise only. Obviously, another technique is needed. Our solution to this problem is the introduction of so-called Trigger Detection Points (TDPs) and Trigger Initiation Points (TIPs). A TDP can be attached at the arrow head of a transition in a statechart diagram; it detects whenever this specific transition fires and broadcasts a notification message to all corresponding TIPs. TDPs are notated by small filled boxes, see figure 4. A TIP can be attached at the beginning of the transition arrow and triggers the transition to fire. An active TIP stimulates the transition to fire on receipt of a TDP notifier, independent of the transition's event-signature. That means that either the event specified by the transition's event-signature or the TIP can trigger the transition. Active TIPs are visualized by small filled triangles, see figure 4. Passive TIPs, on the other hand, have a locking mechanism and can be meaningfully used with "normal" transitions only, i.e. transitions that explicitly require an event-signature. The transition cannot fire unless the TIP's corresponding TDP has been passed and the transition's event has been received. The order of occurrence is irrelevant; it is the combination of the TIP event and the transition event which unlocks the transition and lets it fire. Passive TIPs behave like a logical "and" to synchronize a transition, whereas active TIPs realize a logical "or". An example of a passive TIP can be found in figure 4a; it is pictured by a small "empty" triangle. In general, the relation of a TIP and a TDP is given by a name consisting of one or more capital letters.
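The active ("or") and passive ("and") TIP semantics just described can be made concrete in a small sketch. The class and attribute names below are our own illustration, not from the paper's prototype:

```python
# Sketch of TIP semantics: an active TIP fires on either the
# transition's own event OR a TDP notifier; a passive TIP requires
# BOTH the notifier and the event, in either order.

class Transition:
    def __init__(self, event, tip=None):
        self.event = event      # the transition's event-signature
        self.tip = tip          # None, "active", or "passive"
        self.notified = False   # set once the corresponding TDP fires
        self.pending = False    # passive TIP: event seen, awaiting notifier

    def on_notifier(self):
        """Handle the notifier broadcast by the corresponding TDP."""
        self.notified = True
        if self.tip == "active":
            return True                       # notifier alone fires it
        if self.tip == "passive" and self.pending:
            return True                       # event already seen: unlock
        return False

    def on_event(self, event):
        """Handle the transition's own event."""
        if event != self.event:
            return False
        if self.tip == "passive" and not self.notified:
            self.pending = True               # locked until the TDP passes
            return False
        return True

t = Transition("Conn.req", tip="passive")
t.on_event("Conn.req")   # False: locked, no notifier yet
t.on_notifier()          # True: both conditions met, transition fires
```

The order-independence of the passive case is visible in the two flags: whichever of the notifier and the event arrives first simply records itself, and the second arrival fires the transition.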
Note that one or more TIPs may be related to a single TDP. Now, the coupling of the SAP and the CEP can be easily described, see figures 4a and 4b. For example, when a client user sends a Conn.req to the SAP, TDP A detects the transition NULL to C-PENDG firing and broadcasts a notifier event to all corresponding TIPs. The notifier event causes the CEP to fire the CLOSED/SYN SENT transition and results in sending out a TCP message with the SYN bit set to one; the rest of the scenario is straightforward. However, some explanations should help understand the purpose of a passive TIP. Let us assume that the protocol at the server side has just entered state SYN RCVD, which triggers TIP C at the server SAP and results in a connection indication (Conn.ind) to the SAP user. Now, there are two concurrent and competing threads. The user of the server SAP may either accept the connection indication and answer with Conn.res or, alternatively, the user may deny the request and answer with a Disc.req. Concurrently, on the protocol thread, the server's CEP enters state ESTABLISHED at some point in time. It is the passive TIP D that prevents the SAP FSM from entering DATA on Conn.req unless the protocol has reached ESTABLISHED. On the other hand, if the user has decided to reject the connection indication (Conn.ind) via Disc.req, the CEP starts the disconnect procedure based on the TDP G trigger. All this could not be done using conventional messaging without changing the FSMs. The advantage of using TDPs and TIPs is that the FSMs remain autonomous but get coupled. They can notify each other about important state changes and use this for synchronization purposes; there is no need to introduce new event messages and modify transitions. TDPs and TIPs could be interpreted as some sort of annotations (with precise semantics), which specify FSM interaction and coordination.

Figure 4: The FSMs of the SAP and the CEP of the TCP layer coupled via TDPs and TIPs
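The TDP A scenario above (client SAP fires NULL to C-PENDG, the notifier makes the CEP fire CLOSED to SYN SENT) can be run end to end in a minimal sketch. The `FSM` and `Bus` classes here are our own illustration of the broadcast mechanism, not the paper's prototype code; only active TIPs are modeled.

```python
# Minimal sketch of two coupled FSMs: a TDP on one machine's
# transition broadcasts a notifier over a bus, and an active TIP on
# the other machine's transition fires on that notifier.

class FSM:
    def __init__(self, name, state, transitions):
        self.name, self.state = name, state
        # transitions: (state, event) -> (next_state, tdp_name or None)
        self.transitions = transitions
        self.tips = {}   # tdp_name -> (from_state, to_state), active TIPs

    def fire(self, event, bus):
        key = (self.state, event)
        if key in self.transitions:
            self.state, tdp = self.transitions[key]
            if tdp:
                bus.broadcast(tdp)        # TDP detected: notify all TIPs

    def on_notifier(self, tdp):
        if tdp in self.tips and self.state == self.tips[tdp][0]:
            self.state = self.tips[tdp][1]  # active TIP fires transition

class Bus:
    def __init__(self, fsms):
        self.fsms = fsms
    def broadcast(self, tdp):
        for f in self.fsms:
            f.on_notifier(tdp)

sap = FSM("SAP", "NULL", {("NULL", "Conn.req"): ("C-PENDG", "A")})
cep = FSM("CEP", "CLOSED", {})
cep.tips["A"] = ("CLOSED", "SYN SENT")   # TIP A on CLOSED -> SYN SENT

bus = Bus([sap, cep])
sap.fire("Conn.req", bus)
# sap is now in C-PENDG and, via TDP A, cep is in SYN SENT
```

Note that neither machine's transition table mentions the other machine; the coupling lives entirely in the TDP/TIP annotations and the bus.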
The modeler does not need to modify the original interface specification or refer to any internal "engine" driving the whole. If the broadcasting mechanism of TDP events can be directed, it is possible to couple external interface FSMs with layer internal FSMs reusing the same set of TDPs and TIPs. That means that a black box and a white box view could peacefully coexist without blurring the difference between the two views.

4 Extending the UML

Synch states as known from the UML correspond in their behavior to what we called passive TIPs: A synch state is used in conjunction with forks and joins to insure that one region leaves a particular state or states before another region can enter a particular state or states [17]. Clearly, synch states do not support other synchronization means between regions like TIPs do, and they are not suited for inter-FSM synchronization. These are good reasons to think about integrating TIPs and TDPs into the UML and substituting synch states. TDPs and TIPs can be smoothly integrated in an event-driven execution model for FSMs. The prototype we developed at Ericsson (programmed in Python [16]) treats TDPs as a specialization of messages, see figure 5, and dispatches notifier events to the event queue. The implementation of TIPs required only a few modifications to the event processor. If one compares the prototype design and the metamodel for state machines (see section 2.12 of the UML [17] semantics), the required extensions to the UML can be easily identified: First, the notifier event needs to be subclassed to the event metaclass;\(^2\) this can be achieved by using stereotypes. Then, it is to be decided how TDPs can be attached to the transition metaclass. Since transitions are restricted to have not more than one event trigger, it is not possible to add TDPs as a second trigger. Rather, the transition metaclass can be extended by a few properties.

Figure 5: The design of the FSM prototype
A TDP property is needed referring to the notifier event, optionally accompanied by a property holding a list of state machines the notifier event is selectively broadcast to. Two further properties are the TIP and the TIP type, which hold the notifier reference and the value active or passive, respectively. The required changes to the execution semantics of state machines are uncritical, since the UML is relatively open to adaptations. To conclude, the extensions described are the simplest form to introduce TDPs and TIPs to the UML using its extension mechanisms [10]. Note that TDPs and TIPs make synch states superfluous. TDPs/TIPs contain the concept of synch states but allow much more semantic variation and extension. Synch states are an oddity in the UML with no clear conceptual roots; TDPs and TIPs are their generalization, but they are put in a meaningful semantic context of transitions and events. In fact, TIPs and TDPs specify a synchronization protocol between state machines or regions. Such a protocol seems not only more appropriate to capture the complex interactions of synchronization but also semantically cleaner. That is why we propose to remove synch states from the UML metamodel and instead introduce the notifier event subclass, insert metaclasses for TDPs and TIPs, and associate them with the transition metaclass. This would enable flexible semantic extensions via stereotypes for the UML user.

\(^2\)Regarding events, the UML is designed a bit differently than our message-based prototype.

5 Conclusions

Actually, the TDP/TIP concept relates very much to the observer pattern [6]; it allows the modeler to notify other FSMs about state changes. Because of the distinction between active and passive TIPs, the concept of coupled state machines implements an extended observer pattern. This lifts the observer pattern from its use in the design domain in form of class diagrams to the modeling domain with an explicit notation for coupling, which is a quite interesting aspect.
Furthermore, it is an interesting question whether TIPs and TDPs could be of use in sequence diagrams or Message Sequence Charts (MSC) [13]. Since the approach presented provides the means to specify and separate aspects of a modeling entity, one could also investigate to what extent TDPs and TIPs enable aspect-oriented modeling, in extension of aspect-oriented programming [15]. The concept also allows the modeler to specify APIs (Application Programming Interfaces) much more elegantly; for instance, the TCP SAP could be seen as an API to TCP. As was shown in the case study, the design of communication protocols gains a lot of clarity from the separation of logical concerns. In short, it appears that many application areas could benefit from using coupled state machines. Due to the specific nature of the application domain we study (data and telecommunications), we cannot claim to have identified all types of TDPs and TIPs required for coupling FSMs in an efficient manner; extensions or specializations are conceivable. However, TDPs and TIPs appear to be a powerful modeling concept: they substitute synch states and put a modeler in a better position, especially for modeling the coordination and synchronization of concurrent systems. Acknowledgements: Many thanks to Andreas Witzel, who triggered the idea of coupled state machines. Furthermore, the authors would like to thank Jörg Bruß and Dietmar Wenzinger (all Ericsson) for their support. References
Scheduling of an Aircraft Fleet Massimo Paltrinieri (*) Alberto Momigliano (**) Franco Torquati Bull HN Italia Direzione Sistemi Esperti Pregnana Milanese, Milano Italia Abstract Scheduling is the task of assigning resources to operations. When the resources are mobile vehicles, they describe routes through the served stations. To emphasize this aspect, the problem is usually referred to as the routing problem. In particular, if the vehicles are aircraft and the stations are airports, the problem is known as aircraft routing. This paper describes the solution to such a problem developed in OMAR (Operative Management of Aircraft Routing), a system implemented by Bull HN for Alitalia. In our approach, aircraft routing is viewed as a Constraint Satisfaction Problem. The solving strategy combines network consistency and tree search techniques. 1. Introduction Two of the main concerns for a major airline are flight planning and aircraft routing. Flight planning involves both technical and market issues, such as the choice of the cities to be served and the weekly frequency of flights. It produces an aircraft rotation, valid for a whole season, which we shall refer to as the virtual plan (see fig. 1); it consists of a periodical timetable where flights are organized in lines, one for each virtual aircraft, a hypothetical resource that could perform them in the absence of technical and maintenance constraints. Aircraft routing assigns tail numbers - the identifiers of the aircraft - to flights, usually for a time window of 24 hours. This process, called predictive routing, is trial and error: routes are drawn on the virtual plan, performing switches, i.e. connections between flights on different lines of the plan, to satisfy the constraints that prevent an aircraft from covering the next flight on the same line. When there are no more tasks available for the given aircraft, an assignment to an already scheduled task is possibly invalidated.
If the scheduler is not able to cover all the activities with the available resources, maintenance is delayed or, in some extreme cases, flights are delayed or even cancelled. The schedule produced by predictive routing is coded in the routing plan, which differs from the virtual plan in replacing virtual with actual aircraft and arranging programmed maintenance. The routing plan is often modified in real time to avoid or contain the propagation of delays. This activity is called reactive routing. This paper describes the Prolog kernel of OMAR (Operative Management of Aircraft Routing), an interactive system designed to provide predictive and reactive routing of the Alitalia fleet. Routing is formulated as a Constraint Satisfaction Problem (CSP): each variable (task) has a domain of possible values (aircraft), while constraints (relations between variables) are used to restrict such domains. Since the refined domains are not in general single-valued, solutions must be found by search, iteratively selecting an aircraft and assigning it to a set of consecutive flights. Aircraft selection is driven by the first-fail principle: the most constrained aircraft is scheduled first. A controlled form of backtracking is implemented to partially recover from heuristic flaws while maintaining predictable response time. Present addresses: (*) Stanford University - Department of Computer Science - Stanford, CA 94305 - palmas@cs.stanford.edu (**) Carnegie Mellon University - Department of Philosophy - Pittsburgh, PA 15213 - am4e@andrew.cmu.edu 2. Problem Definition In this section we give a formal definition of both predictive and reactive aircraft routing. The constraints of the problem are captured by the function \( \text{label} \), which associates to each task the set of aircraft that can perform it. The function \( \text{start}_{q_s} \) returns the airport from which an aircraft has to depart after time \( q_s \), the start time of the scheduling window.
**Predictive Routing**

**Input**
- a set \( T \) of tasks
- a set \( AP \) of airports
- a set \( AC \) of aircraft
- a set \( Q \) of times
- a schedule start time \( q_s \) and a schedule end time \( q_e \)
- a total order \( \leq \) on \( Q \cup \{q_s\} \cup \{q_e\} \) s.t. \( \forall q \in Q,\; q_s \leq q \leq q_e \)
- a total function departing time, \( dt: T \rightarrow Q \)
- a total function arrival time, \( at: T \rightarrow Q \)
- a total function departing airport, \( da: T \rightarrow AP \)
- a total function arrival airport, \( aa: T \rightarrow AP \)
- a total function label, \( \text{label}: T \rightarrow 2^{AC} \)
- a total function \( \text{start}_{q_s}: AC \rightarrow AP \)

**Output** an aircraft routing, i.e. a total function \( s: T \rightarrow AC \), s.t.

(i) \( \forall t \in T, s(t) \in \text{label}(t) \)

(ii) if \( s^{-1}(ac) \) is not empty, then its elements can be ordered in a sequence (the routing path of \( ac \))

\[ r_{ac} = \langle t_{ac,0}, t_{ac,1}, \ldots, t_{ac,n} \rangle \]

such that

\[ da(t_{ac,0}) = \text{start}_{q_s}(ac) \]
\[ aa(t_{ac,i-1}) = da(t_{ac,i}) \quad \text{for } i = 1, \ldots, n \]
\[ at(t_{ac,i-1}) < dt(t_{ac,i}) \quad \text{for } i = 1, \ldots, n \]

**Reactive Routing**

**Input** an aircraft routing as defined above and an unexpected event

**Output** an aircraft routing that copes with the unexpected event and most closely conforms to the given routing.

3. Aircraft Routing as a Constraint Satisfaction Problem

A task is said to be *programmed* if its departure and arrival airports and times are fixed. Flights, as well as main maintenance, are programmed, whereas secondary maintenance is not necessarily so. The duration of each task is a given constant. Let us assume that we have a set \( T = \{ T_h, h=1, \ldots, m \} \) of programmed tasks to be scheduled in a time window of 24 hours. Two tasks \( T_h \) and \( T_k \) are said to be *connectible* (denoted \( T_h \rightarrow T_k \)) if the following Prolog clause holds:

```prolog
connectible(Th, Tk) :-
    task_arrival_airport(Th, Airp),
    task_departure_airport(Tk, Airp),
    task_arrival_time(Th, ArrT),
    task_departure_time(Tk, DepT),
    ground_time(Airp, GrT),
    MinDepT is ArrT + GrT,
    MinDepT =< DepT.
```

In other words, task \( T_h \) is connectible to task \( T_k \) iff the arrival airport of the former is equal to the departure airport of the latter and the arrival time of the former plus the ground time precedes the departure time of the latter. The graph of the connectibility relation is called the *connection graph*. It is directed and acyclic. Fig. 2 shows the connection graph for the portion of the virtual plan in fig. 1. We say that \( T_h \) *precedes* \( T_k \), and write \( T_h \prec T_k \), iff \( (T_h, T_k) \) is in the transitive closure of \( \rightarrow \). If neither \( T_h \prec T_k \) nor \( T_k \prec T_h \), then \( T_h \) and \( T_k \) are said to be *incompatible*, denoted \( T_h \uparrow\downarrow T_k \): incompatible tasks cannot be assigned to the same aircraft. A *routing path* \( P \) is a finite sequence of elements from \( T \)

\[ P = \langle T_1, T_2, \ldots, T_n \rangle \]

such that \( T_h \rightarrow T_{h+1} \) for each \( h, 1 \leq h < n \). A path \( P \) is *operable* by aircraft \( Ac \) if each task in the path is operable by \( Ac \), i.e. there are no technical reasons that forbid the assignment to \( Ac \). An initial state for the fleet is a one-to-one map from \( Acs \), the set of aircraft in the fleet, to a subset of \( T \), the set of programmed tasks. The image of \( Acs \) under such a map is the set of initial tasks of \( T \), which correspond to those nodes in the connection graph with no entering arcs. The set of final tasks is the set of nodes in the connection graph with no exiting arcs.
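As an illustration of these definitions, the connectibility relation and the precedence test can be sketched in Python; the task data, airport codes, ground times, and times-in-minutes encoding below are invented for the example, not taken from the OMAR system.

```python
# Illustrative sketch of the connectibility relation and its transitive closure.
def connectible(th, tk, ground_time):
    """th -> tk iff th arrives where tk departs and
    arrival time + ground time <= departure time."""
    return (th["aa"] == tk["da"]
            and th["at"] + ground_time[th["aa"]] <= tk["dt"])

def precedes(graph, a, b):
    """True iff (a, b) is in the transitive closure of -> (DFS over the graph)."""
    stack, seen = [a], set()
    while stack:
        n = stack.pop()
        for m in graph.get(n, []):
            if m == b:
                return True
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return False

# Times are minutes after midnight; da/dt = departure airport/time, aa/at = arrival.
tasks = {
    "T1": {"da": "FCO", "dt": 600, "aa": "LIN", "at": 660},
    "T2": {"da": "LIN", "dt": 720, "aa": "FCO", "at": 780},
    "T3": {"da": "FCO", "dt": 620, "aa": "LIN", "at": 680},
}
ground = {"FCO": 35, "LIN": 35}
graph = {h: [k for k in tasks if k != h and connectible(tasks[h], tasks[k], ground)]
         for h in tasks}
```

Here T1 -> T2 and T3 -> T2 hold, while neither T1 precedes T3 nor T3 precedes T1, so T1 and T3 are incompatible and cannot be assigned to the same aircraft.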
In the following, paths will have an initial task as the first element of the sequence; the idea is that paths are the formalization of the routes that an individual aircraft may cover, starting from its initial state. We look at the elements of \( T \) as variables which take their values from the domain \( Acs \). As already mentioned, a label of a task is the set of aircraft that can perform it. This concept can be extended to the set of all tasks: the labeling of the set \( T \) is a map \( l: T \rightarrow P(Acs) \), where \( P(Acs) \) is the powerset of \( Acs \). Constraints are relations in \( Acs \times P(T) \) that are used to refine the labels of tasks. They come in two types: a commitment constraint between aircraft \( Ac \) and tasks \( T_1, \ldots, T_n \) requires that \( Ac \) execute at least one of those tasks; an exclusion constraint between an aircraft \( Ac \) and tasks \( T_1, \ldots, T_n \) requires \( Ac \) to be excluded from those tasks. Each singleton labeling that satisfies all the constraints is an aircraft routing, i.e. a solution to the routing problem formalized in sect. 2. Such a singleton labeling generates a partition of the set \( T \) of tasks such that each element of the partition is a routing path for a distinct aircraft. 4. Routing Process The routing process implemented in OMAR starts by loading the state of the fleet and the relevant information on the tasks to be scheduled from the Alitalia database. A necessary, but not sufficient, condition for the existence of a fleet routing is checked, namely whether the number of resources available to be assigned to each task is always greater than or equal to zero. We briefly describe the algorithm, linear in the number of tasks, that tests this condition. Each airport served by the fleet identifies a sequence of chronologically ordered events belonging to one of two classes: departures or arrivals. Each task entails two events, its arrival and its departure, unless it is initial, in which case we consider only the arrival.
A resource counter representing, at each time, the balance between arrivals and departures is associated with every airport. The resource counter is initially set to 0 and is incremented at each flight arrival and decremented at each flight departure. If, scanning the whole plan, the counter of some airport becomes negative, the necessary condition is not satisfied and no routing exists. On the other hand, if the counters are always greater than or equal to zero, then the condition is satisfied and the system enters its next stage.

Fig. 1. A portion of about one-fourth of the virtual plan for the DC-9 fleet.
Fig. 2. The connection graph for the virtual plan in fig. 1.

A sample list of events at Linate airport is shown below.

| Time     | Event | Flight | Resource level |
|----------|-------|--------|----------------|
| 17:50+0  | d     | 448    | 0              |
| 17:25+35 | a     | 267    | 1              |
| 17:45+35 | a     | 074    | 2              |
| 18:30+0  | d     | 316    | 1              |

Observe that the arrival of flight 267 at 17:25, given the ground time of 35 minutes, follows the departure of flight 448 at 17:50. The constraint satisfaction algorithm refines the labels so that most dead ends are avoided and expiry maintenance requirements are implicitly satisfied: this means that aircraft planned for the latter tasks are excluded from those routes that do not lead to the set of airports where maintenance jobs are possible. If the network is found inconsistent, no complete routing exists and control goes to the human scheduler, who relaxes the constraints. It is our opinion that this kind of expertise cannot be adequately simulated by a computer, since the knowledge required to recognize the causes of an inconsistent situation and suggest a solution is too broad and fuzzy.
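The resource-counter check described above can be sketched in a few lines of Python; the per-airport count of aircraft parked at the start of the window is an assumption we add so the example can begin with a departure, and arrival times are taken as effective times (arrival plus ground time), as in the Linate example.

```python
# Sketch of the linear necessary-condition check: per airport, scan the
# chronologically ordered events; arrivals increment and departures decrement
# a resource counter, and a negative counter rules out any routing.
def feasible(events_by_airport, initial_counts):
    """events are ('a'|'d', effective_time) tuples; initial_counts (an assumed
    extension) gives the aircraft parked at each airport at window start."""
    for airport, events in events_by_airport.items():
        counter = initial_counts.get(airport, 0)
        for kind, _time in sorted(events, key=lambda e: e[1]):
            counter += 1 if kind == "a" else -1
            if counter < 0:
                return False        # more departures than available aircraft
    return True

# Effective times in minutes: departure 17:50, arrivals 17:25+35 and 17:45+35,
# departure 18:30 (flight numbers as in the sample table above).
events = {"LIN": [("d", 1070), ("a", 1080), ("a", 1100), ("d", 1110)]}
ok = feasible(events, {"LIN": 1})    # one aircraft parked: counter stays >= 0
bad = feasible(events, {})           # no parked aircraft: first departure fails
```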
If, on the other hand, everything is successful, the system is ready to schedule. The aircraft are sorted according to the number of their occurrences inside the labeling, fewest first; the idea is that the aircraft coming first in this order are the most constrained ones, since they have a smaller number of tasks to which they can be routed. Routes are then created according to this order by the Prolog procedures sketched below. The recursive procedure `route_gen/3` terminates when the list of aircraft to be scheduled is empty. It searches for a solution in depth-first mode, generating a descendant of the most recently expanded node and backtracking if some dead end is reached. If we relied exclusively on backtracking, the process duration would be unpredictable. Fortunately, we have developed some criteria that help us discard paths likely to fail. On each aircraft `Ac`, `route_gen/3` calls `path_gen/3`, passing as parameters the aircraft `Ac` and the labeling `Lab` and returning a new labeling `TmpLab` in which the tasks assigned to `Ac` form the generated path. The procedure `path_gen/4` builds a path recursively, task after task, starting from the first one returned by `last_started/2`. A limited amount of backtracking is allowed: different choices are considered only during the coupling of a task with one of its direct offsprings. A path cannot be invalidated after its completion (note the use of the cut sign `!` after `path_gen/3`). In case of failure, the interaction with the user is more effective. In our experience, after the relevant modifications have been performed, another run of the scheduler is usually sufficient to achieve a complete solution. Let us analyze the path generation process in more detail. The problem is not trivial, since there are both local and global optimizations which influence the choice to various extents, often in opposite directions.
For instance, we could always choose the first task departing after the given one (local optimization), but this could generate a new line switch hard to manage in the overall routing (global optimization).

```prolog
route_gen([], Lab, Lab).
route_gen([Ac|Acs], Lab, NewLab) :-
    path_gen(Ac, Lab, TmpLab), !,
    route_gen(Acs, TmpLab, NewLab).

path_gen(Ac, Lab, NewLab) :-
    last_started(Ac, Task),
    path_gen(Ac, Task, Lab, NewLab).

path_gen(Ac, Task, Lab, NewLab) :-
    select(Ac, Task, Lab, NextTask, TmpLab),
    path_gen(Ac, NextTask, TmpLab, NewLab).
path_gen(_Ac, _Task, Lab, Lab).

select(Ac, Task, Lab, NextTask, NewLab) :-
    propose(Ac, Task, Lab, NextTask),
    check_rc(Task, NextTask),
    update_lab(Ac, NextTask, Lab, NewLab).

propose(Ac, Task, _Lab, NextTask) :-
    get_methods(Ac, Task, Methods),
    member(Method, Methods),
    offsprings(Task, Offs),
    choose(Method, Ac, Offs, Task, NextTask).

get_methods(Ac, Task, Methods) :-
    rule(Condition, Methods),
    apply(Condition, Ac, Task).

rule(open_switch, [close_switch, straight, closest, stop]).
rule(default,     [straight, open_switch, closest, stop]).
```

The basic step of the path generation process is performed by the Prolog procedure `select/5` shown above. Given an aircraft Ac, just assigned to a flight or maintenance (Task), `select/5` extends the path of Ac to a new flight or maintenance (NextTask). The procedure `propose/4` returns NextTask; then `check_rc/2` checks whether the resource counter becomes negative: in such a case it fails, otherwise it succeeds and the labeling is updated, aircraft Ac being assigned to NextTask. The path of Ac is extended with NextTask by `propose/4` as follows: first, a list Methods of methods compatible with Ac and Task is selected by `get_methods/3`; then one Method is chosen nondeterministically from that list; next, the offsprings of Task in the connection graph are returned by `offsprings/2`; and finally one of them, NextTask, is returned by `choose/5`, which basically applies Method to the given Ac and Task. A method is a technique to choose the next task that extends a given path.
Methods are gathered in lists and are associated with conditions. The relation between conditions and lists of methods is defined by `rule/2`. Two sample rules are shown above for the open_switch condition (remember that an aircraft opens a switch when its path is extended on a different row) and the default condition. Given Ac and Task, if a condition is applicable to Ac and Task, which is checked by `apply/3`, a list of methods is returned by `get_methods/3`. The methods are tried in the same order as they appear in the Methods list, the first one being the most desirable. For any possible Ac and Task there is at least one rule whose condition is satisfied, so a list of methods is always selected, if necessary by the default rule. In that case, the algorithm first tries to extend the path on the same line of the virtual plan with the straight method, which is considered optimal; otherwise a switch is opened by open_switch; if it is not possible to open a switch, the closest flight is selected by closest to minimize the consumption of resources; if even this method is not applicable, the path is terminated by stop. 5. Conclusions Aircraft routing is a problem for which no efficient exact solution method is known. Consequently, all models are heuristic, and research is now concentrating on the systematic interaction between human and computer. OMAR is an interactive system for the routing of the Alitalia fleet. Its kernel is presently composed of 20,000 lines of Quintus Prolog source code, and the system's response time is satisfactory. Once the derived structures have been computed from the primary database, the fleet routing is returned in nearly constant time (approximately 30 seconds for a fleet of 26 aircraft with 170 flights). Moreover, if the constraints are compatible with complete schedules, there is a very high probability that the system succeeds in finding one of them. Of course, we cannot expect the solution to perfectly match the user's expectations.
According to our experience, however, a user intervention modifying, on average, five assignments is sufficient to reach such a result. In the tests supplied by Alitalia so far, OMAR's solutions are comparable with those of a senior scheduler. References
An efficient test suite reduction methodology for regression testing Shailendra Gupta, Jitendra Choudhary Department of Computer Science, Medi-Caps University Indore, Indore, India ABSTRACT The goal of this paper is to provide a more effective algorithm for reducing the number of test cases in a test suite during the regression testing phase. The algorithm first divides the entire test suite into equivalence classes and then applies boundary value coverage to select test cases from among repeated test cases of equal importance in the suite. It is based on the idea that, before selecting the best test cases from among repeated ones to prepare a reduced test suite, dividing all test cases into equivalence classes reduces the suite to a great extent. The paper proposes a method of experimentation involving test cases from different software application areas. Furthermore, we discuss a case study to verify the new algorithm and check its efficiency, applying it to one program or a group of programs. The complete proposed methodology shall be applied to different software applications belonging to soft computing, engineering software, and financial software. As found in our literature survey, this work has not been done earlier. Minimizing test cases leads to less testing effort and timely test completion. Keywords: Method of experiment, Minimization methods, Test case, Test case generation, Test suite 1. INTRODUCTION It is a well-known fact that regression testing is a main part of the testing phase in the software development cycle. When we modify the code or make any kind of addition to existing code, we require more and more test cases to detect errors due to these additions and changes. As the test suite size grows, testing the effect of changes on existing functionality along with the new functionality consumes a great deal of time.
Therefore, to speed up software development, it is imperative to lower the number of tests currently included in the test suite, allowing it to cover all necessary functionality with fewer test cases. This procedure is called test suite reduction [1], [2]. Identifying a subset of existing test cases that satisfies all software testing requirements turns out to be an NP-complete problem [3], [4]. Several algorithms have been designed to reduce an existing test suite [1], [5]–[7]; among them, four are particularly important for minimizing test cases in a suite. The greedy algorithm G [5] repeatedly selects test cases so that the scenarios in the new minimized suite cover the full group of specifications. The HGS greedy algorithm of Harrold, Gupta, and Soffa [1] chooses next the test case that satisfies the most of the as-yet-unsatisfied requirements. Chen and Lau [7] introduced the algorithm GRE with three main components: i) select all essential test cases, ii) remove all 1-to-1 redundant test cases, and iii) select from the remaining test cases those needed to satisfy the remaining requirements. Liu [8] replaced the random choice among test scenarios with identical coverage by a selection step that chooses among identical-cardinality test scenarios according to a boundary coverage methodology. While existing algorithms do minimize the test cases in a suite, as a side effect the reduced suite suffers a loss of fault-detection capability [9]. For this reason, Jeffrey and Gupta [9] proposed adding some redundant test cases back to the reduced suite to regain the original fault-detection capability. Journal homepage: http://ijeecs.iaescore.com
But according to Lin and Huang [10], this purely increases maintenance overhead, so they suggested using an additional testing criterion to select among test cases of equal importance instead of random selection. Choosing a new testing criterion to support the process is not easy, because a new criterion may be effective in some cases and have no effect on other programs. Chvatal [5] and then Leiserson et al. [6] proposed an algorithm for the minimum set cover problem. The greedy test suite reduction strategy is: choose the test case that covers the most criteria, take out all the requirements it addresses, and repeat the process until every condition is met. The next approach, recommended by Harrold et al. [1] for minimizing the test suite, is to find essential test cases that satisfy test requirements no other test case can satisfy. The algorithm searches for each test scenario that covers test requirements not covered by other test cases, and repeats its steps until all the requirements are fulfilled. Using this method, the number of test scenarios can be reduced by between 19% and 60% compared to the initial number of test cases. Chen and Lau [7] provide a fresh GRE heuristic that is broken down into three parts: the greedy strategy, the essential (required) strategy, and the 1-to-1 redundancy strategy. The greedy strategy chooses the test scenarios that fulfill the most test specifications not yet satisfied, just like the basic greedy approach [5], [6]. The essential strategy picks all necessary test scenarios, and the 1-to-1 strategy removes all duplicate test cases. In simulations, GRE consistently outperformed HGS in both our studies and the tests conducted [11], [12], although both are able to reduce test suites.
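The classical greedy reduction strategy of [5], [6] described above can be sketched in a few lines of Python; the coverage data is invented for illustration.

```python
# Minimal sketch of greedy test-suite reduction: repeatedly pick the test case
# covering the most as-yet-unsatisfied requirements until all are covered.
def greedy_reduce(coverage):
    """coverage maps each test case to the set of requirements it satisfies."""
    uncovered = set().union(*coverage.values())
    reduced = []
    while uncovered:
        # pick the test case covering the most still-uncovered requirements
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        reduced.append(best)
        uncovered -= coverage[best]
    return reduced

coverage = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r4"},
}
reduced = greedy_reduce(coverage)   # picks t2 first (3 requirements), then t1
```

Here t2 is chosen first because it covers three requirements, then t1 covers the remaining r1; t3 is redundant and is dropped from the reduced suite.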
Jeffrey and Gupta [9] proposed a novel test suite reduction method that, by preserving some redundant test cases selected according to additional testing criteria, improves the capacity to detect faults. When a test case is found redundant with respect to testing criterion C, its redundancy with respect to the other testing criteria is also examined; otherwise it is kept. Compared to a test suite obtained by meeting a single testing criterion, the reduced suite is better able to detect errors because it satisfies more testing criteria. An innovative approach, TestReduce, was presented by Sheikh et al. [13]: a combination of evolutionary algorithms is used to find an optimal and compact set of test cases. The ultimate objective of that study is to provide a technique that reduces the number of regression test cases when relevant criteria are available; the selected set helps in assessing the quality of the code and researching its ripple effects. Jung et al. [14] provided a workable process for implementing a software product line (SPL) testing strategy, as well as a formula that enhances their earlier technique to reduce the superfluous tracing of test executions and the recurrence of test executions; SPL testing may thus be completed more rapidly. Package-level security metrics were used by Tanejaa et al. [15] to demonstrate how to assess the probability of errors in software modules that use objects; this improves FEP and cuts down on execution time. The test suite reduction model developed by Zhang et al. [16] is transformed into a typical optimization problem; furthermore, they use a quantum-bits approach to encode the chromosome as discrete information. The condensed test suite can significantly save testing expenses and increase test efficiency.
An approach that reduces the number of test cases by identifying a representative subset that satisfies the testing requirements was proposed by Mohapatra and Pradhan [17]; each iteration considers how many test cases with different chromosome lengths are included in the test suite. With an efficient test suite, regression testing can save cost and increase software productivity. Lawanna [18] developed a model to improve the reduction of regression tests, which that work suggests using as an alternative to conventional approaches. It comprises four algorithms: testing, ordering, test case reduction, and defect fixing, and reports figures of about 40% and 200%, respectively, for the reduction in fixed bugs. Section 2 discusses existing methods along with the proposed methodology. Section 3 covers software application areas for the proposed methodology. Finally, concluding remarks are presented in section 4.

2. EXISTING MINIMIZATION METHODS AND PROPOSED METHODOLOGY DETAILS OF TEST REDUCTION

2.1. Math and algorithms for minimization methods

In this section we describe the mathematics and algorithms used in recent work to perform minimization. The first subsection discusses TestReduce, a new technique to minimize the test cases in a test suite; the second discusses an investigation on reducing duplicate test executions in SPL testing.

2.1.1. An optimized test case minimization method for regression testing using GA (2023) [13]

TestReduce: to reduce regression test cases, TestReduce is built on a genetic algorithm (GA). Its goal is to find the best possible way to select, before the regression testing process begins, a specific collection of test scenarios. The chosen set assists in evaluating the modified code's quality and in conducting a ripple impact analysis. The phases that make up TestReduce are listed below.
Step 1: requirement prioritization.
Step 2: requirements associations.
Step 3: correction of the module bugs.
Step 4: association level of rectified modules.
Step 5: derivation of the objective function: the objective function is crucial because test case priority is based on it. In this study, the objective function is a combinatorial function of five factors, as shown in (1).

\[ f(p, da, di, dr, dm) = (p + da + di + dr + dm) - 5 \]  (1)

Step 6: application of the genetic algorithm.

2.1.2. An investigation on reducing duplicate SPL testing test executions (2022) [14]

A minimal set of steps is included in the suggested application procedure in order to apply the strategy to SPL testing; additional stages for test case minimization and prioritization, and for change impact analysis, are then applied. The method can be expanded to include everything required for thorough testing of a product line.
Step 1: get the product family.
Step 2: assemble/update the checksum matrix.
Step 3: decide which test scenario to run.
Step 4: carry out the test scenario and get its execution traces, as depicted in Figure 1.

Figure 1. SPL

In the proposed algorithm, we apply equivalence class partitioning before applying boundary value conditions, which reduces the test suite to a great extent. We therefore present a novel algorithm for test case reduction that applies equivalence class partitioning as the first step of test suite reduction. The new algorithm extends the algorithm given by Liu by introducing equivalence partitioning, and we also discuss the implementation of the new algorithm with a case study.

2.2. Proposed method details of test reduction

The proposed methodology is depicted as a flowchart in Figure 2. The flowchart consists of five main phases: program collection to apply the methodology, procedures to find the set of test cases, the list of test case minimization procedures, and the area of study on minimized test cases.
The last phase is the application of minimized test cases to different software.

### 2.2.1. In the methodology shown in Figure 2 we follow these steps

Initially we select N programs containing conditions.
- Simple conditions usually found in code:
  - if (a<b)
  - if (a>b)
  - if (a<=b)
  - if (a>=b)
  - if (a==b)
  - if (a!=b)
- Compound conditions usually found in code:
  - if ((a<b) && (c>d))
  - if ((a<b) || (c>d))

AND is represented with && and OR with ||.

### 2.2.2. After program selection we generate different test cases using

Boundary value analysis: this method checks for problems at the boundaries of the input domain, with values ranging from just inside to just outside the minimum and maximum. It is a test case design method complementary to equivalence class partitioning and, in contrast to other approaches, derives test cases from both the input conditions and the output domain.

Equivalence class partitioning: with this technique, test cases are produced by categorizing, or partitioning, the input domain of the software unit. As a result, the total number of test instances run is reduced to a manageable quantity. An equivalence class defines a set of valid and invalid states for the input condition.

### 2.2.3. Techniques for reducing test cases [19]

A. Requirement based: the fundamental goal of test suite reduction is to satisfy all testing criteria using the fewest test cases. One such method is to use requirement optimization to build test cases based on the requirements.

B. Coverage based: the primary goal of the coverage-based reduction technique is to guarantee that as many paths as possible inside a particular program are exercised. Case-based reasoning (CBR) is used to accomplish this. CBR is divided into three categories: pivotal-based, auxiliary-based, and case-based.
- Case-based: problems are solved by searching the case base in memory for the most comparable problems.
- Auxiliary-based: although the deletion of an auxiliary-based case does not alter competence, it does have an impact on efficiency.
- Pivotal-based: if eliminated, a pivotal-based case directly affects the system's capability.

C. GA: a computational-intelligence method, in the style of evolutionary computation, for solving test case reduction problems. The following steps are taken:
- A fitness value is derived from test case runtime and coverage.
- The smaller suite contains only the tests that remain meaningful.
- This process is repeated until an optimized test suite is found.
- The findings demonstrated the generality and effectiveness of the recommended test suite reduction technique.
This strategy offers various advantages, one of which is that it reduces the number of test cases while also shortening the overall runtime. However, it can be ineffective when fault-detection capability and other coverage requirements must be preserved.

D. Clustering: to reduce the number of test scenarios in the test suite, improve efficiency, and increase performance, the data mining approach of clustering is utilized. Instead of using all of the test cases generated from independent paths, clustering makes it possible to check the program using just one of the clustered test cases.

E. Greedy algorithm: this popular code-based reduction technique is employed when creating test suites using model-based techniques. Test scenarios are selected based on their ability to satisfy the most unmet requirements, and the process is repeated over the entire test suite's test scenarios, resulting in a smaller test suite. The connection between test cases and testing specifications is the foundation of how this algorithm works. The advantage of the greedy algorithm is that it greatly lowers the total number of test instances; however, in the event of a tie, a random selection among test instances is made.

F.
Fuzzy logic: the use of fuzzy logic is a different method for test suite optimization. Given that it speeds up execution time and reduces the size of the regression test suite, this method is regarded as safe. Testing here is founded on an objective process that is remarkably similar to human judgment. Test suites are often optimized and examined for safe reduction using computational-intelligence-based techniques, which may operate on control flow graphs; these graphs are used to traverse tests in search of the best solutions. Since it is believed to be more secure than alternative methods of performing regression testing, this methodology is widely recommended.

G. Program slicing: a method for building a slice set and checking a program against a particular property. Three different slicing methods exist:
- Static slicing
- Dynamic slicing
- Relevant slicing

H. Hybrid algorithm: this algorithm combines efficient procedures such as genetic algorithm approximation and the greedy strategy to build superior Pareto fronts that can serve a variety of objectives. The test criteria are here treated as mathematically described objective functions. The greedy method is modified to be more economical in statement coverage and computational cost; execution time, code coverage, and fault coverage are also taken into account for fault detection optimization.

I. Apply the minimized test suite to the list of N programs selected initially, to study parameters such as errors, cost of testing, software quality, and customer acceptance.

J. The above methodology is applied to different software types, with at least 5 examples, as shown in Figure 3.
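The test-generation step of the proposed methodology (equivalence class partitioning followed by boundary value analysis, as described in 2.2.2) can be sketched for a simple integer input range. The range limits and function names are illustrative, not from the paper.

```python
def equivalence_partitions(lo, hi):
    """Partition an integer input range [lo, hi] into one valid and two
    invalid equivalence classes (below the minimum, above the maximum)."""
    return {
        "invalid_low": (None, lo - 1),
        "valid": (lo, hi),
        "invalid_high": (hi + 1, None),
    }

def boundary_values(lo, hi):
    """Classic boundary value analysis on a valid partition:
    min, min+1, a nominal mid value, max-1, max."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

# Partition first, then apply boundary value analysis to the valid class only,
# which is how the proposed algorithm orders the two techniques.
parts = equivalence_partitions(1, 100)
tests = boundary_values(*parts["valid"])
print(tests)  # [1, 2, 50, 99, 100]
```

Applying partitioning first means boundary analysis is run once per valid class rather than over the whole raw input domain, which is the source of the reduction the text claims.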
An efficient test suit reduction methodology for regression testing (Shailendra Gupta)

By understanding the variations between infrastructure as a service, platform as a service, and software as a service, you will be able to select the set of services best suited to your requirements with the aid of deployment plans and other information [23].

3.5. Business applications

A number of firms employ various types of business application software to automate their procedures, increase operational effectiveness, and gather important data about front- and back-office activities. Research by Adobe found that 25% of business owners who used automated technologies prioritized big data analysis, while 30% put time savings first. Employees can focus their energies on more difficult jobs by saving time on routine chores, which improves workflow. Organizations are able to concentrate on boosting their earnings and expanding thanks to the increased efficiency and access to data reports [24].

3.6. Artificial intelligence and machine learning

Machine learning (ML) software refers to any tool used for ML classifiers, unsupervised learning, self-iteration based on data, and artificial intelligence (AI). Much of the software used in business today includes ML in applications like email filtering and computer vision. There is also software for simulation, hiring, architecture, and accounting that is specifically made for ML. Certain ML toolkits can be specially designed to fit particular data sets and process requirements [25].

3.7. Data analytics

Data analytics (DA) analyzes data sets to find patterns and draw conclusions about the information they hold. Increasingly, data analytics requires the use of specialized hardware and software.
In the commercial sector, data analytics technology and methods are frequently utilized to assist organizations in making more informed business judgments. Researchers and scientists also employ analytics tools to confirm or disprove scientific models, theories, and hypotheses.

4. CONCLUSION

Test case minimization has been a focus area given the enormous time and cost involved in achieving full test case completion criteria. Our work proposes test case minimization for regression testing, with a methodology for testing soft computing, financial, and other emerging applications of industrial value. If subjected to minimization procedures, they can all be tested on time, within the designated cost, and at the required quality. The methodology we propose has been designed keeping in mind various revealing and promising software engineering tools: financial software, cloud applications, business applications, AI/ML applications, and data analytics software. No such earlier work has been found in the literature against which to perform a comparison at this stage, which indicates the uniqueness and originality of our proposed experimental method.

REFERENCES

**BIOGRAPHIES OF AUTHORS**

**Mr. Shailendra Gupta** was born in 1976 at Murena, M.P., India. He received his B.Sc. degree in Computer Science from PMB Gujrati Science College, Indore in 1997 and his MCA degree in Computer Science in 2001, and is pursuing his Ph.D. degree at Medi-Caps University, Indore. His areas are software engineering and software testing; his research includes extreme programming and software maintenance. He has published more than 20 research papers in reputed international journals and conferences. He is an Assistant Professor in the CA Department at Medi-Caps University, Indore, M.P., India, with 23 years of teaching experience at the UG and PG levels. He can be contacted at email: Shailendra.gupta@medicaps.ac.in.

**Dr.
Jitendra Choudhary** was born in 1984 at Dewas, M.P., India. He received his B.Sc. degree in Computer Science from Holkar Science College, Indore in 2003, his M.Sc. degree in Computer Science in 2005, and his M.Tech. degree in Computer Science (with Distinction) in 2010 from SCST, Devi Ahilya University, Indore. He received his Ph.D. degree from Devi Ahilya University, Indore in 2014. His areas are software engineering and software testing; his research includes extreme programming and software maintenance. He has published more than 20 research papers in reputed international journals and conferences. He received a Gold Medal (AIR-1) in the Software Engineering course run by IIT Kharagpur through Swayam-NPTEL. He is an Associate Professor and HOD, CS at Medi-Caps University, Indore, M.P., India, with 17 years of teaching experience at the UG and PG levels. He can be contacted at email: jitendra.scst@gmail.com.
Reifying configuration management for object-oriented software

Jean-Marc Jézéquel

Jean-Marc Jézéquel. Reifying configuration management for object-oriented software. 20th International Conference on Software Engineering (ICSE), Apr 1998, Kyoto, Japan. inria-00372744

HAL Id: inria-00372744
https://inria.hal.science/inria-00372744
Submitted on 2 Apr 2009

ABSTRACT

Using a solid Software Configuration Management (SCM) is mandatory to establish and maintain the integrity of the products of a software project throughout the project's software life cycle. Even with the help of sophisticated tools, handling the various dimensions of SCM can be a daunting (and costly) task for many projects. The contribution of this paper is to propose a method (based on the use of Creational Design Patterns) to simplify SCM by reifying the variants of an object-oriented software system into language-level objects; and to show that newly available compilation technology makes this proposal attractive with respect to performance (memory footprint and execution time) by inferring which classes are needed for a specific configuration and optimizing the generated code accordingly. We demonstrate this idea on an artificial case study intended to be representative of a properly designed OO software.
All the performance figures we get are obtained with freely available software, and, since the source code of our case study is also freely available, they are easily reproducible and checkable.

1 INTRODUCTION

Using a solid Software Configuration Management (SCM) [18, 31] is a basic requirement in the Software Engineering Institute (SEI) capability maturity model (CMM). There are however a number of different interpretations of the exact meaning of Software Configuration Management. In this paper, we restrict its scope to the management of software development projects with respect to the three dimensions identified in [9]:
- targeting environmental differences (e.g., multiple platforms)
- supporting multiple versions, and controlling the status of code
- multiple developers working on the same code at the same time

Following a terminology widely adopted in the Software Engineering community [20], variants of configuration items are different implementations that remain valid at a given instant in time, created to handle environmental differences (logical versioning). Revisions are the steps a configuration item goes through over time (historical versioning), whether to handle new features, fix bugs or to support permanent changes to the environment (e.g., operating system upgrades, if the old one is no longer supported). Variants and revisions provide a two-dimensional view into the repository, with variants incrementing along one axis as required and revisions incrementing through time on the other. Versions of configuration items are understood by the SCM community to be synonymous with either revisions or variants [32]. Therefore a version of a single configuration item denotes an entry in the two-dimensional view of the repository reached from an origin through some path of revisions and variants.
A third dimension is brought in when concurrent development activities are enabled (cooperative versioning): at a given point in time, concurrent activities may have a cooperative version of the same object [9]. Since many developers may be authorized to modify the same version at the same moment, each of them is in fact provided with a copy of the item, in much the same way as shared virtual memory pages can be updated using weak-consistency algorithms in distributed systems. Even with the help of sophisticated tools [19, 23], handling these three dimensions of SCM can be a daunting (and costly) task for many projects [1, 26]. The contribution of this paper is to propose a method to simplify SCM by reifying the variants of an object-oriented software system into language-level objects; and to show that newly available compilation technology makes this proposal attractive with respect to performance (memory footprint and execution time) by inferring which classes are needed for a specific configuration and optimizing the generated code accordingly. There have already been some attempts to mix the OO paradigm with the SCM problematic. Most of these attempts were trying to implement a classical SCM with the help of OO technology [4, 6, 11, 24, 27]. Our approach is quite orthogonal, because it consists in altering the (object-oriented) design in such a way that some aspects of SCM (the variability dimension) are vastly simplified. This paper is organized as follows. In Section 2 we describe the factors that lead to a growing number of software variants, and how this variability has traditionally been addressed by more and more complex techniques and tools. In Section 3 we introduce a case study and show how things can rapidly get out of hand. We then propose to reify the variability of software by eliminating the variant dimension from the three-dimensional view of the software baseline repository.
Creational Design Patterns can then be used to provide the necessary flexibility for describing and selecting relevant configurations within the object-oriented implementation language, thus benefiting from the better security implied by static typing checked by the compiler. SCM could then be implemented with much simpler (less costly) tools, because only revisions would need to be dealt with. Alternatively, it could make full featured tools easier to use, thus attacking one of the perceived drawbacks of off-the-shelf SCM tools, i.e. their difficult learning curve [1, 8]. In Section 4 we discuss how new compilation technology, based on type inference, makes this proposal attractive by allowing the generation of code specialized for each variant of the software. We present performance results (memory footprint and execution time) of this approach on various systems. In Section 5 we discuss the interests, limitations and drawbacks of our approach, as well as related works. We conclude on the perspectives opened by our approach.

2 SOFTWARE CONFIGURATION MANAGEMENT

2.1 Variants in Software Systems

The reasons why a given software design may have different implementations, all valid at a given instant in time, are manifold. But the basic idea is to be able to handle environmental differences. We can classify these environmental differences in the following categories:
- Hardware level: most software systems must be able to drive various variants of hardware devices, e.g., multimedia or network interface boards.
- Heterogeneous distributed systems: more and more applications (singularly in the real time domain) are implemented on distributed systems made of more than one processor type, and have thus to handle such things as task allocation and functionality distribution, and eventually differences in binary formats.
- Specificities in the target operating systems: some system calls have syntax and/or semantics peculiar to a specific OS.
Even more complicated are the cases when seemingly close abstractions (e.g., I/O handles in Win32 and file descriptors in Unix) must in fact be handled with considerable differences in programming style (Win32 pro-active I/O vs. Unix reactive I/O). Note that functions whose names exist in only a subset of the supported systems cannot be linked in with a general purpose version configurable at run time.
- Compiler differences, or poorly standardized languages getting different interpretations in different compilers.
- Range of products: often, for marketing reasons, it is useful to be able to propose a range of products instead of a single suit-them-all product. For instance, one approach is to make various levels of functionality available: Demo version, Shareware, Retail, Premium, etc. Also in this category are the variants developed specifically for an important client.
- User preferences for Graphical User Interface (GUI): look-and-feel, etc.
- Internationalization: dealing with various languages and ways of handling country specificities such as date and time formats, money representation etc.

Managing all the combinations of these variability factors can soon become a nightmare. Consider the case of the software for a medium sized switch in the telecommunication domain, like the Alcatel E-10. Its source code size is in the order of a million lines. Due to the many versions of the switch tailored to fit each country's specificities, its configuration software also reaches the million-line range.

2.2 Traditional Solutions

One of the most primitive "solutions" to these problems was to patch the executable program at installation time to take into account some variants. One of the most striking examples was the word processor Wordstar under the CP/M operating system, cited in [13].
To cope with the widely different characteristics of printers and CRT terminals on CP/M systems, in addition to accommodating individual user preferences, this program came with a configuration tool and scripts. Running the configuration tool modified configuration data in the executable image of the word processing program. Various scripts provided consistent sets of answers, corresponding to common configurations, to questions asked by the installation tool. Device drivers are one example of configurability common to almost all operating systems. The actual binding can take place in source code, at link time, at boot time, or on demand at run time (with kernel loadable modules as in the Win32, Linux or Solaris OS). A widely used technique for making small real time programs configurable is the static configuration table. Data structures are provided for things that might differ in different installations of the program, and the installer is responsible for providing appropriately initialized instances for a specific installation. Sometimes configuration records are not directly prepared as initialized records in the programming language of the system, but rather are produced as database entries or expressed as sentences in a grammar, with some tool provided to generate from these the programming language records the system will actually use. This can be particularly useful when several programs need to be implemented for the same configurations. Static configuration tables are not entirely satisfactory. Rarely is there provision for error checking; indeed, because they are purely declarative with no language-defined semantics, constraint verification and consistency checking can be difficult, let alone error checking. There is an implicit assumption of an associated library, where variant units of code are kept, yet there is no assistance in managing or manipulating that library. 
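The static configuration table technique, and its error-checking weakness noted above, can be illustrated with a small sketch. The table keys and values here are invented for illustration, not taken from any real system.

```python
# A static configuration table: data structures for installation-specific
# values, filled in by the installer for a specific installation.
CONFIG_TABLE = {
    "printer_driver": "generic_text",  # illustrative value
    "terminal_rows": 24,
    "terminal_cols": 80,
}

def get_config(key, default=None):
    """Look up an installation parameter.

    Because the table is purely declarative, a mistyped key is not an
    error: it silently yields the default, illustrating the lack of
    constraint verification and consistency checking criticized above.
    """
    return CONFIG_TABLE.get(key, default)

print(get_config("terminal_rows"))  # 24
print(get_config("teminal_rows"))   # None: the typo goes undetected
```

A language-level reification of the configuration (the direction this paper takes) would turn such a typo into a compile-time name error instead.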
For larger systems, one of the most popular approaches consists in using conditional compilation (or assembly), implemented with e.g., a pre-processor. C programmers are familiar with the `cpp` tool, actually invoked as a first pass of the C compiler, that allows such conditional code to be written. Despite the help of sophisticated tools (such as GNU autoconf), this kind of code can rapidly become difficult to maintain [26]. For example, to add support for a new OS, one needs to review all the already written code looking for relevant `#ifdef` parts.

### 2.3 Using SCM Tools

Traditionally, SCM is implemented with check-in/check-out control of sources (and sometimes binaries) and the ability to perform builds (or compiles) of the end products. Other functions, such as Process Management, i.e. the control of the software development activities, will not be considered here. Modern SCM tools have evolved from academic prototypes to full strength industrial products. Most of them now keep track of all the changes to files in secure, distributed repositories. They also support parallel development by enabling easy branching and merging. They provide version control of not only source code, but also binaries, executables, documentation, test suites, libraries, etc. Examples of such tools are Adele [5] or ClearCase [3]. For instance, ClearCase provides each developer with multiple consistent, flexible, and reproducible workspaces. It uses Views (similar to the views concept in databases) to present the appropriate versions of each file and directory for the specific task at hand. Views are defined by configuration specifications consisting of a few general rules. Rules may be as simple as "the versions that were used in the last release" or can be more complex when particular sets of bug fixes and features need to be combined. Views are dynamic: they are continually updated by re-evaluating the rules that define them.
Newly created versions can thus be incorporated into a view automatically and instantly. Views allow team members to strike a balance between shared work and isolation from destabilizing changes. The main drawbacks of these sophisticated tools are that they are very costly to use and have a steep learning curve. Furthermore, even when these two problems are overcome, it is a matter of fact that their underlying 3-dimensional model of the repository does not provide an easy framework to mentally handle the complexity of large software developments [1, 8]. For all these reasons, their use is still not as pervasive as it could be.

### 3 CASE STUDY

#### 3.1 The Mercure Software

To present the interest of a contribution to the software engineering field, people tend to rely on a real software case study. However this approach has several drawbacks. The usual proprietary nature of the studied system makes it impossible for the author to give free access to all the source code and its compilation/execution environment. The community thus cannot check the validity of the study. Further, it is hard to reproduce the results since in a typical article, one lacks space to make all the context available to the reader. This lack of reproducibility defeats the scientific method, and the results are often merely empirical. So, instead of presenting the actual application that gave rise to the ideas described in this paper [15, 16], we build a model (called Mercure) of this kind of software, with all of its context (including full source code and reference to a freely available compiler) made available for fetching and checking by the community (see http://www.irisa.fr/pampa/EPEE/SCM). It is basically an over-simplification of this kind of software, where only configuration management related issues have been kept.
The model is meant to be representative of the SCM issues explored in this paper, while being built in such a way that meaningful performance results can be obtained. The reader does not have to take us on faith: she can check the relevance of the model to her own concerns to decide whether our conclusions apply to her specific case. Mercure is a model of a communication software sending, receiving and relaying "messages" from a set of network interfaces connected to the distributed memory parallel computer (i.e., a set of loosely coupled CPUs) on which it runs. Mercure must handle the following variability factors (as defined in 2.1):

- **Hardware level**: Mercure must support a wide range of network interface boards (e.g., for ATM, various Ethernet, FDDI, ISDN, X25, etc.) from various manufacturers. Let's call $V_i$ the number of supported boards. Since new hardware continually pops up, it must also be easy to add support for it in future releases of Mercure.
- **Heterogeneous distributed systems**: Mercure is to be run on such a system, thus provision must be made to deal with heterogeneous code generation and task distribution: some processors are specialized for relaying messages (switching), others for computing routes, others for network management (billing, accounting, configuring, etc.), and still others for dealing with persistent databases. Let's call $V_p$ the considered number of specialized processors.
- **Range of products**: various levels ($V_n$) of functionality must be provided in the domain of network management.
- **User preferences for GUIs**: various ($V_g$) look-and-feels must be available.
- **Internationalization**: support for $V_l$ languages must be available.
Considering that a given variant of the Mercure software might be configured with support for any number of the $V_i$ network interfaces and $V_l$ languages, and one of the $V_p$ kinds of processors, one of the $V_n$ levels of network management and one of the $V_g$ GUIs, the total number of Mercure variants is:

$$V = V_p \times V_n \times V_g \times 2^{V_i+V_l-2}$$

which, for $V_i=16$, $V_p=4$, $V_n=8$, $V_g=5$, $V_l=24$, gives several tens of trillions of possible variants (43,980,465,111,040 to be precise).

Figure 1: Class Diagram Modeling the Mercure Software in UML

### 3.2 Object Oriented Modeling of Variants

Using an object-oriented analysis and design approach, it is natural to model the commonalities between the variants of Mercure in an abstract way, and to express the differences in concrete subclasses. Consider for example the case of the network interface boards. Whatever the actual interface, we must be able to poll it for incoming messages, to read them into memory buffers, to send outgoing messages, and to set various configuration parameters. So this abstract interface, valid for all kinds of network interface boards, could be expressed as an abstract class called `NetDriver`. The idea underlying this kind of object-oriented design is that a method (such as `read_msg` in the class `NetDriver` above) has an abstractly defined behavior (e.g., read an incoming message from the lower-level network interface and store it in a buffer) and several differing concrete implementations, defined in proper subclasses (e.g., `NetDriver1`, `NetDriver2` ... `NetDriverN`). This way, the method can be used in a piece of code independently of the actual type of its receiver, that is, independently of the configuration (e.g., on which kind of interface board we actually read a message).
Dealing with multiple variants is thus moved from the implementation realm (where it is usually handled by means of conditional compilation and complex CM tools) to the problem domain (analysis and design realm), meaning that it can be fully handled within the semantics of the (OO) implementation language. This way, it can be subject to both compiler verifications and semantics-based safe optimizations. In the past, indeed, handling these kinds of issues in an object-oriented way had a major drawback for many applications: performance. Since the choice of the proper method to call would have to be delayed until run time, we had to pay the overhead of this dynamic binding. And this overhead could be prohibitive for some real-time or performance-driven applications, e.g., with Smalltalk, where the inheritance hierarchy had to be searched, or even with C++, where the handling of dynamic binding through a `vtable` used to provoke cache misses. Fortunately, object-oriented compiler technology has made tremendous progress in the last few years, as explained in the next section.

Figure 1 presents a class diagram of the Mercure software using the UML (Unified Modeling Language) object-model notation [28]. A Mercure system is an instance of the class `MERCURE`, aggregating:

- a `GUI` that encapsulates the user preference variability factor. A `GUI` itself has a collection of supported languages and, among them, the currently selected language,
- a collection of `MANAGERS` that represent the range of functionalities available,
- a collection of `NETDRIVERS` that encapsulate the network interfaces of this instance of Mercure,
- an `ENGINE` that encapsulates the actual work that Mercure has to do with its `NETDRIVERS` on a particular processor of the target distributed system.
### 3.3 Applying Creational Design Patterns

With this design framework, the actual configuration management can be programmed within the target language: it boils down to creating only the class instances relevant to a given configuration. However, some care has to be taken when programming the creation of these objects to ensure that the design is flexible enough. A good approach is to use the Creational Patterns proposed in [12]. In our simple case, we use an Abstract Factory (called `MERCURE_FACTORY`) to define an interface for creating Mercure variants. The class `MERCURE_FACTORY` features one Factory Method (encapsulating the procedure for creating an object) for each of our 5 variability factors. The Factory Methods are parameterized to let them create various kinds of products (i.e., variants of a type), depending on the dynamic Mercure configuration selected at runtime. These Factory Methods are abstractly defined in the class `MERCURE_FACTORY`, and given concrete implementations in its subclasses, called concrete factories. A concrete factory starts by creating a `MERCURE` instance, which calls back the concrete factory to configure its components (see Figure 2). Building an actual variant of the Mercure software then consists in implementing the relevant concrete factory. By restricting at compile time (that is, in the source code of a concrete factory) the range of products that a Factory Method can dynamically create, we can choose to build specialized versions of the general-purpose Mercure software. The selection of a given concrete Mercure factory as the application entry point allows the designer to specify the Mercure variant she wants. Since this is done at compile time, it should be possible to generate executable code specialized towards the selected Mercure variant. In the next section, we show how this can be done automatically with current compiler technology.
### 4 COMPILER TECHNOLOGY AND PERFORMANCE RESULTS

#### 4.1 Principle of Type Inference and Code Specialization

Good object-oriented programming relies on dynamic binding for structuring a program's flow of control (OO programming has even been nicknamed "case-less programming"). Most of the time, a routine (a method) applies to a given object, called the target or receiver. Dynamic binding allows the choice of the actual version of the routine to be delayed until run time: the exact type (called the dynamic type) of the receiver need not be known at compile time. Whenever more than one version of a routine might be applicable, it ensures that the one most directly adapted to the target object is selected. In statically typed languages (e.g., C++, Eiffel, Java, Ada95), a receiver's type must be declared beforehand (this is called the receiver's static type). Then the receiver's dynamic type must be a subtype of its static type. In this context, the main goal of the compilation techniques based on type inference consists in statically computing the set of types a receiver may assume at a given point in a program. In the most favorable case, this set is a singleton and thus the routine can be statically bound, and even in-lined in the caller context. In less favorable cases, the set may contain several types. However, the compiler is still able to compute the reduced set of routines that are potentially concerned, and generate specialized code accordingly. This can be implemented as an if-then-else block or a switch on the possible dynamic types of the receiver (corresponding to the C++ RTTI) to select the relevant procedure to call. In either case, the cost of the (conceptual) dynamic dispatch can be mostly optimized out (and the cache miss implied by dynamic binding is no longer a fatality). This idea is implemented for example in SmallEiffel [7], a free Eiffel compiler distributed under the terms of the GNU General Public License as published by the Free Software Foundation.
So we have implemented the Mercure software in Eiffel, and used the SmallEiffel compiler to take a number of measurements. Eiffel [21] is a pure OO language featuring multiple inheritance, static typing and dynamic binding, genericity, garbage collection, a disciplined exception mechanism, and an integrated use of assertions to help specify software correctness properties in the context of design by contract. However, our approach is not really dependent on Eiffel and could be applied to any class-based language without dynamic class creation, e.g., C++, Ada95 or Java.

#### 4.2 Experimental Conditions

We consider three versions of Mercure to compare the effect of the specialization of the code generation.

**FullMercure** The general-purpose version of the program, including all the configurable parts. This means that any one of the trillions of combinations can be dynamically chosen at runtime: all calls to the variant methods must be dynamically bound.

**CustomMercure** This restricted version of the program only includes support for 8 different network drivers, 5 different languages and 5 different processor types. Only one network manager and one GUI are available, thus allowing some method calls to be statically bound.

**MiniMercure** A minimal version of the software, with only one of each configurable part available: support for Engine1, GUI2, Language3, Manager4 and NetDriver5 only. This limited support would theoretically allow every method to be statically bound, and thus the resulting code could have the same structure as with, e.g., the `#ifdef` based pre-processor method.

These three variants use exactly the same software baseline. The only difference is the concrete Mercure factory selected as the entry point. (SmallEiffel can be downloaded from ftp://ftp.loria.fr/pub/loria/genielog/SmallEiffel.)
Table 1: Compile time statistics

<table> <thead> <tr> <th>Version</th> <th>Eiffel LOC (config.)</th> <th>Eiffel LOC (user)</th> <th>Eiffel LOC (total)</th> <th>Type inference Score</th> <th>C LOC (generated)</th> </tr> </thead> <tbody> <tr> <td>FullMercure</td> <td>96</td> <td>2903</td> <td>10056</td> <td>93.29%</td> <td>7135</td> </tr> <tr> <td>CustomMercure</td> <td>60</td> <td>1421</td> <td>8574</td> <td>96.79%</td> <td>3839</td> </tr> <tr> <td>MiniMercure</td> <td>36</td> <td>713</td> <td>7866</td> <td>99.29%</td> <td>2639</td> </tr> <tr> <td>HelloWorld</td> <td>-</td> <td>6</td> <td>4478</td> <td>100.00%</td> <td>189</td> </tr> </tbody> </table>

#### 4.3 Compile Time Statistics

In this section, we compare compile-time statistics for the various variants of the Mercure software with respect to the minimal "Hello, world!" program (see Table 1). We display the number of Eiffel lines of code (LOC) describing the configuration (i.e., the number of LOC of the relevant Mercure concrete factory), the number of LOC written by the programmer, as well as the total number of lines in all the classes needed by the application (including the libraries). Then comes the type inference score, i.e., the ratio of dynamic calls that could be replaced by direct calls at compile time. It ranges from 93% to more than 99%. This means that the SmallEiffel compiler (version -0.87) has been able to early-bind most of the (conceptually) dynamic bindings in the MiniMercure version. Finally, the size of the generated C code is shown. Note that it includes the SmallEiffel runtime system (whose size may be approximated by that of the "Hello, world!" program). The small size of the code generated for the MiniMercure version illustrates the ability of the SmallEiffel compiler to take advantage of its knowledge of the living types to efficiently specialize the generated C code: only code relevant to the specific variant of the Mercure software is actually generated.
#### 4.4 Memory Footprint and Runtime Performances

All versions have exactly the same dynamic behavior, because the dynamic configuration we choose for the FullMercure and CustomMercure variants is the one selected at compile time in MiniMercure (i.e., we give the configuration as a set of command line parameters: `-run 10000 -engine 1 -gui 2 -lang 3 -manager 4 -netdriver '5 5 5 5 5 5 5'`). Their output is thus exactly the same. The measurements presented in Tables 2 and 3 have been made on a PC486 system running Linux 1.2.13 (and GCC 2.7.0, optimization level -O3) and on a Sparc running Solaris 5.0 (and GCC 2.7.2.1, optimization level -O3). Note that because all three variants have the same dynamic behavior (they do exactly the same thing), their use of dynamic memory is also identical. Although the system is designed for a fully dynamic configuration, the compiler is able to use type inference to detect what is in fact configured statically in the specialized versions of the Mercure factories, and to generate code nearly as compact and efficient as if it had been written statically from the beginning. In the MiniMercure case, the generated code has the same structure as the one that would have been obtained with, e.g., the `#ifdef` based pre-processor method. The performance differences between MiniMercure and FullMercure represent the maximum price that the designer would have to pay for trading time and space performance for dynamic configuration capabilities. But what is much more interesting is that, with exactly the same software baseline, the designer can easily choose his own trade-off between these two properties: he just has to select the relevant concrete factory.

### 5 DISCUSSION AND RELATED WORK

#### 5.1 Discussion

Our approach is not the ultimate solution to all SCM problems. It has a number of drawbacks and advantages:

- It forces some SCM issues (variant management) to be dealt with during the design phase of the software.
But in our opinion it belongs there, because it makes the notion of a product family much more concrete. There is one concrete factory for each variant of the product, and no more need to understand the variations between variants in terms of "diff" listings.

- A compiler is able to do type inference only if it has access to the full code. It is clear that in our approach, we cannot deal efficiently with libraries of classes compiled in .o or .a forms. However, the .o and .a Unix formats are in any case not very usable in an OO context, because they lack type information. They were used in the past to solve a number of problems that are now dealt with at another level:
  - enforcing modularity for procedural programs: this is now superseded by OO concepts.
  - speed of compilation: while this still holds for small programs, it is well known that large C++ compilations actually spend most of their time in link editing. So having .o or .a files no longer reduces the overall edit/compile/link/test time. As for medium-sized programs, it only takes 10 seconds on a Pentium Pro 200 for SmallEiffel to compile itself (50 kLOC).
  - source protection: having access to the full code does not mean full source code, because the source can be pre-compiled into a "distributable" format, e.g., the Java .class format or the Eiffel "pre-compiled" formats from some vendors. Alternatively, sophisticated encryption technology could be used to protect the source code.
- Our approach does not remove the need for classical configuration management tools. We still have to deal with revisions (new features, bug corrections, etc.) and possibly concurrent development activities. However, concurrent development activities are minimized by the fact that a variant part is typically small and located in its own file: someone responsible for a product variant would not have to interfere with other people's modifications, and conversely.
Thus, in our experience, a simple tool such as RCS or CVS (equipped with automatic symbolic naming of versions, see below) should be enough for many sites.

- Programming the concrete factories to specify the configuration is straightforward, but quite tedious. They could easily be generated by, e.g., a simple Tcl/Tk shell. This shell would also encapsulate the call to the compiler and thus would be able to retrieve the names of all the files used in the compilation. Using this information, a snapshot of the full configuration (including the compiler, linker, etc.) could be assigned a symbolic version name and stored in a repository (e.g., using RCS).
- Doing all the configuration in the target language eliminates the need to learn and use yet another complex language dedicated to configuration management (e.g., the various existing Module Interconnection Languages, as in Adele [5], Proteus [10], etc.).

#### 5.2 Related work

This work can be seen as an application of ideas that have been circulating in the "Partial Evaluation" community for years. Actually, it can be seen as taking advantage of the fact that the types of the configurable parts have bounded static variation (i.e., the sets of possible types are known at compile time). Thus the Partial Evaluation community trick known as The Trick (see [17]) can be applied to specialize the general program at compile time. Because this partial evaluation only deals with the computation of dynamic type sets, it is also clearly related to the domain of type inference. Ole Agesen's recent PhD thesis [2] contains a complete survey of related work. Reviewed systems range from purely theoretical ones [33] to systems in regular use by a large community [22], via partially implemented systems [29, 30] and systems implemented on small languages [14, 25]. Related work from the SCM point of view has already been extensively discussed throughout this paper.
Here we restrict ourselves to approaches trying to leverage object-oriented or object-based technologies. Our idea of designing the application in such a way that SCM is simplified is not new [6, 11]. But previous works needed a dedicated tool to handle the actual SCM. Since in our approach the SCM is done within the OO programming language, there is no need for such an ad hoc tool: the compiler itself handles all the work.

### 6 CONCLUSION

Our contribution in this paper was to propose a method to simplify software configuration management by reifying the variants of an object-oriented software system into language-level objects, and to show that newly available compilation technology makes this proposal attractive with respect to performance (memory footprint and execution time) by inferring which classes are needed for a specific configuration and optimizing the generated code accordingly. This approach opens the possibility of leveraging the good modeling capabilities of OO languages to deal with fully dynamic software configuration, while being able to produce space- and time-efficient executables when the program contains enough static configuration information. We have illustrated this idea with a small case study representative of a properly designed OO software system. All the performance figures we report are obtained with freely available software, and, since the source code of our case study is also freely available, they are easily reproducible and checkable. In the most favorable cases, the SmallEiffel compiler is able to infer the type of the receiver in up to 100% of the cases, and thus to optimize out the dynamic binding. We believe that this approach can become mainstream when commercial compilers incorporate these kinds of technologies. From advertisement flyers we have seen, this seems to be work in progress for several compilers for C++ and Java.
### ACKNOWLEDGEMENTS

We would like to thank Dominique Colnet, the author of the SmallEiffel compiler, for the many insights he gave us on the inner workings of his compiler. We also thank Chantal Brohier (Newbridge Networks), who gave us feedback on this paper from an industry point of view.

### REFERENCES

[12] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison Wesley, 1994.
Automated validation of distributed software using the IF environment

Marius Bozga, Susanne Graf, and Laurent Mounier
VERIMAG, Centre Equation, 2 avenue de Vignate, F-38610 Gières
email: {bozga,graf,mounier}@imag.fr, phone: (33) 4 76 63 48 52
HAL Id: hal-00374652 (https://hal.archives-ouvertes.fr/hal-00374652), submitted on 9 Apr 2009

Abstract. This paper summarizes our experience with IF, an open validation environment for distributed software systems. Indeed, faced with the increasing complexity of such systems, none of the existing tools can cover by itself the whole validation process. The IF environment was built upon an expressive intermediate language and allows several validation tools to be connected, providing most of the advanced techniques currently available. The results obtained on several large case studies, including telecommunication protocols and embedded software systems, confirm the practical interest of this approach.

1 Introduction

Automated validation of distributed software is a desirable objective to improve the industrial production of correct systems like communication protocols or embedded systems.
In spite of the numerous ongoing research efforts and tool developments carried out in this area, this activity remains difficult in practice: on the one hand, the initial software description is usually provided in a high-level formalism (either a programming language or a formal design notation like LOTOS [23], SDL [25] or UML [30]), and, on the other hand, a wide range of tools is necessary to cover the whole development process, operating at different levels of program description. Even if several interesting tools are currently available, either commercial or academic ones, none of them can by itself fulfill all the practical needs. Commercial tools (such as ObjectGeode [32], Tau [1], StateMate [22], Rational Rose [31], etc.) provide several development facilities, like editing, code generation and testing. However, they are usually restricted to basic verification techniques (exhaustive simulation, deadlock detection, etc.) and are "closed" in the sense that there are only limited possibilities to interface them with others. On the other hand, there exist numerous academic tools (like SMV [28], HyTech [19], Kronos [34], UPPAAL [27], Spin [20], InVeSt [2], etc.) offering a broad spectrum of quite efficient verification facilities (symbolic verification, on-the-fly verification, abstraction techniques, etc.), but often supporting only low-level input languages. This may restrict their use on an industrial scale. This situation motivated the development of IF, an intermediate representation for distributed software, together with an open validation environment. This environment fulfills several requirements. First of all, it is able to support different validation techniques, from interactive simulation to automatic property checking, together with test case and executable code generation. Indeed, all these functionalities cannot be embodied in a single tool, and only tool integration facilities can provide all of them.
For the sake of efficiency, this environment supports several levels of program representation. For instance, it is well known that model-checking verification of real-life case studies usually needs to combine different optimization techniques to overcome the state explosion problem. In particular, some of these techniques rely on a syntactic-level representation (like static analysis and computation of abstractions) whereas other techniques operate on the underlying semantic level. Another important feature is to keep this environment open and evolvable. Therefore, tool connections are performed by sharing either input/output formats, or libraries of components. For this purpose, several well-defined application programming interfaces (APIs) are provided. The IF validation environment is quite similar in its philosophy to the one proposed in the Bandera project [12], which also relies on a dedicated intermediate format to translate (abstract) JAVA source code into the input language of existing model-checkers (like Spin or SMV). However, with IF we currently mainly address the validation of distributed software written in design formalisms (like SDL or UML) which are widely used in the application areas we consider (communication protocols and embedded systems).

2 Architecture

The IF validation environment relies on three levels of program representation: the specification level, the IF intermediate level, and the LTS semantic model level. Figure 1 describes the overall architecture and the connections between the toolbox components.

**Fig. 1.** An open validation environment for IF

The **specification level** is the initial program description, expressed for instance using an existing language. To be processed, this description is (automatically) translated into its IF representation. The main input specification formalism is SDL, but connections with other languages such as UML, LOTOS and PROMELA are envisaged.
The **intermediate level** corresponds to the IF representation [8]. In IF, a system is expressed as a set of parallel processes communicating either asynchronously through a set of buffers, or synchronously through a set of gates. Processes are based on timed automata with deadlines [3], extended with discrete variables. Process transitions are guarded commands consisting of synchronous/asynchronous inputs and outputs, variable assignments, and clock settings. Buffers have various queuing policies (fifo, stack, bag, etc.), can be bounded or unbounded, and reliable or lossy. A well-defined API allows one to consult and modify the abstract tree of the IF representation. Since all the variables, clocks, buffers and the communication structure are still explicit, high-level transformations based on static analysis (such as live variables computation) or program abstraction can be applied. Moreover, this API is also well suited to implement translators from IF to other specification formalisms.

The **semantic model level** gives access to the LTS representing the behaviour of the IF program. Depending on the application considered, three kinds of API are proposed:

- The implicit enumerative representation consists in a set of C functions and data structures allowing the successors of a given state to be computed on demand (following the OPEN/CÆSAR [16] philosophy). This piece of C code is generated by the if2c compiler, and it can be linked with a "generic" exploration program performing on-the-fly analysis.
- In the symbolic representation, sets of states and transitions of the LTS are expressed by their characteristic predicates over a set of finite variables. These predicates are implemented using binary decision diagrams (BDDs). Existing applications based on this API are symbolic model-checking and minimal model generation.
- Finally, the explicit enumerative representation simply consists in an LTS file with an associated access library.
Although such an explicit representation is not suitable for handling large systems globally, it is still useful in practice for minimizing some of their abstractions with respect to bisimulation-based relations. 3 Components description We briefly present here the main components of the environment, together with some external tools for which a strong connection exists. The specification level components. ObjectGeode [32] is a commercial toolset developed by Telelogic supporting SDL, MSC and OMT. In particular, this toolset provides an API to access the abstract tree generated from an SDL specification. We have used this API to implement the SDL2IF translator, which generates operationally equivalent IF specifications from SDL ones. Given the static nature of the current version of IF, this translation does not yet cover the dynamic features of SDL (e.g., process instance creation). The intermediate level components. IF2IF [6] implements several algorithms based on static analysis to transform an IF specification. A first transformation concerns dead variable resetting (a variable is dead at some control point if its value is not used before being redefined). This optimisation can also be applied to buffer contents (a message parameter is dead if its value is not used when the message is consumed). Although very simple, such optimisation is particularly effective for state space generation (reductions by up to a factor of 100 were frequently observed), while preserving the exact behaviour of the original specification. A second transformation is based on the slicing technique [33]. It allows one to automatically abstract a given specification by eliminating parts that are irrelevant w.r.t. a given property or test purpose [7]. IF2PML [4] is a tool developed at Eindhoven TU to translate IF specifications into Promela. The semantic model level components. CADP [14] is a toolset for the verification of LOTOS specifications. 
It is developed by the VASY team of INRIA Rhône-Alpes and Verimag. Two of its model-checkers are connected to the IF environment: Aldebaran (bisimulation-based) and Evaluator (alternation-free μ-calculus). For both tools, diagnostic sequences are computed at the LTS level and can be translated back into MSC to be observed at the specification level. Kronos [34] is a model-checker for symbolic verification of TCTL formulae on communicating timed automata. The current connection with the IF environment is as follows: control states and discrete variables are expressed using the implicit enumerative representation, whereas clocks are expressed using a symbolic representation (particular polyhedra). TGV [15] is a test sequence generator for conformance testing of distributed systems (joint work between Verimag and the Pampa project of Irisa). Test cases are computed during the exploration of the model and are selected by means of test purposes. 4 Case studies The IF environment was used in several case studies, including telecommunication protocols as well as embedded software. The most relevant ones, from the complexity point of view, and the results obtained are summarized below. 4.1 SSCOP Protocol The Service Specific Connection Oriented Protocol (SSCOP) is standardized under reference ITU-T Q.2110 [24]. Originally, it was conceived to reliably transfer data between two high-bandwidth network entities. Although its design makes it ready to handle significant volumes of data, its use is currently confined to one of the sublayers of the AAL (ATM Adaptation Layer). The services it provides are connection control (establishment, flow control, release), data transfer, and error detection. The SSCOP standardization document contains an SDL description of the protocol. This description has been coded by FRANCE TELECOM R&D using ObjectGeode. 
It consists of approximately 2000 lines of SDL textual code which describe the protocol as one single process with 10 control states, 134 variables, and 4 timers. The description was centered on signaling, and some simplifications were made according to the SSCOP implementations available at FRANCE TELECOM R&D. Our main goals were the formal validation of the specification and, in addition, automatic test-case generation starting from it. Clearly, the size and complexity of this specification made any brute-force validation approach inapplicable. In particular, the data part was very large, and each state of the underlying model could not be stored in less than 2 kB. Therefore only a small part of the state space could be explored from this initial specification, not sufficient to verify interesting properties. Consequently, we adopted a more incremental verification strategy. A first step was to apply a very rough abstraction by (automatically) eliminating all the variables in the specification. Thus, it was possible to compare this very abstract specification with the one supplied by the standard to model the interactions between adjacent layers of SSCOP, in order to check whether the abstract specification provides at least the expected behaviour. This comparison was performed using Aldebaran, with respect to the so-called safety preorder [5]. Some subtle errors, such as omissions of timer settings, were found using this method. After this debugging phase, the second step was to "prepare" the initial SDL specification for a more accurate state space analysis. It consisted of basic static analysis techniques like dead code elimination and live variable detection using IF2IF. The benefits were spectacular on this example: in particular, the amount of memory required to store a model state fell to 0.2 kB. Finally, these optimisations made possible the use of exhaustive simulation techniques. 
More precisely, we considered a system consisting of a pair of entities communicating through a bounded fifo channel, and we concentrated our validation effort on a set of representative distinct scenarios (connection establishment, disconnection, data transfer, ...). Using specific slicing criteria, it was therefore possible to (automatically) simplify the specification even further, depending on the property under verification or the test purpose. The underlying models obtained contained about 20,000 states, and errors were found in the data transfer phase of the specification. The complete experiment is reported in [9]. 4.2 Mascara Protocol The Mascara (Mobile Access Scheme based on Contention And Reservation for ATM) protocol is a special medium access control protocol designed for wireless ATM communication and developed by the WAND (Wireless ATM Network Demonstrator) consortium [13]. A wireless ATM network transparently extends services to mobile terminals (MT) via a number of geographically distributed access points (AP). The task of the Mascara protocol is to mediate between APs and MTs via a wireless link. The protocol has a layered structure, of which we consider only the highest layer, the Mascara control layer. The overall description of the Mascara protocol which we received is 300 pages of SDL textual code. We concentrated on the verification of the Mascara control layer, for which the SDL description could be made reasonably complete. Here we briefly present the verification of the dynamic control. For complete information, we refer the reader to [17], which reports the complete experiment on the dynamic part. In addition, another verification experiment has been carried out on the static control [4]. Verification should be carried out under a general environment with realistic restrictions. 
As we had not obtained information on the Mascara upper layer, we initially considered an open system with an unconstrained upper layer, which would allow us to obtain the most general verification results. But communication via unbounded channels leads to infinitely growing channel contents, and thus to an infinite-state model, in case the environment sends requests too often. This is typically the case in reactive systems, which are always ready to treat requests from the environment. The approach we have chosen to provide a more restricted, but still realistic environment consists in limiting the number of requests it can make per time unit. We assume that within one time unit, no more than \( N \) requests can be sent by the environment. The system then never has to deal with more than \( N \) requests simultaneously, which leads, in the MASCARA protocol, to bounded channel contents. The success of the method depends on the use of a realistic bound; we use \( N=4 \). Unfortunately, even with such a restricted environment, it was impossible to generate the state graph of the global system as a whole. However, we applied two different types of compositional verification: the first one is based on property decomposition [26], and the second one is based on compositional generation of a state graph minimized with respect to a behavioural equivalence [18]. In particular, using in addition both live variable analysis and partial order reduction for the generation of the subsystems, we were able to compositionally generate a reduced model of the global system. Table 1 gives an overview of a subset of the models we have generated using different reduction techniques and allows their sizes and generation times to be compared. Finally, several properties, ranging from generic ones such as deadlocks and livelocks to more specific ones such as association establishment, connection, and disconnection, were verified on the generated models. 
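The environment restriction above (at most \( N \) requests per time unit) can be sketched as an admissibility check on timed request traces. The trace format `(timestamp, request)` and the function below are illustrative assumptions for this sketch, not part of the actual tools.

```python
# Illustrative admissibility check for the restricted environment:
# at most N requests may be issued within any single time unit.
N = 4

def admissible(trace):
    """True iff no discrete time unit carries more than N requests."""
    per_unit = {}
    for time, _req in trace:
        unit = int(time)                       # discrete time unit
        per_unit[unit] = per_unit.get(unit, 0) + 1
    return all(count <= N for count in per_unit.values())

ok_trace  = [(0.1, 'assoc'), (0.5, 'connect'), (1.2, 'data'), (1.9, 'data')]
bad_trace = [(0.0, 'req')] * 5                 # five requests in unit 0
```

Constraining the environment this way is what bounds the channel contents and makes the state space finite.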
<table>
<thead>
<tr>
<th>generation strategy</th>
<th>AP model size</th>
<th>AP model time</th>
<th>MT model size</th>
<th>MT model time</th>
<th>AP + MT model size</th>
</tr>
</thead>
<tbody>
<tr>
<td>live reduction</td>
<td>7 308 400 st.</td>
<td>207.36 s.</td>
<td>4 388 765 st.</td>
<td>111.58 s.</td>
<td></td>
</tr>
<tr>
<td>+ live reduction + partial order</td>
<td>1 536 699 tr.</td>
<td>12.22 s.</td>
<td>325 312 tr.</td>
<td>0.07 s.</td>
<td></td>
</tr>
<tr>
<td>+ live reduction + partial order + slicing</td>
<td>1 630 st.</td>
<td>9.07 s.</td>
<td>977 st.</td>
<td>3.34 s.</td>
<td></td>
</tr>
<tr>
<td>+ live reduction + partial order + + partial order</td>
<td>2 885 tr.</td>
<td></td>
<td>2 845 tr.</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>

Table 1. MASCARA verification results 4.3 Ariane-5 Flight Program The work on this experiment was initiated by EADS Launch Vehicles to better evaluate the applicability of formal validation techniques to existing software, the Ariane-5\(^1\) Flight Program. This is the embedded software which solely controls the Ariane-5 launcher during its flight, from the ground, through the atmosphere, and up to the final orbit. \(^1\) Ariane-5 is a European Space Agency project delegated to CNES France. The verification experiment is reported in [11]. First, this software was formally specified in SDL by reverse engineering from the existing code. Then, following a set of general methodological guidelines, the specification was continuously improved and all the initial requirements were verified on the final version. In particular, the combination of different optimisation techniques, operating either at the source level (like static analysis or slicing) or at the semantic level (like partial-order reductions), happened to be particularly useful for dealing with large state spaces. For example, the initial SDL version of the flight program used no fewer than 130 timers. 
Using our static analysis tool we were able to reduce them to only 55 functionally independent timers. Afterwards, the whole specification was rewritten taking into account the redundancies discovered by the analyser. The main difficulty of this case study comes from the combination of various kinds of time constraints. On the one hand, the functionality of the flight program strongly depends on absolute time: coordination dates are frequently exchanged between components in order to synchronise their behaviour during the whole flight. On the other hand, this system has to be verified within a partially constrained environment, reacting with some degree of temporal uncertainty. In this experiment, this expressivity problem was solved at the IF level thanks to explicit urgency attributes. Clearly, such features should be made available at the specification level; in particular, ongoing work addresses the introduction of high-level time and performance annotations in SDL [10]. In practice, we have considered two different situations regarding the environment. The first one is time-deterministic, which means that all environment actions (in particular the control part) take place at precise moments in time. The second one is time-nondeterministic, which means that environment actions take place with some degree of time uncertainty (within a predefined time interval). From the environment point of view, the latter situation corresponds to a whole set of scenarios, whereas the former focuses on a single one. Table 2 presents the sizes of both models generated according to different generation strategies. It also gives the average time required for verifying each kind of property (by temporal logic model checking and model minimisation respectively). 
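The static analyses that paid off in these case studies (dead variable resetting, timer reduction) rest on classical live-variable computation: a backward dataflow iteration to a fixpoint. A minimal sketch on a toy three-node control-flow graph — invented data, not IF2IF's actual code:

```python
# Toy live-variable analysis: backward dataflow to a fixpoint.
# Node -> (variables used, variables defined, successor nodes).
cfg = {
    'A': ({'x'}, {'y'}, ['B']),   # y := f(x)
    'B': ({'y'}, {'z'}, ['C']),   # z := g(y)
    'C': (set(), {'x'}, []),      # x := 0  (the old x is dead here)
}

def live_variables(cfg):
    """live_in[n] = variables whose value may still be read from node n on."""
    live_in = {n: set() for n in cfg}
    changed = True
    while changed:                # iterate until nothing changes
        changed = False
        for n, (use, defs, succ) in cfg.items():
            live_out = set().union(*(live_in[s] for s in succ)) if succ else set()
            new_in = use | (live_out - defs)
            if new_in != live_in[n]:
                live_in[n], changed = new_in, True
    return live_in
```

A variable absent from `live_in` at a control point can be reset there; this is exactly what shrinks the stored state vectors during state space generation.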
5 Conclusion and Perspectives The IF environment has already been used to analyze some representative SDL specifications such as SSCOP, an ATM signalling protocol; Mascara, an ATM wireless transport protocol; and the Ariane-5 flight program, part of the embedded software of the Ariane-5 launcher. It is currently used in several ongoing industrial case studies, including the real-time multicast protocols PGM and RMTP-II and the session initiation protocol SIP. The benefits of combining several techniques, working at different program levels, were clearly demonstrated. In particular, traditional model-checking techniques alone were not sufficient to handle these large examples. Several directions can be investigated to improve this environment. The first direction of improvement concerns the IF language. As currently defined, it allows only the description of static systems, where the number of components (processes and buffers) as well as their interactions are fixed throughout the execution. This strongly limits our ability to handle complex dynamic specifications. We are working on a less restrictive definition, where both parameterized descriptions (containing some fixed number of replicated components) and general dynamic creation and destruction of components are allowed. Furthermore, some improvements will be made regarding the description of the components themselves, such as the possibility to express structured control using composite states (as in statecharts). A second direction of improvement concerns the IF simulator, the core component used to construct and explore the underlying semantic model of IF specifications. Currently, this model is a labelled transition system, and its construction and exploration are quite restricted: first, only pure asynchronous execution (by interleaving) is possible, and second, no access is provided to the state of the system (e.g., current values of variables, current states of processes). 
We envisage improving these points by implementing a flexible simulator able, for instance, to deal with both synchronous and asynchronous components or, more generally, to take into account some scheduling policy over components during the simulation. In addition, this simulator will interact with running components through a well-defined abstract interface, thus also allowing the integration of external components (for example, directly expressed as executable code). A third direction of improvement concerns the validation methods. Clearly, we will continue to adapt and improve our static analysers as well as our model checkers to handle the extended IF descriptions. Also, some work must be done to reduce the manual overhead, still important, needed by sophisticated techniques such as compositional verification. Finally, another important issue concerns the validation of non-functional requirements. In particular, performance evaluation becomes crucial for an important class of internet protocols (such as PGM or RMTP-II) which are not necessarily designed to achieve full reliability, but only an average correct behaviour with respect to probabilistic assumptions on their execution environment (e.g., propagation delays, message loss, network element speeds, etc.). <table>
<thead>
<tr>
<th>Model Generation</th>
<th>Time</th>
<th>Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>Model verification</td>
<td>Minimisation</td>
<td>~1&quot;</td>
</tr>
<tr>
<td>Checking</td>
<td>~15&quot;</td>
<td>~2’00’&quot;</td>
</tr>
</tbody>
</table>

Table 2. Ariane-5 Flight Program verification results. In the medium term, we plan to connect the IF environment to simulation environments like OPNET [29] and SES/WORKBENCH [21]. The IF package can be downloaded at http://www-verimag.imag.fr/DIST_SYS/IF. Acknowledgements We gratefully thank Guoping Jia and Lucian Ghirvu for their help with tool development and experimentation. References
WEBSITE STRUCTURE IMPROVEMENT BY USING TAILORING METHOD Milind Mahadeo Shinde Student, Department of Computer Engineering, Imperial College of Engineering & Research, Pune, India Vinod S. Wadne Prof., Department of Computer Engineering, Imperial College of Engineering & Research, Pune, India Abstract - The WWW grows rapidly, increasing the complexity of web applications and of website navigation. This paper presents a comprehensive overview of the web mining techniques and methods used to build adaptive systems that improve website navigation and thereby the effectiveness of a website. The goal of this project is to make websites adaptive by evolving the site structure to assist user access. The technique consists of three steps: preprocessing, page classification, and site reorganization. In particular, a mathematical programming model is used to enhance user navigation on a website while minimizing alterations to its present structure. Two evaluation metrics are defined and used to judge the performance of the improved website on a real data set. The modules that make up a Web personalization system are introduced, with a focus on Web usage mining. The proposed techniques achieve effective web navigation while maintaining efficiency, and outperform the existing system. Key Words: Data and web mining, Web personalization, Mathematical programming model, Web reorganization 1. INTRODUCTION The steady growth in the scope and use of the World Wide Web has brought new techniques for the design and development of on-line information services. Many Web structures are large and complex, so users may lose sight of the goal of their inquiry, or, when they try to navigate through them, may receive ambiguous results. 
On the other side, the e-business area is growing quickly, and the need for Web marketplaces that anticipate customers' requirements is more evident than ever. The need to predict user requirements, in order to enhance user retention and the usability of a website, can therefore be addressed by personalizing the website. Web personalization is defined as any action which adapts the services or information supplied by a website to the needs of a specific user or set of users, taking advantage of the knowledge obtained from the users' navigational behaviour and individual interests, in combination with the structure and contents of the website. The main aim of Web personalization is to "supply users with the information they need, instead of expecting them to ask for it explicitly". Even though finding the required information on a website is difficult and creating an efficient website is not a trivial job, investments in web design are ever increasing. Users who find it difficult to reach the target pages are ready to abandon a website, even if it contains high-quality information. A total reorganization of a website could thoroughly alter the location of recognizable items, and the creation of a new website can disorient customers; reorganization therefore cannot be performed repeatedly to enhance user navigability. Different techniques are used to enhance the web structure, such as clustering algorithms and the ant colony system. A Graph Partitioned Clustering algorithm is used to group users with similar navigational patterns. An undirected graph is built, based on the connectivity between every pair of visited web pages. Every edge in the graph is given a weight based on frequency and connectivity time; the connectivity time measures how often two pages are visited consecutively in a session. The users' navigation patterns thus enhance the website structure through reorganization. 2. 
RELATED WORK Web navigation is an essential part of a website: it is the way a website is organized so that visitors can find the information they want. Many websites have a hierarchical organization of content [1]. In particular, it is often not clear where a specific piece of content or document is situated. Three techniques — First Only, Optimize Time, and Optimize Benefit — are used for recommending additional links to the website administrator. An algorithm is also used to find pages in a website whose position differs from the position where visitors generally try to find them. The main focus is on the backtracking behaviour of visitors when they do not find the information where they expect it. The pages from which users backtrack are considered expected locations for the target pages. When the website does not have a clear separation between content pages and index pages, it can be difficult to distinguish target pages from other pages. The algorithm can therefore produce false expected locations if it treats target pages as backtracking points, and can miss desired locations if it treats backtrack points as target pages. The main aim of a technique that reorganizes websites based on user access patterns is to make websites adaptive by evolving the site structure to assist user access [2]. Three steps are used in this approach: pre-processing, page classification, and website reorganization. In the pre-processing stage, the pages of a website are processed to form an internal representation of the site, and users' page-access information is collected from the web server log. In the page classification stage, the web pages are divided into two categories, content pages and index pages, based on the page-access information. The classified pages are then examined in order to organize them properly. 
The reorganization algorithm bases its decisions on user access information, not on web content. Hou et al. [3] proposed an analysis of the hyperlinks in a web page and a new method to build the page source. Two algorithms use page similarity to discover relevant pages. The first, the Extended Co-citation algorithm, extends the classical co-citation concept; it is intuitive and simple. The second, the Latent Linkage Information (LLI) algorithm, finds relevant pages more efficiently and accurately by using linear algebra, particularly the singular value decomposition of matrices, to expose the relationships among pages. The new page source reduces the influence of pages from the same website to a reasonable level in the page-similarity computation, avoids losing useful information, and prevents the results from being distorted by malicious hyperlinks. It can recognize pages that are semantically related to the given page, as well as pages related to it in a broader sense. In the LLI algorithm, the page-similarity concept could be adapted for page clustering, provided the set of clustered pages is not huge. A web personalization technique customizes a website according to the needs of particular users [4], taking advantage of the information gathered from the analysis of users' navigational behaviour in association with other data collected in the web context: structure, user profile data, and content. As the web grows tremendously, the web personalization domain has gained strong momentum in both research and commercial areas. Personalization can be achieved by exploiting users' navigational behaviour, as exposed through the processing of web usage logs, together with users' interests and characteristics. 
No integrated solution is provided, however, for combining the techniques used in user profiling, web usage mining, content acquisition, and website publishing and management. 3. MOTIVATION As the brief literature survey shows, the main limitation of existing systems is difficulty in navigation. This problem leads many users to abandon a website and switch to a competitor. Generally, having traversed several paths to locate a target indicates that the user is likely to have experienced navigation difficulty. For this reason, the main motivation is to improve the navigation effectiveness of a website with minimal changes. A mathematical programming model is used to improve user navigation on a website while minimizing alterations to its current structure; this MP model not only accomplishes the task successfully but also generates optimal solutions surprisingly fast, and techniques can be developed that accurately identify users' targets. 4. PROPOSED SYSTEM 4.1 Personalization Based On Web Usage Mining The Web personalization process generally consists of three phases: data preparation and transformation, pattern discovery, and recommendation. In traditional collaborative filtering approaches, both the pattern discovery phase and the recommendation phase are performed in real time. Web personalization systems based on web usage mining [5] perform pattern discovery offline. The data preparation phase converts raw web log files into the stream data that data mining tasks can use. A wide range of data mining techniques can then be applied to the web application data or click stream in the pattern discovery phase, such as clustering, association rule mining [6, 7, 8], and sequential pattern discovery. Generally, the active user session is considered for recommendation in conjunction with the discovered patterns to supply the personalized content. 
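The pattern-discovery phase can be illustrated by its simplest computation: counting the support of page sets over sessions, the first step of association-rule mining. The sessions below are invented toy data:

```python
# Toy support counting over user sessions (first step of association-rule
# mining in the offline pattern-discovery phase).
sessions = [
    {'home', 'news', 'sports'},
    {'home', 'news'},
    {'home', 'sports'},
    {'news', 'sports'},
]

def support(itemset, sessions):
    """Fraction of sessions containing every page of the itemset."""
    return sum(1 for s in sessions if itemset <= s) / len(sessions)
```

For instance, `support({'home', 'news'}, sessions)` is 0.5 here: the pair occurs in two of the four sessions. Page sets above a chosen support threshold are then matched online against the active user session to produce recommendations.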
4.2 Web Personalization The process of web personalization, also called "tailoring" of Web pages, adapts pages to users' requirements using information about the users' navigational behaviour and profile data. An approach described by Perkowitz and Etzioni automatically synthesizes index pages containing links to pages on specific topics, based on the co-occurrence frequency of pages in user traversals, in order to enhance user navigation. The methods proposed by Mobasher et al. generate clusters of user profiles from website logs and then generate dynamic links for users who are classified into different categories based on their access patterns. Prior studies on the web have mainly focused on issues such as understanding web structures, finding the pages relevant to a given page, mining the informative structure of a newly created website, and extracting webpage templates. 4.3 Web Transformation Web transformation, on the other hand, consists of changing the website structure to facilitate navigation for a large set of users, or groups of users, without personalizing pages for individual users. One approach restructures web pages so that users can reach the desired information in a minimum number of clicks. As this approach benefits groups of users, it considers local structures in the website rather than the site as a whole; the resulting architecture is therefore not necessarily optimal. A heuristic method based on simulated annealing has been proposed to re-link web pages to enhance website navigability. This technique makes use of aggregate user-preference data and can also be used to enhance the link structure of websites for both wired and wireless devices. 
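A hedged sketch of this kind of link-addition heuristic (a simple frequency-vote variant of my own, not the cited method): whenever enough sessions backtrack through a page before finally reaching their target, a direct link from that page to the target is proposed.

```python
# Hypothetical link-addition heuristic: if at least `threshold` sessions
# revisit (backtrack through) a page before ending at a target page,
# propose a direct link from that page to the target.
from collections import Counter

def propose_links(sessions, threshold=2):
    votes = Counter()
    for session in sessions:
        target, seen = session[-1], set()
        for page in session[:-1]:
            if page in seen:               # revisit = backtrack point
                votes[(page, target)] += 1
            seen.add(page)
    return [link for link, v in votes.items() if v >= threshold]

sessions = [
    ['home', 'products', 'home', 'about', 'team'],
    ['home', 'products', 'home', 'team'],
]
```

Here both sessions backtrack to `home` before reaching `team`, so a `home → team` link is proposed. Real transformation methods weigh such candidate links against the cost of altering the existing structure.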
4.4 Knowledge Discovery from Web Logs

Analysis of web access logs is an important task for predicting user behaviour and planning the web structure. From an applications point of view, the knowledge extracted from web usage patterns can be applied to tasks related to e-services, e-business, e-education, online communities, and so on. At the same time, data volume and density are growing rapidly, so the information provided by conventional web log analysis tools may be inadequate, and more powerful, intelligent data mining techniques are required. Web log files play an important role in web usage mining: the knowledge gathered from them describes the navigation process of users. Different types of users exhibit different navigational patterns, and because users continually shift their focus, such navigational knowledge is difficult to acquire. Knowledge of navigation patterns is used for two purposes: for website personalization systems and to help users by predicting their future requests.

4.5 Mathematical Model

A mathematical programming model is used here to enhance user navigation on a website while minimizing the alterations to its present structure. Wide-ranging experiments on a publicly available real data set show that this model not only enhances user navigation with few changes but can also be solved efficiently. Backtracking is used to identify the path a user has traversed: a backtrack is defined as a user's revisit to a previously browsed page. The key observation is that users backtrack when they do not find the desired information at the expected location.
Hence, a path is defined as the sequence of pages visited by a user without backtracking; this concept is analogous to the maximal forward reference. Every backtracking point is treated as the end of a path. The mathematical programming model shown below minimizes the alterations to the existing website structure while enhancing user navigation [9]:

\[ \begin{align*} \text{Minimize} & \quad \sum_{(i,j) \in E} x_{ij} \left[1 - \lambda_{ij}(1-\varepsilon) \right] + m \sum_{i \in N_E} p_i \\ \text{subject to} & \quad c_{ij}^S = \sum_{(i,j) \in E} a_{ijk} x_{ij}; \tau = 1,2,\ldots, L_{D}(k,S), \quad k = 1,2,\ldots, L_{D}(S), \forall S \in TR \\ & \quad \sum_{k=1}^{b_k} L_{D}(k,S) \sum_{r=1}^{a_{kj}(S)} c_{ij}^S \geq 1; \forall S \in TR, \quad j = tgf(S) \\ & \quad \sum_{j \in E} x_{ij}(1 - \lambda_{ij}) + W_i - p_i \leq C_i; \forall i \in N_E \\ & \quad x_{ij} \in \{0,1\}, \quad p_i \in \{0\} \cup \mathbb{Z}^+, \forall (i,j) \in E, \quad i \in N_E. \end{align*} \]

5 HIGH LEVEL DESIGN

5.1 Architecture of the Proposed System

The proposed system improves user navigation by making use of relevant mini sessions and their candidate links. In the transformation technique, the mathematical programming model is used to improve navigation for users in general rather than personalizing pages for individual users. Tailoring web pages to the needs of users based on their navigational behaviour and profile information is called web personalization; the personalization technique is thus a tailoring of web pages. A backtrack is a user's revisit to a previously browsed page: if a user does not find a page at the expected location, the user backtracks. A path is therefore the sequence of pages visited by a user without backtracking, equivalent to a maximal forward reference.
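The path/backtracking definition above (maximal forward references) can be sketched in Python; this is an illustrative reconstruction, with the function name and session representation assumed rather than taken from the paper:

```python
def maximal_forward_references(session):
    """Split a click sequence into paths, ending each path at a
    backtracking point (a revisit to an earlier-browsed page)."""
    paths, current = [], []
    for page in session:
        if page in current:
            # Backtrack detected: the forward path so far is maximal.
            paths.append(list(current))
            current = current[:current.index(page) + 1]
        else:
            current.append(page)
    if current:
        paths.append(current)
    return paths

# A visit A -> B -> C, back to B, then on to D yields two paths.
print(maximal_forward_references(["A", "B", "C", "B", "D"]))
# → [['A', 'B', 'C'], ['A', 'B', 'D']]
```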
Every backtracking point marks the end of a path. Experiments were conducted on a data set collected from a real website and on synthetic data sets. The model is first tested with varying parameter values on all data sets; the real data is then partitioned into training and testing sets.

The process is modelled as the tuple (Q, C, δ, Q_0, F):

Q = {P_0, P_1, P_2, P_3, P_4}
C = {x, z, w, e, y}
δ = {A, B, C, D, E}
Q_0 = {P_0}
F = {P_4}

where P_0 = personalization process, P_1 = transformation process, P_2 = backtracking, P_3 = problem size reduction, P_4 = evaluation; and x = URL for a single user, z = URL of the target page, y = back to the previous page, w = insert minimum links, e = target page ID.

5.3 Algorithm to Find the Expected Location

1) Build a hash table of the links in the website.
2) Partition the web log by visitor: sort the web log file with visitor ID as the primary key and time as the secondary key; partition the web log file by hashing on visitor ID; process each partition separately; scan the web log and extract the sequence of pages for every visitor ID.
3) For each visitor, partition the web log by target page.
4) Find the expected locations for each visitor and target page:
   Let (P_1, P_2, ..., P_n) be the sequence of visited pages and B = ∅ the list of backtrack pages.
   a) for i = 2 to n-2 begin
   b)   if ((P_{i-1} = P_{i+1}) or (no link from P_i to P_{i+1}))
   c)     Add P_i to B
      end
      If (B is not empty) Add (P_n, B, P_{n-1}) to the table

The hash table is built in Step 1, and the web log is partitioned by visitor in Step 2. In Step 3, the sequence of accesses for each visitor is divided by the target pages they visit; it is assumed that the website administrator either specifies the set of feasible target pages or specifies a time threshold to distinguish target pages from other pages. In Step 4, all expected locations (if any) are found for each target page and added to a table used by the next algorithm. Backtracks are detected in Step 4(b).
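Step 4 of the algorithm can be sketched in Python as follows; this is an illustrative reconstruction, with `expected_locations`, the session format, and the link representation all assumed rather than taken from the paper:

```python
def expected_locations(sessions, links):
    """For each visitor session (a list of page IDs ending at the target),
    detect backtrack points and record (target page, expected locations,
    actual location) rows. `links` is a set of (from_page, to_page) pairs."""
    table = []
    for pages in sessions:
        backtracks = []
        n = len(pages)
        for i in range(1, n - 2):  # i = 2 .. n-2 in the paper's 1-based indexing
            # A backtrack: the visitor returns to the previous page, or the
            # next page is not reachable by a link from the current one.
            if pages[i - 1] == pages[i + 1] or (pages[i], pages[i + 1]) not in links:
                backtracks.append(pages[i])
        if backtracks:
            table.append((pages[-1], backtracks, pages[-2]))
    return table

# The visitor tries B first, backtracks, and finds target T under D:
links = {("A", "B"), ("B", "C"), ("A", "D"), ("D", "T")}
print(expected_locations([["A", "B", "A", "D", "T"]], links))
# → [('T', ['B'], 'D')]
```

The expected location B is exactly where the visitor first looked for the target T, which is the signal used to decide where new links should be added.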
In addition to checking for the absence of a link from the current page to the next page, we also check whether the previous page and the next page are the same. The latter check handles the case where a visitor uses a navigation link to return to the previously visited page instead of the browser's "back" button.

5.4 Algorithm to Optimize Time

This algorithm recommends the set of pages that minimizes the number of times a visitor has to backtrack, i.e., the number of times the visitor does not find the page in an expected location. The number of backtracks serves as a proxy for search time. The algorithm is as follows:

Repeat
  For each record begin
    Let m be the number of expected locations in this record
    For j = 1 to m
      Increment support(E_j) by m+1-j
  end
  Sort pages by support and let P = page with highest support
  If support(P) ≥ St begin
    Add (P, support(P)) to the list of recommended pages
    For each record begin
      For k = 1 to n begin
        If E_k = P
          Set E_k, E_{k+1}, ..., E_n to null
      end
    end
  end
until (support(P) < St)

6 PERFORMANCE EVALUATION

In web reorganization, a number of redirection links are traversed to locate a particular destination; three mini sessions are used to weight the number of paths traversed. The tracking result shows the values of the objective function, path threshold, current outdegree, and outdegree threshold. Links whose value does not exceed the outdegree threshold are considered for reorganization. The following table shows the links used in mini-session 1.
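The time-optimization algorithm above can be sketched in Python; this is an illustrative reconstruction (the function name, record format, and tie-breaking are assumptions):

```python
def optimize_time(records, support_threshold):
    """Greedily recommend pages: repeatedly pick the expected location with
    the highest position-weighted support, until support drops below the
    threshold. `records` is a list of expected-location lists, ordered by
    the position in which the visitor tried them."""
    recs = [list(r) for r in records]
    recommended = []
    while True:
        support = {}
        for rec in recs:
            m = len(rec)
            for j, page in enumerate(rec):  # j = 0 .. m-1
                if page is not None:
                    # Earlier expected locations carry more weight (m+1-j
                    # in the paper's 1-based indexing).
                    support[page] = support.get(page, 0) + (m - j)
        if not support:
            break
        page = max(support, key=support.get)
        if support[page] < support_threshold:
            break
        recommended.append((page, support[page]))
        for rec in recs:
            for k, p in enumerate(rec):
                if p == page:
                    # The visitor would now find the target here; later
                    # expected locations in this record no longer count.
                    for t in range(k, len(rec)):
                        rec[t] = None
                    break
    return recommended

print(optimize_time([["A", "B"], ["A"], ["B", "A"]], 2))
# → [('A', 4), ('B', 2)]
```

Nulling out the remaining expected locations after a hit mirrors the inner loop of the pseudocode: once a recommended link satisfies a record, that record contributes no further support.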
<table> <thead> <tr> <th>Objective Function</th> <th>Path Threshold</th> <th>Current Outdegree</th> <th>Outdegree Threshold</th> </tr> </thead> <tbody> <tr> <td>0.2758</td> <td>0.5045</td> <td>0.8046</td> <td>1.3091</td> </tr> <tr> <td>0.6719</td> <td>0.2582</td> <td>0.3810</td> <td>0.6392</td> </tr> <tr> <td>0.7477</td> <td>0.8359</td> <td>0.3130</td> <td>1.1489</td> </tr> <tr> <td>0.9648</td> <td>0.4220</td> <td>0.0195</td> <td>0.4414</td> </tr> <tr> <td>0.3026</td> <td>0.2329</td> <td>0.8037</td> <td>1.0366</td> </tr> <tr> <td>0.1944</td> <td>0.6156</td> <td>0.5873</td> <td>1.2029</td> </tr> </tbody> </table> Table -1: Tracking Result

6.1 Graphical Results

The time efficiency of the existing and proposed systems is compared in milliseconds. The proposed system uses the mathematical model and the time-optimization algorithm to perform faster and more exact search than the existing system.

Chart -1: Time efficiency of the existing system

Where the existing system reaches the required target in 500 milliseconds, the proposed system reaches the goal in just 30 milliseconds, as shown in Chart 2. The time efficiency of the proposed system varies with the number of traversed paths.

Chart -2: Time efficiency of the proposed system

7 CONCLUSIONS

The tailoring method and k-means clustering are used to support navigation. To enhance navigation effectiveness, the algorithms for finding the expected location of requested information and for optimizing time are used. The transformation approach is generally suitable for informational websites whose contents do not change over time, while the web personalization module works well with sites containing dynamic information by considering information about each visitor. The proposed system reaches the target much more quickly than the existing system.

REFERENCES
3-03-25

Guidelines for Designing EIS Interfaces

Hugh J. Watson, John Satzinger

Payoff

For users of an executive information system (EIS), the EIS interface is the system. Here are eight guidelines for designing an EIS interface, all of which are derived from experience with actual EISs that are successfully meeting the unique information needs of business executives.

Introduction

Executive information systems are among the fastest-growing applications in corporate America. They are designed to supply senior executives with needed information, such as news and stock prices and information about competitors, customers, key performance indicators, and internal operations. Most large firms have an EIS in place or are planning to develop one, and even smaller firms are implementing them. A major factor leading to the development of executive information systems is executives' need for better information in today's competitive business environment. Another is the availability of special-purpose EIS software that facilitates the development of an EIS. Pilot Software, Inc. (Boston MA) and Comshare, Inc. (Ann Arbor MI) led the way with their products in the mid-1980s; today, a host of other products are available. As successes with EISs at such corporations as Northwest Industries, Ltd., Lockheed Aeronautical Systems Co., Xerox Corp., Quaker Oats Co., and Beneficial Corp. have become widely known, executives and IS managers have recognized the potential of EISs and championed their development. As part of the research program on executive information systems at the University of Georgia, EIS practices at leading-edge firms have been studied. The program consults with many EIS developers using most available EIS products. The information from this research has yielded specific guidelines for developing EIS user interfaces. The guidelines presented in this article can be usefully applied by EIS developers.
They are also helpful to executives, so they will know what to ask for and expect in their EIS. The guidelines are illustrated with specific examples, many of which are from Lockheed Aeronautical Systems, which developed one of the first successful EIS. Definition of an EIS User Interface In building a successful Executive Information System, a myriad of technical, organizational, and managerial issues must be addressed. Of utmost importance is creating an EIS that is easy to use. Executives have little time or patience with difficult systems. The term user interface refers to how the user directs the operation of the system (e.g., keyboard, mouse, or touchscreen; question/answer, command language, or menus) and how the output is given to the user (e.g., graphical, tabular, or textual; color or monochrome; paper or online). For the system to be easy to use, the user must know how to make it work and what the output means. A user interface must be designed to make operating the system and interpreting the output as easy as possible. Designing an EIS user interface is different from designing other information systems. Because of the nature of executive users, the system must be more than user-friendly; it must also be user-intuitive, even user-seductive. Another difference is the flexibility the system must have, because it is difficult to determine how a particular executive will use an EIS. Also, because of advances in hardware and software, systems designers have many new options to choose from when implementing an EIS. A successful EIS often benefits other users in addition to executives. For this reason, it has been argued that EIS also stands for “everybody’s information system.” These users are more likely to accept complex user interfaces than senior executives and may be willing to trade off simplicity for flexibility. In many instances, however, the more complex applications created for lower-organizational-level users are not given to executives. 
For example, an executive may not need an application that provides advanced query capabilities to analyze sales data. The focus of attention in this article is on the design of user interfaces for executives rather than for lower-level organizational personnel. **Design Guidelines** Developers of an executive information system are typically building their first system of this type. The EIS users, information content, and software are often different from previous systems development projects. Poor initial choices can undermine or even eliminate the chances for successful implementation. Some EISs have not been as successful as they might be, or have even failed, because of poor user interface designs. The following eight guidelines on designing an EIS interface should help developers successfully implement an EIS: - Involving executives in the design of the user interface. - Setting standards for screen layout, format, and color. - Making use of the system intuitive. - Using standard definitions of terms. - Designing the main menu as a gateway to all computer use. - Designing the system for ease of navigation. - Striving to make response time as fast as possible. - Expecting preferences in user interfaces to change. In the following sections these guidelines are examined and illustrated with examples from successful EIS implementations. --- Involving Executives in User Interface Design Although user involvement in the systems development process is critical for all types of information systems, executive involvement in the design of an EIS user interface is especially important. Executives might have limited experience working directly with a computer, and even if they do have some computer experience, the EIS will look and feel quite different from any organizational information systems the executives might know.
Designers should be prepared to show a variety of prototype screens and navigation approaches because the executive might have limited knowledge of what an EIS can actually do. Evaluating these prototypes is also likely to get apprehensive executives more committed to the EIS as they begin to see the system's potential. For this reason, it is important to involve all executive users in the process, not just the executive sponsor. Prototyping Approaches. Early prototyping should be used to help decide on the basic look and feel of the system. Two fundamental approaches should be presented: - A full-screen interface with large buttons and icons. - A multiple window interface with pull-down menus and dialog boxes. The first approach might be less intimidating, but the second approach conforms to the popular interface design standards. The preferred look and feel should be used to finalize the development environment that will be used (e.g., Windows), as some development environments might more easily accommodate one or the other type of system. In addition, differences in preference for the look and feel may signal the amount of individual tailoring that might be required for each executive. Although rapid prototyping and extensive user feedback are quite important, the prototypes do not have to be computer-based. Paper screen mock-ups (i.e., storyboards) can be quite effective because the executive can review the screens as time permits and consider the alternatives before providing feedback to the designer. Computer-based prototypes, however, are more useful when showing the executive the potential of the technology and when exploring navigation approaches the executive might prefer. Executives also must be involved in the design of the interface because their preferences for screen prototypes can provide clues about the importance of screen content and design. This aids a designer in uncovering additional information requirements. 
By observing the relationships among importance of data, the level of detail desired, and frequency with which the information needs to be called up, a designer better understands the way an executive will actually use the EIS. Because of the almost endless number of possible screens that can be provided, the designer must narrow the number down to the most important screens for each executive. This not only reduces development time and system overhead, but also makes it possible to provide a system that makes it easy for the executive to find the information that is actually needed. Any later changes to the interface of an EIS should be discussed with its users. This is especially true when a designer considers deleting seldom-used screens. It is not easy to tell the value of a particular screen just by tracking usage. An executive may have looked at a particular screen only once, but that screen could have provided critical insight that day. Months later, the same screen might be needed once again when the same critical need arises. **Setting Standards for Screen Layout, Format, and Color** Currently available EIS software offers an array of screen design alternatives. Screens can display graphs, tables, and text in hundreds of formats and colors. Unfortunately, this cornucopia of choices sometimes tempts the designer to use many of these alternatives to add sizzle to the screens. In actuality, it only leads to the creation of displays that are confusing. Designers should carefully develop screen design standards that use only a few layouts, formats, and colors. The EIS at Lockheed illustrates the use of screen design standards; a sample screen is presented in Exhibit 1. The top of the screen presents the screen number, a title for the screen, and the date of the last update. The right hand corner gives the names of those who are knowledgeable about the information and their work telephone number. 
This information makes it easier for users to go directly to the person who is best able to answer any questions about the information. Some EIS allow users to click on the person’s name to have the telephone number dialed automatically. **Sample Screen from the Lockheed Aeronautical Systems EIS** **Layout Standards.** Lockheed’s standard layout is to present graphical information at the top of the screen, more detailed tabular data below it, and textual information at the bottom of the screen. The graph provides a quick visual presentation of a situation, the table gives specific numbers, and the text provides explanations, assessments, actions being taken, and other such information. **Graph Standards.** Graphs of historical and current data always use bar charts. When actuals are compared against plans or budgets, paired bar charts such as those shown in Exhibit 1 are used. The bars are of different widths to allow users with color perception problems to correctly identify each bar. Projections into the future use line graphs. Pie or stacked bar charts are used to depict parts of a whole. On all charts, vertical wording is avoided and abbreviations and acronyms are limited to those on an authorized list. All bar charts are set to zero at the origin to avoid distortions, scales are set in prescribed increments and are identical within a subject series, and bars that exceed the scale have numerical values shown. **Color Standards.** Lockheed’s EIS uses only a few carefully selected colors. Yellow is used to show actual performance, cyan (light blue) is used for company goals and commitments to the corporate office, and magenta represents internal goals and objectives. A traffic-light pattern is used to highlight comparisons: green is good, yellow is marginal, and red is unfavorable. For example, under budget or ahead of schedule is in green, on budget or on schedule is in yellow, and over budget or behind schedule is in red. 
Organization charts use different colors for the various levels of management. Colors have been selected to minimize color differentiation problems (about 6% of all men and less than 1% of all women have color perception problems), and all displays are designed to be effective with black-and-white hard copy output. Standard layouts, formats, and colors offer many advantages. They provide a consistent look and feel for the system. Users are less likely to misinterpret or misunderstand the information presented, less cognitive effort is required, and less time is needed to understand a display. **Use of Text.** Textual material is entered by the EIS support staff to make the information displayed more useful. The information itself may not reveal the full story; the purpose of the commentary is to add value to the information displayed. Although Lockheed's EIS presents commentary information on the same screen to which it applies, other EISs place it on a separate screen. The power of textual commentary is illustrated by the following example from Lockheed. Both a graph and tabular data indicated that actual cash flow was below budget by $20 million. A commentary revealed, however, that payment for a plane in the amount of $20 million was en route from a foreign country and would be in a Lockheed account by the end of the day. **Voice Annotations.** A few EISs allow voice commentaries to be associated with screens. This is an appealing feature because executives are used to receiving information verbally, and voice is a richer communication medium than printed words. Voice annotation of screens is currently the best accepted of the multimedia enhancements to EISs. Other possibilities, such as video and personal teleconferencing, have good potential, but the business case for them has yet to be made. **Using the System Should Be Intuitive** Ideally, an executive should be able to use an EIS without training.
At most, no more than 15 minutes of instruction should be required to teach how to use the basic information-retrieval capabilities. Systems more complex than this are unlikely to be used. Most successful EISs are operated by point-and-click technology. By picking from among menus, icons, or buttons, an executive navigates through the system to a desired capability (e.g., e-mail or information). Experience with decision support systems has shown that most executives will not use a command language with a verb-noun syntax because it is too time-consuming to use and difficult to learn and remember. In one easy-to-use EIS, 35-inch monitors were installed in executives' offices; each can simultaneously display up to 10 windows of information. What is shown is customized for each executive and varies with the day of the month. This EIS is essentially a ticker tape of relevant information. --- Forgo User Documentation. Systems developers are typically expected to write user documentation for new applications. However, this is usually unnecessary or inappropriate for executive information systems. The system should be sufficiently intuitive that instruction manuals are not needed. Even more so than with other types of users, executives do not read documentation. If an executive is having a problem using the system, it is best if the user calls the EIS support staff to correct the difficulty. If users request documentation, it should be provided, either within the system or in hard copy. Ideally, the instructions should fit on a single page or a few screens. (One word of warning: the fact that there is no need for user documentation should not be confused with the need for system documentation, which remains very important.) Adhering to Standard Definitions of Terms Most organizations have data dictionaries that include definitions for the data elements used in transaction processing applications.
There are other terms that are widely used throughout organizations and are important to an EIS but are not as precisely defined. Everyone in a company uses these words and has a general understanding of their meaning, but slight differences exist and can cause misunderstandings. For example, the term sign-up at Lockheed had different shades of meaning. A sign-up involves a company interested in purchasing an aircraft. To marketing personnel, a sign-up occurred when customers said they were going to make a purchase. The legal department, however, recorded a sign-up only when a signed contract was received. Finance waited until a down payment was received. Each group generally knew what the term meant, but slight differences based on their organizational perspective led to timing differences as to when a sign-up was recorded. Because an aircraft can cost between $20 million and $30 million, such differences can result in considerably different impressions as to how the organization is doing. A sign-up has now been defined as a signed contract with a down payment. A Data Dictionary. Lockheed has an executive data dictionary that contains definitions for all of the terms used in its EIS. The definitions can be accessed through the EIS and are available to all users. Creating an executive data dictionary is useful because it makes executives consider which terms are being used inconsistently and develop definitions that reflect an organizationwide rather than functional-area perspective. Designing the Main Menu as a Gateway Most organizations have a variety of applications designed to support executives: e-mail, electronic filing, decision support, and access to external news and stock prices. Many of these applications require their own access procedures and passwords. This requirement poses some difficulty and inconvenience and discourages hands-on computer use.
The development of an EIS provides an excellent opportunity to deliver all of these capabilities in a single, integrated system. An EIS provides the logical and physical umbrella under which all of the executives' computer applications are placed. A number of EISs use their main menus to display all information and applications available through picks (i.e., menus, icons, or buttons). The kinds of information usually provide one set of options. For example, there may be screen picks for financial, production, marketing, and human resources information. Separate picks may exist on the basis of products, geographical locations, and organizational units (e.g., corporate, division). The choices reflect the information contained in the EIS and how it is organized. Lower-level menus let users move to the specific information desired within a general category. Access to these applications should be transparent to the user and not require any additional log-on procedures or passwords; these activities should be handled automatically by the system. **Designing the System for Ease of Navigation** Vendors' demos often show executives moving easily through a system, looking at current status information and drilling down to more detailed information when a problem or item of interest is identified. This scenario is possible in practice, but only if careful attention is given to navigation issues early in the system's design. Navigation problems may be masked when there are few screens in the system. As the number of screens grows, as it inevitably does, users find it more difficult and time-consuming to move through a system. For example, suppose an executive is looking at financial information and wants to move to operational production data. In a poorly designed system, the user will have to back out of the financial application, screen by screen, until the main menu is reached, and then enter the production application and move through screens to the desired information.
The starting point in designing navigation for an EIS is understanding the mental models that executives have of the organization. If the structure of the information does not match their mental models, users will have a difficult time finding the information they want. For example, do executives look at the firm in terms of geographical location, products, functional areas, or divisions? Each view of the organization may call for a pick on the main menu and a set of related screens. A complicating factor arises when one or a few executives have unique mental models. During the development of one EIS at a hospital, designers found that the director of nursing wanted information structured much differently than other users did. Her view of the hospital could be accommodated but required custom designing the system for her use. The decision of whether to do this was a business rather than a technical one.

**Navigation Features.** There are features that can be included in an EIS to make navigation easier. Some systems have a screen that shows where the user is in the system. Often users get lost and are uncertain about how to move elsewhere, short of turning off the system and starting over. Another feature is a home key or pick that takes the user directly back to the main menu. Some systems provide a retrace capability that allows users to easily backtrack to screens viewed previously. Another helpful feature is a pick on the main menu that takes the user to a screen listing the user's most popular screens. From this screen, a user can go directly to any screen on a personalized menu. Also, a single menu can be created that provides direct access to a large number of screens. For example, a company has five plants; each produces 20 products, and there is work-in-process and finished-goods inventory. The various combinations result in 200 screens (i.e., 5 x 20 x 2 = 200).
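As a hypothetical sketch of this combinatorial menu idea (all plant, product, and screen names here are invented, not from an actual EIS), one combined menu selection can be mapped directly to a screen rather than listing 200 separate picks:

```python
# Sketch: rather than 200 individual screen picks, a single menu combines
# three choices (plant, product, inventory type) into one direct lookup.
# All identifiers below are illustrative assumptions.

PLANTS = [f"Plant {chr(65 + i)}" for i in range(5)]        # 5 plants: A..E
PRODUCTS = [f"Product {n:02d}" for n in range(1, 21)]      # 20 products each
INVENTORY_TYPES = ["Work-in-Process", "Finished Goods"]    # 2 inventory types

# One screen per combination: 5 * 20 * 2 = 200 screens.
screen_index = {
    (plant, product, inv): f"SCR-{p}{q:02d}{v}"
    for p, plant in enumerate(PLANTS)
    for q, product in enumerate(PRODUCTS, start=1)
    for v, inv in enumerate(INVENTORY_TYPES)
}

def direct_access(plant: str, product: str, inventory_type: str) -> str:
    """Return the screen id for one combined menu selection."""
    return screen_index[(plant, product, inventory_type)]

print(len(screen_index))                                   # 200 screens
print(direct_access("Plant A", "Product 01", "Finished Goods"))
```

The design point is that the user makes three small choices on one menu instead of navigating a 200-item list or a three-level menu tree.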
A single menu where the user picks the plant, the product, and the type of inventory provides direct access to the desired screen. Lockheed recently switched from custom-developed to commercial EIS software. Before Lockheed signed the contract, however, the vendor had to agree to support keyboards as an input device to the system, largely for navigation reasons. Lockheed's executives were accustomed to point-to-point navigation in the system. Each screen could be accessed from any place in the system by simply entering its screen number. Most executives remembered or kept a list of the screen numbers of their favorite screens.

**Response Time as Fast as Possible**

When incorporating text and graphics, internal and external data, hundreds of individually tailored screens and views, and multiple navigational paths through the system, EIS developers must continually monitor the response time of the system. Executives are intolerant of slow response times. A recent survey of Executive Information System development practices found that response times for EIS had actually degraded from an average of 2.8 seconds in 1988 to 5.3 seconds in 1991, despite the increased use of powerful desktop computers and local area networks. Although the same survey found satisfaction with ease of use and the effectiveness of the EIS to be relatively high, satisfaction with response times was extremely low. Response time problems can be anticipated when the EIS must dynamically build a screen each time it is requested by searching corporate data bases. Response time can be much faster if the screens are static and updated each night, though designers must evaluate the trade-off between the timeliness of data on the screens and response time. Response time can also be affected by the narrow bandwidth of today's networks.

**When Speed Counts.** Generally, executives expect very fast response times when flipping through their usual set of screens each morning.
One EIS developer suggested thinking of the maximum acceptable time to move from screen to screen as the time it takes the executive to turn a page of *The Wall Street Journal.* Executives can usually tolerate a slower response to ad hoc queries. When an executive is used to waiting several days for the staff to gather information for a specific question, several minutes may be an acceptable wait for directly retrieving the same information through the EIS. The differences between predefined screens and ad hoc query screens should be made clear to the executive, however. In either case, when any system function takes more than a few seconds, a message should always provide feedback that the system is processing the executive's request.

**User Preferences May Change**

Almost all aspects of an EIS, including the user interface, change over time. Several examples illustrate this point. So much information is displayed on a screen in the Lockheed EIS that a first-time viewer may be confused by all the information presented. However, this is what Lockheed's executives prefer. They want information on a single screen rather than having to page through several screens to get the same information. This approach also better supports the making of comparisons, such as when an executive wants to check graphical against numerical presentations of data. Lockheed's screens were not originally designed this way; rather, they have evolved in response to executives' requests. This same phenomenon has been noted in other, but not all, organizations as their EIS have matured. Often, organizations developing an EIS order touchscreens for technophobic executives. These users quickly discover the disadvantages of touchscreens and also find that using a mouse is easy after a little practice. Although touchscreens may help sell the idea of an EIS to some executives, these executives will probably prefer a mouse eventually. As an EIS evolves, the number of its users usually increases.
Quite possibly, the training given to first-time users will have to change. For example, more time may have to be spent discussing how to interpret the information presented on the most complex screens. Another approach is to include less complex screens in the system. This was done in one manufacturing firm where the new CEO had a strong background in engineering and production but was relatively weak in finance. Recognizing this fact, the EIS staff developed a number of simple screens that displayed key financial information. As the CEO became experienced in finance, the special screens were phased out of the system.

**Conclusion**

The most important thing to remember is that, from a user's perspective, the user interface is the Executive Information System. Most users care little about which hardware or software is used, where data resides, or which communications protocols are used. Rather, EIS users focus their attention on what they must know in order to use the system, how the system's actions are directed, and how the system's output is presented. If executives have to spend much time learning to use the EIS or finding the information they need from it, they will not use the system. To make sure that executives will use an Executive Information System, developers must pay close attention to the user interface. The eight guidelines discussed in this article will help.

**Author Biographies**

Hugh J. Watson is the C. Herman and Mary Virginia Terry Chair of Business Administration at the University of Georgia in Athens, GA. John Satzinger is a member of the MIS faculty at the Terry College of Business at the University of Georgia.
Talking about Security with Professional Developers

Conference or Workshop Item. © 2019 IEEE. Version: Accepted Manuscript.

Tamara Lopez∗, Helen Sharp∗, Thein Tun∗, Arosha Bandara†, Mark Levine† and Bashar Nuseibeh∗‡
∗School of Computing & Communications, The Open University, Milton Keynes, UK
†Department of Psychology, University of Exeter, Exeter, UK
‡Lero - The Irish Software Research Centre, University of Limerick, Limerick, Ireland
Email: ∗firstname.lastname@open.ac.uk, †firstname.lastname@exeter.ac.uk, ‡firstname.lastname@lero.ie

Abstract—This paper describes materials developed to engage professional developers in discussions about security. First, the work is framed in the context of ethnographic studies of software development, highlighting how the method is used to explore and investigate research aims for the Motivating Jenny research project. A description is given of a series of practitioner engagements that were used to develop a reflection and discussion tool using security stories taken from media and internet sources. An explanation is given of how the tool has been used to collect data within field sites, offering a way to clarify and member-check findings, and to provide a different view on practice and process. The report concludes with observations and notes about future aims for supporting and encouraging professionals to engage with security in practice.

Index Terms—secure software development, collaborative environments, empirical studies

I.
INTRODUCTION

Given the availability of tools, accepted process models, and the vast coverage of security incidents in the media, one might expect that developers would adopt secure practices as a matter of course [1]. Surprisingly, however, many professional software developers do not consistently and comprehensively make use of security tools, models and practices. Why is this? It may be that writing secure code requires effort from developers at every step in the development process. However, it may be that security in software development is also driven by intrinsic factors that can be supported through social interactions in the community and culture of software development. Starting from this premise, the Motivating Jenny project1 is conducting a series of ethnographic studies that build upon frameworks of personal motivation and team culture [2], [3]. The project examines what security in practice is like in the professional world, with the aim of finding ways to better support and engage developers with security [1]. The workshop described in this report is one example of how the project is meeting this aim, by developing a set of materials that are being refined for dissemination to practitioners working in the field.

II. BACKGROUND

The ethnographic method is used to study people's actions and accounts of actions. The method allows researchers to develop understanding about what practitioners working in socio-technical environments do and why they do it [4]. Ethnography's distinguishing feature is that it allows researchers to consider experience from the perspective of the insider [5]. Motivating Jenny has an interest in understanding the point of view professional software developers have about security. A particular focus is given to "ordinary organisational insiders" [6], that is, to developers who are not specialists in security.
Ethnographic research often includes exploration of open-ended questions that are used to guide individual study designs and to support interactions in the field. This paper opened with one such problem, noting the discrepancy between existing methods to secure software and adoption among practitioners, asking: Why don't developers adopt secure practices and technologies as a matter of course? In engaging with questions like these, researchers are able to develop and maintain a critical stance toward a topic [5]. In studies for the Motivating Jenny project, this entails listening to what developers say, but also considering how what developers say might be shaped by their interactions with the researchers and with other aspects of their working environment, including other developers, the workplace, the broader software development profession, and beyond. The project seeks to understand how to motivate professional developers to engage with security in software engineering practice. The project does not intend to assess the quality or quantity of the security information developers possess. These two statements elide aspects inherent in conducting field studies in professional settings. First, it is necessary to establish trust and to engage with professionals in a way that is non-judgmental. This is particularly important in investigations that involve sensitive concerns. In the same way that companies and developers don't want to be perceived as releasing buggy code [7], neither wants to be perceived as releasing insecure code. Related to this, it is often necessary to quickly get under the skin of a topic during interactions in the field; access to professionals is difficult to get and is often sporadic or constrained [8]. In software engineering, this means it can be challenging to get past the codified knowledge [9] that professional developers are "supposed" to know.
In the context of this research, early interactions confirmed that it is difficult to establish what developers think about security. In conversation, many developers are able to quickly name common vulnerabilities or to describe in broad terms aspects of threat modelling, but this doesn't reveal very much about their interest in security or how important they believe it is. One way to explore a topic like security is to undertake activities that run parallel to studies conducted in the field. These activities provide opportunities to build up understanding about the topic under investigation [10]. They also serve as an additional, if informal, source of information by which to confirm or refute meanings collected in other data [8]. Finally, they serve the practical purpose of raising fluency with the topic, making it easier for researchers to conduct studies in the field. The workshop described in the following sections is an example of an activity that can run in parallel to field studies, and can be adapted for use with participants from field studies or taken into professional settings for independent use.

III. DESIGN

The design for the workshop grew out of observations made in early interactions with practitioners in a meetup² and a preliminary study of conversation in an on-line Q&A environment [11]. It should be noted that the first interaction did not include formal data collection; the second was conducted with the approval of the first author's university ethics committee. Through these activities, conversations among developers about security were shown to include technical advice and guidance that reflect established practices and principles. They also include statements about personal values and attitudes such as responsibility, trust, and fear. This point stood out: similar attitudes have been shown to determine or influence security behavior in the general public [12].
It seemed reasonable to assume that they also influence developers, and that finding ways to harness talk about values and attitudes might be a way to positively influence secure coding behaviour.

A. Aims and Objectives

This workshop was developed as part of research that is examining the role motivation plays in the production of secure code and how practitioners can initiate and sustain a secure code culture. The workshop meets two needs for the larger research program. First, as noted above, the workshop supports and strengthens our research activities in field sites [4]. It meets three research aims:

1) It increases fluency with the topic, better equipping the researchers to interact with developers in the field, and to "get under the skin" of security.
2) It provides a secondary source of information that can be compared with evidence gathered in formal studies.
3) It is a way to evaluate the project's growing sense of "what security is" against theories and conceptual frameworks in fields other than software engineering.

Second, ethnography can be used to inform the design of software engineering tools and improve process development.

²https://www.meetup.com/Extreme-Programmers-London/events/245075051/

B. Related Work

The following sections describe related work that informed the workshop design.

1) Supporting interaction through talk: Unscheduled talk in the workplace is integral to software development. Conversations between developers include sharing war stories about past experiences but also provide a narrative for one another in the midst of practice, a "summing up" [13] that workers use to develop confidence, to circulate "community memory" and to learn [14]. Talk is used to generate understanding of what the software is and needs to be, and of what developers need and would like to make. This kind of "code talk" is often "snatched" or serendipitous, but lends structure to decisions about work that will be undertaken at the desk [15].
2) Security events in the media: The public sphere has been identified as one of the ways workers develop awareness of security [16]. In the current climate, security incidents are widely reported in media sources. It is also possible to find accounts of developers who have been affected by high-profile breaches. Like war stories [13], these personal accounts provide insight into how developers solve security-related problems. They also give additional perspective on the far-reaching impact of security incidents on developers and companies.

3) Taking a positive approach toward security: When developers talk about code, it is value-laden and dynamic [15]. It is also supportive. The approach used was influenced, in part, by The Envisioning Cards, a set of cards and exercises designed to help designers think broadly about how technologies are used, and to consider their long-term effects on societies [17]. In taking a positive, value-oriented approach toward security [18], developers are permitted to identify what they believe is important in their work, a point that has connections with research in motivation [19]. The approach taken is to position security as a quality to be striven for [20].

IV. AN OVERVIEW OF THE WORKSHOP

Using recent tabletop security games [21], [22] as a guide, materials were developed that would evoke a sense of play and engage attendees [23]. The workshop is structured to last 90 minutes. Participants are divided into groups of 5 or 6 (see group work in Figure 2). Activities are undertaken in three parts:

**Part 1 Compromised Software (30:00)** In this part, attendees read the report of the compromise given by HandBrake (an overview of the incident is given in Section IV-B). A set of cards prompts discussion about different aspects of the security incident (see Figure 1), including stakeholders and the impact of the incident.

**Part 2 Another Point of View (30:00)** In part two, each group works through a second version of the HandBrake incident.
This version of the incident is an account told from the perspective of a developer (see Figure 1) who was directly affected by the compromise reported in part one. Prompting questions aid discussion about how perceptions of stakeholders and impact change when the focus of the incident is oriented toward a developer.

**Part 3 Group Discussion (30:00)** In Part 3, the groups are brought together for facilitated discussion. Because this workshop is employed in different kinds of environments, the content of this section has been tailored to individual interactions.

A. Instructions

The instructions given for parts one and two are the same: Open the envelope marked Part 1/Part 2. Inside, you will find a report of a security incident and a set of cards.

- Use 10 minutes to read the reports.
- Spend 15 minutes discussing the questions on the cards in relation to this story.
- For the last 5 minutes, make a note about two or three points that stood out in discussion. Include notes about why each point stood out.

As you work through each part, take notes about key points on sheets of paper or directly on the cards themselves (for examples of notetaking, see Figures 3 and 4).

B. The Incident: HandBrake

In May 2017, attackers compromised a download server for the open-source media-encoding software HandBrake. An infected version of the software was placed on the compromised server that included the malicious software Proton. Once installed on Mac computers, Proton allows unauthorized access to the affected machine. Reports were collected about this incident from general news websites, technical news websites, news aggregators, and the notice of the compromise posted by the software company. An account by a software developer working at Panic Software, who infected his computer with the compromised software, was also collected to provide a different perspective on the compromise.

C.
Value Cards

Part two includes prompting cards and also a set of cards that name a series of values with suggested definitions. The prompting cards ask attendees to consider the perspective of the developer affected by the compromise in light of the values. Two sets of values have been adapted and trialled, including one set drawn from value-centred design (see Table I) and one from social psychology (see Table II).

V. DISCUSSION

The workshop has been run, with slight variations, three times. This section gives a synopsis of each event, with notes about variations.

A. Summer 2018: Practitioner Conference

The workshop was first given in a 75-minute session at a practitioner conference in London, UK. Attended by between 30 and 35 people, the goals for this event were to trial the workshop design and to define the security dimensions. Due to constraints in the room, attendees worked in groups of between 6 and 8 people. Each group was given a report from a different media source for Part 1. Each account reported basic details about the incident, but provided different kinds of background information and different commentary. In Part 2, each group was given the same account of the compromise at Panic Software. The first set of values was used to support discussion for this part (see Table I). Part 3 was used to talk over a story about a security incident told by a participant in the group. Other participants were invited to use a prompting question or a value card from Part One or Part Two within the discussion.

4https://forum.handbrake.fr/viewtopic.php?f=33&t=36364
5https://panic.com/blog/stolen-source-code/

In running the session, the materials and instructions worked well; however, there was not enough time in the schedule to permit a full discussion in Part 3. In spite of this, written feedback provided to conference organisers after the event was overwhelmingly positive. 17 attendees provided conference organisers with written feedback after the session.
Ranked on a scale with five indicating an Excellent session, all respondents rated the session either a 4 or a 5. Likewise, feedback indicated that the respondents felt that they learned a lot and that the session was well led, a point we took as affirmation that the design of the workshop was effective. Here are some positive comments from the feedback:

*Great team discussion!*

*I liked the way the two parts were from different perspectives.*

*A refreshingly different look at security issues in software.*

*Really enjoyable, thought provoking and participatory! Thank you!*

The feedback also included critical comments. Several attendees noted that the session was too short to allow for comprehensive group discussion. One person noted that how the value cards were to be used was not clear. We used these points to refine the second and third events.

B. Autumn 2018: Field Site

With this workshop, the primary aim was to observe how the participants from one project field site interact with one another when talking about security. Prior observations of group work had taken place during everyday practice in a large open-plan office. Though a few instances of practice had been observed that included security elements, this was the first time developers at this company were observed openly conversing about security. This session was also used to clarify findings generated in earlier visits, and to elicit further comments about the attendees' experiences and attitudes. The workshop was attended by six developers, one tester who had been interviewed, and one developer new to the company. The attendees worked in groups of four. The second set of values was used (see Table II). In Part 3, a group discussion was facilitated. The questions for Part 3 are given below:

1) Is talking about security incidents in this way helpful?
- What did you like?
- What didn't make sense?
- Are there other approaches you have used that were helpful?
- How were they similar to or different from this exercise?
2) Does a consideration of stakeholders and impact come into software development at this company? How often? What prompts it?
3) How about a consideration of values in the context of security? Does it come up? When?
4) Can you think of recent examples from your experience at this company where similar kinds of talk have taken place?
- If so, where was that? In what context, e.g. a meeting or in the kitchen?
5) Could an approach like this be used at this company? If so, where and how?
6) Who is responsible for security in code?
- What role do/can developers play in this?

As in the first session, attendees at this field site were engaged and focused in the activities for the workshop. Feedback gathered in the third part has not been fully analyzed; however,

### TABLE II

<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Self-direction</td>
<td>Freedom to cultivate one’s own ideas and abilities. Freedom to determine one’s own actions.</td>
</tr>
<tr>
<td>Safety</td>
<td>Safety in one’s immediate environment. Safety and stability in the wider society.</td>
</tr>
<tr>
<td>Conformity</td>
<td>Compliance with rules, laws and formal obligations. Avoidance of upsetting or harming other people.</td>
</tr>
<tr>
<td>Benevolence</td>
<td>Being a reliable and trustworthy member of the group. Devotion to the welfare of group members.</td>
</tr>
<tr>
<td>Universalism</td>
<td>Commitment to equality, justice and protection for all. Acceptance and understanding of differences in people.</td>
</tr>
<tr>
<td>Power</td>
<td>Exercising control over people.
Control of material and social resources.</td>
</tr>
<tr>
<td>Stimulation</td>
<td>Excitement, novelty and change.</td>
</tr>
<tr>
<td>Achievement</td>
<td>Success according to social standards.</td>
</tr>
<tr>
<td>Tradition</td>
<td>Maintaining and preserving cultural traditions.</td>
</tr>
<tr>
<td>Humility</td>
<td>Recognising one’s insignificance in the larger scheme.</td>
</tr>
<tr>
<td>Hedonism</td>
<td>Pleasure and sensuous gratification.</td>
</tr>
<tr>
<td>Face</td>
<td>Maintaining one’s public image and avoiding humiliation.</td>
</tr>
</tbody>
</table>

the response to the first question, *Is talking about security incidents in this way helpful?*, was a pronounced "Yes."

C. Autumn 2018: Seminar Form

For this invited 60-minute talk, the workshop was presented to a room in which participants sat in groups of two or three at small tables. Approximately 30 people attended. In this environment, the second set of values was used (see Table II). Each table was given a random selection of three or four value cards from the set of 12. Drawing on elements of the case-based [26] and peer instruction [27] teaching methods, the accounts of the software compromise were presented to the room as a case. After presenting the incident, attendees were asked to give a show of hands for these questions:

- Is this incident unusual?
- Is it likely to happen again?

Following this, attendees were asked to discuss with each other who might be affected by a compromise of this type. Impressions were shared afterward around the room. After presenting the developer-centred version of the incident, attendees were asked to discuss with their partners how the values they found on their table figured into the case. After a period of five minutes, attendees were asked to volunteer information for the following questions:

- Which of the values from the table was the most important? For whom?
- When did this value come to the fore? Before the incident, after, or long after?
For the third part of the discussion, the session leaders facilitated a brief discussion around the room on the following two points:

- Security should be handled through tooling.
- Developers should be thinking about security all the time.

VI. LESSONS LEARNED

It has been suggested that developers need to be taught new attitudes toward security that encompass not only technical knowledge and security analysis, but also skills in communicating about security issues beyond engineering teams. What is needed are “engaging” interventions that will appeal to programmers [28]. Experiences to date with this workshop suggest that a positive approach that connects security to developer experiences is useful in engaging professionals in discussion, but there are some areas that can be improved.

A. Developers engage with personal stories.

Talking around two perspectives on a single security incident is an effective way to strike a balance in discussion between technical and security detail on the one hand, and personal values on the other. The “Panic” story is close enough to the developers’ own experience to engage them, but not so close as to inhibit participation. The workshop has been using reports printed in full from the internet. While this worked in the first two sessions, we have now gathered enough information about how developers engage with these sources to refine the stories into shorter cases.

B. Interaction through play is effective.

Participants enjoy the physical aspects of the game, including setting the timers and working with cards. Though the sets of prompting cards have been effective in the workshops, informal feedback suggests that there are too many questions for each section. Indeed, asking only two or three questions per section worked well in the seminar. Going forward, the sets of questions for each part will be refined and streamlined.

C. Values support conversation.
Informal feedback suggests that developers like taking a positive, value-oriented approach toward security. Two sets of values drawn from different fields of research have been trialled. The workshops in which the individual sets were used were similar in character; attendees did not appear to have difficulty in talking about the values in either set. However, it is not clear exactly what role the values play in the process, particularly in relation to the incident that was used. A formal evaluation of the workshop is needed to clarify these points.

D. Public incidents facilitate information trading.

Informal observation suggests that different kinds of information are traded among developers in the midst of these conversations. Developers were observed to expand on the technical information included in the reports, providing additional scenarios and examples from personal experience, but also drawing in terminology associated with the security mindset, including threats, attacks and technology-specific security facts. A fuller analysis of the workshop data gathered in the field site to catalog information trading is underway.

VII. CONCLUSIONS

The question of why professionals don’t adopt secure practices and technologies as a matter of course remains open, and is part of continuing investigations in the Motivating Jenny project. However, with this workshop, the project has identified ways to support professionals in talking about security. The workshop developed here includes a set of working materials that can be employed with practitioners in a variety of settings. It uses narrative and storytelling to connect developers with security incidents, and to encourage talk about security implications and impacts. These activities have a place in professional settings. In structuring activities around an incident taken from the public sphere, professionals are shown techniques for critical engagement with sources that are known to influence security awareness.
The non-confrontational space for talk about security stands to positively affect security problem solving, confidence and knowledge. Looking forward, the materials will continue to be used and developed within the project to support interactions with developers in field sites and community engagements. Several additional research and practical aims have been identified:

- Investigate in more detail what values bring to talk within security discussions. The set depicted in Table II will be explored in more detail, as its values more closely reflect qualities related to software developer characteristics and motivation.
- Conduct a formal evaluation of the materials. This will begin with a structured examination of feedback given from the first engagement, and an analysis of workshop data gathered as part of the field study.
- Develop the materials into a set of cards for production and dissemination.

ACKNOWLEDGMENT

We thank the professional developers who participated in our workshops. The work was supported by the National Cyber Security Centre (NCSC). Nuseibeh thanks SFI, EPSRC and ERC for financial support.

REFERENCES
{"Source-Url": "http://oro.open.ac.uk/59840/1/PID5831073-CRC.pdf", "len_cl100k_base": 5159, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 21990, "total-output-tokens": 7506, "length": "2e12", "weborganizer": {"__label__adult": 0.0005736351013183594, "__label__art_design": 0.0004727840423583984, "__label__crime_law": 0.00200653076171875, "__label__education_jobs": 0.00757598876953125, "__label__entertainment": 7.033348083496094e-05, "__label__fashion_beauty": 0.00020253658294677737, "__label__finance_business": 0.0005297660827636719, "__label__food_dining": 0.0003659725189208984, "__label__games": 0.0007810592651367188, "__label__hardware": 0.0005450248718261719, "__label__health": 0.0007104873657226562, "__label__history": 0.00020933151245117188, "__label__home_hobbies": 0.00012695789337158203, "__label__industrial": 0.00036978721618652344, "__label__literature": 0.00041103363037109375, "__label__politics": 0.0005116462707519531, "__label__religion": 0.00040435791015625, "__label__science_tech": 0.0113983154296875, "__label__social_life": 0.0003180503845214844, "__label__software": 0.007518768310546875, "__label__software_dev": 0.9638671875, "__label__sports_fitness": 0.0003647804260253906, "__label__transportation": 0.0004954338073730469, "__label__travel": 0.00021159648895263672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31866, 0.02204]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31866, 0.59311]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31866, 0.94158]], "google_gemma-3-12b-it_contains_pii": [[0, 660, false], [660, 6094, null], [6094, 11304, null], [11304, 15432, null], [15432, 16282, null], [16282, 19931, null], [19931, 25630, null], [25630, 31866, null]], "google_gemma-3-12b-it_is_public_document": [[0, 660, true], [660, 6094, null], [6094, 11304, null], 
[11304, 15432, null], [15432, 16282, null], [16282, 19931, null], [19931, 25630, null], [25630, 31866, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31866, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31866, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31866, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31866, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31866, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31866, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31866, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31866, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31866, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31866, null]], "pdf_page_numbers": [[0, 660, 1], [660, 6094, 2], [6094, 11304, 3], [11304, 15432, 4], [15432, 16282, 5], [16282, 19931, 6], [19931, 25630, 7], [25630, 31866, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31866, 0.0791]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
901f948dafc95b5a29d594b1c16c2da05f2aaeb4
BEHAVIOUR-BASED CONTROL FRAMEWORK FOR AN AUTONOMOUS MOBILE ROBOT

Albert L. Schoute
Dept. of Computer Science, P.O. Box 217, 7500 AE Enschede, The Netherlands
email: a.l.schoute@cs.utwente.nl

Abstract

This paper presents the concept of an object-oriented software framework that provides a scheduling and execution environment for behaviour-based control of an autonomously navigating mobile robot. The framework is built around a basic class for ‘behaviours’ and for associated ‘situations’ that handle exceptional conditions. New control methods and navigation strategies are easily incorporated and tested by extending this framework.

1 Introduction

Planning and control functions within intelligent, embedded systems are generally divided over multiple software layers. The lower layer(s) of the software hierarchy typically contain(s) the basic functions that directly control the hardware, i.e. the sensors and actuators. The top layer(s) handle(s) long-term planning and usually a high-level command interface. In between, one can identify one or more intermediate layers that are responsible for fulfilling the current goals or tasks. In this paper we report on a general concept for the task-scheduling framework at this intermediate level.

In the context of an autonomous mobile robot, the tasks to be done are sequences of behaviours that the robot has to perform in order to reach its goal. Behaviours are primitive actions such as moving through a hallway, entering a doorway, following a moving object, approaching some landmark, or just moving to some position. Behaviours are scheduled and executed like tasks in an operating system. In principle they run to completion, but – in reaction to circumstantial conditions – they may be preempted or suspended. Exceptional situations may be associated with behaviours to catch unexpected conditions in parallel to the normal behaviour execution. A situation-monitoring process is introduced as a detection and exception-handling mechanism.
The framework that we describe is part of a control system for an experimental robot vehicle, named Marvin [Koetsier 1997]. Marvin is a low-budget, PC-based vehicle that contains all essential elements to serve as a test-bed for autonomous system development. The vehicle can drive physically unconnected by means of its own battery power supply, two independently driven wheels, ultrasonic distance sensors and a high-resolution CCD-camera with PCI-bus frame-grabber. A wireless LAN connection enables remote monitoring and control. The control software runs on the Linux operating system. It has an object-oriented structure (written in Gnu-C++) and uses multithreading.

The objective of the test-bed is to explore techniques by which robot vehicles can navigate in office environments using natural properties of the building. Although we allow the control program to exploit pre-knowledge of static building features, behaviours have to cope with the actual situation in a robust, reactive manner. The behaviour-based control framework creates an ideal context for experimentation: it is easy to program and test new behaviours. Object orientation takes care of encapsulating the basic properties of the behaviours and of the associated exceptional situations that must be traced. The control framework automatically provides the context in which these behaviours and situations are scheduled and executed. For example, position information in absolute or behaviour-relative coordinate systems is maintained and always accessible. Remote monitoring and interaction are also supported in a general way.

2 The software control system

The global structure of Marvin’s control software is shown in Figure 1. Drivers for the ultrasonic sensors, the servo motors and the frame-grabber are written in C as loadable kernel modules. These drivers implement a file-oriented device interface that can be accessed by standard system calls (like open, read, write, control, close).
The Linux kernel supports multithreading by creating Posix-compliant threads (pthreads) as kernel processes [Beck 1997]. Separate threads have been introduced for behaviour execution, situation monitoring, user interaction, planning, remote connection handling and image capturing. In view of the complex way in which threads interact, the two top layers rely heavily on the object-oriented approach. Object classes for behaviours and situations play a central role in the control framework. The actual control of the robot depends on the currently active “behaviour” and “situation” objects. Object orientation facilitates a unified treatment of these objects by means of a common base class, whereas any specific control is detailed in a particular “derived” class. Common but dissimilar functions are implemented by virtual functions.

The control program contains many shared components accessed by multiple threads. Classes help to structure these components by aggregating common data and functions and providing clearly defined interfaces. Shared interfaces in the control software are, for example, data stores containing recent sensor values, the robot status, the list of scheduled behaviours and the user control panel. The pthread package supplies synchronisation functions for exclusive access to class objects according to the monitor concept [Silberschatz 2000].

3 Behaviour-based control

The base class `BEHAVIOUR` defines the basic, common properties of behaviours. All implemented behaviours have a “control function” in common. The function `control` of the current behaviour will at any instant determine the momentary behaviour of the robot. Behaviours are instantiated dynamically, which may happen in all sorts of circumstances depending on the application. The instantiation may be part of a pre-planned series of actions, but may equally be the result of remote intervention, sensor-based reactions or other unforeseen situations.
New instances of behaviours can be entered for scheduling by the current behaviour or by any situation handler. A switch of behaviour may occur either by termination or by interruption.

3.1 Behaviour scheduling

The execution of behaviours, queued on a global behaviour list, is delegated to a separate execution-handler thread. New behaviours, for example entered by the planner, are typically placed at the tail of the list. A behaviour will in general run to completion before the next behaviour is activated. The occurrence of special situations could, however, disturb this normal FIFO order. Behaviours can pass through a number of states as shown in Figure 2. The scheduling is organized such that the head of the behaviour list is always taken as the next behaviour. The state of a behaviour on the list can be either INITIAL (not been in action before) or RESUME (interrupted but still to be completed). A behaviour that is currently active has state EXECUTE. It can be set to state DESTROY for definite removal or to state SUSPEND for temporary pre-emption. In the latter case, other behaviours may have been instantiated and inserted in the list before the pre-empted behaviour. In this way the scheduling works as an exception mechanism: other behaviours can cope with the situation before the interrupted behaviour resumes. It allows for stack-wise (LIFO) scheduling in exceptional circumstances.

In practice, this works in a natural and effective way. A dangerous situation may lead to a temporary interruption by the IDLE behaviour, such that the robot does not move as long as the situation persists. Likewise, any behaviour can temporarily be overtaken by the REMOTE_CONTROL behaviour. The IDLE behaviour is also added and executed in case of an empty behaviour list; for example, at system start-up this is the first behaviour that becomes active.
Figure 2 State transitions of behaviours in the behaviour list

The execution-handler thread performs the execution of the current behaviour in a cyclic, periodically timed loop. It calls the function `control` of the current behaviour and pauses for some fixed time interval. The function `control` is declared in the base class `BEHAVIOUR` as a virtual function, which means that the call is bound at execution time to the specific implementation of this function within the particular “derived” behaviour class. An important aspect of the general functioning of behaviours concerns motion control and position tracing, which is treated in a later section.

3.2 Situation handling

Besides behaviour execution, a general mechanism is added for the detection of exceptional situations not covered by the behaviours themselves. An independent situation-handler thread runs concurrently with the execution thread. The separate treatment of unpredictable circumstances, as opposed to the normal, expected behaviour, contributes greatly to the flexible and robust operation of the autonomous system. Behaviours are freed from anticipating, at any instant, all kinds of special situations. The same situations may occur during many behaviours and are therefore best handled in a separate and independent fashion. Similar to the control function of behaviours, a virtual function `do_situation` for situation handling is declared in a base class `SITUATION`. If a situation instance is active, the function is called periodically to detect some special condition and react to it. Any behaviour has a list of associated situations. These objects of derived situation classes are typically created at behaviour instantiation. It is the task of the application environment (for example the planner) to associate a behaviour with the appropriate situation objects.
For safety reasons, certain important situations are always associated with behaviours, like the SAFETY_CHECK situation (checking for collisions) and the REMOTE_CONTROL situation (checking for remote intervention). In case of a transient risky situation (some person is passing by), the current behaviour is pre-empted by the IDLE behaviour and is automatically resumed later on. This behaviour switching appears as a natural reaction of the robot. Optional situations are, for instance, the OUTSIDE_RANGE situation (to cancel a behaviour that carries the robot outside some radius) or the LANDMARK situation (to detect some environment feature). The situation-handler thread scans the situation list of the current behaviour and calls the function `do_situation`. Situation scanning can also be used for independent monitoring or tracing of certain conditions and variables. The situation function may contain output statements that display state information on the control panel or log data for later examination.

4 Motion control and position tracing

Motion is of course a dominant factor in mobile robot control. Most of the behaviours will be related to motion manoeuvres and sensor-based navigation strategies. In fact, a great advantage of the framework is that it offers an environment for experimentation in which alternative motion behaviours can easily be tested and compared. Common aspects of motion behaviours, like speed control and position localization, are already supported by the system and made available in the base class. Any specific behaviour inherits the basic motion control properties; only the particular elements have to be added within a derived class declaration. Motion control and position information are handled by a hierarchy of layers as depicted in Figure 3.
The kernel module for the motor device performs the direct I/O to actuate the wheel motors and to read the tachometers that measure the rotation speed of the wheels. The wheel configuration of Marvin constitutes a differential-drive mechanism [Dudek 2000]. The two independently driven motors of the back wheels are regulated by feedback control to satisfy the linear and angular speeds as required by the upper layers. The control loop is activated every 10 milliseconds by a system timer. Odometry is used to estimate the robot’s position: according to the kinematics, relative displacements are calculated from the observed rotation speeds of the wheels.

Figure 3 Software components involved with motion control and position tracing

The “motion class” object contains motion state variables (position, speeds, acceleration) and functions to access the motor device by `read`, `write` and `ioctl` system calls. Calling the function `refresh_motion_state` keeps the state variables up to date; it performs a read system call to obtain the most recent values from the motor device driver. By means of a function `set_speed`, the desired linear and angular speeds are written to the motor device driver as setpoint references. Furthermore, the global vehicle position maintained by the motor driver may be reset or corrected by the functions `set_position` or `set_diff_position`. Correction of the global position state is necessary because the accumulation of errors makes the absolute position estimate inaccurate over longer distances. It is the aim of experiments with sensor-based navigation to observe natural building features and use these for absolute position localization.

The robot’s pose is represented by a coordinate frame (a class object of type `FRAME`). The pose consists of the x, y position and the orientation (i.e., the heading of the robot).
In fact, coordinate frames are relative notions: the robot’s pose is given by the actual placement of its “body frame” relative to some reference frame in the ground plane. The class `FRAME` provides functions for frame transformations such as rotate and translate. Motion behaviours generally concern relative manoeuvres with respect to some starting position. The base class `BEHAVIOUR` contains a local frame `centre` that is defined at the behaviour’s instantiation. During behaviour execution, the current pose relative to this local frame is maintained by calling a function `refresh_local_pose` before the control function of the current behaviour is executed. The current pose is calculated by applying an inverse frame transform to the robot’s pose with respect to the local centre. In this way motion behaviours can be programmed easily, in a uniform manner, without bothering about the actual positioning at the moment of application. The only concern of the behaviour’s control function is to adapt the linear and angular speed parameters according to the motion state in the local frame and, possibly, according to available sensor data or interactively defined global variables. Some simple examples of particular control functions are shown in Figure 4.
```cpp
void move_bhv::control(float *lin_speed, float *ang_speed)
{
    /* follow reference motion if enough free space around */
    *lin_speed = *ang_speed = 0;   /* default: don't move */
    if ((lin_ref_speed > 0 && sensors->free_range(FRONT) > 0.2) ||
        (lin_ref_speed < 0 && sensors->free_range(BACK) > 0.2))
        *lin_speed = lin_ref_speed;
    if (sensors->free_range(LEFT) > 0.2 && sensors->free_range(RIGHT) > 0.2)
        *ang_speed = ang_ref_speed;
}

void remote_control_bhv::control(float *lin_speed, float *ang_speed)
{
    if (!marvin_remote) {          /* check termination */
        state = STAT_DESTROY;
        return;
    }
    *lin_speed = remote_lin_speed;
    *ang_speed = remote_ang_speed;
}

void turn_bhv::control(float *lin_speed, float *ang_speed)
{
    /* turn until current orientation becomes zero */
    float abs_diff = fabs(curr_pose.phi);
    bool sign = (curr_pose.phi > 0.0);
    if (abs_diff < 0.02) {         /* almost reached */
        state = STAT_DESTROY;
        return;
    }
    if (abs_diff > 1.0)
        *ang_speed = (sign ? -0.6 : 0.6);
    else if (abs_diff < 0.5)
        *ang_speed = curr_pose.phi * -0.6;
    else
        *ang_speed = (sign ? -0.3 : 0.3);
    *lin_speed = lin_ref_speed;
}
```

Figure 4 Control functions of some derived behaviour classes

5 Remote user interaction

During operation of the robot control program, actual state information is regularly written to the standard terminal output. By means of cursor control this output is presented at fixed places on the screen according to a pre-defined control panel layout (see Figure 5). Items displayed include the current behaviour, its centre frame, the actual robot position and speeds, active situations and, optionally, a trace of the motor control variables, ultrasonic sensor readings or a primitive map of the traversed path. User interaction is possible due to a user-input-handler thread that reads the terminal input.
By means of a simple tree-based menu selection mechanism, the user is able to start and stop single behaviours, inspect the actual behaviour list and do many other things (putting the motors on/off, resetting the global position, starting image acquisition, etc.). The user can also invoke existing script files that contain series of behaviours with associated situations. Such script files are interpreted by a planner thread. The planner will instantiate and initialise the corresponding behaviour and situation objects.

Normally, the Marvin robot drives around without a keyboard or monitor connected to the onboard computer. The only channel of communication with the Linux operating system is a wireless TCP/IP connection provided by a Wavelan card (Lucent). The connection uses radio transmission with communication speeds up to 2 Mb/s over an indoor range of up to 100 metres. By establishing a Telnet session, the robot control program can be started and controlled remotely.

Figure 5 Remote session with Marvin's control program

A more advanced, graphical user interface for monitoring and control, shown in Figure 6, has been developed in connection with vision-based navigation experiments. A client program (for Windows) interacts via a TCP/IP socket connection (also using the wireless network) with a server thread in Marvin's control system. It can display camera images (at a rate of 4 images/sec) and other sensor data. It can also start and stop behaviours, for example a recognition behaviour to detect and follow a moving pattern. It may even overrule the vehicle motion and steer Marvin remotely by activating the REMOTE_CONTROL behaviour.

Figure 6 Graphical interface of the remote monitor program

6 Experiments

Experiments have been carried out with respect to motion control strategies for sensor-based navigation. Besides the ultrasonic distance sensors, computer vision is used to enhance the ability of position sensing.
Additional threads and classes have been introduced to maintain a real-time “store” of camera images that can be accessed (read-only) at any time by one or more behaviours or threads (like the server thread fulfilling remote image requests). Images in the store are claimed during processing and have to be freed explicitly to allow buffer reuse. Examples of vision-related behaviours are the “moving pattern tracking” behaviour and a “free space search” behaviour. The latter detects the visual edges of the ground floor and drives according to a free-space map derived from them (see Figure 7). The behaviour of “driving through a hallway” has been explored with different sensor approaches. An observer-based controller for position tracking has been employed successfully using ultrasonic wall-distance measurements only [Siers 2000]. Position estimation based on Kalman filtering of multiple sensor data has been investigated by [Klein 2000]. It exploits the fact that the position of the “vanishing point” of the hallway in the camera image reveals the heading direction of the robot [Zoghbi 2000].

Figure 7 Recognition of the floor edges (as shown by the white pixels) with its corresponding free-space map

7 Conclusion

The presence of the general framework has considerably reduced the effort required for experimentation. Both the multithreading facility of Linux and the object orientation keep the development of the robot control program manageable. New components can be implemented independently, provided that they conform to the existing interfaces. To introduce a new behaviour, the control program needs to be extended only at clearly isolated places: mainly, a new (derived) behaviour class has to be programmed with its own control function. New situation handlers are fitted easily into the existing scheme. Components of the control system built earlier are in general easily reusable and/or adaptable for new experiments and application contexts.
Behaviours with associated situations can simply be tested interactively by using the existing control panel interface, operated via a remote Telnet session. During operation, state information is permanently logged. Extra logging of variables for testing purposes can be added to a behaviour's control function or to situation handlers. The capabilities of the autonomous robot have been extended over the years without the need for a major revision of the basic framework. The behaviour-based control mechanism has proven to be very versatile in combination with the client-server approach for remote monitoring and control, introduced at a later stage. The use of shared resources (like the image store) by multiple threads requires careful synchronisation. However, once the appropriate access functions to shared class objects are written correctly (using the exclusion and condition synchronisation provided by the monitor concept), the pitfalls of concurrency are hidden and do not burden the application context.

References
{"Source-Url": "https://ris.utwente.nl/ws/portalfiles/portal/71643796/WesicMarvinPaper.pdf", "len_cl100k_base": 4161, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 9525, "total-output-tokens": 4899, "length": "2e12", "weborganizer": {"__label__adult": 0.00066375732421875, "__label__art_design": 0.0005631446838378906, "__label__crime_law": 0.0008959770202636719, "__label__education_jobs": 0.0007767677307128906, "__label__entertainment": 0.0001125335693359375, "__label__fashion_beauty": 0.0002799034118652344, "__label__finance_business": 0.0002386569976806641, "__label__food_dining": 0.0006852149963378906, "__label__games": 0.0014104843139648438, "__label__hardware": 0.00879669189453125, "__label__health": 0.0009393692016601562, "__label__history": 0.00052642822265625, "__label__home_hobbies": 0.0004119873046875, "__label__industrial": 0.0016145706176757812, "__label__literature": 0.0003483295440673828, "__label__politics": 0.0004107952117919922, "__label__religion": 0.000675201416015625, "__label__science_tech": 0.177490234375, "__label__social_life": 0.00012481212615966797, "__label__software": 0.00966644287109375, "__label__software_dev": 0.7861328125, "__label__sports_fitness": 0.0007801055908203125, "__label__transportation": 0.006305694580078125, "__label__travel": 0.00040602684020996094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22551, 0.03389]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22551, 0.67082]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22551, 0.88962]], "google_gemma-3-12b-it_contains_pii": [[0, 2436, false], [2436, 4778, null], [4778, 7875, null], [7875, 11570, null], [11570, 14575, null], [14575, 17335, null], [17335, 18283, null], [18283, 20844, null], [20844, 22551, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2436, true], [2436, 
4778, null], [4778, 7875, null], [7875, 11570, null], [11570, 14575, null], [14575, 17335, null], [17335, 18283, null], [18283, 20844, null], [20844, 22551, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22551, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22551, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22551, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22551, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22551, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22551, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22551, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22551, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22551, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22551, null]], "pdf_page_numbers": [[0, 2436, 1], [2436, 4778, 2], [4778, 7875, 3], [7875, 11570, 4], [11570, 14575, 5], [14575, 17335, 6], [17335, 18283, 7], [18283, 20844, 8], [20844, 22551, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22551, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
e914a6bbc61327da403c57a30ca1b49b0c4347e1
[REMOVED]
{"Source-Url": "https://sites.ualberta.ca/~smartynk/Resources/CMPUT%20379/beck%20notes/memory.pdf", "len_cl100k_base": 5663, "olmocr-version": "0.1.53", "pdf-total-pages": 55, "total-fallback-pages": 0, "total-input-tokens": 87823, "total-output-tokens": 7782, "length": "2e12", "weborganizer": {"__label__adult": 0.00030732154846191406, "__label__art_design": 0.00034308433532714844, "__label__crime_law": 0.0003185272216796875, "__label__education_jobs": 0.0005249977111816406, "__label__entertainment": 7.206201553344727e-05, "__label__fashion_beauty": 0.00015282630920410156, "__label__finance_business": 0.00027179718017578125, "__label__food_dining": 0.0003247261047363281, "__label__games": 0.0007958412170410156, "__label__hardware": 0.0109100341796875, "__label__health": 0.000377655029296875, "__label__history": 0.0002837181091308594, "__label__home_hobbies": 0.00017833709716796875, "__label__industrial": 0.0007810592651367188, "__label__literature": 0.0001875162124633789, "__label__politics": 0.00019419193267822263, "__label__religion": 0.0004243850708007813, "__label__science_tech": 0.08331298828125, "__label__social_life": 4.89354133605957e-05, "__label__software": 0.0172271728515625, "__label__software_dev": 0.8818359375, "__label__sports_fitness": 0.00029730796813964844, "__label__transportation": 0.0006055831909179688, "__label__travel": 0.0001885890960693359}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23751, 0.02641]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23751, 0.73889]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23751, 0.87249]], "google_gemma-3-12b-it_contains_pii": [[0, 41, false], [41, 514, null], [514, 991, null], [991, 1410, null], [1410, 1806, null], [1806, 2240, null], [2240, 2900, null], [2900, 3286, null], [3286, 3742, null], [3742, 4254, null], [4254, 4690, null], [4690, 4976, null], [4976, 5402, 
null], [5402, 5979, null], [5979, 6315, null], [6315, 6731, null], [6731, 7554, null], [7554, 8109, null], [8109, 8327, null], [8327, 8775, null], [8775, 9180, null], [9180, 9596, null], [9596, 10041, null], [10041, 10237, null], [10237, 10583, null], [10583, 11196, null], [11196, 11272, null], [11272, 11651, null], [11651, 12217, null], [12217, 12835, null], [12835, 13059, null], [13059, 13164, null], [13164, 13556, null], [13556, 13789, null], [13789, 14268, null], [14268, 14555, null], [14555, 15080, null], [15080, 15415, null], [15415, 15683, null], [15683, 15864, null], [15864, 16596, null], [16596, 16802, null], [16802, 17387, null], [17387, 17834, null], [17834, 18519, null], [18519, 19369, null], [19369, 20116, null], [20116, 20282, null], [20282, 20828, null], [20828, 20986, null], [20986, 21515, null], [21515, 22116, null], [22116, 22655, null], [22655, 23169, null], [23169, 23751, null]], "google_gemma-3-12b-it_is_public_document": [[0, 41, true], [41, 514, null], [514, 991, null], [991, 1410, null], [1410, 1806, null], [1806, 2240, null], [2240, 2900, null], [2900, 3286, null], [3286, 3742, null], [3742, 4254, null], [4254, 4690, null], [4690, 4976, null], [4976, 5402, null], [5402, 5979, null], [5979, 6315, null], [6315, 6731, null], [6731, 7554, null], [7554, 8109, null], [8109, 8327, null], [8327, 8775, null], [8775, 9180, null], [9180, 9596, null], [9596, 10041, null], [10041, 10237, null], [10237, 10583, null], [10583, 11196, null], [11196, 11272, null], [11272, 11651, null], [11651, 12217, null], [12217, 12835, null], [12835, 13059, null], [13059, 13164, null], [13164, 13556, null], [13556, 13789, null], [13789, 14268, null], [14268, 14555, null], [14555, 15080, null], [15080, 15415, null], [15415, 15683, null], [15683, 15864, null], [15864, 16596, null], [16596, 16802, null], [16802, 17387, null], [17387, 17834, null], [17834, 18519, null], [18519, 19369, null], [19369, 20116, null], [20116, 20282, null], [20282, 20828, null], [20828, 20986, 
null], [20986, 21515, null], [21515, 22116, null], [22116, 22655, null], [22655, 23169, null], [23169, 23751, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23751, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23751, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23751, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23751, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23751, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23751, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23751, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23751, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23751, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 23751, null]], "pdf_page_numbers": [[0, 41, 1], [41, 514, 2], [514, 991, 3], [991, 1410, 4], [1410, 1806, 5], [1806, 2240, 6], [2240, 2900, 7], [2900, 3286, 8], [3286, 3742, 9], [3742, 4254, 10], [4254, 4690, 11], [4690, 4976, 12], [4976, 5402, 13], [5402, 5979, 14], [5979, 6315, 15], [6315, 6731, 16], [6731, 7554, 17], [7554, 8109, 18], [8109, 8327, 19], [8327, 8775, 20], [8775, 9180, 21], [9180, 9596, 22], [9596, 10041, 23], [10041, 10237, 24], [10237, 10583, 25], [10583, 11196, 26], [11196, 11272, 27], [11272, 11651, 28], [11651, 12217, 29], [12217, 12835, 30], [12835, 13059, 31], [13059, 13164, 32], [13164, 13556, 33], [13556, 13789, 34], [13789, 14268, 35], [14268, 14555, 36], [14555, 15080, 37], [15080, 15415, 38], [15415, 15683, 39], [15683, 15864, 40], [15864, 16596, 41], [16596, 16802, 42], [16802, 17387, 43], [17387, 17834, 44], [17834, 18519, 45], [18519, 19369, 46], [19369, 20116, 47], [20116, 20282, 48], [20282, 20828, 49], [20828, 20986, 50], [20986, 21515, 51], [21515, 
22116, 52], [22116, 22655, 53], [22655, 23169, 54], [23169, 23751, 55]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23751, 0.06118]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
dde12cbe0786a70d78c29f31c14b09658685b82b
Today’s customers expect you to be relevant to their life. Relevancy is not generic; it is specific to the customer in the moment. To remain relevant, enterprises must expand and engage with the customer and with third parties (developers) to support the customer. It is time for innovative solutions (see Figure 1). Application programming interface (API) management delivers the business centricity and business model that many service-oriented architecture (SOA) initiatives historically have lacked, and SOA delivers the experience and engineering discipline that drives good API design and provides robust integration to systems of record. API management and classical SOA governance are highly synergistic. API management refocuses on the business aspects of human and software interactions. The advent of APIs allows you to separate the business concerns of making an API a successful product from the IT concerns of providing the service that implements the API. The journey from IT-centric web services to business-centric API management is necessary for enterprises that build systems of interaction spanning beyond their enterprise walls. API management solutions can define an API and project it into an ecosystem that the enterprise cannot effectively reach through its own solutions. Did you know? Mobile users are intrinsically impatient and always on the move; in fact, the average mobile user spends around 60 seconds in a mobile app before moving to something else. This means that mobile interactions must be personal to be relevant. They must be in the here and now. The modern user expects you to know who they are and what they need now. These characteristics represent an opportunity for enterprises that understand how to embrace them and provide a differentiating customer experience. 
Business value What is called the *Nexus of Forces* ([http://www.gartner.com/technology/research/nexus-of-forces/](http://www.gartner.com/technology/research/nexus-of-forces/)) by Gartner is the confluence of mobile, social, cloud, and big data analytics. The Nexus of Forces implies an experience where context is key and important interactions might not be directly related to business transactions but rather focused on building social relationships and ecosystems. Social interactions of interest to the business can and will happen between two third parties (such as viral content). Many interactions are mobile and happen in a real-time context. You can access information and accounts, or contact a friend, dynamically. The growth of social media has created consumers who expect their opinions to matter and who choose the mobile device as their primary way of interacting with the world. These characteristics change the scope of what is considered business relevant. For businesses that are ready to grow beyond the enterprise, the Nexus of Forces raises some key questions and challenges around defining a business model and engineering the business solutions that support it: - Where do transactions happen? Anywhere and anytime; this is the essence of mobile and cloud. - Who can influence your business? Anyone that publishes an opinion, positive or negative; this is the essence of social. - Who can access your information? Anyone that you can legally provide it to, whether in exchange for money, influence, or improved relationships. Applying big data analytics to vast amounts of available information sources lets you provide unique value and insight (particularly in the context of the Internet of Things). - What is an application? Any piece of software that provides value (including mobile apps, software that is embedded in appliances or cars, cloud services, and so on). - Who is your developer? Anyone that builds business solutions by using your information or services. 
In an open ecosystem, it is often someone who is not employed within your own enterprise. More engaging applications and processes intelligently use the context of a business interaction to optimize the experience. Context is crucial to optimizing the offers that are made to a particular customer, and context is a necessity for making that customer feel like they are being treated as a person rather than just a general business opportunity (see Figure 2). Human society is accustomed to people taking into account everything they know about us as a factor in how they choose to interact. Context enables more engaging applications and processes. Figure 2. Engaging applications and processes You generally do not interact in the same way with a person who is bald by style, a person who is a soldier, and a person who is a survivor. Just knowing their physical appearance is not sufficient; you must know who they really are to make the optimal choice of how to interact. In the world of software and processes, the Nexus of Forces changes the way that your business operates by allowing you to apply the same type of contextual reasoning that you apply in the physical world. Solution overview To be an engaging enterprise requires embracing an open ecosystem, using sound SOA principles while delivering APIs as part of a business product, carefully promoting and managing your external business persona, and creating a fundamental topology that aids in understanding the different roles and purposes of integration middleware products. The separation of capabilities (into messaging, integration, and gateway) forms that fundamental topology. Classical SOA middleware focuses on creating and managing software services. The other three middleware ingredients of the API and service economy (Figure 3) are centered around the following concepts: - Designing and optimizing a business persona through the definition and management of easy-to-consume APIs. 
- Providing developer portals and participating marketplaces to make potential consumers aware of your APIs and the support of onboarding and self-service in a controlled fashion. - Making the consumption of APIs as easy as possible, including supporting uniform hybrid composition across a multitude of providers, environments, and technologies. Figure 3. API and service economy Many of these capabilities are known in isolation but must be integrated in new ways. Other elements require fundamental innovation and a different approach to delivery. IBM® has a recipe for more engaging and innovative business processes (see Figure 4). To cover the four ingredients in the recipe, consider them from a retail perspective: - Detect: A customer is detected walking down the street close to one of your stores. - Enrich: Deepen the understanding of the situation through knowledge about that customer's previous buying behavior. - Perceive: This particular customer tweeted last night about going on a beach vacation soon. - Act: Send an SMS with a promotion on swim wear. In only a few seconds, you created a unique and personalized experience. This more personal experience for the customer improved the chance of generating business, and might have strengthened the long-term relationship with the customer. Solution architecture Systems of interaction drive more engaging applications and processes by seamlessly and intelligently integrating systems of engagement with systems of record. This is an integration that crosses the boundary between the controlled enterprise environment and the uncontrollable Internet of Things (see Figure 5). This integration reaches from the mobile device to the corporate back end. Direct connections across this boundary are inappropriate and dangerous. Personal information typically is involved, which requires a secure and managed connection, and the traffic originating outside the boundary is internet scale in terms of both volume and spikes. 
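The detect/enrich/perceive/act recipe above can be sketched as a small pipeline. Everything in this sketch (the `Customer` class, the step functions, the swimwear promotion rule) is a hypothetical illustration of the four-step flow, not IBM product code:

```python
# Minimal sketch of the detect -> enrich -> perceive -> act recipe.
# All names here are illustrative assumptions, not a real API.
from dataclasses import dataclass, field

@dataclass
class Customer:
    name: str
    purchase_history: list = field(default_factory=list)
    recent_posts: list = field(default_factory=list)

def detect(customer, store_location):
    # Detect: the customer is near one of our stores.
    return {"customer": customer, "near": store_location}

def enrich(context):
    # Enrich: deepen understanding with past buying behaviour.
    context["interests"] = set(context["customer"].purchase_history)
    return context

def perceive(context):
    # Perceive: scan recent social signals for current intent.
    posts = context["customer"].recent_posts
    context["going_on_vacation"] = any("beach" in p for p in posts)
    return context

def act(context):
    # Act: choose a personalised offer based on the full context.
    if context.get("going_on_vacation"):
        return f"SMS to {context['customer'].name}: 20% off swimwear today!"
    return None

alice = Customer("Alice", ["sunglasses"], ["Off to the beach next week!"])
offer = act(perceive(enrich(detect(alice, "5th Avenue store"))))
print(offer)
```

Each stage only adds to a shared context object, which is what lets the final action be both personal and timely.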
So, traffic must be controlled and optimized to prevent bringing down the enterprise IT infrastructure. Figure 5. Interaction and the changing world In the context of integration throughout and beyond the enterprise, this situation implies an important distinction between the fundamental parts of your topology: - **Messaging:** Moving information payloads from A to B in a reliable fashion. At the heart of messaging is the ability to move an information payload from origin to destination in a controlled and reliable fashion. - **Integration Bus:** Creating structured assets and services that are based on existing data and functionality. At the heart of an Integration Bus is the ability to create reusable assets. The Integration Bus is the topology component that is closest to the classical notion of an enterprise service bus (ESB). The ESB is a general pattern embodying the SOA concept of consumers and providers that is mediated in a loosely coupled fashion. An Integration Bus is a particular embodiment of the ESB pattern that is centered around integrating resources within a zone of control. Although an Integration Bus provides mediation and composition of any conceivable type of resource, it does not provide the advanced security and traffic controls that are necessary for a gateway. - **Gateway:** Exposing APIs and services across a boundary in a controlled and optimized fashion. The concept of a gateway represents the topology component sitting on the boundary between systems of engagement and systems of record, or any other boundary of interest. Even though a gateway, by its nature, does support some amount of mediation, it is different from an Integration Bus. The difference stems from the fact that the gateway architecture is optimized towards control and throughput rather than towards aggregation of data and functionality (which is the focus of an Integration Bus). 
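The gateway role just described (security and traffic control at the boundary, pass-through to the provider, no composition) can be illustrated with a toy sketch. The class name, the rate-limit policy, and the lambda back end are all hypothetical assumptions for illustration, not the behaviour of any specific IBM appliance:

```python
# Toy gateway: authentication and throttling on the boundary,
# then plain pass-through to the protected back end.
import time
from collections import defaultdict

class Gateway:
    def __init__(self, backend, rate_limit_per_minute=60, api_keys=()):
        self.backend = backend                 # the protected provider
        self.limit = rate_limit_per_minute
        self.api_keys = set(api_keys)
        self.calls = defaultdict(list)         # api_key -> call timestamps

    def handle(self, api_key, request):
        # Security control: reject unknown consumers at the boundary.
        if api_key not in self.api_keys:
            return {"status": 401, "body": "unauthorized"}
        # Traffic control: throttle internet-scale spikes before they
        # reach the enterprise back end.
        now = time.time()
        window = [t for t in self.calls[api_key] if now - t < 60]
        if len(window) >= self.limit:
            return {"status": 429, "body": "rate limit exceeded"}
        window.append(now)
        self.calls[api_key] = window
        # No aggregation or composition: just pass the request through.
        return {"status": 200, "body": self.backend(request)}

gw = Gateway(backend=lambda req: f"echo:{req}",
             rate_limit_per_minute=2, api_keys={"partner-key"})
print(gw.handle("partner-key", "ping"))   # allowed
print(gw.handle("intruder", "ping"))      # rejected: unknown key
gw.handle("partner-key", "ping")
print(gw.handle("partner-key", "ping"))   # rejected: limit of 2 reached
```

Note what is deliberately absent: no data aggregation and no service composition, which in this topology belong to the Integration Bus, not the gateway.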
Gateways have advanced security and traffic control (a smart and efficient pipe), but no composition and limited mediation. Gateways typically are limited to a few standard protocols, such as SOAP and REST. The Nexus of Forces drives new business agendas and information needs. However, what drives the design to support such business innovation? SOA design principles are a key ingredient in building systems of interaction that are flexible, robust, and extensible. Three fundamental aspects (Figure 6) are part and parcel of SOA: - **Service:** A repeatable business task (such as checking customer credit or opening an account) - **Service orientation:** A way of thinking about your business through linked services and the outcomes they provide - **Service-oriented architecture (SOA):** A business-centric architectural approach that is based on service-oriented principles Figure 6. Service-oriented architecture A service as an abstract representation is important; it allows the service to be projected and accessed beyond the boundary of a physically controlled environment. A service as a representation of a business task is important for designing collaborative business systems beyond pure software integration. Finally, mediation, which is an intrinsic part of the enterprise service bus pattern and a fundamental building block of SOA, supports intelligent pairing of consumers and providers of services. This mediation functions whether those consumers and providers are software or people (see Figure 7). **Figure 7. SOA mediates between consumers and providers (ESB pattern)** The design principles that aid building such collaborative systems must be broader than the criteria for what constitutes a good service. 
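The ESB mediation pattern of Figure 7 can be rendered as a toy service bus: consumers request a business task by name, and the bus pairs them with a provider while mediating message formats. All names in this sketch (`ServiceBus`, the `check_credit` task, the uppercase mediator) are illustrative assumptions:

```python
# Toy ESB: loose coupling via named services, with per-service
# mediation between consumer and provider message formats.
class ServiceBus:
    def __init__(self):
        self.providers = {}    # service name -> provider callable
        self.mediators = {}    # service name -> request transformer

    def register(self, service, provider, mediator=None):
        self.providers[service] = provider
        if mediator:
            self.mediators[service] = mediator

    def invoke(self, service, request):
        # Loose coupling: the consumer never names the provider directly;
        # the bus mediates the request into the provider's format.
        mediator = self.mediators.get(service, lambda r: r)
        return self.providers[service](mediator(request))

bus = ServiceBus()
# This provider expects an uppercase customer id; the mediator adapts
# whatever format the consumer happens to send.
bus.register("check_credit",
             provider=lambda cid: {"customer": cid, "credit_ok": True},
             mediator=lambda cid: cid.upper())

print(bus.invoke("check_credit", "cust-42"))
```

Because consumer and provider are paired only through the bus, either side can be replaced without the other noticing, which is the loose coupling the ESB pattern is after.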
IBM believes that the following SOA design principles are also fundamental to building systems of interaction: - **Service orientation at the core**: Thinking about business solutions in terms of interacting processes and services. - **Process integration at an internet scale**: Ensuring integrity of interactions and information across time and location. - **Integration with enterprise capabilities and back-end systems**: Providing a unified experience across channels and systems, which maximizes existing capabilities to drive new innovative processes. - **A basis in industry standards**: No single player or vendor can dictate protocols or information standards. - **Providing the platform for a growing ecosystem**: As the business ecosystem grows beyond the walls of the enterprise, so does the ecosystem that delivers and manages business solutions. The last bullet is what the excitement around web APIs is about: growing the development ecosystem beyond the enterprise as a means for extended business outreach. Usage scenarios An online store and a social networking service, although adopting different business models, are both examples of early adopters of a computing model that is open by design and where the product is based on APIs and services that are projected into an extended ecosystem. Without its open merchant platform, the online store cannot be the one-stop shop for various goods and might not have become one of the dominant internet retail portals. Without the open interface to its communication servers, the social networking service cannot rely on a myriad of smart clients being provided for, at no cost, by various open source communities, and might not have achieved the popularity that it enjoys today. These are just two examples of a trend across all industries where solutions are first designed for an open ecosystem, whether those solutions are deployed internally, externally, or in a hybrid fashion. 
It is no coincidence that "mobile first" and "cloud first" are some of the mantras of this new age of computing. This new age is marked by designing for a different experience and environment yet allowing solutions, where appropriate, to still run within the enterprise or using traditional channels. Integration IBM has a sophisticated set of products that simplify integration and quickly solve a wide range of problems: - IBM WebSphere® Cast Iron® and IBM WebSphere DataPower® XH40: Connecting to applications in the public cloud enables enterprises to use a new cloud economy. - IBM Workload Deployer and IBM PureApplication® System: Help enterprises achieve more with less by better managing IT resources as collectives. - Integration Bus (IBM WebSphere Message Broker): The enterprise service bus integrates apps, data, services, and partners while controlling and optimizing connections. - IBM WebSphere eXtreme Scale (WXS) and IBM WebSphere DataPower XC10: Cache grids improve scale and performance of applications and services. - IBM Mobile Foundation (Worklight®): A mobile platform that can deal with the scale and ubiquity of mobile and sensor-rich environments, which have changed the requirements of enterprises. - IBM WebSphere DataPower XG45: Secure appliances enable controlled access to enterprise resources. - IBM WebSphere MQ: A messaging backbone in the data center that extends to external clients that are connected through the internet. - Sterling Commerce, IBM WebSphere DataPower XB62, and IBM WebSphere Cast Iron Live: These products open business-to-business channels and partner collaboration, where a new genre, "App Developer Partner", is emerging. 
For more information, see Integration Throughout and Beyond the Enterprise, SG24-8188-00, which can be found at the following website: Supported platforms For detailed system requirements, a list of supported operating systems, prerequisites, and optional supported software, with component-level details and operating system restrictions, go to the following websites: - More information and requirements for IBM WebSphere MQ can be found at [http://www-01.ibm.com/support/docview.wss?uid=swg27006467#7.5](http://www-01.ibm.com/support/docview.wss?uid=swg27006467#7.5). Ordering information Table 1 shows ordering information for the products of this solution. Table 1. Ordering information <table> <thead> <tr> <th>Program name</th> <th>PID number</th> <th>Charge unit description</th> </tr> </thead> <tbody> <tr> <td>IBM Worklight</td> <td>5725-I43</td> <td>Client device installation application</td> </tr> <tr> <td>IBM WebSphere DataPower Service Gateway XG45</td> <td>7198-32X</td> <td>Per appliance</td> </tr> <tr> <td>IBM WebSphere DataPower B2B Appliance XB62</td> <td>5725-K54</td> <td>Per appliance</td> </tr> <tr> <td>IBM WebSphere DataPower Integration Appliance XI52</td> <td>7199-42X</td> <td>Per appliance</td> </tr> <tr> <td>IBM WebSphere DataPower Cast Iron Appliance XH40</td> <td>7198-8FX</td> <td>Per appliance</td> </tr> <tr> <td>IBM WebSphere MQ for Multiplatform</td> <td>5724-H72</td> <td>Per Processor Value Unit (PVU) for Linux on IBM System z®</td> </tr> <tr> <td></td> <td></td> <td>Per PVU Processor-Day</td> </tr> <tr> <td>IBM Integration Bus</td> <td>5724-J05</td> <td>PVU: Available through IBM Passport Advantage® only</td> </tr> </tbody> </table> Related information - The Nexus of Forces by Gartner is the confluence of mobile, social, cloud, and big data analytics. 
For more information, go to the following website: http://www.gartner.com/technology/research/nexus-of-forces/ - For a more detailed analysis of these SOA design principles and their importance in the context of the nexus of forces, see “SOA Design Principles for Dummies” at the following website: http://www-01.ibm.com/software/solutions/soa/ - Applying big data analytics to vast amounts of available information sources lets you provide unique value and insight (particularly in the context of the Internet of Things). For more information, go to the following website: http://www.ibm.com/smarterplanet/us/en/overview/article/iot_video.html - The IBM SOA reference model is available at the following website: ftp://ftp.software.ibm.com/software/soa/pdf/SOA_g224-7540-00_WP_final.pdf - For more information about systems of interaction, go to the following website: http://www-01.ibm.com/software/solutions/systems-of-interaction/ - For more information about systems of engagement, go to the following website: http://www-01.ibm.com/software/ebusiness/jstart/systemsofengagement/ - For the formal standard definition of service, see the Open Group SOA Ontology at the following website: http://www.opengroup.org/soa/source-book/ontology/ - Integration Throughout and Beyond the Enterprise, SG24-8188-00, found at: http://www.redbooks.ibm.com/abstracts/sg248188.html?Open - IBM Offering Information page (to search on announcement letters, sales manuals, or both): http://www.ibm.com/common/ssi/index.wss?request_locale=en On this page, enter any of the following terms (IBM Integration Bus, IBM Worklight, IBM WebSphere DataPower Service Gateway XG45, IBM WebSphere DataPower B2B Appliance XB62, IBM WebSphere DataPower Integration Appliance XI52, IBM WebSphere DataPower Cast Iron Appliance XH40, WebSphere MQ for Multiplatform, IBM WebSphere Message Broker), select the information type, and then click Search. On the next page, narrow your search results by geography and language. 
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. © Copyright International Business Machines Corporation 2014. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
PAIRSE: A Privacy-Preserving Service-Oriented Data Integration System

Djamal Benslimane, Mahmoud Barhamgi, Frédéric Cuppens, Franck Morvan, Bruno Defude, Ebrahim Nageba

To cite this version: Djamal Benslimane, Mahmoud Barhamgi, Frédéric Cuppens, Franck Morvan, Bruno Defude, et al. PAIRSE: A Privacy-Preserving Service-Oriented Data Integration System. SIGMOD Record, ACM, 2013, vol. 42 (n° 3), pp. 42-47. DOI: 10.1145/2536669.2536677 (http://dx.doi.org/10.1145/2536669.2536677). HAL Id: hal-01124429 (https://hal.archives-ouvertes.fr/hal-01124429), submitted on 6 Mar 2015.
ISSN 0163-5808

PAIRSE: A Privacy-Preserving Service-Oriented Data Integration System

Djamal Benslimane (LIRIS, Lyon 1 University, 69622 Villeurbanne, France, djamael.benslimane@univ-lyon1.fr), Mahmoud Barhamgi (LIRIS, Lyon 1 University, 69622 Villeurbanne, France, mahmoud.barhamgi@univ-lyon1.fr), Frederic Cuppens (TELECOM Bretagne, 35576 Rennes, France, frederic.cuppens@telecom-bretagne.eu), Franck Morvan (IRIT, Paul Sabatier University, 31062 Toulouse, France, franck.morvan@irit.fr), Bruno Defude (TELECOM SudParis, 91011 Evry, France, bruno.defude@it-sudparis.eu), Ebrahim Nageba (Claude Bernard University, 69622 Villeurbanne, France, ebrahim.nageba@univ-lyon1.fr)

ABSTRACT

Privacy is among the key challenges to data integration in many sectors, including healthcare, e-government, etc. The PAIRSE project aims at providing a flexible, loosely-coupled and privacy-preserving data integration system in P2P environments. The project exploits recent Web standards and technologies such as Web services and ontologies to export data from autonomous data providers as reusable services, and proposes the use of service composition as a viable solution to answer data integration needs on the fly. The project proposed new composition algorithms and service/composition execution models that preserve the privacy of data manipulated by services and compositions. The proposed integration system was demonstrated at EDBT 2013 and VLDB 2011.

1. **INTRODUCTION**

Data integration has been a long-standing challenge for the database community. This is motivated by the number of contexts in which the need for a flexible data integration mechanism has become critical, including Web and enterprise data integration, scientific data exploration, data exchange in government agencies, etc.
Much of the literature on data integration across autonomous data sources has tacitly assumed that data on the side of each data source can be revealed and shared with other sources. In practice, however, data integration scenarios are often hampered by legitimate and widespread data privacy concerns. In the healthcare application domain, for example, medical data are subject to many legislations (e.g., [2, 1]) around the world that restrict the collection, processing, and disclosure of personal data, and hold data holders accountable for any unintended data disclosure or misuse.

The PAIRSE project addresses the challenge of flexible and privacy-preserving data integration in peer-to-peer environments. Driven by the recent trend of using SOA-oriented architectures for data integration in modern enterprises, PAIRSE assumes that data sources are exposed to the data sharing environment as Web services. This type of service is commonly known as a data service [11]; data services provide a well-documented, platform- (and source-) independent, interoperable method of interacting with data. PAIRSE proposes a service composition-based approach for on-demand data integration; i.e., heterogeneous data services from autonomous service providers are selected and composed on the fly to answer users' queries.

Data privacy preservation is a key objective of PAIRSE. Users in PAIRSE are allowed to access only the information they are entitled to for a given purpose. PAIRSE focuses on modeling, discovering, selecting and composing data services to efficiently answer users' queries. The contributions of PAIRSE, which was demonstrated at EDBT 2013 [5] and VLDB 2011 [9], are summarized as follows:

- **Semantic description model for data services**: The semantics of data services should be explicitly represented to automate their discovery, selection and composition. We modeled data services as "RDF Views" over domain ontologies to formally define their semantics [7].
The service description files (e.g., WSDLs) are annotated with these RDF views.

- **Query resolution by automatic service composition**: Queries in PAIRSE are resolved by automatically selecting and composing data services. We exploited mature query rewriting techniques to devise a novel service composition algorithm [7, 9]. The algorithm relieves users from having to manually select and compose services, tasks that would generally require significant programming skills. We also proposed an efficient algorithm to locate relevant services in a P2P environment [14].

- **Privacy preservation**: We proposed a privacy-preserving composition model [5, 8, 16]. Our model allows service providers to locally enforce their privacy and security policies when their services are invoked. In addition, it prevents services in a composition from learning any information about the data that each other holds, beyond what is permitted.

The rest of the paper is organized as follows. Section 2 gives an overview of our integration system. Section 3 describes our semantic modeling of data services. Section 4 presents our composition approach. Section 5 presents our techniques for privacy preservation. Section 6 applies our work in two application domains, and summarizes the obtained results. Section 7 concludes the paper.

2. **PAIRSE'S ARCHITECTURE**

The PAIRSE data integration system has a hybrid peer-to-peer infrastructure [14], where peers form communities of interest, called **Virtual Organizations** (VOs). Each VO has a common domain ontology modeling its expertise, and peer members that may have relations with members from other VOs. Relations between peers exist only if there is a mapping between the ontologies of their respective VOs. PAIRSE does not impose any constraint on the topology graph formed by the ontologies and the different mappings. Peers export their (shareable) data sources as data services. PAIRSE follows a declarative approach to compose data services (Figure 1).
Data services in each peer are modeled as **RDF Views** over domain ontologies to explicitly define their semantics. Users formulate their queries over domain ontologies using the SPARQL query language. Then, our system exploits the defined RDF views (added as annotations to service description files) to select and compose the relevant services using an RDF query rewriting algorithm that we have devised for that purpose. Queries may necessitate the use of remote data services, in which case an efficient P2P service discovery algorithm [14] is used to locate and retrieve the descriptions of relevant services from remote peers. The system then generates an execution plan for the composition and executes it to provide the user with the requested data. As data services may manipulate privacy-sensitive information, PAIRSE proposed new service and composition execution models to preserve privacy.

Figure 1: Peer Structure

3. **SEMANTIC MODELING AND SERVICE QUERYING**

In this section, we explain our service composition based approach to query resolution.

3.1 **Semantic Modeling of Data Services**

Modeling and explicitly specifying the semantics of data services is the first step towards the automation of service selection and composition. In PAIRSE, we proposed in [7] to model data services as **RDF Parameterized Views** (RPVs) over domain ontologies. A parameterized RDF view uses concepts and relations whose meanings are formally defined in domain ontologies to define the semantic relationships between the input and output parameter sets of a data service. The parameterized view is a technique that has been used to describe content and access methods in Global-as-View (GaV) integration architectures [13]. Figure 2 shows an RPV of a service returning the personal information (i.e., name and date of birth) of patients admitted in a given medical center. Note that input parameters are prefixed with the symbol "?" and output parameters are prefixed with the symbol "$".
RDF views may also specify constraints to characterize the data manipulated by their corresponding services. These constraints may have different forms, including simple interval constraints (e.g., $X \in [a, b]$, where $X$ is a variable used in an RDF view), and fuzzy constraints interpreted according to a fuzzy membership function (e.g., the medications returned by a service have a "High" concentration of hydroxypropyl-β-cyclodextrin; i.e., $X$ is High, where the fuzzy term "High" is interpreted by a membership function specifying, for each value of $X$, the degree to which it is high). We adopted an approach similar to SAWSDL (http://www.w3.org/2002/ws/sawSDL/) to associate data services with their RPVs. We exploited the extensibility feature of the WSDL standard to annotate the WSDL files with RPVs.

3.2 **Service-based Query Resolution**

In PAIRSE, users' queries are resolved by composing relevant data services on the fly. Each virtual organization in PAIRSE's hybrid P2P architecture has a DHT (Distributed Hash Table) to index its published services [14]. Services are indexed according to the ontological concepts used in their RPVs. When a query is issued at a given peer, relevant services are first sought in the same VO where the query is posed; then the service discovery request is propagated to connected VOs. The descriptions of discovered services are then sent back to the initial peer, where the relevant services will be selected and composed. Furthermore, for each discovered service we return the mapping path between the ontologies associated with the expertise domains (i.e., VOs) of the discovered service and the initial peer. This mapping path allows the translation between the query and the service views. We proposed a query rewriting based service composition algorithm to select and compose data services on the fly [7, 9].
The algorithm, given a SPARQL query and a set of data services represented by their RPVs, rewrites the query in terms of calls to relevant services. Our algorithm extends earlier works on query rewriting and data integration [13] in the following aspects:

**Compliance with the RDF/S data models:** while most previous work has focused on relational and XML data integration [13, 17], we considered the case of RDF/RDFS data integration. Specifically, our query rewriting algorithm takes into account RDF schema constraints such as `rdfs:subClassOf`, `rdfs:subPropertyOf`, `rdfs:domain`, and `rdfs:range` when comparing RPVs to queries. The consideration of RDFS constraints is important, as it allows our system to infer more results than previous rewriting techniques. For example, suppose there is a statement in an RDFS ontology specifying that `Medication` is a subclass of `Drug`. Given a data service `S` returning the medications administered to a given patient, and a query `Q` for the drugs administered to a given patient, our algorithm automatically infers that `S` can be used to generate rewritings for `Q`.

**Answering parameterized queries:** while previous data integration systems have focused on answering specific queries, PAIRSE has focused on answering parameterized queries. The key focus was on constructing compositions of services (i.e., parameterized integration plans) that are independent of a particular input value. For example, assume a parameterized query `Q(x,y)` for the medications `y` that may interact with a given medication `x`. Assume also two data services:

- `S1(x,y)`, where `x ∈ [1, 5]` and `y ∈ [100, 150]`
- `S2(x,y)`, where `x ∈ [6, 10]` and `y ∈ [150, 200]`

If `Q` were a specific query (`Q_{x=2}`), then `S2` would not be considered in the rewriting (i.e., composition), as `x=2` is not covered by `S2`. In contrast, both `S1` and `S2` are usable for `Q`, to cover as much as possible of the potential values of `x`.
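The interval reasoning behind this example can be sketched in a few lines of Python. This is a toy illustration only: the actual PAIRSE algorithm operates on RPVs and SPARQL queries, not plain intervals, and the function names and interval representation here are invented for the sketch.

```python
# Toy sketch: keep every service whose input interval overlaps the
# query's possible input values, so the composition covers as much of
# the parameter space as possible. Names and intervals are hypothetical.

def overlaps(a, b):
    """True if the closed intervals a and b share at least one value."""
    return a[0] <= b[1] and b[0] <= a[1]

def usable_services(query_range, services):
    """Select services whose x-constraint intersects the query's x-range."""
    return [name for name, x_range in services.items()
            if overlaps(x_range, query_range)]

services = {"S1": (1, 5), "S2": (6, 10)}

# Specific query Q_{x=2}: only S1 applies.
print(usable_services((2, 2), services))   # -> ['S1']

# Parameterized query Q(x, y) with x unbound over [1, 10]: both apply.
print(usable_services((1, 10), services))  # -> ['S1', 'S2']
```

The point of the sketch is the difference between binding `x` to a specific value (one service suffices) and leaving it as a parameter (all overlapping services must be retained for the plan to stay input-independent).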
Our composition algorithm extends the previous ones with: (i) a probabilistic subsumption test to determine in polynomial time the minimum number of services required to satisfy the value constraints that may be specified on a query's parameters [6], and (ii) a mechanism to optimize the generated composition plans based on value constraints specified in service descriptions [7].

**Inclusion of user's preferences:** often the number of candidate compositions that may be used to answer the same query is very large. We proposed an approach [9] to compute the top-k compositions based on user preferences. In our approach, we modeled user's preferences using fuzzy sets. We match the (fuzzy) constraints of the relevant services to those of the query and determine their matching degrees using a set of matching methods from fuzzy set theory. We then rank-order candidate services based on a fuzzification of Pareto dominance and compute the top-k compositions.

4. **PRIVACY PRESERVATION IN PAIRSE**

In this section, we briefly present our models to preserve the privacy of manipulated data at the service and the composition levels.

4.1 **Privacy-preserving Service Execution Model**

Data returned by a data service may be subject to different security and privacy concerns. For example, different people may have different access rights over the same data item; data subjects may have different preferences about the disclosure of their data, etc.

Figure 3: A Privacy-preserving Service Execution Process

A common approach in the database field to handle such concerns is to push them down to the underlying DBMS by rewriting the query to include these constraints [15]. However, this may not be applicable to data services, as the same service may access a multitude of heterogeneous data sources that are not necessarily managed by a DBMS.
An alternative approach is to enforce privacy and security policies at the application level [4], by modifying, in our case, the source code of data services. However, this may not always be applicable, as most current data service creation platforms (e.g., AquaLogic [11]) provide data services as black boxes that cannot be modified. Even if the code were modifiable, this approach often leads to privacy leaks [15].

We proposed a secure, privacy-preserving execution model for data services allowing service providers to enforce their privacy and security policies without changing the implementation of their data services (i.e., services are seen as black boxes). Our model is inspired by the database approach of "declaratively" handling security and privacy concerns. It involves the following steps (refer to Figure 3):

**Step 1: View rewriting to integrate security and privacy constraints.** When a data service is invoked, our model rewrites its corresponding RDF view to take into account the applicable security and privacy rules from the service's associated policies, which are expressed using the OrBAC and PrivOrBAC models over domain ontologies and take into account the data recipient (i.e., the service consumer), his purpose for requesting the data, and the consents of data subjects [16]. The soundness and correctness of our algorithm are demonstrated in [16, 8].

**Step 2: Rewriting the extended view in terms of data services.** The extended RDF view \( v_{extended} \) may include additional data items (denoted by \( \Delta v = v_{extended} - v_{original} \)) required to enforce the security and privacy constraints. In this step, we find the data services covering \( \Delta v \), and rewrite \( v_{extended} \) in terms of these services along with the initial service.

**Step 3: Enforcing security and privacy constraints.** Services selected in the previous step are composed and executed using the conventional service execution process.
The composition returns (i) the data items returned by the invoked service along with (ii) the data items necessary to evaluate the security and privacy constraints. We defined a privacy filter that evaluates the privacy constraints of the different items that are subject to privacy constraints in the view. Null values are returned for items whose privacy constraints evaluate to \( False \). We demonstrated the validity of our model by extending the architecture of the widely used service container AXIS 2.0 (http://axis.apache.org/axis2/java/core/) with a new module implementing our privacy-preserving service execution model.

4.2 **Privacy-preserving Composition Execution Model**

Executing compositions may disclose confidential information to component services. Assume, for example, a composition of two services: \( S_1 \) returns HIV patients in a given city, and \( S_2 \) checks whether a given patient has been treated for psychiatric disorders. Such a composition could be needed (by a pharmaceutical researcher) to investigate the connection between a chemical component present in HIV medicines and the development of severe psychiatric disorders. Assume also that Bob is a patient common to \( S_1 \) and \( S_2 \). If \( S_2 \) is invoked with Bob's identifier, and the provider of \( S_2 \) has access to the composition plan (i.e., he knows that Bob was outputted by \( S_1 \)), then he will infer that Bob is an HIV patient. On the other hand, if the data returned by \( S_1 \) were completely privacy-sanitized (e.g., by removing identifiers and sensitive information), then the composition could not be executed.

We proposed a privacy-preserving composition execution model in [5] that limits the information disclosed to services in a composition about the data that each other holds. Our model distinguishes between the following entities: (i) the services in the composition, (ii) the execution engine, and (iii) the recipient of the final results.
It relies on two key ideas. First, data services use the same order-preserving encryption scheme, OPES [3], to encrypt the identifier attributes that are needed to connect data subjects across the different services. They are still free to protect non-identifier attributes with their own techniques (e.g., anonymization, etc.). This way the execution engine has access only to protected data and can still link data subjects across services using the encrypted identifier attributes (note that OPES allows for applying equality queries on encrypted data). By the end of the composition execution, it removes the encrypted identifier attributes from the final results before returning them to the recipient, who thus gets only privacy-sanitized data.

Second, we proposed an algorithm to allow the execution engine to generalize the encrypted value $v_e$ received from a service $S_i$ before proceeding with the invocation of the subsequent service $S_j$ in the composition, such that the generalized value Gen($v_e$) corresponds to $k$ input encrypted values for which $S_j$ has outputs; e.g., the identifier of Bob is generalized to cover $k-1$ other patients for which $S_j$ has an output (i.e., $S_j$ will not be able to distinguish between Bob and $k-1$ other patients).

5. **IMPLEMENTATION AND EVALUATION**

We evaluated our different techniques and algorithms in the healthcare and bioinformatics application domains. These domains have widely embraced Web standards, such as XML and Web services [12, 10], and are characterized by the need for a flexible and privacy-preserving data integration approach. The cardiology hospital of Lyon provided us with access to two medical databases. The identities of patients in these databases were changed. We also generated synthetic medical data about the same patients. We implemented about 400 data Web services on top of our real and synthetic data.
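The identifier-linking idea of Section 4.2 can be illustrated with a toy sketch. Assumption: a simple keyed, strictly increasing mapping stands in for the real OPES scheme of [3], which is far more elaborate and actually secure; all names and values here are invented.

```python
# Toy stand-in for order-preserving encryption: a strictly increasing
# function of the plaintext, so equality and order of ciphertexts mirror
# equality and order of plaintexts. This is NOT secure; it only shows
# why the execution engine can join records from different services
# without ever seeing the raw patient identifiers.

def toy_opes(value, key=(3, 17)):
    """Keyed, strictly increasing mapping shared by all data services."""
    a, b = key
    return a * value + b

# S1 and S2 encrypt the same patient identifier with the shared key:
id_from_s1 = toy_opes(4521)
id_from_s2 = toy_opes(4521)

# The engine links the two records by equality on ciphertexts alone:
assert id_from_s1 == id_from_s2

# Order is preserved, so equality/range predicates still work on
# encrypted identifiers:
assert toy_opes(100) < toy_opes(200)

# Before returning results, the engine strips the encrypted identifier
# column, so the recipient receives only privacy-sanitized data:
row = {"enc_id": id_from_s1, "diagnosis": "..."}
sanitized = {k: v for k, v in row.items() if k != "enc_id"}
print(sanitized)  # -> {'diagnosis': '...'}
```

A real deployment would use the actual OPES construction [3], which transforms the plaintext distribution rather than applying a linear map; the sketch only demonstrates the order- and equality-preserving property that the execution engine relies on.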
Services were all deployed on an extended version of AXIS 2.0 implementing our service execution model. We built a medical ontology based on the building blocks and the data types defined in the HL7 standard, and used it for the annotation of service description files. To evaluate our techniques in the bioinformatics domain, we used a set of 300 services from the BioCatalogue registry.

Figure 4 (part a) shows the query interface of PAIRSE. Users are assisted in formulating their SPARQL queries over domain ontologies. The figure also shows the composition plan of a selected composition (part b), along with the privacy-sanitized results (part c).

We conducted exhaustive experiments to evaluate the performance of our integration system. We summarize the obtained results below:

**Composition construction and execution:** Our experiments in [7] showed that our composition algorithm can handle hundreds of data services in a reasonable time. For example, for chain queries [13] and RPVs with a length of 3 or 4 object properties, the algorithm was able to handle up to 400 services in less than 4 seconds. In the context of parameterized queries, our experiments in [6] showed that our algorithm to find the minimum set of services introduced only a small cost at composition construction time (i.e., in all experiments the algorithm required less than 10% of the time needed to rewrite the query), and substantially improved the composition execution time (i.e., in all experiments the composition execution time was reduced to less than 0.75% of the time needed without optimization), as it removes redundant services. In the context of preference queries, our experiments in [9] considered that services can be grouped in classes. The experiments showed that the top-k compositions can be computed efficiently. For instance, for classes containing about 400 services, the top-k compositions are computed in less than 4 seconds.
**Security and privacy preservation:** The experiments conducted in [8] showed that our secure and privacy-preserving service execution model added only a small increase to the service execution time. In all experiments, the cost incurred in the enforcement of security and privacy constraints did not exceed 10% of the time required to execute the service while ignoring these security and privacy constraints altogether. The experiments conducted to evaluate our composition execution model [5] showed that the time required to execute the composition with privacy preservation is at most three orders of magnitude higher than the time required without privacy preservation ($K_i$ was set to 4 in all tests). We were able to cut down that cost to two orders of magnitude by reusing the values of the protocol parameters that were computed in past invocations of the same services (and during the same composition execution).

6. **CONCLUSION**

The goal of the PAIRSE project was to develop new methods and techniques for flexible and privacy-preserving data integration. We have evaluated our composition-based approach in the healthcare and bioinformatics domains. The obtained results [5, 9, 7, 14, 8, 16, 6] are promising.

7. **ACKNOWLEDGMENTS**

PAIRSE is funded by the French National Research Agency under grant number ANR-09-SEGI-008.

8. **ADDITIONAL AUTHORS**

Michael Mrissa (Lyon 1 University, michael.mrissa@univ-lyon1.fr), Francois Paulus (Semsoft Company, francois.paulus@semsoft-corp.com), Stephane Morucci (Swid Company, stephane.morucci@swid.fr), Nora Cuppens (Telecom-Bretagne, nora.cuppens@telecombretagne.fr)

9. **REFERENCES**

[8] M. Barhamgi, D. Benslimane
PROJECT MONITORING AND CONTROL IN MODEL-DRIVEN AND COMPONENT-BASED DEVELOPMENT OF EMBEDDED SYSTEMS

The CARMA Principle and Preliminary Results

Rikard Land, Jan Carlson, Stig Larsson, Ivica Crnkovic
Mälardalen University, School of Innovation, Design and Engineering, Västerås, Sweden
rikard.land@mdh.se, jan.carlson@mdh.se, stig.larsson@mdh.se, ivica.crnkovic@mdh.se

Abstract: This position paper describes how the combination of the Model-Driven Development (MDD) and Component-Based Software Engineering (CBSE) paradigms can support project monitoring and control, and project risk reduction. The core principle for this is articulated and named CARMA, and our research agenda and preliminary results are described. Through interviews, industry input, process simulation, tool implementation and pilot projects, and by describing an extension of CMMI, we are exploring the CARMA principle in order to provide guidelines for MDD/CBSE projects.

Keywords: Model-Driven Development, Component-Based Software Engineering, Project Monitoring and Control, Risk Management, CMMI, Empirical Studies.

1 INTRODUCTION

In this paper, we describe the preliminary results of an evaluation of the combination of two increasingly maturing approaches: Model-Driven Development (MDD) and Component-Based Software Engineering (CBSE). Current research efforts to combine these are mostly centered on technology, but there is a more or less implicit promise to reduce risk in development projects by adopting these two paradigms – especially when combined. The assumed benefits are usually cast in technical terminology: the software will be correct by construction, component properties can be composed into system properties, or models at different levels are ensured to be consistent (Håkansson et al., 2006; Selic, 2003; Stahl and Völter, 2006). Only implicitly are the benefits understood as, e.g., reduced costs and risk (Feiler et al., 2009).
However, organizations need to change their culture and way of working compared to previous generations of software development paradigms (Selic, 2003). Such changes may include:

- New ways of formulating requirements
- Different approaches to verification (how/when)
- New activities, re-ordered activities, and significantly different effort (relative and absolute) than usual
- New methods for project monitoring and control

As far as we know, the MDD/CBSE paradigms have not been thoroughly evaluated from this point of view (see Section 2 for related work): will the required effort and commitment pay off? The purpose of this paper is to describe our initial results in evaluating the MDD/CBSE combination from the perspectives of risk management and project monitoring and control.

The goal of evaluating a combination of two paradigms, even from this more specific point of view, is extremely ambitious and needs to be made more concrete in order to be actionable. The next sections describe in more detail: the particular technology chosen and related work (Section 2); the formulation of a principle capturing the essence of risk management and project monitoring and control in this context (Section 3); and our research agenda, including the possible research methods to evaluate this principle, and preliminary results (Section 4). The main threat to the validity of the evaluation is that, by the very nature of the topic, we cannot do industrial case studies. Our investigation should rather be seen as a feasibility study, in which we collect insights by means of implementations, interviews, industrial experience, process simulation, tool implementation and student projects, and the formulation of a CMMI extension.

2 TECHNOLOGY AND RELATED WORK

This section describes the fundamentals of the fields of Model-Driven Development (MDD) and Component-Based Software Engineering (CBSE), and their relation to the work presented in this paper.
2.1 Model-Driven Development The principle behind Model-Driven Development (MDD) is to bridge the gap between various development artifacts such as requirements, architectural descriptions, lower-level designs, and implementation level through a series of more or less automatic translations (Selic, 2003; Stahl and Völter, 2006). MDD intends to make the development process more efficient (through automatic or semi-automatic translations), and enable earlier verification (of the models). The final software will thus to a large extent be correct by construction (Selic, 2003). OMG’s Model-Driven Architecture, MDA (http://www.omg.org/mda), is one important instantiation of this principle, where the main objective is to achieve platform independence. However, the MDD field focuses on languages that can capture as much as possible, because the next step in the process should ideally be generated automatically from a detailed model. The verification of models can only occur when a significant time of the project has passed. The concept of virtual integration in the SAVI program (Feiler et al, 2009) is similar to the CARMA principle we formulate, but we further clarify the essence of the principle and describe how project planning and milestones are integrated into the MDD/CBSE paradigm. The literature on processes for Model-Driven Development (MDD) focuses mostly on the division into platform development and application development (Stahl and Völter, 2006; Kleppe, Warmer, Bast, 2003), and the new roles required for this (Aagedal and Solheim, 2004; Krahn, Rumpe, and Völkel, 2006; Guta, Szasz, and Schreiner, 2008).
Also, while MDD relies on forward engineering in order to produce correct software, the combination MDD/CBSE in general also permits using pre-existing components produced in many different ways, including wrapped legacy code. 2.2 Component-Based Software Engineering In Component-Based Software Engineering (CBSE), the software is designed and constructed as components with clear boundaries and explicit interfaces (Szyperski, 2002). This paradigm is successful in e.g. the desktop domain, and has also found its way into the embedded systems domain, which is our focus (Hänninen et al, 2008; Larsson, Wall, Wallnau, 2005; van Ommering, van der Linden, and Kramer, 2006). From a process perspective, this means the processes of component development and system development are treated separately, but interact (Crnković, Chaudron, and Larsson, 2005). Component development could be a result of system top-down decomposition, and result in either internal development or in hiring a subcontractor. A system may also be built from pre-existing components, such as Off-the-Shelf (OTS) components (components developed for the marketplace), or as part of a product line initiative (Clements and Northrop, 2001). We have had to choose among the many component technologies in order to be specific enough, and identify the characteristics that support the project monitoring and control, as we envision it, to the largest extent. Although some literature, component models and component technologies describe the “component” concept as a deployable entity (Szyperski, 2002), others fundamentally assume there is a concept of component identity throughout the process, from early design to runtime. (For embedded software, it is even common that the component boundaries are optimized away during the deployment stage, when code is compiled and linked into one single binary image). 
To monitor the development (with support from automatic tools) as will be described in section 3, it is essential to adopt this second viewpoint. Also, there must be compositional reasoning theories and tools available for various component attributes, as well as an attribute framework making it possible to trace component attributes (such as timing properties, memory consumption) throughout the development process. Also, for embedded software, development of hardware models is an essential part of the development. This is true for the ProCom component model (Bureš et al, 2008; Sentilles et al, 2008) and associated research at the Progress Centre for Predictable Embedded Software Systems; there are other component models to which our evaluation will be applicable, but we did not choose them for our investigation because they do not support all the desired characteristics, in particular the attribute framework, to the same extent as ProCom: AADL (As-2 Embedded Computing Systems Committee, 2009), Autosar (www.autosar.org), SysML (www.sysml.org) and OMG’s MARTE (www.omgmarte.org). Also, we have good access to ProCom, the Progress development environment, and the Progress researchers, which makes it suitable to choose this track.

2.3 Other Related Work

The principle presented in this paper inherits the basic ideas from the concepts of daily builds, continuous integration, continuous verification, and test-driven development (Beck, 1999; Duvall, Matyas, and Glover, 2007; Kruchten, 2004), and adapts them to fit the combined MDD/CBSE paradigm.

3 SUBJECT OF EVALUATION

3.1 Motivating Example

Figure 1 depicts an electronic stability control system of a car (figure: Bureš et al., 2008; example previously used in Land et al., 2009). Our envisioned way to run an MDD/CBSE-oriented project is: - The different components may either be already existing, or to be developed. The existing ones may need modification.
(In the figure, for example the Stability Control System may require new development, while the Anti-lock Braking System will be reused from a previous system with minor modifications, and for the Wheels speed component, there may be three potential COTS components available, etc.) - There are certain properties the system must fulfill in order to be successful, such as response times and static memory consumption. (In the example, the latency from the Wheels speed input to the Brake valves output must be less than, say, 10 ms, and the software needs to fit in, say, 64 kb memory.) Clearly, the functionality is also a property that needs to be fulfilled, e.g. according to a requirements specification, a use case model, and/or state chart models describing the behavior. - If these properties cannot be fulfilled, project management wants to be informed as early as possible, in order to identify mitigation solutions (e.g. acquire more powerful hardware, allocate human resources to optimize the source code, relax the requirements). - The properties of interest are (in principle) derivable from knowledge of individual components’ properties, their interconnections, their allocation to hardware, and the characteristics of the hardware. For example, the response time depends on (at least): which components are invoked from input to output, the computation time needed by each component (which depends on hardware), and data transfer between components (which may be significant if this involves several communications over a network). Also behavioral diagrams can in principle be composed (Håkansson et al., 2008). (Another model, not shown here, is needed to describe the allocation of software components to hardware.) We assume that reliable composition theories exist for the properties of interest. - Later in the development, it may be possible to generate the values for the properties of interest from implementations.
Earlier in the development, it may be possible to estimate the values of these properties from half-finished implementations, or less refined models, adding a certain margin. Very early in the development, it is possible to provide values for these properties through expert estimates, or as allocated budgets to components based on the requirements on the system. - Each attribute type (e.g. “memory consumption”, “behavior”) may thus be associated with many different values for a single component instance, which have been created differently (estimates, test results, static analysis results, model checking proofs), and on different versions of the component (Sentilles et al., 2009). - It becomes possible to formulate milestones (e.g. project gates) in terms of expected values of the attributes. For example, if the requirement on static memory consumption is 32 kb for a component, we may define the goal for an early milestone to be 40 kb (generated from a model in a language known to give pessimistic values). A later milestone may be defined for this property as 24 kb, based on static analysis for a point in time where an incomplete implementation should be achieved, with a known set of (planned) completed features. (If these features are not implemented, this should raise a flag in another milestone criterion, for attribute “functionality.”) The key observation is that all of these types of values from very different sources are valuable at different points during the project from a project monitoring and control viewpoint, and that they can be treated in a uniform manner independent of their source, with support for more or less automatic generation and composition of the values. Figure 1: Component design of an electronic stability control (ESC) subsystem of a car.
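The milestone scheme just described can be sketched in a few lines of code. The data layout, names, and method labels below are our own assumptions for illustration; they are not the actual API of the Progress attribute framework.

```python
def check_milestone(criteria, values):
    """Check CARMA-style milestone criteria against recorded attribute values.

    criteria: {(component, attribute): (limit, required_method)}
    values:   {(component, attribute): (value, method)}  # estimate, analysis, test...
    Returns the criteria that fail: value missing, produced by the wrong
    generation method, or above the limit.
    """
    failures = []
    for key, (limit, method) in criteria.items():
        value, source = values.get(key, (None, None))
        if value is None or source != method or value > limit:
            failures.append(key)
    return failures

# Early milestone: 40 kb bound, value must come from a pessimistic model estimate;
# later milestone: 24 kb bound, value must come from static analysis.
early = {("SCS", "static_memory_kb"): (40, "model_estimate")}
later = {("SCS", "static_memory_kb"): (24, "static_analysis")}
measured = {("SCS", "static_memory_kb"): (38, "model_estimate")}

assert check_milestone(early, measured) == []  # early milestone passes
assert check_milestone(later, measured) == [("SCS", "static_memory_kb")]
```

The same check works regardless of whether a value originates from an expert estimate, a model, or a measurement, which is exactly the uniform treatment argued for above.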
3.2 The CARMA Principle The principle we envision is implicit in (the combination of) MDD and CBSE, but has not previously been clearly articulated. It can be formulated as: - **Components.** Choose a component technology which supports compositional reasoning of component properties. As early as possible, define the components of your system (i.e. the architectural structure). - **Attributes.** Keep track of the properties of (components of) your system through component attributes. Use a tool that supports management of these properties, including automatic composition. - **Requirements.** Refine your high-level system requirements into product requirements, and specify these in terms of the attributes which are analyzable with (tools supporting the) composition theories. - **Milestones.** Formulate milestones (e.g. project gates) in terms of tuples: <$\text{expected value in relation to product requirement}; \text{method to generate this value}$>. - **Analysis.** Perform verification analysis at the defined milestones. In addition, the individual developers, architects, project manager, etc., may perform analyses of interest at any time; this resembles debugging in direct connection to implementation, which is informally done (i.e. not mandated by a formal process) but an invaluable tool for the individual developer before passing the code (or, in MDD, the model) on to verification as part of the formal process. We call this principle the CARMA principle (Components, Attributes, Requirements, Milestones, Analysis). The CARMA principle captures what we believe is a major opportunity in practice if adopting the MDD/CBSE paradigm. There is risk reduction inherent in the “correct by design/construction” paradigm, but it is important to leverage this at a project management level, including the time dimension and the possibility of changed requirements, which may be due to external as well as internal events in the project.
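As a toy illustration of the automatic composition and analysis the principle presumes, consider end-to-end latency. The component names, execution times, and transfer times below are hypothetical, loosely based on the ESC example; real composition theories are considerably richer than simple addition.

```python
def end_to_end_latency(path, wcet, transfer):
    """Compose a path latency (ms) from per-component worst-case execution
    times and per-link transfer times -- a simplistic additive composition."""
    compute = sum(wcet[c] for c in path)
    comms = sum(transfer.get((a, b), 0.0) for a, b in zip(path, path[1:]))
    return compute + comms

# Hypothetical figures for the Wheels speed -> Brake valves chain.
wcet = {"WheelsSpeed": 1.0, "SCS": 3.0, "ABS": 2.0, "BrakeValves": 0.5}
transfer = {("SCS", "ABS"): 1.5}  # a network hop between two nodes
path = ["WheelsSpeed", "SCS", "ABS", "BrakeValves"]

latency = end_to_end_latency(path, wcet, transfer)
assert latency <= 10.0  # the 10 ms requirement from the motivating example
```

Early in a project the `wcet` values would be expert estimates or budgets; later they would come from analysis or measurement, with the same composition and milestone check applied throughout.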
We are performing several complementary studies of this principle, each aiming at providing different types of insights. The studies explore the characteristics of MDD/CBSE projects implementing the CARMA principle, as is explained in detail in the next section.

4 PRELIMINARY RESULTS

This section outlines four evaluation methods which are all underway in our research agenda. For each we briefly describe the research method, the evaluation point of view, and preliminary results. Our main basis for both interviews and extensions of existing implementation is the technology development at the Progress Centre for Predictable Embedded Software Systems (http://www.mrtc.mdh.se/progress/).

4.1 Interviews and Documentation

**Research question:** How do the researchers developing modeling languages and methods, analysis methods, synthesis to executables, etc. envision the benefits of their methods? Is the approach by and large feasible for embedded systems projects?

**Research method:** We have performed interviews with researchers of various MDD/CBSE modeling/analysis/construction methods, and tool builders. As a concrete artifact discussed during the interviews, a process simulation model (see section 4.2) has been iterated with these researchers. We have also studied industrial requirements specifications with the objective of identifying how closely they match the proposed approach.

Preliminary results: The interviews and ongoing collaboration have led to the formulation of the CARMA principle as well as the construction of a simulation model (see section 4.2). The study of industrial requirements specifications has led to the following observations: - Some requirements are specified in enough detail to allow specific pass/fail criteria to be specified, and are relatively easy to map to the CARMA principle. In particular: - Many product requirements are specified in terms of execution steps, sometimes using some kind of dynamic diagram.
Example: “During startup, register X shall first be read to determine the cause of the last shutdown/reset. If the cause is… then do…” - Some product requirements describe timing behavior of some execution steps. Example: “The first phase of startup shall take less than X ms; the second step Y ms; …” - Some product requirements, but not many, are hardware specifications. Example: “The processor shall be of type X”; “the software image shall fit in X bytes of memory”. - Requirements on safety-related functions are formulated to be unambiguous and verifiable, and with the highest level of detail (including e.g. timing and resource usage as described in the bullets above). This is due to the potentially catastrophic effects of a specification error.

4.2 Process Simulation

Research method: We are simulating a queuing network model (Kobayashi, 1978) of a development process where the CARMA principle is adopted. We vary input parameters such as requirements volatility, the likelihood of detecting problems in analysis and verification, the amount of verification and the points in time at which it is performed (e.g. milestones throughout the project, and/or only or mainly at the end of the project), and the actions taken in case a problem is found (e.g. try to optimize, re-architect the system, drop or relax requirements, etc.). This model is iterated with the interviewees as indicated in section 4.1.

Research question: If adopting the MDD/CBSE paradigm and the CARMA principle, what factors affect the project outcome the most?

Preliminary results: The simulation results so far indicate that with frequent milestone verifications, the same amount of effort is spent on verification as when verification is performed at the end.
However, the simulation results indicate several drawbacks with verification occurring only at the end: 1) more verifiers (i.e., people) are needed at the same time, 2) problems are found late, which causes a feedback loop of error correction and re-verification (it is easy to translate this into a sense of urgency and “fire-fighting” in the development organization), and 3) the project time is somewhat prolonged (but not very much). Also, with more volatile requirements (i.e. changed or added throughout the project) the total effort is increased. These results intuitively seem to be applicable more generally, and we take this as a sign of credibility of the simulation model. We hope that the simulation results, once fully analyzed, will provide concrete guidelines on how to plan and dimension MDD/CBSE development projects in different circumstances.

4.3 Tool Implementation

Research method: We are implementing an extension of the Progress Integrated Development Environment, which implements the desired attribute framework for components (Sentilles et al., 2009). In this extension, requirements on attribute values are distinguished from actual attribute values, and there will be a general mechanism to compare these, as well as display summaries and visualizations of milestone verifications, etc. We will then use this tool in student projects.

Research question: Through the construction of the tool extension, we hope to uncover details overlooked earlier. By using the tool in student projects, we are able to collect insights. In particular, we can observe the amount of perceived overhead the approach introduces, and collect suggestions for e.g. automation and user interface improvements.

4.4 Presentation of the principle as a CMMI extension

Research method: We are systematically extending a well-known process model, CMMI (Chrissis, Konrad, and Shrum, 2007), to clarify and explain the CARMA principle.
Research question: This is dissemination rather than evaluation, similar to CMMI extensions for safety-critical systems (Defence Materiel Organisation, Australian Department of Defence, 2007) and an extension for the medical domain (McCaffery, Burton, and Richardson, 2009).

Preliminary results: An initial version has been published (Land et al., 2009), and we are currently extending the guidelines so that not only the software components and associated models are covered, but also data (e.g. databases) and hardware nodes and networks, which are extremely important for accurate analysis of e.g. timing.

5 CONCLUSIONS

Our findings so far indicate that the MDD/CBSE combination can be used in development projects and potentially reduce costs, time, and especially risk. With input from the research fields of MDD and CBSE as well as industry, the CARMA principle has been formulated and is shown to be reasonably realistic. When there are mature tools available, the results may be developed into guidelines for application. Further studies will also need to go beyond ProCom.

ACKNOWLEDGEMENTS

This work was partially supported by the Swedish Foundation for Strategic Research (SSF) via the strategic research centre PROGRESS.

REFERENCES
Global Information Assurance Certification Paper
Copyright SANS Institute
Author Retains Full Rights

An Architecture for Implementing Enterprise Multifactor Authentication with Open Source Tools

GIAC (GSEC) Gold Certification
Author: Tom Webb, Tcw3bb@gmail.com
Advisor: Kees Leune
Accepted: December 5th, 2013

Abstract

Credential stealing has become an epidemic. Organizations need an effective and cost-efficient way to reduce the likelihood of being exploited through this vulnerability. Leveraging Google Authenticator, RADIUS and a REST API will allow enterprises to enroll users into their existing service portal and integrate services easily with this well-supported protocol.

1. Multifactor background

We are all familiar with how password authentication works, as we log into dozens of systems each day to check email or view a bank account balance. This type of authentication is considered single factor authentication. Authentication can happen using something you know, something you have, something you are or somewhere you are (Bishop, 2004). Multifactor combines two or more of these methods to create a stronger authentication. Traditionally, multifactor authentication has been expensive to deploy, based on the cost of buying equipment and the time it took to enroll individuals into the systems. The use of smart phones in multifactor authentication has lowered the cost of deployment significantly. This is due to savings from not requiring hardware tokens and the use of open source software tokens on the phones. Time-based one-time passwords (TOTP) are the method this paper focuses on. They are considered the *something you have* portion of authentication methods. There are several TOTP products, of which RSA SecurID is the most well-known.
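The TOTP algorithm itself (RFC 6238, the scheme Google Authenticator implements) is small enough to sketch in full. The following is an illustrative implementation, not code from this architecture, checked against the RFC's published test vectors:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    followed by the dynamic truncation defined in RFC 4226."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)          # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # low nibble picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, ASCII seed "12345678901234567890"):
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

Both sides only need a shared seed and reasonably synchronized clocks, which is why NTP configuration matters later in this paper.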
These products use various methods to generate a number specific to the individual user that expires after a predefined time. Biometrics is based on something you are. Two types of biometrics we find being used on consumer devices are fingerprint scanners on laptops and facial recognition used on Android phones. Deployment of biometrics to a large organization is difficult because every computing device must have a reader, and enrollment of individuals into an enterprise solution is time consuming. While the new iPhone 5s is one of the largest consumer devices that have this integrated, there is a reported 20 percent authentication failure rate (Kosner, 2013). Collecting this data also has privacy concerns (Prabhakar, Pankanti, & Jain, 2003) which could slow individuals' adoption of the technology.

1.1. What risk does it reduce?

Multifactor authentication prevents attackers from gaining access to systems if they are able to steal the user's passwords. Collecting passwords is very easy for advanced attackers, and sixty percent of people are likely to fall for a well-crafted phishing email (Sixty percent will fall to a phishing attack that might herald an APT, 2013). Adding another piece of information the attacker needs to access the systems makes the attack more complicated. In NIST Special Publication 800-63-1 pg. 45, many types of threats and mitigation strategies are covered in section 6.2 (National Institute of Standards and Technology, 2011). Two of the more common threats are theft and revealing the token using social engineering. To reduce the effectiveness of theft, you can use a token that requires a PIN before presenting the OTP. For a social engineering attack, make sure that token entropy is high enough to prevent guessing future values based on a single known one.

1.2. Google Authenticator OTP

There are two main types of OTP supported by Google Authenticator.
One is based on time (TOTP), discussed in RFC 6238 (M'Raihi, Machani, Pei, & Rydell, 2011), and one is based on an event: the HMAC one-time password (HOTP) discussed in RFC 4226 (M'Raihi, Bellare, Hoornaert, Naccache, & Ranen, 2005). Time-based tokens are the most common, but you can also implement HOTP with this configuration by changing what type of token you generate during the initial enrollment process.

2. Architecture

The system design described in this paper uses Redhat EL 6 as a starting point, but can be easily ported to CentOS for individuals who do not use Redhat. Kerberos is used to authenticate to Microsoft Active Directory (AD), and FreeRADIUS will use the Pluggable Authentication Module (PAM) for Google's TOTP. Authorization in this design should occur at the application level, because this system authenticates a large number of different services. If you want to do authorization from RADIUS, get more information on LDAP authorization at the FreeRADIUS website (http://wiki.freeradius.org/modules/rlm_ldap).

To set up a system or service to use multifactor, in this design, it must support the RADIUS protocol for authentication. Each service that you wish to add should have a separate RADIUS secret key to prevent others from reading authentication traffic. The RADIUS shared secret is susceptible to offline dictionary attacks (Aboba, 2005). Each key should be a minimum of 25 characters to make brute forcing the secret a less attractive option for attackers. If authentication is occurring over untrusted networks, use IPSEC to provide more robust encryption to the protocol. Figure 1 displays the process of an SSH service setup to use RADIUS as authentication. When the user enters his credentials, the SSH service passes the information to the RADIUS server. ![Diagram of SSH service setup](image) *Figure 1.
Radius Authentication using Kerberos and Google Authenticator.* When authenticating, users will need to combine both their AD credential and TOTP password into one word. If your password is bob and your token is 987654 then, when prompted, you will “smash” them together (e.g. bob987654). Appendix A covers the must-have security considerations when implementing this architecture.

3. Enrollment

Most large organizations already have a portal for their users to perform self-service functions. A web API will be used to allow self-enrollment into the system. When the user initiates the enrollment, a local user is created on the RADIUS system. The user must have a local account, as this is how enrollment is tracked and where the secret key for creating the OTP is kept. Once the local user is created, a secret seed key is created in /var/Google-auth/userid. The secret key will be sent to the web service that requested the creation via a QRCode. This code should be strictly protected. If an attacker gets access to this code, they will be able to create OTPs for that user. The attacker will still need the user's AD credential, but they will have a critical piece of data to perform an attack.

![Flowchart of Enrollment API]

Figure 2. Logic of Enrollment API.

4. Disenrollment

To remove a user from the multifactor system, you should delete their local user along with the OTP secret seed. If you disable the user in Active Directory, they will no longer be able to authenticate using the RADIUS server. To make sure that users do not linger, this system should also be part of your HR off-boarding process.

![Diagram of disenrollment process]

Figure 3. Logic of disenrollment process.

5. Configuration

Software Installation

```
>sudo yum install ntp pam_krb5 krb5-workstation freeradius-utils.x86_64 make gcc bzip2 gcc-c++ pam-devel libpng libpng-devel freeradius
```

Install the latest software for QRcodes from the libqrencode web site (http://fukuchi.org/works/qrencode) by downloading and compiling it. This is used to create the QRcode for the secret seed key during the enrollment process, to prevent users from mistyping the key.

```
>tar -zxvf qrencode-*.tar.gz
>cd qrencode*
>./configure
>make;make install
```

5.1. NTP

When using TOTP, having time synchronization on the server and your token device is critical. The Network Time Protocol (NTP) is used to keep time synchronized. Have it run at startup using the following command:

```
> chkconfig ntpd on
```

On the server, you will need to edit /etc/ntp.conf and add your primary time server. Comment out any default servers, unless these are your primary time servers. The file format is “server <ip address>”.

```
server 1.2.3.4
```

Next, you should start the NTP daemon and check to make sure the time sync is working appropriately.

```
> sudo service ntpd start
>ntpq -p -n
```

The ntpq results should resemble the output in Figure 4. The most important columns to check are the last time it polled and the offset. These numbers are in milliseconds, and the values should not be very high.

```
[root@localhost ~]# ntpq -p -n
     remote           refid      st t when poll reach   delay   offset  jitter
 193.227.197.2   130.149.17.8     2 u   11   64  377  156.885    5.267   8.763
 173.255.227.205 209.51.161.238   2 u   57   64  377   40.298    6.668   6.004
 208.68.36.196   209.51.161.238   2 u   31   64  377   41.604    8.159   5.033
```

Figure 4. Typical ntpq results.

5.2. Kerberos

Using Kerberos to authenticate to Active Directory is easy. By following the basic steps on the Indiana University site (In Red Hat Enterprise Linux, how do I authenticate to ADS.IU.EDU using Kerberos?, 2013) we will configure our systems.
```
> /usr/sbin/authconfig-tui
* Use Kerberos
```

Figure 5. Authconfig-tui interface.

In this configuration, we are not going to use LDAP for user information. Users will be added to this box to confirm they have enrolled in the multifactor system. Fill in the appropriate settings for the Active Directory environment as seen in Figure 5.

5.3. PAM

With the PAM RADIUS module, we are setting up how PAM should authenticate anyone who uses this service. Set up each user with his own token in a custom location where Google Authenticator stores the keys. In this case, place the keys into the /var/Google-auth folder and name each file with the user name. With the configuration below, both the Google Authenticator token and the Kerberos password must be correct before access is granted by RADIUS.

```
/etc/pam.d/radiusd

# Use the right 6 digits for google-authenticator (forward_pass)
auth requisite pam_google_authenticator.so user=root secret=/var/Google-auth/${USER}_google_auth forward_pass
auth required pam_krb5.so use_first_pass
account sufficient pam_localuser.so
```

5.4. Google Authenticator

Download the latest Google Authenticator PAM module from its home page. Follow the instructions in the README file; the build uses the traditional gcc compile steps. Once you are done compiling, remove gcc. This will make it more difficult for attackers who gain access to your system to install additional tools.

```
>bzip2 -d libpam-google-authenticator-<VERSION>-source.tar.bz2
>tar -xvf libpam-google-authenticator-<VERSION>-source.tar
>cd libpam*
>make install
>sudo yum erase gcc pam-devel
```

5.5. FreeRADIUS

FreeRADIUS has been around for a while and it has many features. These instructions cover version 2.2. FreeRADIUS has released version 3.0, but it is not considered stable at this time. One of the main security features it now supports is RadSec, or TLS encryption. Unfortunately, because FreeRADIUS needs PAM, it has to run as the root user.
To help reduce the attack profile, SELinux should remain enabled on your system, which is the default. Additionally, limit access to UDP port 1812 to only the systems that will be authenticating to the server. Make the following changes to the RADIUS configuration file:

```bash
>vi /etc/raddb/radiusd.conf

user = root
group = radiusd
destination = syslog
stripped_names = yes
auth = yes
```

Set the default auth method for RADIUS to PAM by adding the setting below to the `/etc/raddb/users` file. Additionally, in the file `/etc/raddb/sites-enabled/default`, enable PAM under the authenticate section.

```bash
>vi /etc/raddb/users
DEFAULT Auth-Type := PAM

>vi /etc/raddb/sites-enabled/default
pam #Remove the # sign in front
```

Finally, make sure that appropriate permissions are set on the folders to prevent others from accessing them. The most critical folder holds the Google Authenticator secret seed codes; make sure no one else can access it, to prevent someone from duplicating a key and impersonating an authorized user.

```bash
> chown radiusd:radiusd /var/Google-auth/
> chmod 400 /var/Google-auth/
> chcon -v --type=radiusd_t /var/Google-auth/
```

5.6. **Enrollment API**

When designing the enrollment code, it’s critical to keep security in mind. This process will create the secret key and send the QRcode to your portal. If attackers gain access to this system, they could steal a key or even create a key for themselves. As this is a simple service for creating secret keys, a REST API is a good fit. According to Rodriguez, REST APIs use HTTP methods explicitly, are stateless, and expose directory-structure-like URIs (2008). Where information is read from the API, a GET request with the desired query should be used; a POST request will create users. All traffic between the portal and the enrollment API must be encrypted. This is a web service, and SSL is the best way to do this.
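At its core, such an enrollment service just generates a random base32 seed and wraps it in an `otpauth://` URI, which the portal renders as a QRcode. A minimal sketch using the libqrencode tool installed earlier; the user name, label, and issuer are illustrative placeholders, not part of any published API:

```shell
user="jdoe"   # illustrative user name

# 10 random bytes -> 16 base32 characters, the seed format Google Authenticator expects
secret=$(head -c 10 /dev/urandom | base32 | tr -d '=')

# Standard otpauth URI consumed by the Google Authenticator app
otpauth="otpauth://totp/${user}@example.com?secret=${secret}&issuer=ExampleOrg"

# Render the QRcode for the portal, if qrencode is available on this host
if command -v qrencode >/dev/null 2>&1; then
    qrencode -o "/tmp/${user}_enroll.png" "$otpauth"
fi

echo "$otpauth"
```

The seed itself would then be written to `/var/Google-auth/` for the PAM module, and only the QRcode passed back to the portal.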
Any system that will request a user enrollment should have its own unique API key. This key acts as additional authentication for the request and helps keep track of the different servers that are using the API. Access to the API should be limited via a local or network firewall so that only your portal IPs can reach it. To prevent injection into the API, create a whitelist of approved input values and encode response output to stop other possible attack vectors. The API should keep detailed log information about all requests made. The web service making these requests should have limited permissions to prevent it from being used to escalate privileges. An example API is available at my github site.\(^4\)

6. **Service Integration**

Integration with systems can be complicated and cumbersome. The key to rolling out multifactor is to determine which systems, based on your data classification, should require strong authentication. By rolling out per service, you can target communications and take a methodical approach that helps make the deployment a success.

---
\(^4\) [https://github.com/tcw3bb/google-auth-api](https://github.com/tcw3bb/google-auth-api)

6.1. VPN

Many mature products support RADIUS as an authentication method. One of the most common services on which to implement multifactor authentication is VPN; Juniper SSL, Cisco ASA and OpenVPN all support this method. Authorization is critical so that only approved individuals can use the VPN. Remote access and email are the most targeted services for stolen credentials, which is why multifactor deployment for these services is the most common.

6.2. Web

Integrating Apache directly with RADIUS is easy using mod_auth_radius. To protect specific web directories with multifactor, add them to the apache.conf file. This is an easy way to retrofit a website without modifying its code at all. The FreeRADIUS website\(^5\) has a great article on proper setup.
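A configuration along these lines is what that article walks through. A hedged sketch for apache.conf, with the directive names taken from the mod_auth_radius documentation; the directory path and shared secret are placeholders, and you should verify directive spelling against your module version:

```
LoadModule radius_auth_module modules/mod_auth_radius.so

# RADIUS server:port, shared secret, and timeout:retries
AddRadiusAuth localhost:1812 SECRET 5:3

<Directory "/var/www/html/protected">
    AuthType Basic
    AuthName "Multifactor Protected"
    AuthRadiusAuthoritative on
    AuthRadiusActive on
    require valid-user
</Directory>
```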
You can set up the user login form to integrate directly with your RADIUS server. However, a better architecture is an enterprise web single sign-on (SSO) using Shibboleth or a system such as CAS.

6.3. SSH

SSH configuration is done through the PAM sshd plugin. With this setup you will require multifactor authentication for remote connections, but local authentication will still use the local user’s password. You may use your AD credentials for sudo or require an additional TOTP, but that may be covered in a later paper.

**Test connectivity**

Before you set up SSH for RADIUS, make sure that the server can communicate with RADIUS, to prevent being locked out of the server. Additionally, keep an extra SSH session open to the server while testing to limit the chance of losing access to the system. You should see traffic in the RADIUS server’s tcpdump output. If you do not, then you have a local or network firewall issue that needs to be resolved before moving on.

---
\(^5\) http://freeradius.org/mod_auth_radius/

From the RADIUS server:

```
>sudo tcpdump -nnvi eth0 host <New Server IP> and not port 22
```

From the new SSH server:

```
>nc -u <radius IP> 1812
```

**On the RADIUS Server**

Once connectivity has been confirmed, set up the client to authenticate to the server. Make sure to set a unique secret for each client for maximum security. This prevents someone who has brute-forced one key from sniffing all passwords used for authentication.

```
>vi /etc/raddb/clients.conf
```

```plaintext
client NAME {
    ipaddr = <SSH Server IP>
    secret = SECRET
}
```

**On the SSH Server**

Download the latest version of the PAM RADIUS module from the FreeRADIUS site.\(^6\) You will also need to install a couple of additional packages to compile the software.
```
>yum install pam-devel
>tar -zxvf pam*; cd pam*; ./configure; make
>sudo make install
```

To set up authentication against the RADIUS server, edit the `pam_radius_auth.conf` file and set the IP and shared secret of the system. The file format is IP, shared secret, and timeout.

```
>sudo vi /etc/pam_radius_auth.conf
```

```plaintext
#Place a # in front of the line with 127.0.0.1
1.1.1.1 shared_secret 3
```

---
\(^6\) http://freeradius.org/pam_radius_auth

The last piece is to set up PAM to use the RADIUS module for authentication. We are going to require RADIUS to log into the system. Place the change at the top of the file and restart the SSH daemon.

```bash
>sudo vi /etc/pam.d/sshd
auth required /lib/security/pam_radius_auth.so #place at top of file

>sudo service sshd restart
```

7. Conclusion

While the technology seems to be the most cost-prohibitive portion of multifactor authentication, user awareness, education on enrollment, and preparing your helpdesk for the onslaught of calls during the initial rollout are critical to the success of the deployment. Targeting users’ credentials is the easiest way to gain access to systems, and attackers are going to continue to exploit this vulnerability until another attack vector becomes more effective. Using some custom code for enrollment and leveraging smart phones allows for a low-cost alternative for multifactor authentication.

8. References

Sixty percent will fall to a phishing attack that might herald an APT. (2013, January 15). Retrieved from Infosecurity-magazine: http://www.infosecurity-magazine.com/view/30220/sixty-percent-will-fall-to-a-phishing-attack-that-might-herald-an-apt/

Appendix A (Checklist)

REST API
- SSL
- Logging
- Limited service permissions
- Input Validation
- Encode responses
- Pass secret or QRcode to portal
- API Key per request system Radius Security - SELinux - Kerberos AD authentication - IPsec across untrusted networks - NTP - Syslog to another system - File permissions on the secret key folder - IPTables for only allowed hosts to make API requests ## Upcoming Training <table> <thead> <tr> <th>Event</th> <th>Location</th> <th>Dates</th> <th>Organizer</th> </tr> </thead> <tbody> <tr> <td>SANSFIRE 2020</td> <td>, DC</td> <td>Jun 13, 2020 - Jun 20, 2020</td> <td>CyberCon</td> </tr> <tr> <td>Instructor-Led Training</td> <td>Jun 22</td> <td>PA</td> <td>Jun 22, 2020 - Jun 27, 2020</td> </tr> <tr> <td>Cyber Defence Australia Online 2020</td> <td>, Australia</td> <td>Jun 22, 2020 - Jul 04, 2020</td> <td>CyberCon</td> </tr> <tr> <td>Live Online - SEC401: Security Essentials Bootcamp Style</td> <td>, United Arab Emirates</td> <td>Jun 24, 2020 - Jul 31, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Japan Live Online July 2020</td> <td>, Japan</td> <td>Jun 29, 2020 - Jul 11, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Summer of Cyber</td> <td>Jul 6</td> <td>, VA</td> <td>Jul 06, 2020 - Jul 17, 2020</td> </tr> <tr> <td>Live Online - SEC401: Security Essentials Bootcamp Style</td> <td>, United Arab Emirates</td> <td>Jul 13, 2020 - Aug 01, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS SEC401 Europe Online July 2020</td> <td>, United Arab Emirates</td> <td>Jul 13, 2020 - Jul 18, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Rocky Mountain Summer 2020</td> <td>, CO</td> <td>Jul 20, 2020 - Jul 25, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Summer of Cyber</td> <td>Jul 27</td> <td>, NC</td> <td>Jul 27, 2020 - Aug 01, 2020</td> </tr> <tr> <td>Instructor-Led Training</td> <td>Aug 3 ET</td> <td>, MA</td> <td>Aug 03, 2020 - Aug 08, 2020</td> </tr> <tr> <td>SANS SEC401 Europe Online August 2020</td> <td>, United Arab Emirates</td> <td>Aug 10, 2020 - Aug 15, 2020</td> <td>CyberCon</td> </tr> <tr> <td>Instructor-Led Training</td> <td>Aug 10 MT</td> <td>, WA</td> <td>Aug 10, 
2020 - Aug 15, 2020</td> </tr> <tr> <td>Cyber Defence APAC Live Online 2020</td> <td>, Singapore</td> <td>Aug 17, 2020 - Aug 22, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS SEC401 Multi-Week Europe Online 2020</td> <td>, United Arab Emirates</td> <td>Aug 17, 2020 - Aug 28, 2020</td> <td>CyberCon</td> </tr> <tr> <td>Instructor-Led Training</td> <td>Aug 17 ET</td> <td>, DC</td> <td>Aug 17, 2020 - Aug 22, 2020</td> </tr> <tr> <td>SANS Virginia Beach 2020</td> <td>Virginia Beach, VA</td> <td>Aug 31, 2020 - Sep 05, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Virginia Beach 2020 - Live Online</td> <td>Virginia Beach, VA</td> <td>Aug 31, 2020 - Sep 05, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS London September 2020</td> <td>London, United Kingdom</td> <td>Sep 07, 2020 - Sep 12, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Baltimore Fall 2020</td> <td>Baltimore, MD</td> <td>Sep 08, 2020 - Sep 13, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Munich September 2020</td> <td>Munich, Germany</td> <td>Sep 14, 2020 - Sep 19, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Network Security 2020</td> <td>Las Vegas, NV</td> <td>Sep 20, 2020 - Sep 27, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Australia Spring Online 2020</td> <td>, Australia</td> <td>Sep 21, 2020 - Oct 03, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Northern VA - Reston Fall 2020</td> <td>Reston, VA</td> <td>Sep 28, 2020 - Oct 03, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Amsterdam October 2020</td> <td>Amsterdam, Netherlands</td> <td>Oct 05, 2020 - Oct 10, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Tokyo Autumn 2020</td> <td>Tokyo, Japan</td> <td>Oct 05, 2020 - Oct 17, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Dallas Fall 2020</td> <td>Dallas, TX</td> <td>Oct 19, 2020 - Oct 24, 2020</td> <td>Live Event</td> </tr> </tbody> </table>
{"Source-Url": "https://www.giac.org/paper/gsec/31888/architecture-implementing-enterprise-multifactor-authentication-open-source-tools/110964", "len_cl100k_base": 5909, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 42376, "total-output-tokens": 7224, "length": "2e12", "weborganizer": {"__label__adult": 0.0005369186401367188, "__label__art_design": 0.0007257461547851562, "__label__crime_law": 0.003253936767578125, "__label__education_jobs": 0.01309967041015625, "__label__entertainment": 0.00016045570373535156, "__label__fashion_beauty": 0.0003228187561035156, "__label__finance_business": 0.0027866363525390625, "__label__food_dining": 0.00042629241943359375, "__label__games": 0.0007848739624023438, "__label__hardware": 0.00421142578125, "__label__health": 0.0011577606201171875, "__label__history": 0.00042557716369628906, "__label__home_hobbies": 0.00019884109497070312, "__label__industrial": 0.00142669677734375, "__label__literature": 0.0003516674041748047, "__label__politics": 0.0005922317504882812, "__label__religion": 0.0005507469177246094, "__label__science_tech": 0.361083984375, "__label__social_life": 0.0002834796905517578, "__label__software": 0.0771484375, "__label__software_dev": 0.529296875, "__label__sports_fitness": 0.0003581047058105469, "__label__transportation": 0.0006418228149414062, "__label__travel": 0.00022780895233154297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24703, 0.04053]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24703, 0.08011]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24703, 0.8438]], "google_gemma-3-12b-it_contains_pii": [[0, 272, false], [272, 867, null], [867, 2985, null], [2985, 4924, null], [4924, 6155, null], [6155, 7040, null], [7040, 7682, null], [7682, 9225, null], [9225, 9843, null], [9843, 11386, null], [11386, 12762, 
null], [12762, 14776, null], [14776, 16589, null], [16589, 17841, null], [17841, 18816, null], [18816, 20448, null], [20448, 20851, null], [20851, 24703, null]], "google_gemma-3-12b-it_is_public_document": [[0, 272, true], [272, 867, null], [867, 2985, null], [2985, 4924, null], [4924, 6155, null], [6155, 7040, null], [7040, 7682, null], [7682, 9225, null], [9225, 9843, null], [9843, 11386, null], [11386, 12762, null], [12762, 14776, null], [14776, 16589, null], [16589, 17841, null], [17841, 18816, null], [18816, 20448, null], [20448, 20851, null], [20851, 24703, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24703, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24703, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24703, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24703, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24703, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24703, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24703, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24703, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24703, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24703, null]], "pdf_page_numbers": [[0, 272, 1], [272, 867, 2], [867, 2985, 3], [2985, 4924, 4], [4924, 6155, 5], [6155, 7040, 6], [7040, 7682, 7], [7682, 9225, 8], [9225, 9843, 9], [9843, 11386, 10], [11386, 12762, 11], [12762, 14776, 12], [14776, 16589, 13], [16589, 17841, 14], [17841, 18816, 15], [18816, 20448, 16], [20448, 20851, 17], [20851, 24703, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24703, 0.12453]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
911866f441466d8c75f7e52ae7b240fb57454bda
## Table of Contents Cover Preface Contents Location of VideoNotes in the Text Online Labs Part 1: Becoming Skilled at Computing Part 1: Introduction Chapter 1: Defining Information Technology Terms of Endearment Computations Greatest Hits Digitizing Information Stored-Program Computers The Switch to Transistors Integrated Circuits Personal Computers The Internet HTTP and the World Wide Web Layered Software Development The Great Part of the Greatest Hits Terms of Endearment Tech Support Anchoring Knowledge Computers, Software, Algorithms Find the Computer Software Algorithms The Words for Ideas Abstract Generalize Operationally Attuned Mnemonic Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 2: Exploring the Human-Computer Interface Face It, Its a Computer A Few Useful Concepts Feedback Consistent Interface # Table of Contents - New Instance - Perfect Reproduction - An Exact Duplicate - Copying - What We See and What We Think - Metaphors - The Desktop - The Touch Metaphor - Relationship Between Metaphors - Summary of Metaphors - Summary - Try It Solutions - Review Questions - Multiple Choice - Short Answer - Exercises ## Chapter 3: The Basics of Networking Making the Connection ### Comparing Communication Types - General Communication - The Internets Communication Properties - The Client/Server Structure - Appearing to Stay Connected ### The Medium of the Message - The Name Game of Computer Addresses - Following Protocol - Far and Near: WAN and LAN - Connecting Your Computer to the Internet - Domains and the DNS - DNS Summary ### The World Wide Web - Requesting a Web Page - The Internet and the Web - Describing a Web Page ### File Structure - Directory Hierarchy - Organizing the Folder ### Summary - Try It Solutions - Review Questions - Multiple Choice - Short Answer - Exercises ## Chapter 4: A Hypertext Markup Language Primer Marking Up with HTML ### Marking Up with HTML - Formatting with Tags Table of Contents Tags for 
Bold and Italic Required Tags Lab Practice I Firefox Text Editor Hello, World! Save This Page Practicing in the Lab Structuring Documents Headings in HTML HTML Format Versus Display Format White Space Attributes Brackets in HTML: The Escape Symbol Accent Marks in HTML Lab Practice II Compose and Check Markup Validation Service Get Into Style with CSS A Place for Style Styling Background and Paragraph CSS Styling Designing the Paradoxes Page Marking Links and Images Two Sides of a Hyperlink Structure of the Image Tag Referring to Files Referring to Pages and Images Span, Lists, Tables, and Boxes Span Lists Tags Handling Tables The Box Model Cascading Style Sheets Style in Many Places Globally Speaking The Cascade Styling with Class A class Attribute An Alternate Class Hovering Above Links Navigation Bars HTML Wrap-Up Gradient Background Easy Enough for a Computer Summary Table of Contents Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 5: Locating Information on the WWW The Search for Truth Web Search Fundamentals How a Search Engine Works Multiword Searches Descriptive Terms Page Rank Advanced Searches The Logical Operator AND Complex Queries Combining Logical Operators Restricting Global Search Focused Searches Web Searching Selecting Search Terms The Anatomy of a Hit Using the Hit List Once You Find a Likely Page Searching Strategy Summary Bing Search Authoritative Information Don’t Believe Everything You Read Wikipedia What is Authoritative? Authoritative Sources Truth or Fiction? Site Analysis Tough Work Summary Chapter 6: An Introduction to Debugging To Err Is Human Precision: The High Standards of Computing Be Accurate Be Observant Debugging: What’s the Problem? Debugging in Everyday Life Debugging in Information Technology # Table of Contents - Whose Problem is It? 
- Using the Computer to Debug - A Dialog About Debugging - Debugging Recap - Fixing HTML Bugs: A Case Study - Look At the Page Closely - Focusing the Search - Nearly Perfect - Debugging the JJK Page: A Postmortem - No Printer Output: A Classic Scenario - Applying the Debugging Strategy - Pressing On - The Print Queue - Calling Tech Support? - Ensuring the Reliability of Software - Safety-Critical Applications - Fail-Soft and Fail-Safe Software - Community Debugging - Summary - Try It Solutions - Review Questions - Multiple Choice - Short Answer - Exercises - Interview with Vinton G. Cerf ## Part 2: Algorithms and Digitizing Information ### Part 2: Introduction ### Chapter 7: Representing Information Digitally Bits and the Why of Bytes - Digitizing Discrete Information - Limitation of Digits - Alternative Representations - Symbols, Briefly - Ordering Symbols - Information Representation - Beyond the Physical World - Memory - Bits in Computer Memory - Binary and Hex - Binary - Hex - Changing Hex Digits to Bits and Back Again - Digitizing Numbers in Binary - Binary Numbers Compared with Decimal Numbers - Digitizing Text # Table of Contents Assigning Symbols Extended ASCII: An 8-Bit Code ASCII Coding of Phone Numbers Advantages of Long Encodings NATO Broadcast Alphabet Bar Codes UTF-8 The Metadata and the OED Properties of Data Using Tags for Metadata Structure Tags Sample OED Entry Why Byte? Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises ## Chapter 8: Representing Multimedia Digitally Light, Sound, Magic ### Digitizing Color - Color and the Mystery of Light - Yellow = R + G? 
- Green Paint = Blue + Yellow - Making a Big Display - Thinking About Intensities - Black and White Colors - Decimal to Binary - Lighten Up: Changing Colors by Addition - To Increase Intensity: Add in Binary - Lighter Still: Adding with Carry Digits ### Computing on Representations - Old Photographs - Increasing Brightness and Contrast - Binary Addition - Contrast - Adding Color - Summary of Digital Color ### Digitizing Sound - Analog to Digital - Advantages of Digital Sound ### Digital Images and Video - Image Compression - JPEG - MPEG Compression Scheme Optical Character Recognition Table of Contents Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 10: Algorithmic Thinking What’s the Plan? Algorithms Writing One Letter at a Time Homemade Algorithms Many Questions; Fewer Questions Writing Algorithms Algorithms Versus Programs Experience with Algorithms Textbook Examples of Algorithms Algorithms Versus Heuristic Processes Inventing Algorithms Algorithms A Basic Concept A Definition A Closer Look Query Evaluation Intersecting Lists A Familiar Solution How Not to Match Different Solutions Doing the Right Thing A Strategy Explaining Why IAL Works Summary on Correctness Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Interview with Ray Kurzweil Part 3: Data and Information Chapter 11: Social Implications of IT Computers in Polite Society The Power of the Crowd Crowdsourcing Be a Martian Foldit Civic Participation Freerice Kickstarter Out on Good Behavior Table of Contents Netiquette Specific Guidelines for Email Please, Don’t Be Offended Expect the Unexpected The Onion Suspicious Activity Creating Good Passwords The Role of Passwords How Passwords Work Poor Passwords Creating Quality Passwords Easy to Remember Hard to Guess Managing Passwords Spam Controlling Spam Scams Nigerian Widow Scam Phishing The End of the Phishing Story Protecting Intellectual Property Licensing of Software Open Source 
Software Copyright on the Web Violating the Copyright Law Creative Commons Allow Copying and Distribution What to Keep, What to Give Creative Commons Summary Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 12: Privacy and Digital Security Shhh, It’s a Secret Privacy and Technology Modern Devices and Privacy Information Sources and Uses Controlling the Use of Information A Privacy Definition Enjoying the Benefits of Privacy Voluntary Disclosure Fair Information Practices OECD Fair Information Practices Table of Contents Is There No Privacy? Who is Protected? Business as Usual Targeted by Target Government, as Usual Tracking Online Tracking Cell Phones Cookies Appearing To Stay Connected The Right to Be Forgotten Identity Theft Digital Security Understanding the Problem Terms and Jargon What Does Malware Do? Prevention Play It Safe Safe Computing Checklist Oops, Now I've Done It! Plan of Action Encryption The Key to Encryption Keys Encrypting Example Private Key Encryption Public Key Encryption The Genius of PKC The Take-Home Message Factoring is Hard Back to the Coffee Shop Redundancy Is Very, Very, Very Good Protecting Your Data Backups and Recovery Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 13: The Basics of Spreadsheets Fill-in-the-Blank Computing Arranging Information An Array of Cells Sorting the Data Adding More Data to the List Computing with Spreadsheets # Table of Contents **Chapter 14: Advanced Spreadsheets for Planning What If Thinking Helps** **Designing a Spreadsheet** - The Trip - Design Guidelines - Initial Spreadsheet: Applying the Rules **Conditional Formatting** - Cell Value is Specifications - Formula is Specifications - Distinguish Between the United States and Canada **Conditional Formulas** - Figuring the Amount Paid - Cost in One Currency **Naming: Symbolic Reference** - Defining Names - Applying Names - Make Assumptions Explicit **What If Analysis** # Table 
of Contents Direct Experimentation Scenarios Analyzing a Model Analyzing Data Using Filtering Auto Filtering Technique Advanced Filtering Technique Filtering on Multiple Criteria Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 15: Introduction to Database Concepts A Table with a View Differences Between Tables and Databases Comparing Tables The Databases Advantage XML: A Language for Metadata Tags An Example from Tahiti Expanding the Use of XML Attributes in XML Effective Design with XML Tags The XML Tree Tables and Entities Entities Properties of Entities Every One Is Different The Science of Tables Relational Database Tables Computing with Tables Ask Any Question Summarizing the Science SQL: The Language of Databases Structure of a Database Physical and Logical Databases Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 16: A Case Study in Database Organization The iDiary Database Thinking About a Personal Database Regular Versus Irregular Data Physical Versus Logical Table of Contents The iDiary A Preliminary Exercise Travels Database Displaying the Travels with XSL The iDiary Database Getting Started Creating a First Entry (August 11) Thinking About the Nature of Things Developing Tags and Templates Using the iDiary Daily Archiving Photos Hiding Information Entering Data into the Database Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Interview with Alan Kay Part 4: Problem Solving Part 4: Introduction Chapter 17: Fundamental Concepts Expressed in JavaScript Get with the Program Overview: Programming Concepts Names, Values, and Variables Names Have Changing Values Names in a Program Are Called Variables Identifiers and Their Rules A Variable Declaration Statement The Statement Terminator Rules for Declaring Variables Three Basic Data Types of JavaScript Rules for Writing Numbers Strings Boolean Values The Assignment Statement Assignment Symbol 
Interpreting an Assignment Statement Three Key Points About Assignment Lab Practice Scratchpad Hello, World An Expression and Its Syntax Arithmetic Operators Relational Operators Logical Operators # Table of Contents A Conditional Statement - if Statements and Their Flow of Control - Compound Statements - if/else Statements - Nested if/else Statements The Espresso Program - The Logic of a Double Tall Latte Summary Try It Solutions Review Questions - Multiple Choice - Short Answer - Exercises Chapter 18: A JavaScript Program The Bean Counter Preliminaries - Background for the UI - Review of HTML Basics - Interacting with a UI - Three Input Elements Creating the Graphical User Interface - 1. Create a Button Table - 2. Delete Two Buttons - 3. Insert Text Box - 4. Label the Buttons - 5. Primp the Interface Event-Based Programming - The onclick Event Handler - Click Event - Shots Button - Size and Drink Buttons - Clear Button and Initializations - Referencing Data Across Inputs Critiquing the Bean Counter - Numbers Versus Money - Organization - Feedback - Application Bean Counter Recap - Program and Test - Assess the Program Design Summary Try It Solutions Review Questions - Multiple Choice - Short Answer # Table of Contents ## Exercises ### Chapter 19: Programming Functions Thinking Big - Anatomy of a Function - Converting Some Temperatures - Making the Call - Definition Versus Call - Forms and Functions - Writing Functions, Using Functions - Flipping Electronic Coins - The Body Mass Index Computation - Customizing Pages - Creating Page Content - Customizing the Coin Flip - Making a Web-Based Phone App - Design for Mobility - Referencing Functions - The Counter Assistants Structure - Better Applications - Recap: Two Reasons to Write Functions - Social Functions - Using Other Peoples Code - Making a Comment ### Summary ### Try It Solutions ### Review Questions - Multiple Choice - Short Answer - Exercises ### Chapter 20: Iteration Principles Once Is Not Enough - 
Iteration: Play It Again, Sam - The for Loop Basic Syntax - How a for Loop Works - JavaScript Rules for for Loops - The World-Famous Iteration - Why So Famous? - Avoiding Infinite Loops - Experiments with Flipping Coins - One Trial of 100 Flips - Multiple Trials - A Diagram of Results - Nested Loops ### Indexing - Index Syntax - Index Origin ### Arrays Table of Contents Rules for Arrays Array Reference Syntax Its Magic Setting Up the Array Structuring the Page The Busy Animation Using a Timer to Initiate Animation Prefetching Images Redrawing an Image Not So Busy Animation Three Key Ideas Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 21: A Case Study in Algorithmic Problem Solving The Smooth Motion Application The Smooth Motion Application How the Smooth Motion Application Should Work Planning Smooth Motion Apply the Decomposition Principle List the Tasks Decide on a Problem-Solving Strategy Build the Basic Web Page UI The Structural Page The Structural Page Heading Animate the Grid First Analysis Second Analysis Subtask: Define and Organize the Frames Subtask: Define and Place Initial Images Subtask: Prefetch the Frame Images Subtask: Set Timer and Build Timer Event Handler The Best Laid Plans . . . Build Controls Sense the Keys Subtask: Define and Organize the Frames Subtask: Place the Initial Images Subtask: Prefetch the Frames Subtask: Build the Event Handlers Combine the Subtasks Staircase Detection Subtask: Recognizing the Staircase Table of Contents Subtask: Recognizing Continuity Assemble Overall Design Primp the Design Assessment and Retrospective Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 22: Limits to Computation Computers Can Do Almost Everything, Nothing Can Computers Think? The Turing Test Passing the Test Acting Intelligently? 
Playing Chess A Game Tree Using the Game Tree Tactically Using Database Knowledge Using Parallel Computation The Deep Blue Matches Interpreting the Outcome of the Matches Watson Computer Versus Humans Technical Challenge Summary on Watson Acting Creatively? Creativity as a Spectrum What Part of Creativity is Algorithmic? The Universality Principle Universal Information Processor Practical Consequences of the Universality Principle More Work, Slower Speed Comparing IAL with NAL Are Best Algorithms All Fast? NP-Complete Problems Unsolvable Problems Summary Try It Solutions Review Questions Multiple Choice Short Answer Exercises Chapter 23: A Fluency Summary Click to Close Table of Contents Two Big Computing Ideas Information Structuring Strategies for Nonalgorithmic Tasks Fluency: Less Is More Lifelong IT Learning Pursuing New Uses Asking for Help Noticing New Technology Shifting for Yourself Try It Solutions Review Questions Multiple Choice Short Answer Exercises Interview with David Ferrucci Appendix Appendix A: HTML5 Reference Required HTML Tags HTML Tags Worked Example: D.C. Trip Page Appendix B: RSA Public Key Cryptosystem Choosing a Key Encrypting a Message The Decryption Method Summarizing the RSA System Appendix C: iDiary: Tags and Templates XML Database File iDiary.xml XSL file iDiarySS.xsl Appendix D: JavaScript Programming Rules Program Structure Data Types Variables and Declarations Expressions Arrays and Indexes Statements Functions Guidelines Appendix E: The Bean Counter Program Appendix F: myApps Page Appendix G: Smooth Motion Program Glossary Table of Contents Answers to Selected Questions Index Credits
Chapter 1 Introduction

1.1 What is Web Intelligence?

Today, the world is experiencing the excitement of an historic change. We find ourselves in the midst of an information revolution, the result of rapid advances in technology built in great part upon the shoulders of three pivotal pioneers: Kurt Gödel, Alan Turing and Tim Berners-Lee. Through their contributions, we are witnessing the remarkable refashioning of the Information Age, which began in the 1950s, into the Information Revolution as the World Wide Web evolves into a resource with intelligent capabilities. While the capabilities and scope of today's World Wide Web are impressive, its continuing evolution into a resource with intelligent features and capabilities presents many challenges. The traditional approach to building information systems has consisted of custom-made, costly database applications. However, this is changing. Information services are beginning to use generic components and open global standards to offer widely accessible graphical presentations with easier interaction. As a result, benefits are accruing to transactions over the Web in such areas as e-commerce, banking, manufacturing and education. At the heart of the Information Revolution is the transformation of the world toward a knowledge economy with a knowledge society. Helping to forge this transformation is the World Wide Web Consortium (W3C), which is working to deliver global machine processing built upon layers of open markup languages. It is widely accepted that the technology of today's Information Age has had a major impact on global communications and commerce and that it will continue to support major improvements in human productivity. However, while the World Wide Web is making significant contributions to this progress, there remain many challenges to its further development into a resource with intelligent features.
For the Information Age to achieve its full potential in improving human productivity, at least two key new advances must still be achieved: (1) ubiquitous access to transaction applications of all types and (2) intelligent software applications enabling automated transactions. For example, Web Services require human processing to be implemented. In addition, Web Services rely on the interoperation of two competing proprietary server frameworks to successfully communicate complex business logic. The solution of the W3C to both of these problems is to deliver automatic machine processing globally through a Web architecture utilizing layers of open markup languages. The term "intelligence" can be applied to nonhuman entities, as we do in the field of Artificial Intelligence (AI), but frequently we mean something somewhat different from what we mean by human intelligence. It is recognized that human thinking involves complicated interactions within the biological components of the brain and that the process of learning is also an important element of human intelligence. Increasingly, software applications perform tasks that are sufficiently complex and human-like that the term intelligent may be appropriate. Whereas AI can be seen as the science of machines that behave intelligently (or simulate intelligent behavior), the concept of intelligent applications entails the effort to take advantage of AI technologies to enhance applications and make them act in more intelligent ways. This brings us to the question of Web intelligence, or intelligent software applications on the Web. The World Wide Web can be described as an interconnected network of networks, but that does not go quite far enough. The present day Web consists not only of the interconnected networks, servers and clients, but also the multimedia hypertext representation of vast quantities of information distributed over an immense global collection of electronic devices.
With software services being provided over the Web, one can readily see an analogy to the human (or machine) thinking process, where information is stored, accessed, transferred and processed by electronic patterns in electrical devices and their interconnections. However, the current Web consists primarily of static data representations that are designed for direct human access and use. We can view effective web search as an information retrieval problem. Search engines are one Web technology designed to automatically process information from large numbers of Web sites to deliver useful processed information, but the search methods used today have rudimentary capabilities. Among the classical models of information retrieval (IR), the vector space model has been shown experimentally to perform better than the earlier Boolean method, and many recent IR systems are built on this popular vector space model (VSM). Search engines based on the classical vector space model are thus syntax-based, looking for partial or full matches of the query terms in documents. Although this approach has been a great success in the IR community, it cannot retrieve semantically related documents that simply do not contain the keyword. The key to moving to the next level is the improvement of the ability of software applications to communicate directly with one another, and the representation of information in ways that are far more usable by software applications. An important framework for creating such meaningful abilities can be provided by the proposed next generation of Web architecture: the Semantic Web.

### 1.1.1 World Wide Web

How is the World Wide Web managing knowledge and empowering the Information Revolution? Does rapid change and improved information productivity require more intelligent Web capabilities? What technologies offer the best opportunities for sustained powerful change?
This section explores these questions by briefly describing the development and limitations of today's Web technology. The first implementation of the web represents Web 1.0, which, according to Berners-Lee [1], could be considered the "read-only web". In other words, the early web allowed us to search for information and read it. There was very little in the way of user interaction or content contribution. However, this is exactly what most website owners wanted. Their goal for a website was to establish an online presence and make their information available to anyone at any time. Shopping cart applications, which most e-commerce website owners employ in some shape or form, basically fall under the category of Web 1.0. Currently, we are seeing the infancy of Web 2.0, or the "read-write" web if we stick to Berners-Lee's [1] method of describing it. The ability to contribute content and interact with other web users has dramatically changed the landscape of the web in a short time. Looking, for example, at YouTube and MySpace, which rely on user submissions, the potential becomes clearer. Web 2.0 appears to be a welcome response to a demand by web users that they be more involved in what information is available to them. Now, it's important to realize that there are a staggering number of definitions of what constitutes a "Web 2.0 application". For example, the perception exists that just because a website is built using a certain technology (like Ruby on Rails), or because it employs Ajax in its interface, it is a Web 2.0 application. From the general, bird's-eye view we are taking, this is not the case; our definition simply requires that users be able to interact with one another or contribute content. Developers, for example, have a much more rigid definition of Web 2.0 than average web users, and this can lead to confusion.
This in turn leads us to the rumblings and mumblings we have begun to hear about Web 3.0. Extending Tim Berners-Lee's explanations, Web 3.0 would be something akin to a "read-write-execute" web. With Web 3.0 you'll be able to sit back and let the Internet do all the work for you. You could use a search service and narrow the parameters of your search. The browser program then gathers, analyzes and presents the data to you in a way that makes comparison a snap. It can do this because Web 3.0 will be able to understand information on the Web. A Semantic Web (or Web 3.0) agent could be programmed to do almost anything, from automatically booking your next vacation to researching a term paper. If you visit a movie blog, for instance, and read about a particular film, it immediately links to sites where you can buy or rent that film. With the Semantic Web, computers will scan and interpret information on Web pages using software agents. These software agents will be programs that crawl through the Web, searching for relevant information. They'll be able to do that because the Semantic Web will have collections of information called ontologies. In terms of the Internet, an ontology is a file that defines the relationships among a group of terms. For the Semantic Web to be effective, ontologies have to be detailed and comprehensive. In Berners-Lee's concept, they would exist in the form of metadata. Metadata is information included in the code for Web pages that is invisible to humans, but readable by computers.

1.2 Metadata

The rapid increase in the number and variety of resources on the World Wide Web has made traditional schemas of resource description inappropriate for web resources, and has encouraged significant recent activity on defining web-compatible schemas named "metadata". Metadata, in general, is defined as 'data about data' or 'information about information'. In other words, metadata is data that describes information resources.
Metadata is data that provides extra information about other data. For example, a photo can be described using the following metadata: <dateTaken> 01/01/2011 </dateTaken>, <placeTaken> seminar room </placeTaken> and <whatAbout> project meeting </whatAbout>. The information described by metadata may at first glance be taken to be corporeal and digital information resources such as books, newspapers, journals, photographs and so on. Greenberg [3] refers to this data as an "object" and states that this object is any entity, form or mode for which contextual data can be recorded. The universe of objects to which metadata can be applied is radically diverse and seemingly endless, ranging from corporeal and digital information resources, such as a monograph, newspaper or photograph, to activities, events, persons, places, structures, transactions, relationships, execution directions and programmatic applications. Metadata, therefore, captures the wide range of intrinsic or extrinsic information about a variety of objects. These intrinsic or extrinsic characteristics and features are described in individually structured data elements that facilitate object use, identification and discovery. The way the current service-oriented infrastructure handles and manages service metadata is inadequate for supporting service discovery and knowledge sharing. Humans have no problem understanding XML-based metadata because we know the meaning of the English words; the question is: "can machines understand and consume them?", so that they can perform automatic processing with regard to the use of Web/Grid services. Clearly, without further assumptions, the answer will be no. The Semantic Web/Grid are extensions of the current Web/Grid in which information and services are given well defined meaning, better enabling computers and people to work in cooperation.
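The photo example above is already machine-readable once wrapped in a well-formed XML document. As a minimal sketch (the `<photo>` wrapper element is an assumption added so the fragment parses), a program can consume the same metadata tags with the Python standard library:

```python
# Parse the photo metadata tags from the text; the <photo> root element
# is an illustrative assumption to make the fragment well-formed XML.
import xml.etree.ElementTree as ET

fragment = """
<photo>
  <dateTaken>01/01/2011</dateTaken>
  <placeTaken>seminar room</placeTaken>
  <whatAbout>project meeting</whatAbout>
</photo>
"""

root = ET.fromstring(fragment)
# Collect each child tag and its text into a plain dictionary.
metadata = {child.tag: child.text.strip() for child in root}
```

Here the machine sees only tag names and strings; without shared semantics for `dateTaken` or `placeTaken`, it still cannot reason about what the photo is, which is the gap the Semantic Web aims to close.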
We believe that the first step towards the Semantic Web/Grid is to make the Web/Grid full of rich semantic metadata (SMD), in other words, metadata with semantics.

1.3 Ontologies

Ontology is defined as an explicit specification of a conceptualization [4]. An ontology is a formal explicit description of concepts in a domain of discourse (classes, sometimes called concepts), properties of each concept describing its various features and attributes (slots, sometimes called roles or properties), and restrictions on slots (facets, sometimes called role restrictions). An ontology, together with a set of individual instances of classes, constitutes a knowledge base. If a program wants to compare conceptual information across two knowledge bases on the Web, it has to know when any two given terms are being used to mean the same thing. Ideally, the program must have a way to discover common meanings for whatever knowledge bases it encounters. A solution to this problem is provided by the Semantic Web in the form of collections of information called ontologies. A typical ontology for the Web uses a taxonomy and a set of inference rules. The taxonomy defines classes of objects and relations among them. Classes, subclasses, and relations among entities are important tools: we can express a large number of relations among entities by assigning properties to classes and allowing subclasses to inherit those properties. Inference rules in ontologies express rules for manipulating information; for example: "If a city code is associated with a state code, and an address uses that city code, then that address has the associated state code". As conceptual models that capture domain knowledge, ontologies can be drawn upon to aid meaningful information retrieval. An ontology in general contains a vocabulary of terms that refer to the things of interest in a given domain, together with some specification of the meaning of the terms, grounded in some form of logic.
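The two ingredients named above, a taxonomy with property inheritance and an inference rule, can be sketched in a few lines. All class names, properties, and codes below are illustrative assumptions, not part of any real ontology:

```python
# Taxonomy: subclass -> superclass (None marks the root). Subclasses
# inherit the properties assigned to their superclasses.
taxonomy = {"City": "Place", "Place": None}
properties = {"Place": {"hasName"}, "City": {"hasCityCode"}}

def inherited_properties(cls):
    """Collect the properties of a class and all of its superclasses."""
    props = set()
    while cls is not None:
        props |= properties.get(cls, set())
        cls = taxonomy.get(cls)
    return props

# Inference rule from the text: if a city code is associated with a
# state code, and an address uses that city code, then the address has
# the associated state code. (The NYC/NY pairing is an example.)
city_to_state = {"NYC": "NY"}

def infer_state(address):
    return city_to_state.get(address["cityCode"])
```

With this, `inherited_properties("City")` yields both the inherited `hasName` and the class's own `hasCityCode`, and `infer_state({"cityCode": "NYC"})` applies the rule to derive the state code.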
The way the knowledge of the domain is captured in an ontology enables one to retrieve the information related to a given term, thus supporting intelligent information retrieval. The various languages used for representing ontologies are RDF/RDFS, DAML+OIL and OWL; the latest W3C-recommended standard for representing ontologies is OWL, which contains most of the features needed to express the semantics of domain knowledge. Now that we have syntactic search engines and a knowledge base capturing the semantics of a domain, a query expansion mechanism serves as a bridge between these two techniques and thus helps in retrieving semantically relevant information.

1.4 Query Expansion

Logic reasoning is formal semantic reasoning based on the explicitly defined relations between concepts in the ontology. The main logic reasoning expansions are expansion to equivalent concepts, expansion to broader or narrower concepts, and expansion to concepts having a common superclass. In current practice, most search engines work at the lexical level, retrieving only the documents containing the words from the query; when the query words do not occur in the relevant documents, the result is called imprecise retrieval. Query expansion addresses imprecise retrieval by modifying the query, adding in words related to the original query words. The additional terms may be taken from a thesaurus or calculated in a statistical or probabilistic way. Query expansion is useful because imprecise queries cannot retrieve the entire set of relevant documents; the intent is to improve precision and/or recall. From the mere words in a query, we cannot tell the exact meaning that a searcher has in mind, but we do have resources that give us more information than the query words provide alone. On the other hand, we can aid query expansion with domain knowledge to avoid retrieving documents which are irrelevant even though they contain the query terms.
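A minimal sketch of the expansion step described above: each query word is augmented with equivalent, broader, and narrower terms drawn from a hand-made thesaurus. The thesaurus entries here are illustrative assumptions, standing in for WordNet or an ontology taxonomy:

```python
# Toy thesaurus: term -> related terms grouped by relation type.
thesaurus = {
    "car": {
        "equivalent": {"automobile"},
        "broader": {"vehicle"},
        "narrower": {"sedan", "hatchback"},
    },
}

def expand(query_terms, relations=("equivalent", "broader", "narrower")):
    """Return the query terms plus all related terms for the chosen relations."""
    expanded = set(query_terms)
    for term in query_terms:
        entry = thesaurus.get(term, {})
        for rel in relations:
            expanded |= entry.get(rel, set())
    return expanded
```

Restricting `relations` trades recall for precision: expanding only to `equivalent` terms adds synonyms, while including `broader` and `narrower` pulls in the hierarchical relationships that state-of-the-art expansion uses.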
1.5 Inference Engines

An inference engine controls the overall execution of a set of rules. It searches through a knowledge base, attempting to pattern-match facts or knowledge present in memory to the antecedents of rules. If a rule's antecedent is satisfied, the rule is ready to fire and is placed in the agenda; firing a rule means executing its consequent. In this way inference engines deduce new knowledge from previously established knowledge. Traditional knowledge representation, however, requires everyone to share exactly the same definition of common concepts. Central control is stifling, and increasing the size produces complexity that rapidly becomes unmanageable; such systems limit the questions that can be asked reliably. To avoid these problems, traditional knowledge-representation systems narrow their focus and use a limited set of rules for making inferences. The approach proposed here is therefore plausible inference. "Plausible" in the real world means having an appearance of truth or reason: the inference may not be true, but it is still believable. Given the statement that the spare tire is contained in the trunk and the trunk is part of the car, a plausible inference is that the spare tire is contained in the car.

1.6 Description of the Research Work

1.6.1 Motivation

Search engines today are based on decades-old technology patched with new solutions. When specifying a search, users enter a small number of terms in the query. Yet the query describes the information need, and is commonly based on the words that people expect to occur in the types of document they seek. This gives rise to a fundamental problem, in that not all documents will use the same words to refer to the same concept. Therefore, not all the documents that discuss the concept will be retrieved by a simple keyword-based search. Furthermore, query terms may of course have multiple meanings (query term polysemy).
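The rule-matching and firing cycle of Section 1.5, together with its spare-tire example, can be sketched as a tiny forward-chaining loop. The fact triples and the single rule are illustrative assumptions; a real engine would manage an agenda of many rules:

```python
# Facts are (relation, subject, object) triples from the text's example.
facts = {
    ("contained-in", "spare tire", "trunk"),
    ("part-of", "trunk", "car"),
}

def rule_contained_via_part(fs):
    """If X is contained in Y and Y is part of Z, plausibly X is contained in Z."""
    new = set()
    for (r1, x, y) in fs:
        for (r2, y2, z) in fs:
            if r1 == "contained-in" and r2 == "part-of" and y == y2:
                new.add(("contained-in", x, z))
    return new

# Fire the rule repeatedly until no new facts are derived (a fixed point).
while True:
    derived = rule_contained_via_part(facts) - facts
    if not derived:
        break
    facts |= derived
```

After the loop, the plausible conclusion that the spare tire is contained in the car is present in the fact base, without it ever having been stated explicitly.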
As conventional search engines cannot interpret the sense of the user's search, the ambiguity of the query leads to the retrieval of irrelevant information. The converse of the polysemy problem is that conventional search engines that match query terms against a keyword-based index will fail to match relevant information when the keywords used in the query differ from those used in the index, despite having the same meaning (index term synonymy). Although this problem can be overcome to some extent through thesaurus-based expansion of the query, the resultant increased level of document recall may mean the search engine returns too many results for the user to process realistically. In addition to an inability to handle synonymy and polysemy, conventional search engines are unaware of any other semantic links between concepts. Many search engines also fail to take into consideration aspects of the user's context to help disambiguate their queries; user context would include information such as a person's role, department, experience, interests, project work, etc. The results returned from a conventional search engine are usually presented to the user as a simple ranked list. The sheer number of results returned from a basic keyword search means that results navigation can be difficult and time-consuming. Generally, the user has to decide whether to view the target page based upon the information contained in a brief result fragment. Most search engines use the inverted index method and its statistics- or popularity-flavored derivatives. The search problem is directly attributable to the limitations of the inverted index method as the underlying platform. Any kind of semantic enrichment requires handling and organizing semantically rich data within a very short processing time. This requirement exceeds what is expected from an inverted index regardless of hardware capacity.
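The synonymy failure described above can be shown against a minimal inverted index. The two documents and the query are illustrative assumptions; the point is only that a pure keyword match cannot see that "car" and "automobile" mean the same thing:

```python
# Build a tiny inverted index (term -> set of documents containing it)
# and run a plain keyword match against it.
from collections import defaultdict

docs = {
    "d1": "buy a used car at a fair price",
    "d2": "automobile dealers offer seasonal discounts",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def keyword_search(query):
    """Return every document containing at least one query term."""
    hits = set()
    for term in query.split():
        hits |= index.get(term, set())
    return hits

# The query "car" retrieves only d1; d2 is missed even though it is
# about automobiles -- the index term synonymy problem.
```

Query expansion repairs exactly this gap: expanding "car" to include "automobile" before the lookup would bring d2 into the result set.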
The search community has provided various solutions to this problem by means of query expansion and semantic annotation. Query expansion is addressed using linguistic knowledge in the form of WordNet and thesauri, and also using the domain-specific knowledge captured in the taxonomic model of ontologies. Because of the very complex nature of domain knowledge representation, very few relationships are exploited when solving this problem through query expansion (QE): state-of-the-art query expansion uses synonyms and hierarchical relationships only, expanding either towards parents or towards children. Semantic annotation requires annotating resources with terms from a knowledge base. Manually creating the domain knowledge and annotating the documents with it is quite expensive; as a result, annotations are often incomplete or erroneous, resulting in decreased search performance.

1.6.2 Proposed Work

Our approach aims to provide relevant results using domain-specific knowledge. To address the problems discussed above, we have used domain knowledge represented in the form of an OWL ontology. Our approach uses a query expansion method to expand the query and traditional basic keyword search as the search mechanism. Our system fits the query terms into the ontology graph in an appropriate way and exploits the surrounding knowledge to retrieve relevant results. Using this approach, we can fix the context of the query and also bring semantically related terms into the picture. The resulting enhanced query is given to the underlying basic keyword search system. As a result, we achieve substantial improvement in both precision and recall compared to the basic keyword search system.
We also propose an approach towards a next-generation service-oriented computing infrastructure with rich metadata and semantic support, and present an integrated framework for semantic metadata management for Web/Grid services. Finally, a case study describes the applicability of social network analysis to the semantic web, particularly discussing the multi-dimensional networks that evolve from ontological trust specifications.

1.6.3 Issues

It is the nature of relevance that there is no absolute right or wrong: a relevant document for one person's query might be an irrelevant document for the same query from another person. So it is too with ontologies: there are many possible correct relationships between concepts, depending on the information need.

1.6.4 Scopes and Objectives

The proposed system retrieves relevant documents for a search with an increase in the ratio of precision and recall.

- The objective of the proposed system is to provide a knowledge representation of linked data in order to allow machine processing on a global scale.
- The proposed system holds good for a text corpus.
- The query words map to the vocabulary in the ontology.
- The scope of the proposed system is limited to a particular domain, for which the knowledge is represented in a very rich structure called an ontology.
- The order in which the query words appear does not matter, because they merely map to the terms in the ontology.

1.6.5 Organization of the thesis

This thesis is organized as follows: Chapter 2 discusses the literature survey in the direction of the problem statement. Chapter 3 discusses the new directions in building a seamless next-generation web. Chapter 4 discusses the approaches towards a next-generation service-oriented computing infrastructure with rich metadata and semantic support.
Chapter 5 discusses the case study, which describes the applicability of social network analysis to the semantic web, particularly discussing the multi-dimensional networks that evolve from ontological trust specifications. Chapter 6 presents the conclusion.
RED HAT DEVELOPER TOOLSET: Build, Run, & Analyze Applications On Multiple Versions of Red Hat Enterprise Linux Dr. Matt Newsome Engineering Manager – Tools 13/JUN/13 Introduction - Dr. Matt Newsome Engineering Manager, Toolchain team - Responsible for toolchains in Fedora and RHEL - Toolchain: - GCC compiler suite - Language-specific runtimes - binutils assembler/linker/etc - OpenMP - Ancillary tools - Development lead for Red Hat Developer Toolset Overview - Red Hat Enterprise Linux Tools - Available Tools and Status - Work-in-progress - Developer Toolset - Background: - Why did we make Developer Toolset? - What is in Developer Toolset? - Lifecycle and Roadmap - How to get and use Developer Toolset - What's new in Developer Toolset? - Common Questions and Answers - Questions * Dates and features may change Red Hat Enterprise Linux Tools Toolchain Support Current Toolchains: 10 year RHEL lifecycle - High Stability - Very limited access to newer GCC releases & features …but our customers also want newer tools on older RHEL Red Hat Developer Toolset High-Level Summary • What is it? • An extra set of up-to-date developer tools – Compiler, IDE, debugger, performance analysis tools, etc. • Tools run on RHEL x and RHEL x+1 • Apps developed on RHEL x will work on RHEL x and x+1 • How do I get it? • Through Red Hat Developer Subscriptions • Through Partner Programs • When is it available? • v1.1 – general availability now • v2.0 – Beta released, aiming for GA release this fall What is the Developer Toolset? - More recent developer tools than those shipped in RHEL - **Does not replace RHEL system tools** - 2.0 is the second major Red Hat Developer Toolset release - C, C++ and Fortran - x86/x86_64 only - Built on Software Collections (more later) - Can be installed in parallel with Red Hat Developer Toolset v1.1 and base RHEL tools [i.e. gcc you get in RHEL Server install] Why Offer a Developer Toolset? 
- Single most common request from customers, partners and ISVs - More options for developers - Build and run on supported RHEL x and RHEL x+1 - e.g. build on RHEL 5, execute on RHEL 5 and 6 - Lower development cost (one compiler version) - Lower QA cost - Build once - Test and deploy that one version on multiple RHEL releases Building with base RHEL toolchain (1) - RHEL system toolchain - Multiple versions of gcc - Varying features - Different source branches for different gcc versions Building with base RHEL toolchain (2) - RHEL system toolchain - Issues compounded by multiple simultaneous minor releases - gcc on supported minor releases kept in sync periodically - But developers may prioritize minor versions differently - Result: higher development and testing costs Developer Toolset Alternative - Single set of sources - Build - Single, up-to-date gcc on multiple RHEL major and minor releases - Deploy - RHEL 5 (all supported minor releases) - RHEL 6 (all supported minor releases) - Developer Toolset gcc - Single version of gcc on all supported RHEL5, RHEL6 - Feature parity, all at the same patch level - Can deploy application binaries built with the same tools to multiple supported major and minor RHEL releases - No need to deploy extra libraries with your binaries to support this - Result: reduced development and testing costs What’s in Developer Toolset v1.1? - v1.1 [released] - **gcc-4.7**: C, C++ and Fortran Compilers & Associated Runtimes - **gdb-7.4**: C, C++ and Fortran debugging - **binutils-2.22**: x86/x86_64 assembler, linker, etc. - **Software Collection .rpm’s** for RHEL6 and RHEL5 - All components released in Fedora 18, planned for future RHEL - Can be used with Eclipse in base RHEL x86/x86_64 What’s in Developer Toolset v1.1? 
[continued] • v1.1 [released] • **SystemTap-1.8**: diagnostic tool for live analysis, programmable on-line response, and whole-system symbolic access • **Valgrind-3.8.0**: profiling programs and detecting memory management and threading errors • **OProfile-0.9.7**: low-overhead system-wide profiler for systems of all sizes • **elfutils-0.154**: provides a library and utilities to access, modify and analyze ELF objects • **dwz-0.7**: new tool to compress DWARF debug information into smaller debuginfo files What’s updated in Developer Toolset 2.0? - v2.0 [Beta: May; GA: aiming for Fall 2013] - Contains the following rebased components: - gcc-4.8 [rebased to 2013 release] - gdb-7.6 [rebased, corresponds to gcc-4.8] - SystemTap 2.1 [rebased] - Valgrind 3.8.1 [rebased] - Elfutils 0.155 [rebased] - OProfile 0.9.8 [rebased] - dwz 0.1 [rebased, DWARF optimizer & duplicate removal utility] What’s new in Developer Toolset 2.0? - v2.0 [Beta: May; GA: aiming for Fall 2013] - Provides the following new components for developers: - **Eclipse IDE 4.3** provides the 2013 “Kepler” Eclipse Foundation community release of this powerful Integrated Development Environment [RHEL6 only] - **dyninst** 8.0 delivers a powerful application program interface (API) that aids the development of performance measurement tools, debuggers, and simulators by permitting the insertion of code into a running program - **strace** 4.7 traces system calls, helping developers more efficiently debug programs and identify the root cause of crashes or other unexpected behavior - **memstomp** helps identify code which relies on undefined behavior at a lower runtime cost than other tools such as Valgrind Updated GNU Compiler Collection (GCC) 4.8 Significant enhancements to GCC 4.8 include: - **Local Register Allocator (LRA)** - New register allocator contributed by Red Hat - x86/x86-64 generated code quality improvements - **C++11** - Support for the latest C++ standard - Ability to compile extremely large functions with 
smaller memory consumption in less time - Support for Hardware Transactional Memory on upcoming Intel CPU architectures Eclipse Integrated Development Environment ("Kepler") [RHEL6 x86/x86_64 only] Developer Toolset Life Cycle - Separate product from RHEL – independent release cycle - Major release once per year - New gcc release in Spring - Fedora Release of gcc and other tools in Summer - Developer Toolset release follows this - Support current and one previous major version [e.g. 2.0 + 1.1] - Effectively 2 years of support - Minor release after 6 months - Upstream patches; possibly new components - Z-stream releases for security fixes and serious bugs - As in RHEL Production Phase 2 Software Collections RHEL Software Collections - What are Software Collections? - Structural definition for an application or toolset that is independent of the OS - Installed outside the normal hierarchy for RHEL native components (but still compliant with FHS) - Installs additional applications - /opt/<vendor>/... - Allows multiple versions to be co-installed - Each major app version gets a different file system root - scl script used to activate (PATH, LDCONFIG, etc.) - Application lifecycles are independent of RHEL Developer Toolset is a Software Collection - Separate tools, not default - Only enabled by deliberate invocation (more later) Practicalities Usage - **Subscribe to channel and Install** - Subscribe to channel using RHN website or command line - e.g. `rhn-channel --add --channel=rhel-x86_64-workstation-dts-6` - `yum install devtoolset-2` - **Simple usage** ``` scl enable devtoolset-2 'gcc -g helloworld.c -o helloworld' scl enable devtoolset-2 'gdb helloworld' ``` - **More advanced usage** ``` scl enable devtoolset-2 'bash' (gcc/gdb/etc. now use toolset versions, not RHEL defaults) ``` - **Eclipse** ``` scl enable devtoolset-2 'eclipse' (launches Toolset Eclipse with Toolset gcc/gdb for build/debug) ``` Demonstration Under the Hood How does it work? 
- Toolset rpm includes `/opt/rh/devtoolset-2/enable` - Script simply prefixes PATH: - `/opt/rh/devtoolset-2/root/usr/bin:$PATH` - (and adds some environment variables) - Alternatives - `/opt/rh/etc/alternatives/...` Linkage - Most libraries linked dynamically for you from base RHEL - All of C Library (glibc - libm, libc, etc) - All of OpenMP (libgomp) - Most of libstdc++ - Most of libgcc - What about newer features? C++11 library contents? - Newer features in gcc-4.8 than in gcc-4.1/4.4 in RHEL5/6 - These parts statically linked for you into your application - But statically linked code == bad! - True, let's dig deeper... Security Security Implications (1) - **RHEL Toolchain** - Dynamic linkage - Security updates and bugfixes resolved by normal errata Security Implications (2) - Developer Toolset Works Similarly - Dynamic linkage for most symbols - Security updates and bugfixes resolved by normal errata - But... Security Implications (3) - Developer Toolset Statically Links Newer Symbols - Dynamic linkage for most symbols - Static linkage for symbols newer than base RHEL libraries - Carried in application to avoid carrying extra libraries Security Implications (4) - Developer Toolset Statically Links Newer Symbols - Bugs in statically linked objects require rebuild to resolve - But... - Risk is very low - Security errata will inform you if you need to rebuild [Diagram: an application built with Developer Toolset contains mostly dynamically linked symbols plus newer symbols statically linked from an archive of newer libstdc++ symbols (.a); until the application is rebuilt, those statically linked symbols still contain the bug] Common Questions (1) - How do I make Developer Toolset gcc/gdb the default? scl enable devtoolset-2 'bash' (with caution) - Will my apps run on future RHEL major releases? - If you build today on RHEL6, we expect that application to run without issue on the next major RHEL release - But too early – we will test in due course - How do I use Developer Toolset gcc to...X? 
- scl enable devtoolset-2 '<X>' solves most of these - Should be easy to integrate into build systems Common Questions (2) - Which RHEL versions can I run toolset on? - Which RHEL versions can I run toolset-built apps on? ![Table showing supported and unsupported RHEL versions for toolset and toolset-built apps](image) [Unreleased versions, features and dates are not committed, subject to change] Common Questions (3) - What do gcc-4.7, etc. give me as a developer? - User guide summarizes main new features - Includes fine details on changes from RHEL5/6 equivalent tools - Headline Features - C++11 Language Standard - Leading compiler implementation - Atomic extensions for guaranteed atomic access to memory - Memory model for clearer semantics - Transactional Memory - Software implementation in gcc-4.7 - Support for Intel Hardware TM extensions in gcc-4.8 - OpenMP v3.1 Common Questions (4) - Gotchas and issues? - Release notes spell these out - Main ones to be aware of - C++11, TM are experimental, use with caution or use C++98 - Some base RHEL errata are required for all features - Forwards only (don't build on RHEL6 and run on RHEL5) - Forwards only (don't build on rhel-5.8 and run on rhel-5.6) - Intended for userland development, not kernel rebuilding Common Questions (5) • How can I download Red Hat Developer Toolset? • Good question... Accessing Red Hat Developer Toolset How do I get the Developer Toolset? - **Developer Toolset** - Red Hat Developer Toolset 1.1 GA available today - Red Hat Developer Toolset 2.0 Beta available today - **Existing Red Hat Enterprise Linux Subscribers:** - You may procure a Red Hat Developer Workstation, priced the same as a RHEL Workstation. Contact your sales rep to purchase or convert an existing subscription. 
- You can also access the Red Hat Developer Toolset 2.0 Beta - **If you are not a RHEL subscriber, you may procure:** - Red Hat Developer Workstation - Red Hat Developer Subscriptions - Red Hat Developer Suite - Red Hat Not-for-Resale (NFR) Partner Subscription Developer Suite - RHEL Server - Up to 8 sockets - Unlimited virtual guests - Features - High availability - Load balancer - Resilient storage - Scalable filesystems - High-performance networking - Extended User Support - Developer Toolset - Smart Management - MRG Real-time ## Access to Developer Toolset <table> <thead> <tr> <th></th> <th>Developer Suite</th> <th>Developer Workstation</th> <th>Developer Support Subscription</th> </tr> </thead> <tbody> <tr> <td>$99</td> <td>Professional</td> <td>Enterprise</td> <td>Professional</td> </tr> <tr> <td>RHEL 1</td> <td>1 x Developer Suite</td> <td>25 x Developer Suite</td> <td></td> </tr> <tr> <td>Support Self Support</td> <td>Unlimited Developer Support by Web and Phone</td> <td></td> <td></td> </tr> <tr> <td>Developer Toolset</td> <td>Included</td> <td></td> <td></td> </tr> </tbody> </table> Details for Developer Support can be found here: access.redhat.com/support/offerings/developer/soc.html Subscribing to the Developer Toolset Channel (1) - Subscribing to a RHEL Channel https://access.redhat.com/knowledge/solutions/11312 - Access RHN Channels Subscribing to the Developer Toolset Channel (2) Full Software Channel List Channels provide you with a way to keep your software and systems up to date. The software channels accessible from this page are all of the channels to which your organization is entitled. Information on production support policies and product update policies is also provided. Alternatively, you may also view a list of retired channels or a list of Beta channels. You can also download ISO images of channel content on the Download Software page. 
Filter by Product Channel: [Red Hat Enterprise Linux ▼ Latest Version ▼ All Architectures ▼ Filter] Channel Name | Architecture ---|--- Red Hat Enterprise Linux Server 6 | IA-32, IA-32, P1 Red Hat Common Server 6 | IA-32, IA-32, P1 Red Hat Core Product Toolset Server 6 | IA-32, IA-32, P1 Red Hat Developer Toolset Server 6 | IA-32, x86_64 Subscribing to the Developer Toolset Channel (3) - Subscribe Target Systems The following systems can be subscribed to this channel: Command Line Access to Developer Toolset v1.1 - Subscribe to channel (typical example with RHEL6 x86_64): - RHN Classic - `rhn-channel --available-channels` - `rhn-channel --add --channel=rhel-x86_64-server-dts-6` - `rhn-channel --list` (to verify channel addition) - Red Hat Subscription Management - `subscription-manager list --available` - `subscription-manager subscribe --pool=<pool_id>` - `subscription-manager list --consumed` (verify subscription attached) - `yum-config-manager --enable rhel-server-dts-6-rpms` - Install - `yum install devtoolset-1.1` - See User Guide for full details: - http://red.ht/devToolset Cmd Line Access to Developer Toolset v2.0 Beta - Subscribe to channel (typical example with RHEL6 x86_64): - RHN Classic ``` rhn-channel --available-channels rhn-channel --add --channel=rhel-x86_64-server-optional-6 (required) rhn-channel --add --channel=rhel-x86_64-server-dts2-6-beta rhn-channel --list (to verify channel addition) ``` - Red Hat Subscription Management ``` subscription-manager list --available subscription-manager subscribe --pool=<pool_id> subscription-manager list --consumed (verify subscription attached) yum-config-manager --enable rhel-server-optional-6-rpms (required) yum-config-manager --enable rhel-server-dts2-6-beta-rpms ``` - Install - `yum install devtoolset-2` - See User Guide for full details: - http://red.ht/devToolset Links - Resources - Developer Program (“Developer Connection”) - http://red.ht/rheldevelop - Developer Toolset Docs - http://red.ht/devToolset 
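The `scl enable` mechanics described under "How does it work?" boil down to prefixing PATH with the toolset root. A minimal sketch of that behavior follows; the real `enable` scriptlet also sets further environment variables, and the `run_with_toolset` helper name is illustrative, not part of scl itself:

```shell
# Sketch of the `scl enable devtoolset-2 '<cmd>'` mechanism from the
# "How does it work?" slide: prefix PATH with the toolset root, then run
# the command. (The real enable scriptlet sets a few more variables.)
DTS_ROOT=/opt/rh/devtoolset-2/root

run_with_toolset() {
  # Environment assignment applies only to the command being run.
  PATH="$DTS_ROOT/usr/bin:$PATH" "$@"
}

# With the toolset installed, `run_with_toolset gcc --version` would report
# the Developer Toolset gcc (4.8) rather than the base RHEL gcc.
run_with_toolset sh -c 'echo "$PATH"'
```

Because only the invoked command sees the modified PATH, the base RHEL tools remain the default for everything else, which is exactly the "separate tools, not default" property of Software Collections.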
Links for Developer Toolset v1.1 - **Main Link: Red Hat Developer Toolset 1.1 User Guide** - Red Hat Developer Toolset 1.1 Release Notes - Red Hat Software Collections Guide - Red Hat Enterprise Linux 6 - Developer Guide - Installation Guide - Deployment Guide - Support Links for Developer Toolset v2.0 Beta - **Main Link: Red Hat Developer Toolset 2.0 Beta User Guide** [Link](https://docs.redhat.com/docs/en-US/Red_Hat_Developer_Toolset/...) - Red Hat Developer Toolset 2.0 Beta Release Notes [Link](https://docs.redhat.com/docs/en-US/Red_Hat_Developer_Toolset/...) - Red Hat Software Collections Guide [Link](https://docs.redhat.com/docs/en-US/Red_Hat_Developer_Toolset/...) - Red Hat Enterprise Linux 6 - **Developer Guide** - **Installation Guide** - **Deployment Guide** - **Support** #redhat #rhsummit Questions References • Related Talks / Demos: • Developer Day: • Tools for RHEL Developers – up next (this track, 10:30am) • Alternatively, Software Collections talk in track 3 (10:30am) • Diagnosing Performance Problems (this track, 12:30pm) • Profiling C++ Applications with Eclipse (this track, 1:30pm) • Debugging with GDB (this track, 2:30pm) • Summit • Repeat of this talk at Summit (Thurs 10:40am, Room 208) • Developer Toolset Demos in the Ballroom through the week • Further demos in the Developer Lounge through the week Contacts - General questions, thoughts, etc. - rheldevelop@redhat.com - Red Hat Developer Program - Mike Guerette (mguerett@redhat.com) - Langdon White (langdon@redhat.com) - Red Hat Developer Toolset - Product Manager - Brian Gollaher (bgollahe@redhat.com) - Engineering Lead - Matt Newsome (mattn@redhat.com, @dev_tools on Twitter) SEE US AT SUMMIT Visit us at Developer Zone! FOLLOW US ON TWITTER twitter.com/#!/RHELdevelop PLAY US ON YOUTUBE bit.ly/RHELdevOnYouTube LEARN & SHARE AT red.ht/rheldevoug GIVE US FEEDBACK RHELdevelop@redhat.com LEARN. NETWORK. EXPERIENCE OPEN SOURCE. June 11-14, 2013 Boston, MA
Natural Language Processing and Program Analysis for Supporting Todo Comments as Software Evolves Pengyu Nie, Junyi Jessy Li, Sarfraz Khurshid, Raymond Mooney, and Milos Gligoric The University of Texas at Austin {pynie@, jessy@austin., khurshid@ece., mooney@cs., gligoric@}utexas.edu Abstract Natural language elements (e.g., API comments, todo comments) form a substantial part of software repositories. While developers routinely use many natural language elements (e.g., todo comments) for communication, the semantic content of these elements is often neglected by software engineering techniques and tools. Additionally, as software evolves and development teams re-organize, these natural language elements are frequently forgotten, or just become outdated, imprecise and irrelevant. We envision several techniques, which combine natural language processing and program analysis, to help developers maintain their todo comments. Specifically, we propose techniques to synthesize code from comments, make comments executable, answer questions in comments, improve comment quality, and detect dangling comments. Introduction Natural language elements form a substantial part of software repositories. These elements are used to communicate between users and developers (e.g., API comments, bug reports, and feature requests), and among developers (e.g., todo comments). Todo comments contain invaluable data that describe changes to code that can increase software maintenance, reliability, and quality. Despite occurring frequently in practice and containing valuable information, these elements, because of their informal nature, are largely not exploited by existing software engineering tools. Research on combining program analysis and natural language processing (NLP), which recently started to gain some traction, is in its infancy (Ernst 2017; Arnaoudova et al. 2015; Hindle et al. 2012; Oda et al. 
2015; Allamanis, Peng, and Sutton 2016; Vasilescu, Casalnuovo, and Devanbu 2017; Raychev, Vechev, and Krause 2015; Nguyen et al. 2012), and the existing work, although novel, mostly neglected comments that are used to communicate among the developers (Storey et al. 2008; Sridhara 2016). In this position paper, we argue about the importance of content in todo comments and envision several techniques to automatically maintain and resolve those comments. This position paper is to a large extent inspired by our extensive analysis of a large corpus of open-source projects. Specifically, we analyzed over 30k open-source projects, which are available on GitHub, totaling 585 million lines of code (not counting comments). We found that these projects include over 297 million lines of comments (~30% of the total lines). Our analysis also uncovered more than 700k todo comments in the used corpus. We manually inspected and discussed hundreds of comments, code and comment changes, and commit messages. In the following subsections, we will frequently refer to this dataset and our findings related to this dataset. All examples of code and comments that we provide in this paper are taken from one of the analyzed open-source projects. This paper mostly focuses on todo comments that contain valuable information on increasing software quality, performance, maintenance, and reliability. We consider the following three categories of todo comments. First, task comments explain what features are currently not supported or what optimizations need to be implemented (e.g., from the Google Guava project: “For optimal performance, use a binary search when targets.size() < size()/log(size())”). Second, trigger-action comments talk about changes to the code repository that would be necessary if something else is modified by developers (e.g., from Guava: “check more preconditions (as bufferSize >= chunkSize) if this is ever public”). 
Finally, question comments are concerned with alternative implementations, potential optimizations, and testing, which may be explored by developers only if time permits (e.g., from Guava: “Is this faster than System.arraycopy() for small arrays?”). Regardless of the category of todo comments, as software evolves and development teams re-organize, these comments may be dangling, i.e., resolved but forgotten (Storey et al. 2008; Sridhara 2016). For example, a trigger may hold (e.g., “if this is ever public”) but the action may not be executed by developers (for a very long time, or ever), and developers may never have enough time to consider alternative algorithms and fine-tune their existing implementations. With the goal to help developers increase the reliability of their software, we propose several techniques to (1) synthesize code described in task comments, (2) make trigger-action comments executable, (3) answer question comments, (4) improve the quality of all todo comments, and (5) automatically detect dangling comments.

protected AbstractStreamingHasher(int chunkSize, int bufferSize) {
    // TODO: check more preconditions (as bufferSize >= chunkSize)
    // if this is ever public
    if (TrigIt.isPublic(TrigIt.THIS_METHOD))
        checkArgument(bufferSize >= chunkSize);
    checkArgument(bufferSize % chunkSize == 0);
    ...
}

(a) Example from Google Guava (AbstractStreamingHasher)

public void testDynamicAttributesSupport() throws Exception {
    dispatcher.serviceAction(request, response, mapping);
    ...
}
public static String expectedJDK15 = "<input type=\"text\" ...";
public static String expectedJDK16 = "<input type=\"text\" ...";

(b) Example from Apache Struts (FreemarkerResultMockedTest)

Figure 1: Examples of trigger-action comments from open-source projects; we show how the existing comments (crossed out) can be encoded as executable statements in our TRIGIT framework (highlighted code)

Techniques This section describes the basic idea behind each technique and the way we will approach the implementation. Synthesizing Error-Reporting Code We plan to develop lightweight synthesis techniques to generate error-reporting code for unsupported cases that are documented by developers in the task comments (e.g., from Guava: “support array types”). First, we will identify comments that document unsupported cases. To this end, we will explore possible supervision signals from resolved comments and their corresponding code changes, crowdsourcing annotation and semantic parsing of the comments. Second, we will synthesize error-reporting code that follows the style used in the codebase (e.g., throw an exception or return a special value from a function). Note that our goal is not to work on full-blown program synthesis, which would be interesting but challenging (e.g., Polikarpova, Kuraj, and Solar-Lezama (2016)), but rather to focus on a specific domain of error-reporting. Basically, our goal is to make the existing comments observable during program execution by reporting an appropriate message for unsupported cases. Extracting Executable Comments We will develop techniques to help software engineers to encode their trigger-action comments as executable code statements. 
This will help with repository maintenance, because developers will not need to manually check their todo comments; instead, the executable statements will be automatically triggered when appropriate. We show several examples of trigger-action comments in Table 1 (the top half). We found that ~10% of all todo comments (in our corpus) belong to this comment category. While it would be infeasible to support every comment written in the trigger-action style, we plan to focus on those tasks that update the codebase (e.g., transformations of abstract syntax trees) when triggers are satisfied. Our initial step is to develop a domain-specific language embedded in Java to be used to: (1) query the static features of the codebase, e.g., required Java version, and (2) specify code transformations, e.g., remove a method from a class. Figure 1 shows two examples of trigger-action comments encoded in our framework (named TRIGIT); the original todo comments are crossed out and the statements for our framework are highlighted. In the first example, we use our framework to check a modifier of the current method; if the method becomes public, the code guarded by the trigger should become a part of the compiled class. In the second example, we specify that a variable should be removed if the required Java version is higher than 1.5; the required Java version can be obtained from a build script. (Note that the statements/expressions that use the variables need to be annotated too, but we do not show this due to space limitations.) The evaluation of the triggers will be done statically (once code is compiled), as the queries should not depend on the dynamic behavior of the program. Our tool, which can be implemented as a compiler plugin, will automatically remove the triggers and perform program transformations. Note that the user would still be able to inspect/approve the changes (e.g., by executing git diff). 
As the transformation engine we will use the existing open-source platforms, e.g., Eclipse, or program transformation systems, e.g., Cordy et al. (2004). The language design will be guided by examples, and we will evolve the language to support cases that we encounter in the future. Our second step is to automatically discover trigger-action comments present in a codebase and recover the corresponding triggers and actions via mining explicit condition relations within the content of the todo comments; explicit discourse relations can be classified with adequate accuracy (Pitler et al. 2008). In the third step, we will develop automated migration from comments to the TRIGIT specifications, which will follow our recent work on language to code for if-this-then-that (IFTTT) recipes (Quirk, Mooney, and Galley 2015). Specifically, we will train a semantic parser to map trigger-action comments into executable code using supervision automatically extracted from the code changes made when a todo comment is resolved. This supervision may be noisy, since not all code changes may be directly related to resolving the todo comment, but our previous work on IFTTT shows that noisy, automatically extracted supervision from pairing comments and code can be tolerated reasonably well. Answering Questions From Comments We will develop techniques to help software engineers to make informed decisions about questions that are asked in todo comments. In our preliminary studies, we discovered that developers ask questions in todo comments more than 10% of the time; we obtained this number by counting todo comments that contain “?”. Some of these questions are shown in Table 1 (the bottom half). Many of the questions are related to code optimization, program transformation, or testing. Our plan is to focus on techniques that will address these three types of questions. 
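The “?”-counting heuristic described above can be sketched directly over a source tree. The sketch below assumes todo comments are lines containing “TODO” and uses an illustrative corpus path; the authors' exact extraction tooling is not described in the paper:

```shell
# Sketch of the counting heuristic described above: approximate "question
# comments" as todo comments that contain a '?'. Assumes todo comments are
# lines containing "TODO" (case-insensitive); paths are illustrative.
count_question_todos() {
  grep -rhi 'TODO' "$1" 2>/dev/null | grep -c '?'
}

# Example: a tiny corpus with two todo comments, one of which is a question.
corpus=$(mktemp -d)
cat > "$corpus/A.java" <<'EOF'
// TODO: support array types
// TODO: Is this faster than System.arraycopy() for small arrays?
EOF
count_question_todos "$corpus"   # prints 1
```

Dividing such a count by the total number of todo comments gives the “more than 10%” figure reported above; the same grep pipeline generalizes to any marker-based comment convention.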
Table 1: Examples of todo comments: trigger-action comments (top half) and questions (bottom half).

<table>
<thead>
<tr><th>Project (on GitHub)</th><th>File (.java)</th><th>Todo Comment</th></tr>
</thead>
<tbody>
<tr><td>Apache/Incubator-wave</td><td>Pretty</td><td>Remove this if itmlViewImpl implements getAttributes</td></tr>
<tr><td>Apache/Struts</td><td>FreemarkerResultMockedTest</td><td>Remove expectedJK16 and if() after switching to Java 1.6</td></tr>
<tr><td>Apache/Poi</td><td>TextXSSBugs</td><td>Delete this test case when Mx60R1 and mxk are implemented</td></tr>
<tr><td>Google/Guava</td><td>Types</td><td>Once we are on Java 8, delete this abstraction</td></tr>
<tr><td>Google/Guava</td><td>AbstractStreamingHelper</td><td>Check preconditions (as bufferSize &gt;= chunkSize) if this is ever public</td></tr>
<tr><td>Google/Guava</td><td>MapTest</td><td>Replace with Ascii.caseInsensitiveEquivalence() when it exists</td></tr>
<tr><td>Jaxp</td><td>NodeSet</td><td>If deprecated constructors are removed, this should always be available</td></tr>
<tr><td>Trigger-action</td><td></td><td>This class needs to be revisited, when Gtw’s Ant is upgraded</td></tr>
<tr><td>Looking-glass</td><td></td><td>If the AST were normalized, we wouldn’t need this</td></tr>
<tr><td>Morritstech/Gwt</td><td>DefaultFilters</td><td>Is it allowed to call the initialize method multiple times?</td></tr>
<tr><td>Morritstech/Gwt</td><td>Simplifier</td><td>Would lookingGet() be more efficient? If so, then drop trailing.* from patterns</td></tr>
<tr><td>Andyglick/Hk2-fork</td><td>AbstractRepositoryImpl</td><td>Add getters returning rowKeyToIndex and columnKeyToIndex?</td></tr>
<tr><td>Google/Guava</td><td>ArrayTable</td><td>Do we want to checkForNull each element in containsAll and retainAll?</td></tr>
<tr><td>Google/Guava</td><td>EvictingQueue</td><td>Is this actually called anywhere?</td></tr>
<tr><td>Eclipse/CDT</td><td>ListEnvironmentVariableSupplier</td><td>What if the composite being accessed is not an array but a structure?</td></tr>
<tr><td>Eclipse/CDT</td><td>EvalBinary</td><td>Test: what happens when a handler is not there? Exception?</td></tr>
<tr><td>Eclipse/Moe</td><td>PluginExtensionManager</td><td>What happens if index is out of range?</td></tr>
<tr><td>JetBrains/Jdk8u_jaxp</td><td>NodeSet</td><td></td></tr>
<tr><td>Square/OKhttp</td><td>Http2Reader</td><td>Test case for empty continuation header</td></tr>
</tbody>
</table>

First, to answer questions related to optimizations, we will extract suggested code modifications from comments, apply those modifications, profile the code (by executing existing test suites), and evaluate the performance with the profiles (on various machines). Second, to answer questions related to tests, we will develop techniques that extract test inputs from a question and generate new tests with those inputs; these new tests will be obtained by adjusting an automated test generation tool (e.g., Randoop (Pacheco et al. 2007)) or by extending existing (manually written) tests. Third, to answer questions related to code structure, we will extract suggested changes (e.g., from Guava: “Add getters returning rowKeyToIndex and columnKeyToIndex?”), perform the changes, and measure the quality of the code in terms of naturalness (Hindle et al. 2012).
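As a rough illustration of the second technique (extracting test inputs from a question), the heuristic below pulls numeric literals and camelCase identifiers out of a question-style comment; the extraction rules are our own simplistic assumptions, not the proposed technique.

```python
import re

def candidate_inputs(comment):
    """Extract candidate test inputs (numbers and code-like identifiers)
    from a question-style todo comment."""
    if "?" not in comment:            # only question comments are considered
        return []
    numbers = [int(n) for n in re.findall(r"-?\d+", comment)]
    idents = re.findall(r"\b[a-z]+[A-Z]\w*\b", comment)  # camelCase names
    return numbers + idents

print(candidate_inputs(
    "Do we want to checkForNull each element in containsAll and retainAll?"))
# ['checkForNull', 'containsAll', 'retainAll']
```

Such seeds could then be handed to a test generation tool such as Randoop, as described above.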
Our question classification system will also learn from how todo comments are answered as software evolves (e.g., files and functions that are modified, and language artifacts that are added or edited); we can also learn from actions taken by developers. As some of the questions may be open-ended, we plan to develop an interactive dialog interface, which we recently used for language-to-code translation (Chaurasia and Mooney 2017). We plan to use dialog systems to clarify user intent and gather information, in our case, when a question is initially asked.

### Improving Todo Comments

We will develop techniques to help software engineers write meaningful todo comments. While manually analyzing hundreds of todo comments, we found a number of comments that were hard to understand even after reading the code near those comments. We were also in disagreement about their meaning in several cases, and even when we could understand a comment (e.g., from the Square Retrofit project: “TODO non-suck message”), it was clear that any technique would have a hard time extracting useful data from it. Our initial task will be to detect todo comments that are not specific enough, as well as comments that do not follow the conventions already used in the same project. The techniques that we develop will build on our work on text specificity (Li and Nenkova 2015) and program analysis. When we detect an unspecific comment, we will either notify a developer to provide additional clarification, highlight the part of the comment that does not follow the style (similar to the way spellcheckers highlight typos in comments inside IDEs), or automatically reformat the comment to be consistent with other comments in the same repository. We will also provide automated comment style checkers, where the rules can be expressed by developers; this is similar to code style checkers, which are used in practice.
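A comment style checker of the kind described above could start from simple rules; the three rules below are illustrative examples of what developer-expressed rules might look like, not the checkers proposed in the paper.

```python
import re

# Each rule: (name, predicate returning True when the comment violates it).
RULES = [
    ("too short", lambda c: len(c.split()) < 3),
    ("no actionable verb", lambda c: not re.search(
        r"\b(remove|delete|replace|add|fix|check|test|refactor)\b",
        c.lower())),
    ("missing TODO marker",
     lambda c: not c.lstrip("/#* ").upper().startswith("TODO")),
]

def check_comment(comment):
    """Return the names of all style rules that the comment violates."""
    return [name for name, violated in RULES if violated(comment)]

# The unspecific comment from the Square Retrofit project discussed above:
print(check_comment("// TODO non-suck message"))  # ['no actionable verb']
```

A violated rule could trigger a notification to the developer or an automatic reformatting, as outlined above.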
Having specific comments that follow the same style will enable the techniques from the prior sections.

### Detecting Dangling Todo Comments

Prior work has shown that developers may resolve todo comments but forget to remove these comments from source code (Storey et al. 2008; Sridhara 2016); these *dangling comments* can waste developers’ time during program comprehension and maintenance. We are working on a technique, based on machine learning, to automatically detect dangling todo comments. Our detection technique learns from existing software repositories. As mentioned earlier, we have already collected more than 700k todo comments. This large dataset provides examples of todo comments that were removed by developers (over 20k). We use these examples as distant supervision signals, and we are exploring automatic labeling of further examples (e.g., todo comments that are in the same file as removed todo comments). Our models exploit commit messages and static analysis of code changes. In the future, we plan to also utilize software histories to extract the necessary context from when todo comments were introduced. We will also reason about the co-evolution of code and comments from when a todo comment was introduced until it was resolved by a developer. Specifically, for each code change, we will compute its distance from todo comments, its word similarity with each comment, and the code structure that may be described in a comment. These sources of information provide complementary views of feature development and suggest complementary models, so we plan to build on our prior work on co-training and ensemble models.

### Related Work

Li et al. (2006) used text classification to validate the representativeness of their study of bug characteristics. Fluri, Wursch, and Gall (2007) empirically showed that code and comments frequently co-evolve. Padioleau, Tan, and Zhou (2009) manually studied over one thousand comments, and found that 50% of comments can be leveraged by various techniques.
Haouari, Sahraoui, and Langlais (2011) introduced a taxonomy of comments and found that todo comments are the second most common type of comments. Movshovitz-Attias and Cohen (2013) used topic modeling and language models to generate comments from Java source files. Several works have tackled automated generation of commit messages and mining relations from commit messages (Linares-Vásquez et al. 2015; Jiang and McMillan 2017; Andersson, Ericsson, and Wingkvist 2014; Loyola, Marrese-Taylor, and Matsuo 2017). Tan et al. (2007) detected inconsistencies between code and comments and proposed a technique to test Javadoc comments. Zhong et al. (2011) developed a technique to infer specifications from natural language API documentation and used it to detect issues in client code.

### Conclusion

We argued that comments used to communicate among developers (todo comments) contain invaluable content that is currently neglected. We described several techniques: synthesizing code from comments, making comments executable, answering questions in comments, improving comment quality, and detecting dangling comments. These techniques, based on natural language processing and program analysis, have the potential to substantially simplify software maintenance and increase software reliability.

### Acknowledgments

We thank Rishabh Rai for the initial discussion on this work. This work was partially supported by the US National Science Foundation under Grant No. CCF-1704790.

### References

Andersson, R.; Ericsson, M.; and Wingkvist, A. 2014. Mining relations from Git commit messages: An experience report. In SLTC.

Chaurasia, S., and Mooney, R. 2017. Dialog for language to code. In IJCNLP.

Ernst, M. D. 2017. Natural language is a programming language: Applying natural language processing to software development. In SNAPL, volume 71.

Haouari, D.; Sahraoui, H.; and Langlais, P. 2011. How good is your comment? A study of comments in Java programs. In ESEM.

Li, J. J., and Nenkova, A. 2015.
Fast and accurate prediction of sentence specificity. In AAAI.

Loyola, P.; Marrese-Taylor, E.; and Matsuo, Y. 2017. A neural architecture for generating natural language descriptions from source code changes. In ACL.

Padioleau, Y.; Tan, L.; and Zhou, Y. 2009. Listening to programmers: Taxonomies and characteristics of comments in operating system code. In ICSE.

Polikarpova, N.; Kuraj, I.; and Solar-Lezama, A. 2016. Program synthesis from polymorphic refinement types. In PLDI.

Quirk, C.; Mooney, R.; and Galley, M. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In ACL.

Raychev, V.; Vechev, M.; and Krause, A. 2015. Predicting program properties from “Big Code”. In POPL.

Sridhara, G. 2016. Automatically detecting the up-to-date status of Todo comments in Java programs. In ISEC.

Storey, M.-A.; Ryall, J.; Bull, R. I.; Myers, D.; and Singer, J. 2008. TODO or to bug. In ICSE.

Tan, L.; Yuan, D.; Krishna, G.; and Zhou, Y. 2007. /*iComment: Bugs or bad comments?*/. In SOSP.

Vasilescu, B.; Casalnuovo, C.; and Devanbu, P. T. 2017. Recovering clear, natural identifiers from obfuscated JS names. In FSE.
Adaptive Constructive Interval Disjunction

Bertrand Neveu (LIGM, Université Paris Est, Marne-la-Vallée, France) and Gilles Trombettoni (LIRMM, Université Montpellier II, Montpellier, France)

HAL Id: hal-00936654 (https://hal-enpc.archives-ouvertes.fr/hal-00936654), submitted on 27 Jan 2014.

Abstract—An operator called CID and an efficient variant 3BCID were proposed in 2007. For numerical CSPs handled by interval methods, these operators compute a partial consistency equivalent to Partition-1-AC for discrete CSPs. The two main parameters of CID are the number of times the main CID procedure is called and the maximum number of sub-intervals treated by the procedure. The 3BCID operator is state-of-the-art in numerical CSP solving, but not in constrained global optimization. This paper proposes an adaptive variant of 3BCID: the number of variables handled is auto-adapted during the search, while the other parameters are fixed and robust to modifications. On a representative sample of instances, ACID appears to be the best approach in solving and optimization, and has been added to the default strategies of the Ibex interval solver.

I. CONSTRUCTIVE INTERVAL DISJUNCTION (CID)

A filtering/contracting operator for numerical CSPs called Constructive Interval Disjunction (in short CID) has been proposed in [13]. Applied first to continuous constraint satisfaction problems handled by interval methods, it has more recently been applied to constrained global optimization problems.
This algorithm is state-of-the-art in constraint satisfaction, but is generally dominated by constraint propagation algorithms like HC4 in optimization. The main practical contribution is that an adaptive version of CID becomes efficient for both real-valued satisfaction and optimization problems, while needing no additional parameter value from the user.

A. Shaving

The shaving principle is used to compute the Singleton Arc Consistency (SAC) of finite-domain CSPs [7] and the 3B-consistency of numerical CSPs [9]. It is also at the core of the SATZ algorithm [11] used to prove the satisfiability of Boolean formulas. Shaving works as follows. A value is temporarily assigned to a variable (the other values are temporarily discarded) and a partial consistency is computed on the remaining subproblem. If an inconsistency is obtained, then the value can be safely removed from the domain of the variable. Otherwise, the value is kept in the domain. Contrary to arc consistency, this consistency is not incremental [7]. Indeed, the work of the underlying refutation procedure on the whole subproblem is the reason why a single value can be removed. Thus, obtaining singleton arc consistency on finite-domain CSPs requires an expensive fixed-point algorithm where all the variables must be handled again every time a single value is removed [7]. The remark still holds for the improved version SAC-Opt [5]. A similar idea can be followed on numerical CSPs (NCSPs).

B. Numerical CSP

An NCSP is defined by a tuple $P = (X, [X], C)$, where $X$ denotes an $n$-set of numerical, real-valued variables ranging in a domain $[X]$. We denote by $[x_i] = [\underline{x_i}, \overline{x_i}]$ the interval/domain of variable $x_i \in X$, where $\underline{x_i}$ and $\overline{x_i}$ are floating-point numbers (allowing interval algorithms to be implemented on computers). A solution of $P$ is an $n$-vector in $[X]$ satisfying all the constraints in $C$. The constraints defined in an NCSP are numerical.
They are equations and inequalities using mathematical operators like $+, \cdot, /, \exp, \log, \sin$. A Cartesian product of intervals like the domain $[X] = [x_1] \times \ldots \times [x_n]$ is called a (parallel-to-axes) box. $w(x_i)$ denotes the width $\overline{x_i} - \underline{x_i}$ of an interval $[x_i]$. The width of a box is given by the width $w_{\max} = \max_i w(x_i)$ of its largest dimension $x_{\max}$. The union of several boxes is generally not a box, so a Hull operator has been defined instead, returning the smallest box enclosing all of them. NCSPs are generally solved by a Branch & Contract interval strategy:

- **Branch:** a variable $x_i$ is chosen and its interval $[x_i]$ is split into two sub-intervals, thus making the whole process combinatorial.
- **Contract:** a filtering process allows contracting the intervals (i.e., improving interval bounds) without loss of solutions.

The process starts with the initial domain $[X]$ and stops when the leaves/boxes of the search tree reach a width smaller than a precision given as input. These leaves yield an approximation of all the solutions of the NCSP. Several contraction algorithms have been proposed. Let us mention the constraint propagation algorithm called HC4 [3], [10], an efficient implementation of 2B [9], that can enforce the optimal local consistency (called hull-consistency) only if strong hypotheses are met (in particular, each variable must occur at most once in the same constraint). The 2B-Revise procedure works with all the projection functions of a given constraint. Informally, a projection function isolates a given variable occurrence within the constraint.
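For concreteness, here is a toy sketch of a projection-based revise step for a sum constraint, with intervals represented as plain `(lo, hi)` tuples; this illustrates the principle only and is not Ibex code.

```python
def inter(a, b):
    """Intersection of two intervals; None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def add(a, b):                       # interval addition
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):                       # interval subtraction
    return (a[0] - b[1], a[1] - b[0])

def revise_sum(x, y, z):
    """Contract [x], [y], [z] w.r.t. x + y = z via its three projections."""
    z = inter(z, add(x, y))          # z <- x + y
    if z is None:
        return None                  # constraint infeasible on this box
    x = inter(x, sub(z, y))          # x <- z - y
    if x is None:
        return None
    y = inter(y, sub(z, x))          # y <- z - x
    return x, y, z

# x in [0,10], y in [0,10], z in [3,5]: both x and y are contracted to [0,5].
print(revise_sum((0, 10), (0, 10), (3, 5)))  # ((0, 5), (0, 5), (3, 5))
```

A propagation loop in the style of AC3 would re-run such revise steps on the affected constraints until a fixed point is reached.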
For instance, consider the constraint $x + y = z$: $x \leftarrow z - y$ is a projection function (among others) that aims at reducing the domain of variable $x$. Evaluating the projection function with interval arithmetic on the domain $[x] \times [y] \times [z]$ (i.e., replacing the variable occurrences of the projection function by their domains and using the interval counterpart of the involved mathematical operators) provides an interval that is intersected with $[x]$, hence a potential domain reduction. A constraint propagation loop close to that of AC3 is used to propagate the reductions obtained for a given variable domain to the other constraints in the system.

C. 3B algorithm

Stronger interval partial consistencies have also been proposed. 3B-consistency [9] is a theoretical partial consistency similar to SAC for CSPs, although limited to the bounds of the domains. Consider the $2n$ subproblems of the studied NCSP where each interval $[x_i]$ ($i \in \{1..n\}$) is reduced to its lower bound $\underline{x_i}$ (resp. upper bound $\overline{x_i}$). 3B-consistency is enforced iff each of these $2n$ subproblems is hull-consistent. In practice, the 3B($w$) algorithm splits the intervals into several sub-intervals, also called slices, of width $w$, which gives the accuracy: 3B($w$)-consistency is enforced iff the slices at the bounds of the handled box cannot be eliminated by HC4. Let us denote var3B the procedure of the 3B algorithm that shaves one variable interval $[x_i]$, and $s_{3b}$ its parameter, a positive integer specifying a number of sub-intervals: $w = w(x_i)/s_{3b}$ is the width of a sub-interval.

D. CID

Constructive Interval Disjunction (CID) is a partial consistency stronger than 3B-consistency [13]. CID-consistency is similar to Partition-1-AC (P-1-AC) in finite-domain CSPs [4]. P-1-AC is strictly stronger than SAC [4]. The main procedure varCID handles a single variable $x_i$.
The main parameters of varCID are $x_i$, a number $s_{cid}$ of sub-intervals (accuracy) and a contraction algorithm ctc like HC4. $[x_i]$ is split into $s_{cid}$ slices of equal width, each corresponding subproblem is contracted by the contractor ctc, and the hull of the different contracted subproblems is finally returned, as shown in Algorithm 1. Intuitively, CID generalizes 3B because a sub-box that is eliminated by var3B can also be discarded by varCID. In addition, contrary to var3B, varCID can also contract $[X]$ along several dimensions. Note that in the actual implementation the for loop can be interrupted earlier, when $[X]'$ becomes equal to the initial box $[X]$ in all the dimensions except $x_i$.

**Algorithm 1:** The main varCID procedure of the CID operator shaving a given variable $x_i$.

var3BCID is a hybrid and operational variant of varCID:

1) Like var3B, it first tries to eliminate sub-intervals at the bounds of $[x_i]$ of width $w = w(x_i)/s_{3b}$ each. We store the left box $[X_l]$ and the right box $[X_r]$ that are not excluded by the contractor ctc (if any).
2) Second, the remaining box $[X]'$ is handled by varCID, which splits $[X]'$ into $s_{cid}$ sub-boxes. The sub-boxes are contracted by ctc and hulled, giving $[X_{cid}]$.
3) Finally, we return the hull of $[X_l]$, $[X_r]$ and $[X_{cid}]$.

The var3BCID process is illustrated in Figure 1.

Figure 1. Task of the var3BCID procedure, with $s_{3b}$ set to 10 and $s_{cid}$ set to 1.

var3BCID comes from the wish to manage different widths (accuracies) for $s_{3b}$ and $s_{cid}$. Indeed, the best choice for $s_{3b}$ generally belongs to $[5..20]$, while $s_{cid}$ should always be set to 1 or 2 (implying a final hull of 3 or 4 sub-boxes). The reason is that the actual time cost of the shaving part is smaller than that of the constructive domain disjunction.
Indeed, if no sub-interval is discarded by var3B, only two calls to ctc are performed, one for each bound of the handled interval; if varCID is applied, the subcontractor is often called $s_{cid}$ times. The procedure var3BCID has been deeply studied and experimented with in the past. The number of calls to var3BCID, and the order in which they are performed, is a harder question studied in this paper.

II. ADAPTIVE CID: LEARNING THE NUMBER OF HANDLED VARIABLES

Like for SAC or 3B, a quasi fixed-point in terms of contraction can be reached by 3BCID (or CID) by calling var3BCID inside two nested loops. An inner loop calls var3BCID on each variable $x_i$. An outer loop calls the inner loop until no interval is contracted more than a predefined (width) precision (thus reaching a quasi fixed-point). Let us call 3BCID-fp (fixed-point) this historical version. Two reasons led us to radically change this policy. First, as said above, var3BCID can contract the handled box in several dimensions. One significant advantage is that the fixed-point in terms of contraction can thus be reached in a small number of calls to var3BCID. On most of the instances in satisfaction or optimization, it appears that a quasi fixed-point is reached in fewer than $n$ calls. In this case, 3BCID is clearly too expensive. Second, the varCID principle is close to a branching point in a search tree. The difference is that a hull is achieved at the end of the sub-box contractions. Therefore an idea is to use a standard branching heuristic to select the next variable to be “varcided”. We will write in the remaining part of the paper that a variable is varcided when the procedure var3BCID is called on that variable to contract the current box.
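As a concrete illustration of the varCID building block (Algorithm 1), here is a minimal Python sketch: boxes are dicts mapping a variable to a `(lo, hi)` interval, and a toy equality contractor stands in for a real contractor like HC4. This is our own illustration, not the Ibex implementation.

```python
def hull(boxes):
    """Smallest box enclosing all non-empty boxes (None = empty box)."""
    boxes = [b for b in boxes if b is not None]
    if not boxes:
        return None
    return {v: (min(b[v][0] for b in boxes), max(b[v][1] for b in boxes))
            for v in boxes[0]}

def var_cid(box, xi, s_cid, ctc):
    """Split [xi] into s_cid slices, contract each sub-box, hull the results."""
    lo, hi = box[xi]
    w = (hi - lo) / s_cid
    slices = []
    for k in range(s_cid):
        sub = dict(box)
        sub[xi] = (lo + k * w, lo + (k + 1) * w)
        slices.append(ctc(sub))          # contract each sub-box
    return hull(slices)

# Toy contractor for the constraint x = y: intersect the two domains.
def ctc_eq(box):
    l = max(box["x"][0], box["y"][0])
    h = min(box["x"][1], box["y"][1])
    return None if l > h else {"x": (l, h), "y": (l, h)}

print(var_cid({"x": (0, 4), "y": (1, 2)}, "x", 4, ctc_eq))
# {'x': (1.0, 2.0), 'y': (1.0, 2.0)}
```

Here the hull contracts the split variable x from (0, 4) to (1, 2) because some slices are emptied or shrunk; with a richer contractor, other dimensions can be contracted as well, which is what distinguishes varCID from var3B.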
To sum up, the idea for making 3BCID even more efficient in practice is to replace the two nested loops by a single loop calling var3BCID numVarCID times, and to use an efficient variant of the Smear function branching heuristic (called SmearSumRel in [12]) for selecting the variables to be varcided. Informally, the Smear function favors variables having a large domain and a high impact on the constraints, measured via interval partial derivatives. A first idea is to fix numVarCID to the number $n$ of variables. We call 3BCID-n this version. This gives good results in satisfaction but is dominated by pure constraint propagation in optimization. As said above, fixing numVarCID to $n$ is too time costly when the right numVarCID is smaller than $n$ (which is often the case in optimization), but it can also have a very bad impact on performance when a bigger effort would have brought significantly greater filtering. The goal of Adaptive CID (ACID) is precisely to compute dynamically during search the value of the numVarCID parameter. Several auto-adaptation policies have been tested, and we report three interesting versions. All the policies measure the decrease in search space size after each call to var3BCID. They measure the contraction ratio of a box $[X]^b$ over another box $[X]^a$ as an average relative gain in all the dimensions: $$\text{gainRatio}([X]^b,[X]^a) = \frac{1}{n} \sum_{i=1}^{n} \left(1 - \frac{w(x_i^b)}{w(x_i^a)}\right)$$

A. ACID0: auto-adapting numVarCID during search

The first version, ACID0, adapts the number of shaved variables dynamically at each node of the search tree. First, the variables are sorted by their impact, computed by the same formula as the SmearSumRel function (used for branching). Variables are then varcided until the cumulative contraction ratio during the last $nv$ calls to var3BCID becomes less than ctratio. This algorithm thus has two parameters, $nv$ and ctratio, and it was difficult to tune them.
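The gainRatio formula above translates directly into a small function (the box representation as a list of `(lo, hi)` intervals is our choice):

```python
def gain_ratio(box_b, box_a):
    """Average relative width reduction of box_b w.r.t. the enclosing box_a."""
    n = len(box_a)
    return sum(1 - (hi_b - lo_b) / (hi_a - lo_a)
               for (lo_b, hi_b), (lo_a, hi_a) in zip(box_b, box_a)) / n

# Halving one of two dimensions yields an average gain of 0.25.
print(gain_ratio([(0, 5), (0, 10)], [(0, 10), (0, 10)]))  # 0.25
```

The ACID policies compare such ratios against the ctratio threshold to decide when further varCID calls stop paying off.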
We experimentally found that ctratio could be fixed to 0.001 and that $nv$ should depend on the number of variables $n$ of the problem. Setting $nv$ to 1 is often a bad choice, and fixing it with the formula $nv = \max(3, \sqrt{n})$ experimentally gave the best results. The experimental results are not bad, but this policy prevents numVarCID from reaching 0, i.e., from calling only constraint propagation. This is a significant drawback when a simple constraint propagation is the most efficient approach.

B. ACID1: interleaving learning and exploitation phases

A more sophisticated approach avoids this drawback. ACID1 interleaves learning and exploitation phases for auto-adapting the numVarCID value. Depending on the node number, the algorithm is in a learning or in an exploitation phase. The behavior of ACID1, shown in Algorithm 2, is the following:

- The variables are first sorted according to their impact measurement (using the SmearSumRel heuristic).
- During a learning phase (lasting learnLength nodes), we then analyze how the contraction ratio evolves from one var3BCID call to the next, and store the number kvarCID of varcided variables necessary to obtain most of the possible filtering. More precisely, 2·numVarCID variables are varcided at each node (with a minimum value equal to 2, in case numVarCID = 0). In the first learning phase, we handle $n$ variables. At the current node, the lastSignificantGain function returns the number kvarCID of varcided variables giving the last significant improvement: after the kvarCID-th call to var3BCID, the gain in current box size from one var3BCID call to the next, computed by the gainRatio formula, never exceeded a small given ratio, called ctratio. This analysis starts from the last varcided variable. (For the readability of the pseudocode, we omit the parameters of the var3BCID procedure, i.e., $s_{3b}$, $s_{cid}$, the constraints $C$ and the contractor ctc.)
- During the exploitation phase following the previous learning phase, the average of the different kvarCID values (obtained in the nodes of the learning phase) provides the new value of numVarCID. This value will be used by 3BCID during the exploitation phase. Compared to the previous value (from the previous exploitation phase), note that this new value can at most double, but can also drastically decrease.

Every cycleLength nodes in the search tree, both phases are called again. Numerous variants of this scheme were tested. In particular, it is counterproductive to learn numVarCID only once. We experimentally fixed the three parameters of the ACID1 procedure, learnLength, cycleLength and ctratio, to 50, 1000 and 0.002, respectively. ACID1 then becomes a parameter-free procedure. With these parameter values, the overhead of the learning phases (where we double the numVarCID value) remains small.

C. ACID2: taking into account the level in the search tree

A criticism against ACID1 is that we average kvarCID values obtained at different levels of the search tree. This drawback is partially corrected by the successive learning phases of ACID1, where each learning phase corresponds to a part of the search tree. In order to go further in that direction, we designed a refinement of ACID1 in which each learning phase tunes at most 10 different values depending on the width of the studied box. A value corresponds to one order of magnitude in the box width. For example, we store a numVarCID value for the boxes with a width between 1 and 0.1, another one for the boxes with a width between 0.1 and 0.01, etc. However, this approach, called ACID2, gave in general results similar to those of ACID1 and appeared to be less robust. Indeed, only a few nodes sometimes fall at certain width levels, which makes the statistics insignificant.

III.
EXPERIMENTS

All the algorithms were implemented in the C++ interval library Ibex (Interval Based EXplorer) [6]. All the experiments were run on the same computer (Intel X86, 3 GHz). We tested the algorithms on square NCSP solving and constrained global optimization. NCSP solving consists in finding all the solutions of a square system of $n$ nonlinear equations with $n$ real-valued variables with bounded domains. Global optimization consists in finding the global minimum of a function over $n$ variables subject to constraints (equations and inequalities), the objective function and/or the constraints being non-convex.

A. Experiments in constraint satisfaction

We selected from the COPRIN benchmark\(^1\) all the systems that were solved by one of the tested algorithms in a time comprised between 2s and 3600s. The timeout was fixed to 10,000s. The required precision on the solution is $10^{-8}$. Some of these problems are scalable. In this case, we selected the problem with the greatest number of variables that was solved by one of the algorithms in less than one hour. We compared our ACID method and its variants with well-known filtering techniques: simple constraint propagation HC4, 3BCID-n (see Section II) and 3BCID-fp (fixed-point), in which a new iteration on all the variables is run when a variable domain width is reduced by more than 1%. At each node of the search tree, we used the following sequence of contractors: HC4, shaving, Interval-Newton [8], and X-Newton [2]; shaving denotes a variant of ACID, 3BCID-n, 3BCID-fp, or nothing when only HC4 is tested. For each problem, we used the best bisection heuristic available (among two variants of the Smear function [12]). The main parameter ctratio of ACID1 and ACID2, measuring a stagnation in the filtering while variables are varcided, was fixed to 0.002. The var3BCID parameters $s_{3b}$ and $s_{cid}$ were fixed to the default settings, respectively 10 and 1, proposed in [13].
Experiments on the selected instances confirm that these settings are relevant and robust to variations. In particular, setting \( s_{3b} \) to 10 gives better results than smaller values (\( s_{3b} = 5 \)) and than greater values (for 21 of the 26 instances, \( s_{3b} = 20 \) gives worse results). \(^1\)www-sop.inria.fr/coprin/logiciels/ALIAS/Benches/benches.html As shown in Table I, ACID1 often appears to be the best, or close to the best. On only 4 of the 26 problems was it more than 10% slower than the best. The number of varcided variables was tuned close to 0 on the problems where HC4 was sufficient, and above the number of variables on the problems where 3BCID-fp appeared to be the best method. In the left part of Table II, we summarize the results obtained by the three variants of ACID and their competitors. It appears that only ACID1 could solve all 26 problems in one hour, while HC4 could solve only 21 problems in 10,000 s. The gains in CPU time obtained by ACID1 w.r.t. its competitors are sometimes significant (see the line max gain), while its losses remain small. ACID0 with its two parameters was more difficult to tune, and it was not worthwhile to run the more complex algorithm ACID2. ACID1 obtains better gains w.r.t. 3BCID-n in total time than on average because the best gains were obtained on difficult instances with more variables. In the right part of the table, we report the solving time ratios obtained when X-Newton is removed (∼ XN) from the contractor sequence (4 problems could not be solved in 10,000 s). The only ACID variant studied here was ACID1. ACID1 and 3BCID-n obtain broadly similar results, better than 3BCID-fp, but with a greater dispersion (i.e., standard deviation) than with X-Newton, since shaving plays a larger role in the contraction. B.
Experiments in constrained global optimization From series 1 of the Coconut constrained global optimization benchmark\(^2\), we selected all 40 instances that ACID or a competitor could solve in a CPU time between 2 s and 3600 s. The timeout was fixed to 3600 s. We used the IbexOpt strategy of Ibex, which performs a best-first branch and bound. The experimental protocol is the same as the NCSP protocol, except that we do not use Interval-Newton, which is only implemented for square systems. For each instance, we use the best bisection heuristic (the same for all methods) among largestFirst, roundRobin and variants of the Smear function. The precision required on the objective is $10^{-8}$. Each equation is relaxed by two inequalities with a precision $10^{-8}$. Table III reports the same columns as Table I, plus a column indicating the number of constraints of the instance. For the constraint programming part of IbexOpt, HC4 is state of the art, and 3BCID is rarely needed in optimization.\(^3\) \(^2\)www.mat.univie.ac.at/~neum/glopt/coconut/Benchmark/Benchmark.html \(^3\)In fact, the more recent Mohc constraint propagation algorithm [1] is better than HC4. Mohc is not yet reimplemented in Ibex 2.0. However, 3BCID(Mohc) shows roughly the same gains w.r.t. Mohc as 3BCID(HC4) does w.r.t. HC4... Therefore, we report in the penultimate column a comparison between ACID1 and HC4. The number of varcided variables was indeed tuned by ACID1 to a value between 0 and the number of variables. Again, we can see that ACID1 is robust and is the best, or at most 10% worse than the best, for 34 of the 40 instances. Table IV shows that we obtained an average gain of 10% over HC4. This is significant because the CP contraction is only one part of the IbexOpt algorithm [12] (linear relaxation and the search for feasible points are other important parts, not studied in this paper and set to their default algorithms in IbexOpt).
ACID0 shaves a minimum of 3 variables, which is often too much. ACID2 obtains results slightly worse than ACID1, making this refinement unpromising in practice. IV. CONCLUSION We have presented in this paper an adaptive version of the 3BCID contraction operator used by interval methods and close to partition-1-AC. The best variant of this Adaptive CID operator (ACID1 in the paper) interleaves learning phases and exploitation phases to auto-adapt the number of variables handled. These variables are selected by an efficient branching heuristic, and all the other parameters are fixed and robust to modifications. Overall, ACID1 adds no parameter to the solving or optimization strategies. It offers the best results on average and is the best or close to the best on every tested instance, even in the presence of the best Ibex devices (Interval-Newton, X-Newton). Therefore, ACID1 has been added to the Ibex default solving and optimization strategies. REFERENCES Table III. Per-instance optimization results: #var, #ctr, ACID1 time, #nodes and #varcids, best and worst methods, and time ratios. (Table body not recovered.) Table IV. Summary: #solved instances and time ratios of ACID1 w.r.t. HC4, 3BCID-fp and 3BCID-n. (Table body not recovered.)
Binary Search Trees Lecturer: Georgy Gimel’farb COMPSCI 220 Algorithms and Data Structures 1. Properties of Binary Search Trees 2. Basic BST operations 3. The worst-case time complexity of BST operations 4. The average-case time complexity of BST operations 5. Self-balancing binary and multiway search trees 6. Self-balancing BSTs: AVL trees 7. Self-balancing BSTs: Red-black trees 8. Balanced B-trees for external search Binary Search Tree: Left-Right Ordering of Keys Left-to-right numerical ordering in a BST: for every node $i$, - the values of all the keys $k_{\text{left}:i}$ in the left subtree are smaller than the key $k_i$ in $i$, and - the values of all the keys $k_{\text{right}:i}$ in the right subtree are larger than the key $k_i$ in $i$: $\{k_{\text{left}:i}\} \ni l < k_i < r \in \{k_{\text{right}:i}\}$ Compare this to the bottom-up ordering in a heap, where the key $k_i$ of every parent node $i$ is greater than or equal to the keys $k_l$ and $k_r$ in the left and right child nodes $l$ and $r$, respectively: $k_i \geq k_l$ and $k_i \geq k_r$. Binary Search Tree: Left-Right Ordering of Keys BST Non-BST: Key "2" cannot be in the right subtree of key "3". Non-BST: Keys "11" and "12" cannot be in the left subtree of key "10". Basic BST Operations A BST is an explicit *data structure* implementing the table ADT. - BSTs are more complex than heaps: any node may be removed, not only the root or a leaf. - The only practical constraint: no duplicate keys (attach them all to a single node). Basic operations: - **find** a given search key or detect that it is absent from the BST. - **insert** a node with a given key into the BST if it is not found. - **findMin**: find the minimum key. - **findMax**: find the maximum key. - **remove** a node with a given key and restore the BST if necessary. BST Operations **Find / Insert a Node** **find**: a successful binary search. **insert**: creating a new node at the point where an unsuccessful search stops.
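The find and insert operations just described can be sketched in Python. This is an illustrative sketch, not the lecture's code; duplicate keys are simply ignored here rather than attached to a single node.

```python
class Node:
    """A BST node holding one key (duplicates are not stored)."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def find(root, key):
    """Successful binary search returns the node; unsuccessful returns None."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root

def insert(root, key):
    """Create a new node exactly where an unsuccessful search stops."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root
```

Inserting the keys 10, 5, 15, 12 in that order makes 10 the root; key 12 ends up as the left child of 15, exactly where `find(root, 12)` would have failed before the insertion.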
**BST Operations: FindMin / FindMax** *Extremely simple:* starting at the root, branch repeatedly left (findMin) or right (findMax) as long as a corresponding child exists. - The root of the tree plays the role of the pivot in quicksort and quickselect. - As in quicksort, the recursive traversal of the tree can sort the items: 1. First visit the left subtree; 2. Then visit the root, and 3. Then visit the right subtree. $O(\log n)$ average-case and $O(n)$ worst-case running time for the find, insert, findMin, and findMax operations, as well as for selecting a single item (just as in quickselect). BST Operation: **Remove a Node** The most complex operation, because removing a node may disconnect the tree. - Reattachment must retain the ordering condition. - Reattachment should not needlessly increase the tree height. **Standard method of removing a node $i$ with $c$ children:** <table> <thead> <tr> <th>$c$</th> <th><strong>ACTION</strong></th> </tr> </thead> <tbody> <tr> <td>0</td> <td>Simply remove the leaf $i$.</td> </tr> <tr> <td>1</td> <td>Remove the node $i$ after linking its child to its parent node.</td> </tr> <tr> <td>2</td> <td>Swap the node $i$ with the node $j$ having the smallest key $k_j$ in the right subtree of the node $i$. After swapping, remove the node $i$ (as now it has at most one right child).</td> </tr> </tbody> </table> In spite of its asymmetry, this method cannot really be improved. BST Operation: **Remove a Node** Example: remove 10 ⇒ replace it with 12, the minimum key in its right subtree (swap and delete). Lemma 3.11: The search, retrieval, update, insert, and remove operations in a BST all take time in $O(h)$ in the worst case, where $h$ is the height of the tree. Proof: The running time $T(n)$ of these operations is proportional to the number of nodes $\nu$ visited. - **Find / insert:** $\nu = 1 + \langle\text{the depth of the node}\rangle$. - **Remove:** $\langle\text{the depth + at most the height of the node}\rangle$. - **In each case** $T(n) = O(h)$.
For a well-balanced BST, $T(n) \in O(\log n)$ (logarithmic time). In the worst case $T(n) \in \Theta(n)$ (linear time) because insertions and deletions may severely unbalance the tree. Analysing BST: The Worst-case Time Complexity BSTs of height $h \approx \log n$ vs. BSTs of height $h \approx n$ Analysing BST: The Average-case Time Complexity More balanced trees are more frequent than unbalanced ones. **Definition 3.12:** The total internal path length, $S_\tau(n)$, of a binary tree $\tau$ is the sum of the depths of all its nodes. Example: $S_\tau(8) = 0 + 1 + 1 + 2 + 2 + 3 + 3 + 3 = 15$ - Average complexity of a successful search in $\tau$: the average node depth, $\frac{1}{n}S_\tau(n)$, e.g. $\frac{1}{8}S_\tau(8) = \frac{15}{8} = 1.875$ in this example. - Average-case complexity of searching: - Averaging $S_\tau(n)$ over all the trees of size $n$, i.e. over all possible $n!$ insertion orders, occurring with equal probability, $\frac{1}{n!}$. The $\Theta(\log n)$ Average-case BST Operations Let $S(n)$ be the average of the total internal path length, $S_\tau(n)$, over all BSTs $\tau$ created from an empty tree by sequences of $n$ random insertions, each sequence considered equiprobable. **Lemma 3.13:** The expected time for successful and unsuccessful search (update, retrieval, insertion, and deletion) in such a BST is $\Theta(\log n)$. **Proof:** It should be proven that $S(n) \in \Theta(n \log n)$. - Obviously, $S(1) = 0$. - Any $n$-node tree, $n > 1$, contains a left subtree with $i$ nodes, a root at depth 0, and a right subtree with $n - i - 1$ nodes; $0 \leq i \leq n - 1$. - For a fixed $i$, $S(n) = (n - 1) + S(i) + S(n - i - 1)$, as the root adds 1 to the path length of each other node.
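Definition 3.12 is easy to check mechanically. The sketch below (illustrative, ours; trees are given as nested (left, right) tuples) recomputes the example value $S_\tau(8) = 15$.

```python
def internal_path_length(tree, depth=0):
    """Sum of the depths of all nodes of a binary tree represented as
    nested (left, right) tuples; None denotes an empty subtree."""
    if tree is None:
        return 0
    left, right = tree
    return (depth
            + internal_path_length(left, depth + 1)
            + internal_path_length(right, depth + 1))

# The 8-node example tree with node depths 0, 1, 1, 2, 2, 3, 3, 3:
x = ((None, None), (None, None))   # depth-2 node with two depth-3 children
y = ((None, None), None)           # depth-2 node with one depth-3 child
example = ((x, None), (y, None))   # root with two depth-1 children

assert internal_path_length(example) == 15   # S(8) = 0+1+1+2+2+3+3+3
assert internal_path_length(example) / 8 == 1.875   # average search depth
```

Dividing by $n$ gives the average node depth, i.e. the average cost of a successful search in that particular tree.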
The $\Theta(\log n)$ Average-case BST Operations Proof of Lemma 3.13 (continued): - After summing these recurrences for $0 \leq i \leq n - 1$ and averaging, exactly the same recurrence as in the average-case quicksort analysis is obtained: $$S(n) = (n - 1) + \frac{2}{n} \sum_{i=0}^{n-1} S(i)$$ - Therefore, $S(n) \in \Theta(n \log n)$, and the expected depth of a node is $\frac{1}{n} S(n) \in \Theta(\log n)$. - Thus, the average-case search, update, retrieval and insertion time is in $\Theta(\log n)$. - It is possible to prove (but in a more complicated way) that the average-case deletion time is also in $\Theta(\log n)$. BSTs allow for special balancing, which prevents the tree height from growing too large, i.e. avoids the worst cases with linear time complexity $\Theta(n)$. □ Self-balanced Search Trees **Balancing** ensures that the total internal path lengths of the trees are close to the optimal value of $n \log n$. - The average-case and the worst-case complexity of operations is $O(\log n)$ due to the resulting balanced structure. - But the insertion and removal operations take longer on average than for standard binary search trees. **Balanced BSTs:** AVL trees and red-black trees (below). **Balanced multiway search trees:** - B-trees (1972: R. Bayer and E. McCreight). Self-balancing BSTs: AVL Trees Complete binary trees have too rigid a balance condition to maintain when new nodes are inserted. **Definition 3.14:** An AVL tree is a BST with the following additional balance property: - for any node in the tree, the heights of the left and right subtrees can differ by at most 1. The height of an empty subtree is $-1$. Advantages of the AVL balance property: - Guaranteed height $\Theta(\log n)$ for an AVL tree. - Less restrictive than requiring the tree to be complete. - Efficient ways of restoring the balance if necessary. Lemma 3.15: The height of an AVL tree with $n$ nodes is $\Theta(\log n)$.
Proof: Due to the possibly different heights of subtrees, an AVL tree of height $h$ may contain fewer than the $2^{h+1} - 1$ nodes of the complete tree. - Let $S_h$ be the size of the smallest AVL tree of height $h$. - $S_0 = 1$ (the root only) and $S_1 = 2$ (the root and one child). - The smallest AVL tree of height $h$ has the smallest subtrees of heights $h - 1$ and $h - 2$ by the balance property, so that $$S_h = S_{h-1} + S_{h-2} + 1 = F_{h+3} - 1$$ where $F_i$ is the $i^{th}$ Fibonacci number (recall Lecture 6). Self-balancing BSTs: AVL Trees (Proof of Lemma 3.15 – cont.) That $S_h = F_{h+3} - 1$ is easily proven by induction: - **Base case:** $S_0 = F_3 - 1 = 1$ and $S_1 = F_4 - 1 = 2$. - **Hypothesis:** Let $S_i = F_{i+3} - 1$ and $S_{i-1} = F_{i+2} - 1$. - **Inductive step:** Then $$S_{i+1} = S_i + S_{i-1} + 1 = F_{i+3} - 1 + F_{i+2} - 1 + 1 = F_{i+4} - 1$$ Therefore, for each AVL tree of height $h$ and with $n$ nodes: $$n \geq S_h \approx \frac{\varphi^{h+3}}{\sqrt{5}} - 1 \text{ where } \varphi \approx 1.618,$$ so that its height $h \leq 1.444 \lg (n + 1) - 1.33$. - The worst-case height is at most 44% more than the minimum height for binary trees. - The average-case height of an AVL tree is provably close to $\lg n$. □ Self-balancing BSTs: AVL Trees Rotation to restore the balance after BST insertions and deletions: If there is a subtree of large height below the node $a$, the right rotation will decrease the overall tree height. - All self-balancing binary search trees use the idea of rotation. - Rotations are mutually inverse and change the tree only locally. - Balancing of AVL trees requires extra memory and heavy computations. - More relaxed but still efficient BSTs, e.g., red-black trees, are used more often in practice. Self-balancing BSTs: Red-black Trees **Definition 3.17:** A red-black tree is a BST such that - Every node is coloured either red or black. - Every non-leaf node has two children. - The root is black. - All children of a red node must be black.
- Every path from the root to a leaf must contain the same number of black nodes. **Theorem 3.18:** If every path from the root to a leaf contains $b$ black nodes, then the tree contains at least $2^b - 1$ black nodes. Proof of Theorem 3.18: - **Base case:** Holds for $b = 1$ (either the black root only, or the black root and one or two red children). - **Hypothesis:** Let it hold for all red-black trees with $b$ black nodes in every path. - **Inductive step:** A tree with $b + 1$ black nodes in every path and two black children of the root contains two subtrees with $b$ black nodes per path just under the root, and so has in total at least $1 + 2 \cdot (2^b - 1) = 2^{b+1} - 1$ black nodes. - If the root has a red child, the latter has only black children, so that the total number of black nodes can only become larger. □ Self-balancing BSTs: Red-black and AA Trees Searching in a red-black tree is logarithmic, $O(\log n)$. - No path can contain two consecutive red nodes, so no path can more than double in length once the red nodes are counted. - Therefore, the height of a red-black tree is at most $2 \lceil \lg n \rceil$. No precise average-case analysis exists (only empirical findings or properties of red-black trees with $n$ random keys): - The average case: $\approx \lg n$ comparisons per search. - The worst case: $< 2 \lg n + 2$ comparisons per search. - $O(1)$ rotations and $O(\log n)$ colour changes restore the tree after inserting or deleting a single node. **AA-trees** (red-black trees in which the left child may not be red) are even more efficient when node deletions are frequent. Balanced B-trees The “Big-Oh” analysis is invalid if the assumed equal time complexity of elementary operations does not hold. - External ordered databases reside on magnetic or optical disks. - One disk access costs hundreds of thousands of computer instructions. - The number of accesses dominates the running time. - Even the logarithmic worst-case complexity of red-black or AA-trees is unacceptable.
- Each search should involve a very small number of disk accesses. - Binary tree search (with an optimal height $\lg n$) cannot solve the problem. Height of an optimal $m$-ary search tree ($m$-way branching): $$\approx \log_m n, \text{ i.e. } \approx \frac{\lg n}{\lg m}$$ Balanced B-trees Height of the optimal $m$-ary search tree with $n$ nodes: <table> <thead> <tr> <th>$n$</th> <th>$10^5$</th> <th>$10^6$</th> <th>$10^7$</th> <th>$10^8$</th> <th>$10^9$</th> <th>$10^{10}$</th> <th>$10^{11}$</th> <th>$10^{12}$</th> </tr> </thead> <tbody> <tr> <td>$\lceil \log_2 n \rceil$</td> <td>17</td> <td>20</td> <td>24</td> <td>27</td> <td>30</td> <td>34</td> <td>37</td> <td>40</td> </tr> <tr> <td>$\lceil \log_{10} n \rceil$</td> <td>5</td> <td>6</td> <td>7</td> <td>8</td> <td>9</td> <td>10</td> <td>11</td> <td>12</td> </tr> <tr> <td>$\lceil \log_{100} n \rceil$</td> <td>3</td> <td>3</td> <td>4</td> <td>4</td> <td>5</td> <td>5</td> <td>6</td> <td>6</td> </tr> <tr> <td>$\lceil \log_{1000} n \rceil$</td> <td>2</td> <td>2</td> <td>3</td> <td>3</td> <td>3</td> <td>4</td> <td>4</td> <td>4</td> </tr> </tbody> </table> Multiway search tree of order $m = 4$: Data records are associated only with leaves (in most definitions). A **B-tree** of order $m$ is an $m$-ary search tree such that: 1. The root either is a leaf, or has $\mu \in \{2, \ldots, m\}$ children. 2. There are $\mu \in \{\lceil \frac{m}{2} \rceil, \ldots, m\}$ children of each non-leaf node, except possibly the root. 3. $\mu - 1$ keys, $(\theta_i : i = 1, \ldots, \mu - 1)$, guide the search in each non-leaf node with $\mu$ children, $\theta_i$ being the smallest key in subtree $i + 1$. 4. All leaves are at the same depth. 5. Data items are in leaves, each leaf storing $\lambda \in \{\lceil \frac{l}{2} \rceil, \ldots, l\}$ items, for some $l$. - Conditions 1–3: to define the memory space for each node. - Conditions 4–5: to form a well-balanced tree.
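The heights in the table are just $\lceil \log_m n \rceil$. A small integer-only sketch (ours, not from the lecture) computes them without the floating-point rounding problems that occur at exact powers of $m$:

```python
def mary_height(n, m):
    """Smallest h with m**h >= n: the height of an optimal m-ary search
    tree over n items (number of branchings from the root to a leaf)."""
    h, capacity = 0, 1
    while capacity < n:
        capacity *= m
        h += 1
    return h

# A few entries of the table: a billion items need 30 binary branchings,
# but only 5 branchings with 100-way nodes.
assert mary_height(10**9, 2) == 30
assert mary_height(10**9, 100) == 5
```

Exact integer arithmetic matters here: `math.ceil(math.log(n, m))` can be off by one when `n` is an exact power of `m`, because `math.log` may return a value slightly above the true logarithm.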
Balanced B-trees B-trees are usually named by their branching limits $\left\lceil \frac{m}{2} \right\rceil$ – $m$: e.g., 2–3 trees with $m = 3$ or 2–4 trees with $m = 4$. $m = 4$; $l = 7$: a 2–4 B-tree with leaf storage size 7 (2..4 children per node and 4..7 data items per leaf) Balanced B-trees Because the nodes are at least half full, a B-tree with $m \geq 8$ cannot be a simple binary or ternary tree. - Simple **data insertion** if the corresponding leaf is not full. - Otherwise, splitting a full leaf into two leaves, both having the minimum number of data items, and updating the parent node. - If necessary, the splitting propagates up until finding a parent that need not be split, or until reaching the root. - Only in the extremely rare case of splitting the root does the tree height increase: a new root with two children (the halves of the previous root) is created. Data insertion, deletion, and retrieval in the worst case: about $\left\lceil \log_{\frac{m}{2}} n \right\rceil$ disk accesses. - This number is practically constant if $m$ is sufficiently large.
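The $\lceil \log_{m/2} n \rceil$ bound above translates into very small access counts. A quick sketch (ours, assuming even $m$ so that $\lceil m/2 \rceil = m/2$):

```python
def btree_worst_accesses(n, m):
    """Smallest d with (m // 2) ** d >= n: roughly the worst-case number
    of disk accesses for a B-tree of order m holding n data items, since
    every non-root node has at least m/2 children."""
    branching = m // 2
    d, capacity = 0, 1
    while capacity < n:
        capacity *= branching
        d += 1
    return d

# An order-200 B-tree reaches a billion items in about 5 disk accesses.
assert btree_worst_accesses(10**9, 200) == 5
```

This is why the number of accesses is practically constant for realistic disk-block sizes: growing $n$ by a factor of 100 adds only one more access when $m = 200$.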
Describe, Manage and Discover Research Software Sue Cook (CSIRO) Jens Klump (CSIRO) Paola Petrelli (CLEX) Margie Smith (GA) Geoff Squire (CSIRO) Lesley Wyborn (NCI) Mingfang Wu (ARDC) Outline - Introduction, landscape of software citation and publish, changes in Research Data Australia (RDA) for promoting software (Mingfang Wu, ARDC) - New requirement from publishers and funders for software citation (Lesley Wyborn, NCI) - Lightning talks: - Software Citation and GA: Motivations, outcomes and future direction (Margie Smith, GA) - Software in the CSIRO DAP: Description (Sue Cook, CSIRO) - CLEX software publishing workflow (Paola Petrelli, CLEX) - Describing software for Virtual Laboratories (Geoff Squire, CSIRO) - Q/A, group discussion and feedback (Jens Klump, CSIRO) Why do we care - Software is pervasive in research - >90% of researchers acknowledge software is important for their own research - ~70% say their research would not be possible without it. - Of 40 papers examined in Nature Jan-March 2016, 32 contain 211 mentions of distinct pieces of software, for an average of 6.5 mentions per paper 12 scientific software challenges **Open Research and Scholarly Communication** - Intellectual property - Publication and peer review - Software dissemination, catalogues, search, and review **Sustainable Software** - Training and education - Software engineering - Portability - Multidisciplinary science - Reproducibility - Reusability **Sustainable community** - Incentives, citation/credit models, and metrics - Career paths - Software communities and sociology - Sustainability and funding models Daniel S. Katz: *Software in Research: Underappreciated and underrewarded*. Keynote speech from 2017 eRA. 
The FAIR Data Principles Findable, Accessible, Interoperable, Reusable The FORCE11 Software Citation Principles Importance, Credit and attribution, Unique identification, Persistence, Accessibility, Specificity The OSS Recommendations Make source code publicly accessible from day one Make software easy to discover by providing software metadata via a popular community registry Adopt a license and comply with the licenses of third-party dependencies Define clear and transparent contribution, governance and communication processes **Who and What (Internationally)** ### Open Research and Scholarly Communication - FORCE11 Software Citation Implementation WG - RDA Research Software Source Code IG - Nature software submission guidelines (2018) - Journal of Open Source Software - Elsevier software journals: SoftwareX, Science of Computer Programming, Neurocomputing ### Sustainable software - US Research Software Sustainability Institute (URSSI) - UK Software Sustainability Institute - Working Towards Sustainable Software For Science (WSSSPE) ### Sustainable community - Research Software Engineer Association **Support from disciplines and organisations** - ESIP: Software Guidelines - AGU: Enabling FAIR Data Project - Astrophysics: AAS Journals, Astrophysics Source Code Library - ... Australian activities supporting research software - Research Data Australia (catalogue) - Australian Research Software IG - RSE Association – Australian Chapter - ARDC Skills and Training Program - Uni., Gov. agencies, NCRIS facilities, etc.
are treating software as research output **FORCE11 software citation implementation group** - RDA software source code interest group - DataCite **AGU (Enabling FAIR Data Project)** - ESIP Software and Services Citations cluster **Working Towards Sustainable Software For Science (WSSSPE)** - Research Software Engineer Association Supporting catalogues/repositories General repositories - DataCite - Zenodo - Code Ocean - Code.gov - Figshare Domain-specific software repositories - Astrophysics Source Code Library (http://ascl.net/) - OMICStools (https://omictools.com/) - Bio.tools (https://bio.tools/) - Bioconductor (https://www.bioconductor.org/) Software code archive: Software Heritage Software metadata/ontologies/vocabularies CodeMeta Project - DataCite - Zenodo - Dublin Core - R Package Description - Trove Software Map - Perl Module Description (CPAN::Meta) - Debian Package - Python Distutils (PyPI) - GitHub - NodeJS - Software Discovery Index - OntoSoft - Software Ontology - Figshare - Research Data Australia (RIF-CS) What do we (ARDC) do - Recommended software citation format: Creator (PublicationYear): Title. Version No. Publisher. [resourceTypeGeneral]. Identifier. What do we (ARDC) do - Amended the Research Data Australia (RDA) registry schema (RIF-CS) to describe software as a distinct resource type What do we (ARDC) do - 212 registered software records in RDA (was 173 in Nov.
2017)
- Commonwealth Scientific and Industrial Research Organisation: 87
- Geoscience Australia: 70
- Australian Ocean Data Network: 34
- Monash University: 15
- The University of Adelaide: 4
- ARC Centre of Excellence for Climate System Science: 1
- National Archives of Australia: 1

108 of these have minted DOIs.

**Enhance software discoverability in RDA**
- Clearly label software objects
- A new software link from the Contributor page
- A new filter
- A new "Explore" page for software

**New requirements from Publishers (and the ARC/NHMRC/Universities Australia)**

Lesley Wyborn, National Computational Infrastructure

**The Drivers for Change**
- Fifty years ago, most data that underpinned a publication could be represented in typeset tables, and methods could be described in text.
- Most calculations were done using slide rules and log tables.
- With the advent of the digital age and the computerisation of instruments, the volumes of data collected became too large to process manually and publish as tables: computer code became integral to modern scientific research.
- Increasingly, data and software were included as a supplement to the research paper, accessible by contacting the journal, or else 'by contacting the author'.

**The Problem**
- The inability to access primary data, samples, and software limits the ability to test the veracity and reproducibility of any publication.
- Supplements do not guarantee accessibility and persistence of input research artefacts (data, software and samples in particular).

https://thelifeididntchoose.com/2018/08/14/life-is-absolutely-not-fair/

**How do we fix it?**
1. In 2017, a grant from the Laura and John Arnold Foundation was awarded to the American Geophysical Union (AGU) and other partners to significantly improve the interconnection of data, software and samples in the literature in the Earth and environmental sciences, based around the FAIR guiding principles.
2.
A coalition of Earth and environmental science publishers, disciplinary data repositories, and supporting organizations joined forces to work together on a commitment statement on FAIR publishing.
3. AuScope, ARDC and the NCI were all partners in the project, and various members participated in stakeholder meetings and contributed to the final outcomes.

**The Commitment Statement**

This states that publication of scholarly articles in the Earth and environmental science communities is conditional upon the concurrent availability of the underpinning data and software. These should, to the greatest extent possible, be shared, open, and stored in community-approved FAIR-aligned repositories. It has been signed by publishers, repositories, professional societies, institutions, research data infrastructures and individuals (including AuScope, NCI, ARDC).

**What does this mean for the Earth and environmental sciences?**

For the publishers? Publishers are now working towards consistent policies for sharing and citing data, samples and software, and will move from having these as supplements to using trusted repositories for publishing supporting research artefacts.

For repositories? Repositories will need to move towards being able to provide persistent identifiers, rich metadata, and related services for the data, software and samples they hold.

For researchers? Researchers will need to know how to consistently share, document, and reference data, samples and software, and use globally persistent identifiers to uniquely identify their research outputs.

ARDC has developed guidelines for citing software based on the international recommendations of the FORCE11 software citation principles, DataCite, CodeMeta, and others.

**Software Citation and Geoscience Australia: Motivations, outcomes and future direction**

Margie Smith, Data Policy and Informatics

**Main motivations**
1. Government policy and legislation
   a) Digital Continuity 2020
   b) The Archives Act 1983
2.
Geoscience Australia Data Strategy
3. Geoscience Australia's Science Principles
4. Geoscience Australia's Strategic Priorities
5. ...

**Government policy considerations**

Government data / digital transformation agenda through DC2020 and the Archives Act 1983, requiring provenance of method as described in the Records Disposal Authority.

**Data management planning considerations**

**Upstream**
- GA scientist
- Contractor
- Commercial operator
- Other gov't organisations

**Input channels**
- Sensor data stream
- Web data stream
- Email attachment
- External disk drive
- Physical sample
- Physical copy

**Science Areas work spaces**
- Database
- National Computational Infrastructure
- Corporate Data Store
- Metadata catalogue
- Laboratory / software data
- Cloud services (e.g. AWS)

**Outputs**
- Web services
- Raw/processed data
- Metadata
- Scientific publications
- News and mailing lists
- Reports
- Workflows and code

**Downstream**
- Other gov't organisations
- Geoscience Australia
- Scientific institutions
- Other geographic organisations

**Data management planning considerations**
- Embed best practice data management
- Persistently identify all objects to enable provenance and cataloguing

**Outcomes – tracking provenance through the standard**

Current eCat search is not granular.

- IBTrACS 70008 provides link to external p.o.t.
- TCRM GitHub: https://github.com/GeoscienceAustralia/tcrm/releases/tag/v2.0.2
- TCRM 77484 Model eCat record
- Product Management Plan internal CMS link D2017-130697

Tropical Cyclone Risk Model Stochastic Event Catalogue 82033

Machine discoverable (?)
but not people friendly Moving towards correct citation for advice Citation cases 1. (proposed workflow tool N. Car 2017-05-11) • {AUTHORS} ({YEAR}) {TITLE}. {TOOL_TYPE} {REPO_BRANCH}[0,1]. {PUBLISHER}. {DOI | URI}. {ACCESSED_DATE}[0,1] What we hope to have in eCat for provenance against advice generated: Future direction – discovery and linkage improvement Tropical Cyclone Risk Model Stochastic Event Catalogue The TCRM Stochastic Event Catalogue contains artificially generated tropical cyclone tracks and wind fields representing 10000 years of tropical cyclone activity. The catalogue is stored by year, with a track file and wind field file. The wind field file contains the maximum wind speed from all events occurring in the corresponding track file (i.e., it represents annual maximum wind speeds). About this resource Scope Code: dataset Categories - Climatology, meteorology, atmosphere - Data Package - DC2020 - Published_internal Australian and New Zealand Standard Research Classification (ANZSRC) - Natural Hazards Legal constraints: Creative Commons Attribution 4.0 International Licence Author: Arthur, W.C. 
Contact for the resource
- Custodian: CSEMID
- Owner: Commonwealth of Australia (Geoscience Australia)

Citation: If you wish to cite this record as you would a publication, please use the following format:

**Future direction – discovery and linkage improvement**

Slowly improving compliance: status of GA's GitHub repos' metadata.

Totals
- Total: 195
- Passed: 36
- Failed: 159

Previously
- Total: 155
- Passed: 2
- Failed: 153

Repos failing tests:

| DefinitelyTyped | passed | README does not contain a subsection titled 'Contacts' |
|-----------------|--------|--------------------------------------------------------|
| repo_must_contain_readme | | |
| readme_must_start_with_title | | |
| repo_must_have_license_file | | |
| readme_must_contain_license_section | | |
| readme_must_contain_contacts_section | | README does not contain any GA email addresses for contact people |

| GeodePy | passed | README does not contain a subsection titled 'Contacts' |
|---------|--------|--------------------------------------------------------|
| repo_must_contain_readme | | |
| readme_must_start_with_title | | |
| repo_must_have_license_file | | |
| readme_must_contain_license_section | | |
| readme_must_contain_contacts_section | | README does not contain any GA email addresses for contact people |

Thank you

See THE BOSS ask Bob a question about the data. See Bob squirm. See Bob search and collect data. Bob didn't write any metadata, you see! Now, see Bob feeling queasy.

data@ga.gov.au

**Software in the CSIRO DAP: Description**

Sue Cook | Data Librarian | 17 October 2018

Search CSIRO collections: Search By Location | Featured Collections

Data from the ASKAP latitude 50 Fast Radio Burst (FRB) sample: The collection accompanies the paper "The dispersion-brightness relation for fast radio bursts from a wide-field survey". It contains 3 directories: full_data/ ASKAP CRAFT search mode data...
Detecting Social Roles in Twitter: Social roles are one particular demographic characteristic, which includes work, non-spatial, community and familial roles. We create a new annotated dataset for the task of detecting social...

Silver Nanoparticle Data Set: This is a set of silver nanoparticle final configurations, for use in data-driven studies. These structures have been optimized (fully relaxed) using Density Functional Tight Binding...

Search Results: Found 90 results

Privacy Preserving Linkage Software: A set of software tools for privacy preserving entity linkage. * anonlink: A library for carrying out the low level hash comparisons required server side * entity-service: Our linkage server implemen... more — Confidential Computing - Published 05 Oct 2018

AusFarm Decision Support Software: AusFarm modelling tool built using the Common Modelling Protocol. One CSIRO Rural Decision Support - Software development - Published 13 Sep 2018

PorosityPlus: The PorosityPlus code can be used to calculate the surface area, volume and pore size distribution (PSD) of particle networks. These particles can be multiscale, ranging from atoms to nanoparticles to... more — MMM Research & Applications - MMM Software - Published 04 Sep 2018

cuda-fixnum

https://data.csiro.au/collections/

GrainScan - Software for analysis of grain images
Alex Whan, Matt Bolger, Leanne Bischof
This collection is to accompany the publication of the paper "GrainScan: A low cost, fast method for grain size and colour measurements." It contains the software version that is referred to in that publication.
20181008090840 Test Data Collection Suresh Palaniyandi 20181008090835 Test Data Collection Suresh Palaniyandi Software (140) Data (1339) Service (60) GrainScan - Software for analysis of grain images **About this Collection** **Collection Title:** GrainScan - Software for analysis of grain images **Collection Description:** This collection is to accompany the publication of the paper "GrainScan: A low cost, fast method for grain size and colour measurements". It contains the software version that is referred to in that publication. **Field of Research:** Plant Biology not elsewhere classified **DOI:** [http://doi.org/10.4225/08/S36302C43FC28](http://doi.org/10.4225/08/S36302C43FC28) **Contact:** CSIRO Enquiries **Keywords:** Grain; cereal; image analysis; seed size; software **Related Materials:** Collection: Repository containing maintained versions of GrainScan. **Collection:** Whan A., Cavanagh C: Scanned wheat grain images, 10.4225/08/S2P9AP7J62532. **Supporting Files:** GrainScanSupplement.docx **Licence:** CSIRO Binary Software Licence **Organisations:** CSIRO (Australia) **Attribution Statement:** Whan, Alex; Bolger, Matt; Bischof, Leanne (2014): GrainScan - Software for analysis of grain images. v2. CSIRO. Software Collection. [http://doi.org/10.4225/08/S36302C43FC28](http://doi.org/10.4225/08/S36302C43FC28) **Rights Statement:** All Rights (including copyright) CSIRO Australia 2014. **Access:** The metadata and files are available to the public. 
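The attribution statement above instantiates the ARDC-recommended citation pattern — Creator (PublicationYear): Title. Version. Publisher. [resourceTypeGeneral]. Identifier. A minimal sketch of assembling such a string from record fields; the function and field names are illustrative, not the DAP schema:

```python
def attribution_statement(creators, year, title, version, publisher,
                          resource_type, identifier):
    """Assemble a citation following the recommended pattern:
    Creator (PublicationYear): Title. Version. Publisher. [resourceTypeGeneral]. Identifier.
    """
    return (f"{'; '.join(creators)} ({year}): {title}. {version}. "
            f"{publisher}. {resource_type}. {identifier}")

# Reproduces the GrainScan attribution statement shown above.
print(attribution_statement(
    creators=["Whan, Alex", "Bolger, Matt", "Bischof, Leanne"],
    year=2014,
    title="GrainScan - Software for analysis of grain images",
    version="v2",
    publisher="CSIRO",
    resource_type="Software Collection",
    identifier="http://doi.org/10.4225/08/S36302C43FC28",
))
```

Keeping the pattern in one place like this is what lets a repository emit a consistent citation for every software record it holds.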
Workspace: Scientific Workflow Platform About this collection - CSIRO, Matt Bolger, Paul Cleary, Lachlan Hetherington, Chris Rucinski, David Thomas, Damien Watkins Collection description Workspace is a powerful software platform designed to address two specific user scenarios: 1) Scientists who want to create and share scientific workflows in one coherent, simple environment where much of the "heavy lifting" has already been developed and proven over a number of years 2) Developers who want to make their software available as commercial products, plugins or components that can be freely mixed with capabilities from collaborators. Access The metadata and files (if any) are available to the public. Related links - Publication: Workspace: A Platform for Delivering Scientific Applications - Publication: Workspace: scientific workflow platform - Website: Workspace website at CSIRO Supporting Files workspace.license About this project - CSIRO, Cleary, Paul, Hetherington, Lachlan, Bolger, Matt, Rucinski, Chris, Sankaranarayanan, Nilupama; Thomas, David; Watkins, Damien; Zhang, Zhi; Subramaniam, Rajesh; Nguyen, Dang Oanh; McNally, Matt (2018). Workspace: Scientific Workflow Platform, v1.4. CSIRO. Software Collection. 
https://doi.org/10.25919/5b3c1df633cd3

**Workspace: Scientific Workflow Platform — Software**
- Environment requirements: Windows/Linux/Mac
- Language (programming): C++
- Operating system: Windows/Linux/Mac
- Version: 3.4.0

DAP common metadata: CSMD (CCLRC Core Scientific Metadata Model)

**More about this Collection**

If you are entering software or data that requires specific research area fields, select extra descriptors to further search capabilities. Available metadata schemas include ANZLIC, Darwin Core, Marine Community Profile, VO Resource, Software and Sensor; the Software schema adds the following fields:
- Environment Requirements
- Language (Programming)
- Operating System
- Version
- Software Documentation (upload using Supporting Attachments)

## Software Schema Usage

| Metadata access | Used Software Schema: Yes | Used Software Schema: No |
|-----------------|---------------------------|--------------------------|
| Public          | 64                        | 26                       |
| CSIRO Only      | 3                         | 3                        |
| Specific Users  | 1                         | 1                        |
| **Totals**      | **68**                    | **30**                   |

Schema.org tags:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Dataset",
  "name": "Workspace: Scientific Workflow Platform",
  "description": "Workspace is a powerful software platform designed to address two specific user scenarios: \n1) Scientists who want to create and share scientific workflows in a coherent, simple environment where much of the \n2) Developers who want to make their software available as commercial products, plugins or components that can be freely mixed with capabilities from collaborators\n",
  "datePublished": "2018",
  "keywords": "scientific workflow platform",
  "license": "https://wiki.csiro.au/display/dmsdoc/CSIRO+Binary+Software+License+Agreement",
  "citation": "CSIRO; Cleary, Paul; Hetherton, Lachlan; Bolger, Matt; Rucinski, Chris; Sanjaramanayanan, Nirupama; Thomas, David; Watkins, Damien; Zhang, Zikai; Subramanian, Rajesh; Nguyen, Dang Quan; McNally, Matt (2018): Workspace: Scientific Workflow Platform. v14. CSIRO. Software Collection. 10.25919/5b3c1dc633cd3",
  "publisher": "CSIRO",
  "isAccessibleForFree": true,
  "author": [
    {"@type": "Person", "name": "Paul Cleary"},
    {"@type": "Person", "name": "Lachlan Hetherton"},
    {"@type": "Person", "name": "Matt Bolger"},
    {"@type": "Person", "name": "Chris Rucinski"},
    {"@type": "Person", "name": "Nirupama Sanjaramanayanan"},
    {"@type": "Person", "name": "David Thomas"},
    {"@type": "Person", "name": "Damien Watkins"},
    {"@type": "Person", "name": "Zikai Zhang"},
    {"@type": "Person", "name": "Rajesh Subramanian"},
    {"@type": "Person", "name": "Dang Quan Nguyen"},
    {"@type": "Person", "name": "Matt McNally"}
  ],
  "funder": [{"@type": "Organization", "name": "CSIRO"}],
  "identifier": "DOI: 10.25919/5b3c1dc633cd3",
  "URL": "https://doi.org/10.25919/5b3c1dc633cd3"
}
</script>
```

**Next**

Rest of 2018
- Tech debt and consolidation
- Could address some of our gaps, e.g. collection types in DataCite and schema.org

2019/20
- New UI for depositor pages
- Greatly enhanced API for deposit

Vision
- Deposit API and CodeMeta could be mapped to pull in software from code repositories.

**Make your research software citable**

Did you know that you can make your research software citable? More and more journals are adopting the FORCE11 Software Citation principles and encouraging researchers to make the software that was used in their research available. The easiest way for people to find out about the availability of your research software is to cite it in your references.
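Schema.org JSON-LD like the block shown earlier is what makes DAP records machine-harvestable. A minimal sketch of extracting citation fields from such a payload, assuming the `<script>` body has already been pulled from the page and using a trimmed stand-in for the real record:

```python
import json

# Trimmed stand-in for the schema.org JSON-LD payload shown above.
payload = """
{
  "@context": "http://schema.org",
  "@type": "Dataset",
  "name": "Workspace: Scientific Workflow Platform",
  "datePublished": "2018",
  "publisher": "CSIRO",
  "author": [
    {"@type": "Person", "name": "Paul Cleary"},
    {"@type": "Person", "name": "Lachlan Hetherton"}
  ],
  "identifier": "DOI: 10.25919/5b3c1dc633cd3"
}
"""

record = json.loads(payload)
doi = record["identifier"].split("DOI: ", 1)[1]   # bare DOI string
authors = "; ".join(a["name"] for a in record["author"])
citation = (f"{authors} ({record['datePublished']}): {record['name']}. "
            f"{record['publisher']}. https://doi.org/{doi}")
print(citation)
```

A crawler or aggregator doing this at scale is exactly why embedding the tags matters: the citation can be reassembled without a human ever opening the landing page.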
The way to do this in CSIRO is to publish your software in the CSIRO Data Access Portal (DAP). This will create a snapshot of the version that you used and give you an attribution statement, including a DOI, which can be used as your citation. If you want to update your software later, you can simply update the existing record with the new release and the DAP will give you a new DOI and keep both versions preserved. See how other researchers have already created software records. There is a Software Release Process that needs to be completed. This will help you to select the right licence for your software. Putting a record on the DAP will generate an approval process so you know your CSIRO compliance issues are taken care of. See Using the DAP for software or contact researchdatasupport@csiro.au for more information. Thank you Thanks to: Dom Hogan for the statistics and Research Data Support team for feedback The Centre of Excellence for Climate Extremes (CLEX) is a major initiative funded by the Australian Research Council. The Centre is an international research consortium of five Australian universities and a network of outstanding national and international partner organizations. 
**What we're trying to achieve**
- Encourage our community to share their code
- Provide a source of relevant and reliable code for our community
- Supply a place to publish software in case our researchers want or have to

**Starting point**
- We are not an institution: we rely (mostly) on others for services
- We need to act quickly: less than 6 years left
- We have a "data source" with RDA
- We work at NCI and so publish data with their services
- We manage a GitHub organization: https://github.com/coecms
- We manage a DMP web tool based on the UK DCC roadmap/dmponline

**Which software**

From GitHub:
- Code produced by our team: manages data/models and analysis
- Code produced by students and researchers: for analysis, often used by the whole research group, occasionally by a wider community

Lost somewhere:
- Model-related combinations of code and data, such as configurations, alterations of a model scheme, tutorials.

**Proposed workflow**
- Communities created and curated by Zenodo users: climate extremes
- My communities - CLEX: Australian Centre of Excellence for Climate Extremes
- Zenodo to publish and assign DOIs
- Publishing also on RDA using the existing data source
- CLEX Roadmap data plan tool to create metadata and/or keep track of records.

**Proposed workflow**

Version control: most code is already on GitHub or Bitbucket. Model configurations? Probably provide some form of template to help collect them.

Add metadata: collect information from the repository and save it as codemeta.json and zenodo.json files:
- harvest metadata from the repository (Python)
- template on CLEX Roadmap
- upload the JSON files to the repository

Publish to Zenodo & RDA: admins review records and publish them to the Zenodo CLEX community (Python code using the Zenodo API), then export the record to RDA.

If metadata is harvested directly from the repository, then harvest the record from Zenodo to the CLEX Roadmap.

**Describing software for Virtual Labs**

Geoffrey Squire, www.data61.csiro.au

Describing software for use
- Publishing software is easy - e.g.
via GitHub, PyPI, download - Findable - Accessible - But users want to Just Use It! The goal is to make published software more useable by enabling automation. Information Model Applications run **Solutions** that use **Toolboxes** to solve Problems - Machine-readable descriptions - Metadata for searching, understanding, citation and provenance - Sufficient to provision and run software automatically Toolbox Describes a software environment that can run published software • Links to the published software • How to instantiate the environment • Dependencies - python - puppet - toolbox • Implementation - cloud image - HPC - puppet module - execute instructions Solution *Describes a workflow that uses a Toolbox to solve a specific Problem* - Link to published artifact (e.g. python script) - Link to the Problem it solves - Dependencies - Toolbox (usually) - How to implement the Solution - Specification of inputs and outputs - Name - Description - Type - Constraints Making it all useful - A Solution Centre is a catalogue of Toolboxes and Solutions - Developers (or others!) can publish descriptions of their software - URIs for reference, citation and provenance - Client apps can discover Solutions plus info to use them - WIP on CodeMeta, RIF-CS and DOIs https://sssc-vgl.geoanalytics.csiro.au Making use of it all • Virtual Laboratories (VLs) are clients - Data sets and services from registries - Solutions and toolboxes from the Solution Centre • Users find relevant and **useable** data and solutions • VL automates the job: - Generate UI to configure the Solution (parameters and inputs) - Assemble software environment - Wrangle input data - Execute and monitor the job - Store outputs and notify user when complete - Provide a provenance record https://vgl.auscope.org THANK YOU Data61/Unit Name Geoffrey Squire Software Engineer t +61 2 6216 7064 e geoffrey.squire@data61.csiro.au www.data61.csiro.au Q/A, Discussion Thank you ... Research software interest group:
Online Generic Editing of Heterogeneous Dictionary Entries in Papillon Project Mathieu MANGEOT Unit Terjemahan Melalui Komputer Universiti Sains Malaysia, 11800, Pulau Pinang Malaysia mathieu@mangeot.org David THEVENIN National Institute of Informatics Hitotsubashi 2-1-2-1913 Chiyoda-ku 101-8430 Tokyo Japan thevenin@nii.ac.jp Abstract The Papillon project is a collaborative project to establish a multilingual dictionary on the Web. This project started 4 years ago with French and Japanese. The partners are now also working on English, Chinese, Lao, Malay, Thai and Vietnamese. It aims to apply the LINUX cooperative construction paradigm to establish a broad-coverage multilingual dictionary. Users can contribute directly on the server by adding new data or correcting existing errors. Their contributions are stored in the user space until checked by a specialist before being fully integrated into the database. The resulting data is then publicly available and freely distributable. An essential condition for the success of the project is to find a handy solution for all the participants to be able to contribute online by editing dictionary entries. In this paper, we describe our solution for an online generic editor of dictionary entries based on the description of their structure. 1 Introduction The Papillon Project (Sérasset and Mangeot, 2001) is a cooperative project for a multilingual dictionary on the Web with the following languages: English, Chinese, French, Japanese, Lao, Malay, Thai and Vietnamese. The dictionary structure makes it very simple to add a new language at any time. It aims to apply the LINUX construction paradigm to establish a multilingual usage dictionary with broad-coverage. This project is based on the participation of voluntary contributors. In order to be really attractive, this project must imperatively find a convenient solution so that contributors can easily edit the dictionary entries. 
Since the Papillon dictionary is available on a web server and the contributors are located all around the world, the obvious solution is to implement an editor available online. Unfortunately, the existing solutions (HTML forms, Java applets) have important limitations. Thus, we propose an entirely generic solution in which the interfaces adapt very easily not only to the various entry structures needing to be edited but also to the users' needs and competences. Firstly, we outline the issue addressed in this paper and give an overview of the existing methods for dictionary entry editing. A presentation of the chosen method follows, detailing its integration into the Papillon server. Finally, we show an example of the online editing of a dictionary entry. 2 Addressed Issue and Requirements The issue addressed in this paper is how to edit dictionary entries with heterogeneous structures online. 2.1 Online Editing In order to build a multilingual dictionary that covers many languages, we need broad competence in those languages. It may be possible to find an expert with sufficient knowledge of 3 or 4 languages, but when that number reaches 10 languages (as now), it is almost impossible. Thus, we need contributors from all over the world. Furthermore, in order to avoid polluting the database, we plan a two-step integration of contributions. When a contributor finishes a new contribution, it is stored in his/her private user space until it is revised by a specialist and integrated into the database. All data therefore needs to be revised, although the revisers may not work in the same place as the initial contributors. Thus, the first requirement for the editor is to work online on the Web. 2.2 Heterogeneous Entry Structures The Papillon platform is built for generic purposes. Thus, it can manipulate not only the Papillon dictionary but also any kind of dictionary encoded in XML (Mangeot, 2002).
The lexical data is organized in 3 layers:
- Limbo contains dictionaries in their original format and structure;
- Purgatory contains dictionaries in their original format but encoded in XML;
- Paradise contains the target dictionary, in our case the Papillon dictionary.

The Purgatory data can be reused for building the Paradise dictionary. We would then like to be able to edit different dictionary structures not only in Paradise but also in Purgatory. Furthermore, since Papillon is a research project, entry structures may evolve during the life of the project; they are not fixed from the beginning. Hence, the second requirement is that the editor must deal with heterogeneous and evolving entry structures.

2.3 Extra Requirements

The previous requirements must be fulfilled, whilst the following ones are optional. The first optional requirement concerns adaptation to the user: contributors will have various competences and will use the editor for different purposes (a specialist in speech may add the pronunciation, a linguist may enter grammatical information, a translator would like to add interlingual links, a reviewer will check the existing contributions, etc.). The second optional requirement concerns adaptation to the user platform. The increasing number of smart mobile phones and PDAs makes the following scenarios realistic: adding an interlingual link with a mobile phone, adding small pieces of information with a PDA, and revising the whole entry on a workstation. It would then be very convenient if the editor could adapt itself both to the user and to the platform.

2.4 Final Aim

Guided by these requirements, our final aim is to generate, as automatically as possible, online interfaces for editing dictionary entries. The fact that entry structures are heterogeneous and may vary has to be taken into account, and these interfaces should be adapted as much as possible to the different kinds of users and platforms.
3 Overview of Existing Editing Methods

3.1 Local and Ad Hoc

The best way to implement the most comfortable editor for the users is to implement an ad hoc application like the one developed for the NADIA-DEC project: DECID (Sérasset, 1997). It was conceived to edit entries for the ECD (Mel'čuk et al., 1984-1992). The Papillon microstructure is based on a simplification of this structure. We were indeed very interested in such software. It is very convenient, for example, for editing complex lexical functions. But several drawbacks made it impossible to use in our project. First, the editor was developed ad hoc for a particular entry structure. If we want to change that structure, we must reimplement the changes in the editor. Second, the editor is platform-dependent (written and compiled for MacOS). The users have to work locally and cannot contribute online.

3.2 Distributed and Democratic

This solution, implemented for the construction of the French-UNL dictionary (Sérasset and Mangeot, 1998), is called "democratic" because it uses common and widespread applications (working on Windows and MacOS) such as Microsoft Word. The first step is to prepare pre-existing data on the server (implemented here in Macintosh Common Lisp). Then, the data is converted into RTF by using a different Word style for each piece of information (the style "headword" for the headword, the style "pos" for the part-of-speech, etc.) and exported. The clients can open the resulting RTF files locally with their Word and edit the entries. Finally, the Word RTF files are reintegrated into the database via a reverse conversion program. This solution led to the construction of 20,000 entries with 50,000 word senses. It was considered a very convenient method; nevertheless, two important drawbacks prevented us from reusing this solution. The first is that in order to convert easily from the database to RTF and vice-versa, the dictionary entry structure cannot be too complex.
Furthermore, when the user edits the entry with Word, it is very difficult to control the syntax of the entry, even if some Word macros can partially remedy this problem. The second is the communication between the users and the database. The Word files have to be sent to the users, for example via email. This inevitably introduces some delay. Furthermore, while a file is stored on a user's machine, no other user can edit its contents. It was also observed that sometimes users abandoned their job and forgot to send their files back to the server.

3.3 Online and HTML Forms

In order to work online, we should then use either HTML forms or a Java applet. The use of HTML forms is interesting at first glance, because the implementation is fast and all HTML browsers can use HTML forms. On the other hand, the simplicity of the forms leads to important limitations. The only existing interactors are buttons, textboxes, pop-up menus, and checkboxes. JavaScript offers the possibility to enrich the interactors, for example by verifying the content of a textbox. However, JavaScripts very often raise compatibility problems and only some browsers can interpret them correctly. Thus, we avoid them as much as possible. One of the major drawbacks of this solution is the need to modify the source code of the HTML form each time we want to modify the entry structure. We also need to write as many HTML forms as there are different entry structures.

3.4 Online and Java Applets

In order to remedy the limitations of HTML forms while continuing to work online, there is the possibility to use a Java applet that is executed on the client side. Theoretically, it is possible to develop an ad hoc editor for any complicated structure, as in the solution of Section 3.1. Nevertheless, the problems linked to the use of a Java applet are numerous: the client machine must have Java installed, and it must be the same Java version as the applet.
Furthermore, the execution takes place on the client machine, which can be problematic for less powerful machines. Moreover, Java applet usage on the Web has strongly declined nowadays, mainly due to the aforementioned compatibility problems.

3.5 Conclusion

As a result, none of these existing solutions can fully fulfil our requirements: online editing and heterogeneous entry structures. We might then use other, more generic approaches, like the ones used in interface design, in order to build our editor. In the remainder of this paper, we detail how we used an interface generation module in the Papillon server in order to generate editing interfaces semi-automatically.

4 Using an Interface Generation Module

This Papillon module has to generate graphic user interfaces for consulting and editing dictionary entries. We base our approach on the work done on plasticity of user interfaces (Thevenin and Coutaz, 1999) and the tool ART-Studio (Calvary et al., 2001). They propose frameworks and mechanisms to generate graphic user interfaces semi-automatically for different targets. Below we present the design framework and models used.

4.1 Framework for the UI Generation

Our approach (Calvary et al., 2002) is based on four generation steps (Figure 1). The first is a manual design step producing the initial models. It includes the application description with the data, tasks and instances models, and the description of the context of use. The latter generally includes the platform where the interaction is done, the user who interacts and the environment where the user is. In our case we do not describe the environment, since it is too difficult and not really pertinent for Papillon. From there, we are able to generate the Abstract User Interface (AUI). This is a platform-independent UI. It represents the basic structure of the dialogue between a user and a computer.
In the third step, we generate the Concrete User Interface (CUI) based on the Abstract User Interface (AUI). It is an instantiation of the AUI for a given platform. Once the interactors (widgets) and the navigation in the UI have been chosen, it is a prototype of the executable UI. The last stage is the generation of the Final User Interface (FUI). This is the same as the Concrete User Interface (CUI), but it can be executed. We will now focus on some models that describe the application.

4.2 Application Models: Data & Task

The data model describes the concepts that the user manipulates in any context of use. When considering plasticity issues, the data model should cover all usage contexts envisioned for the interactive system. By doing so, designers obtain a global reusable reference model that can be specialized according to user needs or, more generally, to the context of use. A similar design rationale holds for task modeling. For the Papillon project, the data model corresponds to the XML Schema description of dictionaries and request manipulation. The task model is the set of all tasks that will be implemented, independently of the type of user. It includes modification of the lexical database and visualization of dictionaries. As shown in Figure 2, the model of concepts drives the choice of interactors and the structure of the interface.

4.3 Instance Model

It describes instances of the concepts manipulated by the user interface and the dependence graph between them. For example, there is the concept "Entry" and one of its instances "scientifique" (cf. Figure 3). This model is described at design time, before generation, and linked with the task model (a task uses a set of instances). Each instance is effectively created at run-time with data coming from the Papillon database.

4.4 Platform and Interactors Models

A platform is described by its interaction capacities (for example, screen size, mouse or pen, keyboard, speech recognition, etc.).
These capacities influence the choice of interactors, presentation layouts and the navigation in the user interface. Associated with the platform are the interactors (widgets) proposed by the graphic toolbox of the targeted language (for example Swing or AWT for Java). In this project, interactors come from HTML forms (textBox, comboBox, popup menu, button, checkBox, radioButton) and HTML tags. We also had to build more complex interactors by combining HTML forms and HTML tags.

4.5 User Model

Previous research has shown the difficulty of describing the cognitive aspects of user behavior. Therefore, we simplify by defining different user classes (tourist, student, business man, etc.). Each class consists of a set of design preferences. Depending on the target class, the generator uses appropriate design rules. The model is not yet implemented; it is implicitly used in the data & task models. We defined different views of the data according to the target:
- all data is rendered in the workstation editing interface for lexicographers,
- only the headword and grammatical class are rendered, and examples are browsable, in the mobile phone interface for a "normal" dictionary user.

4.6 Concrete User Interface Model

This model, based on an independent user interface language, describes the graphic user interface as the final device will render it. It is target-dependent.

4.7 Final User Interface

From the CUI model, the generator produces a final interface that will be executed by the targeted device, and links it with the Papillon database. In our case we produce:
- HTML code for the workstation (Figure 4: Generated GUI),
- tiny XHTML code for AU mobile phones,
- and CGI links for the communication with the database.

Figure 4 shows a simple example of a final generated UI.

5 Integrating the Module in the Papillon Server

5.1 Implementation

The Papillon server is based on Enhydra, a web server of dynamic Java objects.
The data is stored as XML objects in an SQL database: PostgreSQL. The ARTStudio tool is entirely written in Java. For its integration into the Papillon/Enhydra server, we created a Java archive so that the two code bases stay independent. The Papillon/Enhydra server can store Java objects during a user session. When the user connects to the Papillon server with a browser, a session is created and the user is identified thanks to a cookie. When the user opens the dictionary entry editor, the Java objects needed by the editor are kept until the end of the session.

5.2 A Working Session

When the editor is launched, the models corresponding to the entry structure are loaded. Then, if an entry is given as a parameter (editing an existing entry), the entry template is instantiated with the data contained in that entry. If no entry is given, the template is instantiated with an empty entry. Finally, the instantiated models and entry templates are stored in the session data and the result is displayed, embedded in an HTML form, on a Web page (Figure 4). Then, after a user modification (e.g. adding an item to the examples list), the HTML form sends the data to the server via a CGI mechanism. The server updates the models and template stored in the session data and sends back the modified result in the HTML page. At the end of the session, the modified entry is extracted from the session data and then stored as a contribution in the database.

6 An Editing Example

6.1 A Dictionary Entry

Figure 5 shows an abstract view of a simple dictionary entry. It is the entry "scientifique" (scientific) of a French monolingual dictionary. The entry has been simplified on purpose. The entries are stored as XML text in the database.

6.2 Entry Structure

The generation of the graphic interface is mostly based on the dictionary microstructure. In the Papillon project, we describe microstructures with XML schemata.
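Such a microstructure constraint can be sketched as an XML Schema fragment. The fragment below is purely illustrative (the element and type names are hypothetical, not taken from the actual Papillon schemata); only the closed-value-list technique itself is what the text describes:

```xml
<!-- Hypothetical sketch: a part-of-speech element whose textual
     content is restricted to a closed value list. -->
<xsd:element name="pos">
  <xsd:simpleType>
    <xsd:restriction base="xsd:string">
      <xsd:enumeration value="nom"/>
      <xsd:enumeration value="verb"/>
      <xsd:enumeration value="adj"/>
    </xsd:restriction>
  </xsd:simpleType>
</xsd:element>
```

A DTD can only declare `pos` as `#PCDATA`; the enumeration facet is what makes the schema-based description strictly more precise.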
We chose XML schemata instead of DTDs because they allow a more precise description of the data structure and of the handled types. For example, it is possible to describe the textual content of an XML element as a closed value list. In this example, the French part-of-speech type is a closed list of "nom", "verb", and "adj". Figure 6 is an abstract view of the structure corresponding to the previous French monolingual dictionary entry.

6.3 Entry Displayed in the Editor

The dictionary entry of Figure 5 is displayed in the HTML editor as in Figure 4. In the following figure (Figure 7), an example has been added to the list by pushing the + button.

6.4 A More Complex Entry

In the following figure (Figure 8), we show the entry 食べる (taberu, to eat) of the Papillon Japanese monolingual volume. The entry structure comes from the DiCo structure (Polguère, 2000), a light simplification of the ECD by Mel'čuk et al. Two interesting points may be highlighted. Note that not only the content of the entry is in Japanese, but also the text labels of the information. For example, the first one, 見出し語 (midashigo), means headword. The interface generator is multitarget: it generates the whole HTML content. It is thus possible to redefine the labels for each language. The second point is the complexity of the entry structure. There is a list of lexical functions. Each lexical function consists of a name and a list of valgroups (groups of values), and in turn, each valgroup consists of a list of values. Finally, each value is a textbox. The lists are nested within one another and it is possible to use the + and - list operators at any level.

7 Evaluation

7.1 Preamble

This paper focuses on one particular functionality of the Papillon platform: the generic editor. Its purpose is not to present the building of the Papillon dictionary or the progress of the Papillon project as a whole. The evaluation will therefore focus on the editor.
7.2 Editing of the GDEF Dictionary

The GDEF project (Big Estonian-French Dictionary) is managed by Antoine Chalvin from INALCO, Paris. The dictionary microstructure is radically different from that of the Papillon dictionary, as can be seen by comparing Figure 9 to Figure 8. Note the 6 levels of recursion embedded in the entry structure. It took about one week to write the interface description files for the new dictionary structure in order to properly generate a complete interface for the GDEF dictionary.

7.3 Editing of the WaDokuJiTen

The WaDokuJiTen project is managed by Ulrich Apel, now an invited researcher at NII, Tokyo. The dictionary is originally stored in a FileMaker database. It has more than 200,000 entries. It took four days to export and integrate the dictionary into the Papillon platform. The integration was done in 4 steps: export the dictionary from FileMaker into an XML file, tag the implicit structure with a Perl script, write the metadata files and upload the dictionary to the Papillon server. The dictionary microstructure is simpler than the previous one (see Figure 10). It took only two days to write the files needed for the generation of the editor interface.

8 Conclusion

The implementation of ARTStudio and of the Papillon platform started separately four years ago. The development of the HTML generation module in ARTStudio and its integration into the Papillon platform took about a year from the first specifications to the installation of a complete and functional version on the Papillon server. The collaboration between a specialist of computational lexicography and a specialist of the adaptability of interfaces has produced very original and interesting work. Furthermore, the evaluation and the feedback received from the users are very positive. We now want to pursue this work further along several paths.
First of all, only a specialist can use the existing interface for Papillon entries, since it is too complex for a beginner. We plan to generate different interface types adapted to the varied user needs and competences. Thanks to the modularity of the editor, we only need to describe the task and instance models corresponding to the desired interface. For the moment, the interface generation is not fully automatic; some of the model descriptions used by the editor have to be written "by hand". This is why we are now working on automating the whole generation process and on implementing graphical editors allowing users to post-edit or modify a generated interface description.

References
An Efficient Parallel Determinisation Algorithm for Finite-state Automata

Thomas Hanneforth¹ and Bruce W. Watson²
¹ Universität Potsdam, Germany
² Stellenbosch University, South Africa

Abstract. Determinisation of non-deterministic finite automata (NFA) is an important operation not only for optimisation purposes, but also as the prerequisite for the complementation operation, which in turn is necessary for creating robust pattern matchers, for example in string replacement and robust parsing. In this paper, we present an efficient parallel determinisation algorithm based on a message-passing graph approach. In a number of experiments on a multicore machine we show that the parallel algorithm behaves very well for acyclic and cyclic NFAs of different sizes, especially in the worst case, where determinisation leads to an exponential blow-up of states.

Keywords: finite-state automata, determinisation, parallel algorithms, message passing, flow graphs, Kahn process networks, replacement rules

1 Introduction

Given a nondeterministic finite automaton (an NFA), determinisation is the construction of an equivalent deterministic finite automaton (DFA), where 'equivalence' means that the NFA and the DFA accept the same language. Many real-life applications involve the relatively straightforward construction and manipulation of NFAs, for example when compiling regular expressions, regular grammars, or other descriptive formalisms (such as replacement rules in computational linguistics) to finite automata. While NFAs are often very compact and easily manipulated, several situations motivate the subsequent construction of a DFA:
– The standard approach for deciding equivalence of two automata [4] is to minimize their respective equivalent DFAs and compare those (thanks to the uniqueness-modulo-isomorphism of minimal DFAs per language).
– Effective complementation of regular languages requires the construction of a DFA (cf. [4]).
– Complementation is also the key to robust natural language processing applications based on finite-state automata, e.g. shallow parsing systems etc. Many of these systems are built upon regular conditional [6] and unconditional replacement rules [7] which heavily rely on complementation to ensure robust application.
– The end-goal of constructing automata is often to apply them to a string, e.g. for pattern matching, network security applications, and computational linguistics. Determinism of the DFA means that only a single 'current state' needs to be tracked while processing input. By contrast, in the worst case, all of an NFA's states may become active while processing input – an enormous computational overhead as each symbol is processed, and usually impractical. In all of those applications, a DFA is essential [4].

The classical 'subset construction' algorithm¹ follows directly from Rabin & Scott's proof of NFA/DFA equivalence, which also shows that an equivalent DFA can be exponentially larger than the NFA in the worst case [1]. Most real-life implementations combine reachability with the subset construction, which can subsequently be tuned quite effectively. In addition to tuning for memory and speed performance, various toolkits also implement incremental determinisation in which the DFA is constructed on-the-fly while processing an input string. Recent work by van Glabbeek & Ploeger [3] presents five determinisation algorithms, classifying them in a lattice (based on the resulting DFA size), and giving benchmarking results. Despite these algorithmic advances, there has been little work on parallel determinisation. Clock-speeds of modern processors and memory have plateaued and Moore's Law advances in silicon chip production are now devoted to more processor cores, enabling cheap multi-threading, with the caveat that parallel algorithms are much more difficult to get correct. This paper presents one of the first such parallel algorithms.
Before we turn to the algorithm, we define the relevant technical notions in the next section. Then, Section 3 restates the standard reachability-based serial determinisation algorithm, before developing an efficient parallel one. In Section 4, we give a short C++ code fragment which implements the parallel algorithm. Finally, in Section 5 we report on several experiments we conducted to compare serial and parallel determinisation.

### 2 Preliminaries

An alphabet \(\Sigma\) is a finite set of symbols. A string \(x = a_1 \cdot a_2 \cdots a_n\) over \(\Sigma\) is a finite concatenation of symbols \(a_i\) taken from \(\Sigma\) (the concatenation operator \(\cdot\) is normally omitted). The length of a string \(x = a_1 \cdots a_n\) – symbolically \(|x|\) – is \(n\). The empty string is denoted by \(\varepsilon\) and has length zero. Let \(\Sigma^*\) denote the set of all finite-length strings (including \(\varepsilon\)) over \(\Sigma\).

A non-deterministic finite-state automaton (NFA) \(A\) is a 5-tuple \(\langle Q, \Sigma, q_0, \delta_{nd}, F \rangle\) with \(Q\) being a finite set of states; \(\Sigma\), an alphabet; \(q_0 \in Q\), the start state; \(\delta_{nd} : Q \times \Sigma \mapsto 2^Q\), the transition function; and \(F \subseteq Q\), the set of final states. Define \(\delta_{nd}^* : Q \times \Sigma^* \mapsto 2^Q\) as the reflexive and transitive closure of \(\delta_{nd}\):

\[
\begin{align*}
- \; & \forall q \in Q : \delta_{nd}^*(q, \varepsilon) = \{q\} \text{ and} \\
- \; & \forall q \in Q, a \in \Sigma, w \in \Sigma^* : \delta_{nd}^*(q, aw) = \bigcup_{p \in \delta_{nd}(q,a)} \delta_{nd}^*(p, w).
\end{align*}
\]

\(\delta_{nd}\) may be a partial function. In case \(\delta_{nd}(q, a)\) is undefined for some state \(q \in Q\) and \(a \in \Sigma\), we take \(\delta_{nd}(q, a)\) to be equal to \(\emptyset\).
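The closure definition above translates directly into code. The following is a minimal sketch (not the paper's C++ implementation), assuming the transition function is represented as a dict from (state, symbol) pairs to state sets, with \(\delta^*_{nd}\) generalised to a set of source states:

```python
def delta_star(delta, states, word):
    """Extended transition function: the set of NFA states reachable
    from any state in `states` after reading `word`.

    `delta` maps (state, symbol) pairs to sets of states; missing
    pairs are treated as the empty set, mirroring the convention
    that an undefined delta_nd(q, a) equals the empty set."""
    current = set(states)
    for a in word:
        # One unfolding of delta*(q, aw) = union over p in delta(q, a)
        # of delta*(p, w).
        current = set().union(*(delta.get((q, a), set()) for q in current))
    return current
```

For the NFA of Figure 1 restricted to one suffix symbol (`(0,'a')→{0,1}`, `(0,'b')→{0}`, `(1,'a')→{2}`, `(1,'b')→{2}`), `delta_star(delta, {0}, "aa")` yields `{0, 1, 2}`.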
The language of an NFA \(A = \langle Q, \Sigma, q_0, \delta_{nd}, F \rangle\), symbolically \(L(A)\), is defined as \(L(A) = \{w \in \Sigma^* \mid \delta_{nd}^*(q_0, w) \cap F \neq \emptyset\}\). A deterministic finite-state automaton (DFA) \(A\) is defined as a 5-tuple \(\langle Q, \Sigma, q_0, \delta_d, F \rangle\) where \(Q, \Sigma, q_0,\) and \(F\) are the same as in the NFA case and \(\delta_d\) is a (partial) function mapping \(Q \times \Sigma\) to \(Q\). The notions of \(\delta_d^*\) and \(L(A)\) are defined analogously to the ones for NFAs. A state \(q\) is reachable if there exists a word \(w \in \Sigma^*\) such that \(\delta_d^*(q_0, w) = q\). For every NFA, an equivalent DFA (with respect to the recognized language) can be constructed. The key idea is the subset construction:

¹ Sometimes known as the 'powerset construction'; see the next section.

Definition 1 (Subset construction). Let \(A = \langle Q, \Sigma, q_0, \delta_{nd}, F \rangle\) be an NFA. Define \(A'\), the equivalent DFA with \(L(A') = L(A)\), as \(A' = \langle 2^Q, \Sigma, \{q_0\}, \delta_d, F' \rangle\) with:
- \(F' = \{S \subseteq Q \mid S \cap F \neq \emptyset\}\)
- \(\delta_d(S, a) = \bigcup_{q \in S} \delta_{nd}(q, a), \forall a \in \Sigma, \forall S \subseteq Q\)

The next section describes serial and parallel determinisation algorithms based on the subset construction.

3 Determinisation algorithms

This section recapitulates the standard serial determinisation algorithm and introduces our parallel version of it.

3.1 Serial determinisation

A naïve NFA determinisation algorithm implementing Definition 1 directly would lead to worst-case behaviour in every case. In practice, it turns out that most of the states in the powerset of \(Q\) are not reachable from the start state of the DFA. Thus, their creation can be completely avoided by incorporating a reachability constraint into the algorithm. This leads directly to the queue-based version shown in Algorithm 1.
Algorithm 1: Serial NFA determinisation algorithm
Input: NFA \(A = \langle Q, \Sigma, q_0, \delta_{nd}, F \rangle\)
Output: DFA \(A' = \langle Q', \Sigma, q_{0,d}, \delta_d, F' \rangle\)

1.  \(R(\{q_0\}) \leftarrow c \leftarrow q_{0,d} \leftarrow 0\)
2.  \(L \leftarrow \emptyset\)
3.  \(Q' \leftarrow F' \leftarrow \emptyset\)
4.  Enqueue(\(L, \langle \{q_0\}, q_{0,d} \rangle\))
5.  while \(L \neq \emptyset\) do
6.      \(\langle S, q \rangle \leftarrow\) Dequeue(\(L\))
7.      \(Q' \leftarrow Q' \cup \{q\}\)
8.      if \(S \cap F \neq \emptyset\) then
9.          \(F' \leftarrow F' \cup \{q\}\)
10.     \(C \leftarrow \{\langle a, \bigcup_{p \in S} \delta_{nd}(p, a)\rangle \in \Sigma \times 2^Q \mid \exists r \in S: \delta_{nd}(r, a) \neq \emptyset\}\)
11.     foreach \(\langle a, S' \rangle \in C\) do
12.         \(p \leftarrow R(S')\)
13.         if \(p = \top\) then
14.             \(c \leftarrow c + 1\)
15.             \(R(S') \leftarrow p \leftarrow c\)
16.             Enqueue(\(L, \langle S', p \rangle\))
17.         \(\delta_d(q, a) \leftarrow p\)

Algorithm 1 uses several auxiliary data structures. First of all, \(R : 2^Q \mapsto \mathbb{N} \cup \{\top\}\) is a state register mapping subsets of \(Q\) to natural numbers. If some set \(S\) is not in the register, \(R(S)\) returns \(\top\). Initially, the set containing \(q_0\) is mapped to zero. Furthermore, the algorithm maintains a queue \(L\) holding pairs \(\langle S, q \rangle \in 2^Q \times \mathbb{N}\) and a global state counter \(c\), initially set to \(0\). In line 4, the initial pair \(\langle \{q_0\}, 0 \rangle\) is added to the queue, which is subsequently processed in the while-loop between lines 5 and 17. In line 6, a pair \(\langle S, q \rangle\) is removed from \(L\). If \(S\) contains a final state, \(q\) is added to the final states of the DFA. In line 10, a set \(C\) of candidate states is constructed. For this purpose, a set \(\Sigma' \subseteq \Sigma\) is created such that \(a\) is in \(\Sigma'\) if \(\delta_{nd}(r,a)\) is defined (that is, \(\delta_{nd}(r,a) \neq \emptyset\)) for some \(r \in S\). Then, for each \(a \in \Sigma'\), a new state set \(S'\) is assembled holding all the destination states \(\delta_{nd}(p,a)\) for all \(p \in S\). In the following, we will refer to this step as the symbol indexing step.
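Algorithm 1 can be transcribed almost line for line. The sketch below (an illustration, not the authors' implementation) assumes the NFA transition function is a dict from (state, symbol) pairs to state sets; the register R is a dict whose failed lookups play the role of ⊤:

```python
from collections import deque

def determinise(nfa_delta, q0, finals):
    """Queue-based subset construction, creating reachable DFA states only.

    nfa_delta: dict mapping (state, symbol) -> set of NFA states.
    Returns (dfa_delta, dfa_finals), with DFA states numbered from 0."""
    register = {frozenset({q0}): 0}        # R: state set -> DFA state number
    counter = 0                            # c: global state counter
    queue = deque([(frozenset({q0}), 0)])  # L: pairs <S, q>
    dfa_delta, dfa_finals = {}, set()
    while queue:                           # lines 5-17
        S, q = queue.popleft()
        if S & finals:                     # lines 8-9
            dfa_finals.add(q)
        # Symbol indexing step (line 10): candidate successor set per symbol.
        candidates = {}
        for (r, a), targets in nfa_delta.items():
            if r in S:
                candidates.setdefault(a, set()).update(targets)
        for a, S2 in candidates.items():   # lines 11-17
            S2 = frozenset(S2)
            p = register.get(S2)           # line 12; None plays the role of T
            if p is None:                  # lines 13-16
                counter += 1
                register[S2] = p = counter
                queue.append((S2, p))
            dfa_delta[(q, a)] = p          # line 17
    return dfa_delta, dfa_finals
```

On the NFA of Figure 1 with k = 1 (regular expression Σ*a(a+b)), this produces the expected 4-state complete DFA.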
The for-loop in lines 11–17 processes all pairs \(\langle a, S' \rangle\). Line 12 looks up state set \(S'\) in the register \(R\). If \(S'\) is not found in \(R\), it is added to \(R\) by assigning it a new state number \(p\) obtained by incrementing the global state counter \(c\) (lines 14–15). Furthermore, the pair \(\langle S', p \rangle\) is added to the queue \(L\) (line 16). In both cases, a new transition from \(q\) to \(p\) labelled \(a\) is added to \(\delta_d\) (line 17). By maintaining a queue \(L\), the algorithm ensures that each state \(q\) added to \(Q'\) in line 7 is reachable from the start state \(q_{0,d}\). Nevertheless, in the worst case, all subsets of \(Q\) are added to the queue, resulting in a running time in \(O(2^{|Q|} |\Sigma|)\). Given an alphabet \(\Sigma = \{a, b\}\), the worst case is exhibited by NFAs resulting from regular expressions \(r(k)\) of the form \(\Sigma^*a(a + b)^k\), which lead to DFAs with \(2^{k+1}\) states. Figure 1 shows an NFA constructed from \(r(2)\), while Figure 2 shows the equivalent DFA. Note that DFAs constructed from regular expressions \(r(k)\) are also complete, that is, \(\delta_d\) is a total function.

Figure 1. NFA created from regular expression \(\Sigma^*a(a + b)^2\)

Figure 2. Equivalent DFA to the NFA of Figure 1

This worst case of exponential blow-up may not be as uncommon in practice as one might expect. Consider a pattern matching problem where some finite set \(P\) of patterns is to be efficiently found in some given input text. In automata-theoretic terms, this amounts to constructing an NFA for \(\Sigma^* \cdot P\), the infinite regular language consisting of all strings having some \(p \in P\) as a suffix. If \(P\) has the form \(a(a + b)^k\) or something similar, then determinisation is exponential.
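The \(2^{k+1}\) blow-up for \(r(k) = \Sigma^*a(a+b)^k\) is easy to check empirically. The following self-contained sketch builds the NFA of Figure 1 for arbitrary k and counts the reachable subsets of the subset construction (it deliberately re-implements the reachability search inline rather than reusing Algorithm 1):

```python
from collections import deque

def blowup_nfa(k):
    """NFA for Sigma* a (a+b)^k over {a, b}: state 0 loops on both
    symbols, reading 'a' guesses the start of the suffix, then k more
    symbols of either kind lead to the (single final) state k+1."""
    delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
    for i in range(1, k + 1):
        delta[(i, 'a')] = {i + 1}
        delta[(i, 'b')] = {i + 1}
    return delta

def count_dfa_states(delta, q0=0):
    """Number of subsets reachable from {q0} in the subset construction."""
    start = frozenset({q0})
    seen, queue = {start}, deque([start])
    while queue:
        S = queue.popleft()
        for a in ('a', 'b'):
            S2 = frozenset().union(*(delta.get((q, a), set()) for q in S))
            if S2 and S2 not in seen:
                seen.add(S2)
                queue.append(S2)
    return len(seen)

# The reachable DFA has exactly 2^(k+1) states, as claimed above.
for k in range(1, 6):
    assert count_dfa_states(blowup_nfa(k)) == 2 ** (k + 1)
```

Intuitively, each reachable subset always contains state 0 (it loops on both symbols) plus an arbitrary combination of the k+1 suffix states, which is why all \(2^{k+1}\) combinations occur.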
3.2 Parallel determinisation

When looking at Algorithm 1 for parts which could be run in parallel and which cannot, the following observations can be made:

– The while-loop between lines 5 and 17 is a good candidate for parallel processing, since several pairs \( \langle S, q \rangle \) could be removed from the queue (line 6) and further processed in parallel.
– This is in particular the case for the symbol indexing step in line 10, since the creation of follower candidate states is completely independent for all state sets \( S \).
– The for-loop (lines 11–17) could in principle be parallelised, but the main actions in its body – querying/adding to the state register and adding new transitions to the DFA – must certainly be serialised.
– The same is true for adding final states to the DFA (lines 8–9). Assuming a suitable data structure for the DFA, adding final states (line 9) and adding transitions (line 17) can certainly be done in parallel.
– Incrementing the state counter (line 14) must again be serialised.

A straightforward way to link the parallel and serial components of the algorithm is provided by the concepts of Kahn Process Networks (cf. [5]) and Labelled Transition Systems ([2]).

Definition 2 (Labelled Transition System (LTS), cf. [2]). Let a channel \( c \) be an unbounded FIFO-queue (first-in-first-out queue) with elements taken from some alphabet \( \Sigma_c \). Let \( \text{Chan} \) denote the set of all channels.
An LTS is a tuple \( \langle S, s_0, I, O, \text{Act}, \rightarrow \rangle \) consisting of a set \( S \) of states, an initial state \( s_0 \in S \), a set \( I \subseteq \text{Chan} \) of input channels, a set \( O \subseteq \text{Chan} \) (distinct from \( I \)) of output channels, a set \( \text{Act} \) of actions consisting of input actions \( \{ c?a \mid c \in I, a \in \Sigma_c \} \subseteq \text{Act} \), output actions \( \{ c!a \mid c \in O, a \in \Sigma_c \} \subseteq \text{Act} \) and a labelled transition relation \( \rightarrow \subseteq S \times \text{Act} \times S \).

Definition 3 (Kahn Process Network (KPN), cf. [2]). A Kahn process network is a tuple \( \langle P, C, I, O, \text{Act}, \{ \text{LTS}_p \mid p \in P \} \rangle \) with the components as follows:

– \( P \) is a finite set of processes.
– \( C, I \) and \( O \) (\( \subseteq \text{Chan} \)) are finite and pairwise disjoint sets of internal channels, input channels and output channels, respectively.
– \( \text{Act} = \{ c?a, c!a \mid c \in C \cup I \cup O, a \in \Sigma_c \} \)
– Every process \( p \in P \) is defined by a sequential² labelled transition system \( \text{LTS}_p = \langle S_p, s_{p_0}, I_p, O_p, \text{Act}_p, \rightarrow_p \rangle \), with \( I_p \subseteq I \cup C \) and \( O_p \subseteq O \cup C \).
– For every channel \( c \in C \cup I \), there is exactly one process \( p \in P \) that reads from it \( (c \in I_p) \) and for every channel \( c \in C \cup O \), there is exactly one process \( p \in P \) that writes to it \( (c \in O_p) \).

Since KPNs are essentially graph structures, they admit an intuitive graphical representation. Figure 3 shows a KPN for the parallel version of the determinisation algorithm. The start state labelled \( s_0 \) in Figure 3 starts the network by passing the initial pair \( \langle \{ q_{0_{nd}} \}, q_{0_d} \rangle \) to the state labelled *process state set*. This corresponds to enqueuing the initial pair in line 4 of Algorithm 1.
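The behaviour of such a network can be approximated with ordinary threads and FIFO queues. The Python sketch below is our own illustration, not the paper's TBB code: it serialises access to the three shared resources (state register, state counter, emerging DFA) with a single lock, whereas the actual implementation uses finer-grained concurrent data structures.

```python
import threading
from queue import Queue

def determinise_parallel(Q, sigma, q0, delta_nd, F, n_workers=4):
    """KPN-inspired parallel subset construction (sketch)."""
    lock = threading.Lock()
    register = {frozenset([q0]): 0}       # shared state register R
    state = {'c': 0}                      # shared state counter c
    Qd, Fd, delta_d = set(), set(), {}    # shared emerging DFA
    work = Queue()
    work.put((frozenset([q0]), 0))        # initial pair, as in line 4

    def worker():
        while True:
            item = work.get()
            if item is None:              # sentinel: shut down
                work.task_done()
                return
            S, q = item
            with lock:                    # 'make final' node
                Qd.add(q)
                if S & F:
                    Fd.add(q)
            # symbol indexing step: independent per state set
            for a in sigma:
                S2 = frozenset(t for p in S for t in delta_nd.get((p, a), ()))
                if not S2:
                    continue
                with lock:                # register access must be serialised
                    p = register.get(S2)
                    if p is None:
                        state['c'] += 1
                        p = register[S2] = state['c']
                        work.put((S2, p))     # looping channel
                    delta_d[(q, a)] = p       # 'add delta' node
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    work.join()                           # all state sets processed
    for _ in threads:
        work.put(None)
    for t in threads:
        t.join()
    return Qd, delta_d, Fd

# toy NFA for Sigma* a (a+b)^2 over {a, b} (cf. Figure 1)
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0},
         (1, 'a'): {2}, (1, 'b'): {2},
         (2, 'a'): {3}, (2, 'b'): {3}}
Qd, delta_d, Fd = determinise_parallel({0, 1, 2, 3}, {'a', 'b'}, 0, delta, {3})
print(len(Qd))  # 8 DFA states, as in Figure 2
```

The looping `work.put` inside a worker mirrors the channel from *process state set* back to itself; `Queue.join` detects termination once no unprocessed state set remains.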
State *process state set* – which basically implements the body of the while-loop in Algorithm 1 – is connected with three other nodes:

1. with itself. This corresponds to line 15 of Algorithm 1: a state set \( S \) may lead to the creation of further state sets if these are not already present in the state register.
2. with state *add delta*, which reflects line 17 of the serial algorithm, and
3. with state *make final*, which corresponds to line 9.

All states except \( s_0 \) work in principle in parallel, but they share three common resources: the state register, the state counter and the emerging DFA. Access to these resources must be synchronised by using appropriate locking mechanisms.

### 4 Implementation

The algorithm of Section 3.2 is implemented on the basis of the flow graph construct in Intel's ThreadBuildingBlocks C++ library (TBB, [8]). TBB defines a number of different graph node classes like broadcast node, function node and multifunction node, which can be connected to each other by data flow edges. Unlike instances of function node, which are required to always compute a result, instances of multifunction node are connected to other flow graph nodes by a tuple of channels, to which output actions are sent. This is exactly what is required by state *process state set* in Figure 3, since an NFA state set currently processed may not lead to further new state sets.

The serial and parallel versions of the determinisation algorithm are based on the same data structures. State sets of NFAs were implemented as sorted vectors. To allow an efficient test for equality of state sets and to speed up look-up in the state register, each state set also stores a permanent hash value.

² Basically, a sequential LTS accepts at most one input/output operation at a given point in time.
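The sorted-vector-with-cached-hash representation can be mimicked in a few lines; the following is a Python sketch of the idea (class and member names are ours, not the paper's C++ class):

```python
class StateSet:
    """NFA state set stored as a sorted tuple with a hash value
    computed once at construction, so register look-ups avoid
    rehashing the whole set."""
    __slots__ = ('states', '_hash')

    def __init__(self, states):
        self.states = tuple(sorted(set(states)))   # canonical sorted form
        self._hash = hash(self.states)             # permanent hash value

    def __hash__(self):
        return self._hash                          # O(1) on every look-up

    def __eq__(self, other):
        # cheap hash comparison first, full comparison only on a match
        return (isinstance(other, StateSet)
                and self._hash == other._hash
                and self.states == other.states)

register = {StateSet([2, 1, 1]): 0}
print(StateSet([1, 2]) in register)  # True: order and duplicates are normalised
```

Because the representation is canonical (sorted, duplicate-free), two sets constructed from different orderings hash and compare as equal, which is exactly what the state register needs.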
The \( \delta \)-function of the class representing serial DFAs is based on an STL hash map, while the one of the concurrent DFA uses TBB's concurrent hash map, a map data structure where the keys can be individually locked. The state registers for the serial and parallel algorithms are implemented in a similar fashion.

Figure 4 shows some of the relevant C++ definitions of the parallel algorithm. ParallelDeterminizerBody, AddDeltaBody, and MakeFinalBody are classes which implement the actions executed by the three parallel nodes of Figure 3.

### 5 Experiments

For the experiments, we chose two different types of NFAs:

1. Acyclic NFAs compiled from word lists and
2. Cyclic NFAs exhibiting the worst case along the lines of Figure 1, with various numbers of states and alphabet sizes.

The acyclic NFAs derived from two different English word lists are maximally non-deterministic, that is, each word inserted into the NFA constitutes a separate chain from the start state to a distinct final state. Table 1 summarises the different test automata.

| NFA | $|\Sigma|$ | $|Q_{nd}|$ | $|\Delta_{nd}|$ | $|F_{nd}|$ | $|Q_d|$ | $|\Delta_d|$ | $|F_d|$ |
|--------------|----------|----------|----------------|----------|--------|---------------|--------|
| NFA$_{dict}$ | 56 | 681,718 | 681,718 | 49,999 | 271,194| 271,193 | 49,999 |
| NFA$_{doc}$ | 45 | 994,676 | 994,675 | 128,972 | 270,411| 270,410 | 128,972|
| NFA$_{(k),2}$, $k \in [10 \ldots 22]$ | 2 | $k+1$ | $2(k+1)+1$ | 1 | $2^{k+1}$| $2 \cdot 2^{k+1}$| $\frac{2^{k+1}}{2}$ |
| NFA$_{(k),10}$, $k \in [10 \ldots 19]$ | 10 | $k+1$ | $10(k+1)+1$ | 1 | $2^{k+1}$| $10 \cdot 2^{k+1}$| $\frac{2^{k+1}}{2}$ |
| NFA$_{(k),100}$, $k \in [10 \ldots 16]$ | 100 | $k+1$ | $100(k+1)+1$ | 1 | $2^{k+1}$| $100 \cdot 2^{k+1}$| $\frac{2^{k+1}}{2}$ |

Table 1. Input NFAs for the determinisation algorithms

All subsequent experiments were run on a Linux machine with 2 Intel-XEON 64bit-2.93 GHz 4-core-CPUs. Hyperthreading was turned on.
In hyperthreaded architectures, each physical core is supplemented by a virtual core which takes over control if the physical one is currently stalled, for example because it is waiting for CPU cache data. Virtual cores duplicate only certain sections of their physical counterpart, mainly those holding the current thread's state, but not the main computing resources.

Let us now turn to the experiments. In Figure 5, we compare the serial and the parallel version of the determinisation algorithm for NFA$_{(k),2}$ on a logarithmic time scale. Unsurprisingly, the processing time for both versions grows exponentially with the number $k$ of disjunctions\footnote{Also known as alternations.} in the NFA. Starting with $2^{11+1} \approx 4,000$ DFA states, the parallel algorithm outperforms the serial one; with $k = 13$, it is already more than twice as fast. For larger $k$, the processing time of the parallel version converges to approximately one-third of the serial one.

The advantage of the parallel algorithm becomes even more pronounced when the alphabet size is increased. Figure 6 compares the serial and parallel determinisation of NFA$_{(k)}$ with $|\Sigma| = 10$ and $|\Sigma| = 100$, respectively. For an alphabet size of 10, the parallel algorithm is, depending on $k$, approximately 2 to 3.5 times faster than the serial one. For $|\Sigma| = 100$, the ratio is between 3.7 to 1 for $k = 10$ and 4.7 to 1 for $k = 16$.

An explanation for the speedup for bigger alphabet sizes could be that the number of DFA states depends only on $k$, and thus the number of pairs $\langle S, q \rangle$ forwarded on the looping channel in Figure 3 is independent of the alphabet size. Furthermore, the state register shared between the parallel determinisation workers – even when queried $|\Sigma|$ times for each state set – does not seem to slow down processing very much.
To assess the contribution of the other shared resource – the DFA under construction – we ran a further experiment in which we turned off the channels to the states *add delta* and *make final* in Figure 3 and made a similar modification in the serial version. The results for NFA$_{r(k),100}$ are shown in Figure 7.

Figure 7 shows that, in the serial case, whether DFA construction is turned on or off makes almost no difference with respect to processing time. The situation is different for the parallel algorithm, where the DFA construction contributes more than 40% of the overall processing time. Since both the serial and concurrent DFA implementations rely on efficient hash maps, the difference must be explained by locking issues and the administrative overhead of implementing micro locks.

In our second-to-last experiment, we compare serial and parallel determinisation applied to acyclic NFAs, namely, the NFAs derived from the two word lists. Table 2 summarises the results.

| NFA | serial | parallel |
|----------|---------|----------|
| dict$_1$ | 0.465 s | 0.197 s |
| dict$_2$ | 0.550 s | 0.230 s |

Table 2. Serial and parallel determinisation of the dictionary NFAs

The parallel version exhibits a speed-up of a factor of approximately 2.4 compared to the serial algorithm. Even though the alphabet sizes are bigger, this is less than the speed-up in the cyclic case with an alphabet of size 2. But (serial) determinisation of acyclic NFAs is very efficient anyway, since it is linear in the size of the NFA, so parallelising the algorithm is normally not worth the effort.

Intel's *ThreadBuildingBlocks* framework allows controlling the degree of parallelism by specifying the number of flow graph node copies concurrently active. A value of *unlimited* means that the framework chooses an optimal amount of concurrency.
Figure 8 shows the dependency between the number of workers and the time consumed for two NFAs (*r*(17),2 and *dict₂*). The relative plateau in both graphs for 8 to 16 workers could perhaps be explained by Intel's *hyperthreading* feature. Since the non-virtual cores are not idle when the parallel algorithm is executed, the virtual ones cannot take over control and thus do not contribute at all. Also apparent from the graphs is that the parallel algorithm performs best if TBB's scheduler is allowed to control the amount of parallelism.

![Figure 8. Dependency between the processing time and the number of workers for NFAs *r*(17),2 and *dict₂*](image)

### 6 Conclusion and further work

In the preceding sections, we developed an efficient parallel determinisation algorithm based on Kahn process networks. Experiments showed that the algorithm performed particularly well in cases of highly cyclic result DFAs over realistically sized alphabets.

The worst-case pattern $\Sigma^*a(a + b)^k$ we chose for the cyclic test automata is not as artificial as it looks at first glance. For example, compiling replacement rules $\alpha \rightarrow \beta$ relies on a *does-not-contain* operator $\overline{\Sigma^* \cdot \alpha \cdot \Sigma^*}$ to achieve robust behaviour by identity-mapping all strings to themselves which do not contain an instance of $\alpha$. Since the standard complementation operation depends on a DFA for $\Sigma^* \cdot \alpha \cdot \Sigma^*$, choosing $a(a + b)^k$ for $\alpha$ creates the worst case.
In further work, we will try to improve the algorithm in the following ways:

– examine and profile the algorithm to reduce the number of locking situations,
– apply randomisation techniques to further reduce locking,
– make use of graph-theoretic notions to divide the determinisation problem into largely independent subproblems which can be solved without making much use of shared resources, and
– study a redundant-work approach where each parallel determinisation worker may process a limited number of state sets already processed by other workers, to increase the relative independence of the workers from the shared resources.

References
The following full text is a publisher's version. For additional information about this publication see http://hdl.handle.net/2066/64416

WANDAML: a markup language for digital document annotation

Katrin Franke², Isabelle Guyon¹, Lambert Schomaker³, and Louis Vuurpijl⁴

1. ClopiNet, 955 Creston Rd, Berkeley, USA, isabelle@clopinet.com (corresponding author.)
2. Fraunhofer Institute, Berlin, Germany.

Abstract

WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains of image filtering and feature extraction operations applied to the data. We present the design of this format and some annotation examples, in the more general perspective of digital document annotation. Annotations may be organized in a structure that reflects the document layout via a hierarchy of document regions. WANDAML lends itself to a variety of applications, including the annotation of all kinds of handwriting documents (on-line or off-line), images of printed text, medical images, and satellite images.

Keywords: Handwriting, forensic data, XML, annotations, data format, document analysis.

1 Introduction

We present the design of an XML-based markup language to annotate digital documents, called WANDAML. This markup language is designed for processing, analyzing and storing handwriting samples in application to forensic handwriting examination and writer identification. In the context of this application, particular specifications were met to ensure objectivity and reproducibility of the processing steps and examination results [6, 5]. Writer identification can never be as accurate as iris or DNA-based identification.
However, usually a lot of constraining pieces of information are known (age category, handedness, major script style), which may reduce the size of a reference set to such an extent that automatic writer identification on the basis of script shape within that reduced set becomes viable. To that end, a portable and extensible data format is needed for modelling the knowledge from the forensic application domain. A standard database technology can then be used to apply logical constraints to the search process.

Although our work is very application oriented, it is not particularly application specific. Our format lends itself to promoting research and development of handwriting analysis methods, and to establishing common grounds for international exchange of handwritten data samples and annotations. It supports on-line and off-line handwriting, as well as overlays of on-line over off-line data. Its simple and general structure makes it fit for annotating data of other modalities, including printed text, pictures, and sound.

To ensure long-term operation we built upon the eXtensible Markup Language (XML) [4], which follows world-wide standardized syntax rules recommended by the World Wide Web Consortium (W3C). XML allows users to define their own markup language respecting the basic XML syntax. As an ASCII format, XML is human readable and can be edited with regular text processors, although there is a large body of existing software that manipulates XML. In particular, XML and Java are very complementary: XML is a widely accepted means of creating portable data, and the Java programming language provides portable code. We wrote Java applications for WANDAML as part of the WANDA project [6, 5].

Before undertaking the task of defining a new XML language for forensic applications, we reviewed existing standards that are in use in the handwriting recognition and document analysis community. We distinguish between formats to encode data and formats to annotate data.
WANDAML is essentially a data annotation format. It is compatible with a variety of data encoding formats. For image data, typical examples of data encoding formats are JPEG, GIF, and TIFF. For vectorial data (graphics represented by their coordinate points, not by pixel values) we can cite the Scalable Vector Graphics (SVG) format.

Several handwritten document annotation formats predate WANDAML. Image annotation formats include IAM [9], Xmillum [11] and Trueviz [8]. Formats for virtual ink or "electronic ink", combining both data encoding and data annotation, include Unipen [7] and InkML [3]. We summarize the features of the existing formats in Table 1. We outline in bold the features that are required for forensic applications. No standard predating WANDAML meets all of our requirements.

WANDAML is the result of joint efforts of forensic handwriting experts and researchers in computer-based handwriting analysis. Well-established terms and procedures of forensic science inspired many aspects of the design [2, 10, 12]. WANDAML also benefitted from the experience of some of the team members who invented Unipen, a data format for on-line handwriting [7].

2 Notations and conventions

A particular XML markup language such as WANDAML can be defined in standard ways using the document type definition language (DTD) or XML schemas [4]. We provide DTDs to specify WANDAML [1]. Following the general XML nomenclature, in XML annotation we have:

```xml
<my_tag my_attribute="my_value">
<my_element/>
</my_tag>
```

Table 1: Existing format comparison.
| XML-based | InkXML | Unipen | SVG | IAM | XMillum | TrueViz | WANDA |
|-------------------|--------|--------|-----|-----|---------|---------|-------|
| raster images | √ | | √ | √ | √ | √ | √ |
| vector images | √ | √ | √ | √ | √ | √ | √ |
| virtual ink | √ | √ | | | √ | | √ |
| v-ink segments | √ | √ | | | | √ | |
| image regions | √ | | √ | √ | √ | √ | √ |
| ink/image overlay | √ | | | | | √ | √ |
| device annot. | √ | √ | | | √ | | √ |
| writer annot. | √ | √ | | | √ | | √ |
| script annot. | √ | | | | √ | | |
| material annot. | √ | | | | | √ | |
| content annot. | √ | | | | | √ | |
| filters/plugins | | √ | √ | √ | √ | | |
| interactivity | √ | √ | | | √ | | |
| layers | √ | | | | | √ | |
| styles | | √ | | | √ | | |
| external link | √ | | | | | √ | |

where `<my_tag/>` and `<my_element/>` are tags\(^1\) or "elements", and `my_attribute`
is an attribute of `my_tag`, having value `my_value`. In DTDs, "entities" are defined, which may be used as lists of admissible attribute values. We adopt a minimum set of conventions carried throughout WANDAML:

- Entities, attributes and elements contain only lowercase and underscore characters.
- Certain attributes have special meanings: `id` a unique identifier, `type` a type from a pre-defined list, `label` an optional user-defined string that can be used for search purposes.
- To simplify parsing, we have defined a number of "containers" (`<pages/>`, `<filters/>`, `<annotations/>`, `<inputs/>`, and `<outputs/>`), which contain elements of the same name (`<page/>`, `<filter/>`, `<annotation/>`, `<input/>`, and `<output/>`) and have the optional attribute `number_of`.

A defined markup language based on XML is extensible via the use of name spaces. A body of XML text may be enclosed between tags whose opening tag contains the attribute `xmlns` (XML name space). The value of the attribute `xmlns` is a unique name reserved to identify the definition of a particular XML subset. The URI (Uniform Resource Identifier) of a DTD file or a URL (Uniform Resource Locator) is frequently used for that purpose. The use of name spaces in WANDAML allows us to separate the definition of the basic language skeleton from XML subsets that are application specific.

\(^1\)We often use the shorthand `<my_tag/>` for empty tags `<my_tag></my_tag>` or tags whose content is not expanded.

![Figure 1. Envelopes from actual suspects in the recent anthrax criminal case provide an example of possible use of WANDA regions: a region of interest is delineated, isolating one envelope; a filter is applied to the region to clean the data; a hierarchy of regions (e.g. address block, lines, words, and characters) is defined and annotated.](image)
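The container convention lends itself to a simple automated consistency check. The sketch below uses Python's standard `xml.etree.ElementTree` module; the helper name and sample document are ours, only the tag and attribute names follow the conventions above:

```python
import xml.etree.ElementTree as ET

# container tags defined by the WANDAML conventions
CONTAINERS = {'pages', 'filters', 'annotations', 'inputs', 'outputs'}

def check_containers(root):
    """Return the tags of all containers whose optional number_of
    attribute does not match the number of child elements."""
    problems = []
    for elem in root.iter():
        if elem.tag in CONTAINERS and 'number_of' in elem.attrib:
            if int(elem.attrib['number_of']) != len(elem):
                problems.append(elem.tag)
    return problems

doc = ET.fromstring("""
<wandoc id="20032004" label="example">
  <pages number_of="1">
    <page id="20032004_copy5" label="frontpage"/>
  </pages>
</wandoc>
""")
print(check_containers(doc))  # []: the counts are consistent
```

Such a check is cheap to run after every editing step and catches containers whose `number_of` attribute was not kept in sync with their children.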
3 A simple scenario

To understand the WANDAML annotation mechanisms and concepts, it is useful to first understand what it was originally intended for by analyzing an example. The concepts introduced in this section will later be defined formally in Section ???.

In Figure 1, we show an example of forensic evidence. The tasks of a computer-assisted human expert analyzing this document may include:

1. outlining regions of interest,
2. enhancing image quality,
3. measuring handwriting characteristics (size of characters, slant, etc.),
4. translating handwriting into typed text,
5. retracing characters with electronic ink,
6. annotating defined regions with information relative to content (threat, envelope, form, etc.), writer (age, gender, etc.), script (handprinted, cursive, etc.), and material (paper, pen, etc.).

A document annotation file always starts with a root element `<wandoc/>`:

```xml
<wandoc id="20032004" label="Anthrax example case"
  xmlns="http://clopinet.com/isabelle/Projects/WANDA/wandoc/DTD-HOME.html" />
```

The id is a machine-generated unique document annotation identifier, while the label is provided by the user. The name space xmlns points to the wandoc language definition. In our example (Figure 1), the `<wandoc/>` element contains a single page:

```xml
<wandoc ...>
  <pages number_of="1">
    <page id="20032004_copy5" label="frontpage" next=""/>
  </pages>
</wandoc>
```

By convention, we replace tag attributes by "..." to shorten the description. The page contains one call to a filter, two annotations, and one region:

```xml
<page ...>
  <filters number_of="1"/>
  <annotations number_of="2"/>
  <regions number_of="1"/>
</page>
```

In the next paragraph, we expand the `<filters ...>` tag. In this example, a filter imports an image from a scanner using scan software called "IBIS" and returns a link to the resulting image. In the wandoc framework, a document consists of one or several pages, each of which may be represented by an image. Notice that there is no tag `<image/>`.
Images are imported through specially defined filters, e.g. the scan filter.

```xml
<filters number_of="1">
  <filter type="import" label="ibisScan">
    <inputs>
      <input type="stream" number="1" xmlns="../scan.dtd">
        <scan/>
      </input>
    </inputs>
    <module type="extern" exec="ibis.exe">
      <meta version="3.51" />
    </module>
    <outputs>
      <output type="file">
        <wanda_link href="copy5.tif" />
      </output>
    </outputs>
  </filter>
</filters>
```

Table 2: **Wanda region summary.**

A set of annotations is entered by an expert (e.g. with a GUI):

```xml
<annotations number_of="2">
  <annotation type="content" xmlns="../content.dtd">
    <whole_document type="envelope" intent="personal"/>
  </annotation>
  <annotation type="writer" xmlns="../writer.dtd">
    <writer id="2015">
      <properties handedness="left" skill="ok"/>
    </writer>
  </annotation>
</annotations>
```

The region content is the following:

```xml
<regions number_of="1">
  <region id="20032004_0001" label="Letter to Tom Brokaw" next="2">
    <points>
      <point x="0" y="0" />
      <point x="0" y="124" />
      <point x="76" y="124"/>
      <point x="76" y="0"/>
    </points>
    <annotations number_of="3"/>
    <filters number_of="1"/>
  </region>
</regions>
```

A region is defined by a unique id (presumably machine generated). Ids allow users and programs (filters) to refer to regions. Otherwise, by default, filters apply to their parent region, page or document. Additionally, regions may possess a user-defined label, the intent of which is to facilitate searching through regions. The attribute "next" is used to indicate a logical ordering of the regions which are at the same hierarchical level. Such an ordering is used, for instance, to indicate reading order. Regions are delineated by a polygon defined by a set of points. The origin is at the upper left corner. The unit, if not specified, is the pixel. The region of our example possesses three annotations and one filter. The filter corresponds to some measurements and returns features (not shown).
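Because WANDAML is plain XML, region geometry is easy to recover with standard tools. A hypothetical Python sketch (the helper name and the rectangle's closing point are ours, chosen for illustration) extracts each region's polygon as a list of coordinate pairs:

```python
import xml.etree.ElementTree as ET

REGION_XML = """
<regions number_of="1">
  <region id="20032004_0001" label="Letter to Tom Brokaw" next="2">
    <points>
      <point x="0" y="0"/>
      <point x="0" y="124"/>
      <point x="76" y="124"/>
      <point x="76" y="0"/>
    </points>
  </region>
</regions>
"""

def region_polygons(xml_text):
    """Map each region id to its list of (x, y) polygon points
    (pixel units, origin at the upper left corner)."""
    root = ET.fromstring(xml_text)
    return {
        region.get('id'): [
            (int(p.get('x')), int(p.get('y')))
            for p in region.find('points').findall('point')
        ]
        for region in root.findall('region')
    }

polys = region_polygons(REGION_XML)
print(polys['20032004_0001'])  # [(0, 0), (0, 124), (76, 124), (76, 0)]
```

A downstream filter can then use these polygons directly, e.g. to crop the referenced image to the region of interest.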
4 Basic concepts: regions, annotations, and filters

The example of the previous section introduced a number of concepts that are essential to WANDAML: regions, annotations, and filters, which we explain in more detail in this section. WANDAML follows a general skeleton:²

```
<wandoc>
  <filters/> [?]
  <annotations/> [?]
  <wanda_link/> [*]
  <meta/> [?]
  <pages> [?]
    <page> [+]
      <filters/> [?]
      <annotations/> [?]
      <wanda_link/> [*]
      <meta/> [?]
    </page>
  </pages>
</wandoc>
```

There is a thin line between a markup language and a programming language. We make use of this possibility of XML by introducing "filters", which allow users to record and eventually play back operations on the data. A filter can be understood as a computer program that processes the document and returns either a new transformed image or a set of features. We adopt the following skeleton for filters:

```
<filter> [+]
  <inputs> [+]
    <input/> [+]
  </inputs>
  <module/> [+]
  <outputs> [+]
    <output/> [+]
  </outputs>
</filter>
```

The tags `<input/>` and `<output/>` wrap around application specific tags defining inputs and outputs. Via the use of name spaces, new types of inputs and outputs can be defined to extend WANDAML, without changing the core of the language. We include in the documentation examples of application specific filters [1]. Filters are general enough to encode any kind of computer-assisted document processing, including the definition of regions and annotations. Still, we introduce specialized tags for the "region" and "annotation" concepts because such concepts are central in the applications envisioned. The use of the specialized tags `<region/>` and `<annotation/>` enhances legibility and facilitates computer parsing.

²We use the following convention to describe tag requirements: [+] at least one; [?] zero or one; [*] any number including zero.

Table 3: **Wanda annotation categories.** We summarize in this table the attributes collected in the various annotation categories.
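The filter skeleton also makes filter journaling straightforward to automate. The sketch below (function name, attribute values and file names are hypothetical; only the element names follow the inputs/module/outputs skeleton above) appends a `<filter>` record to a page and keeps the container's `number_of` attribute in sync:

```python
import xml.etree.ElementTree as ET

def journal_filter(parent, label, inputs, outputs, exec_name):
    """Append a <filter> element to parent's <filters> container,
    recording one processing step with its inputs and outputs."""
    filters = parent.find('filters')
    if filters is None:                      # create the container on demand
        filters = ET.SubElement(parent, 'filters')
    flt = ET.SubElement(filters, 'filter', {'type': 'import', 'label': label})
    ins = ET.SubElement(flt, 'inputs')
    for href in inputs:
        inp = ET.SubElement(ins, 'input', {'type': 'file'})
        ET.SubElement(inp, 'wanda_link', {'href': href})
    ET.SubElement(flt, 'module', {'type': 'extern', 'exec': exec_name})
    outs = ET.SubElement(flt, 'outputs')
    for href in outputs:
        out = ET.SubElement(outs, 'output', {'type': 'file'})
        ET.SubElement(out, 'wanda_link', {'href': href})
    filters.set('number_of', str(len(filters)))   # keep the count in sync
    return flt

page = ET.Element('page', {'id': 'p1'})
journal_filter(page, 'ibisScan', [], ['copy5.tif'], 'ibis.exe')
print(page.find('filters').get('number_of'))  # 1
```

Replaying the journal then amounts to walking the `<filters>` container in document order and re-executing each recorded module.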
Wanda regions provide the possibility of multiple levels of annotations and local filtering. Regions may either be manually extracted or be the result of an automatic document segmentation. We summarize the essential region properties in Table 2.

The `<annotation/>` tag is generic; it is merely a wrapper tag specifying a name space. This name space references a set of application-specific tags, e.g. `<writer/>`, `<script/>`, `<content/>`, or `<material/>`. We summarize the content of these annotations in Table 3.

We introduce a `<meta/>` tag that groups information about how a particular annotation was generated (author, date created, contact email, author's affiliation, etc.) and pertains to all WANDAML subsets. We also introduce a generic `<wanda_link/>` tag: it refers to a file locator and plays a role similar to an HTML hyperlink or an XLink.\(^3\) Finally, WANDAML allows experts to superimpose virtual ink (electronic ink), entered for instance with a digitizing tablet, on top of a document image. This is achieved with the **wink** XML subset, which we do not describe here for space reasons; see our technical report for details [1].

Overall, the **wandoc** DTD contains only 21 elements (or tags), but we have defined 10 specialized languages for our application, totalling 146 elements [1].

---

\(^3\)The definition of a **wanda_link** in terms of XLink has not been firmed up yet, but there are provisions for it in the DTD.

5 Application to forensic handwriting examination

WANDAML might become an important information exchange vehicle in daily forensic casework and forensic analysis research. After being generated by expert rating and/or by technical equipment, WANDAML data may be used to generate reports documenting handwriting characteristics and be transferred to a database for large-scale writer identification. In its current version, WANDAML elaborates on the FISH system [10] and introduces further categories and admissible values.
The annotation categories implemented in WANDAML (Table 3) are well established in forensic science. Other categories, such as those covering digitization, cleaning, and interactive measurements, are of great importance for computer-based examination. In the WANDA system, we implemented standard measurements such as slant, character and word spacing, and ascender and descender length [13]. Thanks to the open concepts of `<filters/>` and `<annotations/>`, WANDAML is also capable of fulfilling upcoming demands.

The representation of handwriting features in a standardized XML format supports the exchange of examination results between different laboratories and/or governmental entities. The data can be rapidly imported on any computer platform. WANDAML thus promotes interoperability and, with its anchors for further extensions, long-term usability.

6 Conclusion

This paper described WANDAML, a new data format for annotating forensic handwriting data. The choice of XML makes it intrinsically extensible. Our design is centered around a small number of concepts and conventions. Regions, annotations, and filters are the three essential elements of all WANDAML-annotated documents.

Regions are parts of documents delineated with a rectangular or polygonal shape. Regions are naturally organized by the XML hierarchy, which reflects well the hierarchical nature of documents' organization (e.g., page, paragraph, line, word, character). In addition, within a given hierarchical level, regions are organized in linked lists to encode logical relations such as reading order.

Region annotations are encapsulated within a generic annotation tag. Via the mechanism of XML name spaces, this provides users with the flexibility of defining their own types of annotations to supplement the types already defined: writer, material, script, and content.

WANDAML has a simple syntax to format filters. New types of filter inputs and outputs may be defined via the use of name spaces.
WANDAML also defines some application-specific filter formats and a virtual-ink encoding format.

We foresee that the simplicity and extensibility of the framework will encourage its widespread adoption within the handwriting recognition community and in the document analysis community at large.

Acknowledgements

This work was sponsored by the Bundeskriminalamt in Wiesbaden, Germany (BKA). We gratefully acknowledge the many persons who contributed, and particularly: Axel Kerkhoff, Werner Kuckuck, Gerhard Grube, Altug Metin, Tomas Kühn, Martin Penk, Steffen Rose, Johan Everts, Geertje Zwarts, Merijn van Erp, Bernhard Boser, and Stefan Giesler.

References
Objectives
- Review: Web, HTML
- CSS: Presentation of Web Pages
- Project discussion/planning

Web Review
• What made the WWW possible?
• What are the main applications that enable the Web?
  ➢ What protocol do they use to communicate?
• How does the process of retrieving a page work?

HTML Review
• What is used to mark up a document?
  ➢ What are its components?
• What are the two main types of elements?
  ➢ How are they different?
• How do we make...
  ➢ A heading
  ➢ A link
  ➢ An image
  ➢ A table
  ➢ A list

Lab 0
• How did Lab 0 go?
  ➢ Wiki?
  ➢ Validating your page?
• Anything tricky?
• Any questions?

cs.wlu.edu's Web Server Set Up
• How ~user directs to the user's public_html directory

```apache
<IfModule mod_userdir.c>
    #
    # UserDir is disabled by default since it can confirm the presence
    # of a username on the system (depending on home directory
    # permissions).
    #
    # UserDir disable
    #
    # To enable requests to /~user/ to serve the user's public_html
    # directory, remove the "UserDir disable" line above, and uncomment
    # the following line instead:
    #
    #UserDir public_html
    UserDir /home/www/users
</IfModule>
```

cs.wlu.edu's Web Server Set Up
- How ~user directs to the user's public_html directory: `public_html` -> `/home/www/users/sprenkle/`

cs.wlu.edu's Web Server Set Up
• Location of "main" web pages

```apache
# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
#
DocumentRoot "/var/www/html"
```

cs.wlu.edu's Web Server Set Up
- Why, when you go to a directory in the browser, you see index.html

```apache
# DirectoryIndex: sets the file that Apache will serve if a directory is requested.
#
# The index.html.var file (a type-map) is used to deliver content-negotiated
# documents. The MultiViews Option can be used for the same purpose, but it
# is much slower.
# DirectoryIndex index.html index.html.var
```

CSS: CASCADING STYLE SHEETS

Presentation of Web Pages
• Talked mostly about structure and content of HTML pages
• Want presentation to be *separate*
  ➢ In general, don't encode style into the HTML page itself
  ➢ Easier to apply different styles to a set of web pages or a whole web site

Cascading Style Sheets (CSS)
• Describe the **appearance**, **layout**, and **presentation** of information on a web page
  ➢ **How** information is to be displayed, not what is being displayed
• CSS is designed to specify style
  ➢ **HTML** is not
• Can be embedded in an HTML document or placed into a separate **.css** file
  ➢ Separate **.css** file advantage: one style sheet can be *shared* across many HTML documents

Why *Cascading* Style Sheets?
- **Cascading** because the attributes of an element cascade together in this order:
  - Browser's default styles
  - external style sheet files (in a `<link>` tag)
  - internal style sheets (inside a `<style>` tag in the page's header)
  - inline style (the `style` attribute of the HTML element)

Attaching a CSS File: `<link>`
- **link** appears in the **head** element
- Can link to multiple style sheet files
- When more than one style sheet defines a style for the same HTML element, the later sheet's properties are applied (it takes precedence)

```
<link rel="stylesheet" type="text/css" href="filename"/>
```

- **Example from W&L site:**

```
<link rel="stylesheet" type="text/css" href="http://www.wlu.edu/prebuilt/v2css/gateway.css">
<link rel="stylesheet" type="text/css" href="http://www.wlu.edu/prebuilt/shadowbox-3.0.3/shadowbox.css">
```

Basic CSS Rule Syntax
• A CSS file consists of one or more rules
• Each rule starts with a selector that specifies an HTML element
  ➢ Applies style properties to the element
  ➢ Properties have values

```css
selector {
    property: value;
    property: value;
    ...
    property: value;
}

p {
    font-family: sans-serif;
    color: blue;
}
```

What Can You Specify Styles For?
- CSS Categories
  - Colors
  - Fonts
  - Lists
  - Alignment of Text
  - Backgrounds
  - Borders
  - Margins
- Overview of properties: see Resources on the Wiki

April 26, 2016, Sprenkle, CSCI335

CSS Properties for Colors
• **color**: color of the element's text
• **background-color**: color that will appear behind the element

```css
p {
    color: red;
    background-color: black;
}
```

This paragraph uses the above style.

Specifying Colors
• Color names recognized by all browsers:
  ➢ aqua, black, blue, fuchsia, gray, green, lime, maroon, navy, olive, purple, red, silver, teal, white, yellow
• RGB codes: red, green, and blue values from 0 (none) to 255 (full)
• Hex codes: RGB values in base-16 from 00 (0, none) to FF (255, full)

Specifying Colors Examples
- Use Color Names, RGB codes, or Hex Codes

```css
p  { color: red; }
h2 { color: rgb(128, 0, 196); /* purple */ }
h3 { color: #FF8800; /* orange */ }
```

This paragraph uses the first style. This heading uses the second style. This heading uses the third style.

- Color references on Wiki Resources page

CSS Comments
- Use /* */ style comments
- CSS (and HTML) are not commented as rigorously as programming language code
- The // single-line comment is NOT supported in CSS

```css
/* CSS Comment.
   Can span multiple lines. */
p { color: red; }
```

FONTS, TEXT

CSS Properties for Fonts

<table>
<thead>
<tr>
<th>Property</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>font-family</td>
<td>which font will be used</td>
</tr>
<tr>
<td>font-size</td>
<td>how large the letters will be drawn</td>
</tr>
<tr>
<td>font-style</td>
<td>used to enable/disable italic style</td>
</tr>
<tr>
<td>font-weight</td>
<td>used to enable/disable bold style</td>
</tr>
</tbody>
</table>

font-family
• Examples:

```css
p  { font-family: "Georgia"; }
h2 { font-family: "Arial Narrow"; }
```

• Multi-name font names should be in quotes

This paragraph uses the first style. This heading uses the second style.
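The color notations above (names, `rgb()` triples, and hex codes) are three spellings of the same values. As a quick illustration outside the slides, two small Python helpers convert between the numeric forms:

```python
def rgb_to_hex(r, g, b):
    """Convert an rgb(r, g, b) triple (each 0-255) to a CSS hex code."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(code):
    """Convert a CSS hex code like '#FF8800' back to an (r, g, b) triple."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_hex(128, 0, 196))  # the purple from the example: '#8000C4'
print(hex_to_rgb("#FF8800"))    # the orange: (255, 136, 0)
```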
font-family
• Can specify multiple font names, from highest to lowest priority
  ➢ Use a generic font name last

```css
p { font-family: "Garamond", "Times New Roman", serif; }
```

This paragraph uses the above style. (Shown in Times New Roman because Garamond is not installed.)

• Generic font names:
  ➢ serif, sans-serif, cursive, fantasy, monospace
  ➢ Keywords, so no quotation marks

Possible Values for font-size
• Vague font sizes: xx-small, x-small, small, medium, large, x-large, xx-large
• Relative font sizes: smaller, larger
• Percentage font sizes, e.g., 90% or 120%
• Units: pixels (px), points (pt), m-size (em), x-height (ex)
  ➢ 16px, 16pt, 1.16em, 1.16ex (no spaces)

```css
p { font-size: large; }
```

This paragraph uses the above style.

em
- Defines the *proportion* of the letter width and height with respect to the point size of the current font
- Scalable measurement
- Originally derived from the width of the capital "M" in a particular typeface
- Not defined in terms of any specific typeface
  - Same for all fonts at a given point size
  - Example: 1 em in a 16 point typeface = 16 points
- Not an acronym or initialism and is pronounced the same as the letter it refers to, the letter "M"
- ex is similar but is based on the height of the lower-case x

font-weight and font-style
• Either can be set to **normal** to turn them off
  ➢ Such as for heading tags

```css
p {
    font-weight: bold;
    font-style: italic;
}
```

*This paragraph uses the above style.*

**body** Style
- Apply a style to the `body` element to apply a style to the entire body of your page
- Advantage: don't need to apply a style to each element

```css
body {
    color: #666666;
    font-size: 14px;
}
```

**Example: Course Web page**

W3C CSS Validator
- jigsaw.w3.org/css-validator/
- Or use the WebDeveloper Tool
- Checks your CSS to make sure it meets the official CSS specifications
  - May need to change the CSS version to CSS3
  - Default seems to be CSS2.1
- More picky than the web browser, which may render malformed CSS correctly

Practice Problem: Simpsons
• Add
a style sheet to the page
• Entire page should have a Simpsons-yellow background and use 14 pt font
• Main heading should use "Comic Sans MS" font
• Lists should appear in "Lucida Console" font
• Link text should be red
• List bullets should have a blue background
• List items should have a green background

Why `<em>` and `<strong>`, not `<i>` and `<b>`?
- **strong** and **em** describe attributes of the content
  - "This is something important in the document."
- **b** and **i** describe formatting and presentation
  - "I want this to be bold."
- Add style to **strong** and **em** to do something other than bold or italics
- What would this do?

```css
strong {
    font-weight: normal;
    color: red;
}
em {
    font-style: normal;
    color: #ff00ff;
}
```

## CSS Text Properties Subset

<table>
<thead>
<tr>
<th>Property</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>text-align</strong></td>
<td>Alignment of text within its element, e.g., left, right, center, or justify</td>
</tr>
<tr>
<td><strong>text-decoration</strong></td>
<td>Decorations, such as underline, line-through, blink; can be combined</td>
</tr>
<tr>
<td><strong>line-height, word-spacing, letter-spacing</strong></td>
<td>Gaps between the various portions of text</td>
</tr>
<tr>
<td><strong>text-indent</strong></td>
<td>Indents the first letter of each paragraph</td>
</tr>
</tbody>
</table>

CSS Properties for Dimensions
- **width, height:**
  - How wide or tall to make this element
  - Specified as percentage of frame or in pixels
- **max-width, max-height, min-width, min-height:**
  - Maximum or minimum size of this element in the given dimension

Grouping Styles
- A style can select multiple elements, separated by commas
- The given properties will be applied to all of the elements

```css
p, h1, h2 {
    color: blue;
}
h2 {
    background-color: yellow;
}
```

This paragraph uses the above style. This heading uses the above style.
- Individual elements can also have their own styles (like `h2` above)

Document Tree
• An HTML document's elements can be viewed as a tree

```html
<html>
  <head><title>My Web Page</title></head>
  <body>
    <h1>My Web Page</h1>
    <p>My Favorite Movies: </p>
    <ul>
      <li>Tombstone</li>
      <li>The Muppet Movie</li>
    </ul>
  </body>
</html>
```

Inheriting Styles
• Elements inherit their parents' styles
• A more tightly matching rule can override a more general inherited rule
• Not all properties are inherited
  ➢ Example: Borders are not inherited
  ➢ Some have default, overriding styles

Simpsons CSS Practice
• All headings should be centered, bolded
• Images should take up 1/3 of the width of the screen
• List items should only take up 1/2 of the width of the screen
• The text should be spaced so that the lines are further apart
• Links should be slightly larger than the other text on the page

CSS Classes
• Selectively apply a CSS rule to only elements of a specific class
  ➢ Give a style to some occurrences of an element
• From course schedule page:
  ➢ Set the background color for a row in the table, if its class is "even"

```html
tr.even { background: #D8DFE7; }

<table>
  <tr class="even"><td>…</td></tr>
  <tr class="odd"><td>…</td></tr>
</table>
```

CSS Class Selector Without Element
- Selectively applies a style to any element that is part of the class

```
.smallCaps { font-variant: small-caps; }
```

```html
<h2 class="smallCaps">Heading 2</h2>
<p class="smallCaps">Paragraph Example</p>
```

CSS ID Selectors
• Selectively applies a CSS rule to only the elements that have a particular id
• Differs from class selector in that an id can only be used once in the HTML document
  ➢ Page won't validate otherwise
• HTML element can be omitted
  ➢ Rule will
apply to any element with the given ID

```html
element#id { ... }
```

# CSS ID Selectors
- **Course Web Page Example:**

```html
#sidebar {
    color: rgb(117,144,174);
    background-color: transparent;
    width: 8em;
    padding: 1ex 0;
    border: 1px solid rgb(204,204,204);
    position: absolute;
    left: 4px;
    top: 141px;
}

<div id="sidebar"><!-- sidebar --></div>
```

Logical Divisions in HTML: `<div>`
- Denotes a section or division of an HTML document (block-level)
- Has no on-screen appearance
- Can apply a style or id to it
  - Inherited by all elements inside the `div`
- Powerful for layouts, presentation

Inline Styling Sections: `<span>`
- Has no onscreen appearance
- Can apply a style or ID to it
  - applied to the text inside the `span`

```html
<p>Here is some text in
<span class="smallCaps">Small Caps</span>.
</p>
```

Here is some text in **SMALL CAPS**.

Grouping Tags
• Can group together some elements and give them a style
• Similar to use of the `div` tag but for a specific type of element
• Example: `colgroup`
  ➢ Groups together columns with the same style
• More grouping tags on Thursday...

Embedding Style Sheets: `<style>`
- Placed within a page's `head` element
- Preferred: linking to an external style sheet
  - Especially when many styles

```html
<head>
<style type="text/css">
<!-- /* hide from browsers that can't handle */
p  { font-family: sans-serif }
h2 { color: red }
-->
</style>
</head>
```

Inline Styles with the **style** Attribute
- Higher precedence than embedded or linked styles
- Useful for one-time overrides

```html
<p style="font-family: sans-serif; color: red;">
This is a red paragraph.
</p>
```

Practice Problem
• Modify the Simpsons' CSS and HTML so that the second list item belongs to the "even" class
• An element in the "even" class has a gray background

## CSS Background Properties

<table>
<thead>
<tr>
<th>Property</th>
<th>Meaning/Values</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>background-color</code></td>
<td>Color to fill background</td>
</tr>
<tr>
<td><code>background-image</code></td>
<td>Image to place in background</td>
</tr>
<tr>
<td><code>background-position</code></td>
<td>Placement of bg image within an element</td>
</tr>
<tr>
<td><code>background-repeat</code></td>
<td>Whether/how bg image should be repeated; values=<code>repeat</code> (default), <code>repeat-x</code>, <code>repeat-y</code>, or <code>no-repeat</code></td>
</tr>
<tr>
<td><code>background-attachment</code></td>
<td>Whether bg image scrolls within the page</td>
</tr>
<tr>
<td><code>background</code></td>
<td>Shorthand to set all background properties</td>
</tr>
</tbody>
</table>

Advanced Selection
• Applies the given properties to `selector2` only if it is *inside* a `selector1` on the page

```
selector1 selector2 { properties }
```

• Applies the given properties to `selector2` only if it is *directly* inside a `selector1`
  ➢ no intermediate tags

```
selector1 > selector2 { properties }
```

# Pseudo Classes

<table>
<thead>
<tr>
<th>Class Name</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>active</td>
<td>An activated or selected element</td>
</tr>
<tr>
<td>focus</td>
<td>An element that has the keyboard focus</td>
</tr>
<tr>
<td>hover</td>
<td>An element that has the mouse over it</td>
</tr>
<tr>
<td>link</td>
<td>A link that has not been visited</td>
</tr>
<tr>
<td>visited</td>
<td>A link that has already been visited</td>
</tr>
<tr>
<td>first-child</td>
<td>An element that is the first child of another</td>
</tr>
</tbody>
</table>

Pseudo Classes
• Example uses:

```css
a:link    {color: #ff0000;} /* unvisited link */
a:visited {color: #00FF00} /* visited link */
a:hover {color: #FF00FF} /* mouse over link*/ a:active {color: #0000FF} /* selected link */ ``` Modify so that unvisited links are blue, but only if they’re within a `paragraph` inside of the `div` with id `sidebar` • Course Web page Example Other Properties <table> <thead> <tr> <th>Property</th> <th>Meaning, Values</th> </tr> </thead> <tbody> <tr> <td><strong>list-style-type</strong></td> <td>Use with <code>ol</code> or <code>ul</code>. Some possible values: <code>none</code>, <code>decimal</code>, <code>upper-roman</code>, <code>lower-alpha</code>, <code>square</code>, ...</td> </tr> <tr> <td><strong>display</strong></td> <td>Sets the type of CSS box model an element is displayed with. Values: <code>none</code>, <code>inline</code>, <code>block</code>, <code>run-in</code>, <code>compact</code>, ... Use sparingly--can radically alter page layout</td> </tr> <tr> <td><strong>visibility</strong></td> <td>Sets whether an element should be shown onscreen. Element will still take up space onscreen but will not be shown; to make it not take up any space, set <code>display</code> to <code>none</code> instead. Values: <code>visible</code> (default) or <code>hidden</code>. 
Can be used to show/hide dynamic HTML content on the page in response to events</td>
</tr>
</tbody>
</table>

LAYOUT USING BOX MODEL

Layout Using CSS: Box Model
- For layout, every element is composed of:
  - the element's content
  - a border around the element
  - padding between the content and the border (inside)
  - margin between the border and other content (outside)
- width = content width + L/R padding + L/R border + L/R margin
- height = content height + T/B padding + T/B border + T/B margin
- IE6 doesn't implement these correctly

Border Properties
• Use the **border** property to set borders on all 4 sides
• Properties specified in this order:

<table>
<thead>
<tr>
<th>thickness</th>
<th>specified in px, pt, em, %, or a general width: thin, medium, thick</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>style</strong></td>
<td>One of none, hidden, dotted, dashed, double, groove, inset, outset, ridge, solid</td>
</tr>
<tr>
<td><strong>color</strong></td>
<td>specified as seen previously for text and background colors</td>
</tr>
</tbody>
</table>

Border Properties
• Use the **border** property to set borders on all 4 sides: `border: <thickness> <style> <color>;`
• To set a specific property of the border on all 4 sides: `border-color`, `border-width`, `border-style`
• All properties of the border on a particular side: `border-bottom`, `border-left`, `border-right`, `border-top`
• A specific property on a particular side:
  ➢ E.g., `border-bottom-color`, `border-bottom-style`, `border-bottom-width`

Border Example

```css
h1, h2 {
    font-family: sans-serif;
    color: gray;
    border-bottom: 1px solid black;
}
```

Unlike underline, the border extends to the edge of the element's width

Padding
- **padding**: padding on all 4 sides
  - If one value: all 4 sides
  - 2 values: top/bottom right/left
  - 3 values: top right/left bottom
  - 4 values: top right bottom left
- **padding-bottom**: padding on bottom side only
- **padding-left**: padding on left side only
- **padding-right**: padding on
right side only
- **padding-top**: padding on top side only

You may have TRouBLe remembering the order at first

Padding Example

```css
p {
    padding: 20px;
    border: 3px solid black;
}
h2 {
    padding: 0px;
    background-color: yellow;
}
```

This is the first paragraph This is the second paragraph This is a heading

Padding shares the element's background color

Padding Example
Can set padding for each side separately:

```css
p {
    padding-left: 200px;
    padding-top: 30px;
    background-color: fuchsia;
}
```

This is the first paragraph This is the second paragraph

Margins
- **margin**: margin on all 4 sides
  - If one value: all 4 sides
  - 2 values: top/bottom right/left
  - 3 values: top right/left bottom
  - 4 values: top right bottom left
- **margin-bottom**: margin on bottom side only
- **margin-left**: margin on left side only
- **margin-right**: margin on right side only
- **margin-top**: margin on top side only

Margin Example

```css
p {
    margin: 70px;
    background-color: fuchsia;
}
```

This is the first paragraph

Margin: Space between elements

This is the second paragraph

Margin Example

```css
p {
    margin-left: 200px;
    background-color: fuchsia;
}
```

This is the first paragraph This is the second paragraph

FLOAT & CLEAR

**float Property**
- **float** can have values *left*, *right*, or *none* (default)
- Floating elements are removed from normal document flow
- Underlying text wraps around the floating element as necessary
- Usually has a **width** property
  - Otherwise, default is 100% width
  - Other text can't wrap around

Practice Problem
• Make the Simpsons image float to the right so that the text wraps around it

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to heaven, we were all going direct
the other way - in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.

clear Property
• Disallows any floating elements from overlapping this element
  ➢ This element will start "below" floating elements
• clear can be left, right, both, or none (default)

POSITIONING

### position Property

<table>
<thead>
<tr>
<th>Property</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>static</strong></td>
<td>default position</td>
</tr>
<tr>
<td><strong>relative</strong></td>
<td>offset from its normal static position, relative to the block element that contains it</td>
</tr>
<tr>
<td><strong>absolute</strong></td>
<td>at a fixed position <em>within its containing element</em></td>
</tr>
<tr>
<td><strong>fixed</strong></td>
<td>at a fixed position <em>within the browser window</em></td>
</tr>
</tbody>
</table>

**fixed Position**
- At a fixed position *within the browser window*
- *top, bottom, left, right* properties specify the positions of the box's corners
  - Can be negative to create an element that sits outside the visible browser window

Those Annoying Ads: `z-index`
- Sets which absolutely positioned element will appear on top of another that occupies the same space
  - Higher `z-index` wins
- Can be `auto` (default) or a number

Using WebDeveloper
- Using Outlines
- View CSS Style Information

Bootstrap
- "most popular HTML, CSS, and JS framework for developing responsive, mobile first projects on the web"
- Free front-end framework for faster and easier web development
- Includes HTML- and CSS-based design templates for typography, forms, buttons, ...
  - optional JavaScript plugins
- Easily create responsive designs

http://getbootstrap.com/
http://www.w3schools.com/bootstrap/

TODO
• Lab 1: CSS
  ➢ Practice using plugins
  ➢ Create your own home page
• Readings/Summaries on Sakai forums
• Project – more on Thursday
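Two of the numeric rules on the slides above can be sketched in Python as an illustration (not part of the slides; symmetric left/right values are assumed for the width formula): the 1/2/3/4-value padding/margin shorthand expansion, and the box-model width arithmetic.

```python
def expand_shorthand(values):
    """Expand a CSS padding/margin shorthand (1-4 values) to
    (top, right, bottom, left), per the slide rules:
    1 value: all sides; 2: top/bottom right/left;
    3: top right/left bottom; 4: top right bottom left."""
    if len(values) == 1:
        t = r = b = l = values[0]
    elif len(values) == 2:
        t, r = values
        b, l = t, r
    elif len(values) == 3:
        t, r, b = values
        l = r
    else:
        t, r, b, l = values
    return (t, r, b, l)

def box_total_width(content, padding, border, margin):
    """Total horizontal space an element occupies under the box model:
    content width + left/right padding + border + margin
    (symmetric values assumed for simplicity)."""
    return content + 2 * (padding + border + margin)

print(expand_shorthand([10, 20]))       # (10, 20, 10, 20)
print(box_total_width(200, 20, 3, 10))  # 266
```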
[2018, 2046, 9], [2046, 2371, 10], [2371, 2793, 11], [2793, 3122, 12], [3122, 3661, 13], [3661, 4001, 14], [4001, 4231, 15], [4231, 4462, 16], [4462, 4780, 17], [4780, 5112, 18], [5112, 5358, 19], [5358, 5370, 20], [5370, 5840, 21], [5840, 6062, 22], [6062, 6425, 23], [6425, 6800, 24], [6800, 7418, 25], [7418, 7631, 26], [7631, 7879, 27], [7879, 8188, 28], [8188, 8531, 29], [8531, 8579, 30], [8579, 9026, 31], [9026, 9719, 32], [9719, 9983, 33], [9983, 10339, 34], [10339, 10627, 35], [10627, 10915, 36], [10915, 11164, 37], [11164, 11478, 38], [11478, 11849, 39], [11849, 12099, 40], [12099, 12429, 41], [12429, 12756, 42], [12756, 13004, 43], [13004, 13265, 44], [13265, 13504, 45], [13504, 13820, 46], [13820, 14039, 47], [14039, 14205, 48], [14205, 15120, 49], [15120, 15449, 50], [15449, 16109, 51], [16109, 16484, 52], [16484, 17381, 53], [17381, 17404, 54], [17404, 17803, 55], [17803, 18289, 56], [18289, 18830, 57], [18830, 18994, 58], [18994, 19424, 59], [19424, 19667, 60], [19667, 19872, 61], [19872, 20239, 62], [20239, 20408, 63], [20408, 20550, 64], [20550, 20564, 65], [20564, 20873, 66], [20873, 20954, 67], [20954, 21568, 68], [21568, 21755, 69], [21755, 21767, 70], [21767, 22107, 71], [22107, 22338, 72], [22338, 22552, 73], [22552, 22618, 74], [22618, 23011, 75], [23011, 23154, 76]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23154, 0.07143]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
e8dc0410d88bb792aedb810ee8ac5cdb6bc8d5e5
Exact Combinational Circuit Synthesis in Haskell

Paul Tarau
Department of Computer Science and Engineering
University of North Texas

Abstract. An exact synthesizer for small circuits is provided as a literate Haskell program.

Keywords: functional programming and hardware design, symbolic circuit modeling, exact combinational logic synthesis

1 Introduction

We need the following Haskell modules:

```haskell
module Syn where
import Data.List
import Data.Bits
import Data.Array
```

We start with some auxiliary functions that we will use.

Bitvector Boolean Operation Definitions

```haskell
type N = Integer

nand_ :: N -> N -> N -> N
nor_  :: N -> N -> N -> N
impl_ :: N -> N -> N -> N
less_ :: N -> N -> N -> N
and_  :: N -> N -> N -> N

nand_ mask x y = mask .&. complement (x .&. y)
nor_  mask x y = mask .&. complement (x .|. y)
impl_ mask x y = (mask .&. complement x) .|. y
less_ _    x y = x .&. complement y
and_  _    x y = x .&. y
```

Boolean Operation Encodings and Names

```haskell
opcode m 0 = nand_ m
opcode m 1 = nor_ m
opcode m 2 = impl_ m
opcode m 3 = less_ m
opcode _ 4 = xor
opcode m 5 = and_ m
opcode _ n = error ("unexpected opcode:"++(show n))

opname 0 = "nand"
opname 1 = "nor"
opname 2 = "impl"
opname 3 = "less"
opname 4 = "xor"
opname 5 = "and"
opname n = error ("no such opcode:"++(show n))
```

A Few Interesting Libraries

```haskell
symops   = [0,1]
asymops  = [2,3]
impl_and = [2,5]
```

Before looking at the actual code, one can try it out through a few simple tests.

Tests for the Circuit Synthesizer

```haskell
t0 = findFirstGood symops 3 8 71
t1 = syn asymops 3 71
t2 = mapM_ print (synall asymops 2)
t3 = syn symops 3 83
t4 = syn asymops 3 83
t5 = syn [0..4] 3 83   -- ite with all ops
t6 = syn asymops 3 105 -- x xor y xor z (cpu intensive)
```

2 Exact Combinational Circuit Synthesis

We start by reviewing a mechanism for fast boolean evaluation, following [1] and [2].
2.1 Evaluation of Boolean Functions with Bitvector Operations

The boolean evaluation mechanism uses integer encodings of $2^n$ bits for each boolean variable $x_0, \ldots, x_{n-1}$. In a way reminiscent of qubits in quantum computing, bitvector operations evaluate all value combinations at once.

Proposition 1 (Knuth, [1]) Let $x_k$ be a variable for $0 \leq k < n$ where $n$ is the number of distinct variables in a boolean expression. Then column $k$ of the truth table represents, as a bitstring, the natural number:

$$x_k = (2^{2^n} - 1)/(2^{2^{n-k-1}} + 1)$$ (1)

For instance, if $n = 2$, the formula computes $x_0 = 3 = [0, 0, 1, 1]$ and $x_1 = 5 = [0, 1, 0, 1]$.

The following functions, working with arbitrary-length bitstrings, are used to evaluate the variables $x_k$, $k \in [0..n-1]$, with formula 1 and to map the constant 1 to the bitstring of length $2^n$ consisting of ones only. The constant 1 is provided by the function allOnes.

```haskell
allOnes nvars = 2^2^nvars - 1
```

Next we define a function providing the (arbitrary size) Integer representation of the $k$-th boolean variable (out of $n$).

```haskell
var_n n k = var_mn (allOnes n) n k

var_mn mask n k = mask `div` (2^2^(n-k-1)+1)
```

We have used in var_mn an adaptation of the efficient bitstring-integer encoding described in the Boolean Evaluation section of [1]. Intuitively, it is based on the idea that one can look at $n$ variables as bitstring representations of the $n$ columns of the truth table. Variables representing such bitstring-truth tables (seen as projection functions) can be combined with the usual bitwise integer operators, to obtain new bitstring truth tables, encoding all possible value combinations of their arguments. Note that the constant 0 is represented as 0 while the constant 1 is represented as $2^{2^n} - 1$, corresponding to a column in the truth table containing ones exclusively.
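As a quick sanity check of formula 1, the encoding is easy to reproduce outside Haskell. The following Python sketch mirrors the Haskell definitions above (an illustration, not part of the paper's source):

```python
# Knuth's truth-table-column encoding, transcribed from the Haskell
# definitions allOnes and var_n; Python integers are unbounded, so the
# arbitrary-size arithmetic carries over directly.

def all_ones(nvars):
    # the constant 1: a truth-table column of 2^n ones
    return 2 ** (2 ** nvars) - 1

def var_n(n, k):
    # column k of the n-variable truth table, as a natural number
    return all_ones(n) // (2 ** (2 ** (n - k - 1)) + 1)

print(var_n(2, 0), var_n(2, 1))   # 3 5, i.e. [0,0,1,1] and [0,1,0,1]
```

For n = 3 the same function yields 15, 51 and 85, matching the init_inputs output shown below.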
We will now use these variable encodings for combinational circuit synthesis, known to be intractable for anything beyond a few input variables. Clearly, a speed-up by a factor proportional to the machine’s wordsize matters in this case.

2.2 Encoding the Primary Inputs

First, let us extend the encoding to cover the constants 1 and 0, which we represent as “variables” $n$ and $n+1$ and encode as bitstrings of $2^n$ ones (i.e. $2^{2^n} - 1$, passed as the precomputed parameter m to avoid costly recomputation) or $2^n$ zeros.

```haskell
encode_var m n k | k==n   = m
encode_var m n k | k==n+1 = 0
encode_var m n k = var_mn m n k
```

Next we can precompute all the inputs knowing the number $n$ of primary inputs for the circuit we want to synthesize:

```haskell
init_inputs n = 0:m:(map (encode_var m n) [0..n-1]) where m=allOnes n
```

```
> init_inputs 2
[0,15,3,5]
> init_inputs 3
[0,255,15,51,85]
```

Given that the inputs all have distinct encodings, we can decode them back - this function will be needed after the circuit is found.

```haskell
decode_var nvars v | v == allOnes nvars = nvars
decode_var nvars 0 = nvars+1
decode_var nvars v = head
  [k | k <- [0..nvars-1], encode_var m nvars k == v]
  where m = allOnes nvars
```

```
> map (decode_var 2) (init_inputs 2)
[3,2,0,1]
> map (decode_var 3) (init_inputs 3)
[4,3,0,1,2]
```

We can now connect the inputs to their future occurrences as leaves in the DAG representing the circuit. This simply means finding all the functions from the set of input variables to the set of their occurrences, represented as a list (with possibly repeated values).
```haskell
bindings 0 us = [[]]
bindings n us =
  [zs | ys <- bindings (n-1) us, zs <- map (:ys) us]
```

```
> bindings 2 [0,3,5]
[[0,0],[3,0],[5,0],[0,3],[3,3],[5,3],[0,5],[3,5],[5,5]]
```

For fast lookup, we place the precomputed value combinations in a list of arrays.

```haskell
generateVarMap occs vs =
  map (listArray (0,occs-1)) (bindings occs vs)
```

```
> generateVarMap 2 [3,5]
[array (0,1) [(0,3),(1,3)], array (0,1) [(0,5),(1,3)],
 array (0,1) [(0,3),(1,5)], array (0,1) [(0,5),(1,5)]]
```

2.3 The Folds and the Unfolds

We are now ready to generate trees with library operations marking internal nodes of type F and primary inputs marking the leaves of type V.

```haskell
data T a = V a | F a (T a) (T a) deriving (Show, Eq)
```

Generating all trees is a variant of an unfold operation.

```haskell
generateT lib n = unfoldT lib n 0

unfoldT _ 1 k = [V k]
unfoldT lib n k = [F op l r |
  i <- [1..n-1],
  l <- unfoldT lib i k,
  r <- unfoldT lib (n-i) (k+i),
  op <- lib]
```

For later use, we also define the dual fold operation, parameterized by a function g describing the action on the leaves and a function f describing the action on the internal nodes.

```haskell
foldT _ g (V i) = g i
foldT f g (F i l r) =
  f i (foldT f g l) (foldT f g r)
```

The foldT operation will be used later in the synthesis process - for things like boolean evaluation.
A simpler use would be to compute the size of a formula as follows:

```haskell
fsize t = foldT f g t where
  g i = 0
  f i l r = 1+l+r
```

We will use foldT to decode the constants and variables occurring in the result:

```haskell
decodeV nvars is i = V (decode_var nvars (is!i))

decodeF i x y = F i x y

decodeResult nvars (leafDAG,varMap,_) =
  foldT decodeF (decodeV nvars varMap) leafDAG
```

The following example shows the action of the decoder:

```
> decodeV 2 (array (0,1) [(0,5),(1,3)]) 0
V 1
> decodeV 2 (array (0,1) [(0,5),(1,3)]) 1
V 0
> decodeResult 2 (F 1 (V 0) (V 1), array (0,1) [(0,5),(1,3)], 4)
F 1 (V 1) (V 0)
```

We can also use foldT to generate a human readable string representation of the result (using the opname function):

```haskell
showT nvars t = foldT f g t where
  g i =
    if i < nvars
      then "x"++(show i)
      else show (nvars+1-i)
  f i l r = (opname i)++"("++l++","++r++")"
```

```
> showT 2 (F 4 (V 0) (F 1 (V 1) (V 0)))
"xor(x0,nor(x1,x0))"
```

2.4 Assembling the Circuit Synthesizer

Definition 1 A Leaf-DAG is obtained from an ordered tree by fusing together equal leaves. Leaf equality in our case means sharing a primary input variable or a constant.

In the next function we build candidate Leaf-DAGs by combining two generators: the inputs-to-occurrences generator generateVarMap and the expression tree generator generateT. Then we compute their bitstring value with a foldT-based boolean formula evaluator. The function is parameterized by a library of logic gates lib, the number of primary inputs nvars and the maximum number of leaves it can use maxleaves:

```haskell
buildAndEvalLeafDAG lib nvars maxleaves =
  [(leafDAG, varMap, eval varMap leafDAG) |
      k <- [1..maxleaves],
      varMap <- generateVarMap k vs,
      leafDAG <- generateT lib k
  ] where
      mask = allOnes nvars
      vs = init_inputs nvars
      eval varMap leafDAG = foldT (opcode mask) (varMap!) leafDAG
```

We are now ready to test if the candidate matches the specification given by the truth table of n variables ttn.

```haskell
findFirstGood lib nvars maxleaves ttn = head
  [r | r <- buildAndEvalLeafDAG lib nvars maxleaves,
       testspec ttn r
  ] where testspec spec (_,_,v) = spec == v
```

```
> findFirstGood [1] 2 8 1
(F 1 (F 1 (V 0) (V 1)) (F 1 (V 2) (V 3)),
 array (0,3) [(0,5),(1,0),(2,3),(3,0)],1)
```

The final steps of the circuit synthesizer consist of converting the successful first candidate (guaranteed to be minimal, as candidates are generated ordered by increasing number of nodes) to a human readable form.
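To see the whole generate-and-test loop in one self-contained piece, here is a compact Python re-implementation (an illustrative sketch with names of my own choosing, not the paper's code). It fixes a library containing only nand, enumerates leaf counts in increasing order, then trees and input bindings, and stops at the first candidate whose bitstring value equals the target truth table:

```python
from itertools import product

NVARS = 2
MASK = 2 ** (2 ** NVARS) - 1        # allOnes 2
INPUTS = [0, MASK, 3, 5]            # constants 0 and 1, then x0, x1

def nand(x, y):
    # bitvector nand restricted to the truth-table width
    return MASK & ~(x & y)

def renum(t, i):
    # shift the leaf slot numbers of a tree by offset i
    if t[0] == 'V':
        return ('V', t[1] + i)
    return ('F', renum(t[1], i), renum(t[2], i))

def trees(n):
    # all binary trees with n leaves; leaves carry slot indices 0..n-1
    if n == 1:
        yield ('V', 0)
        return
    for i in range(1, n):
        for l in trees(i):
            for r in trees(n - i):
                yield ('F', l, renum(r, i))

def evaluate(t, binding):
    # bitvector evaluation: each leaf slot is bound to an encoded input
    if t[0] == 'V':
        return binding[t[1]]
    return nand(evaluate(t[1], binding), evaluate(t[2], binding))

def find_first(tt, maxleaves=8):
    # smallest-first search, in the spirit of findFirstGood
    for k in range(1, maxleaves + 1):
        for t in trees(k):
            for binding in product(INPUTS, repeat=k):
                if evaluate(t, binding) == tt:
                    return t, binding

t, b = find_first(6)                # truth table 6 = xor(x0,x1)
print(t, b)
```

Because leaf counts are tried in increasing order, the first hit is minimal in the number of leaves; find_first(6) recovers a nand-only realization of xor over two inputs.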
```haskell
synthesize_from lib nvars maxleaves ttn = decodeResult nvars candidate where
  candidate = findFirstGood lib nvars maxleaves ttn

synthesize_with lib nvars ttn =
  synthesize_from lib nvars (allOnes nvars) ttn
```

The following two functions provide a human readable output:

```haskell
syn lib nvars ttn = (show ttn) ++ ":" ++
  (showT nvars (synthesize_with lib nvars ttn))

synall lib nvars = map (syn lib nvars) [0..(allOnes nvars)]
```

The next example shows a minimal circuit for the 2-variable boolean function with truth table 6 (xor) in terms of the library with opcodes in [0], i.e. containing only the operator nand. Note that codes for functions represent their truth tables, i.e. 6 stands for [0,1,1,0].

The following examples show circuits synthesized, in terms of a few different libraries, for the 3-argument function if-then-else (corresponding to truth table 83, i.e. [0,1,0,1,0,0,1,1]). As this function is the building block of boolean circuit representations like Binary Decision Diagrams, having perfect minimal circuits for it in terms of a given library has clear practical value. The reader might notice that it is quite unlikely to come up intuitively with some of these synthesized circuits.

3 Related work

We refer to [3] for general information on circuit design. The use of functional programming as a hardware description tool goes back as early as [4]. Tools like Hydra, Lava and Wired [5] have shown that various design concepts can be elegantly embedded in Haskell [6-8]. Exact circuit synthesis has been a recurring topic of interest in circuit design, complexity theory, boolean logic, combinatorics and graph theory for more than half a century [1, 9-14]. In [15, 2] a Prolog-based exact synthesis algorithm is used to compare the expressiveness of various gate libraries in combinational logic. In [16] a similar algorithm is used for reversible circuits. Synthesis of reconfigurable logic is covered in [17].

References

7.
Claessen, K., Pace, G.: Embedded hardware description languages: Exploring the design space. Hardware Design and Functional Languages (HFL'07), Braga, Portugal (2007)
{"Source-Url": "http://www.cse.unt.edu/~tarau/teaching/cf1/Syn.pdf", "len_cl100k_base": 4362, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 20054, "total-output-tokens": 5807, "length": "2e12", "weborganizer": {"__label__adult": 0.0007128715515136719, "__label__art_design": 0.000743865966796875, "__label__crime_law": 0.0005288124084472656, "__label__education_jobs": 0.0008525848388671875, "__label__entertainment": 0.00014793872833251953, "__label__fashion_beauty": 0.0003018379211425781, "__label__finance_business": 0.00031495094299316406, "__label__food_dining": 0.000690460205078125, "__label__games": 0.0007944107055664062, "__label__hardware": 0.0145111083984375, "__label__health": 0.001026153564453125, "__label__history": 0.0003733634948730469, "__label__home_hobbies": 0.00033020973205566406, "__label__industrial": 0.0014753341674804688, "__label__literature": 0.00022804737091064453, "__label__politics": 0.0004758834838867187, "__label__religion": 0.000950336456298828, "__label__science_tech": 0.113037109375, "__label__social_life": 0.00012481212615966797, "__label__software": 0.0048370361328125, "__label__software_dev": 0.85498046875, "__label__sports_fitness": 0.0005898475646972656, "__label__transportation": 0.0015392303466796875, "__label__travel": 0.0003154277801513672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16030, 0.03038]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16030, 0.64374]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16030, 0.65324]], "google_gemma-3-12b-it_contains_pii": [[0, 1065, false], [1065, 2467, null], [2467, 4805, null], [4805, 7031, null], [7031, 9767, null], [9767, 11980, null], [11980, 14413, null], [14413, 16030, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1065, true], [1065, 2467, null], [2467, 4805, null], [4805, 7031, 
null], [7031, 9767, null], [9767, 11980, null], [11980, 14413, null], [14413, 16030, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16030, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16030, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16030, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16030, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16030, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16030, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16030, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16030, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16030, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16030, null]], "pdf_page_numbers": [[0, 1065, 1], [1065, 2467, 2], [2467, 4805, 3], [4805, 7031, 4], [7031, 9767, 5], [9767, 11980, 6], [11980, 14413, 7], [14413, 16030, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16030, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
8e0e47ef24e575e9e05b76c2581400f70bd2e0b3
Natural Language Processing and Program Analysis for Supporting Todo Comments as Software Evolves

Pengyu Nie, Junyi Jessy Li, Sarfraz Khurshid, Raymond Mooney, and Milos Gligoric
The University of Texas at Austin
{pynie@, jessy@austin., khurshid@ece., mooney@cs., gligoric@}utexas.edu

Abstract

Natural language elements (e.g., API comments, todo comments) form a substantial part of software repositories. While developers routinely use many natural language elements (e.g., todo comments) for communication, the semantic content of these elements is often neglected by software engineering techniques and tools. Additionally, as software evolves and development teams re-organize, these natural language elements are frequently forgotten, or just become outdated, imprecise and irrelevant. We envision several techniques, which combine natural language processing and program analysis, to help developers maintain their todo comments. Specifically, we propose techniques to synthesize code from comments, make comments executable, answer questions in comments, improve comment quality, and detect dangling comments.

Introduction

Natural language elements form a substantial part of software repositories. These elements are used to communicate between users and developers (e.g., API comments, bug reports, and feature requests), and among developers (e.g., todo comments). Todo comments contain invaluable data describing changes to code that can improve software maintainability, reliability, and quality. Despite occurring frequently in practice and containing valuable information, these elements, because of their informal nature, are largely not exploited by existing software engineering tools. Research on combining program analysis and natural language processing (NLP), which recently started to gain some traction, is in its infancy (Ernst 2017; Arnaoudova et al. 2015; Hindle et al. 2012; Oda et al.
2015; Allamanis, Peng, and Sutton 2016; Vasilescu, Casalnuovo, and Devanbu 2017; Raychev, Vechev, and Krause 2015; Nguyen et al. 2012), and the existing work, although novel, has mostly neglected comments that are used to communicate among developers (Storey et al. 2008; Sridhara 2016). In this position paper, we argue for the importance of the content of todo comments and envision several techniques to automatically maintain and resolve those comments.

This position paper is to a large extent inspired by our extensive analysis of a large corpus of open-source projects. Specifically, we analyzed over 30k open-source projects, which are available on GitHub, totaling 585 million lines of code (not counting comments). We found that these projects include over 297 million lines of comments (~30% of the total lines). Our analysis also uncovered more than 700k todo comments in this corpus. We manually inspected (and discussed) hundreds of comments, code and comment changes, and commit messages. In the following subsections, we will frequently refer to this dataset and our findings related to it. All examples of code and comments that we provide in this paper are taken from one of the analyzed open-source projects.

This paper mostly focuses on todo comments that contain valuable information on increasing software quality, performance, maintenance, and reliability. We consider the following three categories of todo comments. First, task comments explain what features are currently not supported or what optimizations need to be implemented (e.g., from the Google Guava project: “For optimal performance, use a binary search when \(\text{targets.size()} < \frac{\text{size()}}{\log(\text{size()})}\)”). Second, trigger-action comments talk about changes to the code repository that would become necessary if something else is modified by developers (e.g., from Guava: “check more preconditions (as \(\text{bufferSize} >= \text{chunkSize}\)) if this is ever public”).
Finally, question comments are concerned with alternative implementations, potential optimizations, and testing, which may be explored by developers only if time permits (e.g., from Guava: “Is this faster than System.arraycopy() for small arrays?”).

Regardless of the category of todo comments, as software evolves and development teams re-organize, these comments may be dangling, i.e., resolved but forgotten (Storey et al. 2008; Sridhara 2016). For example, a trigger may hold (e.g., “if this is ever public”) but the action may never be executed by developers (for a very long time, or ever), and developers may never have enough time to consider alternative algorithms and fine-tune their existing implementations. With the goal of helping developers increase the reliability of their software, we propose several techniques to (1) synthesize code described in task comments, (2) make trigger-action comments executable, (3) answer question comments, (4) improve the quality of all todo comments, and (5) automatically detect dangling comments.

```java
protected AbstractStreamingHasher(int chunkSize, int bufferSize) {
  // TODO(kevinb): check more preconditions (as bufferSize > chunkSize)
  // if this is ever public
  if (TRIGIT.isPublic(TRIGIT.THIS_METHOD)) {
    checkArgument(bufferSize > chunkSize);
    checkArgument(bufferSize % chunkSize == 0);
  }
  ...
}
```
(a) Example from Google Guava (AbstractStreamingHasher)

```java
public void testDynamicAttributesSupport() throws Exception {
  dispatcher.serviceAction(request, response, mapping);
  ...
}
```
(b) Example from Apache Struts (FreemarkerResultMockedTest)

Figure 1: Examples of trigger-action comments from open-source projects; we show how the existing comments (crossed out in the original figure) can be encoded as executable statements in our TRIGIT framework (the highlighted code in the original figure)

Techniques

This section describes the basic idea behind each technique and the way we will approach the implementation.
Synthesizing Error-Reporting Code We plan to develop lightweight synthesis techniques to generate error-reporting code for unsupported cases that are documented by developers in the task comments (e.g., from Guava: "support array types"). First, we will identify comments that document unsupported cases. To this end, we will explore possible supervision signals from resolved comments and their corresponding code changes, crowdsourcing annotation and semantic parsing of the comments. Second, we will synthesize error-reporting code that follows the style used in the codebase (e.g., throw an exception or return a special value from a function). Note that our goal is not to work on full-blown program synthesis, which would be interesting but challenging (e.g., Polikarpova, Kuraj, and Solar-Lezama (2016)), but rather to focus on a specific domain of error-reporting. Basically, our goal is to make the existing comments observable during program execution by reporting an appropriate message for unsupported cases. Extracting Executable Comments We will develop techniques to help software engineers to encode their trigger-action comments as executable code statements. This will help with repository maintenance, because developers will not need to manually check their todo comments; instead, the executable statements will be automatically triggered when appropriate. We show several examples of trigger-action comments in Table 1 (the top half). We found that \( \sim 10\% \) of all todo comments (in our corpus) belong to this comment category. While it would be infeasible to support every comment written in the trigger-action style, we plan to focus on those tasks that update the codebase (e.g., transformations of abstract syntax trees) when triggers are satisfied. 
Our initial step is to develop a domain specific language embedded in Java to be used to: (1) query the static features of the codebase, e.g., required Java version, and (2) specify code transformations, e.g., remove a method from a class. Figure 1 shows two examples of trigger-action comments encoded in our framework (named TRIGIT); the original todo comments are crossed out and the statements for our framework are highlighted. In the first example, we use our framework to check a modifier of the current method; if the method becomes public, the code guarded by the trigger should become a part of the compiled class. In the second example, we specify that a variable should be removed if the required Java version is higher than 1.5; the required Java version can be obtained from a build script. (Note that the statements/expressions that use the variables need to be annotated too, but we do not show this due to space limitations.) The evaluation of the triggers will be done statically (once code is compiled), as the queries should not depend on the dynamic behavior of the program. Our tool, which can be implemented as a compiler plugin, will automatically remove the triggers and perform program transformations. Note that the user would still be able to inspect/approve the changes (e.g., by executing git diff). As the transformation engine we will use the existing open-source platforms, e.g., Eclipse, or program transformation systems, e.g., Cordy et al. (2004). The language design will be guided by examples, and we will evolve the language to support cases that we encounter in the future. Our second step is to automatically discover trigger-action comments present in a codebase and recover the corresponding triggers and actions via mining explicit condition relations within the content of the todo comments; explicit discourse relations can be classified with adequate accuracy (Pitler et al. 2008). 
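To make the intended semantics concrete, here is a toy sketch of my own (not the TRIGIT implementation; the rule and the variable name `legacyBuffer` are made up): a trigger queries static facts about the codebase, and when it holds, the associated action transforms the code.

```python
# Toy model of statically evaluated trigger-action rules: triggers
# query static facts (e.g., the required Java version from a build
# script); actions are code transformations (e.g., remove a variable).

def java_version_above(v):
    # trigger: holds when the required Java version exceeds v
    return lambda facts: facts["java_version"] > v

def remove_variable(name):
    # action: drop every line mentioning the (hypothetical) variable
    return lambda code: "\n".join(
        line for line in code.splitlines() if name not in line)

RULES = [(java_version_above(1.5), remove_variable("legacyBuffer"))]

def apply_rules(facts, code):
    # evaluate each trigger once, statically; fire actions that hold
    for trigger, action in RULES:
        if trigger(facts):
            code = action(code)
    return code

code = "int legacyBuffer = 0;\nint x = 1;"
print(apply_rules({"java_version": 1.8}, code))  # drops the legacyBuffer line
```

In the real system the triggers would be evaluated at compile time and the transformations performed by an existing engine, as described above.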
In the third step, we will develop automated migration from comments to the TRIGIT specifications, which will follow our recent work on language to code for if-this-then-that (IFTTT) recipes (Quirk, Mooney, and Galley 2015). Specifically, we will train a semantic parser to map trigger-action comments into executable code using supervision automatically extracted from the code changes made when a todo comment is resolved. This supervision may be noisy, since not all code changes may be directly related to resolving the todo comment, but our previous work on IFTTT shows that noisy, automatically extracted supervision from pairing comments and code can be tolerated reasonably well. Answering Questions From Comments We will develop techniques to help software engineers to make informed decisions about questions that are asked in todo comments. In our preliminary studies, we discovered that developers ask questions in todo comments more than \( \sim 10\% \) of the time; we obtained this number by counting todo comments that contain "?". Some of these questions are shown in Table 1 (the bottom half). Many of the questions are related to code optimization, program transformation, or testing. Our plan is to focus on techniques that will address these three types of questions. First, to answer questions related to optimizations, we will extract suggested code modifications from comments, apply those modifications and profile the code (by executing existing test suites) and evaluate the performance with profiles (on various machines). Second, to answer questions related to tests, we will develop techniques that extract test inputs from a question and generate new tests with those inputs; these new tests will be obtained by adjusting an automated test generation tool (e.g., Randoop (Pacheco et al. 2007)) or by extending existing (manually written) tests. 
Third, to answer questions related to code structure, we will extract suggested changes (e.g., from Guava: “Add getters returning rowKeyToIndex and columnKeyToIndex.”), perform the changes, and measure the quality of the code in terms of naturalness (Hindle et al. 2012). Our question classification system will also learn from how todo comments are answered as software evolves (e.g., files and functions that are modified and language artifacts that are added or edited); we can also learn from actions taken by developers. As some of the questions may be open-ended, we plan to develop an interactive dialog interface, which we recently used for language-to-code translation (Chaurasia and Mooney 2017). We plan to use dialog systems to clarify user intent and gather information, in our case when a question is initially asked.

**Improving Todo Comments**

We will develop techniques to help software engineers write meaningful todo comments. While manually analyzing hundreds of todo comments, we found a number of comments that were hard to understand even after reading the code near those comments. We were also in disagreement about their meaning in several cases; even when we could understand a comment (e.g., from the Square Retrofit project: “TODO non-suck message”), it was clear that any technique would have a hard time extracting useful data from it. Our initial task will be to detect todo comments that are not specific enough, as well as comments that do not follow the conventions already used in the same project. The techniques that we will develop will build on our work on text specificity (Li and Nenkova 2015) and program analysis. When we detect an unspecific comment, we will either notify a developer to provide additional clarification, highlight a part of the comment that does not follow the style (in a similar way that spellcheckers highlight typos in comments inside IDEs), or automatically reformat the comment to be consistent with other comments in the same repository.
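A convention check of the kind described above might look like the following sketch; the `TODO(owner): description` convention is an illustrative assumption, not a rule from any particular project or from our detector.

```java
import java.util.regex.Pattern;

// Hypothetical sketch of a comment style check: flag todo comments that do not
// follow a hypothetical project convention "TODO(owner): description". A real
// checker would learn the convention from the comments already in the repository.
final class TodoStyleCheck {

    private static final Pattern CONVENTION =
        Pattern.compile("TODO\\([a-zA-Z0-9_]+\\):\\s+\\S.*");

    static boolean followsConvention(String comment) {
        return CONVENTION.matcher(comment.trim()).matches();
    }
}
```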
We will also provide automated comment style checkers, where the rules can be expressed by developers; this is similar to code style checkers, which are already used in practice. Having specific comments that follow the same style will enable the techniques from the prior sections.

**Detecting Dangling Todo Comments**

Prior work has shown that developers may resolve todo comments but forget to remove these comments from source code (Storey et al. 2008; Sridhara 2016); these *dangling comments* can waste developers’ time during program comprehension and maintenance. We are working on a technique, based on machine learning, to automatically detect dangling todo comments. Our detection technique learns from existing software repositories. As mentioned earlier, we have already collected more than 700k todo comments. This large dataset provides examples of todo comments that were removed by developers (over 20k). We are using these examples as distant supervision signals and are exploring automatic labeling of examples (e.g., todo comments that are in the same file as removed todo comments). Our models exploit commit messages and static code analysis of changes. In the future, we plan to also utilize software histories to extract the necessary context from when todo comments were introduced.
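One cheap signal of the kind such models could use can be sketched as follows (a naive heuristic of our own, not our trained detector): a todo comment whose mentioned identifiers no longer appear in the current file is a dangling-comment candidate.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of one distant-supervision feature: a todo comment that
// names a code identifier which no longer appears in the current version of
// the file may be dangling. Identifier extraction here is a naive camelCase
// heuristic, standing in for real static analysis.
final class DanglingTodoCheck {

    private static final Pattern IDENT =
        Pattern.compile("\\b[a-z][a-zA-Z0-9]*[A-Z][a-zA-Z0-9]*\\b"); // camelCase tokens

    static boolean looksDangling(String todoComment, String currentFileSource) {
        Matcher m = IDENT.matcher(todoComment);
        boolean sawIdentifier = false;
        while (m.find()) {
            sawIdentifier = true;
            if (currentFileSource.contains(m.group())) {
                return false; // the referenced element still exists
            }
        }
        return sawIdentifier; // every mentioned identifier is gone
    }
}
```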
We will also reason about co-evolution of code and comments from when a todo comment was introduced until

---

**Table 1: Example todo comments in open-source projects**

<table>
<thead>
<tr> <th>Project (on GitHub)</th> <th>File (.java)</th> <th>Todo Comments</th> </tr>
</thead>
<tbody>
<tr> <td>Apache/Incubator-wa</td> <td>Pretty</td> <td>Remove this if it implements <code>getAttribute</code></td> </tr>
<tr> <td>Apache/Struts</td> <td>FreemarkerResultMockedTest</td> <td>Remove expected/DEK16 if <code>it()</code> after switching to Java 1.6</td> </tr>
<tr> <td>Apache/Poi</td> <td>TextXSBugs</td> <td>Delete this test case when <code>IMR0001</code> and <code>VAR</code> are implemented</td> </tr>
<tr> <td>Google/Guava</td> <td>Types</td> <td>Once we are on Java 8, delete this abstraction</td> </tr>
<tr> <td>Google/Guava</td> <td>AbstractStreamingHasher</td> <td>Check preconditions as <code>bufferSize &gt;= chunkSize</code> if this is ever public</td> </tr>
<tr> <td>Google/Guava</td> <td>MagTest</td> <td>Replace with <code>Ascii.caseInsensitiveEquivalence()</code> if it exists</td> </tr>
<tr> <td>KangProject/Frameworks_base</td> <td>SoCertificate</td> <td>If deprecated constructors are removed, this should always be available</td> </tr>
<tr> <td>Morristech/Gwt</td> <td>DefaultFilters</td> <td>This class needs to be revisited, when Gwt’s Ant is upgraded</td> </tr>
<tr> <td>Morristech/Gwt</td> <td>Simplifier</td> <td>If the AST were normalized, we wouldn’t need this</td> </tr>
<tr> <td>Andyglick/Fk2-fork</td> <td>AbstractRepositoryImpl</td> <td>Is it allowed to call the initialize method multiple times?</td> </tr>
<tr> <td>Google/Net</td> <td>IMAPReply</td> <td>Would <code>lookingAt()</code> be more efficient?
If so, then drop trailing <code>*</code> from patterns</td> </tr>
<tr> <td>Google/Guava</td> <td>ArrayTable</td> <td>Add getters returning <code>rowKeyToIndex</code> and <code>columnKeyToIndex</code>?</td> </tr>
<tr> <td>Google/Guava</td> <td>EvictingQueue</td> <td>Do we want to checkForNull each element in <code>containsAll</code> and <code>retainAll</code>?</td> </tr>
<tr> <td>Eclipse/CDT</td> <td>LvmEnvironmentVariableSupplier</td> <td>Is this actually called anywhere?</td> </tr>
<tr> <td>Eclipse/CDT</td> <td>EvalBinary</td> <td>What if the composite being accessed is not an array but a structure?</td> </tr>
<tr> <td>Eclipse/Mwe</td> <td>PluginExtensionManager</td> <td>Test: what happens when a handler is not there? Exception?</td> </tr>
<tr> <td>JetBrains/Jdk8u_jaxp</td> <td>NodeSet</td> <td>What happens if <code>index</code> is out of range?</td> </tr>
<tr> <td>Square/OkHttp</td> <td>HtmlViewImpl</td> <td>Test case for empty continuation header?</td> </tr>
</tbody>
</table>

it was resolved by a developer. Specifically, for each code change, we will compute its distance from todo comments, its word similarity with each comment, and the code structure that may be described in a comment. These sources of information provide complementary views of feature development and support complementary models, so we plan to build on our prior work on co-training and ensemble models.

Related Work

Li et al. (2006) used text classification to validate the representativeness of their study of bug characteristics. Fluri, Würsch, and Gall (2007) empirically showed that code and comments frequently co-evolve. Padioleau, Tan, and Zhou (2009) manually studied over one thousand comments and found that 50% of comments can be leveraged by various techniques. Haouari, Sahraoui, and Langlais (2011) introduced a taxonomy of comments and found that todo comments are the second most common type of comments.
Movshovitz-Attias and Cohen (2013) used topic modeling and language models to generate comments from Java source files. Several works have tackled automated generation of commit messages and mining relations from commit messages (Linares-Vásquez et al. 2015; Jiang and McMillan 2017; Andersson, Ericsson, and Wingkvist 2014; Loyola, Marrese-Taylor, and Matsuo 2017). Tan et al. (2007) detected inconsistencies between code and comments and proposed a technique to test Javadoc comments. Zhong et al. (2011) developed a technique to infer specifications from natural language API documentation and used it to detect issues in client code.

Conclusion

We argued that comments used to communicate among developers (todo comments) contain invaluable content that is currently neglected. We described several techniques: synthesizing code from comments, making comments executable, answering questions in comments, improving comment quality, and detecting dangling comments. These techniques, based on natural language processing and program analysis, have the potential to substantially simplify software maintenance and increase software reliability.

Acknowledgments

We thank Rishabh Rai for the initial discussion on this work. This work was partially supported by the US National Science Foundation under Grant No. CCF-1704790.

References

Andersson, R.; Ericsson, M.; and Wingkvist, A. 2014. Mining relations from Git commit messages: An experience report. In SLTC.
Chaurasia, S., and Mooney, R. 2017. Dialog for language to code. In IJCNLP.
Ernst, M. D. 2017. Natural language is a programming language: Applying natural language processing to software development. In SNAPL, volume 71.
Haouari, D.; Sahraoui, H.; and Langlais, P. 2011. How good is your comment? A study of comments in Java programs. In ESEM.
Li, J. J., and Nenkova, A. 2015. Fast and accurate prediction of sentence specificity. In AAAI.
Loyola, P.; Marrese-Taylor, E.; and Matsuo, Y. 2017.
A neural architecture for generating natural language descriptions from source code changes. In ACL.
Padioleau, Y.; Tan, L.; and Zhou, Y. 2009. Listening to programmers: Taxonomies and characteristics of comments in operating system code. In ICSE.
Polikarpova, N.; Kuraj, I.; and Solar-Lezama, A. 2016. Program synthesis from polymorphic refinement types. In PLDI.
Quirk, C.; Mooney, R.; and Galley, M. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In ACL.
Raychev, V.; Vechev, M.; and Krause, A. 2015. Predicting program properties from "Big Code". In POPL.
Sridhara, G. 2016. Automatically detecting the up-to-date status of ToDo comments in Java programs. In ISEC.
Storey, M.-A.; Ryall, J.; Bull, R. I.; Myers, D.; and Singer, J. 2008. TODO or to bug. In ICSE.
Tan, L.; Yuan, D.; Krishna, G.; and Zhou, Y. 2007. /*iComment: Bugs or bad comments?*/. In SOSP.
Vasilescu, B.; Casalnuovo, C.; and Devanbu, P. T. 2017. Recovering clear, natural identifiers from obfuscated JS names. In FSE.
Migration of the Maritime Simulation Model 2.0 into a Force-on-Force Federated Simulation Architecture

Michael Schneider, Allen S. Harvey Jr., Nicholas Livas
Engility Corporation, Lorton, VA
michael.schneider@engilitycorp.com, allen.harvey@engilitycorp.com, nicholas.livas@engilitycorp.com

ABSTRACT

The population of small maritime vessels within and around the United States greatly outnumbers the number of law enforcement vessels available to police them. Lawrence Livermore National Laboratory developed the Maritime Simulation Model (MSM) under an interagency agreement with the Department of Homeland Security (DHS) to understand concepts of operation used by law enforcement to encounter small vessel maritime traffic. MSM was implemented as an agent-based model in Repast Simphony with limited behaviors and physics. DHS wanted to enhance the physics and behaviors without a significant rewrite or redevelopment of the original code. Under contract with DHS, Engility Corporation migrated MSM into the Defense Threat Reduction Agency’s federated modeling and simulation architecture to improve physics, enhance artificial intelligence, and federate with other detection models. Improving the fidelity of simulations required overcoming hardware resource limitations, terrain correlation issues, and assumptions made based on how agents behave in one model framework versus another.

ABOUT THE AUTHORS

Michael Schneider is a Software Engineer at Domain X Technologies (supporting Engility Corporation). He obtained his Bachelor’s degree in Computer Engineering at Virginia Tech. He has 14 years of varied experience, including data analysis and databases, software development with C++, Java, and other languages, DoD acquisition, transportation logistics, Chemical Biological Radiological Nuclear (CBRN) and Command and Control (C2) systems, and modeling and simulation.

Allen Harvey is a Computational Scientist and Associate Technical Fellow at Engility Corporation.
He obtained his Bachelor’s degrees in Mathematics, Computer Science, and Physics at SUNY Brockport and his Master’s in Computational Science at George Mason University. He is currently a PhD candidate in Computational Physics at George Mason University and certified as a Project Management Professional by the Project Management Institute.

Nicholas Livas is a Modeling and Simulation Engineer at Engility Corporation. He obtained both his Bachelor’s degree in Chemical Engineering and Master’s degree in Systems Engineering at the University of Virginia. Mr. Livas is currently registered as a Certified Systems Engineering Professional by the International Council on Systems Engineering.

INTRODUCTION

The number of law enforcement entities is always outnumbered by the population they must police, on and off water. The Department of Homeland Security has invested in modeling and simulation to help better understand law enforcement capabilities of differing force structures and Concepts of Operations (CONOPS). In the maritime arena, Lawrence Livermore National Laboratory developed the Maritime Simulation Model (MSM) under an interagency agreement with the Department of Homeland Security (DHS) to understand CONOPS used by law enforcement to encounter small vessel maritime traffic. Although MSM and later MSM 2.0 were successful in simulating very large regions with thousands of small vessels, the approximations made for physics and behaviors introduced sufficient uncertainty in some scenarios of interest.
To reduce these uncertainties, DHS asked Engility Corporation to replace the physics and artificial intelligence engines with federates from the Defense Threat Reduction Agency’s (DTRA) federated modeling and simulation architecture while avoiding a total code rewrite.

**MSM 2.0 Background**

MSM 2.0 is an agent-based simulation tool developed to evaluate the effectiveness of law enforcement in detecting a threat onboard a small boat within a population of otherwise benign boat traffic. It is implemented within Repast Simphony, an agent-based Java modeling environment. Law enforcement systems can perform detection operations as a mission ancillary to their normal safety and regulatory duties, or in a heightened alert mode in response to intelligence about a particular threat. MSM can be used to evaluate the effectiveness of such a surveillance capability under a variety of scenarios and conditions, as well as to determine factors important for increasing system effectiveness. The software reads a map of common boat routes throughout a region of interest and patrol zones for law enforcement. During a simulation, MSM stochastically places boats along these routes and within regions while applying Monte Carlo simulation to determine average behavior for tradeoff studies and sensitivity analysis.

**MSM 3.0 Background**

MSM 3.0, also known as MSM Force on Force Evaluation and Analysis of Key Performance Parameters (FREAK), represents the migration of the existing MSM 2.0 code into DTRA’s FREAK architecture. The DTRA FREAK architecture contains multiple modeling and simulation tools which together form a comprehensive solution set that supports training, security analysis, and investment decision support related to weapons of mass destruction. The two tools used to constitute the new MSM FREAK are Joint Semi-Automated Forces (JSAF) and Constellation of Intelligent Reasoning for Constructive Analytic Simulation (CIRCAS).
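The stochastic placement that MSM 2.0 performs along routes can be illustrated with a small sketch (ours, not the MSM source): sample a uniformly random distance along a waypoint route and interpolate the position.

```java
import java.util.Random;

// Illustrative sketch of stochastic boat placement along a route given as
// (x, y) waypoints: sample a distance uniformly over the route's total length,
// then linearly interpolate within the containing segment.
final class RoutePlacement {

    static double[] placeAlong(double[][] waypoints, Random rng) {
        double total = 0;
        double[] seg = new double[waypoints.length - 1];
        for (int i = 0; i < seg.length; i++) {
            seg[i] = Math.hypot(waypoints[i + 1][0] - waypoints[i][0],
                                waypoints[i + 1][1] - waypoints[i][1]);
            total += seg[i];
        }
        double d = rng.nextDouble() * total; // distance along the route
        for (int i = 0; i < seg.length; i++) {
            if (d <= seg[i]) {
                double t = seg[i] == 0 ? 0 : d / seg[i];
                return new double[] {
                    waypoints[i][0] + t * (waypoints[i + 1][0] - waypoints[i][0]),
                    waypoints[i][1] + t * (waypoints[i + 1][1] - waypoints[i][1])
                };
            }
            d -= seg[i];
        }
        return waypoints[waypoints.length - 1]; // numerical edge case
    }
}
```

Repeating this placement over many Monte Carlo runs yields the averaged behavior MSM uses for tradeoff studies.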
**Joint Semi-Automated Forces**

JSAF is a discrete event, three-dimensional, agent-based force-on-force modeling and simulation engine. JSAF generates, controls, and displays joint service military, opposition forces, and civilian platforms (vehicles, people, and systems) that operate within and respond to a synthetic environment. JSAF was designed to support virtual and constructive simulations under the Defense Advanced Research Projects Agency’s Synthetic Theater of War Advanced Concept Technology Demonstration program in the 1990s. JSAF was adopted primarily by DTRA and the Navy Warfare Development Center (NWDC), with both organizations maintaining their own version of the code. At the time of development, DTRA’s version of JSAF was chosen to be incorporated into MSM 3.0. Since the initial development of MSM 3.0, both versions of JSAF have been merged, and efforts are ongoing to incorporate the new release of JSAF into MSM 3.0.

**Constellation of Intelligent Reasoning for Constructive Analytic Simulation**

CIRCAS was developed as an artificial intelligence framework using Repast Simphony and is provided as an Application Programming Interface for developers to model agent behaviors. Repast Simphony is an agent-based modeling and simulation framework utilizing the Java programming language. Repast has been developed and maintained by Argonne National Laboratory for the past 14 years. DTRA sponsored the development of CIRCAS to replace the need for operator interactions with force-on-force federated simulation engines such as JSAF. CIRCAS monitors reports from the federated simulation engine, updates the agent model states, and issues new orders to the agents in the simulation engine. The orders CIRCAS can issue are relatively high level; Halt, Move, Follow, Fire (as in fire a weapon), and Mount (as in mount a vehicle) are examples.
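The high-level order vocabulary can be sketched as follows; the `Order` enum, the `issue` signature, and the string encoding are illustrative assumptions, not the CIRCAS API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of issuing high-level orders to a federate. Orders are
// queued as strings here; a real federate interface would transmit them.
final class OrderIssuer {

    enum Order { HALT, MOVE, FOLLOW, FIRE, MOUNT }

    private final Queue<String> outbox = new ArrayDeque<>();

    // Queue an order for an agent, with an optional argument (e.g., a waypoint).
    void issue(String agentId, Order order, String argument) {
        outbox.add(agentId + ":" + order + (argument == null ? "" : ":" + argument));
    }

    String nextPending() {
        return outbox.poll();
    }
}
```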
CIRCAS implements the Context object required by Repast in order to perform the initial configuration, connect to the federation, and discover the federated objects. Since this object is required for CIRCAS to run, it limits the ability of developers to build additional functionality into the underlying Repast scenario, such as adding different types of projections to display or metadata objects external to CIRCAS agents.

**INTEGRATION APPROACH**

The objective of MSM FREAK was to integrate MSM 2.0 into the DTRA FREAK architecture while reusing as much of the MSM 2.0 code as possible. Since CIRCAS and MSM 2.0 are both based on the Repast framework, it was believed that the bulk of the integration effort would concern the way movement was handled in the models. For example, rather than agents updating their position on the Repast Cartesian grid, orders would be issued to the federate to move to a location. The integration was broken into several phases. The initial focus was on integrating/implementing MSM 2.0 functionality into MSM FREAK prior to the inclusion of any new behaviors.

- Phase 1: Code review of MSM 2.0
- Phase 2: Set up the required scenario parameters in JSAF and CIRCAS and the initial compilation
- Phase 3: Green/Neutral force behaviors
- Phase 4: Blue force behaviors
- Phase 5: Red force behaviors
- Phase 6: New features and enhancements

The initial delivery of MSM FREAK consisted of Phases 1 through 4, while Phase 5 is currently ongoing.

**Phase 1 – Code Review of MSM 2.0**

Originally planned to be one of the shorter phases of the MSM FREAK integration, Phase 1 turned out to be one of the longest. This was primarily due to the time needed to obtain access to an official release of the original source code. While the code was written in Java and tools are available to decompile Java class and jar files, the decompiled code provided little insight into the original developer’s intentions.
When recovering source code from Java class and jar files, significant context is lost, as developer comments do not appear and spacing/tabbing changes. In the case of the products used, line numbers were added to every line of code, which had to be stripped out before recompiling. During this decompiling and review process, an official code release was requested from the developer. With the official code obtained, the walkthrough and review accelerated by creating UML designs of the code. UMLet, a free and open-source UML tool, was used to auto-generate UML diagrams of each class. Another tool, the Eclipse plugin CodePro AnalytiX by Google, was used to perform a dependency analysis on the code. Both of these tools were very helpful in understanding the code at a high level, which allowed for a more targeted and organized line-by-line walkthrough of the code.

**Phase 2 – Set Up of Scenario Files and Initial Compile of the Source Code**

Setting up the JSAF and CIRCAS scenario and parameter files is fairly straightforward for anyone with experience working with those federates. One of the challenges with the DTRA FREAK architecture was that all agents that may be needed in the scenario must be declared prior to execution. This meant setting up generic scenarios for both CIRCAS and JSAF with the hundreds of agents that would be required for a typical scenario run. This is a limitation that the CIRCAS and JSAF developers are actively trying to alleviate in a future release of those tools. Successfully compiling the MSM FREAK code required making changes to MSM 2.0’s context object. Repast requires that users set up exactly one context object. Within Repast, the context object is primarily a collection of all the agents in the scenario. For MSM 2.0, the context also included data storage objects, geography objects for displaying the agents on a geo-located grid, simulation variables, and scenario terrain boundaries.
CIRCAS implements its own context object that should not be modified. The MSM 2.0 context object was refactored into a controller object, which performed the same initialization functionality as before but without the agent and geospatial projection of data. Fortunately, the JSAF federate provides the display functionality without any further development required. One of the key components from MSM 2.0 that was desired for reuse was the ability to process Google Earth Keyhole Markup Language (KML) files. This allows MSM FREAK to be backward and forward compatible with MSM 2.0 scenarios. However, particular attention needed to be paid to making sure that any terrain used by JSAF for MSM FREAK scenarios was correlated with the features identified by Google Earth.

**Phase 3 – Green Force Behaviors**

After examining the code, it was determined that green force (i.e., the benign boat traffic) behavior would be the easiest of the three forces to integrate. Green traffic follows routes specified in the scenario and always responds to simple orders from law enforcement/blue forces. Most of the changes made to green forces were to enable agents to issue move and halt orders to the simulation engine and to maintain awareness of their location and progress along the route.

**Phase 4 – Blue Force Behaviors**

The bulk of the code changes had to happen in the blue forces, as that is where most of the CONOPS are contained. The approach taken to integrate blue agents was to integrate one state at a time. There were 14 states in MSM 2.0. An additional state was added to support the initialization of blue agents, bringing the total to 15 states. The majority of the states and transitions are shown in the figure below. The scope of the project for the initial delivery did not include red forces. Therefore, the integration of response asset states has not yet been completed.

**Phase 5 – Red Force Behaviors**

The initial delivery of MSM FREAK only included Phases 1 through 4.
The work on Phase 5 has begun but has not yet been completed. Most of the red force behavior was completed as part of the green force behavior, as the two are nearly identical; the difference is that red forces are trying to go undetected and therefore behave like green forces until an encounter with law enforcement. States for red forces will need to be integrated to allow for additional red agent responses, such as attempting to flee from or fight law enforcement. Blue forces will also have to be modified to include states to fight (with either a win or lose outcome) and to chase down a fleeing threat.

**Phase 6 – New Features and Additional Enhancements**

The scope of this phase is still to be determined once the total integration of MSM 2.0 is complete. However, there are some proposed enhancements to blue and green forces, particularly through expanded sets of behaviors that CIRCAS makes easy to implement. One such item includes crowded harbor scenarios where green boats are not always traveling but reach a dense and steady population, with boats occasionally arriving and leaving the area. Another enhancement includes the ability for blue forces to not inspect every boat they can, but rather to selectively choose boats that are perceived to be violating a law (speeding, drinking, or other unsafe behavior).

**INTEGRATION CHALLENGES**

There were both expected and unexpected challenges while integrating MSM 2.0 into the DTRA FREAK architecture. These challenges included modifying the synchronous behavior of the existing MSM 2.0 code to allow for the asynchronous behavior in CIRCAS, which enables the code to execute tasks before preceding tasks complete. The intercept algorithm, which is the logic that determines how the law enforcement boats trail and ultimately proceed towards benign boats, had to be modified in order to function properly with JSAF and CIRCAS.
Finally, terrain correlation between Google Earth (the terrain used in MSM 2.0) and JSAF led to issues that produced unexpected behavior in the code.

**Synchronous vs. Asynchronous MSM**

MSM 2.0 was designed as a synchronous simulation. At each tick, every agent in the scenario was moved, its internal parameters updated, and its state changed as necessary. This allowed for very easy and orderly state transitions and scenario progress. MSM FREAK by necessity must be an asynchronous simulation. MSM FREAK sends orders to the simulation engine and waits for responses. It must also wait for status reports from the engine regarding an agent’s current location and the entities that can be seen. These events can happen at any time and in any order. Events that occurred after a specific period of time in MSM 2.0 (after a specific number of ticks) had to be changed to be scheduled to occur at a specific time. The scheduling of events and responding to external messages led to multithreaded code. This became apparent during testing, when extremely strange behaviors began to occur. An example was when two events causing a state transition occurred at similar times, such as when a law enforcement vessel’s time to patrol a particular area elapsed at approximately the same time it was trying to begin an inspection. This required the addition of semaphores to ensure that blocks of code that affected state transitions could not be interrupted. This did have a small impact on performance, but because MSM FREAK is limited to running in real time, it is negligible.

**Intercept Algorithm**

The ability of a law enforcement vessel to choose another boat to intercept for the purpose of further investigation had multiple issues to resolve to make an intercept realistic and feasible. In the MSM 2.0 software, boats experienced instantaneous acceleration and deceleration; in other words, they only traveled at full speed and could stop on a dime.
In reality, it takes appreciable time for a stationary law enforcement vessel to speed up to intercept, and it must also slow down when approaching the target vessel so that both are moving at approximately the same speed during their encounter. When determining the intercept location in MSM 2.0, a flat earth approximation can be used, and law enforcement knows exactly where its target is moving to and at what rate of speed. In the federated world of MSM 3.0, law enforcement cannot cheat but must observe its target, perceive its velocity, and dead reckon its position. With that information, a curved earth mathematical model was needed to determine generally how close the intercept could be. In determining this intercept location, the distance the target is predicted to travel (assuming it does not change its course or reach its own destination first) is dead reckoned, with an additional 10% added to account for the time needed for both vessels to slow down to a safe encounter speed. Another small issue to resolve was the frequency of scheduling intercept target locations. Whereas MSM 2.0 applies a greedy algorithm to reevaluate all potential intercept locations each simulation clock tick so as to always be heading to the closest target, MSM 3.0 cannot do this. Not only is this not realistic behavior, but there is processing time overhead in scheduling new move orders in a federated approach. As such, MSM 3.0 only reevaluates targets every 5.0 seconds and will only issue a revised move order if the intercept distance changes by more than 10%. This does maintain the possibility that the blue vessel could pick a new target, but its likelihood is much less than in the MSM 2.0 approach.

**Terrain**

MSM 2.0 utilizes a KML file that defines the scenario boundary, destinations, travel routes, obstacles in the waterway, and patrol zones.
In this way, the terrain in MSM 2.0 is flat, and boats move wherever routes are defined or patrol regions are identified, regardless of whether water of sufficient depth exists. MSM FREAK uses JSAF and requires terrain in the Compact Terrain Data Base (CTDB) format. In addition, JSAF boats can only move on water where there is sufficient depth. JSAF comes bundled with a low-resolution world terrain and more detailed terrains for particular regions. Additional terrain files can be obtained from various sources, for example the United States Army Geospatial Center. In certain areas, there were occasions when the terrain from Google Earth was not properly correlated with the CTDB terrain used in JSAF, which ultimately led to erratic agent behavior during the simulation. Examples of this poor correlation include times when the CTDB terrain included land (above sea level) that extended many meters into an area that Google Earth showed as navigable water. On other occasions there were areas displayed as water while being defined as having no depth (0 meters; see the figure below) or negative depth (meaning depth above sea level), or areas displayed as water but with a soil type that was not water. Terrain is generally built for a specific purpose and for a specific geolocated area. Like any engineering effort building to a list of requirements, terrain built for a land simulation that does not require high resolution, or that just needs to "look about right," may not fully implement other terrain features (like navigable waterways) considered unnecessary for the original purpose. These terrain issues ultimately took significant time: debugging and understanding the odd behaviors seen during the scenario, as well as locating other CTDB files with much better defined terrain for our regions of interest. ![Figure 2.
Example of Poor Terrain Correlation in JSAF’s default CTDB](image.png)

**OBSERVATIONS**

While maintaining most of the original functionality of MSM 2.0, there are significant differences in MSM FREAK compared to MSM 2.0. The scenario display, the simulation speed, and the use of a federated simulation engine are factors that make MSM FREAK different from MSM 2.0.

**Scenario Display**

MSM 2.0 displays the running scenario in the Repast Graphical User Interface (GUI). Due to limitations in CIRCAS, MSM FREAK does not have the capability to display the scenario in the Repast GUI. Instead, CIRCAS relies on its federated simulation engine to display the running scenario. MSM FREAK currently uses JSAF as its federated simulation engine.

**Simulation Speed**

MSM 2.0 is capable of running many times faster than real time. Each simulation tick in MSM 2.0 is one millisecond of wall clock time and represents 6 seconds of simulation time, thus running at a theoretical maximum speed of 6000 times real time. MSM FREAK runs in real time. Running faster than real time in MSM FREAK is not practical except for very small scenarios. This limitation is due to the enhanced physics fidelity provided by JSAF (such as boat movements) and the resources required to run the physics and the federation overhead. The simulation can be accelerated slightly if the user chooses not to display any GUI at all and only cares about metrics reported upon simulation completion.

**Federated Simulation Engine**

MSM FREAK requires the use of a federated simulation engine with its own terrain. Paths, zones, boundaries, obstacles, and points defined in the Google Earth KML file must correlate with the terrain being used by the simulation engine. MSM FREAK currently makes use of JSAF as the engine to run the agents being simulated. However, other federated simulation engines already within the DTRA FREAK architecture, or simulation engines that have the capability to federate with the DTRA FREAK tools, can be used as well.
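The poor-correlation symptoms described in the Terrain section — cells displayed as water but carrying zero depth, negative depth, or a non-water soil type — are mechanical enough to screen for before a run. A toy sketch over hypothetical cell records (CTDB's real data layout and API differ):

```python
# Hypothetical cell records; CTDB's actual format and field names differ.
cells = [
    {"id": 1, "displayed_as_water": True,  "depth_m": 12.0, "soil": "water"},
    {"id": 2, "displayed_as_water": True,  "depth_m": 0.0,  "soil": "water"},  # zero depth
    {"id": 3, "displayed_as_water": True,  "depth_m": -3.0, "soil": "water"},  # above sea level
    {"id": 4, "displayed_as_water": True,  "depth_m": 5.0,  "soil": "sand"},   # wrong soil type
    {"id": 5, "displayed_as_water": False, "depth_m": 0.0,  "soil": "sand"},
]

def correlation_problems(cell):
    """Return the issues that would make a 'water' cell unnavigable in JSAF."""
    issues = []
    if cell["displayed_as_water"]:
        if cell["depth_m"] <= 0:
            issues.append("non-positive depth")
        if cell["soil"] != "water":
            issues.append("soil type is not water")
    return issues

bad = {c["id"]: correlation_problems(c) for c in cells if correlation_problems(c)}
# → cells 2, 3, and 4 are flagged
```

A pass like this over the regions of interest would have surfaced the zero-depth and non-water-soil cells up front, instead of manifesting as erratic agent behavior mid-scenario.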
SUMMARY

MSM 2.0 was successfully integrated into the DTRA FREAK architecture, expanding the usage and capabilities of the DTRA FREAK tools. As a result there are more realistic behaviors from boats, such as boats accelerating/decelerating and using a turning radius based on their speed. However, MSM FREAK is not a replacement for MSM 2.0 but rather a different approach with different capabilities. The choice between using MSM 2.0 or MSM FREAK is a standard tradeoff in the modeling and simulation arena: speed vs. fidelity. Because MSM 2.0 can run much faster than real time, it has the benefit of being able to analyze larger areas over a longer period of time than MSM FREAK. For more focused studies, MSM FREAK has the advantage of being more accurate due to its enhanced physics fidelity over MSM 2.0. Development of MSM FREAK is continuing. At the time of this report, the DTRA JSAF and NWDC JSAF baselines have been merged and released by DTRA. Efforts are underway to resolve any changes introduced with this newer version of JSAF and to incorporate the remainder of the red force behaviors. Future efforts include incorporating new blue and green force behaviors to allow studying a wider range of law enforcement CONOPS.

ACKNOWLEDGEMENTS

This work has been supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-A-00004. This support does not constitute an express or implied endorsement on the part of the Government. Thanks also go out to Daniel Faissol of Lawrence Livermore National Laboratory (LLNL) (faissol1@lbnl.gov) and his team as the original developers of the Maritime Simulation Model.
Additional thanks go out to Argonne National Laboratory’s Repast Simphony development team, including (but not limited to) Michael North, Nicholson Collier, and Eric Tatara, and the Defense Threat Reduction Agency’s JSAF and CIRCAS development teams, including (but not limited to) Tim Leong, Dave McKeeby, Eric Granlund, David Callaway, and Laura Dunleavy.

REFERENCES
\[ \pi = 3.141592653589793\ldots \] CIRCUMFERENCE OF A CIRCLE: \[ 2\pi r \] \( r \) is the circle's radius http://xkcd.com/10/ http://xkcd.com/1184/ Variables, types & flow control Frédo Durand and Ana Bell MIT EECS, 6.00 Bad news • you cannot earn late days on pset 0 Grading • Problem sets: 25% • Quiz I: 15% (March 6) • Quiz II: 20% (April 17) • Final: 35% • Participation: 5% Why Python? • easy, concise • versatile, supports different styles • widely used Why Python? • Easy syntax, concise • Supports all styles of programming • Widely used in science and engineering - lots of useful libraries - 6.815 is in Python! • Easy to transfer knowledge to other languages • Cons: can be slow Recall π Imperative version of $\pi$ - Area of circle of radius 1 - Generate lots of points inside the $[-1, 1]$ square - Count points inside the circle - they satisfy $x^2+y^2<1$ Computing $\pi$ in Python
```
N=1000.0
number_in_circle=0.0
x=-1.0
while x<1.0:
    y=-1.0
    while y<1.0:
        dist_square=x*x+y*y
        if dist_square<1:
            number_in_circle=number_in_circle+1
        y=y+1/N
    x=x+1/N
pi= number_in_circle/(N*N)
print pi
```
Computing $\pi$ in Python - comments
```
# We will sample points inside the square from -1 to 1 in both x and y
# We test which points inside the square are also inside the circle of radius 1.
# the ratio should be $\pi/4$
# (the area of the square is 4, the area of the disk is $\pi$)
# note that we will space points 1/N apart, which means we will use 2Nx2N points
N=1000.0                # we will use 2Nx2N points
number_in_circle=0.0    # initialize counter of points inside the circle
x=-1.0                  # we will start in the lower left corner, x=-1, y=-1
while x<1.0:            # loop over x, stop when reaching the rightmost edge of the square
    y=-1.0
    while y<1.0:        # loop over y, stop when reaching the top edge
        dist_square=x*x+y*y    # compute squared distance to the center
        if dist_square<1:      # if the (squared) distance is less than 1
            number_in_circle=number_in_circle+1  # the point is in the circle, increment the counter
        y=y+1/N         # increment y to move to the next point
    x=x+1/N             # increment x to move to the next set of points
total_number_of_points=2*N*2*N  # because we spaced points 1/N apart and went from -1 to +1
pi= 4*number_in_circle/total_number_of_points  # the ratio of points inside the disk is pi/4
print pi
```
Today: sequence of steps; variables store & maintain information; types; control flow Today • A program is a sequence of steps • Variables - store and update values - have types • Control flow with if and while Python tutor - [http://www.pythontutor.com/](http://www.pythontutor.com/) - shows step by step execution Variables Math vs. CS Math: $x, y, a$ — single letters; $ab$ means $a \times b$; unknown, general numbers CS: inside_circle, i — store information, 1 piece of info at a time Math vs. CS • Math variable - alphabetic character representing a number which is either arbitrary or not fully specified or unknown Math vs.
CS • Math variable - alphabetic character representing a number which is either arbitrary or not fully specified or unknown • Computer science variable - adapted from http://en.wikipedia.org/wiki/Variable_(computer_science) - storage location and an associated symbolic name which contains some value Variables for $\pi$ in Python
```python
N=1000.0
number_in_circle=0.0
x=-1.0
while x<1.0:
    y=-1.0
    while y<1.0:
        dist_square=x*x+y*y
        if dist_square<1:
            number_in_circle=number_in_circle+1
        y=y+1/N
    x=x+1/N
pi= number_in_circle/(N*N)
print pi
```
Variables • Store one value at a time • Can be updated at any time - some variables don’t get updated much (N) - some change often (x, y, number_in_circle) Names • As meaningful as possible • Convention: separate different words by _ - for example: `number_in_circle` • Learn auto-completion in IDLE - type beginning of name + option / on Mac • Other possible convention (not in this class): `numberInCircle` Types • integer: 1, -1, 2, 3 • float: 1.1, -2.5, -2.0 (x = 1.1 vs. x = 2) • string: 'I am a string', "I am a string", "I'm a string" • Boolean: True or False, e.g. $2 \geq 3$, $5 \geq -2.0$, $x > 1$ Type • Variables and expressions have types • Numbers can be `int` (integers): 1, 2, -7, 1+1 • or `float` (approximation of reals): 2.2, -2.0, -1e6 • *Booleans* are `True` or `False` and represent the outcome of tests: 2>3 has value False • *Strings* represent sets of characters, written between ' or ": 'I am a string', "I am a string", 'a' • More complex types later Types in pi
```
N=1000.0
number_in_circle=0.0
x=-1.0
while x<1.0:
    y=-1.0
    while y<1.0:
        dist_square=x*x+y*y
        if dist_square<1:
            number_in_circle=number_in_circle+1
        y=y+1/N
    x=x+1/N
pi= number_in_circle/(N*N)
print 'pi is equal to '
print pi
```
$\pi$ with boolean variable
```
N=1000.0
number_in_circle=0.0
x=-1.0
while x<1.0:
    y=-1.0
    while y<1.0:
        is_in_disk = x*x+y*y < 1
        if is_in_disk:
            number_in_circle=number_in_circle+1
        y=y+1/N
    x=x+1/N
pi= number_in_circle/(N*N)
print pi
```
Type? 17 int; 15.7 float; 14.0 float; 'hello world' str; '15.0' str; 1+1 int; 1/2 int; 1/2.0 float; 1>2 bool Casting (type translation) • 1 + 2.0 → float • float(2), 1/float(N) • int(2.4) • int('2') • int('a') → error • int('1+1') → error Casting (type translation) - int gets promoted (cast) to float in mixed expressions: - 1+2.0 returns a float, 3.0 - Casting can also be explicit: - float(3) - str(2) - int('2') - int('blah') → error int vs. float - problem
```
N=1000        # BAD: integer N makes 1/N equal to 0
number_in_circle=0
x=-1.0
while x<1.0:
    y=-1.0
    while y<1.0:
        dist_square=x*x+y*y
        if dist_square<1:
            number_in_circle=number_in_circle+1
        y=y+1/N
    x=x+1/N
pi= number_in_circle/(N*N)
print pi
```
```
N=1000.0      # GOOD: float N makes 1/N equal to 0.001
number_in_circle=0.0
x=-1.0
while x<1.0:
    y=-1.0
    while y<1.0:
        dist_square=x*x+y*y
        if dist_square<1:
            number_in_circle=number_in_circle+1
        y=y+1/N
    x=x+1/N
pi= number_in_circle/(N*N)
print pi
```
Careful • N = 2; average = (3+4)/N (BAD: integer division) • Fix: N = 2.0, or average = (3+4)/float(N) • x = x + 1/N: first compute the right-hand side, then bind • It is easy to write N=2 and then average = (3+4)/N For the more curious • You can ask the type of an expression (or variable) with type(expression) - for example: type(3) - test: type(N) == type(x) Binding: the = and == challenge Binding x = 1 + 2 - Take RHS -
Compute value - Bind it (store it) - In LHS variable Binding • = does not have the same meaning as in math • x=2+3 - compute the right-hand-side value and bind it to the left-hand side variable - binding means storing, assigning, replacing whatever was there • 2+3=x does not mean anything in Python (=> error) Math vs. CS: x = x*x • Math: $x = x^2$ holds only for 0 or 1 • Python: x = 2; x = x*x → x has value 4 Math vs. CS: x = x*x - Math: x is either 0 or 1 - CS: take whatever value x used to have, multiply it by itself, store it in x - for example: x=3; x=x*x; x is now 9 Binding • for example: N=3 N=N+1 - The last line, N=N+1, involves the following - compute the right-hand side - get current value of N: 3 - add 1 to it, gives 4 - bind N to 4 • y=2; x=3; x=y vs. y=2; x=3; y=x • x=1 y=x x=2 print x, y swapping variables • start with x=3; y=2 • DO NOT WRITE: x=y y=x • Instead: tmp=x x=y y=tmp - (Python also has a simpler way. More later.) Test: == • 1 == 3-2 - the operation returns a Boolean • if x==2: print 'yes' - does the printing only if x has value 2 • == computes the value on both sides, then compares them. Returns True if they are the same, False otherwise • This one is symmetric: • if 2==x: print 'yes' - does the same Recap: The `=` vs.
`==` challenge • `=` in computer science is very different from `=` in math • the left hand side of `=` is always a variable name • `=` means that the result of the right expression gets stored in the variable • some languages use `<-` or `:=` instead of `=` to be clearer • to make matters worse, there is a test to see if two expressions are equal, and in Python it’s `==` Input & output Output: print • x=3 print x shows 3 on the console • print followed by arguments (any type) separated by commas: x=3 print 'x=', x - note the string between quotes (could also be "x=") - Careful: don’t use Python 3, print doesn’t work the same Input: raw_input • x = int(raw_input('give me a number ')) - Shows *give me a number* - waits until the user types a number and return - binds that number to the variable x - raw_input returns a string - this is why we need to cast to *int* - returns an error if the string is not a number Operations • numbers => number: +, -, *, /, ** (exponent), % (remainder of integer division) • numbers => boolean: ==, != (different), >, <, >=, <= • boolean => boolean: and, not, or • strings: +, *,...
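The operator families on the last slide can be checked directly in the interpreter. A few examples, written to behave the same in Python 2 and 3 (with the one division difference noted), plus the "simpler way" to swap that the earlier slide alludes to — tuple assignment:

```python
# numbers => number
assert 2 ** 10 == 1024            # exponent
assert 17 % 5 == 2                # remainder of integer division
assert 1 / float(2) == 0.5        # in Python 2 (used in these slides), plain 1/2 is 0

# numbers => boolean
assert 3 >= 3 and 2 != 5

# boolean => boolean
assert (True and not False) or False

# strings
assert 'ab' + 'cd' == 'abcd'      # concatenation
assert 'ab' * 3 == 'ababab'       # repetition

# swapping without a temporary: tuple assignment
x, y = 3, 2
x, y = y, x
assert (x, y) == (2, 3)
```

Each `assert` passes silently, confirming the type of result each operator family produces.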
Operator precedence • What you’d expect • use parentheses when unsure Control Elements of a program - Sequence of simple steps - Store and update information in variables - Flow of control (while, if) While while condition: step1 step2 stepN • where condition returns a boolean • keeps going through the steps as long as condition is True While - example
```
N=10
while N>0:
    N=N-1
    print N
# code here is outside the while
```
Spaces and indentation • Indentation matters a lot in Python - it “groups” things (called blocks) - while condition: step1 step2 step3 - steps 1 and 2 are in the loop, not step 3 - also note the : - Extra carriage returns are optional but make the code easier to read
```
N=1000.0
number_in_circle=0.0
x=-1.0
while x<1.0:
    y=-1.0
    while y<1.0:
        dist_square=x*x+y*y
        if dist_square<1:
            number_in_circle=number_in_circle+1
        y=y+1/N
    x=x+1/N
pi= number_in_circle/(N*N)
print pi
```
If if condition: step1 stepN If, else if condition: step1 stepN else: stepA stepM If, else if x%2==0: print 'even' else: print 'odd' If within if
```
x = int(raw_input('Enter an integer: '))
if x%2 == 0:
    print 'Even'
else:
    print 'Odd'
if x%3 != 0:
    print 'And not divisible by 3'
```
```
N=1000.0
number_in_circle=0.0
number_out_circle=0.0
x=-1.0
while x<1.0:
    y=-1.0
    while y<1.0:
        dist_square=x*x+y*y
        if dist_square<1:
            number_in_circle=number_in_circle+1
        else:
            number_out_circle=number_out_circle+1
        y=y+1/N
    x=x+1/N
# the in-circle fraction is pi/4, so multiply by 4
pi= 4*number_in_circle/(number_in_circle+number_out_circle)
print pi
```
Recap • A program is a sequence of steps • Variables - store and update values - have types • Control flow with if and while
Generating Encrypted Document Index Structure Using Tree Browser Doaa N. Mhawi1*, Haider W. Oleiwi2, Heba L. Al-Taie3 1 Computer science department, Technical Institute for Administration, Middle Technical University, Baghdad, Iraq 2 Department of Electronic and Electrical Engineering, Brunel University London, London, Uxbridge, UK 3 Information and Communication Technologies Center, Ministry of Construction and Housing, Iraq * Corresponding author E-mail: dododuaaenteesha@mtu.edu.iq 1. Introduction Notably, tremendous growth in service-demanding users and devices is expected in the next, globally covered era of communication systems. Unprecedented technologies and applications accompany the rapid leap in the communications field with new features, i.e., data-hungry traffic, unified ubiquitous systems, and revolutionary search processes [1–5], generating huge document databases that contain large collections of documents from a variety of sources, for instance, research papers, articles, news, digital libraries, books, messages, e-mails, and web pages. Using text databases to expedite the execution of critical tasks has been a part of many people's and organizations’ everyday routines throughout the years. As a result, data organization is required to conduct data update and query operations effectively; thus, indexing is one of the methods utilized, and different indexing techniques are being explored for the contents [6–10]. When it comes to database performance and data security, indexing techniques may be critical [11–15]. However, Information Retrieval (IR) indexes have a variety of drawbacks, including huge index sizes, slow search output, and potential security risks. This necessitates the development of index structures that can access important data quickly while using little storage capacity to index huge amounts of data. The balanced-tree (B-tree) structure is a widely used indexing structure.
The indexing method is simple, takes less time to build, and results in a smaller index size, which has led to its widespread adoption as a starting structure by many academics. B-tree-based indexes can deal with huge datasets and respond to queries in near-linear time with little input/output (I/O) overhead, and the query cost of a B-tree grows only slowly with the size of the tree [10]. When used to index trajectory data, B-tree indexing shows its resilience and versatility [9, 10]. After discovering these faults in several techniques, a group of academics commenced developing this framework using a Tree-Browser (TB). This article presents a novel index structure that may be used to build and create inverted indexes for a variety of information retrieval systems, such as those used by search engines. The primary contributions of this paper are as follows: - Proposing a new indexing technique called the tree browser (TB) to be applied to large IRS inverted files. - Developing a system that combines indexing and encryption in a single step. - Reducing the amount of storage required for the produced document index. - Developing an index to support single-keyword and phrase searches. The remainder of the paper is organized as follows: section 2 describes the related work, while section 3 reviews similar attempts involving different B-tree and B+-tree variants. The proposed structure is based on the B+ tree, which is described in more detail in section 4. The TB-tree structure and its associated functionality, including searching, updating, and insertion into the proposed TB-tree structure, are explained in detail in section 5. Section 6 presents the results and analysis. Finally, section 7 presents the conclusion and recommendations for further study. 2. Related Work A. K.
Doaa Nteesha Mhawi studied storage space [12]: by modifying the genetic algorithm (GA) operators, a more efficient inverted index was created, which allowed the retrieval process to be accelerated. In conjunction with the term-proximity fitness function, the author employed a hybrid crossover operator to obtain the desired result. The created system achieved an accuracy of 89 percent and a recall of 84 percent when tested on a dataset of 8000 HTML pages occupying 100 MB of storage space. Such an index requires a significant amount of storage space, resulting in lengthy retrieval times. G. E. Blelloch, J. T. Fineman, P. B. Gibbons, Y. Gu, and J. Shun, in [17], proposed the PRAM model with asymmetric write cost and showed that sorting can be performed in O(n) writes, O(n log n) reads, and logarithmic depth (parallel time). Next, they considered a variant of the External Memory (EM) model that charges k > 1 for writing a block of size B to the secondary memory and presented variants of three EM sorting algorithms (multi-way merge sort, sample sort, and heapsort using buffer trees) that asymptotically reduce the number of writes over the original algorithms and perform roughly k block reads for every block write. Finally, they defined a variant of the Ideal-Cache model with asymmetric write costs and presented write-efficient, cache-oblivious parallel algorithms for sorting, FFTs, and matrix multiplication. Adapting prior bounds for work-stealing and parallel-depth-first schedulers to the asymmetric setting yields parallel cache complexity bounds for machines with private caches or shared caches, respectively. G. Ramachandran and K. Selvakumar [18] provided a content-based search index for peer-to-peer networks. Their study proposed the Semantic Oriented Adaptive Search (SOAS) strategy based on semantic content.
It is a multi-layer architecture model which utilizes a dynamic caching technique to achieve effective search on unstructured P2P networks. The scheme is constructed as a two-tier P2P network with ultra-peers of high connectivity based on a power-law model. This approach extensively used the Vector Space Model (VSM) and Latent Semantic Index (LSI) to derive local indices from summarized semantic vectors. Query searching was performed through a round of searches using indices derived from semantic data objects. If one search fails, the next round of search is invoked sequentially in the order of local index, cache index, response index, global index, and adaptive search among ultra-peers. R. J. Derscheid, M. C. Rahe, E. R. Burrough, K. J. Schwartz, and B. Arruda took advantage of the asymmetrical I/O of secondary storage, where writes are costlier than reads. By altering the B-tree, an unbalanced B-tree was created [19]. They developed a management algorithm that reduced the number of writes via meticulous unbalancing and rebalancing procedures. It is possible to decrease I/O expenses by 30% while simultaneously increasing performance with this technique by changing the parameter settings. However, the authors point out that this method utilizes more nodes than the B-tree, which has a detrimental effect on performance in memory-constrained settings. R. Jin, H. J. Cho, S. W. Lee, and T. S. Chung [20] introduced another indexing technique that makes use of the B-tree: the lazy-split B-tree. It employs three strategies: the first is to divide nodes lazily so that the smallest possible number of nodes is split. The second, modify-two-nodes, only updates the parent and the modified node, thus decreasing the number of node writes. The third is to combine nodes using a lazy-coalesce technique to reduce the amount of merging and redistribution.
When these three methods are used together, the write speed of flash memory and the use of buffer space improve. L. Yang, M. Di, X. Huang, and F. Duan [21] developed an index structure that maps high-dimensional data to single-dimensional values, making large amounts of high-dimensional data manageable. Combining the block distance method with the B+-tree structure yielded an approach known as the Block B-Tree technique. A high-dimensional data set was translated into single-dimensional key values using the block distance method, and the resulting key values were maintained using the compact B+-tree. This method also made Euclidean-distance similarity search possible. Y. Chen et al. [22] used both the B+-tree and the B-tree for insertion into and deletion from the flash memory of their system. It combined Linear Pipelining, Write Once Partitioning, and Background Linear Merging to create an embedded search engine for securing tokens used for managing and securing documents in the personal cloud context. F. Boucenna, O. Nouali, S. Kecheid, and M. Tahar Kechadi [23] created yet another improved inverted index for information retrieval. With the help of this technique, the document indexing phase of an information retrieval system (IRS) may be defined; it is necessary to build a document collection. The search space is condensed as a result of the score produced by the probabilistic IR model. This approach accomplished two objectives: the first is to decrease the required storage space, while the second is to reduce the processing time. D. A. Bonilla, A. Pérez-Idárraga, A. Odriozola-Martínez, and R. B. Kreider in [24] proposed a secure inverted index constructed using a novel index-building technique. The approach used two techniques: the first is the employment of fictitious documents and the second is homomorphic encryption.
Both methods, on the other hand, have significant disadvantages. During the search process, the first method generates a large number of false-positive results, which is undesirable. A further disadvantage of the second method is that it generates very large ciphertexts, many times larger than their plaintext equivalents. To overcome these two disadvantages, the double score weighting method is used in conjunction with a compressed database of encrypted scores. To the best of the authors' knowledge, the proposed method of generating an encrypted document index structure using a Tree Browser overcomes the drawbacks of the related works above. This method represents the keywords in a variable-length binary format before they are stored in the index. Moreover, it applies additional encryption to the stored information to reduce the index size and speed up retrieval. 3. Features of B+-Tree Searching B+-trees are frequently used in applications that deal with a significant quantity of information, such as file systems and database indexes. Compared to the balanced tree (B-tree), the B+-tree stores just keys in each internal node, rather than (key, value) pairs, and it has an extra level of linked leaves at the bottom. The method of searching in the B-tree is straightforward. It takes as input a reference to the root node x of a sub-tree and a key k to be searched for in that sub-tree. This results in a top-level call of the form B-Tree-Search(root(T), k), where B-Tree-Search returns the ordered pair (y, i) as a result of the search. The pair consists of a node y and an index i such that key_i(y) = k if k is found in the B-tree; otherwise, NIL is returned from the function. The B-tree includes extra restrictions (shown in Fig. 1) to guarantee that the tree is always balanced. ![Fig. 1. B-Tree constraints (a. represents internal node while b.
represents leaf node with search values and data pointer)](image) The steps to search for a record by using an access structure on a key field are as follows [25]: - Each internal node in the tree has the form \[<P_1, <K_1, Pr_1>, P_2, <K_2, Pr_2>, \ldots, <K_{q-1}, Pr_{q-1}>, P_q>\] where \(q \leq p\), P represents a tree pointer, and Pr represents a data pointer. - The key values \((K_1, \ldots, K_{q-1})\) are ordered within each node. - A value x in the sub-tree pointed to by \(P_i\) satisfies: for \(1 < i < q\), \(K_{i-1} < x < K_i\); for \(i = 1\), \(x < K_1\); for \(i = q\), \(K_{q-1} < x\). - Each node has at most p tree pointers. - Each node has at least \(\lceil p/2 \rceil\) tree pointers. However, the root node has at least two tree pointers unless it is the only node in the tree. - A node with q tree pointers \((q \leq p)\) has q-1 search key field values and hence q-1 data pointers. - All leaf nodes are at the same level. The leaf nodes have the same structure as the internal nodes, except that their tree pointers \(P_i\) are null. Fig. 2 shows a B-tree of order 3 used as an access structure on a key field, where the values are unique; when the B-tree is employed on a field that is not a key, the pointer points to a cluster of blocks. The root of a B+-tree represents the entire range of values in the tree, and each internal node represents a sub-interval of its parent's interval. When searching for a value k, we descend to the leaf whose interval contains k. An internal B+-tree node has at most \(d \leq b\) children, each of which represents a distinct sub-interval of the interval represented by the node. The appropriate child is chosen by comparing the search value with the key values of the node. As shown in Fig.
4, this tree does not have a complete index page, and one of the data pages has empty slots [20], [26–28]. 4. The Proposed TB-Tree Structure The TB tree is a multi-tiered tree with many branches. It comprises a balanced tree together with a postings file. It has many of the same features as the B+-tree structure described previously. The TB tree's root page layout is similar to that of the B+-tree, with the addition of a field that stores the largest word length in the tree. Keeping track of the keys and the sequence in which they are kept is critical; thus, this field must be maintained correctly [15], [29]. 4.1. TB Tree properties The leaf and non-leaf pages have different structures and properties, and their records are of variable length. 4.1.1. The leaf pages' properties in the TB tree Let \(M_2\) be the maximum number of records on a leaf index page. Then: - Every leaf node contains at least \(M_2/2\) records. - The distance from the root to every leaf node is the same, so all leaves appear at the same level. - Each entry (word key) on a leaf page holds a pointer into the postings file, and each leaf page holds a pointer to its right sibling page. 4.1.2. The non-leaf pages' properties in the TB tree Let \(M_1\) be the maximum number of records on a non-leaf index page. Then: - The root has at least two children. - Every tree node except the root has at least \(M_1/2\) children. 4.1.3. Posting File in TB tree Records in the postings file have the format {document-no, pointer to document-no, positions}. Also, a linked list of records for each word is used to indicate the file that the word belongs to and the word's location in that file. 4.2.
Keys in TB tree In the proposed work, a key is associated with each word so that it is identified uniquely and efficiently, allowing every single word to be accessed as in the case of the B+-tree. The unique keys in the TB-tree are defined as follows. Firstly, the English alphabet is used, and a number between 1 and 26 is assigned to each character in ascending order, as shown in Table 1, which gives the character-to-number mapping. Secondly, the numbers associated with the characters of the word are concatenated to construct the word key K. For example, the key of the word "orange" is "15 18 01 14 07 05". The binary value of this number, using a minimum number of bits per character, is (01111 10010 00001 01110 00111 00101); it is stored in a text field of the record within the index. However, this requires additional processing to sort the words in ascending order in the TB tree. Equations (1) and (2) are used to compute the search key SKEY. $$n = LW - WDL$$ \hspace{1cm} (1) $$SKEY = K \times \left(10^n\right)^2$$ \hspace{1cm} (2) where n represents the difference between the length of the longest word (LW) stored in the root node and the length of the word (WDL), i.e., the number of characters of the searched word. This difference is important for finding the value of the SKEY. 5. Insertion and Searching in TB-Tree This section explains with examples the process of inserting words into the TB-Tree, and then the two types of search: single keyword and phrase query. 5.1. Insertion algorithm By Algorithm 1, each word is transformed into a numerical representation before being included in the database. It is first transformed into a number using the mapping provided in Table 1, then translated into a binary number using the mapping shown in Table 2.
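The key construction above (the character-to-number mapping and Eqs. (1)–(2)) can be sketched as follows. This is a minimal illustration; the function names are ours, not the paper's, and only the mapping and formulas stated in the text are assumed.

```python
# Sketch of the TB-tree word-key construction (a=01 ... z=26, Eqs. (1)-(2)).
# Function names are illustrative; the paper's Algorithm 1 covers these steps.

def word_key(word: str) -> int:
    """Concatenate the two-digit alphabet positions of the characters (Table 1)."""
    digits = "".join(f"{ord(c) - ord('a') + 1:02d}" for c in word.lower())
    return int(digits)  # leading zero, if any, is dropped by int()

def search_key(word: str, longest_word_len: int) -> int:
    """SKEY = K * (10^n)^2 with n = LW - WDL, per Eqs. (1) and (2)."""
    n = longest_word_len - len(word)
    return word_key(word) * (10 ** n) ** 2

# Reproducing the paper's examples with LW = 6:
print(word_key("orange"))    # 151801140705
print(search_key("fig", 6))  # 60907000000
```

For a word as long as LW itself, n = 0 and the search key equals the word key, which matches the six-letter entries in Table 3.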
If encryption is required, the encryption algorithm is applied to the converted text before it is put into the TB-tree; otherwise, only the converted text is placed into the TB-tree according to Algorithm 1, which implements the conversion as a function. The insertion then works as follows. While the insertion procedure is in progress, a new record is added to a leaf node that is not yet full. As soon as a leaf node becomes full, it is split into two nodes, designated as the new and old nodes, to preserve the data. However, if an overflow occurs during the insertion process, the data will be reallocated across sibling pages before the splitting process is performed, to minimize the number of pages that need to be divided. Algorithm 1: Preprocessing (converting to decimal and binary form) and Encryption.
Begin
  Initialization: Word_in_decimal = 0, Word_in_binary = empty.
  LOOP: For each C in W Do  // C is a character, and W is the word.
    Get the corresponding numerical value of C as per Table 2.
    Append it to Word_in_decimal.
    Convert the value stored in Word_in_decimal into binary, then append to Word_in_binary.
    If Encryption is required then
      Encrypt(Word_in_binary)
    End if
  END LOOP
  Return Word_in_binary[W].
End
5.2. Single keyword query The search method for locating a single word with an exact key value in a B-tree employs approaches similar to those employed in other search algorithms. The search key (SKEY), however, is computed from the key (K) that is kept in the tree. The search process begins at the root node and works its way down to the remaining nodes. The non-leaf nodes are used to direct the search to the leaf nodes.
If the required word (or its numerical value) is found in a leaf node, the search is directed through the Postings File, which remains open throughout this process and is closed afterwards, to the requested documents that contain the desired word. The method of running a single keyword query takes as inputs the root page R and the keyword key value K; the output is a list of documents that include the word K. Consider the following example to get a better understanding of this method. Table 2. The set of documents <table> <thead> <tr> <th>Document number</th> <th>Word</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>apple</td> </tr> <tr> <td>2</td> <td>apple</td> </tr> <tr> <td>3</td> <td>Cherry</td> </tr> <tr> <td>4</td> <td>nut</td> </tr> </tbody> </table> Example 1: Based on the set of documents listed in Table 2, suppose that there is a query for finding the word "fig" in the desired documents. The key is K = 060907, where 06, 09, and 07 are the mapping values of f, i, and g respectively, according to Table 1; the word length WDL = 3; the length of the longest word stored in the tree LW = 6; and n = LW - WDL = 6 - 3 = 3, thus: \[ \text{SKEY} = K \times (10^n)^2 = 60907 \times (10^3)^2 = 60907000000 \] Before insertion into the posting file, the SKEY is converted into the equivalent binary code 00110 01001 00111, where each number in the mapped code is represented by 5 bits. Example 2: If users want to search for the word "apple", then its binary representation is 00001 10000 10000 01100 00101. The number of bits in this example is 25, while the maximum number of bits used in these examples (according to the set in Table 2) is 30, determined by the longest word inserted into the TB tree (LW = 6), as shown in Table 3. These bits are split into two parts of 15 bits each. This process is done to minimize the required storage and to ease the retrieval of the related documents.
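The 5-bit-per-character encoding and the two-half split described above can be sketched as follows. The padding of shorter codes up to 30 bits is our assumption; the paper only states that the (at most 30-bit) code is split into two 15-bit parts.

```python
# Sketch of the 5-bit encoding and split. Function names and the zero-padding
# to LW * 5 bits are our assumptions, not stated verbatim in the paper.

def to_bits(word: str, longest_word_len: int = 6) -> str:
    """Encode each character position (a=1 ... z=26) on 5 bits, padded to LW*5."""
    bits = "".join(f"{ord(c) - ord('a') + 1:05b}" for c in word.lower())
    return bits.ljust(longest_word_len * 5, "0")  # assumed right zero-padding

def split_halves(bits: str) -> tuple[str, str]:
    """Split the code into two equal halves, 15 bits each when LW = 6."""
    half = len(bits) // 2
    return bits[:half], bits[half:]

code = to_bits("apple")          # 30-bit padded code for "apple"
first, second = split_halves(code)
```

The first 25 bits of `to_bits("apple")` match the paper's example (00001 10000 10000 01100 00101); each stored half is then 15 bits long.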
The resultant reduced bits are stored in the posting file, so that the word entered into the TB tree is half the length of the original word. The word is finally inserted into the posting file as shown in Figs. 3, 4, and 5. 5.3. Encryption in TB-tree As illustrated in the previous two examples, the indexed words are stored in binary representations, and they depend on several dynamic parameters that change according to the dataset. These parameters are the longest word length in the set, LW, and the word length, WDL. These values are used to calculate n, which is used to generate the search key SKEY as per (2). The secrecy here is that different variations of a given word, such as read, reading, and reader, are brought to the same length and hence have representations of the same size in the index. To differentiate them, (2) is applied, whereby additional zeros are added for ease of searching while the original word is maintained in the index. Another confidentiality aspect is that the scheme relies on the longest word length, which varies from set to set. 5.4. Phrase query search In the phrase query search, the B-tree technique is used for descending the tree from the root using the search algorithm. In addition, as in a window query on an R-tree, more than one sub-tree under a visited node may need to be searched. Therefore, only a few child nodes may need to be visited at each tree level. The query is satisfied by guiding the search to the leaf nodes via the non-leaf index pages, and the search process then proceeds through the Postings File to the desired documents that contain the required words of the sentence. This method takes as inputs the TB tree, the root node R, Linked_List1, and Linked_List2, and outputs all page IDs of the documents containing the words of the searched phrase. Initially, Linked_List1 = Linked_List2 = LWD = 0. Supposing that the root page R = "grape", the R-LONGEST-WORD = 5.
The next step in the algorithm is to check whether this root is a leaf page or not. If not, it searches all children pages that satisfy the query and contain the words of the sentence, descending level by level toward the leaf nodes. Now, the two children of this root are carrot and endive. Suppose these children's page-ID (document) = 1; then Linked_List1 will equal one. This operation continues until reaching the leaf page. Table 3. The inserted words into TB-tree <table> <thead> <tr> <th>Word</th> <th>Word Length</th> <th>n</th> <th>(10^n)^2</th> <th>Stored Key</th> <th>Search Key</th> </tr> </thead> <tbody> <tr> <td>date</td> <td>4</td> <td>2</td> <td>10000</td> <td>4012005</td> <td>40120050000</td> </tr> <tr> <td>Cherry</td> <td>6</td> <td>0</td> <td>1</td> <td>30805181825</td> <td>30805181825</td> </tr> <tr> <td>endive</td> <td>6</td> <td>0</td> <td>1</td> <td>51404092205</td> <td>51404092205</td> </tr> <tr> <td>carrot</td> <td>6</td> <td>0</td> <td>1</td> <td>30118181520</td> <td>30118181520</td> </tr> <tr> <td>apple</td> <td>5</td> <td>1</td> <td>100</td> <td>116161205</td> <td>11616120500</td> </tr> <tr> <td>lemon</td> <td>5</td> <td>1</td> <td>100</td> <td>1205131514</td> <td>120513151400</td> </tr> <tr> <td>banana</td> <td>6</td> <td>0</td> <td>1</td> <td>20114011401</td> <td>20114011401</td> </tr> <tr> <td>grape</td> <td>5</td> <td>1</td> <td>100</td> <td>718011605</td> <td>71801160500</td> </tr> <tr> <td>melon</td> <td>5</td> <td>1</td> <td>100</td> <td>1305121514</td> <td>130512151400</td> </tr> <tr> <td>orange</td> <td>6</td> <td>0</td> <td>1</td> <td>151801140705</td> <td>151801140705</td> </tr> <tr> <td>nut</td> <td>3</td> <td>3</td> <td>1000000</td> <td>142120</td> <td>142120000000</td> </tr> <tr> <td>peas</td> <td>4</td> <td>2</td> <td>10000</td> <td>16050119</td> <td>160501190000</td> </tr> <tr> <td>sorrel</td> <td>6</td> <td>0</td> <td>1</td> <td>191518180512</td> <td>191518180512</td> </tr> <tr> <td>mango</td> <td>5</td> <td>1</td> <td>100</td> <td>1301140715</td> <td>130114071500</td> </tr> <tr> <td>fig</td> <td>3</td> <td>3</td> <td>1000000</td> <td>60907</td> <td>60907000000</td> </tr> </tbody> </table> Fig. 3. Sampling of the posting file Fig. 4. Processing of the TB tree inserting word 6. Results and Discussion This section describes the test environment used to implement this work and the experimental results of the proposed algorithm. 6.1. Test environment The system is implemented using VisualBasic.net 2019 with Microsoft Office (Access 2019) to store the database. It is executed on a laptop with an Intel(R) Core(TM) i7 CPU M 460 2.4 GHz processor and 8 GB of RAM, running the 64-bit version of Windows 11. 6.2. Experimental results 8282 HTML web documents have been used to evaluate the proposed algorithm's performance, taken from the Carnegie Mellon University data set (WebKb) (The 4 Universities Data Set, 1998), which many researchers have used. After preprocessing this data set by removing the noisy information (HTML tags and stop words), 128213 unique words remain. A subset consisting of 10K words of this dataset is used in the proposed system. This set of 10K is believed to be reasonable compared to the 4.7K distinct words tested. This subset is indexed using the insertion algorithm: each word of the collection is converted into a number, and this number is then converted into an equivalent binary number according to Algorithm 1. 6.3. Comparison with other related studies Fig. 6 demonstrates the comparison of the proposed method with the traditional index and with a previously proposed method in terms of storage space. The storage size is reduced to 48.5 MB by using the TB-tree, while the traditional index uses 307 MB, and the Enhanced Inverted Index (EII) proposed by [17] reduces the storage size to 99.9 MB.
Hence, the TB-tree reduces the storage size by 47.5% compared to EII and by 94.6% compared to the index applied by [22]. Fig. 6 demonstrates the differences in storage space between the traditional index, another proposed related algorithm, and the proposed TB-tree method. It shows that the proposed method is the best of the compared approaches in terms of storage size. Table 4 depicts the comparison with other related studies. <table> <thead> <tr> <th>References</th> <th>Methods</th> <th>Dataset names</th> <th>Classification method</th> <th>Storage Space (MB)</th> <th>Security</th> </tr> </thead> <tbody> <tr> <td>[12]</td> <td>Traditional</td> <td>Webpages</td> <td>Traditional algorithm</td> <td>896</td> <td></td> </tr> <tr> <td>[10]</td> <td>EII (Enhanced Inverted Index)</td> <td>Documents on the web</td> <td>EII</td> <td>99.6</td> <td></td> </tr> <tr> <td>[25]</td> <td>TB+-tree</td> <td>Some words used as a dataset</td> <td>TREE</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>[12], [32]</td> <td>Proposed Method</td> <td>WebKb dataset</td> <td>Vector method</td> <td>48.5</td> <td>Secure key</td> </tr> </tbody> </table> 7. Conclusions This article presented the development of a novel document indexing system named the TB tree. Based on the B+-tree technique, the system supports both single keyword and phrase searches. This paper presents a new algorithm that decreases the index file size, to a minimum of 48.5 MB as shown in Fig. 6, where each word in the TB tree is represented by a unique binary number saved as a binary code. Furthermore, the numerical binary representation was utilized as an encryption of the stored word, providing an extra layer of protection. Applying the suggested method to a bigger dataset (which contains 8282 semi-structured webpages) makes it possible to further generalize the findings obtained thus far.
Additionally, a comprehensive test must be carried out to evaluate the time performance of running queries on the newly created index. For future work, it is suggested to apply this method to different types of datasets, such as documents, text, and images, converting them into a suitable format with a new method of security to make the system more secure. Acknowledgment The authors would like to thank the Middle Technical University (MTU) for the financial support of this project. References
Safety-Critical Systems Transcript Date: Tuesday, 10 January 2017 - 6:00PM Location: Museum of London Introduction Computer systems are used in many safety applications where a failure may increase the risk that someone will be injured or killed. We may distinguish between safety-related systems where the risk is relatively small (for example the temperature controller in a domestic oven) and safety-critical systems where the risk is much higher (for example the interlocking between the signals and points on a railway). The use of programmable systems in safety applications is relatively recent. This lecture explores the difficulties of applying established safety principles to software based safety-critical systems. The Causes of Accidents Many accidents do not have a single cause. Suppose that a motorist is driving their new sports car to an important meeting, skids on a patch of oil on the road around a corner, and hits a road sign. What caused the accident? Was it the speed, the oil, the position of the road sign or something else? Was the accident caused by the motorist driving too fast? If so, was it because they were late for their meeting, or because they were trying out the cornering abilities of their new car, or because they were thinking about the important meeting and not paying attention to their speed, or because they had not had appropriate driving tuition? If they were late, was it because their alarm clock had failed, or because one of their children had mislaid their school books, or... Was the accident caused by the oil on the road? It was there because the gearbox on a preceding lorry had leaked, but was that caused by a design fault, or poor maintenance, or damage from a pothole that hadn’t been repaired by the local council, or ... Was the accident caused by the position of the road sign? 
It was placed there because there were road works ahead, because the road edge has subsided, because it had been undermined by water running from a poorly maintained farm stream and an overweight lorry had passed too close to the edge. What should we say caused this accident? This is an artificial example but many real-life accidents are at least as complex as this. Accidents happen because several things combine together. There may be many contributory factors and no single cause. But many accidents are attributed to errors by human operators of equipment – pilots and car drivers, for example. As software replaces more and more of these human operators, the scope for software failures to be a leading contributor to accidents will increase. Safety and Engineering Many of the principles and techniques that are used to develop and assess safety-critical systems have their origins in the process industries, such as chemical plants and oil refineries, when these were relatively simple and controlled manually through valves, switches and pumps, monitored by thermometers and pressure gauges and protected by pressure release valves and alarms. In such systems, the common causes of failure were pipes becoming corroded and fracturing, valves sticking, or other physical problems with individual components. The safety of the whole system depended on the reliability of individual components, so systems were (and still are) designed to eliminate (so far as practicable) single points of failure where one component could fail and cause an accident. In those cases where the single point of failure cannot be eliminated (the rotor blades in a helicopter, for example) these critical components have to be designed and built in such a way that failure is extremely improbable. A process of hazard analysis attempts to identify all the dangerous states that the system could get into, and fault trees are drawn up to analyse the circumstances that could lead to each dangerous occurrence. 
Engineers distinguish between faults and failures: a fault is defined as an abnormal condition or defect at the component, equipment, or sub-system level which may lead to a failure, and a failure is defined as the inability of a component, equipment, sub-system, or system to perform its intended function as designed. A failure may be the result of one or many faults. Having designed the system, the engineers will often consider each component and subsystem and analyse what would happen if it failed, drawing on knowledge of the different ways that such components can fail (to give a trivial example, a valve may stick open or shut). This process is called Failure Modes and Effects Analysis (FMEA). A related process that also considers criticality is called FMECA. There are rigorous techniques that a safety analyst may use to analyse the behaviour of complex technical and socio-technical systems, but most FMEA / FMECA analyses currently rely on intuition and judgement. After all this analysis, the engineers can calculate how reliable they need each component to be for the overall system to be adequately safe. But this poses the following question:

**Safety Requirements: how safe is safe enough?**

"How safe is safe enough" is a social question rather than a scientific or engineering one. It is a question that can be asked in a variety of ways, such as "what probability of failure should we permit for the protection system of this nuclear reactor?", "what probability of failure should we permit for safety-critical aircraft components?" or "how much should we spend to avoid fatal accidents on the roads or railways?". From the answers chosen in each context, you can calculate the value placed on a statistical life, or VSL. This is defined as the additional cost that individuals would be willing to bear for improvements in safety (that is, reductions in risks) that, in the aggregate, reduce the expected number of fatalities by one.
The calculated VSL varies quite widely but it can be important to use a consistent figure, so that (for example) the Department for Transport can decide where it would be most beneficial to spend road safety money, and the Department of Health can decide how to allocate NHS budgets between different ways of improving health outcomes. The VSL can be calculated in a variety of ways and the value differs in different countries. In Great Britain, the health and safety of workers and others who may be affected by the risks from work are regulated by the 1974 Health and Safety at Work Act (HSWA), which is enforced primarily by the Health and Safety Executive (HSE). HSWA states (as general duties) that

> it shall be the duty of every employer to ensure, so far as is reasonably practicable, the health, safety and welfare at work of all his employees

and

> it shall be the duty of every employer to conduct his undertaking in such a way as to ensure, so far as is reasonably practicable, that persons not in his employment who may be affected thereby are not thereby exposed to risks to their health or safety.

The phrase *so far as is reasonably practicable* means that someone who creates a risk to themselves, their employees or the public has a duty under the HSWA to assess the risk and, if it is not so low as to be clearly tolerable, they must take action to reduce the risk so that it is As Low as is Reasonably Practicable. This is known as the ALARP principle. HSE has explained ALARP as follows: The definition set out by the Court of Appeal (in its judgment in Edwards v. National Coal Board, [1949] 1 All ER 743) is:

> 'Reasonably practicable' is a narrower term than 'physically possible' ...
a computation must be made by the owner in which the quantum of risk is placed on one scale and the sacrifice involved in the measures necessary for averting the risk (whether in money, time or trouble) is placed in the other, and that, if it be shown that there is a gross disproportion between them – the risk being insignificant in relation to the sacrifice – the defendants discharge the onus on them."

In essence, making sure a risk has been reduced ALARP is about weighing the risk against the sacrifice needed to further reduce it. The decision is weighted in favour of health and safety because the presumption is that the duty-holder should implement the risk reduction measure. To avoid having to make this sacrifice, the duty-holder must be able to show that it would be grossly disproportionate to the benefits of risk reduction that would be achieved. Thus, the process is not one of balancing the costs and benefits of measures but, rather, of adopting measures except where they are ruled out because they involve grossly disproportionate sacrifices. Extreme examples might be:

- To spend £1m to prevent five staff suffering bruised knees is obviously grossly disproportionate; but
- To spend £1m to prevent a major explosion capable of killing 150 people is obviously proportionate.

Of course, in reality many decisions about risk and the controls that achieve ALARP are not so obvious. Factors come into play such as ongoing costs set against remote chances of one-off events, or daily expense and supervision time required to ensure that, for example, employees wear ear defenders set against a chance of developing hearing loss at some time in the future. It requires judgment. There is no simple formula for computing what is ALARP. There is much more guidance on the HSE website. I particularly recommend the 88-page publication *Reducing risks, protecting people*, from which the following diagram is taken.
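Although, as noted above, there is no simple formula for ALARP and the judgment cannot really be automated, the "gross disproportion" idea in the two extreme examples can be sketched as a toy screening calculation. The VSL figure and disproportion factor below are hypothetical assumptions chosen purely for illustration, not values prescribed by HSE:

```python
def grossly_disproportionate(cost, expected_lives_saved,
                             vsl=2_000_000, factor=10):
    """Crude ALARP screening sketch: a risk-reduction measure is ruled
    out only when its cost grossly exceeds its benefit, modelled here as
    cost > factor * VSL * expected lives saved.
    vsl (pounds) and factor are illustrative assumptions, not HSE figures."""
    benefit = vsl * expected_lives_saved
    return cost > factor * benefit

# £1m to prevent bruised knees (negligible expected fatalities):
print(grossly_disproportionate(1_000_000, 0.0001))  # → True (ruled out)
# £1m to prevent an explosion that could kill 150 people:
print(grossly_disproportionate(1_000_000, 150))     # → False (must be adopted)
```

Real ALARP decisions also weigh non-fatal harm, ongoing costs and uncertainty, which is why the text stresses that judgment, not a formula, is required.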
In many industries there are agreed standards or certification requirements that provide guidance on the question *how safe is safe enough?* One example is the DO-178 guidance for certifying flight-critical aircraft components (including software). The FAA (for the USA) and EASA (for Europe) require that *catastrophic* failure conditions (those which would prevent continued safe flight and landing) should be *extremely improbable* (which you may find comforting) though neither regulation gives an equivalent numerical probability for this phrase. John Rushby's excellent review of the issues in certifying aircraft software explains: *Neither FAR 25.1309 nor CS 25.1309 defines "extremely improbable" and related terms; these are explicated in FAA Advisory Circular (AC) 25.1309 and EASA Acceptable Means of Compliance (AMC) 25.1309. These state, for example, that "extremely improbable" means "so unlikely that they are not anticipated to occur during the entire operational life of all airplanes of one type". AC 25.1309 further states that "when using quantitative analyses. . . numerical probabilities . . . on the order of 10^-9 per flight-hour may be used. . . as aids to engineering judgment . . . to . . . help determine compliance" with the requirement for extremely improbable failure conditions. An explanation for this figure can be derived as follows: suppose there are 100 aircraft of the type, each flying 3,000 hours per year over a lifetime of 33 years (thereby accumulating about 10^7 flight-hours) and that there are 10 systems on board, each with 10 potentially catastrophic failure conditions; then the "budget" for each is about 10^-9 per hour if such a condition is not expected to occur in the entire operational life of all airplanes of the type.
An alternative explanation is given in Section 6a of AMC 25.1309: the historical record for the previous (pre-software-intensive) generation of aircraft showed a serious accident rate of approximately 1 per million hours of flight, with 10% due to systems failure; the same assumption as before about the number of potentially catastrophic failure conditions then indicates each should have a failure probability less than 10^-9 per hour if the overall level of safety is to be maintained. Even though recent aircraft types have production runs in the thousands, much higher utilization, and longer service lifetimes than assumed in these calculations, and also have a better safety record, AMC 25.1309 states that a probability of 10^-9 per hour has "become commonly accepted as an aid to engineering judgement" for the "extremely improbable" requirement for catastrophic failure conditions.*

So the target probability of catastrophic failure for safety-critical systems in aircraft is 10^-9 per hour, or one failure in a billion hours. It is interesting to contrast this with the COMMON POSITION OF INTERNATIONAL NUCLEAR REGULATORS AND AUTHORISED TECHNICAL SUPPORT ORGANISATIONS on the licensing of safety-critical software for nuclear reactors, which states: "Reliability claims for a single software based system important to safety of lower than 10^-4 probability of failure (on demand or dangerous failure per year) shall be treated with extreme caution". We shall shortly consider the problems of establishing low failure probabilities for software-based systems and some of the proposed solutions.

**Hazard, Risk, Safety and Reliability**

Before we continue, it is important to distinguish what we mean by a hazard and a risk. A *hazard* is anything that may cause harm. The *risk* is the chance, high or low, that somebody could be harmed by these and other hazards, together with an indication of how serious the harm could be.
A hazard is something that could lead to harm (an accident) and a risk is the combination of the *probability* that the hazard will lead to an accident and the *severity* of the accident if it occurs. When engineers design safety-critical systems, they try to identify all the potential hazards that their system creates or that it should control: for a chemical factory this might include the release of toxic materials, or an explosion or a fire. For each hazard, the risk is assessed and if the risk is not acceptable but can be made tolerable, measures are introduced to reduce it ALARP. This ALARP approach is used in Britain and in other countries with related legal systems, but not universally. It is also important to distinguish safety from reliability. A system can be safe (because it cannot cause harm) and yet be unreliable (because it does not perform its required functions when they are needed). If your car often fails to start, it may be perfectly safe though very unreliable. In general, if a system can *fail safe* then it can remain safe whilst being unreliable, even if its failures occur often.

**Software-based systems**

Software-based systems are often very complex. A modern car contains 100 million lines of software, some of it safety-critical. As we have seen above, the standards for assuring the safety of electronic systems originated in the approach adopted by the process industries; this approach was based on analysing how the reliability of system components contributed to the overall safety of the system. But failures can be *random or systematic*, where random failures are typically the result of physical changes in system components, because of wear, corrosion, contamination, ageing or other physical stresses. In contrast, systematic failures are intrinsic in the system design; they will always occur whenever particular circumstances co-exist and the system enters a state that has not been correctly designed or implemented.
Historically, safety engineers assumed that systematic failures would be eliminated by following established engineering best practices, so the possible effects of systematic failures were ignored when calculating failure probabilities for electro-mechanical control systems. Software does not age in the way that physical components do (though it may become corrupted in store, for example by maintenance actions or corruption of its physical media, and it may be affected by changes in other software or hardware, such as a new microprocessor or a new compiler). Software failures are, in general, the result of software faults that were created when the system was specified, designed and built and these faults will cause a failure whenever certain circumstances occur. They are systematic failures but they are sufficiently common that it would be difficult to justify ignoring them in the safety certification of software-based systems. Some engineers have argued that it is a category mistake to talk about software failing randomly; they argue that software cannot be viewed as having a probability of failure. Their reasoning is that because every bug that leads to failure is certain to do so whenever it is encountered in similar circumstances, there is nothing random about it, so software does not fail randomly. This argument is wrong, because any real-world system operates in an uncertain environment, where the conditions that the system encounters and has to process will arise randomly and the resulting failures will occur randomly; it is therefore perfectly reasonable to treat the failures of software-based systems as random. 
There are two sources of uncertainty about when and how often software will fail: uncertainty about the inputs it will encounter in a given period because the inputs occur randomly (aleatoric uncertainty) and uncertainty about whether a particular sequence of inputs will trigger a failure because of limited knowledge about the total behaviour of a complex program (epistemic uncertainty). Probability is the mathematical tool that is used to reason about uncertainty; engineers therefore talk about the pfd (probability of failure on demand – used for systems that are rarely executed, such as the protection systems in nuclear reactors) and the pfh (probability of failure per hour – used for systems that operate continuously, such as the control software in a modern car or aircraft). It is a symptom of the immaturity of the software engineering profession that the debate about whether or not software failures can be regarded as random still continues on professional and academic mail groups, despite two decades of authoritative, peer reviewed papers on software failure probabilities. As we have seen in previous lectures, the primary ways that software developers use to gain confidence in their programs are through testing and operational experience, so let us now consider what we can learn from these methods.

**How reliable is a program that has run for more than a year without failing?**

A year is 8760 hours, so let us assume that we have achieved 10,000 hours of successful operation; this should give us some confidence in our program, but just how much? What can we claim as its probability of failure per hour (pfh)? When calculating probabilities from evidence, it is necessary to decide how confident you need to be that the answer is correct.
It is self-evident that the more evidence you have, the more confident you can be: if you toss a coin 100 times and get roughly the same number of heads as tails, you may believe it is an unbiased coin; if the numbers are close after 1 million tosses, your confidence should be much higher. In the same way, to know how much evidence you need to support a particular claim for a probability, you must first specify the degree of confidence you need to have that you are right. For a required failure rate of once in 10,000 hours (a pfh of 1/10000 or 10^-4), 10,000 hours without a failure provides 50% confidence that the target has been met; but if you need 99% confidence that the pfh is no worse than 10^-4, you would need 46,000 hours of tests with no failures. As an engineer's rule-of-thumb, these multipliers indicate the hours of fault-free testing or operational experience that you need to justify a claim for a pfh (or the number of failure-free demands that you need if you are claiming a pfd – the maths comes out close enough for the relevant probability distributions). So to claim a pfh or pfd of no greater than 10^-n, you need at least 10^n failure-free hours or demands if you are happy with 50% confidence that your claim is correct. If you want to be 99% confident then you need at least 4.6 × 10^n failure-free hours or demands. (Similar calculations can be done to determine the amount of evidence needed if the program has failed exactly once, twice, or any other number of times. The increase is relatively small if the number of failures is small. For details see the referenced papers). The reasoning above means that if a program has run for a year without failing, you should only have 50% confidence that it will not fail in the following year. These probability calculations depend on three critical assumptions:

• That the distribution of inputs to the program in the future will be the same as it was over the period during which the test or operational evidence was collected.
So evidence of fault-free working in one environment should not be transferred to a different environment or application.

• That all dangerous failures were correctly detected and recorded.

• That no changes have been made to the software during the test or operational period.

These are onerous assumptions and if any one of them cannot be shown to be true then you cannot validly use the number of hours of successful operation to conclude that a program is highly reliable. This means that you must be able to justify very high confidence in your ability to predict the future operating environment for your software and, if the software is changed in any way, all the previous evidence from testing or operation should be regarded as irrelevant unless it can be proved that the software changes cannot affect the system failure probability. Most arguments for the re-use of software in safety-critical applications fail all three of the above criteria, so safety arguments based on re-used software should be treated with a high degree of scepticism, though such *proven in use* arguments are often put forward (and far too often accepted by customers and regulators). We saw earlier that the certification guidance for flight-critical software in aircraft is a pfh of 10^-9, which would require 4.6 billion failure-free hours (around 500,000 years) to give 99% confidence. In contrast, the certification of the software based primary protection system for the Sizewell B nuclear reactor only required a pfd of 10^-3, which was feasible to demonstrate statistically. At the other extreme, it is reported that some railway signalling applications require failure rates no worse than 10^-12 per hour, which is far in excess of anything for which a valid software safety argument could be constructed. All these considerations mean that software developers have a major problem in generating valid evidence that their software is safe enough to meet certification requirements or ALARP.
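The rule-of-thumb quoted above can be checked with a short calculation. Assume a constant failure rate λ (an exponential model, which is an illustrative assumption, not part of any certification standard): T failure-free hours then give confidence 1 − exp(−λT) that the true rate is no worse than λ, which reproduces the 4.6 × 10^n figure for 99% confidence (the 50% figure in the text comes from a Bayesian treatment of the same evidence).

```python
import math

def confidence(target_pfh, failure_free_hours):
    """Confidence that the true failure rate is below target_pfh, given
    failure_free_hours of operation with no failures, under a
    constant-failure-rate (exponential) model."""
    return 1.0 - math.exp(-target_pfh * failure_free_hours)

def hours_needed(target_pfh, required_confidence):
    """Failure-free hours needed to support target_pfh at the required
    confidence level, under the same model."""
    return -math.log(1.0 - required_confidence) / target_pfh

print(round(hours_needed(1e-4, 0.99)))         # ~46,000 hours, as in the text
print(round(hours_needed(1e-9, 0.99) / 8760))  # ~500,000 years for pfh = 10^-9
```

The second figure shows directly why the aircraft target of 10^-9 per hour cannot be demonstrated by testing or operational experience.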
Yet safety-critical software is widely used and accident rates seem to be acceptable. Does this mean that the certification requirements have been set far too strictly? Or have we just been fortunate that most faults have not yet encountered the conditions that would trigger a failure? Should the increasing threat of cyber-attack change the way that we answer these questions?

**N-version Programming**

Some academics and some industry groups have tried to overcome the above difficulties by arguing that if safety-critical software is written several times, by several independent groups, then each version of the software will contain independent faults and therefore fail independently. On this assumption, the different versions could all be run in parallel, processing the same inputs, and their outputs could be combined through a voting system. On the assumption of independence and a perfect voter, three 10^-3 systems could be combined to give one 10^-9 system. **N-version programming** is a software analogue to the multi-channel architectures that are widely used to mitigate hardware failures. N-version programming has strong advocates but it has been shown to be flawed. An empirical study by John Knight and Nancy Leveson undermined the assumption of independence that is critical to the approach, and other analyses, both empirical and theoretical, have reached the same conclusion. Professors Knight and Leveson have responded robustly to critics of their experimental work. Despite these flaws, the idea remains intuitively attractive and it is still influential. One interesting approach has been to combine two software control channels, one of which is assumed to be "possibly perfect", perhaps because it has been developed using mathematically formal methods and proved to meet its specification and to be type safe.
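The arithmetic behind the three-times-10^-3 claim, and its fragility, can be shown in a few lines. The common-mode probability below is invented for illustration; the point, in line with the Knight and Leveson finding, is that even a small probability of a fault shared by all versions (for example, a common misunderstanding of the specification) dominates the system failure rate:

```python
def all_fail_independent(p, n=3):
    """A system of n versions that fails only when every version fails;
    with truly independent versions this is p**n."""
    return p ** n

def all_fail_common_mode(p, p_common, n=3):
    """Same marginal failure probability p per version, but p_common of
    it comes from a fault shared by all versions; the rest is
    independent.  p_indep is chosen so the marginal stays at p."""
    p_indep = (p - p_common) / (1 - p_common)
    return p_common + (1 - p_common) * p_indep ** n

p = 1e-3
print(all_fail_independent(p))        # ≈ 1e-9: the hoped-for figure
print(all_fail_common_mode(p, 1e-4))  # ≈ 1e-4: dominated by the shared fault
```

With a one-in-ten-thousand chance of a shared fault, the three-version system falls five orders of magnitude short of the 10^-9 target, even though each version individually still meets 10^-3.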
A paper by Bev Littlewood and John Rushby, called *Reasoning about the Reliability Of Diverse Two-Channel Systems In which One Channel is "Possibly Perfect"*, analyses this approach.

**Standards for developing Safety Critical Systems: DO-178 and IEC 61508**

The leading international standards for software that implements safety-critical functions (DO-178C for aircraft software, and IEC 61508 and its industry-specific derivatives) do not attempt to provide scientifically valid evidence for failure probabilities as low as 10^-9 per hour or even 10^-6 per hour. Instead, they define a hierarchy of **Safety Integrity Levels** (SILs – the IEC 61508 term) or **Development Assurance Levels** (DALs – the DO-178 term) depending on the required reliability of the safety function that is providing protection from a hazard. The standards recommend or require that the software is specified, designed, implemented, documented and tested in specified ways, with stricter requirements applying where a lower probability of failure is required. Many criticisms can be levelled at this approach. There is at best a weak correlation between the methods used to develop software and its failure rate in service. The exception to this is where the software has been mathematically specified and analysed and proved to implement the specification and to be devoid of faults that could cause a failure at runtime (and even here there is insufficient experience of enough systems to establish it empirically). The standards allow this approach but do not require it. Some of the recommended and required activities are costly and time-consuming (such as the DO-178 Level A requirement to carry out MC/DC testing) and although they may detect errors that would not otherwise have been found, it is not clear that these activities are a good use of safety assurance resources. There is no evidential basis for the way that recommended engineering practices have been assigned to SILs or DALs.
For example, IEC 61508 specifies four SILs for safety functions that operate continuously, as below:

| Safety Integrity Level | Probability of dangerous failure per hour |
| --- | --- |
| SIL 4 | >=10^-9 to <10^-8 |
| SIL 3 | >=10^-8 to <10^-7 |
| SIL 2 | >=10^-7 to <10^-6 |
| SIL 1 | >=10^-6 to <10^-5 |

Since (as we have seen) there is no way to demonstrate empirically that even the SIL 1 pfh has been achieved, it is unclear how there could be evidence that additional development practices would reduce the failure rate by factors of 10, 100, or 1000. Indeed, there is no evidence that employing all the practices recommended for SIL 4 would actually achieve even SIL 1 with any certainty. The standards do not adequately address the safety threats from security vulnerabilities. One evaluation of software developed in accordance with DO-178 showed no discernible difference in the defect levels found in DAL A software and lower DAL software. I discussed this evaluation in my first Gresham lecture, *Should We Trust Computers?*. A 2007 report from the United States National Academies, *Software for Dependable Systems: Sufficient Evidence?*, makes three strong recommendations to anyone claiming that software is dependable. Firstly, on the need to be explicit about what is being claimed: No system can be dependable in all respects and under all conditions. So to be useful, a claim of dependability must be explicit. It must articulate precisely the properties the system is expected to exhibit and the assumptions about the system's environment on which the claim is contingent. The claim should also make explicit the level of dependability claimed, preferably in quantitative terms. Different properties may be assured to different levels of dependability. Secondly, on evidence: For a system to be regarded as dependable, concrete evidence must be present that substantiates the dependability claim.
This evidence will take the form of a “dependability case,” arguing that the required properties follow from the combination of the properties of the system itself (that is, the implementation) and the environmental assumptions. So that independent parties can evaluate it, the dependability case must be perspicuous and well-structured; as a rule of thumb, the cost of reviewing the case should be at least an order of magnitude less than the cost of constructing it. Because testing alone is usually insufficient to establish properties, the case will typically combine evidence from testing with evidence from analysis. In addition, the case will inevitably involve appeals to the process by which the software was developed—for example, to argue that the software deployed in the field is the same software that was subjected to analysis or testing. Thirdly on Expertise. Expertise—in software development, in the domain under consideration, and in the broader systems context, among other things—is necessary to achieve dependable systems. Flexibility is an important advantage of the proposed approach; in particular, the developer is not required to follow any particular process or use any particular method or technology. This flexibility provides experts the freedom to employ new techniques and to tailor the approach to their application and domain. However, the requirement to produce evidence is extremely demanding and likely to stretch today’s best practices to their limit. It will therefore be essential that the developers are familiar with best practices and diverge from them only with good reason. Expertise and skill will be needed to effectively utilize the flexibility the approach provides and discern which best practices are appropriate for the system under consideration and how to apply them. The recommended approach leaves the selection of software engineering and safety assurance methods under the control of the developers. 
It is a goal-based approach, very much consistent with the philosophy of the Health and Safety at Work Act, that those who create a hazard have the duty to control it and should be held accountable for the outcome, rather than being told what they have to do and then being able to claim that any accidents are not their fault because they followed instructions. I know of only one published standard for safety-critical software that follows this goal-based approach: CAP 670 SW01, *Regulatory Objectives for Software Safety Assurance in Air Traffic Service Equipment*, from the Safety Regulation Group of the UK Civil Aviation Authority. They have also published an accompanying guide on Acceptable Means of Compliance.

**Final Observations**

Safety-critical computer systems are used widely and, on currently available evidence, many of them seem to be fit for purpose. Where systems have been certified or claimed to have a specified probability of failure, the evidence to support that claim is rarely, if ever, available for independent review, and in the case of claims of a pfh lower than 10^-4 per hour (which includes all claims for SILs 1, 2, 3 and 4 using the IEC 61508 international standard or its derivatives) it appears to be scientifically infeasible to produce valid evidence for such a claim. Current standards also treat the security threat to safety inadequately, if at all, and yet the possibility of a serious cyberattack is a Tier One Threat on the national risk register. It seems certain that many more safety-critical software-based systems will be introduced in the future, and it is essential that the software industry adopts engineering methods that can provide strong evidence that the risks from both systematic failure and cyberattack have been reduced so far as reasonably practicable. (Indeed, as we have seen, this could be said to be a legal obligation in Britain under Sections 2 and 3 of the Health and Safety at Work Act).
Unfortunately, the history of engineering suggests that major improvements in new engineering methods usually only come about after major accidents, when an official investigation or public inquiry compels industry to change. © Professor Martyn Thomas, 2017
Analysis of Embedded Applications By Evolutionary Fuzzing

Vincent Alimi, Sylvain Vernois, Christophe Rosenberger

HAL Id: hal-01019978 (https://hal.archives-ouvertes.fr/hal-01019978), submitted on 7 Oct 2014

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Normandie Univ, France; UNICAEN, GREYC, F-14032 Caen, France; ENSICAEN, GREYC, F-14032 Caen, France; CNRS, UMR 6072, F-14032 Caen, France
christophe.rosenberger@ensicaen.fr

Abstract—In this paper, we propose to use fuzzing techniques to discover vulnerabilities in programs hosted in smart cards used for telecommunications or banking purposes (SIM cards, credit cards, secure elements in NFC mobile devices...). Those programs, called applets, usually host sensitive applications and manipulate sensitive data. A flaw in the design or implementation of one of those applets could have disastrous consequences. The proposed approach uses a genetic algorithm to optimize the search for vulnerabilities. We illustrate the benefit of the proposed method on a MasterCard M/Chip applet through experimental results.

I. INTRODUCTION

With the development of the Internet and digital networks, new kinds of transactions have emerged. These electronic transactions govern our daily lives: access control to a building with a badge, internet payment, payment with a mobile phone, etc.
For example, here are some figures for France in 2010: e-commerce (online shopping) represented an amount of $31 billion; 7 billion credit card payments were recorded for an amount of $336 billion; and 1.5 billion withdrawals were made at ATMs for $115 billion. Electronic transactions have opened the door to a multitude of opportunities in various forms: an Internet portal to check one's bank accounts, make transfers or place stock orders; a smart card to open a door or validate a transit pass; an application downloaded to a computer or to a mobile device such as a PDA or a mobile phone. This last category of mobile equipment is extremely powerful in terms of service offerings. Indeed, a mobile phone (simply called a mobile hereafter) has the following characteristics: - Nomadic: its mobility makes it an indispensable tool of everyday life; - Online: with 3rd and now 4th generation networks, the mobile has roaming broadband Internet access. This allows, among other things, reading e-mail, accessing web services, etc.; - Powerful: current mobiles, called smartphones, ship increasingly powerful processors, ever larger memory and more efficient operating systems. By their nature, contactless mobile transactions are, for example, more prone to attacks such as man-in-the-middle, eavesdropping and relay attacks. New vulnerabilities and new types of attacks are often discovered by generalizing existing ones from the state of the art. When developing new applications, most implementation errors are detected. It may nevertheless happen that undetected errors remain, which could have disastrous consequences. Vulnerabilities can be classified in different ways. In [1], Dowd et al.
propose a classification into three classes: - Vulnerabilities due to design: introduced during the transcription of the specifications into functional specifications, themselves transcribed into technical specifications; - Vulnerabilities due to implementation: introduced when developing the code of the program or server; - Vulnerabilities due to use: observed during the deployment and operation of the program or server. Implementation errors are mostly detected by error or warning messages from the compiler, by unit testing or by integration testing. It may nevertheless happen that some errors are not detected, either for lack of resources to allocate to the test phase or because the test phase focuses primarily on compliance with the functional requirements. As an example, the attack on Mifare Classic cards by Koning et al. published in [2] exploits a weakness in the pseudo-random generator involved in the encryption of the communication between the card and the reader. In the context of contactless mobile services, implementation errors can have disastrous consequences. Consider as an example the case of a mobile proximity payment service hosted on the Secure Element as an applet. This application was developed against specifications defined by an actor such as Visa, MasterCard, American Express or by a so-called proprietary network. To overcome the difficulty of conducting exhaustive testing, fuzzing techniques can be very useful. Fuzzing is a technique to discover vulnerabilities in a program or a system. The principle is to inject data that is malformed or at the limits of its valid values. Barton Miller, professor at the University of Wisconsin, is the originator of the field of fuzzing. He introduced the concept of a fuzz program in 1988 in a class project [3] whose first findings were published in [4]. The class project consisted in sending random character strings as input to UNIX processes in order to make them crash.
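Miller's original experiment can be sketched in a few lines: generate random byte strings, feed them to a target, and record which inputs make it crash. Everything below is illustrative; `fragile` is a hypothetical stand-in for a real UNIX process.

```python
import random

def random_inputs(n, length, seed=0):
    """Purely random data generation: n byte strings of the given length."""
    rng = random.Random(seed)
    return [bytes(rng.randrange(256) for _ in range(length)) for _ in range(n)]

def fuzz(target, inputs):
    """Feed each input to the target and record the ones that crash it."""
    crashes = []
    for data in inputs:
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

# Toy stand-in for a fragile process: crashes on any input containing a NUL byte.
def fragile(data):
    if b"\x00" in data:
        raise RuntimeError("crash")
```

As the paper notes next, this purely random strategy is cheap to set up but explores the input space blindly.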
In [5], Clarke enumerates the features common to fuzzing programs: - data generation (creating the data to be passed to the target); - data transmission (getting the data to the target); - target monitoring and logging (observing and recording the reaction of the target); and - automation (reducing, as much as possible, the amount of direct user interaction required to carry out the testing regime). The last two may be considered optional or can be implemented in a module external to the fuzzer. Three methods for generating data [6] can be distinguished: random generation, data mutation and protocol analysis. Random generation involves a generator that produces a set of test data. This approach minimizes the effort and time required, but turns out to be less effective for detecting vulnerabilities because they are not necessarily known in advance. Data mutation combines two techniques: data capture and selective mutation. Starting from a valid input, mutations are applied in order to obtain a set of test data that stays very close to the structure of valid data. Generating data based on protocol analysis relies on the principle that the input data very often follows a protocol or a pre-defined format. A fuzzer of this type is implemented by defining a model of the protocol so that it can create structurally valid input whose data fields are random. The paper is organized as follows. In section II, a brief state of the art is given of existing analysis methods for embedded applications such as the ones in a Secure Element. Section III describes the proposed method, based on a fuzzing approach exploiting a genetic algorithm to optimize the exploration of the search space. Illustrations of the proposed method are given in section IV on a real JavaCard payment application. Section V gives the conclusion and perspectives of this study. II. STATE OF THE ART In [7], Guyot illustrates the use of fuzzing on applets. He demonstrates how easy it is to accurately determine the commands (i.e.
the instruction codes) that are accepted by the application. The commands – called Application Protocol Data Units (APDU) – are coded in accordance with the ISO 7816 standard [8]. A command that is recognized induces an action and the return of data, or of status words indicating an internal error. For example, the applet can return the status words 6F 00 if the input data is not conformant and provoked a processing error in the applet. On the other hand, if the command is not recognized, the standard status words 6D 00 are returned. Guyot then uses the results obtained to make the application (a student card application) fuzzing-proof. Instead of using a different instruction code per supported command, he uses a single instruction code and differentiates the commands using the reference parameters (P1 and P2). The result is an increase in the complexity for an attacker to find out how the application works, because this use of the reference parameters is not standard. In [9], Barreaud et al. expose a method to analyse the vulnerabilities of a smart card embedding a web server. Their approach consists in fuzzing the BIP protocol (Bearer Independent Protocol), responsible for the communication between the application processor and the SIM card of a mobile phone. They use the fuzzing framework Peach, which they extend with the Pyscard library in order to communicate with the SIM card. They model the BIP protocol with an XML file. They also add to the file the two markups <Expected> and <Response> in order to monitor the target. They find out that some of the tested cards do not properly implement the protocol as defined by ETSI (European Telecommunications Standards Institute) and that some have implementation flaws allowing a Denial of Service attack. In [10], Lancia publishes one of the only known fuzzing methodologies aiming at discovering vulnerabilities in EMV (Eurocard Mastercard Visa) banking applications.
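Guyot's first step – finding which instruction codes an applet accepts – can be sketched as a scan over all 256 INS values, keeping those that are not answered with 6D 00 (instruction not supported). Here `transmit` and `fake_card` are hypothetical stand-ins for a real PC/SC transmission path; a real scan would also vary the CLA byte and respect the INS values reserved by ISO 7816.

```python
def scan_instructions(transmit):
    """Probe every ISO 7816 instruction code (INS) and keep those the applet
    recognizes, i.e. those not answered with status words 6D 00."""
    supported = {}
    for ins in range(0x100):
        apdu = bytes([0x00, ins, 0x00, 0x00])  # CLA, INS, P1, P2
        sw1, sw2 = transmit(apdu)
        if (sw1, sw2) != (0x6D, 0x00):
            supported[ins] = (sw1, sw2)
    return supported

# Stub card for illustration: only INS 0xA4 (SELECT) is implemented.
def fake_card(apdu):
    return (0x90, 0x00) if apdu[1] == 0xA4 else (0x6D, 0x00)
```

Running `scan_instructions(fake_card)` reports only the SELECT instruction as supported, which is exactly the fingerprint such a scan extracts from a hardened applet.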
To realize this attack, he uses the block-based fuzzing framework Sulley [11]. Sulley is written in Python and is recognized as one of the most complete and efficient fuzzing frameworks. In order to transmit data, Lancia integrated the Triton library, also written in Python, which allows communicating with smart cards through the standard PC/SC API. For monitoring the target, Lancia developed a reference implementation of the EMV applications he targeted. The same commands are simultaneously sent to the real card and to the reference implementation. An anomaly is detected when the result returned by the real card differs from the one returned by the reference implementation. All the protocol commands are modeled and then linked together in order to create the protocol graph (cf. figure 1). To test a particular command in the graph, all the preceding commands are sent with their default values. For that particular command, the Sulley framework generates the data from the model of the protocol. Thanks to this methodology, Lancia has brought out some functional differences with the EMV specifications and some security flaws in implementations of the Visa and MasterCard specifications. For instance, he noticed that a particular combination of data could trigger the reset of the offline counters. Lancia's approach has proven efficient by showing functional differences and security flaws on real cards that had been certified in accredited certification labs. This efficiency is the result of a precise description of the data model and of thoroughness in the development of the reference implementation. However, the approach has gaps that we propose to fill. In this paper, we propose an improvement of Lancia's approach for the testing of payment applications in black box. III. PROPOSED METHOD A. Principles Intuitively, we thought that an ideal testing framework would be one capable of assessing the quality of the data sent to the card according to the effect produced.
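Lancia's monitoring scheme – the same command sent to the real card and to a reference implementation, with an anomaly raised when the responses differ – reduces to a simple comparison. Both callables below are hypothetical stand-ins for the real transmission paths.

```python
def differential_check(apdu, real_card, reference):
    """Send the same APDU to both targets; flag an anomaly if they differ."""
    real_resp = real_card(apdu)
    ref_resp = reference(apdu)
    return {"apdu": apdu, "anomaly": real_resp != ref_resp,
            "real": real_resp, "reference": ref_resp}
```

This static comparison is precisely what the authors argue against in the next paragraph: it detects divergence but cannot steer the next round of generated data.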
In other words, the ideal framework would be capable of modifying in real time the commands sent to the card, according to the data returned, in order to discover vulnerabilities. A fuzzer based on a reference implementation does not match our initial need to adapt the generated data according to the results it produces. Indeed, this kind of fuzzer makes a static comparison and raises a warning when the results returned by the target and by the reference implementation differ. Instead, we prefer using an algorithm capable of iterating over a new round of data based on the evaluation of the preceding rounds' data, such as an evolutionary algorithm. In this paper, we propose a fuzzing methodology for smart card applications using a genetic algorithm (GA) to generate the data sent to the card and to evaluate the quality of this data. B. WSCT Framework WinSCard Tools, alias WSCT, is a framework foremost developed for Windows using the C# programming language. In the same way as Java is based on the execution of byte code independent of the machine thanks to the Java Virtual Machine (JVM), the C# compiler generates byte code intended to be executed by a Common Language Runtime (CLR). Fortunately, CLR implementations exist on most common systems: the .NET framework by Microsoft is dedicated to Windows operating systems starting with Windows XP, and the Mono platform by Xamarin is dedicated to Linux-based operating systems up to iOS and Android, even if Windows is also supported. Fig. 2. WSCT framework overview The WSCT framework itself is mainly composed of five modules (figure 2): - **Wrapper:** this module publishes a first basic API allowing access to the concrete PC/SC resource manager hosted on the machine. It is the most machine-dependent module, as it has to be adapted for each operating system. The standard implementation allows transparent access to PC/SC smartcard readers on Windows, Linux and MacOS by providing one binary for all.
By overriding this module, the entire framework can adapt itself to other systems (an Android NFC reader is a work in progress, for example) or to specific readers and probes. - **Core:** this module aims at providing a higher-level API allowing the communication with the wrapped readers and inserted smartcards. It provides useful interfaces and objects allowing observability of the communication between the card and the caller. It is the foundation of the genericness and re-usability of the tools developed upon WSCT. - **Stack:** it publishes a mechanism allowing the chaining of layers able to intercept and transform the data exchanged between the caller and the card. - **ISO 7816 library:** it mainly provides a partial implementation of common ISO 7816-4 normalized objects, such as C-APDU and R-APDU (normalized formats of command and response) or the SELECT instruction. - **Helpers:** it publishes a set of useful objects often needed when working with smart cards. For example the TLV format (tag length value), often used with smart cards, is defined there and can be used everywhere. These items have finally been made public [12]. A graphical user interface is also available to help create demonstrators. Several libraries have been built on top of WSCT to ease the work on concrete cards. The most used is the EMV library, which implements the main parts of the EMV specification and allows sending, observing and interpreting the exchanges relative to an EMV payment. The sources of these libraries are not published, to prevent public misuse. The added value of this framework compared to other existing APIs dedicated to smartcard communication, whatever the language, is certainly the passive observation of the communication that is natively provided, allowing the interpretation of transaction exchanges to be kept separate and independent from the functional flow, as illustrated by figure 3. This is why the fuzzing experimentations were realized using this framework. Fig. 3. WSCT observability (Core) and interception (Stack) C. Genetic algorithm Genetic algorithms determine the optimal value of a criterion by simulating the evolution of a population until survival of the best fitted individuals [13], [14]. The survivors are individuals obtained by crossing-over, mutation and selection of individuals from the previous generation. We think that a GA is a good candidate to find the optimal command data for two main reasons. The first one is that the evaluation criterion is not easy to differentiate; a GA is an optimization method that does not require differentiating the fitness function, but only evaluating it. Second, if the population is large enough considering the size of the search space, we have good guarantees that we will reach the optimal value of the fitness. A genetic algorithm is defined by considering five essential data: 1) genotype: it is composed of the candidate solution for the command resulting in a vulnerability. In our case, we focus on the GENERATE AC command of a payment applet (see Figure 4). 2) initial population: a set of individuals characterized by their genotypes. 3) fitness function: this function enables us to quantify the fitness of an individual to the environment by considering its genotype. In our case, it corresponds to the evaluation of the GENERATE AC command. For more details, see section IV-B. 4) operators on genotypes: they define alterations on genotypes in order to make the population evolve across generations. Three types of operators are used: - individual mutation: an individual's genes are modified in order to be better adapted to the environment.
We use the non-uniform mutation process, which randomly selects one chromosome $x_i$ and sets it equal to a non-uniform random number: \[ x'_i = \begin{cases} x_i + (b_i - x_i)f(G) & \text{if } r_1 < 0.5 \\ x_i - (x_i - a_i)f(G) & \text{if } r_1 \geq 0.5 \end{cases} \] where $f(G) = (r_2(1 - \frac{G}{G_{\text{max}}}))^b$. The values $r_1, r_2$ are random numbers in the interval [0,1]. The values $a_i$ and $b_i$ are the lower and upper bounds of chromosome $x_i$, $G$ is the current generation, $G_{\text{max}}$ is the maximum number of generations and $b$ is a shape parameter. - selection of an individual: individuals that are not adapted to the environment do not survive to the next generation. We used the normalized geometric ranking selection method, which defines the probability $P_i$ for each individual $i$ to be selected as follows: \[ P_i = \frac{q(1-q)^{r-1}}{1-(1-q)^n} \] where $q$ is the probability of selecting the best individual, $r$ is the rank of the individual (1 is the best) and $n$ is the size of the population. - crossing-over: two individuals can reproduce by combining their genes. We use the arithmetic crossover, which produces two complementary linear combinations of the parents: \[ X' = aX + (1-a)Y \] \[ Y' = (1-a)X + aY \] where $X$ and $Y$ are the genotypes of the parents, $a$ is a number in the interval [0,1] and $X'$ and $Y'$ are the genotypes of the linear combinations of the parents. 5) stopping criterion: this criterion stops the evolution of the population. We can consider the stability of the standard deviation of the evaluation criterion over the population, or set a maximal number of iterations (we used the latter, with the number of iterations equal to 2000).
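A minimal Python sketch of the three operators, transcribing the formulas above (for the downward mutation branch we use the standard form $x_i - (x_i - a_i)f(G)$, which keeps $x'_i$ within its bounds; the shape parameter default is an arbitrary choice):

```python
import random

def nonuniform_mutation(x, a, b, G, G_max, shape=2.0):
    """Non-uniform mutation of one chromosome x bounded by [a, b]."""
    r1, r2 = random.random(), random.random()
    f = (r2 * (1 - G / G_max)) ** shape  # shrinks as generations pass
    return x + (b - x) * f if r1 < 0.5 else x - (x - a) * f

def geometric_ranking(q, n):
    """Normalized geometric ranking: selection probability for ranks 1..n,
    where q is the probability of selecting the best individual."""
    norm = 1 - (1 - q) ** n
    return [q * (1 - q) ** (r - 1) / norm for r in range(1, n + 1)]

def arithmetic_crossover(X, Y, a):
    """Two complementary linear combinations of the parent genotypes."""
    Xp = [a * x + (1 - a) * y for x, y in zip(X, Y)]
    Yp = [(1 - a) * x + a * y for x, y in zip(X, Y)]
    return Xp, Yp
```

Note the annealing effect of $f(G)$: early generations take large mutation steps, while late generations fine-tune around the survivors.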
Given these five elements, the execution of the genetic algorithm is carried out in five steps: 1) definition of the initial population and computation of the fitness function (evaluation criterion) of each individual, 2) mutation and crossing-over of individuals, 3) selection of individuals, 4) evaluation of the individuals in the population, 5) back to step 2 if the stopping criterion is not satisfied. IV. ILLUSTRATIONS A. Experimental protocol Our GA-based testing framework is based on the architecture described in section III-B. We add to the logic responsible for the fuzzing the AForge.NET library [15], which offers an open-source implementation of genetic algorithms. We load, install and personalize onto a smart card an application that we developed. We implemented in this applet a part of the MasterCard M/Chip [16] specifications, whose transaction flow complies with the EMV standard. We simplified the development by keeping only the code strictly necessary to perform a transaction. Table I summarizes the parts of the experiment. --- <table> <thead> <tr> <th>Fuzzing framework</th> <th>WinSCard Tools</th> </tr> </thead> <tbody> <tr> <td>Framework language</td> <td>C#</td> </tr> <tr> <td>Smart card used</td> <td>JCOP 2.4.1 simulator</td> </tr> <tr> <td>Payment application</td> <td>MasterCard M/Chip 4</td> </tr> <tr> <td>GA library</td> <td>AForge.NET</td> </tr> <tr> <td>Population size</td> <td>10000</td> </tr> <tr> <td>Number of iterations</td> <td>5000</td> </tr> <tr> <td>Selection of best individuals</td> <td>Elitist selection</td> </tr> <tr> <td>Mutation method</td> <td>Permutation of two genes randomly selected</td> </tr> <tr> <td>Crossover method</td> <td>Crossover of two genes randomly picked</td> </tr> <tr> <td>Data represented by individuals</td> <td>Data of the command GENERATE AC (cf. Figure 4)</td> </tr> <tr> <td>Fitness function</td> <td>Evaluation of the response to GENERATE AC (CID, CVR . . . ) (cf. figure 5)</td> </tr> <tr> <td>Coefficient $\alpha$, score multiplier</td> <td>$\alpha = 1$ if the cryptogram is an AAC, $\alpha = 3$ if the cryptogram is an ARQC, $\alpha = 5$ if the cryptogram is a TC</td> </tr> </tbody> </table> TABLE I. SUMMARY OF THE EXPERIMENT The following section details the implementation of our approach on this particular applet. B. Experiment procedure In our GA-based approach, we follow the flow of a payment transaction and use the GA to generate the data field of the command GENERATE AC. Hence, the genome of an individual represents the data field of this command, i.e. the data related to CDOL 1 or CDOL 2 depending on the case. The CDOL 1 related data of the MasterCard M/Chip application is given in figure 4. The fitness evaluation of the command GENERATE AC is a combination of the type of cryptogram returned by the application, indicated by the data element Cryptogram Information Data (CID), and of the checks performed by the application, gathered in the data element Card Verification Results (CVR). Both CID and CVR are returned by the card in the response to the command GENERATE AC.
--- <table> <thead> <tr> <th>Data Element</th> <th>Tag</th> <th>Length</th> </tr> </thead> <tbody> <tr> <td>Amount, Authorized</td> <td>$9F02$</td> <td>6</td> </tr> <tr> <td>Amount, Other</td> <td>$9F05$</td> <td>6</td> </tr> <tr> <td>Terminal Country Code</td> <td>$9F1A$</td> <td>2</td> </tr> <tr> <td>Terminal Verification Results</td> <td>$95$</td> <td>5</td> </tr> <tr> <td>Transaction Currency Code</td> <td>$5F2A$</td> <td>2</td> </tr> <tr> <td>Transaction Date</td> <td>$9A$</td> <td>3</td> </tr> <tr> <td>Transaction Type</td> <td>$9C$</td> <td>1</td> </tr> <tr> <td>Unpredictable Number</td> <td>$9F37$</td> <td>4</td> </tr> <tr> <td>Terminal Type</td> <td>$9F35$</td> <td>1</td> </tr> <tr> <td>Data Authentication Code</td> <td>$9F45$</td> <td>2</td> </tr> <tr> <td>ICC Dynamic Number</td> <td>$9F4C$</td> <td>8</td> </tr> <tr> <td>CVM Results</td> <td>$9F34$</td> <td>3</td> </tr> </tbody> </table> Fig. 4. MasterCard M/Chip CDOL 1 related data The MasterCard M/Chip CVR comprises six bytes: the first three bytes are used for information only, while the last three bytes are used for decision making. We detail in figure 5 the bits used by the fitness function. The fitness score is incremented by one for each flagged bit in bytes 1 to 3 that is set to 0, and incremented by one for each flagged bit in bytes 4 to 6 that is set to 1. This score is then multiplied by a coefficient $\alpha$ whose value depends on the CID value, i.e. on the cryptogram returned: 1 if the cryptogram is an AAC, 3 if it is an ARQC and 5 if it is a TC. On the fuzzing framework side, we enriched the AForge.NET library with a new object type, Chromosome, capable of generating the data field of the command GENERATE AC based on the CDOL 1 or CDOL 2, and of performing crossovers and mutations. We also developed an evaluation function and a fitness function dedicated to this type of chromosome.
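The scoring rule above can be sketched as follows. The bit masks are hypothetical placeholders: the actual flagged CVR bits come from the M/Chip specification (figure 5) and are not reproduced in the text, so only the scoring structure is shown here.

```python
# Hypothetical masks: one flagged bit per CVR byte, for illustration only.
INFO_MASKS = [0x80, 0x40, 0x20]      # bytes 1-3: score when the bit is cleared
DECISION_MASKS = [0x01, 0x02, 0x04]  # bytes 4-6: score when the bit is set
ALPHA = {"AAC": 1, "ARQC": 3, "TC": 5}

def fitness(cvr: bytes, cryptogram_type: str) -> int:
    """Score a GENERATE AC response: +1 per flagged information bit cleared
    (CVR bytes 1-3), +1 per flagged decision bit set (CVR bytes 4-6), then
    multiply by alpha according to the cryptogram type (CID)."""
    score = sum(1 for i, m in enumerate(INFO_MASKS) if cvr[i] & m == 0)
    score += sum(1 for i, m in enumerate(DECISION_MASKS) if cvr[3 + i] & m)
    return score * ALPHA[cryptogram_type]
```

The multiplier encodes the search direction: a TC (offline approval) is worth five times an AAC (decline), so the population drifts toward data that gets transactions approved.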
For this fuzzing session, we set a simple goal in order to validate our approach: fuzz the command GENERATE AC, responsible for the approval of the transaction by the application. The experiment procedure is the following: 1) Definition of the coefficients used for the score evaluation: $\alpha = 1$, $\beta = 3$, $\gamma = 5$. 2) Creation of a population comprising $n$ individuals. 3) For each iteration $i$, perform a payment transaction with the $n$ individuals: a) send the commands SELECT, GET PROCESSING OPTIONS, READ RECORD and GET DATA with their default values, b) randomly, send the PIN code with the command VERIFY (requires prior knowledge of the PIN code), c) the first GENERATE AC command is sent to request a TC with the data of the individual coding the CDOL 1 related data. If an ARQC is returned by the application, a second GENERATE AC is sent requesting a TC. After many fuzzing sessions – dozens of hours and millions of transactions – the proposed approach did not allow us to observe any reset of the offline counters on our MasterCard M/Chip implementation, but we observed the approval of many transactions above the limit of consecutive offline transactions. This proves the presence of an anomaly in our implementation of the payment application, which was the goal of our fuzzing framework and of our experiment. Those illegitimate transactions are recorded so that we have all the necessary elements for further analysis. However, it is quite difficult to go back to the sequence of commands that led to this anomaly. For instance, we detected during a fuzzing session that some transactions got approved offline beyond the 10,000th transaction. Hence, it is very difficult to know exactly the context of the anomaly and how the preceding transactions influenced the result. V.
CONCLUSION AND PERSPECTIVES We proposed in this paper a new fuzzing technique, based on a genetic algorithm, to detect vulnerabilities or conformance problems with respect to specifications. This technique allows us to optimize the search for commands that lead to a problem. Experimental results on a real applet showed interesting outcomes, such as the observation of different illegitimate transactions. We observe that it is difficult to identify exactly the commands leading to a problem, but vulnerabilities are correctly identified. Perspectives of this study concern the definition of properties on a transaction in order to better understand the situations resulting in a vulnerability or a conformance problem. REFERENCES Fig. 6. Use of the JCOP simulator to perform fuzzing on smart card applications
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-01019978/file/SHPCS_14.pdf", "len_cl100k_base": 5855, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 23597, "total-output-tokens": 6801, "length": "2e12", "weborganizer": {"__label__adult": 0.0006380081176757812, "__label__art_design": 0.0006685256958007812, "__label__crime_law": 0.0016679763793945312, "__label__education_jobs": 0.0005083084106445312, "__label__entertainment": 0.0001418590545654297, "__label__fashion_beauty": 0.0002913475036621094, "__label__finance_business": 0.00079345703125, "__label__food_dining": 0.00045180320739746094, "__label__games": 0.0010890960693359375, "__label__hardware": 0.00881195068359375, "__label__health": 0.00102996826171875, "__label__history": 0.0004100799560546875, "__label__home_hobbies": 0.0001857280731201172, "__label__industrial": 0.0011796951293945312, "__label__literature": 0.0003025531768798828, "__label__politics": 0.0005168914794921875, "__label__religion": 0.0006017684936523438, "__label__science_tech": 0.3974609375, "__label__social_life": 0.0001024007797241211, "__label__software": 0.01458740234375, "__label__software_dev": 0.56689453125, "__label__sports_fitness": 0.000400543212890625, "__label__transportation": 0.00112152099609375, "__label__travel": 0.0002008676528930664}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28894, 0.02789]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28894, 0.58565]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28894, 0.88037]], "google_gemma-3-12b-it_contains_pii": [[0, 1067, false], [1067, 5670, null], [5670, 11760, null], [11760, 15226, null], [15226, 19288, null], [19288, 24773, null], [24773, 27947, null], [27947, 28894, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1067, true], [1067, 5670, null], [5670, 11760, null], 
[11760, 15226, null], [15226, 19288, null], [19288, 24773, null], [24773, 27947, null], [27947, 28894, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28894, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28894, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28894, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28894, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28894, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28894, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28894, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28894, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28894, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28894, null]], "pdf_page_numbers": [[0, 1067, 1], [1067, 5670, 2], [5670, 11760, 3], [11760, 15226, 4], [15226, 19288, 5], [19288, 24773, 6], [24773, 27947, 7], [27947, 28894, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28894, 0.1761]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
ea3aaed81e3ea48198aa8fa97865a2b629c7bb29
[REMOVED]
{"Source-Url": "https://www.iitis.pl/sites/default/files/pubs/The%20Random%20Neural%20Network%20as%20a%20Bonding%20Model%20for%20Software%20Vulnerability%20Prediction.pdf", "len_cl100k_base": 5182, "olmocr-version": "0.1.50", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 40977, "total-output-tokens": 11756, "length": "2e12", "weborganizer": {"__label__adult": 0.00045371055603027344, "__label__art_design": 0.0004253387451171875, "__label__crime_law": 0.0007839202880859375, "__label__education_jobs": 0.0007023811340332031, "__label__entertainment": 0.00012069940567016602, "__label__fashion_beauty": 0.0002065896987915039, "__label__finance_business": 0.0003421306610107422, "__label__food_dining": 0.0004165172576904297, "__label__games": 0.0009946823120117188, "__label__hardware": 0.0012636184692382812, "__label__health": 0.0008907318115234375, "__label__history": 0.0002613067626953125, "__label__home_hobbies": 0.00011360645294189452, "__label__industrial": 0.0004954338073730469, "__label__literature": 0.00038313865661621094, "__label__politics": 0.00029158592224121094, "__label__religion": 0.00046443939208984375, "__label__science_tech": 0.10614013671875, "__label__social_life": 0.00011086463928222656, "__label__software": 0.01593017578125, "__label__software_dev": 0.8681640625, "__label__sports_fitness": 0.0002834796905517578, "__label__transportation": 0.0004591941833496094, "__label__travel": 0.000186920166015625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 42585, 0.06906]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 42585, 0.29117]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 42585, 0.85226]], "google_gemma-3-12b-it_contains_pii": [[0, 599, false], [599, 3083, null], [3083, 6194, null], [6194, 9346, null], [9346, 12738, null], [12738, 15893, null], [15893, 18860, null], [18860, 20445, null], 
[20445, 21585, null], [21585, 22608, null], [22608, 25529, null], [25529, 28402, null], [28402, 31160, null], [31160, 34028, null], [34028, 36965, null], [36965, 39816, null], [39816, 42585, null]], "google_gemma-3-12b-it_is_public_document": [[0, 599, true], [599, 3083, null], [3083, 6194, null], [6194, 9346, null], [9346, 12738, null], [12738, 15893, null], [15893, 18860, null], [18860, 20445, null], [20445, 21585, null], [21585, 22608, null], [22608, 25529, null], [25529, 28402, null], [28402, 31160, null], [31160, 34028, null], [34028, 36965, null], [36965, 39816, null], [39816, 42585, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 42585, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 42585, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 42585, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 42585, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 42585, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 42585, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 42585, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 42585, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 42585, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 42585, null]], "pdf_page_numbers": [[0, 599, 1], [599, 3083, 2], [3083, 6194, 3], [6194, 9346, 4], [9346, 12738, 5], [12738, 15893, 6], [15893, 18860, 7], [18860, 20445, 8], [20445, 21585, 9], [21585, 22608, 10], [22608, 25529, 11], [25529, 28402, 12], [28402, 31160, 13], [31160, 34028, 14], [34028, 36965, 15], [36965, 39816, 16], [39816, 42585, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 42585, 0.0203]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
c871d4cd49a76bd7e23023893b535d37af56be32
NON-EXHAUSTIVE JOIN ORDERING SEARCH ALGORITHMS FOR LJQO

Tarcizio Alexandre Bini, Adriano Lange, Marcos Sfair Sunye, Fabiano Silva and Eduardo Cunha de Almeida
Informatics Department, Federal University of Paraná, Centro Politécnico, Jardim das Américas, Curitiba PR, Brazil

Keywords: Query optimization, Join ordering, Randomized algorithms, Genetic algorithms.

Abstract: In relational database systems, the optimization of select-project-join queries is a combinatorial problem. The use of exhaustive search methods is prohibitive because of the exponential growth of the search space. Randomized searches are used to find near-optimal plans in polynomial time. In this paper, we investigate the large join query optimization (LJQO) problem by extending randomized algorithms and implementing a 2PO algorithm as a query optimizer in a popular open-source DBMS. We compare our solution with an implementation of a genetic algorithm. Through a multidimensional test schema, we discuss pros and cons of the behavior of these algorithms. Our results show that the 2PO algorithm runs fast and that the costs of its generated plans are, in most cases, better than those of the genetic algorithm.

1 INTRODUCTION

Over the last 40 years, database management systems (DBMS) have experienced an enormous workload shift, from transaction processing at kilobyte scale to real-time data analysis at petabyte scale. In the data warehouse context, column data storage, as in the C-Store DBMS (Stonebraker et al., 2005), and map-reduce implementations like Hadoop/Hive (Thusoo et al., 2009) are becoming increasingly common in both commercial and academic areas. Storing data in columns in a traditional DBMS is an onerous task that may not bring major benefits (Abadi et al., 2008). In this storage architecture, each attribute is stored in a separate relation (Bruno, 2009). Thus, even a simple query that involves a few attributes demands several relational joins to compose its result.
The costs expended in performing a large number of joins cause performance degradation in the DBMS. In this context, the classic problem of join-ordering optimization is highly relevant. Finding the optimal join order that composes the execution plan is NP-hard (Ibaraki and Kameda, 1984). Dynamic programming techniques (Selinger et al., 1979) remain restricted to a small number of relations, approximately ten; above this limit, it is considered a large join query optimization (LJQO) problem. Thereby, an optimization technique must consider a trade-off between generating an optimal plan and executing the query. Randomized algorithms can, on average, obtain good execution plans in polynomial time (Swami and Gupta, 1988; Ioannidis and Kang, 1991; Bennett et al., 1991; Louis and Zhang, 1998; Dong and Liang, 2007). However, an undesirable characteristic is the considerable cost variation (i.e., instability) of the plans generated for the same query (Bini et al., 2009). Such instability is unacceptable in production applications, especially when the response time of a request is controlled. Furthermore, inserting a degree of uncertainty into the estimated time for a requested service or result can compromise the estimated time of a whole chain of processes depending on it. Thus, when we apply randomized algorithms to query optimization, the stability of the generated plans is a decisive factor that must be considered.

In this paper, we address the LJQO problem through our implementation of the 2PO (Two-Phase Optimization) randomized algorithm. To evaluate the quality of the generated plans and the execution time of the optimizers, we used a popular open-source DBMS (PostgreSQL, 2010). We created a wide range of select-project-join queries, varying the join graph complexity, the number of joins and the cardinality of the involved relations. In this context, we compare our solution with an implementation of a genetic algorithm.
The results demonstrate the feasibility of applying our optimizer in a commercial DBMS. Considering the execution time of the optimizers, 2PO outperformed the genetic algorithm in all scenarios. The 2PO optimizer also showed better quality of generated plans (lower computational cost) in most of the analyzed queries.

This paper is organized as follows. Section 2 describes how the possible plans of a query can be organized, considering important concepts of query optimization; moreover, we present some randomized algorithms implemented and analyzed in this study. Section 3 describes the test methodology applied for evaluating the query optimizers. Section 4 presents the results obtained in our experiments. Finally, Section 5 concludes and presents future work.

2 BACKGROUND AND RELATED WORK

The query optimizer is the DBMS component that transforms a query written in a declarative language, such as SQL, into a procedural query execution plan (QEP). There may be several QEPs corresponding to the same query: they are identical in result, but distinct in computational cost, e.g., CPU time sharing, primary memory usage and disk access (Ibaraki and Kameda, 1984; Swami and Gupta, 1988). The set of all QEPs that produce the same result is called the solution space or search space; it is defined as the set of all possible binary join trees (BJTs). The query optimizer's task is to determine the QEP with the lowest cost according to a cost function or model.

A common way to express graphically which relations mentioned in the query have join predicates is an undirected graph called the join graph. This structure can determine the complexity of finding the optimal join order. Nodes represent the relations in a query; edges represent join predicates between their respective relations. Regarding the possible forms of a join graph, five types are found in the literature: chain, star, cycle, grid and clique (Steinbrunn et al., 1997; Vance and Maier, 1996; Shapiro et al., 2001; Neumann, 2009).
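The connectivity of each shape can be summarized by its edge count. Using the counts quoted later in this paper (chain N−1, cycle N, grid 2N−3, clique N(N−1)/2) plus the usual hub-and-spoke count for a star (N−1), a small illustrative helper (ours, not part of the paper's implementation) would be:

```c
#include <string.h>

/* Number of join predicates (edges) for each join-graph shape over n
   relations, using the edge counts quoted in this paper: chain n-1,
   cycle n, grid 2n-3, clique n(n-1)/2; a star with one hub also has
   n-1 edges.  Returns -1 for an unknown shape name. */
long join_graph_edges(const char *shape, long n)
{
    if (strcmp(shape, "chain") == 0)  return n - 1;
    if (strcmp(shape, "star") == 0)   return n - 1;
    if (strcmp(shape, "cycle") == 0)  return n;
    if (strcmp(shape, "grid") == 0)   return 2 * n - 3;
    if (strcmp(shape, "clique") == 0) return n * (n - 1) / 2;
    return -1;
}
```

For the query sizes used later in the paper, this makes the connectivity gap concrete: at N = 100, a chain has 99 join predicates while a clique has 4950.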
Next, we describe the randomized and genetic algorithms, which can be an alternative to exhaustive searches when applied to the LJQO problem.

2.1 Randomized Algorithms

The input of a randomized algorithm is another type of graph, which represents the complete search space of a problem. Each node is a solution, or state, with which a cost defined by a specific cost function is associated. Each edge is a possible move defined by transformation rules, which allow one state to be transformed into another. Several approaches have been presented (Steinbrunn et al., 1997; Swami and Gupta, 1988; Ioannidis and Kang, 1990) to address the LJQO problem.

In the Iterative Improvement (II) algorithm, QEPs can be represented as nodes in an undirected graph. Such nodes are interconnected by edges that represent the transformations between these plans. The objective is to perform several moves between the graph nodes, searching for better solutions (execution plans) with respect to their cost. II consists of several local optimizations, started randomly from different points of the search space, which are called initial states (Ioannidis and Kang, 1991). From each initial state, the algorithm traverses the search space randomly, accepting a move only if its cost is lower than the current one. This local optimization process is repeated until no further improvement can be found or a stopping condition is satisfied.

Another randomized algorithm applied to the query optimization problem is Simulated Annealing (SA) (Ioannidis and Wong, 1987; Swami and Gupta, 1988). This solution was derived by analogy with the process of annealing of solids; inspired by this physical process, terminologies such as temperature and freezing condition are used to guide the optimization process. Unlike II, which uses several random initial states, SA starts from a single state. During the optimization process, SA performs "random walks", always accepting neighbor states with lower costs.
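The improving-move rule of II (restart from random initial states, move only when the cost strictly decreases) can be sketched as follows. The cost function, the neighborhood (±1 over integer states) and all names are illustrative stand-ins, not the paper's plan-cost model:

```c
#include <stdlib.h>

/* Toy cost function standing in for a plan-cost model: minimized at
   state 42.  Purely illustrative. */
static long toy_cost(long state) { return labs(state - 42); }

/* One local optimization in the spirit of II: from an initial state,
   repeatedly propose a random neighbor (state +/- 1 here) and move
   only if the cost strictly decreases. */
long local_improve(long state, int max_steps)
{
    for (int i = 0; i < max_steps; i++) {
        long neighbor = state + (rand() % 2 ? 1 : -1);
        if (toy_cost(neighbor) < toy_cost(state))
            state = neighbor;
    }
    return state;
}

/* II proper: restart from several random initial states and keep the
   best local minimum found. */
long iterative_improvement(int starts, int max_steps)
{
    long best = rand() % 1000;
    for (int s = 0; s < starts; s++) {
        long candidate = local_improve(rand() % 1000, max_steps);
        if (toy_cost(candidate) < toy_cost(best))
            best = candidate;
    }
    return best;
}
```

On this convex toy cost the descent always reaches the global minimum; on a real plan-cost landscape each restart only reaches a local minimum, which is precisely why II uses many starts.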
However, unlike in II, states with higher costs can also be accepted, with a certain probability. Ioannidis and Kang (Ioannidis and Kang, 1990) presented the 2PO (Two-Phase Optimization) algorithm to address the LJQO problem by combining the II and SA algorithms. In the first phase, II is executed for a small period of time, performing some local optimizations. This covers most of the search space and quickly reaches a state with low cost (called a local minimum). The best local minimum found by the II algorithm is used as the starting point for SA in the second phase. Then, the solution space is searched again for a state of even lower cost: plans cheaper than the local minimum produced by II can be reached quickly through SA.

2.2 Genetic Algorithms

Genetic algorithms are search methods based on genetics and the natural selection process. An important characteristic of this class of algorithms is that they do not work with a single solution, but with a set of solutions called a population. These solutions are represented by chromosomes, and each chromosome is composed of genes that represent each part of the solution. In join optimization, chromosomes correspond to execution plans, and the cost associated with a chromosome corresponds to its degree of adaptation to the environment. As in the search space of II and SA, the environment corresponds to the set of possible solutions.

A genetic algorithm starts by generating the first population of individuals randomly, with a fixed number of chromosomes. Chromosomes are selected from the population to become parents, based on fitness (selection) (Owais et al., 2005). The reproduction process occurs between pairs of selected chromosomes, through recombination between them (crossover), which produces the offspring. Some fraction of the population can be randomly chosen to have a gene, or a small set of genes, mutated (mutation).
The new population becomes the new generation, and the process repeats itself (Bennett et al., 1991). Iterations are performed until no further improvements in the quality of the population are observed, the demanded number of generations is reached, or the desired solution is found.

In order to develop and analyze our implementation of the 2PO optimizer, we used a popular open-source DBMS (PostgreSQL, 2010). This DBMS makes use of a genetic algorithm approach to enumerate possible BJTs. This algorithm, called GEQO (Genetic Query Optimization), was presented by Martin S. Utesch (PostgreSQL, 2010) in 1997 as an alternative for the LJQO problem. More details about GEQO can be obtained in (Bini et al., 2009).

3 TEST METHODOLOGY

In this section, we detail our methodology for evaluating our implementation of the 2PO optimizer. First, we describe how the database was generated; then, how we developed the query set for our experiments.

3.1 Database

The database schema and the query set are based on the systematic and multidimensional model proposed by Vance and Maier (Vance and Maier, 1996) and by Shapiro et al. (Shapiro et al., 2001). One of its advantages is its independence from several construction parameters, such as the number of relations and their cardinalities. However, some parameters originally used by these authors were extended or reconfigured in order to observe the behavior of the algorithms applied to the LJQO problem. These reconfigurations were based on methodologies proposed by Swami and Gupta (Swami and Gupta, 1988), Ioannidis and Kang (Ioannidis and Kang, 1990) and Steinbrunn et al. (Steinbrunn et al., 1997). The database is populated synthetically in accordance with the requirements of our experiments. Relations are created with cardinalities ranging from \(2^5\) to \(2^{19}\), so that \(\log_2(2^{19} / 2^5) = 14\), with geometric mean \(2^{12}\) (Vance and Maier, 1996).
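One simple way to obtain cardinalities spanning \(2^5\) to \(2^{19}\) with geometric mean exactly \(2^{12}\) is to pick exponents in symmetric pairs around 12. The pairing scheme below is our own simplification of the schema just described (it assumes an even number of relations), not the authors' generator:

```c
/* Illustrative generation of relation cardinalities: exponents are
   chosen in symmetric pairs 12-d and 12+d with d cycling through 0..7,
   so cardinalities span 2^5..2^19 and, for an even n, their geometric
   mean is exactly 2^12. */
void make_cardinalities(long *card, int n)
{
    for (int i = 0; i < n; i++) {
        int d = (i / 2) % 8;                  /* offset from 12       */
        int ex = (i % 2 == 0) ? 12 - d : 12 + d;
        card[i] = 1L << ex;                   /* cardinality 2^ex     */
    }
}
```

Each pair contributes \( (12-d) + (12+d) = 24 \) to the exponent sum, so the mean exponent over the whole set is 12, which is the geometric-mean condition in log space.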
All relations follow the same basic layout, composed of three numerical attributes: a primary key attribute (pk) and two foreign key attributes (fk1, fk2). Tuples are inserted into the relations with values based on their cardinality. For the pk attribute, values are assigned sequentially from 1 to the relation cardinality. For the other attributes, fk1 and fk2, values are inserted at random, also ranging from 1 to the relation cardinality, using a uniform distribution. Updates are never performed during query submission.

3.2 Generated Queries

Two steps are required to generate the query set: the selection of relation sets and the combination of these relations in the form of SQL queries. These steps are separated for organization reasons and code reuse.

Selection. In the first step, 12 relation sets are selected deterministically according to the combination of two parameters: the number of relations and LOGRATIO (Vance and Maier, 1996; Shapiro et al., 2001). The number of relations \(N\) involved in the generated queries is respectively 10, 20, 50, and 100. LOGRATIO \(\mu\) is given by the logarithmic difference between the relation with the greatest cardinality \(R_N\) and the relation with the smallest cardinality \(R_1\), namely \(\log_2(|R_N|/|R_1|)\), and takes the values 6, 12 and 14. In each generated relation set, all selected relations are different, although their cardinalities may coincide. For all relation sets, the geometric average of the cardinalities was fixed at \(2^{12}\).

Combination. For each of the 12 selected sets, 40 queries are randomly generated: 10 for each chosen join graph type (chain, cycle, grid and clique) (Steinbrunn et al., 1997). Thus, the total number of generated queries is 480. Queries are generated as follows: from a relation set \( (R_1 \rightarrow R_N) \) selected in the first step, each derived query follows a random arrangement of these relations \((T_1 \rightarrow T_N)\).
From this arrangement, relations are "accommodated" in the SQL structures corresponding to each join graph.

4 EXPERIMENTS

This section presents the comparison between our implementation of the 2PO optimizer and an implementation of the genetic algorithm. Results are presented using the scaled cost of the plans, which is the ratio between a plan's cost and the cost of the best plan found for the same query, regardless of the optimizer. Since our test methodology is aimed at a comparison among optimizers, the total execution time of the obtained plans was not computed. The plans obtained with the genetic algorithm, in some cases, presented extremely high costs; executing them was thus infeasible, given our computational resources and time.

4.1 General Setup

Our experiments were conducted on a computer equipped with an Intel Xeon Quad-Core 2 GHz/64-bit processor with 12 MB of L2 cache and 2 GB/667 MHz of RAM. As secondary memory, we used two SATA disks with 250 GB each, operating in RAID 0. The operating system was GNU/Linux, kernel 2.6.24 x86-64. The DBMS used in the experiments was PostgreSQL 8.3. We used the plugin technique to compile code parts separately and then incorporate them into the DBMS as a library. Thus, we created the LJQO plugin, which includes our implementation of the 2PO algorithm. This plugin was equipped with a component to measure the algorithms' execution time. No changes to other DBMS components were required.

There are several configuration and control parameters that must be considered in the 2PO implementation; these parameters are further detailed in (Ioannidis and Kang, 1990). We recall that PostgreSQL implements a genetic algorithm called GEQO. The GEQO module, in its default configuration, is designed to generate a nearly constant number of plans for queries over seven relations. 2PO, in turn, tends to increase the optimization effort as the number of relations increases.
To eliminate this disadvantage, the GEQO module was reconfigured to generate the same number of plans as the 2PO optimizer. This configuration was performed individually for each query, considering the number of plans generated by 2PO in each case. In the experiments, our implementation of the GEQO module is called GA (Genetic Algorithm).

4.2 Performance Evaluation

The performance evaluation aims to verify the average time spent optimizing the queries. For this experiment, we consider 300 optimizations: 10 optimizations for each of the 30 queries used, regardless of their LOGRATIO values. Figure 1 presents the average optimization time grouped by the number of relations in each query. In queries with 10 relations (Figure 1(a)), the difference between the 2PO and GA optimizers was not significant. Although the number of plans generated by GA was approximately the same as for 2PO, its total optimization time was relatively higher for chain, grid and cycle queries. In queries with 100 relations (Figure 1(d)) and join graphs like chain, cycle and grid, GA was almost 2 times worse than 2PO. On the other hand, in clique queries, its optimization time was very close to that presented by 2PO.

4.3 Costs of Generated Plans

Our final analysis considers the costs of the plans generated by the GA and 2PO optimizers. Figure 2 presents the average and the variation of the scaled costs obtained by the optimizers, considering the number of relations and the join graph of each query. In all graphs presented in this figure, scaled costs are arranged in logarithmic scale of base 10. It is observed that both algorithms alternate in generating the best average scaled cost. In chain, cycle and grid queries with 20 or more relations, 2PO produced plans of superior quality (lower costs) relative to those presented by GA, as we can see in Figure 2 (b), (c) and (d).
In this case, it is also observed that GA showed an accentuated degradation in the quality of its plans as the number of relations increased. In two particular cases, chain and cycle queries with 100 relations (Figure 2 (d)), the difference in quality between the plans obtained by 2PO and GA was significantly large. In Figure 2(b) to Figure 2(d), we can verify that the degradation of the plans obtained by GA is strongly correlated with the number of edges of each join graph: the fewer the edges, the worse the quality of the plans obtained by GA. Concerning the chain (\(N-1\) edges) and cycle (\(N\) edges) queries, we can observe a slight improvement in quality; in grid queries (\(2N-3\) edges), the improvement is more evident. Finally, clique queries, with the maximum number of edges \(\left(\frac{N(N-1)}{2}\right)\) and obviously the join graph of greatest connectivity, presented the best scaled cost for GA, better than that presented by 2PO.

The quality of the plans presented by 2PO was superior in almost all cases. However, two exceptions were identified: the first refers to queries with 10 relations, independent of the join graph; the second covers all clique queries, regardless of the number of relations.

5 CONCLUSIONS

Join optimization is the part of query processing with a significant impact on relational DBMS efficiency. This paper addressed this problem, with emphasis on LJQO. We presented an implementation of the 2PO algorithm and compared it with an implementation of a genetic algorithm. Our solution proved to be more robust in most cases, mainly in queries with a large number of relations and low connectivity of their join graphs. In this context, 2PO presented better plans than GA and still had a feasible average optimization time. Our results are relevant in the current context, since there is a growing demand for DBMSs able to answer complex queries.
Such queries are common in tools that generate management reports (OLAP, Online Analytical Processing) and in deductive tools for data mining. Another potential source of complex queries is column-oriented DBMSs (Stonebraker et al., 2005). In such applications, each attribute stored in a traditional DBMS is converted into an individual relation, so that queries read only the attributes that are required. The large number of joins required to compose the results of queries involving several attributes in the SELECT clause is thus evident. There are still issues that need to be evaluated in detail, such as the use of aggregates, sorting (ORDER BY), and recursive queries. Even so, it is expected that these results can serve as a basis for improving the algorithms and for developing new optimization approaches. Finally, the 2PO algorithm is available as a plugin called LJQO. This plugin can be obtained via the Internet\textsuperscript{3} by those interested in making improvements to, or analyses of, our solution.

\textsuperscript{3}http://git.c3sl.ufpr.br/gitweb?p=lbd/ljqo.git;a=summary

REFERENCES
{"Source-Url": "http://www.scitepress.org/Papers/2011/34265/34265.pdf", "len_cl100k_base": 4425, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 21010, "total-output-tokens": 6053, "length": "2e12", "weborganizer": {"__label__adult": 0.000362396240234375, "__label__art_design": 0.0002894401550292969, "__label__crime_law": 0.0004601478576660156, "__label__education_jobs": 0.0009675025939941406, "__label__entertainment": 8.630752563476562e-05, "__label__fashion_beauty": 0.00020170211791992188, "__label__finance_business": 0.00067901611328125, "__label__food_dining": 0.0004210472106933594, "__label__games": 0.0005550384521484375, "__label__hardware": 0.0008764266967773438, "__label__health": 0.0009484291076660156, "__label__history": 0.00031447410583496094, "__label__home_hobbies": 0.00012099742889404296, "__label__industrial": 0.0006365776062011719, "__label__literature": 0.00030493736267089844, "__label__politics": 0.0002923011779785156, "__label__religion": 0.0004279613494873047, "__label__science_tech": 0.1055908203125, "__label__social_life": 0.00010657310485839844, "__label__software": 0.018280029296875, "__label__software_dev": 0.8671875, "__label__sports_fitness": 0.0002999305725097656, "__label__transportation": 0.0005397796630859375, "__label__travel": 0.0002193450927734375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24211, 0.03601]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24211, 0.56423]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24211, 0.91165]], "google_gemma-3-12b-it_contains_pii": [[0, 3893, false], [3893, 8797, null], [8797, 13372, null], [13372, 15205, null], [15205, 20153, null], [20153, 24211, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3893, true], [3893, 8797, null], [8797, 13372, null], [13372, 15205, null], [15205, 20153, null], 
[20153, 24211, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24211, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24211, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24211, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24211, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24211, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24211, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24211, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24211, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24211, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24211, null]], "pdf_page_numbers": [[0, 3893, 1], [3893, 8797, 2], [8797, 13372, 3], [13372, 15205, 4], [15205, 20153, 5], [20153, 24211, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24211, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
5c073a75b28d844069d723338e55fa5d87dd6d82
Elementary transformation analysis for Array-OL

Paul Feautrier

May 2007

HAL Id: hal-02102502, https://hal-lara.archives-ouvertes.fr/hal-02102502, submitted on 17 Apr 2019.

Abstract

Array-OL is a high-level specification language dedicated to the definition of intensive signal processing applications. Several tools exist for implementing an Array-OL specification as a data-parallel program. While Array-OL can be used directly, it is often convenient to be able to deduce part of the specification from a sequential version of the application. This paper proposes such an analysis and examines its feasibility and its limits.

Keywords: Array-OL, multidimensional signal processing, program analysis
1 Introduction

In the Array-OL formalism [1, 2], a program is a network of processes which communicate through shared arrays. A process is made of one or more parallel loops. At each iteration of these loops, a task (or elementary transform) is executed. The elementary transform may itself contain one or more loops, which are executed sequentially. The execution of an elementary task can be decomposed into three steps:

- Move portions of the input array(s) (regions) to the local memory of the processor executing the task.
- Execute the elementary transform and generate portions of the output array(s).
- Move the results to the output array(s).

In order to simplify code generation, the input and output regions must move uniformly across the shared arrays. It is admissible for each elementary transform to use only a subset of regularly spaced entries in the input and output regions. In the present version of the software, regions must not overlap, as this would preclude parallel execution of the outer loops. The useful elements of a region are collected in a pattern, which must be a rectangular parallelepiped of fixed size.

The Array-OL formalism may be used directly: the programmer is responsible for constructing the elementary transform, identifying the input and output regions, checking parallelism and specifying the region parameters. Another possibility is to infer the Array-OL specification from a sequential version of the program. This requires the solution of three problems:

- Rewriting the sequential program in such a way that the outer loops have no dependences.
- Deducing the shape and size of the regions from an analysis of the array subscript functions.
- Rewriting the sequential code by substituting pattern accesses for the original array accesses.

This note is dedicated to a proposal for the solution of the second and third problems.
The assumption is that one is given the sequential code, together with a list of input and output arrays, and an indication of which loop(s) are to be considered as the outer (repetition) loop(s).

2 Paving

Let $A$ be an input or output array and let its occurrences in the sequential code be numbered from 1 to $N$. Let $r$ be the counter(s) of the repetition loop(s), and let $j^k$ be the counter(s) of the inner loop(s) that surround occurrence $k$ of $A$. Let $e^k(\mathbf{r}, \mathbf{j}^k)$ be its subscript function; $e^k$ is a vector function whose dimension is the rank of $A$. To be amenable to an Array-OL implementation, the subscript function $e^k$ must be affine in $\mathbf{r}$ and $\mathbf{j}^k$. A convenient way of checking this property consists in computing the two Jacobian matrices:
\[ P^k = \left( \frac{\partial e^k}{\partial r^\alpha} \right), \quad B^k = \left( \frac{\partial e^k}{\partial j^\beta} \right), \]
checking that they do not depend on \( r \) or \( j^k \), and verifying the identity:
\[ e^k(r, j^k) = P^k r + B^k j^k + e^k(0, 0). \]
In Array-OL terminology, \( P^k \) is the paving matrix, and \( e^k(0, 0) \) is the origin of the paving. The elements of these entities may be numbers, or they may depend on constants which must be given numerical values just before code generation. References with different paving matrices may be separated by an arbitrary distance in the source or target array; it is not possible to group them efficiently, so they must be implemented as separate channels. In the following example:

```c
void myTE(int in[8][111], int out[7][11])
{
    for (int i = 0; i < 7; i++) {          /* repetition (TE) loop */
        for (int k = 0; k < 11; k++) {
            int S = 0;
            for (int j = 0; j < 100; j++) {
                S += in[0][j + 11] * in[i + 1][k + j];
            }
            out[i][k] = S;
        }
    }
}
```

there are two references to \( \text{in} \), with respective subscript functions \( e^1(i, k, j) = \left( \begin{array}{c} 0 \\ j + 11 \end{array} \right) \) and \( e^2(i, k, j) = \left( \begin{array}{c} i + 1 \\ k + j \end{array} \right) \).
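Since an affine subscript function is fully determined by its value at the origin and its Jacobian columns, the affinity check above can be performed numerically by finite differences. The sketch below (names and structure are ours) does this for the second reference, in[i+1][k+j], taking i as the repetition counter and (k, j) as inner counters:

```c
/* Subscript function of the second reference, e2(i,k,j) = (i+1, k+j). */
void e2(long i, long k, long j, long out[2])
{
    out[0] = i + 1;
    out[1] = k + j;
}

/* Recover the Jacobian columns by finite differences (exact here,
   since e2 is affine) and verify e(r,j) = P*r + B*j + e(0,0). */
int check_affine_e2(long i, long k, long j)
{
    long e00[2], er[2], ek[2], ej[2], e[2];
    e2(0, 0, 0, e00);
    e2(1, 0, 0, er);  e2(0, 1, 0, ek);  e2(0, 0, 1, ej);
    long P[2]  = { er[0] - e00[0], er[1] - e00[1] };  /* paving column   */
    long Bk[2] = { ek[0] - e00[0], ek[1] - e00[1] };  /* B column for k  */
    long Bj[2] = { ej[0] - e00[0], ej[1] - e00[1] };  /* B column for j  */
    e2(i, k, j, e);
    return e[0] == P[0]*i + Bk[0]*k + Bj[0]*j + e00[0]
        && e[1] == P[1]*i + Bk[1]*k + Bj[1]*j + e00[1];
}
```

For a genuinely non-affine subscript the difference columns would vary with the evaluation point, and the identity would fail for some (i, k, j).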
The corresponding paving matrices, taking \( i \) as the repetition counter, are \( P^1 = \left( \begin{array}{c} 0 \\ 0 \end{array} \right) \) and \( P^2 = \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \). Hence, the two accesses must be handled separately. In the following, I assume that accesses to \( A \) have been partitioned according to their paving matrices, and consider only one partition at a time. The size of the repetition space is deduced simply from the bound(s) of the elementary transform loop(s). In the Spear/DE implementation of Array-OL, there may be further constraints on the paving matrix (e.g., that it be a permutation of a diagonal matrix).

3 Pattern and fitting

A pattern is a compact specification of all the elements of an array that are accessed, by references having the same paving matrix, in one iteration of the external loop(s). When discussing patterns, one has to consider three frames of reference (see Fig. 1). The first one is the original (input or output) array. Its dimension is the rank of the array, noted $|A|$, and its coordinates are called *subscripts*; the shape of an array is always a (hyper-)rectangle. The second frame of reference is the iteration space of the inner loops of the elementary transform. Its dimension is the number of loops enclosing the reference, noted $d^k$, and its coordinates are called *loop counters*. There may be as many iteration domains as there are references, or several references may share the same iteration domain. The shape of an iteration domain is arbitrary; the only requirement in the present context is to be able to construct its vertices, either because the iteration domain is rectangular, or because it can be expressed as a convex polyhedron with parameters in the constant terms only. The iteration domain of reference $k$ will be denoted $D^k$ in what follows. The third frame of reference is the pattern. According to Boulet [1], the pattern is always of rectangular shape.
The pattern associated to reference $k$ is denoted by $T^k$ and its dimension is $p^k$. The associated fitting matrix, $F^k$, connects the pattern space to the array space; its dimension, accordingly, is $|A| \times p^k$. The relations among these objects are as follows. Firstly, the local subscript function $f^k(j^k) = B^k j^k + e^k(0,0) = e^k(0, j^k)$ gives the coordinates of an array cell relative to the reference point, which moves according to the paving matrix. Next, the image $f^k(D^k)$ is the *footprint* of reference $k$; its shape is arbitrary. The images of the vertices of $D^k$ under $f^k$ form a superset of the vertices of the footprint; a representation as a convex polyhedron can be recovered by one application of the Chernikova algorithm [3]. Lastly, the image of the pattern by the fitting matrix must enclose the footprint, and it must be feasible to retrieve a datum from the pattern instead of the original array. This implies that there exists a function $\phi^k$ from $D^k$ to $T^k$ such that for every iteration vector $j^k \in D^k$, $f^k(j^k) = F^k \phi^k(j^k)$. In the text of the elementary transform, $\phi^k$ must be substituted for $e^k$ in reference $k$ to $A$.

Figure 1: Data access in Array-OL

As one may see from this discussion, while the iteration domain and footprint are fixed once the sequential program is given, the choice of the pattern and fitting matrix is somewhat arbitrary. There are two obvious solutions: in the first one, the pattern is the smallest rectangular box enclosing the footprint, the fitting matrix is the identity, and the subscript function is not changed. In the second solution, the pattern is isomorphic to the iteration domain (provided it is a parallelepiped), $B^k$ is the fitting matrix, and the new subscript function is the identity.
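The first solution (smallest enclosing box, identity fitting) amounts to a bounding-box computation over the images of the iteration-domain vertices. A minimal two-dimensional sketch, with types and names of our own choosing:

```c
/* A 2-D array coordinate (subscript pair). */
typedef struct { int x, y; } Point;

/* Smallest axis-aligned rectangle enclosing a set of footprint
   vertices: the pattern of the "first obvious solution", with the
   identity as fitting matrix. */
void bounding_box(const Point *pts, int n, Point *lo, Point *hi)
{
    *lo = *hi = pts[0];
    for (int i = 1; i < n; i++) {
        if (pts[i].x < lo->x) lo->x = pts[i].x;
        if (pts[i].y < lo->y) lo->y = pts[i].y;
        if (pts[i].x > hi->x) hi->x = pts[i].x;
        if (pts[i].y > hi->y) hi->y = pts[i].y;
    }
}
```

Because only vertices need to be projected, the cost is linear in the number of vertices, regardless of the footprint's cardinality.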
In signal processing applications, it is often the case that several references to the same array have similar subscript functions; constructing only one pattern for several references is an interesting optimization. However, this should not be obtained at the cost of a large overhead in the size of the pattern. In other words, the number of useless elements in the pattern must be minimized. Useless elements come from two sources:

- A subscript matrix which is not of full row rank: the pattern will have more dimensions than the footprint.
- A subscript matrix whose determinant is not of modulus one: there will be holes (unused elements) in the footprint. The inverse of the determinant gives an asymptotic evaluation of the ratio of useful elements.

The next section presents a method for computing a pattern and a fitting matrix in the general case (many references). This method can only be applied if all elements of the matrices $B^k$ and the vectors $b^k$ have known numerical values. Section 5 presents fail-soft solutions for cases in which these elements depend on unknown parameters.

### 4 The General Case

The basic observation is that a conservative estimate of the footprint can be obtained by computing the projection of each iteration domain by the associated subscript function, then constructing a convenient superset of the union of these projections. One practical method consists in projecting the vertices of the iteration domains. One then gathers all such projections, and constructs their convex hull by familiar (e.g., Chernikova's) algorithms. To reduce the size overhead, one should notice that a useful point for reference $k$ also belongs to the lattice which is generated by the column vectors of $B^k$. Hence $B^k$, properly simplified (see later), could be used as the fitting matrix. However, in the case of several references, we have to combine several lattices into one, since each pattern has only one fitting matrix.
As an illustration of this construction, consider the one-dimensional case. A one-dimensional lattice is simply a set of regularly spaced points. Combining two lattices generates a lattice whose spacing is the gcd of the component spacings. The many-dimensional equivalent of the gcd is the construction of the Hermite normal form of the subscript matrices. Let $\Lambda(B, b)$ be the lattice generated by $B$ with origin $b$, i.e. the set of points $\{Bx + b \mid x \in \mathbb{N}^d\}$. Let $L^1 = \Lambda(B^1, b^1)$ and $L^2 = \Lambda(B^2, b^2)$ be two such lattices. I claim that the union of $L^1$ and $L^2$ is included in the lattice $L = \Lambda([B^1 \, B^2 \, (b^2 - b^1)], b^1)$.

**Proof** Let $B^1.x + b^1$ be a point of $L^1$. We have:

$$B^1.x + b^1 = B^1.x + B^2.0 + (b^2 - b^1).0 + b^1,$$

hence $B^1.x + b^1$ is in $L$. Similarly:

$$B^2.y + b^2 = B^1.0 + B^2.y + (b^2 - b^1).1 + b^1.$$

I conjecture that \( L \) is the smallest lattice which includes \( L^1 \) and \( L^2 \). The proof is obvious if the \( b \)s are null. The general case is left for future work. The construction can be extended to any number of component lattices. The resulting matrix is \( [B^1 \ldots B^N \, (b^2 - b^1) \ldots (b^N - b^1)] \) and the origin is \( b^1 \). Furthermore, \( b^1 \) can be moved to the origin of the paving and hence taken as 0 when computing the fitting. In cases where \( B \) has been obtained by mixing many references, it must be simplified before being used for an Array-OL specification. The starting point of this simplification is the row echelon form of \( B \). One can show (see the appendix) that there exist two unitary matrices \( P \) and \( U \) such that:

\[ B = P \begin{bmatrix} H & 0 \\ C & 0 \end{bmatrix} U, \]

where \( H \) is a square upper triangular matrix of size \( r \times r \) with positive diagonal coefficients, \( C \) is arbitrary, and both 0s represent null matrices of appropriate sizes. \( r \) is the row rank of \( B \).
Furthermore, \( U \) can be partitioned, row-wise, into two matrices of size \( r \times d \) and \( (d - r) \times d \),

\[ U = \begin{bmatrix} U' \\ U'' \end{bmatrix}. \]

Let \( j \) be a point in the iteration domain of the inner loops. The corresponding point in the footprint is:

\[ Bj = P \begin{bmatrix} H & 0 \\ C & 0 \end{bmatrix} \begin{bmatrix} U' \\ U'' \end{bmatrix} j = P \begin{bmatrix} H \\ C \end{bmatrix} (U'j) \]

One possible interpretation of this formula is that the pattern for the current reference is the image of its iteration domain by \( U' \), and that the corresponding fitting matrix is \( P \begin{bmatrix} H \\ C \end{bmatrix} \). In the body of the elementary transform, accesses to \( Bj \) in the input or output array have to be replaced by accesses to \( U'j \) in the pattern. It may be that the pattern computed in this way is not rectangular, in which case it must be “boxed” by computing the component-wise minima and maxima of its extreme points. The dimension of the pattern is \( r \). It is interesting to notice that this general solution reduces to one of the approximate methods above in special cases. If \( B \) is unitary, then its row echelon form is the unit matrix. In that case, the pattern is the footprint, possibly extended to a rectangular box, and the fitting matrix is the identity. Conversely, if \( B \) is already in row echelon form, \( P \) and \( U \) are identities. The pattern is isomorphic to the iteration space, and \( B \) is the fitting matrix.

### 5 The Parametric Case

Parameters occur mostly in loop bounds. They may also appear as strides and, more seldom, in the coefficients of subscript functions. In the Array-OL formalism, the repetition loops must be square. Hence, their bounds may be extracted directly from the program text. The extraction of the paving matrix is a simple derivative computation, which is an easy task for a competent computer algebra system.
Similarly, the $B^k$ matrices are the result of a derivation, and may contain parameters. There are no restrictions on the inner loops. For the construction of the pattern, one needs to know the vertices of the inner iteration domain. There are three cases:

- The bounds are constant: they can be extracted even if parametric.
- The bounds are affine expressions in other loop counters and parameters: the vertices can be computed with the help of the polylib.
- In other cases, there is no way of computing vertices, but the user may supply a bounding box.

The computation of the row echelon form can be done only if the matrix is known numerically, except in two cases: the matrix is $1 \times 1$ (it is its own normal form) or $2 \times 2$. The row echelon form of \( \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \) is \( \left[ \begin{array}{cc} \gcd(a,b) & 0 \\ cu + dv & (ad - bc)/\gcd(a,b) \end{array} \right] \), where $u$ and $v$ are integers such that $au + bv = \gcd(a,b)$, whose existence is guaranteed by Bézout's identity. If none of these circumstances applies, the solution of last resort is to use one of the approximate schemes above. For instance, if the vertices of the inner iteration domain are available, it is possible, whatever the $B$ matrix, to compute the vertices of the footprints and to enclose them in a rectangular box. The fitting matrix is then the identity.

### 6 Extensions

The Syntol tool computes dependences; it is thus possible to check that the repetition loops are actually parallel. One must take care that Syntol will find dependences if temporary scalars are used in the code of the elementary transforms. These scalars must be expanded or privatized at code generation time. Overlap between patterns (or, rather, between footprints) is another concern. For input arrays, overlap is just a cause of inefficiency, since some array cells will be copied several times to processors. Overlap for output arrays is more dangerous, since it may induce non-determinism.
The existence of overlap may be tested provided one stays inside the polytope model (affine loop bounds and indexing functions, with numerical coefficients and linear parameters). In the same context, it is possible to quantify the overhead by comparing the size of the pattern and the size of the real footprint using the *barvinok* library [4].

### A Computing the row echelon form of a matrix

For more details, see [3]. Let $B$ be an arbitrary matrix of size $p \times q$.

1. At any stage of the computation, we have constructed two unitary matrices $P$ and $U$ such that:

\[ B = PB'U, \quad B' = \begin{bmatrix} H & 0 \\ C & D \end{bmatrix} \]

where $H$ is lower triangular with positive diagonal coefficients. Initially, $P$ and $U$ are identity matrices, $H$ and $C$ are empty and $D = B$. Let $i$ be the index of the first row of $C$ and $D$.

2. If $D$ is null, the process stops.

3. If not, let $j$ be the index of some non-zero row of $D$. Let $\pi_{ij}$ be the unitary matrix that permutes rows $i$ and $j$ of $B'$. Since $\pi_{ij}$ is its own inverse, one can write: $$B = (P\pi_{ij})(\pi_{ij}B')U,$$ and the new $D$ has a non-zero first row.

4. Let $k$ be the index of a negative element in the first row of $D$. Let $\sigma_k$ be the unit matrix with the $k$-th diagonal element set to $-1$. Since $\sigma_k$ is its own inverse, one can write: $$B = P(B'\sigma_k)(\sigma_kU),$$ and element $k$ in the first row of $D$ is now positive.

5. If all elements in the first row of $D$ are positive, let $l$ be the index of the smallest element, and let $\pi_{il}$ be the matrix that interchanges columns $i$ and $l$ of $B'$. Again: $$B = P(B'\pi_{il})(\pi_{il}U)$$ and now the first element of the first row of $D$ is the smallest.

6. Let $m > i$ be the index of some non-zero element in the first row of $D$. Set $\alpha = B'_{im} \div B'_{ii}$ (integer division). By construction, $\alpha > 0$. Let $\kappa_{im}(\alpha)$ be the identity matrix with $-\alpha$ added in position $(i, m)$.
It is easy to see that the inverse of $\kappa_{im}(\alpha)$ is $\kappa_{im}(-\alpha)$. Hence: $$B = P(B'\kappa_{im}(\alpha))(\kappa_{im}(-\alpha)U)$$ and element $B'_{im}$ has been replaced by $B'_{im} \bmod B'_{ii}$.

7. If the only non-zero element of the first row of $D$ is the first element, then $i$ can be increased by 1.

These transformations must be applied until no further progress is possible (i.e., until case 2 applies). Matrix $B'$ is then in the required form, and since all the elementary matrices $\pi$, $\sigma$ and $\kappa$ are unitary, the resulting $P$ and $U$ are unitary. In fact, $P$ is even a permutation matrix.

References
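As a supplementary sanity check of the appendix algorithm, the column operations ($\sigma$, $\pi$ and $\kappa$) can be sketched in Python. This is a simplified illustration, not the implementation used here: it assumes no row permutations (the matrix $P$) are needed, and it does not track $U$. On the matrix $[\begin{smallmatrix}6 & 10\\ 4 & 14\end{smallmatrix}]$ it reproduces the $2 \times 2$ closed form of Section 5: $\gcd(6,10) = 2$ on the diagonal and $(ad - bc)/\gcd(a,b) = 22$.

```python
def hermite_columns(B):
    """Column-reduce B with unimodular column operations, mimicking the
    sigma (negation), pi (swap) and kappa (reduction) steps of the appendix.
    Sketch only: assumes the rows need no permutation (P = identity)."""
    B = [row[:] for row in B]
    p, q = len(B), len(B[0])
    col = 0
    for i in range(p):
        if col >= q:
            break
        # sigma: make every entry of row i (from column `col` on) non-negative
        for k in range(col, q):
            if B[i][k] < 0:
                for r in range(p):
                    B[r][k] = -B[r][k]
        while True:
            nz = [k for k in range(col, q) if B[i][k] != 0]
            if not nz:
                break  # row i exhausted, pivot column unchanged
            # pi: bring the column with the smallest positive entry to `col`
            l = min(nz, key=lambda k: B[i][k])
            for r in range(p):
                B[r][col], B[r][l] = B[r][l], B[r][col]
            # kappa: reduce the other entries of row i modulo the pivot
            done = True
            for m in range(col + 1, q):
                if B[i][m] != 0:
                    alpha = B[i][m] // B[i][col]
                    for r in range(p):
                        B[r][m] -= alpha * B[r][col]
                    done = done and B[i][m] == 0
            if done:
                col += 1
                break
    return B
```

The pivots stay positive because the reductions use Euclidean remainders, exactly as in step 6 of the appendix.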
Effective reduction of cryptographic protocols specification for model-checking with Spin

Urszula Krawczyk¹*, Piotr Sapiecha¹,²

¹ Krypton-Polska, Al. Jerozolimskie 131, Warsaw, Poland
² Department of Electronics and Information Technology, Warsaw University of Technology, Warsaw, Poland

*E-mail address: U.Krawczyk@krypton-polska.com

Abstract

In this article a practical application of the Spin model checker for verifying cryptographic protocols is shown. An efficient framework for specifying a minimized protocol model while retaining its functionality is described. Requirements for such a model are discussed, such as a powerful adversary, multiple protocol runs and a way of specifying the validated properties as formulas in temporal logic.

## 1. Introduction

A flaw in a cryptographic protocol may become a real security threat [1, 2]. Even a seemingly small protocol may produce a great number of possible behaviours. One of the methods to formally consider protocol correctness is model checking: representing the protocol as a Büchi automaton \( M \), specifying every checked property as an LTL temporal formula \( \alpha \), and checking the satisfiability of the formula in the model, \( M \models \alpha \) [3, 4, 5, 6]. The automaton of the protocol is typically generated from a more high-level description. This article has its focus on representing protocol models in the Promela language, which is the input for the model checker Spin [7]. The choice of that tool was due to its effective, automatic minimizing techniques and its widespread use in the area of verification (see the annual workshops page [8]). Some examples of verifying cryptographic protocols with Spin can be found in the literature [9, 10, 11, 12, 13, 14, 15]. However, the presented models are too simple, not taking into consideration multiple runs or limiting the abilities of the protocol attacker. Most importantly, they create a large automaton even though only the not-so-complicated Needham–Schroeder protocol is considered.
Such an approach for modelling a more complex protocol like JFK would result in a model not feasible to verify. The model published in [10] seems to be the most sophisticated, as it is scalable and includes parallel runs, but it contains many redundant transitions, causing state-space explosion. Another approach, presented in [15], uses interesting recursive structures, but at the expense of great memory cost. This does not rule out the possibility of finding attacks, but only full coverage of the reachable states can assure the correctness of the model's behaviour. Also, none of the mentioned models supports creating a readable counterexample indicating an attack. In this article a method for developing cryptographic protocol models that avoids those drawbacks is outlined. The description of the protocol framework is illustrated with fragments of Promela code.

## 2. Problem Definition

Key establishment and authentication cryptographic protocols, such as Needham–Schroeder or JFK, can be modelled as automata so that their properties, described as temporal formulas, can be checked. The main problem is to keep such a model effectively verifiable. The satisfiability of the formulas in the model should increase confidence in the security of the protocols. Thus it is crucial to explicitly list the requirements such a model must comply with. The environment in which the protocol is studied is considered an important matter [16, 17, 18]. The main points are the following:

- **Legal users** - can participate in parallel protocols taking different roles (initiator, responder). They can establish a session with other users, including the intruder, who has a certificate like other legitimate users.
- **The intruder** - can at any point eavesdrop on a message, alter it and resend it to another user in another protocol run. The adversary produces messages on the basis of his current knowledge, creating new complex elements (e.g. encryptions) or resending remembered ones.
- **Model scalability** - concerns the number of protocol runs and the attacker's knowledge database.
- **Additional data** - the information required for the logical assertions that are written to check protocol properties must be stored. Additional information about the protocol state is also printed out and used later for producing a counterexample.
- **Model configuration** - the description of a particular model configuration should specify any constraints on the way messages are sent from user to user and the roles the users can take. These specifications are responsible for the model's proper behaviour.

While the above constraints hold, one important parameter must be minimized:

- **Model size** - affects the amount of memory and time needed for verification. Considering the exponential complexity of the model checking problem ($O(\#(M)) = O(2^{\#(P)})$, where $P$ is the number of atomic propositions describing the states of model $M$ [19]), this seems to be a critical issue in practical applications.

## 3. Representing Protocol as an Automaton

To illustrate the idea of modelling protocols as Büchi automata, an example is given in this section, showing the path from a protocol description up to the automaton. A clear, simple protocol is used (Fig. 1), and no reduction techniques are applied yet. This keeps the model comprehensible, so that the reader can understand the general methodology. The verification process consists of the following steps.

1. **Modelling protocol** - the verifier describes in the Promela language the behaviours of the protocol users and all the possible actions the adversary can take. Sample code representing the responder in the example protocol is shown in Fig. 1.

2. **Protocol as automaton** - the Promela code describes an automaton. A guard and an action are associated with every transition from state to state. In the automaton in Table 1 and Fig.
2, the state when the key is established can be reached only if the guard corresponding to signature correctness holds. The actions can change the variable values and the message channel contents.

3. **Kripke structure** - incorporating variable values into the automaton produces a Kripke structure [4, 6]. Here every state represents a possible configuration of variable values. The example structure is shown in Fig. 3.

4. **Büchi automaton** - the nondeterminism of the Büchi automaton is crucial for model checking, as every possible path in the protocol must be analyzed. A Büchi automaton can be constructed from a Kripke structure by copying the state labels onto the outgoing arcs [6], which can be seen in Fig. 4.

5. **The verified property** - all the desirable properties of the protocol are written down as *LTL* logic formulas. The formulas contain references to variables from the protocol model. Each formula is negated to denote the unsafe states and automatically transformed into a special *never* process in the *Promela* code with *Spin* or another tool [20], as shown in Fig. 5. This code can also be transformed into a *Büchi* automaton. Locations represented as double-framed circles are accepting locations. The automaton accepts an infinite input if the input makes the automaton visit accepting states infinitely often [5, 6].

6. **Verification algorithm** - at the end an asynchronous product of all the automata representing protocol users is constructed. This automaton is used to construct a synchronous product with the formula automaton [6]. The algorithm is to search the resulting automaton for a path that would traverse infinitely often through the accepting locations of the formula automaton [4, 6].

7. **Counterexample** - such a path indicates an error in the protocol and presents a way an unsafe state can be reached. On the whole, the protocol is flawed if its model can produce a path that is accepted by the automaton representing an undesirable situation.
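For safety properties of the form □¬α, the search of steps 6-7 degenerates to a reachability question: is a state satisfying α reachable? The toy Python sketch below (the two-bit "protocol" state is invented purely for illustration and has nothing to do with the real models) shows this reduced form and how the counterexample path falls out of the search:

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Breadth-first reachability over a finite state space: returns a
    counterexample path to a bad state, or None if the safety property
    (no bad state reachable) holds."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_bad(state):
            return path          # this path is the counterexample
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Hypothetical two-bit state (message_sent, key_established):
def successors(s):
    sent, key = s
    out = []
    if not sent:
        out.append((True, key))      # send the message
    if sent and not key:
        out.append((sent, True))     # establish the key after the message
    return out

bad = check_safety((False, False), successors,
                   lambda s: s == (True, True))
# (True, True) is reachable, so `bad` holds the counterexample path.
```

Real model checkers perform this search on the synchronous product automaton and with far better state compression, but the shape of the result - a path to a violating state - is the same.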
The human verifier takes part only in the stages involving modelling the protocol in the *Promela* language and writing *LTL* logic formulas. The other activities are done automatically by the model checker tool. In practice, effective implementations merge the described stages to reduce computing costs.

Table 1. Automaton describing the responder's states while participating in the protocol from Fig. 1.

<table>
<thead>
<tr> <th>Transition</th> <th>Current state</th> <th>Guard</th> <th>Transition effect</th> <th>Next state</th> </tr>
</thead>
<tbody>
<tr> <td>t43</td> <td>0</td> <td>-</td> <td>-</td> <td>1</td> </tr>
<tr> <td>t44</td> <td>1</td> <td>-</td> <td>m1?certi,expi</td> <td>2</td> </tr>
<tr> <td>t45</td> <td>2</td> <td>-</td> <td>printf('MSC: MSG2 Bob...')</td> <td>3</td> </tr>
<tr> <td>t46</td> <td>3</td> <td>-</td> <td>m2!B, ExpB, BPr, expi, ExpB</td> <td>4</td> </tr>
<tr> <td>t47</td> <td>4</td> <td>-</td> <td>m3?sigkey, sigexpi, sigexpr</td> <td>7</td> </tr>
<tr> <td>t48</td> <td>7</td> <td>(( certi == A ...)</td> <td>-</td> <td>6</td> </tr>
<tr> <td>t49</td> <td>7</td> <td>(( certi != A ...)</td> <td>-</td> <td>13</td> </tr>
<tr> <td>t1</td> <td>6</td> <td>-</td> <td>skip;</td> <td>11</td> </tr>
<tr> <td>t50</td> <td>11</td> <td>-</td> <td>d_step{...}</td> <td>12</td> </tr>
<tr> <td>t51</td> <td>12</td> <td>-</td> <td>-</td> <td>13</td> </tr>
</tbody>
</table>

Modelling protocol

MSG1 $A \rightarrow B: A, k_a$
MSG2 $B \rightarrow A: B, k_b, SIG(BPr)(k_a, k_b)$
MSG3 $A \rightarrow B: SIG(APr)(k_a, k_b)$

\[ k_a = g^a \mod p \]
\[ k_b = g^b \mod p \]
\[ Key = k_a^b \mod p = k_b^a \mod p \]

/* responder */
proctype Bob(chan m1, m2, m3)
{
  byte expi;  /* D-H exponent of initiator */
  byte certi; /* certificate of initiator */
  byte sigkey, sigexpi, sigexpr; /* for values from sig */
MSG1:
  m1?certi, expi;
MSG2:
  printf("MSC: MSG2 Bob %d, %d, Sig %d(%d, %d)\n", B, ExpB, BPr, expi, ExpB);
  m2!B, ExpB, BPr, expi, ExpB;
MSG3:
  m3?sigkey, sigexpi, sigexpr; /* sig */
  if /* check sig */
  :: ( ((certi == A && sigkey == APr) || (certi == E && sigkey == EPr))
       && sigexpi == expi && sigexpr == ExpB ) -> skip;
  fi;
FINISH:
  d_step{
    /* remember the data for verification in global variables */
    exp1b = expi;
    certInit = certi;
  }
}

Fig. 1. Description of an exemplary key-establishment cryptographic protocol based on the Diffie–Hellman scheme and a signature, and the Promela code representing the responder.

## 4. Our approach

The most intuitive way to model a protocol is to represent the users as independent processes, sending messages through channels controlled by the intruder.

Protocol as automaton

Fig. 2. Graphical representation of the automaton from Table 1, describing the responder's behaviour.

Kripke structure

Fig. 3. Kripke structure constructed from the automaton from Fig. 2.

Büchi automaton

Fig. 4. Büchi automaton constructed from the Kripke structure from Fig. 3.

Unfortunately, such a model, though properly describing the protocol, might be too large to analyze. Due to the exponential complexity of the problem [19], every redundancy in the model is expensive in terms of memory and computation time. So the ability to model a protocol is not sufficient for practical verification. Thus the constructions below were used in the presented model to reduce its complexity, while still giving the intruder strong abilities.

The verified property

```
#define successAB (exp2a == ExpB && exp1b == ExpA)
#define id_misb (certInit == E && successAB)
never {
T0_init:
    if
    :: (id_misb) -> goto accept_all
    :: (1) -> goto T0_init
    fi;
accept_all:
    skip
}
```

Fig. 5. Büchi automaton constructed from the LTL logic formula $\diamond id\_misb$ (an identity misbinding attack is possible). Specification in the Promela code (left) and graph representation (right).

4.1. Remembering Simple Message Elements

Simple elements known by the intruder are remembered as bytes in the EveDB array.
Every element has a unique value and can be accessed with a combination of defined indices. The values for the JFKi protocol are shown in the left column of Fig. 6. An example of access to the elements can be found in the right column of Fig. 6. For instance, the index of the responder's nonce $\text{nonr}$ is the sum of the indices indicating the user identifier, the $\text{nonce}$ type and the current protocol run ($\text{otherUser} + \text{NONCE} + \text{comm}$). On the other hand, exponentials are reused between protocol runs, so to access them the $\text{comm}$ variable indicating the run is not used. If an EveDB array cell is not empty, the value is known by the attacker.

4.2. Remembering Complex Message Elements

Complex elements such as signatures and encryptions are stored by the intruder in additional channels, which work like FIFO queues. While generating a faked message, the needed elements are randomly chosen from the channels. Example usage is shown in Fig. 6 (right). Channel EveSig2 holds signatures from the second protocol message that were intercepted earlier.

4.3. Eavesdrop On Send, Corrupt on Receive Tactic

In a simple model the message is produced by the legal user, the intruder learns it and then the message is sent. Yet before the receiver gets it, the data is intercepted and generated once more by the attacker on the basis of his knowledge. An observation can be made that it makes no sense to transport via the channels data that is already stored in the intruder database. It can be seen that the original message is not used after the intruder learns it. Therefore in our approach the channels transport only the information that a message has been sent, as shown in Fig. 7. In consequence, the memory used by all channels is constant and small. Thus the tactic is crucial for minimizing the model size. It is also important that in our model the intruder can produce a faked message after an arbitrary delay, possibly after receiving other messages from parallel protocol runs and learning new data.
This models the ability of the intruder to delay messages. In Fig. 7, a circle is a point where a message is consumed by the attacker, while a square marks the creation of a message by him. As can be seen, message $M_1'$ is produced after learning message $M_2$ from the second protocol run. Sending a message is also the point where the intruder decides where the message will be sent. The effect is achieved by combining the attacker's activities with the users' steps, rather than putting them into a separate process. As the method name suggests, the intruder takes his first action (eavesdropping) just after the legal user sends a message. The instructions are put into the sender's process. The attacker's second action (corrupting the message) is put into the receiver's process, just before the legal user gets the message. The tactic also eliminates the introduction of an additional mechanism to prevent the intruder from intercepting his own faked messages. This could have been an additional field in a message, indicating if the message was sent by the attacker, as can be found in the literature [10]. With the tactic this is not needed, as the data is generated only once, before the legal user receives it.

4.4. Only One Channel For a Message

Using two channels for every message (the first for transporting data from the legal user to the intruder, the second for transporting data to the legal receiver) is simple and intuitive but memory-expensive. Thus only one channel is used in our model. This can be done because, as was mentioned, no message data is really transported. Only the information that a message has been sent is placed in the channels.

4.5. All Users in One Process

The eavesdrop on send, corrupt on receive technique also makes it possible to place the code of all users in one process. As was mentioned, the intruder's actions are combined with those of the legal users. In a simple model senders and receivers could be put in independent processes.
Fig. 7. Eavesdrop on send, corrupt on receive schema and specification in the Promela language.

Every step of a user consists of receiving and/or sending a message. Such a step would be put into an atomic clause to minimize interleavings we are not interested in. This would result in a sequence of users' atomic steps from different protocol runs, which is the model's proper behaviour. Yet the presence of many processes would cause the model checker to create an asynchronous product of the automata. This would introduce redundant interleavings and make the verified model grow too much [3]. That is why only one process is used, with a do loop in which one of the executable steps is nondeterministically chosen. Every step is represented by a function, to keep both the advantage of one process and of structured Promela code, as shown in Fig. 7. Rather than storing the user identity in his process state, it becomes a function parameter. For example, the receiver of a message is indicated by the self parameter of the function recvMSG1sendMSG2(). In such a model a sequence of nondeterministically chosen steps is produced just as in the multi-process case, but without the undesirable overhead. So this approach does not affect the model's functionality, only its efficiency.

4.6. Consistent Message Generation by the Intruder

Consistent generation of messages means that, once chosen, an element (e.g. nonce, exponential) is used by the intruder in the whole message. This helps avoid messages that are known to be rejected by the legal users. An example of this was shown in Fig. 6 (right). Here the same value is used as the responder's exponential expr in the plain text and in the faked signature.

## 5. Protocol Properties Verification

The last step is specifying protocol properties as LTL (linear temporal logic) formulas. The notation □α means that α holds in the model iff it is true on every execution path of the automaton.
It can be used to specify that it is desirable that unsafe states are never reachable. For the Needham–Schroeder protocol, an example safety formula detecting identity misbinding would be: \[ α = \text{run2accepted } \&\& \ ((\text{otherUsrA[COMM1] == IDB } \&\& \text{ otherUsrB[COMM1] == IDE}) \ || \ (\text{otherUsrB[COMM1] == IDA } \&\& \text{ otherUsrA[COMM1] == IDE})) \] The wrong state is when one of the legitimate users accepts a session with the other legal user (IDA or IDB), while this user did not, because he was engaged in a run with the intruder (IDE). So it should hold that \( M \models □¬α \). Fig. 8 presents a readable counterexample for the attack. It was automatically produced by a simple driver written by the authors that runs the model checker, parses Spin's output and interprets it. Only the emphasis was added by hand, for more readability. The required information in the raw output of the model checker originates from the printing commands shown in Figs 1 and 6. From the listing it can be seen that Bob accepted a session with Alice, who never took part in a run with Bob. Fig. 8. A description of an identity misbinding attack in the Needham–Schroeder protocol detected with Spin. Model checker raw output (left) and a readable output generated by our driver (right). Another issue about writing formulas is the labels mechanism. It should be used if possible, because it avoids additional global variables to mark a state. Labels in Promela are inserted into the code just as in the C language. An expression of the form \((\text{ProcessName}@\text{LabelName})\) used in a formula will discover a point where the process is in the labelled state. As for the JFKi protocol, the following two exemplary formulas are presented: \[ \gamma = (JFKiProtocol@\text{INTRUDER\_DECRIPTED\_MSG3\_LABEL} \ \&\& \ \text{cert} \ != IDE) \] \[ \psi = (JFKiProtocol@\text{ACCEPTED\_INIT\_SA\_LABEL} \ \&\& \ \text{cert} \ != IDE \ \&\& \ \text{secAssos} == SaiE) \] The first formula is used to detect a privacy violation attack, when Eve decrypts the third message that was not intended for her (the global variable cert stores the certificate of the peer chosen by the initiator, and it is not IDE). The second formula is true if the responder accepts a wrong initiator's security association, that is, when the association was inserted by the intruder (\(SaiE\)) although it should originate from a legal initiator (whose identity is kept in \(cert\)). A situation in which these formulas are true should never occur, so the Büchi automata are built for the formulas \(\Box \neg \gamma\) and \(\Box \neg \psi\). With analogous formulas, the ability of the adversary to change the exponentials, nonces and Diffie–Hellman group information can be checked. During the verification of the JFKi protocol with Spin, none of the attacks was detected. 6. Application of the Method and Computational Results The following model instance configuration was used for the verification results below: two parallel runs, two legal protocol users, and an intruder knowledge database containing two elements (Needham–Schroeder) or one element (JFKi) of every type of complex element. In the second case, to give the adversary more abilities, any received complex element is stored in the databases nondeterministically, so the first element may not fill the queue completely. This configuration makes it feasible to verify a protocol on an average computer (AMD Athlon 2.01GHz, 2GB RAM) and lets one expect the standard attacks to be detected. The costs of the example protocol verifications are shown in Table 2. Our models are indicated in bold. The sources of the model from [10] are available, so, scaled down to two parallel runs, they were included as a comparison. The published fragments of the model from [15] also give a hint of its size. At first sight it is apparent that the unminimized models have much bigger state vectors.
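For illustration only, the unsafe misbinding state that the Needham–Schroeder safety formula excludes can also be written as a small Python predicate. The variable names mirror those in the formula; this code is our sketch, not part of the verified model.

```python
# Identity constants as in the model: two legal users and the intruder.
IDA, IDB, IDE = "A", "B", "E"

def misbinding(run_accepted, other_usr_a, other_usr_b):
    """True in the wrong state: a legal user accepted a session with the
    other legal user, while that user was actually engaged with the intruder."""
    return run_accepted and (
        (other_usr_a == IDB and other_usr_b == IDE)
        or (other_usr_b == IDA and other_usr_a == IDE)
    )

# Lowe-style attack state: Bob accepted a run with Alice, but Alice ran with Eve.
print(misbinding(True, IDE, IDA))   # True -> the safety property is violated

# Honest state: both users accepted each other.
print(misbinding(True, IDB, IDA))   # False
```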
In the case of JFKi it can be very distinctly seen how beneficial for verification the reductions of the model were. The original automaton was much too complex and was only partially analyzed. The minimized model could be verified in less than a quarter of an hour. The protocol security properties did hold in the JFKi model. <table> <thead> <tr> <th>Protocol</th> <th>Time</th> <th>Reached states</th> <th>State vector</th> <th>Used memory</th> <th>Verification type</th> </tr> </thead> <tbody> <tr> <td>Needham–Schroeder [10]</td> <td>2770s</td> <td>3.10e+007</td> <td>224b</td> <td>1128.758MB</td> <td>partial, code fragments</td> </tr> <tr> <td>JFKi non reduced</td> <td>172s</td> <td>4.00e+006</td> <td>1916b</td> <td>1851.350MB</td> <td>partial</td> </tr> <tr> <td>JFKi reduced</td> <td>650s</td> <td>3.62e+007</td> <td>204b</td> <td>1051.023MB</td> <td>full</td> </tr> </tbody> </table> 7. Conclusions and Future Plans The proposed modelling framework has proved to be computationally efficient, enabling verification of more complex protocols. Although the approach is more labour-intensive than using tools specialized only in the verification of cryptographic protocols, such as *Casper* [21, 2], it gives *more control* over the model configuration. Another fact is that the discussed method is far more readable than, for example, *CSP* [2] or clauses for the *ProVerif* program [22]. Hence it is more accessible for inspection and less error prone. In addition, automatic generation of counterexamples is a true asset of the method. The presented framework is a proposal of a method complementary to the existing ones, aimed at solving the difficult problem of assuring the correctness of security protocols. As for future improvements, designing a methodology to divide the model would make it possible to verify more parallel runs of a protocol. For example, in each part the initiator would choose a different responder.
Analysis of each such model would require less memory and could possibly be done concurrently on separate computers, saving time. A more complex task would be to create a parser that would transform a protocol specification, in a protocol description language such as *CAPSL* [23], into a model. The *Promela* code could still be edited by the human verifier if needed, but the main work would be done automatically. This offers another opportunity: from the same input many outputs can be generated, including several verification models or a protocol implementation [23, 17].

References

[1] UK chip and PIN credit/debit cards are insecure (2009), http://www.youtube.com/watch?v=JPAX321gkrw
http://www.albertolluch.com/research/promelamodels
http://www.loria.fr/merz/papers/NeedhamSchroeder.spin (2009).
Report, Microsoft Research (2009).
[18] Stamer H., Verification of cryptographic protocols, Technical Report, University of Kassel (2005).
[19] Schnoebelen Ph., The complexity of temporal logic model checking, Advances in Modal Logic (2003).
[21] Casper, http://web.comlab.ox.ac.uk/people/Gavin.Lowe/Security/Casper/
[22] ProVerif: cryptographic protocol verifier in the formal model, http://www.proverif.ens.fr/
Science Laboratory (1999).
Katja Andresen, University of Potsdam, katja.andresen@wi.uni-potsdam.de
Norbert Gronau, University of Potsdam, norbert.gronau@wi.uni-potsdam.de

Recommended Citation: http://aisel.aisnet.org/amcis2005/150

ABSTRACT: The research project CHANGE\(^1\) aims to bring adaptability into Enterprise Resource Planning (ERP) software systems. Adaptability is seen as a quality to manage change. This could be a reaction to a need or a proactive push to leverage potential opportunities. In any case the process change should be optimally represented in the ERP application. One of the major problems in developing adaptable software systems is the lack of systematic methods during the process of software development. For that, a pattern-based approach has been developed which covers three identified dimensions of adaptability in ERP systems. In the next step a component framework is proposed for characterising adaptable ERP software systems, regardless of venture type. Keywords: Adaptability, ERP systems, Framework, pattern INTRODUCTION AND MOTIVATION Development of adaptable software has been receiving much attention recently, as such software could better accommodate changes in user requirements as well as in the needs of the developing organization. The role of Information Technology (IT) in business process change is seen as dominant or as an enabler. However, insufficient support for optimally modelling business processes into software systems hinders business from being carried out as planned. Workarounds and encapsulation are common procedures during the lifetime of a software system to accommodate change. A case is often made for the socio-technical design approach, which suggests a mutual, bidirectional relationship between IT and the organisation (Gronau, 2003).
An increasingly dynamic environment and the ongoing trend towards customized products are heightening the requirements which companies have to meet, leading to constantly changing structures and processes. In dealing with the challenge of creating and evaluating adaptable software, the contribution begins with an illustration of adaptability. Next, requirements for a proper evaluation of an adaptable software system are derived. For that, a pattern-based approach is chosen. Also, the adaptability focus has been narrowed to ERP systems as a fundamental part of the enterprise architecture. ERP systems are highly standardized software systems and they link all or many business functions and operating locations so that all have access to relevant information (Nick, 2001). ERP systems may be appropriate for some organisations but less so for others. ERP systems themselves are limited in the processes they can model. As a result, an organisation is limited to the collection of functions delivered with the ERP system or has to modify either the business processes or the ERP program code. Thus, currently no single packaged ERP software can meet all company functionalities or all special business requirements. Therefore, companies must choose an adaptive ERP system (Gattiker and Goodhue, 2002). Moreover, adaptability of ERP systems is to leverage the overall adaptability of a business organisation in a turbulent environment (Andresen, Gronau and Schmid, 2005). ADAPTABILITY Adaptability is a new research field, which moved into the centre of interest in the 1990s. Adaptability (sometimes also referred to as transformability) was related to factory planning before penetrating the domain of information systems. It is to be seen as a new design goal in factory planning.
\(^1\) www.change-project.de

Design guidelines for modular and adaptable factories were created under different aspects (Nofen and Klussmann, 2002; Westkämper, Zahn, Beise and Tielbein, 2002; Eversheim, Lange-Stanlinski and Redelstah, 2002; Wirth, Enderlein and Hildebrand, 2003). Research in factory planning has defined so-called adaptability-enabling objects. Among them is the information system architecture (ISA) of a company, which is seen as an adaptability-enabling object holding strategic importance as well (Hernandez, 2002). Definitions of adaptability are not consistent. The Webster dictionary defines adaptability as the ability to fit a specific or new use or situation, often by modification (Webster, 1988). Following Balve/Wiendahl/Westkämper (Balve, Wiendahl and Westkämper, 2001) it can be said that the term adaptability is only applied if the system under consideration is able to: - Actively and quickly adapt its structures to changing and unforeseeable tasks, and - Develop itself according to evolutionary principles within the framework of relatively constant demand. Adaptability enables information systems to follow the changes which occur along the lifetime of a software system. In this spirit, adaptability is also a vision: the vision that business support is carried out almost automatically (autonomously) during build and run time of the software system. In addressing the question of how to bring adaptability into software systems, the first step was to find criteria which enable adaptability in software systems. For that, the pattern approach seems well suited for documenting design techniques. Unlike a design document, a pattern reflects something that has been used in a number of situations and thus has some generality. Finally, patterns can express architectural considerations independent of language and design methodology. The pattern-based approach is the subject of the following section.
FACTORS AFFECTING ADAPTABILITY – A PATTERN-BASED APPROACH We understand all there is for us to understand through repeated parts and portions. We live by recognizing and using recurrences, by relying on what happens over and over. What repeats is important and reveals commonality. The recognition of repeatable parts is the idea behind using patterns. Patterns are used in several areas. Christopher Alexander coined the term pattern language and explained the form well in his work "The Timeless Way of Building" (Alexander, 1979). As for software systems, identified patterns embody repetitions and recurrences within software. Alexander structured his patterns hierarchically into a system he called a "pattern language". The first book on software patterns was published by the "Gang of Four", the nickname of its authors (Gamma, Helm, Johnson and Vlissides, 1995). In approaching the research question of designing for adaptability, observation and analysis revealed some building blocks which deserve a strong focus when designing and building adaptable ERP software systems. The identified patterns fall into two groups: system patterns, which allow a neutral adaptability analysis of any ERP system, and use patterns, which are context dependent. The use patterns, or the business dimension, characterise the circumstances of usage of an ERP system. They reflect that adaptability-enabling factors are also related to decisions referring to the deployment of the system. Patterns in that area are, for instance, the capabilities of personnel (person-bound knowledge) and existing guidelines for properly deploying a software system. For that reason the second group, which is not further within the focus of this contribution, is termed use patterns. System patterns describe the immanent qualities of the ERP system itself. Independently of surrounding conditions, system patterns show the latent adaptability of the system.
The identified patterns of adaptability in software systems are mentioned below. 1. Self-organisation Self-organisation is a basic characteristic of natural systems that adapt automatically to changing conditions. Self-organization (autopoiesis) marks the ability of a system to determine the system structure by adjusting and steering mechanisms to ensure the long-term system existence (Maturana and Varela, 1987). This quality is a fundamental criterion which, for instance, requires the software system to behave efficiently and in a self-adjusting manner. The system elements, or rather subsystems, thereby produce their own order by taking up information about their environment and through interaction with the environment; this consolidates a "model" upon which further action in the real world is based. Self-organization of ERP systems is closely linked with mechanisms of adaptation such as customization. 2. Scalability Scalability refers to the permanent state of operating effectively and efficiently at many different scales. It is the best pattern for situations where large volumes of operational data must be written. The increased loads may be random and unpredictable, or planned out over time [RoSr02]. With reference to software, applications have to be conceived to grow in step with rising demands. An example is represented by the sliding usability of organizational levels for cost calculation, developed by an ERP system manufacturer. The system always makes as many levels available as needed (Gümbel, 2004). An ERP system is described as scalable if it remains effective when there is a significant increase or decrease in the number of resources, for instance data in parts lists. Usually the capacity is fixed and resource allocation is not optimized. 3. Modularity Modularity, or rather modularization, in general means the structuring of a system into small, partly autonomous and clear subsystems (Picot, Reichwald and Wiegand, 2003). These parts represent the modules.
A module consists of a module trunk and a module interface. The latter contains a specification of the performance and the qualities which are necessary for its surroundings. The module trunk implements the definition of the module, which is specified in the interface. Therefore single modules can be removed without much expenditure, replaced by others or added to another system. In this way modularity presents a possibility for efficient combination, reuse and fast change of informational applications. As for an ERP system, modularity is closely linked with component-based architectures. Modules are implemented so as to hide all information about them except what is available through their interfaces. 4. Mobility Mobility raises the question of spatially and temporally unlimited access to applications of the information system architecture, and therefore to its functions and data, through different technologies - for example via web browser, terminal server or a virtual private network - by means of which the applications' data can be accessed. Some ERP systems provide limited access; some are even fully web-based (webERP+\(^2\)). A second dimension is the platform independence of applications. This independence covers, for example, the hardware used, the operating system, databases or application servers. 5. Interoperability Interoperability describes the ability of applications to work together. Independent of the hardware used, the operating system, the network technology and the realized application, cooperation between these applications can easily be established. Interoperability allows uncomplicated access to different (also spatially distributed) data and processing resources within a workflow, or rather the easy combination of different information systems. For cooperation between enterprises, interoperability means increased communication and cooperation abilities.
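The module-trunk/module-interface idea from the modularity pattern can be made concrete with a short sketch. The pricing example below is entirely our assumption, not taken from the paper; it only shows that one module can be swapped for another without touching the client code.

```python
from abc import ABC, abstractmethod

class PricingModule(ABC):
    """The module interface: it specifies the contract for the surroundings."""
    @abstractmethod
    def net_price(self, gross: float) -> float: ...

class GermanVat(PricingModule):
    """One module trunk implementing the interface (19% VAT assumed)."""
    def net_price(self, gross):
        return round(gross / 1.19, 2)

class NoTax(PricingModule):
    """A drop-in replacement trunk for the same interface."""
    def net_price(self, gross):
        return gross

def invoice_total(module: PricingModule, gross: float) -> float:
    # The client only sees the interface, never the implementation.
    return module.net_price(gross)

print(invoice_total(GermanVat(), 119.0))  # 100.0
print(invoice_total(NoTax(), 119.0))      # 119.0
```

Swapping `GermanVat` for `NoTax` changes behaviour without any change to `invoice_total`, which is exactly the "removed without much expenditure, replaced by others" property described above.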
This indicator refers to the ability of system elements to make a high measure of compatibility available. Interoperability requires the use of well-established standards that define the behaviour of interfaces. It allows uncomplicated access to and coupling of different data and processing resources within a workflow, or rather the easy coupling of different ERP and information systems. 6. Self-Similarity Design patterns are traditionally related to architecture. Transferring this approach to information technology, one fundamental pattern could be identified: self-similarity can be considered a key feature that is part of all the mentioned system-based patterns, as it is related to the inner structure of each system pattern. Self-similarity is a pattern repeating itself on different scales, no matter the selected degree of abstraction. Self-similarity is a phenomenon exhibited by many natural objects (clouds, plants, mountains, etc.): the same essential structures appear at different size scales. Many chaotic systems also show self-similar behaviour. As an example of the advantages of self-similarity, a uniform design philosophy of applications shall be mentioned, which makes it easier to learn and to efficiently use an application on different platforms and levels. DISCUSSION: ADAPTABILITY ANALYSIS OF WEB SERVICES In this section an example of an adaptability analysis of web services is presented. Web services facilitate the integration of services. Web services are located at the application and service layer in the reference model. In the literature web services are seen as a new paradigm giving companies the freedom to create and reconfigure organisational competencies to sustain competitive advantages. Scalability: Web services facilitate the integration of services seamlessly. Web services allow organisations to link applications within and across enterprises.
Organisations have the ability to add or drop services of other business partners without worrying about the implementation details.

\(^2\) Available online: http://web-erp.sourceforge.net

Modularity: Web services are loosely coupled software components delivered over Internet standard technologies. A Web service represents a business function or a business service and can be accessed by another application. From the point of view of the calling application a Web service is modular: a modular entity that delivers services on demand through a well-defined programmatic interface. Mobility: On the one hand mobility is concerned with (data) access anywhere, anytime. Another aspect of mobility is the independence of software and hardware. Since web services use Internet technology to be invoked and to deliver their service, they can be published, located and invoked from just about anywhere on the Internet or a local network. The provider and the consumer of a Web service do not have to worry about the operating system, language, environment, or component model used to create or access the Web service, as web services are based on ubiquitous and open Internet standards, such as XML and HTTP. Self-Organisation: Web services are able to adapt to changes in their requirements and they are self-descriptive. Some features have been automated within web services, which makes them self-organising. The approach represents a major evolution in how systems connect and interact with each other. However, a Web service is a concept, an envelope which can be deployed for a specific purpose. The collection of operations is individual, as is the design and implementation of self-organisation, which is determined by the programmer. Self-Similarity: The design principle of Web services is similar.
They apply the same principles, which means a Web service is an interface that describes a collection of operations that are network-accessible through standardized XML messaging. Web services are actively supported by major IT companies such as IBM, Microsoft, and Sun Microsystems. They represent a new IT application development paradigm. Based on XML, the different elements of the technology allow the integration of heterogeneous applications within and across organisations. With web services technology, business processes can span departments, divisions and enterprise boundaries, allowing firms to integrate the services of multiple applications without concern for the underlying technologies and implementation characteristics. CHANGE COMPONENT FRAMEWORK - AT A GLANCE In order to apply the identified patterns to an ERP system we have modelled a layered system structure. In seeking generalisability, the extant perspectives tend to simplify a (real) ERP system into classes of services assigned to layers. The challenge is to produce a framework which serves the needs of the individual systems and system deployments. The illustration is shown in figure 1. The architectural model represents two layered ERP systems. Connectors mediate the interactions within the ERP system (numbers 1-4) and govern component interaction outside the ERP system (numbers 5-9). The basic purpose of the framework is to allow for appropriate checking and analysis of ERP systems. For that, the layers provide the context of the patterns. Accordingly, a framework is proposed that consists of specific levels of functionality termed the "control", "presentation", "adaptation", "application", "service", "data" and "infrastructure" layers. The need for levels, which are the result of iterative examination, reflects the different technological purposes of the model.
There is, at the control layer, a need to model business processes into the ERP system and to ensure that decisions are internally consistent. The control layer provides some sort of modelling language, as for instance to be found within the ARIS suite of SAP. At the presentation level the model's purpose is to enable communication with the user, as the user interface is located here. The application level is needed to structure and represent the software components. Part of this layer is the service layer, which serves the handling of resources and transactions. The data area covers data management. At the infrastructure level the decision is addressed of how the system is distributed and what topology is chosen. An additional perspective approaches adaptability: the purpose of the adaptation layer is to cover decisions related to the system-based patterns mentioned before. The usefulness of any model is limited, however, unless it provides specific guidance and discipline to operations. For that, technologies and standards were examined using the pattern-based approach. Thus, each layer was assigned one or more design considerations, such as standards and protocols supporting the service task. For instance, the control layer should allow acting on data modelled as business processes in the system. On the one hand there is a need to represent the data, which might be realized by protocols such as ebXML or those of BPMI, the Business Process Management Initiative. On the other hand, methods are needed to pull relevant data out of the control layer. The latter task is performed at connector number 1, to mediate data within one single system, and also at connector 5, to allow external data exchange. **Figure 1: Adaptability Reference Model for ERP Systems** The matrix below (table 1) lists the given marks per technology, providing first insights on adaptability measures.
To classify the assessment in terms of adaptability a limited number of codes is applied, which are shown in table 2. The usage of five degrees is the essence of several iterations. They have proven practicable enough for us to classify technologies and systems without introducing too much complexity. <table> <thead> <tr> <th>Pattern</th> <th>Control</th> </tr> </thead> <tbody> <tr> <td>Scalability</td> <td>?</td> </tr> <tr> <td>Modularity</td> <td>+</td> </tr> <tr> <td>Mobility</td> <td>+</td> </tr> <tr> <td>Interoperability</td> <td>+</td> </tr> <tr> <td>Self-organisation</td> <td>+</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Pattern</th> <th>Layer</th> <th>Control</th> </tr> </thead> <tbody> <tr> <td>Technology</td> <td></td> <td></td> </tr> <tr> <td>BPMI</td> <td>?</td> <td>+</td> </tr> <tr> <td>ebXML</td> <td>+</td> <td>+</td> </tr> </tbody> </table> **Table 1: Example shows system-based patterns for Control Layer (cut-out)** **Table 2: Codes applied for evaluation** The codes used: "+" full support for the pattern (enables adaptability); "-" no support for the pattern (breaks adaptability); "?" weak support for the pattern, some constraints exist. A validation of layers and connectors deploying the patterns of adaptability has been carried out for a number of technologies per layer, providing a foundation for the neutral adaptability analysis. The results reported are being developed jointly within the CHANGE project and have been catalogued in a "Reference Guide", which is frequently revised as technology gets evaluated. For a neutral ERP system analysis, each layer of the framework and its communication links are to be matched with the relevant ERP software architecture. The evaluation of each layer contributes to the overall rating. The more adaptable the techniques, the better the system is evaluated.
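The paper states only that each layer's evaluation contributes to the overall rating. The scoring and averaging below are therefore a hypothetical sketch of how the "+", "?" and "-" codes could be aggregated into a numeric layer rating; the numeric values are our assumption.

```python
# Hypothetical aggregation of the evaluation codes into a layer rating.
# The numeric scores are our assumption; the paper defines only the codes.
CODE_SCORE = {"+": 1.0, "?": 0.5, "-": 0.0}

def layer_rating(marks):
    """Average the pattern marks assigned to one layer."""
    return sum(CODE_SCORE[m] for m in marks) / len(marks)

# The Control-layer cut-out from the matrix: ?, +, +, +, +
print(layer_rating(["?", "+", "+", "+", "+"]))  # 0.9
```

Per-layer ratings obtained this way could then be combined (e.g. by a weighted sum over layers) into the overall system evaluation the text mentions.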
NEXT STEPS So far, current technologies which are used in software systems and ERP systems respectively have been catalogued in terms of adaptability. This has been carried out by applying patterns as the artefact of choice to indicate a system's adaptability. As pointed out, two dimensions of adaptability have to be considered: on the one hand the neutral system evaluation and on the other the business dimension. The case of a neutral system evaluation with regard to adaptability is the logical next step. To demonstrate the usage of the framework four open-source ERP systems have been chosen, namely AvERP, Compiere, Lx-Office and webERP+. All systems mentioned have been installed at the ERP Research Center attached to the University of Potsdam for in-depth studies. CONCLUSIONS In this contribution a framework was presented which helps to design, describe, categorize and analyse ERP systems with regard to adaptability. Adaptability as a research domain in information science shall enable business organisations to deploy adaptable software systems which support business processes during change and modification. Future research in the field of adaptable ERP systems faces a couple of challenges which are addressed within the project CHANGE. Efforts have to be made to put the presented ideas into an applicable tool-based procedure model. This also involves a further refined breakdown of patterns and a procedure to weight single patterns. Another challenge involves the design of adaptable ERP systems, leading to a transfer of results to the area of software engineering. For this purpose and beyond, the Centre for ERP Systems was recently founded at the University of Potsdam. REFERENCES
Basic Instructions

We are finally at a position where we can start going over some instructions!

MOV

The first is the MOV instruction which moves data from one location to another. The destination comes first, followed by the source. Either may be registers or memory. The sizes of the data you are moving must match (e.g. can't move a word into a byte):

```assembly
MOV reg, reg
MOV mem, reg
MOV reg, mem
MOV mem, immediate
MOV reg, immediate
```

Note the missing MOV instruction: you aren't allowed to move from one memory location directly to another memory location. Instead you must move to a register first. Such is the price one pays for a non-orthogonal architecture. Here are some examples:

```assembly
.data
x     db 10
y     db 20
total dw ?

.code
mov al, x        ; Move values to AL and BL
mov bl, y
mov total, 1000  ; Store 1000 into total
mov x, y         ; INVALID - memory to memory
mov ah, x+1      ; Move location x+1 into AH, which is y
```

Note the last example. We can reference memory as offsets from known memory locations. In the last case, we added one to the offset of x. This gives us the address for y, so the contents of y are moved into AH. Although y had a label, this technique lets you access data that may not have a label.

MOVSX, MOVZX

Sometimes we want to move a small-sized piece of data into a larger register. For example, we might want to move an 8 bit value into a 16 bit register. Generally this occurs with numbers. We might have a number that is being represented by a byte, but now we want to move it into a register and have the 16 bit register operate on the data. One solution is to use AL. We could copy the value that is a byte into AL, and then we would also have to zero out AH.
Rather than use two instructions, there is a special instruction to zero out the unfilled bits in the destination. This is the **movzx** instruction (move with zero extend):

```assembly
.data
mynum byte 1

.code
mov ax, 0AAAAh
movzx ax, mynum   ; AX now contains 0001
```

The movzx instruction requires the .386 or higher processor directive. We can do a similar thing for the extended registers:

```assembly
movzx eax, mynum  ; EAX now contains 00000001
```

In a similar fashion, the **movsx** instruction (move with sign extend) can be used to extend negative numbers. Consider what happens if we use movzx on a negative value:

```assembly
.data
mynum byte -1

.code
mov ax, 0AAAAh
movzx ax, mynum   ; AX now contains 00FF
```

Why did we get 00FF? Because -1 is represented as FF in two's complement. To correctly get -1 into AX, we need to extend (i.e. copy) the sign bit all the way to the most significant bit. If the sign bit is 1, then the movsx instruction pads with 1's instead of 0's:

```assembly
movsx eax, mynum  ; EAX now contains FFFFFFFF, or -1
```

Similarly, if the sign bit contained zero, then we would pad with 0's all the way out to the most significant bit.

**XCHG**

The next instruction is the XCHG instruction. This exchanges the contents of two registers or a register and a variable:

```assembly
XCHG reg, reg
XCHG reg, mem
XCHG mem, reg
```

This is an efficient way to swap two operands, for example, in sorting some data.

**INC and DEC**

INC is used to increment an operand by 1, while DEC decrements it by 1. The operand may be memory or a register.

**ADD**

The ADD instruction takes a destination and a source of the same size, adds them, and stores the result in the destination:

```assembly
ADD ah, al    ; Sets AH = AH + AL
ADD var1, 10  ; Sets Var1 = Var1 + 10
```

Depending on the result of the addition, status flags such as the zero, sign, overflow, and carry flags are affected.

**SUB**

The SUB instruction takes a destination and a source of the same size, subtracts the source from the destination, and stores the result in the destination.
```assembly
SUB ah, bl    ; Sets AH = AH - BL
SUB var1, 10  ; Sets Var1 = Var1 - 10
```

Depending on the result of the subtraction, status flags such as the zero, sign, overflow, and carry flags are affected.

Types of Operands

So far we have been dealing primarily with direct addresses and with immediate data. For now, let's describe direct, direct-offset, and register indirect addressing.

Direct operands refer to the contents of memory at some known location. For now these locations are specified by a label:

```assembly
.data
countLabel WORD 1000

.code
mov ax, countLabel   ; Moves 1000 into AX
inc countLabel
```

Here, countLabel refers to the address that is used to store a word. If we actually want to access the offset that a variable is stored at, we can use the offset operator. In protected mode, an offset is always 32 bits long. In real mode, offsets are 16 bits. (Figure: a variable named myByte stored at some offset from DS inside the data segment.)

The offset essentially gives us the working address of some data variable. Here is a code sample:

```assembly
.data
countLabel DWORD 1000

.code
mov esi, offset countLabel  ; Moves address of countLabel into ESI
                            ; This would be 0 in real mode, or in protected
                            ; mode some address where the 1000 is stored
```

For example, if countLabel is stored at offset 0 of its segment, then the value 0 gets loaded into ESI. If we did not use the offset operator, then the actual value inside countLabel (i.e. 1000) would be loaded into ESI.

**Direct-offset** operands are used to access locations offset up (+) or down (-) from a label. For example:

```assembly
.data
countLabel1 WORD 10
countLabel2 WORD 20

.code
mov ax, countLabel1+2  ; Moves 20 into AX
mov ax, countLabel2-2  ; Moves 10 into AX
```

I subtracted and added 2 because the word size is 2 bytes.
Another example in real mode:

```assembly
.data
bList db 10h, 20h, 30h, 40h   ; Let's say bList begins at offset 0
wList dw 1000h, 2000h, 3000h

.code
mov di, offset bList     ; DI = 0000
mov bx, offset bList+1   ; BX = 0001
mov si, offset wList+2   ; SI = 0006 (wList begins at 4, past bList's 4 bytes)
```

If we try this in protected mode:

```assembly
.data
bList db 10h, 20h, 30h, 40h
wList dw 1000h, 2000h, 3000h

.code
mov eax, offset bList    ; EAX = address of first byte in bList
mov ebx, offset bList+1  ; EBX = address of second byte in bList
mov edx, offset wList+2  ; EDX = address of 2000h
```

Finally, **register indirect** mode is used when a register contains an offset of some memory location. The contents of that memory location are then accessed. For example:

```assembly
.data
val1 BYTE 10h

.code
mov esi, offset val1  ; ESI contains offset of val1
mov al, [esi]         ; AL gets 10h
```

Using this mode it is possible to access memory outside our data segment. If that happens, a general protection fault occurs, which will crash our program. In real mode, we can only use the SI, DI, BX, or BP registers. We can access that memory location using brackets around the register, e.g. `[BX]`. By accessing `[BX]` we are accessing the effective address DS:BX, where BX is some offset from DS. For example:

```assembly
.data
countLabel1 WORD 1000
countLabel2 WORD 2

.code
mov ebx, offset countLabel1  ; Move offset of countLabel1 into EBX
mov ax, [ebx]                ; Move 1000 into AX
mov ax, [ebx+2]              ; Move 2 into AX (register combined with offset)
```

We will have more to say about these later... as you can see, this could be one way to access successive elements within an array.

**More MASM Operators and Directives**

There are various addressing operators available: OFFSET, PTR, LABEL, TYPE, and ALIGN. We have already discussed OFFSET. PTR is used to override the default size of an operand. This is used in combination with one of the following data types to indicate the new size: BYTE, SBYTE, WORD, SWORD, DWORD, SDWORD, FWORD, QWORD, TBYTE.
For example:

```assembly
mov al, byte ptr count          ; treat count like a byte
mov ax, word ptr newVal         ; treat newVal like a word
mov eax, dword ptr listPointer  ; treat listPointer like a doubleword
```

As an example, consider accessing val32 below:

```assembly
.data
val32 DWORD 12345678h

.code
mov ax, val32    ; INVALID: AX is 16 bits while val32 is 32 bits
mov dx, val32+2  ; INVALID: doesn't get the high word, tries to get a doubleword
```

The solution is to use PTR to override the size:

```assembly
mov ax, word ptr val32    ; AX = 5678h
mov dx, word ptr val32+2  ; DX = 1234h
```

In the above example we moved a large value into a small one. We can also move a small value into a large one, but we have to make sure that the memory locations after the first small value are set to the appropriate values:

```assembly
.data
wordList WORD 5678h, 1234h

.code
mov eax, dword ptr wordList  ; EAX = 12345678h
```

This moves data in reverse word order, since the low word comes first in memory.

The LABEL directive can be used to assign another label to a memory location. Below we assign a label called val16 to the same location as val32:

```assembly
.data
val16 label word
val32 dword 12345678h

.code
mov ax, val16    ; AX = 5678h
mov bx, val16+2  ; BX = 1234h
```

The ALIGN directive aligns a variable on a byte, word, doubleword, or paragraph boundary. The syntax is:

```assembly
ALIGN bound
```

Where bound can be 1 for byte, 2 for word, 4 for doubleword, etc. To do this, the assembler inserts empty bytes before the variable. The reason for aligning data is that the CPU can process data stored at even-numbered addresses more quickly than data at odd-numbered addresses (recall how blocks of memory are loaded into the cache). Example:

```assembly
bVal BYTE ?
ALIGN 2
wVal WORD ?  ; This word is now on an even boundary
```

The TYPE operator returns the size, in bytes, of a single element. For example:

```assembly
.data
var1 byte 20h
var2 word 1000h
var3 dword ?
var4 byte 10,20,30,40
msg  byte "Hello", 0

.code
mov ax, type var1  ; AX = 1
mov ax, type var2  ; AX = 2
mov ax, type var3  ; AX = 4
mov ax, type var4  ; AX = 1
mov ax, type msg   ; AX = 1
```

Note that TYPE does not count the length of a string or multiply defined data.

The LENGTHOF operator returns the number of elements that have been defined using DUP:

```assembly
.data
val1 dw 1000h
arr  dw 32 dup(0)
arr2 db 10 dup(0)

.code
mov ax, lengthof val1  ; AX = 1
mov ax, lengthof arr   ; AX = 32
mov ax, lengthof arr2  ; AX = 10
```

The SIZEOF operator multiplies the LENGTHOF by the TYPE:

```assembly
.data
arr  dw 32 dup(0)
arr2 db 10 dup(0)

.code
mov ax, sizeof arr   ; 32*2 = 64
mov ax, sizeof arr2  ; 10*1 = 10
```

More x86 Assembly Instructions

We are now at a point to talk about additional x86 assembly instructions. You have already seen how to define, move, and perform mathematical operations on data. The next topic is how to perform unconditional branches and loops. Later we will look at performing conditional branches.

**JMP**

The JMP instruction tells the CPU to "jump" to a new location. This is essentially a goto statement. We load a new IP (and possibly a new CS) and then start executing code at the new location. On the x86 we have three formats for the JMP instruction:

- **JMP SHORT** destination
- **JMP NEAR PTR** destination
- **JMP FAR PTR** destination

Here, destination is a label that is within -128 to +127 bytes (SHORT), a label that is within the same segment (NEAR), or a label that is in a different segment (FAR). By default, it is assumed that the destination is NEAR unless the assembler can compute that the jump can be short. Some usage examples:

```assembly
jmp L1           ; NEAR unless the assembler can compute SHORT is possible
jmp near ptr L1
jmp short L2
jmp far ptr L3   ; Jump to a different segment
```

If it is possible to use SHORT, that is preferred. In a short jump, the machine code includes a 1 byte value that is used as a displacement and added to the IP.
For a backward jump, this is a negative value; for a forward jump, a positive value. This makes the short jump efficient, since it doesn't need much space. The other types of jumps need to store a 16 or 32 bit address as an operand. Examples:

```assembly
Label1:
    jmp short Label2  ; Short jump
    ...
Label2:
    jmp Label1        ; Also a short jump, since the assembler
                      ; knows Label1 is close
```

We can use JMP to make loops:

```assembly
Label1:
    inc ax
    ...               ; do processing
    jmp Label1
```

This is of course an infinite loop unless we have a jump somehow to break out of it.

**LOOP**

For loops, we have a specific LOOP instruction. This is an easy way to repeat a block of statements a specific number of times. The ECX register is automatically used as a counter and is decremented each time the loop repeats. The format is:

```assembly
LOOP destination
```

Here is a loop that repeats 10 times:

```assembly
    mov ecx, 10
    mov eax, 0
start:
    inc eax
    ...
    loop start  ; Jump back to start
```

The LOOP instruction decrements ECX by one each time it executes. When ECX equals zero, the loop stops and no jump takes place. Upon the end of the above loop, ECX = 0 and EAX = 10. You have to be very careful with the LOOP instruction so that you don't change the contents of ECX inside the loop. Otherwise the loop will probably not execute the correct number of iterations.

**LOOPW, LOOPD**

In real mode, the LOOP instruction only works using the CX register. Since CX is 16 bits, this only lets you loop 64K times. If you have a 386 or higher processor, you can use the entire ECX register to loop up to $2^{32}$ times. LOOPD uses the ECX doubleword for the loop counter:

```assembly
.386                     ; in protected mode
    mov ecx, 0A0000000h
L1:
    ...
    loopd L1             ; loop A0000000h times
```

LOOPW uses a 16 bit word for CX, just like LOOP.

Indirect Addressing

An indirect operand is generally a register that contains the offset of data in memory. In other words, the register is a pointer to some data in memory.
Typically this data is used to do things like traverse arrays. In real mode, only the SI, DI, BX, and BP registers can be used. By default, SI, DI, and BX are assumed to be offsets from the DS (data segment) register, while BP is assumed to be an offset from the SS (stack segment) register. The format to access the contents of memory pointed to by an indirect register is to enclose the register in square brackets. For example, if BX contains 100, then [BX] refers to the memory at DS:100. Based on the real mode limitations, many programmers also typically use ESI, EDI, EBX, and EBP in protected mode, although we can also use other registers if we like. Here is an example that sums three 8 bit values:

```assembly
.data
aList byte 10h, 20h, 30h
sum   byte 0

.code
mov ebx, offset aList  ; EBX points to 10h
mov al, [ebx]          ; move 10h to AL
inc ebx                ; EBX points to 20h
add al, [ebx]          ; add 20h to AL
inc ebx                ; EBX points to 30h
add al, [ebx]          ; add 30h to AL
mov esi, offset sum    ; these two lines are the
mov [esi], al          ; same as MOV sum, al
exit
```

Here instead we add three 16-bit integers:

```assembly
.data
wordList word 1000h, 2000h, 3000h
sum      word ?

.code
mov ebx, offset wordList
mov ax, [ebx]
add ax, [ebx+2]  ; Directly add offset of 2
add ax, [ebx+4]  ; Directly add offset of 4
mov [ebx+6], ax  ; [ebx+6] is the offset of sum
```

Here is an example in real mode:

```assembly
.data
aString db "ABCDEFG", 0

.code
mov ax, @data           ; Set up DS for our data segment
mov ds, ax              ; Don't forget to include this
mov bx, offset aString  ; BX points to "A"
mov cx, 7
L1:
mov dl, [bx]            ; Copy char to DL
mov ah, 2               ; 2 into AH, code for display char
int 21h                 ; DOS routine to display
inc bx                  ; Increment index
loop L1
```

This loops through and copies A,B,C,D,E,F,G to DL and displays each to the screen. Here is another example that runs in real mode; can you figure out what it does? Recall that B800 is where video memory begins.
```assembly
mov ax, 0B800h
mov ds, ax
mov cx, 80*25
mov si, 0
L:
mov word ptr [si], 0F041h  ; need word ptr to tell MASM to
                           ; move just two bytes' worth
add si, 2
loop L
```

**Based and Indexed Operands**

Based and indexed operands are essentially the same as indirect operands: a register is added to a displacement to generate an effective address. The distinction between based and indexed is that BX and BP are "base" registers, while SI and DI are "index" registers. As we saw in the previous example, we can use the SI index register as if it were a base register. There are many formats for using the base and index registers. One way is to write the register as an index on an identifier, e.g. array[bx], much like you would use a traditional array in C or C++. Another technique is to add the registers together explicitly:

```assembly
mov ah, [array + ebx]   ; same as mov ah, array[ebx]
mov ah, [string + ebx]  ; same as mov ah, string[ebx]
```

We can also add together base registers and index registers:

```assembly
mov bx, offset string
mov si, 2
mov ah, [bx + si]  ; same idea: the byte at string+2 into AH
```

However we cannot combine two base registers or two index registers. This is just another annoyance of non-orthogonality:

```assembly
mov ah, [si + di]  ; INVALID
mov ah, [bp + bx]  ; INVALID
```

Finally, one other equivalent format is to put two registers back to back. This has the same effect as adding them:

```assembly
mov ebx, 1
mov esi, 2
mov ah, array[ebx][esi]  ; Moves number 4 to AH, offset+1+2
mov ah, [array+ebx+esi]  ; Also moves 4 to AH
```

Sometimes this format is useful for representing 2D arrays.
CSE 344 Final Exam
March 17, 2015

Name: ________________________________

<table>
<thead>
<tr> <th>Question</th> <th>Points</th> </tr>
</thead>
<tbody>
<tr> <td>Question 1</td> <td>/ 10</td> </tr>
<tr> <td>Question 2</td> <td>/ 30</td> </tr>
<tr> <td>Question 3</td> <td>/ 18</td> </tr>
<tr> <td>Question 4</td> <td>/ 24</td> </tr>
<tr> <td>Question 5</td> <td>/ 21</td> </tr>
<tr> <td>Question 6</td> <td>/ 32</td> </tr>
<tr> <td>Question 7</td> <td>/ 35</td> </tr>
<tr> <td>Question 8</td> <td>/ 20</td> </tr>
<tr> <td>Total</td> <td>/ 190</td> </tr>
</tbody>
</table>

The exam is closed everything except for 2 letter-size sheets of notes. No books, computers, electronic devices, phones of the smart or not-so-smart variety, telegraphs, telepathy, tattoos, mirrors, smoke signals, or other contraptions permitted. By putting your name on this exam, you are certifying that you did not give or receive any unpermitted aid in the exam. The exam lasts 110 min. Please budget your time so you get to all questions. Please wait to turn the page until everyone has their exam and you are told to begin. Relax. You are here to learn.

Reference Information

This information may be useful during the exam. Feel free to use it or not as you wish. You can remove this page from the exam if that is convenient.

Reference for SQL Syntax

Outer Joins

```sql
-- left outer join with two selections:
SELECT * FROM R LEFT OUTER JOIN S
  on R.x=55 and R.y=S.z and S.u=99
```

The UNION Operation

```sql
SELECT R.k FROM R
UNION
SELECT S.k FROM S
```

The CASE Statement

```sql
SELECT R.name,
  (CASE WHEN R.rating=1 THEN 'like it'
        WHEN R.rating IS NULL THEN 'do not know'
        ELSE 'unknown'
   END) AS a_rating
FROM R;
```

The WITH Statement

Note: WITH is not supported in sqlite, but it is supported in SQL Server and in postgres.
```sql
WITH T AS (SELECT * FROM R WHERE R.K>10)
SELECT * FROM T WHERE T.K<20
```

Reference for Relational Algebra

<table>
<thead>
<tr> <th>Name</th> <th>Symbol</th> </tr>
</thead>
<tbody>
<tr> <td>Selection</td> <td>σ</td> </tr>
<tr> <td>Projection</td> <td>π</td> </tr>
<tr> <td>Natural Join</td> <td>⋈</td> </tr>
<tr> <td>Group By</td> <td>γ</td> </tr>
<tr> <td>Set Difference</td> <td>−</td> </tr>
<tr> <td>Duplicate Elimination</td> <td>δ</td> </tr>
<tr> <td>Renaming of R to new relation with attributes A₁,A₂,A₃</td> <td>ρ_{A₁,A₂,A₃}(R)</td> </tr>
</tbody>
</table>

XQuery example (from lecture slides) (a reminder of XQuery syntax)

```
FOR $b in doc("bib.xml")/bib
LET $a := avg($b/book/price/text())
FOR $x in $b/book
WHERE $x/price/text() > $a
RETURN $x
```

Question 1. (10 points, 1 point each) Warm up – True or false (circle one).

T F Broadcast join requires data to be redistributed using a hash function.
T F A serializable schedule is always conflict-serializable.
T F Two-phase locking is used to handle transactions that span multiple partitions.
T F We need locks to ensure all transactions execute serially.
T F Hash indexes benefit range selection queries.
T F A relation can have at most one unique key.
T F All XQuery outputs are well-formed XML.
T F SQL queries and relational algebra expressions are one-to-one mappings.
T F Every key is a superkey.
T F A given schema with a set of functional dependencies can have multiple minimal superkeys.

Question 2. (30 points) Return of the Dawgs.¹ The International Sled Dog (Husky) Racing Association (ISDRA) has turned tech-savvy after the midterm!
This time they decided to store their sled race information using XML with the following DTD:

```
<!DOCTYPE races [
  <!ELEMENT races (race)+>
  <!ELEMENT race (id, (participant)+)>                   // id uniquely identifies each race
  <!ATTLIST race date CDATA #REQUIRED>                   // MM/DD/YYYY
  <!ATTLIST race location CDATA #REQUIRED>               // maximum 30 characters long
  <!ELEMENT participant (dog, musher)>
  <!ATTLIST participant resultPosition CDATA #REQUIRED>  // = 1 if winner
  <!ELEMENT dog (id, name, age)>                         // id uniquely identifies each dog
  <!ELEMENT musher (id, name)>                           // id uniquely identifies each musher
  <!ELEMENT id (#PCDATA)>                                // integers
  <!ELEMENT name (#PCDATA)>                              // maximum 30 characters long
  <!ELEMENT age (#PCDATA)>                               // integers
]>
```

Write XQuery expressions for the following queries. The data is stored in a file called races.xml.

a) Find the names of all the dogs that participated in races that took place at Iditarod on February 1, 2015. (7 points)

b) Find the average age of the dogs that won at least one race in Fairbanks. (7 points)

¹ Also the actual name of the Huskies 2005 Football team yearbook.

c) Convert the DTD above into a relational schema. (4 points)

d) Define a virtual view on top of your schema from c) that stores the number of distinct dogs that have raced at each location. The output schema should be raceStats(location varchar(20), numDogsRaced int). (5 points)

e) Write a non-recursive Datalog query for a) using your relational schema from d). (7 points)

Question 3. (18 points) Registering for races. The ISDRA maintains a website for racers to register for races.
Races are stored with schema:

races(mid int, did int, raceNum int)

The following pseudo-code is used for online registration:

```sql
register (musherId, dogId, raceNumber):
L1:  musherCount = execute(SELECT COUNT(*) FROM races
                           WHERE raceNum = raceNumber);
L2:  if (musherCount < 10)   // 10 mushers maximum per race
L3:    execute(INSERT INTO races VALUES (musherId, dogId, raceNumber));
```

a) Three different mushers attempt to register for race #5, which has only one slot left, by calling register from their browsers independently:

C1: register(1, 2, 5);
C2: register(2, 6, 5);
C3: register(3, 7, 5);

At the end of the day, all three of them succeeded in registering for the race! How could this happen? Show a schedule of the above commands that could result in this outcome. Indicate your answer using the labels above and assume each of L1, L2, and L3 is executed atomically. For instance, the schedule C1:L1; C1:L2; C1:L3; C2:L1; means execute L1 from C1, then L2 from C1, then L3 from C1, etc. (6 points)

b) ISDRA realizes the error above was caused by not having any locking protocol in their DBMS. ISDRA now implements strict two-phase locking with record-level shared and exclusive locks in their DBMS, and puts BEGIN TRANSACTION before L1 and COMMIT after L3. Explain why that fixes the problem in a). (6 points)

c) In addition to online registrations, the ISDRA system, now running on the DBMS from b) with the fix in register, also supports report generation about races using the following code:

```java
generateReport (raceNumber):
L1:  BEGIN TRANSACTION;
L2:  records = execute(SELECT * FROM races WHERE raceNum = raceNumber);
L3:  for (record : records) { print(record); }
L4:  count = execute(SELECT COUNT(*) FROM races WHERE raceNum = raceNumber);
L5:  print("total current registered mushers: " + count);
L6:  COMMIT;
```

They notice that sometimes there is an error: the count does not match the number of records printed even after using transactions!
Explain how that can happen and what you can do to fix this problem. (6 points)

Question 4. (24 points) Of Pigs and Dawgs. ISDRA stores pedigree history of dogs using files and would like to process them using Pig. Suppose they have the following Pig tables defined:

```
// each rid is unique
pedigreeRecords = load 'records.dat' using TextLoader as (rid:int, rname:chararray);
// each pid is unique
people = load 'people.dat' using TextLoader as (pid:int, pname:chararray);
// each (rid, pid) pair is unique
dogOwners = load 'owners.dat' using TextLoader as (do_rid:int, do_pid:int);
```

Consider the following Pig program:

```
x = group dogOwners by do_rid;
x2 = foreach x generate flatten(dogOwners), COUNT(dogOwners) as count;
x3 = cogroup x2 by do_rid, pedigreeRecords by rid;
x4 = foreach x3 generate flatten(pedigreeRecords), flatten(x2);
x5 = foreach x4 generate rname, count;
dump x5;  // prints result set x5
```

a) Pig implements the program above using MapReduce. Assume that each map function can read from at most one base table. How many map-reduce jobs will this program generate (hint: >1)? (3 points)

b) Implement the first map function (pseudocode is fine as long as you clearly state what the inputs are and what key-value pairs are generated). (7 points)

Code repeated here for your convenience.
```java
// each rid is unique
pedigreeRecords = load 'records.dat' using TextLoader as (rid:int, rname:chararray);
// each pid is unique
people = load 'people.dat' using TextLoader as (pid:int, pname:chararray);
// each (rid, pid) pair is unique
dogOwners = load 'owners.dat' using TextLoader as (do_rid:int, do_pid:int);

x = group dogOwners by do_rid;
x2 = foreach x generate flatten(dogOwners), COUNT(dogOwners) as count;
x3 = cogroup x2 by do_rid, pedigreeRecords by rid;
x4 = foreach x3 generate flatten(pedigreeRecords), flatten(x2);
x5 = foreach x4 generate rname, count;
dump x5;  // prints result set x5
```

c) Implement the first reduce function given the map function you wrote above (pseudocode is fine as long as you clearly state what the inputs are and what outputs are generated). (7 points)

```java
// x is a result set with each (do_rid, pid) pair
// pedigreeRecords is a result set with each rid
// dogOwners is a result set with each (do_rid, pid)
reduce x, pedigreeRecords, dogOwners as (do_rid, pid)
    // get the rname for this (do_rid, pid)
    get rname from pedigreeRecords;
    // get the count for this (do_rid, pid)
    get count from x;
    // generate the output row
    generate rname, count;
```

d) To check query performance, the ISDRA also stores their records in the following relations:

- `pedigreeRecords(rid int, rname varchar(20))`
- `people(pid int, pname varchar(20))`
- `dogOwners(do_rid int, do_pid int)`

Using these relations, show how you would rewrite the Pig program above using SQL. (7 points)

Question 5. (21 points) Running in parallel. The ISDRA now wants to compare Pig and parallel DBMS performance.
With data stored in the following relations:

- `races(mid int, did int, raceNum int) -- stores race records`
- `dogs(did int, name varchar(20), age int)`
- `mushers(mid int, name varchar(20), age int)`

They want to measure system performance with the following query:

```sql
SELECT d.did, COUNT(*)
FROM races r, dogs d, mushers m
WHERE r.mid = m.mid AND r.did = d.did AND m.age > 21
GROUP BY d.did
```

a) Briefly describe what this query is computing. (3 points)

b) If you can create two indexes on the three tables to speed up the query above, which would you choose? Briefly justify your answer. (6 points)

Code repeated here for your convenience.

```sql
races(mid int, did int, raceNum int) -- stores race records
dogs(did int, name varchar(20), age int)
mushers(mid int, name varchar(20), age int)

SELECT d.did, COUNT(*)
FROM races r, dogs d, mushers m
WHERE r.mid = m.mid AND r.did = d.did AND m.age > 21
GROUP BY d.did
```

c) Suppose `races`, `dogs`, and `mushers` are block-partitioned across three different machines. Draw out how the query will be executed by a parallel DBMS that implements all joins using shuffle (repartition) joins, assuming no indexes are available. Clearly label what each step is performing. (7 points)

d) Briefly describe what happens to the query plan above if the data is hash-partitioned rather than block-partitioned. (5 points)

Question 6. (32 points) After the midterm, the ISDRA bookstore is now under new management! For starters, they would like to redesign their DBMS.

a) Design an E/R diagram for the bookstore that contains the following objects and their attributes: (10 points)

- periodicals: name, issue number, publisher
- fiction: name, author, publisher
- catalogs: name, issue number, publisher
- stores: zip code, square footage
- newsstands: zip code

Model the following relationships among the objects:

- Each periodical contains a review of at most one other fiction or periodical.
- Each catalog contains an advertisement of at most one other catalog, fiction, or periodical.
- Stores sell only fiction.
- Newsstands sell only periodicals and catalogs.

b) Write the CREATE TABLE statements to represent this E/R diagram using SQL relations. Clearly label all keys and foreign keys. (10 points)

Learning from HW4, the store maintains the following relations for its employee records:

`employee(officeNum, SSN, phone, managerName, deptNum)`

Given the following functional dependencies:

- officeNum → phone
- SSN → officeNum, deptNum
- deptNum → managerName

c) List one key of the employee relation. (5 points)

d) Is the employee relation in BCNF? If so, write “Yes” below. Otherwise, decompose it into BCNF and underline all keys and foreign keys in the final relations. (7 points)

**Question 7.** (35 points) Trouble at the plant. The bookstore has been running different transactions on its inventory table with schema:

`inventory(bid int, price double, count int) -- attributes abbreviated as (b, p, c)`

and using the following transactions:

- **T1:** R(b); R(p); R(c); W(c);
- **T2:** R(b); W(c); R(c);
- **T3:** R(b); W(p); R(c); W(p);

For each of the schedules shown in a) to d), circle all categories that the given schedule satisfies.
(4 points each)

a) R1(b); R2(b); R1(p); R1(c); R3(b); W1(c); W2(c); W3(p); R3(c); R2(c); W3(p);

Serial / Serializable / Conflict-serializable / Not serializable

b) R3(b); R1(b); R1(c); W3(p); R2(b); R1(p); W2(c); W3(p); R2(c); R3(c); W1(c);

Serial / Serializable / Conflict-serializable / Not serializable

c) R2(b); R1(b); R3(b); W3(p); W2(c); R1(p); R1(c); R3(c); W1(c); W3(p); R2(c);

Serial / Serializable / Conflict-serializable / Not serializable

Code repeated here for convenience.

- T1: R(b); R(p); R(c); W(c);
- T2: R(b); W(c); R(c);
- T3: R(b); W(p); R(c); W(p);

d) Under what isolation level is the following schedule allowed?

R3(b); R1(b); W3(p); R2(b); R1(p); R1(c); W2(c); W1(c); R3(c); R2(c); W3(p);

Read uncommitted / Read committed / Repeatable read / Serializable

e) Draw the precedence graph for the schedule shown in d). (7 points)

Code repeated here for your convenience.

- T1: R(b); R(p); R(c); W(c);
- T2: R(b); W(c); R(c);
- T3: R(b); W(p); R(c); W(p);

Consider this schedule: R2(b); R3(b); W3(p); W2(c); R3(c); R2(c); W3(p);

f) Could it be produced by a scheduler using two-phase locking with only exclusive locks? If yes, show the schedule with locking operations (Use L1(b) to indicate T1 locking on attribute b, and U1(b) to indicate T1 unlocking attribute b). If no, briefly explain why not. (4 points)

g) Could it be produced by a scheduler using two-phase locking with shared and exclusive locks? If yes, show the schedule with locking operations. If no, briefly explain why not. (4 points)

h) Finally, could it be produced by a scheduler using strict two-phase locking with shared and exclusive locks? If yes, show the schedule with locking operations. If no, briefly explain why not. (4 points)

Question 8. (20 points, 5 points each) Short Answers.
a) What is the difference between horizontal and vertical partitioning? b) When would you use a virtual view as opposed to a materialized view and why? c) List out what ACID stands for and explain two of them. d) List one data model that is used in NoSQL systems other than relations. END OF EXAM Thank you for making the class enjoyable! Hope you have learned tons. Good luck with finals and have an awesome spring break! – 344 staff – Page 18 of 18
1 VaST pseudocode

Algorithm 1 Variational State Tabulation.

Initialize replay memory $\mathcal{M}$ with capacity $N$
Initialize sweeping table process $\mathcal{B}$ with transition add queue $\mathcal{Q}^+$ and delete queue $\mathcal{Q}^-$

1: for each episode do
2:   Set $t \leftarrow 0$
3:   Get initial observations $o_0$
4:   Process initial state $\bar{s}_0 \leftarrow \text{arg max}_s q_\phi(s|o_0)$
5:   Store memory $(o_0, \bar{s}_0)$ in $\mathcal{M}$
6:   while not terminal do
7:     Set $t \leftarrow t + 1$
8:     Take action $a_t$ with $\epsilon$-greedy strategy based on $\bar{Q}(\bar{s}_{t-1}, a)$ from $\mathcal{B}$
9:     Receive $r_t, o_t$
10:    Process new state $\bar{s}_t \leftarrow \text{arg max}_s q_\phi(s|o_{t-k:t})$
11:    Store memory $(o_t, \bar{s}_t, a_t, r_t)$ in $\mathcal{M}$
12:    Put transition $(\bar{s}_{t-1}, a_t, r_t, \bar{s}_t)$ on $\mathcal{Q}^+$
13:    if training step then
14:      Set gradient list $\mathcal{G} \leftarrow \emptyset$
15:      for sample in minibatch do
16:        Get $(o_{j-k-1:j}, a_j)$ from random episode and step $j$ in $\mathcal{M}$
17:        Process $q_\phi(s_{j-1}|o_{j-k-1:j-1}), q_\phi(s_j|o_{j-k:j})$ with encoder
18:        Sample $\hat{s}_j \sim q_\phi$ with temperature $\lambda$
19:        Process $p_\theta(o_j|\hat{s}_j), p_\theta(\hat{s}_j|a_j, \hat{s}_{j-1})$ with decoder and transition network
20:        Append $\nabla_{\theta,\phi} F(\theta, \phi; o_{j-k-1:j})$ to $\mathcal{G}$
21:        for $i$ in $\{j-1, j\}$ do
22:          Process $\bar{s}_{i}^{\text{new}} \leftarrow \text{arg max}_s q_\phi(s|o_{i-k:i})$
23:          Get $(\bar{s}_{i-1}, a_i, r_i, \bar{s}_{i}, a_{i+1}, r_{i+1}, \bar{s}_{i+1})$ from $\mathcal{M}$
24:          if $\bar{s}_{i} \neq \bar{s}_{i}^{\text{new}}$ then
25:            Put $(\bar{s}_{i-1}, a_i, r_i, \bar{s}_{i}), (\bar{s}_{i}, a_{i+1}, r_{i+1}, \bar{s}_{i+1})$ on $\mathcal{Q}^-$
26:            Put $(\bar{s}_{i-1}, a_i, r_i, \bar{s}_{i}^{\text{new}}), (\bar{s}_{i}^{\text{new}}, a_{i+1}, r_{i+1}, \bar{s}_{i+1})$ on $\mathcal{Q}^+$
27:            Update $\bar{s}_i \leftarrow \bar{s}_{i}^{\text{new}}$ in $\mathcal{M}$
28:          end if
29:        end for
30:      end for
31:      Perform a gradient descent step according to $\mathcal{G}$ with given optimizer
32:    end if
33:  end while
34: end for

2 Details to prioritized sweeping algorithm

We follow the “Prioritized Sweeping with reversed full backups” algorithm [Van Seijen and Sutton, 2013] with some adjustments: a subroutine is added for transition deletions, and priority sweeps are performed continuously except when new transition updates are received. The Q-values of unobserved state–action pairs are never used, so we simply initialize them to 0. Finally, we kept a model of the expected immediate rewards $E[r | s, a]$ explicitly, although this is not necessary and was not used in any of the experiments presented; we omit it here for clarity. In the algorithm, discretized states $\bar{s}$ are simplified to $s$.

Algorithm 2 Prioritized Sweeping Process.

Initialize $V(s) = U(s) = 0$ for all $s$
Initialize $Q(s, a) = 0$ for all $s, a$
Initialize $N_{sa}, N_{sa}^{s'} = 0$ for all $s, a, s'$
Initialize priority queue $P$ with minimum priority cutoff $p_{\text{min}}$
Initialize add queue $Q^+$ and delete queue $Q^-$

1: while True do
2:   while $Q^+, Q^-$ empty do
3:     Remove top state $s'$ from $P$
4:     $\Delta U \leftarrow V(s') - U(s')$
5:     $U(s') \leftarrow V(s')$
6:     for all $(s, a)$ pairs with $N_{sa}^{s'} > 0$ do
7:       $Q(s, a) \leftarrow Q(s, a) + \gamma \left(N_{sa}^{s'} / N_{sa}\right) \Delta U$
8:       $V(s) \leftarrow \max_b \{Q(s, b) \,|\, N_{sb} > 0\}$
9:       add/update $s$ in $P$ with priority $|U(s) - V(s)|$ if $|U(s) - V(s)| > p_{\text{min}}$
10:    end for
11:  end while
12:  for $(s, a, r, s')$ in $Q^+$ do
13:    $N_{sa} \leftarrow N_{sa} + 1$; $N_{sa}^{s'} \leftarrow N_{sa}^{s'} + 1$
14:    $Q(s, a) \leftarrow \left[Q(s, a)(N_{sa} - 1) + r + \gamma U(s')\right] / N_{sa}$
15:    $V(s) \leftarrow \max_b \{Q(s, b) \,|\, N_{sb} > 0\}$
16:    add/update $s$ in $P$ with priority $|U(s) - V(s)|$ if $|U(s) - V(s)| > p_{\text{min}}$
17:  end for
18:  for $(s, a, r, s')$ in $Q^-$ do
19:    $N_{sa} \leftarrow N_{sa} - 1$; $N_{sa}^{s'} \leftarrow N_{sa}^{s'} - 1$
20:    if $N_{sa} > 0$ then
21:      $Q(s, a) \leftarrow \left[Q(s, a)(N_{sa} + 1) - (r + \gamma U(s'))\right] / N_{sa}$
22:    else
23:      $Q(s, a) \leftarrow 0$
24:    end if
25:    if $\sum_b N_{sb} > 0$ then
26:      $V(s) \leftarrow \max_b \{Q(s, b) \,|\, N_{sb} > 0\}$
27:    else
28:      $V(s) \leftarrow 0$
29:    end if
30:    add/update $s$ in $P$ with priority $|U(s) - V(s)|$ if $|U(s) - V(s)| > p_{\text{min}}$
31:  end for
32: end while

3 Details to \(Q\)-value estimation

Here, we simplify the discretized states \(\bar{s}\) to \(s\) for clarity. We denote \(S\) as the set of all states corresponding to \(d\)-length binary strings, \(\bar{Q}(s, a)\) as the \(Q\)-value estimate used for action selection, and \(Q(s, a)\) as the \(Q\)-value for a state–action pair in the lookup table as determined by prioritized sweeping (which is only used if \((s, a)\) has been observed at least once).
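As a concrete illustration of the add-queue update in Algorithm 2 (lines 12–17), here is a minimal C++ sketch. It is not the authors' implementation: the `Sweeper` struct, its member names, and the map-backed tables are all hypothetical, and the priority-queue bookkeeping is omitted.

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <utility>

// Hypothetical tabular model illustrating the Q+ update of Algorithm 2.
struct Sweeper {
    double gamma = 0.99;
    std::map<std::pair<int,int>, double> Q;  // Q(s, a)
    std::map<std::pair<int,int>, int> N;     // N_sa: visit counts
    std::map<int, double> V, U;              // V(s) and the "used" value U(s)

    // Process one (s, a, r, s') transition taken from the add queue Q+.
    void add(int s, int a, double r, int s2) {
        const std::pair<int,int> sa(s, a);
        N[sa] += 1;
        // Q(s,a) <- [Q(s,a) * (N_sa - 1) + r + gamma * U(s')] / N_sa
        Q[sa] = (Q[sa] * (N[sa] - 1) + r + gamma * U[s2]) / N[sa];
        // V(s) <- max over observed actions b of Q(s, b)
        double best = Q[sa];
        for (const auto& kv : Q)
            if (kv.first.first == s && N.at(kv.first) > 0)
                best = std::max(best, kv.second);
        V[s] = best;
        // The real algorithm would now (re)insert s into the priority
        // queue P with priority |U(s) - V(s)| whenever it exceeds p_min.
    }
};
```

For example, two successive transitions (s=0, a=0, r=1, s'=1) and (s=0, a=0, r=3, s'=1) with U(1) = 0 leave Q(0,0) = (1 + 3)/2 = 2, the running average maintained by line 14.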
In order to calculate \(\bar{Q}(s_t, a)\) for a particular state–action pair, we first determine the Hamming distance \(m\) to the nearest neighbour(s) \(s \in S\) for which the action \(a\) has already been observed, i.e. \[ m = \min_{s \in S} \{D(s_t, s)|N_{sa} > 0\}, \] (1) where \(D(s_t, s)\) is the Hamming distance between \(s_t\) and \(s\) and \(N_{sa}\) denotes the number of times that action \(a\) has been taken from state \(s\). We then define the set \(S_{tm}\) of all \(m\)-nearest neighbours to state \(s_t\), \[ S_{tm} = \{s \in S|D(s_t, s) = m\}, \] (2) and the \(Q\)-value estimate used for action selection is then given by \[ \bar{Q}(s_t, a) := \frac{\sum_{s \in S_{tm}} N_{sa} Q(s, a)}{\sum_{s \in S_{tm}} N_{sa}}. \] (3) If \((s_t, a)\) has already been observed, then \(m = 0\), \(S_{tm} = \{s_t\}\) and \(\bar{Q}(s_t, a) = Q(s_t, a)\). If \(m = 1\), \(\bar{Q}(s_t, a)\) corresponds to an experience–weighted average over all states \(s\) with a Hamming distance of 1 from \(s_t\), \(m = 2\) to the average over neighbours with a Hamming distance of 2 etc. \(\bar{Q}(s_t, a)\) can be seen as the \(Q\)-value of an abstract aggregate state \(s_{tm}\) consisting of the \(m\)-nearest neighbours to \(s_t\). To show this, we introduce the index set of past experiences \(E_{sa} = \{(\tau, \mu)|s_\tau^\mu = s, a_\tau^\mu = a\}\) that contains all the time indices \(\tau\) for all episodes \(\mu\) where action \(a\) was chosen in state \(s\) (taking into account all reassignments as described in section 2.3 of the main text and in Algorithm 1). With the above definition of \(N_{sa}\) we see that \(N_{sa} = |E_{sa}|\), i.e. there are \(N_{sa}\) elements in the set \(E_{sa}\). With this and the update mechanism of prioritized sweeping (Algorithm 2) we can write \[ Q(s, a) = \frac{1}{N_{sa}} \sum_{\tau, \mu \in E_{sa}} r_\tau^\mu + \frac{1}{N_{sa}} \sum_{\tau, \mu \in E_{sa}} V(s_{\tau+1}^\mu), \] (4) where \(V(s) = \max_b \{Q(s, b)|N_{sb} > 0\}\). 
Substituting this into Equation 3, we obtain \[ \bar{Q}(s_t, a) = \frac{\sum_{s \in S_{tm}} \left( \sum_{\tau, \mu \in E_{sa}} r_\tau^\mu + \gamma \sum_{\tau, \mu \in E_{sa}} V(s_{\tau+1}^\mu) \right)}{\sum_{s \in S_{tm}} N_{sa}}. \] (5)

We now consider an aggregate state \(s_{tm}\) by treating all states \(s \in S_{tm}\) as equivalent, i.e. \(E_{s_{tm}a} = \{(\tau, \mu)\,|\,s_\tau^\mu \in S_{tm}, a_\tau^\mu = a\}\). With this definition we get \(\sum_{s \in S_{tm}} \sum_{\tau, \mu \in E_{sa}} = \sum_{\tau, \mu \in E_{s_{tm}a}}\) and we obtain \[ \bar{Q}(s_t, a) = \frac{\sum_{\tau, \mu \in E_{s_{tm}a}} r_\tau^\mu + \gamma \sum_{\tau, \mu \in E_{s_{tm}a}} V(s_{\tau+1}^\mu)}{N_{s_{tm}a}} = Q(s_{tm}, a), \] (6)

where we used Equation 4 to obtain the second equality.

4 Extended latent dimensionality analysis

Figure 1: Effect of latent dimensionality in a large maze (left column, Figure 3B in main text) and a small maze (right column, Figure 6 in main text). [A] Average reward. [B] Cumulative percentage of revisited state–action pairs over the course of training. The sharp transition at 50 000 steps corresponds to the beginning of training. [C] The average lookup distance $m$ as a function of time. [D] The average percentage of observations from a minibatch that were reassigned to a different state during training.

5 Extended sample efficiency results

Figure 2: Performance comparison between models for [A] rewarded forced runs (identical to Figure 5B in main text) and [B] penalized forced runs. Black arrows indicate addition of teleporter and forced runs.

6 Effect of training on frame histories

Figure 3: The free energy cost function over the course of training on Pong, broken into [A] the reconstruction terms and [B] the transition and entropy terms, conditioning on three additional past frames of observations ($k = 3$) and no additional frames ($k = 0$). Training with past frames as input resulted in faster learning on Pong (main text, Figure 7).
As shown here, training on past frames conveys no added benefit in reconstructing the current frame, but instead decreases the additional cost terms.

7 Hyperparameters

7.1 3D Navigation

For the three network–based models, hyperparameters were chosen based on a coarse parameter search in two mazes (Figure 3 excluding the hazards and Figure 5 excluding the teleporter), using the previously published hyperparameters as a starting point for the baselines \cite{Pritzel2017, Schaul2015, Mnih2015}. In all mazes except the smaller Plus–Maze, the agents explored randomly for 50 000 steps to initialize the replay memory before training; \( \epsilon \) was then annealed from 1 to 0.1 over 200 000 steps. In the Plus–Maze, the agents explored randomly for 10 000 steps and \( \epsilon \) was annealed over 40 000 steps. We used \( \epsilon = 0.05 \) for evaluation during test epochs, which lasted for 1000 steps. In all tasks we used a discount factor of 0.99.

The encoder of VaST and the networks for NEC and Prioritized D–DQN all shared the same architecture, as published in \cite{Mnih2015}, with ReLU activations. For all three networks, we used the Adam optimizer \cite{Kingma2014} with \( \beta_1 = 0.9, \beta_2 = 0.999 \), and \( \epsilon = 10^{-8} \), and trained on every 4th step. Unless otherwise stated, we used a replay memory size of \( N = 500\,000 \) transitions.

7.1.1 VaST

We used a latent dimensionality of \( d = 32 \) unless otherwise stated. For training, we used a minibatch size of 128 and a learning rate of \( 2 \times 10^{-4} \). For sweeping, we used \( p_{\text{min}} = 5 \times 10^{-5} \). For the Concrete relaxation, we used the temperatures suggested by \cite{Maddison2016}: \( \lambda_1 = 2/3 \) for sampling from the posterior and evaluating the posterior log–probability and \( \lambda_2 = 0.5 \) for evaluating the transition and initial state log–probabilities.
For the decoder architecture, we used a fully–connected layer with 256 units, followed by 4 deconvolutional layers with \( 4 \times 4 \) filters and stride 2, and intermediate channel depths of 64, 64 and 32 respectively. We used an MLP with 3 hidden layers (with 512, 256 and 512 units respectively) for each action in the transition network.

7.1.2 NEC

We used a latent embedding of size 64, \( n_s = 50 \) for the n–step Q-value backups, and \( \alpha = 0.1 \) for the tabular learning rate. We performed an approximate 50–nearest–neighbour lookup using the Annoy library (pypi.python.org/pypi/annoy) on Differentiable Neural Dictionaries of size 500 000 for each action. For training, we used a minibatch size of 32 and a learning rate of \( 5 \times 10^{-5} \).

7.1.3 Prioritized D–DQN

We used the rank–based version of Prioritized DQN with \( \alpha = 0.7 \) and \( \beta = 0.5 \) (annealed to 1 over the course of training). We used a minibatch size of 32 and a learning rate of \( 10^{-4} \) and updated the target network every 2000 steps.

7.1.4 LSH

The LSH–based algorithm does not use a neural network or replay memory, since the embedding is based on fixed random projections. We achieved the best results with \( d = 64 \) for the latent dimensionality. For prioritized sweeping, we used \( p_{\text{min}} = 5 \times 10^{-5} \).

7.2 Atari: Pong

We used a latent dimensionality of \( d = 64 \), a replay memory size of \( N = 1\,000\,000 \) transitions, and annealed \( \epsilon \) over 1 000 000 steps. All other hyperparameters were the same as for navigation.

References
CSCI-1200 Data Structures Test 2 — Practice Problems Note: This packet contains selected practice problems from Test 2 from three previous years. Your test will contain approximately one third to one half as many problems (totalling ~100 pts). 1 The Dynamic Tetris Slide [35 pts] Our implementation of the Tetris game for Homework 3 only allowed pieces to drop vertically. The full game rules also allow pieces to move horizontally, which can be used by a skilled player to tuck in underneath an “overhang”. In this problem we will extend our solution with the slide function that allows the square piece, the 'O' piece, to slide one space to the right. For this problem you don’t need to worry about sliding any other piece shape, or about sliding to the left. Below are two example Tetris games showing how this function works. ![Example Games] The representation for the Tetris class consists of 3 private member variables: data, heights, and width. The memory layout for the 4th diagram above is shown to the right. Remember that we must maintain the arrays to be exactly as long as necessary to store the blocks on the board. The space character is used to represent empty air underneath a block. The slide function takes in 2 integers, the row and column of the lower left corner block of the square 'O' piece that we want to slide. We will also implement the can_slide function which first tests whether a piece is able to slide to the right. It will return false if the 'O' piece at the specified row and column is already at the right edge of the board, e.g., calling can_slide(1,4) in the third image above returns false. It will return false if the 'O' piece at the specified row and column is blocked by another piece on the board, e.g., calling can_slide(2,2) in the 5th image above will return false. 
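Before tackling the graded implementation below, the shape of the edge and blocking checks can be sketched as follows. This is only a hypothetical sketch, not the official sample solution: it assumes a simplified board where `data` holds one `std::string` per column (index 0 is the bottom row and each string is exactly as long as its column is tall), which may differ from your Homework 3 representation.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-in for the Tetris board (hypothetical layout):
// data[c][r] is the character at column c, row r; a column string is
// exactly as long as the column is tall, and ' ' marks empty air.
struct Board {
    std::vector<std::string> data;
    int width;

    // (row, column) is the lower-left block of an 'O' piece, which
    // occupies columns column and column+1 and rows row and row+1.
    bool can_slide(int row, int column) const {
        // Already at the right edge: there is no column to slide into.
        if (column + 2 >= width) return false;
        // Blocked: either destination cell in column+2 holds a block.
        const std::string& dest = data[column + 2];
        for (int r = row; r <= row + 1; ++r)
            if (r < (int)dest.size() && dest[r] != ' ')
                return false;
        return true;
    }
};
```

With this layout, calling `can_slide` on a piece whose right half already sits in the last column returns false, matching the `can_slide(1,4)` edge case described above; otherwise only the two cells directly to the right of the piece need to be inspected.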
1.1 Algorithm Analysis [5 pts]

Assume that the game board has width w, the height of the tallest column is h, and the number of blocks (total number of piece characters and space characters) is b. What is the Big O Notation for the running time of your can_slide and slide functions that you have implemented on the next two pages? Write two to three concise and well-written sentences justifying your answers.

can_slide:

slide:

1.2 can_slide Implementation [ 12 pts ]

```cpp
bool Tetris::can_slide(int row, int column) const {
  // First, let's do some error checking on input arguments
  // and the current board state. This will help when we need
  // to debug this new function. Write if/else statements
  // and/or assertions to verify your assumptions.

  // sample solution: 8 line(s) of code

  // Now, we can do the logic necessary to determine whether this piece
  // can slide to the right.

  // sample solution: 6 line(s) of code
}
```

1.3 slide Implementation [18 pts]

```cpp
void Tetris::slide(int row, int column) {
  assert (can_slide(row, column) == true);
}
```

sample solution: 26 line(s) of code

```cpp
std::vector<std::string> a;
std::list<std::string> b;
// omitted: initialize both containers to hold n = a large number of words
01 a.push_front("apple");
02 b.push_front("banana");
03 a.push_back("carrot");
04 b.push_back("date");
05 std::vector<std::string>::iterator itr_a = a.begin();
06 std::list<std::string>::iterator itr_b = b.begin();
07 itr_a = a.insert(itr_a,"eggplant");
08 itr_a += 5;
09 itr_a = a.erase(itr_a);
10 itr_b += 5;
11 itr_b = b.insert(itr_b,"eggplant");
12 ++itr_b;
13 itr_b = b.erase(itr_b);
14 a.sort();
15 b.sort();
16 std::sort(a.begin(),a.end());
17 std::sort(b.begin(),b.end());
```

Which lines result in a compilation error?
Which lines cause a segmentation fault?
Which lines have a memory leak?
Which lines run in $O(1)$ time?
Which lines run in $O(n)$ time?
Which lines run in $O(n \log n)$ time?
Which lines run in $O(n^2)$ time?

Alyssa P.
Hacker and Ben Bitdiddle are working on a team project based on the linked grid of `Node` data structure from Homework 5. Alyssa suggests they start with the `print_perimeter` function, which takes in a pointer to a `Node` named `start`, and walks around the edge of the grid in a *clockwise direction*. The function should print the value stored in every `Node` visited. For example, `print_perimeter(start)` for the diagram shown on the right will print this sequence of values to the screen: ``` B C D H L K J I E A ``` Alyssa says it’s ok to assume that the grid is at least two rows tall and at least two columns wide and that `start` definitely points to a `Node` somewhere on the edge/perimeter of the grid. ### 3.1 Implement `print_perimeter` [ 12 pts ] ```cpp template <class T> class Node { public: T value; Node<T> *up,*down,*left,*right; }; ``` Meanwhile, Ben is working on a function named `rebutton`, which takes in 2 arguments: `start`, a pointer to a `Node` on the top edge of the grid and a bool `shift_up`. The function makes a vertical cut to the right of `start` and reconnects the `Nodes` on either side of the cut shifted up (below left) or shifted down (below right) one row. Ben claims that calling `rebutton(start, true)` followed by `rebutton(start, false)` will restore the original grid. And vice versa. ### 3.2 Implement `rebutton` [ 14 pts ] *sample solution: 28 line(s) of code* Write a recursive function named `max_coin_path` that searches a 2D grid of "coins", an STL vector of STL vector of non-negative integers, for a path back to the origin (0,0). In walking from the start location (lower right corner of grid) to the origin (upper left corner), the path is only allowed to move up or left one grid space at a time. The goal is to find a path that maximizes the sum of the coins along the path. The function should return the maximum sum. 
For the example shown above right, the path (3,4) (3,3) (3,2) (2,2) (2,1) (1,1) (1,0) (0,0) collects coins with values 1 + 2 + 1 = 4, which is the maximum coin sum that can be achieved on this grid. The path achieving that sum should be stored in the second argument passed to the function, an STL list of Locations named `path`. Note: there are a few similar paths that have the same sum. Your function may return any of these optimal paths. ### 4.1 Usage [2 pts] You will implement the `max_coin_path` on the next page. But first, complete the initial call to the `max_coin_path` function below. Assume `grid` has already been initialized; for example, with the data shown above. What additional information does your function need to get started? ```cpp std::list<Location> path; int max_coin_sum = max_coin_path(grid, path, ); ``` ### 4.2 Algorithm Analysis [5 pts] Assume that the grid width and height are `w` and `h` respectively, the number of non-zero coins in the grid is `c`, and the value of the maximum coin is `m`. What is the Big O Notation for the running time of your answer on the next page? Write three to four concise and well-written sentences justifying your answer. 4.3 Implementation [ 16 pts ] Now implement the `max_coin_path` function. Remember: it should be recursive. *sample solution: 27 line(s) of code* 5 Linked Tube Repair [ / 33 ] Alyssa P. Hacker is working on a modified linked list that is both two-dimensional and circular. A small sample with \( \text{height}=3 \) and \( \text{circumference}=4 \) is shown below. Each templated Node has pointers to its 4 neighbors. The top and bottom edges of the tube structure have NULL pointers. But the left and right edges wrap around, like a circularly linked list. This cylindrical tube structure may have any number of nodes for its height and its circumference. 5.1 Tube repair Diagram [ / 4 ] First Alyssa wants to tackle the challenge of repairing a hole in the structure. 
Assume a single Node is missing from the structure, and we have a pointer \( n \) to the Node immediately to the left of the hole. Modify the diagram below to show all of the necessary edits for a call to \( \text{repair}(n,7) \); ``` template <class T> class Node { public: // REPRESENTATION T value; Node<T> *up; Node<T> *down; Node<T> *left; Node<T> *right; }; ``` 5.2 Thinking about Tube repair Complexity [ / 3 ] The \( \text{repair} \) function should have constant running time in most cases. Describe an example structure with a single missing Node that can be repaired, but not in constant time. Write 2-3 concise and well-written sentences. \textit{You may want to complete the implementation on the next page before answering.} 5.3 Tube repair Implementation Now, implement repair, which takes 2 arguments: a pointer to the Node immediately to the left of the hole and the value to be stored in the hole. You may assume a single Node is missing from the structure. sample solution: 26 line(s) of code Now write `destroyTube` (and any necessary helper functions) to clean up the heap memory associated with this structure. The function should take a single argument, a pointer to any `Node` in the structure. You may assume the structure has no holes or other errors. You cannot use a `for` or `while` loop. `sample solution: 17 line(s) of code` Complete the Vec assignment operator implementation below, while minimizing wasted heap memory. Assume the allocator is most efficient when all heap allocations are powers of two (1, 2, 4, 8, 16, etc.) 
```cpp
1  template <class T>
2  Vec<T>& Vec<T>::operator=(const Vec<T>& v) {
3    if (this != &v) {
4      delete [] m_data;
5      m_size = v.m_size;
6      m_alloc = v.m_alloc;
7      m_data = new T[m_size];
8      for (int i = 0; i < m_size; ++i) {
9        m_data[i] = v.m_data[i];
10     }
11   }
12   return *this;
13 }
```

Add code below to perform a simple test of the assignment operator:

```cpp
Vec<double> v;
v.push_back(3.14159);
v.push_back(6.02);
v.push_back(2.71828);
```

Is line 12 necessary? Continue your testing code above with a test that would break if line 12 was omitted.

What is the purpose of line 3? Write code for a test that would break if lines 3 and 10 were omitted.

Write a function `embellish` that modifies its single argument, `sentence` (an STL list of STL strings), adding the word “very” in front of “pretty” and adding “with a wet nose” after “grey puppy”. For example:

```
the pretty kitty sat next to a grey puppy in a pretty garden
```

Should become:

```
the very pretty kitty sat next to a grey puppy with a wet nose in a very pretty garden
```

**sample solution:** 20 line(s) of code

If there are \( w \) words in the input sentence, what is the worst case Big O Notation for this function? If we switched each STL list to STL vector in the above function, what is the Big O Notation?

| STL list: | STL vector: |

Complete **redundant**, which takes a sentence and 2 phrases and replaces all occurrences of the first phrase with the second, shorter phrase.
For example “pouring down rain” is replaced with “pouring rain”:

```
it is pouring down rain so take an umbrella
→ it is pouring rain so take an umbrella
```

Or we can just eliminate the word “that” (the replacement phrase is empty):

```
I knew that there would be late nights when I decided that CS was the career for me
→ I knew there would be late nights when I decided CS was the career for me
```

```cpp
typedef std::list<std::string> words;

void redundant(sentence, phrase, replace) {
  // sample solution: 19 line(s) of code
}
```

Write a useful but buggy segment of code (or function) that will compile with no errors but will produce the indicated compilation warning. Put a star ⭐ next to the line of code that will trigger the warning. Write a concise and well-written sentence describing the intended vs. actual (buggy) behavior of the code.

- warning: comparison of integers of different signs: 'int' and 'unsigned int'
- warning: control reaches / may reach end of non-void function
- warning: variable is uninitialized when used here / in this function
- warning: returning reference to local temporary object / reference to stack memory associated with a local variable returned

Ben Bitdiddle wrote the following code fragment to manage his personal information.

```cpp
1  std::ifstream istr("my_information.txt");
2  std::string s;
3  std::vector<std::string> data;
4  while (istr >> s) { data.push_back(s); }
5  std::vector<std::string>::iterator password = data.begin()+4;
6  data.push_back("credit_card:");
7  data.push_back("1234-5678-8765-4321");
8  data[4] = "qwerty";
9  std::cout << "my password is: " << *password << std::endl;
```

Write “True” in the box next to each true statement. Leave the boxes next to the false statements empty.

- Lines 2 & 3 will produce an “uninitialized read” error when run under gdb or lldb.
- Line 5 is not a valid way to initialize an iterator.
- Ben’s credit card information is not saved back to the file.
- This program might behave differently if re-run on this computer or another computer. - A memory debugger might detect an “unaddressable access of freed memory” error on Line 9. - If we move lines 6 & 7 after line 9, this code fragment will run without memory errors. - This code contains memory leaks that can be detected by Dr. Memory or Valgrind. - These password choices disqualify Ben from any job in computer security. Eva Lu Ator is working on her capstone project to manage physical storage facilities. She’s mapped out the overall design and started implementation of the two classes. ```cpp class Box { public: Box(int w, int d, int h) : width(w), depth(d), height(h) {} int width; int depth; int height; }; Storage storage(4,3,2); assert (storage.available_space() == 24); Box *a = new Box(2,2,2); assert (storage.add(a,0,0,0)); Box *b = new Box(3,2,1); assert (!storage.add(b,2,0,0)); delete b; Box *b_rotated = new Box(2,3,1); assert (storage.add(b_rotated,2,0,0)); Box *c = new Box(1,1,1); assert (storage.add(c,2,0,1)); assert (storage.available_space() == 9); ``` ```cpp class Storage { public: Storage(int w, int d, int h); // FILL IN FOR PART 1 bool add(Box *b, int w, int d, int h); int available_space(); private: void remove(Box *b, int w, int d, int h); Box ****data; int width; int depth; int height; }; bool Storage::add (Box *b, int w, int d, int h) { for (int i = w; i < w+b->width; i++) { if (i >= width) return false; for (int j = d; j < d+b->depth; j++) { if (j >= depth) return false; for (int k = h; k < h+b->height; k++) { if (k >= height) return false; if (data[i][j][k] != NULL) return false; } } } for (int i = w; i < w+b->width; i++) { for (int j = d; j < d+b->depth; j++) { for (int k = h; k < h+b->height; k++) { data[i][j][k] = b; } } } return true; } ``` 11.1 Missing functions from Storage Class Declaration [ / 5 ] Her friend Ben Bitdiddle doesn’t remember much from Data Structures, but he reminds her that classes with dynamically-allocated memory need 
a few key functions. Fill in the missing prototypes for PART 1.

11.2 Storage Destructor [ / 20 ]

Eva explains to Ben that the private `remove` member function will be useful in implementing the destructor. First write the `remove` member function:

*sample solution: 10 line(s) of code*

Now write the Storage class destructor:

*sample solution: 14 line(s) of code*

12 Transpose Linked Grid [ / 27 ]

Louis B. Reasoner is working on a new member function for our Homework 5 Linked Grid named `transpose`. This function should mirror or flip the elements along the diagonal. Here’s a sample grid with integer data and how it prints before and after a call to `transpose`:

```cpp
grid.print();
std::cout << std::endl;
grid.transpose();
grid.print();
```

```
1  2  3  4
8  7  6  5
9 10 11 12
```

```
1 8  9
2 7 10
3 6 11
4 5 12
```

12.1 Diagram [ / 7 ]

First neatly modify the diagram of this smaller grid below to show all of the necessary edits that must be performed by a call to `transpose()`.

```cpp
template <class T> class Node {
public:
  // REPRESENTATION
  T value;
  Node<T> *up;
  Node<T> *down;
  Node<T> *left;
  Node<T> *right;
};
```

12.2 Complexity Analysis [ / 5 ]

What is the Big 'O' Notation for the running time of the `transpose()` member function? Assume the grid width is \( w \) and the height is \( h \). Write 1-2 concise and well-written sentences justifying your answer. You probably want to complete the implementation on the next page before answering.

12.3 Implementation

Louis has suggested that we first implement a helper non-member function named `swap`, which will make the implementation of `transpose` more concise.

*sample solution: 5 line(s) of code*

Now implement `transpose`, as it would appear outside of the `Grid` class declaration.

*sample solution: 16 line(s) of code*

13 Organizing Words [ / 30 ]

Alyssa P.
Hacker is working on a program to clean up a dataset of words. The task is to write a function named `organize_words` that takes in an STL vector of STL lists of words (STL strings). The function should organize the words into groups by word length, and ensure that the words are sorted within each group. Many or most of the words will already be in the right place. That is, they will already be in the slot of the vector that matches the length of the word. And the neighboring words in each slot/list will already be mostly alphabetized. For example, given the data shown on the left, your implementation should move the four misplaced words to produce the data shown on the right. ``` 0 1 diamond 2 3 gem malachite 4 jade opal rock ruby 5 geode pearl talc stone topaz 6 garnet quartz gypsum 7 amethyst azurite emerald 8 fluorite sapphire 9 ``` ``` 0 1 2 3 gem 4 jade opal rock ruby talc 5 geode pearl stone topaz 6 garnet gypsum quartz 7 azurite diamond emerald 8 amethyst fluorite sapphire 9 malachite ``` To make the problem a little more “fun”, you are NOT ALLOWED to use: - the STL vector subscript/indexing operator, [ ], or .at(), - the STL sort function, or - any of the push or pop functions on vector or list. You may assume that the initial vector has at least as many slots as the longest word in the structure. 13.1 Complexity Analysis - Big 'O' Notation [ / 6 ] Once you’ve finished your implementation on the next pages, analyze the running time of your solution. Assume there are \( w \) total words in the whole structure, \( v \) slots in the vector, a maximum of \( m \) words per list, and \( x \) words are misplaced and need to be moved. Write 2-3 concise and well-written sentences justifying your answer. Alyssa suggests writing a helper function named `place` that will place a word in the correct location in the structure. Work within the provided framework below. Do not add any additional `for` or `while` loops. 
```c
void place( /* your code here */ ) {
  // sample solution: 2 line(s) of code
  while (          ) {
    // sample solution: 3 line(s) of code
    while (          ) {
      // sample solution: 5 line(s) of code
    }
  }
}
```

```c
// sample solution: 5 line(s) of code
```

13.3 Organize Implementation

And now write the `organize` function, which calls the `place` function. Again, work within the provided framework below and do not add any additional `for` or `while` loops.

```c
void organize_words( ) {
  // sample solution: 2 line(s) of code
  while (          ) {
    // sample solution: 2 line(s) of code
    while (          ) {
      // sample solution: 8 line(s) of code
    }
  }
}
```

Ben Bitdiddle was inspired by the recursive merge sort example from Data Structures lecture and proposes it as a guide to compute the smallest interval that contains a collection of floating point numbers (e.g., the minimum and maximum). Implement Ben's idea, a recursive function named `compute_interval` that takes in an STL `vector` of `float`s and returns an `Interval` object. For example:

```
6.2 4.3 10.4 2.5 8.4 1.5 3.7 → [1.5, 10.4]
```

```cpp
class Interval {
public:
  Interval(float i, float j) : min(i), max(j) {}
  float min;
  float max;
};
```

*sample solution: 12 line(s) of code*

Without resorting to personal insults, explain in two or three concise and well-written sentences why Ben’s idea isn’t going to result in significant performance improvements. Be technical.
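For review purposes, here is one possible sketch of the recursive divide-and-conquer approach Ben proposes. The two-argument overload is a hypothetical helper introduced for illustration; only the single-argument interface is named in the problem statement. The sketch assumes the vector is non-empty.

```cpp
#include <cassert>
#include <vector>

// Interval class as given in the problem statement.
class Interval {
public:
  Interval(float i, float j) : min(i), max(j) {}
  float min;
  float max;
};

// Hypothetical helper: compute the interval of v[low..high] by splitting
// the range in half, recursing on each half, and merging the two results.
Interval compute_interval(const std::vector<float>& v, int low, int high) {
  if (low == high)                       // base case: a single element
    return Interval(v[low], v[low]);
  int mid = (low + high) / 2;
  Interval left = compute_interval(v, low, mid);
  Interval right = compute_interval(v, mid + 1, high);
  return Interval(left.min < right.min ? left.min : right.min,
                  left.max > right.max ? left.max : right.max);
}

// The interface described in the problem; assumes v is non-empty.
Interval compute_interval(const std::vector<float>& v) {
  return compute_interval(v, 0, (int)v.size() - 1);
}
```

The split-recurse-merge shape mirrors merge sort, but every element is still examined exactly once, which is relevant to the discussion question above.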
Finalization Report: Homogeneous PVM/PARIX

Commission of the European Communities
ESPRIT III PROJECT NB 6756
CAMAS: Computer Aided Migration of Applications System
Technical Report CAMAS-TR-2.3.4

B. J. Overeinder and P. M. A. Sloot, University of Amsterdam
J. Petersen, Parsytec GmbH
October 1994

Abstract

This document reports on the design and implementation considerations of PVM/PARIX, homogeneous version 1.0. This version is for use with PARIX 1.2 only. Further, it contains information on how to use Homogeneous PVM/PARIX, and the appendix contains the installation notes.

1 What Is Homogeneous PVM/PARIX?

The popularity of PVM nowadays can be partly explained by the inherent portability of PVM programs over a large number of parallel systems.
The spectrum of parallel systems ranges from loosely coupled networks with remote shell capabilities, via parallel clusters, to MPP architectures. The use of PVM as a parallel programming environment for MPP architectures raises the following observation. The heterogeneity which made PVM so popular in the beginning now seems a drawback in the application of PVM to MPP architectures, due to the large overhead needed to handle this heterogeneity. This is especially true for the communication primitives; e.g., communication latencies between processes inside the MPP can become quite large.

This has motivated the development of a so-called homogeneous version of PVM for the PARIX parallel operating system. This PVM/PARIX version carries the qualifier homogeneous, which essentially means that all PVM processes are to be run on the MPP system only. Thus, the parallel virtual machine can only consist of the nodes in the Parsytec MPP, and PVM inter-process communication with the front-end system is not supported (of course, I/O with the front-end is possible; this is handled by PARIX). It is therefore not possible to send messages to PVM tasks outside the MPP. The large advantage, however, is that the communication latency and throughput are improved drastically, resulting in a communication performance which is almost as fast as the underlying PARIX layer.

The user interface of this Homogeneous PVM/PARIX version is somewhat different from that of standard PVM. There is no console present, for example to start a virtual machine, and Homogeneous PVM/PARIX programs are issued as normal PARIX programs, e.g., with the `run` or `hpvmrun` script (see also Section 4).

The homogeneous version of PVM for PARIX is an implementation of PVM 3.2.6 [1] on top of PARIX 1.2 [2]. All PVM functionality applicable to MPP systems has been incorporated, with the exception of `pvm_recvf()`.
2 Design Considerations

2.1 General Overview of Design

The Homogeneous PVM/PARIX version is designed and implemented on top of an asynchronous communication layer, called the Communication Kernel. The Communication Kernel, in its turn, is implemented on top of PARIX, the operating system for Parsytec's parallel architectures. See also Fig. 1.

Figure 1: The Homogeneous PVM/PARIX design overview.

The advantage of this layered design is that the functionality gap between PVM and PARIX is bridged by the Communication Kernel. This strategy results in an implementation where the PVM intrinsics are clearly separated from the MPP operating system dependent characteristics. This improves the maintainability and the portability to newer PVM versions.

2.2 Design Considerations of the Communication Kernel

The Communication Kernel's primary raison d'être was the need for typed asynchronous buffered communication, which is PVM's basic message passing model. The implementation of the typed asynchronous buffered communication in a separate layer is motivated by the generic character of this message passing model. During design and implementation, other functionality needed for a PVM implementation was found generic enough to be added to the Communication Kernel.

As an extension to the point-to-point communication, a multicast communication primitive has been integrated in the Communication Kernel. Other functionality added to the Communication Kernel is dynamic remote context creation and a run server capable of load balancing according to a round-robin strategy. These two components together efficiently support the `pvm_spawn` call.

### 2.3 Design Considerations of Homogeneous PVM/PARIX

With the effective support of the Communication Kernel, the PVM implementation has become quite straightforward. Many of the PVM/PARIX calls are implemented without complications on top of the Communication Kernel.
Most prominent among the implementation efforts in PVM/PARIX were the multiple message buffering scheme and the group server. The implementation of the PVM multiple message buffering scheme is as flexible as can be: there is no restriction on the number of message buffers, and there is no limit on the size of each buffer.

Group communication and synchronization (`pvm_bcast` and `pvm_barrier`, respectively) are administered and coordinated by the group server. The group server is an independent thread/context running on processor number 0, but could be placed on any other processor (processor number 0 is always present). In standard PVM, the group server is built on top of the PVM routines. In our design, the group server is implemented on top of the Communication Kernel, and thus resides in the same layer as PVM. This not only improves performance by circumventing an extra layer, but also has the advantage that the group server can make direct use of the multicast provided by the Communication Kernel.

Worth mentioning is that the Homogeneous PVM/PARIX implementation supports multiple (dynamic) PVM tasks per MPP node. Many other MPP-specific PVM implementations support only one task per node.

### 3 Programming with PVM/PARIX

The Homogeneous PVM/PARIX version is software compatible with other implementations of PVM 3.2.6 as described in [1], with the exceptions discussed in Section 5.

#### Common Programming Practices

With the Homogeneous PVM/PARIX version, PVM programs become normal PARIX executables, just like any PARIX program you would write yourself. However, the programming practice of writing PVM programs has not changed. Each PVM program has to include `pvm3.h` (C) or `fpvm3.h` (FORTRAN), found in the `include` directory of the distribution. The first PVM call in any program has to be `pvm_mytid()` (or `pvmfmytid` in FORTRAN programs). This routine initializes the PVM layer and enrolls the task to PVM.
In order to terminate the PVM program, `pvm_exit()` has to be called. Without calling this function, each node will hang forever, waiting for the other nodes to call `pvm_exit()`.

#### Compiling PVM Programs

Compiling PVM programs is like compiling PARIX programs, with the exception of the library that is to be linked with the application. Depending on whether you are compiling a program for Transputer or PowerPC systems, you prefix the compiler call with `px` for Transputer architectures, and `ppx` for PowerPC architectures. The command-line to compile a program for a Transputer system would look like:

```
$ px cc.px -I$pvmdir/include file.c -o file -L$pvmdir/lib -lpvm3
```

or, using FORTRAN:

```
$ px f77.px -I$pvmdir/include file.f -o file \
    -L$pvmdir/lib -lfpvm3 -lpvm3
```

When you want to compile a program that came with a Makefile, the simplest way to build the executable is:

```
$ aimk <target>
```

**Group Communication Library**

The group communication code has been integrated into the standard PVM library. It is therefore not necessary to link with a separate group library, in contrast to the standard PVM implementation. To prevent “old” makefiles from generating all sorts of errors because of a missing group communication library, PVM/PARIX comes with a dummy group library that contains nothing, but keeps the linker happy.

### 4 Running Programs with PVM/PARIX

**Starting Programs**

Since PVM/PARIX programs are regular PARIX programs, they can be run using the standard PARIX `run` utility. However, since many PVM makefiles automatically install executables in `$HOME/pvm3/bin/<ARCH>`, using `run` may be inconvenient. Therefore, PVM/PARIX comes with a simple front-end to `run`, called `hpvmrun`, that looks in different places for the binary to execute. The flags and arguments needed to run PARIX jobs should still be given to either `run` or `hpvmrun` (note: `hpvmrun` accepts the same arguments as `run`).
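Putting the programming practices described above together, a minimal Homogeneous PVM/PARIX program might look like the following sketch. It uses only the `pvm_mytid()` and `pvm_exit()` calls discussed in Section 3; the printed message format is illustrative, not mandated by the report.

```c
#include <stdio.h>
#include "pvm3.h"   /* from pvmdir/include */

int main(void)
{
    /* First PVM call: initializes the PVM layer and enrolls this task. */
    int mytid = pvm_mytid();
    if (mytid < 0) {
        fprintf(stderr, "pvm_mytid() failed\n");
        return 1;
    }

    printf("task t%x enrolled\n", mytid);

    /* Required: without pvm_exit(), each node hangs forever waiting
       for the other nodes to call pvm_exit(). */
    pvm_exit();
    return 0;
}
```

Compiled with the `px`/`ppx` command lines shown above, the resulting binary is started like any PARIX program.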
Executing a PVM/PARIX job on a four-processor partition becomes

```
$ hpvmrun -a p4 file
```

where `nrm` allocates the four nodes; or, with a pre-allocated partition,

```
$ hpvmrun -f0 2 2 file
```

Since there exists a number of different run scripts, it is impossible to give detailed information on how to run PARIX jobs on your particular system. Please refer to run(1) for more information.

**NOTE:** Do not forget to increase the maximum number of virtual links if one of the following errors is reported: `PX_AllocLLink error` or `PX_AllocVLink error` for PowerPC; and `AllocLLink error` or `AllocVLink error` for Transputer. See also the “Release Notes PARIX 1.2-PPC”, Section “Frequently Asked Questions”.

**PVM Console**

There is no PVM console for the Homogeneous PVM/PARIX implementation, since all tasks can be managed from the command-line. As a result, the output of each of the parallel tasks is sent to the terminal where the `run` or `hpvmrun` was issued.

5 Notes on the Implementation

**Unimplemented Calls**

This PVM implementation adheres to the definitions in the PVM 3.2.6 manual, with a few exceptions (see also the manual pages [3]). Most notably, the functionality that is not applicable to a homogeneous MPP implementation is not supported (e.g., `pvm_addhost`). Apart from this, several functions not suitable for MPP systems in general are not implemented, such as the signal capabilities. This affects the functions in Table 1.

<table>
<thead>
<tr><th>Function</th></tr>
</thead>
<tbody>
<tr><td>pvm_addhost</td></tr>
<tr><td>pvm_delhost</td></tr>
<tr><td>pvm_kill</td></tr>
<tr><td>pvm_notify</td></tr>
<tr><td>pvm_recvf+</td></tr>
<tr><td>pvm_sendsig</td></tr>
<tr><td>pvm_startpvmd</td></tr>
<tr><td>pvm_tickle</td></tr>
</tbody>
</table>

Table 1: The functions indicated with a ‘+’ will be implemented in a future release of PVM/PARIX; the other functions are not applicable to an MPP system and will therefore not be implemented. All non-implemented functions return `PvmNotImpl`.
**Process Grids**

Since PVM/PARIX is built on top of PARIX, it also has the same limitations with respect to the grid that can be built, which implies that one can only run PVM programs on an $m \times n$-grid.

**Different Libraries**

The Homogeneous PVM/PARIX distribution comes with a standard PVM3 library, and one compiled with debug information (`libpvm3g.a`). The latter might be useful when you encounter a bug in PVM/PARIX and want to fill in a bug-report form.

6 Performance Results

A ping-pong experiment was performed to measure the communication performance of the Homogeneous PVM implementation. The typical ping-pong experiment used on Parsytec parallel architectures is the SendLink/RecvLink benchmark, which sends packages of size 1, 4, 16, 64, 256, 1K, 4K, 16K, and 64K bytes. Each measurement for a package size is repeated thirty-two times, resulting in a mean value. The time measurements in the ping-pong experiment are performed in the following way:

```c
t1 = time();
send();
receive();
t2 = time();
elapsed_time = (t2 - t1) / 2;
```

The latency reported in Table 2 is the time to send and receive one byte to and from the nearest neighbor in the process grid. The throughput is measured by sending a message of 64K bytes to and from, and dividing the number of bytes by the elapsed time.

Figure 2: Communication performance of PVM versus PARIX on Parsytec GCel.

<table>
<thead>
<tr><th>Machine/OS</th><th>Latency ($\mu$sec)</th><th>Throughput (Kb/sec)</th></tr>
</thead>
<tbody>
<tr><td>GCel/PARIX</td><td>46</td><td>1088</td></tr>
<tr><td>GCel/PVM</td><td>315</td><td>1082</td></tr>
<tr><td>Xplorer/PARIX</td><td>89</td><td>1032</td></tr>
<tr><td>Xplorer/PVM</td><td>152</td><td>1027</td></tr>
</tbody>
</table>

Table 2: Communication latency and throughput.

A Installing Homogeneous PVM/PARIX

**Where to Put PVM**

PVM can be installed either in a user's home directory, or in a system-wide accessible directory to ease sharing between different users.
In any case, there must be a directory named `pvm3` somewhere on the filesystem. The complete path to this directory, including `pvm3`, will be denoted by `pvmdir` in this manual. Apart from this distribution tree, each PVM user has to create a `pvm3` directory in his home directory. This directory will contain his own PVM executables.

**The Distribution**

The Homogeneous PVM/PARIX distribution contains two files:

- `install`
- `HPvmParix-1.0.tar.Z`

The tar-file is organized as follows:

- `pvm3/bin`: Directory containing several example programs
- `pvm3/doc`: Some documentation on this PVM implementation
- `pvm3/examples`: Sources of standard PVM example programs
- `pvm3/include`: Include files for C and FORTRAN programs
- `pvm3/lib`: Libraries for PVM/PARIX, together with supporting tools
- `pvm3/man`: Manual pages for Homogeneous PVM/PARIX

One megabyte of disk space is required for the complete Homogeneous PVM/PARIX installation.

**The Installation**

First, place the install script and the Homogeneous PVM/PARIX distribution somewhere on your system. The install script, with its arguments, takes care of the installation in the proper directory. To actually install the Homogeneous PVM/PARIX distribution, execute the install script:

```
$ ./install <pvmdir-path>
```

where `<pvmdir-path>` is the path where the `pvmdir` should be created. For example, if you want PVM to be installed in `/usr/local`, you should run

```
$ ./install /usr/local
```

which creates `/usr/local/pvm3` and unpacks the distribution into it.

If a different PVM version is already present, the install script will rename some of the existing tools in order to retain compatibility with the other version. You will not notice any difference when using either PVM version: every change is completely transparent.
**NOTE:** If you plan to (re-)install a PVM version other than the heterogeneous PVM/PARIX in the same directory where Homogeneous PVM/PARIX resides, take care of the following.

- Before (re-)installation, make sure that you save the files `pvmgetarch` and `aimk` from `pvmdir/lib`.
- After (re-)installation of the new PVM version, rename the files `pvmgetarch` and `aimk` that come with the new distribution to `pvmgetarch.org` and `aimk.org`.
- Finally, restore the saved (Homogeneous PVM/PARIX) versions of these programs.

This is necessary because the Homogeneous PVM/PARIX versions of these tools are wrappers around the original tools, and thus rely on the originals being present, renamed to `<name>.org`.

**Per User Installation**

In order for a user to use PVM, he must create a `pvm3` directory in his home directory to contain his own PVM binaries. To create this directory, execute the following commands:

```
$ mkdir $HOME/pvm3
$ mkdir $HOME/pvm3/bin
$ mkdir $HOME/pvm3/bin/<ARCHITECTURE>
```

where `<ARCHITECTURE>` is either `PARIXPPC` for the Homogeneous PVM/PARIX for PowerPC systems, or `PARIXT8` for the Homogeneous PVM/PARIX for Transputer T800 systems.

Last, but not least, the path containing the PVM tools (`pvmdir/lib`) should be added to the user's `PATH` (command path) environment variable, and the environment variable `PVM_ROOT` should be set to `pvmdir`.

**Different PVM Versions**

As with standard PVM, `pvmgetarch` is supplied to determine on what system PVM/PARIX is running. This script is a wrapper around the original `pvmgetarch`, which, if present, is renamed to `pvmgetarch.org` for this version.

If a particular computer system (front-end) has shared file systems with other front-end computers, the problem can arise that it is impossible to determine which PVM version has to be chosen. Should it be Homogeneous PVM/PARIX on a PPC-based system, Homogeneous PVM/PARIX on a T800-based system, or the heterogeneous version?
To solve this problem, you can set the environment variable `PVMPARIX` to `PARIXPPC_H`, `PARIXT8_H`, or `NO`. If `PVMPARIX` is set to `PARIXPPC_H` or `PARIXT8_H`, the `pvmgetarch` script selects the appropriate PVM version for PPC systems or T800 systems, respectively. If `PVMPARIX` equals `NO`, control is passed directly to the original `pvmgetarch` script to determine which PVM version you need.
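The dispatch the wrapper performs based on `PVMPARIX` can be sketched roughly as follows; `select_pvm_arch` is a hypothetical name for illustration only, and the exact behaviour of the shipped `pvmgetarch` wrapper may differ:

```shell
# Illustrative sketch of the PVMPARIX-based dispatch described above.
# select_pvm_arch is a hypothetical function, not part of the distribution.
select_pvm_arch() {
    case "${PVMPARIX:-NO}" in
        PARIXPPC_H) echo PARIXPPC ;;     # Homogeneous PVM/PARIX, Power PC
        PARIXT8_H)  echo PARIXT8 ;;      # Homogeneous PVM/PARIX, T800
        *)          ./pvmgetarch.org ;;  # NO or unset: defer to the original
    esac
}

# Example: force the T800 version regardless of the shared file system.
PVMPARIX=PARIXT8_H
select_pvm_arch
```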
Rebuilding Debian using Distributed Computing

Lucas Nussbaum
Université Claude Bernard Lyon 1
RESO team, LIP, ENS Lyon
Email: lucas.nussbaum@ens-lyon.fr

To cite this version: Lucas Nussbaum. Rebuilding Debian using Distributed Computing. Challenges of Large Applications in Distributed Environments (CLADE'2009), Jun 2009, Munich, Germany. 6p. hal-00425611. Submitted to the HAL open access archive on 22 Oct 2009, https://hal.archives-ouvertes.fr/hal-00425611

Abstract—Doing Quality Assurance work on Debian, a Linux distribution with more than 12000 packages, requires an impressive amount of computing power, which is usually not available to its developers. In this article, we report on the development of an infrastructure to run quality-assurance tasks on Debian using the Grid'5000 experimental platform. In particular, we focus on the problem of rebuilding all packages in Debian from source. We describe the details of this task and the infrastructure we developed, with scalability and robustness in mind. The results we obtained are then presented, and we discuss possible improvements and lessons we learnt in the process, which might be useful in the context of other large-scale experiments.

I. INTRODUCTION

The Debian project builds an operating system – Debian GNU/Linux, usually simply called "Debian" – by gathering a very large collection of free software and turning it into packages that can easily be installed by the user. It has been very successful since its creation in 1993, and serves as the basis for other Linux distributions, like Ubuntu. Debian is developed by more than 1000 volunteers, spread across the world and communicating over the Internet. As such, it is often regarded as one of the most important volunteer-based and distributed organizations.

Debian is well renowned for its robustness and stability, and is known as a good choice for a server operating system. This level of quality is mainly achieved by great attention to detail by the developers who maintain its 12000+ source packages. But some Quality Assurance (QA) tasks require computing power in addition to manpower, and, since 2006, we have used distributed computing on a Grid infrastructure to find defects in Debian.

We worked on two classes of problems. First, we focused on testing the installation and removal of packages. Debian has more than 22000 binary packages (source packages are built to generate binary packages, which are what users install). Each package's meta-data can express relationships with other packages (it can depend on another package, suggest the installation of another package, or conflict with another package).
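For illustration, such relationships are declared as fields in a package's control metadata; the package names below are made up, but the field names are the standard ones:

```
Package: foo
Depends: libc6 (>= 2.7)
Suggests: foo-doc
Conflicts: oldfoo
```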
While the installability of a package (whether it can be installed) can be determined statically [1], other problems might arise during installation that are harder to detect without actually installing the package: a package could contain the same file as another package without explicitly conflicting with it, or a script executed after the installation of the package might fail because of a missing dependency, a programming error, or a change in the behaviour of another package since the developer did the initial packaging work. A tool, piuparts [2], is available in Debian to perform tests on the installation, upgrade, and removal of packages.

Running piuparts on all packages is an embarrassingly parallel problem: one could test each of Debian's 22000 packages in parallel, and even the packages that take the longest do not take more than half an hour. Since 2006, we have run several test campaigns using piuparts, and reported about 250 bugs, most of them considered critical. However, the result of those installation tests is relatively stable: while bugs might not be easy to find, new bugs are relatively rare, and running those tests does not need to be done on a frequent basis.

The second class of problems we looked at is more challenging. We examined the buildability of packages (whether packages can be built successfully). Since Debian contains only free software, the source code for each package is available, and for various reasons it is important to ensure that it is possible, from a source package, to build the corresponding binary packages. First, during the lifetime of a package, it might be necessary to change something in the source code – to correct a mistake, like a bug or a security problem. In that case, it will be necessary to rebuild the corresponding binary packages after the change has been made.
Second, for legal reasons: the source code for programs covered by the GNU General Public License must be made available by Debian, and one could argue that shipping source code that does not allow building the corresponding packages would be a license violation.

Debian packages can be built automatically: all source packages provide a simple interface, based on a Makefile named debian/rules, that hides the specifics of each program's build system (use of Automake or CMake, language-specific tools like Python's distutils or Perl's Makefile.PL): each package can be built by calling one of the targets of debian/rules, or by using a wrapper like dpkg-buildpackage, which uses debian/rules itself.

Unfortunately, packages often become impossible to build, for different reasons. A package needed to build another package (called a build-dependency) could be removed from Debian, or modified in a way that makes its reverse dependencies impossible to build: a compiler could become stricter by rejecting previously-accepted constructs, the API of a library could change in an incompatible way, or the parameters of another program could be modified. By rebuilding all packages in Debian, we not only ensure that Debian is self-contained (that all Debian packages can be rebuilt from Debian), we also stress-test the whole toolchain – the packages that are used to build other packages. This second role is at least as important as the first one: most packages in Debian lack a test suite, and using them to rebuild other packages often serves as a kind of automated test suite.

In the remainder of this paper, we report on the execution of rebuilds of the Debian archive using distributed computing, providing feedback on improvements implemented since [3]. In section II, we give some information on our workload.
In section III, we present Grid'5000, the platform that we used to perform those rebuilds, and the specific infrastructure we developed to run those tasks efficiently on Grid'5000. We then present the results we obtained in section IV and discuss possible optimizations in section V, before concluding in section VI.

II. WORKLOAD ANALYSIS

The implementation decisions that we will have to make depend greatly on the workload we would like to process with our application. In this section, we describe the characteristics of the Debian source package set. We use Debian 5.0 'Lenny', released in February 2009, on the i386 architecture, as the basis for our study. Previous releases of Debian do not differ significantly from those results, and it can be expected that future releases will not fundamentally differ either, except by increasing the number of packages.

Debian lenny is composed of 12123 source packages, of which about 12000 can be built on the i386 architecture (Debian supports 12 different architectures, and some packages provide functionality that is specific to certain architectures, due to specific hardware, for example). The total size of the source packages is 16.3 GB (compressed using gzip). Figure 1 shows the distribution of package sizes. A lot of the packages are relatively small (44% smaller than 128 kB, 82% smaller than 1 MB, 99% smaller than 20 MB). However, a few packages are much larger (openoffice.org – 346 MB, nexuiz-data – 377 MB). As one can see in figure 2, the few largest packages are responsible for most of the archive's size.

Building source packages into binary packages requires several steps. First, a clean build environment is needed. It consists of a minimal chroot in which the Debian packages that are always expected to be present when building are installed. This includes the GCC compiler, binutils, and Debian-specific tools.
In our setup, this chroot is stored as a tar archive, taking 73 MB compressed (200 MB uncompressed). Each source package can also specify other packages that must be installed before building. For example, a Fortran program will require the Fortran compiler to be installed, as this package is not expected to be installed by default in the build environment. The installation of those build-dependencies can take a significant amount of time, as some packages require the installation of a lot of them: openoffice.org requires 485 additional build-dependencies, and linphone requires 392 of them. Of the 22311 binary packages in Debian lenny, 5723 are build-dependencies of other packages. Also, those packages have to be fetched from a local mirror before they are installed.

III. SOFTWARE INFRASTRUCTURE

A. Grid'5000

Grid'5000 [4], [5] is an experimental platform for research on large-scale parallel and distributed systems. Grid'5000 is being developed under the INRIA ALADDIN development action, with support from CNRS, RENATER, and several universities as well as other funding bodies. Grid'5000 consists of about 2000 compute nodes, split into a dozen clusters located at 9 sites in France. Those 9 sites are connected by a dedicated 10 Gbps backbone (see figure 3). Grid'5000 aims at providing a reconfigurable, controllable, and monitorable experimental platform. As such, once compute nodes have been reserved, it is possible to deploy one's own work environment using Kadeploy [6]. This allows installing specific software (including the kernel) and getting administrator (root) access on the nodes.

B. Infrastructure for Debian rebuilds

To rebuild all Debian packages efficiently on Grid'5000, we developed our own software infrastructure (figure 4). We had the following goals in mind:

- most of the infrastructure should be deployed dynamically during the rebuilds, using Kadeploy;
- it should be robust.
The rebuilds are supposed to be run unattended, and should not fail;

- it should be scalable. As we will see in section IV, we will be able to run the rebuild on 50 to 100 compute nodes at the same time.

Our infrastructure is composed of two parts: a static part, located at the Grenoble Grid'5000 site, and a dynamic part that can be deployed on any Grid'5000 site, depending on where resources are available. The static part of the infrastructure consists of an NFS server hosting all the necessary data:

- a full Debian mirror internal to Grid'5000;
- the scripts and configuration files, as well as some data files needed by the compute nodes;
- the logs generated by the builds.

An Apache web server is also configured next to the NFS server, and serves the Debian mirror over HTTP. This proved to be more efficient than distributing the packages directly using NFS, and also provides an opportunity for caching, thus reducing the load on the NFS server.

The deployment of the dynamic part of the infrastructure is done in several steps.

1) Nodes are reserved using the OAR batch scheduler;
2) The reserved nodes are deployed using Kadeploy. A standard environment, available on all Grid'5000 clusters, is used. The deployment is managed by Katapult, to allow failed nodes to be re-deployed if necessary. This takes 3 to 5 minutes;
3) From the frontend, a script is executed (over SSH) on one of the deployed nodes (the master node). Basic configuration is done on the node (like the mounting of a shared NFS directory);
4) From the frontend, a script located on the shared NFS directory is executed on the master node to continue the configuration of the nodes;
5) From the frontend, a last script located on the shared NFS directory is executed. This script will control the rest of the operations;
6) The script running on the master node executes the same process on the other nodes: it first copies a script to the nodes, executes it to mount the shared NFS directory, then runs another script to finish the preparation of the nodes;
7) When all the nodes are properly prepared, the master node starts scheduling and executing tasks on them. At the beginning of each build, the chroot is uncompressed from a tar archive, to ensure that the build environment is always clean. The build log is stored locally until the end of the build, and is then copied to the NFS directory;
8) After all the tasks have been executed, all the nodes are given back to the batch scheduler.

It is possible that, at the end of the rebuild, there are no remaining tasks to run on some nodes, which could therefore be freed. But the OAR batch scheduler does not allow releasing some nodes earlier than others, which can lead to a waste of resources in this case.

While NFS is not efficient over high-latency networks, it proved to be an easy way to push configuration files and scripts to the nodes. Also, we made sure not to use the NFS server for performance-critical steps of the process. We also chose to use a standard deployment environment, instead of a customized one. This allows us to use an environment maintained by Grid'5000's system administrators, and available everywhere. After deployment, we install the necessary software packages, like sbuild (the Debian tool used to build packages in a chroot) and approx, a Debian mirror proxy. Installed on each node, approx caches frequently downloaded build-dependencies locally, and alleviates the load on the central Debian mirror.

Finally, our infrastructure has obvious reliability issues: both the master node and the static part of the infrastructure (NFS server, Debian mirror) are single points of failure.
However, due to the length of full Debian rebuilds on Grid'5000 (less than 10 hours in practice), we do not consider this to be an important issue: if a grave problem occurs during a rebuild, it is still possible and relatively cheap to restart the whole rebuild. Regarding compute nodes, the script responsible for running the tasks on them tries to detect problems that might arise during a build. When they occur, the failed build is restarted on another node, and the faulty compute node is removed from the list of nodes used in the rebuild.

IV. RESULTS

Using our architecture, we rebuilt all the packages in Debian lenny using 49 nodes of the azur Grid'5000 cluster in Sophia. Each compute node is a server with 2 Opteron 246 (2.0 GHz) CPUs and 2 GB of RAM. It took a total of 9 hours and 20 minutes, of which 13 minutes were spent deploying the infrastructure. The sum of the build times of all packages (sequential time) is 17 days and 4 hours. The logs generated by all builds use 2.0 GB on the NFS directory.

Figure 5 shows the distribution of the packages' build time. One can see that most packages take a very short time to build – 62% take less than a minute, while 90% take less than 3 minutes. However, a few packages take a lot more time (table I). Those packages are also responsible for the majority of the build time (figure 6): the 5% longest packages account for 50% of the build time. Looking at various system counters during the builds, we could determine that the tasks are both CPU- and I/O-bound. Memory usage generally stays quite low (but may vary greatly between packages). Network does not play an important role: common build-dependencies are cached on the node, and the network is only used for control besides that.

Fig. 5. Distribution of the build time of packages. Most packages are fast to build.

Fig. 6. Share of the build time taken by each package. The few longest packages account for a large part of the archive's build time.

TABLE I

| Package                  | Time     |
|--------------------------|----------|
| openoffice.org           | 7 h 33 m |
| openjdk-6                | 5 h 42 m |
| insighttoolkit           | 5 h 38 m |
| gecode                   | 4 h 51 m |
| latex-cjk-chinese-arphic | 4 h 38 m |
| linux-2.6                | 4 h 33 m |
| gcc-4.3                  | 4 h 21 m |
| gcc-4.2                  | 3 h 38 m |
| installation-guide       | 3 h 28 m |
| qt4-x11                  | 2 h 12 m |

V. OPTIMIZATIONS

Two objectives can be targeted when trying to improve the process:

- Reduce the makespan, possibly increasing the number of machines needed. The main reason for this is that, if possible, we could use many more machines on Grid'5000: it is generally considered less disturbing to use more nodes during a shorter period of time than to use fewer nodes during a longer period. To reduce the makespan, the main problem to address is the build time of the longest packages;
- Reduce the number of machines without increasing the makespan. This requires making the build process more efficient for all packages.

A. Scheduling of the tasks: longest-first

There are huge differences between the build times of the packages. While most packages are extremely fast to build, a few packages take a very long time. To minimize the makespan, it is important to schedule those long packages early in the rebuild process: if we schedule them too late, we might reach a point where all tasks are finished except one, and we are only waiting for that (long) task to finish. A simple optimization is therefore, after we have determined the time taken to build each package, to schedule them starting with the longest packages.
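This longest-first ordering can be sketched with standard tools; the two-column "seconds package" file below is a made-up illustration of measured build times, not the paper's actual data format:

```shell
# Order the build queue longest-first from measured build times.
# times.txt ("seconds package") is a hypothetical example format;
# the durations roughly match table I for the two real packages.
cat > times.txt <<'EOF'
27180 openoffice.org
45 hello
20520 openjdk-6
130 somepkg
EOF
# Numeric reverse sort puts the longest builds at the head of the queue.
sort -rn times.txt | awk '{print $2}' > queue.txt
cat queue.txt
```

The head of `queue.txt` is then dispatched first, so the longest task starts as early as possible.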
Using this scheduling, and the results described in section IV, we can estimate that the optimal scheduling of the rebuild of all packages would take 7 h 33 m (the time taken to build openoffice.org), using 55 compute nodes (this does not include the time needed to configure the environment at the start of the job); with more nodes, some nodes would be idle at the end of the process while we wait for openoffice.org; with fewer nodes, we would still have some tasks to process after openoffice.org is finished.

B. Adding parallel building support to long packages

The makespan is limited by the time taken by the longest package – openoffice.org. An interesting way to reduce its build time is to make use of parallel building (often known as make -j). This consists in running several steps of the build process in parallel to make use of several CPUs, or to continue performing CPU-intensive operations in some threads while other threads are blocked on I/O [7]. Unfortunately, most Debian packages lack support for building using several threads. An interface for that was recently added to Debian's build system, and we worked together with some package maintainers to help them implement it. The results presented in section IV include results obtained with parallel builds for some packages, like openoffice.org (where only a small part of the build process can be done in parallel), linux-2.6 or latex-cjk-chinese-arphic.

C. Reducing the local I/O bottleneck

Building packages is I/O intensive, especially for small packages, where the build time is dominated by the creation of the chroot and the installation of build-dependencies. We investigated ways to alleviate this problem. First, the EXT3 file system used on the compute nodes issues a sync() every 5 seconds by default, to ensure that all data and meta-data are written to disk. We modified that behaviour by re-mounting the file system with the commit=86400 option to delay syncs.
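The remount in question would look roughly like the following (shown as a comment because it needs root; the device and mount point are placeholders, not the authors' actual configuration):

```
# mount -o remount,commit=86400 /dev/sda2 /local
```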
Unfortunately, it seems that sync() system calls are also issued by some applications during the build process. As a consequence, we did not notice any improvement. Since the data written during the build is only used temporarily, we investigated another solution: building in memory, using Linux's tmpfs file system, which stores its content in virtual memory (RAM or swap). As our compute nodes have at least 2 GB of RAM, most packages can be built without ever swapping memory pages to disk.

This approach has some drawbacks. First, we needed to add more swap space to the compute nodes by adding a swap file, and creating such a file takes a long time during node preparation. We chose to create a 32 GB swap file, and this file is not allowed to contain holes – it cannot be a sparse file. Creating and writing a 32 GB file takes 12 minutes on our compute nodes – limited by the disk writing speed, since the file has to be filled with zeroes. It is possible that the new EXT4 file system will solve this problem by implementing the fallocate() system call. Second, some packages failed to build on tmpfs, for various reasons that still need to be investigated. However, this approach is promising: the build time (when comparing only packages that built fine in both configurations) was reduced by 13%. But some packages took more time to build on tmpfs, as seen in figure 7. Also, it seems that this optimization mainly benefits packages that are quick to build, while packages that take a long time to build do not benefit as much.

D. Building several packages concurrently

Since most packages lack support for building using several parallel threads ("make -j"), another solution is to build several packages concurrently on the same compute node: for example, the same compute node would build 4 different packages at once. This makes it possible to reduce the total number of compute nodes used for the rebuild, without increasing
One problem with this approach, however, is that the individual packages take more time to build. While computing power can be shared equally between tasks, the concurrent tasks on the same node will be fighting for the I/O bandwidth. This is not a problem for small packages, but might be a problem for packages that already take a very long time to build: slowing down those packages would result in an increase of the makespan. We mitigated this issue by setting process priorities based on the duration of the build: long builds get a higher priority than short builds. Figure 8 describes the overall build time (number of compute nodes used, multiplied by walltime) when running several concurrent builds on the same compute nodes. Running one build per compute node, 14 days of compute time on Grid’5000 was used (which could translate into using 42 nodes for 8 hours, for example). Running 4 concurrent builds, the total time decreases to about 4 days (12 nodes for 8 hours). However, in practice, the slowdown caused by I/O concurrency is a major problem: even after having added processes priorities based on the duration of the build, the long packages still take more time when they are built concurrently with other packages (figure 9). A solution could be to schedule long packages alone on a compute node, while the shorter packages would be built concurrently with others. This has not been implemented yet. VI. CONCLUSION With this infrastructure, we performed several full rebuilds of Debian during the lenny development cycle, and reported more than 2300 critical bugs on packages that failed to build from source. In addition to that, this work was the basis of a small shift in the Debian development processes: since it was easy to rebuild the Debian archive with a custom setup, we performed some builds with custom environments to evaluate the consequences of proposed changes to build tools. 
In the same spirit, several rebuilds were also performed with newer development versions of base software, like the GCC compiler: rebuilding all packages in Debian with a beta version of GCC allowed us to find several important regressions that were fixed before the final GCC release. We think that there are other opportunities where such environments could be helpful for the free software community.

This work would not have been possible without the flexibility offered by Grid'5000. This application has very specific and demanding requirements, like the fact that a special environment has to be deployed on the nodes, and that root access is required for several steps. Despite being "experimental" in terms of the software used, Grid'5000 proved reliable enough to fully automate the complex processes needed by this work.

REFERENCES
Job Processing with SLURM on SuperMUC-NG

- General
- List of relevant commands
- Queues (SLURM partitions) and their limits
- srun and mpiexec
- salloc / srun for interactive processing
- sbatch Command / #SBATCH option
- Batch Job Examples
- General options applicable for all jobs
- Options for resources and execution
- Input Environment Variables
- Output Environment Variables
- Useful commands
- Guidelines for resource selection
- SLURM Documentation
- List of SLURM Constraints and its Usage

General

The batch system on SuperMUC-NG is the open-source workload manager SLURM (Simple Linux Utility for Resource Management). For details about the SLURM batch system, see [Slurm Workload Manager](#). Submit hosts are usually **login nodes** that permit users to submit and manage batch jobs.

Intel processors on SuperMUC-NG support hyperthreading, which might increase the performance of your application. With hyperthreading, you have to increase the number of MPI tasks per node from 48 to 96 in your job script. Please be aware that with 96 MPI tasks per node, each process gets only half of the memory by default. If you need more memory, you have to specify it in your job script and use the fat nodes (see the example batch scripts).

List of relevant commands

<table>
<thead>
<tr> <th>Command</th> <th>Functionality</th> </tr>
</thead>
<tbody>
<tr> <td>sbatch</td> <td>submit a job script</td> </tr>
<tr> <td>scancel</td> <td>delete or terminate a queued or running job</td> </tr>
<tr> <td>squeue</td> <td>print a table of submitted jobs and their state. Note: non-privileged users can only see their own jobs.</td> </tr>
<tr> <td>salloc</td> <td>create an interactive SLURM shell</td> </tr>
<tr> <td>srun</td> <td>execute the argument command on the resources assigned to a job. Note: must be executed inside an active job (script or interactive environment).
mpiexec is an alternative and is preferred on LRZ systems.</td> </tr>
<tr> <td>sinfo</td> <td>provide an overview of the cluster status</td> </tr>
<tr> <td>scontrol</td> <td>query and modify SLURM state</td> </tr>
</tbody>
</table>

Queues (SLURM partitions) and their limits

- Batch queues are called partitions in SLURM.
- The allocation granularity is multiples of one node (only complete nodes are allocated and accounted for).
- Scheduling and prioritization are based on a multifactor scheme including wait time, job size, partition, and required quality of service.

The following partitions are available. Check with `sinfo` for more details and special partitions:

<table>
<thead>
<tr> <th>partition</th> <th>min-max islands</th> <th>min-max nodes per job</th> <th>max usable memory</th> <th>cores per node</th> <th>max run time (hours)</th> <th>max running jobs per user</th> </tr>
</thead>
<tbody>
<tr> <td>test (also used for interactive access with salloc)</td> <td>1</td> <td>1-16</td> <td>90 GB</td> <td>48</td> <td>0.5</td> <td>1</td> </tr>
<tr> <td>micro</td> <td>1</td> <td>1-16</td> <td>90 GB</td> <td>48</td> <td>48</td> <td>20</td> </tr>
<tr> <td>general</td> <td>1</td> <td>17-792</td> <td>90 GB</td> <td>48</td> <td>48</td> <td>20</td> </tr>
</tbody>
</table>

srun and mpiexec

With the SLURM srun command, users can spawn any kind of application, process or task inside a job allocation, or directly start executing a parallel job (and indirectly ask SLURM to create the appropriate allocation). It can be a shell command, any single- or multi-threaded executable in binary or script format, an MPI application, or a hybrid application with MPI and OpenMP. When no allocation options are defined with the srun command, the options from sbatch or salloc are inherited.

Note: srun at LRZ is defined as the alias srun='I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so /usr/bin/srun'.
Since aliases are not inherited, the alias is only available in the login shell or in the initial batch script; everywhere else it falls back to /usr/bin/srun. Use the full syntax in these cases.

salloc / srun for interactive processing

salloc is used to allocate nodes for interactive processing. The options for resource specification in salloc/srun/sbatch are the same. srun can be used instead of mpiexec. srun or mpiexec execute on the nodes previously allocated by salloc. There is no advantage in using salloc over sbatch --partition=test in terms of wait time.

```
$ srun --nodes=2 --ntasks=6 --partition=test hostname
i01r04c06s08
i01r04c06s08
i01r04c06s08
i01r04c06s08
i01r04c06s10
i01r04c06s10

$ salloc --nodes=2 --ntasks=5 --partition=test
salloc: Granted job allocation 45932

# will be executed on the login node
$ hostname
login01e

# will be executed on the allocated nodes
$ mpiexec -n 5 hostname
i01r04c06s08
i01r04c06s08
i01r04c06s08
i01r04c06s10
i01r04c06s10

$ srun -n 5 hostname
i01r04c06s08
i01r04c06s08
i01r04c06s08
i01r04c06s10
i01r04c06s10
```

sbatch Command / #SBATCH option

Batch job options and resources can be given as command-line switches to sbatch (in which case they override script-provided values), or they can be embedded into a SLURM job script as a comment line of the form #SBATCH <option>.

Batch Job Examples

General options applicable for all jobs

```
#!/bin/bash
# Job Name and Files (also --job-name)
#SBATCH --job-name
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
# Initial working directory (also --chdir):
#SBATCH --chdir
# Notification and type
#SBATCH --mail-type=END
#SBATCH --mail-user=<youremail>
# Wall clock limit:
#SBATCH --time=24:00:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --account=<projectID>
#SBATCH --constraint="scratch&work"
```

Hints and Explanations:

Replacement patterns in filenames:
- %J: jobid.stepid of the running job (e.g. "128.0")
- %j: jobid of the running job
- %s: stepid of the running job
- %t: task identifier (rank) relative to the current job
- %u: user name
- %x: job name
- %a: job array ID

Notification types: NONE, BEGIN, END, FAIL, REQUEUE

requeue/no-requeue: whether the job should be eligible for requeueing. When a job is requeued, the batch script is initiated from its beginning. no-requeue specifies that the batch job should never be requeued under any circumstances.

environment: do not export the variables of the submitting shell into the job (exporting them would make debugging of errors nearly impossible for LRZ).

account: resources used by this job are subtracted from the budget of this project. The billing unit is core-hours. Make sure that you use the right project.

constraint (optional): nodes can have features, and users can specify which of these features are required by their job using the constraint option. Only nodes having features matching the job constraints will be used to satisfy the request. Multiple constraints may be specified with AND (&), OR (|), matching OR, resource counts, etc. The availability of specific file systems can be specified as a constraint, giving LRZ the opportunity to start jobs which do not need all file systems. See:

- List of SLURM Constraints and its Usage

Options for resources and execution

Resource Specifications:

nodes=<minnodes[-maxnodes]>: Request that a minimum of minnodes nodes be allocated to this job. A maximum node count may also be specified with maxnodes. If only one number is specified, it is used as both the minimum and maximum node count. The default behavior is to allocate enough nodes to satisfy the requirements of the ntasks and cpus-per-task options.

ntasks: The default is one task per node, but note that the cpus-per-task option will change this default.

ntasks-per-node: Request that ntasks be invoked on each node. If used with the ntasks option, the ntasks option takes precedence and ntasks-per-node is treated as a maximum count of tasks per node.
Meant to be used with the nodes option.

ntasks-per-core: Request that at most ntasks be invoked on each core.

cpus-per-task: Without this option, the controller will just try to allocate one core per task.

switches=<number>@[waittime hh:mm:ss]: Maximum count of switches desired for the job allocation and optionally the maximum time to wait for that number of switches.

array: Submit a job array, i.e. multiple jobs to be executed with identical parameters.

mpiexec: In most cases mpiexec can be used without specifying the number of tasks, because this is inherited from the sbatch command. Slurm output variables can also be used, e.g. `mpiexec -n $SLURM_NTASKS ./myprog`. If SLURM can detect the number of tasks from its settings, it is sufficient to use mpiexec without further parameters, e.g. `mpiexec ./myprog`.

Execution Specification: By default, the system may dynamically change the clock frequency of CPUs during the run time of a job to optimise energy consumption (for more details, see Energy Aware Runtime (EAR)). This makes profiling or benchmark measurements difficult and unstable. Users can enforce a fixed default frequency by switching EAR off:

```
#SBATCH --ear=off
```

Hybrid MPI/OpenMP job on the general partition (128 nodes, 8 tasks per node, 6 threads per task):

```
#... (general part)
#SBATCH --partition=general
# Number of nodes and MPI tasks per node:
#SBATCH --nodes=128
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=6
# Run the program:
export OMP_NUM_THREADS=6
# For pinning threads correctly:
export OMP_PLACES=cores
mpiexec -n 1024 ./myprog
```

The same hybrid job with hyperthreading (12 threads per task; needs a specific MPI module):

```
#... (general part)
#SBATCH --partition=general
# Number of nodes and MPI tasks per node:
#SBATCH --nodes=128
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=12
# Needs specific MPI module
module switch mpi.intel mpi.intel/2019
# Run the program:
export OMP_NUM_THREADS=12
# For pinning threads correctly:
export OMP_PLACES=threads
mpiexec -n 1024 ./myprog
```

MPI-only job on the fat partition:

```
#... (general part)
#SBATCH --partition=fat
# Number of nodes and MPI tasks per node:
#SBATCH --nodes=64
#SBATCH --ntasks-per-node=48
#SBATCH --cpus-per-task=1
# Run the program:
mpiexec -n 3072 ./myprog
```

OpenMP-only job on a single node:

```
#... (general part)
#SBATCH --partition=general
# Number of nodes and MPI tasks per node:
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=8
export OMP_NUM_THREADS=8
# For pinning threads:
export OMP_PLACES=cores
# Spreading for better memory balance:
# thread 0 goes to core 0, thread 1 goes to core 6, ...
export OMP_PROC_BIND=spread
mpiexec -n 1 ./myprog
```

Large job with a limit on the number of islands (switches):

```
#SBATCH --partition=fat
# Max number of islands and max wait time
#SBATCH --switches=2@24:00
#SBATCH --nodes=1024
#SBATCH --ntasks-per-node=48
#SBATCH --cpus-per-task=1
mpiexec -n 49152 ./myprog
```

Job array:

```
#SBATCH --partition=general
#SBATCH --nodes=10
#SBATCH --ntasks=480
#SBATCH --cpus-per-task=1
#SBATCH --array=1-10
mpiexec -n 10 ./myprog <in.$SLURM_ARRAY_TASK_ID
```

Chain of batch jobs with dependencies:

```
#!/bin/bash
# Chain of batch jobs with dependencies
NR_OF_JOBS=6
JOB_SCRIPT=./my_batch_script
echo "Submitting chain of $NR_OF_JOBS jobs for batch script $JOB_SCRIPT"
# submit and get JOBID
JOBID=$(sbatch $JOB_SCRIPT 2>&1 | awk '{print $(NF)}')
I=1
while [ $I -lt $NR_OF_JOBS ]
do
  JOBID=$(sbatch --dependency=afterok:$JOBID $JOB_SCRIPT 2>&1 | awk '{print $(NF)}')
  I=$(( $I + 1 ))
done
```

dependency=<dependency_list>: Defers the start of this job until the specified dependencies have been satisfied, where <dependency_list> is of the form:

- after:job_id[:jobid...] the job can begin execution after the specified jobs have begun execution.
- afterany:job_id[:jobid...] the job can begin execution after the specified jobs have terminated.
- afternotok:job_id[:jobid...] the job can begin execution after the specified jobs have terminated in some failed state (non-zero exit code, node failure, timed out, etc.).
- afterok:job_id[:jobid...]
the job can begin execution after the specified jobs have successfully executed (ran to completion with an exit code of zero).

Input Environment Variables

Upon startup, sbatch will read and handle the options set in the following environment variables. Note that environment variables will override any options set in a batch script, and command-line options will override any environment variables. Some of these may be set by you in $HOME/.profile:

<table>
<thead>
<tr> <th>Variable</th> <th>Option</th> </tr>
</thead>
<tbody>
<tr> <td>SBATCH_ACCOUNT</td> <td>--account</td> </tr>
<tr> <td>SBATCH_JOB_NAME</td> <td>--job-name</td> </tr>
<tr> <td>SBATCH_REQUEUE</td> <td>--requeue</td> </tr>
<tr> <td>SBATCH_NOREQUEUE</td> <td>--no-requeue</td> </tr>
</tbody>
</table>

Output Environment Variables

The Slurm controller sets the following variables in the environment of the batch script:

<table>
<thead>
<tr> <th>Variable</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>SLURM_JOB_ACCOUNT</td> <td>Account name associated with the job allocation</td> </tr>
</tbody>
</table>

Useful commands

Show the estimated start time of a job: `squeue --start [-u <userID>]`

Guidelines for resource selection

Processing Mode

- Jobs that only use one or at most a few hardware cores perform serial processing and are not supported on SuperMUC-NG. Use the SuperMUC-Cloud for such purposes.
- Multiple independent tasks can be bundled into one job, using one or more nodes.

Run time limits

- Please note that all job classes impose a maximum run time limit. It can be adjusted downward for any individual job. Since the scheduler uses a backfill algorithm, the more realistic the runtime limit you specify, the better the throughput your job may achieve.

Islands/Switches

- When a tree topology is used, this defines the maximum count of switches desired for the job allocation and optionally the maximum time to wait for that number of switches.
If Slurm finds an allocation containing more switches than the count specified, the job remains pending until it either finds an allocation with the desired (lower) switch count or the time limit expires. If there is no switch count limit, there is no delay in starting the job. This trades off better performance against shorter wait time in the queue.

Memory Requirements

- The total memory available in user space for the set of nodes requested by the job must not be exceeded.
- The memory used on each individual node must not be exceeded by all tasks run on that node.
- Applications exist for which the memory usage is asymmetric. In this case it may become necessary to work with a variable number of tasks per node. One relevant scenario is a master-worker scheme where the master may need an order of magnitude more memory and therefore requires a node of its own, while workers can share a node. LRZ provides the "mixed" partition for using thin and fat nodes concurrently.

Disk and I/O Requirements

- The disk and I/O requirements are not controlled by the batch scheduling system but rely on parallel shared file systems, which provide system-global services with respect to bandwidth; this means that the total I/O bandwidth is shared between all users. The consequence is that all I/O may be significantly slowed down if heavily used by multiple users at the same time, or even, for large-scale parallel jobs, by a single user. At present, LRZ cannot make any quality-of-service assurance for I/O bandwidth.
- The appropriate usage of the parallel file systems is essential.
- Please consult File Systems of SuperMUC-NG for more detailed technical information.

Licences

- Some jobs may make use of licensed software, either from the LRZ software application stack, or software installed in the user's HOME directory.
In many cases, the software needs to access a license server because there are limits on how many instances of the software may run and on who may access it at all.
- There is no connection from SuperMUC-NG to the outside. Check with LRZ if you need such licenses.
- LRZ is currently not able to manage license contingents. The reason is that a significant additional effort would be required, not only for a suitable configuration of SLURM, but also for how the license servers are managed. This implies that a job will fail if the usage limit of a licensed software product is exceeded when the job starts.

Conversion of scripts from LoadLeveler and other Workload Managers

- See: List of the most common commands, environment variables, and job specification options used by the major workload management systems.

SLURM Documentation

- SLURM Workload Manager at LRZ
- Command/option Summary (two pages)
- Documentation for SLURM at SchedMD
- The manual pages slurm(1), sinfo(1), squeue(1), scontrol(1), scancel(1), sview(1)
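The chain-submission script above works by parsing the job ID out of sbatch's "Submitted batch job <id>" message with awk. As a minimal sketch, that parsing step can be exercised without SLURM by standing in a mock sbatch shell function; the function body and the job ID 4242 are illustrative, not part of the LRZ setup:

```shell
#!/bin/sh
# Stand-in for sbatch: prints the same message format the real command uses.
# (Illustrative only; on SuperMUC-NG the real sbatch is on the PATH.)
sbatch() {
  echo "Submitted batch job 4242"
}

# The extraction used in the chain script: the job ID is the last field.
JOBID=$(sbatch ./my_batch_script 2>&1 | awk '{print $(NF)}')
echo "next job would depend on job $JOBID"
# A real chain would now run: sbatch --dependency=afterok:$JOBID ./my_batch_script
```

This is why the chain script uses `$(NF)` rather than a fixed field number: it keeps working as long as the job ID is the last token of sbatch's confirmation line.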
Application of ColdFusion technology in large educational services

Marcin Albiniak*

Department of Computer Science, Vetters’ Schools of Economics, Bernardyńska 14, 20-109 Lublin, Poland

Abstract

The development of Internet technologies has created new possibilities and prospects in education. With the idea of a large educational service in secondary schools there arose the necessity of choosing the most suitable Internet technology. The analysis of various technologies resulted in the choice of ColdFusion, which proved to be the best for the realization of this project. ColdFusion technology displays several characteristics: independence from the widely used operating system platforms, the possibility of implementing many database standards, and the combination of many programming standards for Internet services. ColdFusion is supported by the CFML language, which permits quick generation of Java bytecode, implementation of XML structures, and organization of large structures of Internet services.

1. Introduction

The new project named “Virtual School” is going to be realized in the ColdFusion architecture. The project consists of six basic parts: library, forum, knowledge tester, virtual lessons, certified courses and teachers’ thematic Web sites. The aim of the project is to create additional possibilities in education. Such elements as virtual lessons, the tester and certified courses will introduce interactive forms of teaching. As a result, a widely accessible service with e-learning possibilities will be created. Pupils and teachers will be offered the opportunity of exchanging experiences, which is an important element of virtual education. Moreover, the possibility of certifying skills will be created for teachers as well as pupils. Besides, the project will offer handicapped people the opportunity of individual education and preparation for the secondary-school certificate.
It will also introduce new standards in education, broadening the methods of teaching compulsory subjects in secondary schools. The first secondary-school curriculum to be realized by the project is “information management”. After that, other subjects will be introduced.

* E-mail address: albim@interia.pl

The beginning of the work on the project raised the question of the most suitable technology to realize it. After analyzing various server technologies, the Macromedia technology with ColdFusion Server 6.1 MX was chosen.

2. The characteristics of ColdFusion Server 6.1 MX

ColdFusion Server MX 6.1, whose code name is ‘Red Sky’, was introduced by Macromedia in 2003. It is a new product which derives from the previous version 4.5.1. There are many changes in version 6.1 of ColdFusion MX in comparison with the basic version. The new ColdFusion is faster and more effective, and it uses the reliable and tested J2EE technology. One of the changes is the new installer, in which the process of upgrading applications written for versions 5 and 4.5 was improved; it is therefore more readable than the previous one. Moreover, support for new operating systems was added: Windows .Net 2003 (IIS 6), Red Hat Linux 8 and 9, SuSE 8, Solaris 9, AIX 4.3.3 and 5.1, and HP-UX 11i [1].

The next significant change in version 6.1 is the new compiler. The basic MX version receives a document and compiles it into Java code equivalent to the original CFML code. This code is then translated into Java bytecode and executed in the JVM. The process is repeated whenever a change is introduced in a document, and it is easy to notice that it is very time-consuming. Subsequent reads of the document caused the “class” file to be read from the server cache. Version 6.1, however, compiles CFML code directly to Java bytecode, which considerably accelerates the processing of ColdFusion documents.
It also means that ColdFusion MX 6.1 does not have to record “class” files in the cfclasses catalogue: the whole process of compilation and processing of the code can be held directly in memory. Besides, version 6.1 serves both types of processing, with recording of “class” files on the hard disc and without recording them. As a result, acceleration of the first document compilation is achieved. JVM 1.4 was exchanged for version 1.4.2.

The new server also introduces an improved version of the e-mail service. ColdFusion MX had single-threaded POP3 and SMTP services; ColdFusion MX 6.1, however, introduces multi-threaded service of these protocols. The CFMX 6.1 administrative panel enables manual configuration of the maximum number of threads. Tests proved that version 6.1 is able to send even one million letters an hour [1,2]. Another element which makes the e-mail service more effective is the CFML language. The CFMAIL markup became enriched by two new attributes, USERNAME and PASSWORD, which enable the service of SMTP servers that demand logging in. The next large convenience in the e-mail service is the possibility of defining several SMTP servers (also known as ‘server fail-over’): if a server is inaccessible at the time of sending the mail, ColdFusion tries to send letters through a different server. Another improvement is the addition of the CFMAILPART markup, which makes it possible to separate textual content from HTML in one letter. This improvement concerns also the CFPOP markup, which is able to serve e-mail of types text/plain and text/html.

The service of the HTTP protocol was improved: ColdFusion MX 6.1 serves version 1.1 of this protocol (GET, POST, HEAD, PUT, DELETE, TRACE, OPTIONS). The new version also contains two very desirable improvements in the service of CFCs (ColdFusion components): the possibility of defining the special range “super” inside a CFC, which has access to hidden methods, and safe placing of CFCs inside different ranges; a CFC has access to all ranges.
The new ColdFusion version was enriched by the Wrap() function, which gives the possibility of breaking lines in a text. This function is useful when using the CFMAIL markup. Many new functions have been added to the COM service, the efficiency of the CFCHART markup has been improved, improvements to the Flash Remoting engine have been made, many SOAP solutions have been improved, and the co-operation with .Net Web Services has been extended [1,2].

The next facility is the introduction of XML language service in ColdFusion. XML can now be used like other database sources, XSL can be used in document transformations, and automatic serialization of data to XML can be used. Moreover, new drivers were added: DataDirect JDBC, DataDirect SequeLink 5.3 ODBC Agent, ODBC Server. Also, the JDBC drivers were modernized. The new ColdFusion contains J-Integra 1.5.3 [3].

3. ColdFusion in UNIX and Windows systems

ColdFusion Server is not adapted to all of the differences between the Windows and Unix architectures. The first of the differences is the fact that Unix distinguishes the case of characters in file names: a file named index.cfml is different from an Index.cfml file, while both names denote the same file in the Windows system. The next difference between the systems is the way of mapping paths to files: the Windows system uses the ‘\’ sign in paths, while Unix uses the opposite ‘/’ sign. The lack of drive letters in the Unix system is the next difference. ColdFusion in Unix is installed in the path /opt/coldfusion; in the Windows system, in the path C:\CFusionMX [4].

If an application which has to act independently of the operating system is required, an application environment in which the differences between the systems are considered should be created.

Example. Let us create a file Application.cfm (the name of this file has to be written exactly as given).
Inside this file, application variables will be defined which check the operating system on which the application is started and, for example, qualify the way of path mapping. The list of Server variables and their possible values is introduced below [1]:

- **Server.ColdFusion.ProductName** stores the name of the product, for example ColdFusion Server,
- **Server.ColdFusion.ProductVersion** stores the version number of the ColdFusion server, for example 6.0.0.58500,
- **Server.ColdFusion.ProductLevel** returns information about the edition of the server which is installed, for example Professional,
- **Server.OS.Name** stores information about the operating system on which the server is installed, for example Windows NT,
- **Server.OS.Version** stores information about the version of the operating system on which the server is working, for example 4.0,
- **Server.OS.BuildNumber** returns information about the build number of the operating system, for example 1381 or Generic.

4. Advantages of ColdFusion technology

The first advantage of ColdFusion technology is the fact that ColdFusion Server 6.1 MX increases the effectiveness of applications and decreases the load on databases by reusing database connections in the pauses between Web page requests. An important factor connected with database service is the acceleration of query processing with a simultaneous reduction of server load by reusing query results from the database. Also, considering servers acting in clusters, ColdFusion technology permits the distribution of the application execution load between servers in a cluster by the use of built-in load-balancing functions [5]. The integration with hardware solutions for balancing server load is also used with the purpose of obtaining the highest efficiency and availability.
High availability of an Internet service is also possible thanks to the automatic failover function, which restarts the application on the next server of the cluster. Moreover, ColdFusion Server performance is also assured on the client side. Session data are kept in an available database repository, or in cookies stored on the user's computer, to assure the continuity of a session when the application is started on another server of the cluster. ColdFusion Server avoids reloading unmodified web pages. The next advantage is that the efficiency of an Internet service has been improved by better methods of managing the pages kept in the browser's cache. Performance is further increased, with a simultaneous reduction of server load, by the repeated use of generated pages held in cache memory, so that changes to dynamically generated pages are stored only rarely. In addition, dynamic optimization of page code has been introduced in version 6.1 of ColdFusion Server: the page size is reduced, and the speed of opening pages in the browser is increased, by removing the empty lines left over after code editing. Another important feature which determines the working speed of this technology is the maximization of application responsiveness thanks to the delivery of pages while they are still being generated, using the script tools for buffering and supervising their transmission [4]. Finally, ColdFusion Server fully supports the CORBA and SOAP standards connected with the programming of distributed applications. 5. Implementation of databases in the CFML language in the ColdFusion technology The CFML language is available in two forms: basic CFML and the CFScript scripting language. The CFML language is a markup language using the compiler of the Java programming language which is implemented on the ColdFusion server.
It possesses all the features of an object-oriented programming language and is independent of the operating system platform. An important feature is that the code is compiled only on the server side (server-side applications). The CFScript language belongs to the same group of languages as JavaScript or ActionScript, which derive from the ECMAScript language. CFScript was introduced to ColdFusion in order to simplify the recording of long operations on variables and the creation of user-defined functions. The ColdFusion technology has a built-in implementation of database services in various standards. However, the choice of the proper implementation requires considering the operating-system platform on which the CF server is installed. Microsoft Access data sources are not supported by ColdFusion installed on Unix, whereas SQL Server is not supported by ColdFusion versions earlier than 4.5.1. Additionally, in the Unix system the following options are not available: - Crystal Reports (cfreport) – COM/DCOM, - Advanced Security (versions 4.5.x and 5), - Verity vspider (versions 5 and MX), - the indexing of Adobe Acrobat files (versions 4.5.1–MX), - the "cfgraph" graphs, unavailable in the HP-UX systems (version 5), - custom tags, which have to be compiled with the GCC, Sun C++ or Unix C++ compiler. The XML language is treated in a special way by the CF technology. An XML object is treated by ColdFusion similarly to other data types, for example structures. This makes it possible to search an XML object with the StructFind() and StructKeyExists() functions. In addition, the functions whose main purpose is handling large arrays can be used to analyze an XML object. Such an approach creates completely new possibilities.
The parsing of XML documents in ColdFusion can be illustrated as follows. Let us take an example XML file named uczen.xml: ```xml <?xml version="1.0" encoding="UTF-8"?> <Szkola> <WSzkola> <uczniowie typ="Liceum Profilowane"> <osoba id="1">Jan Kowalski</osoba> <osoba id="2">Adam Nowak</osoba> </uczniowie> <uczniowie typ="Liceum Ogólnokształcące"> <osoba id="3">Marcin Wiśniewski</osoba> <osoba id="4">Alina Nowacka</osoba> </uczniowie> <uczniowie typ="Technikum"> <osoba id="5">Anna Potocka</osoba> </uczniowie> <uczniowie typ="Policealne Studium Zawodowe"> <osoba id="6">Karolina Przybyszewska</osoba> </uczniowie> </WSzkola> </Szkola> ``` The basic parsing of an XML document with the CFML language can be done in the following way. The <cfset> tag assigns the path of the uczen.xml file (01). Next, the XML file is read as a string into the KodXML variable (02). A very important step is the use of the <cfset> tag to parse the XML document into an XML object (03). The following step is the creation of the object which contains all the XML data (04). In order to get the name of the school, the name of the tag must be retrieved (05). If the number of pupils in the Virtual School is to be checked, the following procedure must be used (06): 01 <cfset plikXML = ExpandPath("uczen.xml")> 02 <cffile action="read" file="#plikXML#" variable="KodXML" charset="UTF-8"> 03 <cfset DaneXML = XmlParse(KodXML)> 04 <cfset SzkołaDane = DaneXML.XmlRoot> 05 <cfset SzkołaNazwa = SzkołaDane.WSzkola.XmlName> 06 <cfset IloscUczniow = ArrayLen(SzkołaDane.WSzkola.XmlChildren)> The data from the uczen.xml file are then written to the browser window: 07 <cfoutput> 08 Nazwa Szkoły : <strong>#SzkołaNazwa#</strong>. 09 <br> 10 Szkoła zapewnia <strong>#IloscUczniow#</strong> miejsca w Wirtualnej Szkole.
11 </cfoutput> 12 <br><br> 13 <strong>Typy szkół:</strong><br> 14 <cfoutput> 15 <cfloop from="1" to="#IloscUczniow#" index="a"> 16 <cfset element = SzkołaDane.WSzkola.XmlChildren[a]> 17 <cfset stanowisko = element.XmlAttributes["typ"]> 18 Uczeń numer #a# : #stanowisko# <br> 19 </cfloop> 20 </cfoutput> 21 <br><br> 22 <strong>Który uczeń z jakiej szkoły?</strong><br> 23 <br><br> 24 <cfoutput> 25 <cfloop from="1" to="#IloscUczniow#" index="b"> 26 <cfset element = SzkołaDane.WSzkola.XmlChildren[b]> 27 <cfset iloscOsob = ArrayLen(element.XmlChildren)> 28 <cfset stanowisko = element.XmlAttributes.typ> 29 Ze szkoły "#stanowisko#" są #iloscOsob# osoby:<br> 30 <cfloop from="1" to="#iloscOsob#" index="c"> 31 <cfset kursant = element.XmlChildren[c]> 32 - #kursant.XmlText#<br> 33 </cfloop> 34 <br> 35 </cfloop> 36 </cfoutput> As a result, the information about the pupils of the "Virtual School", retrieved from the XML document, is shown on the service page. XML is an extremely effective tool which permits fast access to data and quick display in the Internet browser. The built-in functions of the CFML language permit internal processing of XML objects. 6. Co-operation of ColdFusion Server with the Macromedia Studio MX 2004 architecture ColdFusion Server implements all the Macromedia technologies realized in the Macromedia Studio MX 2004 software. The best way to create applications in various languages (Java, PHP, CFML, etc.) is to use the Macromedia Dreamweaver editor, which can manage large distributed projects and offers effective tools for optimizing application source code. Moreover, the Dreamweaver editor supports the CFML language and permits the remote testing of a web service on the server. Another application which gives wide possibilities is Flash. It serves to create the animation and graphics of advanced web services. In special uses of the Flash application, physical and mathematical processes can be modeled.
The animation is written in the object-oriented ActionScript language. Physical processes are modeled in accordance with the laws of physics, for example the mechanics of motion. Mathematical processes can be shown in accordance with mathematical formulas and functions. An example of physical modeling is the model of a cyclone imitating a whirling movement (Fig. 1a). An example of mathematical modeling is the fractal dancer (Fig. 1b). Modeling in ActionScript is an element which can be used in an interesting way during virtual lessons [6]. The analysis of the Macromedia software shows that it provides both the technology and the tools which permit the quick creation of projects as well as their optimization [6]. Fig. 1. The animation created in the Macromedia Flash application: model of a whirl – Cyclone (a) and a fractal model – Fractal Dancer (b) 7. Conclusions The "Virtual School" project realized in the ColdFusion technology opens wide research areas: problems of databases, software engineering and e-learning. The analysis of the ColdFusion technology proves that it is a very effective technology which combines the power of the Java language with the possibilities of databases, especially the built-in support of the XML language. An important feature of this type of solution is the high efficiency of Internet applications and a very high level of their optimization. Therefore, the ColdFusion technology can be used to build a stable, safe and effective educational service. The optimization of services based on the CF technology is very advanced, which assures the fast and effective work of a web application. References
PyGGI 2.0: Language Independent Genetic Improvement Framework

Gabin An, KAIST, Daejeon, Korea, agb94@kaist.ac.kr
Aymeric Blot, University College London, London, UK, a.blot@cs.ucl.ac.uk
Justyna Petke, University College London, London, UK, j.petke@ucl.ac.uk
Shin Yoo, KAIST, Daejeon, Korea, shin.yoo@kaist.ac.kr

ABSTRACT

PyGGI is a research tool for Genetic Improvement (GI) that is designed to be versatile and easy to use. We present version 2.0 of PyGGI, the main feature of which is an XML-based intermediate program representation. It allows users to easily define GI operators and algorithms that can be reused with multiple target languages. Using the new version of PyGGI, we present two case studies. First, we conduct an Automated Program Repair (APR) experiment with the QuixBugs benchmark, one that contains defective programs in both Python and Java. Second, we replicate an existing work on runtime improvement through program specialisation for the MiniSAT satisfiability solver. PyGGI 2.0 was able to generate a patch for a bug not previously fixed by any APR tool. It was also able to achieve 14% runtime improvement in the case of MiniSAT. The presented results show the applicability and the expressiveness of the new version of PyGGI. A video of the tool demo is at: https://youtu.be/PxRUdlRDS40.

1 INTRODUCTION

Genetic Improvement (GI) uses an automated search to find improved versions of existing software [18]. It has already led to significant breakthroughs with GI-improved code incorporated into production [12, 14]. For functional property improvement, such as correctness, Automated Program Repair (APR) techniques based on the GI paradigm have made significant advances during the last decade [9, 22, 23, 26]. For non-functional property improvement, topics such as execution speed improvement [13], automated problem specialisation [19], and energy consumption [7] have been studied, to name a few.
Many of the existing GI techniques involve making modifications to the program source code and observing their effects on properties under observation, such as test execution results, execution time, or power consumption. This, in turn, requires effective and efficient ways to define modifications, i.e., GI operators, with respect to specific program representations. A wide range of approaches exist in the literature, ranging from line-level modifications [2], BNF grammar-like modifications [13], C Intermediate Language (CIL) based Abstract Syntax Tree (AST) modifications [22], and a custom Java parser based AST modifications [24]. Most of these are coupled with a single target language, such as C via CIL [22] or Java via JavaParser [24], as modifications have to be defined syntactically. The grammar-based approach of Langdon and Harman [13] captures syntax information by translating the program into a specific notation, on which GI operates: modifications made to the program representation become source code modifications. While this approach is theoretically language independent, Langdon and Harman’s tool only supports C and C++ programs, and the framework would require internal code changes and a dedicated translation tool to apply it to other programming languages. PyGGI has been originally introduced as an easy to use GI framework that is written in, as well as targets, Python [1, 2]. The initial release supported both line-level and AST-level modifications such as swap, insertion, and deletion. The choice of Python as the implementation language was a conscious one. The dynamic typing and interpreted runtime make it well-suited for fast prototyping. The choice of Python as the target language, however, was partly forced upon by the limited range of parsers for other languages implemented in Python (Python as the target language was supported by the use of internal ast module). 
This paper introduces version 2.0 of PyGGI, which supports a wider range of target languages, such as Java, C/C++, and C#, via the use of an XML-based representation of program source code. In our case studies, we utilise the srcML parser. The tree representation of srcML has been used to perform various program analysis tasks [3–5, 10]. By using srcML as an intermediate representation, users of PyGGI can easily implement GI techniques for multiple languages, without having to deal with multiple parsers. We show the capabilities of version 2.0 of PyGGI with two case studies. The first one is an APR experiment using QuixBugs [15, 25] that contains 40 defective programs translated to both Java and Python. We show that PyGGI can be used to write a single APR algorithm that works for both languages. The second one is a replication of MiniSAT program specialisation [20] (the original work used line-level modification). We show that PyGGI is capable of finding similar improvements. We believe that PyGGI 2.0 will contribute towards faster uptake and popularisation of GI techniques. With the new XML engine, the framework allows for quick experimentation among multiple programming languages. PyGGI 2.0 is publicly available at https://github.com/coinse/pyggi.

2 DESIGN OF PyGGI 2.0

With this new version, PyGGI focuses on flexibility, versatility and expressiveness. Its core structure has been upgraded, most of its components being extracted and generalised, in order to further support future extensions and variations for particular applications. In addition, PyGGI 2.0 now provides support for XML files as a way to handle multiple programming languages. In this section, we first discuss the architecture of PyGGI 2.0, then introduce its notion of engines, before finally describing how XML is used as an intermediary source code representation.
2.1 From PyGGI 1.1 to PyGGI 2.0 The initial version of PyGGI [1] only targeted language-agnostic source code lexical modifications, i.e., it only considered mutation of full raw lines of code. PyGGI 1.1 [2] introduced support for the second type of mutations, targeting Python lines of code, thus enabling an empirical study comparing lexical and syntactic mutations. However, PyGGI 1.1 was built directly on top of the first, purposely very simple and straightforward, version of PyGGI. Granularity level was also strongly tied to the choice of the specific parser used. Consequently, its codebase was monolithic, with intertwined components sharing multiple responsibilities, and overall not adapted to further extensions. If PyGGI 1.1 was an easy gateway for practitioners to try and use GI, PyGGI 2.0 aims to also provide researchers with a cleaner and more robust environment to try out new ideas, implement new functionalities, and perform experiments. In particular, GI components are generalised and abstracted so that concepts can be more easily compared over multiple types of granularity levels or types of source code targeted. While PyGGI 1.1 implementation was contained within a single Python module —pyggi— PyGGI 2.0 makes use of Python submodules to further structure its codebase, described hereafter: - **pyggi/base** is the main submodule of PyGGI 2.0; it defines the base classes of programs, engines (introduced in the following section), patches, edits, and algorithms. - **pyggi/algorithms** contains the local search of PyGGI 1.1. More algorithms are planned to be integrated in the near future. - **pyggi/utils** includes general helpers. In addition, code pertaining to the two granularity levels of PyGGI 1.1 have been relocated into the two following submodules: - **pyggi/tree** defines tree-based program representations and mutations. It includes PyGGI 1.1’s tree-based representation. - **pyggi/line** defines array-based program representations and mutations. 
It includes PyGGI 1.1’s line-based representation. 2.2 File-Specific Engines Together with the structural refactoring, the other main feature of PyGGI 2.0 architecture is the introduction of engines. While multiple files could be considered at the same time, in PyGGI 1.1 granularity was a global property, i.e., all files of the targeted source code had to share the same granularity level. In PyGGI 2.0 this constraint is lifted as different parts of the same source code can now be managed by different engines. Engines define both the representation of a single source code file —how to parse the initial contents of the file together with their modification points and how to convert back to text format— and the available atomic operations that can be performed on it, e.g., deletion, replacement, or insertion. PyGGI 2.0 provides two types of engines naturally associated to the two granularity levels of PyGGI and the two submodules pyggi/line and pyggi/tree. Each of the two submodules defines an abstract interface enabling editors to be shared between engines of the same type. In total PyGGI 2.0 provides three concrete engines, one under pyggi/line for general line-based operations, and two under pyggi/tree for Python statements and XML trees. Figure 1 details PyGGI 2.0’s usual workflow for a tree-based program. Engines enable granularity level to be dissociated from the concrete source code parser. This means, for example, that any experiment on a specific language (e.g., Python) can easily be replicated on another (e.g., C++) as long as both parsers implement the same granularity level abstract interface. In practice, the XML engine provides a shared representation at very high granularity for source code, greatly improving PyGGI’s scope for potential experiments. 2.3 XML Integration The two modes of PyGGI 1.1 enabled it to either consider language-agnostic files at the line granularity level, or Python files at the statement level. 
The third engine of PyGGI 2.0 introduces handling of XML files, and enables it to easily tackle C, C++, C# and Java files at various granularity levels through the use of the srcML parser. Listing 1 shows how source code can be translated to XML. Note that srcML outputs a highly detailed XML tree, which is here simplified to a much simpler format for the sake of keeping a reasonable search space. For example, the statement "x = j;" would actually be converted into the following very detailed XML fragment:

    <expr_stmt><expr><name>x</name> <operator>=</operator> <name>j</name></expr>;</expr_stmt>

Figure 1: Workflow of PyGGI 2.0 for tree-based programs

---

1 https://www.srcml.org/

3 EXPERIMENTAL DESIGN

In order to show how PyGGI 2.0 can be used for program improvement, we present two case studies. The first one is concerned with the improvement of a functional property (repair), while the other is focused on non-functional property improvement (runtime efficiency). We also target 3 programming languages: Python, Java, and C. In this section we outline our experimental design.

3.1 Automated Program Repair

3.1.1 Dataset. We evaluate PyGGI 2.0 on the QuixBugs benchmark [15, 25], which consists of 40 defective programs translated into both Python and Java. As only 31 of the programs have a test suite, we target those programs as our repair subjects. Furthermore, since a program that fails on all test cases makes it difficult to distinguish the original faulty program from even worse programs, for the 11 of the 31 defective programs failing on all test cases we additionally tried to generate passing test cases. To do so, we repeatedly mutated the initial failing test inputs until finding passing test inputs that satisfy the described input precondition and yield the same output on both the correct and the defective programs. As a result, we succeeded in generating such passing test cases for 8 out of the 11 programs, and the new test cases were merged into the benchmark's master branch. All defective Java programs are translated to XML files using srcML Beta v0.9.5.

3.1.2 Experimental Setting.
The experiment is conducted at both the line and the statement granularity level, with three modification operators: deletion, replacement, and insertion. For the Java programs translated to XML files by srcML, we targeted only the srcML elements classified as statements, along with "decl_stmt" and "expr_stmt". <table> <thead> <tr> <th>Python</th> <th>Java</th> </tr> </thead> <tbody> <tr> <td>Line</td> <td>Statement</td> </tr> <tr> <td>lis</td> <td>2</td> </tr> <tr> <td>wrap</td> <td>0</td> </tr> <tr> <td>quicksort</td> <td>0</td> </tr> <tr> <td>sieve</td> <td>0</td> </tr> </tbody> </table> Table 1: Number of unique QuixBugs patches To evaluate each candidate patch, we use a simple fitness function defined by the number of failing test cases (including test cases that timed out), and a basic descent-first hill climbing algorithm is employed as the search algorithm. In each step, either a random edit is added to the current best patch or one of the existing edits is removed from the best patch to generate neighbouring solutions. The time limit for test suite execution is set to 10 seconds, and each run of the hill climbing search is given a fitness evaluation budget of 500 steps: the stopping criterion is either when the budget expires, or when a plausible patch is found. We execute the repair experiment 20 times for each fault. 3.1.3 Results. The hill climbing algorithm is able to generate 22 plausible (test-suite adequate) patches for four programs among the 31 defective programs, and the number of unique patches is reported in Table 1. Both the Python and Java versions of lis are repaired at both granularity levels, whereas the other three programs are repaired in only one combination of language and granularity. Interestingly, the program sieve has not been repaired by any repair system in previous work [25].
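The search procedure of Section 3.1.2 can be sketched in Python as follows. This is an illustrative stand-in, not PyGGI's actual API: the fitness function (number of failing tests) and the edit generator are abstracted away as callables.

```python
import random

def hill_climb(fitness, random_edit, budget=500):
    """Descent-first hill climbing over patches (lists of edits).
    Each step builds a neighbour of the best patch by either removing
    one of its edits or appending a fresh random edit; the neighbour
    replaces the best patch only if it fails fewer test cases.
    The search stops early once a plausible patch (fitness 0) is found."""
    best = []                      # the empty patch is the original program
    best_fit = fitness(best)
    for _ in range(budget):
        if best_fit == 0:          # plausible (test-suite adequate) patch
            break
        neighbour = list(best)
        if neighbour and random.random() < 0.5:
            neighbour.pop(random.randrange(len(neighbour)))
        else:
            neighbour.append(random_edit())
        fit = fitness(neighbour)
        if fit < best_fit:         # accept strict improvements only
            best, best_fit = neighbour, fit
    return best, best_fit
```

With a toy fitness that reports zero failing tests as soon as a repairing edit is present in the patch, the loop terminates as soon as such an edit is sampled; multi-edit patches such as the sieve repairs arise when intermediate patches already reduce the number of failing tests.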
The three plausible patches of sieve, which are semantically, but not syntactically, equivalent, consist of more than one atomic operation, while the patches for the other programs are composed of only one operation. These patches show that the simple hill climbing algorithm can gradually find multi-edit patches when an appropriate partial repair is generated. Overall, the results show that PyGGI 2.0 can be used to implement program repair systems in different programming languages, Python and Java, and also at different granularity levels.

3.2 Running time Improvement

3.2.1 Dataset. As for our second case study, we consider a running time optimisation scenario specialising the MiniSAT [8] SAT solver, building on previous GI work [20, 21]. In particular, we use an existing instrumented MiniSAT source code —based on MiniSAT2-070721— from which we translate the main solving algorithm (Solver.C) using srcML, and a benchmark of 130 combinatorial interaction testing (CIT) SAT instances.

3.2.2 Experimental Setting. We operate on both statements and Boolean conditions. Most of the tags of the MiniSAT XML tree are ignored, as we only consider statement ones (e.g., "break", "continue", "decl_stmt", "do"), together with the "condition" tags of "do", "for", "if", and "while" statements. As in the previous case study, we consider deletion, replacement, and insertion of either statements or conditions. Mixed mutations (e.g., replacement of a statement by a Boolean condition) are forbidden. Finally, Boolean conditions such as `<condition>foo</condition>` are automatically rewritten as `<condition><foo></condition>` so that deletion and insertion of conditions work as expected.

Table 2: MiniSAT evolved mutants

<table> <thead> <tr> <th>Mutant</th> <th>Statements executed</th> <th>Time (sec)</th> </tr> </thead> <tbody> <tr> <td>baseline</td> <td>28398038591</td> <td>67.49</td> </tr> <tr> <td>seed 0</td> <td>24247029088</td> <td>67.36</td> </tr> <tr> <td>seed 3</td> <td>28094544573</td> <td>67.23</td> </tr> <tr> <td>seed 4</td> <td>23327239091</td> <td>72.01</td> </tr> <tr> <td>seed 5</td> <td>22496801475</td> <td>62.36</td> </tr> <tr> <td>seed 6</td> <td>25050802063</td> <td>63.51</td> </tr> <tr> <td>seed 7</td> <td>20066013444</td> <td>58.66</td> </tr> <tr> <td>seed 9</td> <td>18197820457</td> <td>58.04</td> </tr> <tr> <td>seed 22</td> <td>26562843149</td> <td>76.15</td> </tr> </tbody> </table>

Following the previous work [20, 21], in order to have a deterministic fitness function, during training we count the number of statements of "Solver.C" executed as a proxy for runtime. This metric is easily obtained by prefixing a global counter increment before all single-line statements and at the beginning of every "do", "for", and "while" statement. Finally, as the GI search process we use PyGGI 2.0's local search with a budget of 2000 steps. Previous work used a genetic programming approach with 5 instances selected in each generation from 5 bins (based on instance difficulty and satisfiability), containing overall 74 instances. Since we do not change instances during the search, we increase the size of the training set to 15, in order to avoid overfitting. Each mutant is first compiled, then executed on 15 instances selected at random at the beginning of the search from the training set. Mutants failing to solve all 15 instances are immediately discarded.
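The counter-based instrumentation described in this section can be sketched over a srcML-style tree in a few lines of Python. This is an illustrative stand-in, not PyGGI's actual implementation; the tag set, the counter name, and the sample fragment are assumptions.

```python
import xml.etree.ElementTree as ET

# A tiny srcML-like fragment standing in for Solver.C (illustrative only)
SRC = """<block>
  <decl_stmt>int x = 0;</decl_stmt>
  <expr_stmt>x = j;</expr_stmt>
</block>"""

# Tags treated as single-line statements (a subset, for illustration)
STMT_TAGS = {"decl_stmt", "expr_stmt", "return", "break", "continue"}

def instrument(root, counter="__stmt_count++; "):
    """Prefix a global counter increment before every single-line
    statement, so that running the program counts executed statements."""
    n = 0
    for node in root.iter():
        if node.tag in STMT_TAGS:
            node.text = counter + (node.text or "")
            n += 1
    return n

root = ET.fromstring(SRC)
print(instrument(root))  # → 2 instrumented statements
```

Serialising the modified tree back to source text then yields a program whose final counter value is the deterministic fitness used during training.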
Training is performed 30 times, with different independent random seeds. The performance of the 30 final mutants is then reassessed using the second test set of 56 SAT instances (used in previous work).

3.2.3 Results. Table 2 shows the assessment of 9 of the 13 final mutants that were able to correctly solve every one of the 56 previously unseen test instances, averaged over 30 executions. As for the 21 other mutants, 4 correctly solved every instance but required noticeably more time than the baseline (between 100 and 200 seconds), 10 incorrectly classified at least one instance, 5 were discarded after spending more than 120 seconds on a single instance, and finally 2 experienced errors during execution. The best mutant —seed 9— reduced the cumulative number of statements executed to only 64.1% of the baseline (the empty patch, i.e., the original source code) on all 56 test instances. Improvements in fitness mostly translate into improvements in running time, with the best mutant clearing the test benchmark in 58.04 seconds, compared to the 67.49 seconds of the baseline, an improvement of 14%. Furthermore, analysis of the seed 9 mutant highlighted a mutation which, applied on its own, yielded a 19.4% speed-up in running time. This mutation inserts a line manipulating variable activity levels, thus rebalancing the priority queue for variable assignment during search. This mutation is different from the one-line "good change" modification found in previous work [20, 21]. Interestingly, these two mutations are compatible, leading to a mutant clearing the test benchmark without error in only 49.44 seconds (26.8% speed-up).

4 RELATED WORK

The area of Genetic Improvement (GI) arose as a separate field of research only in the last few years [18]. GI tools can be divided into two categories: those that deal with the improvement of functional and those that deal with non-functional program properties.
In the first category, program repair tools such as GenProg [11] have gathered a lot of attention and led to the development of the field of Automated Program Repair (APR). Within the field, however, currently only the ASTOR [17] framework allows for the comparison of different repair approaches. Another functional property for improvement tackled by GI is the addition of a new feature [16]. With regard to the improvement of running time, memory or energy consumption, there is a plethora of GI frameworks available that target a specific programming language [21]. However, a lot of these tools are not available, and, aside from one exception, have not been designed to be general GI frameworks. The closest to PyGGI in its objectives is the Gin toolbox [6, 24], which targets Java. There also exist a few code manipulation frameworks that came from the field of GI. Among these, the Software Evolution Library (SEL) is worth mentioning, as it aims to manipulate multiple programming languages. However, it is written in Lisp and requires a substantial learning overhead. PyGGI, on the other hand, aims to be a light-weight framework for work in GI.

5 CONCLUSIONS

We present PyGGI 2.0, a Python General Genetic Improvement framework that allows for quick experimentation in GI for multiple programming languages. This is achieved by the use of the XML representation incorporated in version 2.0 of the tool. We conducted two experiments, showing two usage scenarios of PyGGI 2.0: for the purpose of improvement of functional (repair) and non-functional (runtime efficiency) properties of software. We show that PyGGI 2.0 can find 22 patches for four programs from the QuixBugs benchmark, including a fix not previously produced by an APR tool. We were able to find these both in the Python and Java implementations of the subject programs.
Moreover, we show that PyGGI 2.0 can also find efficiency improvements of up to 14% in the MiniSAT solver when specialising for a particular application domain, finding additional improvements to previous work. We thus demonstrate that PyGGI 2.0 is a useful tool for GI research, facilitating quick comparisons between different programming languages. ACKNOWLEDGEMENT Gabin An and Shin Yoo are supported by Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (2017M3C4A7068179). Aymeric Blot and Justyna Petke are supported by UK EPSRC Fellowship EP/P023991/1. 3http://program-repair.org/tools.html 4https://grammatech.github.io/sel/ REFERENCES
Building an Heterogeneous NoC with the CλaSH Hardware Description Language Jesse Kleinhesselink University of Twente P.O. Box 217, 7500AE Enschede The Netherlands jjkleinhesselink@gmail.com ABSTRACT This paper discusses the current limitations of traditional SoC designs and traditional Hardware Description Languages (HDLs) and proposes the use of the functional HDL CλaSH to describe future SoCs. The traditional SoC design methodologies cannot keep up with the increased need for performance, and imperative HDLs lack certain abstraction mechanisms to deal with the increased complexity that comes with that need. CλaSH will be tested with a few NoC topologies to see if it is suited for this new SoC paradigm. The findings of these tests will be presented, and conclusions will follow about CλaSH as an implementation language for NoCs. Keywords System on Chip, Network on Chip, heterogeneous NoC, CλaSH, Haskell, topology 1. INTRODUCTION As the number of transistors that can be contained on a given chip continues to grow, old System on Chip design methods cannot keep up with the increasing complexity anymore. Therefore, new design methods conforming to the NoC paradigm have been suggested in multiple papers. NoC is a paradigm in which different resources on a chip are all physically interconnected through a network of switching mechanisms and resources. Such a resource can be any logic, storage, or communication device that fits on the NoC, such as a processor, DSP, or memory module. All components use a protocol stack (similar to TCP/IP) to communicate with each other, so logic and control of the complete system are fully distributed. This paradigm enables us to build scalable intra- or inter-System-on-Chip networks that have high throughput and relatively low hardware costs. Also, system designs can possibly be reused. In addition to the design methods, the increased complexity makes high-level abstraction mechanisms that hide the complexity more important. 
Those abstractions are lacking in traditional HDLs such as VHDL. CλaSH is an HDL that provides these abstraction mechanisms. It is a functional hardware description language based on Haskell. Functional languages lend themselves very well to hardware description because they excel at describing mathematical constructions, of which functions are especially important. Signal-manipulating circuits are in fact functions on incoming signals, so the use of a functional language comes naturally. The abstraction mechanisms allow the programmer to think functionally and semantically about the system and, more importantly, to hide details. The language is still in development and lacks support for certain language constructs such as recursion [1]. It can, however, already be used to describe a range of circuits [2], including a non-trivial dataflow processor [9]. The goal of this paper is to present the possibilities of CλaSH to describe NoCs, as they will probably play a meaningful role in the field of chip design. Separating the network from the topology allows us to configure a network with multiple topologies and test them. Three important network topologies will be discussed. The network is also heterogeneous, meaning it may contain any resource as long as its interface conforms to the interface of the network. In the next section, the research questions will be presented. Section 3 handles the research methods. Then, attempts will be made to answer these questions. First there are some considerations that need to be made, after which the Haskell descriptions of an NoC will be discussed, including a 2D-mesh-like structure, ring, and binary tree topology. In section 6, the discussion will be extended with CλaSH, as the CλaSH compiler poses extra constraints on the code. 2. RESEARCH QUESTIONS There are a few questions that come to mind when considering a heterogeneous NoC in CλaSH. 
These questions relate to the use of Haskell and CλaSH as implementation languages for NoCs. When building a network one needs to consider the different topologies that are possible. Every topology has its positive and negative aspects regarding performance, hardware costs, and other criteria. Also, some topologies might lend themselves better to implementation in a functional language. Three topologies will be selected and put to the test together with the network traversal algorithms. The questions are: 1. Can the network algorithms be expressed in CλaSH? 2. Are the selected topologies suited for implementation in CλaSH? The questions will first be discussed with reference to Haskell, then conclusions regarding CλaSH will follow. 3. METHODS As the number of available topologies is overwhelming, there is a need for reduction. Criteria for the selection and the topologies themselves will be based on literature in the area of NoC. The actual selection will be loosely based on the criteria. Then, this selection will be implemented in Haskell as a first step towards synthesis. The main cores will, initially, be simple black boxes that only generate test data. The switches will also be kept as simple as possible. After implementation in Haskell, the implementations will be tested and evaluated. First, the tests only involve the dummy black-box resources. The internal representation shall be checked for correctness for the first three clock cycles. The second round of testing involves the router of another author in this field, which substitutes for all the original dummy switches. It is important that, assuming that the router operates correctly, all messages reach their destinations. As states must be manually checked, only a few use cases will be implemented. When testing succeeds, attempts will be made to translate the Haskell code into synthesizable CλaSH. 
Any structural changes that might be needed in this step may lead to differences in evaluation compared to previous testing, so this code must also be tested. The CλaSH code is compiled to VHDL, so simulations must be run on top of the code. Final testing results will then be presented. Additionally, there might be a need for improvements in CλaSH. 4. CONSIDERATIONS Because CλaSH cannot handle recursion yet, this must be taken into account when writing the code. So, recursion should be avoided whenever possible. The entry point of the program, which eventually will be called every clock cycle, preferably executes each node's operation in parallel because sequential code runs much slower. For successful communication there is a need for common interfaces for resources and communication devices. The external part of the resource network interface (RNI) needs to be considered in the design, so that any device with the correct external RNI can be placed into the network. The internal workings are not of any interest for the network implementation because they are abstracted away in the network design. The width of the node-to-node signals puts some constraints on the interfaces and the lower-layer protocols. Also, variable signal widths are possible, as in Fat Trees [8], where links become fatter as they go up in tree level. Dataflow must be synchronous in such a way that no data is lost through overwriting or because some resources run faster than others. Also, all devices must be able to differentiate between old and new data. This means that there must be a global clock, or other extra signals that devices can use to control dataflow. It is stated that future SoCs should use the GALS model of operation [7]. So, the traffic between nodes needs to be asynchronous from the nodes themselves. This reduces complexity, mainly because of the heterogeneous nature of the NoC we are discussing, and reduces wiring and power consumption. 
It might prove to be difficult to describe a GALS network in Haskell, however, so a global clock is a simple alternative. Considerations about topologies will be handled in the next section. 5. DESCRIPTIONS IN HASKELL The design is modular such that the network topology is separated from the algorithms that run on it. These are in turn separated from the actual cores and switches. So, in theory, any routing protocol and any core can be integrated into the network. The topologies were also programmed in a generic way to support various sizes. Because CλaSH cannot yet handle recursion, and it turned out not to be necessary, recursion was omitted. A network, whatever the topology may be, is defined as a List of the Node type. The Node type is specified in Figure 1. For all adjacent nodes, the buffer for incoming traffic is specified as a mapping from Node to Integer List. The mapping attribute of a node defines its operations. The mapping takes an abstract state and a buffer, does calculations, and returns a new abstract state and output buffers. The abstract state, or UnitCore, acts as a container for the internal state of the resource. A UnitCore can be made for any possible resource, so it actually is an abstraction mechanism that hides the internal workings of a resource. To support this, an RNI must be made for a resource to run in the network. The network does not, however, support packet sizes greater than 64 bits because packets are represented as Int64. They could be represented more generically, however, which would eliminate this constraint. 5.1 Topologies From the popular topologies, two were taken from literature and one was added for completeness (the ring). A 2D-mesh, a ring, and a fat tree topology were initially considered. Due to time constraints, a binary tree was implemented instead of a fat tree. All topologies can be laid out in a two-dimensional plane, which is a favorable property for SoCs [6] because less die material can be used. 
Important selection criteria for performance are diameter, connectivity, bandwidth, latency [3], and scalability. Bandwidth and latency are not measurable for Haskell implementations, so they are omitted. The diameter refers to the maximum number of nodes between any source-destination pair. Connectivity is the number of direct neighbors of a node. All of the topologies are implemented with each switch being the neighbor of exactly one core, so the number of switches equals the number of cores. The communication channels consist of two one-directional links between two nodes. Note that, in this section, n is used as the number of switches, or switch/core pairs, in the topology. 5.1.1 2D-Mesh The mesh discussed in this subsection uses the CLICHE approach (Chip-Level Integration of Communicating Heterogeneous Elements) [7]. The term mesh describes the layout of the channels, but it actually is a torus-like topology. Nevertheless, the terms mesh and 2D-mesh will be used for this topology unless stated otherwise. The original paper also uses these terms. Note that a torus is in fact a mesh with the endpoints connected to each other. All resources are embedded in an m × n grid and interconnected via switches. Every core communicates with exactly one switch, which is also connected to the four neighboring switches (see Figure 2). This topology was chosen because it scores well on performance, complexity, and hardware costs. The 2D-mesh also scales well. The structure is rather simple and straightforward. It uses more wires than the other two, however (3n), but this is just a linear factor. The diameter of this topology is \(\sqrt{n} - 1\), but with the switch-core configuration this boils down to \(\sqrt{n} - 1 + 2 = \sqrt{n} + 1\) because there is always an overhead of two switches. The connectivity of the combined switch and core is 4. 
5.1.2 Ring All switches in this structure are connected in a ring, with, again, the cores being attached only to their neighboring switches. The diameter of this topology is \(\lfloor \frac{n}{2} \rfloor\); taking cores into account, this is \(\lfloor \frac{n}{2} \rfloor + 2\). The combined connectivity is 2. This topology has long shortest paths and bad scalability, but the wire costs are relatively low (proportional to \(2n\)). 5.1.3 Binary Tree The fat-tree structure was initially chosen because it has been formally proved to be the most cost-efficient topology for VLSI realisations [8] (see Figure 3 for the SPIN fat-tree). Because of time constraints, however, the binary tree was chosen; the structures are related to each other. Tree topologies are defined recursively, which poses some difficulty for the programmer, but the solution is quite simple. Binary trees, specifically, can be described in an array with the parent placed at position \(i\) and the left and right children placed at positions \(2i\) and \(2i + 1\), respectively. The root is placed at position 1. Similar configurations can be made for trees with a constant branching factor greater than 2. The diameter of the binary tree is \(2d\), combined \(2d + 2\), with \(d = \lfloor \log_2 n \rfloor\) being the depth of the tree. The connectivity is 3. The binary tree has great performance and few wires (\(2n - 1\)). The better performance of the fat tree is mainly expressed in bandwidth and latency, as the bandwidth is distributed better at the top-level nodes. 5.2 Synchronisation The network uses a global clock to distribute messages. The operations of the resources are initiated by the clock, but they can use multiple clock cycles as long as they produce output every cycle; this could be an empty packet, for instance, but this depends on the network protocol. 5.3 Code 5.3.1 Network algorithm Figure 4 shows the network algorithm code at top level. See Figure 1 for the Node type definition. 
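As an aside, the binary-tree array layout of section 5.1.3 can be sketched in Python (illustrative only; the paper's implementation is Haskell). With 1-based positions, parent and children are simple index arithmetic, and the depth is \(d = \lfloor \log_2 n \rfloor\):

```python
import math

def parent(i):
    return i // 2              # parent of the node at 1-based position i

def children(i):
    return 2 * i, 2 * i + 1    # left and right child positions

def depth(n):
    # depth d of a complete binary tree with n nodes (root at depth 0)
    return int(math.log2(n))

# root at position 1; the children of position 3 sit at 6 and 7
assert children(3) == (6, 7)
assert parent(6) == 3 and parent(7) == 3
assert depth(7) == 2           # a 7-node tree has depth floor(log2 7) = 2
```

This layout is what lets the tree be stored in a flat array, avoiding the recursive definition that CλaSH cannot yet handle.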
The function cycleClock takes a network, which is in fact a node list, and an integer \(i\), and executes every node's operation \(i\) times. traverse executes the resources once. It takes the network argument and calls the updateNodes function with all identifiers within the network; these are taken using the label attribute of all Nodes. The function updateNodes has the same functionality as updateNd, but for multiple nodes and with the addition of clearing old data from buffers. Unfortunately, the use of foldl makes this network not executable in parallel, since the fold functions introduce dependencies between the nodes. This sequential mode of operation is still a major performance problem, but it is probably fixable by rewriting the algorithm. The folds are used because the updates that a given node makes to its neighbors need to be available before those neighbors execute their logic. Otherwise, changes are overwritten by initial values. 5.3.2 Resource Network Interface The RNI in Figure 5 is the interface for the router [5] that is used in the mesh topology. The RNI executes the actual routing function with state and inputs as parameters, which are taken from the corresponding node, and translates the output back to the generic network format so that the node can later be updated. The UnitCore is discussed at the beginning of this section; it contains the actual internal state. 5.4 Testing First, the initial neighbor configurations of all nodes were checked to make sure that the topologies were correctly described; then the network algorithms were put to the test. Unfortunately, only the 2D-mesh was tested with a working routing protocol [5] because it was specifically designed for a mesh structure of dimensions at least 3 x 3. This is, however, not a problem because initial network configurations are checked. In addition, the routing protocol is a test for the network traversal algorithms. 
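The fold-based update of 5.3.1 can be mimicked in Python (a hedged analogue with illustrative names; the actual implementation is Haskell): a node's outputs must land in its neighbours' buffers before those neighbours run, so the update is a left fold over the node list rather than an independent map.

```python
from functools import reduce

def update_node(network, label):
    """Run one node's mapping, clear its old buffers, push outputs to neighbours."""
    node = network[label]
    state, outputs = node["mapping"](node["state"], node["buffers"])
    node["state"] = state
    node["buffers"] = {sender: [] for sender in node["buffers"]}  # clear old data
    for neighbour, packets in outputs.items():
        # visible to nodes updated later in the fold -- hence the sequential order
        network[neighbour]["buffers"][label] = packets
    return network

def traverse(network):
    # foldl over all node labels; a plain parallel map would lose these updates
    return reduce(update_node, list(network), network)

def dummy_switch(state, buffers):
    # dummy behaviour from the tests: bounce every packet back to its sender
    return state, {sender: pkts for sender, pkts in buffers.items() if pkts}

network = {
    "A": {"state": None, "buffers": {"B": [88]}, "mapping": dummy_switch},
    "B": {"state": None, "buffers": {"A": []},  "mapping": dummy_switch},
}
traverse(network)
# after one clock cycle the packet A received from B has been bounced back:
# it now sits in A's buffer slot for B, while B's buffers are clear again
```

The `if pkts` filter and the dictionary shapes are assumptions made for the sketch; the point is only the data dependency that forces a fold.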
The protocol was only implemented in the switches since there was no core implementation available. However, since every switch pairs with a core, the switches could forward the messages directly to their cores when the destination was reached. The cores were implemented as black boxes that merely act as hubs. Testing was done by placing two packets – one for the header and one for the data – in two core buffers. All the other buffers were filled with null-packets – packets that initiate no actions. Then, clock cycle by clock cycle, the packets were monitored until they reached their destination. The other two topologies were tested using the dummy core mentioned previously and the dummy switch, which simply copies its inputs. 5.5 Results In the listing below, the initial buffer configurations are shown for the router test. All buffers that are not shown are filled with empty packets. So, this configuration emulates the behaviour that cores C(0, 0) and C(2, 2) have just sent two packages to their switches. The first package basically means that the number 88 is sent to the core at position (2, 1). The meaning of the second one is analogous to the first. Keywords like Continue are not of any concern in this paper; they are used in the routing algorithms. It is important that the second packet in the buffer belongs to the same stream as the first one. ```
data UnitCore = Router RouterState | DummyRouter deriving (Show)

switch1 :: (UnitCore, [Int64]) -> (UnitCore, [Int64])
switch1 (Router state, inputs) = (unit', output)
  where
    unit'              = Router state'
    (state', packets') = router state packets
    packets            = map inputToPacket inputs
    output             = map packetsToInt packets'
``` Figure 5. RNI Code of a switch. The test results of the other topologies are far less interesting because the dummy functions only copy the input to the output, which means that the packets are only bounced back and forth between every adjacent node pair. 
Initial neighbor configurations were correct for all topologies, and the traversal algorithms seemed to work flawlessly in the 2D-mesh. So, all three topologies were successfully implemented in Haskell without the use of recursion. Translation to CλaSH is, therefore, feasible. Note that, although recursion was not necessary here, it might be for other projects, and code can become quite unreadable when recursion is omitted. 6. TRANSLATION TO CλaSH Translations to CλaSH have not been made. It is expected that those translations are not significantly complex, because recursion was omitted and the remaining translations largely consist of converting Haskell types to hardware-friendly types. This means that, for a specific topology to be translated, the List types must be converted to vectors of constant size. 7. RELATED WORK Related work has been done in the area of NoCs and in functional HDLs, but not so much in both. The router discussed in 5.4 was meant for mesh NoCs and was written in Haskell. It is expected that the Haskell descriptions of this router can be translated to CλaSH with few transformations. The topologies that are described in this paper are based on topologies in other papers. The 2D-mesh was taken from [7]. This paper proposes a packet-switched platform that includes the architecture and design methodology. The architecture includes the physical layer, the data link layer, and the network layer of the OSI protocol stack. It describes two phases, where phase one encompasses the design of a concrete architecture and phase two maps the specific application onto this architecture. The fat-tree topology was based on [4]. This paper discards the use of a bus for future On-Chip systems and presents a packet-switched interconnection template with the fat tree as its topology. This paper also discusses the modularity in which services are built on top of a bare network. For other functional HDLs that have been made, the reader is referred to [2]. 
In its related work section, that paper briefly describes other functional HDLs that are available. HML, Hawk, ForSyDe, Lava, and Bluespec are discussed. All these languages fall short at certain points where CλaSH does not. The main reason is that CλaSH supports choice elements like pattern matching and guards and can handle polymorphic Haskell types and higher-order functions [2]. 8. CONCLUSIONS Two topologies were selected from research papers, the fat tree and the 2D-mesh. The ring was added for completeness. The network algorithms were written successfully in Haskell, without recursion and with types that can be transformed to CλaSH, which answers question two. A fat tree is a complete binary tree with bandwidth parameters, so these results can easily be extended. The same goes for the configuration of the core locations; the cores in the fat tree are positioned at the leaves, while the binary tree discussed here has cores at every switch. The code still needs some improvements to achieve acceptable performance, better flexibility, and better readability. This will be discussed in the next section. Unfortunately, there was no time to create a GALS network. The network uses a global clock for both the network and the resources. This works, but it has poor performance. 9. FUTURE WORK There is still a lot of work to be done when it comes to descriptions in functional HDLs. Not only is the work presented in this paper incomplete, CλaSH is still a language in development, mainly because of the lack of recursion. The research in this paper needs to be extended with the translation step to CλaSH. Also, the implementations are not tested thoroughly; only the mesh could be tested with a routing protocol, because of the static nature of the protocol. The code needs some improvements to facilitate the integration of new topologies and resources; there is still a lot of work in writing the interface part, partly due to the use of integer types instead of algebraic types. 
The network algorithms still operate sequentially, which needs to be transformed into parallel code so they can run more efficiently. The plan to write a GALS network has not succeeded. CλaSH is still a very young language, so complex and large systems have to be written in CλaSH to fully explore its potential and limitations. 10. REFERENCES
Human-Centered Evaluation of Software Artefacts in Computer Science: Introduction, State-of-the-Art, and Perspectives Stefan Hanenberg University of Duisburg-Essen, Germany Santiago de Chile, Chile, 14.09.2012 What is this talk about? - Tries to argue that human-centered / empirical studies are necessary - Introduces some basic terms - Gives an overview of techniques required to perform experiments - Shows pitfalls of experiments - Gives an example of an experiment Motivation - Two different targets for research in CS - Machines - Execution speed, memory consumption, etc. - Humans - Development speed, development errors, etc. - Nowadays research methods mainly address machines - Human plays rather minor role - Usability (human interaction) rarely tested Why should we care about humans? - Humans are one of the main audiences for CS constructs - Usability of - Programming languages - APIs - User interfaces - ... - Extensibility - Maintainability Current situation - Example: Programming Language - Typical statement from the community: - „If a language is good, people will use it“ - Questions: - „How many people must use a language so that it becomes good?“ - „What about the moment when a language was initially developed?“ - „What about marketing effects?“ - „What should be the motivation of the first developer using a new PL?“ - Strange - Later on hardly tested whether PL was being used - „There is a community...so the language must be good“ - Example: well.....many, many Typical situation: anecdotes instead of applied research method Claim - Artifact design is (often) about developers - Current dominating approach 1. Find example 2. Build construct 3. Claim that construct helps developers This leads to nowhere - Research methods needed that consider developers / users ... involved humans Why not the traditional way? • Machine / algorithm / etc. • Formal models, formal proofs, etc. 
• Human • No formal model => no formal reasoning => traditional approaches cannot be applied Overview of CS Research Methods Taken from [Hanenberg, Onward'10] Structure - Need for experimentation (here: controlled experiments with humans) - What does experimentation mean? - What is required to run experiments? - State-of-the-art - Challenges in experimentation - Example: Experiment on type systems - Conclusion Why experiments? - Problem (again) - No formal model available of how humans work - Experiments - Observations as tests of what really happens - Approximation (examples) of actual behavior - What is a test? - There must be a statement which says when a test fails (hypothesis) - There must be an objective way to check whether the test has failed (falsification) Logic of experimentation • An experiment... • does not provide a proof for a theory • can NEVER consider all existing variables • can hardly reflect real-world situations • can only provide some evidence that a new construct helps (apart from a developer's subjective impression) • Why should it be useful? • Test: „Does the artifact really help in situations the inventor had in mind?“ • Result: „Uselessness of an artifact can be shown!“ Structure of Experiment - Measurement of impact of - Independent variable (e.g. PL) on - Dependent variable (e.g. development time) - A variable has a number of different treatments - Example: Comparison between Java, C++, and C => Indep. 
Variable PL with three treatments - An experiment typically suffers from confounding factors (variables which are not controlled) Background of Experiments (Karl Popper) • Scientific argumentation – Falsification of hypothesis (e.g. the use of a statically typed language decreases development time) – More often • Exploratory analysis (let's see what happens if…) – NO PROOFS / NO GENERALIZABILITY • But always the hope that repeated observations reveal some truth Background of Experiments (Karl Popper) • Validity of hypotheses – Evidence for hypotheses increases the more often they could not be rejected • Assumption – Massive execution of experiments • Hope...(as practical researcher) • the more data available, the more probable it is that we finally „see some rules“ Single vs. Multiple Runs - General idea of experimentation - It shows that a hypothesis does not hold - Single-run experiments (in physics) - Example: Galilei's Pisa experiment => Single run falsified existing theory => Boolean statement from single run => **Boolean logic** - With humans: **Multiple runs** - Humans differ too much => Multiple runs required => How often do runs need to falsify the theory? => Argumentation based on analysis of a sample => **Statistics** Remaining questions • How to design / perform experiments? • How to analyse experiments? ...let's discuss it the other way around Statistics in 5 minutes.... - Descriptive Statistics - Arithmetic mean, medians, variance, etc. - Relatively easy to understand, but inappropriate - Inductive Statistics - Consideration of probabilities - Not that intuitive to understand, but state-of-the-art Example: Descriptive Statistics - Software development times with techniques A and B (in hours), 10 subjects - A: 1, 2, 3, 4, 1000 (mean: > 200, median: 3) - B: 10, 20, 30, 40, 50 (mean: 30, median: 30) - Problem - Argumentation based on mean or median? - Is 1000 an outlier that should not be considered? - Problems of descriptive statistics well known...
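The sample data from the descriptive-statistics slide can be checked with Python's standard library; `statistics.mean` and `statistics.median` reproduce the numbers and show how a single outlier dominates the mean (a minimal sketch; the variable names are my own):

```python
from statistics import mean, median

# Development times in hours for techniques A and B (5 subjects each)
a = [1, 2, 3, 4, 1000]
b = [10, 20, 30, 40, 50]

print(mean(a), median(a))  # 202 3  -> the outlier 1000 dominates the mean
print(mean(b), median(b))  # 30 30  -> mean and median agree
```

Arguing from the mean suggests B is far faster; arguing from the median suggests the opposite, which is exactly why descriptive statistics alone are considered inappropriate here.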
Inductive Statistics (1) - General idea: compare distribution / density functions of samples A and B - Computation of overlap between the density functions [Figure: two density functions with effect size, deviation, and overlap annotated; test A < B] Inductive Statistics (2) - **P-value**: (Error-) Probability that a sample does NOT show A<B - Arbitrarily(!) chosen alpha-level as „significance level“ (typically: 0.05, 0.01, …) - Example: - „The difference turned out to be significant under an alpha-level of 0.05“ => p<0.05 Inductive Statistics (3) - A sample typically does not show a perfect curve => approximation of the density function required => sometimes, not even the kind of density function is known - Standard mechanisms (significance tests) to compute p-values for different scales and sample sizes - T-Test, Wilcoxon-Test, Mann-Whitney-U-Test, … - Standard mechanisms to determine whether a certain distribution can be assumed - Shapiro-Wilk-Test, K-S-Test, etc. - All these tests are implemented in standard statistics software (R, SPSS, S, MiniStat, SAS, ...) Inductive Statistics (4) - Comparison of multiple curves (ANOVA): Impact of treatments 1, 2, 3 on a measurement - Again: p-value (error probability that the difference does not depend on 1-3) - Partial-Eta-Square: How much of the variation can be explained by the variable (with the treatments 1-3) Inductive Statistics (5) - Quasi-endless different kinds of tests for different numbers of treatments and variables **Take away:** - Determination of error probability $p$ - Different standard significance tests - Value of $p$ depends on - Effect size - Sample size - Scale - Applied significance test - Deviation (breadth of curve) Remaining question - How to design / perform experiments?
- What kinds of experimental design are possible / desirable? - What is the impact of a certain design on the results? - What kinds of measurements can be applied? - ... Experiment Design (1) - Two-group between-subject design - One independent variable with two treatments - One subject tested under one treatment - Two different groups, each containing subjects with the same treatment - Example (Languages A, B): - A: 1, 2, 3, 4, 1000 - B: 10, 20, 30, 40, 50 - Problem - Both groups require subjects with "the same characteristics" - Problem: requires a "very large" effect size in order to measure a difference (for small sample sizes) Experiment Design (2) - Four-group between-subject design - Two independent variables with two treatments each - One subject tested under one treatment pair - Four different groups, each subject assigned to one treatment pair - Example (Language A, B; Programming Task 1, 2) - G1 (Language A, Task 1): 1, 2, 3, 4, 1000 - G2 (Language A, Task 2): ... - G3 (Language B, Task 1): ... - G4 (Language B, Task 2): ... - Problem - Groups still require subjects with „the same characteristics“ - Still: requires a „very large“ effect size in order to measure a difference (for small sample sizes) Experiment Design (3) - Large variety of further designs - Repeated-measures designs, factorial designs, block designs, ... - Between- vs. within-subject designs, ... - General problems / considerations - Does the design match the hypotheses? - Difference hypotheses, correlation hypotheses, ... - Does the design permit determining the effect? - Effect size, deviation, sample size, statistical power of the required significance tests, ... Experiment Design (4) - General problem: **No measured effect** - Possible interpretations: - Sample size too small - Deviation too high - Inappropriate design - Non-exact measurement - .... - Alternative interpretation - Well, maybe the effect does not exist - Pure technical problems - Easy to run into these problems!!! - NO (!)
indicator that the main effect does not exist Experiment Design Example - 2-group experiment, 10 subjects, comparison of Java and Assembler - Subjects: First-year students - Task: - Write an algorithm that computes a strongly connected component with $O(n^3)$ - ...without using a book on algorithms - Assumed result: - Average solution requires more than a year of development time - No measured difference between Java and Assembler => very large deviation, small sample size, unbalanced groups,... => the actual task has a huge impact on measurements => be careful when having an experiment without a measured effect (p > alpha-level) Experiment Design: p > 0.05 - But - if the significant effect of the variable is „obvious“ (common community belief) - if the number of subjects is „high“ (whatever that means) - chosen tasks are the „killer examples“ for the measured technique - …then... => Non-significant results are still interesting (but only then!) Experiment Design (6) - Take away: Experiment design - ...must match the research question - ...influences the final result (p-value) - ...requires appropriate analysis (t-Test, ANOVA, …) - ...results highly depend on the actual task - ...be careful when no effect has been measured Ok, let's do experiments ... but where and how to start? Challenges of Empirical Studies (remember: typically neither hypotheses nor a concrete scenario available) Challenges of Empirical Studies (1) - Find / invent a hypothesis - Find situations where hypotheses should be tested - Find a good design - Typical problem - „Fighting the deviation / effect-size beast“ Challenges of Empirical Studies (2) • Scientific approach – Observation of singular events (sample) (e.g. developers using a dynamically/statically typed programming language) • Formulation of a hypothesis • Identification of dependent / independent variables (e.g.
development time depending on the type system) • Construction of environment (IDEs, tasks, languages, machines, …) – Collection of subjects – Measurements (e.g. development time to solve a certain task) – Analysis (mainly inductive statistics) Problems: Experiment Design • Comparison between two samples • Example 1: Same effect size, different deviation • Large overlap => no (significant) difference • Small overlap => (significant) difference • Example 2: Different effect size, same deviation • Large overlap => no (significant) difference • Small overlap => (significant) difference Problem(s) in Experimentation: Conclusion • The experimenter should try to - reduce deviation, and/or - increase effect size ● Possible ways ● Adaptation of the experimental design (e.g.
within-subject design) => Reduction of deviation ● Adaptation of tasks (no development „from scratch“) => Increase effect size Example: Static Type System [Kleinschmager, Hanenberg, Robbes, Tanter, Stefik; ICPC'12] - Background: 4 experiments, „mixed results“ - Idea: Static type systems help when using an undocumented API - Experiment - Java / Groovy as PLs - 9 programming tasks (designing the tasks took about 2 months) - 2 tasks: fix semantic error / 2 tasks: fix type error / 5 tasks: use API classes - 33 subjects (mainly students) - Within-subject design (2 groups) - Result - Positive effect for 6/9 tasks - No effect on fixing semantic errors - Positive effect on fixing type errors - Mostly (4/5) positive effect on using API classes Example: Static Type System • Tasks 4, 5: Semantic errors • Tasks 1, 2, 3, 6, 8: New class usage • Tasks 7, 10: Type errors Example: Static Type System - Potential problems - Artificially constructed API - parameter names do not reflect type names (but names chosen from the domain) - Is it representative? - Artificially constructed environment - Artificial programming tasks - Java type system - Maybe we measured something else - "The existence of type annotations in the code helps... no matter whether they are statically type checked or not" - Maybe „in the wild“ the positive effect of the static type system „vanishes“ - There is no generalizability Example: Static Type System • How to go on? • Traditional way – „We have done an experiment on type systems and found differences, let's go to the next topic“ • Alternative way – Go on with experimentation on type systems • Variations on type systems, IDE support, replication of experiments, etc. – Try to find a correlation hypothesis that survives falsification trials Where to Start? - Relatively few textbooks available specific to software engineering Where to Start? • Huge bunch of textbooks outside the domain of software engineering • Psychology • Social Sciences • Medicine • … • Why not just use these books?
• Problem: Different domains have different problems… Problem of different domains - What is the difference between measuring blood pressure and software development time? Problem of different domains - **Blood pressure** - You will hardly find two (living) human subjects on this planet whose blood pressure differs by a factor of 10 (even a factor of 5 is unlikely) - **Software development time** - It is hard to find a sample of human subjects where the development time between the best and worst developer differs by less than a factor of 5 => A large set of experimental designs / statistical methods from, for example, medicine cannot be (directly) used in software construction State of the Art: Empirical SE State of the Art: Empirical SE (1) - The empirical approach is typically not taught to students - ...how can students check whether a statement like „static type systems are good for developers“ holds? - ...how can students understand an empirical study they are reading about? - ...how can a student perform such a study? - There are empirical studies / controlled experiments (ok, not that many) State of the Art: Empirical SE (2) - Typically, a large number of experiments suffer from general problems (experiment design as well as analysis) - A lot of techniques come up without a hypothesis / proposed measurement - Example: "Eclipse is quite a mature IDE and helps developers a lot" => The experimenter becomes the "inventor of the hypothesis to be tested" State of the Art: Empirical SE - Theories mainly describe the existence of a difference - “…static type systems better than dynamic type systems“ - …empirical knowledge rather low - Theories typically do not try to quantify differences (for some good reasons) - …empirical knowledge rather low - Experimenters currently have to „invent situations for language constructs on their own“ - Example: Java vs. Assembler.... Empirical SE: Open issues - Endless list of open issues - How can we distinguish good from bad developers upfront?
- Fundamental question for certain experiment designs (factorial design, block design, etc.) - What kinds of programming tasks are worth being studied? - Which tasks have small deviations and represent „daily programming tasks“? - What tool support should be delivered in an experiment? - Most often, no data about tools is available... - ... Long-term goal of SE - Theories - Descriptions of situations where certain constructs dominate others (size of the difference part of the theory) - Large number of experiments that try to falsify theories - Example (very first initial step): - “When using an undocumented API, ..... ....static typing reduces development time“ - General kind of theory: - “When the code is of kind X, ..... ...the use of construct A leads to C ...which differs from construct B by factor...“ Discussion & Conclusion • Controlled experiments as a research method • Statistics, experiment designs • Many, many problems • Missing experimentation in the past, basics, organizational issues • ... Conclusion - **We must teach experimentation** - Help people/students to understand what's going on - Students need to know methods which allow them to identify techniques which are „bad, time consuming, error prone“ - **We need to integrate experimentation in our courses** - The SE course should not only say „Pair programming is good“, it should also introduce the experiments which revealed that effect - **We must do experimentation** - We want to improve the life of developers & users - This does NOT mean that we should ignore the machines
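As a concrete companion to the talk's statistics overview, the following is a minimal, dependency-free sketch of one of the significance tests it names (Mann-Whitney U), applied to the development-time samples from the earlier descriptive-statistics example. The two-sided p-value uses a plain normal approximation without tie or continuity corrections, which is a simplification of mine; a real analysis would use a statistics package such as R or SciPy.

```python
import math

def mann_whitney_u(a, b):
    """U statistic for sample `a` vs `b` and a two-sided p-value via the
    normal approximation (no tie/continuity correction -- rough for small
    samples, but enough to illustrate the idea)."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, "a") for v in a] + [(v, "b") for v in b])
    rank_sum_a = 0.0
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                    # tie group occupies ranks i+1 .. j
        avg_rank = (i + 1 + j) / 2.0  # average rank within the tie group
        rank_sum_a += sum(avg_rank for k in range(i, j) if pooled[k][1] == "a")
        i = j
    u = rank_sum_a - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    # two-sided p-value from the standard normal distribution
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p

u, p = mann_whitney_u([1, 2, 3, 4, 1000], [10, 20, 30, 40, 50])
print(u, round(p, 3))  # U = 5.0, p ~ 0.12 -> not significant at 0.05
```

With these skewed, high-variance samples the test does not reach the conventional 0.05 alpha level, which is exactly the "large deviation, small sample" situation the talk warns about: the absence of a significant result is no evidence that the effect does not exist.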
Organizational Patterns between Developers and Testers Investigating Testers’ Autonomy and Role Identity Michal Doležel1 and Michael Felderer2,3 1Department of Information Technologies, University of Economics, Prague, Czech Republic 2Institute of Computer Science, University of Innsbruck, Innsbruck, Austria 3Department of Software Engineering, Blekinge Institute of Technology, Karlskrona, Sweden Keywords: Software Testing, Agile Testing, Conflict, Profession, Role Identity, Social Identity Theory, Organizational Structure, Combined Software Engineering, Software Engineering Management. Abstract: This paper deals with organizational patterns (configurations, set-ups) between developers/programmers and testers. We firstly discuss the key differences between these two Information Systems Development (ISD) occupations. Highlighting the origin of inevitable disagreements between them, we reflect on the nature of the software testing field that currently undergoes an essential change under the increasing influence of agile ISD approaches and methods. We also deal with the ongoing professionalization of software testing. More specifically, we propose that the concept of role identity anchored in (social) identity theory can be applied to the profession of software testers, and their activities studied accordingly. Furthermore, we conceptualize three organizational patterns (i.e. isolated testers, embedded testers, and eradicated testers) based on our selective literature review of research and practice sources in Information Systems (IS) and Software Engineering (SE) disciplines. After summarizing the key industrial challenges of these patterns, we conclude the paper by calling for more research evidence that would demonstrate the viability of the recently introduced novel organizational models. 
We also argue that especially the organizational model of “combined software engineering”, where the roles of programmers and testers are reunited into a single role of “software engineer”, deserves a closer attention of IS and SE researchers in the future. 1 INTRODUCTION Information Systems Development (ISD) is a team activity. Not even mentioning varying needs and demanding expectations of future IS users, the nature of the interplay among programmers, testers, analysts, and other functional roles involved in the execution of ISD activities influences outcomes of ISD projects to a great extent (Walz et al., 1993). Agile ISD is seen as an important step towards improving this interplay by promoting values of flexibility, cooperation, learning, and leanness (Conboy, 2009). Compared to traditional (i.e. sequential or phase-oriented) ISD methodologies, the Agile ISD philosophy brings two principal changes regarding the “human element” in ISD. First, it assumes a less fragmented and little formalized distribution of responsibilities across different functional roles active in ISD, including software testers (Cohn, 2010). Second, it puts forward the view that software development is a complex socio-technical process understandable through studying people and their interactions (Balijepally et al., 2006). These two changes motivate our paper. Though systems/software testing is a vital part of ISD activities, it has received scant attention in previous Information Systems (IS) research (Hassan and Mathiassen, 2018). Similarly, Software Engineering (SE) research has focused on technical challenges of software testing, and remains mostly silent on the management ones (Garousi and Felderer, 2017). In particular, there is very little work available that discusses the evolving role of testers in agile teams from an organizational point of view based on social science theories. This perspective is what we aim for, but the present paper is just a first step in this direction. 
This paper concentrates on software testing personnel (testers, test engineers, test analysts etc.) and the changing nature of their role at the present day. Due to space constraints, however, we do not further expand on how the various sub-professions in software testing (e.g., test managers) are exactly impacted. We take the simplistic view that the software tester or test engineer is the one who carries out the majority of testing work in ISD. Our aim here is to review relevant Information Systems (IS) and Software Engineering (SE) literature, identify gaps in it, and prepare the grounds for presentation and interpretation of our results based on an ongoing research project. In so doing, this paper investigates the interconnected problems of testers’ role identity and organizational independence in ISD activities. To understand the problem comprehensively, we also use literature from Management and Organization Studies (MOS). This paper proceeds as follows. Section 2 lays conceptual foundations. Section 3 presents an overview of the organizational patterns suggested by us as distinct. Section 4 indicates the further direction of our research. Finally, Section 5 concludes the paper. 2 CONCEPTUAL BACKGROUND The points of departure of our paper are discussed in this section. Section 2.1 briefly highlights certain core features of Agile ISD approaches. Section 2.2 discusses differences between programmers and testers. Section 2.3 conceptualizes the problem of testers’ independence. Finally, Section 2.4 concludes by presenting some thoughts on the profession of software testers. 2.1 Agile ISD The popularity of agile IS development methods (Agile) has been steadily growing over the last decade (Conboy, 2009). More specifically, aside from small teams and start-up businesses, Agile is gradually penetrating traditional enterprises as well.
In some organizations, the observed growth of Agile development initiatives can be, at least partly, attributed to the general popularity of the concept. Managers and executives have always been paying attention to emerging management trends, and Agile ISD approaches may be seen as one of their present-day favourites (Cram and Newell, 2016). Cautiously stated, true efforts to introduce a revolutionary, people-centric management philosophy into the world of corporate organizing may drive the remaining efforts (Laloux, 2014). Considering the nature of the shift from traditional to Agile ISD methods, it is essential to recognize that agile software development consists of socio-technical activities. This understanding contrasts with the predominantly technical, mechanistic understanding of software processes at earlier times (Balijepally et al., 2006), even though the socio-technical nature of software processes obviously did not emerge overnight (Fuggetta and Nitto, 2014). Indeed, ISD personnel is at the centre of research on ISD and software processes nowadays. Another important change introduced by Agile directly influences the terminology adopted in this paper: instead of the previously common term “developer”, we use the less frequent term “programmer” to avoid confusion with the Agile terminology. Specifically, the concept of a cross-functional development team promoted by Agile has a significant organizational impact: “the [development] team needs to include everyone [e.g., programmers, testers, analysts, and business representatives] necessary to go from idea to implementation” (Cohn, 2010, p. 152). 2.2 Two Different Software Species The mind-set of programmers and testers is considered to be different (Pettichord, 2000). A tester is the one who empirically proves the system under test to investigate whether the system is able to stand its future operational mission.
Examining the system’s behaviour, he or she is driven by the ideal of protecting future end users and mitigating as many risks as possible. By breaking the system, the tester strives to improve it. By contrast, programmers are the builders. “[F]lexing their intellectual muscles” (Cohen et al., 2004, p. 78), they may look for creative, technically sound solutions irrespective of potential negative implications for end users. Differently put, while many programmers quite narrowly focus on technical aspects of ISD by prioritizing solution efficiency in technical terms, testers look primarily for solution effectiveness and fit-for-future-use. These differences are summarized in Table 1. Naturally, this table presents a sort of black/white, simplistic perspective.
<table>
<thead>
<tr> <th>Dimension</th> <th>Programmers</th> <th>Testers</th> </tr>
</thead>
<tbody>
<tr> <td>Work mindset</td> <td>Build</td> <td>Break</td> </tr>
<tr> <td>Key value</td> <td>Technical excellence</td> <td>Customer advocacy</td> </tr>
<tr> <td>Thinking focus</td> <td>Theory</td> <td>Practice</td> </tr>
<tr> <td>Project goal</td> <td>Efficiency</td> <td>Effectiveness</td> </tr>
<tr> <td>Job knowledge</td> <td>Depth</td> <td>Breadth</td> </tr>
</tbody>
</table>
Table 1: Characteristic differences between programmers and testers. Adapted from Zhang et al. (2018). It should come as no surprise that this dichotomy frequently results in conflicts between the two groups. Importantly, previous research indicates that this conflict is inherent to software development activities (Cohen et al., 2004). In principle, there are many sources of the conflict. From their rich research data, Zhang et al. (2014) identified five major categories, focusing on three common elements: process, people, and communication. Conflict management strategies differ accordingly (Cohen et al., 2004).
However, our knowledge that the conflict is inevitable is of little help in avoiding organizational misalignments between programmers and testers in real organizations (Onita and Dhaliwal, 2011). This also brings us to the point that the observed conflict has not been comprehensively studied in traditional ISD approaches with respect to the V-model. This ISD concept (Boehm, 1984) portrays segmented test levels (i.e., unit, integration, system, and user acceptance), and assumes different expectations and testing task distributions at each of the levels. But most importantly, the V-model indicates that there will be a specific dynamics between programmers and testers at each of the levels. Another viable approach might be to radically change the rules of the ISD game. Going this route, Agile practitioners argue that Agile ISD methods help to reduce the tensions between programmers and testers by redefining the role of testers (Cohn, 2010). Aside from other factors, this redefinition is associated with the elimination of testers’ independence from programmers. Yet another group of Agile practitioners calls for removing testers from ISD processes entirely (Anderson, 2003). In general, Agile ISD does not conceptualize software testing using the V-model, because all necessary test activities must be executed iteratively (Cohn, 2010). We discuss the mentioned organizational strategies in Section 3. Prior to doing so, we explain the historical role of testers’ independence in ISD, and the ongoing professionalization of software testing. 2.3 The Cost of Testers’ Independence In general, testers’ independence is codified by various practitioner sources by postulating a “common wisdom” that > A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. (ISTQB, 2011, p.
18)

The above thesis says that to prevent the “contamination” of their perspective, testers need to enjoy a certain level of organizational autonomy. Two examples follow. A high level of independence occurs when testers work in isolation and use formal ISD documentation as the primary source of information about the tested applications. In theory, such testers should be better prepared to discover programmers’ lapses. In practice, they may find themselves isolated and disconnected from project activities. In addition, such an organizational set-up may strengthen adversarial relationships between programmers and testers (Grechanik et al., 2010). An extreme case of testers’ independence typically occurs when contractual relationships are involved. First and foremost, an external test factory run by a third party may be contracted (Andrade et al., 2017). Second, as a tool of vendor management, a special client unit might be designated to perform quality verification of the vendor’s IS development and testing activities. In such a case, another psychological factor may drive the vendor personnel’s behaviour, namely the fear of defects that escaped detection at the vendor’s premises (Shah and Harrold, 2013). Despite the fact that some organizations have decided against an organizational set-up with a high level of independence for testers, practitioner literature frequently promotes it (McKay, 2007). In addition, the existence of an independent testing unit in an organization was previously institutionalized as an important criterion of test maturity; for example, it is suggested by a popular test management guideline (TMMi Foundation, 2012). ### 2.4 The Profession of Software Tester In a broad sense, professions are vocations that carry out professional activities in a given area of practice (Hughes, 1958).
The execution of professional activities might be conditioned by previous formal training or length of practice, or be entirely open to a loosely defined group of people who claim to belong among the professionals. The former two criteria apply, for example, to medical professions, whereas the latter applies to software programmers and testers. Aside from formal regulations that might be in place, many professions informally or semi-formally (e.g., through optional certifications) postulate certain behavioural norms that are then expected to be followed by the profession’s members. Using the language of the social sciences, this process is related to the social construction of the “self-identity” of professions. Through the formation of shared meanings, members of a profession gradually reach consensus on what the professional norms are. In the following, we use the term “professional identity” to label the distinct “goals, values, beliefs, norms, [and] interaction styles” (Ashforth, 2001, p. 6) settled in a profession. In young fields like software development, the above formative processes are naturally different from the processes in well-established, formally regulated professions like law or medicine (Evetts, 2014). For a number of years, testers were socialized in ISD environments where they were to become quality advocates (a rather soft version of the metaphor), quality gatekeepers (a mild version of the metaphor), or quality police/enforcers (an extreme version of the metaphor) (Charrett, 2012). Not long ago, managers of testing teams were instructed to build their unit as “The Perfect Beast” (McKay, 2007, n.p.), by metaphorically combining the qualities of several animal predators to fight software bugs (and possibly also programmers). And Software Quality Assurance departments, seen as quality watchdogs, were encouraged to “bite if necessary” (Chemuturi, 2011, p. 65).
Interestingly, people from industry routinely (but incorrectly) conflate the role of software quality assurance with that of software testing (Koch, 2000). Often mentioned during trainings, conferences, or in books, all these metaphors and labels may be seen as part of testers’ professional identity built through the past decades. The metaphors also relate in some way to the level of testers’ independence and their main mission as explicitly formulated or tacitly expected by an organization. Sadly enough, little guidance grounded in empirical research is available to the practitioners who struggle with whether one of the mentioned modes is fit for purpose in their company. Differently put, the role models that describe expected or ideal professional behaviour in software testing are often anecdotal, based on the personal experience of trainers, mentors, or various testing-school gurus. And as a profession, software testing relies heavily on personal experience, which is not always shared with a wider community (Beer and Ramler, 2008).

## 3 TYPICAL ORGANIZATIONAL CONFIGURATIONS

In this section, we review three typical organizational patterns that can be encountered in software companies and IT units/divisions nowadays. The organizational configuration implemented in a particular company results from perceptions held by the company regarding the role of software testers in the company’s ISD processes (Charrett, 2012). Note that we do not discuss various sourcing options (e.g., offshore, onshore, nearshore); rather, we focus on programmers and testers in terms of their standing and organizational relationships.

### 3.1 Traditional Testing: Testers as Gatekeepers

#### 3.1.1 Grounding

Originating in a late-1970s vision of Barry Boehm (1979), software testing has been traditionally perceived as a distinct, separate ISD phase.
The concept of separate test levels with dedicated testing responsibilities codified by the V-model (i.e., unit, integration, system, and user acceptance test levels) has traditionally been presented as an ideal of test maturity. According to the V-model, somewhat exaggeratedly put, the more test levels that exist in the organization and the more diverse the groups involved in software testing, the more mature a test process the organization exercises.

#### 3.1.2 Key Industrial Challenges

Though the dilemma “What level of independence should testers enjoy?” is one of the “test management classics” for ISD managers, little research effort has so far been devoted to exploring it scholarly (Garousi and Mäntylä, 2016; Sunyaev and Basten, 2015). Not surprisingly, the extreme cases in which testers and programmers are geographically separated with no mechanisms to facilitate effective communication between them are typically found to be dysfunctional (Grechanik et al., 2010). However, quite little is known about the real effects of having testers report to a manager who supervises both programmers and testers in a co-located environment. This single-point-of-responsibility configuration is established in many companies and supported by the way a typical ISD project is managed (Atkinson, 1999). Though the problem of the conflicting goals of the project manager or development manager is quite evident, there is no simple remedy. This case is represented by the full independence scenario (Figure 1, type A). Importantly, the character of the metaphorical wall (“Who reports to whom?”) and its “permeability” (“How do testers interact with programmers?”) should be of interest to further research efforts exploring this area. Similarly, the conflicting-goals dilemma should be explored from the viewpoint of software testers and their everyday activities.
### 3.2 Agile Testing: Removing the Wall

#### 3.2.1 Grounding

To solve the challenge of conflicting ISD priorities, Agile ISD approaches understand software testing as part of a whole-team responsibility (Cohn, 2010; Crispin and Gregory, 2009). SCRUM, a well-known agile framework, explicitly states that testers are an integral part of the development team. In other words, testers are “embedded” into the development team, and their responsibilities overlap to some extent with those of programmers (Figure 1, type B). Blending the responsibilities of programmers, testers, and analysts by creating a “cross-functional team”, SCRUM aims to remove unnecessary organizational boundaries.

> Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; ... (Scrum.org, n.d.)

In theory, the inherent conflict between programmers and testers should thus be solved. In practice, research shows that some testers still report problematic relationships with programmers (see below). This could be partly attributed to the fact that there is no single way of “doing Agile ISD”; the same label “Agile” may in reality represent quite diverse ISD strategies. Aside from pure Agile ISD approaches, many companies follow the way of tailoring, or even merely dabbling in, the original Agile ISD philosophy (Cram and Newell, 2016). The latter two approaches indicate that nowadays some companies tend to hybridize software processes rather than fundamentally transform their nature (Kuhrmann et al., 2017).

#### 3.2.2 Key Industrial Challenges

Practitioner literature suggests that moving from a waterfall to an agile environment may be a challenging task for testers (Crispin and Gregory, 2009). The main reason behind this challenge is the nature of the expected change in testers’ mind-set towards frequent direct communication and participative behaviours (Cohn, 2010).
The fact that transitioning to Agile does not assure the happiness of testers was explicated by Deak et al. (2016). From their work, however, it is not entirely clear why “more agile [than waterfall] testers were unhappy about their relationship with the developers”. Their research indicates that “removing the wall” might not be enough if subsequent coaching strategies for both programmers and testers are not implemented in order to increase the “group maturity” of the ISD team (Gren et al., 2017). Based on these challenges, the essence of future research efforts could lie in (i) understanding effective coaching mechanisms that help programmers and testers transition from waterfall to agile, and (ii) creating guidelines that help these groups work in hybrid environments where not all agile principles are applicable in a pure form.

### 3.3 Combined Software Engineering: Eradication of Testers

#### 3.3.1 Grounding

“Combined software engineering” is a term popularized by Microsoft (Dobrigkeit et al., 2016). The notion implies that there have traditionally been at least two broad functional responsibilities and career paths in ISD: programming and testing. (In this discussion, for the sake of simplicity, we ignore the distinct career paths of software architects/analysts... and development managers.) They may have been given titles such as “Software (Development) Engineer” and “Software (Development) Engineer in Test” (Page et al., 2009). Note that Google, among others, is known to differentiate more precisely between “Software Engineers in Test” and “Test Engineers” (see Whittaker et al., 2012). The original organizational situation of having some dedicated testing roles clearly differs from historically having no testers at all. The latter may be typical in smaller or (according to the traditional worldview) “less mature” companies (Prechelt, 2016).
Speaking about the former, some companies have recently introduced certain organizational changes in order to stop differentiating between their programmers and testers in terms of professional status. Simply put, these companies have combined the two previously independent ISD functions (Figure 1, type C). These organizational changes are implemented consistently with the companies’ hiring, firing, and compensation-and-benefits policies. Recently, the expert public paid quite a lot of attention to the case of Microsoft, in which testers played an important role (Thonangi, 2014).

#### 3.3.2 Key Industrial Challenges

The idea of “combined software engineering” is quite new and unproven. Though there are some interesting blog posts (e.g., Jensen, 2016), there is not a lot of information in the printed literature to date. We see two important goals on which further research should concentrate: (i) understanding the organizational enablers of combined software engineering models, and (ii) helping organizations solve possible side effects and people problems in the area of motivation when such a model is introduced. In our opinion, the first area can be elegantly studied using a cultural lens in order to understand the nuances of organizational life (Smircich, 1983). The latter calls for more research on the motivation of programming and testing specialists under the mentioned organizational conditions (Beecham et al., 2008; Deak et al., 2016).

## 4 RESEARCH DIRECTION AND DISCUSSION

In this section, we briefly explain our open-ended research idea. Our overall goal is to understand which of the configurations explained above real organizations use, and what the reasons behind their decisions are. By exploring this problem, we hope to provide a conceptual guideline to organizations transitioning from traditional ISD approaches to Agile ISD.
Specifically, we believe that this endeavour might help practitioners with forming and managing cross-functional agile teams in enterprise environments. Similarly, the new theory we aim to develop will hopefully contribute to a better theoretical understanding of this area. Overall, regarding the theorizing which follows, we are roughly guided by data from our ongoing research projects. It is our argument that the body of knowledge on role identity (RI) accumulated in MOS can help with further directing our research. The RI work is informed predominantly by concepts originating in social psychology and microsociology (or sociological social psychology), in particular by Social Identity Theory (SIT; see Tajfel and Turner, 1986) and Identity Theory (IDT; see Stryker and Burke, 2000), respectively.

> Role identities [or role-based identities] are socially constructed definitions of self-in-role (this is who a role occupant is). Role identities anchor or ground self-conceptions in social domains. To switch roles is to switch social identities. (Ashforth, 2001, p. 27)

A specific type of role identity is professional role identity (or simply professional identity), which is the term we introduced in Section 2.4 without providing much theoretical background. Differently from personal identity, social, role, and professional identities are based on group membership. With regard to the problem studied by the present paper, it is important to understand the concept of group very broadly. In our case, the first group (a macro group) might be that of the professional community of software testers (where it exists and its influence is salient). The second group (a micro group) is that of the particular ISD team where the tester works. An additional group membership (a meso group) may be introduced when the company centralizes software testing activities under the one tent of an enterprise test organization or test centre.
These centralized entities can form an additional organizational layer across the existing landscape of ISD teams previously constituted in the company (Doležel, 2017). And naturally, these two or three social group memberships can collide. We thus propose that one needs to understand not only the micro organizational context, but also the more abstract, high-level layers that contribute to the forming of professional identity (i.e., the meso and macro levels). In this sense, one needs to identify the influence of “institutional forces” (a term borrowed from sociology). This need stems from the fact that “professionals act as bridges between the institutional forces of their professions and their respective organizations” (Daudigeos, 2013, p. 725). Our key thesis is thus provocative. We argue that the macro social forces that drive the ongoing professionalization of software testers may significantly conflict with the core Agile principles implemented in a purist (i.e., crusader) way at the micro level. Crusader organizations are those “employing a highly prescriptive adoption of agile techniques, alongside an avoidance of traditional approaches entirely” (Cram and Newell, 2016, p. 9). By contrast, dedicated software testers typically work in traditional, larger, and “more mature” organizations. When such a traditional organization suddenly wants to become a crusader, software testers as a profession may feel jeopardized by the ideas presented by the Agile community, and react defensively. An excellent example supporting our view is presented by Larman and Vodde (2010) in their book. Book sections titled “Avoid... Test department”, “Avoid... TMM, TPI, and other [test] ‘maturity models’”, and “Avoid... ISTQB and other tester certification” (!) speak for themselves.
It does not seem exaggerated to expect that if their advice is followed literally by ISD managers, the relevant organizational changes must result in a professional identity crisis for the software testers working for those managers. Everything the testers learned in the past is gone, and the world is not the same as it was. A piece of anecdotal evidence describing testers’ reactions during a training run by Larman and Vodde is presented in the same source. Conversely, people in crusader organizations typically believe that Agile is “a better, more rational approach compared to traditional methods” (Cram and Newell, 2016, p. 6). In other words, the relevant socio-psychological forces are aligned with the organizational culture of such companies, and people happily work there while adoring the mentioned principles of agility. As indicated above, this balanced, positive state may differ significantly from a situation in which an organization had institutionalized different working patterns and interaction styles over past years and suddenly decided to change them overnight. In such cases, the psychological safety of members of certain professional groups may be significantly harmed. The previous work patterns and interaction styles are suddenly outdated, and the new ones are still to be created. More importantly, unless the impacted groups feel safe and comfortable with the new reality, the implemented change will not be successful (Burnes, 2004). Though the above ideas are mostly speculative, we present them in this paper because we believe they are quite important and promising for ISD practice. It is also our hope that they may guide further theory-building efforts carried out by both IS and SE researchers. Interestingly, though similar interests were originally inherent mostly to research in the sociology of professions, it is argued that this discipline gradually “came to an intellectual standstill” (Gorman and Sandefur, 2011, p. 290).
Instead of occupational sociologists, it is increasingly scholars who educate knowledge workers that dominate field research on “their” professions, focusing on the everyday activities of “their” professionals (ibid.).

## 5 CONCLUSION

This paper has discussed the topic of organizational patterns between programmers and testers, and how these patterns evolve in Agile ISD. Drawing on the problem of testers’ independence (Sunyaev and Basten, 2015), and extending the problem to the Agile world, the paper has summarized existing knowledge and indicated a further research direction. We have taken a critical stance to pinpoint some problems we see when “Agile ideals” are blindly followed and used as a rhetorical tool and salvation device (Case, 1999). Our position articulated in this paper is that previous IS and SE research indicates that programmers and testers form distinct groups with distinct social and professional identities. These groups execute their work activities in a different manner. We support the view that agile ISD must be philosophically based on a new set of fundamental principles (Cohn, 2010). However, based only on anecdotal evidence, we remain undecided whether a transformation to a “combined software engineering” model in a traditional company can be successful. If it can, it will be imperative to understand the situational factors that contribute to this success (Clarke and O’Connor, 2012). We indeed agree that ISD teams may function very well without dedicated testers (Prechelt, 2016). We, however, cautiously note that before going this route, a “traditional” company where software testers are a well-established profession should carefully analyse the impact of such a decision, and propose sound risk-mitigating strategies. History teaches us that simply jumping on the bandwagon of the latest management fad hardly ever paves the way towards the success of the intended change (Case, 1999).
We are eager to hear about further independent research efforts that probe into organizations that have deployed such models. We propose that the success or failure of the change initiative should be understood in terms of the following criteria (though not necessarily all adopted in a single case study): (i) objective, measurable quality metrics demonstrate that software quality is not degraded under the new conditions; (ii) fulfilling all tasks of the former ISD actors, the new ISD teams have a high level of group maturity (Gren et al., 2017); (iii) the previous programming and testing personnel are still motivated and happily working in the new setting (Deak et al., 2016), proud of their newly acquired identity. Based on anecdotal evidence and our own work-in-progress research data, we are concerned especially with regard to the last criterion. We have noticed that the effects on the morale and motivation of both of these unique software species seem to be devastating when the changes are introduced insensitively and people do not understand the reasons behind them (see also Jensen, 2016). Our research continues to concentrate mostly on the standing of testers in hybridized ISD settings (Cram and Newell, 2016; Kuhrmann et al., 2017). We work on an empirical study that uses the lens of SIT and IDT to understand the challenges introduced by blending the roles and responsibilities of programmers and testers in agile ISD processes. In parallel, we also explore the growing influence of DevOps on software testing concepts, as exemplified by the concept of “testing in production”.

## REFERENCES

Cohn, M. (2010), Succeeding with Agile, Addison Wesley, Upper Saddle River, NJ.

International Handbook of Research in Professional and Practice-Based Learning, pp. 29–57.

Page, A., Johnston, K. and Rollison, B. (2009), How We Test Software at Microsoft, Microsoft Press, Redmond, WA.
**Abstract**

GTE is the Command and Control contractor for the Domain Specific Software Architectures (DSSA) program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 application development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE’s approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated on the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team’s DSSA approach and then presents our work on automated support for message processing.

**The DSSA Concept**

DSSA is based on the concept of an accepted generic software architecture for the target domain. As defined by DSSA, a software architecture describes the topology of software components, specifies the component interfaces, and identifies computational models associated with those components. The architecture must apply to a wide range of systems in the chosen domain; thus it must be general and flexible. It must be established with the consensus of practitioners in the domain. Once an architecture is established, components that conform to the architecture—i.e., that implement elements of its functionality in conformance with its interfaces—will be acquired. They may be acquired by identifying and modifying (if required) existing components or by specifically creating them. One of the ways they may be created is through automated component generation.
DARPA has sponsored work in this area at the USC Information Sciences Institute -- the AP5 application generator project -- and is interested in incorporating this or related technology. The existence of a domain-specific architecture and a conformant component base will dictate a significantly different approach to software application development. The developer will not wait until detailed design or implementation to search for reuse opportunities; instead, he or she will be driven by the architecture throughout. The architecture and component base will help define requirements and allow construction of rapid prototypes. Design will use the architecture as a starting point. Design and development tools will be automated to “walk through” the architecture and assist the developer in the selection of appropriate components. The ultimate goal is to significantly automate the generation of applications. A major DSSA task is to define such a software lifecycle model and to prototype a supporting toolset. These activities will be accompanied by extensive interaction with the development community for the target domain, and by technology transition activities. One aspect of this is that each domain team is working closely with a DoD agency that carries out major developments in the designated area. The GTE team is working with the US Army Communications and Electronics Command.

**Why Command and Control?**

There are many reasons why the command and control domain is an excellent target for DSSA technology. It is a high-payoff area; command and control systems are needed even in the current military climate. (This is particularly true when one recognizes that applications such as drug interdiction fall within the C2 “umbrella”.) It is a well-understood area; most of the processing performed in C2 applications is not algorithmically complex.
However, C2 applications are very large, and much of this size comes from repeated similar processing -- for example, parsing hundreds of types of messages. In addition to this commonality within applications, there is much commonality across applications. Multiple C2 systems must handle the same message types, display the same kinds of world maps, etc. The kinds of commonality in C2 applications are very well suited to DSSA techniques. In some areas, components can be reused identically; these can be placed in the DSSA component base and highly optimized. In other areas, components will be very similar in nature but differ in the particulars, e.g., message parsing. These areas are a natural fit for the DSSA component generation technology, allowing a table-driven generator to quickly create the needed specific component instances.

**GTE's Approach**

Figure 1 illustrates GTE's overall approach to the DSSA program. Initially, project work will follow two parallel threads. The first will define a software process model appropriate to architecture-driven software development and will develop a toolset to support that process. The second will establish a capability that implements the process for the command and control domain, based on a C2 architecture and a set of reusable C2 components. The DSSA process model will address all aspects of the software life cycle. It will describe activities for establishing system requirements, developing the software system, and sustaining the system after delivery. The DSSA toolset will support all of these activities, automating them as far as possible. In particular, it will automate system development activities by using the architecture as a template, guiding the selection of available reusable components, and automating the generation of specific required components. The toolset will be constructed insofar as possible from available tools -- both commercial products and products of the research community.
In particular, it will make use of USC/ISI's AP5 application generator, DARPA/STARS reuse libraries, and DARPA/Prototech tools. Open tool interfaces will be emphasized to minimize specific tool dependencies, thus making the toolset usable in the widest range of environments.

**Figure 1. GTE's DSSA Approach**

Fundamental to the C2 DSSA capability is the development of a C2 software architecture. This starts with development of a multi-viewpoint domain model, created through interaction with all elements of the DoD C2 community. The automated Requirements Driven Development (RDD) methodology will be used in model creation. From this, an object-oriented software architecture will be developed. The architecture will tie back to the multi-viewpoint model so that mappings to different views of the domain functional decomposition are apparent. George Mason University’s Center for C3I will play a major part in this modeling and consensus-building activity. A base of components conforming to the architecture will then be developed. Many of these will be existing components, perhaps modified to fit the architecture. Others will be automatically generated using AP5. The DSSA capability will be demonstrated by development of a prototype C2 system, most likely an element of the Army Tactical Command and Control System (ATCCS). An independent metrics/validation task will assess the effectiveness of the approach and gather metrics. The methodology and toolset will be revised based on findings, and further necessary research will be identified. Throughout the program, a technology transfer task will present results in conferences, papers, seminars, and short courses. The George Mason University Center for C3I will serve as a focal point for technology transfer.
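The table-driven generation of similar components discussed earlier (message parsing being the canonical C2 example) can be sketched in miniature. The message types, field layouts, and function names below are our own invention for illustration; they do not reflect the actual DSSA toolset, AP5, or any real C2 message standard:

```python
# Illustrative sketch: field layouts for each message type are plain data
# ("tables"), and one generic routine is specialized from them, instead of
# hand-coding a separate parser for each of hundreds of message types.
from typing import Callable, Dict, List, Tuple

# Each entry: (field name, start column, end column, converter function)
FieldSpec = Tuple[str, int, int, Callable[[str], object]]

# Hypothetical message tables -- in a real system these would be derived
# from the governing message standard, not written by hand.
MESSAGE_TABLES: Dict[str, List[FieldSpec]] = {
    "TRACK": [("id", 0, 4, str), ("lat", 4, 10, float), ("lon", 10, 17, float)],
    "STATUS": [("unit", 0, 6, str), ("ready", 6, 7, lambda s: s == "Y")],
}

def make_parser(msg_type: str) -> Callable[[str], dict]:
    """Generate a parser for one message type from its table entry."""
    table = MESSAGE_TABLES[msg_type]
    def parse(raw: str) -> dict:
        # Slice out each field by column range and apply its converter.
        return {name: conv(raw[lo:hi].strip()) for name, lo, hi, conv in table}
    return parse

parse_track = make_parser("TRACK")
print(parse_track("T001 12.50 -71.30"))
```

The point of the sketch is that adding a new message type costs one table entry rather than one hand-written parser, which is the economy the table-driven approach aims at.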
**Application Generation: The Technology**

Application generators are tools that permit software developers to create software application programs in a much higher-level language tailored to the application domain. These programs are automatically translated by the application generator to a lower-level language, thus “generating applications.” This greatly reduces the effort required to create working applications, typically by at least an order of magnitude. The benefits are analogous to those achieved by moving from assembly language development to use of standard procedural languages such as FORTRAN, C, and Ada. Fourth Generation Languages (4GLs) are application generators for DBMS-oriented information system applications. Because 4GLs focus on a narrow class of applications, they can include very powerful constructs that allow software to be developed quickly and easily by those familiar with the application domain. Management Information System (MIS) developers using 4GLs achieve productivity improvements of as much as 50-100 times over traditional (usually COBOL) language users. Application generators can be (and have been) developed for other types of applications as well. They are best suited to narrow domains, or subdomains of large domains such as C2. Because they require a domain-specific vocabulary for expressing applications, they are generally unique to the domain or subdomain and not easily modified to handle other domains. Creation of an application generator for a particular domain, furthermore, is a significant undertaking. Development of an application generator is most appropriate in domains that are well-understood and in which many different developments perform primarily the same kinds of processing.

**The AP5 Approach**

USC Information Sciences Institute (ISI) has developed a capability (called AP5) that supports the development of application generators. AP5 is based on the concept of relational abstraction.
The application developer identifies abstract data objects and the logical relationship among them. Effectively, the developer has access to a “virtual database” expressed succinctly in terms of the known structure of the domain’s data model. Application behavior is then expressed in terms of these data objects, accessing them associatively via queries and modifying them based on values of other objects. This allows the user to concentrate on behavior rather than representation, and provides the power to express that behavior at a very high level. Providing an AP5 application generator for a particular subdomain requires the development of a domain-specific language for that domain. This is a relatively straightforward task because the language, regardless of domain, involves the same fairly simple set of relation-oriented constructs for expressing data relationships, validations, and actions. It is also a critical task, because the expressive capability of this language is what provides the application generator’s power. A translator is then developed to map the language to an underlying program generator, which produces executable procedural code. This is also not too complex, as all languages contain similar constructs. Most of the work is done by the underlying generator. (Currently the system generates LISP; an Ada generator is in development.) A drawback to many existing application generators is poor efficiency of the generated code. This has, in many cases, made these generators suitable only for developing prototypes. AP5 addresses this problem by allowing the user to specify annotations that provide guidance to the translator on desired implementations of specific operations. These annotations can be added incrementally while tuning to achieve desired performance. AP5 can play a key role in the C2 DSSA program. We anticipate that a number of C2 subdomains will be amenable to this approach. 
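As a rough illustration of this relational style, the following toy Python sketch shows behavior written against relations that are queried associatively rather than against concrete data structures. This is not AP5's actual interface (AP5 is LISP-based); the class, method, and relation names here are hypothetical.

```python
# Toy sketch of relational-abstraction style programming. This is NOT
# AP5's actual interface; all names are hypothetical illustrations.

class Relation:
    """A named relation over fixed attributes, queried associatively."""

    def __init__(self, *attrs):
        self.attrs = attrs
        self.rows = []

    def assert_fact(self, **values):
        # Add a tuple to the relation (the "virtual database").
        self.rows.append({a: values.get(a) for a in self.attrs})

    def query(self, **constraints):
        # Associative access: all rows whose fields match the constraints.
        return [r for r in self.rows
                if all(r[k] == v for k, v in constraints.items())]

# Abstract data objects for a toy track-reporting subdomain.
track = Relation("track_id", "kind", "location")
track.assert_fact(track_id="OR523A", kind="SHIP", location="37000N-012000W")
track.assert_fact(track_id="OR734", kind="SUB", location="33000N-010000W")

# Behavior is expressed over the relation, not over its representation.
ships = track.query(kind="SHIP")
```

In AP5, annotations would then guide how such relations and queries are actually implemented when procedural code is generated.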
By developing generators for those subdomains we can achieve two major advances in productivity: DSSA users can use the generators to create specific components in the subdomain with far less effort, and DSSA architects can use the generators to create reusable subsystems that can then form part of the component base available to DSSA users. We have already identified the message handling subdomain as a candidate for AP5 technology; a tentative choice for the next area to tackle is fusion processing. Figure 2 shows the activity flow that will be followed: identifying classes of components (subdomains) to be addressed, based on the architecture; defining domain-specific languages and producing generators; developing annotations to permit optimization; and generating reusable application components.

**C2 Message Handling**

As indicated in Figure 3, the message handling subsystem is one of the key interfaces between a C2 system and the "outside world". It provides a means of communicating information between different C2 systems and to/from other C2 resources (such as sensor and weapon installations). Messages may be text or bit streams; we will deal here with text messages. Some text messages are free-form, but most today follow standard prescribed formats; we will deal with formatted messages. C2 messages are created by humans (on the transmitting side of the interface) according to a written description of the formats. The receiving side parses the message (according to an encoded understanding of the standard format), validates it for correctness, and places the received information in the database for use by other parts of the system (for example, decision support). There are several standard families of messages, for example NATO and JINTACCS messages. Each of these can include several hundred message types; for example, there are approximately 300 NATO message types. (Many types of messages are shared by several message families.)
Message formats are described in massive documents using ad hoc, non-standard description methods. Typically the descriptions involve much prose. For example, Figure 4 shows the description for a single line in one type of message. Furthermore, it is not a complete description; many field descriptions cross-reference other descriptions. A message consists of a number of such lines (called datasets; a dataset may span more than one physical line) grouped together in an envelope (which contains from/to information, classification level, etc.). While each type of message can contain only certain kinds of datasets, many are optional and their order is generally not prescribed (though there are exceptions). Validity of datasets can depend on other datasets in the message. Each dataset contains a prescribed sequence of fields, separated by slashes, with a required order and a well-defined format. Field validity can depend on values in other fields of that dataset as well as in other datasets in the message. Figure 5 is an example message (excluding the envelope). The code required to implement message handling is extensive and error prone. Working from the prose specification, programmers write code to extract each field from each dataset, validate it according to the specified rules, translate it to the appropriate internal representation, build database update transactions, and write to the database. Typically, a single message type can take from 5,000 to 100,000 lines of HOL code. The Navy WWMCCS system uses approximately 4 million lines of code to implement 30 message types. Clearly this is a part of C2 system development that should be considered for automation.

Figure 2. DSSA Application Generation Activity Flow

Figure 3.
C2 System Operations

**Automating C2 Message Handling Using AP5**

To automate C2 message handling using AP5, we have developed a language specific to the message handling subdomain that provides constructs for specifying message formats, for indicating required validations, and for describing desired database updates.

**Specifying Message Formats**

Message formats are described in a simple set language that indicates which datasets are allowed and which are optional for a particular message type. For example,

\[ \text{type SPOT} = (\text{FORCE}), (\text{SHIPTK} \mid \text{AIRTK} \mid \text{AIRCRAFT}), \text{SHIP} \]

would indicate that a SPOT message consists of an optional FORCE dataset, an optional occurrence of one of the SHIPTK, AIRTK, or AIRCRAFT datasets, and a required SHIP dataset. Message format descriptions can be accompanied by validations that indicate which combinations of datasets are valid. For example,

\[ \text{type SPOT} = (\text{FORCE}), (\text{SHIPTK} \mid \text{AIRTK} \mid \text{AIRCRAFT}), \text{SHIP} \]

validations
- disallow MSGID.message-serial-number;
- require SHIP.location;
- no SHIPTK and no AIRTK requires FORCE;

indicates that the message-serial-number field of the MSGID dataset must not be present, the location field of the SHIP dataset must be present, and, if no SHIPTK dataset and no AIRTK dataset is present, the FORCE dataset must be present.

**Specifying Datasets**

Dataset formats are described in terms of the fields that make up the dataset and the format of each of those fields. Fields are ordered, so each dataset is characterized by a sequence of fields. Optional fields are indicated by parenthesizing them. Mutually exclusive fields are indicated by alternation bars (|). As for message formats, dataset descriptions can include validations.
For example, a dataset description of a MSGID dataset might be:

\[ \text{dataset MSGID} = \text{message-code-name} \ (\text{originator}) \ (\text{message-serial-number}) \ (\text{as-of-month}) \ (\text{as-of-year}) \ (\text{as-of-DTG}) \]

validations
- as-of-DTG precludes as-of-month;
- as-of-DTG precludes as-of-year;
- as-of-year requires as-of-month;
- message-code-name /= SPOT requires originator;
- message-serial-number and no as-of-DTG requires as-of-month;
- field message-code-name = A*26;

### Data Set ID: MSGID

<table>
<thead>
<tr><th>Fld</th><th>Element Descriptive Name</th><th>Format</th><th>M/O</th><th>Edit Rules</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>Message Code Name</td><td>25 AN</td><td>x</td><td>1. Must be a member of the approved set of message code words.</td></tr>
<tr><td>2</td><td>Originator</td><td>25 AN</td><td>x</td><td>1. Must be a plain language address or approved short title.</td></tr>
<tr><td>3</td><td>Message Serial Number</td><td>3 N</td><td>a</td><td>1. Positive integer between the values 001 to 999. 2. Out of sequence may indicate missing message. See rules for specific msg. code word.</td></tr>
<tr><td>4</td><td>As-of-Month</td><td>3 AN</td><td>a</td><td>1. Standard abbreviation for month message sent.</td></tr>
<tr><td>5</td><td>As-of-Year</td><td>4 N</td><td>o</td><td>1. May not be a future year.</td></tr>
</tbody>
</table>

Figure 4. Example Message Line Description

Figure 5.
Example Formatted Message

NATO UNCLASSIFIED
SIC: NSR
EXER /OPEN GATE 91/
MSGID /NAVSITREP/CINCIBERLANT/135/DEC/91/
PART /I/HOSTILE/
FORCE /OR523/3/37000N0-012000W3/145/17K/H/
SHIP /OR523A/KARA/-/CG/-/UR/
SHIP /OR523B/KRESTA/
SHIP /OR523C/KRESTA/
SUBTK /OR734/33000N6-010000W1/095/9K/M/
SUB /OR734/TANGO/
PART /II/UNKNOWN/NC/
PART /III/FRIENDLY/
FORCE /CTU 405.1.2/5/420015N2-1333440W8/175/20K/
FORCE /CTU 387.3.2/2/36010N0-004380W5/090/5K/
AMPN /MINE SWEEPING GROUP.../
AIRTK /934/33000N6-010000W1/
AMPN /ONE P-3 SEARCHING BOX.../

The C2 message description language also includes a means for describing the transactions to be carried out for each received message. An example of a segment of such a specification is:

\[ \text{insert msg-OriginSr (ORIGINATOR = PROSIGN.FN, MSG\_TYPE = MSGID.Code, MSG\_DTG = sortable\_date(ENVELOPE.DTG), CLASSIFY = classification\_code(ENVELOPE.Sec));} \]

The database update language also includes tests of field values, so that updates can be conditional on those values, and a capability to allow a sequence of updates to be named and reused in other update instructions. This simple language provides all the power needed to describe the database transactions resulting from received messages.

**Implications**

Clearly, automated generation of message handling software can save greatly on the labor involved in creating such software. A message handling subsystem that requires 4 million lines of HOL code should require less than 1% of that in the message description language. Perhaps more significantly, there will be little reason to write most of the code more than once. The code required to parse and validate a message of a particular type is not specific to the system being implemented. Once the message specification is developed in the message description language, it can be reused. Minor changes in the specification of required database updates can be easily implemented for individual systems.
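To make the flavor of the generated parsing and validation code concrete, here is a small Python sketch. It is not AP5 output; the parsing conventions (such as "-" marking an omitted field) and helper names are assumptions for illustration. It splits a slash-delimited MSGID dataset into named fields and applies the precludes/requires rules listed above.

```python
# Illustrative sketch only -- not AP5 or its generated code. Parsing
# conventions (e.g. '-' marking an omitted field) are assumptions.

MSGID_FIELDS = ["message-code-name", "originator", "message-serial-number",
                "as-of-month", "as-of-year", "as-of-DTG"]

def parse_dataset(line):
    """Split e.g. 'MSGID /NAVSITREP/CINCIBERLANT/135/DEC/91/' into fields."""
    parts = [p.strip() for p in line.strip().strip("/").split("/")]
    tag, values = parts[0], parts[1:]
    fields = {}
    for name, value in zip(MSGID_FIELDS, values):
        if value and value != "-":       # '-' marks an omitted field
            fields[name] = value
    return tag, fields

def validate_msgid(fields):
    """Apply the MSGID validations shown above; return a list of violations."""
    errors = []
    if "as-of-DTG" in fields and "as-of-month" in fields:
        errors.append("as-of-DTG precludes as-of-month")
    if "as-of-DTG" in fields and "as-of-year" in fields:
        errors.append("as-of-DTG precludes as-of-year")
    if "as-of-year" in fields and "as-of-month" not in fields:
        errors.append("as-of-year requires as-of-month")
    if fields.get("message-code-name") != "SPOT" and "originator" not in fields:
        errors.append("message-code-name /= SPOT requires originator")
    return errors

# The MSGID line from the example message above parses cleanly.
tag, fields = parse_dataset("MSGID /NAVSITREP/CINCIBERLANT/135/DEC/91/")
violations = validate_msgid(fields)
```

A table-driven generator would emit this kind of routine for each dataset type directly from the message description, rather than having programmers hand-code it from prose.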
An even more far-reaching impact of this work is the development of a precise, unambiguous way of describing message formats. Rather than the ad hoc prose descriptions now used in describing message formats, the message description language can be used directly. This will eliminate errors in understanding and correctly implementing message descriptions. This precise message description mechanism, along with the built-in incentive to reuse message description implementations, will contribute substantially to the development of more error-free message handling subsystems. A major aspect of this benefit is improved interoperability, as systems will no longer be dependent on the programmers' understanding of message formats. All implementations will share a common understanding and be able to interoperate with the full power and precision envisioned for formatted messages.

**Acknowledgment**

The work described in this paper has been supported by the Defense Advanced Research Projects Agency through U.S. Army Communications-Electronics Command Contract No. DAA807-92-C-Q502 and through NASA Ames Research Center Contract No. NCC 2-520.
Automating Compositional Verification

Dimitra Giannakopoulou, NASA Ames

collaborators
- Corina Păsăreanu (CMU / NASA Ames)
- and talented students / visitors:
  - Howard Barringer (Univ. of Manchester)
  - Colin Blundell (UPenn)
  - Jamieson Cobleigh (UMass, now MathWorks)
  - Michael Emmi (UCLA)
  - Mihaela Gheorgiu (Univ. of Toronto)
  - Chang-Seo Park (UC Berkeley)
  - Suzette Person (Univ. of Nebraska)
  - Rishabh Singh (MIT)

the state-explosion problem and compositional verification

does a system made up of $M_1$ and $M_2$ satisfy property $P$?
- checking $P$ on the entire system: too many states!
- use the system's natural decomposition into components to break up the verification task
- check components $M_1$ and $M_2$ in isolation: does $M_1$ satisfy $P$?

“when we try to pick out anything by itself, we find it hitched to everything else in the universe” (John Muir)

assume-guarantee reasoning introduces assumptions and reasons about triples: \( \langle A \rangle M \langle P \rangle \) is true if whenever $M$ is part of a system that satisfies $A$, then the system must also guarantee $P$

simplest assume-guarantee rule (ASYM):

1. \( \langle A \rangle M_1 \langle P \rangle \)
2. \( \langle true \rangle M_2 \langle A \rangle \)

\( \langle true \rangle M_1 \parallel M_2 \langle P \rangle \)

(premise 2 “discharges” the assumption)

how do we come up with the assumption?

given component $M$, property $P$, and the interface $\Sigma$ of $M$ with its environment, generate the **weakest** environment assumption $WA$ such that \( \langle WA \rangle M \langle P \rangle \) holds; weakest means that for all environments $E$: \( \langle true \rangle M \parallel E \langle P \rangle \) IFF \( \langle true \rangle E \langle WA \rangle \)

weakest assumption in AG reasoning

1. \( \langle A \rangle M_1 \langle P \rangle \)
2.
\( \langle true \rangle M_2 \langle A \rangle \)

\( \langle true \rangle M_1 \parallel M_2 \langle P \rangle \)

the weakest assumption makes the rule complete: for all \( E \), \( \langle true \rangle M \parallel E \langle P \rangle \) IFF \( \langle true \rangle E \langle WA \rangle \), so \( \langle true \rangle M_1 \parallel M_2 \langle P \rangle \) IFF \( \langle true \rangle M_2 \langle WA \rangle \)

in other words:
- \( \langle true \rangle M_2 \langle WA \rangle \) holds implies \( \langle true \rangle M_1 \parallel M_2 \langle P \rangle \) holds
- \( \langle true \rangle M_2 \langle WA \rangle \) does not hold implies \( \langle true \rangle M_1 \parallel M_2 \langle P \rangle \) does not hold

formalisms
- components modeled as **finite state machines** (FSM)
- FSMs assembled with parallel composition operator “||”
  - synchronizes shared actions, interleaves remaining actions
- a safety property P is an **FSM**
  - P describes all legal behaviors in terms of its alphabet
  - P\(_{err}\): the complement of P
  - determinize & complete P with an “error” state; bad behaviors lead to error
  - component M satisfies P iff the error state is unreachable in (M || P\(_{err}\))
- **assume-guarantee** reasoning
  - assumptions and guarantees are FSMs
  - \( \langle A \rangle M \langle P \rangle \) holds iff the error state is unreachable in (A || M || P\(_{err}\))

example: require in and out to alternate (property Order); state machines Input, Output, and Order\(_{err}\)

parallel composition

counterexample 1: \( (I_0, O_0) \) out \( (I_0, O_{\text{error}}) \)

counterexample
2: \( (I_0, O_0) \) in \( (I_1, O_1) \) send \( (I_2, O_1) \) out \( (I_2, O_0) \) out \( (I_2, O_{\text{error}}) \)

assume-guarantee reasoning (Input with an Assumption)

counterexample 1: \( (I_0, A_0, O_0) \) out \( \times \)
counterexample 2: \( (I_0, A_0, O_0) \) in \( (I_1, A_0, O_1) \) send \( (I_2, A_1, O_1) \) out \( (I_2, A_0, O_0) \) out \( \times \)

iterative solution + intermediate results

\(L^*\) learns an unknown regular language \(U\) (over alphabet \(\Sigma\)) and produces a minimal DFA \(A\) such that \(L(A) = U\) (\(L^*\) originally proposed by Angluin)

the \(L^*\) learner interacts with an oracle:
- queries: should word \(w\) be included in \(L(A)\)? (oracle answers yes / no)
- conjectures: here is an \(A\); is \(L(A) = U\)? (oracle answers yes, or no with a word \(w\) that should / should not be in \(L(A)\))

oracle for WA in assume-guarantee reasoning

1. \( \langle A \rangle M_1 \langle P \rangle \)
2. \( \langle true \rangle M_2 \langle A \rangle \)

\( \langle true \rangle M_1 \parallel M_2 \langle P \rangle \)

\( \langle WA \rangle M_1 \langle P \rangle \) holds
\( \langle true \rangle M_2 \langle WA \rangle \) holds implies \( \langle true \rangle M_1 \parallel M_2 \langle P \rangle \) holds
\( \langle true \rangle M_2 \langle WA \rangle \) does not hold implies \( \langle true \rangle M_1 \parallel M_2 \langle P \rangle \) does not hold

characteristics
- assumptions conjectured by \(L^*\) are not comparable semantically
- terminates with *minimal* automaton \(A\) for \(U\)
- generates DFA candidates \(A_i\): \(|A_1| < |A_2| < \ldots < |A|\)
- produces at most \(n\) candidates, where \(n = |A|\)
- # queries: \(O(kn^2 + n \log m)\), where \(m\) is the size of the largest counterexample and \(k\) is the size of the alphabet
- for assume-guarantee reasoning, may terminate early with a smaller assumption than the weakest

we check: \( \langle true \rangle \) Input || Output \( \langle \text{Order} \rangle \), with \(M_1 = \text{Input}\), \(M_2 = \text{Output}\), \(P = \text{Order}\); assumption alphabet: {send, out, ack}

queries: \(S\) = set of prefixes, \(E\) = set of suffixes

### Table \( T \)

<table>
<thead> <tr> <th>( S \cdot \Sigma )</th> <th>( E )</th> </tr> </thead> <tbody> <tr> <td>( \lambda )</td> <td>true</td> </tr> <tr> <td>out</td> <td>false</td> </tr> <tr> <td>ack</td> <td></td> </tr> <tr> <td>out, ack</td> <td></td> </tr> <tr> <td>out, out</td> <td></td> </tr> <tr> <td>out, send</td> <td></td> </tr> </tbody> </table> \[ S = \{ \lambda, \text{out} \} \] **Order** \[ \text{Input} \] \[ \text{Output} \] candidate construction \[ S = \text{set of prefixes} \] \[ E = \text{set of suffixes} \] **Table T** <table> <thead> <tr> <th>( S )</th> <th>( \lambda )</th> <th>true</th> </tr> </thead> <tbody> <tr> <td>out</td> <td>false</td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th>( S \cdot \Sigma )</th> <th>( \text{out, ack} )</th> <th>true</th> </tr> </thead> <tbody> <tr> <td></td> <td>( \text{true, false} )</td> <td></td> </tr> <tr> <td></td> <td>( \text{ack, send} )</td> <td>true</td> </tr> <tr> <td></td> <td>( \text{out, out} )</td> <td>false</td> </tr> <tr> <td></td> <td>( \text{out, send} )</td> <td>false</td> </tr> </tbody> </table> \[ \text{2 states – error state omitted} \] **Assumption A_1** \[ \text{counterexamples add to } S \] conjectures Oracle 1: \(\langle A_1 \rangle\) Input \langle Order \rangle Order_{err} Counterexample: \(c = \langle \text{in,send,ack,in} \rangle\) Oracle 1: \(\langle A_2 \rangle\) Input \langle Order \rangle Oracle 2: \(\langle \text{true} \rangle\) Output \langle A_2 \rangle property \text{Order} holds on Input \| Output more than 2 components [TACAS03, FMSD09] symmetric rules: motivation \[ M_1 = \text{Input}, \ M_2 = \text{Output}, \ P = \text{Order} \] \[ A_1: \quad \text{ack, send} \] \[ \text{Order}_{err} \] \[ M_2 = \text{Output}, \ M_1 = \text{Input}, \ P = \text{Order} \] \[ A_1: \quad \text{ack, in} \] \[ A_2: \quad \text{in, send} \] \[ A_4: \quad \text{ack, out, send} \] symmetric learning framework [SAVCBS05] \[ \begin{align*} L^* & \Rightarrow A_1 \Rightarrow \langle A_1 \rangle M_1 \langle P 
\rangle \\ L^* & \Rightarrow A_2 \Rightarrow \langle A_2 \rangle M_2 \langle P \rangle \\ L(\text{co}A_1 \parallel \text{co}A_2) & \subseteq L(P) \\ \text{counterexample analysis} & \\ P \text{ holds in } M_1 \parallel M_2 & \\ P \text{ violated in } M_1 \parallel M_2 \end{align*} \] - beyond syntactic interfaces - *(open file before close)* - document implicit assumptions - **safe:** accept NO illegal sequence of calls - **permissive:** accept ALL legal sequences of calls - **safe & permissive interface = weakest assumption** should word \( w \) be included in \( L(A) \)? yes / no is \( A \) safe and permissive? yes! no: word \( w \) should (not) be in \( L(A) \) checkSafe(interface A, FSM M) checkPermissive(interface A, FSM M) if M is non-deterministic, permissiveness check requires subset construction permissiveness heuristics [FASE 2009] \[(A_{\text{err}} \parallel M)\] - model check for (err, ok) - reached (err, ok) by “p” query “p” - no (“p” should not be in A) - backtrack & continue search… resolves non-determinism dynamically & selectively; remember, it’s a heuristic JavaPathfinder UML statecharts assume-guarantee reasoning interface generation / discharge jpf-cv http://babelfish.arc.nasa.gov/trac/jpf infinite components [CAV 2010] - use predicate abstraction (e.g., \(x \geq 0\), \(x < 0\)) - generate may and must abstraction an interface **safe** w.r.t. \(C_{\text{may}}\) and **permissive** w.r.t. \(C_{\text{must}}\) **is safe** and **permissive** w.r.t. concrete component \(C\) 1. if `checkSafe(\sigma, C^{\text{must}})` \(!=\) null 2. \quad return “no” 3. \(cex = checkSafe(\sigma, C^{\text{may}})\) 4. if `cex` == null 5. \quad return “yes” 6. \(\text{Preds} = \text{Preds} \cup \text{Refine}(cex)\) 7. `Query(\sigma, C)` *If concrete component is deterministic, so is the must abstraction…* *ARMC model checker: Java2SDK library classes, OpenSSL, NASA CEV model* related work - learning with alphabet refinement (TACAS 2007; also Chaki et al.) 
- learning assumptions for interface automata (FM 2008) - assume-guarantee abstraction refinement (CAV 2008) - compositional verification in symbolic setting (Alur et al. 05) - minimal assumptions as separating automata for languages $L(M_2)$ and $L(M_1) \cap L(coP)$ (Gupta et al. 07, Chen et al. 09) - learning omega-regular languages for liveness (Farzan et al. 08) - learning non-deterministic automata (Bollig et al. 09) - learning Boolean functions (Chen et al. 10) - assumption generation in probabilistic setting (Feng et al. 10) summary and food for thought... - techniques are generic - better applied at design level - not a panacea... - perform well when alphabets & assumptions are small - what makes a system amenable to compositional techniques? - design for compositional verification; combine with other design approaches - how can we make it practical for real systems? what types of interfaces are useful in practice? - discovering good system decompositions - liveness, timed & probabilistic systems, non functional properties - multi core / parallelization? thank you! invoke a model checker **within** a model checker? permissiveness check MC: model check for \((M_i, A_{\text{error}})\) reached \((\text{err, ok})\) by trace \(t\) if \((\text{memoized}(t) == \text{no})\) // \(t\) is spurious backtrack and continue search else // \(\text{memoized}(t) == \text{yes}\) or \(t\) not in memoized model checker produces \(t\) if \((\text{query}(t) == \text{yes})\) return \(t\) to \(L^*\) // not permissive else restart at MC 1. $\text{cex} = \text{checkSafe}(A, C^{\text{may}})$ 2. if $\text{cex} == \text{null}$ 3. invoke Oracle2 4. If $\text{Query}(\text{cex, C}) == \text{“false”}$ 5. return $\text{cex}$ to $L^*$ 6. else 7. goto 1 1. cex = checkPermissive(A, C^{must}) 2. if cex == null 3. return A 4. If Query(cex, C) == "true" 5. return cex to L* 6. else 7. 
goto 1

example 1: Mars Exploration Rover
- tools: LTSA, SPIN
- model derived from JPL’s Mars Exploration Rover (MER) Resource Arbiter
- local management of resource contention between resource consumers (e.g. science instruments, communication systems)
- consists of \( k \) user threads and one server thread (arbiter)
- checked mutual exclusion between resources (e.g. driving while capturing a camera image is disallowed, as the two are incompatible)
- compositional verification scaled to >5 users, whereas monolithic verification ran out of memory [SPIN ’06]

example 2: autonomous rendezvous & docking
- **tool**: LTSA
- consists of control software, state estimator, and 4 types of sensors
- input provided as UML state-charts, properties of type:
- “you need at least two operational sensors to proceed to next mode”
- 3 bugs detected
- **scaling achieved with compositional verification:**
- monolithic verification runs out of memory after > 13M states
- compositional verification terminates successfully in seconds; the largest state-space explored is less than 60K states, as opposed to > 13M

example 3: K9 Rover Executive
- tools: LTSA, JavaPathfinder
- model of NASA Ames K9 Rover Executive
- executes flexible plans for autonomy
- consists of Executive thread and ExecCondChecker thread for monitoring state conditions
- checked for specific shared variable: if the Executive reads its value, the ExecCondChecker should not read the variable before the Executive clears it
- generated assumption of 6 states for model in LTSA [TACAS 2003]
- used generated assumption to check 8K lines of Java code translated from 10K lines of C++ code using the JavaPathfinder model checker [ICSE 2004]
- reduced memory used by JavaPathfinder > 3 times
- used generated assumption to perform assume-guarantee testing of C++ code using the Eagle runtime monitoring framework [SAVCBS 2005, IET Software 2009]
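The checkSafe building block used throughout these slides is, at its core, a reachability check on the product of the component with the error-completed interface automaton. A minimal sketch in Python (the dictionary-based FSM encoding and the function name are my own, not the LTSA or jpf-cv implementation):

```python
from collections import deque

def check_safe(m_trans, m_init, a_trans, a_init):
    """Reachability check behind checkSafe: can component M drive the
    interface automaton A into its implicit error state?

    m_trans: (state, action) -> set of successor states (M may be
    non-deterministic).  a_trans: (state, action) -> successor state
    (A is deterministic); any action A has no transition for leads to
    the implicit error state.  Returns a counterexample trace, or
    None if A is safe for M."""
    seen = {(m_init, a_init)}
    queue = deque([(m_init, a_init, ())])
    while queue:
        ms, as_, trace = queue.popleft()
        for (s, act), succs in m_trans.items():
            if s != ms:
                continue
            if (as_, act) not in a_trans:   # A rejects: error reached
                return trace + (act,)
            na = a_trans[(as_, act)]
            for nm in succs:
                if (nm, na) not in seen:
                    seen.add((nm, na))
                    queue.append((nm, na, trace + (act,)))
    return None

# Toy component that may 'close' before 'open'; the interface demands
# strict open/close alternation starting with 'open'.
m = {("s0", "open"): {"s1"}, ("s0", "close"): {"s0"}, ("s1", "close"): {"s0"}}
a = {("u0", "open"): "u1", ("u1", "close"): "u0"}
assert check_safe(m, "s0", a, "u0") == ("close",)
```

Permissiveness is the dual direction (every error-free behavior of the component must be accepted by the interface), which is why a non-deterministic component forces the subset construction mentioned above.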
Source: http://fm.csl.sri.com/SSFT11/DimitraFMschoolMay2011.pdf
As promised in the previous lecture, we will now prove the Cook-Levin theorem, which is restated below.

**Theorem 1 (Cook-Levin)** 3-sat is \( NP \)-complete.

To prove this theorem, we will reduce \( \text{TM-sat} \) to \( 3\text{-sat} \). However, this is difficult to do directly, so we will introduce an intermediate language, called \( \text{Circuit-sat} \). This requires us to first define the notion of a boolean circuit.

**Definition 2** A boolean circuit is a directed acyclic graph with the following properties:

1. It has \( n \) sources \( \{u_1, \ldots, u_n\} \) and one sink, output, with \( d_{\text{in}}(\text{output}) = 1 \).
2. All the vertices that are not sources or sinks are called gates. Each gate is labeled with a symbol from \( \{\land, \lor, \neg\} \).
3. Every gate \( v \) satisfies \( d_{\text{out}}(v) = 1 \).
4. If a gate \( v \) is labeled with \( \land \) or \( \lor \), then \( d_{\text{in}}(v) = 2 \). If \( v \) is labeled with \( \neg \), then \( d_{\text{in}}(v) = 1 \).

A boolean circuit \( c \) with \( n \) sources computes a boolean function in the following natural way. If values \( (b_1, \ldots, b_n) \in \{0, 1\}^n \) are provided, label each edge with a value from \( \{0, 1\} \) so that (1) every edge emerging from the source \( u_i \) is labeled with \( b_i \) and (2) the edge emerging from each gate is labeled with the result of applying that gate's boolean operator to the labels of the edges entering the gate. Then the label of the edge leading to output is the output of the boolean function; we will denote this \( c(b_1, \ldots, b_n) \). It is left as an exercise to show that this function is well-defined.

**Definition 3** The size of a boolean circuit is the number of gates in that circuit.

**Definition 4** The depth of a boolean circuit is the length of a longest path in that circuit.

Having described boolean circuits, we can now define an associated \( NP \)-complete language, \( \text{Circuit-sat} \).
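The edge-labeling evaluation described above is easy to mechanize by visiting the DAG from the output back toward the sources; a small sketch (the representation of gates as a dictionary is mine, chosen for brevity):

```python
def eval_circuit(gates, output, bits):
    """Evaluate a boolean circuit given as a DAG, mirroring the
    edge-labeling procedure: each gate's outgoing edge is labeled with
    its operator applied to the labels of its incoming edges.

    gates: name -> (op, inputs), op in {"and", "or", "not"}; an input
    reference is ("u", i) for source u_i or ("g", name) for a gate."""
    memo = {}

    def label(ref):                  # label of the edge leaving `ref`
        kind, x = ref
        if kind == "u":
            return bits[x]
        if x not in memo:
            op, ins = gates[x]
            vals = [label(r) for r in ins]
            memo[x] = (all(vals) if op == "and" else
                       any(vals) if op == "or" else
                       not vals[0])
        return memo[x]

    return label(output)

# c(u_1, u_2) = (u_1 AND u_2) OR (NOT u_1)
gates = {"g1": ("and", [("u", 0), ("u", 1)]),
         "g2": ("not", [("u", 0)]),
         "g3": ("or",  [("g", "g1"), ("g", "g2")])}
assert eval_circuit(gates, ("g", "g3"), [True, False]) is False
```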
**Definition 5** The language \( \text{Circuit-sat} \) is defined as follows:
\[ \text{Circuit-sat} = \{ \varphi : \varphi \text{ is a satisfiable boolean circuit} \}. \]

**Proposition 6** \( \text{Circuit-sat} \in NP \).

**Proof** If \( \varphi \) is a satisfiable boolean circuit, any satisfying assignment serves as a witness to this fact. The satisfying assignment has size equal to the number of inputs of \( \varphi \), which is polynomial in the size of \( \varphi \) (since a boolean circuit with \( n \) inputs has at least \( n - 1 \) gates). Verifying that the assignment satisfies \( \varphi \) takes time polynomial in the size of \( \varphi \). ■

The following result showcases the importance of boolean circuits, and will be used in the proof of the Cook-Levin theorem.

**Theorem 7 (Universality of circuits)** For every boolean function \( f : \{0, 1\}^\ell \rightarrow \{0, 1\} \), there is an \( \ell\text{-CNF} \) formula \( \varphi \) of size \( O(\ell \cdot 2^\ell) \) such that \( \varphi(u) = f(u) \) for all \( u \in \{0, 1\}^\ell \).

**Proof.** Consider \( S = \{u : f(u) = 0\} \). For each \( u \in S \) write the clause \( C_u \) such that \( C_u(u) = 0 \) and \( C_u(v) = 1 \) for every \( v \neq u \). Let \( \varphi = \bigwedge_{u \in S} C_u \). Then \( \varphi \) computes \( f \) and is of the right size. ■

We are now ready to begin the proof of the Cook-Levin theorem. As noted above, there are two main lemmas.

**Lemma 8** \( \text{TM-SAT} \leq_P \text{CIRCUIT-SAT} \) (and therefore \( \text{CIRCUIT-SAT} \) is \( NP \)-complete).

**Proof.** We first show the following fact: given any Turing machine \( M \) that runs in time \( T(n) \) (where \( n \) is the size of the input), there exists an \( O(T(n)^2) \)-size family of circuits \( \{c_n\}_{n \in \mathbb{N}} \) such that for all \( n \in \mathbb{N} \) and \( x \in \{0, 1\}^n \), \( c_n(x) = M(x) \). Furthermore, if \( T(n) \) is polynomial in \( n \), then \( \{c_n\}_{n \in \mathbb{N}} \) can be constructed in polynomial time.
First, simulate \( M \) by an oblivious Turing machine \( M' \) that runs in \( O(T(n)^2) \) time (this is proved possible in Arora and Barak). On input \( x \) to \( M' \), suppose that \( M' \) halts in time \( t \). Then let \( z_1, \ldots, z_t \) be “local snapshots” of the computation of \( M'(x) \); that is, \( z_i = (s_i, h_i) \), where \( s_i \) is the state of \( M' \) at time \( i \) and \( h_i \) is the symbol read by the head at time \( i \). Represent \( z_i \) by a fixed-length binary string. Now, because \( M' \) is oblivious, \( z_i \) is a boolean function of \( z_{i-1} \) and \( z_j \), where \( j \) is the most recent time before \( i \) that \( M' \) visited the current location on the tape. In particular, note that the number of bits required to determine \( z_i \) is a fixed number that depends only on \( M' \) (not \( x \)). By the universality of circuits, this means that \( z_i \) can be computed by a circuit of size \( O(1) \). By appropriately “composing” these circuits for all \( i \in \{1, \ldots, t\} \), \( z_t \) can be computed by a circuit of size \( O(t) = O(T(n)^2) \). Furthermore, by following the above procedure, this circuit can be constructed in time polynomial in \( n \).

Having proved the fact, we now apply it to reduce \( \text{TM-SAT} \leq_P \text{CIRCUIT-SAT} \). Let \( (\alpha, x, 1^n, 1^t) \) be given. Let \( M \) be the Turing machine that, given \( y \), computes \( M_\alpha(x, y) \) for \( t \) steps and accepts if and only if \( M_\alpha \) accepted. Construct the circuit family \( \{c_n\}_{n \in \mathbb{N}} \) corresponding to \( M \) using the fact. Then the size of \( c_n \) is \( O(t^2) \), and \( c_n \) is satisfiable if and only if \( (\alpha, x, 1^n, 1^t) \in \text{TM-SAT} \), so we have reduced \( \text{TM-SAT} \leq_P \text{CIRCUIT-SAT} \). Because we previously showed (Proposition 6) that \( \text{CIRCUIT-SAT} \) was in \( NP \), and we have now proved it to be \( NP \)-hard, we also see that \( \text{CIRCUIT-SAT} \) is \( NP \)-complete.
■

**Lemma 9** \( \text{CIRCUIT-SAT} \leq_P \text{3-SAT} \).

**Proof.** Given a circuit \( \varphi \) with input variables \( u_1, \ldots, u_n \), for each \( \lor \) or \( \land \) gate \( g_i \in \varphi \) introduce a new variable \( v_{g_i} \) that represents the output of that gate. For each such gate, the inputs are literals of the form \( u_i \), \( \neg u_i \), \( v_{g_j} \), or \( \neg v_{g_j} \) (negation gates are absorbed into the literals). If \( v_{g_k} \) is a \( \land \) gate with inputs \( w_i \) and \( w_j \) (of the above form), let
\[ C_k = (\neg v_{g_k} \lor w_i \lor w_j) \land (\neg v_{g_k} \lor w_i \lor \neg w_j) \land (\neg v_{g_k} \lor \neg w_i \lor w_j) \land (v_{g_k} \lor \neg w_i \lor \neg w_j). \]
If \( v_{g_k} \) is a \( \lor \) gate, let
\[ C_k = (\neg v_{g_k} \lor w_i \lor w_j) \land (v_{g_k} \lor w_i \lor \neg w_j) \land (v_{g_k} \lor \neg w_i \lor w_j) \land (v_{g_k} \lor \neg w_i \lor \neg w_j). \]
Then let \( \psi = v_{g_{\text{out}}} \land \bigwedge_k C_k \), where \( g_{\text{out}} \) is the gate feeding the output (the unit clause can be padded to three literals by repeating the literal). Observe that \( \psi \) is satisfiable if and only if \( \varphi \) is. This is the desired reduction. ■

From these lemmas, it is now easy to assemble the

**Proof of the Cook-Levin theorem:** By Lemma 8, \( \text{TM-SAT} \leq_P \text{CIRCUIT-SAT} \). By Lemma 9, \( \text{CIRCUIT-SAT} \leq_P \text{3-SAT} \). Hence \( \text{TM-SAT} \leq_P \text{3-SAT} \); this means that \( 3\text{-SAT} \) is \( NP \)-hard. It remains only to show that \( \text{3-SAT} \) is in \( NP \). This is easy because if \( \varphi \) is a \( 3\text{-CNF} \) formula, then a satisfying assignment is a polynomial-size witness to the satisfiability of \( \varphi \). Combining these facts, \( 3\text{-SAT} \) is \( NP \)-complete. ■

**2. Why do we study 3-SAT and 3-CNF formulas?**

For one, although TM-SAT is an important problem in that we used it to establish the existence of NP-complete problems, it is less accessible in that it is intimately tied to Turing Machines. 3-SAT, however, is an easier problem to work with, as it is divorced from Turing Machines, asking only about the satisfiability of certain boolean expressions.
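The gate clauses of Lemma 9 are mechanical to generate, and their correctness (each block \( C_k \) forces \( v_{g_k} \) to equal the gate's value) can be checked by brute force. A sketch using DIMACS-style signed-integer literals (an encoding choice of mine, not from the notes):

```python
def gate_clauses(kind, v, wi, wj):
    """The 3-CNF blocks C_k from Lemma 9, forcing variable v to equal
    (wi AND wj) or (wi OR wj).  Literals are signed ints: +k is
    variable k, -k is its negation."""
    if kind == "and":
        return [[-v, wi, wj], [-v, wi, -wj], [-v, -wi, wj], [v, -wi, -wj]]
    return [[-v, wi, wj], [v, wi, -wj], [v, -wi, wj], [v, -wi, -wj]]

def satisfied(clauses, assign):
    """assign: variable -> bool; literal +k is true iff assign[k]."""
    return all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

# One-gate circuit out = u1 AND u2: gate clauses plus the (padded) unit
# clause asserting the output variable (3) is true.
clauses = gate_clauses("and", 3, 1, 2) + [[3, 3, 3]]
models = [(a, b, c)
          for a in (False, True) for b in (False, True)
          for c in (False, True)
          if satisfied(clauses, {1: a, 2: b, 3: c})]
assert models == [(True, True, True)]
```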
This makes it useful for reductions. Indeed, SAT and/or 3-SAT are the initial problems from which a myriad of other important problems are shown to be NP-complete. See figure 2.4 on page 51 of Arora and Barak. Additionally, 3-SAT is an important problem in that CNF expressions are already intensively studied in logic and have been for a long time.

Now we give an example of a language which is not hard to see is NP-complete, having established 3-SAT as NP-complete. A 0/1 integer program is a system of m linear inequalities in n variables with rational coefficients. We seek an assignment of either 0 or 1 to the variables that satisfies all the inequalities. We define a language “0/1 int prog” to be those systems which have a solution in 0 and 1.

**Theorem:** 0/1 Integer Programming is NP-complete.

**Proof:** First we observe that 0/1 int prog ∈ NP. This is easy to see. The witness is the satisfying 0/1 assignment, which is a sequence of n bits. This is clearly polynomial in the size of the input. Checking the witness involves checking m linear inequalities, which can be done in time poly(n, m).

Next, we want to show that 3-SAT \( \leq_P \) 0/1 int prog. This will show that 0/1 int prog is NP-hard. So suppose we are given a 3-CNF formula \( \psi \). To each clause of \( \psi \) we associate an inequality. This is best illustrated by example: \( u_i \lor u_j \lor \overline{u_k} \rightarrow [u_i + u_j + (1 - u_k) \geq 1] \). In general, if \( u_i \) appears in a clause we include the term \( u_i \) in the corresponding inequality, and if \( \overline{u_i} \) appears we include the term \( (1 - u_i) \). Since the clause is a disjunction, we need only one literal to be true, and indeed \( u_i \geq 1 \) exactly when \( u_i = 1 \), while \( (1 - u_i) \geq 1 \) exactly when \( u_i = 0 \). So the resulting system of m inequalities (where m is the number of clauses in our 3-CNF expression) is solvable over 0 and 1 if and only if the 3-CNF expression is satisfiable. ■
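The clause-to-inequality translation in the proof can be written out directly; the following sketch (my own representation of clauses as (variable, negated) pairs) builds the inequality and checks feasibility of a 0/1 assignment:

```python
def clause_to_inequality(clause):
    """Translate one 3-SAT clause into the inequality of the proof:
    sum of terms >= 1, with term u_v for a positive literal and
    (1 - u_v) for a negated one.  A clause is a list of
    (variable, negated) pairs; returns (coeffs, constant), meaning
    sum(coeffs[v] * u_v) + constant >= 1."""
    coeffs, constant = {}, 0
    for var, negated in clause:
        if negated:                       # contributes (1 - u_var)
            coeffs[var] = coeffs.get(var, 0) - 1
            constant += 1
        else:                             # contributes u_var
            coeffs[var] = coeffs.get(var, 0) + 1
    return coeffs, constant

def feasible(inequalities, assign):
    """Check a 0/1 assignment against every inequality."""
    return all(sum(c * assign[v] for v, c in coeffs.items()) + const >= 1
               for coeffs, const in inequalities)

# (u1 OR u2 OR NOT u3)  ->  u1 + u2 + (1 - u3) >= 1
ineqs = [clause_to_inequality([("u1", False), ("u2", False), ("u3", True)])]
assert feasible(ineqs, {"u1": 0, "u2": 0, "u3": 0})        # 1 - u3 = 1
assert not feasible(ineqs, {"u1": 0, "u2": 0, "u3": 1})    # every term is 0
```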
It is also worth noting that the language int-prog (general integer programming) is NP-complete. Since 0/1 int prog is a special case, it is clear that int-prog is NP-hard. To see that int-prog ∈ NP takes a little more work: it is not as obvious that the witness (the satisfying integer values of the n variables) must be poly(n, m) in size, but this turns out to be true.

From studying the many interesting problems in NP that turn out to be NP-complete, one might think that any problem in NP that is not in P must be NP-complete. This turns out not to be true (if P ≠ NP):

**Theorem (Ladner):** If P ≠ NP then there exists \( L \in NP \) such that \( L \notin P \) and \( L \) is not NP-complete.

There are two natural and important languages in NP that are conjectured to be of this type (not in P but not NP-complete). Note that they are of course not known to be of this type, as this would imply P ≠ NP.

- Graph-Isomorphism
- Factoring.

Note that given n, determining the factorization of n is not a decision problem. However, we can pose the following decision problem: given n and bounds a and b, does n have an integer factor between a and b? If this decision problem can be computed in time \( T(n) \), then employing a binary search, a factor of n can be found in roughly \( T(n) \log(n) \) steps; so if the decision problem were in P, factoring itself could be done in polynomial time, which is considered unlikely. On the other hand, subexponential factoring algorithms have been discovered, suggesting that this problem might be “easier” than NP-complete problems.

Next we discuss another important complexity class, co-NP. First, we define the complement of a language. If \( L \subseteq \{0,1\}^* \) then we define \( \overline{L} = \{0,1\}^* \setminus L \).

Definition: $\text{co-NP} = \{ L : \overline{L} \in \text{NP} \}$.

For example, the language of CNF formulas which are not satisfiable is in co-NP. As an exercise, one can show that the following alternative definition is equivalent.

Definition: An equivalent definition of co-NP is as follows.
$L \subseteq \{0,1\}^*$ is in co-NP if there exists a polynomial $p : \mathbb{N} \rightarrow \mathbb{N}$ and a polynomial time TM $M$ such that for all $x \in \{0,1\}^*$ we have:

$$x \in L \iff \forall y \in \{0,1\}^{p(|x|)}, M(x,y) = 1.$$

It is not hard to see that $\text{P} \subseteq \text{co-NP}$. However, it is unresolved if $\text{NP} = \text{co-NP}$, but this is conjectured not to be the case.

Theorem (Time Hierarchy): Let $f : \mathbb{N} \rightarrow \mathbb{N}$ be a time-constructable function. Then $\text{DTIME}(f(n)) \subsetneq \text{DTIME}(\omega(f(n) \log f(n)))$; i.e., the containment is strict.

This is a powerful theorem. For example, it shows that the hierarchy within $\text{P} = \bigcup_{c \geq 1} \text{DTIME}(n^c)$ is strict: there are languages in $\text{P}$ for which there is no $O(n^{1000})$ algorithm. We will prove the weaker statement $\text{DTIME}(n) \neq \text{DTIME}(n^{10})$, which gives the main idea but doesn’t get bogged down in the details.

Proof: We use diagonalization. Enumerate all Turing Machines $M_i$ as follows: $M_1, M_1, M_2, M_1, M_2, M_3, M_1, M_2, M_3, M_4, \ldots$ We can relabel this sequence as $N_1, N_2, N_3, \ldots$ In fact any sequence $\{N_j\}_{j=1}^\infty$ of TMs such that each $M_i$ appears in the sequence an infinite number of times will work.

Now consider the following language, $L$, defined as follows. For input $\alpha$, where $n = |\alpha|$, if $N_\alpha(\alpha)$ accepts within $n^2$ steps then we reject (i.e. $\alpha \notin L$); otherwise we accept. The simulation of $N_\alpha(\alpha)$ can be done using the universal Turing Machine, U-TM, which induces only a $\log(n)$ factor of slowdown. So it is easy to see that $L \in \text{DTIME}(n^{10})$.

Now suppose $L \in \text{DTIME}(n)$. Then there exists a TM $M$ deciding $L$ in time $cn$ for some constant $c$. Since our enumeration $\{N_j\}_{j=1}^\infty$ contained an infinite number of copies of $M$, we have for some sufficiently long $\alpha$ that $M = N_\alpha$ and $c|\alpha| < |\alpha|^2$.
But then we have $M(\alpha) \neq N_\alpha(\alpha)$ by our definition of $L$. $\Rightarrow \Leftarrow$ ■

Next, we develop the notion of non-deterministic Turing Machines and NTIME.

Definition: We say $L \in \text{NTIME}(f(n))$ if there exists a TM $M$ and a constant $c$ such that for all $x \in \{0,1\}^n$:

$$x \in L \iff \exists y \in \{0,1\}^{cf(n)} \text{ such that } M(x,y) \text{ halts in at most } cf(n) \text{ steps and outputs } 1.$$

Definition: An alternative definition of NP is then $\text{NP} = \bigcup_{c \geq 1} \text{NTIME}(n^c)$.

The name NP comes from “Nondeterministic polynomial time.” It is those languages which are computable in polynomial time on a nondeterministic Turing machine (NDTM). What is an NDTM and where can I get my hands on one?!

Definition (loose): A nondeterministic Turing machine (NDTM) is defined similarly to a Turing machine except that it has two different transition functions. At each stage of computation it branches and uses each transition function to continue its computation. It returns 1 if there is some sequence of choices along which it halts and outputs 1. If every branch halts without returning 1, then the output is 0. NDTMs are described on page 41 of Arora and Barak.
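The ∃y characterization of NTIME suggests the obvious deterministic simulation of an NDTM: try every branch, i.e. every candidate witness. A sketch using SUBSET-SUM as the NP language (the problem choice and function names are mine):

```python
from itertools import product

def verifier(nums, target, witness):
    """The polynomial-time machine M(x, y): does the 0/1 witness pick
    a subset of nums summing to target?"""
    return sum(n for n, bit in zip(nums, witness) if bit) == target

def brute_force_decide(nums, target):
    """Deterministic simulation of the NDTM: explore every branch,
    i.e. every candidate witness y in {0,1}^n -- exponential time
    overall, even though each single branch is cheap."""
    return any(verifier(nums, target, w)
               for w in product((0, 1), repeat=len(nums)))

assert brute_force_decide([3, 5, 7], 12)       # witness (0, 1, 1): 5 + 7
assert not brute_force_decide([3, 5, 7], 11)   # no subset sums to 11
```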
Source: http://sites.math.rutgers.edu/~ss1984/complexity-scribe-02.pdf
Lecture #6 - Schematic Protection Model - Safety question - Expressive Power - HRU and SPM - Multiparent create - ESPM

Key Question
• Characterize class of models for which safety is decidable
– Existence: Take-Grant Protection Model is a member of such a class
– Universality: In general, question undecidable, so for some models it is not decidable
• What is the dividing line?

Schematic Protection Model
• Type-based model
– Protection type: entity label determining how control rights affect the entity
• Set at creation and cannot be changed
– Ticket: description of a single right over an entity
• Entity has sets of tickets (called a domain)
• Ticket is X/r, where X is an entity and r a right
– Functions determine rights transfer
• Link: are source, target “connected”?
• Filter: is transfer of ticket authorized?

Link Predicate
• Idea: $\text{link}_i(X, Y)$ if $X$ can assert some control right over $Y$
• Conjunction or disjunction of terms of the form:
– $X/z \in \text{dom}(X)$
– $X/z \in \text{dom}(Y)$
– $Y/z \in \text{dom}(X)$
– $Y/z \in \text{dom}(Y)$
– true

Examples
• Take-Grant:
\[ \text{link}(X, Y) = Y/g \in \text{dom}(X) \lor X/t \in \text{dom}(Y) \]
• Broadcast:
\[ \text{link}(X, Y) = X/b \in \text{dom}(X) \]
• Pull:
\[ \text{link}(X, Y) = Y/p \in \text{dom}(Y) \]

Filter Function
- Range is a set of copyable tickets (entity type, right pairs)
- Domain is pairs of subject types
- Copy a ticket $X/r:c$ from $dom(Y)$ to $dom(Z)$ requires:
- $X/rc \in dom(Y)$
- $link_i(Y, Z)$
- $\tau(X)/r:c \in f_i(\tau(Y), \tau(Z))$
- One filter function per link predicate

Example
• $f(\tau(Y), \tau(Z)) = T \times R$
– Any ticket can be transferred (if other conditions met)
• $f(\tau(Y), \tau(Z)) = T \times RI$
– Only tickets with inert rights can be transferred (if other conditions met)
• $f(\tau(Y), \tau(Z)) = \emptyset$
– No tickets can be transferred

Example
- **Take-Grant Protection Model**
- $TS = \{ \text{subjects} \}, \ TO = \{ \text{objects} \}$
- $RC = \{ \text{tc}, \ gc \}, \ RI = \{ \text{rc}, \ wc \}$
-
$\text{link}(p, q) = p/t \in \text{dom}(q) \lor q/g \in \text{dom}(p)$
- $f(\text{subject, subject}) = \{ \text{subject, object} \} \times \{ \text{tc, gc, rc, wc} \}$

Create Operation
- Must handle type, tickets of new entity
- Relation $cc(a, b)$ [cc for can-create]
- Subject of type $a$ can create entity of type $b$
- Rule of acyclic creates: the can-create relation, viewed as a graph on types, must be acyclic

```
a   b
|   |
v   v
c   d
```

Types
- \( cr(a, b) \): tickets created when subject of type \( a \) creates entity of type \( b \) [\( cr \) for \textit{create-rule}]
- \( B \) object: \( cr(a, b) \subseteq \{ b/r:c \mid r:c \in RI \} \)
- \( A \) gets \( B/r:c \) iff \( b/r:c \in cr(a, b) \)
- \( B \) subject: \( cr(a, b) \) has two subsets
- \( cr_P(a, b) \) added to \( A \), \( cr_C(a, b) \) added to \( B \)
- \( A \) gets \( B/r:c \) if \( b/r:c \in cr_P(a, b) \)
- \( B \) gets \( A/r:c \) if \( a/r:c \in cr_C(a, b) \)

Non-Distinct Types
\( cr(a, a) \): who gets what?
- \( self/r:c \) are tickets for the creator
- \( a/r:c \) are tickets for the created entity
\[ cr(a, a) = \{ a/r:c, self/r:c \mid r:c \in R \} \]

Attenuating Create Rule
\[ \text{cr}(a, b) \text{ attenuating if:} \]
1. \( \text{cr}_C(a, b) \subseteq \text{cr}_P(a, b) \) and
2.
\( a/r:c \in \text{cr}_P(a, b) \Rightarrow \text{self}/r:c \in \text{cr}_P(a, b) \) Example: Owner-Based Policy - Users can create files, creator can give itself any inert rights over file - \( cc = \{ (user, file) \} \) - \( cr(user, file) = \{ file/r:c \mid r \in RI \} \) - Attenuating, as graph is acyclic, loop free ![Diagram](image-url) Example: Take-Grant - Say subjects create subjects (type \(s\)), objects (type \(o\)), but get only inert rights over latter - \(cc = \{(s, s), (s, o)\}\) - \(cr_c(a, b) = \emptyset\) - \(cr_p(s, s) = \{s/tc, s/gc, s/rc, s/wc\}\) - \(cr_p(s, o) = \{s/rc, s/wc\}\) - Not attenuating, as no *self* tickets provided; *subject* creates *subject* Safety Analysis • Goal: identify types of policies with tractable safety analyses • Approach: derive a state in which additional entries, rights do not affect the analysis; then analyze this state – Called a maximal state Definitions - System begins at initial state - Authorized operation causes *legal transition* - Sequence of legal transitions moves system into final state - This sequence is a *history* - Final state is *derivable* from history, initial state More Definitions • States represented by $h$ • Set of subjects $SUB^h$, entities $ENT^h$ • Link relation in context of state $h \ link^h$ • Dom relation in context of state $h \ dom^h$ \( \text{path}^h(X,Y) \) - \( X, Y \) connected by one link or a sequence of links - Formally, either of these hold: - for some \( i \), \( \text{link}^h_i(X,Y) \); or - there is a sequence of subjects \( X_0, \ldots, X_n \) such that \( \text{link}^h_i(X,X_0), \text{link}^h_i(X_n,Y) \), and for \( k = 1, \ldots, n \), \( \text{link}^h_i(X_{k-1},X_k) \) - If multiple such paths, refer to \( \text{path}^h_j(X,Y) \) Capacity $cap(path^h(X,Y))$ - Set of tickets that can flow over $path^h(X,Y)$ - If $link_i^h(X,Y)$: set of tickets that can be copied over the link (i.e., $f_i(\tau(X), \tau(Y))$) - Otherwise, set of tickets that can be copied over *all* links 
in the sequence of links making up the $path^h(X,Y)$ - Note: all tickets (except those for the final link) *must* be copyable Flow Function - Idea: capture flow of tickets around a given state of the system - Let there be $m$ path$^h$s between subjects $X$ and $Y$ in state $h$. Then flow function $$flow^h: SUB^h \times SUB^h \to 2^{T \times R}$$ is: $$flow^h(X,Y) = \bigcup_{i=1, \ldots, m} cap(path^h_i(X,Y))$$ Properties of Maximal State • Maximizes flow between all pairs of subjects – State is called * – Ticket in flow*(X,Y) means there exists a sequence of operations that can copy the ticket from X to Y • Questions – Is maximal state unique? – Does every system have one? Formal Definition - Definition: $g \leq_0 h$ holds iff for all $X, Y \in SUB^0$, $flow^g(X,Y) \subseteq flow^h(X,Y)$. - Note: if $g \leq_0 h$ and $h \leq_0 g$, then $g, h$ equivalent - Defines set of equivalence classes on set of derivable states - Definition: for a given system, state $m$ is maximal iff $h \leq_0 m$ for every derivable state $h$ - Intuition: flow function contains all tickets that can be transferred from one subject to another - All maximal states in same equivalence class Maximal States • Lemma. Given arbitrary finite set of states $H$, there exists a derivable state $m$ such that for all $h \in H$, $h \leq_0 m$ • Outline of proof: induction – Basis: $H = \emptyset$; trivially true – Step: $|H'| = n + 1$, where $H' = G \cup \{h\}$. By IH, there is a $g \in G$ such that $x \leq_0 g$ for all $x \in G$. 
Outline of Proof
• Let $M$ be an interleaving of the histories of $g$, $h$ which:
– Preserves relative order of transitions in $g$, $h$
– Omits second create operation if duplicated
• $M$ ends up at state $m$
• If $\text{path}^g(X,Y)$ for $X, Y \in \text{SUB}^g$, then $\text{path}^m(X,Y)$
– So $g \leq_0 m$
• If $\text{path}^h(X,Y)$ for $X, Y \in \text{SUB}^h$, then $\text{path}^m(X,Y)$
– So $h \leq_0 m$
• Hence $m$ is a maximal state for $H'$

Answer to Second Question
• Theorem: every system has a maximal state *
• Outline of proof: $K$ is a set of derivable states containing exactly one state from each equivalence class of derivable states
– Consider $X, Y$ in $SUB^0$. The flow function’s range is $2^{T \times R}$, so it can take at most $2^{|T \times R|}$ values. As there are $|SUB^0|^2$ pairs of subjects in $SUB^0$, there are at most $2^{|T \times R|} |SUB^0|^2$ distinct equivalence classes; so $K$ is finite
• Result follows from lemma

Safety Question
• In this model: Is there a derivable state with $X/r:c \in \text{dom}(A)$? Equivalently, does there exist a subject $B$ with ticket $X/rc$ in the initial state such that $X/r:c \in \text{flow}^*(B,A)$?
• To answer: construct maximal state and test
– Consider acyclic attenuating schemes; how do we construct the maximal state?

Intuition
• Consider state $h$.
• State $u$ corresponds to $h$ but with the minimal number of new entities created such that maximal state $m$ can be derived with no create operations
– So if in the history from $h$ to $m$, subject $X$ creates two entities of type $a$, in $u$ only one would be created; a surrogate for both
• $m$ can be derived from $u$ in polynomial time, so if $u$ can be created by adding a finite number of subjects to $h$, the safety question is decidable.
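The copy conditions from the Filter Function slide and the capacity/flow definitions behind the safety question translate directly into predicates. A schematic Python sketch (the tuple encoding of tickets, strings for types and rights, and the suffix-'c' copy-flag convention are my own simplifications, not SPM notation):

```python
def can_copy(ticket, Y, Z, dom, tau, link, filt):
    """SPM copy check: X/r:c may flow from dom(Y) to dom(Z) iff
    (1) X/rc in dom(Y), (2) link(Y, Z) holds, and
    (3) tau(X)/r:c is in the filter for (tau(Y), tau(Z))."""
    X, r = ticket
    with_flag = r if r.endswith("c") else r + "c"
    return ((X, with_flag) in dom[Y]
            and link(Y, Z, dom)
            and (tau[X], r) in filt(tau[Y], tau[Z]))

def cap(path, tau, filters):
    """Capacity of path Y_0 -> ... -> Y_n: tickets (type, right) that
    can flow along it; on every link except the last the ticket must
    keep the copy flag so it can travel onward."""
    result, last = None, len(path) - 2
    for i in range(len(path) - 1):
        f = filters[i](tau[path[i]], tau[path[i + 1]])
        if i != last:
            f = {(t, r) for (t, r) in f if r.endswith("c")}
        result = f if result is None else result & f
    return result

def flow(paths, tau, filters_for):
    """flow(X, Y): union of capacities over all paths between X and Y."""
    return set().union(*(cap(p, tau, filters_for[p]) for p in paths))

# Take-Grant style link and a fully permissive filter
def tg_link(Y, Z, dom):
    return any(t in dom[Y] for t in ((Z, "g"), (Z, "gc"))) or \
           any(t in dom[Z] for t in ((Y, "t"), (Y, "tc")))

def tg_filter(a, b):
    return {(t, r) for t in ("subject", "object")
            for r in ("r", "rc", "w", "wc", "t", "tc", "g", "gc")}

# A holds F/rc (readable file, copyable) and B/g (grant link to B)
dom = {"A": {("F", "rc"), ("B", "g")}, "B": set()}
tau = {"A": "subject", "B": "subject", "F": "object"}
assert can_copy(("F", "r"), "A", "B", dom, tau, tg_link, tg_filter)
```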
Fully Unfolded State • State $u$ derived from state 0 as follows: – delete all loops in $cc$; call the new relation $cc'$ – mark all subjects as folded – while any $X \in SUB^0$ is folded • mark it unfolded • if $X$ can create an entity $Y$ of type $y$, it does so (call this the $y$-surrogate of $X$); if $Y$ is a subject, mark it folded – if any subject in state $h$ can create an entity of its own type, do so • Now in state $u$ Termination • First loop terminates as $SUB^0$ is finite • Second loop terminates: – Each subject in $SUB^0$ can create at most $|TS|$ children, and $|TS|$ is finite – Each folded subject in $SUB^i$ can create at most $|TS| - i$ children – When $i = |TS|$, a subject cannot create more children; thus, the set of folded subjects is finite – Each loop iteration removes one element • Third loop terminates as $SUB^h$ is finite Surrogate - Intuition: a surrogate collapses multiple subjects of the same type into a single subject that acts for all of them - Definition: given initial state $0$, for every derivable state $h$ define the surrogate function $\sigma: \text{ENT}^h \rightarrow \text{ENT}^h$ by: - if $X \in \text{ENT}^0$, then $\sigma(X) = X$ - if $Y$ creates $X$ and $\tau(Y) = \tau(X)$, then $\sigma(X) = \sigma(Y)$ - if $Y$ creates $X$ and $\tau(Y) \neq \tau(X)$, then $\sigma(X) = $ the $\tau(X)$-surrogate of $\sigma(Y)$ Implications - $\tau(\sigma(X)) = \tau(X)$ - If $\tau(X) = \tau(Y)$, then $\sigma(X) = \sigma(Y)$ - If $\tau(X) \neq \tau(Y)$, then - $\sigma(X)$ creates $\sigma(Y)$ in the construction of $u$ - $\sigma(X)$ creates entities $X'$ of type $\tau(X) = \tau(\sigma(X))$ - From these, for a system with an acyclic attenuating scheme, if $X$ creates $Y$, then the tickets that would be introduced by pretending that $\sigma(X)$ creates $\sigma(Y)$ are in $\text{dom}^u(\sigma(X))$ and $\text{dom}^u(\sigma(Y))$ Deriving Maximal State • Idea – Reorder operations so that all creates come first, and replace the history with an equivalent one using surrogates – Show that the maximal state of the new history is also that
of the original history – Show that the maximal state can be derived from the initial state Reordering - $H$ is a legal history deriving state $h$ from state 0 - Order operations: first create, then demand, then copy operations - Build new history $G$ from $H$ as follows: - Delete all creates - “$X$ demands $Y/r:c$” becomes “$\sigma(X)$ demands $\sigma(Y)/r:c$” - “$Z$ copies $X/r:c$ from $Y$” becomes “$\sigma(Z)$ copies $\sigma(X)/r:c$ from $\sigma(Y)$” Tickets in Parallel • Theorem – All transitions in $G$ are legal; if $X/r:c \in \text{dom}^h(Y)$, then $\sigma(X)/r:c \in \text{dom}^g(\sigma(Y))$ • Outline of proof: induct on the number of copy operations in $H$ Basis - $H$ has create, demand only; so $G$ has demand only. $\sigma$ preserves type, so by construction every demand operation in $G$ is legal. - 3 ways for $X/r:c$ to be in $dom^h(Y)$: - $X/r:c \in dom^0(Y)$ means $X, Y \in ENT^0$, so trivially $\sigma(X)/r:c \in dom^g(\sigma(Y))$ holds - A create added $X/r:c$ to $dom^h(Y)$: the previous lemma says $\sigma(X)/r:c \in dom^g(\sigma(Y))$ holds - A demand added $X/r:c$ to $dom^h(Y)$: the corresponding demand operation in $G$ gives $\sigma(X)/r:c \in dom^g(\sigma(Y))$ Hypothesis - Claim holds for all histories with $k$ copy operations - History $H$ has $k+1$ copy operations - $H'$ is the initial sequence of $H$ containing the first $k$ copy operations - $h'$ is the state derived from $H'$ Step • $G'$ is the sequence of modified operations corresponding to $H'$; $g'$ the derived state – $G'$ is a legal history by the hypothesis • The final operation is “$Z$ copied $X/r:c$ from $Y$” – So $h, h'$ differ by at most $X/r:c \in \text{dom}^h(Z)$ – Construction of $G$ means the final operation puts $\sigma(X)/r:c \in \text{dom}^g(\sigma(Z))$ • Proves the second part of the claim Step - \( H' \) legal, so for \( H \) to be legal, we need: 1. \( X/r:c \in \text{dom}^{h'}(Y) \) 2. \( \text{link}^{h'}_i(Y, Z) \) 3.
\( \tau(X)/r:c \in f_i(\tau(Y), \tau(Z)) \) - By the IH, 1, and 2: as \( X/r:c \in \text{dom}^{h'}(Y) \), \( \sigma(X)/r:c \in \text{dom}^{g'}(\sigma(Y)) \) and \( \text{link}^{g'}_i(\sigma(Y), \sigma(Z)) \) - As \( \sigma \) preserves type, the IH and 3 imply \( \tau(\sigma(X))/r:c \in f_i(\tau(\sigma(Y)), \tau(\sigma(Z))) \) - The IH says \( G' \) is legal, so \( G \) is legal Corollary • If $link_i^h(X, Y)$, then $link_i^g(\sigma(X), \sigma(Y))$
CS 3410: Chord Distributed Hash Table Introduction In this assignment, you will implement a basic CHORD distributed hash table (DHT) as described in this paper: You can download my Linux implementation to play with: - [Example solution for Linux](Example_solution_for_Linux) Save the file as `chord-sol` and make it executable: ``` chmod 755 chord-sol ``` Then launch it using: ``` ./chord-sol ``` Requirements Your DHT must implement the services to maintain a DHT, and it must support a simple command-line user interface. Each instance (running on different machines, or on the same machine on different ports) will run an RPC server and act as an RPC client. In addition, several background tasks will run to keep the DHT data structures up-to-date. User interface When running, an instance of your DHT should support a simple command-line interface. It will supply a prompt, and the user can issue one of the following commands. The simplest command is: - `help`: this displays a list of recognized commands The main commands start with those related to joining and leaving DHT rings: - `port <n>`: set the port that this node should listen on. By default, this should be port 3410, but users can set it to something else. This command only works before a ring has been created or joined. After that point, trying to issue this command is an error. - `create`: create a new ring. This command only works before a ring has been created or joined. After that point, trying to issue this command is an error. - `join <address>`: join an existing ring, one of whose nodes is at the address specified. This command only works before a ring has been created or joined. After that point, trying to issue this command is an error. - `quit`: shut down. This quits and ends the program. If this was the last instance in a ring, the ring is effectively shut down. If this is not the last instance, it should send all of its data to its immediate successor before quitting. 
Other than that, it is not necessary to notify the rest of the ring when a node shuts down. Next, there are those related to finding and inserting keys and values. A `<key>` is any sequence of one or more non-space characters, as is a value.

- `put <key> <value>`: insert the given key and value into the currently active ring. The instance must find the peer that is responsible for the given key using a DHT lookup operation, then contact that host directly and send it the key and value to be stored.
- `putrandom <n>`: randomly generate `n` keys (and accompanying values) and put each pair into the ring. Useful for debugging.
- `get <key>`: find the given key in the currently active ring. The instance must find the peer that is responsible for the given key using a DHT lookup operation, then contact that host directly, retrieve the value, and display it to the local user.
- `delete <key>`: similar to `get`, but instead of retrieving the value and displaying it, the peer deletes it from the ring.

Finally, there are operations that are useful mainly for debugging:

- `dump`: display information about the current node, including the range of keys it is responsible for, its predecessor and successor links, its finger table, and the actual key/value pairs that it stores.
- `dumpkey <key>`: similar to `dump`, but this one finds the node responsible for `<key>`, asks it for its dump info, and displays it to the local user. This allows a user at one terminal to query any part of the ring.
- `dumpaddr <address>`: similar to the above, but query a specific host and dump its info.
- `dumpall`: walk around the ring, dumping all information about every peer in the ring in clockwise order (display the current host, then its successor, etc.).

Of these, `dump` is the most helpful and is required. The others may prove helpful in debugging, but they are optional.
**DHT interface**

When communicating with other DHT nodes, your DHT should use the basic operations described in the paper, including finger tables. You should implement a successor list as described in the paper, and basic migration of key/value pairs as nodes join and leave. Details are given later in this document. There are a few different basic types of values that you will need to work with. For node addresses, use a string representation of the address in the form `address:port`, where address is a dotted-decimal IP address (not a host name). When referring to positions on the ring (ring addresses), you will be using a large number, not a string. This works out nicely: keys and node addresses are strings, and both can be hashed into ring addresses, which are not strings. The type checker will make sure you never mix up the two types of addresses. You will use the sha1 hash algorithm (available in `crypto/sha1` in the Go standard library), which gives 160-bit results. This means that your finger table will have up to 160 entries. In order to treat these as large integers and perform math operations on them, you will need to use the `math/big` package. I recommend immediately converting sha1 hash results into `big.Int` values and using that as your type for ring addresses. Each node will need to maintain the following data structures as described in the paper:

- **finger:** a list of addresses of nodes on the circle. Create a list with 161 entries, all of which start out empty. You will only need to maintain a few of them at the end, but by creating a full-size list you will simplify your code and make it as similar to the pseudo-code in the paper as possible. Use entries 1–160 (instead of 0–159) for the same reason.
- **successor:** a list of addresses to the next several nodes on the ring. Set the size of this list as a global, named constant with a value of at least 3.
- **predecessor:** a link to the previous node on the circle (an address)

Note that addresses should contain an IP address and a port number. The main operations you must support are:

- **find_successor(id):** note that this is not the same as the `get` function from the command-line interface. This is the operation defined in Figure 5 of the paper.
- **create():** note that this is not the same as the `create` function in the application interface; it is the operation defined in Figure 6 of the paper (and is invoked by the command-line interface).
- **join(n'):** note that this is not the same as the `join` function in the application interface.
- **stabilize()**
- **notify(n')**

Each of these is described in the pseudo-code in the paper. You must also incorporate the modifications described in the paper to maintain a list of successor links instead of a single successor value.

**Suggestions**

The following are implementation suggestions, which you are free to follow or ignore.

**Goroutines**

- When you start the DHT (either through a create request or a join request), you should launch the RPC server for the local node. This is similar to what you have already done in the first project. Set up the necessary data structures, then launch the RPC server in its own goroutine.
- Run the `stabilize` procedure in its own goroutine, also launched at node creation time. It should run in a forever loop: sleep for a second, then call the `stabilize` procedure described in the paper.
- Similarly, `fix_fingers` should run every second or so in its own goroutine.
- `check_predecessor` also runs in its own goroutine, also in a loop that repeats every second or so.
- Use the main goroutine to run the command-line interface loop. When it exits, everything else will be shut down as well.

**RPC connections**

The number and locations of peers will vary over time. For a production system, you would maintain a pool of outgoing connections and garbage collect connections over time.
To make things simpler, establish a fresh RPC connection for each message you send, wait for the response, then shut down that connection. You may find it helpful to write a function like this:

```go
func call(address string, method string, request interface{}, reply interface{}) error
```

This function takes care of establishing a connection to the given address, sending a request to the given method with the given parameters, waiting for the reply, and closing the client connection before returning the result. It is okay to make all of your requests synchronously, i.e., the goroutine that sends the request can stop and wait until the response is available.

**Iterative lookups**

Use the iterative style of lookups (rather than the recursive style) as described in the paper. All RPC calls will be able to return values immediately without blocking, i.e., every RPC server function queries or modifies local data structures without having to contact other servers. The most complicated operation you must support is a complete `find_successor` lookup that may have to contact multiple servers. It should start by checking if the result can be found locally. If not, it should enter a loop. In each iteration, it contacts a single server, asking it for the result. The server returns either the result itself, or a forwarding address of the node that should be contacted next. Put in a hard-coded limit on the number of requests that a single lookup can generate, just in case. Define this as a global, named constant. 32 ought to be sufficient for hundreds of nodes.
Here is revised pseudo-code for `find_successor` and friends:

```go
// ask node n to find the successor of id,
// or a better node to continue the search with
n.find_successor(id)
    if id in (n, successor]
        return true, successor
    else
        return false, closest_preceding_node(id)

// search the local table for the highest predecessor of id
n.closest_preceding_node(id)
    for i = 160 downto 1
        if finger[i] in (n, id)
            return finger[i]
    return successor
```

(Note that the fallback case in `closest_preceding_node` returns the successor, not the node's own address; this modification is discussed again under the implementation schedule below.)

Hash functions

To get a sha1 hash value, include the appropriate libraries and use something like this:

```go
func hashString(elt string) *big.Int {
	hasher := sha1.New()
	hasher.Write([]byte(elt))
	return new(big.Int).SetBytes(hasher.Sum(nil))
}
```

Now you can use the operations in `math/big` to work with these 160-bit values. It is a bit cumbersome because you cannot use infix operators. For example, the following is helpful:

```go
const keySize = sha1.Size * 8

var two = big.NewInt(2)
var hashMod = new(big.Int).Exp(big.NewInt(2), big.NewInt(keySize), nil) // 2^160, the size of the ring

func jump(address string, fingerentry int) *big.Int {
	n := hashString(address)
	fingerentryminus1 := big.NewInt(int64(fingerentry) - 1)
	jump := new(big.Int).Exp(two, fingerentryminus1, nil)
	sum := new(big.Int).Add(n, jump)

	return new(big.Int).Mod(sum, hashMod)
}
```

This computes the address of a position across the ring that should be pointed to by the given finger table entry (using 1-based numbering). Another useful function is this:

```go
func between(start, elt, end *big.Int, inclusive bool) bool {
	if end.Cmp(start) > 0 {
		return (start.Cmp(elt) < 0 && elt.Cmp(end) < 0) || (inclusive && elt.Cmp(end) == 0)
	} else {
		return start.Cmp(elt) < 0 || elt.Cmp(end) < 0 || (inclusive && elt.Cmp(end) == 0)
	}
}
```

This one returns true if `elt` is between `start` and `end` on the ring, accounting for the boundary where the ring loops back on itself. If `inclusive` is true, it tests if `elt` is in `(start,end]`, otherwise it tests for `(start,end)`. It is helpful to be able to print out hash codes in a format that makes it easy to visually compare two hash values.
The following code will do this; it keeps the first 8 hex digits and appends the original string:

```go
hex := fmt.Sprintf("%040x", hashString(value))
s := hex[:8] + ". (" + string(value) + ")"
```

This is useful for keys and addresses.

**Getting your network address**

It is helpful to have your code find its own address. The following code will help:

```go
func getLocalAddress() string {
	var localaddress string

	ifaces, err := net.Interfaces()
	if err != nil {
		panic("init: failed to find network interfaces")
	}

	// find the first non-loopback interface with an IPv4 address
	for _, elt := range ifaces {
		if elt.Flags&net.FlagLoopback == 0 && elt.Flags&net.FlagUp != 0 {
			addrs, err := elt.Addrs()
			if err != nil {
				panic("init: failed to get addresses for network interface")
			}

			for _, addr := range addrs {
				if ipnet, ok := addr.(*net.IPNet); ok {
					if ip4 := ipnet.IP.To4(); len(ip4) == net.IPv4len {
						localaddress = ip4.String()
						break
					}
				}
			}
		}
	}
	if localaddress == "" {
		panic("init: failed to find non-loopback interface with valid address on this node")
	}

	return localaddress
}
```

It finds the first address that is not on a loopback device, so it should be one that an outside machine can connect to. If you have multiple devices or multiple IP addresses, it will choose one arbitrarily.

**Implementation order**

**Week 1**

- Start by building your keyboard handler. Implement simple functions like `help`, `quit`, `port`, etc.
- Implement a skeleton `create` function that starts a `Node` instance complete with RPC server. Start by having it listen for a `ping` method that just responds to all requests immediately.
- Implement a `ping` command for the keyboard, which issues a ping request to a given address (as explained earlier). Use this to test if your RPC setup is working. You should use the `call` function given earlier. This will give you a complete RPC client/server pair.
- Implement `get`, `put`, and `delete` RPC server functions that manipulate the bucket of key/value pairs stored on a given instance. Implement `get`, `put`, and `delete` keyboard commands that require an address to be specified (later you will find the address by invoking your find function). Test everything with multiple instances running. You should be able to insert keys into remote instances, and retrieve them.
- Start implementing your `dump` command, which will initially just list the key/value pairs stored locally.
- Make a ring. Start by ignoring finger tables, and make a ring that just uses successor pointers. Follow the rules outlined in the paper. Implement the `join` command that initially just sets the given address as your successor.
- Update `dump` to show all interesting information about this node.

**Week 2**

- Implement a timer event for the `stabilize` function. This will require implementing `notify` as well. Your goal is to allow a complete ring with valid successor and predecessor pointers.
- Implement the initial version of the find function. Have it follow the list of successor pointers (since no finger table exists yet).
- Update `get`, `put`, and `delete` to use your find function to find the address where a key/value pair should be located instead of requiring the user to specify it.
- At this point, you should be able to build rings with all basic functionality in place. The rest is just optimization and failure handling.
- Add `fix_fingers` and `check_predecessor` and update `stabilize` to conform to the pseudo-code in the paper (if it does not already). When `fix_fingers` finds an entry, write it to the next entry (as given in the pseudo-code), but also loop and keep writing it to successive entries as long as it is still in the right range.
For example, the successor for the position 1 step further around the ring is probably also the successor for 2 steps around the ring, and for 4, 8, and 16 steps around the ring. You can check this without repeating the `find_successor` operation, and will likely fill the first 155 or so of the 160 entries with a single value. This is a good optimization.

- Update your `find_successor` function to use the finger table. In `closest_preceding_node`, the last line (the fallback case if nothing suitable was found in the finger table) should return your successor, not your own address. At this point, you should have complete functionality as long as there are no failures.

**Week 3**

To handle failures, you need to keep a successor list. This is a surprisingly easy extension, and makes for a much more satisfying result.

- Store a list of successors *in addition* to the successor pointer. All of your existing code will continue to use the single successor link; you will only need to consult the list when your successor goes down. This list will start out with only a single entry (the initial value of the successor pointer). Set a global constant that will be its maximum size (3 is a reasonable number).
- Change `stabilize` to obtain your successor's successor list in addition to its predecessor. Add your successor to the beginning of this list, and remove an element off the end if necessary to keep the list within its maximum size. This is your new successor list.
- When `stabilize`'s call to the successor fails, assume that the successor has gone down. Chop the first element off your successor list, and set your successor to the next element in the list. If there is no such element (the list is empty), set your successor to your own address. Your ring should now tolerate node failures. All that is left is to manage migrating key/value pairs around the ring as nodes join and leave.
This is also relatively easy:

- Make two new RPC server functions. `put_all` receives a map of key/value pairs, and adds all of its contents to the local bucket of key/value pairs. `get_all` takes the address of a new node that is between you and your predecessor. It should gather all keys that belong to that new node (use your `between` function to determine this) into a new map, and it should also remove them from your bucket. You can loop through all the values in a map like this:

```go
for key, value := range elt.Bucket {
    ...
}
```

Return the map of values that no longer belong in your node.
- When a node is about to go down (in response to a `quit` command), call `put_all` on its successor, handing it the entire local bucket before shutting down.
- When joining an existing ring, issue a `get_all` request to your new successor once the join has succeeded, i.e., as soon as you know your successor. This should give you full functionality.
LAMBDA-UPSILON-OMEGA: AN ASSISTANT ALGORITHMS ANALYZER

Philippe Flajolet, INRIA, Rocquencourt, 78150 Le Chesnay (France)
Bruno Salvy, INRIA and Ecole Polytechnique, 91405 Palaiseau (France)
Paul Zimmermann, INRIA, Rocquencourt

Abstract. Lambda-Upsilon-Omega, \( \Lambda\Upsilon\Omega \), is a system designed to perform automatic analysis of well-defined classes of algorithms operating over “decomposable” data structures. It consists of an ‘Algebraic Analyzer’ System that compiles algorithm specifications into generating functions of average costs, and an ‘Analytic Analyzer’ System that extracts asymptotic information on the coefficients of generating functions. The algebraic part relies on recent methodologies in combinatorial analysis based on systematic correspondences between structural type definitions and counting generating functions. The analytic part makes use of partly classical and partly new correspondences between singularities of analytic functions and the growth of their Taylor coefficients. The current version \( \Lambda\Upsilon\Omega_0 \) of \( \Lambda\Upsilon\Omega \) implements as basic data types the term trees encountered in symbolic algebra systems. The analytic analyzer can treat large classes of functions with explicit expressions. In this way, \( \Lambda\Upsilon\Omega_0 \) can at its current stage generate about a dozen non-trivial average-case analyses of algorithms like formal differentiation and some algebraic simplification and matching algorithms. Its analytic analyzer can determine asymptotic expansions for large classes of generating functions arising in the analysis of algorithms. The outline of a design for a full system is also discussed here. The long-term goal is to include a fairly rich set of data structuring mechanisms, including some general recursive type definitions, and to have the analytic analyzer treat wide classes of functional equations as may be encountered in combinatorial analysis and the analysis of algorithms.

1.
Introduction Ideally, a system for automatic program analysis should take as input a procedure or function specification I1. procedure Quicksort \{instructions\} end; I2. procedure Diff \{instructions\} end; for sorting or computing symbolic derivatives, and produce an “analysis” of the program. We concern ourselves here with average-case analysis and optimization of programs, and we would like the system to output something like O1. Time for Quicksort on random inputs of size \( n \) is \[ 11.67 \, n \ln(n) - 1.74 \, n + O(\log n) \] O2. Time for Diff on random inputs of size \( n \) is \[ \frac{n}{8} \, \frac{240\sqrt{2} + 37}{42\sqrt{2} - 13} + O(1) \] These two analyses will naturally depend on type specifications that are companion to (I1) and (I2), a description of a complexity model (e.g., an ‘if’ takes 5 units of time), and a description of a (random input) statistical model. For Quicksort, we could have some way of specifying that all permutations of size $n$ are taken equally likely, while for Diff we could decide that all expression trees of size $n$ with the proper type are equally likely. We shall describe here a system whose current state performs automatic analysis of a whole class of algorithms in the realm of symbolic manipulation algorithms and contains a good deal of what is needed in order to analyze permutation algorithms like (I1). Result (O1) is taken directly from [Knuth 1973], but (O2) was literally produced by our system. Our system is called $\Lambda\Upsilon\Omega$. The name $\Lambda\Upsilon\Omega$ (Lambda-Upsilon-Omega) comes from the Greek word $\lambda\upsilon\omega$ which means (amongst other things) ‘I solve’, and it is from this verb that “analysis” derives. Implementation was started in mid 1987. We shall describe here its overall design principles as well as the state of the current implementation $\Lambda\Upsilon\Omega_0$.
There are two major components in $\Lambda\Upsilon\Omega$: - An Algebraic Analyzer System, ALAS, that accepts algorithm specifications in a suitably restricted programming language. That part produces type descriptors and complexity descriptors in the form of generating functions. - An Analytic Analyzer System, ANANAS, that accepts generating functions (for type descriptors and complexity descriptors) and tries to determine automatically an asymptotic expansion of their Taylor coefficients. The algebraic component is currently implemented in Lisp, though ML is also considered for later implementations. In its present form, it makes it possible to analyze a class of symbolic term (tree) manipulation programs, and comprises about 500 Lisp instructions (in the Le-Lisp dialect). The analytic component is already a fairly large set of symbolic “algebra” routines written in Maple, comprising about 3000 instructions. Both components encapsulate a fair amount of mathematical expertise at a rather abstract level. - The algebraic system is based on research in combinatorial analysis developed mostly during the 1970’s regarding the correspondence between structural definitions of combinatorial objects and generating functions, together with some new extensions to program schemes. - The analytic system is based on complex asymptotic analysis, partly late 19th and early 20th century methods and partly recent developments, concerning the correspondence between singularities or saddle points of functions and the asymptotic order of coefficients in Taylor expansions. The $\Lambda\Upsilon\Omega$ system has of course no claim of being universal, since program termination is in general undecidable. Its interest lies in consideration of a restricted class of purely “functional” procedures that operate through recursive descent over a large class of “decomposable” structures defined by powerful type structuring mechanisms.
Such a class contains algorithms and data structures like binary search trees, unbalanced heaps for priority queues, quicksort, digital tries and radix exchange sort, merge sort, several versions of hashing, pattern matching, and recursive parsing. Our long term objective is to have a system that will automatically perform the analysis of a non-negligible fraction of these algorithms, as well as many others of the same style. The current system implements a complexity calculus on term trees along the lines of [Flajolet, Steyaert 1987], and the analytic analyzer is already appreciably more general. For an interesting alternative approach to automatic complexity analysis, the reader is referred to [Hickey, Cohen 1988] and references therein. 2. A Sample Session A typical \( \Lambda\Upsilon\Omega \) session\(^\dagger\) starts by calling a script, which (using Unix virtual tty’s) initiates a joint Lisp and Maple session. We then load ALAS and apply it to the program to be analyzed. This generates a set of equations over generating functions, which are passed to Maple initialized with ANANAS. The example considered is a program that computes symbolic derivatives (without simplification) of expressions (terms, trees) built from the operator set \[ t^{(0)}, x^{(0)}, \exp^{(1)}, +^{(2)}, -^{(2)} \] with superscripts denoting arities. The key steps are 1. The recursive definition of the type ‘term’ is reflected by a quadratic equation for its generating function \( t(z) \). 2. The recursive structure of the Diff procedure is reflected by a linear equation for its complexity descriptor \( \tau_{\text{diff}}(z) \) (generating function of average costs). These two steps are completed automatically by ALAS, which also uses a small Maple procedure to derive explicit expressions. At the next stage, ANANAS is used on those generating functions: 3.
Both \( t(z) \) and \( \tau_{\text{diff}}(z) \) are recognized as having singularities at a finite distance, of a so-called ‘algebraico-logarithmic’ type. Local singular expansions are then determined (through Maple’s Taylor capability). 4. Using general theorems from complex asymptotic analysis, singular expansions can be transformed automatically into asymptotic expansions of the coefficients. This is achieved by means of the versatile ‘equivalent’ command of ANANAS. Dividing the asymptotic forms of the coefficients \( [z^n]\,\tau_{\text{diff}}(z) \) and \( [z^n]\,t(z) \), we obtain the asymptotic average complexity of symbolic differentiation, in either algebraic or floating point form. The same device may be used to analyse the variant DiffCp of Diff that proceeds by copying subexpressions instead of sharing them. In this way we obtain average case analyses, which we summarize here against the obvious best case and worst case results. <table> <thead> <tr> <th>Algorithm</th> <th>Best Case</th> <th>Average Case</th> <th>Worst Case</th> </tr> </thead> <tbody> <tr> <td>Diff [sharing]</td> <td>\( O(n) \)</td> <td>\( c \cdot n + O(1) \)</td> <td>\( O(n) \)</td> </tr> <tr> <td>DiffCp [copy]</td> <td>\( O(n) \)</td> <td>\( c \cdot n^{3/2} + O(n) \)</td> <td>\( O(n^2) \)</td> </tr> </tbody> </table> The order of the cost for Diff was as expected. The \( O(n^{3/2}) \) result for DiffCp is harder to guess, and it is related to the behaviour of the average path length in trees as discussed in [Knuth 1968] or [Meir, Moon 1978]. \(^\dagger\) The necessary concepts will be developed in Sections 3 (Algebraic System) and 4 (Analytic System). The script that follows has been slightly edited and a few commands have been decomposed for the sake of readability. Figure 1. A \( \Lambda\Upsilon\Omega \) session showing the automatic analysis of symbolic differentiation. 3.
The Algebraic Analyzer System 3.1 Combinatorial Principles The algebraic part of our system – ALAS – relies on recent research in combinatorial analysis. Till the mid twentieth century, the field of combinatorial enumeration was mostly conceived as an art of obtaining recurrences for the counting of combinatorial structures, with generating functions entering as an ad hoc solution device in the more complex cases. The books by Riordan, and many of the analyses in Knuth’s magnum opus, are witnesses of this approach. From research conducted by Rota, Foata, Schützenberger and their schools, there has emerged a general principle: a rich collection of combinatorial constructions has a direct translation into generating functions. More precisely, let \( \mathcal{A} \) be a class of combinatorial objects, with \( \mathcal{A}_n \) the subclass consisting of objects of size \( n \), and \( A_n = \mathrm{card}(\mathcal{A}_n) \). We define the ordinary generating function (OGF) and exponential generating function (EGF) of \( \mathcal{A} \) by \[ A(z) = \sum_{n \geq 0} A_n z^n \quad \text{and} \quad \hat{A}(z) = \sum_{n \geq 0} \frac{A_n z^n}{n!}. \] (3.1) A combinatorial construction, say \( \mathcal{C} = \Phi[\mathcal{A}, \mathcal{B}] \), is said to be admissible if the counting sequence \( \{C_n\}_{n \geq 0} \) of the result depends only on the counting sequences \( \{A_n\} \) and \( \{B_n\} \) of the arguments. An admissible construction then defines an operator (or a functional) on the corresponding generating functions: \[ C(z) = \Psi[A(z), B(z)] \quad \text{and} \quad \hat{C}(z) = \hat{\Psi}[\hat{A}(z), \hat{B}(z)]. \] For instance the cartesian product construction is admissible since \[ \mathcal{C} = \mathcal{A} \times \mathcal{B} \implies C_n = \sum_{k=0}^{n} A_k B_{n-k} \quad \text{and} \quad C(z) = A(z) \cdot B(z). \] Combinatorial enumerations are developed systematically within a comparable framework in the book by Goulden and Jackson [1983]. The tables in Figure 2 summarize a collection of admissible constructions borrowed from [Flajolet 1985, 1988].
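The cartesian product rule can be checked concretely: the counting sequence of \( \mathcal{C} = \mathcal{A} \times \mathcal{B} \) is the convolution of those of \( \mathcal{A} \) and \( \mathcal{B} \), i.e. truncated power series multiplication. A minimal sketch in Python (the binary-string example is ours, for illustration only):

```python
def gf_product(a, b):
    """Coefficients of C(z) = A(z) * B(z): C_n = sum_k A_k * B_{n-k}."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(n)]

# A = binary strings, A_n = 2^n, so A(z) = 1/(1 - 2z); an ordered pair of
# binary strings of total size n can be formed in C_n = (n + 1) * 2^n ways.
a = [2 ** n for n in range(8)]
assert gf_product(a, a) == [(n + 1) * 2 ** n for n in range(8)]
```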
In this context, the primary objects of combinatorial enumeration are no longer integer sequences but rather generating functions. Furthermore, that approach fits nicely with asymptotic analysis, whose main tool is analytic function theory rather than explicit integer sequences. It is on the conjunction of these two principles that our system is built. The task of enumerating a class of combinatorial structures is then reduced to specifying it (up to isomorphism) by means of admissible constructions. Once this is done, the task of computing a set of generating function equations reduces to a purely formal translation. In the context of the analysis of algorithms, data structure declarations are thus converted to generating functions (GF’s), each data type having its own GF, also called its type descriptor. An interesting approach similar to ours and based on an extension of context-free grammars is presented in Greene’s thesis [1983].

OGF (unlabelled structures):
- Disjoint Union: \( \mathcal{C} = \mathcal{A} \cup \mathcal{B} \), \( c(z) = a(z) + b(z) \)
- Cartesian Product: \( \mathcal{C} = \mathcal{A} \times \mathcal{B} \), \( c(z) = a(z) \cdot b(z) \)
- Diagonal: \( \mathcal{C} = \Delta(\mathcal{A} \times \mathcal{A}) \), \( c(z) = a(z^2) \)
- Sequence: \( \mathcal{C} = \mathcal{A}^{*} \), \( c(z) = (1 - a(z))^{-1} \)
- Marking: \( \mathcal{C} = \mu \mathcal{A} \), \( c(z) = z \frac{d}{dz} a(z) \)
- Substitution: \( \mathcal{C} = \mathcal{A}[\mathcal{B}] \), \( c(z) = a(b(z)) \)
- PowerSet: \( \mathcal{C} = 2^{\mathcal{A}} \), \( c(z) = \exp\left( a(z) - \frac{1}{2} a(z^2) + \frac{1}{3} a(z^3) - \cdots \right) \)
- MultiSet: \( \mathcal{C} = M\{\mathcal{A}\} \), \( c(z) = \exp\left( a(z) + \frac{1}{2} a(z^2) + \frac{1}{3} a(z^3) + \cdots \right) \)

EGF (labelled structures):
- Disjoint Union: \( \mathcal{C} = \mathcal{A} \cup \mathcal{B} \), \( \hat{c}(z) = \hat{a}(z) + \hat{b}(z) \)
- Labelled Product: \( \mathcal{C} = \mathcal{A} \star \mathcal{B} \), \( \hat{c}(z) = \hat{a}(z) \cdot \hat{b}(z) \)
- Labelled Sequence: \( \mathcal{C} = \mathcal{A}^{<\infty} \), \( \hat{c}(z) = (1 - \hat{a}(z))^{-1} \)
- Marking: \( \mathcal{C} = \mu \mathcal{A} \), \( \hat{c}(z) = z \frac{d}{dz} \hat{a}(z) \)
- Labelled Substitution: \( \mathcal{C} = \mathcal{A}[\mathcal{B}] \), \( \hat{c}(z) = \hat{a}(\hat{b}(z)) \)
- Labelled Set: \( \mathcal{C} = \mathcal{A}^{[\infty]} \), \( \hat{c}(z) = \exp\left( \hat{a}(z) \right) \)

Figure 2. A catalog of admissible constructions and their translations to ordinary or exponential generating functions. The OGF constructions are relative to unlabelled structures, the EGF constructions to labelled structures.

3.2. Analysis of Algorithms Let \( \Gamma \) be an algorithm that takes its inputs from a data type \( \mathcal{I} \) and produces some output of type \( \mathcal{O} \). We consider exclusively additive complexity measures, thereby restricting ourselves to time complexity analyses. Let \( \tau \Gamma[e] \) denote the complexity of an execution of \( \Gamma \) on input \( e \). By the additive character: \[ \Gamma = (\Gamma^{(1)}; \Gamma^{(2)}) \quad \implies \quad \tau \Gamma = \tau \Gamma^{(1)} + \tau \Gamma^{(2)}. \] The purpose of average case analysis is to determine the expectation of \( \tau \Gamma[e] \) when \( e \) is a random element of \( \mathcal{I} \) of size \( n \). Thus, assuming \( \mathcal{I}_n \) is a finite set, that quantity is the quotient \[ \frac{\tau \Gamma_n}{I_n} \quad \text{where} \quad I_n = \mathrm{card}(\mathcal{I}_n) \quad \text{and} \quad \tau \Gamma_n = \sum_{e \in \mathcal{I}_n} \tau \Gamma[e], \] \( \tau \Gamma_n \) being thus the cumulated value of \( \tau \Gamma \) over \( \mathcal{I}_n \). The ordinary complexity descriptor (OCD) of algorithm \( \Gamma \) is defined as the generating function \[ \tau \Gamma(z) = \sum_{n \geq 0} \tau \Gamma_n z^n. \] (There is an obvious analogue for exponential descriptors.)
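The expected cost on random size-\( n \) inputs is thus read off as a coefficient-wise quotient of the complexity descriptor by the counting sequence. A toy illustration (the cost model below is invented for the example):

```python
def average_cost(tau, counts, n):
    """Expected cost on a random size-n input: tau_Gamma_n / I_n."""
    return tau[n] / counts[n]

# Toy model: inputs of size n are binary strings (I_n = 2^n), and every
# execution costs exactly the input size, so tau_Gamma_n = n * 2^n and the
# average cost on size-n inputs is n.
I = [2 ** n for n in range(10)]
tau = [n * 2 ** n for n in range(10)]
assert average_cost(tau, I, 7) == 7.0
```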
A program construction \( \Gamma = \Phi[\Gamma^{(1)}, \Gamma^{(2)}] \) is said to be admissible if the cost sequence \( \{\tau \Gamma_n\} \) of \( \Gamma \) depends only on the cost sequences of the arguments \( \Gamma^{(1)}, \Gamma^{(2)} \) and the counting sequences of the intervening data structures. An admissible construction again defines an operator over the corresponding generating functions. Assume for instance that \( P(x : \mathcal{C}) \) is a procedure that operates on inputs \( x = (a, b) \) of type \( \mathcal{C} = \mathcal{A} \times \mathcal{B} \) and is defined by \[ P(x : \mathcal{C}) := Q(a); \] where $Q$ is of type $Q(x : \mathcal{A})$. Then it is easy to see that $$ \tau P(z) = \tau Q(z) \cdot b(z). $$ If, in addition, we make use of the additivity of $\tau$, the scheme $$ P(x : \mathcal{C}) := Q(a) ; R(b) $$ translates into $$ \tau P(z) = \tau Q(z) \cdot b(z) + a(z) \, \tau R(z). $$ It turns out that there is a collection of program schemes, naturally associated to the constructions described above, that are admissible.
Corresponding to $\mathcal{C} = \mathcal{A} \uplus \mathcal{B}$, $\mathcal{C} = \mathcal{A} \times \mathcal{B}$, $\mathcal{C} = \mathcal{A}^*$, $\mathcal{C} = 2^{\mathcal{A}}$, $\mathcal{C} = M\{\mathcal{A}\}$, we find $$ \begin{align*} P(c) &= \text{if } c \in \mathcal{A} \text{ then } Q(c) \text{ else } R(c) & \tau P(z) &= \tau Q(z) + \tau R(z) \\ P((a, b)) &= Q(a) & \tau P(z) &= \tau Q(z)\, b(z) \\ P(a_1, \ldots, a_k) &= Q(a_1); \ldots; Q(a_k) & \tau P(z) &= \frac{\tau Q(z)}{(1 - a(z))^2} \\ P\{a_1, \ldots, a_k\} &= Q(a_1); \ldots; Q(a_k) & \tau P(z) &= c(z) \left( \tau Q(z) - \tau Q(z^2) + \tau Q(z^3) - \cdots \right) \\ P\{a_1, \ldots, a_k\} &= Q(a_1); \ldots; Q(a_k) & \tau P(z) &= c(z) \left( \tau Q(z) + \tau Q(z^2) + \tau Q(z^3) + \cdots \right) \end{align*} $$ For instance, a recursive type definition for trees $$ \mathcal{T} = \{a\} \times \mathcal{T}^*, $$ together with a recursive procedure specification scheme $$ Q(x : \mathcal{T}) := R(x); \text{ for } y \text{ root\_subtree\_of } x \text{ do } Q(y); $$ will result in the system of equations $$ \begin{align*} T(z) &= \frac{z}{1 - T(z)} \\ \tau Q(z) &= \tau R(z) + z \, \frac{\tau Q(z)}{(1 - T(z))^2}. \end{align*} $$ Observe that $T(z)$ is an algebraic function of degree 2, and, owing to the structure of the algorithm, $\tau Q(z)$ is expressed linearly in terms of itself. This is roughly the situation that we encounter when analyzing symbolic differentiation as well as many similar algorithms [Steyaert 1984]. The algebraic analyzer of $\Lambda\Upsilon\Omega_0$ implements a calculus based on the principles exposed above, but restricted to trees. Nonetheless (cf Fig. 1), it can produce automatic analyses of versions of matching, simplification, or various types of evaluation. ### 4. The Analytic Analyzer System At this stage, our task is to take a generating function, defined either explicitly (for non recursive data types) or implicitly via a functional equation (for most recursive data types). The current version of $\Lambda\Upsilon\Omega$ treats only functions that lead to explicit expressions after a possible usage of the ‘solve’ routine of Maple.
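When ‘solve’ yields no usable closed form, a defining fixed-point equation such as \( T(z) = z/(1 - T(z)) \) can still be expanded by iteration on truncated power series, each iteration fixing at least one more coefficient. A sketch of this (ours, not part of \( \Lambda\Upsilon\Omega_0 \)):

```python
N = 10  # truncation order

def mul(a, b):
    """Truncated product of two power series (coefficient lists of length N)."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

# Iterate T <- z / (1 - T); since T has valuation >= 1, the valuation of the
# error grows by one per iteration, so N iterations give N correct coefficients.
T = [0] * N
for _ in range(N):
    inv = [1] + [0] * (N - 1)      # running sum 1 + T + T^2 + ...
    power = [1] + [0] * (N - 1)
    for _ in range(1, N):
        power = mul(power, T)
        inv = [x + y for x, y in zip(inv, power)]
    T = [0] + inv[:N - 1]          # multiply 1/(1 - T) by z

# [z^n] T(z) counts plane trees with n nodes: the shifted Catalan numbers.
assert T[1:7] == [1, 1, 2, 5, 14, 42]
```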
We shall therefore limit ourselves to this case. 4.1. Analytic Principles Let \( f(z) \) be a function analytic at the origin. We assume further that \( f(z) \) is explicitly given by an expression, a blend of sums, products, powers, exponentials and logarithms. Most explicit generating functions constructed by the combinatorial tools of Section 3 are of this type. The starting point is Cauchy’s coefficient formula \[ [z^n]f(z) = \frac{1}{2\pi i} \int_{\Gamma} f(z) \frac{dz}{z^{n+1}}, \] where \([z^n]f(z)\) is the usual notation for the coefficient of \( z^n \) in the Taylor expansion of \( f(z) \). Two major classes of methods are applicable to determine the asymptotic behaviour of the Taylor coefficients of such functions: - For functions with singularities at a finite distance, the local behaviour of the function near its dominant singularities (the ones of smallest modulus) determines the growth order of the Taylor coefficients of the function. Asymptotic information is obtained by taking \( \Gamma \) to be a contour that comes close to the dominant singularities. - For entire functions, saddle point contours \( \Gamma \) are usually applicable. Several observations are useful here. First, functions defined by expressions are analytically continuable – except for possible isolated singularities – to the whole of the complex plane (though they may be multivalued). Second, by Pringsheim’s theorem, functions with positive coefficients (such is the case for our generating functions) always have a positive dominant singularity, a fact that eases considerably the search for singularities. Though a complete algorithm covering all elementary functions is not (yet) available, since the classification of singularities, even for such functions, is not fully complete, a good many functions arising in practice can be treated by the following algorithm. \textbf{Procedure} equivalent\((f: \text{expression}) : \text{expression};\) \{determines an asymptotic equivalent of \([z^n]f(z)\)\} 1.
Determine whether \( f(z) \) is entire or has singularities at a finite distance. 2. If \( f(z) \) has finite singularities, let \( \rho \) be the modulus of a dominant singularity. We know at least that \[ f_n = [z^n]f(z) \approx \rho^{-n}, \] up to subexponential factors. Compute a local expansion of \( f(z) \) around its dominant singularity (-ies). This is called a singular expansion. \textbf{2a}. If the singular expansion is of an ‘algebraico-logarithmic’ type, namely \[ f(z) \sim (1 - z/\rho)^{\alpha} \log^{\beta} (1 - z/\rho)^{-1} \quad \text{as} \quad z \to \rho, \] (4.2) then apply methods of the Darboux-Pólya type to transfer singular expansions to coefficients: \[ f_n = [z^n]f(z) \sim \rho^{-n} \, \frac{n^{-\alpha - 1}}{\Gamma(-\alpha)} \, \log^{\beta} n. \] (4.3) This applies generally to functions that are “not too large” near a singularity. \textbf{2b}. If the function is large near its singularity, for instance \[ f(z) \approx \exp\left(\frac{1}{(1 - z/\rho)^\alpha}\right), \] then apply saddle point methods as in (3) below. 3. If \( f(z) \) is entire, then use a saddle point integral. If this succeeds, we get \[ f_n = [z^n] f(z) \sim \frac{e^{h_n(R_n)}}{\sqrt{2\pi \, h_n''(R_n)}}, \] where \( h_n(z) = \log f(z) - (n + 1) \log z \), and \( R_n \) is such that \[ \left[ \frac{dh_n(z)}{dz} \right]_{z=R_n} = 0. \] This is the outline of the algorithm that we have implemented in Maple, with the minor exception of step (2b) (saddle point at a finite distance) and with the current limitation that singularities and saddle points should be within reach of Maple’s `solve` routine. It is important to note that a few theorems, whose conditions can be automatically tested, are used to support this algorithm. Singularity Analysis. The classical form of the Darboux-Pólya method requires differentiability conditions on error terms.
However, from [Flajolet, Odlyzko 1987], we now know that analytic continuation is enough to ensure the transition from (4.2) to (4.3), and by our earlier discussion, these conditions are always fulfilled for functions defined by expressions. Thus, the use of (2a) is guaranteed to be sound. Furthermore, that approach makes it possible to cope with singularities involving iterated logarithms as well (not yet implemented). Saddle Point Integrals. There has been considerable interest in those methods, due to their recognized importance in mathematical physics and combinatorial enumerations. We thus know, from works by Hayman, Harris and Schoenfeld, or Odlyzko and Richmond, classes of functions defined by closure properties for which saddle point estimates are valid. Such conditions, which are well adapted to combinatorial generating functions, can be checked inductively on the expression. 4.2. Some Applications Let us use a few examples to discuss some further features of ANANAS. The next four examples are all taken from combinatorial enumeration: (E1) Trees of cycles of cycles of beads; (E2) Involutive permutations; (E3) Children’s Rounds of [Stanley 1978]; (E4) Bell numbers counting partitions of \( n \).
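Before the session, the two ingredients of ‘equivalent’ can be cross-checked numerically on a function with known coefficients: Cauchy’s coefficient formula discretized over a circle, and the transfer of Step 2a. Here \( f(z) = (1-4z)^{-1/2} \) has \( f_n = \binom{2n}{n} \), dominant singularity \( \rho = 1/4 \) and singular exponent \( \alpha = -1/2 \) (a test sketch of ours, not part of ANANAS):

```python
import cmath
from math import comb, gamma

def cauchy_coeff(f, n, r, m=512):
    """[z^n] f(z) by the trapezoidal rule on the circle |z| = r, a direct
    discretization of Cauchy's coefficient formula."""
    pts = (r * cmath.exp(2j * cmath.pi * k / m) for k in range(m))
    return (sum(f(z) / z ** n for z in pts) / m).real

def transfer(rho, alpha, n):
    """Step 2a: [z^n] (1 - z/rho)^alpha ~ rho^(-n) n^(-alpha-1) / Gamma(-alpha)."""
    return rho ** (-n) * n ** (-alpha - 1) / gamma(-alpha)

f = lambda z: 1 / cmath.sqrt(1 - 4 * z)
# The contour integral reproduces the exact coefficient C(20, 10)...
assert abs(cauchy_coeff(f, 10, r=0.2) / comb(20, 10) - 1) < 1e-8
# ...and the transfer estimate is within O(1/n) of C(1000, 500).
assert abs(transfer(0.25, -0.5, 500) / comb(1000, 500) - 1) < 1e-2
```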
[E1]> equivalent(1/2*(1-sqrt(1-4*log(1/(1-log(1/(1-z)))))));
[E2]> equivalent(exp(z^2/2));
[E3]> equivalent((1-z)^(-2), 5);
[E4]> equivalent(exp(exp(z)-1));    [W(z) exp(W(z)) = z]

(Each call returns an asymptotic expansion of the \( n \)-th Taylor coefficient of its argument; the four results are discussed below.) Example 1 demonstrates the processing of functions with singularities at a finite distance. The singularity has an explicit form and the function behaves locally like a square root, whence the final result of the form: \[ C \, n^{-3/2} \left(1 - e^{\,e^{-1/4} - 1}\right)^{-n}. \] Example 2 shows the asymptotic analysis of the involution numbers (Knuth [1973, p. 65-67] does it by the Laplace method). It is treated here by “Hayman admissibility” (a classical notion bearing no direct relation to our previous usage of this word). Hayman’s Theorem provides a class of admissible functions for which a saddle point argument (Step 3 of procedure ‘equivalent’) can be applied: if \( f \) and \( g \) are admissible, \( h \) is an entire function and \( P \) is a real polynomial with a positive leading coefficient, then \[ \exp(f), \quad f + P, \quad P(f), \quad \text{and, under suitable conditions, } f + h \] are admissible.
These conditions can be checked syntactically here. Example 3 is a further illustration of singularity analysis in a non trivial case. Example 4, the classical asymptotics of the Bell numbers [De Bruijn 1981], resembles Example 2. It is treated here by Harris-Schoenfeld admissibility (which also provides complete expansions), and the corresponding step in the algorithm implements a theorem of Odlyzko and Richmond relating Hayman admissibility to Harris-Schoenfeld admissibility. 5. Conclusions We have presented here some preliminary design considerations for a system that would assist research in the analysis of algorithms. There are two benefits to be expected from such research. The first and most obvious benefit is to help an analyst explore statistical phenomena that seem “tractable” in principle, but are too intricate to be worked out by hand. The second, and in our view most important, one is that the design of such a system creates needs of a new nature in the methodology of algorithmic analysis. (Thus, our approach departs radically from “Artificial Intelligence”.) a. There is a need for extracting general program schemes that can be analyzed by these methods. In this way, we wish to attack the analysis of elementary but structurally complex programs, and we can hope to find general theorems relating the complexity and the structure of algorithms (cf [Steyaert 1984]). b. When making parts of complex asymptotics algorithmic, we naturally discover “gaps” that have never been revealed before. For instance, nobody seems to have considered such a simple asymptotic problem as $$[z^n] \exp \left( \log^2 \left( \frac{1}{1 - z} \right) \right).$$ In summary, we hope that the development of \( \Lambda\Upsilon\Omega \) will bring even more questions than answers! 6. Bibliography
CHAPTER 3 PROPOSED BLACK BOX ANALYTICAL MODEL 3.1 Introduction In the previous chapter, the different aspects of modeling Web applications (partitioning them into clusters, identifying FSMs for each cluster, combining these to represent the application itself as an FSM, and the patterns in the state transitions of the FSMs) were identified from the literature, and the groundwork for modeling was laid. An elegant model based on the ‘thread-per-request’ server architecture, able to represent a variety of Internet applications and capture the nuances of their implementation, is presented in this chapter. 3.2 Modeling Workload Workload characterization is a modeling process to reproduce the user access patterns to Web sites. In the study of workload characterization of traditional Web information servers, a request is the basic unit of analysis. The workload presented to a Web server is viewed as a stream of requests. The service request stream is taken as a stationary random process conforming to a normal distribution. The proposed service system considers a discrete random arrival pattern, each arrival being independent of previous arrivals. An arrival set is characterized as a set of requests, represented as $A\{\text{Time of arrival of requests}(T_i), \text{Number of request arrivals}(Q), \text{Number of requests of each service category}(Q_{c1}, Q_{c2}, \ldots, Q_{cn})\}$. Each request in the arrival set is represented as $R\{\text{ArrivalTime}(A), \text{ServiceTime}(S), \text{Reward}(R), \text{Category}(C), \text{UserId}(U)\}$. The arrival capacity is limited only by the system capacity (i.e., the maximum number of threads supported by the server); this is the maximum number of concurrent requests the server is capable of handling. The pseudocode for modeling the arrival pattern is specified in Algorithm 3.1. 3.2.1 Modeling Peak load Ideally, a peak load can be looked upon as an impulse load of user requests superposed on the regular load (which has a stationary distribution).
A peak load impulse should conform to the following: 1. It should occur randomly. 2. Its magnitude should be at least one order larger than the regular average load. 3. The execution strain on the system due to it should be conspicuously higher than that of the regular load. In other words the impulse value should be at least one order larger than the possible maximum value of \( \sum_{i=3}^{i} Q_i \) where \( Q_i \) is the number of service requests in \( T_i \) (i.e., \( i^{th} \) timeslot). The model can be modified to study services with any surge type peak load superposed on regular loads. Service under these conditions with a large number of users seeking concurrent service can be accomplished. <table> <thead> <tr> <th>Algorithm 3.1 Arrival</th> </tr> </thead> <tbody> <tr> <td><strong>begin</strong></td> </tr> <tr> <td>2^n balls are to be assigned three indices each</td> </tr> <tr> <td>( k = 0 )</td> </tr> <tr> <td><strong>while</strong> ( (- (n - 1) + 2k \leq (n - 1)) )</td> </tr> <tr> <td>Assign the first index to the next ( \binom{n}{k} ) balls as ( \mu - (n - 1) + 2 )</td> </tr> <tr> <td>( k++ )</td> </tr> <tr> <td>Assign third index serially from 1 to ( 2^n )</td> </tr> <tr> <td>( R = 2^n )</td> </tr> <tr> <td>( i = 0 )</td> </tr> <tr> <td><strong>while</strong> ( (R) )</td> </tr> <tr> <td>( r = \text{rand}() \ % \ R )</td> </tr> <tr> <td>if ( r ) is not already assigned as second index</td> </tr> <tr> <td>Assign it as second index for ball with ( i ) as second index</td> </tr> <tr> <td>Break</td> </tr> <tr> <td>end if</td> </tr> <tr> <td>end while</td> </tr> <tr> <td>( i++ )</td> </tr> <tr> <td>( R-- )</td> </tr> <tr> <td><strong>end while</strong></td> </tr> <tr> <td>#Basket is ready</td> </tr> <tr> <td>#Random selection of arrivals</td> </tr> <tr> <td>( R = \text{rand}() \ % \ 2^n )</td> </tr> <tr> <td>Pick the ball from the basket with the second index as ( r )</td> </tr> <tr> <td>The first index of this ball is the number of arrivals</td> </tr> <tr> 
<td><strong>End</strong></td> </tr> </tbody> </table> 3.2.1.1 Modeling Multiclass User Arrivals in Workload An arriving request may belong to any service category, and the category determines the QoS to be offered. The model has room for calibrated services, so that the service rendered is commensurate with the user's service category. The model can generate the arrival pattern of requests, in different timeslots, for each service category of user, the distribution being random with equal probability for each value. Algorithm 3.2 specifies the pseudocode for generating the number of requests for each service category. To make realistic estimations, the length of the experiments must be long enough (i.e., peak loads that span a lengthy period of time). Network attributes like source IP, destination IP, and bytes transferred do not play any role in the design of the scheme; hence their logs are omitted. ### 3.3 Modeling Application Characteristics #### 3.3.1 Service Architecture Many researchers and practitioners have been trying to find an effective way to model Web applications. FSM-based modeling has a long history in the design of Web applications [2]. An FSM is used to simulate processing in a server. In this study, we assume that each Web request comes with the following information: - Service Category. - Requested data. - Service time required. - Arrival time. A sequential set of FSM states comprises the service architecture. #### 3.3.2 Modeling User Behavior Recommender systems based on historic profiles of users can offer enhanced quality of service to preferred user service categories. In the experimentation, the initial one-third of the recorded log was used for training the system (collecting data for providing service differentiation) and the remainder was used for testing (offering service differentiation). The pseudocode for providing service differentiation based on user behavior (number of visits to the Website) is specified in Algorithm 3.3. 
--- **Algorithm 3.2 Multiclass–arrival**

```plaintext
begin
  Get the total number of arrivals r for the timeslot using Algorithm 3.1
  Assign the r arrivals randomly to the n service categories as a[1], a[2], ..., a[n]
  with a[1] + a[2] + ... + a[n] = r:
    a[1] = rand() mod (r + 1)
    ...
    a[i] = rand() mod (r - (a[1] + ... + a[i-1]) + 1)
    ...
    a[n] = r - (a[1] + ... + a[n-1])
end
```

Algorithm 3.3 Service Differentiation

begin
# Number of hits of users was used to provide differentiated service
Let hit = 0
Create a lookup table with the desired user mix (e.g., 30% known/registered users and 70% unknown/new users)
Generate the arrival pattern of requests using Algorithm 3.1
for (each request in the arrival lot)
begin
if (a free thread is available and no thread is yet assigned to the request)
begin
Accept the request for processing
Get user characteristics from the lookup table
Assign request characteristics
Process the request and update the user_profile
Update hit = hit + 1
end
else
begin
Select requests holding threads whose user_ids have a hit count below a defined threshold value
Terminate the selected requests and return their threads to the free pool
end
end if
end for
end

3.3.3 Modeling Session A set of successive requests submitted by a user constitutes a session. Typical interaction of users with an Internet Service is organized into sessions: sequences of related requests that together achieve a higher-level user goal. An example of such interaction is an on-line shopping scenario for an E-Commerce Website, which involves multiple requests that: 1. Search for particular products. 2. Retrieve information about a specific item (e.g., quantity and price). 3. Add it to the shopping cart. 4. Initiate the check-out process. 5. Finally commit the order. The duration of each session differs, as individual users make their own decisions on how to navigate the application. 
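The arrival generation of Algorithm 3.1 and the splitting of Algorithm 3.2 can be sketched in Python. Where the source is ambiguous about the ball counts, this sketch assumes binomial multiplicities C(n-1, k) for the value µ-(n-1)+2k, which makes the mean exactly µ; `draw_arrivals` and `multiclass_split` are illustrative names, not from the original.

```python
import random
from math import comb

def draw_arrivals(n, mu, rng=random):
    """Algorithm 3.1 (sketch): draw an arrival count for one timeslot.
    Ball values mu-(n-1)+2k are assumed to occur with binomial
    multiplicity C(n-1, k), so the mean is exactly mu."""
    ks = list(range(n))
    weights = [comb(n - 1, k) for k in ks]
    k = rng.choices(ks, weights=weights)[0]
    return mu - (n - 1) + 2 * k

def multiclass_split(r, n_cat, rng=random):
    """Algorithm 3.2 (sketch): split r arrivals over n_cat service
    categories so that the parts sum exactly to r."""
    parts, left = [], r
    for _ in range(n_cat - 1):
        x = rng.randrange(left + 1)   # 0..left inclusive
        parts.append(x)
        left -= x
    parts.append(left)                # remainder goes to the last category
    return parts
```

A value drawn this way always lies in [µ-(n-1), µ+(n-1)], matching the while-condition of Algorithm 3.1.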
‘BBAM-I’ – being explained in the sequel – is a basic scheme to model and study such Internet Services. In ‘BBAM-I’ a session is characterized by the following two attributes: **Session Length**: The number of requests submitted by a user in a session is called the ‘session length’. The model supports unlimited session length (because once the user is admitted into the system, service is guaranteed until termination/completion of the transaction). **Workload Mix**: Workload mix defines the proportion of service categories in which the site is visited by users. The model uses the recorded patterns of user behaviors to re-categorize users. ### 3.4 Proposed Model (‘BBAM-I’) The scheme here uses the traditional ‘thread-per-request’ model used by servers (APACHE). The service architecture of ‘BBAM-I’ is illustrated in Figure 3.1. An FSM is an abstract machine consisting of a finite number of states, input actions, output actions, and a finite number of transitions between states. The interaction between the users and the states in the FSM can be modeled using UBMG. **Working**: At the beginning of every time slot, threads from requests that completed execution in the final state of the FSM are returned to the free pool of threads. The incoming requests are assigned threads for execution until the request list is exhausted or the thread pool is exhausted, whichever is earlier. The scheme does not call for any detailed decision making for thread allocation. Once a thread is allocated to a request, it remains tied to the request until completion of execution. There can be time slots in which threads are idle and others in which requests are dropped, though the two never happen in the same slot. The pseudocode of ‘BBAM-I’ is specified in Algorithm 3.4. 3.4.1 The Stage The *stage* is the fundamental unit of processing. A *stage* (Figure 3.1) is a self-contained application component. 
At the highest level of abstraction a *stage* represents the associated functionalities to achieve a sub-goal of the overall Web application processing, with an integrated behavior model graph that represents the order in which these functions are invoked by users. At the next lower level of abstraction a *stage* represents all the associated Web pages used by the application to achieve the function, and the associated behavior model represents the transitions between Web pages. At the third lower level of abstraction a *stage* represents the steps in processing a Web page with the associated resource requirements. Figure 3.2 is a partial representation of such a ‘3-Tier’ scheme. The UBMG at the first level can be used to generate a virtual user’s invocation of functionalities of the application, and the UBMG at the next level can be used to generate the access patterns of users invoking the Web pages associated with a specific functionality of the application. Conceptually such repeated splitting of *stages* – in an ‘inverted tree’ form – can be carried through to the lowest grain level desired.

Algorithm 3.4 BBAM-I

begin
Generate the request arrival pattern using Algorithm 3.1, choosing parameters µ and σ²
Make resource commitments (maximum number of threads, stages – based on the FSM of the Web application)
Register the metrics to be collected
for (each time slot)
Generate the request lot for that time slot
Free the threads in the final stage (as those requests will have completed)
while (more requests in the request lot)
if (free thread is available)
begin
Allocate the thread to the request
Collect logs of request, stage, and resource
end
else
Drop the request
end if
end while
Compute and record the registered SLA metrics {throughput, response time, …}
end for
end

3.5 Performance Evaluation 3.5.1 Workload An extensive simulation study was carried out with the scheme. 
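The per-timeslot bookkeeping of Algorithm 3.4 can be sketched as follows. This is a simplification that assumes every accepted request occupies its thread for exactly S consecutive timeslots (one per FSM stage); the names are illustrative.

```python
def simulate_bbam1(arrivals_per_slot, T, S):
    """Sketch of Algorithm 3.4: 'thread-per-request' with a pool of T
    threads and an S-stage FSM. Returns (accepted, dropped, in_progress)."""
    stages = [0] * S                # requests currently resident in each stage
    free = T                        # free threads in the pool
    accepted = dropped = 0
    for arr in arrivals_per_slot:
        free += stages[S - 1]       # final-stage requests completed: threads return
        stages = [0] + stages[:-1]  # every resident request advances one stage
        take = min(arr, free)       # allocate threads while the pool lasts
        free -= take
        stages[0] = take
        accepted += take
        dropped += arr - take       # no free thread: drop the request
    return accepted, dropped, sum(stages)
```

With T = 100, S = 5 and a steady 20 arrivals per slot (exactly T/S), no request is ever dropped; at 30 per slot, drops appear once the in-flight demand exceeds the pool.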
Its mean µ and variance σ² were varied over wide enough ranges to accommodate extremes of possible real life situations. As the deviation ((T/S) – µ) increases the average drop rate reduces for all values of σ². Here T and S stand for the maximum number of threads (system capacity) and the maximum number of stages (number of states in the FSM) respectively. Table 3.1 shows the variation with σ² of the probability of a request drop in a set of S successive timeslots at ((T/S) – µ) ≥ 2; this being negligible, the simulation need be carried out only for ((T/S) – µ) < 2. Similarly for ((T/S) – µ) < –2, from the symmetry of the service distribution one can see that the un-serviced request rate has an identical behavior. Combining these facts, the simulation study was limited to the range | (T/S) – µ | < 2 for all σ² values. The representative nature of a workload depends directly on the quality of its control parameters. In the experimental runs, several criteria (time-varying nature, number of requests, and mixes of service categories) were used to test the representative nature of the workload. The experimental measurement collection showed that the model clearly provided a concise representation of the workload characteristics. 3.5.2 Metric Evaluation Request Drop ($D_t$): Consider the continuous operation of the system, with $x_i$ the number of arrivals in the $i^{th}$ time slot. The total arrivals ($A_T$) in the $S$ successive time slots up to and including slot $t$ are: $$A_T = \sum_{j=0}^{S-1} x_{t-j}$$ \hspace{1cm} (3.1) The number of requests dropped in the $t^{th}$ time slot is $D_t = \max(A_T - T, 0)$. For a given arrival pattern $\{\mu, \sigma^2\}$ one can compute this quantity. 
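The drop metric can be computed directly from an arrival trace via a sliding window of S slots. A minimal sketch (the count is clipped at zero, since a negative number of drops has no meaning):

```python
def drops_per_slot(x, S, T):
    """D_t = max(A_T - T, 0), with A_T the total arrivals in the S
    successive slots ending at slot t (Eq. 3.1)."""
    return [max(sum(x[max(0, t - S + 1):t + 1]) - T, 0)
            for t in range(len(x))]
```

For x = [30]*5, S = 5, T = 100 the windowed sums are 30, 60, 90, 120, 150, giving drops [0, 0, 0, 20, 50].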
Response Time ($R_T$): With $\tau$ as the execution time of a request in each stage and $w_i$ the waiting time in the $i^{th}$ stage before thread allotment, the response time is computed as: $$R_T = \sum_{i=1}^{S-1} w_i + S\tau$$ \hspace{1cm} (3.2) Availability: Availability is the ratio of the number of requests serviced to the total number of requests arrived. User Frustration: User frustration is modeled in ‘BBAM-I’ as a measure of the number of times the user’s request has been dropped, based on the history of visits. User frustration of a customer is the ratio of the number of requests of the customer dropped to the total number of requests made by that customer (hits). 3.5.3 Simulation Results The simulation run duration was selected using the two following criteria: 1. All possible service request values should appear a sufficient number of times in the simulation to bring out all behavioral characteristics. 2. The transients associated with the initial part of the simulation should be excluded when culling out the data representing regular steady operation. Further, for each selected $\mu$, $\sigma^2$ was varied over the full range. With each of these sets, random arrival patterns were generated for the study. Extensive simulation was carried out each time over a large range of time slots. The results have been consolidated and representative cases are presented in Table 3.2. 
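Availability and user frustration are plain ratios and can be checked against the logged figures; the numbers in the comments below are taken from Table 3.2 (σ² = 2 column) and the Figure 3.4 discussion.

```python
def availability(serviced, arrived):
    """Ratio of requests serviced to total requests arrived."""
    return serviced / arrived

def user_frustration(dropped, hits):
    """Ratio of a user's dropped requests to that user's total requests."""
    return dropped / hits

# Table 3.2, sigma^2 = 2: throughput 1889 of 1980 submitted -> 95.404 %
# Figure 3.4: user ID = 85 was dropped 8 times out of 29 visits
```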
Table 3.2: Summary of Logs for $S = 5$, $T = 100$, $\mu = 22$, and Varied $\sigma^2$ Values <table> <thead> <tr> <th>Scheme - BBAM-I</th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td>Number of threads($T$)</td> <td>100</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Number of Stages($S$)</td> <td>5</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Number of Timeslots of simulation</td> <td>100</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Mean($\mu$) = 22</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Variance($\sigma^2$)</td> <td>2</td> <td>3</td> <td>7</td> <td>11</td> </tr> <tr> <td>Total Submitted requests(Total Arrivals)</td> <td>1980</td> <td>1986</td> <td>2000</td> <td>2000</td> </tr> <tr> <td>Request in progress - (Will be realized in successive time slots)</td> <td>100</td> <td>100</td> <td>100</td> <td>100</td> </tr> <tr> <td>Throughput</td> <td>1889</td> <td>1880</td> <td>1858</td> <td>1854</td> </tr> <tr> <td>Idle thread periods</td> <td>0</td> <td>0</td> <td>76</td> <td>122</td> </tr> <tr> <td>Service Efficiency (%)</td> <td>95.404</td> <td>94.6626</td> <td>92.9</td> <td>92.7</td> </tr> <tr> <td>Number of Dropped Requests</td> <td>101</td> <td>106</td> <td>152</td> <td>156</td> </tr> <tr> <td>System Utilization (%)</td> <td>100</td> <td>100</td> <td>99.20</td> <td>98.72</td> </tr> </tbody> </table> With the above procedure the system was able to provide the same level of user satisfaction to all accepted user requests. With service differentiation introduced in the scheme, to differentiate users based on their history of visits, the system was allowed to drop user requests if the number of hits was below a set threshold value. The freed threads were used to accommodate new requests. Graphs in Figure 3.3 illustrate the measure of service level offered to users based on the collected profile. It was observed that, as long as the system was stable, all users were serviced. 
With increasing load, users were denied access based on their previous history of visits. From the graph in Figure 3.4 it can be seen that the user with ID = 85 was dropped 8 times and served 21 times, out of a total of 29 visits. Figure 3.3: Performance details from simulation runs of 100 timeslots. (a) Idle Thread Periods (b) Service Efficiency (c) Request Drops (d) Resource Utilization (e) Request Drops (Timeslot-wise) 3.5.4 General Observations from the Logs 1. The maximum possible continuous service request execution rate is $T/(S\tau)$ requests per unit time. 2. If $R$ is the average request rate, $L = T/(S\tau) - R$ is the latency margin available as a cushion to meet peak / emergency loads. 3. Stable operation is possible only if $L \geq 0$. 4. With $Q_j$ as the number of service requests in the $j^{th}$ time slot, in general a service request in the $i^{th}$ time slot can be denied service if $\sum_{j=i-S+1}^{i} Q_j > T$. Here $\sum_{j=i-S+1}^{i} Q_j$ is a function of the average arrival rate $R$, $L$, and the variance $\sigma^2$. 5. In turn the request denial rate is a direct function of $\sigma^2$ and an indirect function of $L$. Further, the thread utilization $T_u$, defined as (average number of threads utilized / $T$), approaches 100% as $L \rightarrow 0$ and also as $\sigma^2 \rightarrow 0$. 6. Service efficiency, defined as (average number of requests serviced / average request rate), approaches 100% as long as $L > 0$ and as $\sigma^2 \rightarrow 0$. 3.5.5 Discussions Ideally, independent of the $\sigma^2$ value selected, the cumulative number of requests should increase linearly with the mean value $\mu$. As was explained in Section 3.5.4, as long as the margin $L \geq 0$ the throughput was able to cope with the variations in the load, but for $L < 0$ the throughput is not able to cope with the demand; the variation of service efficiency and drop rates with $\sigma^2$ in Figure 3.3c brings this out. 
Further one can see that service efficiency is practically 100% for all $L \geq 0$; the deficiency increases with $\sigma^2$ (Figure 3.3b). In line with this, the drop rate also increases marginally as $\sigma^2$ increases, and the disparity is more pronounced with increase in $L$. Similar observations hold for system utilization (idle thread periods) as well. Graphs in Figures 3.3a, 3.3d, and 3.3e bring this out clearly. The graph in Figure 3.3e shows the access rates and drop rates of users due to non-availability of threads. 3.5.5.1 Peak Load ‘BBAM-I’ does not address peak loads on servers: once the pool of threads in the server is exhausted, further requests arriving in that time slot are dropped. Requests are accepted again only when one or more threads in the system become free. 3.6 Summary of Findings In the traditional ‘thread-per-connection’ model, when all server threads are busy the server stops accepting new connections; this is the type of overload protection used by the APACHE Web Server [66]. The serious problem with this approach is that performance in terms of service availability is governed by the number of threads rather than by the user load / length of time. Not accepting new connections gives the user no indication that the site is overloaded. Finally, this approach is extremely unfair to clients, as some clients will be able to establish connections quickly while others will experience large back-off delays; i.e., the scheme is not efficient at times of increased workloads. This scheme is useful only for applications where resource guaranteeing is more important than serving a higher number of concurrent users (e.g., a multimedia streaming application with a periodic time deadline). That is, the scheme is useful for an Internet Service where the focus is on individual requests, for which it is permissible (and often desirable) to meet statistical performance targets over a large number of requests. 
3.7 Conclusions The basic model for studying the performance of Web applications based on the ‘thread-per-request’ service architecture has been presented in this chapter. The model is general enough to capture workloads imposed on a service, to measure them, and to update model parameters. The implications of this architectural model at times of overload are as follows: 1. Thread pool exhaustion leads to a slow or unresponsive server, and the service becomes unavailable when it is needed most. 2. Users who managed to get connected to the server enjoy high quality of service, while other users waiting for a connection are starved. 3. Request rejections lead to user frustration, which may have a different impact on different job profiles; e.g., in E-Commerce it takes dissatisfied customers only seconds to leave a site and take their business to a competitor. It is to be emphasized here that the scheme is too simplistic to be adopted by reward-driven Internet Services. However, the scheme was selected to understand the nuances of an implementation and to make a case for a sophisticated one. Still, the model could be useful for new Internet Services that are yet to become popular.
Elementary transformation analysis for Array-OL

Paul Feautrier

May 2007

HAL Id: hal-02102502
https://hal-lara.archives-ouvertes.fr/hal-02102502
Submitted on 17 Apr 2019

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Abstract: Array-OL is a high-level specification language dedicated to the definition of intensive signal processing applications. Several tools exist for implementing an Array-OL specification as a data parallel program. While Array-OL can be used directly, it is often convenient to be able to deduce part of the specification from a sequential version of the application. This paper proposes such an analysis and examines its feasibility and its limits.

Keywords: Array-OL, multidimensional signal processing, program analysis

1 Introduction

In the Array-OL formalism [1, 2], a program is a network of processes which communicate through shared arrays. A process is made of one or more parallel loops. At each iteration of these loops, a task (or elementary transform) is executed. The elementary transform may contain one or more loops, which are executed sequentially. The execution of an elementary task can be decomposed into three steps: - Move portions of the input array(s) (regions) to the local memory of the processor executing the task. - Execute the elementary transform and generate portions of the output array(s). 
- Move the results to the output array(s). In order to simplify code generation, the input and output regions must move uniformly across the shared arrays. It is admissible that each elementary transform use only a subset of regularly spaced entries in the input and output regions. In the present version of the software, regions must not overlap, as this would preclude parallel execution of the outer loops. The useful elements of a region are collected in a pattern, which must be a rectangular parallelepiped of fixed size. The Array-OL formalism may be used directly. The programmer is responsible for constructing the elementary transform, identifying the input and output regions, checking parallelism and specifying the regions parameters. Another possibility is to infer the Array-OL specification from a sequential version of the program. This requires the solution of three problems: - Rewriting the sequential program in such a way that the outer loops have no dependences. - Deducing the shape and size of the regions from an analysis of the array subscript functions. - Rewriting the sequential code by substituting pattern accesses to the original array accesses. This note is dedicated to a proposal for the solution of the second and third problems. The assumption is that one is given the sequential code, together with a list of input and output arrays, and an indication of which loop(s) are to be considered as the outer (repetition) loop(s). 2 Paving Let $A$ be an input or output array and let its occurrences in the sequential code be numbered from 1 to $N$. Let $r$ be the counter(s) of the repetition loop(s), and let $j^k$ be the counter(s) of the inner loop(s) that surround occurrence $k$ of $A$. Let $e^k(r, j^k)$ be its subscript function. $e^k$ is a vector function whose dimension is the rank of $A$. To be amenable to an Array-OL implementation, the subscript function $e^k$ must be affine in $r$ and $j^k$. 
A convenient way of checking this property consists in computing the two Jacobian matrices: \[ P^k = \left( \frac{\partial e^k}{\partial r} \right), \quad B^k = \left( \frac{\partial e^k}{\partial j} \right), \] checking that they do not depend on \( r \) or \( j^k \), and verifying the identity: \[ e^k(r, j^k) = P^k r + B^k j^k + e^k(0,0). \] In Array-OL terminology, \( P^k \) is the paving matrix, and \( e^k(0,0) \) is the origin of the paving. The elements of these entities may be numbers, or they may depend on constants, which must be given numerical values just before code generation. References with different paving matrices may be separated by arbitrary distances in the source or target array; it is not possible to group them efficiently; they must be implemented as separate channels. In the following example: ```c myTE(in[][], out[][]) { for (i = 0; i < 7; i++)      /* TE (repetition) loop */ { for (k = 0; k < 11; k++) { S = 0; for (j = 0; j < 100; j++) { S += in[0][j+11] * in[i+1][k+j]; } out[i][k] = S; } } } ``` there are two references to \( \text{in} \) with respective subscript functions \( e^1(i, k, j) = \begin{pmatrix} 0 \\ j + 11 \end{pmatrix} \) and \( e^2(i, k, j) = \begin{pmatrix} i + 1 \\ k + j \end{pmatrix} \). The corresponding paving matrices are \( P^1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \) and \( P^2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \). Hence, the two accesses must be handled separately. In the following, I assume that accesses to \( A \) have been partitioned according to their paving matrix, and consider only one partition at a time. The size of the repetition space is deduced simply from the bound(s) of the elementary transform loop(s). In the Spear/DE implementation of Array-OL, there may be further constraints on the paving matrix (e.g. that it be a permutation of a diagonal matrix). 
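The Jacobian computation is mechanical; since the subscript functions are affine, the columns can be read off exactly by unit-step differences, with no symbolic algebra needed. A dependency-free sketch for the second reference in[i+1][k+j], assuming (as in the example) that i is the repetition counter and (k, j) are the inner counters:

```python
def e2(i, k, j):
    return (i + 1, k + j)       # subscript function e^2

def column(f, base, axis):
    """One Jacobian column of f along one argument (exact: f is affine)."""
    bumped = list(base)
    bumped[axis] += 1
    return [b - a for a, b in zip(f(*base), f(*bumped))]

P2 = [column(e2, [0, 0, 0], 0)]                   # d e^2 / d i   (paving)
B2 = [column(e2, [0, 0, 0], a) for a in (1, 2)]   # d e^2 / d(k, j)
# columns: P2 = [[1, 0]], B2 = [[0, 1], [0, 1]]

# affinity check: e^2(r, j^k) == P^2 r + B^2 j^k + e^2(0, 0)
origin = e2(0, 0, 0)
for (i, k, j) in [(2, 3, 5), (7, 1, 4)]:
    lin = [P2[0][r] * i + B2[0][r] * k + B2[1][r] * j + origin[r]
           for r in range(2)]
    assert tuple(lin) == e2(i, k, j)
```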
### 3 Pattern and fitting A pattern is a compact specification of all the elements of an array that are accessed, with references having the same paving matrix, in one iteration of the external loop(s). When discussing patterns, one has to consider three frames of reference (see Fig. 1).

Figure 1: Data access in Array-OL

The first one is the original (input or output) array. Its dimension is the rank of the array, noted \(|A|\), and its coordinates are called *subscripts*. The shape of an array is always a (hyper-) rectangle. The second frame of reference is the iteration space of the inner loops of the elementary transform. Its dimension is the number of loops enclosing the reference, noted \(d^k\), and its coordinates are called *loop counters*. There may be as many iteration domains as there are references, or several references may share the same iteration domain. The shape of an iteration domain is arbitrary. The only requirement in the present context is to be able to construct its vertices, either because the iteration domain is rectangular, or because it can be expressed as a convex polyhedron with parameters in the constant terms only. The iteration domain of reference \(k\) will be denoted as \(D^k\) in what follows. The third frame of reference is the pattern. According to Boulet [1] the pattern is always of rectangular shape. The pattern associated to reference \(k\) is denoted by \(T^k\) and its dimension is \(p^k\). The associated fitting matrix, \(F^k\), connects the pattern space to the array space and its dimension, accordingly, is \(|A| \times p^k\). The relations among these objects are as follows. Firstly, the local subscript function \(f^k(j^k) = B^k j^k + e^k(0,0) = e^k(0,j^k)\) gives the coordinates of an array cell relative to the reference point \(P^k r\), which moves according to the paving matrix. Next, the image \(f^k(D^k)\) is the *footprint* of reference \(k\). Its shape is arbitrary. 
The images of the vertices of \(D^k\) by \(f^k\) form a superset of the vertices of the footprint; a representation as a convex polyhedron can be recovered by one application of the Chernikova algorithm [3]. Lastly, the image of the pattern by the fitting matrix must enclose the footprint, and it must be feasible to retrieve a datum from the pattern instead of the original array. This implies that there exists a function \(\phi^k\) from \(D^k\) to \(T^k\) such that for every iteration vector \(j^k \in D^k\), \(f^k(j^k) = F^k \phi^k(j^k)\). In the text of the elementary transform, \(\phi^k\) must be substituted for \(e^k\) in reference \(k\) to \(A\). As one may see from this discussion, while the iteration domain and footprint are fixed once the sequential program is given, the choice of the pattern and fitting matrix is somewhat arbitrary. There are two obvious solutions: in the first one, the pattern is the smallest rectangular box enclosing the footprint, the fitting matrix is the identity, and the subscript function is not changed. In the second solution, the pattern is isomorphic to the iteration domain (provided it is a parallelepiped), $B^k$ is the fitting matrix, and the new subscript function is the identity. In signal processing applications, it is often the case that several references to the same array have similar subscript functions; constructing only one pattern for several references is an interesting optimization. However, this should not be obtained at the cost of a large overhead in the size of the pattern. In other words, the number of useless elements in the pattern must be minimized. Useless elements come from two sources: - A subscript matrix which is not of full row rank: the pattern will have more dimensions than the footprint. - A subscript matrix whose determinant is not of modulus one: there will be holes (unused elements) in the footprint. The inverse of the determinant gives an asymptotic evaluation of the ratio of useful elements. 
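The first of the two obvious solutions (smallest enclosing box, identity fitting) needs only the images of the iteration-domain vertices. A sketch, with `box_pattern` an illustrative name; the example uses the reference in[i+1][k+j] from Section 2 at a fixed repetition point, whose inner matrix is B = [[0,0],[1,1]] with offset (1, 0):

```python
def box_pattern(B, b, vertices):
    """B: matrix as list of rows; b: offset vector; vertices: vertices of
    the iteration domain. Returns (mins, maxs) of the smallest box
    enclosing the footprint {B v + b : v a vertex}."""
    imgs = [tuple(sum(B[r][c] * v[c] for c in range(len(v))) + b[r]
                  for r in range(len(B)))
            for v in vertices]
    mins = tuple(min(p[r] for p in imgs) for r in range(len(B)))
    maxs = tuple(max(p[r] for p in imgs) for r in range(len(B)))
    return mins, maxs
```

For (k, j) ranging over [0, 10] x [0, 99], the footprint of in[i+1][k+j] is enclosed by the box [1, 1] x [0, 109].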
The next section presents a method for computing a pattern and a fitting matrix in the general case (many references). This method can only be applied if all elements of the matrices $B^k$ and the vectors $b^k$ have known numerical values. Section 5 presents fail-safe solutions for cases in which these elements depend on unknown parameters. ### 4 The General Case The basic observation is that a conservative estimate of the footprint can be obtained by computing the projection of each iteration domain by the associated subscript function, then constructing a convenient superset of the union of these projections. One practical method consists in projecting the vertices of the iteration domains. One then gathers all such projections, and constructs their convex hull by familiar (e.g., Chernikova’s) algorithms. To reduce the size overhead, one should notice that a useful point for reference $k$ also belongs to the lattice which is generated by the column vectors of $B^k$. Hence, $B^k$, properly simplified (see later) could be used as the fitting matrix. However, in the case of several references, we have to combine several lattices into one, since each pattern has only one fitting matrix. As an illustration of this construction, consider the one dimensional case. A one-dimensional lattice is simply a set of regularly spaced points. Combining two lattices generates a lattice whose spacing is the gcd of the component spacings. The many-dimensional equivalent of the gcd is the construction of the Hermite normal form of the subscript matrices. Let $\Lambda(B, b)$ be the lattice generated by $B$ with origin $b$, i.e. the set of points $\{Bx + b \mid x \in \mathbb{N}^d\}$. Let $L^1 = \Lambda(B^1, b^1)$ and $L^2 = \Lambda(B^2, b^2)$ be two such lattices. I claim that the union of $L^1$ and $L^2$ is included in the lattice $L = \Lambda([B^1B^2(b^2 - b^1)], b^1)$. **Proof** Let $B^1.x + b^1$ be a point of $L^1$. 
We have:
$$B^1.x + b^1 = B^1.x + B^2.0 + (b^2 - b^1).0 + b^1,$$
hence $B^1.x + b^1$ is in $L$. Similarly:
$$B^2.y + b^2 = B^1.0 + B^2.y + (b^2 - b^1).1 + b^1.$$
I conjecture that $L$ is the smallest lattice which includes $L^1$ and $L^2$. The proof is obvious if the $b$s are null. The general case is left for future work. □

The construction can be extended to any number of component lattices. The resulting matrix is $[B^1 \ldots B^N \, (b^2 - b^1) \ldots (b^N - b^1)]$ and the origin is $b^1$. Furthermore, $b^1$ can be moved to the origin of the paving and hence taken as 0 when computing the fitting.

In the case where $B$ has been obtained by mixing many references, it must be simplified before being used for an Array-OL specification. The starting point of this simplification is the row echelon form of $B$. One can show (see the appendix) that there exist two unitary matrices $P$ and $U$ such that:
$$B = P \begin{bmatrix} H & 0 \\ C & 0 \end{bmatrix} U,$$
where $H$ is a square upper triangular matrix of size $r \times r$ with positive diagonal coefficients, $C$ is arbitrary, and both 0 represent null matrices of appropriate sizes. $r$ is the row rank of $B$. Furthermore, $U$ can be partitioned, row-wise, into two matrices of size $r \times d$ and $(d-r) \times d$, $U = \begin{bmatrix} U' \\ U'' \end{bmatrix}$.

Let $j$ be a point in the iteration domain of the inner loops. The corresponding point in the footprint is:
$$Bj = P \begin{bmatrix} H & 0 \\ C & 0 \end{bmatrix} \begin{bmatrix} U' \\ U'' \end{bmatrix} j \quad (1)$$
$$= P \begin{bmatrix} H \\ C \end{bmatrix} (U'j) \quad (2)$$

One possible interpretation of this formula is that the pattern for the current reference is the image of its iteration domain by $U'$, and that the corresponding paving matrix is $P \begin{bmatrix} H \\ C \end{bmatrix}$. In the body of the elementary transform, accesses to $Bj$ in the input or output array have to be replaced by accesses to $U'j$ in the pattern.
It may be that the pattern computed in this way is not rectangular, in which case it must be "boxed" by computing the component-wise minima and maxima of its extreme points. The dimension of the pattern is $r$.

It is interesting to notice that this general solution reduces to one of the approximate methods above in special cases. If $B$ is unitary, then its row echelon form is the unit matrix. In that case, the pattern is the footprint, possibly extended to a rectangular box, and the fitting matrix is the identity. Conversely, if $B$ is already in row echelon form, $P$ and $U$ are identities. The pattern is isomorphic to the iteration space, and $B$ is the fitting matrix.

### 5 The Parametric Case

Parameters occur mostly in loop bounds. They may also appear as strides and, more seldom, in the coefficients of subscript functions. In the Array-OL formalism, the repetition loops must be square. Hence, their bounds may be extracted directly from the program text. The extraction of the paving matrix is a simple derivative computation, which is an easy task for a competent computer algebra system. Similarly, the $B^k$ matrices are the result of a derivation, and may contain parameters.

There are no restrictions on the inner loops. For the construction of the pattern, one needs to know the vertices of the inner iteration domain. There are three cases:

- The bounds are constant: they can be extracted even if parametric.
- The bounds are affine expressions in other loop counters and parameters: the vertices can be computed with the help of the polylib.
- In other cases, there is no way of computing vertices, but the user may supply a bounding box.

The computation of the row echelon form can be done only if the matrix is known numerically, except in two cases: the matrix is $1 \times 1$ (it is its own normal form) or $2 \times 2$.
The row echelon form of $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is $\begin{pmatrix} \gcd(a, b) & 0 \\ cu + dv & (ad - bc) / \gcd(a, b) \end{pmatrix}$, where $u$ and $v$ are integers such that $au + bv = \gcd(a, b)$, whose existence is guaranteed by Bézout's identity.

If none of these circumstances applies, the solution of last resort is to use one of the approximate schemes above. For instance, if the vertices of the inner iteration domain are available, it is possible, whatever the $B$ matrix, to compute the vertices of the footprints and to enclose them in a rectangular box. The paving matrix is then the identity.

### 6 Extensions

The Syntol tool computes dependences; it is thus possible to check that the repetition loops are actually parallel. One must take care that Syntol will find dependences if temporary scalars are used in the code of the elementary transforms. These scalars must be expanded or privatized at code generation time.

Overlap between patterns (or, rather, between footprints) is another concern. For input arrays, overlap is just a cause of inefficiency, since some array cells will be copied several times to processors. Overlap for output arrays is more dangerous, since it may induce non-determinism. The existence of overlap may be tested provided one stays inside the polytope model (affine loop bounds and indexing functions, with numerical coefficients and linear parameters). In the same context, it is possible to quantify the overhead by comparing the size of the pattern and the size of the real footprint using the barvinok library [4].

### A Computing the row echelon form of a matrix

For more details, see [3]. Let $B$ be an arbitrary matrix of size $p \times q$.

1. At any stage of the computation, we have constructed two unitary matrices $P$ and $U$ such that:
$$B = PB'U, \quad B' = \begin{bmatrix} H & 0 \\ C & D \end{bmatrix}$$
where $H$ is lower triangular with positive diagonal coefficients.
Initially, $P$ and $U$ are identity matrices, $H$ and $C$ are empty and $D = B$. Let $i$ be the index of the first row of $C$ and $D$.

2. If $D$ is null, the process stops.

3. If not, let $j$ be the index of some nonzero row of $D$. Let $\pi_{ij}$ be the unitary matrix that permutes rows $i$ and $j$ of $B'$. Since $\pi_{ij}$ is its own inverse, one can write:
$$B = (P\pi_{ij})(\pi_{ij}B')U,$$
and the new $D$ has a non-zero first row.

4. Let $k$ be the index of a negative element in the first row of $D$. Let $\sigma_k$ be the unit matrix with the $k$-th diagonal element set to $-1$. Since $\sigma_k$ is its own inverse, one can write:
$$B = P(B'\sigma_k)(\sigma_k U),$$
and element $k$ in the first row of $D$ is now positive.

5. If all elements in the first row of $D$ are positive, let $l$ be the index of the smallest element, and let $\pi_{il}$ be the matrix that interchanges columns $i$ and $l$ of $B'$. Again:
$$B = P(B'\pi_{il})(\pi_{il}U)$$
and now the first element of the first row of $D$ is the smallest.

6. Let $m > i$ be the index of some nonzero element in the first row of $D$. Set $\alpha = B'_{im} \div B'_{ii}$. By construction, $\alpha > 0$. Let $\kappa_{im}(\alpha)$ be the identity matrix with $-\alpha$ added in position $(i, m)$. It is easy to see that the inverse of $\kappa_{im}(\alpha)$ is $\kappa_{im}(-\alpha)$. Hence:
$$B = P(B'\kappa_{im}(\alpha))(\kappa_{im}(-\alpha)U)$$
and element $B'_{im}$ has been replaced by $B'_{im} \bmod B'_{ii}$.

7. If the only non-zero element of the first row of $D$ is the first element, then $i$ can be increased by 1.

These transformations must be applied until no further progress is possible (i.e., when case 2 applies). Matrix $B'$ is then in the required form, and since all the elementary matrices $\pi$, $\sigma$ and $\kappa$ are unitary, the resulting $P$ and $U$ are unitary. In fact, $P$ is even a permutation matrix.
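This procedure can be sketched compactly in Python (all function names are ours; for brevity, only $B'$ is computed and returned, without accumulating $P$ and $U$). The general routine is checked against the closed $2 \times 2$ formula given earlier for the parametric case.

```python
def ext_gcd(a, b):
    """Extended Euclid: (g, u, v) with a*u + b*v == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = ext_gcd(b, a % b)
    return (g, v, u - (a // b) * v)

def echelon_2x2(a, b, c, d):
    """Closed formula for the 2x2 case (assumes a, b >= 0, not both 0)."""
    g, u, v = ext_gcd(a, b)
    return [[g, 0], [c * u + d * v, (a * d - b * c) // g]]

def row_echelon(B):
    """Repeat steps 2-7 of the appendix until D is null; return B'."""
    M = [row[:] for row in B]
    p, q = len(M), len(M[0])
    i = 0
    while i < min(p, q):
        # step 3: bring a nonzero row of D into position i
        j = next((r for r in range(i, p)
                  if any(M[r][k] for k in range(i, q))), None)
        if j is None:
            break                                  # step 2: D is null
        M[i], M[j] = M[j], M[i]
        # step 4: make row i of D nonnegative by column sign flips
        for k in range(i, q):
            if M[i][k] < 0:
                for r in range(p):
                    M[r][k] = -M[r][k]
        while True:
            # step 5: move the smallest positive element to column i
            l = min((k for k in range(i, q) if M[i][k]),
                    key=lambda k: M[i][k])
            if l != i:
                for r in range(p):
                    M[r][i], M[r][l] = M[r][l], M[r][i]
            # step 6: reduce the other elements modulo the pivot
            done = True
            for m in range(i + 1, q):
                if M[i][m]:
                    a = M[i][m] // M[i][i]
                    for r in range(p):
                        M[r][m] -= a * M[r][i]
                    done = False
            if done:
                break                              # step 7: advance i
        i += 1
    return M

B = [[4, 6], [1, 3]]
print(row_echelon(B))             # [[2, 0], [2, 3]]
print(echelon_2x2(4, 6, 1, 3))    # same result, via the closed formula
```

Note how the pivot $\gcd(4, 6) = 2$ appears in position $(0, 0)$, as predicted by the $2 \times 2$ formula.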
Behavior models and verification
Lecture 3
Jan Kofroň, František Plášil

Model checker: Spin

[Figure: model-checking overview — a model (Markov chains, timed automata, labelled transition systems, Kripke structures) and a property specification (e.g. AG(start → AF heat)) are fed to the model checker, which answers "property satisfied" or "property violated".]

SPIN - Introduction

- **SPIN** (= Simple Promela Interpreter)
  - is a tool for analysing the logical consistency of concurrent systems, specifically of data communication protocols.
  - state-of-the-art model checker, used by >2000 users
- Concurrent systems are described in the modelling language called Promela.
- **Promela** (= Protocol/Process Meta Language)
  - allows for the dynamic creation of concurrent processes.
  - communication via message channels can be defined to be
    - synchronous (i.e. rendezvous), or
    - asynchronous (i.e. buffered).
  - resembles the programming language C
  - specification language to model finite-state systems

Promela Model

- A **Promela model** consists of:
  - type declarations
  - channel declarations
  - variable declarations
  - process declarations
  - [init process]
- A Promela model corresponds with a (usually **very large**, but) finite transition system, so
  - no unbounded data
  - no unbounded channels
  - no unbounded processes
  - no unbounded process creation

```promela
mtype = {MSG, ACK};
chan toS = ...;
chan toR = ...;
bool flag;

proctype Sender() { ... }     /* process body */
proctype Receiver() { ... }

init { ... }                  /* creates processes */
```

Processes (1)

- A process type (proctype) consists of
  - a name
  - a list of formal parameters
  - local variable declarations
  - a body

```promela
proctype Sender(chan in; chan out) {
  bit sndB, rcvB;
  do
  :: out ! MSG, sndB ->
     in ? ACK, rcvB;
     if
     :: sndB == rcvB -> sndB = 1-sndB
     :: else -> skip
     fi
  od
}
```

The body consists of a sequence of statements.
Processes (2)

- A process
  - is defined by a proctype definition
  - executes concurrently with all other processes, independent of speed of behaviour
  - communicates with other processes
    - using global (shared) variables
    - using channels
- There may be several processes of the same type.
- Each process has its own local state:
  - process counter (location within the proctype)
  - contents of the local variables

Processes

- Processes are created using the run statement (which returns the process id).
- Processes can be created at any point in the execution (within any process).
- Processes start executing after the run statement.
- Processes can also be created by adding active in front of the proctype declaration.

```promela
proctype Foo(byte x) {
  ...
}

init {
  int pid2 = run Foo(2);
  run Foo(27);
}

active[3] proctype Bar() { ... }   /* [3]: number of processes (optional);
                                      parameters will be initialised to 0 */
```

```promela
/* A "Hello World" Promela model for SPIN. */
active proctype Hello() {
  printf("Hello process, my pid is: %d\n", _pid);
}
init {
  int lastpid;
  printf("init process, my pid is: %d\n", _pid);
  lastpid = run Hello();
  printf("last pid was: %d\n", lastpid);
}
```

```
$ spin -n2 hello.pr
init process, my pid is: 1
last pid was: 2
Hello process, my pid is: 0
Hello process, my pid is: 2
3 processes created
```

Variables and Types (1)

- Five different (integer) basic types.
- Arrays
- Records (structs)
- Type conflicts are detected at runtime.
- Default initial value of basic variables (local and global) is 0.

**Basic types**

```promela
bit turn=1;
bool flag;
byte counter;
short s;
int msg;
```

**Arrays** (array indexing starts at 0)

```promela
byte a[27];
bit flags[4];
```

**Typedef (records)**

```promela
typedef Record {
  short f1;
  byte f2;
}
Record rr;    /* variable declaration */
rr.f1 = ..
```

Statements

- The body of a process consists of a sequence of statements. A statement is either
  - executable: the statement can be executed immediately.
  - blocked: the statement cannot be executed.
- An assignment is always executable.
- An expression is also a statement; it is executable if it evaluates to non-zero.
  - `2 < 3` — always executable
  - `x < 27` — only executable if the value of x is smaller than 27
  - `3 + x` — executable if x is not equal to -3

Statements (2)

- The `skip` statement is always executable.
  - "does nothing", only changes the process' process counter
- A `run` statement is only executable if a new process can be created (remember: the number of processes is bounded).
- A `printf` statement is always executable (but is not evaluated during verification, of course).

```promela
int x;
proctype Aap() {
  int y=1;
  skip;
  run Noot();      /* executable only if Noot can be created */
  x=2;
  x>2 && y==1;     /* can only become executable if some other
                      process makes x greater than 2 */
  skip;
}
```

Statements (3)

- `assert(<expr>);`
  - The `assert`-statement is always executable.
  - If `<expr>` evaluates to zero, SPIN will exit with an error, as the assertion `<expr>` "has been violated".
  - The `assert`-statement is often used within Promela models, to check whether certain properties are valid in a state.

```promela
proctype monitor() {
  assert(n <= 3);
}

proctype receiver() {
  ...
  toReceiver ? msg;
  assert(msg != ERROR);
  ...
}
```

Mutual Exclusion (1)

```promela
bit flag;       /* signal entering/leaving the section */
byte mutex;     /* # of procs in the critical section  */

proctype P(bit i) {
  flag != 1;
  flag = 1;
  mutex++;
  printf("MSC: P(%d) has entered section.\n", i);
  mutex--;
  flag = 0;
}

proctype monitor() {
  assert(mutex != 2);
}

init {
  atomic { run P(0); run P(1); run monitor(); }  /* starts two instances of P */
}
```

Problem: assertion violation! Both processes can pass the `flag != 1` "at the same time", i.e. before flag is set to 1.

Mutual Exclusion (2)

```promela
bit x, y;       /* signal entering/leaving the section */
byte mutex;     /* # of procs in the critical section  */

active proctype A() {
  x = 1;
  y == 0;       /* A waits here for B to leave */
  mutex++;
  mutex--;
  x = 0;
}

active proctype B() {
  y = 1;
  x == 0;
  mutex++;
  mutex--;
  y = 0;
}

active proctype monitor() {
  assert(mutex != 2);
}
```

Problem: invalid end state! Both processes can execute `x = 1` and `y = 1` "at the same time", and will then be waiting for each other.

if-statement (1)

```promela
if
:: choice_1 -> stat_1.1; stat_1.2; stat_1.3; ...
:: choice_2 -> stat_2.1; stat_2.2; stat_2.3; ...
:: ...
:: choice_n -> stat_n.1; stat_n.2; stat_n.3; ...
fi;
```

- If there is at least one choice_i (guard) executable, the if-statement is executable and SPIN non-deterministically chooses one of the executable choices.
- If no choice_i is executable, the if-statement is blocked.
- The operator "->" is equivalent to ";". By convention, it is used within if-statements to separate the guards from the statements that follow the guards.

if-statement (2)

```promela
if
:: (n % 2 != 0) -> n=1
:: (n >= 0)     -> n=n-2
:: (n % 3 == 0) -> n=3
:: else         -> skip
fi
```

- The **else** guard becomes executable if none of the other guards is executable.

Give n a random value (non-deterministic branching):

```promela
if
:: skip -> n=0
:: skip -> n=1
:: skip -> n=2
:: skip -> n=3
fi
```

The skips are redundant, because assignments are themselves always executable...

do-statement (1)

```promela
do
:: choice_1 -> stat_1.1; stat_1.2; stat_1.3; ...
:: choice_2 -> stat_2.1; stat_2.2; stat_2.3; ...
:: ...
:: choice_n -> stat_n.1; stat_n.2; stat_n.3; ...
od;
```

- With respect to the choices, a do-statement behaves in the same way as an if-statement.
- However, instead of ending the statement at the end of the chosen list of statements, a do-statement repeats the choice selection.
- The (always executable) break statement exits a do-loop statement and transfers control to the end of the loop.

Communication (1)

[Figure: message sequence chart between a Sender and a Receiver connected by channels s2r and r2s — the Sender performs s2r!MSG, the Receiver s2r?MSG and r2s!ACK; "!" is sending, "?" is receiving.]

Thursday 11-Apr-2002 — Theo C. Ruys - SPIN Beginners' Tutorial

Communication (2)

- Communication between processes is via **channels**:
  - message passing
  - **rendez-vous** synchronisation (handshake)
- Both are defined as **channels**:

```promela
chan <name> = [<dim>] of {<t_1>, <t_2>, ... <t_n>};
```

  - `<name>`: name of the channel
  - `<t_i>`: types of the elements that will be transmitted over the channel
  - `<dim>`: number of elements in the channel
  - `dim == 0` is a special case: **rendez-vous**

```promela
chan c = [1] of {bit};
chan toR = [2] of {mtype, bit};
chan line[2] = [1] of {mtype, Record};
```

Communication (3)

- channel = FIFO-buffer (for dim > 0)

**Sending — putting a message into a channel**

```promela
ch ! <expr_1>, <expr_2>, ... <expr_n>;
```

- The values of `<expr_i>` should correspond with the types of the channel declaration.
- A send-statement is executable if the channel is not full.

**Receiving — getting a message out of a channel**

```promela
ch ? <var_1>, <var_2>, ... <var_n>;
```

- If the channel is not empty, the message is fetched from the channel and the individual parts of the message are stored into the `<var_i>`s.

```promela
ch ? <const_1>, <const_2>, ... <const_n>;
```

- If the channel is not empty and the message at the front of the channel evaluates to the individual `<const_i>`, the statement is executable and the message is removed from the channel.

Rendez-vous communication

- `dim == 0`: the number of elements in the channel is now zero.
- If a send `ch!` is enabled and there is a corresponding receive `ch?` that can be executed simultaneously and the constants match, then both statements are enabled.
- Both statements will "handshake" and together take the transition.

**Example:**

```promela
chan ch = [0] of {bit, byte};
```

- P wants to do `ch ! 1, 3+7`
- Q wants to do `ch ? 1, x`
- Then after the communication, `x` will have the value 10.

Processes in Promela

- Interleaving semantics
  - Each time, a process is selected and its current statement is executed
    - Has to be *enabled*
  - This is repeated
- Number of all possible interleavings may be very high
  - → state space explosion → not verifiable models
- A mechanism to control the interleavings would be handy

```promela
proctype P1() { t1a; t1b; t1c }
proctype P2() { t2a; t2b; t2c }
init { run P1(); run P2() }
```

Not completely correct as each process has an implicit end-transition...

```promela
proctype P1() { atomic {t1a; t1b; t1c} }
proctype P2() { t2a; t2b; t2c }
init { run P1(); run P2() }
```

It is as if P1 has only one transition... If one of P1's transitions blocks, P2's transitions may get executed. Although atomic clauses cannot be interleaved, the intermediate states are still constructed.

```promela
proctype P1() { d_step {t1a; t1b; t1c} }
proctype P2() { t2a; t2b; t2c }
init { run P1(); run P2() }
```

It is as if P1 has only one transition... No intermediate states will be constructed.

Checking for pure atomicity

- Suppose we want to check that none of the atomic clauses in our model are ever blocked (i.e.
pure atomicity).

1. Add a global bit variable:

```promela
bit aflag;
```

2. Change all atomic clauses to:

```promela
atomic {
  stat_1;
  aflag=1;
  stat_2;
  ...
  stat_n;
  aflag=0;
}
```

3. Check that `aflag` is always 0: `[]!aflag`, e.g.

```promela
active proctype monitor() {
  assert(!aflag);
}
```

timeout (1)

- Promela does not have real-time features.
  - In Promela we can only specify functional behaviour.
  - Most protocols, however, use timers or a timeout mechanism to resend messages or acknowledgements.
- timeout
  - SPIN's timeout becomes executable if there is no other process in the system which is executable
  - so, timeout models a global timeout
  - timeout provides an escape from deadlock states
  - beware of statements that are always executable...

timeout (2)

- Example to recover from message loss:

```promela
active proctype Receiver() {
  bit recvbit;
  do
  :: toR ? MSG, recvbit ->
     toS ! ACK, recvbit;
  :: timeout ->
     toS ! ACK, recvbit;
  od
}
```

- **Premature timeouts** can be modelled by replacing the `timeout` by `skip` (which is always executable). One might want to limit the number of premature timeouts (see [Ruys & Langerak 1997]).

goto

- `goto label` transfers execution to `label`
- each Promela statement might be labelled
- quite useful in modelling communication protocols

```promela
wait_ack:
  if
  :: B?ACK -> ab=1-ab; goto success
  :: ChunkTimeout?SHAKE ->          /* timeout modelled by a channel */
     if
     :: (rc < MAX) -> rc++;
        F!(i==1),(i==n),ab,d[i];
        goto wait_ack
     :: (rc >= MAX) -> goto error
     fi
  fi;
```

Part of a model of BRP.

unless

```promela
{ <stats> } unless { guard; <stats> }
```

- Statements in `<stats>` are executed until the first statement (guard) in the escape sequence becomes executable.
- resembles exception handling in languages like Java
- Example:

```promela
proctype MicroProcessor() {
  { ... /* execute normal instructions */ }
  unless { port ? INTERRUPT; ... }
}
```

macros

- **cpp** preprocessor
- Promela uses **cpp**, the C preprocessor, to preprocess Promela models.
This is useful to define:

- **constants**

```promela
#define MAX 4
```

- **macros**

```promela
#define RESET_ARRAY(a) \
  d_step { a[0]=0; a[1]=0; a[2]=0; a[3]=0; }
```

- **conditional** Promela model fragments

```promela
#define LOSSY 1
...
#ifdef LOSSY
active proctype Daemon() { /* steal messages */ }
#endif
```

inline — poor man's procedures

- Promela also has its own macro-expansion feature using the `inline`-construct.

```promela
inline init_array(a) {
  d_step {
    i=0;
    do
    :: i<N -> a[i] = 0; i++
    :: else -> break
    od;
    i=0;
  }
}
```

- error messages are more useful than when using `#define`
- cannot be used as an expression
- all variables should be declared somewhere else

(random) Simulation Algorithm

```java
while (!error && !allBlocked) {
    ActionList menu = getCurrentExecutableActions();
    allBlocked = (menu.size() == 0);
    if (!allBlocked) {
        Action act = menu.chooseRandom();
        error = act.execute();
    }
}
```

- getCurrentExecutableActions: visit all processes and collect all executable actions
- deadlock = allBlocked
- in interactive simulation, act is chosen by the user
- act is executed and the system enters a new state

Verification Algorithm

- **SPIN** uses a depth-first search algorithm (DFS) to generate and explore the complete state space.

```plaintext
procedure dfs(s: state) {
  if error(s) reportError();
  foreach (successor t of s) {
    if (t not in Statespace) dfs(t);
  }
}
```

- States are stored in a hash table.
  - Requires state matching.
- The old states s are stored on a stack, which corresponds with a complete execution path.
- Note that the construction and the error checking happen at the same time: SPIN is an on-the-fly model checker.

Properties (1)

- Model checking tools automatically verify whether $M \models \phi$ holds, where $M$ is a (finite-state) model of a system and property $\phi$ is stated in some formal notation.
- With SPIN one may check the following types of properties:
  - deadlocks (invalid endstates)
  - assertions
  - unreachable code
  - LTL formulae
  - liveness properties
    - non-progress cycles (livelocks)
    - acceptance cycles
- In Spin a subset: LTL_{\neg X}
  - LTL without the X operator
  - More efficient model checking algorithm
  - Still expressive enough
- Describing properties of **states** (or runs), not of **transitions** between states

Alternating Bit Protocol

- Three examples with simple acknowledgment
- First example, with "perfect lines":

```promela
#define MAX 4
mtype = {MSG, ACK};
chan toR = [1] of {mtype, byte, bit};
chan toS = [1] of {mtype, bit};

active proctype Sender() {
  byte data;
  bit sendb, recvb;
  sendb = 0; data = 0;
  do
  :: toR ! MSG(data,sendb) ->
     toS ? ACK(recvb);
     if
     :: recvb == sendb ->
        sendb = 1-sendb;
        data = (data+1)%MAX;
     :: else -> skip;   /* resend old data */
     fi
  od
}

active proctype Receiver() {
  byte data, exp_data;
  bit ab, exp_ab;
  exp_ab = 0; exp_data = 0;
  do
  :: toR ? MSG(data,ab) ->
     if
     :: (ab == exp_ab) ->
        assert(data == exp_data);
        exp_ab = 1-exp_ab;
        exp_data = (exp_data+1)%MAX;
     :: else -> skip;
     fi;
     toS ! ACK(ab)
  od
}
```

- Second example, with a stealing daemon modelling lossy channels — the protocol does not work well. Adding a special stealing daemon process:

```promela
active proctype Daemon() {
  do
  :: toR ? _, _, _
  :: toS ? _, _
  od
}
```

Q: What happens now?
A: Deadlock!

- Third example — redemption. Fixing the sender:

```promela
do
:: toR ! MSG(data,sendb) ->
   if
   :: toS ? ACK(recvb) ->
      if
      :: recvb == sendb ->
         sendb = 1-sendb;
         data = (data+1)%MAX;
      :: else -> skip   /* resend old data */
      fi
   :: timeout           /* message lost */
   fi
od
```

Q: What happens now?
A: No error found. But no data transmitted!

- Fourth example — does the receiver really get data? Augmenting the receiver:

```promela
do
:: toR ? MSG(data,ab) ->
   if
   :: (ab == exp_ab) ->
      assert(data == exp_data);
      exp_ab = 1-exp_ab;
progress:
      exp_data = (exp_data+1)%MAX;
   :: else -> skip;
   fi;
   toS ! ACK(ab)
od
```

Alternating Bit Protocol — Summary

- We should be aware of all possible executions and issues in the model
- **Model is not implementation!**
  - If there is an error due to simplification (abstraction), it can still be ok
  - In ABP, for example, we may know that messages can get lost but usually are delivered
  - Consider possible errors beyond the ignored one!

Information on Spin

- The homepage: www.spinroot.com
- Tutorials:
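As a closing illustration, the on-the-fly DFS of the verification slide can be mimicked in plain Python on the broken mutual-exclusion model from earlier (a toy sketch, not SPIN itself; all names are ours). Each process runs `flag != 1; flag = 1; mutex++; mutex--; flag = 0;`, and the assertion is `mutex <= 1`.

```python
# A state is (pc of P0, pc of P1, flag, mutex).
def step(pc, flag, mutex):
    """Enabled transition of one process, or None if it is blocked/done."""
    if pc == 0:
        return (1, flag, mutex) if flag != 1 else None   # guard: flag != 1
    if pc == 1:
        return (2, 1, mutex)                             # flag = 1
    if pc == 2:
        return (3, flag, mutex + 1)                      # mutex++
    if pc == 3:
        return (4, 0, mutex - 1)                         # mutex--; flag = 0
    return None                                          # terminated

def dfs(state, seen, errors):
    if state in seen:
        return                                           # state matching
    seen.add(state)
    pc0, pc1, flag, mutex = state
    if mutex > 1:
        errors.append(state)                             # assertion violated
        return
    for who in (0, 1):                                   # interleaving choice
        nxt = step(pc0 if who == 0 else pc1, flag, mutex)
        if nxt is not None:
            pc, f, m = nxt
            dfs((pc, pc1, f, m) if who == 0 else (pc0, pc, f, m),
                seen, errors)

seen, errors = set(), []
dfs((0, 0, 0, 0), seen, errors)
print(len(errors) > 0)   # True: both processes pass flag != 1 first
```

Like SPIN, this explores states and checks the property on the fly; unlike SPIN, it handles only this one hard-coded model.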
Implementing Parallel Google Map-Reduce in Eden

Jost Berthold, Mischa Dieterle, and Rita Loogen
Philipps-Universität Marburg, Fachbereich Mathematik und Informatik
Hans-Meerwein Straße, D-35032 Marburg, Germany
(berthold,dieterle,loogen)@informatik.uni-marburg.de

Abstract. Recent publications have emphasised map-reduce as a general programming model (labelled Google map-reduce), and described existing high-performance implementations for large data sets. We present two parallel implementations for this Google map-reduce skeleton, one following earlier work, and one optimised version, in the parallel Haskell extension Eden. Eden's specific features, like lazy stream processing, dynamic reply channels, and nondeterministic stream merging, support the efficient implementation of the complex coordination structure of this skeleton. We compare the two implementations of the Google map-reduce skeleton in usage and performance, and deliver runtime analyses for example applications. Although very flexible, the Google map-reduce skeleton is often too general, and typical examples reveal a better runtime behaviour using alternative skeletons.

1 Introduction

To supply conceptual understanding and abstractions of parallel programming, the notion of algorithmic skeletons was coined by Murray Cole in 1989 [1]. An algorithmic skeleton abstractly describes the (parallelisable) structure of an algorithm, but separates the specification of the concrete work to do as a parameter function. Skeletons are meant to offer ready-made efficient implementations for common algorithmic patterns, the specification of which remains sequential. Thus, an algorithmic skeleton contains the inherent parallelisation potential of the algorithm, but this remains hidden in its implementation. A broader research community has quickly adopted and developed the idea further [2].
Skeletons are a well-established research subject in the scientific community, yet until today they have had only little impact on mainstream software engineering, in comparison with other models, like MPI [3] collective operations and design patterns [4]. To a certain extent, MPI collective operations and algorithmic skeletons follow the same philosophy: to specify common patterns found in many applications, and to provide optimised implementations that remain hidden in libraries. However, while skeletons describe a whole, potentially complex, algorithm, collective operations only predefine and optimise common standard tasks which are often needed in implementing more complex algorithms. The design pattern paradigm has considerable potential in identifying inherent parallelism in common applications, capturing complex algorithmic structures, and providing conceptual insight in parallelisation techniques and problem decomposition. However, it often merely targets concepts and leaves implementation to established low-level libraries and languages (see e.g. textbook [5] for a typical example). Design patterns thus cannot provide the programming comfort and abstraction level of algorithmic skeletons. Moreover, collective operations and design patterns, by their very nature, are explicit about parallelism already in their specification, whereas skeletons completely hide parallelism issues. Even more remarkable is the fact that applications from industry have meanwhile achieved the mainstream breakthrough for the skeleton idea (even though it is never called by this name). In 2004, we saw the first publication which abstractly described large-scale map-and-reduce data processing at Google [6,7]. It was proposed as a "programming model" for large dataset processing, but in fact precisely realises the skeleton idea.

© Springer-Verlag Berlin Heidelberg 2009
A publication by Ralf Lämmel [8] points out shortcomings of the skeleton's formal specification, provides sequential Haskell implementations, and briefly discusses parallelism. Given the great acceptance that the programming model has found, and its close relation to skeleton programming, this paper investigates possible parallel implementations starting from Lämmel's Haskell code, and discusses their respective advantages and drawbacks. From the perspective of functional languages, skeletons are specialised higher-order functions with a parallel implementation. Essentially, the skeleton idea applies a functional paradigm for coordination, independent of the underlying computation language. While skeleton libraries for imperative languages, e.g. [9,10], typically offer a fixed, established set of skeletons, parallel functional languages are able to express new skeletons, or to easily create them by composition [11,2]. Some functional languages parallelise by predefined data parallel operations and skeletons, like NESL [12], OCamlP3l [13], or PMLS [11]. These fixed skeleton implementations are highly optimised and allow composition, but not the definition of new problem-specific skeletons or operations. More explicit functional coordination languages are appropriate tools not only to apply skeletons, but also for their implementation, allowing formal analysis and conceptual modeling. Coordination structure, programming model and algorithm structure can be cleanly separated by functional languages, profiting from their abstract, mathematically oriented nature. In our work, we use the general-purpose parallel Haskell dialect Eden [14] as an implementation language for the skeletons. The paper is structured as follows: Section 2 explains the classical map-and-reduce skeleton defined in Eden, thereby introducing features of the language we use for our implementations. Section 3 introduces the Google map-reduce skeleton.
Parallel implementation variants are discussed in Section 4. A section with measurements and analyses for some example applications follows. Section 6 considers related work; the final section concludes.

2 Parallel Transformation and Reduction in Eden

Classically, reduction over a list of elements is known as the higher-order function \texttt{fold} (from the left or from the right), and is often combined with a preceding transformation of the list elements (in other words, \texttt{map}). Denotationally, it is a composition of the higher-order functions \texttt{map} and \texttt{fold}:
\[ \text{mapFoldL} :: (a \to b) \to (c \to b \to c) \to c \to [a] \to c \\ \text{mapFoldL} \: \text{mapF} \: \text{redF} \: n \: \text{list} = \text{foldl} \: \text{redF} \: n \: (\text{map} \: \text{mapF} \: \text{list}) \]
\[ \text{mapFoldR} :: (a \to b) \to (b \to c \to c) \to c \to [a] \to c \\ \text{mapFoldR} \: \text{mapF} \: \text{redF} \: n \: \text{list} = \text{foldr} \: \text{redF} \: n \: (\text{map} \: \text{mapF} \: \text{list}) \]
In this general form, the folding direction leads to slightly different types of the reduction operator \texttt{redF}. Parallel implementations have to unify types \texttt{b} and \texttt{c} and require associativity to separate sub-reductions. In addition, the parameter \texttt{n} should be the neutral element of \texttt{redF}. Under these conditions, the folding direction is irrelevant, as both versions yield the same result. Parallel implementations may even reorder the input, requiring the reduction operator \texttt{redF} to be commutative. Assuming associativity and commutativity, we can easily define a parallel map-reduce skeleton for input streams in the functional Haskell dialect Eden, as shown in Fig.\ref{fig:parmapfold}.

\begin{verbatim}
parmapFoldL :: (Trans a, Trans b)
            => Int ->            -- no. of processes
               (a -> b) ->       -- mapped on input
               (b -> b -> b) ->  -- reduction (assumed commutative)
               b ->              -- neutral element for reduction
               [a] -> b
parmapFoldL np mapF redF neutral list = foldl' redF neutral subRs
  where sublists    = unshuffle np list
        subFoldProc = process (foldl' redF neutral . (map mapF))
        subRs       = spawn (replicate np subFoldProc) sublists

unshuffle :: Int -> [a] -> [[a]]     -- distributes the input stream
unshuffle n list = ...               -- round-robin into n streams
spawn :: [Process a b] -> [a] -> [b] -- instantiates a set of processes
spawn ps inputs = ...                -- with respective inputs
\end{verbatim}

Fig. 1. Parallel map-reduce implementation in Eden

The input stream is distributed round-robin into \texttt{np} inputs for \texttt{np} Eden processes, which are instantiated by the Eden library function \texttt{spawn}. \texttt{Process} is the type constructor for Eden process abstractions, which are created by the function \texttt{process :: (a -> b) -> Process a b}. Type class \texttt{Trans} provides implicitly used data transfer functions. The spawned processes perform the transformation (\texttt{map}) and pre-reduce the map results (duplicating the given neutral element) by the strict left-fold \texttt{foldl'}. As explained, the fold operator \texttt{redF} has a restricted type in order to combine the subresults in a second stage, and is assumed commutative because input for the pre-reductions is distributed round-robin and streamed.

3 The "Google Map-Reduce" Skeleton

A more general variant of map-and-reduce has been proposed as a programming model for processing large datasets, by Google personnel Jeff Dean and Sanjay Ghemawat. In January 2008, an update of the original publication (OSDI 2004 [6]) appeared in the Communications of the ACM [7]. The intention to provide a framework which allows one "to express the simple computations [...]
but hides the messy details of parallelization, fault-tolerance, data distribution, and load balancing, in a library" [6] is precisely the skeleton idea. However, the word "skeleton" does not figure in either of the two publications! Neither publication claims the model to be new; its essential merit is that it brought the skeleton approach to industry. The model has found great acceptance as a programming model for parallel data processing (e.g., [15,16]).

The computation scheme of Google map-reduce is depicted in Fig. 2. In a nutshell, a Google map-reduce instance first transforms key/value pairs into (intermediate) other key/value pairs, using a \texttt{mapF} function. After this, each collection of intermediate data with the same key is reduced to one resulting key/value pair, using a \texttt{reduceF} function. In-between the transformation and the reduction, the intermediate data is grouped by keys, so the whole computation has two logical phases. The parallelisation described in the original work [6] imposes additional requirements on the applied parameter functions, which, on the other hand, have more liberal type constraints than what \texttt{map} and \texttt{foldl} would require. Ralf Lämmel, in his related publication [8], captures them in a formal specification derived from the original examples and description, using Haskell.\footnote{The code is provided online by Lämmel, so we do not reproduce it here, see \url{http://www.cs.uu.nl/~ralf/MapReduce/}}

\begin{verbatim}
googleMapReduce :: forall k1 k2 v1 v2 v3.
     Ord k2                     -- needed for grouping
  => (k1 -> v1 -> [(k2,v2)])   -- 'map' function, may use key 1
  -> (k2 -> [v2] -> Maybe v3)  -- 'reduce' function, may use key 2
  -> Map k1 v1                 -- a key to input-value mapping
  -> Map k2 v3                 -- a key to output-value mapping
\end{verbatim}

The 'map' function takes an input pair of type (k1, v1) (where k1 could actually be subsumed under v1 by pairing) and may produce a whole list of intermediate key-value pairs of type (k2, v2) from it, which are then grouped by key (the ordering constraint enables more efficient data structures – an equality constraint Eq k2 would suffice). Each of the value lists for a key is then processed by the 'reduce' function to yield a final result of type v3 for this key, or no output (hence the Maybe type). Unlike the usual fold, the output does not necessarily have the same type as the intermediate values (but typically v2 and v3 are of the same type). So the parameter function of fold in the Google publications is not properly a function which could be the argument to a fold (i.e., reduce) operation, nor is it always a reduction in the narrow sense. Additionally, as Lämmel points out, the original paper confuses lists and sets: The input to a skeleton instance is neither a set, nor a list, but a finite mapping from keys to values, where duplicate values are not allowed for the same key. And likewise, the output of the skeleton conceptually does not allow the same key to appear twice. **Examples:** A classical combination of map and foldl can be expressed as a special case of the more general skeleton. The map function here produces singleton lists and assigns a constant intermediate key 0 to every one. The reduction function ignores these keys, and left-folds the intermediate values as usual.
```haskell
mapfold :: (a -> b) -> (b -> b -> b) -> b -> [a] -> b
mapfold mapF redF neutral input = snd (head (toList gresult))
  where mapF' _ x    = [(0, mapF x)]
        redF' _ list = Just (foldl' redF neutral list)
        gresult = googleMapReduce mapF' redF'
                    (fromList (zip [(0::Int)..] input))
```

A more general example, often given in publications on Google map-reduce, is to compute how many times certain words appear in a collection of web pages. The input is a set of pairs: web page URLs and web page content (and the URL is completely ignored). The 'map' part retrieves all words from the content and uses them as intermediate keys, assigning the constant 1 as intermediate value to all words. Reduction sums up all these ones to determine how many times a word has been found in the input.

```haskell
wordOccurrence = googleMapReduce toMap forReduction
  where toMap :: URL -> String -> [(String, Int)]
        toMap url content = zip (words content) (repeat 1)
        forReduction :: String -> [Int] -> Maybe Int
        forReduction word counts = Just (sum counts)
```

A range of other, more complex applications is possible, for instance, iteratively clustering large data sets by the k-means method, used as a benchmark in two recent publications [15,16]. We will discuss this benchmark in Section 5.

4 Parallel Google Map-Reduce

Google map-reduce offers different opportunities for parallel execution. First, it is clear that the map function can be applied to all input data independently.

```haskell
-- inner interface
parMapReduce :: Ord k2
  => Int -> (k2 -> Int)        -- no. of partitions, key partitioning
  -> (k1 -> v1 -> [(k2,v2)])   -- 'map' function
  -> (k2 -> [v2] -> Maybe v3)  -- 'combiner' function
  -> (k2 -> [v3] -> Maybe v4)  -- 'reduce' function
  -> [Map k1 v1]               -- distributed input data
  -> [Map k2 v4]               -- distributed output data
parMapReduce parts keycode mAP combiner reduce
  = map (                 -- parallelise in n reducers:
        reducePerKey reduce          -- 7. Apply 'reduce' to each partition
      . mergeByKey )                 -- 6. Merge scattered intermediate data
  . transpose                        -- 5. Transpose scattered partitions
  . map (                 -- parallelise in m mappers:
        map ( reducePerKey combiner  -- 4. Apply 'combiner' locally
            . groupByKey )           -- 3. Group local intermediate data
      . partition parts keycode      -- 2. Partition local intermediate data
      . mapPerKey mAP )              -- 1. Apply 'map' locally to each piece
```

Fig. 3. Parallel Google map-reduce skeleton, following Lämmel [8] (we have added the parallelisation annotations in bold face)

Furthermore, since reduction is done for every possible intermediate key, several processors can be used in parallel to reduce the values for different keys. Additionally, the mapper processes in the implementation perform pre-grouping of intermediate pairs by (a hash function of) intermediate keys. Usually, implementations strictly split the whole algorithm in two phases. The productive implementation described in [7] is based on intermediate files in Google's own shared file system GFS. Pre-grouped data is periodically written to disk, and then fetched and merged by the reducer tasks before they start reduction of values with the same key. This makes it possible to reassign jobs in case of machine failures, making the system more robust. Furthermore, at the end of the map phase, remaining map tasks are assigned to several machines simultaneously to compensate load imbalances.

Following Lämmel's specification. To enable parallel execution, Lämmel proposes the version shown in Fig. 3. Interface and functionality of the Google map-reduce skeleton are extended in two places. First, input to the map function is grouped in bigger "map jobs", which allows to adapt task size to the resources available. For instance, the job size can be chosen appropriately to fit the block size of the file system. For this purpose, the proposed outer interface (not shown) includes a size parameter and an estimation function.
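Such a size-based job grouping can be modelled as a greedy, order-preserving partitioner. The following Python sketch is purely illustrative (the function name and the greedy strategy are our assumptions, not taken from the Eden code; an oversized single item still forms its own task):

```python
def partition_by_size(items, estimate, max_size):
    # Greedy, order-preserving grouping: close the current task as soon as
    # adding the next item would exceed the desired task size.
    tasks, current, current_size = [], [], 0
    for item in items:
        size = estimate(item)
        if current and current_size + size > max_size:
            tasks.append(current)
            current, current_size = [], 0
        current.append(item)
        current_size += size
    if current:
        tasks.append(current)
    return tasks
```

For example, with `estimate = len` and a desired task size of 4, the input `["ab", "cde", "f", "ghij"]` is grouped into `[["ab"], ["cde", "f"], ["ghij"]]`.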
The skeleton input is sequentially traversed and partitioned into tasks with estimated size close to (but less than) the desired task size. Second, two additional pre-groupings of equal keys are introduced, one for a pre-reduction in the mapper processes, and one to aggregate input for the reducer processes. The map operation receives a task (a set of input pairs), and produces a varying number of intermediate outputs. This output is sorted using intermediate keys, and the map processes pre-group output with the same key, using the added parameter function combiner. In many cases, this combiner will be the same function as the one used for reduction, but in the general case, its type differs from the reduce function type. To reduce the (potentially unbounded) number of intermediate keys, these intermediate (pre-reduced) results are then partitioned into a fixed number of key groups for the reducer processes, using two additional skeleton parameters. The parameter parts indicates how many partitions (and parallel reducer processes) to use, and the function keycode maps (or: is expected to map; the code in [8] does not check this property) each possible intermediate key to a value between 1 and parts. This mimics the behaviour of the productive Google implementation, which saves partitioned data into n intermediate files per mapper. Our straightforward parallel implementation of the skeleton consists of replacing the map calls in the code (see Fig. 3) by appropriate parallel map implementations. The number of reducers, n, equals the number of parts into which the hash function keycode partitions the intermediate keys. A straightforward parallel map skeleton with one process per task can be used to create these n reducer processes. An implementation which closely follows the description should create m mapper processes, which process a whole stream of input tasks each.
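To make the seven-step data flow of Fig. 3 concrete, here is a sequential Python model of the same pipeline. All names are ours; the real skeleton evaluates the mapper and reducer loops in parallel Eden processes, Maybe-typed combiner/reduce results are modelled as plain values, and keycode here returns 0..parts-1 rather than 1..parts:

```python
from collections import defaultdict

def par_map_reduce(parts, keycode, map_f, combiner, reduce_f, pieces):
    # Sequential model of the seven-step pipeline of the parallel skeleton.
    scattered = []
    for piece in pieces:                        # one iteration per mapper
        # 1. apply 'map' locally to each piece
        inter = [kv for k1, v1 in piece.items() for kv in map_f(k1, v1)]
        # 2. partition local intermediate data by the key hash ...
        partitions = [defaultdict(list) for _ in range(parts)]
        for k2, v2 in inter:
            partitions[keycode(k2)][k2].append(v2)  # ... 3. grouping by key
        # 4. apply 'combiner' locally, per partition
        scattered.append(
            [{k: combiner(k, vs) for k, vs in p.items()} for p in partitions])
    columns = zip(*scattered)                   # 5. transpose scattered partitions
    result = []
    for column in columns:                      # one iteration per reducer
        merged = defaultdict(list)              # 6. merge scattered data
        for part in column:
            for k2, v3 in part.items():
                merged[k2].append(v3)
        # 7. apply 'reduce' to each key partition
        result.append({k: reduce_f(k, vs) for k, vs in merged.items()})
    return result
```

Instantiated for word counting (map emits (word, 1) pairs, combiner and reduce both sum), the model reproduces the expected per-word totals across the reducer partitions.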
Different Eden skeletons realise this mapper functionality: a farm skeleton with static task distribution, or a dynamically balanced workflow [14]. However, the interface proposed by Lämmel lacks the m parameter, thus our parallelisation simply uses as many mappers as reducer processes, n = m.

An optimised implementation. A major drawback of this straightforward version, directly derived from Lämmel's code [8], is its strict partitioning into the map phase and the reduce phase, and the call to transpose in between. In the Eden implementation suggested in Fig. 3, all intermediate data produced by the mapper processes is sent back to the caller, to be reordered (by transpose) and sent further on to the reducer processes. Our optimised implementation uses direct stream communication between mappers and reducers, as depicted in Fig. 4. Furthermore, instances of mapper and reducer are gathered in one process, which saves some communication (not shown).

Fig. 4. Parallel Google map-reduce using distributed transpose functionality

In order to directly send the respective parts of each mapper's output to the responsible reducer process via channels, a unidirectional $m : n$ communication must be set up. Each process creates a list of $m$ channels and passes them on to the caller. The latter thus receives a whole matrix of channels (one line received from each worker process) and passes them on to the workers column-wise. Intermediate data can now be partitioned as before, and intermediate grouped pairs directly sent to the worker responsible for the respective part. Google's productive implementation realises this $m : n$ communication by shared files. The whole data subset processed by one mapper is pre-grouped into buckets, each for one reducer process, and written to a distributed mass storage system (GFS), to be fetched by reducer processes later.
While this is clearly essential for fault tolerance (in order to restart computations without data being lost in failing machines), we consider accumulating all intermediate data on mass storage a certain disadvantage in performance and infrastructure requirements.

5 Measurements for Example Applications

Example applications of Google map-reduce can be taken from literature [15,16], which, however, tend to apply very simple reduce functions and can be realised using the elementary map-reduce without keys as well. We have chosen two example programs with non-trivial key-based reduction from literature, after comparing performance for a simple map-fold computation. We have tested:
\begin{itemize}
\item a simple map-fold computation (sum of Euler totient values),
\item the NAS-EP benchmark (using key-based reduction),
\item the K-Means implementation (using key-based reduction)
\end{itemize}
on a Beowulf cluster\footnote{At Heriot-Watt University Edinburgh.} with up to 32 Intel Pentium 4 SMP processor elements (PEs) running at 3 GHz with 512 MB RAM and a Fast Ethernet interconnection. Trace visualisations show activity profiles of the PEs (y-axis) over time (x-axis) in seconds.

\textbf{Sum of Euler totient values.} The sumEuler program is a straightforward map-fold computation, summing up values of the Euler function $\varphi(k)$. $\varphi(k)$ tells how many $j < k$ are relatively prime to $k$, and the test program computes it naïvely instead of using prime factorisation. So, the program computes $\sum_{k=1}^{n} \varphi(k) = \sum_{k=1}^{n} \left| \left\{ j < k \mid \gcd(k, j) = 1 \right\} \right|$, or in Haskell syntax:

```haskell
result = foldl1 (+) (map phi [1..n])
phi k  = length (filter (primeTo k) [1..(k-1)])
```

Fig. 5 shows runtime traces for two versions: the straightforward parallel map-fold skeleton and the Google map-reduce version with distributed transposition. Both programs perform well on our measurement platform.
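For reference, the naïve totient computation used by sumEuler can be transliterated to Python (note that, under the paper's definition counting $j < k$, phi(1) yields 0 rather than the conventional $\varphi(1) = 1$):

```python
from math import gcd

def phi(k):
    # naive count of j < k with gcd(k, j) == 1, as the benchmark does
    # (no prime factorisation); phi(1) == 0 under this definition
    return len([j for j in range(1, k) if gcd(k, j) == 1])

def sum_euler(n):
    # sum of phi(k) for k = 1..n, the value the skeletons compute in parallel
    return sum(phi(k) for k in range(1, n + 1))
```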
The Google map-reduce implementation suffers from the overhead of distributing the map tasks (which is almost entirely eliminated in the map-fold version), whereas the other version obviously exposes uneven workload due to the static task distribution.

**NAS-EP benchmark.** NAS-EP (Embarrassingly Parallel) [17] is a transformation problem where two-dimensional statistics are accumulated from a large number of Gaussian pseudo-random numbers, and one of only few problems which profit from the per-key reduction functionality provided by the Google map-reduce skeleton (keys indicating the 10 different square annuli). Fig. 6 shows the results, using worker functions for reduced input data size, and the skeleton version with distributed transposition. Workload distribution is fair, and the system profits from Eden's stream functionality to finish early in the reducer processes.

**Parallel k-means.** The final example, k-means, illustrates the additional pre-reduction using the `combine` function (as opposed to simply applying `reduce`).

```haskell
mAP :: Int -> Vector -> [(Int, Vector)]
mAP _ vec = [(1 + minIndex (map (distance vec) centroids), vec)]

cOMBINE :: Int -> [Vector] -> Maybe (Int, Vector)
cOMBINE _ vs = Just (length vs, center vs)

rEDUCE :: Int -> [(Int, Vector)] -> Maybe (Int, Vector)
rEDUCE _ vs = Just (round w, v)
  where vs'    = map (\(k, v) -> (fromIntegral k, v)) vs :: [(Double, Vector)]
        (w, v) = foldl1' combineWgt vs'

combineWgt :: (Double, Vector) -> (Double, Vector) -> (Double, Vector)
combineWgt (k1, v1) (k2, v2)
  = (k1 + k2, zipWith (+) (wgt f v1) (wgt (1-f) v2))
  where f     = 1 / (1 + (k2 / k1))
        wgt x = map (*x)
```

**Fig. 7.** Parameter functions for the parallel $k$-means algorithm

The input of the $k$-means benchmark is a collection of data vectors (and arbitrary irrelevant keys). A set of $k$ cluster centroids is chosen randomly in the first iteration. Parameterising the function to map with these centroids, the map part computes distances from the input vector to all centroids and yields the ID of the nearest centroid as the key, leaving the data as the value. The reduction, for each centroid, computes the mean vector of all data vectors assigned to the respective cluster to yield a set of $k$ new cluster centroids, which is used in the next iteration, until the cluster centroids finally converge to the desired precision.

**Fig. 8.** Iteration skeleton

**Fig. 9.** $k$-means algorithm, Google map-reduce followed by iteration version

Fig. 7 shows the parameter functions for implementing k-means with Google map-reduce. Instead of computing sub-centroids by a simple sum, the vector subsets are precombined to a sub-centroid with added weight (combine function), to avoid numeric overflows and imprecision. The final reduction reduce then uses the weight, which indicates the (arbitrary) number of vectors assigned to the sub-centroid, and combines centroids by a weighted average. However, k-means is not suitable for the Google map-reduce skeleton on distributed memory machines. The problem is that the algorithm works iteratively in fixed steps, and amounts to setting up a new Google map-reduce instance for each iteration.
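The weighted-average combination performed by combineWgt can be checked with a small Python transliteration (illustrative only; it assumes the first weight is non-zero):

```python
def combine_wgt(wv1, wv2):
    # Combine two weighted sub-centroids (w, vector) into one, mirroring
    # combineWgt of Fig. 7: the result weight is the sum, the result vector
    # the weighted average with f = w1 / (w1 + w2). Assumes w1 > 0.
    (w1, v1), (w2, v2) = wv1, wv2
    f = 1.0 / (1.0 + w2 / w1)          # equals w1 / (w1 + w2)
    return (w1 + w2,
            [f * x + (1.0 - f) * y for x, y in zip(v1, v2)])
```

For instance, combining a weight-1 sub-centroid at the origin with a weight-3 sub-centroid at (4, 8) yields weight 4 and the vector (3, 6), exactly the mean of the four underlying data vectors.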
Such globally synchronised iteration steps and the skeleton setup overhead dominate the runtime behaviour, especially because the tasks are data vectors, sent and received again in each step. An iteration skeleton (as depicted in Fig. 8) should be used instead, with the advantage that the data vectors become initialisation data for the skeleton. The execution trace in Fig. 9 shows both versions for 25000 data vectors: first 10 iterations of the Google map-reduce version, then a version using an iteration skeleton (which performs around 90 iterations in shorter time). Communication of the data vectors completely eats up parallel speedup, whereas they are initialisation data in the second version.

6 Related Work

We have already cited the basic references for the skeleton in question throughout its description and in the introduction. Originally published in 2004 [6], the description by the Google employees Dean and Ghemawat was recently updated in the Communications of the ACM [7], providing more recent figures of the data throughput in the productive implementation. The Hadoop project [18] provides an open-source Java implementation. A more thorough description of its functionality is provided by Ralf Lämmel [8], first published online in 2006. Both the original description by the Google authors and Ralf Lämmel discuss inherent parallelism of the Google map-reduce skeleton. While Lämmel presents substantial work for a sound understanding and specification of the skeleton, his parallelisation ideas remain at a high level, at times over-simplified, and he does not discuss any concrete implementation. The original Google work restricts itself to describing and quantifying the existing parallelisation, but gives details about the physical setup, the middleware in use, and error recovery strategies. Several publications have adopted and highlighted Google map-reduce, with different focus. An evaluation in the context of machine-learning applications can be found in [15]. Ranger et al.
[16] present an implementation framework for Google map-reduce for multicore, Phoenix. They report superlinear speedups (ranging up to 30) on a 24-core machine, achieved by adjusting the data sizes to ideal values for good cache behaviour. Other authors have recognised the advantages of the high-level programming model and propose it for other custom architectures: Cell [19] and FPGAs [20].

7 Conclusions

A comprehensive overview of Google map-reduce and its relation to algorithmic skeletons has been given. We have discussed two implementations for Google map-reduce, one following earlier work, and an optimised Eden version. As our runtime analyses for some example applications show, the skeleton implementation delivers good performance and is easily applicable to a range of problems. Implementations using explicitly parallel functional languages like Eden open the view on computation structure and synchronisation, which largely facilitates skeleton customisation and development. The popularity of Google map-reduce nicely demonstrates the ease and flexibility of skeletons to a broader audience, and the model has found good acceptance in mainstream development. On the other hand, the popularity of just one skeleton may sometimes mislead application development into not considering alternative skeletons. It turns out that the full generality of the Google map-reduce skeleton is often not needed, and other skeletons are more appropriate. Nevertheless, we consider Google map-reduce a big step for skeleton programming to finally get adequate attention, as a mathematically sound high-level programming model for novel parallel architectures.

Acknowledgements: We thank the anonymous referees, Phil Trinder, Kevin Hammond and Hans-Wolfgang Loidl, for helpful comments on earlier versions.

References

3. MPI-2: Extensions to the message-passing interface, Technical report, University of Tennessee, Knoxville (July 1997)
{"Source-Url": "https://www.mathematik.uni-marburg.de/~eden/paper/EuroPar09GMR.pdf", "len_cl100k_base": 6825, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 14233, "total-output-tokens": 8587, "length": "2e12", "weborganizer": {"__label__adult": 0.0003476142883300781, "__label__art_design": 0.00030350685119628906, "__label__crime_law": 0.0003633499145507813, "__label__education_jobs": 0.0003843307495117187, "__label__entertainment": 7.534027099609375e-05, "__label__fashion_beauty": 0.00016057491302490234, "__label__finance_business": 0.0002123117446899414, "__label__food_dining": 0.0004010200500488281, "__label__games": 0.0004563331604003906, "__label__hardware": 0.0012264251708984375, "__label__health": 0.0005655288696289062, "__label__history": 0.00028634071350097656, "__label__home_hobbies": 9.804964065551758e-05, "__label__industrial": 0.0004856586456298828, "__label__literature": 0.0002416372299194336, "__label__politics": 0.00031495094299316406, "__label__religion": 0.0005483627319335938, "__label__science_tech": 0.039947509765625, "__label__social_life": 9.167194366455078e-05, "__label__software": 0.00676727294921875, "__label__software_dev": 0.9453125, "__label__sports_fitness": 0.0003459453582763672, "__label__transportation": 0.0006165504455566406, "__label__travel": 0.00022923946380615232}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33381, 0.01636]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33381, 0.54535]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33381, 0.82566]], "google_gemma-3-12b-it_contains_pii": [[0, 2725, false], [2725, 5994, null], [5994, 8895, null], [8895, 11367, null], [11367, 14446, null], [14446, 17239, null], [17239, 19288, null], [19288, 22088, null], [22088, 23648, null], [23648, 25760, null], [25760, 28838, null], [28838, 31647, null], 
[31647, 33381, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2725, true], [2725, 5994, null], [5994, 8895, null], [8895, 11367, null], [11367, 14446, null], [14446, 17239, null], [17239, 19288, null], [19288, 22088, null], [22088, 23648, null], [23648, 25760, null], [25760, 28838, null], [28838, 31647, null], [31647, 33381, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33381, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33381, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33381, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33381, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33381, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33381, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33381, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33381, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33381, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33381, null]], "pdf_page_numbers": [[0, 2725, 1], [2725, 5994, 2], [5994, 8895, 3], [8895, 11367, 4], [11367, 14446, 5], [14446, 17239, 6], [17239, 19288, 7], [19288, 22088, 8], [22088, 23648, 9], [23648, 25760, 10], [25760, 28838, 11], [28838, 31647, 12], [31647, 33381, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33381, 0.00881]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
36f121591c5f61b5d53cddff4677eac9b1b33794
Memory-Based Machine Translation and Language Modeling Antal van den Bosch, Peter Berck Abstract We describe a freely available open source memory-based machine translation system, MBMT. Its translation model is a fast approximate memory-based classifier, trained to map trigrams of source-language words onto trigrams of target-language words. In a second decoding step, the predicted trigrams are rearranged according to their overlap, and candidate output sequences are ranked according to a memory-based language model. We report on the scaling abilities of the memory-based approach, observing fast training and testing times, and linear scaling behavior in speed and memory costs. The system is released as an open source software package¹, for which we provide a first reference guide. 1. Introduction Recently, several independent proposals have been formulated to integrate discrete classifiers in phrase-based statistical machine translation, to filter the generation of output phrases (Bangalore, Haffner, and Kanthak, 2007, Carpuat and Wu, 2007, Giménez and Màrquez, 2007, Stroppa, Van den Bosch, and Way, 2007), all reporting positive effects. This development appears an interesting step in the further development of statistical machine translation. These same developments can also be employed to produce simple but efficient stand-alone translation models. In this paper, we introduce MBMT, memory-based machine translation. The memory-based approach, based on the idea that new instances of a task can be solved by analogy to similar instances of the task seen earlier in training and stored in memory as such, has been used successfully before in various NLP areas; for an overview, see (Daelemans and Van den Bosch, 2005). MBMT is a stand-alone translation model with a simple decoder on top that relies on a memory-based language model. With a statistical word alignment as the starting point, such ¹http://ilk.uvt.nl/mbmt © 2009 PBML. All rights reserved. 
Please cite this article as: Antal van den Bosch, Peter Berck, Memory-Based Machine Translation and Language Modeling. The Prague Bulletin of Mathematical Linguistics No. 91, 2009, 17–26. Figure 1. An example training pair of sentences, converted into six overlapping trigrams with their aligned trigram translations. as produced by GIZA++ (Och and Ney, 2003), MBMT is shown to be very fast both in training and translation. The overall architecture of the system is described in Section 2. A brief evaluation and reference guide of the system is provided in Section 3. The language modeling module, also made available as a separate language modeling toolkit, is described in Section 4. We wrap up in Section 5. 2. Memory-based machine translation Memory-based machine translation (Van den Bosch, Stroppa, and Way, 2007) can be characterized as an instantiation of example-based machine translation (EBMT), as it essentially follows EBMT’s basic steps (Carl and Way, 2003): given a sentence in the source language to be translated, it searches the source side of the corpus for close matches and their equivalent target language translations. Then, it identifies useful source–target fragments contained in those retrieved examples; and finally, it recombines relevant target language fragments to derive a translation of the input sentence. The scope of the matching function implied in the first step is an important choice. We take a simplistic approach that assumes no linguistic knowledge; we use overlapping trigrams of words as the working units, both at the source language side and the target language side. The process of translating a new sentence is divided into a local phase (corresponding to the first two steps in the EBMT process) in which memory-based translation of source trigrams to target trigrams takes place, and a global phase (corresponding to the third EBMT step) in which a translation of a sentence is assembled from the local predictions. 
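The overlapping word trigrams used as working units can be sketched in a few lines. This is an illustrative sketch in Python, not the released implementation; the function name and the padding token are our own:

```python
def windowed_trigrams(sentence, pad="_"):
    """Turn a tokenized sentence into overlapping trigrams,
    one per token, padding the sentence boundaries so that the
    first trigram has an empty left element and the last an
    empty right element."""
    tokens = [pad] + sentence + [pad]
    return [tuple(tokens[i - 1:i + 2]) for i in range(1, len(tokens) - 1)]

# Each word is the center of exactly one trigram:
print(windowed_trigrams(["ik", "heb", "honger"]))
# → [('_', 'ik', 'heb'), ('ik', 'heb', 'honger'), ('heb', 'honger', '_')]
```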
We describe the two phases in the following two subsections. 2.1. Local classification Both in training and in actual translation, when a new sentence in the source language is presented as input, it is first converted into windowed trigrams, where each token is taken as the center of a trigram once. The first trigram of the sentence contains an empty left element, and the last trigram contains an empty right element. At training time, each source language sentence is accompanied by a target language translation. We assume that word alignment has taken place, so that we know for each source word whether it maps to a target word, and if so, to which. Examples are only generated for source words that align to target words. Given the alignment, each source trigram is mapped to a target trigram of which the middle word is the target word to which the word in the middle of the source trigram aligns. The left and right neighboring words of the target trigram are the center word’s actual neighbors in the target sentence. Figure 1 exemplifies the conversion of a training translation to six trigram mappings. During translation, source trigrams are matched against the training set of stored source trigrams with a known mapping to a target trigram. The matching is carried out as a discrete classification. To this purpose we make use of igtree² (Daelemans, Van den Bosch, and Weijters, 1997), which compresses a database of labeled examples into a lossless-compression decision-tree structure that preserves the labeling information of all training examples. Figure 2 (excerpt of an mbmt igtree structure, zooming in on the path represented by the input trigram “geen zin om”, translatable to “no point in”, among others) displays a fragment of the decision tree trained on the translation of Dutch trigrams to English.
It highlights one path in the tree, representing the Dutch trigram “geen zin om” (translatable, among others, into “no point in” and “no sense to”). The tree encodes all possible trigram translations of, respectively, the middle word “zin”, the bigram “geen zin” , and the full trigram. This order reflects the information-gain weights of the three words with respect to predicting the output class. During translation, igtree’s classification algorithm traverses the decision tree, matching the middle, left, and right words of each new trigram to a path in the tree. Two outcomes are possible: (1) igtree finds a complete matching path, upon which it returns the most probable output trigram; (2) igtree fails to match a value along the way, upon which it returns the most probable output trigram given the matching path so far. Instead of the most probable path, it is also possible for igtree to return the full distribution of possible trigrams at the end of a matching path. When translating new text, trigram outputs are generated for all words in each new source language sentence to be translated, since our system does not have clues as to which words would be aligned by statistical word alignment. ²http://ilk.uvt.nl/timbl 2.2. Global search To convert the set of generated target trigrams into a full sentence translation, the overlap between the predicted trigrams is exploited. Figure 3 illustrates a perfect case of a resolution of the overlap (drawing on the example of Figure 1), causing words in the English sentence to change position with respect to their aligned Dutch counterparts. The first three English trigrams align one-to-one with the first three Dutch words. The fourth predicted English trigram, however, overlaps to its left with the fifth predicted trigram, in one position, and overlaps in two positions to the right with the sixth predicted trigram, suggesting that this part of the English sentence is positioned at the end. 
Note that in this example, the “fertility” words take and this, which are not aligned in the training trigram mappings (cf. Figure 1), play key roles in establishing trigram overlap. In contrast to the ideal situation sketched in Figure 3, where one translation is produced, in practice many different candidate output sequences can be generated, for two reasons: first, each (potentially partially or fully incorrect) trigram may overlap with more than one trigram to the left or right, and second, the classifier may produce more than one output trigram at a single position, when it reaches a non-ending node with equally-probable trigram classes. To select the most likely output among the potentially large pool of candidate outputs, we employ a memory-based target language model (Van den Bosch, 2006). This model, called WOPR, described in more detail in Section 4, is a word prediction system based on IGTREE, trained on a monolingual target language corpus, which produces perplexity scores for each candidate output sequence presented to it. WOPR provides the language model in a different way than most standard models do. Most models, like for example SRILM (Stolcke, 2002), estimate probabilities of words in context and build a back-off model containing n-grams down to unigrams. WOPR uses a trigram model (it is not limited to trigrams; it could use any size and any context) but, because it uses the IGTREE algorithm, stores exceptions to default values rather than all n-grams. Other language models could be used, but we prefer WOPR because it uses the same IGTREE model that serves as the core translation engine in the MBMT system. As the number of possible output sequences may be large, MBMT currently applies Monte Carlo sampling to generate candidate output sequences to be scored by WOPR.
This sampling is subject to a patience threshold $p$ that halts the generation of new candidates when no improvement in perplexity scores is observed for $p$ sample steps. By default, $p = 100$. 3. Evaluation and an Annotated Example For the purpose of a brief evaluation, we first focus on the translation of Dutch to English, using the EMEA corpus, part of the Opus open source parallel corpus\(^3\). The EMEA corpus contains documents from the European Medicines Agency (http://www.emea.europa.eu/). Texts in this corpus are of a restricted genre, consisting of quite formal, exact, and largely controlled language. We used the first 749,602 lines of text (approximately 9.0 million English and Dutch words). The corpus was split into a final 1,000-sentence test set and a training set containing the remainder of the data. The training sets were word-aligned using the giza++ algorithm (Och and Ney, 2003). No decapitalization was performed. The igtree-based language model used for translation is a single model trained on the first 112 million words of the Reuters RCV1 corpus. We performed a learning curve experiment on the EMEA training set. We start at a training set size of 100,000 tokens, and increment with steps of 100,000 until 1 million tokens; then, we increment with steps of 1 million tokens up to the maximal training set size of 9 million tokens. The learning curve experiment serves to get an idea of the scaling abilities of MBMT in terms of performance; we also measure training and testing speeds and memory footprint. The learning curve experiment on the EMEA corpus produced performance curves, two of which we combine in the left graph of Figure 4: the bleu and meteor (exact) scores. Both curves show a steady but somewhat weakening increase when the dataset doubles in size (note that the x axis is logarithmic). Second, the middle graph of Figure 4 displays the number of seconds it takes to construct a decision tree, and to test.
Testing occurs in a few seconds (up to eight seconds for 1,000 sentences, with an approximately linear increase of one second of testing time with each additional million training examples); the test graph virtually coincides with the x axis. Training times are more notable. The relation between training times and number of training examples appears to be linear; on average, each additional million training examples makes training about 130 seconds slower. Third, the right graph of Figure 4 shows a similar linear trend of the memory footprint needed by igtree and its decision tree, in terms of Megabytes. At 9 million training examples, the decision tree needs about 40 Mb, an average increase of 4.4 Mb per additional million examples. As a second evaluation, we compare against the performance and training and testing times of moses on the EMEA corpus at the maximal training set size. Table 1 lists the performance on the test data according to word error rate, position-independent word error rate, bleu, meteor, and nist. As is apparent from the results, moses performs at a markedly higher level of performance, but does so at the cost of a longer translation time: \textsc{mbmt} is about 20 times as fast. Training \textsc{mbmt} is about 10 times as fast as training \textsc{moses}; in both cases, the \textsc{giza++} process has already been performed and is not included here. <table> <thead> <tr> <th>System</th> <th>WER</th> <th>PER</th> <th>BLEU</th> <th>Meteor</th> <th>NIST</th> <th>Training (h:m:s)</th> <th>Test (m:s)</th> </tr> </thead> <tbody> <tr> <td>\textsc{mbmt}</td> <td>72.7</td> <td>63.6</td> <td>0.238</td> <td>0.460</td> <td>4.97</td> <td>20:17</td> <td>0:08</td> </tr> <tr> <td>\textsc{moses}</td> <td>46.6</td> <td>39.4</td> <td>0.470</td> <td>0.650</td> <td>7.06</td> <td>3:10:06</td> <td>2:51</td> </tr> </tbody> </table> Table 1. Comparing mbmt against moses on the EMEA corpus, in terms of five MT evaluation metrics, and training and testing times (elapsed wallclock time). **Annotated Example** The \textsc{mbmt} software assumes a \textsc{giza++}-style A3 file, i.e., a word alignment file containing all aligned source and target training sentences, as training material. The software will convert this training file into an \textsc{igtree} decision tree, and is then capable of translating a raw text file in the source language (tokenized, one sentence per line) into a translated raw text file in the target language (also one sentence per line). The commandline functionality is currently limited to the identification of the A3 training file, and the source-language test text, plus the optional setting of the patience threshold to a non-default setting, e.g. \( p = 50 \), with the \(-p\) switch: \begin{verbatim} mbmt -t EMEA.9m.train.A3.final -t EMEA-dutch.test.txt -p50 \end{verbatim} During runtime, \textsc{mbmt} generates several intermediary files. First, the A3 file is converted to a training file suited for \textsc{igtree}, mapping source-language trigrams to aligning target-language trigrams (cf. Figure 1). Subsequently, this file is compressed into an \textsc{igtree} decision tree, at a typical compression rate of about 95%. The test set is also converted into trigram instances (one instance per word), which are then classified by IGTREE. This output is stored in a file of which the first line looks as follows: ``` na de behandeling ? after_the_end { after_the_end 3.00000, , _the_rate 3.00000, after_the_treatment 3.00000 } ``` The Dutch trigram `na de behandeling` ("after the treatment") is classified by IGTREE as mapping to three equally likely trigram translations.
These three translations will be carried along to the final phase, where all predicted trigrams are used to generate possible translations, using a Monte Carlo sampling method with a halting criterion governed by the patience parameter p. Each candidate output sequence is scored by WOPR, the memory-based language model. 4. Memory-based language modeling The MBMT system generates a number of candidate translations for each input sentence. The typical approximate solution to picking the best translation is to use a language model to determine which translation fits best in the target language, e.g. selecting the candidate string with the lowest perplexity score. In the MBMT system, WOPR is the language model. It is a word predictor based on IGTREE, trained to predict the next word in a sentence (Van den Bosch, 2006). To calculate the perplexity of a sentence, we feed it to WOPR and see which words it predicts for each word in the sentence. The perplexity is calculated from the estimated probabilities of each prediction. A prediction is a classification by IGTREE based on a local context of preceding words. In contrast with how IGTREE is used in the MBMT translation module, the word predictor classifier in WOPR produces class distributions (with more than one class if the classification occurs at a non-ending node). Thus, WOPR usually returns more than one word for a given sequence, together with a probability based on frequency counts. This distribution of possible answers is used to calculate a perplexity value. There are three possibilities: (1) If the distribution returned by IGTREE contains the correct word, we take the probability of the word in the distribution; (2) If the distribution does not contain the correct word, we check if it is in the lexicon. If it is, the lexical probability is taken; (3) If it is not in the lexicon, a probability for unseen items is used that is estimated through Good-Turing smoothing.
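The three-way probability lookup above, together with a per-word perplexity computed from the resulting probabilities, can be sketched as follows. The function names and data structures here are ours, not WOPR's API:

```python
import math

def word_prob(word, distribution, lexicon, unseen_prob):
    """Back off from the classifier's predicted distribution,
    to the lexical probability, to an unseen-item estimate
    (in WOPR the latter comes from Good-Turing smoothing)."""
    if word in distribution:      # case 1: word is in the predicted distribution
        return distribution[word]
    if word in lexicon:           # case 2: fall back to the lexicon
        return lexicon[word]
    return unseen_prob            # case 3: unseen-item probability

def sentence_perplexity(probs):
    """Perplexity = 2 ** (average of -log2 p over the sentence)."""
    avg = -sum(math.log2(p) for p in probs) / len(probs)
    return 2 ** avg
```

For example, a sentence whose two words each receive probability 0.5 gets a perplexity of 2.0.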
WOPR calculates the sum of $-\log_2(p)$ over all the probabilities (one for each word in the sentence), and divides this by the number of words to obtain the average over the sentence. The perplexity value is two to the power of this average. Annotated Example Besides its language modeling functionalities of predicting words and measuring perplexities, WOPR also provides the necessary tools to prepare the data, create datasets and train its prediction models. The following shows how a memory-based language model can be created starting from plain text data. Let us assume the file is called `corpus1.txt`. WOPR commands generally have two parameters. The first one, -r, tells \texttt{wopr} which subroutine or tool to run. The second, -p, is a comma-separated list of keyword:value pairs which specify the different parameters. As a first step, \texttt{wopr} is used to create a lexicon, which in this case is a list of words and their frequency in the corpus. It also generates a list with “counts of counts”, which is used in Good-Turing smoothing of probabilities. \begin{verbatim} wopr -r lexicon -p filename:corpus1.txt \end{verbatim} \texttt{WOPR} creates output file names based on the input file name and the command that is executed. In this case, it creates two files called \texttt{corpus1.txt.lex} and \texttt{corpus1.txt.cnt}. Next, we generate our windowed dataset. In this example we use a window size of three previous words. The resulting file is called \texttt{corpus1.txt.ws3}. \begin{verbatim} wopr -r window_s -p filename:corpus1.txt,ws:3 \end{verbatim} We want to discard words with a frequency of five or less from our data set, and replace them with a special token \texttt{<unk>}. This is done with the following command: \begin{verbatim} wopr -r hapax -p filename:corpus1.txt.ws3,lexicon:corpus1.txt.lex,hpx:5 \end{verbatim} We then train our instance base.
\texttt{WOPR} is used as a wrapper in this case, and most of the work is done by \texttt{igtree}. This could take some time, depending on the size of the data, but once the decision tree has been created and saved, it can easily be read in and used again. \begin{verbatim} wopr -r make_ibase -p corpus1.txt.ws3.hpx5,ibasefile:corpus1.txt.ws3.hpx5.ibase,timbl:"-a1 +D" \end{verbatim} Now we are ready to run our word predictor on a test file. The command to do this is as follows: \begin{verbatim} wopr -r pplxs -p filename:test1.txt.ws3.hpx5,ibasefile:corpus1.txt.ws3.hpx5.ibase,timbl:"-a1 +D" \end{verbatim} The test data is prepared in the same way as the training data. The following shows a line of output from \texttt{wopr}. It shows an instance (\texttt{I would like}), the following word (to) and the (in this case correct) guess from the predictor, to. \begin{verbatim} I would like to to -0.351675 1.8461 1.27604 65 [ to 768 the 34 a 20 it 12 an 12 ] \end{verbatim} This is followed by a number of statistics. The logprob of the prediction is -0.351675. The entropy of the distribution returned by \texttt{igtree} is 1.8461 ($- \sum p \log_2(p)$). The third number shows the word level perplexity ($2^{-\text{logprob}}$). The last number shows the number of elements in the distribution, in this case 65. This is followed by a top 5 of the distribution returned (with counts). It is also possible to run wopr in server mode, communicating over a socket connection with a client; in fact, this is how it is incorporated in the mbmt system. In server mode, wopr will wait for a connection from another program and process the data it receives. The answer is sent back over the same connection. 5. Discussion We have released mbmt, a straightforward translation model based on a fast approximation of memory-based classification.
The approach fits into the ebmt framework; it models the mapping of sequences of word spans (here, word trigrams) in the source language to word trigrams in the output language. We showed that mbmt scales well to increased amounts of learning material. Within the current experiments we observed that training time and memory storage costs are approximately linear in the number of training examples. Translation speed on unseen data is very fast; our test set of 1,000 sentences was processed within seconds. Based on these results, we conclude for now that memory-based machine translation systems may be relevant in cases in which there is a need for fast and memory-lean training and/or classification. The low memory footprint may be additionally interesting for implementations of such systems in limited-capacity devices. As a separate component of mbmt we have released the memory-based language model software package wopr, which can also be used in isolation for general language modeling purposes. Wopr offers its functionality through command line options, but can also run in server mode; this is how mbmt uses wopr. Acknowledgments This research is funded by NWO, the Netherlands Organisation for Scientific Research. We are grateful to Nicolas Stroppa, Andy Way, and Patrik Lambert for discussions and help with the comparison with Moses. Bibliography
{"Source-Url": "https://pure.uvt.nl/ws/portalfiles/portal/1160048/Van_den_Bosch_-_Berck.pdf", "len_cl100k_base": 5148, "olmocr-version": "0.1.49", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 27593, "total-output-tokens": 6369, "length": "2e12", "weborganizer": {"__label__adult": 0.0005655288696289062, "__label__art_design": 0.0006422996520996094, "__label__crime_law": 0.0007166862487792969, "__label__education_jobs": 0.0025081634521484375, "__label__entertainment": 0.00037550926208496094, "__label__fashion_beauty": 0.0003018379211425781, "__label__finance_business": 0.00044345855712890625, "__label__food_dining": 0.0005540847778320312, "__label__games": 0.0010633468627929688, "__label__hardware": 0.0010728836059570312, "__label__health": 0.00127410888671875, "__label__history": 0.0005030632019042969, "__label__home_hobbies": 0.0001264810562133789, "__label__industrial": 0.0008225440979003906, "__label__literature": 0.0042724609375, "__label__politics": 0.0005984306335449219, "__label__religion": 0.0009369850158691406, "__label__science_tech": 0.432861328125, "__label__social_life": 0.00025081634521484375, "__label__software": 0.0303955078125, "__label__software_dev": 0.51806640625, "__label__sports_fitness": 0.0004763603210449219, "__label__transportation": 0.0008182525634765625, "__label__travel": 0.00023090839385986328}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24751, 0.03182]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24751, 0.63113]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24751, 0.88684]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 2174, false], [2174, 4517, null], [4517, 7116, null], [7116, 10331, null], [10331, 12879, null], [12879, 14767, null], [14767, 17842, null], [17842, 20329, null], [20329, 23122, null], [23122, 24751, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 2174, true], [2174, 4517, null], [4517, 7116, null], [7116, 10331, null], [10331, 12879, null], [12879, 14767, null], [14767, 17842, null], [17842, 20329, null], [20329, 23122, null], [23122, 24751, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24751, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24751, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24751, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24751, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24751, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24751, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24751, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24751, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24751, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24751, null]], "pdf_page_numbers": [[0, 0, 1], [0, 2174, 2], [2174, 4517, 3], [4517, 7116, 4], [7116, 10331, 5], [10331, 12879, 6], [12879, 14767, 7], [14767, 17842, 8], [17842, 20329, 9], [20329, 23122, 10], [23122, 24751, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24751, 0.03571]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
5eecd2f1e6a42d68a44c98b666d9ba14eee41795
Document Decomposition into Geometric and Logical Layout Vincent Deo Stanford University vdeo@stanford.edu Terry Kong Stanford University tckong@stanford.edu Maisy Wieman Stanford University mwieman@stanford.edu Abstract We present an Android application for scanning LaTeX documents to determine the logical layout of the document. The algorithm first prepares the image for processing, then determines where text and figures are within a document, and finally classifies these various components of a document. Chapter 1 Introduction One popular method of producing aesthetically pleasing PDF documents containing diverse content is LaTeX. LaTeX is a low-level markup and programming language that allows high flexibility for designing placement of text and figures, as well as overall document structure. However, this precision is hard to reproduce; once a PDF document is generated, there is no way in general to access the code used to generate the document. In particular, it is very difficult to recreate the template used to design a document. This project aims to analyze the layout of a PDF document in order to simplify the generation of LaTeX templates. Images are taken on an Android device and sent to a server to be processed\(^1\). Once sent to the server, the image is processed in a combination of Matlab and Java, and the final output is saved on the server. The algorithm comprises three main steps: preprocessing, detecting maximal white rectangles, and classifying components of the document.
[Pipeline overview: Android image capture → Binarization → Skew Correction → Margin Removal → Preprocess Connected Components → Maximal White Rectangles → Postprocess White Rectangles → Analyze Horizontal Projections → Global and Local Feature Comparison] \(^1\)This is the typical pipeline for image processing tasks and is preferred in this project because of the computationally intensive tasks. Chapter 2 Preprocessing Before analyzing the image, a reliable binarized representation of the image must be found to allow for proper interpretation of various components of a document. To optimally apply further document analysis, the image must be vertically aligned without a background. The image is captured with a mobile phone, and the document in the picture may be rotated or have low contrast. This section discusses our method of addressing these concerns in order to prepare the image for layout analysis. 2.1 Binarization Lighting conditions may be uneven, so adaptive Otsu thresholding is used to binarize segments of the image. Overlapping windows of a fixed size are applied to the image, and for each window, the variance is compared to some threshold to determine whether to apply the Otsu threshold. This threshold is partially determined by the mean and variance of the entire image, since images that are taken with low lighting levels will have lower means and variances. After all windows have been processed, any pixel that has been evaluated as black more than twenty percent of the time is determined to be black. 2.2 Skew Detection Images taken at an angle must be rotated so that text lines are parallel to the image edges. The Hough transform is used to determine the angle to which the image is rotated.
Rather than applying the Hough transform to the whole image (which will detect lines that stretch diagonally across text, as well as dominant lines in the background), we apply the Hough transform to smaller portions of the image. We determine which regions are most likely to be text by evaluating the variance, and we take the Hough transform of regions where the variance is high. For each window, we find the histogram of angles, and sum the contributions. We assume that the image will be rotated by an angle less than forty-five degrees. [Figure: (a) Detecting Hough transform lines of small portions of an image. (b) Theta values of Hough peaks.] Because of this, we can combine the contributions of perpendicular lines; a page with text going in one direction will often have lines perpendicular to the text, whether those lines represent the edges of the paper or the edges of figures. When the input angle is constrained, we can interpret perpendicular lines as representing the same input angle. We rotate the image by the angle that occurs most often in this new histogram of combined perpendicular lines. 2.3 Background and Margin Removal To remove the background, we find the edges of the document. If the background is in contrast with the paper, binarization will detect high variance at the edges of the document, and create dark edges around the document. We find these dark edges by creating a histogram detailing the longest consecutive line of dark pixels for each row and column, and find where the number of consecutive dark pixels is greater than average, which occurs at the longer stripes across the edges. Once we have determined the boundaries of the paper, we must then remove the margin. We apply a median filter to denoise the image, and find the first and last rows and columns with black pixels to remove the margin. Chapter 3 Layout extraction by Maximal White Rectangle analysis.
3.1 Approach and Purpose

The Maximal White Rectangles (MWR) approach for document layout analysis was developed by Henri S. Baird in the 1990s. With recent dramatic improvements in computing power, this technique, formerly either very constrained or very slow, is now practical for full-scale 150 to 300 dpi binarized documents. The idea of MWR analysis comes from a global-to-local, non-backtracking document analysis approach matching the implicit approach of a human reader. Indeed, the eye is first attracted to the important white spacing, from which we deduce the reading order: first the margins, then the title is isolated, the columns of text are separated, then the paragraph boundaries, and so on. Similarly, the MWR algorithm enumerates all the maximal (by inclusion order) empty rectangles in a document image and outputs them sorted by an area-based metric. From that point, we select a given percentile of the highest-scoring MWRs, whose union creates a template of empty zones in the original image (see figure 3.1). The remaining area uncovered by selected MWRs is separated into connected components, which are likely to correspond to logical blocks in the document, given that they are surrounded by large empty areas but do not contain such large white areas themselves. The purpose of MWR analysis in this context is to produce this overlay, and then to use the relative positions of the blocks to determine their reading order and logical functions.

3.2 Algorithm

The algorithm used to enumerate all the maximal white rectangles in a binary image is described in this section. It is a balanced-tree-based algorithm whose overall complexity is $O(n \log(n))$, where $n$ is the number of maximal white rectangles, which itself amounts to $O(m^2)$ in the worst case, with $m$ the number of black pixels.

3.2.1 Preprocessing

For runtime reduction, we use a connected-component reduction preprocessing step.
Each black connected component is extracted from the original image, from which we extract a pixel-aligned rectangular frame surrounding it. Only two sides of this frame are kept, the left and top ones. This step reduces every connected component to a 1-pixel-thin ‘Γ’-like shape, which can then be downsampled with very little loss of information. The templates of the connected components are stored for overlaying after MWR enumeration. With this preprocessing, we achieve a reduction of the number of black pixels to process of typically 70-80% before downsampling, which amounts to a total reduction of 85-95% after downsampling. A typical 300 dpi letter-format document (8.5 Mpx) contains about 1,000,000 black pixels in 10,000 connected components, which is reduced to a preprocessed image of 1 Mpx with 50,000-150,000 black pixels.

3.2.2 MWR Enumeration Core Algorithm

The algorithm sweeps across the document from left to right, processing all black pixels. The pending maximal white rectangles are kept in a balanced tree structure in which every node represents a rectangle. Such a rectangle is represented by its upper, lower and left bounds, until a black pixel interrupts the rectangle by imposing a right bound. The tree structure derives naturally from the process of growing maximal white rectangles while sweeping from left to right. ‘Thin’ white rectangles run from the left margin to the current column, in between the already-found black pixels. Taller rectangles start at the right of already-processed black pixels. See figure 3.2. The tree is kept balanced with the following invariants:

• The leaves of the tree (from left to right) form an uninterrupted partition of the height of the image, all being rectangles starting at the left edge.

• The root is a rectangle covering the whole height of the paper, with its left edge starting at the last (hence rightmost so far) processed black pixel.
• For any node, the leaves of the subtree below span an exact partition of the height span of the rectangle represented by this node.

• For any node, the left bound is larger than the left bound of all rectangles in the subtree below.

For the processing step displayed in figure 3.2, the rectangle tree would be the following, with the syntax ([top, bottom], left) for a running rectangle.

```
                     ([0,3], 2.5)
                    /            \
         ([0,1.5], 1.5)        ([1.5,3], 0.5)
         /          \          /            \
([0,0.5], 0) ([0.5,1.5], 0) ([1.5,2.5], 0) ([2.5,3], 0)
```

The invariants described above are preserved by the following process for integrating a new black pixel:

• Find the lowest node that is split in the vertical direction by this black point. Remove all the split nodes from the tree, outputting them as finished MWRs.

• Split the tree along the split node and its fathers (which are all split as well). A rebalancing operation, not described here, ensures that the depth does not increase dramatically during these operations.

• Merge the two trees generated by the previous step under a new root: a rectangle starting at the current black pixel and spanning the whole height.

3.2.3 Implementation

It is to be noted that the algorithm described above processes point-like black dots in a continuous 2D space. Optimizations can be made when we know black dots are actually 1 px wide and necessarily lie on a grid. Otherwise, we just need to discard 1-unit-wide rectangles as nonexistent (lying in between contiguous black pixels). For performance, the algorithm was implemented in the Java language, achieving a runtime of about 8 seconds for a typical document.

3.3 Post-processing

The enumeration successively outputs finished maximal white rectangles from the binary layout. These rectangles are immediately pushed into a priority queue that keeps them sorted by a user-defined metric.
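A sketch of this scoring and selection step in Python (the project's own implementation is in Java); the metric follows Baird's formula stated in the next paragraph, and rectangles are hypothetical `(x0, y0, x1, y1)` tuples introduced for illustration:

```python
import heapq
import math

def mwr_score(rect):
    """Baird's empirical metric: area * log(min(aspect ratio, 16))."""
    x0, y0, x1, y1 = rect
    w, h = x1 - x0, y1 - y0
    if w <= 0 or h <= 0:
        return 0.0
    aspect = max(w, h) / min(w, h)
    return (w * h) * math.log(min(aspect, 16.0))

def select_top_mwrs(rects, fraction=0.01):
    """Keep only the highest-scoring fraction of rectangles; their union
    forms the template of empty zones overlaid on the document."""
    k = max(1, int(len(rects) * fraction))
    return heapq.nlargest(k, rects, key=mwr_score)
```

Note that a square rectangle scores zero (log 1 = 0), while long, wide gutters between columns score highest, which matches the separators we want to keep.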
Empirically, the rectangles that best separate the logical blocks without cutting through them are the large rectangles with a large aspect ratio, such as ones running between columns, between two paragraphs, or between a figure and its caption. For that purpose, the following classifying metric was empirically developed by Henri S. Baird, and it is the one we used in this project, with encouraging results. The ceiling of the aspect ratio at 16 is somewhat arbitrary but prevents inflating the score of, for instance, very thin rectangles that run in between characters.

\[ \text{score} = \text{area} \times \log(\min(\text{aspect ratio}, 16)) \]

Finally, a percentile of top-scoring rectangles has to be chosen to construct the image layout. In this project, we have used between 1-5%, providing typical results as shown in figure 3.1. Typically our preprocessed images contain about 15,000 MWRs, and 100-200 MWRs are a good quantity to determine the block-by-block layout without separating lines with smaller rectangles. After this MWR layout processing, logical components are isolated and can be analyzed either independently or together to determine their nature and logical connections, which is done in the following steps of the project.

¹This was produced with 1% of all MWRs.

Chapter 4 Classification

Once the maximal white rectangles have been calculated, a subset of the largest rectangles can be used as partial covers to segment content in the document. The union of these partial covers creates a mask for the document, where connected components in the mask are blocks of content. It is crucial that the chosen subset of partial covers correctly separates different types of content; otherwise the classification will be meaningless.

4.0.1 Assumptions

There are many types of document layouts that each have different conventions for properties such as caption position or footer extent.
This required us to limit the scope of document layouts the algorithm may accept in order to guarantee reliable classification. Academic papers and textbook pages tend to follow the conventions that we aim to cover, but more generally, any document that follows this set of rules should be correctly decomposed and classified:

- The document should be in Manhattan layout; in general, any one type of content in the document must have a rectangle that circumscribes only that content and no other type of content.
- Text is horizontal.¹
- Figures must have captions, and the captions must be proximal to and appear below the figure.
- Page numbers must be centered at the bottom of the page.

The types of content that we aim to classify with this set of rules are text, figures, captions, page numbers and miscellaneous objects, e.g., horizontal and vertical dividing bars. First, a global-to-local classification is done to classify a block of content as text, figure, or neither, and then local classification is done to classify types of text.

¹Vertical text would be recognized as a figure.

4.0.2 Global-to-Local Text Classification

Within global-to-local classification, there are two main types of content: "large" content and "small" content. The terms "large" and "small" are used loosely here because they qualify both area and aspect ratio. More on this is discussed in subsequent sections.

"Large" Content

Given that a block of content is "large," it is almost certain that this block is either text or a figure. The approach to classifying a large block involves taking the horizontal projection and analyzing its autocorrelation, an approach based on [1]. The fundamental idea is that large blocks of text have multiple parallel lines of text separated by line breaks. Once a horizontal projection is taken, it can be considered a quasiperiodic signal.
If there is a strong fundamental frequency component in this signal, corresponding to the line-break spacing, the block is considered text. The blue line in the plot of Figure 4.3b is an example of a horizontal projection of a block of text. While designing the classification step, it was observed that the horizontal projection near the top and bottom borders of a text block negatively affected classification; so in each projection, the ends are removed. The edge-removed projection is the red plot in Figure 4.3b. The details of this are not crucial, so further explanation is not needed. Instead of analyzing the raw signal to determine its fundamental frequency, it is easier to analyze the autocorrelation of the signal. This is because the autocorrelation of a periodic signal is again periodic. Another reason the autocorrelation is preferred is that it is essentially a summation of many random variables, which smooths the data. Taking a Fourier transform is the natural next step; however, in this project the fundamental frequency was calculated in the primal domain,² by finding the mode distance between a subset of the most significant maxima, which offered a more robust way of rejecting figure blocks. Figure 4.3c shows an example of the autocorrelation of a horizontal projection with marks on the most significant peaks.

²By convolution.

Figure 4.2: Figure Block Analysis: (a) figure block; (b) the horizontal projection, with its mean subtracted, of the block in (a); (c) the autocorrelation of the horizontal projection with the border effects removed.

Figure 4.3: Text Block with Skew Analysis: (a) text block with skew; (b) the horizontal projection, with its mean subtracted, of the block in (a); (c) the autocorrelation of the horizontal projection with the border effects removed.
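This projection/autocorrelation test can be sketched as follows; the 10% border trim, the minimum peak count, and the peak-spacing tolerance are illustrative stand-ins for the project's tuned values:

```python
import numpy as np

def is_text_block(block, min_peaks=3, tol=0.25):
    """Classify a binary block (1 = ink) as text if the autocorrelation of
    its horizontal projection shows evenly spaced significant maxima."""
    proj = block.sum(axis=1).astype(float)
    trim = len(proj) // 10                 # drop border rows (illustrative 10%)
    if trim:
        proj = proj[trim:-trim]
    proj -= proj.mean()                    # mean-subtracted projection
    ac = np.correlate(proj, proj, mode='full')[len(proj) - 1:]
    # Positive local maxima at positive lag are the 'significant' peaks.
    peaks = [i for i in range(1, len(ac) - 1)
             if ac[i] > ac[i - 1] and ac[i] >= ac[i + 1] and ac[i] > 0]
    if len(peaks) < min_peaks:
        return False                       # no fundamental period: a figure
    gaps = np.diff(peaks[:min_peaks + 1])  # spacing of the first few peaks
    return bool(gaps.std() <= tol * gaps.mean())
```

On a synthetic block with ink rows repeating every 10 rows, the peaks land at lags 10, 20, 30, ... with constant spacing, so the block is accepted as text; a uniform block has a zero autocorrelation and is rejected.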
If the autocorrelation did not have enough significant maxima to establish a fundamental frequency/period, the block was classified as a figure. As a comparison, Figure 4.2 shows the autocorrelation of a figure's horizontal projection with the most significant peaks marked. This use of the autocorrelation to classify large text blocks is robust against slight variations in skew angle, because the horizontal projection remains quasiperiodic thanks to the periodicity of the line breaks. This is shown in Figure 4.3.

"Small" Blocks

To classify smaller blocks, the approach involved categorizing each block by its area, aspect ratio, and vertical projection to surmise its type. This idea isn't completely new and still appears in the literature [2] as a reasonable approach to this kind of classification. The rules implemented in this project were:

- If the height of the bounding box is shorter than some threshold, the block is not a figure.
- If a block small in area has a large aspect ratio, it is either a line of text or a horizontal divider, depending on the variance of its vertical projection.
- If a block has small area and a close-to-unity aspect ratio, it is either text, like a page number, or part of a figure.

These rules are not meant to comply with every document layout; however, for the document layouts of interest, they worked well.

4.0.3 Local Classification

Once all of the blocks are classified as text, figure, or neither, we can double back and further determine what kinds of text blocks we have. This is important since our goal is to classify document parts with fine granularity to make the layout classification task easier. Another advantage of doubling back is that a block classified as a figure that has no caption underneath it must have been incorrectly classified in the global-to-local step and can be corrected. This improves the accuracy of classification.
The rules that were implemented in this project were:

- Captions must appear proximal to and below a figure. There can only be one caption per figure.
- Page numbers are "small" blocks of text that are centered with respect to the entire page and must be below a certain point on the page. The fact that they are "small" blocks prevents misclassification of a footer.

Footers were not classified, since they are usually separated from text by a horizontal line. If the skew of the binary document is incorrect, this horizontal line will not be distinguishable from text using only a horizontal or vertical projection, without OCR.

³The convex hull of a connected component from the MWR mask.

Chapter 5 Results

Since the image pipeline involves executing Matlab code, it was a logical choice to run the image processing off the device. With the current implementation in Java and Matlab running on a 2.3 GHz Intel Core i7, a typical picture of a document taken on an 8 MP camera takes approximately 25 seconds to process. The results of 73 pictures of documents conforming to the IEEE conference format are compiled below in Figure 5.1. The numbers on top of each bar are the number of documents, out of 73, that contained that content type.

Figure 5.1: Tested Accuracy

¹Paragraphs were considered as one block, and if all lines of text within a paragraph were classified as text, the paragraph was considered correctly classified.

Chapter 6 Conclusions

As the results suggest, text is the most correctly classified content type, owing to the fact that the priority of the classification step is determining whether or not a block is text with the autocorrelation method. This bias in the design of the classification algorithm led to the bias in the results. Classification could be made even more robust if an OCR engine were used to confirm the character components in text blocks, which suggests a route for future work.
Another observation was that accuracy was heavily dependent on the skew estimation in the preprocessing step. While the classification step may still be correct if a block is slightly skewed, the maximal white rectangle analysis may not return maximal rectangles that segment content properly. This is a major contributor to the error observed in Figure 5.1. One way to improve this would be to extend the allowable transforms from affine transforms to homographies, which would allow for perspective corrections; these were an issue in a few test cases. Future work for this project includes developing a bottom-up approach incorporating OCR to transcribe the text in each text block, as well as producing the LaTeX code that recreates the layout of the document.

Bibliography
Data Encryption and Redaction: A Review of Oracle Advanced Security

A SANS Product Review
Written by Dave Shackleford
September 2014
Sponsored by Oracle

The need for organizations to protect sensitive information has never been greater. The risks of data breaches and sensitive data exposures are driving organizations to look for solutions, because an increasing amount of data is being stored and processed outside the perimeter, in cloud applications and service environments. Organizations must protect this sensitive data at its heart, in the database. In the past, database security has focused on protecting data from access by a database administrator (DBA) or other internal users. Although this is still a valid use case, especially in cloud environments where DBAs may be responsible for databases with numerous customer records, the more pressing focus for many is to ensure records are protected from malicious intruders or accidental exposure. Data breaches linked to the loss of backup tapes, disk drives or flash drives can result in the loss of payment card information, personal health information and many other types of data that are sensitive and regulated by compliance mandates such as HIPAA or PCI DSS. For compliance reasons alone, many organizations have sought to encrypt data; today, the use cases go far beyond compliance. Security teams are looking to encrypt database information as a fundamental control to safeguard the organization if any storage media or data files are stolen or lost. We had the opportunity to review Oracle Advanced Security for Oracle Database 12c, which offers two main features for protecting sensitive information in databases.
The first, Oracle Transparent Data Encryption (Oracle TDE) is a flexible encryption solution that allows for either column encryption or complete tablespace encryption. The second is Oracle Data Redaction, which removes or redacts columns of sensitive data on the fly during output to applications. We found Oracle Advanced Security’s encryption and redaction capabilities to be top-notch. The product has a wide range of features and—after spending some time with the tools and management interface—we were able to easily and transparently encrypt and redact data. In our testing, performance was barely affected at all, making this an attractive option for database administrators as well as security teams. As one might guess from the name, the concept of transparent data encryption (TDE) enables encryption of individual table columns or an entire tablespace without any special effort on the part of the application designer or end users. When a user (either directly or through an application) inserts data into an encrypted column or tablespace, a TDE-enabled database automatically encrypts the data; when authorized users select the column (or tablespace) the encrypted data is automatically decrypted and returned. TDE offers several benefits to organizations: - **Encrypted data is transparently decrypted for the database user.** By storing encrypted data, organizations protect themselves from breaches related to the storage system. - **Developers and users do not have to create triggers or views to decrypt data.** No special actions are required, which provides a better user experience. - **Applications don’t need to be modified to handle encrypted data.** The database engine alone manages all encryption and decryption functions. Our evaluation of Oracle TDE followed the steps for encrypting a specified tablespace and viewing the data before and after the encryption operation. 
**Key Management**

Key management is perhaps the most critical part of any encryption scheme, so we began our evaluation with this fundamental element. Oracle Advanced Security uses a tiered key management infrastructure, where keys can be stored in a software keystore on a local file system, on a centralized key server,¹ or in a hardware security module (HSM). A software keystore is likely more flexible and initially costs less to implement; however, the security of the software keystore is tied directly to the local file system and the platform where it is installed. This risk is largely mitigated by implementing an HSM platform, but HSMs can be more expensive to implement and may be incompatible with some applications.

---
¹ Oracle Key Vault, the company's centralized key management platform ([www.oracle.com/us/products/database/security/key-vault/overview/index.html](http://www.oracle.com/us/products/database/security/key-vault/overview/index.html)), was released after our testing concluded and is not part of this review.

An Oracle wallet is a software keystore that typically contains master encryption keys, TLS certificates, private keys and the Oracle Secure External Password Store (SEPS), which stores user and password information for automating database server logins; it can be one of the following types:

- **Password-protected keystores.** These are secured with a password that you create. You must open the keystore before the keys can be retrieved or used. This is a simple type of keystore to generate that still affords some access control security.

- **Auto-login keystores.** These are automatically opened when accessed. Auto-login keystores don't need to be explicitly opened by a security administrator and are, therefore, less secure than password-based keystores, since they do not have any explicit access control measures built in, relying instead on the file system's permissions.
- **Auto-login local keystores.** These are, naturally enough, auto-login keystores that cannot be opened on any computer other than the one on which they are created. Thus, even if they are stolen, they cannot be used elsewhere. Encrypting data using Oracle TDE starts with creating a keystore file to store the master encryption key by using the **ADMINISTER KEY MANAGEMENT** SQL command; as part of this, the keystore is secured with a password of your choice. Once the keystore was created, we could then use the password to open the keystore that enables the database to access the master encryption key. When a user writes data to an encrypted tablespace or column, Oracle Database: 1. Retrieves the master key from the keystore (performed only the first time the keystore is opened, because the key is cached for continued use). 2. Decrypts the specific encryption key associated with the column or tablespace using the master key (again, the key is cached after the initial query of the tablespace). 3. Uses the encryption key to encrypt the data entered by the user. 4. Stores the data in encrypted format in the database. If the user is selecting data, Oracle Database follows the same steps, but decrypts the data and then returns the original data. We found encrypting with Oracle TDE to have a minimal impact on performance, although the method employed can have an effect. Column encryption affects performance only when data is retrieved from or inserted into an encrypted column.\(^2\) Tablespace encryption provides better performance than column encryption because Oracle Database encrypts and decrypts at the I/O block layer. Once blocks are decrypted, they are cached in Oracle Database’s working memory for optimal performance. (Although we did not test performance in our review, Oracle claims its customers report minimal performance impact when using TDE.) To evaluate Oracle TDE’s key management, we logged into a test environment the company provided for us. 
First, we validated the encryption keys and keystore that Oracle’s lab team had set up for our review, then set the keystore to “open” status to receive data, and the master key was made available for encryption and decryption operations, as shown in Figure 1. ```sql SYS@<db> SQL> alter session set container=<db>$root; Session altered. SYS@<db> SQL> select status from v$encryption_wallet; STATUS ------------------------ CLOSED SYS@<db> SQL> administer key management set keystore open identified by "Oracle123"; keystore altered. SYS@<db> SQL> select status from v$encryption_wallet; STATUS ------------------------ OPEN SYS@<db> SQL> alter session set container=<db1>; Session altered. SYS@<db> SQL> select status from v$encryption_wallet; STATUS ------------------------ CLOSED SYS@<db> SQL> administer key management set keystore open identified by "Oracle123"; keystore altered. SYS@<db> SQL> select status from v$encryption_wallet; STATUS ------------------------ OPEN ``` **Figure 1. Opening Keystore and Readying Master Key** In these steps, the master key was stored in the Oracle wallet on the local platform, which is secured with an administrator password for access control. This keystore was available for encryption operations. \(^2\) Naturally, encrypted data requires more storage space than plain text data; a single column requires 32 to 48 bytes of additional storage for each row when encrypted. After logging into the environment, we pointed a browser to the management interface—Oracle Enterprise Manager Cloud Control—where we could manage database configurations as well as the Oracle Advanced Security settings. After logging in and selecting the database “PDB1” (created as part of the testbed setup), we used the top menu bar controls to select (in order) Administration, Security and, finally, Oracle Advanced Security, enabling us to review the selected database's encryption keys and status. The encryption keystore and keys for PDB1 are shown in Figure 2. 
The next step in our walkthrough was to create a new encrypted tablespace to hold our sensitive data. This was a relatively simple process from the menu bar: in sequence, we selected Administration, Storage and, finally, Tablespaces. In this screen, we created a new tablespace called HR_ENC to hold encrypted human resources (HR) data in the PDB1 database. Setting the encryption options for this tablespace was also simple, as shown in Figure 3.

**Figure 3. Setting Tablespace Encryption Options**

For this example, we chose to use the Advanced Encryption Standard algorithm with 256-bit keys (AES-256); the default is AES-192. We were also able to review the commands used to generate the encrypted tablespace, as shown in Figure 4.

**Figure 4. SQL for Encrypted Tablespace Generation**

The newly encrypted tablespace (accessed in command-line operations through the tablespace file `hr_enc.dbf`) is shown in Figure 5.

**Figure 5. Final Encrypted Tablespace**

**Oracle TDE in Action**

Once the encrypted tablespace was ready, we needed to move the sensitive HR data into it by performing a "reorganization" of the existing HR tablespace. Before this, however, we wanted to verify that the existing tablespace could be searched and that its data was visible in cleartext. Figure 6 shows a simple command to search for the term `Shareholder` in the tablespace file containing the `DEPARTMENTS_TOO` table.

```
bash-4.1$ strings /app/oracle/dbs12c/oradata/cdb/pdb1/example01.dbf | grep -n Shareholder
1618:Shareholder Services
762806:Shareholder Services
bash-4.1$
```

**Figure 6. Cleartext Data in Existing Tablespace**

Having verified searchability and visibility, we performed the tablespace reorganization, shown in Figure 7.

**Figure 7. Reorganizing the Tablespace**

The resulting table (DEPARTMENTS_TOO) was moved to the HR_ENC tablespace, as shown in Figure 8.

**Figure 8. HR Table in the Encrypted Tablespace**

After moving the data into the encrypted tablespace, we ran a generic `strings` query against the new encrypted tablespace file. The results, shown in Figure 9, show that no discernible cleartext words were present in the table (having been encrypted).

**Figure 9. Encrypted Tablespace**

All in all, we found the process and tools available with Oracle TDE to be simple and easy to use; we did not have to change the applications on our testbed at all to take advantage of its features. Many more granular options are available as well, depending on the type of encryption operations desired.

Oracle Data Redaction gives security teams the ability to perform on-the-fly redaction of sensitive data in query results prior to display by applications, preventing unauthorized application users from viewing sensitive data. For example, a customer relationship management (CRM) application should return only nonsensitive data to a call center team and redact sensitive or personally identifiable information such as birth dates or Social Security numbers. Even when the source code is available, changing the application to redact data completely can be error-prone, laborious and a drag on performance. When the redaction tools are built into the database platform—as they are with Oracle Advanced Security—stripping out sensitive data fields dynamically can be much more efficient and effective. Oracle Data Redaction is ideal for organizations that must comply with regulatory or data security requirements that call for masking sensitive data when it is displayed (e.g., PCI DSS requirement 3.3, which covers account number masking). It reduces implementation costs because developers don't have to modify applications to accommodate different data formats or manage encryption keys.
Oracle Data Redaction’s declarative policy functions can apply different data transformations in the form of full, partial or random redaction, and can do so conditionally based on factors tracked by the database or on external variables. This redaction has no impact on database operations such as clustering, backup and restore, or upgrades and patching; organizations can therefore deploy it without changing their operating procedures. Oracle Data Redaction policies are enforced directly in the database kernel, and a number of granular options are available to control when redaction is applied, as well as the input and output formats for redaction. Once enabled, policies are enforced immediately, even for active sessions.

To evaluate data redaction, we performed another basic test, this time with individual employee records in the testbed’s mock HR database. The original, unredacted employee record is shown in Figure 10.

**Figure 10. Employee Record with Sensitive Data**

For this phase of our review, our goal was to redact several of the columns in the “Supplemental Data” category of employee records. We started by creating a new redaction policy by clicking (again, in sequence) through the Oracle Enterprise Manager pages Administration, Security and, finally, Oracle Data Redaction; we then clicked Create to start a new policy. We proceeded to create a number of policy elements that corresponded to specific columns in the HR database; Oracle Data Redaction has a number of out-of-the-box pattern matches for sensitive data, as shown in Figure 11. Custom redaction functions can also be included; the output options for redaction functions are granular and simple to apply and verify. In Figure 12, you can see how the redaction function was set up to follow specific patterns in the TAXPAYER_ID column. After we created several redaction functions, we had a final policy that redacted several fields in the HR database.
In the Supplemental Data category of each employee record, we were looking to redact the values for bonus amount, specifics of the last insurance claim, payment account number and taxpayer ID. All of these except the taxpayer ID (stored in this case as `TAXPAYER_ID`) use full redaction functions, with the data redacted entirely; in contrast, `TAXPAYER_ID` uses a partial redaction operation, replacing the first five characters with X’s. Once implemented, the new employee record appears as seen in Figure 13.

![Employee Record with Redaction Enabled](image-url)
*Figure 13. Employee Record with Redaction Enabled*

Another important consideration for our assessment was to ensure the data would remain redacted when read by someone with casual or even unintentional SQL*Plus access. In this instance, we queried the database via SQL*Plus and verified that the same type and format of redaction functions were applied; our query returned the redacted output, rather than the real data, as shown in Figure 14.

![Figure 14. Redaction Demonstrated in SQL*Plus](image)

The process of creating and applying redaction functions was easy and straightforward. For organizations looking to protect sensitive data without having to modify applications and without a performance penalty, Oracle Data Redaction is a good fit.

**Conclusion**

Data encryption and redaction are effective means of protecting sensitive data; the problem for many organizations is implementing them without upsetting their existing database schemas or making things extremely difficult for database managers, developers and end users. Oracle Advanced Security provides such protection without causing performance or functional issues with database schemas. Oracle Advanced Security was easy to configure and implement, and its encryption and redaction functions operated efficiently and securely. Encryption key management was easy to set up, and keys can be stored in a secure wallet or hardware module.
Redaction functions were easy to configure and deploy automatically by setting a few parameters. Thanks to Oracle Advanced Security’s data redaction and transparent data encryption features, a declarative, policy-based approach to encryption and redaction is simple to create, manage and change. In addition, applying the encryption and redaction functions to the data, as well as verifying that these functions were operating properly, was straightforward and easy to document, which is important from any compliance or regulatory perspective.

Dave Shackleford is the founder and principal consultant with Voodoo Security, a SANS analyst, instructor and course author, and a GIAC technical director. He has consulted with hundreds of organizations in the areas of security, regulatory compliance, and network architecture and engineering. He is a VMware vExpert and has extensive experience designing and configuring secure virtualized infrastructures. He has previously worked as chief security officer for Configuresoft and CTO for the Center for Internet Security. Dave is the author of the Sybex book, Virtualization Security. Recently, Dave co-authored the first published course on virtualization security for the SANS Institute. Dave currently serves on the board of directors at the SANS Technology Institute and helps lead the Atlanta chapter of the Cloud Security Alliance.
SANS would like to thank this paper’s sponsor: ORACLE # Upcoming SANS App Sec Training <table> <thead> <tr> <th>Event</th> <th>Location</th> <th>Dates</th> <th>Type</th> </tr> </thead> <tbody> <tr> <td>SANS 2020</td> <td>Orlando, FL</td> <td>Apr 03, 2020 - Apr 10, 2020</td> <td>Live Event</td> </tr> <tr> <td>Mentor Session - DEV522</td> <td>Novi, MI</td> <td>Apr 21, 2020 - May 21, 2020</td> <td>Mentor</td> </tr> <tr> <td>SANS Amsterdam May 2020</td> <td>Amsterdam, Netherlands</td> <td>May 11, 2020 - May 18, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Silicon Valley - Cupertino 2020</td> <td>Cupertino, CA</td> <td>Jun 22, 2020 - Jun 27, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Copenhagen August 2020</td> <td>Copenhagen, Denmark</td> <td>Aug 24, 2020 - Aug 29, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Network Security 2020</td> <td>Las Vegas, NV</td> <td>Sep 20, 2020 - Sep 27, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS OnDemand</td> <td>Online</td> <td>Anytime</td> <td>Self Paced</td> </tr> <tr> <td>SANS SelfStudy</td> <td>Books &amp; MP3s Only</td> <td>Anytime</td> <td>Self Paced</td> </tr> </tbody> </table>
Project 2: IP over UDP

Due: 11:59 PM, Oct 16, 2018

Contents

1 Introduction
2 Requirements
  2.1 Core Requirements
  2.2 Capstone Requirements
3 Implementation
  3.1 Abstract Link Layer
  3.2 Routing - RIP
  3.3 Forwarding
  3.4 Driver
  3.5 Traceroute (Capstone Only)
  3.6 Route Aggregation and Longest Prefix Match (Capstone Only)
4 Getting Started
  4.1 Development Environment
  4.2 Executables
  4.3 Sample Networks
  4.4 Utilities for C
5 Getting Help
6 Grading
  6.1 Milestone - 20%
  6.2 Functionality - 65%
  6.3 Code Quality - 10%
  6.4 README - 5%
7 Handing In and Interactive Grading
8 A Warning

1 Introduction

In this assignment you will be constructing a Virtual IP Network using UDP as the link layer. Your network will support dynamic routing. Each node will be configured with its (virtual) links at startup and support the activation and deactivation of those links at run time. You will build a simple routing protocol over these links to dynamically update the nodes’ routing tables so that they can communicate over the virtual topology.
The relevant class lectures and textbook will be especially helpful with this part of the project.

This is a 2-person group project. You should find a partner to work with right away, and post your pairing on Piazza to inform the TAs. If you are having problems with this (there could be an odd number of people in the class), post something on Piazza or ask us. Once the groups are set, you’ll be assigned a mentor TA to help you through this project and the next, TCP. TCP will build on this project, so your effort on design will pay off twice.

2 Requirements

2.1 Core Requirements

Before you start coding, you need to understand what you’re doing. It will take a little while to wrap your head around, but once you do, it will seem straightforward, we promise. There are two main parts to this assignment. The first is IP-in-UDP encapsulation and the design of forwarding: receiving packets, delivering them locally if appropriate, or looking up a next-hop destination and forwarding them. The second is routing, the process of exchanging information to populate the routing tables you need for forwarding. This will be done with the Routing Information Protocol (RIP), which will be shown in class and is described in section 4.2.2 of the textbook.

Your network will be structured as a set of cooperating processes. You might run several processes on a single machine or use separate machines; it doesn’t matter, because your link layer is UDP. Files you will need to bootstrap your network are available in your GitHub repository. You will write a network topology file (we’ve supplied some examples) describing the virtual topology of your intended network. After running our script net2lnx on the complete topology, you’ll have a file for each node that specifies only that node’s links. For each virtual node you will run your program, which must be called node and must accept the name of that node’s link file as its first argument on the command line.
An example invocation would be:

node <linksfile>

2.2 Capstone Requirements

In addition to everything mentioned in the Core Requirements section, there is a small additional requirement for students taking cs168 for a capstone requirement. Below is a list of additional features to implement, and each capstone student is required to implement one of them. So if both students in a pair working on IP are taking cs168 for capstone credit, then 2 of the following need to be implemented:

a. **Traceroute** You will be implementing a subset of ICMP within your IP layer in order to implement traceroute functionality. The requirement is that you should be able to report the nodes in the shortest path from the current node to any other endpoint in the network. This functionality requires you to implement the **ICMP Time Exceeded Message** from RFC 792.

b. **Route Aggregation and Longest Prefix Match** In large networks it is necessary to aggregate the networks known to a router in order to keep the size of the routing tables reasonably small. You will be implementing a mechanism that automatically aggregates the networks a router learns from its neighbours. As a result of network aggregation, in order for your routers to route correctly you will also need to implement Longest Prefix Matching. See the implementation section for more details.

---

¹To be clear, RIP is an independent protocol apart from IP. IP as a protocol does not prescribe any particular routing method. Read [RFC 2453](https://tools.ietf.org/html/rfc2453), the specification for RIP version 2, for more information.

3 Implementation

In brief, your nodes will come up, create an abstract link layer, and begin running RIP on the specified links. Each node will also support a simple command-line interface, described below, to bring links up and down and send packets. Finally, when IP packets arrive at their destination, if they aren’t RIP packets, you will implement an interface to deliver them to an upper layer.
In the next assignment, you will deliver them to your TCP implementation when appropriate. In this current assignment, you will simply print the packets out in a useful way. From a network stack perspective, you will implement a link layer interface over UDP sockets with the ability to disable and enable individual interfaces. You will then implement a virtual IP layer over these interfaces, and then build RIP as a client protocol that uses your virtual IP layer.

3.1 Abstract Link Layer

You will use UDP as your link layer for this project. Each node will create an interface for every line in its links file; those interfaces will be implemented by UDP sockets. All of the virtual link layer frames a node sends should be directly encapsulated as payloads of UDP packets sent over these sockets. You must observe a Maximum Transmission Unit (MTU) of 1400 bytes; this means you must never send a UDP packet (link layer frame) larger than 1400 bytes. However, be liberal in what you accept: read link layer frames into a 64 KiB buffer, since that is the largest allowable IP packet (including the headers).

To enforce the concept of the network stack and to keep your code clean, we require you to provide an abstract interface to your link layer rather than directly make calls on socket file descriptors from your forwarding code. For example, define a network interface structure containing information about a link’s UDP socket and the physical IP addresses/ports associated with it, and pass these to functions that wrap your socket calls. We also require that you provide the functionality to activate/deactivate a link layer interface; this would be equivalent to `ifconfig eth0 up/down` or disabling your Ethernet/wireless card.

3.2 Routing - RIP

One part of this assignment is to implement routing using the RIP protocol described in class, but with some modifications to the packet structure.
You must adhere to the following packet format for exchanging RIP information:

```c
uint16_t command;
uint16_t num_entries;
struct {
    uint32_t cost;
    uint32_t address;
    uint32_t mask;
} entries[num_entries];
```

`command` will be 1 for a request of routing information, and 2 for a response. `num_entries` will not exceed 64 (and must be 0 for a request command). `cost` will not exceed 16; in fact, we will define infinity to be 16. `address` will be an IPv4 address. `mask` is a bitmask to denote the size of the network. In the standard version of the assignment, all routing entries will refer to single IP addresses and thus will use a mask of 255.255.255.255 (i.e., all 1’s) to denote a /32. If you are implementing route aggregation, you will adjust the mask based on how you decide to represent each network. As with all network protocols, all fields must be sent on the wire in network byte order.

Once a node comes online, it must send a request on each of its interfaces. Each node must send periodic updates to all of its interfaces every 5 seconds. A routing entry should expire if it has not been refreshed in 12 seconds. If a link goes down, then the network should be able to recover by finding different routes to nodes that went through that link. You must implement split horizon with poisoned reverse, as well as triggered updates. Triggered updates do not contain the entire routing table, just the routes that are updated.

3.3 Forwarding

In addition, you will design a network layer that sends and receives IP packets using your link layer. Overall, your network layer will read packets from your link layer, then decide what to do with each packet: local delivery or forwarding. The IP packet header is available in `/usr/include/netinet/ip.h` as `struct ip`. Those of you not using C/C++ may use that header or other sources as a reference for crafting your headers.
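Whatever language you choose, getting network byte order right is the main pitfall when crafting these headers and the RIP messages from Section 3.2. As a rough illustration (a sketch, not required code; written in Python here, although the course's support code is in C), the RIP format above can be packed and unpacked like this:

```python
import struct

def pack_rip(command, entries):
    """Pack a RIP message (command, num_entries, then cost/address/mask
    triples) in network byte order, per the format in Section 3.2."""
    msg = struct.pack("!HH", command, len(entries))  # '!' = network byte order
    for cost, address, mask in entries:
        msg += struct.pack("!III", cost, address, mask)
    return msg

def unpack_rip(data):
    """Inverse of pack_rip: recover (command, entries) from wire bytes."""
    command, n = struct.unpack_from("!HH", data, 0)
    entries = [struct.unpack_from("!III", data, 4 + 12 * i) for i in range(n)]
    return command, entries

# A response advertising one /32 route (10.10.168.73) with cost 3.
wire = pack_rip(2, [(3, 0x0A0AA849, 0xFFFFFFFF)])
assert unpack_rip(wire) == (2, [(3, 0x0A0AA849, 0xFFFFFFFF)])
```

The same `"!"`-prefixed formats work for a hand-rolled IPv4 header if you are not using `struct ip` directly.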
While you are not required to support advanced IP features such as IP options and fragmentation, your design should be able to receive these packets and act on them in a sensible way. For example, you are not required to send packets with IP options, but you must be able to accept packets with options (ignoring the options). Similarly, you are not required to send fragmented packets, or to implement reassembly of fragmented packets that you receive.

You will need an interface between your network layer and upper layers for local delivery. In this project, some of your packets need to be handed off to RIP; others will simply be printed. In the next assignment, you will be handing packets off to your TCP implementation. These decisions are based on the IP protocol field. Use a value of 200 for RIP data, and a value of 0 for the test data from your send command, described below. We ask you to design and implement an interface that allows an upper layer to register a handler for a given protocol number. We'll leave its specifics up to you. An example of how you might go about doing this in C (for some type `some_data_t`):

```c
typedef void (*handler_t)(some_data_t *, struct ip *);

void net_register_handler(uint8_t protocol_num, handler_t handler);
```

For example, for RIP packets, the RIP packet should be the payload of the IP packet. A RIP handler should be able to be registered with the following in order to receive incoming packets with an IP protocol field of 200:

```c
net_register_handler(200, RIP_handler);
```

Likewise, RIP as a protocol should be able to send packets over a particular interface through IP.

---

²If you are writing in C or C++, consider using flexible array members for allocation of your packet structure.
³When testing your project, feel free to make these times longer if it assists with using a debugger.
⁴RFC 791, the IPv4 specification, would be a good place to start!
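The registration interface is sketched above in C; for those using another language, the same idea is a protocol-number-to-handler dispatch table. Here is a rough Python equivalent (the names are ours, not prescribed by the handout):

```python
# Hypothetical protocol-dispatch table mirroring net_register_handler.
handlers = {}

def net_register_handler(protocol_num, handler):
    """Let an upper layer claim packets with a given IP protocol number."""
    handlers[protocol_num] = handler

def local_deliver(protocol_num, payload):
    """Called when a received packet is addressed to this node."""
    handler = handlers.get(protocol_num)
    if handler is None:
        print(f"no handler for protocol {protocol_num}, dropping packet")
    else:
        handler(payload)

received = []
net_register_handler(200, lambda payload: received.append(payload))  # RIP
local_deliver(200, b"\x00\x02\x00\x00")
assert received == [b"\x00\x02\x00\x00"]
```

A table like this keeps the forwarding code ignorant of RIP, test traffic, and (later) TCP: each upper layer simply registers for its protocol number.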
Keep in mind that this will require a clean abstraction of your link layer interfaces.

Even without a working RIP implementation, you should be able to run and test simple forwarding and local packet delivery. Try creating a static network (hard-code it, read from a route table, etc.) and make sure that your code works. Send data from one node to another one that requires some amount of forwarding. Integration will go much smoother this way.

3.4 Driver

Your driver program, node, will be used to demonstrate all features of the system. You must support the following commands within a command-line interface:

**interfaces, i** Print information about each interface, one per line.

**routes, r** Print information about the route to each known destination, one per line.

**down integer** Bring an interface “down”.

**up integer** Bring an interface “up” (it must be an existing interface, probably one you brought down).

**send vip proto string** Send an IP packet with protocol proto (an integer) to the virtual IP address vip (dotted quad notation). The payload is simply the characters of string (as in Snowcast, do not null-terminate this).

**q** Quit the node by cleaning up used resources.

You should feel free to add any additional commands to help you debug or demo your system, but the above commands are required. It would be to your advantage to add bandwidth-intensive test commands to help prepare your implementation for TCP.

3.5 Traceroute (Capstone Only)

Your driver should include the following command for demonstrating traceroute:

**traceroute vip** prints out the sequence of hops in the following format: <hop num> <vip>. An example output would be:

```
Traceroute from 192.168.0.2 to 192.168.0.5
1 192.168.0.2
2 192.168.0.8
3 192.168.0.5
Traceroute finished in 3 hops
```

From the driver command, we should be able to see changes in the path when any node in the network is brought up or down.
If a host is not in the network, or is unreachable, you should print that information.

3.6 Route Aggregation and Longest Prefix Match (Capstone Only)

Every router you create in this project represents a single local IP address. In contrast, real-world routers represent networks that are attached directly to them. Therefore in this project you will implement a relaxed version of route aggregation:

- All the networks (a single address is a network with mask 255.255.255.255) learned from a specific neighboring router can be aggregated to the smallest network which contains all the learned networks. You can safely assume that the networks this will be tested on will have IP addresses allocated in a manner that conforms to the topology.
- The distance metric of the aggregated network will be equal to the shortest distance among the aggregated routes.

These relaxations will lead to cases where the router’s behaviour diverges from what is otherwise considered correct behaviour: a router may forward a packet whose destination is a “hole” in the network (a target address that is part of an aggregated network but does not exist at any node), and the distance metric may be incorrect for some of the nodes in the network (when aggregating we use the shortest distance, leading some nodes to appear closer than they are). Apart from these cases, a driver implementing route aggregation MUST behave correctly; therefore Longest Prefix Matching is necessary.

Note: The reference node does not implement route aggregation. Do not run the reference node while testing your own implementation of this step, as it will not propagate your messages correctly.

4 Getting Started

We’ve created a few tools that you can use to help you with your project.
They are available in the following github classroom invitation link: [https://classroom.github.com/g/SLdvHwug](https://classroom.github.com/g/SLdvHwug)

4.1 Development Environment

To build this assignment, you may work on either the department machines or the provided Vagrant VM. You can find details on how to use the Vagrant environment here: https://cs.brown.edu/courses/csci1680/f18/content/vagrant.pdf

When submitting your work, please indicate which environment you used in your README.

4.2 Executables

- tools/ref_node - The reference node that you should be able to communicate with.
- tools/net2lnx - A tool to convert a .net file into a series of .lnx files that each node can read separately.

4.3 Sample Networks

These are found in the nets/ subdirectory.

- AB.net - Simple network with three nodes. It may look like this:

```plaintext
node A localhost
node B localhost
node C localhost
A <-> B
B <-> C
```

which tells you the physical location of each node and how they are connected. After running net2lnx on it, you will have something that looks like:

```plaintext
A.lnx:
localhost 17000
localhost 17001 10.116.89.157 10.10.168.73

B.lnx:
localhost 17001
localhost 17000 10.10.168.73 10.116.89.157
localhost 17002 10.42.3.125 14.230.5.36

C.lnx:
localhost 17002
localhost 17001 14.230.5.36 10.42.3.125
```

which you can feed to each node as its link information. These files mean that A has one interface defined by a pair of tuples: the IP “localhost” and port 17000, and the IP “localhost” and port 17001. The interface’s virtual IP is 10.116.89.157. It is connected to another interface (defined by the reversed tuple) with virtual IP 10.10.168.73.

- loop.net - A more complicated network with multiple paths between nodes (its diagram appears in the original handout). A useful test for routing is to start the network and make sure src goes to dst through short. Then stop the short node and see what happens.
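As far as it can be inferred from the samples above, a .lnx file has a first line giving the node's own host and UDP port, followed by one line per interface giving the remote host, remote port, local virtual IP and remote virtual IP. A hedged Python sketch of a parser for that inferred format (the provided parseLinks C code is the authoritative reference):

```python
def parse_lnx(text):
    """Parse a .lnx file. Format inferred from the sample files:
    line 1: <local_host> <local_port>
    rest:   <remote_host> <remote_port> <local_vip> <remote_vip>"""
    lines = [ln.split() for ln in text.strip().splitlines() if ln.strip()]
    host, port = lines[0][0], int(lines[0][1])
    interfaces = [
        {"remote_host": f[0], "remote_port": int(f[1]),
         "local_vip": f[2], "remote_vip": f[3]}
        for f in lines[1:]
    ]
    return host, port, interfaces

# B.lnx from the example above: one local endpoint, two interfaces.
host, port, ifaces = parse_lnx(
    "localhost 17001\n"
    "localhost 17000 10.10.168.73 10.116.89.157\n"
    "localhost 17002 10.42.3.125 14.230.5.36\n"
)
assert (host, port) == ("localhost", 17001)
assert len(ifaces) == 2 and ifaces[0]["local_vip"] == "10.10.168.73"
```

Each dictionary here corresponds to one UDP-socket-backed interface in the abstract link layer of Section 3.1.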
4.4 Utilities for C

If you are using C, we have provided several utility files with useful functions in main.c and the support directory:

- Debugging: dbg.c, dbg.h. Print colored debugging messages. You can enable and disable categories of messages based on the environment variable DBG_MODES. See node.c for an example of how to use them in your code. By default, node enables only error messages. If you want to enable only, say, net layer and routing messages, then you can run: `DBG_MODES=net,route ./node file.lnx`. See dbg_modes.h for a full list of debugging modes, and feel free to add your own!
- IP checksum calculation: ipsum.c. Use this function to calculate the checksum in the IP header for you.
- Linked list: list.h.
- Hash table: htable.c, htable.h.
- IP header: ip.h, equivalent to <netinet/ip.h>.
- parseLinks: an implementation of parsing the .lnx file (parseLinks.c). Feel free to use it directly or to modify it.

5 Getting Help

This project isn’t intended to be painful, and you have many resources to help you. Make sure you’ve read this handout and really understand what we mean when we say that UDP is your virtual network’s link layer. Piazza is always a good place to get help on general topics, and the TAs will, of course, be holding TA hours and scheduling appointments.

Make sure that you work together with your group partner, and try to split the project up so that neither of you has too much to handle. An obvious way to split things up is for one person to implement routing (RIP) and the other to be responsible for everything else (packet forwarding, send/recv interface, etc.), but you can do whatever you feel is appropriate. It will not be possible for you to go off into separate rooms, implement your half, and “just hook them up.” You should work together; there is a lot that should be designed together. The routing table is the most obvious example.
We request you use Git well so that you can update each other periodically (commit often, but only when the build succeeds!). However, please note that your Git repos should be private, and you are not allowed to share code with other groups. You can talk to other groups about concepts, algorithms, etc., but each group’s code must be their own.

Finally, each group will have a mentor TA. This means that you will have one of the TAs as your group’s advisor. You’ll need to set up a milestone appointment to meet with your mentor TA during the first week of the project to discuss the project design. Once you have approval, you should stay in contact with your mentor, who will be grading your project and will be able to explain what the project ultimately should be doing. Your mentor also will do their best to help outside of TA hours: debugging, discussing design, etc. Just because your mentor is helping you out, however, does not mean that they are at your beck and call. Understand that the TA staff is busy too!

6 Grading

6.1 Milestone - 20%

You will schedule a milestone design meeting with your mentor TA by Tuesday, October 9th. At this milestone, you should have a clear design for your program and be ready to ask questions about anything you don’t understand. Be ready to answer specific questions about your design, for instance:

- What objects will you use to abstract link layers, and what interface will they have?
- What fields in the IP packet are read to determine when to forward a packet?

6.2 Functionality - 65%

Most of your grade will be based on how well your program conforms to our specification. As in the Snowcast project, you will be expected to interoperate with the reference implementation as well as with other pairs’ projects. Among other details, your program will be expected to maintain forwarding tables, handle packet forwarding and network loops, and maintain interfaces that may go up or down at any time without causing the network to crash.
6.3 Code Quality - 10%

Because part of the specification prescribes the programming interfaces for the link layer and for the upper layer handlers to IP, we will be evaluating the design of those interfaces as well as general code quality.

6.4 README - 5%

Please include a README that describes your design decisions, including how you abstract your link layer and its interfaces, the thread model for your RIP implementation, and the steps you take to process IP packets. List any known bugs or notable design decisions. If you identify known bugs, we will take off fewer points than if we have to find them ourselves. In your README, please also note whether you worked on the department machines or used the development VM, and any instructions necessary to build your project. If you are using a language other than C/C++, please include a Makefile that builds your code in the appropriate manner; this will make our lives much easier when grading!

7 Handing In and Interactive Grading

Once you have completed the project, you should commit and push your Git repo to deliver your code. Your mentor TA will arrange to meet with you for your interactive grading session to demonstrate the functionality of your program and grade the majority of it. This meeting will take place at some point shortly after the project deadline. Between the time you’ve handed in and the demo meeting, you can continue to make minor tweaks and bug fixes (and you should, since it will be the codebase for your next project). However, the version you’ve handed in should be nearly complete, since it could be referenced for portions of the grading.

8 A Warning

You should start on this project now. We expect all of the projects in CS168 to take the full amount of time we give you. It can be tricky, so we want to make sure that you stay on top of it. The milestone design meeting is meant to encourage you to plan your design. Ask questions now if in doubt.
Start talking with your partner right away, and get ready to get connected! Please let us know if you find any mistakes, inconsistencies, or confusing language in this or any other CS168 document by filling out the anonymous feedback form: https://piazza.com/brown/fall2018/csci1680
Understanding user preferences and goals in recommender systems Martijn Willemsen Human-Technology Interaction EnCHIReS Workshop ACM EICS 26 June 2017 Explaining the user experience of recommender systems Netflix tradeoffs popularity, diversity and accuracy AB tests to test ranking between and within rows Source: RecSys 2016, 18 Sept: Talk by Xavier Amatriain We don’t need the user: Let’s do AB Testing! Netflix used 5-star rating scales to get input from users (apart from log data) Netflix reported that they did an AB test of thumbs up/down versus rating: Yellin (Netflix VP of product): “The result was that thumbs got 200% more ratings than the traditional star-rating feature.” So is the 5-star rating wrong? or just different information? However, over time, Netflix realized that explicit star ratings were less relevant than other signals. Users would rate documentaries with 5 stars, and silly movies with just 3 stars, but still watch silly movies more often than those high-rated documentaries. Behavior versus Experience Looking at behavior… - Testing a recommender against a random videoclip system, the number of clicked clips and total viewing time went down! Looking at user experience… - Users found what they liked faster with less ineffective clicks… Behaviorism is not enough! (Ekstrand & Willemsen, RecSys 2016) We need to measure user experience and relate it to user behavior… We need to understand user goals and develop Rec. Systems that help users attain these goals! Algorithms Accuracy: compare prediction with actual values Recommendation: best predicted items Choose (prefer?)
Experience! dataset user-item rating pairs 90% of work in Recommender Systems User-Centric Framework Computer scientists (and marketing researchers) would study behavior... (they hate asking the user or just cannot (AB tests)) User-Centric Framework Psychologists and HCI people are mostly interested in experience... User-Centric Framework Though it helps to triangulate experience and behavior... User-Centric Framework Our framework adds the intermediate construct of perception that explains why behavior and experience change due to our manipulations. User-Centric Framework - And adds personal and situational characteristics Relations modeled using factor analysis and SEM Choice difficulty and satisfaction in RecSys Applying latent feature diversification Understanding the role of latent feature diversification on choice difficulty and satisfaction Martijn C. Willemsen¹ · Mark P. Graus² · Bart P. Knijnenburg³ Abstract People like variety and often prefer to choose from large item sets. However, large sets can cause a phenomenon called “choice overload”; they are more difficult to choose from, and as a result decision makers are less satisfied with their choices. […] Seminal example of choice overload Less attractive 30% sales Higher purchase satisfaction More attractive 3% sales From Iyengar and Lepper (2000) Satisfaction decreases with larger sets as increased attractiveness is counteracted by choice difficulty. Can we reduce difficulty while controlling attractiveness? Dimensions in Matrix Factorization Dimensionality reduction Users and items are represented as vectors on a set of latent features Rating is the dot product of these vectors (overall utility!)
Gus will like Dumb and Dumber but hate Color Purple Latent Feature Diversification: high diversity with equal attractiveness Latent Feature Diversification System (OSA) Psychology-informed Diversity manipulation Perception (SSA) Increased perceived Diversity & attractiveness Experience (EXP) Reduced difficulty & increased satisfaction Interaction (INT) Fewer hovers More choice for lower ranked items Choice Satisfaction <table> <thead> <tr> <th>Diversification</th> <th>Rank of chosen</th> </tr> </thead> <tbody> <tr> <td>None (top 5)</td> <td>3.6</td> </tr> <tr> <td>Medium</td> <td>14.5</td> </tr> <tr> <td>High</td> <td>77.6</td> </tr> </tbody> </table> Higher satisfaction for high diversification, despite choosing lower predicted/ranked items ### Algorithms **Rating?** - **Experience!** - **Choose (prefer?)** - **Recommendation:** best predicted items **Accuracy:** compare prediction with actual values **dataset** user-item rating pairs Table: <table> <thead> <tr> <th>User</th> <th>Movie 1</th> <th>Movie 2</th> <th>Movie 3</th> <th>Movie 4</th> <th>Movie 5</th> </tr> </thead> <tbody> <tr> <td>Wijnand</td> <td>2</td> <td>...</td> <td>3</td> <td>2</td> <td>5</td> </tr> <tr> <td>Martijn</td> <td>...</td> <td>4</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>Chris</td> <td>4</td> <td>2</td> <td>5</td> <td>4</td> <td>4</td> </tr> <tr> <td>Mark</td> <td>4</td> <td>3</td> <td>4</td> <td>1</td> <td>5</td> </tr> <tr> <td>Maurits</td> <td>4</td> <td>5</td> <td>3</td> <td>2</td> <td>1</td> </tr> <tr> <td>Eric</td> <td>3</td> <td>...</td> <td>5</td> <td>...</td> <td>3</td> </tr> </tbody> </table> Preference elicitation improving the input... Preference Elicitation (PE) is a major topic in research on Decision Making I even did my PhD thesis on it... ;-) What can Psychology teach us about improving this aspect?
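The matrix-factorization prediction rule described above (rating ≈ dot product of user and item latent vectors) can be sketched as follows. The two latent dimensions and all the numbers are invented for illustration; they are not real MovieLens factors.

```python
# Toy matrix-factorization prediction: each user and item is a vector
# on the same latent features; the predicted rating (overall utility)
# is their dot product. All values below are made up.

user_vectors = {
    "Gus": [1.0, -1.0],  # e.g. likes comedy, dislikes drama (assumed axes)
}
item_vectors = {
    "Dumb and Dumber": [1.5, 0.5],
    "The Color Purple": [-0.5, 2.0],
}

def predict(user: str, item: str) -> float:
    """Dot product of the user and item latent vectors."""
    u, v = user_vectors[user], item_vectors[item]
    return sum(a * b for a, b in zip(u, v))
```

With these made-up vectors, Gus gets a positive utility for Dumb and Dumber and a negative one for The Color Purple, matching the slide's example.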
Role of memory in ratings Rating support Cold start problem: please rate this movie… Does it matter if the preference you provide (by rating) is based on recent experiences or mostly on your memory? We don’t have data on the time between consumption and rating… Take a proxy: Time from release date Koren finds increasing ratings with the age of the movie (positivity effect?) Or just ask the users! Results 247 users, 4212 ratings Users rated movies they have seen and when that was (week, month, … 3 years or longer) Rating distributions: • Only 28% seen in the last year • # Positive ratings decrease with time • 1st timeslot: 60% 4/5 star • Last timeslot: only 36% Multilevel model: Random intercepts for movies and users high-rated versus low-rated: both show regression towards the mean <table> <thead> <tr> <th></th> <th>Coefficient</th> <th>Std. Err.</th> <th>t-value</th> </tr> </thead> <tbody> <tr> <td>intercept</td> <td>2.95</td> <td>0.15</td> <td>19.05</td> </tr> <tr> <td>time</td> <td>0.29</td> <td>0.13</td> <td>2.31</td> </tr> <tr> <td>highrated</td> <td>1.62</td> <td>0.22</td> <td>7.43</td> </tr> <tr> <td>time$^2$</td> <td>-0.09</td> <td>0.02</td> <td>-3.55</td> </tr> <tr> <td>Time x highrated</td> <td>-0.73</td> <td>0.18</td> <td>-4.10</td> </tr> <tr> <td>Time$^2$ x highrated</td> <td>0.11</td> <td>0.03</td> <td>3.26</td> </tr> </tbody> </table> How can we train a recommender system… if ratings depend on our memory this much? Problem lies partly in the type of judgment asked: Rating is a separate evaluation on an absolute scale… Lacks a good reference/comparison Two solutions we explored: Rating support Different elicitation methods: choice! Joint versus Separate Evaluation (Hsee, 1996) Evaluations of two job candidates for a computer programmer position expecting the use of a special language called KY. <table> <thead> <tr> <th></th> <th>Candidate A</th> </tr> </thead> <tbody> <tr> <td>Education</td> <td>B.Sc.
computer Sc.</td> </tr> <tr> <td>GPA (0-5)</td> <td>4.8</td> </tr> <tr> <td>KY Experience</td> <td>10 KY programs</td> </tr> </tbody> </table> Mean WTP (in thousands): <table> <thead> <tr> <th></th> <th>Separate</th> <th>Joint</th> </tr> </thead> <tbody> <tr> <td>$WTP</td> <td>$32.7 k</td> <td>$31.2 k</td> </tr> <tr> <td>$WTP</td> <td></td> <td>$33.2 k</td> </tr> </tbody> </table> Rating support interfaces Using movielens! Can we help users during rating to make their ratings more stable/accurate? We can support their memory for the movie using tags. We can help ratings on the scale using previous ratings as exemplars. Movielens has a tag genome and a history of ratings so we can give real-time user-specific feedback! Tag Interface - Provide 10 tags that are relevant for that user and that describe the movie well - Didn’t really help… Exemplar Interface Support rating on the scale by providing exemplars: Exemplar: Similar movies rated before by that user for that level on the scale This helped to anchor the values on the scale better: more consistent ratings But what are preferences? Ratings are absolute statements of preference… But preference is a relative statement… Preferences are **constructive**: People **develop/construct** their preferences while making a decision (Bettman et al. 1998) So why not ask users to choose and have the recommender adapt to that? Which do you prefer? Several recent examples using different PE methods Loepp, Hussein & Ziegler (CHI 2014) • Choose between sets of movies that differ a lot on a latent feature Chang, Harper & Terveen (CSCW 2015) • Choose between groups of similar movies • By assigning points per group (ranking!) Accuracy: compare prediction with actual values Recommendation: best predicted items Interaction between user and Rec. System! Choose (prefer?) Experience! dataset user-item rating pairs Algorithms test the output! 
Choice-based preference elicitation Choices are relative statements that are easier to make Better fit with final goal: finding a good item rather than making a good prediction In Marketing, conjoint-based analysis uses the same idea to determine attribute weights and utilities based on a series of (adaptive) choices Can we use a set of choices in the matrix factorization space to determine a user vector in a stepwise fashion? How does this work? Step 1 **Iteration 1a:** Diversified choice set is calculated from a matrix factorization model (red items) **Iteration 1b:** User vector (blue arrow) is moved towards the chosen item (green item), items with the lowest predicted rating are discarded (greyed-out items) How does this work? Step 2 **Iteration 2:** New diversified choice set (blue items) **End of Iteration 2:** with updated vector and more items discarded based on the second choice (green item) Evaluation of Preference Elicitation - **Choice-based PE:** choosing 10 times from 10 items - **Rating-based PE:** rating 15 items - After each PE method they evaluated the interface on - interaction usability in terms of ease of use - e.g., “It was easy to let the system know my preferences” - **Effort:** e.g., “Using the interface was effortful.” - effort and usability are highly related (r=0.62) - **Results:** less perceived effort for choice-based PE perceived effort goes down with completion time Behavioral data of PE tasks Choice-based PE: most users find their perfect item around the 8th/9th item and they inspect quite a few unique items along the way. Rating-based: users inspect many lists (Median = 13), suggesting high effort in the rating task. Perception of Recommendation List - Participants evaluated each recommendation list separately on Choice Difficulty and Satisfaction. - More satisfied with the choice-based list: less difficult, fewer obscure items (popularity prevails!)
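The two iteration steps above can be sketched as follows. This is a simplified illustration under assumed details: the diversified choice set is replaced by a plain random sample, the step size and the number of items discarded per round are invented parameters, and `choose` stands in for the user's choice.

```python
# Sketch of choice-based elicitation in a latent (matrix-factorization) space:
# repeatedly show a choice set, move the user vector toward the chosen item,
# and discard the items with the lowest predicted rating. Parameters are
# illustrative assumptions, not the paper's actual algorithm settings.
import random

def predicted(user_vec, item_vec):
    return sum(a * b for a, b in zip(user_vec, item_vec))

def update_user_vector(user_vec, chosen_vec, step=0.5):
    # Move the user vector a fraction of the way toward the chosen item.
    return [u + step * (c - u) for u, c in zip(user_vec, chosen_vec)]

def elicit(items, choose, iterations=10, set_size=10, discard=20):
    """items: name -> latent vector; choose: callback picking from a choice set."""
    user_vec = [0.0] * len(next(iter(items.values())))
    candidates = dict(items)
    for _ in range(iterations):
        # Stand-in for the diversified choice set of the real system.
        choice_set = random.sample(list(candidates), min(set_size, len(candidates)))
        chosen = choose(choice_set)
        user_vec = update_user_vector(user_vec, candidates[chosen])
        # Discard the lowest-predicted items (never the chosen one).
        ranked = sorted(candidates, key=lambda i: predicted(user_vec, candidates[i]))
        for name in ranked[:discard]:
            if name != chosen:
                del candidates[name]
    return user_vec
```

A simulated user who always picks the item with the largest first latent coordinate will pull the elicited vector toward that direction within a few iterations.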
<table> <thead> <tr> <th></th> <th colspan="2">Effect of Choice-Based List</th> <th colspan="2">Effect on Satisfaction with Chosen Item</th> </tr> </thead> <tbody> <tr> <td>Intra-List Similarity</td> <td>14.00 (4.51)</td> <td>p < .01</td> <td></td> <td></td> </tr> <tr> <td>Difficulty</td> <td>−.240 (.145)</td> <td>p < .1</td> <td>−2.407 (.381)</td> <td>p < .001</td> </tr> <tr> <td>Obscurity</td> <td>−.479 (.111)</td> <td>p < .001</td> <td>−.257 (.045)</td> <td>p < .001</td> </tr> </tbody> </table> New version with trailers With trailers, less popular movies are chosen, with no reduction in satisfaction! Algorithms Accuracy: compare prediction with actual values Recommendation: best predicted items Goals Choose (prefer?) Experience! Recommending to help users achieve their goals understand the input! dataset user-item rating pairs Rating? dataset user-item rating pairs Experience! Accuracy: compare prediction with actual values How recommenders can help users achieve their goals Research with Alain Starke (PhD student) To be presented at RecSys 2017 Recommending for Behavioral change - Behavioral change is hard… - Exercising more, eating healthily, reducing alcohol consumption (reducing binge watching on Netflix 😊) - Needs awareness, motivation and commitment Combi model: Klein, Mogles, Wissen, Journal of Biomedical Informatics, 2014 What can recommenders do? • Persuasive Technology: focused on how to help people change their behavior: – personalize the message… • Recommender systems can help with what to change and when to act – personalize what to do next… • This requires different models/algorithms – our past behavior/liking is not what we want to do now! Behaviorism is not enough…! One of our use cases: how can we help people to save energy? Our first (old) recommender system using simple MAUT ### Indicate your preference Here is a list of possible needs. Indicate how important they are for you by clicking **multiple** checkboxes to change your attribute weights.
<table> <thead> <tr> <th>Attribute</th> <th>Less Important</th> <th>More Important</th> </tr> </thead> <tbody> <tr> <td>Low initial effort</td> <td>8%</td> <td>33%</td> </tr> <tr> <td>Little continuous effort</td> <td>11%</td> <td>22%</td> </tr> <tr> <td>Low initial costs</td> <td>8%</td> <td>14%</td> </tr> <tr> <td>Save more energy</td> <td>8%</td> <td>14%</td> </tr> <tr> <td>Quick return on investment</td> <td>14%</td> <td>10%</td> </tr> <tr> <td>Positive environmental effects</td> <td>10%</td> <td>10%</td> </tr> </tbody> </table> ### Make a choice Here are several recommendations; choose those energy-saving measures from this list which you want to implement. <table> <thead> <tr> <th>Name</th> <th>Initial effort</th> <th>Effort</th> <th>Energy savings</th> <th>Return on investment</th> <th>Env. effects</th> <th>Comfort</th> </tr> </thead> <tbody> <tr> <td>Roof insulation</td> <td>€3100</td> <td>€299</td> <td>1424 kWh/year</td> <td>10 years</td> <td></td> <td></td> </tr> <tr> <td>Laptop instead of a PC</td> <td>€95</td> <td>€31</td> <td>150 kWh/year</td> <td>3 years</td> <td></td> <td></td> </tr> <tr> <td>Turn off PC when absent</td> <td>€50</td> <td>€50</td> <td>8 kWh/year</td> <td>3 years</td> <td></td> <td></td> </tr> <tr> <td>Heat laundry</td> <td></td> <td></td> <td>45 kWh/year</td> <td>4 years</td> <td></td> <td></td> </tr> <tr> <td>Close curtains</td> <td></td> <td></td> <td>5 kWh/year</td> <td>4 years</td> <td></td> <td></td> </tr> <tr> <td>Shower 3 minutes</td> <td></td> <td></td> <td>5 kWh/year</td> <td>4 years</td> <td></td> <td></td> </tr> <tr> <td>Boiler-heated</td> <td></td> <td></td> <td>5 kWh/year</td> <td>4 years</td> <td></td> <td></td> </tr> <tr> <td>Air-dry clothes</td> <td></td> <td></td> <td>5 kWh/year</td> <td>4 years</td> <td></td> <td></td> </tr> <tr> <td>A++ Fridge/freezer combo</td> <td></td> <td></td> <td>45 kWh/year</td> <td>4 years</td> <td></td> <td></td> </tr> </tbody> </table> ### Your savings Here are your selected savings! Show totals in: € (euro) / kWh. <table> <thead> <tr> <th>This is what I want to do</th> <th>This is what I already do</th> <th>This is what I don’t want to do</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td></td> <td></td> </tr> </tbody> </table> You have been using the system for 1 minute. When you press stop, you will be asked a few final questions, after which you can print your savings. **stop** Study 3 (AMCIS 2014) • Online lab study • 147 paid participants (79 M, 68 F, mean age 40.0) • Selected participants interacted for at least 2.5 minutes • 3 PE-methods, 2 baselines • Attribute-based PE • Implicit PE • Hybrid PE (attribute + implicit) • Sort (baseline, not personalized) • Top-N (baseline, not personalized) Study 3 - Results - Experts prefer Attribute-based PE and Hybrid PE, novices prefer Top-N and Sort (baselines) - System satisfaction mediates the effect on choice satisfaction and behavior! Towards a better (psychometric) user model Consumers differ in energy-saving capabilities, attitudes, goals, … • Our prior work did not take that into account… • Energy-saving interventions are more effective when personalized. But how? Campbell’s Paradigm (Kaiser et al., 2010) “One’s attitude or ability becomes apparent through its behavior…” “Attitude and Behavior are two sides of the same coin…” Three assumptions for our user model 1. All energy-saving behaviors form a class serving a single goal: **Saving Energy** 2. Less-performed behaviors yield higher **Behavioral Costs** (i.e. are more difficult) 3. Individuals that execute more energy-saving behaviors have a higher **Energy-saving Ability** (i.e. are more skilled) Using **behavioral costs** to order energy-saving measures Persons indicate which measures they execute INPUT Highest Costs Lowest Costs The Rasch model - The Rasch model equates behavioral difficulties and individual propensities in a probabilistic model.
Log-odds of engagement levels (yes/no): \[ \ln \left( \frac{P_{ni}}{1 - P_{ni}} \right) = \theta_n - \delta_i \] - \( \theta \) = an individual’s propensity/attitude - \( \delta \) = behavioral difficulty - \( P \) = probability of individual \( n \) engaging in behavior \( i \) - Rasch also determines individual propensities and item difficulties & fits them onto a single scale Resulting Rasch Scale: the probability of a person executing a behavior depends on Ability − Costs \[ \ln \left( \frac{P_{ni}}{1 - P_{ni}} \right) = \theta_n - \delta_i \] Using Rasch for tailored advice • Earlier research (Kaiser, Urban) found evidence for a unidimensional scale, but with few items & no advice • We built a Rasch-based energy recommender system that: – Shows the measures in order of difficulty (either ascending or descending) – Provides tailored conservation advice to users (or not) – Includes a more extensive set of measures • Our question: is ordering items on the scale sufficient or do we also need to provide tailored recommendations? Energy Webshop: Besparingshulp.nl We arranged 79 energy-saving measures on their behavioral costs.
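Solving the log-odds equation above for \( P_{ni} \) gives a logistic function of \( \theta_n - \delta_i \). A small sketch (the function name is ours):

```python
# Rasch model from the slide: probability that person n performs behavior i,
# given propensity theta_n and behavioral difficulty delta_i.
# ln(P / (1 - P)) = theta - delta  =>  P = 1 / (1 + exp(-(theta - delta)))
import math

def p_engage(theta: float, delta: float) -> float:
    """Probability of engaging in a behavior under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))
```

When propensity equals difficulty the probability is exactly 0.5; behaviors with lower costs (smaller delta) are more likely, and more able individuals (larger theta) engage in more behaviors, which is what lets the model place people and measures on a single scale.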
Webshop for energy-saving measures: Experiment • We inferred a user’s ability through his current behavior – Asking 13 random items from across the entire scale *Able to suggest new measures by matching costs & attitude* • User was subject to one of 4 conditions – No tailoring, ascending cost order (‘Most popular’) – No tailoring, descending cost order (‘Most difficult’) – Ability-tailored, ascending cost order – Ability-tailored, descending cost order ### Dependent Measures from the interaction **Users interacting with the website** - Behavioral difficulty of chosen measures - Number of chosen measures - Clicking behavior **Evaluative Survey (UX)** - Perceived Effort - Perceived System Support - Choice Satisfaction **Follow up survey after 4 weeks** - Extent of implementation of chosen measures --- #### PERCEIVED EFFORT – survey items <table> <thead> <tr> <th>Item</th> <th>Factor Loading</th> </tr> </thead> <tbody> <tr> <td>It took me little effort to use the Saving Aid.</td> <td>0.804</td> </tr> <tr> <td>The Saving Aid takes up a lot of time.</td> <td></td> </tr> <tr> <td>I quickly understood the functionalities of the Saving Aid.</td> <td>−0.554</td> </tr> <tr> <td>Many actions were required to use the Saving Aid properly.</td> <td></td> </tr> <tr> <td>The Saving Aid is easy to use.</td> <td>0.741</td> </tr> </tbody> </table> **PERCEIVED SUPPORT – survey items** <table> <thead> <tr> <th>Item</th> <th>Factor Loading</th> </tr> </thead> <tbody> <tr> <td>I make better choices using the Saving Aid tool.</td> <td>0.551</td> </tr> <tr> <td>The Saving Aid is helpful to find appropriate measures.</td> <td>0.608</td> </tr> <tr> <td>The Saving Aid does not help to come to a decision.</td> <td></td> </tr> <tr> <td>The Saving Aid presents the measures in a convenient way.</td> <td></td> </tr> <tr> <td>Because of the Saving Aid, I could easily choose measures.</td> <td>0.678</td> </tr> </tbody> </table> **CHOICE SATISFACTION – survey items** <table> <thead> <tr> 
<th>Item</th> <th>Factor Loading</th> </tr> </thead> <tbody> <tr> <td>I am happy with the measures I've chosen.</td> <td>0.574</td> </tr> <tr> <td>I think I've chosen the best measures from the list.</td> <td></td> </tr> <tr> <td>I would have liked to choose different measures than the ones I've chosen.</td> <td></td> </tr> <tr> <td>It would be fun to perform the chosen measures.</td> <td>0.550</td> </tr> <tr> <td>The measures I've chosen fit me seamlessly.</td> <td>0.549</td> </tr> </tbody> </table> Results: Structural Equation Modelling (SEM) path model relating Tailored Recommendations, Perceived Effort, Perceived Support and Choice Satisfaction (path coefficients .746***, −.767***, .440*; *** p < 0.001, ** p < 0.01, * p < 0.05) Conclusions • Tailored recommendations positively affect UX: – reducing both perceived and actual effort, users felt more support, and in turn chose more energy-saving measures and were also satisfied with those choices. – ability-tailored recommendations are more effective than merely presenting an ordered Rasch scale. • Do users reach their goals? – Although more measures were chosen when higher support was perceived, this came at a reduced difficulty level. – A follow-up four weeks later showed that users were more likely to perform the easier measures (consistent with the Rasch scale) • Cliff hanger: see our RecSys 2017 paper for study 2, in which we use another interface to test how to engage users in more difficult measures… **Sneak Preview:** # Besparingshulp (Saving Aid) Choose measures that you are not yet applying but would like to start applying. When you are done, go to your shopping cart. Check your choices and click 'confirm'. <table> <thead> <tr> <th></th> <th>Basic</th> <th>Recommended</th> <th>Challenging</th> <th></th> </tr> </thead> <tbody> <tr> <td><strong>Air clothes instead of washing them</strong></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Description</td> <td>Let your clothes air out instead of putting them straight into the laundry basket. This can save about one load of laundry per week.</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Savings</td> <td>26 kWh/year</td> <td>€5 per year</td> <td>€0</td> <td></td> </tr> <tr> <td>Match</td> <td>100%</td> <td>I'll do this (0)</td> <td></td> <td></td> </tr> <tr> <td><strong>Brew coffee without the hot plate</strong></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Description</td> <td>Some coffee makers use a considerable amount of electricity for the hot plate. By using a thermos jug, for example, you can achieve a substantial saving and still keep your coffee warm.</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Savings</td> <td>23 kWh/year</td> <td>€5 per year</td> <td>€0</td> <td></td> </tr> <tr> <td>Match</td> <td>99%</td> <td>I'll do this</td> <td></td> <td></td> </tr> <tr> <td><strong>Set the thermostat 1 degree lower</strong></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Description</td> <td>If you set your heating one degree lower by default, you save tens of euros on your energy bill. It also means less CO2 is emitted.</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Savings</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Match</td> <td>99%</td> <td>I'll do this</td> <td></td> <td></td> </tr> </tbody> </table> General conclusions • Recommender systems are all about good UX • Taking a psychological, user-oriented approach we can better account for how users like to express their preferences (input) and reach their goals (output) • Enhancing user interaction (system satisfaction or perceived support) improves both choice satisfaction and user behavior! • Behaviorism is not enough: an integrated user-centric approach offers many insights/benefits! Questions? Contact: Martijn Willemsen @MCWillemsen M.C.Willemsen@tue.nl Find direct links to most of my papers on my website www.martijnwillemsen.nl
(Clojure (for the masses))
(author "Tero Kadenius" "Tarmo Aidantausta")
(date "30.03.2011")

Contents

1 Introduction
  1.1 Dialect of Lisp
  1.2 Dynamic typing
  1.3 Functional programming
2 Introduction to Clojure syntax
  2.1 Lispy syntax
    2.1.1 Parentheses, parentheses, parentheses
    2.1.2 Lists
    2.1.3 Prefix vs. infix notation
    2.1.4 Defining functions
3 Functional vs. imperative programming
  3.1 Imperative programming
  3.2 Object oriented programming
  3.3 Functional programming
    3.3.1 Functions as first class objects
    3.3.2 Pure functions
    3.3.3 Higher-order functions
  3.4 Differences
  3.5 Critique
4 Closer look at Clojure
  4.1 Syntax
    4.1.1 Reader
    4.1.2 Symbols
    4.1.3 Literals
    4.1.4 Lists
    4.1.5 Vectors
    4.1.6 Maps
    4.1.7 Sets
  4.2 Macros
  4.3 Evaluation
  4.4 Read-Eval-Print-Loop
  4.5 Data structures
    4.5.1 Sequences
  4.6 Control structures
    4.6.1 if
    4.6.2 do
    4.6.3 loop/recur
5 Concurrency in Clojure
  5.1 From serial to parallel computing
  5.2 Problems caused by imperative programming paradigm
  5.3 Simple things should be simple
  5.4 Reference types
    5.4.1 Vars
    5.4.2 Atoms
    5.4.3 Agents
    5.4.4 Refs
  5.5 Software transactional memory (STM)
References
1 Introduction

This seminar paper introduces Clojure, a dialect of Lisp: a dynamically typed functional programming language hosted on virtual machines such as the JVM and the CLR. The key point that makes Clojure stand out in the pool of new programming languages is the fact that it has been developed from the ground up with multithreaded programming in mind. Immutable data structures and support for software transactional memory (STM), combined with the functional programming paradigm, give Clojure good tools for working with concurrency. Even though concurrency was emphasized in the design of Clojure, it is still a general purpose language with interoperability features that leverage the functionality and the ecosystem of the host platform, so it is possible to reuse existing code and libraries written for the host platform. The paper first gives a short tour of the very basics of Clojure and then dives into further details of the language and its features.

1.1 Dialect of Lisp

```
(= 2 (+ 1 1))
```

“Lisp is a family of computer programming languages based on formal functional calculus. Lisp (for "List Processing Language") stores and manipulates programs in the same manner as any other data, making it well suited for "meta-programming" applications.”

“Lisp has jokingly been called "the most intelligent way to misuse a computer". I think that description is a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.”

1.2 Dynamic typing

“In a dynamically typed language, values have fixed types, but variables and expressions have no fixed types.”

1.3 Functional programming

“In functional programming, the model of computation is the application of functions to arguments.”

2 Introduction to Clojure syntax

To make this paper easier to read, parts of the syntax of Clojure are introduced here with simple, briefly explained examples.
Some concepts used in the examples are not explained immediately but will be explained later in the paper.

2.1 Lispy syntax

“At once, just like they said, I felt a great enlightenment. I saw the naked structure of Lisp code unfold before me.”

Lisps, like Clojure, do not have a lot of syntax in the traditional sense, compared to languages like Java, C or C++. The syntax is reduced mainly to defining lists with parentheses and to the different rules for evaluating those lists.

2.1.1 Parentheses, parentheses, parentheses

“These are your father’s parentheses. Elegant weapons, for a more... civilized age.”

Lisps are about lists (the name LISP derives from "LISt Processing"), and lists in Lisps are about parentheses. This means that, in Clojure, reading and writing parentheses is inevitable. A lot has been said about the amount of parentheses in Lisps, both good and bad.

“Lisp has all the visual appeal of oatmeal with fingernail clippings mixed in.”

“...and here I thought it was LotsofInfernalStupidParentheses. My mistake; I must have just been in a worse mood. ;->”

2.1.2 Lists

In Clojure, as in all Lisps, lists have special syntax due to homoiconicity: code is data and data is code. This might seem confusing at first, but the rules are quite simple, although they overload the meaning of lists a bit. Below is a list which evaluates to a function call to `+` with 1, 2, and 3 as parameters:

```
(+ 1 2 3)
```

Below is the definition of a list of numbers. The quote at the beginning of the line tells the reader to treat the following list purely as data:

```
'(+ 1 2 3)
```

A list can also be constructed with a call to the function `list`:

```
(list + 1 2 3)
```

2.1.3 Prefix vs. infix notation

“Polish notation, also known as prefix notation, is a form of notation for logic, arithmetic, and algebra.
Its distinguishing feature is that it places operators to the left of their operands.” [21]

```
+ 1 2
```

“Infix notation is the common arithmetic and logical formula notation, in which operators are written infix-style between the operands they act on (e.g. 2 + 2).” [15]

```
1 + 2
```

Clojure, like all Lisps, uses prefix notation, in contrast to the infix notation used in languages like C, C++ and Java. [21] [15]

2.1.4 Defining functions

Functions are first-class objects in Clojure, and there is more than one way of defining them. [43]

```
(def hai (fn [] "Ou hai!"))
```

Above we define a function that returns the string “Ou hai!” by using the macros `fn` and `def`. `fn`, which creates the function, takes the parameter vector (the parameter names, with possible default values, inside `[]`) as its first argument and the body of the function as its second argument. [39] The macro `def` binds the name to a value. [36]

Functions can also be defined with the macro `defn`: [43]

```
(defn hai-u [u] (str "Hai, " u))
```

That macro takes the name, optionally a documentation string and an attribute map, the parameters and the function body as arguments. [36]

Clojure's dispatch macro `#()` can also be used to create a function: [47]

```
(def hai-u2 #(str "Hai, " %1 " and " %2))
```

3 Functional vs. imperative programming

3.1 Imperative programming

"Imperative programming is a programming paradigm that describes computation in terms of statements that change a program state." [13]

Nearly all machine code implementations are written in imperative style. The contents of memory hold the state, and machine language instructions modify it. Higher-level imperative languages have more advanced features like variables and complex statements, but the basic idea remains the same. [14] Here is a small snippet of imperative code. It has a notion of state (a), which is mutated. In addition, an IO operation is performed.
```
int a = 3;
int b = 4;
if (a < b) {
    a++;
}
print(a);
```

3.2 Object oriented programming

"The set of values of the attributes of a particular object is called its state. The object consists of state and the behavior that’s defined in the object’s classes." [19]

Object oriented programming provides a feature called encapsulation. Encapsulation prevents users of an object from directly modifying the data that forms its state by providing operations (methods) for doing it. This is done in order to ensure the validity of the internal state of the object. [8] In other words, at its heart, object oriented programming tends to be imperative. The paradigm itself doesn’t enforce this, but it is usually the case. [7]

An example that demonstrates the imperative nature of OO code:

```
class Foo {
    int a = 3;
    int b = 4;

    increment() {
        if (a < b) {
            a++;
        }
    }

    print() {
        print(a);
    }
    ...
}
```

What happens here is identical to the imperative code example, except that in this case the data (a and b) and the operations mutating the state (a++) or causing other side effects (print(a)) are encapsulated inside the object. increment() is an instruction for modifying the state of the object. The result of increment() may vary at different points in time depending on the state of the object.

3.3 Functional programming

"Functional programming has its roots in mathematics. Instead of providing instructions for modifying the state of the program, functional programming emphasizes the application of functions and avoids state and mutable data in general." [11]

3.3.1 Functions as first class objects

The notion of a function is not unique to functional programming languages. However, functional languages have what are called first class functions. This means that functions have a central role in the code, much like objects do in OO languages.
Functions can be stored in data structures, and the use of higher-order functions is common. [3] The objective of having no side effects manifests itself in pure functions.

3.3.2 Pure functions

A function is considered pure if:

1. "The function always evaluates the same result value given the same argument value(s). The function result value cannot depend on any hidden information or state that may change as program execution proceeds or between different executions of the program, nor can it depend on any external input from I/O devices." [4]
2. “Evaluation of the result does not cause any semantically observable side effect or output, such as mutation of mutable objects or output to I/O devices.” [4]

Using pure functions has several benefits:

1. A pure expression can be removed without affecting other expressions if its result is not used. [11]
2. Referential transparency: an expression can be replaced with its value without changing the behavior of the program. The output is always the same for the same input. [23]
3. If a pure function does not depend on the result of another pure function, they can be performed in any order. I.e., they are thread-safe and can be run in parallel without typical concurrency issues. [11]
4. The lack of side effects, guaranteed by the language, provides opportunities for compiler optimizations. [11]

3.3.3 Higher-order functions

A higher-order function is a function that either takes one or more functions as parameters or returns one as a value. [13] Well-known examples of higher-order functions are map and fold. “Map is the name of a higher-order function that applies a given function element-wise to a list of elements and returns a list of results.” [10] Fold is a “function that iterate an arbitrary function over a data structure in some order and build up a return value”.
[10]

- Doubling the value of every element in a list using map:

```
(map (fn [x] (* 2 x)) '(1 2 3))
=> ((* 2 1) (* 2 2) (* 2 3)) => (2 4 6)
```

infix notation equivalent: (2 × 1, 2 × 2, 2 × 3)

- In Clojure, fold is called reduce. A trivial example, calculating the sum of the elements in a list:

```
(reduce + '(1 2 3))
=> (+ (+ 1 2) 3) => 6
```

infix notation equivalent: 1 + 2 + 3

Partial function application and currying

Higher-order functions enable an interesting feature where a new function can be generated based on another function. Partial function application is a technique which produces a function in which one or more of the arguments of the original function are fixed. Currying resembles partial function application. The difference is that in currying each (curried) function takes only a single argument and produces a function which takes one argument less than its predecessor. I.e., currying produces a chain of functions, whereas with partial function application an arbitrary number of arguments can be fixed at once. [7] Out of the box, Clojure supports partial function application but not currying.
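Although currying is not built in, a curried function can be emulated by hand with nested `fn` forms. A minimal sketch of this idea (the name `curried-add` is our own illustration, not part of Clojure or of the paper):

```clojure
;; A hand-rolled "curried" addition: each call takes one argument
;; and returns a function awaiting the next one.
(def curried-add
  (fn [a]
    (fn [b]
      (+ a b))))

;; ((curried-add 2) 3) evaluates to 5.
(println ((curried-add 2) 3))
```

Compare this with `(partial + 2)`, which fixes an argument of the existing function in one step instead of producing a chain of one-argument functions.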
A simple example where a function that adds 2 to its argument is applied to a list of elements:

```
(map (partial + 2) '(1 2 3))
=> ((+ 2 1) (+ 2 2) (+ 2 3)) => (3 4 5)
```

infix notation equivalent: ((2 + 1), (2 + 2), (2 + 3))

3.4 Differences

| Characteristic | Imperative approach | Functional approach |
|---|---|---|
| Programmer focus | How to perform tasks (algorithms) and how to track changes in state. | What information is desired and what transformations are required. |
| State changes | Important. | Non-existent. |
| Order of execution | Important. | Low importance. |
| Primary flow control | Loops, conditionals, and function (method) calls. | Function calls, including recursion. |
| Primary manipulation unit | Instances of structures or classes. | Functions as first-class objects and data collections. |

[12]

3.5 Critique

The proponents of functional programming claim that imperative programming is fundamentally broken, especially in a multi-threaded environment. First of all, there is an argument that the world doesn't function the way imperative programming models it. When dealing with mutable state, the "world" has to stop in order for it to be examined or changed. This becomes a major problem when concurrent programming enters the picture. [3] OO programming suffers from the same problems as imperative programming. To quote Rich Hickey, the creator of Clojure: "Encapsulation just means: I’m in charge of this spaghetti code." I.e., encapsulation doesn’t change the fact that OO is usually based on mutable state. It just tries to prevent the user of the object’s interface from seeing it (the (imperative) spaghetti code).
4 Closer look at Clojure

Now that the paradigm of functional programming has been introduced, some of the details of Clojure's features and terminology are explained.

4.1 Syntax

"Clojure is a homoiconic language, which is a fancy term describing the fact that Clojure programs are represented by Clojure data structures." [47] Clojure syntax is built on symbolic expressions, S-expressions, which are list based data structures. [24, 47, 5] In addition to lists, also symbols, literals, vectors, maps and sets make up the syntax and are parsed by the Clojure reader. [47]

4.1.1 Reader

The reader parses the textual presentation of the Clojure code to data structures.

```
(doc read)
```

Documentation for any given function can be acquired by calling the function doc. The reader then creates the form of that same data structure that the compiler will see. The Clojure compiler compiles the code, data structures, to host platform bytecode. This bytecode is then executed by the host platform virtual machine. [52]

4.1.2 Symbols

"Symbols begin with a non-numeric character and can contain alphanumeric characters and *, +, !, -, _, and ?." [47]

```
(def ines "a symbol called ines")
```

def is a macro that takes a symbol as a parameter and then gives that symbol a value if one is given. Here it defines a symbol called ines in the current namespace with the value "a symbol called ines". [56]

4.1.3 Literals

```
'(strings are literals as are numbers and characters (1 2 3) (\a \b \c))
(and of course booleans and keywords true :keyword)
```

4.1.4 Lists

```
(list "can contain anything you want" 1 2 3 \a \b \c :keyword '())
```

If you don’t use ' or quote, Clojure will try to use the first cell of a list as a function call. So if you want to just express data, use ' or quote.
```
'(any of these won't be evaluated)
```

4.1.5 Vectors

```
(vector '[can contain anything too 1 2 3 \a \b \c :keyword])
```

4.1.6 Maps

```
(hash-map :key value key :value :map {:can contain :maps too})
```

4.1.7 Sets

```
(hash-set "can contain sets" #{:a :b} #{:b :c} "and anything unique" 1 2 3 \a \b \c '(:a :b))
```

4.2 Macros

"The definition of a macro is essentially a function that generates Lisp code—a program that writes programs." "Macros work differently from normal functions, and knowing how and why macros are different is the key to using them correctly. A function produces results, but a macro produces expressions—which, when evaluated, produce results.” [29]

In Clojure, macros are implemented in a way that lets user code extend the compiler: you can really grow the language. [45]

```
(defmacro print-evaluate [code]
  `(println '~code "evaluates to " ~code))
```

defmacro defines a new macro with a structure similar to defn. The backquote (`) creates a template expression, within which selected items can be evaluated by using macro characters. [28]

* The defmacro macro takes the name (a symbol), the parameters (a vector) and the body (an expression) as arguments [34]
  - print-evaluate
  - [code]
  - `(println '~code "evaluates to " ~code)
* `, the backquote (also called syntax-quote), stops evaluation of the template [47]
* '~code is equivalent to (quote ~code) [47]
  - quote stops evaluation
  - ~code unquotes code inside the backquoted template, so its value is inserted

This very simple example only scratches the surface of what you can do with macros, but it demonstrates at least one way of creating them.

4.3 Evaluation

“Every form not handled specially by a special form or macro is considered by the compiler to be an expression, which is evaluated to yield a value.
There are no declarations or statements, although sometimes expressions may be evaluated for their side-effects and their values ignored.”

(println is a symbol that is bound to a function value)

Clojure code can be evaluated interactively with the REPL, from forms read from a stream via load or load-file, or programmatically with eval. “In all cases, evaluation is the same - a single object is considered by the compiler, evaluated, and its result returned. If an expression needs to be compiled, it will be.”

4.4 Read-Eval-Print-Loop

“A read-eval-print loop (REPL), also known as an interactive toplevel, is a simple, interactive computer programming environment.” Clojure has a REPL which you can use to interact with your code - you can grow your program, with data loaded, adding features, fixing bugs, testing, in an unbroken stream.

4.5 Data structures

The data structures (collections) in Clojure are persistent. The implementation of Clojure collections allows efficient (semantic) copying. The efficiency is achieved by utilizing structural sharing, which is possible due to immutability; structural sharing between mutable data structures would be problematic. Collections are not bound to concrete data structures. This is a big difference between Clojure and some older Lisps. Instead, in Clojure, collections are represented by abstractions, and each abstraction may have one or more implementations. “Clojure’s reader supports literal syntax for maps, sets and vectors in addition to lists.” The literal syntax for these data structures was introduced in chapter 4.1.

4.5.1 Sequences

Another key part of data structure abstraction in Clojure is the sequence (seq). Many algorithms in Clojure use sequences as their data structure abstraction. "A seq is a logical list", unlike most Lisps where the list is represented by a concrete, 2-slot structure. The seq interface (ISeq) allows many data structures to expose their elements as sequences.
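The uniformity of this abstraction can be sketched as follows; first and rest work the same way on lists, vectors and even strings (a minimal sketch of our own, not from the paper):

```clojure
;; first/rest treat different collections uniformly as sequences.
(println (first '(1 2 3)))   ; from a list
(println (first [1 2 3]))    ; from a vector
(println (rest [1 2 3]))     ; yields the seq (2 3)
(println (first "abc"))      ; even strings expose a seq of characters
```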
"The seq function yields an implementation of ISeq appropriate to the collection. Seqs differ from iterators in that they are persistent and immutable, not stateful cursors into a collection. As such, they are useful for much more than foreach - functions can consume and produce seqs, they are thread safe, they can share structure etc." Clojure provides an extensive library of functions for processing sequences, and all sequence functions can be used with any collection.

4.6 Control structures

if, do, and loop/recur are the three most common control structures in Clojure.

4.6.1 if

```
(println (if (= 42 (* 6 7)) "true" "false"))
```

4.6.2 do

```
(defn grade-good? [grade]
  (if (> grade 3)
    "Good it is! :)"
    (do (println "Low grade!" grade)
        "No, it's bad. :(")))
```

do is idiomatic to use when you are introducing side-effects. do ignores the return values of all the forms besides the last.

4.6.3 loop/recur

"The loop special form works like let, establishing bindings and then evaluating exprs. The difference is that loop sets a recursion point, which can then be targeted by the recur special form." "recur binds new values for loop’s bindings and returns control to the top of the loop.” "If the recursion point was a fn method, then it rebinds the params. If the recursion point was a loop, then it rebinds the loop bindings.”

```
(loop [x '(1 2 3 4) sumx 0]
  (if (empty? x)
    sumx
    (recur (rest x) (+ (first x) sumx))))
```

The code example above iterates through the list of numbers, removing one number from the list x on every recursion and adding it to sumx.

- x is set to (1 2 3 4) and sumx to 0.
- first returns the first item in the collection. [48]
- + returns the addition of two numbers. [31]
- rest returns a sequence of the items after the first. [48]

loop/recur is a pair that exists in Clojure mainly because of the lack of tail-call optimization on the host platforms.

5 Concurrency in Clojure

5.1 From serial to parallel computing

For many years the speed of the majority of computer programs could be improved simply by upgrading the hardware on which the program was run.
I.e., “Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004.” [20] Therefore, the standard practice has been to write software for serial computation, meaning that: [26]

- Software is run on a single computer on a single processor (core).
- Only one instruction may execute at any moment in time.
- Instructions are executed one after another.

Moore’s law states that the number of transistors that can be placed in an integrated circuit doubles approximately every two years. The trend started in 1958 and is expected to continue until 2015 or 2020 or later. [17] The extra transistors cannot be used for increasing the frequency of the microprocessor, but they can be used for adding new processor cores for parallel computing. [31]

5.2 Problems caused by imperative programming paradigm

As stated earlier, by avoiding mutable state there is no need for locking, and threads cannot interfere with each other. Let’s take a look at two common concurrency issues.

- “A deadlock occurs when two threads each lock a different variable at the same time and then try to lock the variable that the other thread already locked.” [2]
- “Race condition occurs when two threads access a shared variable at the same time.” [2]

As we can see, these issues are not necessarily related to the underlying problem we are trying to solve. They are mere implementation details.

5.3 Simple things should be simple

Immutable data structures and pure functions enable trivially easy parallel processing of functions as long as there is no data dependency between them. A simple example of easy parallel execution of functions using pmap, a parallel map implementation:

```
(pmap (fn [x] (+ x 2)) '(1 2 3))
```

There is no need for explicitly spawning new threads, nor is there fear of race conditions or deadlocks, since the function does not rely on external state or mutate anything.
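Because the mapped function is pure, pmap can evaluate the elements on multiple threads and still agree with the serial map. A minimal sketch of our own (the name `add2` is illustrative, not from the paper):

```clojure
;; A pure function: its result depends only on its argument.
(def add2 (fn [x] (+ x 2)))

;; Serial and parallel map give the same result for a pure function,
;; regardless of the order in which the elements are processed.
(println (map add2 '(1 2 3)))   ; prints (3 4 5)
(println (pmap add2 '(1 2 3)))  ; prints (3 4 5)

;; pmap uses the agent thread pool; in a standalone script,
;; shutdown-agents lets the JVM exit promptly.
(shutdown-agents)
```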
5.4 Reference types

“Clojure, being a practical language, allows state to change but provides mechanism to ensure that, when it does so, it remains consistent, while alleviating developers from having to avoid conflicts manually using locks etc.” [38] In practice, this means that Clojure has extensive concurrency features built in. Clojure provides four different mechanisms for maintaining a persistent reference to a changing value: [51]

1. Vars
2. Atoms
3. Agents
4. Refs

The reference types can be divided into two categories based on how they are modified. Vars, Atoms and Refs are modified synchronously, meaning that when a function is applied to a synchronous reference type, the call blocks until the function has been applied. Agent is the only asynchronous reference type. [3] Atom, Agent and Ref are created by calling a specific function, unsurprisingly atom, agent and ref respectively, and the value each reference type holds can be accessed with the function deref or the reader macro @.

5.4.1 Vars

Vars resemble global variables in other programming languages. Vars can have a binding to an initial value called a root binding. A root binding is shared by all threads unless the var has a per-thread binding. Therefore, the value of a var is its per-thread binding or, if it doesn’t have any, its root binding. If neither binding exists, the var is unbound. If the var already existed and had a value, the old value remains bound. Rebinding the same var with def is not encouraged, since “Subsequently calling (def something 6) is not a thread-safe operation.”

Repl examples:

```clojure
user=> (def foo)
#'user/foo
user=> foo
java.lang.IllegalStateException: Var user/foo is unbound.
(NO_SOURCE_FILE:0)
user=> (def foo 3)
#'user/foo
user=> foo
3
user=> (def foo)
#'user/foo
user=> foo
3
user=> (def foo 4)
#'user/foo
user=> foo
4
```

5.4.2 Atoms

“Atoms provide a way to manage shared, synchronous, independent state.” “Shared” means that they can be modified from different threads. “Synchronous” means that the call to the function modifying an atom blocks until the operation is performed. “Independent” means that there is no coordination mechanism for ensuring that only one thread at a time can modify the value of an atom; the atom uses a different technique for achieving the same goal. The value of an atom is changed by applying a function to its old value with swap!: “swap! reads the current value, applies the function to it, and attempts to compare-and-set it in. Since another thread may have changed the value in the intervening time, it may have to retry, and does so in a spin loop. The net effect is that the value will always be the result of the application of the supplied function to a current value, atomically. However, because the function might be called multiple times, it must be free of side effects.” Atoms are created by calling the function atom. [33, 35, 37]

Repl examples:

```clojure
user=> (def foo (atom 1))
#'user/foo
user=> (swap! foo inc)
2
user=> (let [bar (atom [:a])]
         (println @bar)
         (swap! bar conj :b))
[:a]
[:a :b]
```

5.4.3 Agents

Agents are the asynchronous counterpart of atoms. They are uncoordinated like atoms, meaning that “atoms and agents queue up change functions to ensure that the changes occur atomically.” [6] Agents are modified by applying the function send to them. send applies (sends) a function to the agent, which is used for modifying the value of the agent. However, unlike in the case of an atom, the sent function is not applied to the current value immediately. Instead, the call to send returns immediately.
[6] The state of the agent can be requested by adding a watcher to the agent. [32] Alternatively, the blocking function await can be called. await “blocks the current thread (indefinitely!) until all actions dispatched thus far, from this thread or agent, to the agent(s) have occurred.” [34]

Repl examples:

```clojure
user=> (def foo (agent 1))
#'user/foo
user=> @foo
1
user=> (do (send foo inc) (await foo))
nil
user=> @foo
2
```

5.4.4 Refs

Refs are mutable references bound to a single storage location. The key thing about refs is that they are transactional: ref modification has to happen within a coordinated transaction. [31] Enforcing the use of a transaction eliminates the possibility of a conflict when two threads update a ref. [6] Refs are created by calling the function ref. The value of a ref is modified by using the functions ref-set, alter or commute. A transaction is established by using the macro dosync.

Repl examples:

```clojure
user=> (def foo (ref "bar"))
#'user/foo
user=> @foo
"bar"
user=> (ref-set foo "impossible")
java.lang.IllegalStateException: No transaction running
(NO_SOURCE_FILE:0)
user=> (dosync (ref-set foo "success"))
"success"
user=> @foo
"success"
```

5.5 Software transactional memory (STM)

What exactly does a transaction, as referred to in the last section, mean? Clojure has a clever mechanism for automatic handling of transactions called software transactional memory (STM). [38] STM is a concurrency control mechanism, analogous to database transactions, for controlling access to shared memory in concurrent computing. It is an alternative to lock-based synchronization. A transaction in this context is a piece of code that executes a series of reads and writes to shared memory. [25] STM implementations have been written for a number of languages as some kind of an API or binding, but in Clojure STM is built directly into the language core.
[25] Clojure transactions are similar to those found in database management systems. The STM implementation in Clojure ensures that all actions on refs are atomic, consistent and isolated. [31]

- “Atomic means that every change to Refs made within a transaction occurs or none do.” [31]
- “Consistent means that each new value can be checked with a validator function before allowing the transaction to commit.” [31]
- “Isolated means that no transaction sees the effects of any other transaction while it is running.” [31]

If a transaction encounters a conflict while running, it is automatically retried. On a more detailed level the implementation guarantees that: [31]

1. “All reads of Refs will see a consistent snapshot of the ‘Ref world’ as of the starting point of the transaction (its ‘read point’). The transaction will see any changes it has made. This is called the in-transaction-value.”
2. “All changes made to Refs during a transaction (via ref-set, alter or commute) will appear to occur at a single point in the ‘Ref world’ timeline (its ‘write point’).”
3. “No changes will have been made by any other transactions to any Refs that have been ref-set/altered/ensured by this transaction.”
4. “Changes may have been made by other transactions to any Refs that have been commuted by this transaction. That should be okay since the function applied by commute should be commutative.”
5. “Readers and commuters will never block writers, commuters, or other readers.”
6. “Writers will never block commuters, or readers.”
7. “I/O and other activities with side-effects should be avoided in transactions, since transactions will be retried. The io! macro can be used to prevent the use of an impure function in a transaction.”
8. “If a constraint on the validity of a value of a Ref that is being changed depends upon the simultaneous value of a Ref that is not being changed, that second Ref can be protected from modification by calling ensure.
Refs ‘ensured’ this way will be protected (item #3), but don’t change the world (item #2).”
9. “The Clojure MVCC STM [18] is designed to work with the persistent collections, and it is strongly recommended that you use the Clojure collections as the values of your Refs. Since all work done in an STM transaction is speculative, it is imperative that there be a low cost to making copies and modifications. Persistent collections have free copies (just use the original, it can’t be changed), and ‘modifications’ share structure efficiently.”
10. “The values placed in Refs must be, or be considered, immutable!”

References
Make RCU do less (& later)!

Presenters:
- Joel Fernandes (Google)
- Uladzislau Rezki (Sony)
- Rushikesh Kadam (Intel)

Intel power data courtesy: Sitanshu Nanavati.

Overview
● Discuss what RCU does at a high level (not how it works!).
● Discuss the 2 main issues we found:
○ On a mostly idle system, RCU activity can disturb the idleness.
■ RCU default config required to keep the tick on when idle with CBs queued.
■ RCU constantly asked to queue callbacks on a lightly loaded system.
● Discuss possible solutions.
● Taking questions at the end as time permits (and then in the hallway).

What does RCU do?
- RCU reader critical sections are protected by a “read lock”.
- RCU writer critical sections are protected by regular locks.
- Readers and writers execute concurrently.
- The writer creates a **copy** of the object, writes to it, and switches the object pointer to the new one (release-ordered write).
- The writer GCs the old object after waiting (**update**).

What does RCU do?
- That’s just one use case; there are many uses of RCU. All use cases need the same basic tools:
- Lock-less markers of a critical section (CS). Call it a “reader”.
- Start waiting at some point in time \( t = T_0 \).
- Stop waiting after all readers that existed at \( T_0 \) have exited the CS.

What does RCU do?
- On a local CPU (running in **kernel mode** with a CB queued).
**Upper red arrows** are timer ticks checking: are there readers left? If not, report.
**Lower red arrows** are timer ticks checking: have ALL CPUs reported? If yes, invoke the CB. If no, try again.
Queued a Callback (CB)

What does RCU do?
- On a local CPU (running in **idle mode** with a CB queued).
**Upper red arrows** are timer ticks checking: are there readers left? If not, report.
**THESE ARE NOT NEEDED - AN IDLE CPU CANNOT BE IN AN RCU READER CRITICAL SECTION!**
**Lower red arrows** are timer ticks checking: have ALL CPUs reported? If yes, invoke the CB. If no, try again.
**THESE ARE STILL NEEDED - the local CPU has a queued CB.**

What does RCU do?
- You see the problem?
- RCU can block the timer tick from getting turned off!
- Negates the power savings of CONFIG_NO_HZ_IDLE. (To be fair to the main RCU maintainer, this issue is courtesy of the use case queuing a lot of RCU callbacks on otherwise-idle CPUs in the first place.)

What does RCU do?
- This happens even in user mode - NO_HZ_FULL systems typically turn the tick off.
- RCU can keep it on (if CBs are queued on a ‘nohz full’ CPU).

Issue 1: RCU keeping the scheduler tick ON when idle.
- The “Local Video Playback” use-case has 2500+ wakes per second. A large chunk of the wakes result from constantly queued RCU callbacks.
- RCU wakes are seen at HZ rate (red boxes) between graphics 16.6 ms activity (blue boxes).
- Blocks deeper package C-states. Impacts power.

How bad are idle ticks for power?
- We know idle ticks are bad for power, but we did not know how bad!
- In video playback, RCU wakes amount to < 2% CPU load, but blocked the deepest package C-state (PC8) for 25+% of the time, costing 10+% in SoC + memory power.
- If you are profiling CPU load, then you will likely miss the impact of wakes (use powertop).

Why idle ticks are so bad for power
What are package C-states?
- Traditionally, ACPI C-states were CPU power states.
- The idle governor picks C-states based on the OS’s next-event prediction and the C-states’ exit latency & target residency.
- CPU C-states have low exit latency & target residency.
- 1000 HZ ticks do not block core C-states much.
- E.g. Sandy Bridge C-states table (2011):

```c
static struct cpuidle_state snb_cstates[] __initdata = {
	{ .name = "C1",  .exit_latency = 2,   .target_residency = 2, },
	{ .name = "C1E", .exit_latency = 10,  .target_residency = 20, },
	{ .name = "C3",  .exit_latency = 80,  .target_residency = 211, },
	{ .name = "C6",  .exit_latency = 104, .target_residency = 345, },
	{ .name = "C7",  .exit_latency = 109, .target_residency = 345, },
	{ .enter = NULL }
};
```

Why idle ticks are so bad for power
What are package C-states?
The SoC architecture provides an opportunity to extend the OS C-state hints to power-manage the entire SoC.
SoCs have a power management unit (HW + microcode) that coordinates the CPUs, IP blocks and common logic to put the entire SoC into a low-power mode. Package C-states add extended C-states with high exit latency & target residency. 1000 HZ ticks would impact deeper package C-states. E.g. AlderLake C-state table (2021):

```c
static struct cpuidle_state adl_cstates[] __initdata = {
	{ .name = "C1",  .exit_latency = 1,   .target_residency = 1, },
	{ .name = "C1E", .exit_latency = 2,   .target_residency = 4, },
	{ .name = "C6",  .exit_latency = 220, .target_residency = 600, },
	{ .name = "C8",  .exit_latency = 280, .target_residency = 800, },
	{ .name = "C10", .exit_latency = 680, .target_residency = 2000, },
	{ .enter = NULL }
};
```

Why was RCU keeping the tick on?
This is required in default RCU configurations, as CBs are invoked on the same CPU they were queued on, in a softirq. Advantages:
- Timely detection of GP end and thus execution of queued CBs.
- Executing CBs on the queuing CPU eliminates the need for CB list locking.
- No need for additional thread wake-ups, as the local softirq executes the CBs.
- The cache line is likely hot from queuing, so the CB would not incur misses.
These can be especially useful on busy systems and large #CPU servers!

Issue 1: RCU keeping the scheduler tick ON when idle.
Possible solution: the CONFIG_RCU_FAST_NO_HZ option
- CPUs enter the dyntick-idle state (the state where the tick is turned off) even if they have CBs queued.
- Idle CPUs with callbacks check RCU state every few jiffies:
- 4 jiffies for non-kfree CBs.
- 6 jiffies or so for kfree CBs.

Issue 1: RCU keeping the scheduler tick ON when idle.
Solution for newer kernels: \texttt{CONFIG_RCU_NOCB_CPU} (execute RCU CBs in per-CPU threads).

Issue 1: RCU keeping the scheduler tick ON when idle.
Solution for newer kernels: CONFIG_RCU_NOCB_CPU
Can cause performance overhead on systems with frequent CB queue/exec!

Issue 1: RCU keeping the scheduler tick ON when idle.
Solution for newer kernels: `CONFIG_RCU_NOCB_CPU`
However, it can be great for power and CPU isolation…
- The scheduler may move the CB threads to non-idle CPUs, thus leaving more CPUs idle.
- **Both** starting of new grace periods and executing CBs are moved out of the softirq context and into threads.

CONFIG_RCU_NOCB_CPU saves lots of power
- RCU callback offload unblocks dyntick-idle and hence reduces timer wakes.
- RCU callback offload does increase scheduler wakes marginally, but reduces total platform wakes.
- Improves package C-state residency and hence SoC + memory power.
Use-case: Local video playback via Chrome browser, VP9 1080p @ 30 fps content.
Device: Chrome reference device, AlderLake hybrid CPU with 2 Cores (with Hyperthreading) + 8 Atoms.

New option: CONFIG_RCU_NOCB_CPU_ALL
- If you enable CONFIG_RCU_NOCB_CPU, you still need to specify rcu_nocbs=0-N to make it work. So...
- The new option CONFIG_RCU_NOCB_CPU_ALL was added to simply enable nocb for all CPUs by default.

Can we do even better? Observations:
● When a system is mostly idle, most CBs don’t need to execute right away; some can be delayed as long as needed!
● Some CBs in the system “trickle” frequently.

Observation: ChromeOS when idle
- Some CBs in the system “trickle” frequently.
- Several callbacks constantly queued.
rcutop refreshing every 5 seconds. ChromeOS logged in with screen off. Device on battery power.
<table>
<thead>
<tr><th>Callback</th><th>Queued</th><th>Executed</th></tr>
</thead>
<tbody>
<tr><td>inode_free_by_rcu</td><td>7</td><td>10</td></tr>
<tr><td>delayed_put_task_struct</td><td>7</td><td>15</td></tr>
<tr><td>k_itimer_rcu_free</td><td>9</td><td>9</td></tr>
<tr><td>radix_tree_node_rcu_free</td><td>16</td><td>27</td></tr>
<tr><td>rcu_work_rcufn</td><td>1</td><td>2</td></tr>
<tr><td>put_cred_rcu</td><td>4</td><td>8</td></tr>
<tr><td>delayed_put_pid</td><td>7</td><td>15</td></tr>
<tr><td>unbind_fence_free_rcu</td><td>4</td><td>5</td></tr>
<tr><td>dst_destroy_rcu</td><td>4</td><td>10</td></tr>
<tr><td>__i915_gem_free_object_rcu</td><td>5</td><td>10</td></tr>
<tr><td>thread_stack_free_rcu</td><td>3</td><td>7</td></tr>
</tbody>
</table>

Observation: ChromeOS display pipeline
The display pipeline in ChromeOS constantly opens/closes graphics buffers.

Observation: Logging in Android (as example)
Android uses CONFIG_RCU_NOCB_CPU by default to offload all CPUs.

Observation: Logging in Android (as example)
Example: logging during a static image (Android). Static image is an important use-case for power testing on Android. The system is mostly idle to minimize the power drain of the platform:
- The CPU stops refreshing the panel and the panel self-refreshes on its own.
- CPUs spend most of their time in the deepest C-state.
- SoC bandwidth is minimal (memory bus, CPU/cache frequencies, etc.).
Logging does constant file open/close, giving RCU work when FDs get freed. As a side effect of such periodic light load, many wakeups happen due to frequently kicking the RCU core to initialize a GP, so that callbacks can be invoked after it passes.

Observation: Logging in Android (as example)
Below is a post-processing of the scheduler ftrace for the static-image use-case over 30 seconds (this is with CONFIG_RCU_NOCB_CPU with all CPUs offloaded). The trace was taken on an ARM big.LITTLE system.
It is obvious that the biggest part belongs to the “logd” logger, whereas second place is fully owned by the RCU core subsystem, marked in red.

Observation: Logging in Android (as example)
RCU mostly invokes callbacks related to the VFS and SELinux subsystems during logging:
- `file_free_rcu()`
- `inode_free_by_rcu()`
- `i_callback()`
- `__d_free()`
- `avc_node_free()`
Since the system is lightly loaded and the number of posted callbacks to be invoked is rather small (between 1 and 10), this pattern produces most of the wakeups (in the static-image use-case), waking a CPU to offload __only__ a few callbacks.

Observation: Logging in Android
6-7 millisecond intervals; only a few callbacks are invoked.

Issue 2: RCU queuing CBs on a lightly loaded system
Let us explore some solutions to this…

Issue 2: RCU queuing CBs on a lightly loaded system
Solution 1: Delay RCU processing using `jiffies_till_{first,next}_fqs`
- Great power savings:

<table>
<thead>
<tr><th>jiffies_till_first_fqs &amp; jiffies_till_next_fqs</th><th>Baseline (NOCB)</th><th>= 8, 8</th><th>= 16, 16</th><th>= 24, 24</th><th>= 32, 32</th></tr>
</thead>
<tbody>
<tr><td>SoC+Memory, power savings w.r.t Baseline</td><td>0%</td><td>2%</td><td>3%</td><td>3.4%</td><td>3.2%</td></tr>
</tbody>
</table>

- Problems:
- Causes a slowdown for ALL `call_rcu()` users globally, whether they like it or not.
- Causes a slowdown for `synchronize_rcu()` users globally.
- Significantly regresses boot time.

Issue 2: RCU queuing CBs on a lightly loaded system
Solution 1: Jiffies causes a massive synchronize_rcu() slowdown.
- ChromeOS tab-switching autotest:
- synchronize_rcu() latency increases quickly from 23 ms to 169 ms (when changing jiffies from 3 to 32).
- The same evaluation with synchronize_rcu_expedited() gives us a latency of < 1 ms at jiffies = 32.

Issue 2: RCU queuing CBs on a lightly loaded system
Solution 1: Jiffies increase causing function tracer issues
Several paths in the ftrace code use synchronize_rcu(). Two examples:
- pid_write(), triggered by a write to /sys/kernel/debug/tracing/set_ftrace_pid
- ring buffer code such as ring_buffer_resize()
The end result is that trace-cmd record -p function_graph can take several seconds longer to start and stop recording than it otherwise would.

Issue 2: RCU queuing CBs on a lightly loaded system
Solution 1: Jiffies causing boot-time issues (SELinux)
SELinux enforcing during ChromeOS boot-up invokes synchronize_rcu():
[ 17.715904] => __wait_rcu_gp
[ 17.715904] => synchronize_rcu
[ 17.715904] => selinux_netcache_avc_callback
[ 17.715904] => avc_ss_reset
[ 17.715904] => sel_write_enforce
[ 17.715904] => vfs_write
[ 17.715904] => ksys_write
[ 17.715904] => do_syscall_64

Issue 2: RCU queuing CBs on a lightly loaded system
Solution 1: Jiffies causing a per-cpu refcount regression
- RCU is used to toggle the counters to atomic mode and vice versa.
- A blanket delay can badly hurt paths that don’t really want to free memory but use call_rcu() for some other purpose, like suspend:
- The call_rcu() slowdown affects per-cpu refcounters.
- These counters use RCU when switching to atomic mode: __percpu_ref_switch_mode() -> percpu_ref_switch_to_atomic_sync().
- This call slows down per-cpu refcount users such as blk_pre_runtime_suspend().
This is why we cannot assume call_rcu() users mostly just want to free memory. There can be cases just like this, and a blanket slowdown of call_rcu() might bite unexpectedly.
Issue 2: RCU queuing CBs on a lightly loaded system
Solution 1: Jiffies with the expedited option
- The previous synchronize_rcu() issues can be mitigated by using the expedited boot option, which expedites grace periods while ensuring good power efficiency.
- However, experiments showed that using expedited RCU with jiffies still causes a boot-time regression.
- Also, the expedited option is expensive, and can affect real-time workloads.

Issue 2: RCU queuing CBs on a lightly loaded system
Solution 2: Delay RCU CB processing (Lazy RCU)
- Delay callback execution as long as possible.
- Introduce a new API for lazy RCU (call_rcu_lazy).
- Need to handle several side effects:
- RCU barrier.
- CPU hotplug, etc.
- Memory pressure.
- Offloading and de-offloading.

Issue 2: RCU queuing CBs on a lightly loaded system
Intro: life cycle of a grace period
- Waiting for a new GP request.
- Propagate start of GP down the TREE (rcu_gp_init).
- Force Quiescent State (FQS) loop (rcu_gp_fqs_loop).
- Are ALL QS marked? (root node qs_mask == 0)
- Mark and propagate GP end down the tree (rcu_gp_cleanup sets gp_seq of rcu_state).
- Mark CPU QS.
- Propagate QS up the TREE.
- Is a GP in progress?
- All CPUs done? (Set root node qs_mask = 0.)
- Synchronize RCU.
- Queue wake-up callback (rcu_segcblist queue).
- Request a new GP (rcu_start_this_gp).
- Sleep - wake up - continue.
- Softirq CB exec DELAYED.

Issue 2: RCU queuing CBs on a lightly loaded system
Lazy RCU: design approach
Can cause performance overhead on systems with frequent CB queue/invoke due to locking!

Issue 2: RCU queuing CBs on a lightly loaded system
Lazy RCU: design approach - re-use the bypass list. The bypass list is per-CPU and (almost) lock free!

Issue 2: RCU queuing CBs on a lightly loaded system
Lazy RCU: design approach - re-use the bypass list. Flush the bypass list if there is memory pressure, or a lengthy timer expires!
Issue 2: RCU queuing CBs on a lightly loaded system
Solution 2: Delay RCU CB processing (Lazy RCU)
Lazy RCU further reduces wakes by 300+ per second, and improves SoC package C-state residency & power.
Use-case: Local video playback via Chrome browser, VP9 1080p @ 30 fps content.
Device: Chrome reference device, AlderLake hybrid CPU with 2 Cores (with Hyperthreading) + 8 Atoms.

Issue 2: RCU queuing CBs on a lightly loaded system
Solution 2: Delay RCU CB processing (Lazy RCU)
Latest patches: https://lore.kernel.org/all/20220819204857.3066329-1-joel@joelfernandes.org/
Summary:
- Introduce a new API for lazy RCU (call_rcu_lazy).
- Queue CBs onto the bypass list.
- Flush the bypass list when:
- Non-lazy CBs show up.
- The bypass list grows too big.
- Memory is low.
- Several corner cases now handled (rcu_barrier, CPU hotplug, etc.).

Home screen swipe (as example)
Home screen swipe power (~3% delta): default avg ~365 mA; cpu_lazy_v6 avg ~352 mA.

Drawbacks and considerations
- Depends on users of call_rcu() adopting the lazy variant:
- If a new user of call_rcu() shows up, it would go unnoticed and negate the benefits.
- Updates to docs may help: https://docs.kernel.org/RCU/whatisRCU.html#id11
- Risk of a user using call_rcu_lazy() accidentally when they should really use call_rcu() - for example, a use case requiring a synchronous wait.
- Risks under memory pressure:
- Is the protection enough under extreme conditions?
- Can test with more test cases such as the ChromeOS memory-pressure test.

Thanks!
- Paul McKenney (for putting up with us).
- Presenters.
- LPC sponsors and organizers.
- Frederic Weisbecker for reviewing code.

Questions?
Buffer Overflows
CSE 351 Autumn 2020

Alt text: I looked at some of the data dumps from vulnerable sites, and it was ... bad. I saw emails, passwords, password hints. SSL keys and session cookies. Important servers brimming with visitor IPs. Attack ships on fire off the shoulder of Orion, c-beams glittering in the dark near the Tannhäuser Gate. I should probably patch OpenSSL.
http://xkcd.com/1353/

Administrivia
- hw13 due Wednesday (11/4)
- hw15 due Monday (11/9)
- Lab 3 released Wednesday, due next Friday (11/13)
  - You will have everything you need by the end of this lecture
- Midterm: group stage due tonight
  - Individual stage Thu-Fri (expect adjustments)
  - Rubric and grades will be found on Gradescope
  - We will grade as quickly as we can

Buffer Overflows
- Address space layout review
- Input buffers on the stack
- Overflowing buffers and injecting code
- Defenses against buffer overflows

Review: General Memory Layout
- **Stack**
  - Local variables (procedure context)
- **Heap**
  - Dynamically allocated as needed
  - `new`, `malloc()`, `calloc()`, ...
- **Statically-allocated Data**
  - Read/write: global variables (Static Data)
  - Read-only: string literals (Literals)
- **Code/Instructions**
  - Executable machine instructions
  - Read-only

Memory Allocation Example

```c
char big_array[1L<<24];  /* 16 MB */
int global = 0;

int useless() { return 0; }

int main() {
    void *p1, *p2;
    int local = 0;
    p1 = malloc(1L << 28);  /* 256 MB */
    p2 = malloc(1L << 8);   /* 256 B */
    /* Some print statements ... */
}
```

Where does everything go?

What Is a Buffer?
- A buffer is an array used to temporarily store data
- You’ve probably seen “video buffering...”
  - The video is being written into a buffer before being played
- Buffers can also store user input

Reminder: x86-64/Linux Stack Frame
- **Caller’s Stack Frame**
  - Arguments (if > 6 args) for this call
- **Current/Callee Stack Frame**
  - Return address
    - Pushed by `call` instruction
  - Old frame pointer (optional)
  - Caller-saved registers pushed before setting up arguments for a function call
  - Callee-saved registers pushed before using long-term registers
  - Local variables (if they can’t be kept in registers)
  - “Argument build” area (need to call a function with > 6 arguments? Put them here)

Buffer Overflow in a Nutshell
- C does not check array bounds
  - Many Unix/Linux/C functions don’t check argument sizes
  - Allows overflowing (writing past the end) of buffers (arrays)
- “Buffer Overflow” = writing past the end of an array
- Characteristics of the traditional Linux memory layout provide opportunities for malicious programs
  - Stack grows “backwards” in memory
  - Data and instructions are both stored in the same memory

Buffer Overflow in a Nutshell
- Stack grows *down* towards lower addresses
- Buffer grows *up* towards higher addresses
- If we write past the end of the array, we overwrite data on the stack!

Enter input: hello
No overflow 😊

Enter input: helloabcdef
Buffer overflow!
Buffer Overflow in a Nutshell
- Buffer overflows on the stack can overwrite “interesting” data
  - Attackers just choose the right inputs
- Simplest form (sometimes called “stack smashing”)
  - Unchecked length on string input into a bounded array causes overwriting of stack data
  - Try to change the return address of the current procedure
- Why is this a big deal?
  - It was the #1 *technical* cause of security vulnerabilities
    - The #1 *overall* cause is social engineering / user ignorance

String Library Code
- Implementation of the Unix function `gets()`:

```c
/* Get string from stdin */
char* gets(char* dest) {
    int c = getchar();
    char* p = dest;
    while (c != EOF && c != '\n') {
        *p++ = c;
        c = getchar();
    }
    *p = '\0';
    return dest;
}
```

- What could go wrong in this code?
- No way to specify a limit on the number of characters to read
- Similar problems with other Unix functions:
  - `strcpy`: copies a string of arbitrary length to a dst
  - `scanf`, `fscanf`, `sscanf`, when given the `%s` specifier

Vulnerable Buffer Code

```c
/* Echo Line */
void echo() {
    char buf[8];  /* Way too small! */
    gets(buf);
    puts(buf);
}

void call_echo() {
    echo();
}
```

```
unix> ./buf-nsp
Enter string: 123456789012345
123456789012345
unix> ./buf-nsp
Enter string: 1234567890123456
Segmentation fault (core dumped)
```

Buffer Overflow Disassembly (buf-nsp)

echo:
```
0000000000401146 <echo>:
401146: 48 83 ec 18       sub $0x18,%rsp
... calls printf ...
401159: 48 8d 7c 24 08 lea 0x8(%rsp),%rdi 40115e: b8 00 00 00 00 mov $0x0,%eax 401163: e8 e8 fe ff ff callq 401050 <gets@plt> 401168: 48 8d 7c 24 08 lea 0x8(%rsp),%rdi 40116d: e8 be fe ff ff callq 401030 <puts@plt> 401172: 48 83 c4 18 add $0x18,%rsp 401176: c3 retq ``` call_echo: ``` 0000000000401177 <call_echo>: 401177: 48 83 ec 08 sub $0x8,%rsp 40117b: b8 00 00 00 00 mov $0x0,%eax 401180: e8 c1 ff ff ff callq 401146 <echo> 401185: 48 83 c4 08 add $0x8,%rsp 401189: c3 retq ``` Buffer Overflow Stack Before call to gets /* Echo Line */ void echo() { char buf[8]; /* Way too small! */ gets(buf); puts(buf); } echo: subq $24, %rsp ... leaq 8(%rsp), %rdi mov $0x0,%eax call gets ... Note: addresses increasing right-to-left, bottom-to-top Buffer Overflow Example **Before call to gets** Stack frame for `call_echo` <table> <thead> <tr> <th>00</th> <th>00</th> <th>00</th> <th>00</th> </tr> </thead> <tbody> <tr> <td>00</td> <td>40</td> <td>11</td> <td>85</td> </tr> </tbody> </table> 8 bytes unused <table> <thead> <tr> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td>[3]</td> <td>[2]</td> <td>[1]</td> <td>[0]</td> </tr> </tbody> </table> 8 bytes unused void echo() { char buf[8]; gets(buf); . . . } call_echo: . . . 401180: callq 401146 <echo> 401185: add $0x8,%rsp . . . Buffer Overflow Example #1 **After call to gets** Stack frame for call_echo ``` 00 00 00 00 00 40 11 85 00 35 34 33 32 31 30 39 38 37 36 35 34 33 32 31 ``` 8 bytes unused Note: Digit “$N$” is just $0x3N$ in ASCII! ```c void echo() { char buf[8]; gets(buf); ... } ``` call_echo: ``` 00 00 00 00 00 40 11 85 00 35 34 33 32 31 30 39 38 37 36 35 34 33 32 31 ``` ``` void echo() { char buf[8]; gets(buf); ... } ``` echo: ``` subq $24, %rsp ... leaq 8(%rsp), %rdi mov $0x0,%eax call gets ... ``` ``` subq $24, %rsp ... leaq 8(%rsp), %rdi mov $0x0,%eax call gets ... 
```

call_echo:
```
401180: callq 401146 <echo>
401185: add $0x8,%rsp
```

```
unix> ./buf-nsp
Enter string: 123456789012345
123456789012345
```

Overflowed buffer, but did not corrupt state

Buffer Overflow Example #2

**After call to gets**

```c
void echo() {
    char buf[8];
    gets(buf);
    ...
}
```

call_echo:
```
...
401180: callq 401146 <echo>
401185: add $0x8,%rsp
...
```

```
unix> ./buf-nsp
Enter string: 1234567890123456
Segmentation fault (core dumped)
```

Overflowed buffer and corrupted return pointer

Buffer Overflow Example #2 Explained

**After return from echo**

Stack frame for call_echo (the string's NUL terminator overwrote the low byte of the return address: 0x401185 became 0x401100):

<table>
<tbody>
<tr>
<td>00</td>
<td>00</td>
<td>00</td>
<td>00</td>
</tr>
<tr>
<td>00</td>
<td>40</td>
<td>11</td>
<td>00</td>
</tr>
<tr>
<td>36</td>
<td>35</td>
<td>34</td>
<td>33</td>
</tr>
<tr>
<td>32</td>
<td>31</td>
<td>30</td>
<td>39</td>
</tr>
<tr>
<td>38</td>
<td>37</td>
<td>36</td>
<td>35</td>
</tr>
<tr>
<td>34</td>
<td>33</td>
<td>32</td>
<td>31</td>
</tr>
</tbody>
</table>

8 bytes unused

00000000004010d0 <register_tm_clones>:
- 4010d0: lea 0x2f61(%rip),%rdi
- 4010d7: lea 0x2f5a(%rip),%rsi
- 4010de: sub %rdi,%rsi
- 4010e1: mov %rsi,%rax
- 4010e4: shr $0x3f,%rsi
- 4010e8: sar $0x3,%rax
- 4010ec: add %rax,%rsi
- 4010ef: sar %rsi
- 4010f2: je 401108
- 4010f4: mov 0x2efd(%rip),%rax
- 4010fb: test %rax,%rax
- 4010fe: je 401108
- 401100: jmpq *%rax
- 401102: nopw 0x0(%rax,%rax,1)
- 401108: retq

“Returns” to a valid instruction, but bad indirect jump so program signals SIGSEGV,
Segmentation fault

Malicious Use of Buffer Overflow: Code Injection Attacks
- Input string contains byte representation of executable code
- Overwrite return address A with address of buffer B
- When `bar()` executes `ret`, will jump to exploit code

Practice Question
- smash_me is vulnerable to stack smashing!
- What is the minimum number of characters that gets must read in order for us to change the return address to a stack address?
- For example: (0x00 00 7f ff ca fe f0 0d)

[Figure: stack diagram — previous stack frame holding return address 0x004005d1, then smash_me's frame down to buf[0]]

smash_me:
```
subq $0x40, %rsp
...
leaq 16(%rsp), %rdi
call gets
...
```

A. 27    B. 30    C. 51    D. 54    E. We’re lost...

Exploits Based on Buffer Overflows

Buffer overflow bugs can allow attackers to execute arbitrary code on victim machines
- Distressingly common in real programs
  - Programmers keep making the same mistakes 😞
  - Recent measures make these attacks much more difficult
- Examples across the decades
  - Original “Internet worm” (1988)
  - Heartbleed (2014, affected 17% of servers)
  - Similar issue in Cloudbleed (2017)
  - Hacking embedded devices
    - Cars, Smart homes, Planes

Example: the original Internet worm (1988)
- Exploited a few vulnerabilities to spread
  - Early versions of the finger server (fingerd) used `gets()` to read the argument sent by the client:
    - `finger droh@cs.cmu.edu`
  - Worm attacked `fingerd` server with phony argument:
    - `finger "exploit-code padding new-return-addr"`
    - Exploit code: executed a root shell on the victim machine with a direct connection to the attacker
  - Scanned for other machines to attack
- Invaded ~6000 computers in hours (10% of the Internet)
  - see [June 1989 article](https://www.acm.org/publications/crosslistings/crosslistings-acl) in *Comm. of the ACM*
- The author of the worm (Robert Morris) was prosecuted...
Example: Heartbleed (2014)

**HOW THE HEARTBLEED BUG WORKS:**

**Server:** Are you still there? If so, reply “POTATO” (6 letters).

User Meg wants these 6 letters: POTATO. User Meg wants pages about “irl games.” Unlocking secure records with master key 5130985733435. User wants pages about “boats.” Unlocking secure connection using key "4538538374224. User Meg wants these 6 letters: POTATO.

SERVER, ARE YOU STILL THERE? IF SO, REPLY "BIRD" (4 LETTERS).

User Olivia from London wants pages about "he's in car why". Note: Files for IP 375.381.383.17 are in /tmp/files-3843. User Meg wants these 4 letters: BIRD. There are currently 34 connections open. User Brendan uploaded the file elf.png (contents: 234ba962a2c9f9ff89b33ff8).

User Meg wants these 500 letters: HAT. Lucas requests the "missed connections" page. Eve (administrator) wants to set server’s master key to "14835038534". Isabel wants pages about snakes but not too long. User Karen wants to change account password to "Colibri9!".

Heartbleed Details
- Buffer over-read in OpenSSL
  - Open source security library
  - Bug in a small range of versions
- “Heartbeat” packet
  - Specifies length of message
  - Server echoes it back
  - Library just “trusted” this length
  - Allowed attackers to read contents of memory anywhere they wanted
- Est. 17% of Internet affected
  - “Catastrophic”
  - Github, Yahoo, Stack Overflow, Amazon AWS, ...
Hacking Cars (2010) - UW CSE research demonstrated wirelessly hacking a car using buffer overflow - Overwrote the onboard control system’s code - Disable brakes, unlock doors, turn engine on/off Hacking DNA Sequencing Tech (2017) Computer Security and Privacy in DNA Sequencing Paul G. Allen School of Computer Science & Engineering, University of Washington - Potential for malicious code to be encoded in DNA! - Attacker can gain control of DNA sequencing machine when malicious DNA is read Dealing with buffer overflow attacks 1) Employ system-level protections 2) Avoid overflow vulnerabilities 3) Have compiler use “stack canaries” 1) System-Level Protections - **Non-executable code segments** - In traditional x86, can mark region of memory as either “read-only” or “writeable” - Can execute anything readable - x86-64 added explicit “execute” permission - **Stack marked as non-executable** - Do *NOT* execute code in Stack, Static Data, or Heap regions - Hardware support needed Any attempt to execute this code will fail 1) System-Level Protections - **Non-executable code segments** - Wait, doesn’t this fix everything? 
- Works well, but can’t always use it
- Many embedded devices *do not* have this protection
  - *e.g.*, cars, smart homes, pacemakers
- **Some exploits still work!**
  - Return-oriented programming
  - Return to libc attack
  - JIT-spray attack

Stack after call to `gets()`:
- `foo` stack frame
- `bar` stack frame
- exploit code
- pad
- data written by `gets()`

Any attempt to execute this code will fail

1) System-Level Protections
- Randomized stack offsets
- At start of program, allocate *random* amount of space on stack
- Shifts stack addresses for entire program
- Addresses will vary from one run to another
- Makes it difficult for hacker to predict beginning of inserted code
- **Example**: Address of variable `local` when Slide 5 code executed 3 times:
  - 0x7ffd19d3f8ac
  - 0x7ffe8a462c2c
  - 0x7ffe927c905c
- Stack repositioned each time program executes

2) Avoid Overflow Vulnerabilities in Code

```c
/* Echo Line */
void echo() {
    char buf[8]; /* Way too small! */
    fgets(buf, 8, stdin);
    puts(buf);
}
```

- Use library routines that limit string lengths
  - fgets instead of gets (2nd argument to fgets sets limit)
  - strncpy instead of strcpy
- Don’t use scanf with %s conversion specification
  - Use fgets to read the string
  - Or use %ns where n is a suitable integer

2) Avoid Overflow Vulnerabilities in Code
- Alternatively, don’t use C: use a language that does array index bounds checks
- Buffer overflow is impossible in Java
  - ArrayIndexOutOfBoundsException
- Rust language was designed with security in mind
  - Panics on index out of bounds, plus more protections

3) Stack Canaries
- Basic Idea: place special value (“canary”) on stack just beyond buffer
  - Secret value that is randomized before main()
  - Placed between buffer and return address
  - Check for corruption before exiting function
- GCC implementation
  - -fstack-protector

Protected Buffer Disassembly (buf)

echo:
```
401156: push %rbx
401157: sub $0x10,%rsp
40115b: mov $0x28,%ebx
401160: mov %fs:(%rbx),%rax
401164: mov %rax,0x8(%rsp) 401169: xor %eax,%eax ... call printf ... ``` ``` 40117d: callq 401060 <gets@plt> 401182: mov %rsp,%rdi 401185: callq 401030 <puts@plt> 40118a: mov 0x8(%rsp),%rax 40118f: xor %fs:(%rbx),%rax 401193: jne 40119b <echo+0x45> 401195: add $0x10,%rsp 401199: pop %rbx 40119a: retq 40119b: callq 401040 <__stack_chk_fail@plt> ``` Setting Up Canary Before call to gets Stack frame for call_echo Return address (8 bytes) Canary (8 bytes) [3] [2] [1] [0] /* Echo Line */ void echo() { char buf[8]; /* Way too small! */ gets(buf); puts(buf); } echo: . . . movq %fs:40, %rax # Get canary movq %rax, 8(%rsp) # Place on stack xorl %eax, %eax # Erase canary . . . buf ← %rsp This is extra (non-testable) material Checking Canary After call to gets ```c /* Echo Line */ void echo() { char buf[8]; /* Way too small! */ gets(buf); puts(buf); } ``` ```assembly echo: . movq 8(%rsp), %rax # retrieve from Stack xorq %fs:40, %rax # compare to canary jne .L4 # if not same, FAIL . .L4: call __stack_chk_fail buf ← %rsp ``` Input: 1234567 Summary of Prevention Measures 1) Employ system-level protections - Code on the Stack is not executable - Randomized Stack offsets 2) Avoid overflow vulnerabilities - Use library routines that limit string lengths - Use a language that makes them impossible 3) Have compiler use “stack canaries” Think this is cool? - You’ll love Lab 3 😊 - Released Wednesday, due next Friday (11/13) - Some parts *must* be run through GDB to disable certain security features - Take CSE 484 (Security) - Several different kinds of buffer overflow exploits - Many ways to counter them - Nintendo fun! - Using glitches to rewrite code: [https://www.youtube.com/watch?v=TqK-2jUQBUY](https://www.youtube.com/watch?v=TqK-2jUQBUY) - Flappy Bird in Mario: [https://www.youtube.com/watch?v=hB6eY73sLV0](https://www.youtube.com/watch?v=hB6eY73sLV0)
Virtual memory management is one of the most valuable contributions operating systems research has provided. It is now an expected feature of modern desktop operating systems, including Linux and Windows NT. In this assignment, you will study the source code for the Linux virtual memory management system and instrument the system to collect page fault statistics for individual processes and the system as a whole. You will also learn more about kernel organization by implementing your own system call in the Linux kernel to access the information. 2 Specification In this assignment, you will implement a single system call that returns the number of page faults incurred by a single process or by the system as a whole. Using your newly created system call, you will write a small user program named pageflts that prints out the page fault rates either for the system or for a specified user process. The command line for pageflts is ``` pageflts samples rate -u pid | -s ``` where samples is the number of samples to take, rate is the rate in seconds to sample page faults, -u pid specifies that we want page fault rates for the user process given by pid and -s specifies that we want the page fault rates for the entire system. One and only one of -u pid or -s is required as an argument. The output of pageflts should be repeated lines of pagefault statistics for the specified process or the system. Suppose that we take 3 samples of the page fault rate every 5 seconds. An example of expected output is ``` 5 27 5.40 10 33 6.60 15 3 0.60 ``` where the first column is the elapsed time in seconds since starting the program, the second column is the number of page faults incurred, and the third column is the rate per second at which page faults occurred during the last monitoring time (to two digits of precision following the decimal point). Each column is tab delimited, so they won’t line up in all cases. Consider using gnuplot to plot your page fault data to see it graphically. 
3 Implementing System Calls

Recall that system calls are used to transfer execution from user-space code into kernel-space code. The code for system calls is executed while the processor is in supervisor mode. To accomplish this, Linux on Intel platforms raises interrupt \texttt{0x80}, with a parameter set to the number of the system call to execute. This system call number is an offset into the \texttt{sys_call_table}, the table of all system call entries (stored in \texttt{/usr/src/linux/arch/i386/kernel/entry.S}). In RedHat 6.2, the table is defined as follows:

\begin{verbatim}
.data
ENTRY(sys_call_table)
	.long SYMBOL_NAME(sys_ni_syscall)	/* 0 - old "setup()" system call*/
	.long SYMBOL_NAME(sys_exit)
	.long SYMBOL_NAME(sys_fork)
	.long SYMBOL_NAME(sys_read)
	.long SYMBOL_NAME(sys_write)
	.long SYMBOL_NAME(sys_open)		/* 5 */
	...
	.long SYMBOL_NAME(sys_sigaltstack)
	.long SYMBOL_NAME(sys_sendfile)
	.long SYMBOL_NAME(sys_ni_syscall)	/* streams1 */
	.long SYMBOL_NAME(sys_ni_syscall)	/* streams2 */
	.long SYMBOL_NAME(sys_vfork)		/* 190 */
	/*
	 * NOTE!! This doesn't have to be exact - we just have
	 * to make sure we have _enough_ of the "sys_ni_syscall"
	 * entries. Don't panic if you notice that this hasn't
	 * been shrunk every time we add a new system call.
	 */
	.rept NR_syscalls-190
		.long SYMBOL_NAME(sys_ni_syscall)
	.endr
\end{verbatim}

Entry 1 contains the address of the \texttt{exit()} system call, 2 is for \texttt{fork()}, and so on. Any entry labeled \texttt{sys_ni_syscall} is a system call that is not implemented.

3.1 System Call Table

To implement your system call, you will need to modify several files. First, your system call must be added to the system call table just shown. If your system call is to be named \texttt{sys_my_call()}, then you change the table in \texttt{/usr/src/linux/arch/i386/kernel/entry.S} to reflect the new call:

\begin{verbatim}
ENTRY(sys_call_table)
	.long SYMBOL_NAME(sys_ni_syscall)	/* 0 - old "setup()" system call*/
	.long SYMBOL_NAME(sys_exit)
	...
	.long SYMBOL_NAME(sys_vfork)		/* 190 */
	.long SYMBOL_NAME(sys_my_call)		/* 191 */
	/*
	 * NOTE!! This doesn't have to be exact - we just have
	 * to make sure we have enough of the "sys_ni_syscall"
	 * entries. Don't panic if you notice that this hasn't
	 * been shrunk every time we add a new system call.
	 */
	.rept NR_syscalls-190
		.long SYMBOL_NAME(sys_ni_syscall)
	.endr
\end{verbatim}

Due: 11:59.59 p.m., Thursday, Nov. 16

This allows a trap (interrupt 0x80) with an argument of 191 to invoke sys_my_call(). Before modifying entry.S be sure to make a backup of the original file! You may need it to recover from errors.

3.2 System Call Stub

Even though you have added an entry to the system call table, you still need to generate a stub so that a C function call will invoke the new system call. The stub generates code initiating a trap with the proper argument. To generate the stub, you should first edit the /usr/src/linux/include/asm/unistd.h file to add a constant definition for your new system call.

\begin{verbatim}
#define __NR_exit 1
#define __NR_fork 2
#define __NR_read 3
#define __NR_write 4
#define __NR_open 5
...
#define __NR_getpmsg 188 /* some people actually want streams */
#define __NR_putpmsg 189 /* some people actually want streams */
#define __NR_vfork 190
/* #define __NR_ugetrlimit 191 SuS compliant getrlimit */
#define __NR_mmap2 192
#define __NR_truncate64 193
#define __NR_ftruncate64 194
#define __NR_stat64 195
#define __NR_lstat64 196
#define __NR_fstat64 197
\end{verbatim}

The system call number 191 is commented out, so you can replace it with the constant definition for your new call.

\begin{verbatim}
#define __NR_my_call 191
\end{verbatim}

Again, be sure to make a backup of this file before you edit it!

Macros are available for generating system calls with zero to five parameters. For example, to generate a stub for a system call with two parameters, the macro has the form

```c
_syscall2(type, name, type1, arg1, type2, arg2);
```

In this macro, `type` is the return value type, `name` is the name of the stub, `type1` is the type of the first parameter `arg1` and `type2` is the type of the second parameter `arg2`. To generate the stub for `my_call` in your user program, you make the following macro call.

```c
#include <linux/unistd.h>
...
/* Generate system call stub for int my_call(int x, double y) */
_syscall2(int, my_call, int, x, double, y);
...
```

The function `my_call` is the function you would execute to invoke the system call `sys_my_call`.

### 3.3 Implementing Your System Call

To implement your system call, the easiest thing to do is modify one of the existing kernel source files to add a function implementing the system call. In this assignment, `/usr/src/linux/mm/memory.c` will be the best file to use. Whichever file you choose to modify, **make a backup of the file first!**

The function implementing the system call uses an additional modifier to the return type, `asmlinkage`, to denote that the function is interacting with assembler generated code. Suppose that `sys_my_call` sums its two arguments and returns the result as an integer value. The complete system call is

```c
asmlinkage int sys_my_call(int x, double y)
{
    return (int)(x + y);
}
```

You can browse the kernel source for functions named `sys_*` to find the implementations of several more system calls.

Finally, you may want to print out debugging information along the way. The `printf` function and several other C library functions are **NOT** available for use in kernel code. Instead, the `printk` function is used to print out information in kernel code. The `printk` function behaves in the same manner as `printf` (i.e., it has the same parameters and uses the same formatting codes). You can look at the manual page for `printf` for more information.

### 4 Linux Virtual Memory Management

Linux uses demand paging memory management to support virtual memory.
Each process is created with a 4GB virtual address space, 1 GB of which is mapped to the kernel address space. The addresses 0x00000000–0xbfffffff are used when executing in user mode (i.e., when the CPU is not in supervisor mode) and addresses 0xc0000000–0xffffffff are used when executing in supervisor mode. This allows user processes to reference kernel addresses, although they do not necessarily have permission to do so. (Question To Ponder: How might a process have or gain permission to reference kernel address space?) No address can be used until it is mapped by the operating system. The page size and block size in Linux are 4K bytes. The contents of mapped pages may be in primary or secondary memory.

For more information regarding Linux memory management, the following URLs may be helpful:

- Concrete Architecture of the Linux Kernel, http://plg.uwaterloo.ca/~itbowman/CS746G/a2/

4.1 Implementation

The Linux memory management subsystem has a generic implementation in /usr/src/linux/mm and an architecture specific implementation in /usr/src/linux/arch/i386/mm. In this assignment, we are concerned with page faults. After a process has been mapped into its virtual address space, it starts execution. The process begins referencing virtual addresses at its main entry point (e.g., main() for C/C++ programs). If the address translation hardware detects that a page is not loaded in primary memory, a page fault interrupt occurs, and the page fault handler (do_page_fault() in /usr/src/linux/arch/i386/mm/fault.c) is called. The handler determines if the address is valid at all, and then determines what kind of access to the page occurred (read or write). If the request is valid and the page really is not in memory, handle_mm_fault() in /usr/src/linux/mm/memory.c is called. This function calls one of two functions: do_wp_page() to handle copy-on-write situations for write-protected pages, and do_no_page() for a normal page miss.
You should read carefully through these functions and try to follow the logic involved in the page fault handling. You are encouraged to share your understanding of the code on the class listserv. I have a much more in depth resource for memory management in my office, which I will allow to be checked out for brief periods of time.

5 Installing the Kernel Sources

If you do not have the sources in /usr/src/linux then you need to install them. You should obtain the following RPMs for RedHat 6.2 from a RedHat mirror:

- kernel-headers-2.2.14-5.0.i386.rpm
- kernel-source-2.2.14-5.0.i386.rpm
- kernel-doc-2.2.14-5.0.i386.rpm

If you are using a laptop, you will also want:

- kernel-pcmcia-cs-2.2.14-5.0.i386.rpm

You might be interested in debugging kernel dump files as well (should you cause a kernel panic). If you want this capability, you will also want:

- kernel-utils-2.2.14-5.0.i386.rpm

Each of these files can be installed on your system by using the command rpm -ivh filename as the root user.

6 Building and Running Your Kernel

The final step is to build and run your new kernel. The process I outline here assumes that you have Linux installed in its own partition and that you are using LILO (the Linux Loader) to boot your system. For those running partitionless systems or boot disks, I would like volunteers to help test building and installing instructions appropriate for those platforms. Compilation of the kernel should be the same as below, but installing the kernel and running LILO is not appropriate for boot disk or partitionless systems. Be careful! I used this process myself on a laptop, and it generally worked, with the exception of one kernel module, emu10k1.o. This is a soundcard module and is not necessary. However, if anyone wants to help me identify the problem, please contact me.

With the following instructions, I have RedHat 6.2 set up to boot either the original kernel installed or the modified one I built. I encourage you to use this kind of setup so that you can recover in the case your kernel has problems. Some more information can be obtained in the README file in /usr/src/linux.

I will only focus on what I did. If you’re a Linux guru, I’d appreciate any corrections to the process below, but you can do your own thing when building your kernel. All commands are assumed run in /usr/src/linux as the root user unless otherwise specified.

6.1 Starting with a Clean Environment

It’s a good idea to clean out any old object files from the kernel source directory by using the command

make clean

After cleaning the environment, you should modify the Makefile and edit the EXTRAVERSION variable so that you do not overwrite the original RedHat kernel. I changed EXTRAVERSION to be

EXTRAVERSION = -os

to denote that this is my OS class kernel.

6.2 Configuring Your Kernel

The first step is to configure your kernel. There are several options for configuration, which will allow you to build a lean kernel: make config, make menuconfig, and make xconfig. I went the easy route and copied the kernel configuration used by RedHat, which is stored in /usr/src/linux/configs/kernel-2.2.14-i386.config. To use this, copy the file to /usr/src/linux/.config and then configure the kernel with

make oldconfig

This command uses the configuration file in /usr/src/linux/.config to configure the kernel. I also tried to use /usr/src/linux/configs/kernel-2.2.14-i686.config but this caused several module errors. Again, if someone can help me figure this out, I would appreciate it.

6.3 Making Source Dependencies

After configuring the kernel, you should make the source dependencies with either

make depend

or

make dep

Both commands do the same thing.

6.4 Compile the Kernel

The next step is to compile the kernel. This is accomplished with the command

make

6.5 Building the Kernel Image

Once the kernel is built, a file named vmlinux is created. This contains the compiled kernel code, but the kernel must be made into a loadable image.
This usually requires compressing the kernel so that it can fit into the memory allocated by the loader. There are several options, but the safest option appears to be make bzImage which generates the file /usr/src/linux/arch/i386/boot/bzImage. For those who use boot disks, you may find the command make bzdisk useful for creating a boot disk with your newly compiled kernel. Make sure not to overwrite your original RedHat bootdisk! 6.6 Building Kernel Modules The default RedHat kernel configuration configures several drivers as kernel modules: shared object files loadable by the kernel as needed. To build all of these modules, execute the command make modules 6.7 Installing the New Kernel Once the kernel and kernel modules are built, you should install the kernel and modules in such a way that doesn’t overwrite the original Linux kernel. I used the following script, which I wrote and named `kerninst`: ```bash #!/bin/sh # Install the newly created OS kernel. This script is intended # to be run from /usr/src/linux make modules_install cp arch/i386/boot/bzImage /boot/vmlinuz-2.2.14-os cp System.map /boot/System.map-2.2.14-os ``` It is not comprehensive, such as checking for files that already exist, but it gets the job done. 6.8 LILO Before booting your new system, you need to edit your LILO configuration file, which is stored in `/etc/lilo.conf`. You need to add an entry for your newly created kernel. I assume that you have followed my directions above, in which case the kernel you should boot for the OS project is `/boot/vmlinuz-2.2.14-os`. My `/etc/lilo.conf` is ```plaintext boot=/dev/hda3 map=/boot/map install=/boot/boot.b prompt linear default=linux message=/boot/message default=linux image=/boot/vmlinuz-2.2.14-5.0 label=linux read-only root=/dev/hda3 image=/boot/vmlinuz-2.2.14-os label=os read-only root=/dev/hda3 other=/dev/hda1 label=windows ``` which is set up to boot the original RedHat kernel, the kernel I modified (the second `image` entry), and Windows 98. 
This also is set up to display a message contained in the file `/boot/message`, which is

<table>
<thead>
<tr>
<th>Command</th>
<th>Operating System</th>
</tr>
</thead>
<tbody>
<tr>
<td>linux</td>
<td>RedHat Linux 6.2</td>
</tr>
<tr>
<td>windows</td>
<td>Windows 98</td>
</tr>
<tr>
<td>os</td>
<td>OS kernel</td>
</tr>
</tbody>
</table>

Type one of the commands above to boot the specified operating system.

Your own LILO configuration file may look very different. My suggestion is to copy the entry for the original Linux kernel and change it to load your modified kernel (don’t forget to change the label!). You should only need to edit your file once, unless you make a mistake. Each time you compile and install a modified kernel, you must run LILO to set up booting:

/sbin/lilo

6.9 Running the Modified Kernel

To run your modified kernel, simply type in the label you gave it (os in my example) at the LILO boot: prompt. To run the original kernel, simply type the label you gave the original kernel (linux in my example). By setting things up this way, you can recover by switching to the original kernel.

7 Hints

• Look at task_struct in /usr/include/linux/sched.h. There are fields related to solving this problem, which are documented (tersely) in that structure.

• Look for a correspondence between some of the task_struct fields and those described in the getrusage manual page.

• Look at the kill_proc_info(...) function in /usr/src/linux/kernel/sys.c to find out how to locate a task by process id. Be sure to use the tasklist locking technique in that function.

• Look at struct kernel_stat in /usr/src/linux/include/linux/kernel_stat.h. Consider using the pgpgin field.

8 Submission

We will use the Curator, http://ei.cs.vt.edu/~eags/Curator.html to collect program submissions. The URL for submission is http://spasm.cs.vt.edu:8080/curator/. Only the servlet interface to the Curator is supported. No grading will be done by the Curator.
You are to submit a single tarred (man tar) and gzipped (man gzip) archive containing

• A text file named README describing the changes you made to the kernel, how your user program handles sampling page faults in the specified increments, the processes used to generate your sample output, and a description of each file included in the archive.

• The modified `entry.S`, `linux/unistd.h`, and `memory.c` kernel sources.

• The source file(s) for `pageflts`.

• Sample output from running `pageflts` on at least 3 different processes and one run for the entire system.

Your files must be in the top-level directory of the archive (i.e., not placed in a subdirectory). Be sure to include only the files listed above. Do not include extra files from an integrated development environment, such as `configure` scripts, automake-related files, etc. This is primarily an issue if you are using KDevelop. Be sure to include your name in all files submitted.

**DO NOT** include executables or object files of any type in the archive. **Submissions that do not gunzip and/or untar will not be graded.** Be careful to FTP in binary mode if you are transferring your file to a Windows machine before submitting to the Curator.

Failure to follow the submission rules will result in severe penalties. There will be no exceptions made for this assignment.

### 9 Programming Environment

As stated in the syllabus, you may use either FreeBSD or Linux and ANSI C/C++ to implement this project. You must use C to implement your system call, but C++ is acceptable for implementing `pageflts`. **All data structures used in your program must be student-implemented.** Using the standard template library (STL) or other third-party libraries for data structure implementations is strictly prohibited. Using C++ input and output streams and C++ strings is OK.

Students using FreeBSD are encouraged to share information equivalent to what is provided for Linux in this assignment.
### 10 Acknowledgements

Portions of this exercise are synthesized from *Kernel Projects For Linux* by Gary Nutt.
Tiny Compiler: Back End

1 Plan

It is time to build the Tiny compiler back end. Our front end already parses and produces an AST. We first build a translation from AST to intermediate representation (IR): instructions for an abstract machine with infinitely many named storage locations. Next, we review a code generator to translate IR to x86 assembly code. Finally, we insert basic machine-independent optimizations on the IR.

1.1 Goals

□ Preview course concepts by building a working compiler at accessible scale, with guidance. Concept exposure (not concept mastery) is the goal of this activity.

□ Start learning Scala and associated tools. (https://cs.wellesley.edu/~cs301/s21/tools/)

□ Try working with potential project teammates or tutorial group members.

1.2 Instructions

• Complete this activity in groups of 3. Choose teammates with whom you have not worked before. Include at least one CS 251 alum per group if possible.

• Aim for a big-picture view of each compiler stage. Do not worry if details are fuzzy: guess, follow intuition, experiment to see what works. Check in with me if you are lost or stumped.

2 Setup

Use a terminal in Linux, macOS, or WSL (Windows Subsystem for Linux). WSL info: https://cs.wellesley.edu/~cs301/s21/tools/#wsl.

Start from your tiny-front code if it is done (https://cs.wellesley.edu/~cs301/s21/project/tiny/). Or, use these steps to get a completed front end:

1. Install and configure IntelliJ, the Scala Plugin, and a Java 11 JDK with these steps: https://cs.wellesley.edu/~cs301/s21/tools/#intellij-idea

2. Clone the starter project:

   (a) Open IntelliJ and choose New > Project from Version Control from the menu.
   (b) Enter the URL https://github.com/wellesleycs301s21t4/tiny-back.git
   (c) Click Clone.
   (d) When prompted about whether to trust and open the BSP project, click Trust Project.
   (e) IntelliJ should now clone and automatically build the project. Wait for this to finish: watch progress bars at the bottom of the IntelliJ window.

3.
Configure IntelliJ and a terminal session when IntelliJ has finished importing and building:

   (a) Open the IntelliJ preferences/settings from the menu (IntelliJ > Preferences or File > Settings, varies by OS).
       i. In Build, Execution, Deployment > Compiler, enable Build project automatically.
       ii. Close the preferences/settings.
   (b) Open a terminal, cd to the tiny-back project directory if needed, and run the command source env.sh. This produces wrapper scripts for the TINY compiler and interpreter and puts them on the PATH.

4. Find files in the upper left Project pane:

   (a) Find source code files in tinyc/src/tiny.
   (b) Find test files in test.

3 Intermediate Representation

The first step in the back end is to translate our TINY AST to intermediate code for a simple abstract machine that is closer to the model of a real computer. Unlike ASTs, but similar to assembly code, intermediate code programs have flat, sequential structure.

Our abstract machine has infinitely many storage cells, each with a unique name: \( x_1, x_2, \) etc. The abstract machine supports a small set of instructions that read operands, perform simple computations on them, and store the result in a destination cell. Instruction operands, \( o \), are either:

- the name of a storage cell (thereby referring to the cell's contents); or
- a literal number value.

A program is a linear sequence of instructions to be executed in order. The abstract machine supports the following instructions:

- \( x := o \) : A copy instruction reads a source operand, \( o \), and stores it in the destination cell, \( x \).
- \( x := o_1 + o_2 \) : An add instruction reads two source operands, \( o_1 \) and \( o_2 \), and stores the sum in the destination cell, \( x \).
- \( x := \text{input} \) : An input instruction reads an integer input and stores it in the destination cell, \( x \).
- \( \text{print } o \) : A print instruction reads a source operand and prints it as output.
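These four instructions are simple enough to model and execute directly in Scala, which is handy for checking the paper exercises that follow. The sketch below is self-contained and illustrative only: the type and function names (`Cell`, `Lit`, `Copy`, `Add`, `Input`, `Print`, `run`) are assumptions, not the definitions in the starter code's IR.scala.

```scala
// Toy model of the abstract machine, for experimenting with TAC on paper.
// NOTE: all names here are illustrative stand-ins; the starter code's
// IR.scala defines its own (possibly different) types.
import scala.collection.mutable

sealed trait Operand
case class Cell(id: Int) extends Operand   // storage cell x1, x2, ...
case class Lit(value: Int) extends Operand // literal number

sealed trait Tac
case class Copy(dst: Cell, src: Operand) extends Tac             // x := o
case class Add(dst: Cell, o1: Operand, o2: Operand) extends Tac  // x := o1 + o2
case class Input(dst: Cell) extends Tac                          // x := input
case class Print(src: Operand) extends Tac                       // print o

// Execute a TAC program on a list of inputs, returning the printed outputs.
def run(prog: Seq[Tac], inputs: Seq[Int]): Seq[Int] = {
  val cells = mutable.Map.empty[Cell, Int]
  val in = inputs.iterator
  val out = mutable.Buffer.empty[Int]
  def read(o: Operand): Int = o match {
    case c: Cell => cells(c)
    case Lit(v)  => v
  }
  prog.foreach {
    case Copy(d, s)   => cells(d) = read(s)
    case Add(d, a, b) => cells(d) = read(a) + read(b)
    case Input(d)     => cells(d) = in.next()
    case Print(s)     => out += read(s)
  }
  out.toSeq
}
```

Feeding a hand-written TAC program and an input list to `run` returns the list of printed values, so you can check a hand translation without the full compiler.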
This style of abstract machine language is called three-address code (TAC), since each instruction addresses at most three operands.

Exercise 1. What does the following TAC program print if the user provides input 7?

\[
\begin{align*}
x_1 & := 3 \\
x_2 & := \text{input} \\
x_3 & := x_1 + x_2 \\
x_4 & := x_3 + x_3 \\
& \text{print } x_4 \\
x_5 & := x_4 + 5 \\
x_6 & := x_5 + x_2 \\
& \text{print } x_6
\end{align*}
\]

Look at that TAC again. It looks quite similar to a Tiny source program with mildly different syntax. In fact, putting aside superficial syntax differences, it appears that our TAC language is basically the Tiny language with expression nesting restricted to one level. Nonetheless, we use an explicitly separate TAC representation in our Tiny compiler (and in our later compiler, when there will be wider differences in the languages).

Exercise 2. Write a TAC program to print $4i_1 + 2i_2 + 5$, given any inputs $i_1$ and $i_2$, in that order.

As with the original TINY source syntax, this syntax is just a convenient way to write down a concrete representation of an abstract program structure. Unlike the original TINY syntax, we will never use this TAC syntax within our compiler. It is just a convenience for pencil and paper.

Exercise 3. Read the IR.scala file. Rewrite your answer from Exercise 2 as a Scala expression of type `Seq[TacInstruction]`. Use a `Vector`, which is also a `Seq` and acts similarly to an array while also supporting insert, prepend, and append operations. For example, a `Vector[Int]` holding the numbers 1, 2, 3 in order is constructed using the Scala syntax `Vector(1, 2, 3)`.

4 Translating ASTs to TAC

Next, we translate a TINY program AST to an equivalent TAC program. These are rather different structures. The program AST is a hierarchical, non-linear tree. The TAC program is a flat, linear sequence of instructions. ASTs have nested expression nodes. TAC instructions accept only storage cells or literals as operands.
We must be sure that our translation is semantics-preserving. That is, the observable behavior (input and output for us) of the translated program must be identical to that of the source program.

Like the recursive descent parser used to build the AST from a linear source code string, we will use a recursive algorithm to traverse the AST and emit a new linear structure. The key is to traverse the AST recursively, emitting one or more TAC instructions for each node. The result of each expression (i.e., each node) will be stored in its own TAC storage cell. To translate an expression node that needs this result, we simply emit an instruction that reads the contents of the cell where the result is to be stored.

Let us derive the algorithm for this translation. For now, omit variables and assignments.

Exercise 4. Draw the AST for the following Tiny program:

```plaintext
print ( 7 + ( input + 5 ) ) ;
```

The TAC translation of this program is shown below. Match each node in the original AST to the corresponding instruction in the translated TAC. What tree traversal order (one of in, pre, post, level) could produce these instructions in this order? Try to reverse-engineer the algorithm that would generate this code given the AST you drew. (Use pseudocode on paper or whiteboard. Stick to high-level description. Ignore Scala for now.) The algorithm is described below if you get stuck.

\[
\begin{align*}
x_1 & := 7 \\
x_2 & := \text{input} \\
x_3 & := 5 \\
x_4 & := x_2 + x_3 \\
x_5 & := x_1 + x_4 \\
& \text{print } x_5
\end{align*}
\]

It is tempting to translate an AST for an expression \( ( 3 + 5 ) \) to a single TAC instruction, \( x_2 := 3 + 5 \), or even simpler, \( x_2 := 8 \). Resist the urge! Stay away from special cases; they will complicate the translation algorithm. (Spoiler alert: our optimizer will fix this for us later. Let's keep the concerns of translation and optimization separate.)

Exercise 5.
Try your proposed AST-to-TAC translation algorithm on the following program:

```plaintext
print ( ( 2 + 4 ) + ( 6 + 8 ) ) ;
```

Write the translated TAC program here, then check your answer by evaluating it manually.

4.1 TAC Generation

The algorithm for generating TAC from an AST is explained generally here. The commented partial Scala implementation in IRGen.scala makes this more concrete.

For **expressions**, follow this basic plan:

- Store the result (if any) of each AST node in a unique TAC cell. (Even an integer literal should be copied into a TAC cell!)
- Generate code by a **post-order** traversal of the AST:
  - Emit code for all subexpression children of this node, remembering the destination cell of the last instruction emitted by translating each child node.
  - Emit an instruction for this node, using the child destination cells as the corresponding source operands of the instruction.

For **statements**, emit instructions to compute the expression (using the plan above) and use the last destination cell in the resulting instructions as the source operand for an instruction to implement the statement.

For **programs**, simply emit instructions for each statement in order.

All three of these get slightly more involved when we introduce variables later.

**Exercise 6.** In Compiler.scala, uncomment the call to backend in main. In IRGen.scala, read the Scala code for translateProgram and the Print case in translateStmt. Ignore the symtab argument and the Assign case for now. Ask questions about anything that does not make sense. The following Scala operations are used:

- **Pattern-matching:** match with some cases. Quiz your CS 251 teammates about this, check the relevant section in the Scala documentation (https://docs.scala-lang.org/tour/pattern-matching.html), or ask me.
- **Sequence (Seq) operations.** We use immutable Vectors as sequences, applying these operations:
  - `s1 ++ s2` produces a new sequence with the elements of `s1` followed by the elements of `s2`.
  - `seq :+ elem` produces a new sequence with all of the elements of `seq` in order followed by the element `elem`.
  - `seq.last` returns the last element of non-empty sequence `seq`.
- The provided `fresh` function returns a new `TacCell` with a unique ID every time it is called.

**Exercise 7.** In IRGen.scala, implement translateExpr. The entire body is a single pattern-matching expression with one case for each type of Expr. Each case should return a `Seq[TacInstruction]`: a sequence of TAC instructions. We provide the Input case. Add the Num case, then test it on a simple program. Repeat for Add. Ignore Var for now. When you need to call translateExpr, pass the existing symtab argument along. (We use it later for variables.)

4.2 Variables and Symbol Tables

The final AST-to-TAC translation concern is variables. A natural representation of Tiny variables exists: represent each unique Tiny variable by a unique TAC cell. An assignment statement is translated to a copy instruction that copies the source into the variable's corresponding cell. Later variable references are translated to instructions that copy from the same cell.

This requires a durable mapping from variable name to TAC cell, since variable references may appear arbitrarily later in the program. For this, we use a symbol table. (A symbol table has more jobs in a compiler for a larger language.)

Translating an assignment statement requires the usual translation of the source expression and an instruction copying the expression result to the assigned variable's TAC cell. If the variable is already mapped in the symbol table, reuse its mapped TAC cell. Otherwise, create a fresh TAC cell, map it in the symbol table, and use the fresh cell.

Exercise 8.
Draw the AST for the following Tiny program (or use the front end to generate it), then translate the AST to TAC by hand. Build a symbol table as you go to track which variable maps to which TAC cell. Double-check your translated code: run it by hand and check the output.

```plaintext
x = ( 4 + input ) ;
y = input ;
print ( x + ( y + 297 ) ) ;
```

Exercise 9. Implement translation of assignment statements (the Assign case in translateStmt) and variable reference expressions (the Var case in translateExpr). Test on a few programs.

Our Scala code represents the symbol table with a mutable\(^1\) `Map[String,TacCell]`: a map where keys are source-code variable names (as String) and values are the TAC cells that represent them (as TacCell). This single symbol table is shared by all levels of the translation algorithm since Tiny has no scoping or control-flow constructs. Entries for key k in map symtab are accessed with `symtab(k)` and updated with `symtab(k) = v`.

Recall that, since we already implemented variable scope checking in the frontend of the Tiny compiler, it is guaranteed that any variable use we see here follows its definition.

\(^1\)CS 251 note: This implements a mutable variable semantics for Tiny. Nonetheless, since Tiny lacks any scoping or control-flow, mutable variables will act identically to immutable bindings with shadowing. In other words, we could just as well map a variable to a fresh cell at every assignment with no discernible difference. This implementation ambivalence is not present in more interesting languages.

5 x86 Code Generation

The final stage in a compiler is Code Generation: translation from the intermediate representation to code in the target language. Our Tiny-to-x86 compiler targets x86 assembly language.
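To give a flavor of what a template-based code generator does, the sketch below maps each TAC cell to a fixed stack slot derived from its ID and expands copy and add instructions into AT&T-syntax x86. Everything here (the toy `Cell`/`Tac` types, the `slot` and `gen` helpers, the exact instruction templates) is an illustrative assumption; the provided CodeGen.scala is the authoritative implementation and differs in detail.

```scala
// Toy flavor of template-per-instruction x86 code generation from TAC.
// Illustrative stand-in types; not the starter code's actual definitions.
sealed trait Operand
case class Cell(id: Int) extends Operand
case class Lit(value: Int) extends Operand

sealed trait Tac
case class Copy(dst: Cell, src: Operand) extends Tac
case class Add(dst: Cell, o1: Operand, o2: Operand) extends Tac

// Each TAC cell gets a fixed slot in the stack frame, chosen from its ID.
def slot(c: Cell): String = s"${-4 * c.id}(%ebp)"

// Load an operand into %eax.
def load(o: Operand): String = o match {
  case Lit(v)  => s"movl $$$v, %eax"
  case c: Cell => s"movl ${slot(c)}, %eax"
}

// One small template per TAC instruction.
def gen(i: Tac): Seq[String] = i match {
  case Copy(d, s) =>
    Seq(load(s), s"movl %eax, ${slot(d)}")
  case Add(d, a, b) =>
    val addB = b match {
      case Lit(v)  => s"addl $$$v, %eax"
      case c: Cell => s"addl ${slot(c)}, %eax"
    }
    Seq(load(a), addB, s"movl %eax, ${slot(d)}")
}
```

Note how dull the cell-to-slot mapping is: no registers are reused across instructions, and every result takes a round trip through memory, which is exactly the kind of inefficiency a smarter code generator would attack.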
There are actually many interesting machine-specific optimizations to do at this stage, such as mapping TAC cells to registers in a way that minimizes copying and use of the stack, choosing which variables could be mapped to registers vs. stack locations, selecting the best set of instructions, and more. For the sake of building the Tiny compiler quickly, our code generator is straightforward and applies no optimization at all. It does two basic tasks:

- Map TAC cells to stack locations in memory, which it does in truly dull fashion by using the TAC cell's ID to assign an offset into the stack frame.
- Translate each TAC instruction to one or more x86 instructions following a simple template.

After generating x86 assembly code, the compiler invokes the assembler and linker to build a complete binary executable. Take a look through `CodeGen.scala` if you are curious.

**Exercise 10.** Uncomment the section containing the `CodeGen` and `Link` stages in `Compiler.scala`, per the `FIXME` comments for this exercise. Run your Tiny compiler on a program of your choosing. It should produce an executable file `path/to/x.tiny.bin` when given an input file called `path/to/x.tiny`. Now, for the moment of truth, run the command:

```
./path/to/x.tiny.bin
```

Does it work??? Debug until it does, then celebrate accordingly!

6 Optimization

Congrats, you have a compiler that reads Tiny source code and produces an x86 executable! Let's make it even better. We will not improve the simplistic code generator (feel free to think about that on your own); instead, our goal is to implement a few machine-independent optimizations on the intermediate representation. Just as with the AST-to-TAC translation and the code generation stage, optimizations must be semantics-preserving.

**Exercise 11.** Consider briefly: what are some benefits of applying optimizations on intermediate code instead of on source code or machine code?
6.1 Design

Later in the course, we spend significant time developing a general framework for expressing optimizations. For now, we implement a few standard optimizations with ad hoc techniques (though later in the course you will be able to look back and see the unifying thread).

Just as with AST-to-TAC translation, a key rule for our optimizations is to keep them general. Consider this TAC:

\[
\begin{align*}
&x_1 := 4 \\
&x_2 := 5 \\
&x_3 := x_1 + x_2 \\
&\text{print } x_3
\end{align*}
\]

It is tempting to try to build an optimizer that automatically recognizes this pattern all at once and produces the equivalent program `print 9`. A pattern-based approach is actually useful in efficient machine-code generation, but at this stage, we want to avoid a large, unruly, error-prone collection of special cases.

An optimization is just a function that takes a TAC program as input and returns another (equivalent) TAC program as output. This means that we can make a larger optimization by composing two smaller optimizations. We therefore develop a small suite of minimal, independent optimizations that do little on their own, but have big impact when composed.

6.2 Copy Propagation

You have likely been irritated at all of the wasteful copy instructions our AST-to-TAC translation emitted. They are everywhere! The copy propagation optimization can help eliminate them. Here is the basic idea: if a TAC instruction \(i\) has a storage cell as a source operand and this cell was last updated by a copy instruction, it is equivalent for \(i\) to use the source operand of that copy instruction instead.
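The rule above can be implemented as a single forward pass that records, for each cell, the operand it was last copied from, and rewrites source operands through that record. The following self-contained sketch uses toy stand-in types; the provided `copyPropagate` in Opt.scala is the real implementation and may be organized differently.

```scala
// Toy single-pass copy propagation over straight-line TAC.
// Illustrative stand-in types; not the starter code's actual definitions.
sealed trait Operand
case class Cell(id: Int) extends Operand
case class Lit(value: Int) extends Operand

sealed trait Tac
case class Copy(dst: Cell, src: Operand) extends Tac
case class Add(dst: Cell, o1: Operand, o2: Operand) extends Tac
case class Input(dst: Cell) extends Tac
case class Print(src: Operand) extends Tac

def copyPropagate(prog: Seq[Tac]): Seq[Tac] = {
  // copies(c) = the operand c last received via a copy, while still valid
  val copies = scala.collection.mutable.Map.empty[Cell, Operand]
  def subst(o: Operand): Operand = o match {
    case c: Cell => copies.getOrElse(c, c)
    case lit     => lit
  }
  def kill(d: Cell): Unit = {
    copies -= d
    // any recorded copy that read from d is now stale
    copies.filterInPlace((_, src) => src != d)
  }
  prog.map {
    case Copy(d, s) =>
      val s2 = subst(s)
      kill(d); copies(d) = s2
      Copy(d, s2)
    case Add(d, a, b) =>
      val (a2, b2) = (subst(a), subst(b))
      kill(d)                       // add results are not copies; record nothing
      Add(d, a2, b2)
    case Input(d) =>
      kill(d); Input(d)
    case Print(s) =>
      Print(subst(s))
  }
}
```

Running this on the Table 1 program rewrites only the add instruction's operands, exactly as described: input results and add results are never propagated, only copies.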
<table>
<thead>
<tr>
<th colspan="2">Table 1: Copy propagation example.</th>
</tr>
<tr>
<th>Original</th>
<th>After Copy Propagation</th>
</tr>
</thead>
<tbody>
<tr>
<td>\(x_1 := 300\)</td>
<td>\(x_1 := 300\)</td>
</tr>
<tr>
<td>\(x_2 := \text{input}\)</td>
<td>\(x_2 := \text{input}\)</td>
</tr>
<tr>
<td>\(x_3 := x_2\)</td>
<td>\(x_3 := x_2\)</td>
</tr>
<tr>
<td>\(x_4 := x_1 + x_3\)</td>
<td>\(x_4 := 300 + x_2\)</td>
</tr>
<tr>
<td>\(\text{print } x_4\)</td>
<td>\(\text{print } x_4\)</td>
</tr>
</tbody>
</table>

Table 1 shows a transformation of original TAC instructions on the left by applying two copy propagations to produce the new TAC instructions on the right. Both of the copy propagations target operands of the original instruction, \(x_4 := x_1 + x_3\):

1. Replace the source operand \(x_1\) by 300, since \(x_1\) was last updated by copying 300.
2. Replace the source operand \(x_3\) by \(x_2\), since \(x_3\) was last updated by copying the contents of \(x_2\) and \(x_2\) has not been changed since then.

Copy propagation stops here: we cannot predict the input that will be stored into \(x_2\) when the program is run, so we cannot propagate it as an operand to the add instruction. The print instruction uses \(x_4\), but it was produced by an add instruction, not a copy instruction. Again, it may be tempting to go further here, but the key is to keep it simple.

**Exercise 12.** Uncomment lines and remove a line as indicated by the `FIXME` note for this exercise to introduce the `Opt` stage in `Compiler.scala`. Rerun the compiler to make sure it still works.

**Exercise 13.** Read the code for `copyPropagate` in `Opt.scala` and try to understand how it performs copy propagation algorithmically. Make notes with inline comments.

6.3 Dead Code Elimination

Dead code is code that does nothing useful: code that computes a result that never affects any observable behavior of the program. (In Tiny, that means it does not affect input/print.)
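One way to realize this removal on straight-line TAC is a backward pass that tracks which cells are still needed. The sketch below uses toy stand-in types and conservatively keeps every input instruction, since skipping an input read would change how later reads line up with the user's inputs; the provided implementation in Opt.scala is the authoritative one and may work differently.

```scala
// Toy dead code elimination: a backward pass that keeps only instructions
// whose result can reach a print, plus all I/O instructions.
// Illustrative stand-in types; not the starter code's actual definitions.
sealed trait Operand
case class Cell(id: Int) extends Operand
case class Lit(value: Int) extends Operand

sealed trait Tac
case class Copy(dst: Cell, src: Operand) extends Tac
case class Add(dst: Cell, o1: Operand, o2: Operand) extends Tac
case class Input(dst: Cell) extends Tac
case class Print(src: Operand) extends Tac

def deadCodeElim(prog: Seq[Tac]): Seq[Tac] = {
  var live = Set.empty[Cell]              // cells needed by later instructions
  def cellsOf(os: Operand*): Set[Cell] =
    os.collect { case c: Cell => c }.toSet
  prog.reverseIterator.filter {
    case Copy(d, s) if live(d)   => live = live - d ++ cellsOf(s); true
    case Add(d, a, b) if live(d) => live = live - d ++ cellsOf(a, b); true
    case Input(d)                => live -= d; true  // input is observable: keep
    case Print(s)                => live ++= cellsOf(s); true
    case _                       => false            // dead copy/add: drop it
  }.toList.reverse
}
```

On the "After Copy Propagation" column of Table 2, this pass drops the now-unused \(x_1\) and \(x_3\) instructions and keeps the rest, matching the table's final column.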
For example, the second line of this Tiny program is dead code:

```plaintext
x = input;
y = (x + 4);
print(x + x);
```

A result is computed and stored in `y`, but the value of `y` never affects any printed output. We could remove the second line without changing the observable behavior of the program. The dead code elimination optimization performs this removal automatically. Feel free to take a look at the implementation in `Opt.scala`.

Is it useful? Our AST-to-TAC translation never introduces dead code. Tiny programmers may write dead code, but even if they do not, the copy propagation optimization often causes code to become dead. Table 2 shows the continuation of the example from Table 1 by applying dead code elimination after copy propagation.

<table>
<thead>
<tr>
<th colspan="3">Table 2: Dead code elimination after copy propagation.</th>
</tr>
<tr>
<th>Original</th>
<th>After Copy Propagation</th>
<th>After Dead Code Elimination</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>x1 := 300</code></td>
<td><code>x1 := 300</code></td>
<td>(removed)</td>
</tr>
<tr>
<td><code>x2 := input</code></td>
<td><code>x2 := input</code></td>
<td><code>x2 := input</code></td>
</tr>
<tr>
<td><code>x3 := x2</code></td>
<td><code>x3 := x2</code></td>
<td>(removed)</td>
</tr>
<tr>
<td><code>x4 := x1 + x3</code></td>
<td><code>x4 := 300 + x2</code></td>
<td><code>x4 := 300 + x2</code></td>
</tr>
<tr>
<td><code>print x4</code></td>
<td><code>print x4</code></td>
<td><code>print x4</code></td>
</tr>
</tbody>
</table>

6.4 Constant Folding

Our last classic optimization is constant folding. If a computation instruction (e.g., add) has literals (constants) for all source operands, constant folding pre-computes the result at compile time and replaces the instruction with a simple copy instruction, where the source is the literal pre-computed answer. In our case, this means replacing an instruction like `x1 := 300 + 1` with `x1 := 301`.

Exercise 14. Implement `constantFold` in `Opt.scala`.
Use the `map` method of sequences to produce a new sequence of TAC instructions where addition instructions meeting the constant-folding criteria are replaced by simple copies of constants and all other instructions are unchanged. Consult examples in `copyPropagate` or `deadCodeElim`, ask your CS 251 teammates, or ask for help with `map`. Scala's anonymous function syntax is `(param => body)`. The function `constantFold` is much smaller than the other two optimizations. A simple implementation is 4-5 sparse lines.

6.5 Composing Optimizations

Our AST-to-TAC translation never produces instructions that can be constant-folded, but just as with dead code elimination, constant folding becomes useful when copy propagation is used.

Exercise 15. Apply copy propagation to this TAC sequence manually. Then apply dead code elimination to the result. Then apply constant folding to that result. What do you get? Can you reapply optimizations to make the program even smaller?

\begin{verbatim}
x_1 := 7
x_2 := 8
x_3 := x_1 + x_2
print x_3
\end{verbatim}

It turns out that these three optimizations can feed off each other if composed repeatedly. To get the best result, we run them until a fixed point: a TAC program for which a pass through all three optimizations produces the exact same program. At this point, there is nothing more for them to do.

Exercise 16. In the `apply` method in Opt.scala, replace `once` with `fixpoint`. Run the compiler on a couple Tiny programs to see how much the fixpoint improves optimization.

7 Reflections

At this point we have an optimizing compiler for Tiny. The language is, well, tiny, and the compiler could use smarter code generation and a few other improvements, but we have managed to consider interesting problems and solutions at most stages of a typical compiler architecture.
Compilers for larger languages will complicate these problems significantly, so as we move on to start our deeper consideration of each compiler stage over the course of the semester, keep the following in mind:

The combined jobs of a compiler are complicated overall if we consider a compiler as a black box. A key to a clean, approachable implementation is to decompose large complicated problems into smaller simpler problems, solve those individual problems in a simple, elegant way, and then compose the solutions. This is true at a small scale, in the way we design recursive case-by-case translations or many simple individual optimizations. It is just as true at a large scale, where we break the compiler into several independent stages.

Hopefully you enjoyed building the Tiny compiler and gained some initial perspective for the rest of the semester. Do not worry if the details are fuzzy: it will take the whole semester to revisit all these items in detail! If you have any thoughts on how helpful these activities were or how to improve them, let me know. Happy compiling!

A Extending the Language (Optional)

Exercise 17. (Optional) Port your implementation of multiplication from the Tiny frontend to this version of the compiler and extend the backend to support multiplication as well. If you made good design decisions initially, this will be fairly straightforward. Feel free to experiment with other language improvements as well. Think ahead about how each feature impacts each stage of your compiler.
The Virtual Reality Modeling Language and Java

Don Brutzman
Code UW/Br, Naval Postgraduate School
Monterey California 93943-5000 USA
brutzman@nps.navy.mil

Communications of the ACM, vol. 41 no. 6, June 1998, pp. 57-64.

Abstract. The Virtual Reality Modeling Language (VRML) and Java provide a standardized, portable and platform-independent way to render dynamic, interactive 3D scenes across the Internet. Integrating two powerful and portable software languages provides interactive 3D graphics plus complete programming capabilities plus network access. Intended for programmers and scene authors, this paper provides a VRML overview, synopsizes the open development history of the specification, provides a condensed summary of VRML 3D graphics nodes and scene graph topology, describes how Java interacts with VRML through detailed examples, and examines a variety of VRML/Java future developments.

Overview. The Web is being extended to three spatial dimensions thanks to VRML, a dynamic 3D scene description language that can include embedded behaviors and camera animation. A rich set of graphics primitives provides a common-denominator file format which can be used to describe a wide variety of 3D scenes and objects. The VRML specification is now an International Organization for Standardization (ISO) standard (VRML 97).

Why VRML and Java together? Over twenty million VRML browsers have shipped with Web browsers, making interactive 3D graphics suddenly available for any desktop. Java adds complete programming capabilities plus network access, making VRML fully functional and portable. This is a powerful new combination, especially as ongoing research shows that VRML plus Java provide extensive support for building large-scale virtual environments (LSVEs). This paper provides historical background, a detailed overview of VRML 3D graphics, example VRML-Java test programs, and a look ahead at future work.

Development history.
“Birds of a feather” sessions organized by Mark Pesce and Tony Parisi at the 1994 World Wide Web and SIGGRAPH conferences led to the formation of an open working group to examine potential technologies for a “virtual reality markup language,” later changing markup to modeling. The initial goal of this effort was selection of a general 3D scene-description file format that was suitable for modification by addition of hyperlink semantics similar to the HyperText Markup Language (HTML). Open nomination and discussion of seven possible candidates led to selection of an extended subset of the Silicon Graphics Inc. (SGI) OpenInventor file format [Wernicke 94] [Carey, Strauss 92]. Extensive online proposals and discussions punctuated by reported results became the hallmark of the VRML development process. Key contributions of the initial VRML 1.0 standard were selection of a core set of object-oriented graphics constructs augmented by hypermedia links, all suitable for geometry rendering by Web browsers on personal computers or workstations.

Language extensions for VRML. Although spirited debate and rough consensus successfully delivered an initial specification, there were two major limitations in VRML 1.0: lack of support for dynamic scene animation, and no traditional programming language constructs. Difficult issues regarding real-time animation in VRML 1.0 included entity behaviors, user-entity interaction and entity coordination. VRML 2.0 development tackled these issues directly, using event-driven ROUTEs to connect 3D nodes and fields to behavior-driven sensors and timing. “Language wars” were avoided by allowing other programming languages to communicate with the VRML scene via a Script node. Initial languages chosen are JavaScript for light-weight in-line calculations, and Java for full-fledged programming and network access.
If Java or JavaScript are supported in a VRML browser, they must conform to the formal interface specified in (VRML 97) Annexes B and C, respectively. Major browsers now support both. Ongoing development of VRML continues via open working groups supported by the nonprofit VRML Consortium (VRMLC 97).

Behaviors. The term “behaviors” refers to making changes in the structure or appearance of a 3D scene. Thus a broad definition of a VRML behavior might be “any change in the nodes or fields of a VRML scene graph.” VRML 97 provides local key-frame animation mechanisms and remote scripting language hooks (i.e. an applications programming interface or API) to any scene graph component. Dynamic scene changes can be stimulated by scripted actions, message passing, user commands or behavior protocols, implemented either via Java calls or complete VRML scene replacement. General approaches for VRML behaviors and scene animation are possible that provide simplicity, security, scalability, generality and open extensions. Using Java is the most powerful way for 3D scene authors to explore the many possibilities provided by VRML.

Getting started. Numerous useful resources for obtaining browser software, subscribing to the www-vrml mailing list, tutorials etc. are available via the VRML Repository (SDSC 97). Excellent books for learning VRML include (Ames 97) (Hartman 97). The VRML 97 specification is online, and an annotated version of the specification by principal VRML 97 architects Rikk Carey and Gavin Bell is available in book form and online (Carey 97). Resources on Java network programming include (Harold 97) and (Hughes 97). Intermediate and advanced textbooks for combined VRML and Java programming are (Lea 97) and (Roehl 97), respectively.

3D graphics nodes. For most programmers, there are many new language concepts and terms in VRML. An overview of this admittedly large language is necessary before describing how Java works in combination with it.
This section describes the 3D-specific VRML nodes.

Big picture. Since VRML is a general 3D scene description language that can be used as an interchange file format, there are a large number of 3D graphics nodes available. These nodes are organized in a hierarchical structure called a scene graph, i.e. a directed acyclic graph. The primary interaction model for 3D VRML browsers is point and click, meaning that content can have embedded links just like the 2D HyperText Markup Language (HTML). VRML 3D browsers are typically installed as plugins within 2D browsers (such as Netscape and Internet Explorer). VRML is optimized for general 3D rendering and minimal network loading, taking advantage of extensive VRML browser capabilities. For example, geometric primitives such as IndexedFaceSet allow authoring tools and datasets to create highly complex objects. Textures and MovieTextures can wrap 2D images over and around arbitrary geometry. Sound nodes embed spatialized audio together with the associated shapes. Lighting and camera control provide complete control of presentation and rendering, including animation of camera position to create flyby explorations or dramatic visual transitions. Finally the Script node interface to Java allows modification or generation of any VRML content.

Shape. The Shape node is a container node which collects a pair of components called geometry and appearance. Nodes that describe the objects being drawn are typically used in the geometry and appearance fields of the Shape node.

Geometry. Box, Cone, Cylinder and Sphere are nodes for simple regular polyhedra (sometimes called primitives) which provide basic building blocks for easy object construction. The Text node simplifies specification of planar or extruded polygonal text. ElevationGrid is a table of height (y) values corresponding to x-z horizontal spacing indices. The Extrusion node stretches, rotates and scales a cross-section along a spine into a wide variety of possible shapes.
IndexedFaceSet, IndexedLineSet and PointSet can create 3D geometry of arbitrary complexity. Since Extrusion, IndexedFaceSet, IndexedLineSet and PointSet are specified by sets of coordinate points, color values and normal vectors can be specified point by point for these nodes.

Appearance. The appearance of geometry is primarily controlled by specifying color values or texture images. The Material node permits specification of diffuse, emissive and specular color components (which roughly correspond to reflective, glowing and shininess colors). Material nodes can also specify transparency. Colors are specified as red-green-blue (RGB) triples ranging from 0 (black) to 1 (full intensity). Transparency values similarly range from 0 (opaque) to 1 (completely transparent).

As an alternative or supplement to material values, three types of texture nodes are provided. ImageTexture is the most common: a 2D image is wrapped around (or over) the corresponding geometry. MovieTexture allows use of time-dependent textures (e.g. MPEG movie files) as the image source. Finally TextureTransform specifies a 2D transformation in texture coordinates for advanced texture-mapping techniques, i.e. applying specific repetitions or orientations of a texture to corresponding geometry features.

Scene topology: grouping and child nodes. VRML syntax and node typing also help enforce a strict hierarchical structure of parent-child relationships, so that browsers can perform efficient rendering and computational optimizations. Grouping nodes are used to describe relationships between Shapes and other child nodes. Moreover, the semantics of a scene graph carefully constrain the ways that nodes can be organized together. Child nodes come under grouping nodes to comply with the scene graph hierarchy inherent in any author’s VRML scene. In addition to Shapes, child nodes describe lighting, sound, viewing, action sensors and animation interpolators.
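As a brief sketch of how a Shape collects geometry and appearance (the choice of a Box, and the color and transparency values below, are invented for illustration):

```
Shape {
  geometry Box { size 2 1 1 }
  appearance Appearance {
    material Material {
      diffuseColor  0.8 0.1 0.1   # reflective color (red)
      emissiveColor 0.0 0.0 0.0   # no glow
      specularColor 0.5 0.5 0.5   # shininess highlights
      transparency  0.25          # 0 = opaque, 1 = fully transparent
    }
  }
}
```

A texture field (e.g. an ImageTexture) could be added alongside material to wrap an image over the box, as in the “Hello world” example later in this paper.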
These nodes provide rich functionality for animating viewpoints, assisting user navigation, and providing environmental effects.

**Grouping**. Fundamental to any VRML scene is the notion that graphics nodes can only be grouped in ways that make sense. Grouping is used to describe spatial and logical relationships between nodes and, as an intentional side result, also enables efficient rendering by 3D browsers. The Group node is the simplest of the grouping nodes: it merely collects child nodes, with no implied ordering or relationship other than equivalent status in the scene graph. The Transform node similarly groups child nodes, but first applies rotation, scaling and translation to place child nodes in the proper coordinate frame of reference. The Billboard node keeps its child nodes aligned to always face the viewing camera, either directly or about an arbitrary rotation axis. The Collision node lets an author specify a bounding box which serves as a proxy to simplify collision detection calculations for grouped child nodes. The Switch node renders only one (or none) of its child nodes, and is useful for collecting alternate renderings of an object which might be triggered by external behaviors. The LOD (level of detail) node also renders only one of multiple child nodes, but child selection is triggered automatically based either on viewer-to-object distance or on frame rate. Thus LOD enables the browser to efficiently select high-resolution or low-detail alternative renderings on-the-fly in order to support interactive rendering.

**Grouping and the Web**. Since the Web capabilities of VRML are analogous to HTML, two types of grouping nodes enable Web connectivity in 3D scenes. The Inline node allows importing additional 3D data from another VRML world into the current VRML world. In contrast, the Anchor node creates a link between its child nodes and an associated Uniform Resource Locator (URL) web address.
When the child geometry of an Anchor node is clicked by the user’s mouse, the current VRML scene is entirely replaced by the VRML scene specified in the Anchor URL. Multiple strings can be used to specify any URL, permitting browsers to preferentially load local copies before searching for remote scenes or backup locations. Typically browsers highlight “hot links” in a 3D scene by modifying the mouse cursor when it is placed over Anchor-enabled shapes.

**Lighting and sound**. Virtual lights in a 3D scene are used to determine illumination values when rendering. Lights cannot be “seen” in the world directly; rather they are used to calculate visibility, shininess and reflection in accordance with a carefully specified mathematical lighting model. The DirectionalLight node illuminates using parallel rays, the PointLight node models rays radiating omnidirectionally from a point, and the SpotLight node similarly provides radial rays constrained within a conical angle. Lights include color and intensity values which are multiplied against material values to produce proper shading. The Sound node places an audio clip at a certain location, with nonrendered ellipsoids surrounding it that determine minimum and maximum threshold distances. Sound can repeat, be rendered spatially relative to user location, and be triggered on/off by events. Embedding various lights and sounds at different locations within a scene can produce dramatic results.

**Viewing**. Most 3D nodes describe location, size, shape and appearance of a model. The Viewpoint node specifies position, orientation and field of view for the virtual camera that is used to “view” (i.e. calculate) the 3D scene and render the screen image. Most objects and scenes contain a number of named viewpoints to encourage easy user navigation. The NavigationInfo node extends the camera concept to include the notion of an avatar bounding box.
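By way of illustration, a light, a spatialized sound and a Web-linked group of the kinds just described might be authored as follows (all file names, URLs and numeric values here are invented):

```
PointLight {
  location  0 4 0
  color     1 1 0.9      # warm white, radiating omnidirectionally
  intensity 0.8
}
Sound {
  location 0 0 0
  minFront 5             # inner ellipsoid: full volume within this distance
  maxFront 50            # outer ellipsoid: inaudible beyond this distance
  source AudioClip {
    url  "ambient.wav"
    loop TRUE
  }
}
Anchor {
  url         "another_world.wrl"
  description "Visit the next world"
  children [
    Inline { url "pedestal.wrl" }   # imported geometry serves as the hot link
  ]
}
```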
**Action sensors**. Sensors detect change in the scene due to passage of time (TimeSensor), user intervention or other activity such as viewer proximity (VisibilitySensor). Sensors produce time-stamped events whose values can be routed as inputs to other nodes in the scene. User intervention is often as simple as direct mouse interaction with a shape via a TouchSensor, or interaction with a constraining bounding geometry specified by PlaneSensor, ProximitySensor or SphereSensor. Consistently typed input and output events are connected to correspondingly typed fields in the scene graph via ROUTEs.

**Animation interpolators**. Key-frame animation typically consists of simple time-varying values applied to the fields of the appropriate node. Smooth in-between animation can be interpolated linearly as long as key values are at sufficient resolution. Linear interpolators are provided for Color, Coordinate, Normal, Orientation, Position and scalar fields.
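For example (with invented node names and values), a TimeSensor can drive a PositionInterpolator whose output animates a Transform, producing a sphere that rises and falls once per four-second cycle:

```
DEF CLOCK TimeSensor {
  cycleInterval 4.0      # seconds per animation loop
  loop TRUE
}
DEF MOVER PositionInterpolator {
  key      [ 0.0 0.5 1.0 ]            # key frames as fractions of the cycle
  keyValue [ 0 0 0, 0 2 0, 0 0 0 ]    # position at each key frame
}
DEF BALL Transform {
  children [ Shape { geometry Sphere { radius 0.5 } } ]
}
ROUTE CLOCK.fraction_changed TO MOVER.set_fraction
ROUTE MOVER.value_changed    TO BALL.set_translation
```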
Linear interpolation is sufficient for many demanding applications, including most humanoid animation. As with sensors, interpolator inputs and outputs are connected with other nodes via ROUTEs. For performance reasons, use of these standard VRML sensors and interpolators is usually preferable to writing a custom script, since they can efficiently perform most authors’ intended tasks. Scripts are the mechanism by which authors can extend the action and animation capabilities of VRML.

Prototypes. Prototypes (PROTO and EXTERNPROTO) allow creation of new VRML node types by authoring combinations of nodes and fields from other preexisting node types. In this sense, a PROTO definition is somewhat analogous to a macro definition. In order to avoid completely copying a PROTO into each file where it is used, the EXTERNPROTO definition specifies the remote URL where the original PROTO is defined, along with the interface, to permit local type-checking by browsers during scene loading. The EXTERNPROTO mechanism thus allows construction of PROTO libraries for easy access and reuse.

Graphics example. No doubt dedicated readers are fully convinced by this point that VRML contains a great deal of functionality! To regain clarity, a canonical “Hello world” example in Figures 2a and 2b displays basic VRML syntax. This scene is available at www.stl.nps.navy.mil/~brutzman/vrml/examples/course/hello_world.wrl

```
#VRML V2.0 utf8
Group {
  children [
    Viewpoint {
      description "initial view"
      position    6 -1 0
      orientation 0 1 0 1.57
    }
    Shape {
      geometry Sphere { radius 1 }
      appearance Appearance {
        texture ImageTexture { url "earth-topo.png" }
      }
    }
    Transform {
      translation 0 -2 1.25
      rotation    0 1 0 1.57
      children [
        Shape {
          geometry Text { string [ " Hello" "world!" ] }
          appearance Appearance {
            material Material { diffuseColor 0.1 0.5 1 }
          }
        }
      ]
    }
  ]
}
```

Figure 2a. VRML source hello_world.wrl
Figure 2b. VRML scene hello_world.wrl

VRML and Java: scripts, events, naming and ROUTEs.
Interfaces between VRML and Java are effected through Script nodes, an event engine, DEF/USE naming conventions, and ROUTEs connecting various nodes and fields in the VRML scene. VRML provides the 3D scene graph, Script nodes encapsulate Java functionality, and ROUTEs provide the wiring that connects computation to rendering.

Scripts. Script nodes appear in the VRML file, encapsulating the Java code and providing naming conventions for interconnecting Java variables with field values in the scene. (Similar scripting conventions are specified for JavaScript). Interfaced Java classes import the vrml.* class libraries in order to provide type conversion (for both nodes and simple data types) between Java and VRML. Java classes used by Script nodes must extend the vrml.node.Script class in order to interface properly with the VRML browser. The basic interface and a good description of Script nodes is excerpted from the (VRML 97) specification in Figure 3.

```
Script {
  exposedField MFString url          []
  field        SFBool   directOutput FALSE
  field        SFBool   mustEvaluate FALSE
  # And any number of:
  eventIn  eventType eventName
  field    fieldType fieldName initialValue
  eventOut eventType eventName
}
```

The Script node is used to program behavior in a scene. Script nodes typically
a. signify a change or user action;
b. receive events from other nodes;
c. contain a program module that performs some computation;
d. effect change somewhere else in the scene by sending events.

**Figure 3.** Script node specification (VRML 97).

*Events.* Events allow VRML scenes to be dynamic. Events are merely time-stamped values passed to and from different parts of a VRML world. EventIns accept events, and eventOuts send events (when triggered by some predefined behavior). Events must strictly match the simple type (such as integer, float, color) or node type (such as a Material node) being passed from input to output.
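Event type matching can be sketched briefly (node names and values below are invented): a ColorInterpolator emits SFColor values, which may legally be routed only to an SFColor eventIn or exposedField, such as a Material's diffuseColor:

```
DEF TICK TimeSensor { cycleInterval 2.0 loop TRUE }
DEF FADER ColorInterpolator {
  key      [ 0 1 ]
  keyValue [ 1 0 0, 0 0 1 ]   # fade from red to blue
}
Shape {
  geometry Sphere { radius 1 }
  appearance Appearance {
    material DEF SURFACE Material { diffuseColor 1 0 0 }
  }
}
ROUTE TICK.fraction_changed TO FADER.set_fraction        # SFFloat to SFFloat
ROUTE FADER.value_changed   TO SURFACE.set_diffuseColor  # SFColor to SFColor
```

Routing FADER.value_changed to, say, an SFVec3f field would be rejected by the browser as a type mismatch.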
Script parameters are designated as eventIn, eventOut or exposedField, which respectively correspond to in, out or in/out parameter semantics. Private fields are simply designated as field rather than exposedField.

**DEF/USE naming conventions.** Node naming and light-weight multiple instancing of nodes are possible through the DEF (define) and USE mechanisms. DEF is used to associate names with nodes. USE permits duplicate instances of nodes to be efficiently referenced without complete re-instantiation, significantly boosting performance. Node names created via DEF are also used for routing events to and from fields. Thus Script nodes (and other 3D nodes which interact with the script) all must be named using DEF. The scope of all DEF’ed names is kept local to the file (or PROTO) where the name is defined.

**ROUTEs.** ROUTE statements define connections between named nodes and fields, allowing events to pass from source to target. ROUTE statements usually appear at the end of a file since all nodes must be DEF’ed (named) prior to referencing. Typically ROUTEs are used for all events passed into (or out of) Script nodes. Use of ROUTEs is not always required, however, since nodes in the VRML scene can be passed by reference as fields to the encapsulated Script code. This second approach permits direct manipulation of VRML by Java without using events or ROUTEs.

**Example: event-based control.** A scene demonstrating VRML-Java connectivity using Scripts, events, node naming and ROUTEs is examined in Figure 4. This VRML scene and corresponding Java source code are available at www.stl.nps.navy.mil/~brutzman/vrml/examples/course/ScriptNodeEventOutControl.wrl and ScriptNodeEventOutControl.java

Figure 4. Script node interface between VRML and Java. This example tests event-based VRML-Java functionality. Note shared events startTime, ChangedText and ChangedPosition.
The following sequence of events occurs:

0) The initialize method on the Java side establishes links and sets the trace text in the 3D scene to an intermediate value.
1) The user clicks the trace text in the 3D scene with the mouse, activating the TouchSensor built-in eventOut touchTime, which is ROUTEd to trigger the Script node eventIn startTime, which in turn invokes the processEvent() method in the corresponding Java class. Changed values for text and position are calculated by the Java class and then returned to the Script node as eventOut values.
2) The ChangedText eventOut is sent to the MessageToUser Text node, setting the trace text in the 3D scene to the final message value.
3) The ChangedPosition eventOut is sent to the TextPosition Transform node, moving the trace text to the bottom of the scene.

Example: field-based control. An alternative to event passing via ROUTEs is to pass references to VRML nodes as values for fields in the Script node. In effect, Java gains direct control of VRML nodes and fields, rather than sending or receiving event messages to set/get values. Upon initialization, the Java class instantiates the node reference as a local variable. During subsequent invocations the Java class can read or modify any referenced field in the scene graph directly, without using ROUTEs. A second example follows which demonstrates the exact same functionality as the preceding example, but uses field-based control instead of events and ROUTEs. Figure 5 shows how nodes in the VRML scene are first defined, then passed as parameters to the Java class. The field-based example is available via www.stl.nps.navy.mil/~brutzman/vrml/examples/course/ScriptNodeFieldControl.wrl and ScriptNodeFieldControl.java

Figure 5. Field interface between VRML and Java. This example tests field-based VRML-Java functionality. Note shared event startTime, and shared fields ChangedText and ChangedPosition.

Operation of this example is similar to Figure 4, except that the Java class directly manipulates VRML nodes via fields instead of sending events.
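A condensed sketch of the event-based wiring described above might look like the following (reconstructed from the prose description, not copied from the actual ScriptNodeEventOutControl.wrl file):

```
DEF TextPosition Transform {
  children [
    DEF TOUCH TouchSensor { }
    Shape { geometry DEF MessageToUser Text { string "click this text" } }
  ]
}
DEF CONTROL Script {
  url      "ScriptNodeEventOutControl.class"   # compiled Java class
  eventIn  SFTime   startTime                  # triggered by the TouchSensor
  eventOut MFString ChangedText                # computed in processEvent()
  eventOut SFVec3f  ChangedPosition
}
ROUTE TOUCH.touchTime         TO CONTROL.startTime
ROUTE CONTROL.ChangedText     TO MessageToUser.set_string
ROUTE CONTROL.ChangedPosition TO TextPosition.set_translation
```

Note how every node touched by a ROUTE is first named with DEF, and how each ROUTE connects matching types (SFTime to SFTime, MFString to MFString, SFVec3f to SFVec3f).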
**Script interface performance hints.** Two authoring hints are provided as fields in the Script node for potential browser optimization: mustEvaluate and directOutput. In the first example (event-based control), mustEvaluate is set to FALSE as an author hint allowing the browser to postpone event passing until convenient, in order to optimize rendering. Similarly, the author hint directOutput is set to FALSE since the script only passes events and doesn’t modify VRML nodes directly.

In the second example (field-based control), the opposite values are used. Since values in the scene graph might be modified by the script directly (i.e. without notifying the browser via ROUTE activity), the hint field mustEvaluate is set to TRUE and the browser can’t delay event passing as a performance optimization. Similarly, directOutput is set to TRUE to indicate that the script can modify VRML nodes directly via field control. If a scene uses both event and field control, the safest approach is to keep both values set to TRUE to maximize browser responsiveness to script actions.

**Browser interface.** Java via the Script node is provided a variety of methods to interact with the host Web browser. getName and getVersion provide browser identification information. getWorldURL provides a string containing the original URL for the top-level VRML scene. setDescription resets the top-level page heading. getCurrentSpeed and getCurrentFrameRate show user navigation and window redraw speeds.

An author’s Java program can also create and insert VRML source code (including nodes, PROTOs and additional ROUTEs) at run time. Java modifies VRML in the scene using the replaceWorld, createVrmlFromString, createVrmlFromURL, addRoute, deleteRoute, and loadURL methods. Valuable examples demonstrating these techniques appear throughout the public-domain JVerge class libraries, which provide a complete Java API mirroring all VRML nodes.
JVerge accomplishes scene graph changes by sending modifications through the browser interface (Couch 97) (Roehl 97).

**Future language interfaces.** Java via VRML’s Script node is well specified and multiple compliant browsers exist. Other interfaces are also on the horizon which can further extend Java-VRML functionality. Details follow.

**External Authoring Interface (EAI).** Rather than provide Java connectivity from “inside” the VRML scene via the Script node, the EAI defines a Java or JavaScript interface for external applets which communicate from an “external” HTML web browser (Marrin 97). EAI applets can pass messages to and from VRML scenes embedded in an HTML page. Much of the browser interface is similar, but somewhat different semantics and syntax are necessary for event passing and flow of control. The primary benefit of the EAI is the ability for direct communications between the encapsulating HTML browser and the embedded VRML browser. The EAI will likely be proposed as an official extension to VRML 97. A next-generation Java-VRML working group is examining additional capabilities and possibly unifying the EAI with the Script node classes, for consideration in future versions of VRML.

Java3D. Sun has recently released the Java3D class library for 3D graphics programming (Deering 97). Java3D is an API, providing a programming interface for 3D that is analogous to the Abstract Window Toolkit (AWT) for 2D graphics. Java3D programs are saved as Java byte codes, not as a modeling format. Although Java3D is expected to include a loader capable of importing and exporting VRML geometry, it is not yet clear whether the Java3D event model will be able to similarly import and export VRML events. Public availability of the Java3D classes adds a number of new tools for Java programmers interested in VRML scene authoring. A VRML-Java3D working group is exploring usage conventions for interoperability, and also prototyping possible specification changes.
Further unification of these already-complementary approaches holds tremendous promise.

Example Java-VRML research. Programming Java in combination with VRML provides great expressive power. Simply illustrated in Figure 6, this combination of capabilities appears sufficient to provide a general entity model. VRML provides the 3D rendering and dynamic interaction capabilities, while Java provides general computation capabilities and network access. A variety of example projects follow.

Multi-user server worlds. A number of research laboratories and commercial companies are producing networked games and shared worlds that utilize centralized servers to share data among multiple participants. This alternative approach can reliably scale to several hundred participants. An extensive example that builds such a world appears in (Roehl 97).

Networking. In order to scale to many simultaneous users, peer-to-peer interactions are necessary in addition to client-server query-response. As one example, the IEEE 1278.1 Distributed Interactive Simulation (DIS) protocol is a well-specified way to pass entity behavior such as position, orientation, collision, fire/detonate and other message types. Use of multicast networking enables scalable many-to-many communications, avoiding server bottlenecks. DIS is particularly effective at communicating physics information at interactive rates in real time. The dis-java-vrml working group is implementing a public-domain DIS library in Java for use in multiple-entity VRML worlds, available via www.stl.nps.navy.mil/dis-java-vrml.

Physics of motion. Displaying realistic entity motion requires properly modeling the underlying physics which govern entity equations of motion. Much work has been done in the graphics and robotics communities to produce kinematic (velocities only) and dynamic (forces, accelerations) models for many different types of entities.
Such models typically range from three to six spatial degrees of freedom (x, y, z, roll, pitch and yaw). The NPS Phoenix autonomous underwater vehicle (AUV) hydrodynamics model provides a perhaps-worst-case example of how computationally demanding physical responses can be calculated in real time (10 Hz or better on a Pentium processor). NPS AUV software is available at www.stl.nps.navy.mil/~auv. Eventually we expect that interface conventions will emerge and physics libraries for most entity types will be widely available. A particularly appealing feature of such an approach is that computational load is distributed evenly, with each entity responsible for its own physical response calculations. Conventions for kinematic human body animation are already under development by the h-anim working group at ece.uwaterloo.ca/~h-anim/.

Sound. Computational requirements for spatial audio rendering are also demanding. VRML provides a simple Sound node which localizes sound clips with their corresponding geometry. Widespread work in streaming audio is beginning to provide relatively low-bandwidth protocols with adequate sound quality and scalability. Further work in spatialized audio is beginning to show that advanced techniques for aural rendering are becoming possible on desktop machines. Further information is available via the Sound in Interactive Environments (SIE) mailing list at www.cs.nps.navy.mil/people/phd/storms.

vrtp. Finally, as the demanding bandwidth and latency requirements of virtual environments begin to be exercised by VRML and Java, some client-server design assumptions of the HyperText Transfer Protocol (http) may no longer be valid. Users won’t be satisfied with network mechanisms that break down after a few hundred players. A spectrum of functionality is needed on each desktop which includes client, server, peer-to-peer and network monitoring.
Our research group is building a Virtual Reality Transfer Protocol (vrtp) to better take advantage of available transport-layer functionality for VRML and overcome bottlenecks in http. Experimentation and quantitative evaluation are essential to develop the next-generation code needed for diverse inter-entity virtual environment communications.

**Next steps.** A great deal of implementation work is now in progress. The best news: VRML and Java are powerful software languages for 3D modeling, general computation and network access. They are well matched, well specified, openly available and portable to most platforms on the Internet. VRML scenes in combination with Java can serve as the building blocks of cyberspace. Building large-scale internetworked worlds now appears possible. Using VRML and Java, practical experience and continued success will move the field of virtual reality past speculative fiction and isolated islands of research onto desktops anywhere, creating the next-generation Web.

**References**

San Diego Supercomputing Center (SDSC), *VRML Repository*, 1997, available via [www.sdsc.edu/vrml](http://www.sdsc.edu/vrml)

VRML Consortium (VRMLC), working groups and other information, 1997, available via [www.vrml.org](http://www.vrml.org)

About the author. Don Brutzman (brutzman@nps.navy.mil) is an assistant professor at the Naval Postgraduate School in Monterey California. He serves as technology vice president for the VRML Consortium.
Lecture 23: Domain-specific programming on graphs Tunes Alt-J Tessellate (An Awesome Wave) “We wrote the lyrics to Tessellate while waiting on our GraphLab code to finish cranking on the Twitter graph.” - Joe Newman Recall last time - Domain-specific programming systems - Idea: give up generality in what types of programs can be expressed in exchange for achieving programmer productivity and high performance - “Performance portability” is a key goal: want programs to execute efficiently on a variety of complex parallel platforms (recall: wide diversity in modern platforms) - Doing so can require different data structure and algorithmic choices: not just good low-level code generation Today - Three modern systems for expressing operations on graphs - We’ll use these systems as examples of making design choices when architecting systems GraphLab Ligra Green-Marl Analyzing big graphs - Many modern applications: - Web search results, recommender systems, influence determination, advertising, anomaly detection, etc. - Public dataset examples: Twitter social graph, Wikipedia term occurrences, IMDB actors, Netflix Designing a framework for writing programs that operate on graphs - What operations do we want to make easy to express and efficient? - What are the key optimizations performed by best-known implementations of these operations? MapReduce (Hadoop) abstraction was the wrong tool for the job (e.g., iterative graph computations did not map well to “map” of independent computations) \[ R[i] = \frac{1 - \alpha}{N} + \alpha \sum_{j \text{ links to } i} \frac{R[j]}{\text{OutLinks}[j]} \] Illustrative example: Page Rank GraphLab - A system for describing *iterative* computations on graphs - Implemented as a C++ runtime - Runs on shared memory machines or distributed across clusters - GraphLab runtime takes responsibility for graph partitioning across cluster machines, communication between machines, work scheduling, etc. 
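The PageRank recurrence above is easy to state directly in code. Below is a minimal Python sketch (not GraphLab code; the three-vertex graph, the iteration count, and the dict-of-lists representation are illustrative choices, and dangling vertices are ignored):

```python
# Minimal PageRank iteration, following R[i] = (1-alpha)/N + alpha * sum_j R[j]/OutLinks[j].
# Toy graph is hypothetical; GraphLab would express this as a per-vertex program instead.

def pagerank(out_links, alpha=0.85, iters=50):
    """out_links: dict vertex -> list of vertices it links to (no dangling vertices)."""
    n = len(out_links)
    rank = {v: 1.0 / n for v in out_links}
    for _ in range(iters):
        new = {v: (1.0 - alpha) / n for v in out_links}
        for j, targets in out_links.items():
            share = alpha * rank[j] / len(targets)   # R[j] / OutLinks[j], damped
            for i in targets:
                new[i] += share
        rank = new
    return rank

graph = {0: [1, 2], 1: [2], 2: [0]}
ranks = pagerank(graph)
print(ranks)
```

Because every vertex distributes its full rank, the total rank stays exactly 1.0 across iterations.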
Application state: - The graph: $G = (V, E)$ - Application defines data blocks on each vertex and directed edge - $D_v = \text{data associated with vertex } v$ - $D_{u \rightarrow v} = \text{data associated with directed edge } u \rightarrow v$ - Read-only global data Notice: I always first describe program state And then describe what operations are available to manipulate state Key concept: GraphLab vertex program - Defines per-vertex operations on the vertex’s local neighborhood - Neighborhood (aka “scope”) of vertex: - The current vertex - Adjacent edges - Adjacent vertices = vertex or edge data in scope of red vertex Simple example: PageRank * \[ R[i] = \frac{1 - \alpha}{N} + \alpha \sum_{j \text{ links to } i} \frac{R[j]}{\text{OutLinks}[j]} \] * Let \( \alpha = 0.85 \) PageRank_vertex_program(vertex i) { // (Gather phase) compute the sum of my neighbors rank double sum = 0; foreach (vertex j : in_neighbors(i)) { sum = sum + j.rank / num_out_neighbors(j); } // (Apply phase) Update my rank (i) i.rank = (1-0.85)/num_graph_vertices() + 0.85*sum; } * This is made-up syntax for simplicity: actual syntax is C++, as we’ll see on the next slide Programming in GraphLab amounts to defining how to update state at each vertex The system handles scheduling/parallelization Actual GraphLab code (C++) ``` struct web_page { std::string pagename; double pagerank; web_page(): pagerank(0.0) { } }; typedef graphlab::distributed_graph<web_page, graphlab::empty> graph_type; class pagerank_program: public graphlab::ivertex_program<graph_type, double>, public graphlab::IS_POD_TYPE { public: // we are going to gather on all the in-edges edge_dir_type gather_edges(icontext_type& context, const vertex_type& vertex) const { return graphlab::IN_EDGES; } // for each in-edge gather the weighted sum of the edge. 
double gather(icontext_type& context, const vertex_type& vertex, edge_type& edge) const { return edge.source().data().pagerank / edge.source().num_out_edges(); } // Use the total rank of adjacent pages to update this page void apply(icontext_type& context, vertex_type& vertex, const gather_type& total) { double newval = total * 0.85 + 0.15; vertex.data().pagerank = newval; } // No scatter needed. Return NO_EDGES edge_dir_type scatter_edges(icontext_type& context, const vertex_type& vertex) const { return graphlab::NO_EDGES; } }; ``` - **Graph has record of type web_page per vertex, and no data on edges** - **Define edges to gather over in “gather phase”** - **Compute value to accumulate for each edge** - **Update vertex rank** - **PageRank example performs no scatter** Running the program ```cpp graphlab::omni_engine<pagerank_program> engine(dc, graph, "sync"); engine.signal_all(); engine.start(); ``` GraphLab runtime provides “engines” that manage scheduling of vertex programs engine.signal_all() marks all vertices for execution You can think of the GraphLab runtime as a work queue scheduler. And invoking a vertex program on a vertex as a **task** that is placed in the work queue. So it’s reasonable to read the code above as: “place all vertices into the work queue” Or as: “foreach vertex” run the vertex program. 
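The "work queue" reading above can be sketched in a few lines of Python. This is an illustrative stand-in, not GraphLab's implementation: the vertex program here is min-label propagation (connected components), and "signaling" re-enqueues neighbors whose input may have changed:

```python
from collections import deque

# Toy sketch of the engine-as-work-queue idea (illustrative Python, not GraphLab's C++ API).
# Vertex program: adopt the minimum label in the neighborhood (gather + apply);
# if the vertex's label changed, signal its neighbors (scatter).

def run_engine(neighbors, labels):
    queue = deque(labels)                 # "signal_all": every vertex starts scheduled
    scheduled = set(queue)
    while queue:
        v = queue.popleft()
        scheduled.discard(v)
        new_label = min([labels[v]] + [labels[u] for u in neighbors[v]])
        if new_label != labels[v]:
            labels[v] = new_label
            for u in neighbors[v]:        # signal: place neighbors back in the work queue
                if u not in scheduled:
                    scheduled.add(u)
                    queue.append(u)
    return labels

adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
labels = run_engine(adj, {v: v for v in adj})
print(labels)   # two components: {0,1,2} -> label 0, {3,4} -> label 3
```

The loop terminates when no vertex program produces a change, i.e., the queue drains.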
Generating new work by signaling \[ R[i] = \frac{1 - \alpha}{N} + \alpha \sum_{j \text{ links to } i} \frac{R[j]}{\text{OutLinks}[j]} \] - Iterate update of all \( R[i]'s \) 10 times - Uses generic “signal” primitive (could also wrap code on previous slide in a for loop) ```cpp struct web_page { std::string pagename; double pagerank; int counter; web_page(): pagerank(0.0), counter(0) { } }; // Use the total rank of adjacent pages to update this page void apply(icontext_type& context, vertex_type& vertex, const gather_type& total) { double newval = total * 0.85 + 0.15; vertex.data().pagerank = newval; vertex.data().counter++; if (vertex.data().counter < 10) vertex.signal(); } ``` If counter < 10, signal to scheduler to run the vertex program on the vertex again at some point in the future Signal is a general primitive for scheduling work - Parts of graph may converge at different rates (iterate PageRank until convergence, but only for vertices that need it) ```cpp class pagerank_program: public graphlab::ivertex_program< graph_type, double>, public graphlab::IS_POD_TYPE { private: bool perform_scatter; // Private variable set during apply phase, used during scatter phase public: // Use the total rank of adjacent pages to update this page void apply(icontext_type& context, vertex_type& vertex, const gather_type& total) { double newval = total * 0.85 + 0.15; double oldval = vertex.data().pagerank; vertex.data().pagerank = newval; perform_scatter = (std::fabs(oldval - newval) > 1E-3); // Check for convergence } // Scatter now needed if algorithm has not converged edge_dir_type scatter_edges(icontext_type& context, const vertex_type& vertex) const { if (perform_scatter) return graphlab::OUT_EDGES; else return graphlab::NO_EDGES; } // Make sure surrounding vertices are scheduled void scatter(icontext_type& context, const vertex_type& vertex, edge_type& edge) const { context.signal(edge.target()); } }; ``` Synchronizing parallel execution - Local neighborhood of vertex 
(vertex’s “scope”) can be read and written to by a vertex program. Programs specify what granularity of atomicity (“consistency”) they want GraphLab runtime to provide: determines amount of parallelism. - **Full consistency**: implementation ensures no other execution reads or writes to data in scope of \( v \) when vertex program for \( v \) is running. - **Edge consistency**: no other execution reads or writes any data in \( v \) or in edges adjacent to \( v \). - **Vertex consistency**: no other execution reads or writes to data in \( v \) ... Job scheduling order - GraphLab supports a collection of work scheduling policies - Synchronous: update all scheduled vertices “simultaneously” (vertex programs observe no updates from programs run on other vertices in same “round”) Graph (copy A of data structure) → Updated graph (copy B of data structure) → Updated graph (copy A of data structure) Run vertex programs for all scheduled vertices. (output to copy of graph structure) Job scheduling order - GraphLab supports a collection of work scheduling policies - Synchronous: update all vertices simultaneously (vertex programs observe no updates from programs run on other vertices in same “round”) - Round-robin: vertex programs observe most recent updates - Graph Coloring - Dynamic: based on new work created by signal - Several implementations: fifo, priority-based, “splash” ... 
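One simple way to realize a guarantee like edge consistency is lock-based: acquire locks on the vertex and its neighbors in a global order before running the vertex program. The sketch below is an assumption-laden Python illustration (the slides do not show GraphLab's actual mechanism); acquiring locks in sorted vertex-id order avoids deadlock between concurrent vertex programs:

```python
import threading

# Illustrative sketch: "edge consistency" via per-vertex locks, acquired in
# global vertex-id order so two overlapping scopes can never deadlock.

locks = {}

def run_vertex_program(v, neighbors, data, program):
    scope = sorted([v] + neighbors[v])     # deterministic global lock order
    for u in scope:
        locks[u].acquire()
    try:
        program(v, neighbors, data)        # reads/writes within scope are exclusive
    finally:
        for u in reversed(scope):
            locks[u].release()

def increment_neighborhood(v, neighbors, data):
    data[v] += 1
    for u in neighbors[v]:
        data[u] += 1

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}     # a triangle: every scope covers all vertices
data = {v: 0 for v in adj}
locks.update({v: threading.Lock() for v in adj})

threads = [threading.Thread(target=run_vertex_program,
                            args=(v, adj, data, increment_neighborhood))
           for v in adj for _ in range(20)]
for t in threads: t.start()
for t in threads: t.join()
print(data)   # 60 program runs, each touching all 3 vertices -> {0: 60, 1: 60, 2: 60}
```

Full consistency would additionally lock data on adjacent vertices' other edges; vertex consistency would lock only `v`, admitting more parallelism.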
- Application developer has flexibility for choosing consistency guarantee and scheduling policy - Implication: programs make assumptions about the schedule (unlike Liszt) - Kayvon’s opinion: seems like a weird design at first glance, but this is common (and necessary) in the design of efficient graph algorithms Summary: GraphLab concepts - Program state: data on graph vertices and edges + globals - Operations: per-vertex update programs and global reduction functions (reductions not discussed today) - Simple, intuitive description of work (follows mathematical formulation) - Graph restricts data access in vertex program to local neighborhood - Asynchronous execution model: application creates work dynamically by “signaling vertices” (enables lazy execution and work efficiency on real graphs) - Choice of scheduler and consistency implementation - In this domain, the order in which nodes are processed can be a critical property for both performance and quality of result - Application responsible for choosing right scheduler for its needs Ligra - A simple framework for parallel graph operations - Motivating example: breadth-first search parents = {-1, ..., -1} // d = dst: vertex to “update” (just encountered) // s = src: vertex on frontier with edge to d procedure UPDATE(s, d) return compare-and-swap(parents[d], -1, s); procedure COND(i) return parents[i] == -1; procedure BFS(G, r) parents[r] = r; frontier = {r}; while (size(frontier) != 0) do: frontier = EDGEMAP(G, frontier, UPDATE, COND); Semantics of EDGEMAP: foreach vertex i in frontier, call UPDATE for all neighboring vertices j for which COND(j) is true. 
Add j to returned set if UPDATE(i, j) returns true Implementing edgemap Assume vertex subset $U$ (*frontier* in previous example) is represented sparsely: - e.g., three-vertex subset $U$ of 10-vertex graph $G=(V,E)$: $U \subset V = \{0, 4, 9\}$ Procedure EDGEMAP_SPARSE($G$, $U$, $F$, $C$): result = {} parallel foreach $v$ in $U$ do: parallel foreach $v2$ in out_neighbors($v$) do: if ($C(v2) == 1$ and $F(v,v2) == 1$) then add $v2$ to result remove duplicates from result return result; Cost of EDGEMAP_SPARSE? $O(|U| + \text{sum of outgoing edges from } U)$ Visiting every edge on frontier can be wasteful - Each step of BFS, every edge on frontier is visited - Frontier can grow quickly for social graphs (few steps to visit all nodes) - Most edge visits are wasteful! - **claimed child**: edge points to unvisited node (useful work) - **failed child**: edge points to node found in this step via another edge - **peer**: edge points to a vertex that was added to frontier in same step as current vertex - **valid parent**: edge points to vertex found in previous step [Credit Beamer et al. SC12] Implementing edgemap for dense vertex subsets Assume vertex subset (frontier in previous example) is represented densely with a bitvector: - e.g., vertex subset $U$ of 10-vertex graph $G=(V,E)$: $U \subset V = \{1,0,0,0,1,0,0,0,0,1\}$ ```plaintext procedure EDGEMAP_DENSE(G, U, F, C): result = {} parallel for i in {0, ..., |V|-1} do: if (C(i) == 1) then: foreach v in in_neighbors(i) do: if v ∈ U and F(v, i) == 1 then: add i to result; if (C(i) == 0) break; return result; ``` ```plaintext procedure EDGEMAP_SPARSE(G, U, F, C): result = {} parallel foreach v in U do: parallel foreach v2 in out_neighbors(v) do: if (C(v2) == 1 and F(v, v2) == 1) then: add v2 to result remove duplicates from result return result; ``` Cost of EDGEMAP_DENSE? 
For each unvisited vertex, quit searching as soon as some parent is found Could be as low as $O(|V|)$ Also no synchronization needed (“gather” results rather than “scatter”) Ligra on one slide - **Entities:** - Graphs - Vertex subsets (represented sparsely or densely by system) - EDGEMAP and VERTEXMAP functions ```plaintext procedure EDGEMAP(G, U, F, C): if (|U| + sum of out degrees > threshold) return EDGEMAP_DENSE(G, U, F, C); else return EDGEMAP_SPARSE(G, U, F, C); procedure VERTEXMAP(U, F): result = {} parallel for u ∈ U do: if (F(u) == 1) then: add u to result; return result; ``` Page rank in Ligra ```plaintext r_cur  = {1/|V|, ..., 1/|V|}; r_next = {0, ..., 0}; diff   = {0, ..., 0}; procedure PRUPDATE(s, d): atomicIncrement(&r_next[d], r_cur[s] / vertex_degree(s)); procedure PRLOCALCOMPUTE(i): r_next[i] = alpha * r_next[i] + (1 - alpha) / |V|; diff[i] = |r_next[i] - r_cur[i]|; r_cur[i] = 0; return 1; procedure COND(i): return 1; procedure PAGERANK(G, alpha, eps): frontier = {0, ..., |V|-1} error = HUGE; while (error > eps) do: frontier = EDGEMAP(G, frontier, PRUPDATE, COND); frontier = VERTEXMAP(frontier, PRLOCALCOMPUTE); error = sum of diff;   // this is a parallel reduce swap(r_cur, r_next); return error ``` Question: can you implement the iterate-until-convergence optimization we previously discussed in GraphLab? (if so, what GraphLab scheduler implementation is the result equivalent to?) 
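Putting the two EDGEMAP variants and the threshold dispatch together, here is a sequential Python sketch using BFS as the client (Ligra's loops are parallel; the threshold value and the compare-and-swap stand-in are illustrative choices, not Ligra's actual tuning):

```python
# Illustrative Python sketch of Ligra's EDGEMAP dispatch (names follow the slides).
# Everything runs sequentially here; Ligra parallelizes the marked loops.

def edgemap_sparse(out_nbrs, U, F, C):
    result = set()                       # set() also handles duplicate removal
    for v in U:                          # "parallel foreach v in U"
        for v2 in out_nbrs[v]:
            if C(v2) and F(v, v2):
                result.add(v2)
    return result

def edgemap_dense(out_nbrs, in_nbrs, U, F, C):
    result = set()
    for i in in_nbrs:                    # "parallel for i in {0, ..., |V|-1}"
        if C(i):
            for v in in_nbrs[i]:
                if v in U and F(v, i):
                    result.add(i)
                if not C(i):             # quit as soon as some parent is found
                    break
    return result

def edgemap(out_nbrs, in_nbrs, U, F, C, threshold):
    work = len(U) + sum(len(out_nbrs[v]) for v in U)
    if work > threshold:                 # big frontier -> dense (gather) variant
        return edgemap_dense(out_nbrs, in_nbrs, U, F, C)
    return edgemap_sparse(out_nbrs, U, F, C)

def bfs(out_nbrs, in_nbrs, r):
    parents = {v: -1 for v in out_nbrs}
    parents[r] = r

    def update(s, d):                    # sequential stand-in for compare-and-swap
        if parents[d] == -1:
            parents[d] = s
            return True
        return False

    def cond(i):
        return parents[i] == -1

    frontier = {r}
    while frontier:
        frontier = edgemap(out_nbrs, in_nbrs, frontier, update, cond, threshold=4)
    return parents

out_nbrs = {0: [1, 2], 1: [3], 2: [3], 3: []}
in_nbrs = {0: [], 1: [0], 2: [0], 3: [1, 2]}
parents = bfs(out_nbrs, in_nbrs, 0)
print(parents)
```

Note that only the dispatch decision changes between variants; `F` and `C` are untouched, which is why the sparse/dense optimization applies to every algorithm written against EDGEMAP.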
Ligra summary - Abstract graph operations as data-parallel operations over vertices and edges - Emphasizes graph traversal (potentially small subset of vertices operated on in a data parallel step) - These basic operations permit a surprisingly wide space of graph algorithms: - Betweenness centrality - Connected components - Shortest paths See Ligra: A Lightweight Graph Processing Framework for Shared Memory [Shun and Blelloch 2013] Green-Marl A domain-specific language for computations on graphs Procedure PageRank(G: Graph, thresh, alpha: Double, max_iter: Int, PR: Node_Prop<Double>(G)) { Double diff = 0; Int cnt = 0; Double N = G.NumNodes(); G.PR = 1 / N; Do { diff = 0.0; Foreach (t: G.nodes) { Double val = (1 - alpha) / N + alpha * sum(w: t.InNBrs) (w.PR / w.outDegree()); t.PR <= val @ t; // modification not visible until end of t loop diff += |val - t.PR|; } cnt++; } While (diff > thresh && cnt < max_iter); } Graph-specific iteration - Betweenness-centrality example: - Iteration over sets - BFS/DFS iteration over graphs ```plaintext Procedure Compute_BC( G: Graph, BC: Node.Prop<Float>(G)) { G.BC = 0; // initialize BC Foreach(s: G.Nodes) { // define temporary properties Node.Prop<Float>(G) Sigma; Node.Prop<Float>(G) Delta; s.Sigma = 1; // Initialize Sigma for root // Traverse graph in BFS-order from s InBFS(v: G.Nodes From s)(v!=s) { // sum over BFS-parents v.Sigma = Sum(w: v.UpNbrs) { w.Sigma; }; } // Traverse graph in reverse BFS-order InRBFS(v!=s) { // sum over BFS-children v.Delta = Sum (w:v.DownNbrs) { v.Sigma / w.Sigma * (1 + w.Delta) }; v.BC += v.Delta @s; // accumulate BC } } } ``` Summary: three domain-specific systems for expressing operations on graphs - **GraphLab** - Programmer thinks about vertices exchanging data - Asynchronous execution as a key feature (exact results not needed in many ML algorithms) - **Ligra** - Programmer thinks more about graph traversal (computation happens when code “traverses” to node) - Traversal expressed using flat 
data-parallel operators - **Green-Marl** - Add graph-specific iteration concepts to a language - Programmer thinks about traversal, but codes it up him/herself - Compiler smarts are largely in optimizing application-specified iteration **Not discussed today:** Google’s Pregel Elements of good programming system design - **Simple:** - A small number of key primitives and operations - Ligra: only two operations! - GraphLab: run computation per vertex, force neighbors to run using signaling - Design gets messy with all the scheduling options - Few primitives = focus optimization on these primitives - Ligra example: sparse vs. dense optimization (developed for BFS) but is applied to all algorithms written using EDGEMAP/VERTEXMAP - **Expressive:** - Composition/use of primitives allows for a broad space of uses (wide application scope, even if it is limited to a domain) - **Optimized for the common case** - Expression: common operations are easy to express, intuitive, and efficient - Most important optimizations can be performed by system Streaming graph computations - Would like to process large graphs on a single machine - Managing clusters of machines is difficult - Partitioning graphs is expensive and difficult - Challenge: cannot fit all edges in memory for large graphs (although all vertices may fit) - Example: 1 billion edge graph - Consider sparse representation from assignment 3: each edge represented twice in structure (incoming/outgoing): 8 bytes per edge for adjacency structure - Must also store per-edge values (e.g., 4 bytes for a per-edge weight) - ~12 GB of memory for edge information - Graph algorithms traverse edges, which is random access (single operation on a graph might require billions of tiny loads from disk) Streaming graph computations - Would like to process large graphs on a single machine - Challenge: cannot fit all edges in memory for large graphs (although all vertices may fit) - Caching subset of vertices in memory is unpredictable/difficult, 
clustering/partitioning also requires significant amounts of memory... Parallel sliding window approach - Assumption: graph operations to update a vertex require only immediate neighboring information in the graph - Main idea: organize the graph data structure so that graph operations require only a small number of large, bulk loads/stores to disk - Partition graph vertices into P intervals - Vertices, and only the incoming edges to those vertices, are stored together in a shard (total of P shards) Sharded graph representation - Partition graph vertices into intervals (sized so that each shard fits in memory) - Vertices, and only the incoming edges to those vertices, are stored together in a shard - Sort edges in a shard by source vertex id Data required to process vertices in interval 1 Notice: to construct subgraph containing vertices in interval 1 and their incoming and outgoing edges, only need to load contiguous information from other P-1 shards Writing updated outgoing edges requires P-1 bulk writes Sharded graph representation - Partition graph vertices into intervals (sized so that each shard fits in memory) - Vertices, and only the incoming edges to those vertices, are stored together in a shard - Sort edges in a shard by source vertex id Data required to process vertices in interval 2 Observe: due to sort of incoming edges, iterating over all intervals is a sliding window over the shards PageRank in GraphChi - GraphChi is a system that implements the out-of-core sliding window approach. 
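The sharding scheme can be sketched as follows (illustrative Python, not GraphChi's on-disk format; the P vertex intervals are sized uniformly here for simplicity):

```python
# Sketch of GraphChi-style sharding: vertices split into P intervals; shard p
# holds every edge whose *destination* falls in interval p, sorted by source id
# so that a pass over the intervals reads each shard as a sliding window.

def build_shards(edges, num_vertices, P):
    """edges: list of (src, dst) pairs. Returns P shards of in-edges."""
    interval_size = (num_vertices + P - 1) // P   # uniform intervals (a simplification)
    shards = [[] for _ in range(P)]
    for src, dst in edges:
        shards[dst // interval_size].append((src, dst))
    for shard in shards:
        shard.sort()                              # sort by source vertex id
    return shards

edges = [(0, 3), (2, 1), (1, 0), (3, 2), (0, 1)]
shards = build_shards(edges, num_vertices=4, P=2)
print(shards)
```

Because each shard is sorted by source, the edges whose sources lie in the interval currently being processed form one contiguous run per shard, which is what makes the P-1 bulk loads and writes possible.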
**PageRank in GraphChi** ```plaintext typedef: VertexType float Update(vertex) begin var sum ← 0 for e in vertex.inEdges() do sum += e.weight * neighborRank(e) end vertex.setValue(0.15 + 0.85 * sum) broadcast(vertex) end ``` Alternative model: assume vertex data can be kept in memory and redefine neighborRank() function ```plaintext typedef: EdgeType { float weight; } float[] in_mem_vert neighborRank(edge) begin return edge.weight * in_mem_vert[edge.vertex_id] end ``` Take per-vertex rank and distribute to all outbound edges (memory inefficient: replicates per-vertex rank to all edges) Performance on a Mac mini (8 GB RAM) - Performance remains stable as graph size is increased
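The "vertex values in memory" variant can be sketched by streaming edges shard by shard while keeping a single in-memory rank array. Note this sketch uses the normalized `(1 - alpha)/n` damping term so ranks sum to 1, rather than the flat `0.15` in the pseudocode above; the toy graph and shard layout are illustrative:

```python
# Sketch of the in-memory-vertex-values model: ranks live in one array, edges
# are consumed shard by shard (plain lists stand in for bulk disk loads; the
# edge "weight" is implicitly 1/out_degree, as in PageRank).

def pagerank_streamed(shards, out_degree, n, alpha=0.85, iters=50):
    rank = [1.0 / n] * n
    for _ in range(iters):
        acc = [0.0] * n
        for shard in shards:                      # one bulk "load" per shard
            for src, dst in shard:
                acc[dst] += rank[src] / out_degree[src]
        rank = [(1 - alpha) / n + alpha * s for s in acc]
    return rank

# A toy 3-vertex graph pre-sharded by destination into P = 2 shards
shards = [[(0, 1), (2, 0)],        # destinations in interval {0, 1}
          [(0, 2), (1, 2)]]        # destinations in interval {2}
rank = pagerank_streamed(shards, out_degree=[2, 1, 1], n=3)
print(rank)
```

Random access is confined to the in-memory `rank` array; the edge data itself is only ever read sequentially.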
DeepShuai: Deep Reinforcement Learning based Chinese Chess Player Chengshu Li 1 Kedao Wang 1 Zihua Liu 1 Abstract Chinese chess has long been viewed as one of the most popular board games in China. It has a larger state and action space than chess, hence greater difficulty for AI to conquer. Much previous work has focused on search-based algorithms or simple TD learning to tackle Xiangqi. In this project, however, we propose a deep reinforcement learning based algorithm inspired by AlphaGo. We first used supervised learning to initialize the player agent, then used reinforcement learning to update the player by playing against a commercial Xiangqi agent called Elephant Eye. We are able to achieve a consistent 56% win rate over Elephant Eye evaluated in 100 games. 1. Introduction Xiangqi is a traditional chess game much like chess itself, with unique pieces and different rules for moving them. In terms of game tree complexity, Xiangqi surpasses Chess, with a branching factor of 38 and a game-tree complexity of roughly 10^150, compared to a branching factor of 35 and a game-tree complexity of roughly 10^123 for Chess. The board for Xiangqi is also larger: a 10 x 9 board with 16 pieces on each side. In addition, there are additional constraints imposed on the way Xiangqi pieces move. For instance, the Xiangqi equivalent of the Chess King cannot leave a fixed "Palace", a 3 x 3 grid in the middle of each side. Differences aside, there is significantly less research on autonomous players for Xiangqi. Most existing research explores search-based methods and evaluates against the commercial agent Elephant Eye. In this project, we propose a deep method of encoding game states and devising policies to play Xiangqi, inspired by methods and principles employed by AlphaGo. The paper is structured as follows: Section 2 will introduce current research on Xiangqi agents. Section 3 will introduce our dataset and environment. 
Sections 4 and 5 will discuss the approaches we experimented with and their experimental results. Sections 6 and 7 will offer insights on challenges and future work, and conclude. 2. Related Work Early reinforcement learning based agents for board games such as Chess or Xiangqi originated with TD-Gammon by Tesauro (Tesauro, 1995). In his paper, he first used a neural network to approximate the value of a board state and applied TD learning to Backgammon, a popular board game at the time. Inspired by Tesauro’s work, Yin et al. first proposed applying Temporal Difference learning to Xiangqi (Yin & Fu, 2012). Current Xiangqi research falls primarily into two categories: 1) advancing search-based algorithms, and 2) new state evaluation functions. An example of the former category is the work of Liu et al., who devised a variation of alpha-beta pruning for Xiangqi aimed at state search in end games (Liu & Guo, 2012). An example of the latter category is the work of Fu et al., who applied a three-layer feed-forward neural network, combined with information from prior heuristics, to obtain a new evaluation function for the value of a position (Fu & Yin, 2012). However, with the rise of deep learning methods, we now have tools to encode more complex information about the board state. Such methods have been employed in other games like Chess or Go. Lai first applied deep reinforcement learning to produce Giraffe. In this work, a multi-layer perceptron network was used to encode the value of a board state for TD learning (Lai, 2015). A more recent and highly impactful work is AlphaGo by DeepMind. In this work, a deep convolutional neural network is used to approximate both the policy and the value of a game board (Silver et al., 2016). ![Figure 1. Example Xiangqi Board](image) In our project, we closely follow strategies employed by AlphaGo: we first use supervised training followed by reinforcement learning. 3. Dataset 3.1. 
To complete the aforementioned task, we scraped 70,000 complete expert games from a Xiangqi database, each with an average of 100 moves. Each move is in Xiangqi notation style. We also built a Xiangqi environment that allows us to virtually replay these 70,000 games to produce 5,595,966 unique board states, excluding the draws. We used this dataset in the supervised setting by splitting it into 80% training data and 20% validation data. To prevent the network from memorizing game states instead of evaluating the board, we split the dataset by game: the board states from one game go either entirely into the training set or entirely into the validation set.

<table> <thead> <tr> <th>Results</th> <th>Percentage</th> </tr> </thead> <tbody> <tr> <td>Red win</td> <td>37.78%</td> </tr> <tr> <td>Black win</td> <td>27.90%</td> </tr> <tr> <td>Draw</td> <td>34.32%</td> </tr> </tbody> </table>

Table 1. Distribution of Game Results in the Scraped Dataset

3.2. Environment

As mentioned above, we implemented a Chinese chess environment that is the core of our agent's interaction with its opponent. The environment is used as follows: 1) it converts games from Xiangqi notation into the independent board positions used for supervised learning; 2) it generates all possible next board states following the current one, so that in the reinforcement learning setting our network can evaluate each candidate state and pick the one with the highest value; 3) it integrates with Elephant Eye (Eleeye), a commercial Xiangqi agent; and 4) it allows the network to self-play to learn end-game scenarios.

4. Approach

In this section, we discuss in detail the three methods we experimented with to create a Xiangqi agent: a value based method, a policy based method, and an actor-critic method.
In the value based method, our network approximates the value of the given state for each player; in the policy based method, our network instead predicts the policy, namely the next move taken by the player given the board state; in the actor-critic method, a network predicts the next move and approximates an advantage estimate for the predicted policy. The network architectures for these methods are very similar. Each network takes a vectorized, one-hot encoded board state as input. They share the same network stem and differ only in the last few output layers. The shared portion is a 3-layer Residual Network with 2048 hidden neurons per layer, as demonstrated in Figure 2. No convolution layer is used in any of the methods. Each method can be split into three distinct phases: supervised learning, reinforcement learning against Eleeye, and reinforcement learning through self-play. The first of the three stages is supervised learning. Depending on the method, we create training targets from each of the 5,595,966 unique game positions scraped from the Xiangqi database. The point of this supervised learning stage is to bootstrap reinforcement learning with prior information learned from expert moves. The second stage is reinforcement learning by playing against Eleeye. In this stage, our network is updated with both our experience playing against Eleeye and Eleeye's experience playing against us. The last stage is self-play. In this stage, our agent plays against a past variant of itself to further increase the strength of its policies. In the following subsections, we discuss in detail each method we experimented with.

4.1. Value Based Method

4.1.1. Supervised Learning

We trained the value network via supervised learning, followed by Temporal Difference learning.
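Concretely, the shared residual stem described above can be sketched as follows. This is a numpy illustration using the dimensions stated in the paper (1530-d input, 3 layers of 2048 units); the plain fully-connected residual form, ReLU placement, and random initialization are our assumptions, since the exact block layout is not specified.

```python
import numpy as np

# Dimensions from the paper: a 1530-d one-hot board vector feeding a
# 3-layer, 2048-unit residual stem. The fully-connected residual form,
# ReLU placement, and initialization below are illustrative assumptions.
rng = np.random.default_rng(0)
D_IN, HIDDEN, N_BLOCKS = 1530, 2048, 3

def init(dim_in, dim_out):
    W = rng.normal(0.0, 0.01, (dim_in, dim_out)).astype(np.float32)
    b = np.zeros(dim_out, dtype=np.float32)
    return W, b

W_in, b_in = init(D_IN, HIDDEN)
blocks = [init(HIDDEN, HIDDEN) for _ in range(N_BLOCKS)]

def relu(x):
    return np.maximum(x, 0.0)

def stem(x):
    """Shared stem: project the board vector into the hidden space, then
    apply residual layers (each adds its input back to its output)."""
    h = relu(x @ W_in + b_in)
    for W, b in blocks:
        h = relu(h + h @ W + b)  # skip connection
    return h

h = stem(np.zeros(D_IN, dtype=np.float32))
```

The method-specific output layers (a scalar value head, two 90-way policy heads, or both) would then be attached on top of `h`.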
We trained a deep residual value network that takes board positions as input and outputs a single value between -1 and 1, with -1 being a lost game, 0 being a draw, and 1 being a win. The input is the chess board encoded as a 1-D one-hot vector, with dimension $board\_height \times board\_width \times n\_pieces = 10 \times 9 \times 17 = 1530$. The pieces dimension encodes all pieces from both players (King, Assistant, Bishop, Knight, Rook, Cannon, Pawn, Pawn-across-river), as well as the empty cell, for a total of $8 + 8 + 1 = 17$ piece types. We doubled the training data by switching perspective between the red and black players. For example, a red player's board position corresponding to a win becomes a black player's position corresponding to a loss when the board is rotated 180 degrees. The labels are the discounted sum of future rewards:

$$R = \sum_t \gamma^t z$$

$$z = \begin{cases} +1 & \text{win} \\ 0 & \text{draw} \\ -1 & \text{loss} \end{cases}$$

where the reward $z$ is non-zero only at the end of the game. By using the discounted sum of future rewards as the training label, the network prioritizes value estimates for states closer to a terminal state.

4.1.2. Reinforcement Learning

We subsequently trained the RL agent via reinforcement learning, initializing the same value network with the weights learned from supervised learning. We trained the RL agent via TD learning, minimizing the following objective:

$$\mathbb{E} [r(s, a) + \gamma V(s') - V(s)]$$ (3)

where the transition $s' \leftarrow s, a$ is deterministic. Each action is chosen $\epsilon$-greedily. At each step, we generate all legal next states of the chess board and feed them through the value network to obtain values for all next states. With probability $1 - \epsilon$ the agent picks the action yielding the highest value for the next state; with probability $\epsilon$ the agent chooses an action at random. $\epsilon$ is a hyperparameter.
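The ε-greedy selection over candidate next states can be sketched as follows. The value function here is a stand-in for the trained network, and all names are illustrative.

```python
import random

def choose_next_state(legal_next_states, value_fn, epsilon, rng=None):
    """Epsilon-greedy selection over all legal next board states: with
    probability 1 - epsilon take the state the value network rates
    highest, otherwise take a legal state uniformly at random."""
    rng = rng or random.Random(0)
    if rng.random() < epsilon:
        return rng.choice(legal_next_states)
    return max(legal_next_states, key=value_fn)

# Stand-in for the trained value network: a fixed score per state.
states = ["s0", "s1", "s2"]
scores = {"s0": 0.1, "s1": 0.9, "s2": -0.3}
greedy = choose_next_state(states, scores.get, epsilon=0.0)   # exploit
explore = choose_next_state(states, scores.get, epsilon=1.0)  # explore
```

In practice `legal_next_states` would come from the environment's move generator and `value_fn` would batch the states through the network.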
We used double Q-learning (Van Hasselt et al., 2016) as the reinforcement learning algorithm. Double Q-learning uses two sets of weights, one for the Q-network and one for the target network. The two networks are swapped with some probability (we chose 50%) after a fixed number of iterations. Experience replay is used to break the temporal dependency between actions. Experience replay helps the network converge faster when the reward signal is sparse (Lin, 1993), for instance when an agent only receives a reward at the end of the game. Using naive TD learning with exploration, it would take an unreasonable number of trials to propagate the reward back through the early states. We allocated an experience buffer to store $(state, action, next\_state, reward)$ tuples. At back-propagation time, we sample a batch of experience tuples from the buffer to update the network weights.

4.2. Policy Based Method

4.2.1. Supervised Learning

We then formulated the problem from a policy perspective. A policy network takes a state as input and outputs action probabilities. In the domain of Chinese Chess, a brute-force enumeration of all possible moves is huge, due to the large product of grid cells, piece types, and move options. We therefore chose an efficient output space of two vectors, each of size $board\_width \times board\_height = 10 \times 9 = 90$. One vector represents the from-coordinate of a piece, while the other vector represents the to-coordinate of a piece. The policy network takes the board state as input and outputs both vectors. We trained the policy network on the scraped state-action pairs.

4.2.2. Reinforcement Learning

During the reinforcement learning phase, we used the network learned in the supervised learning phase to initialize the agent. When taking a step, we feed the current board state into the network.
The network outputs two probability distributions, over from-coordinates and to-coordinates. We calculate the joint probability of all legal moves, and select the move with the highest joint probability:

\[ P_{\text{move}} = P_{\text{from}} \cdot P_{\text{to}} \] (4)

During reinforcement learning, we used vanilla policy gradient to train our agent:

\[ \nabla_\theta J(\theta) = \mathbb{E} [\nabla_\theta \log \pi(s, a) A(s, a)] \] (5)

where the advantage is

\[ A = \sum_t \gamma^t z \] (6)

We initially played our agent against Elephant Eye, an open-source Chinese Chess AI. Since our agent initially loses all its games against Elephant Eye, experience from both the RL agent and Elephant Eye is used for training. The intuition is that, by providing balanced positive (winning) and negative (losing) examples, our network is able to learn good behavior and at the same time avoid bad behavior.

4.3. Actor-Critic Method

Policy gradient is prone to high variance, where a tiny policy gradient step in the wrong direction can result in a disastrous policy useless for further learning. To reduce the variance of policy updates, we used Actor-Critic learning to train our RL agent, initializing the policy network and value network with the weights learned in the supervised learning phase. The Actor-Critic method uses one policy network to predict the actions, and another network to predict the value of a state.
The policy network is trained via policy gradient:

\[ \nabla_\theta J(\theta) = \mathbb{E} [\nabla_\theta \log \pi(s, a) A(s, a)] \] (7)

The value network is trained via TD learning:

\[ \min \{ \mathbb{E} [r(s, a) + \gamma V(s') - V(s)] \} \] (8)

We used Generalized Advantage Estimation (Schulman et al., 2015), where the advantage is defined as:

\[ A_t^{GAE} = \sum_l (\gamma \lambda)^l \delta_{t+l} \] (9)

\[ \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t) \] (10)

We experimented with two network architectures:

- Separate networks: two separate networks, one for the policy and one for the value.
- Shared network: a single network with shared bottom layers that outputs both the action probabilities and the state value. The hope is that multi-task learning helps the network transfer knowledge between the two tasks.

Both architectures are first trained via supervised learning, and then via actor-critic reinforcement learning.

5. Experiment

In this section, we present experimental results for all three methods in both the supervised learning stage and the reinforcement learning stage. All of our experiments were run on a single NVIDIA GeForce GTX TITAN GPU.

5.1. Value Based Method

5.1.1. Supervised Learning

As suggested in the previous section, the output of our value network is a regression value. After 6 epochs of training, we achieve a validation loss of 0.1877 with discount factor \( \gamma = 0.98 \). A dropout of 0.5 is used; no batch normalization is applied. For reference, we employed the same kind of value evaluation scheme as AlphaGo, which achieved a validation loss of 0.23.

5.1.2. Reinforcement Learning

In the reinforcement learning stage, the performance of TD learning proved less than satisfactory. We applied TD learning with experience replay to both our agent's experience and Eleeye's experience.
After 7,000 epochs of 2,500 games each against Eleeye, the value network reduces the Q-learning error objective down to 0.05. We experimented with multiple representations of the board: both a one-hot encoding of all pieces and a direct representation of the board using a heuristic value for each piece. However, the win rate against even the easiest Eleeye setting remains 0, evaluated over the last 100 games after 7,000 epochs of training.

5.2. Policy Based Method

5.2.1. Supervised Learning

For the policy network, the output becomes two vectors of length 90, encoding the board position the player should move from and move to. After 11 epochs of training, we achieve a validation accuracy of 46.29% on the move-from position and 60.33% on the move-to position. Batch normalization and a dropout keep probability of 0.4 are used. For reference, AlphaGo reached 55% validation accuracy in supervised training of its policy network, with only one output distribution, because every stone and position are encoded equivalently in Go. The training graph for supervised learning is provided in Figure 4.

5.2.2. Reinforcement Learning

In the reinforcement learning stage, the agent acts by choosing the maximum-probability legal move, paired with the REINFORCE algorithm to update the policy. Once again, however, playing against Eleeye does not offer much gain: our win rate against Eleeye stays consistently at 0%. On the other hand, we observe some results from self-play. For self-play, two agents are initialized to the policy network from supervised training, but only one agent receives gradient updates. We define a generation as an extended period of play ending after either 2,000 epochs or once the updated agent has an 80% win rate over the frozen agent. We observe that the agent quickly learns from playing against itself: after about 5 generations, the win rate of the updated agent plateaus at 50%, by generation 6.
Win rate with respect to training epochs is shown for each of the 6 generations in Figure 3. However, when we evaluate this agent against Eleeye, our win rate is 1%. When we manually examine each step our agent makes, we find that it learns some standard opening moves, as well as taking and defending crucial pieces.

5.3. Actor-Critic Method

5.3.1. Supervised Learning

For the actor-critic method, we combine the value network and policy network from the two previous methods. The output of our network is both the single-dimension regression value and the two position vectors of length 90. After 13 epochs of training, we achieve a validation loss of 0.192 on value regression, and validation accuracies of 42.49% on the move-from position and 56.91% on the move-to position for policy prediction. Batch normalization and a dropout of 0.4 are used. The training graph is provided in Figure 5.

Figure 5. Policy-Value Joint Network Accuracy over time

6. Challenges and Future Work

The main challenge we faced throughout the project is the reinforcement learning part. We achieved close to state-of-the-art results in the supervised learning phase of all three methods, yet the RL results for the first two methods are less than satisfactory. For our RL models, training and validation loss steadily decrease, but we fail to convert the decreasing loss into actual game wins. With our actor-critic method, we did achieve over a 50% win rate. Yet after reading the logs, we realized that we almost always won the game in exactly the same way. The reason is that Elephant Eye is a deterministic AI agent, and we also only won when our agent played first. As training progressed, we failed to further increase our win rate and started to overfit to this particular opponent. We tried using $\epsilon$-greedy exploration to add randomness to the system, to see if our agent could beat Elephant Eye when Elephant Eye plays first.
We have not achieved good results: the model quickly diverged back to a 0% win rate. Another challenge is that, since we lose most games to Elephant Eye in the reinforcement learning phase, our agent essentially "unlearns" what it learned from supervised learning, especially for the first two methods, because it decreases the probability of its actions in a lost game. Furthermore, we found the policy gradient method relatively unstable, and we experienced multiple loss and gradient explosions when training REINFORCE. For future work, we would love to try Monte Carlo tree search (MCTS) on top of our value network to obtain a more accurate heuristic. We also want to introduce trust region policy optimization (TRPO), which has proven more stable than the vanilla policy gradient method. Furthermore, we need to develop a better exploration mechanism for the actor-critic system when playing against a deterministic AI, to avoid overfitting. Lastly, we want to try Asynchronous Actor-Critic to speed up training.

7. Conclusion

In conclusion, in this study we proposed and experimented with several deep reinforcement learning based approaches for an autonomous Xiangqi agent: a value-based network with TD learning, a policy-based network with REINFORCE, and an actor-critic-based network. Among them, the actor-critic method significantly outperforms the other variants and achieved a 56% win rate against Eleeye. This study shows that Xiangqi, much like Go or Chess, can be tackled with non-traditional methods other than search. This work also opens the gate to much future work, for we have mapped out many potential pitfalls and problems future research may encounter in attempting to create stronger agents.

8. Acknowledgment

Throughout this project, all team members contributed equally in designing the model, running experiments, and drafting the report, milestone, and poster.
References
Federated Authorization Best Practices

- Recommended Approach for "In-Band" Authorization
- Handling Groups and Non-Unique Values
- Delegating Entitlement Identification
- A Note Regarding eduPersonAffiliation
- Examples
  - General Library Access
  - WebAssign
- Appendix A.
  - Background
  - Definitions
  - Challenges and Incentives
  - Solution Patterns
  - Considerations for "In-Band" Authorization

This is a DRAFT for consideration.

Recommended Approach for "In-Band" Authorization

The `eduPersonEntitlement` attribute or "claim" is the recommended way to carry authorization data along with ("in-band" to) authentication. It is also suitable for use with supplemental lookup approaches such as LDAP, SAML Attribute Queries, or OpenID offline scopes, to name a few. This attribute does not itself define any specific values for authorization, but is defined to carry only URIs so that its values are inherently unique and unambiguous. It supports (but does not require) the use of a registry of shared values, so it scales to address both shared and service-specific values. When considering a new use case, deployers should review any registries of common entitlement values for any that may match, but do not bend or force-fit a definition if it doesn't suit. When creating new "shared" values, one should generally do so in as small a scope as practical initially and expand the scope as it becomes beneficial to do so. Note that REFEDS maintains a registry for all of eduPerson that includes standard `eduPersonEntitlement` values. Both URLs and URNs have advantages and disadvantages as entitlement values. URNs tend to work well when creating "shared" values that expand beyond the scope of individual services. URLs tend to work well for service-specific values.

Handling Groups and Non-Unique Values

Groups (and roles, used interchangeably in this discussion) are a common means of associating application permissions with sets of subjects, but they are generally named in ad hoc fashion.
Frequently when LDAP is used, groups may be collected under application-specific OU "trees", or at least identified via Distinguished Name (DN), which makes them potentially unique but not readily mappable into a URI or suitable for use outside the administrative domain or in modern federated protocols. In addition, the "short" names of such groups frequently collide with each other. Consider how often group names such as "admin", "user", or many other similar names tend to be encountered when managing systems. Now consider what happens if such a group name is supplied to an application and an accident of configuration causes membership in the "admin" group for one service to be supplied to another. This is why URIs are a necessary precaution. At least for service-specific groups, it is a suggested practice to use a service's unique identifier as a URL prefix for service-specific entitlement values. Turning "raw" group or role names into entitlements by prefixing them in this way makes it very easy to create automated rules for both constructing the values and for limiting data release. This approach works well with SAML when sensible entityIDs are used. It also works with OpenID Connect provided an appropriate client ID is used, or alternatively a "scope" could be created in the form of a URL to both identify the right data and prefix the values. Scopes are typically not URLs but certainly can be. For example, if a service is identified as "https://sp.example.org/saml2", then entitlements might be constructed by an Identity Provider with values like: - `https://sp.example.org/saml2/admin` - `https://sp.example.org/saml2/publishers` - `https://sp.example.org/saml2/viewers` - etc. Note that this does not constrain an application's own naming scheme for groups or authorizations. If required by application constraints on group names, mapping these values back into locally accepted ones is straightforward. 
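As an illustration of this prefixing approach, the following sketch mints entitlement values from local group names and maps them back; the entityID and group names come from the example above, and the helper functions are illustrative, not a standard API.

```python
def group_to_entitlement(entity_id, group):
    """Prefix a service-specific group name with the service's unique
    identifier (e.g. its SAML entityID) to mint an unambiguous
    eduPersonEntitlement value."""
    return entity_id.rstrip("/") + "/" + group

def entitlement_to_group(entity_id, entitlement):
    """Map an entitlement back to a local group name, or None if the
    value was minted for a different service."""
    prefix = entity_id.rstrip("/") + "/"
    return entitlement[len(prefix):] if entitlement.startswith(prefix) else None

sp = "https://sp.example.org/saml2"
ent = group_to_entitlement(sp, "admin")
```

Because the prefix is checked on the way back in, an "admin" value minted for a different service is rejected rather than silently granting access.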
Suffixes can certainly be chosen to map directly to the local names if this information is available. On the other hand, simply supporting such group or role names directly is advisable and should cause no particular difficulty.

Delegating Entitlement Identification

While the advice in the previous section is appropriate for service-specific scenarios, it does not account for entitlements with a broader scope and may still require some degree of coordination. It tends to work best (though certainly not exclusively) within the enterprise or for software-as-a-service deployments. When lacking a tight coupling between the management of a service and the administrative domain(s) providing the authorization values, services should consider providing user interface features that allow each administrative domain (i.e., each Identity Provider) to supply, as a configuration setting, the entitlement value(s) to map into particular local groups or permission sets. This allows the establishment of entitlement values to be completely delegated to the administrative domain asserting them, getting the service out of the business of worrying about the particulars. This approach requires more up-front development, but carries the least ongoing cost for everyone. It is of course entirely compatible with the prefixing suggestion above. Absent such a configuration feature, it is generally better for services to establish the values to use than to leave this up to each administrative domain, because the latter quickly becomes unmanageable for the service operator. This also affords the opportunity to identify appropriately-defined shared values where possible.
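The delegated-configuration approach described above can be sketched as follows. All IdP identifiers, entitlement values, and role names here are invented for illustration.

```python
# Hypothetical service-side registry: each administrative domain (IdP)
# configures which of its own entitlement values map to a local
# permission set. All identifiers and values here are invented.
idp_config = {
    "https://idp.school-a.example": {"urn:example:school-a:lms-admins": "admin"},
    "https://idp.school-b.example": {"https://idp.school-b.example/ent/staff": "admin"},
}

def local_role(idp, entitlements):
    """Resolve a user's local role from the entitlement values asserted
    by the IdP that authenticated them."""
    mapping = idp_config.get(idp, {})
    for value in entitlements:
        if value in mapping:
            return mapping[value]
    return None

role_a = local_role("https://idp.school-a.example",
                    ["urn:example:school-a:lms-admins"])
role_none = local_role("https://idp.school-a.example", ["urn:unrelated:value"])
```

The service never needs to know how each domain mints its values; it only compares what the IdP asserts against what that same IdP configured.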
A Note Regarding eduPersonAffiliation While it is a common practice historically to use eduPersonAffiliation (or eduPersonScopedAffiliation) for authorization, the values of this attribute are not precisely defined by design, and vary quite a bit across organizations and cultural contexts (e.g., the meanings of "staff" and "employee" can be very surprising due to language differences). While there are scenarios in which it may be sufficient to approximate an authorization rule with some of the more commonly understood affiliation values (particularly "member" and "student"), this should never be done with resources of any significant value because there will be substantial numbers of exceptions on both sides of the line with any rule based on them. Using an entitlement instead allows the home organization to use their internal affiliation data to populate a value for the majority of cases while identifying exceptions that should (or shouldn't) get the value at the same time, providing a much more accurate answer. Unfortunately, supporting both options at once is difficult unless a rule can be applied that recognizes which organizations wish to rely on entitlements; the absence of an entitlement value may cause a service to fall back to the wrong answer derived from an affiliation. This would, though, at least allow for positive exceptions to be handled easily, if not negative ones. Examples General Library Access The most commonly federated authorization use case is library resource access under "standard" contract terms that cover most of an institution's active community and those physically present at a library, but typically excludes guests and some other types of non-traditional affiliates. An eduPersonEntitlement value of "urn:mace:dir:entitlement:common-lib-terms" was defined in 2003 to address this use case and should be used any time this kind of arrangement applies. 
Organizations can apply this value to the appropriate people without regard for the particular service being accessed, since it is by design a general value that can be applied to any service that uses this kind of standard contract language. Note that this value is not meant as a generic signal to any library (or other) service that "access should be granted". If that were the meaning, then every single use of the value would have to be individually managed to ensure it aligns with the terms of use for a given service. It is not intended on its own to signal that a given organization's users have access to any given service. As with the (mis)use of affiliation, considerations of organizational access, billing, etc. have to be addressed separately. The point of a common definition is for the home organization to be able to apply it on the basis of who the subject is, not what the service is. Misusing this value for contractual arrangements other than those intended undermines the purpose of the common value, just as misusing affiliations for authorization undermines the ability of organizations to freely release those values without examining every service's (mis)use of them beforehand.

WebAssign

WebAssign is a Learning Management System and was an early adopter of SAML. Like most applications of this sort, it requires the ability to group students and instructors into courses or sub-groupings of courses. It also chose to allow for in-band enrollment of students. To do so, WebAssign followed one of the patterns described above: it provides a text box for each grouping of students that course administrators can fill in with an eduPersonEntitlement value to look for in order to place a student into the course.
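The WebAssign pattern can be sketched as follows; the course identifiers and URN values are invented for illustration and are not WebAssign's actual scheme.

```python
def enrolled_courses(user_entitlements, course_config):
    """Return the courses a user is placed into, given the entitlement
    values asserted for the user and a per-course configuration mapping
    course -> the entitlement value its administrator entered."""
    held = set(user_entitlements)
    return sorted(course for course, required in course_config.items()
                  if required in held)

# Invented course identifiers and URN values, for illustration only.
course_config = {
    "PHYS101-F17": "urn:example:courses:phys101:fall2017",
    "MATH200-F17": "urn:example:courses:math200:fall2017",
}
courses = enrolled_courses(
    ["urn:example:courses:phys101:fall2017",
     "urn:mace:dir:entitlement:common-lib-terms"],
    course_config,
)
```

Each university is free to automate how it mints and assigns its own values; the service only matches the configured string.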
This has worked extremely well for a number of universities that generate automated entitlement values based on student enrollment data, and it requires no additional work on WebAssign's part to accommodate any university's particular approach to producing those values.

Appendix A.

The remaining material provides additional background and terminology, and discusses the various solution patterns commonly seen for this problem. It is helpful in understanding the recommendations, and in weighing other approaches.

Background

Most of the work undertaken over the last two decades or more in the area of federation has focused almost exclusively on the problem of authentication: identifying subjects and data associated with them, largely data that exists independently of a subject's relationship with a particular service. Considerably less time and attention has been expended on the authorization problem. This is partly because authorization is much harder than authentication. Many services do very rudimentary authorization, if they do it at all, and by far the most common approach to authorization involves maintaining raw lists of users in application-specific databases for use by only a single application at a time. Sharing rules for authorization across applications has never been terribly widespread or successful even within the enterprise (as anybody involved in an effort to define "roles" can attest). Adding federation introduces a layer of business complication that often defies solution. Few services have the practical ability to delegate authorization to other administrative domains, because those domains have no knowledge of, or interest in, the authorization problems of applications for which they are not responsible.
Even if a few would be willing, the need to get most or all of the home organizations of an application's users to manage their access to such applications has proven intractable, and the difficulties tend to heavily depress any interest in this problem. Nevertheless, there are situations in which the only practical way to manage access to a federated service is by the organization responsible for authentication. While this occurs chiefly in "software-as-a-service" scenarios which do not truly meet the definition of "federated", there are cases in which the sheer scale of use requires that authorization be managed externally by the users' home organizations, and some of these cases are federated in the truest sense. Thus, even if the relative number of authorization use cases remains somewhat small, it remains important to establish good practice around how to solve this problem in the most standardized fashion possible. Definitions A few definitions are useful in gaining a clear understanding of why this problem can be so difficult. User/Subject – These terms are used interchangeably to represent the entity accessing an application or service. Some subjects are "users" in the sense of being people, but the concepts addressed by this document apply equally whether a subject is a person or a service account. Subject is the generic term of art. Administrative Domain – An organizational boundary within which there is direct control or coordination over users and management of identity and access. Attribute – Overloaded term (used particularly with SAML) for how to communicate a discrete piece of data about a subject. Other protocols (and SAML vendors) use the term "Claim" to mean essentially the same thing. Corresponds in many cases to the attributes on an LDAP entry in a directory. Identity Provider – The source of authentication and authorization information in a federated protocol.
Service Provider – The target of authentication and authorization information in a federated protocol. Often termed a "Relying Party", or (in OpenID, confusingly) a "client". Federated Authorization – A deployment in which control over who can access an application/service is managed by an administrative domain different from the one that controls or operates the service. Challenges and Incentives There are "degrees" of federation that impact the scope of the authorization problem. When an application is operated by one organization on behalf of another, and the data belongs to the organization managing the users, this is federated in practice but not in spirit. This is because the incentives for managing access properly lie with the user-managing organization rather than with the application-managing one. It also tends to involve only one, or a very small number of, administrative domains of users, which limits the scale of the problem. Getting one or two or three organizations to agree on an approach to something is much different in scale than needing 100 or 1000 or more to agree to something. A truly federated authorization problem exists when an organization operating a service owns the data or resources and thus has the incentive to properly manage access, but is delegating authentication to a potentially large number of other administrative domains. In these cases, somebody has to manage access, but the information and incentives are misaligned. The service owner may have a grasp of the criteria for access control, while the home organizations of the users may have a grasp of which users actually fit the criteria. But if those organizations can't be made to care enough about the service to do work on behalf of the service owner, the problem quickly becomes insurmountable at a business level even before the technical challenge is considered.
This is why most practical use cases for federated authorization involve contractual scenarios in which managing access is simply a legal/contractual requirement and thus has to be solved. Solution Patterns The solutions to the federated authorization problem tend to align around whether the information is delivered in batch or in real-time. In digging into these solutions, one may notice that managing authorization tends to be deeply intertwined more generally with the larger topic of "provisioning". Many solutions for provisioning accounts and keeping them up to date tend to also address (or need to address) the authorization problem as a subset of the larger one. In turn, changing how authorization is done will often require re-examining how provisioning is handled as well. With a batch approach, historically by far the most common way this is handled, a feed of data about the users and their appropriate levels of access to a service is delivered on some periodic basis by each organization that manages the user population in the feed. For many years this was the only method commercial services provided for managing access to their services, and is still very common in large enterprises and large commercial services in the HR and Finance sectors. It's the "mainframe" way of doing this: bulky and reliable. The main problems with this approach are the freshness of the data (delaying access or removal of access by hours or days) and the sheer scale of managing feeds for the number of services organizations have outsourced these days. Furthermore, this approach works poorly with truly federated services because most organizations are not likely to be willing to manage and support feeds for services they do not contract for. For most service operators, though, it is likely the simplest way for them to deal with the problem, leading to resistance to the adoption of more complex approaches.
Real-time approaches to authorization include a number of different possible "channels" to communicate the information. Principally the distinction is between in-band and out-of-band methods relative to the authentication channel. Out-of-band approaches rely on direct communication from site to site to create, update, and delete information in a target system. These approaches can be complex and expensive to maintain (arguably more so than batch feeds), and most importantly they add to the integration cost of a system because they don't address authentication at the same time. Their chief advantage is that they provide a real-time view of the "state" of an integration from the perspective of an application. Whether somebody exists and has particular access can be generally known by anyone with access to the application, independently of whether that somebody has ever even accessed the application. In contrast, the least complex solution with the lowest aggregate cost for all parties, and the subject of the rest of this document, is to communicate authorization information at the same time as, and together with, the authentication exchange when users attempt to access a system. Virtually all modern authentication protocols have the capability to pass additional information about users, including authorization information that can be as up to date as the organization's identity and access management makes possible. It then becomes a requirement of the implementation of that protocol on the service side to support the use of that information when users log in to an application, often including storing/updating the information in the application for auditing and efficiency purposes. The key weakness of this approach is the lack of real-time knowledge by the service of the “state” of access management at any single point in space or time.
It also provides no means of deprovisioning a system because, naturally, if a user is gone or even just loses access, they will generally not log in to the application to make it known that they in fact don't have the right to do so. This is a particular problem when services insist on charging for every record rather than metering based on “active” access by users. Naturally, there is a profit motive to doing that as well. Considerations for "In-Band" Authorization Provided that one accepts the value of an "in-band" (with respect to authentication) approach to authorization, these are some core considerations to bear in mind. Commonality The name and syntax of the Attribute used to carry the information doesn't strictly have to be standardized, but as with all uses of federation protocols, doing so helps both Identity and Service Providers limit the need for custom configuration. Uniqueness Authorization values either have to be unambiguous in and of themselves to avoid conflicts across services, or every Identity and Service Provider has to be careful to handle the data such that a value meant only for one service isn't accidentally used with another. For example, associating a group named "admin" with a subject isn't terribly safe if one isn't careful to denote somehow which service the group is about. This historically works very badly (because it introduces processing requirements above and beyond the simple handling of data). Using inherently unambiguous values that can't accidentally be misinterpreted is much simpler. URIs (that is, either URLs or less commonly URNs) work very well for this purpose, which is an approach very common in SAML but much less so with other protocols. Roles vs. Groups vs. Entitlements A good way to get hung up over how to deal with authorization is to approach the subject from an "academic" perspective that tries to split hairs amongst various ontologies for representing a subject's access to an application.
At some very deep level, they're all fundamentally different. At the level where most people operate and try to deploy services, the distinctions seem quite meaningless in practice. Scope The scope of an authorization value should be as broad as possible, but no broader. That is, when multiple services (perhaps many of them) can usefully share a value, do so; if not, a service-specific value is appropriate. Accuracy Resist the temptation to re-use an existing value that isn't accurate simply because it exists or is deployed for some other service. This is not a good way to approach a use case that matters to either party. If there is honestly so little business value in arriving at the right answer, it is very likely that federated authorization itself is simply not a good fit for the application in the first place. Privacy Along with more prosaic considerations such as size limits, services should only receive authorization values applicable to them and not the full set of possible authorizations a subject may possess. This is an important privacy control to prevent information leakage about a subject. This in turn means that it is important to efficiently associate authorization values with services, particularly when dealing with service-specific values that may change frequently. It bears noting that this is impossible to do with some proxy-based approaches to federated authentication because it may be difficult or impossible to know what service(s) a subject is actually trying to access.
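The per-service release of authorization values described under Privacy can be sketched as an attribute filter at the Identity Provider: only values applicable to the requesting service leave the organization. A minimal illustration in Python; the entity IDs, entitlement URIs, and the policy table are all invented for this sketch and stand in for whatever release-policy mechanism a real IdP product provides:

```python
# Hypothetical per-service entitlement filter at an Identity Provider.
# All entity IDs and entitlement URIs below are invented for illustration.

# Which entitlement values each service may see, by exact URI or, for
# entries ending in "/", by URI prefix.
RELEASE_POLICY = {
    "https://sp.example.org/shibboleth": [
        "urn:mace:dir:entitlement:common-lib-terms",
    ],
    "https://webassign.example.com/sp": [
        "https://idp.example.edu/entitlement/course/",  # prefix match
    ],
}

def filter_entitlements(sp_entity_id, user_entitlements):
    """Return only the subject's entitlement values applicable to this SP."""
    allowed = RELEASE_POLICY.get(sp_entity_id, [])
    released = []
    for value in user_entitlements:
        for rule in allowed:
            if value == rule or (rule.endswith("/") and value.startswith(rule)):
                released.append(value)
                break
    return released

user_values = [
    "urn:mace:dir:entitlement:common-lib-terms",
    "https://idp.example.edu/entitlement/course/math101-fall",
    "https://idp.example.edu/entitlement/hr/payroll-admin",
]

# Only the course value is released to this SP; the HR value never leaves
# the IdP, and an unknown SP receives nothing.
print(filter_entitlements("https://webassign.example.com/sp", user_values))
```

The point of the sketch is the shape of the control, not the specific syntax: unambiguous URI values make the filter trivial, whereas bare names like "admin" would force every rule to also encode which service the value is about.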
In Memoriam: Paris C. Kanellakis Our colleague and dear friend, Paris C. Kanellakis, died unexpectedly and tragically on December 20, 1995, together with his wife, Maria-Teresa Otoya, and their beloved young children, Alexandra and Stephanos. En route to Cali, Colombia, for an annual holiday reunion with his wife’s family, their airplane strayed off course without warning during the night, minutes before an expected landing, and crashed in the Andes. As researchers, we mourn the passing of a creative and thoughtful colleague who was respected for his many contributions, both technical and professional, to the computer science research community. As individuals, we grieve over our tragic loss—of a friend who was regarded with great affection, and of a happy, thriving family whose warmth and hospitality were gifts appreciated by friends around the world. Their deaths create for us a void that cannot be filled. Paris left unfinished several projects, including a paper on database theory intended for this special issue of Computing Surveys, to be written during his holiday visit to Colombia. Instead, we wish to offer here a brief biography of Paris, and a description of the research topics that interested Paris over the last few years, together with the contributions that he made to these areas. It is not our intention to outline definitive surveys or histories of these areas, but rather to honor the significant contemporary research of our friend. Paris was born in Greece in 1953 to Eleftherios and Argyroula Kanellakis. In 1976, he received the Diploma in Electrical Engineering from the National Technical University of Athens; his undergraduate thesis was titled *Easy-to-test Criteria for Weak Stochastic Stability of Dynamical Systems*, advised by Prof. E. N. Protonotarios. Paris continued his studies at the graduate level in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where he received his M. Sc. 
in 1978, submitting the thesis *Algorithms for a Scheduling Application of the Asymmetric Traveling Salesman Problem*, supervised by Profs. R. Rivest and M. Athans, followed by his Ph. D. in 1982; his doctoral dissertation was *On the Complexity of Concurrency Control for Distributed Databases*, supervised by Prof. C. H. Papadimitriou. In 1981, Paris joined the Computer Science Department of Brown University as assistant professor. He was promoted to associate professor with tenure in 1986, and to full professor in 1990. Interspersed with his appointment at Brown, Paris also held several temporary positions, including posts at the IBM Watson Research Center, the MIT Laboratory for Computer Science, GIP Altair, and INRIA Rocquencourt. He served as an associate editor of the new journal *Constraints*, as well as *Information and Computation, ACM Transactions on Database Systems, SIAM Journal of Computing, Theoretical Computer Science*, and *Journal of Logic Programming*. Paris served in addition as program committee member, program chair, and invited speaker at many of the prominent research conferences in computer science. We take this opportunity to present some of Paris’ contributions to database theory, including deductive, object-oriented, and constraint databases, as well as his work in fault-tolerant distributed computation and in type theory. In each of these areas, we recognize research contributions that were not merely examples of good problem *solving*, but also examples of insightful problem *formulation*. In synchrony with his technical ability in solving problems, Paris added a mature editorial voice which, by proposing new kinds of research questions, and answering some of them in novel and sometimes surprising ways, helped to change our perceptions of what was technically significant.
In several cases—for example, in deductive databases and type theory—Paris brought the tools and techniques of complexity theory and algorithmics to analyze the efficiency of constructs in programming language design. These themes were found again in the area of constraint databases, an area he played a major role in initiating, while guiding its development via sound and feasible algorithmic principles. In distributed computing, Paris advanced new computational frameworks intended to align algorithmic paradigms with salient aspects of realizable system architectures. In object-oriented databases, Paris worked to build a semantic foundation that provides an implementation-independent meaning for these systems, much in the same spirit that the relational model provides an implementation-independent meaning for relational databases. We recognize in all this work Paris’ desire to understand better the theoretical foundations of practical systems, to study them with precise analytical tools, and to use the results to improve the functionality and performance of these systems. The authors of this technical obituary feel honored by the privilege they had in collaborating with Paris on many of these projects. In mourning his tragic death, we miss his technical facility, his broad knowledge, his insight, his commitment, and his humor. To write a research paper with Paris was also an opportunity to observe his indefatigable attention to detail, and to engage in vigorous debate with his editorial voice. To write this obituary allowed us to feel his voice once more, to understand better what a good scientist he was, and to appreciate his uncommon decency and kindness. It is our great loss that we will not hear his voice again. 1 Deductive Databases Paris Kanellakis was a major contributor to the theoretical foundations of deductive databases. It has been recognized since the early 1980s that first-order database query languages such as SQL are lacking in expressive power. 
This insight led to the investigation of many higher-order query languages, in particular Datalog, the language of logic programs without function symbols. A canonical use of Datalog is to compute transitive closure, where we think of the database as a directed graph: \begin{align*} \text{path}(X,Y) & \leftarrow \text{edge}(X,Y). \\ \text{path}(X,Y) & \leftarrow \text{path}(X,Z), \text{path}(Z,Y). \end{align*} In this example, we take edge to be an \textit{extensional database (EDB) predicate}, representing basic facts stored in the database. For example, \textit{edge}(1,5) is an EDB fact stating that there is an edge between vertices 1 and 5. The \textit{intensional database (IDB) predicate} path represents facts deduced from the database via the logic program above: the first rule says every directed edge forms a path, and the second rule tells how paths can be joined together. We can now query, for instance, \textit{path}(1,7) or \textit{path}(2,V) to determine, respectively, whether there is a path from vertex 1 to vertex 7, or what vertices V are connected to vertex 2 by a path. The path facts are \textit{deduced} from the edge facts, hence the name \textit{deductive databases}. Paris’ work addressed the problem of finding efficient evaluation methods for Datalog queries. He viewed the challenge of deductive databases as the need to combine the technology of logic programming with the efficiency of database technology, providing a concrete step towards a new generation of computing. The focus of his research in this area was in identifying classes of Datalog queries that can be evaluated efficiently. **Datalog and Parallel Computation:** Paris investigated what kinds of Datalog queries can be sped up by massive parallelism [CK86, Kan88]. He identified speed-up with the complexity class NC, which consists of the problems that can be computed in polylogarithmic time through the use of polynomially bounded hardware.
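The path program above can be evaluated bottom-up by applying both rules repeatedly until no new facts appear (the naive fixpoint method). A minimal sketch in Python, with invented edge facts chosen to match the queries mentioned in the text:

```python
# Naive bottom-up evaluation of the Datalog program:
#   path(X,Y) <- edge(X,Y).
#   path(X,Y) <- path(X,Z), path(Z,Y).
# The edge facts are invented for illustration.

edge = {(1, 5), (5, 7), (2, 3)}

def evaluate_path(edges):
    path = set(edges)          # first rule: every edge forms a path
    while True:
        # second rule: join path with itself on the shared variable Z
        new = {(x, y)
               for (x, z) in path
               for (z2, y) in path
               if z == z2}
        if new <= path:        # fixpoint: no new IDB facts were deduced
            return path
        path |= new

path = evaluate_path(edge)
print((1, 7) in path)                            # the query path(1,7)
print(sorted(y for (x, y) in path if x == 2))    # the query path(2,V)
```

Each pass of the loop corresponds to one round of calls against the underlying database, which is exactly the "database complexity" measure discussed next.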
Problems in NC are exactly those with a great deal of potential parallelism. In contrast, significant speed-ups cannot be achieved for PTIME-complete problems, unless NC=PTIME, which is widely believed not to be the case. Thus, PTIME-complete problems are often called inherently sequential. Paris proposed to measure the computational complexity of Datalog programs both by their time complexity as well as by their *database complexity*, which measures the number of calls the Datalog query engine makes to the underlying relational database system. He proved that there are Datalog queries that are hard to evaluate in parallel, regardless of which complexity measure is being used. For example, Paris showed that the query \[ \text{reach}(X) \leftarrow \text{reach}(Y), \text{reach}(Z), \text{edge}(Y, X), \text{edge}(X, Z), \text{edge}(Z, Y) \] is PTIME-complete and, furthermore, its database complexity is provably super-polylogarithmic. The latter bound is significant, since it does not depend on whether NC is a proper subclass of PTIME. Paris also proved a *gap theorem* for the database-complexity measure. He showed that the database complexity of every Datalog query is either \(O(1)\) or \(\Omega(\log n)\); surprisingly, there is nothing in between. **Bounded vs. Unbounded Queries:** It is clear that recursive applications of Datalog rules make queries hard to evaluate. In particular, Datalog queries whose database complexity is in \(O(1)\) can be evaluated in NC; such queries are called *bounded*. It is known that a Datalog query is equivalent to a first-order query if it is bounded. This makes it highly desirable to be able to identify which Datalog queries are bounded. Unfortunately, the distinction between bounded and unbounded queries can be quite subtle. For example, the query \[ \text{buys}(X, Y) \leftarrow \text{likes}(X, Y). \] \[ \text{buys}(X, Y) \leftarrow \text{trendy}(X), \text{buys}(Z, Y). 
\] is bounded, while the query \[ \text{buys}(X, Y) \leftarrow \text{likes}(X, Y). \] \[ \text{buys}(X, Y) \leftarrow \text{knows}(X, Z), \text{buys}(Z, Y). \] is unbounded. This subtlety is not accidental; it is known that the problem of testing whether a given Datalog query is bounded or not is undecidable. Paris was engaged in a long-term project whose goal was to delineate the boundary between the decidable and undecidable for classes of Datalog queries [CGKV88, AK89, HKMV95], that is, to identify maximal classes of Datalog queries whose boundedness problem is decidable and minimal classes of Datalog queries whose boundedness problem is undecidable. One way of classifying Datalog programs is by the *arity* of their IDB predicates. For example, the *path* and *buys* queries in the examples above are binary, while the *reach* query is unary. Together with colleagues, Paris was able to show that the boundary between the decidable and the undecidable lies between the unary and the binary. More precisely, the boundedness problem for unary Datalog queries is decidable [CGKV88], while the boundedness problem for binary Datalog queries is undecidable [HKMV95]. 2 Object-Oriented Databases In 1988-89, Paris visited INRIA Rocquencourt, where he held a joint position in the Database Research Group and in the Altair R&D Group. At that time, object-oriented database systems were emerging from research venues and into product development with start-up companies such as Altair. The new technology, however, lacked the formal foundations of the relational model. Paris’ goal, while participating in development efforts, was to bridge this gap. While at Altair, Paris had a major influence on several aspects of the O₂ system, bringing his theoretical expertise to a practical project. His most visible impact was on the data model: he helped formulate the O₂ data model, and provided a formal theoretical framework for it.
He was also the lead editor of the definitive monograph [BDK92] documenting the relevant design and analysis of this multi-year research and development effort. One of Paris’ ambitions was to provide sound theoretical foundations to database systems. He strongly believed that understanding the functionalities of these systems, providing formal models for them, and finding connections with already mature theories were the keys to developing better systems. In this enterprise, Paris could rely on a wide culture in theoretical computer science and mathematics, as well as a strong intuition for practical issues. Paris developed an object-based database model in a collaborative work [AK89, AK91]. The structural part generalized most of the known complex-object data models. The main contribution is the operational part of the data model, the query language IQL, which uses object identities (oid’s) for three critical purposes: (1) to represent data-structures with sharing and cycles, (2) to manipulate sets, and (3) to express any computable database query. The language IQL can be statically type checked, can be evaluated bottom-up and naturally generalizes most popular rule-based database languages. The model was also extended to incorporate type inheritance, without changes to the language. The main purpose of this work was to capture formally the essence of object database application programming, and to highlight the new dimension brought by object identity with regard to old questions such as duplicate elimination or query completeness. In particular, Paris observed that standard proofs of query completeness (e.g., [CH85]) did not work as usual in this setting because of the presence of oid’s. He showed that an analogous value-based data model could be founded on regular infinite trees, thereby capturing fundamental aspects of object identity that had been overlooked by previous researchers. 
Paris was also concerned with the essential novelty of programming applications in an object-oriented paradigm. In [AKRW94, HKR94] he investigated, with collaborators, a simple programming formalism for object-oriented databases, namely method schemas. The syntax is based on an application of program schemas, using a hierarchy of classes, method composition, recursion via function calls and name overloading. The semantics is based on term rewriting with late binding. Paris and colleagues concentrated on understanding the problem of consistency checking, i.e., of testing at compile time whether in some interpretation, a method invocation would lead to an error. Not surprisingly, the problem is undecidable in general. Paris studied in depth decidable cases such as monadic schemas (methods having arity 1) or recursion-free schemas (absence of cycles in the method dependence graph). Some surprising consequences of covariance, a standard restriction imposed on method definitions, were demonstrated: for an important class of monadic, recursion-free schemas, consistency checking is complete for NLOGSPACE, whereas it is complete for DLOGSPACE if covariance is also imposed. Paris was convinced that object-oriented databases were an important technological step, but that they needed to rely on previously developed theories and techniques. He viewed his more recent work on type theory (see Section 5) as part of a general program to provide formal foundations for database programming languages. 3 Constraint Databases Paris was one of the founders of the area of Constraint Databases [KKR95]. While visiting the IBM T. J. Watson Research Center, he was shown a demonstration of the CLP(R) system. This system is an instance of the CLP(X) framework, an extension of logic programming in which rules contain, besides normal terms, constraints from a domain X. 
CLP(R), in which rules contain constraints from the domain of real numbers, uses a combination of lazy evaluation and an appropriate constraint solver, together with standard techniques from logic programming. Paris immediately wondered whether a database theory could be developed for such systems, by analogy with the way deductive databases were inspired by logic programming. The direct consequence of this idea was his collaborative work on Constraint Query Languages [KKR95]. A class of database models and query languages was defined that could be instantiated by specific constraint domains, similar to the CLP(X) framework. The basic idea was to replace the notion of tuple in a relational database by that of a generalized tuple, i.e., a conjunction of constraints. A relational tuple \((a_1, \ldots, a_n)\), for example, can be regarded as a special case of a generalized tuple, i.e., \((x_1 = a_1) \land \cdots \land (x_n = a_n)\). By choosing more powerful classes of constraints, one can represent potentially infinite sets of points in a compact way. For example, a rectangle with corners at \((a, b)\) and \((c, d)\) can be represented by the generalized tuple \((a \leq x \leq c) \land (b \leq y \leq d)\). Other examples are linear arithmetic constraints, which can be used to model many spatial applications (e.g., GIS), and polynomial constraints, which can be used to describe more complex spatial objects, such as those in CAD systems. Of course, just extending the database model would not be very interesting unless we were able to query the database. Paris realized that for the model to be of interest we would need query languages that were not just decidable, but of low complexity. This research goal was addressed in [KKR95] for dense-order constraints, i.e., inequalities \(x < y\) where \(<\) is a dense order (e.g., the rationals).
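The generalized-tuple idea can be made concrete with a small sketch (ours, not from [KKR95]); the names `Interval`, `satisfies`, and `relational` are our own illustration of interval constraints over two variables:

```python
# Illustrative sketch: a generalized tuple as a conjunction of interval
# constraints, finitely representing a possibly infinite set of points.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    low: float
    high: float

    def contains(self, v: float) -> bool:
        return self.low <= v <= self.high

# A generalized tuple maps each variable to a constraint; it denotes the
# set of all assignments satisfying every conjunct.
def satisfies(gen_tuple: dict, assignment: dict) -> bool:
    return all(gen_tuple[var].contains(assignment[var]) for var in gen_tuple)

# A classical relational tuple (a1, ..., an) is the special case where
# every interval is a single point: x_i = a_i.
def relational(values: dict) -> dict:
    return {var: Interval(v, v) for var, v in values.items()}

# A rectangle with corners (1, 2) and (5, 7): (1 <= x <= 5) and (2 <= y <= 7).
rect = {"x": Interval(1, 5), "y": Interval(2, 7)}
assert satisfies(rect, {"x": 3, "y": 4})
assert not satisfies(rect, {"x": 6, "y": 4})

point = relational({"x": 3, "y": 4})
assert satisfies(point, {"x": 3, "y": 4})
```

A finite dictionary of constraints here stands for infinitely many points, which is exactly the compactness the generalized-tuple model exploits.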
Some results in this direction have also been obtained for more powerful query classes such as linear constraints and polynomial constraints. Paris’ original paper showed that the first-order query language with dense-order constraints could be evaluated in \(\text{LOGSPACE}\). Subsequent joint work [KG94] improved these bounds to \(\text{AC}^0\). One consequence of this result was to resolve, for dense-order constraints, a problem that Paris had proposed for first-order constraint query languages in general: whether the \textsc{Parity} query—is the cardinality of a (finite) database even or odd?—is expressible in such languages. His conjecture was that the answer is negative, as is well-known to be the case for relational databases. This question turned out to be very difficult: for polynomial constraints the negative result was only obtained recently [BDIW96] using sophisticated techniques from non-standard model theory. The \(\text{AC}^0\) result referred to above was obtained by defining an algebra for dense-order constraints and then analyzing its complexity. In one of his very last papers, [GGK96], Paris extended this work and studied algebras for constraint databases in more detail. This research included further work on dense-order constraints (including update issues) and linear arithmetic constraints. In the latter case, he defined a particularly promising algebra for 2-variable constraints—for example, temporal constraints. One of Paris’ goals was to include recursion (e.g., transitive closure) in the model. Unfortunately, languages more powerful than dense-order constraints were not closed under recursion. For example, if \( R(x, y) \) contained the generalized tuple \( y = 2x \), the transitive closure of \( R \) would have to contain all tuples of the form \( y = 2^nx \), for \( n \geq 1 \).
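The non-closure phenomenon can be traced with a few lines of code (our sketch, not from the cited work): composing the linear constraint \(y = 2x\) with itself keeps producing new coefficients, so no finite set of linear generalized tuples can capture the closure.

```python
# Why linear-constraint relations are not closed under transitive closure:
# iterating R(x, y) with y = 2x yields y = 2^n x for every n >= 1.

def compose(c1: float, c2: float) -> float:
    # Composing y = c1*x with z = c2*y gives z = (c1*c2)*x.
    return c1 * c2

def transitive_closure_coeffs(c: float, steps: int) -> list:
    # Collect the coefficients produced by the first `steps` iterations.
    coeffs, acc = [], c
    for _ in range(steps):
        coeffs.append(acc)
        acc = compose(acc, c)
    return coeffs

# Each iteration contributes a genuinely new generalized tuple y = 2^n x.
assert transitive_closure_coeffs(2, 5) == [2, 4, 8, 16, 32]
```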
On the other hand, it was possible to write recursive programs that did not have this problem, intuitively by ensuring that the rules one wrote did not create new objects. Defining such a notion in a precise way was an abiding interest of Paris’—unfortunately, it remained an uncompleted project. Paris was always concerned about the practical side of the field, and was aware of the risk of it becoming a theoretical exercise with limited practical value. From the very beginning he was interested in the question of appropriate index structures for constraint databases, i.e., how to store constraints so that they could be accessed efficiently. He spent an extended visit to IBM studying the state of the art in index structures for spatial databases, and how applicable they were to constraints. He was immediately struck by the contrast between the elegant combination of theory and practice in the original B-tree paper, and the lack of such an analysis for spatial index structures. The result of this study was the paper [KRVV94], proposing a data structure for indexing constraints on one attribute. This index structure has optimal worst-case storage and query performance, and optimal amortized insert time (i.e., averaged over a sequence of inserts), with performance very close to that of the B-tree. By studying the techniques that he had used for indexing constraints [RK95], Paris was also able to develop index methods for object-oriented databases, and at the time of his death he was investigating their application to temporal databases. 4 Fault-Tolerant Parallel Computation Paris Kanellakis, among other researchers, sought to bridge the gap between abstract models of parallel computation and realizable parallel architectures. The parallel random access machine (PRAM) model attracted significant attention, and numerous efficient parallel algorithms were developed for the model. 
The PRAM model elegantly combines the power of parallelism and the simplicity of the random access machine (RAM) model. Most parallel algorithms require a fault-free environment, where any unreliability of processors, interconnections, or synchronization either eliminates efficiency or results in incorrect computation. Paris proposed a formally defined notion of robustness that combines fault-tolerance and efficiency, and he led the development of deterministic parallel algorithms that remain efficient in the presence of arbitrary dynamic processor failure patterns. This work and relevant open problems were recently summarized in [KMS94, KS94]. Robust computation: The ultimate goal of this research is the synergy between the speed-up potential of parallel computation and the reliability potential of distributed computation. Paris investigated fault models and parallel computation models where it is possible to achieve algorithmic efficiency (i.e., speed-ups close to linear in the number of processors) despite the presence of faults. Such combinations of fault and computation models illustrate constructive trade-offs between reliability and efficiency. This trade-off exists because reliability usually requires introducing redundancy in the computation in order to detect errors and reassign resources, whereas gaining efficiency by massively parallel computing requires removing redundancy from the computation to fully utilize each processor. Even allowing for some abstraction in the model of parallel computation, it is not obvious that there are any non-trivial fault models that allow near-linear speed-ups, especially considering the generally intimidating impossibility results for distributed and parallel computation.
Thus it was rather surprising when Paris demonstrated in collaborative work [KS92] that it is possible to combine efficiency and fault-tolerance for many basic algorithms, expressed as concurrent-read concurrent-write (CRCW) PRAMs, in a fault model [KS92] allowing any pattern of dynamic fail-stop processor errors, as long as one processor remains alive. The approach and techniques pioneered by Paris in his work were shown to be readily extendible to all CRCW PRAMs. In effect, any PRAM algorithm can be made robust by efficiently simulating a fault-free machine on a fault-prone one. Papers by other researchers followed, pursuing similar problems in related shared-memory and message-passing models. It was also demonstrated by Paris, in collaborative work, that although optimal simulations are possible in some models, such oblivious simulations are not necessarily as efficient as “handcrafted” robust algorithms, e.g., for the all-important pointer doubling and parallel prefix algorithms [KS94]. Paris and colleagues extended the fault model to include processor restarts [KS91], arbitrary initial memory initialization [KS94], and restricted memory-access patterns through controlled memory access [KMS95]. **Algorithms and lower bounds:** A key primitive in the above work is the Write-All operation [KS92], defined as follows: using $P$ processors, write 1s into all locations of an array of size $N$, where $P \leq N$. When $P = N$ this operation captures the computational progress that can be naturally accomplished in one time unit by a PRAM. Iterated Write-All forms the basis for the algorithm simulation techniques. Therefore, improved Write-All solutions lead to improved simulations of all parallel algorithms. Under dynamic failures, efficient deterministic solutions to Write-All, i.e., increasing the fault-free $O(N)$ work by small polylog($N$) factors, are non-obvious. 
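The Write-All primitive and its work measure can be illustrated with a toy sequential simulation (ours, not an algorithm from [KS92]); the naive random write strategy below only demonstrates the setting, not the coordinated, provably efficient algorithms discussed in the text.

```python
# Toy simulation of Write-All: P processors must write 1s into an array of
# size N while a fail-stop adversary removes processors between steps.

import random

def write_all(n: int, p: int, fail_prob: float, seed: int = 0):
    rng = random.Random(seed)
    array = [0] * n
    alive = list(range(p))
    work = 0
    while 0 in array:
        # The adversary may fail-stop any processors, but at least one
        # processor must always remain alive.
        survivors = [q for q in alive if rng.random() > fail_prob]
        alive = survivors if survivors else alive[:1]
        # Each surviving processor performs one write per step; a real
        # Write-All algorithm would coordinate which cells are written.
        for _ in alive:
            array[rng.randrange(n)] = 1
            work += 1
    return array, work

array, work = write_all(n=64, p=8, fail_prob=0.05)
assert all(cell == 1 for cell in array)  # all locations written despite failures
assert work >= 64                        # covering N cells takes at least N writes
```

The `work` counter is the quantity the robust algorithms seek to keep close to the fault-free \(O(N)\).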
The first such solution, proposed by Paris in joint work [KS92], was an algorithm having worst-case work bound $O(N + P \log^2 N / \log \log N)$. Memory-access concurrency is a major source of available redundancy in parallel algorithms, and deterministic robust (i.e., efficient and fault-tolerant) computation is impossible when concurrent writes are excluded [KMS94, KS92]. Paris believed that it should nevertheless be possible to limit the significant memory-access concurrency that was assumed by existing robust algorithms. Again, surprisingly, in [KMS95] he showed, with his colleagues, that concurrent writes are necessary only when faults actually occur and not in the anticipation of possible faults. Write-All algorithms can be constructed so that they can be executed on a fault-free exclusive-write machine, while on a fault-prone concurrent-write machine the number of concurrent writes is exactly the number of processor failures encountered during the execution. The Write-All algorithms for the fail-stop model have optimal ranges of processors for which the work is $O(N)$. Optimality is achieved by taking advantage of parallel slackness. Are there optimal algorithms for the full range of processors $N = P$? Paris et al. showed that such algorithms do not exist [KS92]; even for the models with instant memory snapshot and arbitrarily powerful instruction set, an adversary can be constructed that will force any Write-All algorithm to do $\Omega(N \log N / \log \log N)$ work. For memory snapshots this bound is tight—there is a simple algorithm with a matching upper bound [BKRS95]. While the current Write-All lower/upper bound gap is relatively small for the fail-stop model with dynamic failures, a much larger gap remains for the models with restarts and asynchrony. It was conjectured that for Write-All there is a quadratic lower bound in this case. 
Together with several colleagues [BKRS95], Paris constructed the first subquadratic deterministic algorithm, with work $O(N \cdot \log N)$; in the same paper, an $\Omega(N \log N)$ lower bound was shown for the model. The bound stands even if memory snapshots are allowed, but in this case there is also a matching upper bound. Paris considered the remaining gaps between lower and upper bounds to be interesting open problems, and intended to pursue their resolution. Left unfinished was a jointly-authored monograph bringing together the latest results in this area. 5 Type Theory Paris Kanellakis made fundamental contributions to the algorithmic analysis and complexity-theoretic understanding of several important topics in programming language design, including first- and higher-order unification, type inference for ML-like programming languages, and expressiveness in the typed lambda calculus. Paris' interests in these areas were natural extensions of his earlier work in logic programming and database theory. The work on these topics combined technical expertise with a careful, understated iconoclasm: he wanted to change, and succeeded in changing, his colleagues' perceptions of the relevant states of the art. First-order unification: Unification is a ubiquitous building block in implementations of sophisticated programming languages. It is, for example, the workhorse of logic programming engines, and an essential component of compile-time type analysis. The dual emergence in the 1980s of logic programming and parallel computation, in both research and development, suggested to many computer scientists that parallel unification could greatly enhance the performance of logic programming systems.
In this context, Paris' collaborative result [DKM84] that first-order unification is complete for PTIME—in other words, as hard as any polynomial-time problem—was an interesting technical result with an important editorial message.\(^1\) The theorem implied that unification was a formidable, if not essential, bottleneck in logic programming: any research program that succeeded in parallelizing unification—say, to run in sub-polynomial time—would have also succeeded in equivalently parallelizing every known polynomial time algorithm. Because of this seemingly insurmountable (or, if as most researchers believe PTIME \(\neq\) NC, impossible) hurdle, serious attempts to parallelize unification were largely put aside. The key idea in this proof of the inherently "sequential nature" of unification, due to Paris, is that very simple unification problems can be constructed whose solution is isomorphic to the computation of Boolean logic gates: Boolean values are simulated by pairs of terms that are forced to unify if and only if the related value is "true." Since PTIME can be captured by certain easily-constructed polynomial-sized circuits, this technology permitted any decision problem solvable in polynomial time to be efficiently transformed into a unification problem. These ideas were effectively reused to analyze the complexity of type inference. Complexity of type inference: Type checking is an important safety feature of programming languages, ensuring that programs cannot "go wrong" at run time via type errors, e.g., adding pointers and strings. Type inference incorporates type checking with automatic compile-time mechanisms that deduce the types of all run-time data. Unification is essential to many type inference algorithms, since first-order terms can characterize data types; terms are built via functions like "pair," "list," or "function" over constants such as "integer," "boolean," or type variables having values constrained by term equations.
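For readers unfamiliar with the primitive itself, a compact first-order unification sketch follows (ours, Robinson-style; it omits the union-find machinery that makes the algorithm run in near-linear time). Variables are uppercase strings, compound terms are tuples of a functor name and arguments:

```python
# Minimal first-order unification with occurs check.

def is_var(t):
    return isinstance(t, str) and t.isupper()

def walk(t, subst):
    # Follow variable bindings to their current representative.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst=None):
    # Returns a substitution (dict) or None if the terms do not unify.
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, b) unifies with f(a, Y) under {X: a, Y: b}.
assert unify(("f", "X", "b"), ("f", "a", "Y")) == {"X": "a", "Y": "b"}
# f(X) does not unify with g(X); X does not unify with f(X) (occurs check).
assert unify(("f", "X"), ("g", "X")) is None
assert unify("X", ("f", "X")) is None
```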
Many functional programming languages (e.g., ML, Haskell, Miranda) based on typed lambda calculus support type inference in the presence of so-called parametric polymorphism. In simpler terms, these combined features enable programmers to code implementations of abstract data types (trees, stacks, queues, etc.) once, and "reuse" the code on data of different types, without the obligation of adding type annotation at each use. By comparison, other languages with compile-time type checking—the canonical example being Pascal—require a procedure to be repeatedly coded for each data type at which it is used; the type checker requires redundant source code, each copy with different type annotation, even though identical target code is generated for each copy. Although claims had been made in the research community that the ML type inference mechanism was efficient, Paris believed that such claims were unfounded. In a striking joint paper [KM89], the simpler problem of recognizing type-correct ML programs was shown to be PSPACE-hard (i.e., as hard as any problem that can be solved in polynomial space), and furthermore contained in EXPTIME. This result was both counterintuitive and surprising. The fundamental technology was extended via additional insights in another joint work [KMM91], showing the problem to be complete for EXPTIME. The proof technology used in these theorems was a direct descendant of Paris' earlier work on unification. The additional type flexibility introduced by ML polymorphism allows the simulation, via unification, of more powerful complexity classes.

\(^1\) The paper in which this result appeared was edited by the founding editor of the *Journal of Logic Programming*, J. Alan Robinson, and was published as the very first article in that journal.
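The source of the blow-up can be seen in a few lines (our illustration, not the inference algorithm itself): each `let` may duplicate the type of its body, so \(n\) nested bindings of the form `let x_{i+1} = (x_i, x_i)` give `x_n` a type with \(2^n\) leaves.

```python
# Measuring the type-size growth behind the exponential lower bounds for
# ML type inference: nested lets that pair a variable with itself.

def pair(t):
    # The type of (e, e) when e has type t.
    return ("pair", t, t)

def leaves(t):
    # Count the leaf occurrences in a type term.
    if isinstance(t, tuple):
        return leaves(t[1]) + leaves(t[2])
    return 1

t = "int"            # type of x_0
for _ in range(10):  # let x_{i+1} = (x_i, x_i)
    t = pair(t)

assert leaves(t) == 2 ** 10  # the inferred type of x_10 has 1024 leaves
```

Printed naively, such types are exponentially large in the program size, which is why recognizing type-correct ML programs cannot be cheap in the worst case.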
In Paris' joint paper [KM89], ML polymorphism was used as an iterative mechanism to simulate quantifier elimination for Quantified Boolean Formulas, resulting in the PSPACE-hardness bound, and in [KMM91] the same tools were enhanced to simulate arbitrary exponential-time computation. Later sophisticated extensions of his idea yielded significant lower bounds for other typed lambda calculi. **Expressibility:** Paris was also interested in characterizing complexity classes (AC\(^0\), PTIME, PSPACE, EXPTIME, EXPSPACE, ...) by naturally defined classes of typed lambda terms, mirroring earlier work that had been done in the database community on logic and expressibility [Imm86, Var82]. Lambda calculus is the underlying theory of all functional programming languages, though its roots are found in the modern development of mathematical logic. Paris' work in this area was designed to provide foundations for functional database query languages, and also to rehabilitate the simply typed lambda calculus from a certain computational disrepute. Interest in higher-order typed lambda calculi had been motivated in part by negative results that the only expressible functions on the canonical type for integers were the extended polynomials, built from addition, multiplication, and conditional equality with zero—noticeably absent are exponentiation, subtraction, and integer equality. These results suggested that the calculus was no good for useful computation. Paris wanted to show that this was not the case: all feasible computation was comfortably found within the simply typed lambda calculus. The syntactic and (especially) semantic distortions of higher-order calculi, of certain mathematical interest, could then be shown to be computationally unnecessary. The means to this demonstration was a shift in the computation paradigm: instead of considering numerical computations, Paris suggested coding database queries.
His suggestion was inspired by a lambda calculus encoding [Ma92] of the decision problem for higher-order logic, which Paris proposed to extend to an encoding of the complex object algebra [AB88], a powerful database query language. We briefly discuss some relevant details. Quantified Boolean Formulas\(^2\) allows quantification only over Booleans, but can be generalized by allowing quantification over iterated powersets of Booleans, and replacing propositional variables by prime formulas $x \in y$, where $x$ and $y$ are typed to range over the appropriate powerset. This decision problem for higher-order logic has nonelementary complexity (not solvable in any fixed stack of exponentials), and can be used to show that deciding equivalence of two typed lambda terms is nonelementary [Mey74, Sta79]. In the related complex object algebra, quantification ranges over atomic constants, sets, and tuples, expressing exactly the generic elementary database queries that are computable in some fixed stack of exponentials. By implementing the complex object algebra in the simply typed lambda calculus, the data complexity of logical queries, studied in [Imm86, Var82], could be replaced by a \emph{procedural} variant. In this procedural implementation, the computational power of a query is measured by its integer \emph{rank}, describing the degree of its higher-order functionality. In joint work, Paris began implementation of this research program [IIKM93, HK94], where PTIME was given a precise characterization via query terms of fixed rank. In further sophisticated extensions of this functional framework for descriptive computational complexity, Paris succeeded, again in collaborative work, in providing syntactic characterizations for many standard complexity classes [HK95].

\(^2\) Given a closed propositional formula $F$ where each variable $X$ is $\forall$- or $\exists$-bound, is $F$ true under the naive interpretation?
These results rely on complex query evaluation strategies that use efficient data structures, and avoid the redundancies incurred by more usual reduction strategies in the lambda calculus. At the time of his death, Paris was considering the use of this technical machinery in providing an alternative, and perhaps more compelling, treatment of the issues addressed by the area of \emph{optimal reduction} in the lambda calculus.

Serge Abiteboul, Stanford University and Institut National de Recherche en Informatique et Automatique (INRIA)
Gabriel M. Kuper, European Computer-Industry Research Centre (ECRC)
Harry G. Mairson, Brandeis University
Alexander A. Shvartsman, Massachusetts Institute of Technology
Moshe Y. Vardi, Rice University

References

**Deductive Databases**

**Object-Oriented Databases**

**Constraint Databases**

**Fault-Tolerant Parallel Computation**

**Type Theory**
Broadcasting in CSP-Style Programming

Brian VINTER, Kenneth SKOVHEDE, and Mads Ohm LARSEN
University of Copenhagen, Niels Bohr Institute

Published in: Communicating Process Architectures 2016. Publication date: 2016. Document version: Publisher's PDF, also known as Version of Record.

Abstract. While CSP itself only models process-to-process rendezvous-style message passing, all newer CSP-type programming libraries offer more powerful mechanisms, such as buffered channels, multiple receivers, and even multiple senders, on a single channel. This work investigates the possible variations of a one-to-all, broadcasting, channel. We discuss the different semantic meanings of broadcasting and show three different possible solutions for adding broadcasting to CSP-style programming.

Keywords. CSP, JCSP, broadcasting

Introduction

The concept of broadcasting — emitting a message from one process and receiving it in all other processes in a group — has been debated in [1] and [2]. The work on Synchronous Message Exchange, SME [3], stems from a lack of broadcasting in CSP.
In this work the authors will seek to establish the meaning of broadcasting and the possible benefit of a broadcast mechanism in CSP-style programming. Perhaps the easiest way to introduce the meaning of a broadcast mechanism is to compare it with the well-established one-to-any or any-to-any mechanism which all modern CSP-style libraries offer [4,5,6,7]. With the any-to-any channel, a message sent by one process is delivered to any one process that is ready to receive. Neither one-to-any nor any-to-any channels are part of CSP theory; however, they can be emulated using multiple channels [8]. By contrast, a broadcast would be a one-to-all/any-to-all operation where every receiving process on the channel would get a copy of the message. Figures 1a and 1b seek to sketch the difference between a to-any and a to-all send operation. Broadcasting is a common feature in message-based parallel programming libraries, such as MPI [9] and PVM [10]. The need for broadcasting stems from algorithms that need global synchronization. Broadcast may be used directly, for example for distributing a new global bound value in a parallel branch-and-bound algorithm, or it may be used in combination with a reduction, for example for determining the global change in a system after an iteration in a converging algorithm. In SME, broadcasts were needed for a set of processes to all know a given value for the next simulated time-step, that is, as a stand-alone broadcast. The use of reductions followed by a broadcast is mostly known in performance-oriented parallel programs, though applications in the concurrency domain should not be entirely ignored. This work, however, only investigates the feasibility of a pure broadcasting mechanism in a CSP-style library.

1. Broadcasting

Broadcasting as a concept appears deceptively simple; however, several variations exist, each with slightly different semantics.
Fundamentally, broadcasting is based on physical broadcasts: a sender transmits a message for anybody to receive, as shown in Figure 2.

Figure 2. Physical broadcast mechanism.

We may define simple broadcast as the basic mechanism shown in Figure 2; one process may broadcast, any other process may receive. Thus, the simple broadcast mechanism has no delivery guarantee; a broadcast that is received by no process is still defined as a correct broadcast. Such a broadcast mechanism is of little use in real-world scenarios, and is thus often not provided. UDP/IP datagrams may be broadcast, and if so it is done as simple broadcast, that is, any layer in the network stack may choose not to propagate the broadcast. The only guarantee that simple broadcast provides is message integrity, that is, the message is either delivered in full and as originally sent, or not at all. Simple broadcast may be improved to be a reliable broadcast. A reliable broadcast mechanism has the added semantics that when a message is broadcast, all processes must receive a correct version of that message. Reliable broadcast does not guarantee any ordering, that is, two concurrent broadcasts by two different processes may be received in different orders by different processes in the system. While reliable broadcasting may be physically intuitive, as sketched in Figure 3, the lack of total ordering is still a limitation that makes programming harder in various cases.

Figure 3. Two senders transmit messages A and B at the same time; because of physical proximity, process zero will receive the messages in the order A then B, while process one will receive them in the order B then A. This is still a correct reliable broadcast.

An example where reliable broadcast is not sufficient is described as follows: a system consists of a set of fire detectors, fire alarms, and control boards.
Imagine that a detector, \(A\), detects a false fire, that is, a forgotten toaster; the person using the toaster immediately cancels the alarm using the control board \(A'\), but then another fire detector, \(B\), detects a real fire. If the broadcast message from \(B\) is received before the cancellation from \(A'\), then an alarm system that does not count pending detections would falsely turn off, even though a fire was indeed spreading. Reliable broadcasting may be further improved upon to guarantee total ordering; this type of broadcast is known as \textit{atomic broadcast}. With \textit{atomic broadcast}, all broadcasts are received by all processes, and in the same order. This also implies that a process that has failed, that is, has been unable to receive for some reason, cannot recover, but must leave the system and, if possible, rejoin. If a system implements \textit{atomic broadcasts}, the two processes in Figure 3 will agree on the order in which the messages \(A\) and \(B\) are received. Whether they are received as \(A\) then \(B\) or as \(B\) then \(A\) is still not defined, only that all processes will receive them in the same order. A fourth, well-researched broadcast mechanism is the \textit{causal broadcast} [11]. In this model a broadcast message \(B\) cannot be delivered to a receiver ahead of another message, \(A\), if message \(A\) was received by the broadcaster of message \(B\) prior to that broadcast. Causal broadcasts, however, have turned out to be very complex to implement and harder to work with than \textit{atomic broadcasts}; thus we will not investigate \textit{causal broadcasts} in this work. Apart from delivery guarantees and message ordering, broadcasts may be defined as synchronous or not. A \textit{synchronous broadcast} will only be delivered to a receiver once all receivers are ready to receive, while an asynchronous delivery simply guarantees either reliable or \textit{atomic broadcast}.
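The difference between reliable and atomic delivery can be sketched in a few lines (our illustration, not taken from any CSP library): giving each receiver its own queue and appending under one lock yields a single total order, i.e., atomic broadcast; dropping the lock would still be reliable, but the orders could diverge as in Figure 3.

```python
# A toy one-to-all channel with atomic-broadcast ordering: every receiver
# gets every message, and all receivers see the same total order.

import threading
from collections import deque

class BroadcastChannel:
    def __init__(self, n_receivers: int):
        self._lock = threading.Lock()
        self._queues = [deque() for _ in range(n_receivers)]

    def send(self, msg):
        # Appending to all queues under one lock fixes a single total
        # order; without the lock we would only get reliable broadcast.
        with self._lock:
            for q in self._queues:
                q.append(msg)

    def receive(self, receiver: int):
        return self._queues[receiver].popleft()

chan = BroadcastChannel(n_receivers=3)

# Two concurrent senders, as in the scenario of Figure 3.
t1 = threading.Thread(target=chan.send, args=("A",))
t2 = threading.Thread(target=chan.send, args=("B",))
t1.start(); t2.start(); t1.join(); t2.join()

orders = [[chan.receive(r) for _ in range(2)] for r in range(3)]
# Reliable: everyone got both messages; atomic: in the same order.
assert all(sorted(o) == ["A", "B"] for o in orders)
assert orders[0] == orders[1] == orders[2]
```

A synchronous broadcast would additionally block `send` until every receiver is ready, e.g., with a barrier, rather than buffering into the queues.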
In distributed systems, \textit{synchronous broadcasts} are not available in any widespread libraries, as the message cost becomes prohibitive, but in a CSP-library context a \textit{synchronous broadcast} could be implemented more efficiently. Finally, conventional broadcast literature differentiates between broadcasts in open and closed groups [12]. Open group broadcast means that a process which is not on the list of recipients may still broadcast a message to the group, while closed group broadcasts require the sender to be a member of the recipient group.

2. Broadcasting in Message Passing Systems

2.1. Parallel Virtual Machines

The Parallel Virtual Machines library, PVM, supports group communication since version 3. Since PVM has a rather low-level approach to message passing, messages must first be packed into a message buffer and can then be broadcast to a group. The example in Listing 1, adapted from the PVM \texttt{man} page, packs 10 integers from a variable called \texttt{array} and then broadcasts the values, using the tag 42, to a group of processes (called tasks in PVM) named worker.

```c
info = pvm_initsend(PvmDataRaw);
info = pvm_pkint(array, 10, 1);
info = pvm_bcast("worker", 42);
```

\textbf{Listing 1.} PVM broadcast sender.

The receiver will then have to issue an ordinary receive and then unpack the data, as shown in Listing 2.

```c
buf_id = pvm_recv(tid, tag);
info = pvm_upkint(array, 10, 1);
```

\textbf{Listing 2.} PVM broadcast receiver.

In PVM the messages are received by the individual recipients as ordinary messages. PVM broadcasts are open group and provide only reliable broadcast. In addition to the broadcast operation, PVM also provides a multicast operation where a group is dynamically created from a list of recipients.

2.2. Message Passing Interface

The Message Passing Interface, MPI, manages broadcasts rather differently than PVM. Groups are defined as subgroups of the overall set of processes, called \texttt{MPI\_COMM\_WORLD}.
Message contents are defined like ordinary point-to-point messages, and addressing is simply to a subgroup. The major difference from PVM is that the broadcast is issued by all participants in the group, and a parameter in the broadcast specifies which process is the sender; all others are then receivers. This approach means that broadcasting in MPI is in closed groups only. A ten-integer dense array broadcast, from process zero to all other processes in the program, then looks as in Listing 3.

```c
result = MPI_Bcast(data, 10, MPI_INT, 0, MPI_COMM_WORLD);
```

Listing 3. MPI broadcast.

3. Models for Broadcasting in CSP

From the existing systems that feature a broadcast method, it is clear that such a mechanism has a use, and it would be interesting to provide one in a CSP-style library as well. Merging broadcast with CSP semantics is not trivial, however, as we have previously shown in the SME work [3]. In this section we sketch the various broadcast styles one may imagine for CSP, and try to evaluate their feasibility.

3.1. Broadcast Messages

The simplest approach would be to introduce a broadcast message command, `channel.bcast(msg)`, similar to the semantics in PVM. With this approach, the library has to know about all processes that hold a reading end of the channel and then relay the message to each such process as they issue read commands on the channel. This approach raises a set of difficulties, however: CSP dictates that the broadcast operation does not return until the operation has finished, that is, when all processes have received the message that was broadcast. However, it is not intuitive what should happen to the channel while the broadcast completes; if we require the channel to be blocked until the message is received everywhere, then deadlocks may arise, as other processes may have agreed to exchange a point-to-point message on the channel.
If, on the other hand, we do not freeze the channel while the broadcast completes, other non-intuitive patterns may occur. If one process that holds a receiving end of a channel never issues a read on that channel, then the channel will require infinite buffer capacity to remain functional. A broadcast may be received by all but one process, and other messages may have been passed point-to-point afterwards; if the last process then terminates, the broadcast has never truly been completed, and the meaning of a broadcast becomes weaker. An emulation of a broadcast message could be implemented as in Figure 4. This naive network, and the theory behind it, will be discussed in the next sections.

3.2. Mailboxes

A slightly more complex approach would be to introduce a mailbox system, similar to mailboxes as they are found in Erlang, but with a subscription service so that a set of processes can read the same message. A mailbox model is very intuitive for a programmer to comprehend: a message is sent to a central position and all processes may read it. However, this model does not include any synchronization, as a CSP-style programmer would expect. If the model is extended to synchronize, either by announcing to the sender when all processes have read the message or by introducing a fully synchronous model, then the mailbox model is identical to the broadcast message described above. A network using mailboxes for broadcast is shown in Section 3.4.

3.3. Broadcast channels

A third approach to CSP-style broadcasting is to introduce dedicated broadcast channels. Such a broadcast channel has semantics identical to the synchronous broadcast message in the first scenario. The difference is that with a broadcast channel there are no point-to-point messages that may be interleaved with broadcasts.
This way the deadlock scenario from the broadcast message cannot occur; the channel provides synchronized communication like a point-to-point channel and may be used in guarded expressions just as an ordinary channel. The downside of this approach is that programmers must use another type of channel. In JCSP this inconvenience is small, as there is by design a large number of channel types; in PyCSP the change is more radical, as the single channel type must be abandoned.

3.4. Broadcast in CSP-theory

In the following section, primed processes (i.e. $S'$) will be used to show that the process can live on afterwards. Emulating a broadcast channel in CSP-theory can be done with $n$ processes. Here $S$ can broadcast a message to all of $P_{0..n}$:

$$S = m!x \rightarrow S'$$

$$P_i = m?x \rightarrow P_i'$$

$$S \parallel \left( \bigparallel_{i=0}^{n} P_i \right)$$

However, this is possible only if we disregard the fact that only two processes are allowed to communicate with each other at a given time according to Hoare's CSP [13]. If we instead want to broadcast, but adhere to the original theory, we have to do something else. We will create a network of processes, where $S$ will act as sender, $B_c$ will act as a broadcast controller, and $P_{0..n}$ will be receivers. They will pass messages using a two-phase commit protocol.

$$S = m!x \rightarrow m_{\text{ACK}} \rightarrow S'$$

$$B_c = m?x \rightarrow \left( \bigparallel_{i=0}^{n} c_i!x \rightarrow c_{i,\text{ACK}} \rightarrow \checkmark \right) ; m_{\text{ACK}} \rightarrow B_c'$$

$$P_i = c_i?x \rightarrow c_{i,\text{ACK}} \rightarrow P_i'$$

$$S \parallel B_c \parallel \left( \bigparallel_{i=0}^{n} P_i \right)$$

Checking the system with FDR [14] in Listing 4, we see that it is deadlock free. A trace was also verified in which the message was received by all processes.
```plaintext
N = 3
PNAMES = {0..N-1}
MSG = {"MSG"}

channel mack
channel m:MSG
channel cack:PNAMES
channel c:PNAMES.MSG

S(x) = m!x -> mack -> S(x)
B = m?x -> (||| i:PNAMES @ c.i!x -> cack.i -> SKIP) ; mack -> B
P(i) = c.i?x -> cack.i -> P(i)

SYSTEM(x) = S(x) [|{|m, mack|}|] (B [|{|c, cack|}|] (||| i:PNAMES @ P(i)))

assert SYSTEM("MSG") :[deadlock free [F]]
assert SYSTEM("MSG") \ {|m, mack, cack|} :[has trace [T]]: <c.0."MSG", c.1."MSG", c.2."MSG">
```

Listing 4. FDR3-verified broadcasting CSP system.

This of course means that the broadcast controller must know how many receivers are present in the system and be able to pass messages to all of them on their respective channels. If we want to allow any process to be the writer at a given time, we can model this as alternation on the communication:

\[ P_i = \left( c_i!x \rightarrow c_{i,\text{ACK}} \rightarrow P'_i \right) \;\square\; \left( c_i?x \rightarrow c_{i,\text{ACK}} \rightarrow P''_i(x) \right) \]

\[ B_c = \square_{i=0}^{n} \left( c_i?x \rightarrow \left( \bigparallel_{\substack{j=0 \\ j \neq i}}^{n} c_j!x \rightarrow c_{j,\text{ACK}} \rightarrow \checkmark \right) ; c_{i,\text{ACK}} \rightarrow B'_c \right) \]

\[ B_c \parallel \left( \bigparallel_{i=0}^{n} P_i \right) \]

Here all the processes either write on their channel or read from their shared channel with \( B_c \). The equivalent mailboxing scenario can be made with a mailbox process for each receiving process.
Here the writer will be able to continue once all mailboxes have ACK'ed back that they have received the message:

\[ S = b!x \rightarrow b_{\text{ACK}} \rightarrow S \]

\[ B = b?x \rightarrow \left( \bigparallel_{i=0}^{n} m_i!x \rightarrow m_{i,\text{ACK}} \rightarrow \checkmark \right) ; b_{\text{ACK}} \rightarrow B \]

\[ M_i(x : xs) = \left( m_i?y \rightarrow m_{i,\text{ACK}} \rightarrow M_i(x : xs : y) \right) \;\square\; \left( c_i!x \rightarrow c_{i,\text{ACK}} \rightarrow M_i(xs) \right) \]

\[ P_i = c_i?x \rightarrow c_{i,\text{ACK}} \rightarrow P_i \]

\[ S \parallel B \parallel \left( \bigparallel_{i=0}^{n} M_i(\emptyset) \parallel P_i \right) \]

where : means CONS, so \( x : xs \) is the head and tail of the message list. If one does not need the guarantee that each mailbox has received the message, the ACK steps can be omitted. This has been tested with FDR and found to be deadlock free. A trace was also found in which each process had received the message. The mailboxing works for arbitrarily large buffers; in Listing 5 it is shown with a buffer size of 5.
```plaintext
N = 3
MAXBUFFER = 5
PNAMES = {0..N-1}
MSG = {"MSG"}

channel back
channel b:MSG
channel cack, mack:PNAMES
channel c, m:PNAMES.MSG

S(x) = b!x -> back -> S(x)
B = b?x -> (||| i:PNAMES @ m.i!x -> mack.i -> SKIP) ; back -> B

M(i, <>) = m.i?y -> mack.i -> M(i, <y>)
M(i, xss) = if #xss > MAXBUFFER then Ml(i, xss) else Ms(i, xss)
Ml(i, <x>^xss) = c.i!x -> cack.i -> M(i, xss)
Ms(i, xss) = Ml(i, xss) [] (m.i?y -> mack.i -> M(i, xss^<y>))

P(i) = c.i?x -> cack.i -> P(i)

MAILBOX = ||| i:PNAMES @ M(i, <>)
RECV = ||| i:PNAMES @ P(i)
COMM(x) = S(x) [|{|b, back|}|] B
SYSTEM(x) = (COMM(x) [|{|m, mack|}|] MAILBOX) [|{|c, cack|}|] RECV

assert SYSTEM("MSG") :[deadlock free [F]]
assert SYSTEM("MSG") \ {|b, back, m, mack, cack|} :[has trace [T]]: <c.1."MSG", c.0."MSG", c.2."MSG">
```

Listing 5. FDR3-verified mailboxing CSP system.

4. Related work

Both JCSP [1] (Java Communicating Sequential Processes) and CHP [2] (Communicating Haskell Processes) offer a form of broadcast channel. In the former, a "one-to-many" channel is implemented. This must know the number of readers when initialized, and works by having two barriers, one before the read and one after, so that all readers are done reading before the writer is released. The latter is implemented in a similar way, where each reader enrolls to receive the same value from a single writer.

5. Conclusion

Broadcasting in CSP-style has been frequently discussed, and in the SME work it was replaced by a bus-style channel [15]. While adding broadcasting to CSP-style programming appears appealing and straightforward, it has not yet been added to any CSP-style library.
In this work we have outlined three approaches to adding broadcasting to CSP, and concluded that while all are possible, only the explicit broadcast channel is able to provide what the authors consider CSP-style behavior together with the commonly expected functionality of a broadcast operation. The need for broadcasting in CSP-style libraries is still an issue that must be investigated further; it is not obvious that broadcasting has a general use in the applications where CSP is commonly used, and it might be reserved for more traditional HPC-style applications.

References
The CloudMdsQL Multistore System

Boyan Kolev, Carlyna Bondiombouy, Patrick Valduriez (Inria and LIRMM, Montpellier, France; firstname.lastname@inria.fr); Ricardo Jiménez-Peris (LeanXcale and UPM, Madrid, Spain); Raquel Pau (Sparsity Technologies, Barcelona, Spain); José Pereira (INESC TEC and U. Minho, Braga, Portugal)

HAL Id: lirmm-01288025, https://hal-lirmm.ccsd.cnrs.fr/lirmm-01288025, submitted on 14 Mar 2016

ABSTRACT

The blooming of different cloud data management infrastructures has turned multistore systems into a major topic in the current cloud landscape. In this demonstration, we present the Cloud Multidatastore Query Language (CloudMdsQL) and its query engine. CloudMdsQL is a functional SQL-like language, capable of querying multiple heterogeneous data stores (relational and NoSQL) within a single query that may contain embedded invocations to each data store's native query interface. The major innovation is that a CloudMdsQL query can exploit the full power of local data stores, by simply allowing some local data store native queries (e.g. a breadth-first search query against a graph database) to be called as functions, and at the same time be optimized.
Within our demonstration, we focus on two use cases, each involving four diverse data stores (graph, document, relational, and key-value) with their corresponding CloudMdsQL queries. The query execution flows are visualized by an embedded real-time monitoring subsystem. The users can also try out different ad-hoc queries, not necessarily in the context of the use cases. Keywords Cloud; multistore system; heterogeneous data stores; SQL and NoSQL integration. 1. INTRODUCTION The blooming of different cloud data management infrastructures, specialized for different kinds of data and tasks, has led to a wide diversification of DBMS interfaces and the loss of a common programming paradigm. This makes it very hard for a user to integrate and analyze her data sitting in different data stores, e.g. RDBMS, NoSQL, and HDFS. For example, a media planning application, which needs to find top influencers inside social media communities for a list of topics, has to search for communities by keywords in a key-value store, then analyze the impact of influencers for each community using complex graph database traversals, and finally retrieve the influencers' profiles from an RDBMS and an excerpt of their blog posts from a document database. The CoherentPaaS project\(^1\) addresses this problem by providing a rich platform integrating different data management systems specialized for particular tasks, data and workloads. The platform is designed to provide a common programming model and language to query multiple data stores, which we herewith present. The problem of accessing heterogeneous data sources has long been studied in the context of multidatabase and data integration systems [7]. More recently, with the advent of cloud databases and big data processing frameworks, the solution has evolved towards multistore systems that provide integrated access to a number of RDBMS, NoSQL and HDFS data stores through a common query engine.
Data mediation SQL engines, such as Apache Drill, Spark SQL, and SQL++, provide common interfaces that allow different data sources to be plugged in (through the use of wrappers) and queried using SQL. The polystore BigDAWG [3] goes one step further by enabling queries across "islands of information", where each island corresponds to a specific data model and its language, and provides transparent access to a subset of the underlying data stores through the island's data model. Another family of multistore systems [2,6] has been introduced with the goal of tightly integrating big data analytics frameworks (e.g. Hadoop MapReduce) with traditional RDBMS, at the price of sacrificing extensibility with other data sources. However, since none of these approaches supports the ad-hoc usage of native queries, they do not preserve the full expressivity of an arbitrary data store's query language. But what we want to give the user is the ability to express powerful ad-hoc queries that exploit the full power of the different data store languages, e.g. to directly express a path traversal in a graph database. Therefore, the current multistore solutions do not directly apply to our problem.

In this demonstration, we present the Cloud Multidatastore Query Language (CloudMdsQL), a functional SQL-like language designed for querying multiple heterogeneous databases (e.g. relational and NoSQL) within a single query containing nested subqueries [5]. Each subquery directly addresses a particular data store and may contain embedded invocations to the data store's native query interface. Thus, the major innovation is that a CloudMdsQL query can exploit the full power of local data stores, by simply allowing some local data store native queries (e.g. a breadth-first search query against a graph database) to be called as functions, and at the same time be optimized based on a simple cost model, e.g. by pushing down select predicates, using bind join, performing join ordering, or planning intermediate data shipping. CloudMdsQL has been extended [1] to address distributed processing frameworks such as Apache Spark by enabling the ad-hoc usage of user-defined map/filter/reduce operators as subqueries, yet allowing for pushing down predicates and bind join conditions.

\(^1\) http://coherentpaas.eu

2. LANGUAGE OVERVIEW

The CloudMdsQL language is SQL-based, with extended capabilities for embedding subqueries expressed in terms of each data store's native query interface. The common data model is table-based, with support for rich datatypes that can capture a wide range of the underlying data stores' datatypes, such as arrays and JSON objects, in order to handle non-flat and nested data, with basic operators over such composite datatypes. Queries that integrate data from several data stores usually consist of subqueries and an integration SELECT statement. A subquery is defined as a named table expression, i.e. an expression that returns a table and has a name and signature. The signature defines the names and types of the columns of the returned relation. Thus, each query, although agnostic to the underlying data stores' schemas, is executed in the context of an ad-hoc schema, formed by all named table expressions within the query. A named table expression can be defined by means of either an SQL SELECT statement (that the query compiler is able to analyze and possibly rewrite) or a native expression (that the query engine considers as a black box and delegates its processing directly to the data store).
For example, the following simple CloudMdsQL query contains two subqueries, defined by the named table expressions T1 and T2, and addressed respectively against the data stores rdb (an SQL database) and mongo (a MongoDB database):

```sql
T1(x int, y int)@rdb = ( SELECT x, y FROM A )
T2(x int, z int)@mongo = {* db.B.find( {x: {$lt: 10}}, {x:1, z:1, _id:0} ) *}

SELECT T1.x, T2.z
FROM T1, T2
WHERE T1.x = T2.x AND T1.y <= 3
```

The purpose of this query is to perform relational algebra operations (expressed in the main SELECT statement) on two datasets retrieved from a relational and a document database. The two subqueries are sent independently for execution against their data stores, so that the retrieved relations can then be joined by the common query engine. The SQL table expression T1 is defined by an SQL subquery, while T2 is a native expression (identified by the special bracket symbols {* *}) expressed as a native MongoDB call. Note that subqueries to some NoSQL data stores can also be expressed as SQL statements; in such cases, the wrapper must provide the mapping from relational operators to native calls. In our demonstration, unlike in the example above, we use an SQL wrapper to query MongoDB, which also benefits from subquery rewriting. CloudMdsQL allows named table expressions to be defined as Python functions, which is useful for querying data stores that have only an API-based query interface. A Python expression yields tuples to its result set, much like a user-defined table function. It can also use the result of other subqueries as input. Furthermore, named table expressions can be parameterized by declaring parameters in the expression's signature.
For example, the following Python expression uses the intermediate data retrieved by T2 to return another table containing the number of occurrences of the parameter v in the array T2.z:

```python
T3(x int, c int WITHPARAMS v string)@python = {*
  for (x, z) in CloudMdsQL.T2:
    yield( x, z.count(v) )
*}
```

A (parameterized) named table can then be instantiated by passing actual parameter values from another native/Python expression, as a table function in a FROM clause, or even as a scalar function (e.g. in the SELECT list). Calling a named table as a scalar function is useful, e.g., to express direct lookups into a key-value data store. Note that parameterization and nesting are also available in SQL and native named tables. In our demonstration, we give an example that involves the Sparksee graph database, and we use its Python API to express subqueries that benefit from all of the features described above. In fact, our initial query engine implementation enables Python integration; however, support for other languages (e.g. JavaScript) for user-defined operations can be easily added.

3. SYSTEM OVERVIEW

The query engine follows a mediator/wrapper architecture. The query compiler decomposes the query into a query execution plan (QEP), which appears as a directed acyclic graph of relational operators where leaf nodes correspond to subqueries for the wrappers to execute directly against the data stores.

3.1 Query Optimization

Before its actual execution, a QEP may be rewritten by the query optimizer. To compare alternative rewritings of a query, the optimizer uses basic cost information exposed by the wrappers in the form of cost functions or database statistics, and a simple cost model. In addition, the query language provides a possibility for the user to define cost and selectivity functions whenever they cannot be derived from the catalog, mostly in the case of native subqueries.
CloudMdsQL uses bind join as an efficient method for performing semi-joins across heterogeneous data stores; it uses subquery rewriting to push the join conditions. For example, the list of distinct values of the join attribute(s), retrieved from the left-hand side subquery, is passed as a filter to the right-hand side subquery. To illustrate it, let us consider the following CloudMdsQL query:

```sql
A(id int, x int)@DB1 = (SELECT a.id, a.x FROM a)
B(id int, y int)@DB2 = (SELECT b.id, b.y FROM b)

SELECT a.x, b.y
FROM b JOIN a ON b.id = a.id
```

The purpose of this query is to perform a join across different data stores; the join condition is specified in the ON clause. The optimizer decides to use the bind join method, with the join condition bound to the right-hand side of the join operation. First, the relation B is retrieved from the corresponding data store using its query mechanism. Then, the distinct values of B.id are used as a filter condition in the query that retrieves the relation A from its data store. Assuming that the distinct values of B.id are b1 ... bn, the query to retrieve the right-hand side relation of the bind join uses the following SQL approach (or its equivalent according to the data store's query language), thus retrieving from A only the rows that match the join criteria:

```sql
SELECT a.id, a.x FROM a WHERE a.id IN (b1, ..., bn)
```

The way to do the bind join analogue for native/Python queries is through the use of a JOINED ON clause in the named table signature. For example, if A is defined as the Python function below, as A.id participates in an equi-join, the values b1 ...
bn will be provided to the Python code through the iterator Outer:

```python
A(id int, x int) JOINED ON id@DB1 = {*
  for id in CloudMdsQL.Outer:
    yield ( id, db.get_x(id) )
*}
```

3.2 Query Engine Implementation

For the current implementation of the query engine, we modified the open-source Apache Derby database to accept CloudMdsQL queries and transform the corresponding execution plan into Derby SQL operations. We developed the query planner and the query execution controller and linked them to the Derby core, which we use as the operator engine. Derby allows extending the set of SQL operations by means of CREATE FUNCTION statements. This type of statement creates an alias, with an optional set of parameters, to invoke a specific Java component as part of an execution plan. Thus, for each named table expression in a query, a table function is created dynamically, which invokes the corresponding wrapper as a Java class. Derby thereby handles global execution, delegating local optimization and execution to the underlying data stores. As a second step, the query engine evaluates which named expressions are queried more than once and must be cached in the temporary table storage, which will always be queried and updated from the specified Java functions to reduce the query execution time. Finally, the last step consists of translating all operation nodes that appear in the execution plan into a Derby-specific SQL execution plan.
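The bind join rewriting described in Section 3.1 can be sketched as a small, self-contained Python toy model (the table contents, the `store_a`/`bind_join` names, and the dict-based representation are ours, not part of CloudMdsQL): the distinct join keys of the already-retrieved left-hand relation are pushed as an IN-style filter into the subquery for the right-hand relation, so that only matching rows are shipped.

```python
# Toy model of a bind join: tables are lists of dicts, and a "data
# store" is just a function that accepts an optional key filter.
def store_a(id_filter=None):
    rows = [{"id": i, "x": i * 10} for i in range(1000)]
    if id_filter is not None:                             # the pushed-down
        rows = [r for r in rows if r["id"] in id_filter]  # a.id IN (...) filter
    return rows

def bind_join(left_rows, right_store, key):
    # 1. the left-hand relation has already been retrieved (left_rows);
    # 2. push its distinct key values into the right-hand subquery;
    distinct_keys = {r[key] for r in left_rows}
    right_rows = right_store(id_filter=distinct_keys)
    # 3. join locally; every shipped right-hand row matches by construction.
    by_key = {r[key]: r for r in right_rows}
    return [{**l, **by_key[l[key]]} for l in left_rows if l[key] in by_key]

b = [{"id": 3, "y": "u"}, {"id": 7, "y": "v"}]
joined = bind_join(b, store_a, "id")
print(joined)  # only 2 of the 1000 rows of A are shipped and joined
```

The payoff mirrors the paper's argument: the right-hand store evaluates its (possibly expensive) subquery only for the keys that can actually contribute to the result.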
They will also be able to try out custom CloudMdsQL queries, to follow their corresponding query execution plans, and to monitor their execution flow through X-Ray [4], a subsystem of the CoherentPaaS platform for real-time visualization of performance and resource usage, integrated with all the components of the platform (the query engine and the underlying data stores). Scenario 1. The first use case aims at finding the communities in a social network, for a specific set of topics, together with their top influencers. Marketing companies are interested in discovering the people they need to convince about the quality of a specific brand. The dataset of this use case is a sample of Twitter, but the approach also works with other social networks like Facebook or blogs. The application runs a Twitter listener on a set of topics in real time; it modifies the database for each tweet it receives. The schema of this application contains a generic entity called Document to store text items (tweets, messages, etc.), which can appear as copies or references. An Entity (person or company) is an author of a document or a mention of a social-network account. The interactions of people in social networks, through copies, references or mentions, can be understood as a graph of influences. In other words, we can infer who influences whom and about what. These Influences and the Communities are incrementally computed when a new tweet arrives at the application, and thus these concepts are part of the application schema. The specification of the main query $Q_1$ the application uses is as follows: given a set of keywords $k_1$, $k_2$, $k_3$, find the 10 biggest communities and, for each community, its top 20 influencers. For each of these influencers, the system must return the number of influenced entities inside the community, the influencer's id, name and account creation date, and the last published document.
In order to implement this use case, we use a graph database (Sparksee) to store the graph of Influences and compute the Communities; a relational database (MonetDB) for all the basic information about Entities and Documents (metadata only); a document database (MongoDB) to store the Document contents; and a key-value data store (HBase) to index communities per keyword. Following the execution plan for the CloudMdsQL query $Q_1$, the query engine first invokes an HBase query to retrieve the communities previously computed for a specific keyword; then, for each community, it runs a Sparksee query using the Sparksee Python API to find the top 20 influencers, the number of influenced entities inside the community, and the maximum influence propagation depth. Finally, the basic information of each influencer (id, name, account creation date) and the last published document is retrieved by running queries against MonetDB and MongoDB. Figure 1 summarizes the described execution plan using a notation where each box represents a table expression as a data store subquery, with its signature and a fragment, (pseudo)statement, or description of the subquery.

Figure 1. Execution plan for $Q_1$.

For this execution plan, query optimization plays an important role in assigning the bind join method to all the join operations. The reason is that the communities relevant to the keywords $k_1$, $k_2$ and $k_3$ are always few, and thus the Sparksee query is evaluated only for a few communities, which significantly reduces the number of executions of expensive graph computations. Analogously, using bind join to retrieve the latest documents only for the filtered influencers increases the overall efficiency significantly, by pushing bind join conditions into the MonetDB and MongoDB subqueries, which take advantage of the existing indexes in both databases.
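The bind join strategy described above can be illustrated with a small sketch (function and variable names are ours, not the engine's implementation): the distinct join-key values of the small outer side are pushed as a filter into the inner subquery, so the expensive inner computation runs only for those keys instead of scanning the whole inner relation.

```python
def bind_join(outer_rows, key, inner_query):
    """inner_query(keys) -> {key: [inner_rows]}; invoked once with the
    join-key values bound from the outer side."""
    keys = {row[key] for row in outer_rows}
    inner_index = inner_query(keys)          # subquery sees only the bound keys
    for row in outer_rows:
        for match in inner_index.get(row[key], []):
            yield {**row, **match}


# Outer side: the few communities selected for the keywords.
communities = [{"cid": 1, "kw": "k1"}, {"cid": 2, "kw": "k2"}]


# Inner side: stands in for the expensive per-community graph computation.
def influencers_for(cids):
    graph = {1: [{"inf": "alice"}], 2: [{"inf": "bob"}], 3: [{"inf": "carol"}]}
    return {c: graph[c] for c in cids if c in graph}


joined = list(bind_join(communities, "cid", influencers_for))
print(joined)
```

Community 3 is never touched: the inner query is evaluated only for the two communities bound from the outer side, which is exactly why bind join pays off when the outer side is small.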
Note that the MongoDB subquery is expressed in SQL, but the wrapper maps its sub-plan to a chain of invocations of the MongoDB native API. The results of this query contain a nested level of information, and the ranking of the suggested communities and influencers is important. For this reason, the $Q_1$ results are shown using a chart (see Figure 2) where the outer level of circles represents communities and the inner one corresponds to the influencers of those communities. The sizes of the community circles correspond to the relevance of the specified keywords to a community, while the sizes of the influencer circles correspond to the impact a person has on the community regarding the keywords. The query execution can be monitored using X-Ray, the integrated system for real-time monitoring and analysis (see Figure 3), where the user can view details for each operation running within the process, including relative start/end times of operation executions, intermediate cardinalities, rewritten queries, etc.

Figure 2. Visualization of communities and influencers.

Figure 3. Monitoring of the query execution.

Scenario 2. The second use case application recommends reviewers for a specific European project, taking into account the DBLP and CORDIS knowledge bases. DBLP is a bibliographic dataset focused on computer science that currently contains 1.8 million publications and 1 million authors. CORDIS is the European projects dataset, which currently contains 40000 projects and 1000 institutions. The main query $Q_2$ is one of the key functionalities of a system built by Sparsity-Technologies to offer recommendations for researchers. The system visualizes the results in a web browser using HTML5, because it provides a clear way to analyze the results. The schema of this information system contains Projects, whose participants are Institutions, one of which is the coordinator.
On the other hand, part of the schema stores a bibliographic dataset, which contains Documents (papers) and their authors (People), with the corresponding affiliations (Institutions) for each year. This information system also indexes Projects and Documents by Keywords, and determines the top expert Institutions and People for each Keyword. The application and the query $Q_2$ use a graph database (Sparksee) to resolve conflicting interests with the members of the project, because graph databases are efficient at solving paths/joins; a relational data store (LeanXcale) to store and retrieve the complete list of fields about the recommended reviewers; a key-value data store (HBase) to find the top experts in a list of topics, taking advantage of a fast search by keywords; and a document data store (MongoDB) to retrieve the contents of the last paper produced by the suggested reviewers. The specification of $Q_2$ is as follows: given a specific project $p$ and a set of keywords $k_1, k_2, k_3$, find the people that have never worked in the same institutions as the participants of $p$ and that are also experts in $k_1, k_2, k_3$. For these people, return their name, last affiliation and last paper title.

5. ACKNOWLEDGEMENTS

This research has been partially funded by the European Commission under projects CoherentPaaS and LeanBigData (grants FP7-611068, FP7-619606), the Madrid Regional Council, FSE and FEDER, project Cloud4BigData (grant S2013TIC-2894), and the Spanish Research Agency MICIN project BigDataPaas (grant TIN2013-46883).

6. REFERENCES
On the introduction of Quality of Service awareness in legacy distributed applications

R. Canonico*, M. D'Arienzo*, B. Fadini*, S.P. Romano* and G. Ventre**

*Dipartimento di Informatica e Sistemistica, Università di Napoli “Federico II”, Via Claudio 21 – 80125 – Napoli – ITALY
{rcanonico, maudarie, fadini, spromano}@unina.it

** Laboratorio Nazionale ITEM, Consorzio Interuniversitario Nazionale per l'Informatica, Via Diocleziano 328 – 80126 – Napoli – ITALY
giorgio.ventre@napoli.consorzio-cini.it

ABSTRACT

A number of distributed applications require communication services with Quality of Service (QoS) guarantees. Work undertaken within the Internet Engineering Task Force (IETF) has led to the definition of novel architectural models for the Internet with QoS support. According to these models, the network has to be appropriately configured in order to provide applications with the needed performance guarantees. In a first proposal, called Integrated Services, applications need to explicitly interact with network routers by means of a signaling protocol (such as RSVP) in order to enforce QoS on a per-flow basis. The Differentiated Services architecture, on the other hand, addresses scalability by providing performance guarantees to aggregates of flows. In the case of real-time applications, a hybrid model capable of putting together micro-flow guarantees in the access network and aggregate management in the backbone seems to represent the ideal tradeoff between strict performance and scalability. In this scenario, giving applications a means to interact with the underlying QoS services is of primary importance. Hence, several special-purpose APIs have been defined to let applications negotiate QoS parameters across QoS-capable networks. However, so far, none of these APIs is available to programmers across different operating environments. We believe that such features should be embedded in programming environments for distributed applications.
In this work we present how we included QoS control features in a programming language that has for years been adopted for the development of network-based applications: Tcl. We present QTcl, an extension of Tcl which provides programmers with a new set of primitives fully compliant with the standard SCRAPI programming interface for the RSVP protocol. We made QTcl highly portable, in that it enables standard QoS negotiation to be performed seamlessly on the most common operating systems.

Keywords
Distributed Applications, Quality of Service, Programming Language.

1. INTRODUCTION

In the last few years, the availability of new communication technologies has led to the development of a number of distributed multimedia applications. In particular, the availability of multicast support for IP applications has laid the groundwork for the adoption of new paradigms for multimedia communication and Computer Supported Collaborative Work, within both local and wide area networks. However, this process has been highly chaotic, being focused more on satisfying specific needs than on designing open systems with high adaptiveness and interoperability features. A large number of applications, in particular multimedia applications, consist of a set of pre-existing components (building blocks) glued together by a common GUI. To develop applications of this kind, scripting languages have proved to be better suited than system programming languages [1]. Multimedia services impose a number of constraints on communication networks. Experience over the Internet has shown a fundamental technical problem: real-time applications do not work well across the network because of variable queuing delays and congestion losses. Before real-time applications can be broadly used, the Internet infrastructure must be modified to support more stringent QoS guarantees, mostly relating to the provision of some kind of control over end-to-end packet delays.
Building global-scale distributed systems with predictable properties is one of the great challenges for computer systems engineering in the new century. Quality of Service requirements will be critical for distributed applications whose performance depends mainly on the characteristics of the communication service provided by the networking infrastructure [2]. Taking into account the current proposals stemming from the Internet research community, we envision a scenario where direct interaction between applications and the underlying network infrastructure is definitely needed, at least in the case of real-time communication. Hence, the need arises to define programming interfaces suitable for deploying QoS-aware applications in both the Integrated Services [3] and the hybrid (i.e. IntServ/DiffServ) IETF models. According to these models, applications can, with the help of an appropriate signaling protocol like RSVP (Resource reSerVation Protocol) [4], request communication services with per-flow bounds on communication throughput or end-to-end latency. However, in spite of the impressive advances made in the design and implementation of communication services with performance guarantees, there has been a lack of effort to bring such features to the application level in an organic way. When developing distributed software with QoS awareness, programmers generally have to adopt ad-hoc solutions for specific operating systems and environments. We believe that this brings serious limitations to software reuse and portability, and therefore that there is the need for a more flexible and open approach. One possible alternative is to provide support for QoS in modern scripting languages, which are characterized by broad adoption in the development of networked and distributed applications. In this paper we present QTcl, an extension of Cornell’s Tcl-DP [11].
QTcl extends the Tcl-DP interpreter by providing a set of new commands, according to the standard SCRAPI application programming interface defined by the IETF [5]. By exploiting the portability features of Tcl, QTcl represents the only available cross-platform SCRAPI implementation for the pursuit of quality of service on both Unix-based and Microsoft-based operating systems. We feel that this effort to improve software development on top of advanced network architectures is only at the beginning, and that there is the need for the design of alternative and more open solutions. The goal of this paper is therefore to present our project rationale and the process that has led to the development of QTcl. The rest of the paper is organised as follows. In section 2 we briefly describe the QoS programming interface, and in particular we focus on a new programming interface made available for Microsoft Windows systems. In section 3 we present QTcl and the set of new commands that have been included in the language. In section 4 we illustrate how we have implemented QTcl as an extension to Cornell’s Tcl-DP. Section 5 presents a distributed VoD application where QTcl is used to protect data and control flows in a QoS-enabled internetwork. Finally, we discuss our conclusions in section 6.

2. STANDARD QOS PROGRAMMING INTERFACES

For several years, IP has played the most important role in global internetworking. Its connectionless nature has proved to be one of the keys of its success.
Based on this assumption, the IETF Integrated Services working group has specified a QoS control framework in order to provide new applications with the appropriate support. This framework proposes an extension to the Internet architecture and protocols which aims at making integrated services broadly available across the Internet. The key assumption on which the reference model for integrated services is built is that network resources (first of all bandwidth) must be explicitly managed in order to meet application requirements. The overall goal in a real-time service, in fact, is that of satisfying a given set of application-specific requirements, and it seems clear that guarantees are hardly achievable without reservations. Thus, resource reservation and admission control play an extremely important role in the global framework. The new element that arises in this context, with respect to the old (non-real-time) Internet model, is the need to maintain flow-specific state in the routers, which must now be capable of taking an active part in the reservation process. The RSVP protocol [4] is based on an exchange of messages classified according to the entity that supplies them (sender and receiver) or modifies them (intermediate network elements). The information supplied by each sender, conveyed in PATH messages, concerns the type of traffic it is going to generate. Receivers send RESV messages, which carry the service class to be used and the corresponding quality of service parameters. In this framework, applications need to be modified in order to interact with the QoS network. The IETF has defined RAPI [10], an application programming interface compliant with the RSVP Functional Specification. It is a user-level library written in C, which can be used by applications that aim to exploit the QoS functionalities made available by a network reservation protocol like RSVP.
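As a toy illustration of this message exchange (the class names are ours, not part of RSVP or RAPI), the following models PATH/RESV signaling with per-router admission control: the sender advertises its traffic in a PATH message, the receiver answers with a RESV carrying the service class and rate, and each router on the path keeps flow-specific state and admits or rejects the reservation.

```python
from dataclasses import dataclass

@dataclass
class Path:
    sender: str
    avg_rate: int          # advertised token-bucket average rate, bytes/sec

@dataclass
class Resv:
    service: str           # "gs" (Guaranteed Service) or "cl" (Controlled Load)
    rate: int

class Router:
    """Keeps flow-specific reservation state, as the IntServ model requires."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = 0

    def admit(self, resv):
        # Simple admission control: accept only if capacity remains.
        if self.reserved + resv.rate <= self.capacity:
            self.reserved += resv.rate
            return True
        return False

# Sender advertises its traffic; receiver chooses the service class.
path = Path(sender="10.0.0.1", avg_rate=100_000)
resv = Resv(service="gs", rate=path.avg_rate)

# The RESV travels back hop by hop; every router must admit the flow.
routers = [Router(capacity=1_000_000), Router(capacity=150_000)]
ok = all(r.admit(resv) for r in routers)
print("reservation", "established" if ok else "rejected")
```

A second identical reservation would be rejected by the 150 kB/s router, which is the admission-control behaviour the text describes.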
RAPI calls let an application interact with a local RSVP daemon process in order to establish a communication with QoS guarantees. The RAPI interface is a first step towards the integration of communication services with QoS guarantees into applications; yet, its use is somewhat complex, since the application programmer must be aware of a number of parameters concerning the reservation. To cope with such problems, the IETF has proposed a simpler programming interface, layered on top of RAPI and called SCRAPI [5]. SCRAPI provides three main functions:

- scrapi_sender, to be used by the sender of a data stream associated to an RSVP session;
- scrapi_receiver, to be used by the receiver; and
- scrapi_close, to close an RSVP session.

SCRAPI relieves users from taking care of all the details related to flow characterization in terms of a formal token bucket description: the only required parameter is the average rate of the multimedia flow for which a reservation has to be enforced. Such a parameter might be retrieved in an automatic fashion in those cases where interaction with the service provider is made transparent to the end user. This can be achieved, for example, by exploiting a metadata repository associated with the multimedia content [13].

2.1 Interface W_SCRAPI

SCRAPI is only a programming interface to access the RSVP service, which must be implemented by a proper operating system module. In UNIX-like systems, this is usually a daemon process, which runs with root privileges on the end systems. For Microsoft Windows operating systems, the situation is a little more complex. QoS functionality and RSVP services are embedded in Win98. For WinNT, a distribution from Intel provides extensions to Winsock2 that are specific to the RSVP protocol [14]. In Win2000, RSVP is seen as a particular service that can be launched.
At the time of this writing, no implementation of the SCRAPI interface for these systems was available. To let developers take advantage of QTcl when writing portable multimedia applications, we implemented a new interface for Microsoft Windows systems. In designing our interface, we followed the same philosophy that had led to the definition of the SCRAPI interface. The W_SCRAPI interface maps the SCRAPI calls onto the corresponding QoS service calls on Windows-based hosts. In other words, we ported the SCRAPI interface to Windows-based systems.

```
dp_scrapiSender   dest_hostname dest_port source_hostname source_port bandwidth protocol
dp_scrapiReceiver dest_hostname dest_port source_hostname source_port service protocol
dp_scrapiStatus   dest_hostname dest_port protocol
dp_scrapiClose    dest_hostname dest_port source_hostname source_port
```

Using these commands, it is possible to manage the whole reservation setup process. The bandwidth parameter must be expressed in bytes/sec. The service parameter can be one of two values: cl, indicating Controlled Load [7], or gs, indicating Guaranteed Service [8]. Finally, the protocol parameter can be either tcp or udp. dp_scrapiSender opens an RSVP session and starts PATH message transmission from the source host to the destination host. PATH messages are refreshed every 30 seconds. dp_scrapiReceiver is invoked by a receiver in order to make a reservation request. The receiver specifies the desired QoS and class of service (Guaranteed Service or Controlled Load) according to the information contained in the PATH message. This request is forwarded to the sender across the network via a RESV message. After sending a RESV, the receiver waits for a confirmation of successful reservation from the sender for at most 10 seconds, as set by a specific timer; however, even if the timer expires, the reservation process goes on.
dp_scrapiStatus allows verifying the current status of a session, according to the simplified error model available in the SCRAPI interface [5]. dp_scrapiClose is the function called to tear down an RSVP session, both in reception and in transmission. Figure 2 shows a simple application made of a sender process and a receiver process. The two processes should be executed on different hosts connected by an RSVP-enabled internetwork. The sender process invokes the dp_scrapiSender command to start the transmission of PATH messages and then waits in a loop until the reservation is completed. The receiver process, instead, issues the dp_scrapiReceiver command to start the transmission of RESV messages and waits for the reservation to be completed. As soon as the reservation is set up, the sender starts transmitting UDP messages, 1480 bytes in length. The receiver, in turn, measures the time needed to receive a number N of such messages and estimates the received throughput. This simple application can be tested in order to verify that the achieved throughput is independent of the network conditions, as long as the routers support QoS functionality.

---

**Sender.tcl**

```tcl
#!/home/qtcl/bin/tclsh8.0
package require dp

set sender [dp_connect udp -host 143.225.229.105 -port 3000 \
    -myaddr localhost -myport 5000]

dp_scrapiSender 143.225.229.105 3000 143.225.229.116 5000 100000 udp

# poll the session status until the reservation is in place
set status ""
while {$status != "green"} {
    after 1000
    set status [dp_scrapiStatus 143.225.229.105 3000 udp]
}
puts $status

set pkt ""
for {set i 0} {$i < 1480} {incr i} {
    append pkt x
}
puts "Press ctrl-C to interrupt ...."
while {1} {
    set len [dp_send $sender $pkt]
}
close $sender
```

---

**Receiver.tcl**

```tcl
#!/home/qtcl/bin/tclsh8.0
package require dp

# receive at least N bytes, in messages of arbitrary size
proc bench { N } {
    global receiver
    set count 0
    while {$count < $N} {
        set rcv [dp_recv $receiver]
        incr count [string length $rcv]
    }
}

set receiver [dp_connect udp -myport 3000]
fconfigure $receiver -blocking 1

dp_scrapiReceiver 143.225.229.105 3000 143.225.229.116 5000 gs udp

# poll the session status until the reservation is in place
set status ""
while {$status != "green"} {
    after 1000
    set status [dp_scrapiStatus 143.225.229.105 3000 udp]
}
puts $status

set N 10485760
set T [lindex [time { bench $N }] 0]
set BW [format "%2.3f" [expr $N*8.0/$T]]
puts "Elapsed time: $T microseconds"
puts "Estimated bandwidth: $BW Megabit/sec"
close $receiver
```

---

4. QTcl implementation

QTcl has been conceived as a tool for supporting the development of distributed applications with simple QoS requirements. In order to minimize the development effort, we tried to exploit some useful features that were already available in the Tcl-DP extension developed at Cornell University [11]. In particular, we found the dp_RPC mechanism particularly suitable to support the receiver-initiated reservation mechanism of RSVP. Hence, QTcl has been developed starting from the original Tcl-DP source distribution. We then extended the Tcl interpreter by creating a set of C functions that implement the SCRAPI primitives. We briefly describe the steps we followed to add the new command dp_scrapiSender to Tcl-DP. First of all, the new command has to be registered with the interpreter, which was done by modifying two files. In file dpInit.c, we defined the dp_scrapiSender command and the related procedure:

```c
static DpCmd commands[] = {
    ...
    "dp_scrapiSender", dp_createSender,
    ...
};
```

In file dpInt.h, we added the prototype of dp_createSender:

```c
EXTERN int dp_createSender _ANSI_ARGS_((ClientData clientData,
    Tcl_Interp *interp, int argc, char **argv));
```

Finally, we implemented the new procedure in a new file (which we called scrapi_tcl.c):

```c
int dp_createSender(ClientData clientData, Tcl_Interp *interp,
                    int argc, char **argv)
{ ... }
```

The last step was to update Makefile.in to take the new file scrapi_tcl.c into account:

```
...
OBJS = $(OBJ_DIR)/dpChan.o $(OBJ_DIR)/dpCmds.o \
       $(OBJ_DIR)/scrapi_tcl.o $(OBJ_DIR)/dpInit.o
...
```

In the next section we will describe some trials we carried out in a heterogeneous scenario.

5. A QOS-AWARE DISTRIBUTED MULTIMEDIA APPLICATION BASED ON QTCL

To show how effective QTcl can be in distributed software development, and how easily RSVP bandwidth can be managed in a real application, we used the extended scripting language to add the ability of making network resource reservations to DiVA, a distributed multimedia application for cooperative video distribution over the Internet developed by our research group. DiVA is capable of playing and controlling remote audio/video documents for a community of users in a synchronized way, in streaming mode. We gave DiVA both the capability to dynamically adapt to current network conditions and to actively interact with QoS-capable network architectures (via standard APIs) in order to guarantee the users a specified level of service.

Fig. 3. Data streams generated by the DiVA application and associated to RSVP sessions.

DiVA is a complex multimedia application, since it involves the transmission of multiple data flows conveying the content, control commands and synchronization information, according to a layered software architecture. Figure 3 shows the relevant data streams produced by the DiVA application between a streaming server host and a client host.
In particular, the UDP audio and video streams are transmitted downstream from the server on the right to the client on the left, while two TCP bi-directional streams are used to exchange control (console) and synchronization (LTS – Logical Time System) information [15]. Our approach is to exploit the capabilities of QTcl as a scripting language to solve all the issues related to resource reservation for the different media according to a broker-based approach, in which all the interactions between the application and the network are performed by a new, independent software module added to the application client interface. With this approach, no intervention was required in the software modules handling data and control communication. When the application client is launched, the new module for resource reservation takes care of interacting with the network infrastructure: because the RSVP protocol is receiver-initiated, the reservations are made by the clients on the basis of the sender specifications. Hence, server applications are in charge of starting the communications by sending the PATH messages with traffic envelope information. Our purpose was to make this procedure automated and seamless to the users. To do that, we fruitfully used the RPC call available in Tcl-DP: in the following lines, we report some Tcl code from the DiVA client application that shows how reservations are made automatically on the basis of default values.

```tcl
# RPC to request PATH messages from the server
# Wait for .1 sec
after 100
if {[catch {dp_RPC $sockV -timeout 60000 ScripSndPath \
        SC_UdpSrcV SC_UdpDestV $bVide $service} error]} {
    # server error: close the RPC channel and report the failure
    catch {diva_CloseRPC $sockV}
    error "Unable to have full server connection.\n\nconnection reports: $error"
}

# Send reservation request to the server
after 100
dp_scrapiResv [lindex $SC_UdpDestV 0] [lindex $SC_UdpDestV 1] \
    [lindex $SC_UdpSrcV 0] [lindex $SC_UdpSrcV 1] $service udp
```

In these code lines, ScripSndPath is an RPC call to a server-side procedure that causes the forwarding of PATH messages with the video traffic specifications. Then, by using the dp_scrapiResv procedure, the client application can eventually complete the reservation with the correct parameters. We tested the application in a testbed formed by two different Local Area Networks, connected by means of a WFQ router implemented in FreeBSD [6]. The router was connected to the first LAN through a 100 Mb/s Fast Ethernet card and to the second LAN through a 10 Mb/s Ethernet card. A host in the 10 Mb/s LAN acted as a client, while another host in the 100 Mb/s LAN ran the DiVA video server. Hence, multimedia traffic flowed through the WFQ router. As we already mentioned, in our prototype the bandwidth values used to set up reservations for the video and audio streams were determined empirically for each archived document, by observing the traffic produced by the application while streaming it. These values were provided by the final users to the DiVA client software. In a real-world application, however, users cannot be expected to be aware of the QoS requirements of multimedia documents. We expect therefore that these values might be retrieved automatically by the client application in the form of metadata associated with the document, as in the GESTALT architectural model [9].

6. DISCUSSION AND CONCLUSIONS

An increasing number of distributed applications can benefit from the availability of improved communication services in RSVP-enabled IP internetworks, by acting in a proactive way instead of passively adapting to the available QoS offered by current best-effort services.
We believe that this support is helpful for a wide range of modern distributed applications. In this paper we have presented QTcl, a QoS control API which is compliant with the IETF SCRAPI interface and has been designed as an extension of the Tcl scripting language. QTcl appears to be a good tool for the development of novel QoS-aware applications: in fact, extending a well-established language appears to be a good initial solution to the problem of developing distributed software suited to the exploitation of new network infrastructures. The proposed extensions try to hide from the programmer, as much as possible, the need for a detailed knowledge of the technicalities associated with the reservation of network resources according to the resource control model available in the communication architecture. We have shown that it is possible to extend an existing distributed multimedia application by adopting a broker-based approach, where the inclusion of a new module enables the application to interact with QoS-aware networks while avoiding extensive intervention in the existing code. It is clear, however, that with the large deployment of these infrastructures it will be necessary to follow a different approach, consisting in extending the semantics of new languages to cover the issues related to network programming with guaranteed communication performance. We believe that QTcl represents an initial, concrete answer to the needs of programmers of real-time and multimedia distributed applications, as shown by the video distribution application developed with our scripting language.

7. REFERENCES
The following full text is a preprint version which may differ from the publisher's version. For additional information about this publication click this link. http://hdl.handle.net/2066/84205 Please be advised that this information was generated on 2019-08-20 and may be subject to change.

Narrating Formal Proof (Work in Progress)

Carst Tankink, Herman Geuvers, James McKinna
Institute for Computing and Information Science, Radboud University Nijmegen, The Netherlands

Abstract

Building on existing work to proxy interaction with proof assistants, we have considered the problem of how to augment this data structure to support commentary on formal proof development. In this setting, we have studied extracting commentary from an online text by Pierce et al. [11].

Keywords: Coursebooks, Proof Assistants, Proof Communication

1 Introduction

Much research in user interfaces for Proof Assistants (PAs) has gone into facilitating the authoring of proof documents. However, the communication of proof scripts to outsiders, such as mathematicians or students, has in our view not received the attention it deserves. In this paper we consider a method and tools for enriching a proof document for communication to such third parties. The enhancement of the document consists of adding a marked-up narrative to the document and including the PA responses for dynamic display. As a running example, we consider the writing of coursebooks used in teaching with a PA, especially the course notes of Pierce et al. on Software Foundations [11]. From these course notes, we can extract the markup using Coqdoc, and insert the PA responses using the concept of movies, introduced by us in a recent paper [12]. In this setting, we briefly sketch how to add editable exercise environments to proof documents.
This paper is electronically published in Electronic Notes in Theoretical Computer Science URL: www.elsevier.nl/locate/entcs 2 Background 2.1 Scenario In this paper, we consider a scenario of communication: an author of a (formal) proof document wants to communicate this to a reviewer, who might not have prior experience in a PA and is definitely not an expert in the system. This restriction on reviewer expertise means that for him to interpret a proof document, one or more of the following should hold: • the proof document is enriched with a high-level narrative, explaining why certain decisions (in design, representation, tactic invocation, etc.) were taken and what their effect is; • in the case of a tactic-based language, the proof document (a proof script in this case) can be loaded in a PA, so the reviewer can evaluate the effects of each tactic on the general proof state; or • the proof language in which the document is written mimics closely the vernacular of informal mathematics. To bring things into focus, we consider specific instances of author, reviewer and PA here: Author The author in this paper will be an author writing a coursebook for use in a computer science curriculum. The book does not necessarily have to teach the use of a PA, but can present a formal model of (a slice of) computer science that is verified by the PA. Reviewer The reviewer then becomes the prime consumer of a coursebook: a student taking the course. We assume the student has no prior experience with the PA used to write the coursebook. PA For concrete examples and tools, we choose Coq [13] as our PA: this choice is motivated by local expertise in the Coq system and tools, and the existence of at least two coursebooks written as a Coq script. These books are “Software Foundations” by Pierce et al. [11] and “Certified Programming with Dependent Types” by Chlipala [4].
Despite this choice, we believe that the techniques illustrated here are also applicable to other PAs, especially tactic-based ones. Choosing a coursebook as a concrete proof document allows us to make some assumptions about the content of such a document: • The non-formal content of the document is structured in chapters, sections, subsections and paragraphs. • The formal content of the document is the underlying ‘spine’ of the document, subservient to the total narrative of the book. At some points, the tactics might be brought to the foreground to be explained or to serve as an example or exercise, but the text explaining it is just as important as the proof script. • To improve a student’s understanding, the coursebook contains exercises. We assume these exercises consist of proofs or definitions that have holes in them, to be filled out by the reader. A coursebook created as a Coq script generally exists in two different forms: (i) A rendered version of the document, in which the narrative is displayed together with the formal content. The rendering is meant to reinforce the reader’s assimilation of the text, using bullet points, emphasis and other markup. (ii) The script itself, loaded in an interface to the PA such as CoqIDE (part of the Coq distribution) or ProofGeneral [1]. This gives an interactive view of the document, allowing the student to step through the tactics and see their effects, as well as fill in holes in exercises. The version displayed in the interface does not have the markup of the rendered version. These two modes of display correspond to the first two ways of assisting a reviewer in understanding a proof document: describing a proof using a high-level narrative and reviewing the proof script dynamically, by loading it in a PA and stepping through the tactics.
Switching between a rendering of a document and the script requires a reader to switch contexts between the renderer and the PA: to our knowledge, no interface to a PA actually renders the documentation of a proof document in a nice way, and the rendering does not incorporate the PA output based on reader focus. Additionally, installing and configuring a PA requires effort of the reviewer, an effort that we have lightened by integrating script and output in a single document, a proof movie. 2.2 Movies A proof movie is a self-contained recording of the interaction between a user and a PA (for further details, please see our recent manuscript [12]). The PA responses can then later be retrieved from the movie without recomputation. The movie can be used to communicate the contents of a proof script without the reader needing to install and configure a PA, nor recompute the proof state. The movie data structure is a list of frames. In its most basic form, a frame ties together the command sent to the PA and the response of the PA to this command. We have implemented the movie as an XML file, with frame, command and response as node types. Watching a movie Watching a movie is done by viewing an HTML rendering of its contents. The script responsible for transforming the XML into HTML is dubbed Moviola, after an editing tool for physical film. The page presents the command part of each frame, creating a view that is similar to the proof script sent to the PA. When the reviewer places his cursor on a command, the corresponding response is obtained from the movie and shown to the reviewer. Watching a movie requires no sophisticated tools: all that is needed is the movie, the XSL script transforming the XML into HTML and a web browser. Additionally, instead of publishing the XML together with an XSL file, a stand-alone XSL processor can also be used to generate an HTML file. This HTML file can then be loaded into the browser.
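The frame-based movie format described above is easy to prototype. The sketch below builds a movie with Python's standard `xml.etree.ElementTree` and looks up a stored response by frame index, as the Moviola viewer does when the reader focuses a command. The node names (`movie`, `frame`, `command`, `response`) follow the paper; the helper functions and the sample interaction are hypothetical.

```python
import xml.etree.ElementTree as ET

# Record an interaction as a movie: a list of frames, each tying a PA
# command to the response it produced (node names follow the paper).
def record_movie(interaction):
    movie = ET.Element("movie")
    for cmd, resp in interaction:
        frame = ET.SubElement(movie, "frame")
        ET.SubElement(frame, "command").text = cmd
        ET.SubElement(frame, "response").text = resp
    return movie

# Retrieve the stored response for the i-th frame: no PA recomputation,
# at the cost of the answer not being (re)certified.
def response_at(movie, i):
    return movie.findall("frame")[i].find("response").text

movie = record_movie([
    ("intros n.", "1 subgoal ..."),
    ("induction n.", "2 subgoals ..."),
])
print(response_at(movie, 1))
```

Serialising the tree with `ET.tostring(movie)` would yield an XML file of this shape that, together with an XSL script, can be published as-is.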
Constructing a movie Construction of a movie can be done either as a post-processing step of a proof script, or interactively. The post-processing of a script is done by splitting up the script into individual commands and sending these commands to the PA. The responses are subsequently recorded into the frame. Interactively constructing a movie is done by giving an author a view of the unfinished movie. In this view, it is possible to insert new commands and edit old ones, while the PA can insert responses to these commands, which are shown to the author, if requested. In this way, the author and the PA cooperate in constructing a movie, consisting of a proof script and the responses to the tactics in the script. The main benefit of the movie is that it cuts out the PA's computation when a reader wants to see the response to a specific command, at the cost of not having a certified answer. The resulting movies are just plain text, however, not enhanced with the pretty rendering provided by tools such as Coqdoc. 2.3 Adding narrative: Coqdoc and others To create pretty-printed documentation for proof scripts, there are broadly two approaches: either one can use specific syntax to write documentation inside the proof script (typically as comments), or one can write a higher-level document from which both script and documentation can be extracted. The latter approach is also known as literate proving and allows the author to write both documentation and proof in tandem. Coqdoc is the Coq version of the first approach. Distributed together with the Coq PA, the tool produces a rendered (in HTML or in \(\LaTeX\)) version of a proof script. This rendered document contains both a pretty-printed version of the commands, and extracts special comments from the document. These comments are taken as a narrative, and rendered as documentation.
To provide some control over the appearance of the documentation, a light (Wikipedia-like) syntax is provided for marking up the narrative. As an example of the second approach, Aspinall, Lüth and Wolff [2] have developed an extension to their PG kit architecture based on literate proving. The extension is designed around a central document, that can be manipulated by tools and the proof author. The tools can extract relevant information from the text, and also insert information back into the document, through the concept of backflow. Example tools are a PA, that takes tactics and can insert proof state, or \LaTeX-related tools, that create PDF out of the narrative. To insert PA data inside the narrative, an author can use a command to insert a placeholder for the proof state, which is later replaced by the PA’s actual output. Both of these approaches could produce HTML pages, but the pages are static renditions of the script, only containing pretty-printing to support communication and teaching. In the next section, we investigate how we might improve the Coqdoc-produced pages by adding a movie-reel to them. Another interesting problem arises in both approaches when a new author wants to narrate a script that is provided ‘read only’: such a scenario, which might occur when documenting a third-party library, is supported by neither tool, although the PG kit approach might be adapted to support it. 2.4 Course notes We have decided to focus on coursebooks for education using a PA, and as a specific case study, we will look at the course notes by Pierce et al. for a course on Software Foundations taught at the University of Pennsylvania [11]. As the name implies, the course is not about proof assistants (although Coq is introduced during the course) but about the mathematical foundations of software and the semantics of programs. The coursebook is entirely written as a set of Coq scripts, with the narrative as Coqdoc comments.
Beyond the structuring in separate files, one for each chapter, the text is further structured in sections and subsections, by giving Coqdoc headers at the appropriate locations. This allows us to see the nesting of a single chapter as follows: (i) At the highest level we find a separation in sections. Each section can contain zero or more subsections. (ii) At the deepest level of the document tree, the subsections have paragraphs as leaves. These leaves can be either slices of proof script or paragraphs in the narrative. Chlipala has also written a coursebook, one on dependently typed programming [4], but we do not focus on it here, beyond the observation that he includes PA output as part of the narrative, reinforcing our belief that it is desirable to perform the interleaving of movie and rendering. We now show how we can overlay our movies, representing the command structure of the proof script, on top of the Coqdoc-rendered document representing the narrative structure of the document. 3 Enhancing movies with commentary A movie is a sequential series of frames, which do not contain the pretty rendering. This rendering, provided by Coqdoc, can easily be integrated in the movies. To do so, we created a tool that takes the commands from the frames and feeds these commands to Coqdoc. Coqdoc outputs an HTML tree for the command, that contains more information about the intention of the command. In particular the tree can have nodes of the following types: • Documentation nodes, further structured in: • section headers, for different section levels, • narrative paragraphs, containing the text of the commentary. • Code nodes. These nodes contain the tactics of the script. The nodes produced by Coqdoc are added to the frame as additional data, that can be used for several purposes. 3.1 Rendering enhanced movies Instead of displaying the plain text of a movie, we can display the rendered text as created by Coqdoc instead.
This display is similar to the normal display of Coqdoc HTML pages, with the exception that placing a cursor on the code fragments dynamically displays the response to the command currently in focus. Due to its dynamic nature, the best way to see the results is through the web, and we have provided a web page displaying the course notes dynamically. The page can be found at http://mws.cs.ru.nl/moviola/movies/coqdoc. Despite the obvious limitations of including static screenshots here in order to illustrate a dynamic feature, Figure 1 displays the effect of placing the cursor on a tactic. To prove such facts—indeed, to prove most interesting facts about numbers, lists, and other inductively defined sets—we need a more powerful reasoning principle—induction. Recall (from high school) the principle of induction over natural numbers: If $P(n)$ is some proposition involving a natural number $n$ and we want to show that $P(n)$ holds for all numbers $n$, we can reason like this: - show that $P(0)$ holds; - show that, for any $n$, if $P(n)$ holds, then so does $P(n+1)$; - conclude that $P(n)$ holds for all $n$. In Coq, the steps are the same but the order is backwards: we begin with the goal of proving $P(n)$ for all $n$ and break it down by applying the induction tactic into two separate subgoals: first showing $P(0)$ and then showing $P(n) \Rightarrow P(n + 1)$. Here’s how this works for the theorem we are trying to prove at the moment:

```
Theorem plus_0_r : forall n : nat, plus n 0 = n.
Proof.
  intros n. induction n as [| n'].
  Case "n = 0". reflexivity.
  Case "n = S n'". simpl. rewrite -> IHn'. reflexivity.
Qed.
```

Like most tactics, the induction tactic takes an argument that specifies the names of the variables to be introduced in the subgoals. In the first branch, $n$ is replaced by $0$ and the goal becomes $plus\ 0\ 0 = 0$, which follows by reflexivity.
In the second, $n$ is replaced by $S\ n'$ and the assumption $plus\ n'\ 0 = n'$ is added to the context (with the name $IHn'$, i.e., the Induction Hypothesis for $n'$). The goal in this case becomes $plus\ (S\ n')\ 0 = S\ n'$, which simplifies to $S\ (plus\ n'\ 0) = S\ n'$, which in turn follows from the Induction Hypothesis. 3.2 Scenes These rendered pages do not have the structure associated with the narrative of a coursebook built in: it is still just a sequence of frames, only now rendered prettily. For further analysis and better structuring, we can group a set of frames into a scene. A scene in a movie mirrors the section of an article. As such, it can contain the following data: - **Text** Text is just that: the narrative of the document. It can be rich text, including HTML markup and Unicode characters, but has no interactivity or structuring. - **Scenes** To further structure the movie, a scene can contain sub-scenes, just as sections can contain subsections for further structuring. - **Code frames** Beyond the normal text, a scene can contain frames. Each frame contains a single command from the proof script and the corresponding response from the PA. The display of the response is dynamic: only the commands are shown, and when a reviewer places the cursor on a command, the response is shown. The architecture of a scene is an instantiation of the Composite pattern; its class diagram is displayed in Figure 2. Because explanation within the narrative can refer to future or previous sections and recapitulate, or abstract from, previous fragments, it seems desirable that scenes can refer to other scenes freely, beyond the rigid structure noted above. Structuring a movie into scenes can be done automatically, based on the Coqdoc output. We already mentioned that Coqdoc sorts nodes into code and documentation nodes, and that documentation nodes can be both paragraphs and section headers.
The headers can be used to group the paragraphs and frames following them, up to the next header. If this header is of a ‘lower’ level (for instance: a subsection header following a section header), the frames following the sub-header form a sub-scene of the scene being built, and if it is of the same or ‘higher’ level, we go up to this higher level, finishing all the scenes of a lower level. With the sketched recursive algorithm, we can simply group the frames of the movie into a nested structure mimicking the structure of the document. Additionally, it seems useful to group subsequent sequences of command frames into their own scene. More specifically, grouping the proof of a lemma or theorem into a scene seems the most logical, but this requires looking at the text of the commands itself, instead of the data on the structure of the HTML tree. 4 Adding Commentary to a Proof For rendering, a scene is a minimal addition, making the output to web pages a bit easier, but the real advantage of having scenes is in post-processing data: a scene forms a logical entity within the narrative, that might be enriched with specific metadata or be edited further. In particular, writing commentary after the script has been made can be supported by first grouping a set of frames into a scene, and then describing this scene as a whole. To write such a commentary track for a movie, an author needs the following: • A movie created from a proof script. • An interface through which she can write the commentary track, and tie it to the frames. We are still experimenting with the interface for writing the commentary track, but based on the data structure and an initial prototype, we observe that the interface should provide for the following activities: • Writing the actual text. • Grouping code frames and text into scenes. • Interleaving text and code to obtain a narrative.
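The header-driven grouping of frames into nested scenes, described in Section 3.2, can be sketched concretely. The sketch below is our own illustration rather than the paper's implementation: Coqdoc nodes are simplified to tagged tuples, where a header carries its level and title, and text paragraphs and code frames become leaves of the enclosing scene.

```python
# Group a flat list of nodes into nested scenes: a header opens a scene
# that absorbs everything up to the next header of the same or a higher
# level; deeper headers recursively become sub-scenes.
def build_scenes(nodes):
    scenes, i = [], 0
    while i < len(nodes):
        kind, payload = nodes[i]
        if kind == "header":
            hlevel, title = payload
            # scan ahead to the next header of this level or higher
            j = i + 1
            while j < len(nodes) and not (nodes[j][0] == "header"
                                          and nodes[j][1][0] <= hlevel):
                j += 1
            scenes.append({"title": title,
                           "content": build_scenes(nodes[i + 1:j])})
            i = j
        else:  # a paragraph or code frame is a leaf of the current scene
            scenes.append(payload)
            i += 1
    return scenes

doc = [
    ("header", (1, "Induction")),
    ("text", "Recall the principle of induction."),
    ("header", (2, "Exercises")),
    ("code", "Theorem plus_0_r : ..."),
    ("header", (1, "Lists")),
    ("text", "Next chapter."),
]
tree = build_scenes(doc)
```

A subsequent pass could implement the refinement the paper suggests, splitting runs of command frames so that each lemma's proof gets its own scene, but as the paper notes this requires inspecting the command text itself.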
Writing the actual text can be done in either a WYSIWYG editor or with some light markup language (as used in Wikipedia and Coqdoc), and does not introduce new HCI problems. The first design decision to be made is how to allow an author to group text and frames into scenes. As the resulting document structure is a tree, a tree editor could be used for adding scenes to the document, or to select scenes for further editing. The main advantage of this approach is that the structure can be seen at a glance, and edited easily. On the other hand, inferring the movie’s structure when the author inserts a header might provide a faster editing workflow, as adding a new scene does not require her to switch to a different menu or editor. These two approaches could be combined, inferring the document structure from commands typed in the editor and explicitly allowing an author to insert scenes or move scenes in a structure editor, actions which get translated to modifications of the text in the editor. How to interleave the text and code is not yet clear to us. To make the scenes as flexible as possible, we decided that the relation between frames and scenes should be many-to-many: code and narrative are equally important, and it is not unlikely that the narrative refers to a previous definition or skips forward to a proof or lemma. It proves difficult to design an interface that allows creating this many-to-many relation without forcing the author into a specific workflow. The state-of-the-art in programming environments might be useful to borrow ideas from, but approaches like Javadoc [10] are normally used to document programs on the level of classes, methods and interfaces. In a proof setting, this would translate to documenting a lemma instead of describing chunks of commands. We have experimented with an interface that has a tree editor for adding scenes to a movie (only one level deep) and a rich text editor for writing the narrative.
To link this text with the code of the command, a third pane gives the author a view on the movie’s commands and the responses, allowing her to toggle scene inclusion by a click on the desired scenes. A screenshot is shown in Figure 3. This interface forces the user into a rather restricted workflow: she would first need to add a scene, then alternate between typing and choosing the code to be included. Furthermore, it does not allow her to interleave the code within the narrative. For now, improving the user interface for writing commentary is left as an open issue. 5 Interactive movie elements Although we have added dynamic content to Coqdoc documents, this does not make a proof document really interactive: the content of the movie does not change in response to a reader’s actions, only its display does. We now consider how we can add interactive scenes to a movie, without having to give the reviewer full access to the proof script or requiring him to load a PA. In our chosen context of course notes, the main way of providing an interactive version of these notes is through exercises: a given set of theorems and definitions that still have holes in them that the reader can fill in. An actual PA supports doing exercises unsupervised by checking a proof once it is done, and by providing the state after each command, which helps in progressing through the proof. These holes are intended to be filled in by the student, leading to a fully checked proof document. On the other hand, the explanation in a text for students should not have to be edited by those students. To allow the distinction between exercises and text, we would like to have **editable scenes** in the movie. In this section, we propose an as of yet unimplemented design for such scenes. 5.1 Writing editable scenes An editable scene is a scene that can be edited by the reader after the movie is published.
Adding such a feature requires: - an interface option for the author through which she can mark which scenes can be edited later, and which should remain locked, and - a PA processing the commands the reader types in an exercise scene. Note that the author of a proof movie determines which scenes are editable and which scenes are locked: this can be done while she prepares a movie, by setting a property of the scene, comparable to making a file read-only in the file system. How the property is set depends on the editor style chosen: a WYSIWYG editor might provide it as an option in a context menu, while a markup language could allow some meta-command for setting the attribute of a scene. 5.2 Interacting with editable scenes Once we have integrated the notion of an editable scene within the movie’s data structure, the display of the movie needs to accommodate editing these scenes. This would include marking the scene as editable, for example by providing an edit button next to the scene, and by including a PA-backed editor for filling out the exercise. We have not attempted to design such an editor, but we would prefer it to be very light-weight: the workflow of reading the document should not be disrupted too much by doing the exercise. Because of this, we do not want the student to switch to another page for filling out an exercise. This means we would like the following use case to be fulfilled by the editor: (i) The student clicks the ‘edit’ button. (ii) The movie’s server brings a PA into the state necessary for doing the exercise. (iii) The editor is shown to the student, including the PA’s state (context and goals) for the exercise. (iv) In the editor, the student types commands, which update the PA’s state. (v) If the student solves the exercise, the solution is stored; if he abandons it, his attempt is discarded. To implement the communication with a PA, we would use the ProofWeb system [7], developed at Nijmegen.
ProofWeb is a client-server architecture for doing formal proof over the web. At the server side, PAs are installed, that can be communicated with through a JavaScript client. Instead of the provided UI, we could build our own lightweight editor, and connect that to the ProofWeb server. The main open problem is handling the PA state: before the editor is shown, a significant amount of computation is necessary to bring the PA into the right state. How to handle this computation remains an open question, but we have some ideas on how to tackle it: • At the moment the document is shown to the student, also feed it to the PA as a background process, stopping at the first exercise. This is a naive, but probably easily implemented solution, that does not account for exercises being skipped or abandoned. • To handle a student skipping an exercise, we could tacitly insert an Admitted command for every exercise. Once a student has solved it, we then remove the admission. This would work for Coq, but we do not know if all PAs support an Admitted-like construct. Apart from that, the computation to get to the focused exercise might become too slow, as the student might start with the last exercise, requiring the entire chapter to be sent to the PA in order to start the exercise. • We could be smarter about the inter-proof dependencies: most PAs interpret the script as a linear sequence, each command depending on all of the previous. This is not always the case, however, especially for exercises, where the proof structure resembles a tree, with the exercises being leaves depending on the content of the explanation above them. We could exploit this structure by only checking the path to the leaf that is focused, instead of all subtrees. To actually make this work, either the PA needs to be more permissive about the proof structure, or our tool support could build a sequence of commands from the path to the selected leaf.
• Finally, we observe that a large part of the proof does not change when a student starts an exercise: the proof script that is part of the explanation is locked by the author, and would not need to be rechecked each time an exercise is attempted. So, we could ‘restore’ a proof session starting at the exercise, but to our knowledge, no PA supports this behaviour, and getting this behaviour with external tools seems difficult. 6 Related work Leading up to this paper, we have created a dynamic version of the Software Foundations course notes [11]. We have applied our techniques to create handouts for a PA and type theory course Geuvers teaches at the Eindhoven University of Technology. Other documents that we could transform are the Coq tutorial by Huet et al. [6] and the tutorial by Bertot [3]. Several approaches exist based around a central document for formal proof, similar to our movie, of which we have already mentioned the PG kit approach by Aspinall, Lüth and Wolff [2]. Additionally, Mamane and Geuvers have experimented with a document-oriented Coq plugin for TeXmacs [5], and lhs2TeX [8] allows writing literate proof documents, from which both Coq code and \LaTeX documentation can be extracted. These approaches are mainly used for writing proof and documentation together, while our movie allows an author to first write a proof script, and then create a dynamic presentation of this script. The presentation can then be used in a narration of the proof. Nordström has suggested [9] using dependent type theory to enforce syntactic wellformedness of books and articles, ‘live’ documents, programs, and formal proofs in a unified way. Especially his notion of typed placeholders could be used to represent exercises in an online coursebook. 7 Conclusions We have shown how we can make on-line coursebooks using a PA more dynamic: by adding the PA’s output to the document and showing it when requested by the student reading the book.
Constructing these dynamic books is the result of combining two techniques: our previous work on creating movies out of a proof script, and the addition of markup and commentary to a proof document using tools such as Coqdoc. We have further sketched how dynamic documents could be created from a proof script when the script itself cannot be modified, and how to add interactive elements to these documents. The techniques for creating the dynamic, non-interactive documents have been applied to the course notes for a “Software Foundations” course and have been received with great enthusiasm by the authors of these notes. This shows that the documents we create with the described tooling add value to the Coqdoc output, and gives motivation for improving the workflow and output. References
{"Source-Url": "https://repository.ubn.ru.nl/bitstream/handle/2066/84205/84205.pdf?sequence=1", "len_cl100k_base": 6220, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 30871, "total-output-tokens": 7678, "length": "2e12", "weborganizer": {"__label__adult": 0.0005712509155273438, "__label__art_design": 0.001819610595703125, "__label__crime_law": 0.0005679130554199219, "__label__education_jobs": 0.054962158203125, "__label__entertainment": 0.00028204917907714844, "__label__fashion_beauty": 0.0003414154052734375, "__label__finance_business": 0.0005817413330078125, "__label__food_dining": 0.0008111000061035156, "__label__games": 0.0013427734375, "__label__hardware": 0.0012178421020507812, "__label__health": 0.001026153564453125, "__label__history": 0.0007967948913574219, "__label__home_hobbies": 0.000308990478515625, "__label__industrial": 0.00099945068359375, "__label__literature": 0.0016994476318359375, "__label__politics": 0.0005006790161132812, "__label__religion": 0.001094818115234375, "__label__science_tech": 0.19140625, "__label__social_life": 0.0004203319549560547, "__label__software": 0.0232696533203125, "__label__software_dev": 0.7138671875, "__label__sports_fitness": 0.0005545616149902344, "__label__transportation": 0.001033782958984375, "__label__travel": 0.0003750324249267578}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31799, 0.01465]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31799, 0.36094]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31799, 0.92316]], "google_gemma-3-12b-it_contains_pii": [[0, 293, false], [293, 1934, null], [1934, 4293, null], [4293, 6689, null], [6689, 9212, null], [9212, 11868, null], [11868, 13903, null], [13903, 16512, null], [16512, 18536, null], [18536, 21103, null], [21103, 22183, null], [22183, 24532, null], [24532, 27197, null], 
[27197, 29517, null], [29517, 31799, null]], "google_gemma-3-12b-it_is_public_document": [[0, 293, true], [293, 1934, null], [1934, 4293, null], [4293, 6689, null], [6689, 9212, null], [9212, 11868, null], [11868, 13903, null], [13903, 16512, null], [16512, 18536, null], [18536, 21103, null], [21103, 22183, null], [22183, 24532, null], [24532, 27197, null], [27197, 29517, null], [29517, 31799, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31799, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31799, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31799, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31799, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31799, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31799, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31799, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31799, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31799, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 31799, null]], "pdf_page_numbers": [[0, 293, 1], [293, 1934, 2], [1934, 4293, 3], [4293, 6689, 4], [6689, 9212, 5], [9212, 11868, 6], [11868, 13903, 7], [13903, 16512, 8], [16512, 18536, 9], [18536, 21103, 10], [21103, 22183, 11], [22183, 24532, 12], [24532, 27197, 13], [27197, 29517, 14], [29517, 31799, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31799, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
e0563c10ce20a6da5bc2326fd789c8021f0c2fb8
Toward an eBPF-based clone of iptables

Matteo Bertrone, Sebastiano Miano, Jianwen Pi, Fulvio Risso, Massimo Tumolo

Presented at Netdev 0x12, The Technical Conference on Linux Networking, Montreal, Canada, July 2018. Published by the Linux Foundation; this version is available at 11583/2712607.

Abstract

Iptables, currently the most common firewall on Linux, has shown several limitations over the years, with scalability as a major concern. This paper reports the first results of a project that aims at creating a (partial) clone of iptables using the eBPF/XDP technology. The project assumes an unmodified Linux kernel and guarantees full compatibility, in terms of both semantics and syntax, with current iptables.

Keywords: eBPF, iptables, netfilter, firewall.

Introduction

Many Linux servers rely on iptables, which is part of the netfilter [5] kernel subsystem, to protect the server from threats coming from the external network. Although widely used, iptables has been criticized in many respects, such as its antiquated matching algorithm (linear search); its syntax, which is not always intuitive; and its old code base, which is difficult to understand and maintain. Over the years, this triggered the creation of several alternative firewall projects trying to address some of the above-mentioned limitations. For example, ufw [6] focused on a simpler user interface, although the components under the hood are still the ones used by iptables.
Instead, nftables [2] proposed an extensible virtual machine that interprets code dynamically generated and loaded from user space, simplifying the kernel source code base and facilitating the addition of new features or support for new protocols. However, the foregoing projects have so far failed to replace iptables in real-world deployments, leaving it one of the most used pieces of software nowadays. This paper presents bpf-iptables, which emulates the iptables filtering semantics and exploits a more efficient matching algorithm.

Prototypal architecture

This section presents the architecture of the bpf-iptables prototype, derived from the necessity to solve the four main challenges listed in the Introduction.

Preserving the iptables semantics

Iptables filters traffic in three different locations, called the INPUT, FORWARD and OUTPUT chains, as defined by the netfilter framework and shown in Figure 1. As the names suggest, the first chain applies to traffic that is terminated on the host itself; the second handles traffic that traverses the host (e.g., when Linux is asked to act as a router and forward IP traffic between multiple interfaces), while the third operates on traffic exiting the host and directed to the Internet. It is important to notice that the FORWARD chain is also used when traffic traverses the Linux kernel coming from (or directed to) non-root network namespaces, which is becoming a common case in many virtualized deployments (e.g., Kubernetes).
On the other hand, the eBPF hook points are different: they are located before the traffic control (TC) module, which is earlier than the above filtering points for incoming traffic and later for outgoing traffic, as shown in Figure 1. The different position of the filtering hooks in netfilter and eBPF poses non-negligible challenges in preserving the semantics of the iptables rules, which, when enforced in an eBPF program, operate on a different set of traffic compared to the one that would cross the chain they are attached to. As an example, the rule “iptables -A INPUT -j DROP” drops all the incoming traffic directed to the current host, but it does not affect the traffic that is being forwarded by the host itself. A similar “drop all” rule, applied at the eBPF TC_INGRESS hook point, would instead drop all incoming traffic, including the traffic that would be forwarded by the host itself. This behavior suggests the necessity of introducing a classification logic in the eBPF hook points that predicts the set of traffic that would reach each individual iptables chain. This enables bpf-iptables to emulate the behavior of iptables even though it operates at a different hook point. A possible solution is the architecture depicted in Figure 2, which features an initial Chain Selector module in charge of predicting which path the traffic would take, followed by the actual filtering block, which is configured with exactly the same rules operating in the original iptables chain. According to this architecture, the TC/XDP ingress hooks are used to emulate the filtering behavior of the iptables INPUT and FORWARD chains. The TC_EGRESS hook instead emulates only the OUTPUT chain; the associated Chain Selector module detects the traffic forwarded by the host and sends it directly to output, since that traffic has already been filtered by the FORWARD emulation module attached to the TC/XDP ingress hooks.
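The chain-selection idea can be illustrated with a small userspace sketch (not actual eBPF code; the addresses and names below are hypothetical, and the set of local addresses stands in for the eBPF map that the control logic keeps in sync):

```python
# Userspace sketch of the Chain Selector logic; values are hypothetical.
LOCAL_ADDRS = {"10.0.0.1", "192.168.1.7"}  # root-namespace addresses

def ingress_chain(dst_ip):
    # Traffic terminated on the host would cross INPUT; the rest is forwarded.
    return "INPUT" if dst_ip in LOCAL_ADDRS else "FORWARD"

def egress_chain(src_ip):
    # Locally generated traffic crosses OUTPUT; forwarded traffic was
    # already filtered by the FORWARD emulation at ingress.
    return "OUTPUT" if src_ip in LOCAL_ADDRS else "ALREADY_FILTERED"

print(ingress_chain("10.0.0.1"), ingress_chain("8.8.8.8"))
```

In the real prototype this decision runs inside the TC/XDP hook programs, and the address set is updated from user space as described next.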
The Chain Selector block is a simple filter that classifies traffic based on the IP addresses of the traversed packets: the destination IP address for the Ingress Chain Selector and the source IP address for the Egress Chain Selector. The idea is that traffic would cross the INPUT chain only if it is directed to a local IP address visible from the host root namespace; similarly, a packet would traverse the OUTPUT chain only if it has been generated locally. This is achieved in our prototype with an additional control logic that (i) lists all the IP addresses visible from the root namespace and configures them in the Chain Selector when the system starts; and (ii) attaches to the appropriate NETLINK messages in order to detect any change in the set of local addresses (e.g., an updated IP address, a network device turned on/off) and realign the content of the Chain Selector with the proper state of the system. While this simple solution suffices for the most common processing cases, it cannot support all of the processing paths allowed by the netfilter framework, e.g., when the Linux host is configured to bridge packets between two interfaces (this option is not shown in Figure 2 for the sake of simplicity). As a consequence, emulating the filtering behavior of iptables in eBPF may become rather complicated if 100% compatibility with iptables is required. In that case, a more effective solution would be to extend the netfilter framework with additional eBPF hook points, which would allow packets to be intercepted exactly at the desired position in the network stack. This would greatly simplify the integration of eBPF-based components with the existing kernel-native modules.

Matching algorithm

Iptables uses a linear search algorithm for matching traffic, which is mainly responsible for its poor performance, particularly in the presence of a high number of firewall rules.
However, the selection and implementation of a better matching algorithm proves to be challenging due to the intrinsic limitations of the eBPF environment [4]. In fact, although better matching algorithms are well known in the literature (e.g., cross-producting, decision-tree approaches, etc.), they require either sophisticated data structures that are not currently available in eBPF or an unpredictable amount of memory, which is not desirable for a kernel module. Given the above constraints, the current prototype of bpf-iptables exploits the Linear Bit Vector Search (LBVS) [3] algorithm, which proves to be reasonably fast while being feasible with current Linux kernels (and available eBPF maps). This algorithm follows the divide-and-conquer paradigm: it splits the filtering rules into multiple classification steps, based on the number of protocol fields present in the rule set; intermediate results are combined to obtain the final solution. LBVS creates a specific (logical) bi-dimensional table for each field on which packets may match, such as the three fields (IP destination address, transport protocol, TCP/UDP destination port) shown in the example of Figure 3. Each table contains the list of unique values for that field present in the given ruleset, plus a wildcard for rules that do not care about any specific value. Each value in the table is associated with a bitvector of length $N$ equal to the number of rules, which keeps the list of rules that are satisfied when the field assumes the given value. Filtering rules, and the corresponding bits in the above bitvector, are ordered with the highest-priority rule first; hence, rule #1 corresponds to the most significant bit in the bitvector.
As an example of how the bitvectors are created, rule #5 simply checks the IP destination address and ignores the value of the other fields; hence, the 5th bit in each bitvector is true when the IP destination address is in the range 10.0.0.0/8, for whatever value of transport protocol and TCP/UDP port (hence, the 5th bit is always 1 for all values in the other two tables). This matching process is repeated for each field in use, such as the three fields shown in Figure 3. The final matching rule is obtained by performing a bitwise AND on all the intermediate bitvectors returned in the previous steps; the resulting rule corresponds to the most significant bit equal to ‘1’ in the resulting bitvector, which represents the matched rule with the highest priority. In the example in Figure 3, this is rule #1. Each processing step is independent, hence each map can be implemented in a different way, based on the field characteristics (e.g., longest prefix match in case of IP addresses and ranges; hash tables for TCP/UDP ports). Per-CPU maps are used whenever possible to avoid cache pollution among different CPU cores and to increase the effectiveness of parallel processing of multiple packets on different CPU cores. The entire set of rules is thus split into these tables, and the classification is carried out in separate steps whose results (bitvectors that maintain the list of matching rules for each field) are combined to obtain the final solution, as shown in Figure 3. In each matching step, if a lookup fails, the algorithm can infer that no match has been found and apply the default action, skipping the rest of the pipeline. The above logical pipeline has been implemented by means of a cascade of eBPF programs, as shown in Figure 4, calling each other by means of tail calls. A first module is dedicated to extracting the packet headers in order to facilitate the processing of the following blocks.
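The classification scheme can be sketched in userspace with a hypothetical three-rule ruleset (these rules are illustrative, not the ones of Figure 3):

```python
# Sketch of Linear Bit Vector Search. Hypothetical rules:
#   rule 1: dst=10.0.0.1, proto=tcp, port=80   (highest priority)
#   rule 2: dst=10.0.0.1, proto=*,   port=*
#   rule 3: dst=*,        proto=udp, port=53
# Bit i of each bitvector (MSB = rule 1) is set when that rule accepts
# the field value; wildcard rules set their bit in every entry.
N = 3
WILDCARD = "*"

tables = {
    "dst":   {"10.0.0.1": 0b111, WILDCARD: 0b001},
    "proto": {"tcp": 0b110, "udp": 0b011, WILDCARD: 0b010},
    "port":  {80: 0b110, 53: 0b011, WILDCARD: 0b010},
}

def classify(dst, proto, port):
    result = (1 << N) - 1                       # all rules are candidates
    for name, key in (("dst", dst), ("proto", proto), ("port", port)):
        table = tables[name]
        result &= table.get(key, table[WILDCARD])
        if result == 0:                          # no rule can match anymore:
            return None                          # default action, skip pipeline
    # matched rule = most significant set bit = highest-priority match
    return N - result.bit_length() + 1
```

A packet with dst=10.0.0.1, proto=tcp, port=80 satisfies rules 1 and 2 in every step, and the most significant surviving bit selects rule 1.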
Each matching step, which operates on a single field, is translated into a dedicated eBPF program that integrates the required processing code with the most appropriate bi-dimensional per-CPU map, which keeps the value–bitvector pairs associated with the given field. In each field-matching step, the extracted bitvector is combined with the one obtained at the previous step and stored in a shared per-CPU array, which will be used by the following blocks in the pipeline. Finally, the last block scans the final bitvector looking for the most significant bit set to 1; when this is found, it implements the action associated with the rule (drop/accept) and updates the structure that keeps the counters associated with each rule. Thanks to the dynamic code injection of eBPF, we created a matching pipeline that contains the minimum number of processing blocks required to handle exactly the fields used by the current ruleset, avoiding unnecessary processing for unused fields. For instance, if the TCP flags field is not used by any rule, that processing block is left out of the pipeline; new processing blocks can be added at run-time if matching against a new field is required, with the property of always running the optimal number of eBPF programs.

Figure 3: Linear Bit Vector Search. Figure 4: eBPF matching pipeline.

Connection tracking

Netfilter tracks the state of TCP/UDP/ICMP connections and stores them in a session (or connection) table (conntrack). This table can be used by iptables to specify filtering rules that accept/drop packets based on the characteristics of the connection they belong to. For instance, iptables may have a rule that allows only packets belonging to new or established connections, e.g., enabling the host to generate traffic toward the Internet (and to receive return packets), while connections initiated from the outside world may be forbidden.
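Such a stateful filter can be sketched as a bounded connection table keyed by the flow 5-tuple (all names and the capacity below are hypothetical; in bpf-iptables this state lives in eBPF maps, as detailed below):

```python
import time
from collections import OrderedDict

# Sketch of a bounded connection table for stateful filtering.
class ConnTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.conns = OrderedDict()               # 5-tuple -> last-seen time

    def seen(self, key):
        """Record a packet of this flow; recycle the oldest entry if full."""
        if key in self.conns:
            self.conns.move_to_end(key)          # mark as most recently used
        elif len(self.conns) >= self.capacity:
            self.conns.popitem(last=False)       # evict the oldest flow
        self.conns[key] = time.monotonic()

    def established(self, key):
        return key in self.conns

table = ConnTable(capacity=2)
table.seen(("10.0.0.1", 12345, "8.8.8.8", 53, "udp"))  # first packet: new flow
```

A rule that accepts only established connections would then drop any packet whose 5-tuple is not in the table.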
In addition to being associated with a connection, each packet can also trigger a state change in a connection; for example, a TCP SYN triggers the creation of a new entry in the connection table, while an RST packet (which represents the termination of an established connection) flushes an existing entry. Given the impossibility of exploiting the connection tracking facility of the Linux kernel, our bpf-iptables prototype implements its own connection tracking module as a set of eBPF programs. However, due to the well-known limitations of this technology, e.g., in terms of allowed code complexity, we support basic connection tracking for stateful filtering of UDP, TCP and ICMP traffic that detects when a connection starts/ends, while we do not recognize additional states in the protocol state machines, nor do we support advanced features such as related connections (e.g., when a SIP control session triggers the establishment of voice/video RTP sessions) or IP reassembly. A possible more complete solu- The current implementation circumvents this problem by implementing the conntrack module with an LRU map, so that old entries are automatically recycled and assigned to new connections. In addition, it stores in each entry a timestamp representing the last time that entry was used; this makes it possible to detect an access to an old entry not yet purged by the LRU algorithm. This is not the optimal solution, but it represents the best compromise given the possibilities offered by the current eBPF technology.

**Preserving iptables syntax**

Compatibility with the iptables syntax is another must for this prototype. It enables potential users to exploit the same tools/scripts they currently use for controlling iptables to interact with bpf-iptables as well, providing a smooth migration experience.
However, since a full clone of iptables may not be feasible in the short term, we would like to guarantee that users always obtain exactly the result they expect, even if the current bpf-iptables may not support all issued commands. Our solution to the above problems was to create two executables, iptables and bpf-iptables, available at the same time in the system: the former controls the traditional filtering based on netfilter, the latter emulates iptables by means of the eBPF clone. Ideally, both tools should support the same syntax; in practice, the latter supports only a subset of commands compared to the former, due to the limited maturity of our solution. This allows users to call either iptables or bpf-iptables using the same syntax; in case a command is not supported by the latter, the user can always switch back to the original iptables and obtain the network behavior he needs. This solution is very simple and leaves the responsibility of choosing the right executable to the user; while more sophisticated solutions can be envisioned, this may be acceptable in the short term. Technically, this has been achieved by a lightweight set of modifications to the iptables source code, which has been cloned and renamed as bpf-iptables, and to the underlying libiptc library. Our modifications simply change the way iptables and libiptc push commands into the kernel, replacing netlink messages with an equivalent command line that is passed to an intermediate shell script in charge of calling the bpf-iptablesd daemon, which implements the actual eBPF-based iptables clone. This daemon has a REST interface waiting for JSON commands that ask to add/remove filtering rules, read the state of the system (e.g., statistics/counters) and more; the above commands are translated into eBPF-compatible primitives that are sent to the kernel, e.g., to generate a new set of eBPF programs\(^2\), to read data in a map, and more.
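The frontend-to-daemon translation could look roughly like the following sketch; the flag-to-field mapping and the JSON schema are purely illustrative assumptions, not the actual bpf-iptablesd API:

```python
import json
import shlex

# Hypothetical translation of an iptables-syntax command line into a JSON
# body for the daemon's REST interface (field names are illustrative).
FLAG_TO_FIELD = {"-A": "chain", "-s": "src", "-d": "dst",
                 "-p": "proto", "-j": "action"}

def to_json(command):
    tokens = iter(shlex.split(command))
    rule = {}
    for tok in tokens:
        if tok in FLAG_TO_FIELD:
            rule[FLAG_TO_FIELD[tok]] = next(tokens)  # flag's argument
    return json.dumps(rule, sort_keys=True)

print(to_json("bpf-iptables -A INPUT -s 10.0.0.0/8 -p tcp -j DROP"))
```

The actual prototype performs this translation inside the modified iptables/libiptc code, which has already parsed and validated the command.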
This approach has the advantage of introducing no additional cost for parameter parsing and validation, already performed by iptables, as well as limiting the number of bugs, as we reuse already existing and well-tested code for the frontend. An overview of the overall architecture is depicted in Figure 6.

---

1. https://www.mail-archive.com/netfilter-devel@vger.kernel.org/msg11139.html
2. Due to the LBVS internals and the way we create eBPF programs, we have to generate a new set of eBPF programs each time there is a change in the filtering ruleset.

Conclusions

This paper presents a preliminary architecture for a possible replacement of iptables with equivalent software based on the eBPF technology. While proven to be feasible, the prototype also highlighted the complexity of creating a full clone of iptables, particularly considering that this paper addressed only a subset of the features available in that software. For instance, iptables is often used to handle both filtering and NAT functions, with the latter not being considered in this paper, as well as the features available in ebtables and arptables, and the additional packet paths in netfilter when bridging (instead of IP forwarding) is enabled. Starting from the above observations, we can envision two possible directions for our future work. First, instead of trying to substitute iptables with a full eBPF clone, it would be worth exploring the possibility of offloading a subset of the filtering rules to an eBPF program running at high speed (e.g., in XDP).
This may be the case for long lists of homogeneous rules (e.g., operating on the IP destination address) that are used to discard traffic from malicious sources; instead of matching each incoming packet against that set of rules, which are processed with the linear search available in iptables, those could be moved to a processing module that implements a more efficient algorithm and runs earlier in the network stack, as long as the semantics of the rules is preserved. This results in malicious packets being dropped earlier, with a consequent reduction in resource (CPU) consumption. Second, a more extensive set of hook points operating in netfilter could add more flexibility in integrating eBPF programs in Linux, as it would enable the selective replacement of a single component while keeping the others unchanged (e.g., replacing only the firewall, but keeping the Linux IP forwarding). This flexibility is not available with the current eBPF hooks; for instance, if the filtering is implemented with eBPF, the NAT has to be re-implemented in eBPF as well in order to preserve the semantics of the rules. In fact, taking the INPUT chain as an example and assuming that the filtering is done in eBPF, the current NAT would operate on packets exiting from the filtering components, while in the original iptables the NAT is traversed before packets arrive at the filtering block.

Acknowledgments

The authors would like to thank the other people who contributed to this work, particularly Mauricio Vásquez Bernal, who developed part of the software framework that was used to implement this prototype. This work was possible thanks to the generous support from Huawei Technologies and VMware. Part of this work was conducted within the framework of the ASTRID project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement no. 786922. Study sponsors had no role in writing this paper.
The views expressed do not necessarily represent the views of the authors’ employers, the ASTRID project, or the Commission of the European Union.

References

Figure 6: Userspace integration of bpf-iptables.
{"Source-Url": "https://iris.polito.it/retrieve/handle/11583/2712607/207551/18NetDev-iptables.pdf", "len_cl100k_base": 4255, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 18930, "total-output-tokens": 4836, "length": "2e12", "weborganizer": {"__label__adult": 0.00034499168395996094, "__label__art_design": 0.0002598762512207031, "__label__crime_law": 0.00046181678771972656, "__label__education_jobs": 0.0002942085266113281, "__label__entertainment": 9.54270362854004e-05, "__label__fashion_beauty": 0.0001398324966430664, "__label__finance_business": 0.00024700164794921875, "__label__food_dining": 0.0003333091735839844, "__label__games": 0.0005583763122558594, "__label__hardware": 0.0035839080810546875, "__label__health": 0.0004825592041015625, "__label__history": 0.00024962425231933594, "__label__home_hobbies": 8.547306060791016e-05, "__label__industrial": 0.0005846023559570312, "__label__literature": 0.00017654895782470703, "__label__politics": 0.0002956390380859375, "__label__religion": 0.0004191398620605469, "__label__science_tech": 0.1123046875, "__label__social_life": 0.00010228157043457033, "__label__software": 0.035186767578125, "__label__software_dev": 0.8427734375, "__label__sports_fitness": 0.00029969215393066406, "__label__transportation": 0.0005435943603515625, "__label__travel": 0.0002161264419555664}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21621, 0.01595]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21621, 0.41295]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21621, 0.92348]], "google_gemma-3-12b-it_contains_pii": [[0, 697, false], [697, 4078, null], [4078, 9216, null], [9216, 14316, null], [14316, 17463, null], [17463, 21621, null]], "google_gemma-3-12b-it_is_public_document": [[0, 697, true], [697, 4078, null], [4078, 9216, null], [9216, 14316, null], 
[14316, 17463, null], [17463, 21621, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21621, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21621, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21621, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21621, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21621, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21621, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21621, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21621, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21621, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21621, null]], "pdf_page_numbers": [[0, 697, 1], [697, 4078, 2], [4078, 9216, 3], [9216, 14316, 4], [14316, 17463, 5], [17463, 21621, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21621, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
0c979b09102473352e903fd9344670d8ebe24d2d
[REMOVED]
{"Source-Url": "https://www.researchgate.net/profile/Philippe_Pasquier/publication/220867545_Towards_a_Generic_Framework_for_Automated_Video_Game_Level_Creation/links/0912f510ac2bed57d1000000.pdf", "len_cl100k_base": 5261, "olmocr-version": "0.1.49", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 23625, "total-output-tokens": 6723, "length": "2e12", "weborganizer": {"__label__adult": 0.0022296905517578125, "__label__art_design": 0.00283050537109375, "__label__crime_law": 0.0022430419921875, "__label__education_jobs": 0.005481719970703125, "__label__entertainment": 0.0014600753784179688, "__label__fashion_beauty": 0.0014543533325195312, "__label__finance_business": 0.0011796951293945312, "__label__food_dining": 0.0022678375244140625, "__label__games": 0.356689453125, "__label__hardware": 0.003566741943359375, "__label__health": 0.0026378631591796875, "__label__history": 0.0024433135986328125, "__label__home_hobbies": 0.0005612373352050781, "__label__industrial": 0.002246856689453125, "__label__literature": 0.00237274169921875, "__label__politics": 0.001277923583984375, "__label__religion": 0.0022220611572265625, "__label__science_tech": 0.2197265625, "__label__social_life": 0.00035500526428222656, "__label__software": 0.00872039794921875, "__label__software_dev": 0.372314453125, "__label__sports_fitness": 0.0027027130126953125, "__label__transportation": 0.0021152496337890625, "__label__travel": 0.0008893013000488281}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29357, 0.02751]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29357, 0.54661]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29357, 0.91074]], "google_gemma-3-12b-it_contains_pii": [[0, 2763, false], [2763, 6096, null], [6096, 9747, null], [9747, 13110, null], [13110, 14788, null], [14788, 18311, null], [18311, 20140, null], [20140, 22374, 
null], [22374, 25880, null], [25880, 29357, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2763, true], [2763, 6096, null], [6096, 9747, null], [9747, 13110, null], [13110, 14788, null], [14788, 18311, null], [18311, 20140, null], [20140, 22374, null], [22374, 25880, null], [25880, 29357, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29357, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29357, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29357, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29357, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29357, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29357, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29357, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29357, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29357, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29357, null]], "pdf_page_numbers": [[0, 2763, 1], [2763, 6096, 2], [6096, 9747, 3], [9747, 13110, 4], [13110, 14788, 5], [14788, 18311, 6], [18311, 20140, 7], [20140, 22374, 8], [22374, 25880, 9], [25880, 29357, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29357, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
b661fc3b6e0db78a882ad6c9efef9bf25126ee04
Welcome to Release 7.0, the next major release of the DSpace platform. Any previous version of DSpace may be upgraded to DSpace 7 directly. For more information, please see Upgrading DSpace. - 7.0 Beta 3 Release Notes - 7.0 Beta 2 Release Notes - 7.0 Beta 1 Release Notes - 7.0 Acknowledgments - Major Contributing Institutions - Frontend / New User Interface Acknowledgments - Backend / REST API Acknowledgments - Additional Thanks 7.0 Beta 3 Release Notes Do not install a Beta release in Production! DSpace 7 is still under active development. As a Beta release, we do not recommend installing this in production. Rather, we ask that you consider installing it in a test environment, try it out, and report back any issues or bugs you notice via GitHub (UI issues, Backend/API issues). Get Started / Try it out! To try out DSpace 7.0 Beta 3 immediately, see Try out DSpace 7. Full (manual) installation instructions are also available at Installing DSpace. - DSpace 7.0 beta 3 UI: https://github.com/DSpace/dspace-angular/releases/tag/dspace-7.0-beta3 - DSpace 7.0 beta 3 Backend: https://github.com/DSpace/DSpace/releases/tag/dspace-7.0-beta3 For more information on the upcoming Beta and Final release schedule see DSpace 7 Release Goals. Included in Beta 3 - **Processes Admin UI** (video) allows Administrators to run backend scripts/processes while monitoring their progress & completion. (Login as an Admin, select "Processes" in sidebar) - Currently supported processes include "index-discovery" (reindex site), "metadata-export" (batch metadata editing CSV export), and "metadata-import" (batch metadata editing CSV import). - **Manage Account Profile** allows logged-in users to update their name, language or password.
(Login, click on the account icon, and select "Profile") - **New User Registration** (video) and password reset on the Login Screen - **Login As (Impersonate)** another account allows Administrators to debug issues that a specific user is seeing, or do some work on behalf of that user. (Login as an Admin, Click "Access Control" in sidebar, Click "People", Search for the user account & edit it. Click the "Impersonate EPerson" button. You will be authenticated as that user until you click "Stop Impersonating EPerson" in the upper right.) - Requires "webui.user.assumelogin=true" to be set in your local.cfg on the backend. Also be aware that you can only "impersonate" a user who is not a member of the Administrator group. - **Manage Authorization Policies** of an Item allows Administrators to directly change/update the access policies of an Item, its Bundles or Bitstreams. (Login as an Admin, Click "Edit" → "Item" in sidebar, and search for the Item. Click the "Authorization.." button on its "Status" tab.) - **Manage Item Templates** of a Collection allows Administrators to create/manage template metadata that all new Items will start with when submitted to that Collection. (Login as an Admin, Click "Edit" → "Collection" in sidebar and search for the Collection. Click the "Add" button under "Template Item" to get started.) - NOTE: unfortunately there's a known bug that while you can create these templates, the submission process is not yet using them. See https://github.com/DSpace/dspace-7.0-beta3 - **Administer Active Workflows** (video) allows Administrators to see every submission that is currently in the workflow approval process. From there, they have the option to delete items (if they are no longer needed), or send them back to the workflow pool (to allow another user to review them). (Login as an Admin, Click "Administer Workflow" in sidebar) - **CC License** step allows your users to select a Creative Commons License as part of their submission.
Once enabled in the "item-submission.xml" (on the backend) it appears as part of the submission form. - **Angular CLI** compatibility was added to the User Interface. This allows developers to easily update the User Interface using standard Angular command-line tools. More information (including tutorials) is available at https://cli.angular.io/ - English, Latvian, Dutch, German, French, Portuguese, Spanish and Finnish language catalogs Numerous bugs were fixed based on early user testing. (Thanks to all who've tested Beta 1 or Beta 2 and reported your feedback!) Some bugs fixed include: - Login/Logout session fixes (including compatibility with Firefox and Safari browsers) - Improved Community/Collection tree browsing performance - Fixes to editing Communities, Collections and Items. This includes improved drag & drop reordering of bitstreams in an Item. - Improved performance of Collection dropdown in submission - Ability to download restricted bitstreams (previously these would error out) - Authorization & security improvements in both REST API and UI - Upgraded all REST API dependencies (Spring, Spring Boot, HAL Browser) and enhanced our automated testing via additional Integration Tests. - All features previously mentioned in Release Notes#7.0 Beta 2 Release Notes and Release Notes#7.0 Beta 1 Release Notes below Learn More: New videos are available highlighting features of the MyDSpace area: - Manage Submissions in MyDSpace (video) - Manage Tasks in MyDSpace (video) Coming Soon - For the upcoming Beta release schedule see DSpace 7 Release Goals Additional Resources At this time, the DSpace 7 documentation is still in progress, but has begun at https://wiki.lyrasis.org/display/DSDOC7x/ That said, we have a number of recorded presentations and workshops available which provide an overview of all the new 7.0 features. - Presentations / Workshops from OR2019 (June 2019).
Some video recordings exist https://wiki.lyrasis.org/display/DSpace/DSpace+7+at+OR2019 - Additional DSpace 7 presentations/workshops/webinars are planned for 2020 as we get closer to the 7.0 final release. 7.0 Beta 2 Release Notes Included in Beta 2 - Administrative Search (video) combines retrieval of withdrawn items and private items, together with a series of quick action buttons. - EPeople, Groups and Roles can now be viewed, created and updated. - Manage Groups (Login as an Admin → Access Control → Groups) - Manage EPeople (Login as an Admin → Access Control → EPeople) - Manage Community/Collection Roles (Login as an Admin → Edit Community/Collection → Assign Roles). Note: this feature is Admin-only in beta 2, but will be extended to Community/Collection Admins in beta 3. - Bitstream Editing (video) has a drag-and-drop interface for re-ordering bitstreams and makes adding and editing bitstreams more intuitive. - Metadata Editing (video) introduces suggest-as-you-type for field name selection of new metadata. - Update Profile / Change Password (Login → Select user menu in upper right → Profile) - Shibboleth Authentication - Viewing Item Version History (requires upgrading from a 6.x site that includes Item Versioning) - Collection and Community (video) creation and edit pages. - English, Latvian, Dutch, German, French, Portuguese and Spanish language catalogs - Security and authorization improvements, including REST API support for hiding specific metadata fields (metadata.hide property) and upgrades of different software packages on which DSpace 7 depends.
- All features previously mentioned in Release Notes#7.0 Beta 1 Release Notes below Coming Soon - For the upcoming Beta release schedule see DSpace 7 Release Goals Additional Resources At this time, the DSpace 7 documentation is still in progress, but has begun at https://wiki.lyrasis.org/display/DSDOC7x/ That said, we have a number of recorded presentations and workshops available which provide an overview of all the new 7.0 features. - Presentations / Workshops from OR2019 (June 2019). Some video recordings exist https://wiki.lyrasis.org/display/DSpace/DSpace+7+at+OR2019 - Additional DSpace 7 presentations/workshops/webinars are planned for 2020 as we get closer to the 7.0 final release. A full list of all changes / bug fixes in 7.x is available in the Changes in 7.x section. 7.0 Beta 1 Release Notes New features to look for A completely new User Interface (demo site). This is the new Javascript-based frontend, built on Angular.io (with support for SEO provided by Angular Universal). This new interface is also themeable via HTML and CSS (SCSS). For early theme building training, see the “Getting Started with DSpace 7 Workshop” from the North American User Group meeting; slides or video recording. A completely new, fully featured REST API (demo site), provided via a single "server" webapp backend. This new backend is not only a REST API, but also still supports OAI-PMH, SWORD (v1 or v2) and RDF. See the REST API's documentation / contract at https://github.com/DSpace/Rest7Contract/blob/master/RESTAPI.md A newly designed search box. Search from the header of any page (click the magnifying glass). The search results page now features automatic search highlighting, expandable & searchable filters, and optional thumbnail-based results (click on the "grid" view). A new MyDSpace area, including a new, one-page, drag & drop submission form, a new workflow approval process, and searchable past submissions. (Login, click on your user profile icon, click "MyDSpace").
Find workflow tasks to claim by selecting "All tasks" in the "Show" dropdown. Dynamic user interface translations (Click the globe, and select a language). Anyone interested in adding more translations? See DSpace 7 Translation - Internationalization (i18n) - Localization (l10n). A new Admin sidebar. Login as an Administrator, and an administrative sidebar appears. Use this to create a new Community/Collection/Item, edit existing ones, and manage registries. (NOTE: A number of Administrative tools are still missing or greyed out. They will be coming in future Beta releases.) Optional, new Configurable Entities feature. DSpace now supports "entities", which are DSpace Items of a specific 'type' which may have relationships to other entities. These entity types and relationships are configurable, with two examples coming out-of-the-box: a set of Journal hierarchy entities (Journal, Volume, Issue, Publication) and a set of Research entities (Publication, Project, Person, OrgUnit). For more information see “The Power of Configurable Entities” from OR2019: slides or video recording. Additionally, a test data set featuring both out-of-the-box examples can be used when trying out DSpace 7 via Docker. Early documentation is available at Configurable Entities. Support for OpenAIREv4 Guidelines for Literature Repositories in OAI-PMH (See the new "openaire4" context in OAI-PMH). Additional major changes to be aware of in the 7.x platform (not an exhaustive list): - XMLUI and JSPUI are no longer supported or distributed with DSpace. All users should immediately migrate to and utilize the new Angular User Interface. There is no migration path from either the XMLUI or JSPUI to the new User interface. However, the new user interface can be themed via HTML and CSS (SCSS). - The old REST API ("rest" webapp from DSpace v4.x-6.x) is deprecated and will be removed in v8.x. The new REST API (provided in the "server" webapp) replaces all functionality available in the older REST API. 
If you have tools that rely on the old REST API, you can still (optionally) build & deploy it alongside the "server" webapp via the "-Pdspace-rest" Maven flag. - The Submission Form configuration has changed. The "item-submission.xml" file has changed its structure, and the "input-forms.xml" has been replaced by a "submission-forms.xml". For early documentation see Configuration changes in the submission process (FULL DOCUMENTATION COMING SOON) - ElasticSearch Usage Statistics have been removed. Please use SOLR Statistics or DSpace Google Analytics Statistics. - The traditional, 3-step Workflow system has been removed in favor of the Configurable Workflow System. For most users, you should see no effect or difference. The default setup for this Configurable Workflow System is identical to the traditional, 3-step workflow ("Approve/Reject", "Approve/Reject/Edit Metadata", "Edit Metadata"). - Apache Solr is no longer embedded within the DSpace installer (and has been upgraded to Solr v9). Solr now MUST be installed as a separate dependency alongside the DSpace backend. See Installing DSpace. - Some command-line tools/scripts are enabled in the new REST API (e.g. index-discovery): See new Scripts endpoint: https://github.com/DSpace/Rest7Contract/blob/master/scripts-endpoint.md - DSpace now has a single, backend "server" webapp to deploy in Tomcat (or similar). In DSpace 6.x and below, different machine interfaces (OAI-PMH, SWORD v1 or v2, RDF, REST API) were provided via separate deployable webapps. Now, all those interfaces along with the new REST API are in a single, "server" webapp built on Spring Boot. You can now control which interfaces are enabled, and what path they appear on via configuration (e.g. "oai.enabled=true" and "oai.path=oai"). See https://jira.lyrasis.org/browse/DS-4257 (FULL DOCUMENTATION COMING SOON) - Configuration has been upgraded to Apache Commons Configuration version 2. For most users, you should see no effect or difference. 
No DSpace configuration files were modified during this upgrade and no configurations or settings were renamed or changed. However, if you locally modified or customized the [dspace]/config/config-definition.xml (DSpace’s Apache Commons Configuration settings), you will need to ensure those modifications are compatible with Apache Commons Configuration version 2. See the Apache Commons Configuration’s configuration definition file reference for more details. - Handle Server has been upgraded to version 9.x: https://jira.lyrasis.org/browse/DS-4205 - DSpace now has sample Docker images (configurations) which can be used to try out DSpace quickly. See Try out DSpace 7 ("Install via Docker" section) Additional Resources At this time, the DSpace 7 documentation is still in progress, but has begun at https://wiki.lyrasis.org/display/DSDOC7x/ That said, we have a number of recorded presentations and workshops available which provide an overview of all the new 7.0 features. - Presentations / Workshops from OR2019 (June 2019). Some video recordings exist https://wiki.lyrasis.org/display/DSpace/DSpace+7+at+OR2019 - Additional DSpace 7 presentations/workshops/webinars are planned for 2020 A full list of all changes / bug fixes in 7.x is available in the Changes in 7.x section.
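Several backend settings mentioned in these release notes are plain key/value properties set in the backend's local.cfg. A minimal sketch, using only the property names given above (the values shown are illustrative, not recommendations):

```properties
# local.cfg sketch -- only properties mentioned in these release notes

# Beta 3: allow Administrators to use "Impersonate EPerson"
# (only works on users who are not in the Administrator group)
webui.user.assumelogin = true

# Single "server" webapp: toggle a machine interface and the path it appears on
oai.enabled = true
oai.path = oai
```

To additionally build the deprecated v6-style REST API alongside the "server" webapp, the notes above give the Maven flag `-Pdspace-rest` (e.g. `mvn package -Pdspace-rest`).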
7.0 Acknowledgments Major Contributing Institutions The following institutions have been major code contributors to the DSpace 7 release (in general) Frontend / New User Interface Acknowledgments The following 29 individuals have contributed directly to the new DSpace (Angular) User Interface in this release (ordered by number of GitHub commits): Lotte Hofstede (LotteHofstede), Giuseppe Digilio (atarix83), Kristof De Langhe (Atmire-Kristof), Art Lowel (artlowel), William Welling (wwelling), Michael Spalti (mspalti), Laura Henze (lhenze), Jonas Van Goolen (jonas-atmire), Marie Verdonck (MarieVerdonck), Terry Brady (terrywbrady), Andrea Chiapparelli (andreachiapparelli), Ben Bosman (benbosman), Antoine Snyers (antoine-atmire), Matteo Perelli (sourcedump), Bram Luyten (bram-atmire), Courtney Pattison (courtneypattison), Álex Magaz Graça (rivaldi8), Tim Donohue (tdonohue), Chris Wiliper (cwiliper), Christian Scheible (christian-scheible), Alexander Sulfrian (AlexanderS), Paulo Graça (paulo-graca), Mohamed Mohideen Abdul Rasheed (mohideen), Philip Vissenaekens (PhilVis), Pascal-Nicolas Becker (pnbecker), Hardy Pottinger (hardyoyo), Mateus Mercer (MatMercer), Martin Walk (MW3000), Julius Gruber (Flusspferdl23). The following 5 individuals have contributed a translation of the new interface: Marina Muilwijk (Dutch), Claudia Jürgen (German), Maria Fernanda Ruiz (Spanish), Vítor Silvério Rodrigues (Brazilian Portuguese), Ivan Masar (Czech). The above contributor lists were determined based on historical contributions to the "dspace-angular" project in GitHub: https://github.com/DSpace/dspace-angular/graphs/contributors. 
Backend / REST API Acknowledgments The following 45 individuals have contributed directly to the DSpace backend (REST API, Java API, OAI-PMH, etc) in this release (ordered by number of GitHub commits): Raf Ponsaerts (Raf-atmire), Andrea Bollini (abollini), Mark Wood (mwoodiupui), Luigi Andrea Pascarelli (lap82), Terry Brady (terrywbrady), Tom Desair (tomdesair), Ben Bosman (benbosman), Tim Donohue (tdonohue), Marie Verdonck (MarieVerdonck), Chris Wiliper (cwiliper), Michele Boychuk (micheleboychuk), Jelle Pelgrims (jpelgrims-atmire), Kevin Van de Velde (KevinVdV), Andrew Wood (AndrewZWood), Peter Nijis (peter-atmire), Michael Spalti (mspalti), Patrick Trotter (PTrotter), Pablo Prieto (ppmdo), Alexander Sulfrian (AlexanderS), Hardy Pottinger (hardyoyo), Kim Shepherd (kshepherd), William Tantzen (santit96), Julius Gruber (Flusspferdl23), Saiful Amin (saiful-semantic), Mohamed Mohideen Abdul Rasheed (mohideen), József Marton (marton), Sven Soliman (ssoliman), Santiago Tettamanti (santit96), Maria Fernanda Ruiz (maraoua), Álex Magaz Graça (rivaldi8). The above contributor list was determined based on contributions to the "DSpace" project in GitHub since 6.0 (after Oct 24, 2016): https://github.com/DSpace/DSpace/graphs/contributors?from=2016-10-24&to=2020-03-03&type=c. Therefore this list may include individuals who contributed to later 6.x releases, but only if their bug fix was also applied to 7.0.
These include: - DSpace 7 Working Group - DSpace 7 Entities Working Group - DSpace 7 Marketing Working Group - DSpace Community Advisory Team (DCAT) We apologize to any contributor accidentally left off this list. DSpace has such a large, active development community that we sometimes lose track of all our contributors. Our ongoing list of all known people/institutions that have contributed to DSpace software can be found on our DSpace Contributors page. Acknowledgments to those left off will be made in future releases.
Aims • To give you a high level framework in which to build your systems for creative purposes • To get you to think past your code and output, and worry about the impact your project may have • To give you confidence in undertaking projects where you might one day call your software creative Overview • Very high level notions related to Computational Creativity • An attack on Turing-style tests • Technical and societal guiding principles for computational creativity • A first draft of a formalisation for Computational Creativity Theory Possibly Contentious I. High Level Notions Some Difficult Assumptions • There are no reliable definitions of creativity, in fact such definitions would probably contradict the idea of creativity • There are no right or wrong processes or methodologies, or good or bad artefacts, in certain domains • In some domains, the value ascribed by people to generated artefacts is based in part on how they were produced Science vs. Engineering • Cognitive sciences approach to computational creativity • Study human creativity and try to emulate aspects • Use models to further understand human creativity • Engineering approach to computational creativity • Realise that there is no single agreed upon description for multi-faceted human creative behaviour, and that other models of creativity may exist • Realise that we can change people’s opinions about the notion of creativity and get our software to do this too Creativity vs. the Perception of Creativity - People are perfectly capable of performing an ordinary/dull process and because the product of the process has some values not associated with it being creatively produced, those people may pretend that they acted creatively in the process, hence giving us the perception of creativity. - However, we usually talk about “real creativity” when talking about people, as if it is genuine, and would exist even if no-one was around to notice it happening. 
- In computational creativity, however, you might find it more useful to talk about the perception people have (or don’t have) of creativity in software. I certainly do. In fact, I only care about how people perceive my software, not whether it is actually creative or not (but my language doesn’t always reflect this). A Note about Language - When people say that a “building is creative”, either in scientific or layman situations, they probably mean that they are prepared to project the word creative onto the physical and cognitive processes that led to the completion of the building, i.e., that (some of) these processes were different from the norm in some way, or led to a novel concept, and (usually they mean that) the processes produced a building of value. It’s unlikely that they mean that the building itself is able to create, but some people talk about “creativity residing in the artefact”, which seems like ambiguous nonsense to me. A Note about Language • Of course, the phrase “the building is creative” is just a harmless shorthand, but it is factually incorrect, and we should strive to reduce ambiguity, especially in scientific writings. We also need to reserve the phrase “creative buildings” for future buildings which are indeed creative, (or for describing buildings in our thought experiments, or in our science fiction books, etc.) • And yes, we can estimate creativity based on the output of creative acts alone. But, we will likely rely upon default - and probably romantic - assumptions about how people create, and we probably only end up making relative comparisons (“person A has produced better music than person B, hence person A must’ve been more creative”). These are valid things to do, because person A and B are assumed to have largely similar processes by default, with A innovating somewhat, or with A having a fine-tuned aesthetic sense. 
However, if we learn of the processes behind the generation of A and B’s music, we could perfectly validly change our perception and valuation of their creativity. And this could have an effect on our valuation of their music. This has to be taken seriously into account in computational creativity, where the default notion of the process is uncreative. Formalisms vs. Implementations Turing-Style Tests - **Style 1:** A dialogue where the point of the exercise is to prove that it would be fair to call your software intelligent - **Closest to what Turing had in mind** - **Style 2:** A dialogue where the point of the exercise is to prove that people can’t tell the difference between talking to a person and talking to your software - **So, we implement software which often says unintelligent things** - **Style 3:** A comparison test with no dialogue, where the point of the exercise is to prove that the output of your software is of a similar (or higher) value to that produced by people - **This has often been applied in Computational Creativity research** Comparison Tests - It is certainly a milestone in the development of generative software (and for the field as a whole) if the output can be easily confused with that of people. This is because we can refer to the default position that people act creatively when they produce, and hence it is only fair to describe software similarly, as per my previous point. - And it allows objective comparison, enabling us to show progress in implementations. Importantly, we can be seen to be scientific in our evaluation methodology. - And journalists love setting up Turing-style tests, as it both informs and worries the general public, which helps to sell newspapers... - New Scientist and BBC Horizon However... - Imagine a comparison test where the tester performs the *reveal*: - “So, these paintings were painted by recent Royal College of Art graduates” - “And these ones were painted by..... a mass murderer!
- Wouldn’t your value judgements change? Problems 1 and 2 - Turing-style comparison tests set the computer up for a fall - The implicit assumption is that software should be very grateful if it is mistaken occasionally for a human - So, human level output becomes seen as the only goal of Computational Creativity research - Software is NOT human! - So, we end up missing out on possibilities where the software creates valuable, interesting artefacts in non-human ways - We should instead be loud and proud about the generative system being computer based, and help people to appreciate the value of computer generated artefacts Problems 3 and 4 - Turing-style comparison tests massively underestimate the importance of process in certain domains - This can lead to alienation of people, certainly in the visual art world, where art theory is all about process - Turing-style comparison tests answer the wrong question, e.g., which would you prefer, if you had to make up your mind without knowing fully how they were produced - Whereas in (commercial/artistic/scientific) reality, we will have full disclosure of practice as well as product - Or should we go through this charade with our software for the rest of our lives? Example: Board Games - Nestorgames sells a physical version of Yavalath, which was invented by the LUDI system of Cameron Browne - There is evidence that the computer invention of the game is a good selling point, which is admittedly unusual - Nestor is organising a competition, where computer-designed games are blind tested against human-designed games. There will be a large element of publicity in this... - But they will sell the games with full disclosure of how they were produced - So, are they asking the wrong question? Problems 5 and 6 - There are no right or wrongs in the visual arts. 
However, critics can severely inflict pain by saying that your work is “naive” and a “pastiche” - Turing-style comparison tests might encourage software to act unintelligently, to make it seem more human, hence it could be criticised as naive - Turing-style comparison tests definitely encourage the generation of pastiche pieces, as the measure of success is whether you have successfully imitated something which isn’t you - Would art graduates be happy if you said their pieces all looked like Monet pictures? Well put by Alison... Turing-style comparison tests are inappropriate for testing aspects of creative intelligence in software. See paper for other arguments. Boden’s “A Turing Test for Artistic Creativity” In [11], Boden discusses the Turing Test and artistic creativity. She provides an interpretation of the Turing Test which is specifically designed for computer art systems: “I will take it that for an ‘artistic’ program to pass the TT would be for it to produce artwork which was: 1. indistinguishable from one produced by a human being; and/or 2. was seen as having as much aesthetic value as one produced by a human being.” [11, p. 409] Boden’s “A Turing Test for Artistic Creativity” Boden describes several systems which produce art or music, which she considers to be either non-interactive or unpredictably interactive (such as a piece of art which responds to audience members or participants in ways they do not understand). She discusses comparisons with both mediocre human art, in this case pastiches of given styles (perhaps comparable to work by an art student exploring a given style), as well as examples which match world class human art, of interest as an artwork in itself (comparable to work done by a practising artist). 
She argues that the following systems all pass (her version of) the TT: • Richard Brown’s Starfish • Harold Cohen’s AARON • Art by Boden and Edmunds • David Cope’s EMI Boden’s “Turing Test for Artistic Creativity” In particular, Boden argues that “If being exhibited alongside Rothko, in a ‘diamond jubilee’ celebration of these famous artists, does not count as passing the Turing Test, then I do not know what would.” [11, p. 410]. Our Objections... - It’s an interpretation of Turing’s test which bears little resemblance to the original idea - There is no dialogue or interaction of any kind with the system as part of the test - The test can be passed without comparison to human intelligence, or even human output - So, it’s possible to pass the originally conceived Turing test (testably achieving human-level intelligence), yet not pass Boden’s test - Yet - as evidenced by the Starfish and by Boden and Edmunds art - it’s possible to pass Boden’s test without exhibiting any higher level cognitive functions The Starfish... Guidelines • The idea is to possibly appeal to these guidelines during the engineering, testing and engagement parts of your project • But also, they’re here to get you thinking about some more of the philosophical aspects of Computational Creativity research 1. Ever decreasing circles - It’s important to recognise that we have the potential to contribute as much to the understanding of human creativity as psychological studies do. - We don’t necessarily have to wait for discoveries about the nature of human creativity to add creative behaviour to our software. - We can imagine mutual benefits where all fields learn from each other - spiralling down to the truth. 2. Paradigms lost - As AI researchers and practitioners, you don’t necessarily have to see every intelligent task as a problem solving exercise. - If you do apply a reductionist approach, remember to put the pieces back together again. 
- The artefact generation paradigm can be found again: intelligent tasks are framed as opportunities to generate something of cultural value. 3. The whole is more than a sum of the parts - It is often more difficult to get your software to talk to other software than to implement a pale version of the software you want. - However, it’s likely that your software will be more powerful if you join forces with others. - And it helps to attract people to Computational Creativity if we use their software. 4. Climbing the meta-mountain - We need to constantly ask ourselves how we can hand over creative responsibilities to the software. - Plan in advance to one day get the software to take over what you are doing in projects. - In particular, think about how the software can take on aesthetic responsibilities, and possibly show intentionality in its work. - Try and hand over meta-level control and climb the mountain to the top. 5. The creativity tripod - People often take details of a generative process into account when they evaluate output artefacts. - The default position in public perception is that software cannot be creative, which can lead to a vicious circle where output is never seen as valuable. - Hence, we need to manage this public perception. - People will generally not ascribe creativity to software if it is lacking skill, appreciation or imagination. So, we can be proactive and aim to implement behaviours which tick these boxes. - Remember that tripods have three legs, with three sections to each leg: (programmer, software, audience). 6. Beauty is in the mind of the beholder - Value is not just skin-deep. - If you aim for pastiche, you might get useful software, but it’s unlikely to ever be taken seriously as creative in its own right. - Think about the process/output of/from your software having an impact on people, rather than the imitation game.
- Ask yourself: “Is a Turing-style test the right way to assess your software?” - people need to know about the entire creative act if they are to assess the output. 7. Good art makes you think - The output of creative software should really be seen as an invitation to start a dialogue. - Decorative art has value, but it is unlikely to be seen as great art, because it doesn’t give people an opportunity to have a dialogue with the artwork. - Dialogues can be audience-centric, or involve cultural aspects of the day, historical concepts, etc. - Our flavour of AI makes people think more rather than less. Aspirations for Computational Creativity Theory • Aim of computational learning theory: • “To give a rigorous, computationally detailed and plausible account of how learning can be done” (Dana Angluin) • Aim for computational creativity theory: • “To give a rigorous, computationally detailed and plausible account of how creativity can be done” Aim is to prove theorems about the nature of software, to enable comparisons Ground the theory in reality with respect to the amount of resources, user interaction, etc. Not aiming to capture all senses in which software can create, but be a rallying point Descriptive Models Should Provide...
- Some simplifying assumptions related to programming/running software and the appreciation by an audience of its behaviour and its output - A set of conceptual definitions which can be used to describe behaviour in software/programmers/audiences associated with acts of creation - A set of concrete calculations based on the definitions, which can be used to compare and contrast different software systems - Some suggestions for how the calculations could be applied in different application domains The FACE model To describe creative acts performed by software • Simplifying assumptions: • Even the smallest generative act can be described as a creative act (e.g., multiplying two numbers together) • Independently of the amount of impact the act might have • We can effectively restrict ourselves to discussing how software can produce eight types of output • Both the processes performed by software and the results of the processing need to be covered • The quality and quantity of creative acts can be used to compare creative software We use lower case to denote the output from the individual generative acts in the creative act tuples, and a bar notation to indicate constituent generative acts performed by a third party. 
- \( E^g \): an expression of a concept - \( E^p \): a method for generating expressions of a concept - \( C^g \): a concept - \( C^p \): a method for generating concepts - \( A^g \): an aesthetic measure - \( A^p \): a method for generating aesthetic measures - \( F^g \): an item of framing information - \( F^p \): a method for generating framing information The FACE model To describe creative acts performed by software - Comparison methods: - Volume of creative acts - Ordering of creative acts, e.g., \( <A^g, C^g, E^g> \) deemed more creative than \( <C^g, E^g> \) - By the nature of the processes, e.g., random deemed less creative than inductive - By using the aesthetic function (given or invented) in a domain The IDEA model To describe the impact that creative acts may have - Motivations - Creative software can invent its own aesthetics, so we need to generalise past value judgements - The influence of the programmer/user has to be assessed to evaluate the impact caused by the behaviour of the software - Simplifying assumptions - An ideal software development process described by FACE-tuples - Full knowledge of the creative acts that went into the production of all the relevant background knowledge - An ideal audience of members, \( m \), able to perfectly assess their appreciation of creative acts, \( A \), along two axes: - Well-being: \( wb_m(A) \) and cognitive effort: \( ce_m(A) \) [Note not creativity directly] The IDEA model To describe the impact that creative acts may have - Need a distance function, \( d \), to tell how close two creative acts are - Formalism for the development of creative software with respect to the programmer/user's influence - Compare software in terms of its autonomy from the programmer and from the cultural context it was programmed within The IDEA model To describe the impact that creative acts may have - Developmental stage: where all the creative acts undertaken by \( S \) are based on inspiring examples (c.f.
(Ritchie 2007)), i.e., \( \forall K \in \kappa, (\exists B \in \beta \ s.t. \ d(K, B) = 0) \). - Fine tuned stage: where the creative acts performed by \( S \) are abstracted away from inspiring examples, but are still too close to have an impact as novel inventions, i.e., \( \forall K \in \kappa, (\exists B \in \beta \ s.t. \ d(K, B) < l) \). - Re-invention stage: where \( S \) performs creative acts similar to ones which are known, but which were not explicitly provided by the programmer, i.e., \( \exists K \in \kappa \ s.t. \ (\exists A \in \alpha \ s.t. \ d(K, A) < l \land A \notin \beta) \). - Discovery stage: where \( S \) performs creative acts sufficiently dissimilar to known ones to have an impact due to novelty, but sufficiently similar to be assessed within current contexts, i.e., \( \exists K \in \kappa \ s.t. \ (\nexists A \in \alpha \ s.t. \ d(K, A) < l) \land (\exists A' \in \alpha \ s.t. \ d(K, A') < u) \). - Distraction stage: where \( S \) performs some creative acts which are too dissimilar to those known to the world to be assessed in current contexts, hence new contexts have to be invented, i.e., \( \exists K \in \kappa \ s.t. \ (\nexists A \in \alpha \ s.t. \ d(K, A) < u) \). - Disorientation stage: where all the creative acts performed by \( S \) are so dissimilar to known ones that there is no context within which to judge any of its activities, i.e., \( \forall K \in \kappa, (\nexists A \in \alpha \ s.t. \ d(K, A) < u) \).
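As a concrete illustration of how these stage definitions could be operationalised, here is a minimal Python sketch. It is not from the lecture: the representation of creative acts, the distance function `d`, and the thresholds `l` and `u` are all placeholder assumptions; the function simply checks the quantified conditions above, ordered from most to least constrained.

```python
def idea_stage(kappa, alpha, beta, d, l, u):
    """Classify a system S into an IDEA developmental stage.

    kappa: creative acts performed by S
    alpha: all known creative acts (non-empty)
    beta:  inspiring examples given to S (non-empty subset of alpha)
    d:     distance function between two creative acts
    l, u:  lower/upper novelty thresholds, with l < u
    """
    def nearest(K, acts):
        # Distance from act K to its closest neighbour in `acts`.
        return min(d(K, A) for A in acts)

    if all(nearest(K, beta) == 0 for K in kappa):
        return "developmental"            # all acts match inspiring examples
    if all(nearest(K, beta) < l for K in kappa):
        return "fine-tuned"               # all acts still close to examples
    if all(nearest(K, alpha) >= u for K in kappa):
        return "disorientation"           # nothing is assessable in context
    if any(nearest(K, alpha) >= u for K in kappa):
        return "distraction"              # some acts need new contexts
    if any(l <= nearest(K, alpha) < u for K in kappa):
        return "discovery"                # novel, but still assessable
    if any(d(K, A) < l for K in kappa
           for A in alpha if A not in beta):
        return "re-invention"             # close to known acts not in beta
    return "developmental"
```

The cascade ordering is one modelling choice among several, since the stage conditions as stated are not mutually exclusive; checking the most restrictive universal conditions first gives a deterministic classification.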
Formalism attempting to capture some common notions of impact, using the well-being and cognitive effort measures of the ideal audience - \( m(A) \) is the mean well-being amongst the ideal audience
\[
\begin{align*}
\text{dis}(A) &= \text{disgust}(A) = \frac{1}{2n} \sum_{i=1}^{n} (1 - wb_i(A)) \\
\text{div}(A) &= \text{divisiveness}(A) = \frac{1}{n} \sum_{i=1}^{n} |wb_i(A) - m(A)| \\
\text{ind}(A) &= \text{indifference}(A) = 1 - \frac{1}{n} \sum_{i=1}^{n} |wb_i(A)| \\
\text{pop}(A) &= \text{popularity}(A) = \frac{1}{2n} \sum_{i=1}^{n} (1 + wb_i(A)) \\
\text{prov}(A) &= \text{provocation}(A) = \frac{1}{n} \sum_{i=1}^{n} ce_i(A) \\
\text{acquired\_taste}(A) &= \frac{\text{pop}(A) + \text{prov}(A)}{2} \\
\text{instant\_appeal}(A) &= \frac{1 + \text{pop}(A) - \text{prov}(A)}{2} \\
\text{opinion\_splitting}(A) &= \frac{1 + \text{div}(A) - \text{prov}(A)}{2} \\
\text{opinion\_forming}(A) &= \frac{\text{div}(A) + \text{prov}(A)}{2} \\
\text{shock}(A) &= \frac{1 + \text{dis}(A) - \text{prov}(A)}{2} \\
\text{subversion}(A) &= \frac{\text{dis}(A) + \text{prov}(A)}{2} \\
\text{triviality}(A) &= \frac{1 + \text{ind}(A) - \text{prov}(A)}{2}
\end{align*}
\]
Comparison Study Mathematical Discovery Software • Comparison of types of creative act • AM and HR: \( <A^g, C^g, E^g> \) • But HR has more types of \( C^g \) and \( E^g \) generative acts • Meta-HR: \( <C^p, C^g, E^g> \) and \( <A^g, C^g, E^g> \) • TM took ModGen from \( <E^g> \) to \( <C^g, E^g> \) • In terms of precision, AM outperforms HR, but AM never left the fine-tuned stage of development, whereas we argue that HR is in the discovery stage, hence has had more impact Comparison Study Art Generation Software • Comparison of types of creative act • AARON and The Painting Fool: \( <C^g, E^g> \) • But The Painting Fool has more types of \( C^g \) • The Painting Fool collage generation: \( <A^g, C^g, E^g> \) • TPF + HR fitness function invention: • \( <A^g, C^g, E^g> = <\text{fitness function, scene, rendering}> \) • Most evolutionary art systems: \( <A^g, C^g, E^g> \), but NEvAr performs creative acts of the form: \( <F^g, A^g, C^g, E^g> \) because it uses mathematical fitness functions See Alison’s Paper for... - Motivations for the FACE and IDEA models coming from cognitive science, psychology and philosophy - Some links to existing Computational Creativity formalisms, such as from Ritchie, Wiggins, etc. - Case studies from the history of mathematics and the visual arts Next Steps... - Produce more comparison studies using the formalisms, especially in musical and linguistic domains (with your help...?) - Fix the problems in CCT which arise - Relate the theory more to existing formalisms - From Computational Creativity, but also by expanding CLT - Add descriptive models to CCT - I’m particularly interested in the ways in which software can obfuscate what it has done and what it has produced - Theory might drive practice in this instance Next Steps... In order to study creativity, and in particular to implement creative software, it is important to de-mystify creative processes. However, we argue that, in many senses, adding mystery (or allowing it to persist) is an inherent part of the creative process. This has led to the development of the XXX descriptive model which forms part of Computational Creativity Theory. This model can be used to compare and contrast software in terms of the value they gain by productive use of mystery in their behaviour. We motivate the model by argumentation and appeals to the literature, then introduce various formalisations and show how these could be used to show progress in certain domains of application. Next Steps...
- “The Programmer’s Programmer” - Your software development environment will act as a creative partner in building software - Based on theoretical studies where CCT measures are applied to small changes in code - Practically: multi-core machine which is constantly making possible changes to your software, compiling it, and showing you the results - You can quickly choose a new path to go down The Take Home... - The discussions you’ve seen between the lecturers here (in public and in the pub) highlight that this is a great time to get involved in helping us to define aspects of our field through formalisations of notions of creativity in software - We’ve come a long, long way. But there is still a long, long way to go Questions?
Impediments to Enterprise Agile by Kane Mar Once introduced into an organization, Agile software development practices can spread more quickly than they can be controlled. Without a strategy, Agile projects can emerge with different or even conflicting approaches. This is not necessarily a bad thing, but a traditionally top-down management structure can be uncomfortable adjusting to the more egalitarian Agile approach. There will be concerns over introducing Agile practices in a consistent manner without causing undue risk. **Impediments: People, Process and Technology** In the paper *Enterprise Strategy for Introducing Agile*, I discussed an overall road-map for introducing Agile methodologies into an enterprise, followed by a plan. That paper only considered an idealized scenario, one in which there was a logical progression and there was little (if any) objection. This is seldom the case. There is often very strong resistance to the introduction of Agile methodologies from many quarters. This resistance may be direct, or it may be indirect. Ultimately, however, it is related to the organization’s or individual’s perceived threat to their position, authority or compensation (i.e., power and money). This article discusses some of the impediments that may be faced by a transition to Agile methods. **What Enterprises will successfully adopt Agile?** This question is asked a lot, but in a different format. The question that is asked is usually “What types of projects are most suitable for applying Agile practices?” Although these appear to be quite different questions, they are not, because ultimately it’s not the type of project that determines Agile success, but rather the organizational and cultural constraints of the enterprise. The following quotation from Tom Poppendieck nicely sums this up. I’ve added the emphasis in order to drive the point home.
“*The constraints on when agile approaches can be used are primarily organizational and cultural, not project types.* Some organizations and some contexts are incompatible with agile/lean thinking. When these organizations eventually come up against a strong competitive threat, they will fail to meet it unless they change their values and mindsets. Lean/Agile is at its foundation, the fourth industrial paradigm, the first being Craft Production, Factory Production with machine tooling, Automation and Taylorism. These come along every hundred years or so and take a few decades to work through. Each paradigm includes the preceding one and makes it dramatically more productive. There is no need to sell agile except to organizations that want to survive long term. If they don’t see the threat/opportunity they cannot succeed with agile or lean nor can they sustain economic viability in the long run.” -Tom Poppendieck The main traits that I look for when I work with an organization are a willingness to listen, and to change accordingly. Many organizations will insist that they listen to their employees and that they have an agile culture. Although this may be true to a certain extent, the real test always comes when the Agile values conflict with the organizational values. If the organization does a lot of up-front analysis and design, what is the response when you ask them to write User Stories? Do they insist that Use Cases are better than User Stories, or do they ask you to show them how? If the organization traditionally provides individual offices for their developers, what is the response when you suggest common team rooms? Does the organization try to justify their current status quo, or do they experiment with alternatives? Provided that an organization is willing to listen and change then all these issues will be eventually resolved; it will only be a question of time and effort. 
People, Process, Technology, Teams, Management and Culture When initially introducing Agile practices to a team, the difficulties experienced by the team are all centered around the immediate adoption of the practices, and the consequences of that adoption. After some experimentation with Agile methods the focus becomes centered on larger problems that confront an entire team. Once experienced with Agile practices, the problems are larger still and will confront many teams. The scope and nature of the problems faced by an Agile team grow over time as knowledge of Agile practices spreads throughout the organization in ever increasing circles of influence. In an effort to discuss the issues faced by Agile teams, I’ve broken down these issues into the following categories: People, Process, Technology, Teams, Management and Culture. **People** Organizations will have people issues with or without introducing an Agile methodology. But I think it’s fair to say that the introduction of Agile methods increases the rate of change, and exposes any problems that might have been previously hidden beneath the surface. I feel that any solutions to people problems should be based around fostering good, clear communication between team members, and should emphasize teamwork over individual heroism. The following are typical situations that are frequently seen in Agile projects: • **Overly specialized skill-sets** requiring work to be handed-off several times before it can be described as complete. For example, an Analyst will describe the problem before handing-off to a DBA, who will work with the database before handing-off to a developer, who writes the functionality before handing-off to a Tester. The act of handing-off partially completed work is very expensive, and it is desirable to keep this to a minimum.
An approach to overcoming this is to ensure that all hand-offs [ideally there will be no hand-offs, but reality often dictates otherwise] are informal and to encourage pairing at all stages. Encouraging the team to work in small chunks of work also helps in this regard, as it forces the team to meet and communicate more frequently. • **Lack of ownership by the team.** This may be a problem at the start of an Agile project. Team members who are used to being handed instructions often have difficulty engaging with the project. Their learnt reaction is to wait until being told. In order to work like a team and to take ownership of the problem, teams need to be able to communicate in an open fashion. This can be encouraged by letting the team work directly with the customer, by ensuring that technical conversations are not shut down and by soliciting input from different members of the team. • Architects, Developers, Testers, etc. **refuse to interact with the team.** It is not uncommon for some team members to refuse to participate in the Agile process. They may have many justifications for this … they have too many prior commitments, their work cannot be broken down in a fashion that’s amenable to Agile SW development, their time is too important, they work better as an individual, etc. These are nothing but excuses. Regardless of the reason, the end result is that they are deliberately putting themselves outside the team. The team has to make its best effort to resolve these issues and to work with others who are attempting to separate themselves. However, every member of the team needs to contribute, and if one or two members of the team are putting themselves above the team then this needs to be quickly addressed by management, usually by the prompt removal of the individual(s). • **Senior Management giving mixed signals** regarding support for Agile. Nothing is more efficient at crippling an Agile project than a few carelessly chosen words from Senior Management.
Senior management need to decide prior to introducing Agile methods if this is something that they are committed to. Regardless of their own personal perspective, once an organization has decided to adopt an Agile methodology, senior management have an obligation to publicly support the effort. Agile teams and customers must also realise that they play an important part because unless they communicate their experiences to the wider organization, their successes will remain unacknowledged. It’s important for Agile teams to communicate frequently with senior management, to clearly articulate their successes, and any obstacles to their success. **Process** One of the common attributes for all Agile methods is that they are very process-light. Scrum specifically is often described as more of a framework than a process. What Agile methods explicitly acknowledge is that we don’t need complex processes to build software. In fact, the opposite is true. But for something that is conceptually simple, a lot of projects have difficulty trying to keep it simple. Here are some common process-related problems: - **No single customer (Product Owner) can be identified.** This is a common problem when there is a large separation between the business units and the software development departments, or in situations where several different groups have an equal interest in the success of a project. In my experience, most business customers really want their projects to be successful but they lack the understanding of how they can constructively interact with software projects. Methodologies such as Waterfall or RUP have reacted to negative customer involvement by trying to distance the customer from the development teams. Agile on the other hand seeks to integrate the customer, but with a better understanding of the interaction.
In the situation where a single customer cannot be identified, it’s really necessary to identify the project sponsor … the single person who ultimately approves the funding of the project. By working with the sponsor and clearly articulating the need for a single business representative on the project team, this situation can usually be quickly resolved. In the situation where several different groups have an equal interest in the success of a project, the team still needs a single representative who is willing to work with each of the different groups and prioritize accordingly. Again, working with the project sponsor will provide the path to a solution. - **Management wants to combine elements from both RUP and Agile.** The end result of doing this will have the disadvantages of both methodologies without the advantages of either. There are several different approaches to doing this; RUP management process combined with XP engineering practices being the most common. The motivation for doing this is often a desire to continue to provide familiar reports without having to re-educate senior management or business partners. I personally feel that this approach underestimates senior management, and does the organization a disservice. Better results can be had by educating management and business partners while adopting a single approach, whether it be RUP or Agile. - **Scrum Master refusing to protect the team.** The primary function of the Scrum Master (or Iteration Manager) is simply to protect the team and to remove impediments. It takes a lot of courage to report problems to senior management and to make changes that others in the organization may view as a direct threat to their position. At the very best, most SMs will [at some point] have their credibility questioned. Without an effective SM, however, who will protect the team?
Who will stand up to those who would distract the team with unrelated activities, or to the Product Owner to say that their expectations are unreasonable? The SM role is essential to maintaining healthy communication between the team members and to protecting against distracting external influences. If the team isn’t getting the help they need, then perhaps the first problem that needs to be resolved is a more effective SM. **Technology** Process practices are common to all Agile methods, but engineering practices are not. Extreme Programming (XP) is different in that it explicitly prescribes engineering practices in addition to process practices. But why would specific engineering practices be of interest, and how is this different from other (Waterfall, RUP) methodologies? The heart of the answer lies with reducing the cost of producing a usable product on a frequent and repeatable basis. Iterations are common to all Agile methods, and the outcome of those iterations is typically a usable product. It takes considerable overhead to build, test and package a product. In order to be able to do this repeatedly (every iteration) in a cost effective manner requires a significant amount of automation, hence Agile engineering practices. Processes which either increase the cost [of producing a usable product] per iteration, or do not actively reduce the cost per iteration, should be immediately eliminated. Examples of wasteful practices are listed below: - **Not having a reliable build system and processes.** The source control, build and testing of software is an intimate part of any software development team. It simply makes sense to have this process as efficient as possible. If it takes 2 hours for a developer to check in code and do a build, then automating the process is going to allow more constructive use of that time.
Clearly, if the team is building and releasing code many times per iteration, then any effort expended to create a reliable build and testing framework is going to be well worthwhile. - **Not addressing QA issues.** With more traditional development methodologies, testing is usually addressed at the end of the cycle. So, it’s not surprising to see teams take up Agile and then not pay appropriate attention to defects and code quality. Code quality will quickly become a problem with repeated iterations; especially if the team is trying to build new functionality on top of defect-laden code. In addition, delaying testing until the later stages of a project will result in late discovery of unanticipated side-effects. Producing high quality code from the very onset should be the ultimate goal of every project. It is only by addressing quality issues early and proactively that the full nature of the software will be known. - **Ineffective tools mandated by a committee or external parties.** Large organizations often feel the need to control the number and types of tools that are used by project teams. This can be for a number of very good reasons such as licensing, IP and IP rights management, and security. But when the authorization process or the tools themselves act as an impediment to the team making progress, something needs to be done. If a tool doesn’t meet developer needs, then why should they use it? If the authorization process for introducing CruiseControl takes more than a few weeks, then what does this say about the organization’s attitude towards software quality? At this point the SM and project team need to make their point of view known to senior management. They need to explain why tools that are relevant to the work that they’re doing should be supported, and that having the right tools can make the difference between a product and a high quality product. **Why all the discussion on Agile impediments?**
Dealing with issues, problems and difficulties is part of everyday work life, and there is nothing new in Agile methodologies which tells us how to deal with these problems. So why all the discussion on Agile impediments? For me, the answer is that Agile methods make visible those problems that were previously ignored, hidden or otherwise invisible to the organization. It is only by actively acknowledging the dysfunctions that teams and organizations have that we are able to address them. And it's only by discussing these problems that we're able to share information, and to understand that many of the problems we face today are shared by individuals in software organizations all over the world.

**Impediments: Teams, Management and Culture**

When initially introducing Agile practices to a team, the difficulties experienced by the team are all centered on the immediate adoption of the practices, and the consequences of that adoption. After some experimentation with Agile methods, the focus shifts to larger problems that confront an entire team. Once experienced with Agile practices, the problems are larger still, and will confront many teams. The scope and nature of the problems faced by an Agile team grow over time as knowledge of Agile practices spreads throughout the organization in ever increasing circles of influence. In an effort to discuss the issues faced by Agile teams, I've broken down these issues into the following categories: People, Process, Technology, Teams, Management and Culture. The final segment of this paper discusses the team, management and cultural impediments faced by teams transitioning to an Agile model.

**Teams**

On the whole, teams seldom present impediments to Agile adoption. Most Agile frameworks strongly advocate close, cohesive teamwork, and the result is typically advantageous to the team environment.
The learning process that the team needs to undergo may be very slow [at least initially], but I personally won't consider that to be an impediment. I simply view this as being the nature of change. So, if there is going to be any friction, it is more likely to be between different teams than within a particular team. There is the possibility of friction in any situation where one team is dependent upon another, or where two different teams are using different methodologies (e.g., Agile vs. RUP). In both these situations the standard approach is to prioritize dependent functionality early and code to an agreed-upon interface ... but this is overly simplistic. I feel that it's important to recognize that there is a dynamic relationship between dependent teams which needs to be actively and continuously worked. Failing to acknowledge the risks involved in dependent projects can quickly lead to confusion and impact team performance.

**Management**

> "In the culture of management, the worst thing you can do is admit to anyone that you have a problem you can't handle by yourself. If you really do need help, you have to sneak it in somehow without admitting in public that there is any problem at all." - Gerald Weinberg

Educating a new client on how to build an Agile organization can be very slow and difficult work. In order to be successful you need to be able to speak persuasively at many different levels of the organization. Speaking to the team or to individual members of the team is the usual approach, because this is the level into which most consultants are brought. But there is also a wider audience to consider, which includes functional managers, the PMO and HR. Failure to address this wider audience can hobble the transition to Agile (see below). There is an assumption that management are able to see the changes for themselves and that they will automatically understand the value of Agile methods.
I've personally found management to be no different from everyone else: they also require education, coaxing and convincing that there is a better way to develop software. The only difference is that the topics of conversation are different for management than they are for developers: instead of discussing TDD, discuss Agile metrics (burn-down graphs over actual developer hours); instead of discussing Pair Programming, discuss the need for collocation. When working with management, I've found that I've needed to pay special attention to both Agile metrics (or the lack thereof), and adaptive planning over predictive planning. The Agile approach to both of these is counterintuitive for most classically trained managers and requires constant reinforcement.

**Company Culture**

As I've mentioned in previous articles, cultural changes are the most difficult to make within an organization. There will constantly be the argument "That's not how we do things here", or "We can't implement that here". These are "Yes, but ..." arguments, as Ken Schwaber points out in his course. There are no easy answers, especially for something that's as intangible as company culture.

> "... logic and culture have nothing to do with one another." - Gerald Weinberg

Culture-specific impediments are often related to how an organization rewards its employees, specifically the compensation, promotion and career planning models. In an environment where performance is determined by specialization of knowledge, the promotion and compensation models reward the compartmentalization of knowledge. Over a period of time (several years) this results in the organization having two (and maybe more) attributes: certain activities are performed only by specific individuals; and the organization is managed as a matrix. The first is immediately obvious. If Alice is rewarded for her understanding of the security system, for example, then she is likely to continue doing this until the reward mechanism is changed.
The second point is not nearly as obvious, but is a reaction to increased specialization. In order to help circulate knowledge and information, groups are formed where the individuals share a common function. Typically this results in an organization where there are groups for Analysts, Architects, Developers, Testers etc. Project teams are then composed by selecting individuals from each of the different groups. Ironically, this functional grouping of people serves to further segregate information. Agile methodologies break down these arbitrary boundaries on a project-by-project basis by encouraging cross-functional teams. Long term solutions are dependent on rewarding teamwork and breadth of understanding.

**This is what you should not do ...**

When I was a child one of my favorite books was "The Bike Lesson", where the common refrain was always "This is what you should not do." In the same spirit I offer the following example of what not to do. As an example of failing to address all relevant parties in an Agile transition, I'd like to use an actual project of mine from several years ago. A small team of experienced XP developers and I were asked to help a client introduce XP into the organization. We had some initial success and quickly brought the team up to speed. After a couple of months we had a collocated team, working in pairs to deliver tested software every two weeks. At the end of eight months, however, we had left the client and had had only minor success in influencing other Agile projects. So what went wrong? Looking back on it, I made several mistakes of omission:

- I failed to adequately address dependencies between two projects. The result was a delay in completing functionality, and this directly impeded the team's progress.
- I failed to recognize and address issues around developer compensation. Developers were rewarded (by promotion and compensation) for both specialization and building large frameworks.
Introducing Agile and encouraging generalization over specialization of skills was a threatening move because it challenged both their position and their year-end bonuses. By failing to address the issues around compensation, I failed to gain influence with the architects, and eventually was unable to make necessary recommendations.

- I failed to fully educate senior managers on adaptive planning. The result was that they continued to try to fix an end-date, which became increasingly unrealistic with each passing iteration and cast doubt on the other successes that the project team were achieving.

My reason for providing this example is to make you aware that successfully introducing Agile methods into an organization is not a simple task. It takes many different skills, both technical and political. The physical and political environment that you're working within requires as much attention as the technical problems that you're trying to solve.

**Summary**

Over the last few months, I've discussed a roadmap for the introduction of Agile methods, one possible model for that introduction, and some of the impediments that can occur along the way. If you find yourself going down a similar path, I wish you every success and I would love to hear about your journeys.

About the Author: Kane Mar: I'm an Agile coach specializing in Scrum and Extreme Programming, currently working with great people at Danube Technologies ( http://www.Danube.com ). I've been a ScrumMaster and Agile Coach for over five years. I have recently become a Scrum Trainer (CSMT) and I'm looking forward to sharing my experience with others in the Agile community. My personal blog can be found at http://kanemar.wordpress.com. Prior to my work with Agile software development, I had 15 years of experience as a developer and project manager for waterfall and RUP projects working with Java, Smalltalk, C, SQL and PL/SQL.
My recent technical interests have turned towards Ruby, primarily because of its similarity to Smalltalk.

References:
1. Weinberg, Gerald, *Secrets of Consulting: A Guide to Giving and Getting Advice Successfully*
2. Ibid.
4. Berenstain, Stan and Jan, *The Bike Lesson*
Original software publication

**SENinja: A symbolic execution plugin for Binary Ninja**

Luca Borzacchiello*, Emilio Coppa, Camil Demetrescu

*Sapienza University of Rome, Italy*

**Article history:** Received 5 October 2020; received in revised form 16 December 2021; accepted 26 September 2022

**Keywords:** Reverse engineering; Symbolic execution; Cybersecurity

**Abstract.** Symbolic execution is a program analysis technique that aims to automatically identify interesting inputs for an application, using them to generate program executions covering different parts of the code. It is widely used in the context of vulnerability discovery and reverse engineering. In this paper we present SENinja, a symbolic execution plugin for the BinaryNinja disassembler. The tool allows the user to perform symbolic execution analyses directly within the user interface of the disassembler, and can be used to support a variety of reverse engineering tasks.

© 2022 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

**1. Motivation and significance**

Software reverse engineering is the process of reconstructing the operation, the design, and the architecture of a piece of software, starting from an end product, e.g., a compiled binary program. The process is typically hard since it involves analyzing thousands of lines of code, written in low-level languages (e.g., assembly), without documentation and often obfuscated to be harder to analyze. Despite the difficulties, reverse engineering is crucial in several circumstances: for example, in malware analysis and security assessment of proprietary software. While reverse engineering is mostly a manual task, researchers and developers have built tools and techniques that can help speed up the process. Disassemblers are essential tools for analyzing compiled binary programs.
The job of a disassembler is to translate a compiled binary into human-readable assembly code, arranging it in a Control-Flow Graph (CFG) that highlights the structure of the code. There are several available disassemblers [1–4], and among them, BinaryNinja [5] is one of the most used by the cybersecurity community. In addition to the normal tasks of a disassembler, it implements other types of analyses and exposes them in a complete and well-documented set of APIs. For example, BinaryNinja performs code lifting, which is the translation of assembly code of a given architecture to a higher-level intermediate language (IL). Examples of such languages are LLVM IR [6] and VEX [7]. Lifting simplifies program analysis as it: (a) reduces the number of different (often redundant) instructions that need to be handled by an analysis and (b) favors portability since any architecture supported by the lifter will be also handled by the analysis. BinaryNinja lifts instructions of the most common architectures (e.g., x86, x86_64, ARM, MIPS) to LLIL (Low Level IL): Fig. 1(b) shows on the right the LLIL generated by BinaryNinja when lifting the x86_64 code shown on left. Symbolic execution is a widely popular technique in the context of bug detection and reverse engineering [8–14] that can automatically generate inputs for a program. The goal is achieved by constructing expressions over symbolic inputs and using a satisfiability modulo theory (SMT) solver (e.g., Z3 [15], FuzzySAT [16]) to reason over them. As an example, consider Fig. 1(a). On the left, we have a function authenticate\(^1\), while on the right we have the symbolic tree that represents the result of the symbolic exploration on this function. 
A symbolic execution engine evaluates the code of a function as an interpreter, initializing input variables as symbols (in the example, variable \(a\) is initialized with symbol \(α\), which can initially assume any value in the interval \([0, 2^{32} - 1]\)), and building symbolic expressions instead of performing computations on concrete values. A state is the abstract object that holds the memory and the constraints accumulated along an execution path. When the execution hits a branch, if the condition involves symbolic values, the execution forks, i.e., the symbolic engine splits the current state into two states. The two states model the outcomes of the two branch directions (in the example, line 3 generates two states to model when \(α \oplus 170 \neq 187\) is true and false, respectively). The execution can then continue on the two states separately. At any time during the exploration, the constraints collected in a state can be used to generate, with the help of an SMT solver, an input that would have driven a concrete execution along the same path as the state. In Fig. 1(a), the two final states in the execution tree can be reproduced using input values equal to \(α = 0\) and \(α = 17\), respectively. Notice that it is very unlikely that a brute-force approach would generate an input that covers line 6, since the search space has \(2^{32}\) values.

Symbolic execution has proven to be a fundamental ingredient for finding bugs and vulnerabilities. For instance, it was used during the development of Windows 7, finding almost one-third of the bugs revealed with fuzzing techniques [17]. Moreover, it has also been a pivotal component for most systems playing in the Cyber Grand Challenge [18], a two-year competition run by DARPA seeking to create automated tools for finding, exploiting, and patching software vulnerabilities.

**2. Software description**

In this article, we present SENinja, a tool that implements a symbolic execution engine as a plugin of BinaryNinja.
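The fork-and-solve behavior described above for Fig. 1(a) can be illustrated with a few lines of Python. This is a toy sketch, not SENinja's implementation: the brute-force `solve` stands in for an SMT solver such as Z3, and the input is restricted to 8 bits to keep the search space small.

```python
# Illustrative sketch of symbolic forking on the branch
# `if (a ^ 170 != 187)` from Fig. 1(a). Constraints are Python
# predicates over the input; solve() brute-forces an 8-bit domain
# as a stand-in for an SMT solver.
from dataclasses import dataclass, field

@dataclass
class State:
    constraints: list = field(default_factory=list)  # predicates over input a

    def fork(self, cond):
        """Split this state into (taken, not-taken) successors."""
        taken = State(self.constraints + [cond])
        not_taken = State(self.constraints + [lambda a, c=cond: not c(a)])
        return taken, not_taken

    def solve(self, domain=range(256)):
        """Return one concrete input satisfying every constraint, or None."""
        for a in domain:
            if all(c(a) for c in self.constraints):
                return a
        return None

root = State()
true_branch, false_branch = root.fork(lambda a: (a ^ 170) != 187)

print(true_branch.solve())   # 0: any input with a ^ 170 != 187 works
print(false_branch.solve())  # 17: the only input with a ^ 170 == 187
```

The two printed values mirror the inputs \(α = 0\) and \(α = 17\) that reproduce the two final states of the execution tree.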
SENinja evaluates the Low Level IL (LLIL) generated by BinaryNinja and is integrated into the BinaryNinja user interface (UI), allowing users to perform symbolic execution without switching to other tools. Fig. 4 gives a visual overview of the plugin.

**2.1. Software architecture**

Fig. 2 shows the architecture of SENinja. The main software component of the tool is the Executor. It is a high-level interface that is in charge of holding the generated states and of executing instructions symbolically on the current active state. It interacts with BinaryNinja to obtain crucial information about a binary, such as the LLIL representation and the memory layout. The commands exposed by SENinja, which are accessible through the UI of BinaryNinja, are constructed on top of this high-level interface. In the next sections, we describe in more detail the inner components of the Executor, explaining some of the design choices that we made.

**2.1.1. State**

A state represents a snapshot of the execution along a path. Looking at the right-hand side of Fig. 1(a), every node in the symbolic tree corresponds to a state.

(Footnote 1: The function is written in C for simplicity; SENinja targets binary code.)

A well-known problem in symbolic execution is state explosion. While SENinja cannot solve this problem in general, it can at least minimize the overhead of keeping track of different but similar states generated during the exploration. To this aim, we have designed every component of the state to have a Copy-on-Write (CoW) behavior in order to reduce resource consumption when forking a state. Another common problem in symbolic execution is the handling of symbolic memory accesses, i.e., reasoning on the effects of a memory operation when the memory address depends on the value of the program inputs. SENinja supports different memory models for handling memory accesses:

**Fully symbolic.** Symbolic memory accesses are handled by considering every memory cell that can be accessed. While this is the slowest mode, it is also the most accurate.
**Fully concrete.** This model concretizes the expression of the address to a single concrete value. This is the fastest mode, but also the least accurate.

**Partially symbolic.** This model falls in the middle of the previous approaches. It uses a fully symbolic approach, but only if the number of possible values that the symbolic address can assume is sufficiently small. If the address is concretized, the access is redirected to a newly allocated page, and any other symbolic address referring to it as a base address is handled accurately within the allocated page. This is the default memory model in SENinja.

To evaluate the impact of the symbolic memory models and the CoW strategy, we consider a benchmark involving a symbolic computation of a CRC32 checksum. The left chart of Fig. 3 shows the running time of different symbolic executors when computing the checksum on an increasing number of symbolic bytes (from 1 to 1024 bytes). The benchmark is characterized by several symbolic accesses, whose result is crucial to compute the input that, when processed, should generate an expected CRC value. We consider: (a) *Klee* [10], a source-based symbolic executor; (b) *angr* [8], a binary symbolic execution framework, enabling the support for symbolic accesses; (c) SENinja (fully concrete), which uses the fully concrete memory model; and (d) SENinja (partially symbolic), which uses the partially symbolic memory model. We do not consider the fully symbolic memory model in this benchmark since the memory accesses are restricted within a few memory pages, thus generating the same behavior as the partially symbolic memory model. SENinja (fully concrete) is very efficient but very inaccurate: it fails (cross markers in the chart) to derive the input for most checksum sizes. *angr* scales only for small checksum sizes (up to 16 bytes), after which it takes more than 1 hour (the timeout used in our experiment).
*Klee* is very efficient; however, it exploits knowledge derived from the source code (in particular, the size of an array accessed by the benchmark). SENinja (partially symbolic) can correctly reason on the checksum computation up to 512 bytes, being faster than *Klee* for several checksum sizes. Recently proposed array optimizations could be integrated into SENinja to further improve its scalability. The middle and right charts of Fig. 3 show the resource consumption of SENinja (partially symbolic) with and without the CoW strategy. During these experiments, we have disabled the solver to focus on the resource consumption due to state exploration, which is what is impacted by the CoW strategy. The benefits resulting from the CoW strategy can be clearly seen in terms of both running time and memory consumption.

**2.1.3. Symbolic expressions**

SENinja represents symbolic expressions using the theory of bitvectors [28], which models the semantics of fixed-size bitvector arithmetic. In particular, SENinja uses a custom Abstract Syntax Tree (AST) class to wrap bitvector objects from the Z3 SMT solver. It does not use the AST of Z3 directly for mainly two reasons: (a) concrete computations can be performed more efficiently and (b) SENinja can be easily ported to other SMT solvers by updating the wrapper class. Additionally, SENinja enriches the AST representing an expression with a range interval, which provides an over-approximation of the possible values that the expression can assume in a state. For instance, SENinja computes the interval $[256, 511]$ given the expression $256 + \alpha$, which represents a 32-bit addition of the constant 256 to a zero-extended 8-bit input value $\alpha$. Interval analysis is extremely valuable in the presence of symbolic memory accesses, as it may allow SENinja to evaluate which memory pages could be modified during the execution without querying an SMT solver.
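The interval propagation just described can be sketched in a few lines of Python. This is an illustrative toy, not SENinja's code: each expression carries an over-approximating `(lo, hi)` pair, and unsigned n-bit addition falls back to the full range when the sum may wrap around. Adding the constant 256 to a zero-extended 8-bit symbol (values 0..255) yields the interval (256, 511).

```python
# Toy interval (range) analysis for unsigned n-bit addition.
# On a possible wrap-around the analysis loses precision and
# returns the full range [0, 2^n - 1].

def add_interval(a, b, bits):
    lo, hi = a[0] + b[0], a[1] + b[1]
    if hi >= (1 << bits):              # overflow possible: give up precision
        return (0, (1 << bits) - 1)
    return (lo, hi)

alpha = (0, 255)                       # zero-extended 8-bit input symbol
const = (256, 256)                     # the constant 256
print(add_interval(alpha, const, 32))  # (256, 511)
```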
The current implementation does not yet support strided intervals and, in case of wrap-around, returns the range $[0, 2^n - 1]$, where $n$ is the number of bits in the expression.

**2.1.4. OS and function models**

To handle system calls and invocations of functions from dynamic libraries, SENinja provides models [8] that describe the effects of external code on the current state. Currently, SENinja provides models for the most common C library functions (e.g., `memcpy`, `memset`) and the most used Linux system calls. The models are written in Python, and new models can be added with a few lines of code [29]. However, to reduce the need of writing OS models from scratch, SENinja offers preliminary support for a compatibility layer that allows it to reuse models available for the well-known symbolic executor angr [8]. Finally, SENinja supports custom hooks [30]. They allow modeling a small part of the functionalities of an external piece of code, which is sufficient in several reverse engineering tasks and can be used to overcome the lack of some models.

**2.2. Tool functionalities**

Fig. 4 shows an overview of the interface of SENinja. We now review the main functionalities, highlighting how they can be accessed directly through the UI of BinaryNinja.

**Symbolic state construction and initialization.** The symbolic execution can start at any point in the program. SENinja initializes a state using the memory content obtained from BinaryNinja. It also exploits the Value Set Analysis [31] performed by BinaryNinja to detect, e.g., constant registers. By default, unknown data is marked as symbolic; however, a user can choose other policies (e.g., zero-initialization).

**Debugger-like step functions.** In SENinja only a single state can be active at any time. Symbolic execution can be performed on the current active state using commands that are inspired by debuggers. The commands are: `single step`, `continue until address` and `continue until branch`.
Hence, through the UI, the user can change the current active state and start a new exploration using one of the previous commands. Since the symbolic exploration may take a long time to, e.g., reach a specific address, the user can bound the exploration time by setting a timeout (through the settings panel), or stop the exploration at any time using a dedicated command from the right-click menu. After an exploration, SENinja can highlight in the CFG which instructions have been executed by a state during the exploration.

Fig. 4. The BinaryNinja interface with the SENinja plugin. The active state is at the address in green (1). Deferred states are marked in red (2), with a comment indicating the number of states at the same address. The memory and registers of the active state can be viewed using widgets (3) and (4). Symbolic buffers can be viewed and created using (5). Commands are accessible through the right-click menu (6). The CLI can be accessed using the Python console (7).

**State merging.** If two or more states are executing the same instruction, the user can decide to merge them [33]. While state merging can reduce memory consumption, the solver may struggle to reason on formulas derived from a merged state, since they can be more complex. The merging algorithm is inspired by the strategy implemented in the source-based symbolic executor Klee [10]. Before merging two states, SENinja checks their successors: if they are different, i.e., the two states would take different directions, then the merging operation is aborted.

**Automatic searchers.** In addition to executing a single state, SENinja provides automatic searchers that can be used to search through the paths of the program in order to find an input that reaches a certain program point. The user, through the right-click menu, can set an address as the target of the search and can mark a set of addresses to be avoided during the search.
Then the user can start the searching process using a DFS or BFS algorithm.

**Memory, register and buffer view.** The memory and the registers of the current active state can be viewed using the SENinja widgets (see (3) and (4) in Fig. 4). The widgets can be used to view and modify concrete data, view symbolic expressions, evaluate expressions using the solver, or inject new symbols. When evaluating an expression, the user can generate up to $k$ solutions, where $k$ is a user-defined value. Symbolic buffers can be created and constrained using a dedicated widget (see (5) in Fig. 4).

**Command line interface.** Complex operations can be performed using the command-line interface. BinaryNinja has an embedded Python console, which can be used to invoke the command-line API of SENinja. For example, the user can set specific constraints over an input, or can define a custom hook for a library function. A detailed description of the command-line API can be found in the project wiki.

**3. Illustrative example: analyzing virtual machine obfuscation**

In this section, we present a case study in which we use SENinja for reverse engineering of obfuscated code. Obfuscation is the act of producing code that is difficult for a human to understand. Developers obfuscate code in order to make the reverse engineering process more difficult, e.g., to protect a license checker or a proprietary algorithm. Obfuscation is also widespread among malware writers. Virtual machine obfuscation is one of the most used and effective obfuscation techniques [34]: it translates the code to obfuscate into a custom bytecode, and then replaces the original code in the binary with this bytecode and a custom virtual machine that, at runtime, reproduces the behavior of the original code by interpreting the generated bytecode with custom opcode handlers. As an example of obfuscated code, we consider the 11th challenge [32] from the reverse engineering competition Flare-On 6 [35].
The program is a 64-bit PE that uses virtual machine obfuscation to protect a function that checks several conditions on user-provided inputs. Hence, this function can be seen as a license key checker, and we use SENinja to automatically find inputs that are accepted by this checker.

**3.1. Preliminary analysis**

We begin by manually analyzing the binary using BinaryNinja. The main function can be identified at address 0x140001220 (see Fig. 5). This function considers two input strings (obtained as command-line arguments), where the second string has a size of 32 bytes. It then calls vm_loop: this function is the virtual machine dispatcher loop, i.e., the routine that fetches the bytecode and calls the proper handlers to perform the obfuscated computation. After running the obfuscated code, main calls the function final_checks, which checks that the first string is FLARE2019 and validates the output of the obfuscated computation, executing the code at 0x14000169d in case of success or the code at 0x14000178a in case of failure. Since the first input is known after this preliminary analysis, the main goal is to find the value of the second input without spending hours manually reversing the obfuscated computation.

**3.2. Finding a valid input**

After obtaining a general idea of the structure of the binary, we can use SENinja to automatically identify a value for the second input that satisfies the check. We first create an initial state at the beginning of main (right-click, Start symbolic execution). Then we use the buffers widget to create a new symbolic buffer of 32 bytes (step 1 in Fig. 6). We then set up the command-line arguments using the Setup argv command from the SENinja toolbar (step 2), setting the string FLARE2019 as the first argument and the buffer that we just created as the second argument.
After defining the symbolic inputs and creating an initial state, we mark address 0x14000169d as the target point (right-click, Set target), i.e., the code that we want to reach during the symbolic exploration, and address 0x14000178a as an avoid point (right-click, Set avoid), i.e., the code that is not interesting for our exploration. Finally, we can start the execution using the DFS searcher (right-click, Run DFS). After a few seconds, SENinja generates a state reaching the target point. Using the buffers widget (steps 3 and 4 in Fig. 6), we can obtain the concrete input that passes the check: cHcyrAHSmxEkpyqoCBCyGhuyFyCmy86Ee.

4. Comparison with other tools

A few previous works [36,37] have explored solutions for integrating symbolic execution into graphical reverse engineering tools. For instance, Ponce [36] integrates the dynamic symbolic execution engine Triton [38] into the commercial disassembler and debugger IDA Pro. A crucial design difference with respect to SENinja is that Ponce cannot analyze code statically, which is a common requirement in the presence of binaries for non-standard architectures or non-executable memory dumps. Another interesting solution is IDAngr [37], which combines the symbolic framework angr [8] with IDA Pro. Unfortunately, this plugin is no longer actively maintained, and its integration with the UI of IDA Pro is quite limited. AngryGhidra [39] and modality [40] are two recent projects that expose the functionalities of angr in Ghidra [4] and Radare2 [2], respectively. AngryGhidra obtains some exploration parameters (e.g., the start and target addresses) from the user through the UI, but then starts angr using a fixed and predefined script, leaving very limited opportunity for interaction. Modality instead embraces the spirit of Radare2 and exposes several new actions in its command-line interface.
Several steps from Section 3 cannot be performed with the current release of these two plugins, forcing the user to manually interact with angr or to face severe path explosion. Finally, SymNav [41] devises a visual representation of the symbolic tree. Unfortunately, this viewer is a standalone component that cannot currently be integrated into tools such as IDA Pro or BinaryNinja.

5. Impact and conclusions

SENinja is a symbolic execution plugin for BinaryNinja, a commercial disassembler widely used by the cybersecurity community. SENinja extends the functionalities of the disassembler, giving the user access to symbolic execution analysis directly within BinaryNinja, possibly simplifying reverse engineering activities. Furthermore, it is designed to be extensible, allowing users to implement new features typically by adding a few lines of Python code. After the public release of SENinja on GitHub, the BinaryNinja community has shown positive interest in it: SENinja has recently been officially included in the community plugin repository [42] of BinaryNinja. Moreover, a well-known security expert has tried SENinja, positively mentioning it in a blog post [43]. We hope that, in the next few years, SENinja can become one of the reference tools for reverse engineers.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Supplementary data

Supplementary material related to this article can be found online at https://doi.org/10.1016/j.softx.2022.101219.

References
Secure Coding: Practical Steps to Defend Your Web Apps. Copyright SANS Institute; the author retains full rights. This paper is from the SANS Software Security site; reposting is not permitted without express written permission. Upcoming events offering "Defending Web Applications Security Essentials (DEV522)" are listed at http://software-security.sans.org/events/

Mobile devices, particularly those owned by employees and used to access work applications, represent the latest front for attackers. Employees are downloading applications vulnerable to or infected with malware that mix with company e-mail, productivity/workforce and other business applications. Because of this new threat, SANS conducted a survey to discover organizational awareness of, and the procedures around, mobile application risk. (This survey follows a first survey, focused on the state of mobile device security,¹ and a second, focused on policies and practices used to secure those devices.²) In this survey on the security of mobile applications, we found that most organizations are concerned about mobile applications and the security threat they pose. The survey showed the following organizational concerns:

- Their biggest concern is the security of the device and how it can help protect the apps and app data available on the device.
- They most consistently rely on VPN/access controls, a tried and tested technology for company-issued mobile device access, to protect company applications from rogue bring your own device (BYOD) access.
- The largest percentage (nearly 60%) feel security checks throughout the software development lifecycle (SDLC) are important, but a smaller number is actually practicing these processes.
- Wrapping it all together under management of applications, organizations are having the most difficulty achieving the level of unified access they need to support their policies, including the SDLC.
This report covers these and other trends in more detail in the following pages. Nearly 900 people started the survey, with more than 600 answering the first 11 questions on awareness and practices. When asked if they conducted application development, 253 indicated they did and were routed to another set of questions. That suggests a strong set of developers took the survey; but the majority, being from enterprise organizations that own and manage apps, don't develop them. The survey was held online for six weeks during February and March 2013. Respondents came from a wide range of industries. The largest number of respondents came from highly regulated sectors such as government (18%), followed closely by financial services and the "Other" category, both of which had slightly less than 16% of the total. Government and financial services were similarly represented in our first two surveys, suggesting that mobile application security is becoming a priority for these groups. Figure 1 provides a breakdown of the survey respondents.

**Figure 1. Industries Represented**

Respondents from multinational organizations were our largest group (26%). This might indicate that these organizations are seeing and adopting more policies around applications accessed under BYOD practices. But smaller organizations with 100–499 employees were also well represented (15%), hinting that even smaller organizations are beginning to understand the importance of mobile application security (see Figure 2). Echoing the results from the previous two mobile surveys conducted by SANS in 2012, most of the respondents were security analysts and network administrators, lending a security-in-depth perspective to the survey. When you combine the IT manager/CIO column with the security manager/CSO column, you see that a large representation of senior managers (48%) also took this survey. The roles represented in survey responses are shown in Figure 3.
Interestingly, some respondents (41 staff members and 19 consultants) indicated that they hold the new title of "mobility director." The majority of respondents were staff members rather than consultants. While consultants can and do provide specialized skills and help, staff members often have a better and more long-term focus on the mobility issues an organization is experiencing. Their participation in the survey provides us with a clearer view of the needs and concerns of organizations from all perspectives. For example, developers and software engineers provide a different perspective (perhaps wanting to build in security and encryption during app development) than those holding administrative and security jobs, who see security of applications as something to handle after the fact. The latter group (those who manage applications) would be more inclined to add endpoint security, data encryption and application-wrapping security capabilities to protect against intrusion and leakage through mobile applications. Our goal with this survey was to go beyond the extent of personal mobile device use and explore what people do on those devices. It is clear that employees in most commercial organizations use both BYOD and corporate devices to access work applications, as shown in Figure 4. In our original survey, 37% of respondents did not allow BYOD, so this shows an increase in BYOD usage since last year. In this survey, only a small percentage indicated that 100% of their users had BYOD access to work applications. The largest group (21%) actually represented the smallest number of BYOD users accessing work applications. Many respondents commented that they did not allow personal devices. Consistent with our previous two surveys, there were lower levels of BYOD usage in the government sector than in the private sector.

**What Apps They're Using**

But what are people doing with their personal devices?
The answer to this question will allow us to understand the usage and, therefore, the risks organizations face and the precautions they might undertake. For example, if an organization allows remote access to its network and the connecting device is compromised, the attacker has access to the network and any unencrypted apps. The compromised device can provide remote access, becoming a pivot point from which attacks can be launched. And the risk is greater when access is granted to business applications such as ERP and collaborative programs. As shown in Figure 5, not surprisingly, the most common applications accessed remotely are communications and collaboration (i.e., e-mail) and general Internet access (i.e., browser, file sharing). Approximately 26% of the respondents say their organizations also allow access to business systems, and 33% allow access to productivity applications. Given the large response for network access/VPN (44%), they are likely doing so through secured connections. This implies that these organizations either believe that the device is a safe risk for that level of data access or that they accept the related risks. These risks can be significant, given the media's attempts to dub the past few years the "year of mobile malware." Another 5% are accessing control system applications from mobile and personal devices, and another 8% are accessing field service applications, which can also be attached directly or indirectly to control systems, providing another pathway into critical control systems. Our hope is that those devices, their operators and employers are 100% aware of the risk and have layered their security or further secured the mobile applications accordingly.
According to a SANS survey on SCADA security practices published in February 2013, 70% of nearly 700 respondents feel that there is a high level of cyber risk to their systems, yet lowering risk was very low on their list of priorities, and only 30% have strong security requirements for their control system procurement processes.³

³ www.sans.org/reading_room/analysts_program/sans_survey_scada_2013.pdf

**What Scares Them**

So, what are the biggest concerns an organization faces with regard to mobile security? Figure 6 reveals that most organizations are worried first about their data, which is, of course, accessed from the mobile app, and then about the device as a launch point for attacks. These concerns make sense, given that these two categories cover most of the app security risk around mobile devices. It is also interesting that secured access to applications is a concern for 53% of our respondents, given that 44% of them are granting BYOD access via VPN services, as shown in the previous figure (Figure 5). The rest of the categories checked indicate that at least 30% to 40% are also thinking about the network infrastructure, secure application components and unauthorized BYOD sprawl (managing the proliferation of devices). That number could be higher, and we suspect it will be if we conduct this survey again next year.

In addition to studying how respondents are protecting the applications being used and accessed by BYOD, this survey was also designed to find out how they manage the applications they developed themselves.

**What They're Developing**

When asked what type of mobile applications they developed, about two-thirds of the survey base skipped to the end because they don't do development. The 253 respondents who did answer this question are primarily developing web applications and updates (68%), and 32% are developing line-of-business (LOB) applications accessed by mobile/BYOD devices.
Only 20% of those who answered this question are developing mobile apps for commercial use, with about the same share developing their own cloud applications for mobile users, as shown in Figure 7. Given the organizations' focus on web development, augmented by cloud development, it is clear that web-based interfaces will be used to replace manual processes to meet the new demands of mobile access. And, with 32% of respondents building "internal" applications, new risks of data loss and intrusion arise. Organizations deploying their applications on mobile devices will have to develop plans to mitigate these new vectors.

**What are your mobile application development focus areas?**

*Figure 7. Mobile Applications Being Developed*

**Choice of Platforms**

Because of their popularity today, iOS (87%) and Android (74%) lead the pack as the platforms organizations develop for. As shown in Figure 8, development for Windows 8 is comparatively low (30%). This was surprising, given the release of Windows 8 tablets and mobile devices last year. Interestingly, 31% of respondents develop for the BlackBerry platform, which does not seem to have disappeared from the business landscape, despite the growth of other smart device adoption.

*Figure 8. Popularity of Development for Mobile Platforms*

**Their Priorities**

In the survey, respondents were asked to rank their objectives, with 1 being the most important and 9 the least important. It is rewarding to note that they rate the security of the data as the most important objective during development (average rating of 3.38). Performance (3.80) ranks only marginally ahead of security of the application (3.86), which is promising. See Table 1 for the ranking of their priorities.
<table> <thead> <tr> <th>Objective</th> <th>Ranking</th> </tr> </thead> <tbody> <tr> <td>Security of the data</td> <td>3.38</td> </tr> <tr> <td>Performance</td> <td>3.80</td> </tr> <tr> <td>Security of the application</td> <td>3.86</td> </tr> <tr> <td>Reliability</td> <td>4.16</td> </tr> <tr> <td>Usability/user interface</td> <td>4.59</td> </tr> <tr> <td>Security of the network/enterprise</td> <td>4.75</td> </tr> <tr> <td>Rapid time to market</td> <td>5.11</td> </tr> <tr> <td>Scalability</td> <td>5.72</td> </tr> <tr> <td>Other</td> <td>8.64</td> </tr> </tbody> </table>

*Table 1. Top Development Objectives*

Not surprisingly, performance is still seen as more important than application security, with usability and reliability not far behind. This may reflect both the importance organizations place on performance of all applications and a level of management that does not yet appreciate the consequences of insecure applications. Next, we need to look at what organizations are doing to ensure the security of their systems, data and users. Figure 9 lists several processes that organizations are implementing to different degrees. The security practices are fairly evenly split among the various phases of the software development lifecycle (SDLC), with a secure lifecycle being the most frequently chosen among them overall. No more than 50% chose any other practice; but those developers, or the organizations that support them, are evenly focused on dealing with security issues during coding and development. Specifically, when asked about the use of vulnerability scans, almost 34% of the respondents either do not perform vulnerability scans of their applications at all or perform them infrequently.
Continuous monitoring for possible attack vectors is a critical component of the Critical Security Controls (CSCs), particularly Critical Control 2 (inventory of authorized/unauthorized software), Critical Control 4 (continuous vulnerability assessment and remediation) and Critical Control 6 (application software security). Figure 10 shows how frequently companies conduct vulnerability scans. The apparent lack of application-level scanning implies that the organizations are depending on source code reviews and threat assessments to protect their applications and data. We often see organizations depend on their authentication system to verify that a user actually is that user, but tools like Firesheep are able to hijack the mobile application’s session. (Firesheep was originally an attack against Facebook, but it has been improved to add support for multiple mobile applications.) Flaws found in applications after production are more expensive to repair, and if left unattended (as discovered in so many penetration tests), are even more expensive in terms of loss. Application management is part of the SDLC. Even for organizations that don’t develop the applications being accessed by their mobile users, management of these apps is as critical as secure access and device scanning. Of the organizations responding, more than half (55%) use internal processes to handle management and services for their applications, as shown in Figure 11. Externally provided management, represented by use of third-party providers and public clouds, has become popular. Most of the respondents are using multiple approaches to securing mobile applications. The most common focuses are on securing not only the devices, but also the mobile apps and content those devices use every day. **Their Practices** Figure 12 provides a list of many of the security policies and practices surrounding corporately-owned devices and how frequently they are used. 
We hope that these policies also address BYOD devices. This multilayered approach is commendable, because it builds upon the existing security controls in the non-mobile space. In fact, mobile applications often make use of existing web applications, or use features, including encryption libraries, added to the existing applications for additional protection.

**How are these apps managed for end-users today?**

*Figure 11. Application Management Options*

**What are your security policies and practices around the deployment, use, and management of corporately-owned mobile applications?**

*Figure 12. Security Policies and Practices for Corporately-Owned Mobile Applications*

However, this overlap of systems and applications makes it imperative that organizations understand their existing controls and how they can be leveraged to protect mobile users and applications. For example, by securing devices with tools such as Mobile Device Management (MDM) or enhanced Network Access Control (NAC), organizations are also protecting against rogue applications, because both will check for unapproved or malicious applications based on their programmed policies. This is also a plea to MDM vendors to provide more integrated security services for encrypting access to business applications from MDM-protected devices.

**Management Difficulties**

There appears to be little difference in organizations' perceptions of how difficult these policies and practices are to implement. Most respondents consider implementation to have some moderate level of difficulty. Figure 13 shows the average difficulty ratings for each of the security policies and procedures, where 1 is "not particularly difficult," 2 is "difficult," and 3 is "extremely difficult." The higher the average score, the more difficult the policies and procedures are to implement.
Not surprisingly, protecting applications with strong authentication was rated comparatively easy (1.70), which is in keeping with organizational concerns (Figure 5) and controls (Figure 9). The security industry has been using such procedures for a long time with company-issued mobile devices. On the other hand, providing an identity management framework to support remote devices (2.05) and ensuring secure development of applications (2.01) were considered more difficult. Not far behind are secure development and lifecycle practices, which apply to both the development and the management of applications being accessed remotely by BYOD. As stated earlier, there needs to be much more maturity in the mobile space, given the breadth and nature of threats being aimed at mobile devices. Taken together, the results provided in Figures 12 and 13 suggest that organizations recognize and are doing the difficult work of implementing policies and practices. Moreover, they suggest that organizations should place some emphasis on developing techniques that rely on tried and true security policies to secure mobile applications. In addition, it seems clear that additional focus is required on providing adequate security review during the SDLC.

Conclusion

As organizations and their staffs continue to rush down the path of implementing and using mobile devices and applications, security teams need to keep their focus on these implementations. This is becoming both easier and harder as time goes by. The rush to implement or build mobile applications is adding to the complexity security and IT staffs have to handle. This means that responsible staff members have to stay on top of the latest threats, and of the controls available to attackers and defenders. Proactive security during development and deployment should become a best practice.
The following suggestions can help organizations accomplish their security goals:

- Ensure and adjust policies to include the devices the organization allows to access network resources. For example, institute a policy that describes the types of mobile devices allowed to access the network.
- Evaluate the applications, data and access the mobile devices use to determine which needs must be addressed.
- Consider the inclusion of mobile app security encryption libraries during development, or apply them to third-party apps being used for larger-scale corporate deployment.
- Perform security assessments of applications being built or developed. Start even before application development begins, and continue assessing applications in production, as per the CSCs.
- Assess the mobile devices and their supporting architecture as often as possible, keeping in mind that many of the devices may be owned by employees.
- Continue to support users with education and security updates.

These are broad-stroke mechanisms to consider for protecting the network, resources and data on endpoints from malicious applications. Architectures deployed to manage this new risk need to be capable of expanding to new types of devices and applications, because users will continue to make more demands in the future.

Kevin Johnson is a senior security consultant with Secure Ideas. Kevin has a long history in the IT field, including system administration, network architecture and application development. He has been involved in building incident response and forensic teams, architecting security solutions for large enterprises and penetration testing everything from government agencies to Fortune 100 companies. Kevin is the author of three classes: SEC542: Web Application Penetration Testing and Ethical Hacking; SEC642: Advanced Web Application Penetration Testing; and SEC571: Mobile Device Security.
In addition, he is an instructor and author for the SANS Institute, a faculty member at IANS and a contributing blogger at TheMobilityHub. James Jardine is a principal security consultant with Secure Ideas, LLC. James has over 12 years of software development experience, with over half of that focusing on application security. During his long development history, he has written large enterprise applications, thick clients and mobile applications. He has held many roles, including senior developer, software architect and application security expert. James is also involved in the open source community. He runs a number of open source projects, including WCSA (a security analyzer for web.config files) and EventValMod (a tool to modify event validation values in .Net). He is also a contributor to the Laudanum project (a collection of injectable web payloads). In addition, James is an instructor and author for the SANS Institute. He is also a contributing blogger for the Secure Ideas blog, the Jardine Software blog, and the SANS Appsec blog.
[REMOVED]
{"Source-Url": "https://hal.laas.fr/hal-02511881/document", "len_cl100k_base": 6917, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 34922, "total-output-tokens": 8765, "length": "2e12", "weborganizer": {"__label__adult": 0.000335693359375, "__label__art_design": 0.000453948974609375, "__label__crime_law": 0.0003628730773925781, "__label__education_jobs": 0.0007829666137695312, "__label__entertainment": 8.696317672729492e-05, "__label__fashion_beauty": 0.00017964839935302734, "__label__finance_business": 0.0003190040588378906, "__label__food_dining": 0.0003554821014404297, "__label__games": 0.0006513595581054688, "__label__hardware": 0.0009822845458984375, "__label__health": 0.0005860328674316406, "__label__history": 0.00032591819763183594, "__label__home_hobbies": 0.00014650821685791016, "__label__industrial": 0.0007658004760742188, "__label__literature": 0.00026702880859375, "__label__politics": 0.0003142356872558594, "__label__religion": 0.0005955696105957031, "__label__science_tech": 0.0911865234375, "__label__social_life": 0.0001271963119506836, "__label__software": 0.01076507568359375, "__label__software_dev": 0.88916015625, "__label__sports_fitness": 0.0003848075866699219, "__label__transportation": 0.0007119178771972656, "__label__travel": 0.0002188682556152344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30772, 0.06763]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30772, 0.27043]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30772, 0.8795]], "google_gemma-3-12b-it_contains_pii": [[0, 930, false], [930, 3309, null], [3309, 6457, null], [6457, 9894, null], [9894, 11789, null], [11789, 15015, null], [15015, 17170, null], [17170, 20503, null], [20503, 23920, null], [23920, 27062, null], [27062, 29831, null], [29831, 30772, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 930, true], [930, 3309, null], [3309, 6457, null], [6457, 9894, null], [9894, 11789, null], [11789, 15015, null], [15015, 17170, null], [17170, 20503, null], [20503, 23920, null], [23920, 27062, null], [27062, 29831, null], [29831, 30772, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30772, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30772, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30772, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30772, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30772, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30772, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30772, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30772, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30772, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30772, null]], "pdf_page_numbers": [[0, 930, 1], [930, 3309, 2], [3309, 6457, 3], [6457, 9894, 4], [9894, 11789, 5], [11789, 15015, 6], [15015, 17170, 7], [17170, 20503, 8], [20503, 23920, 9], [23920, 27062, 10], [27062, 29831, 11], [29831, 30772, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30772, 0.18797]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
6dc43dc78b3b16865d99e33c30b2e0ec8179ee10
1 DNS in the real world

In this problem, you will learn more about DNS using the UNIX utility `dig`. You may also find it useful to consult RFC 1035 for some of the questions.

There are two types of DNS queries, recursive and iterative. When a DNS resolver issues a recursive query to a name server, the server attempts to resolve the name completely, returning a full answer (or an error) by following the naming hierarchy all the way to the authoritative name server. Upon receiving an iterative query, the name server may simply give a referral to another name server for the resolver to contact next. A resolver sets the RD (recursion desired) bit in the DNS query packet to indicate that it would like the query resolved recursively. Not all servers support recursive queries from arbitrary resolvers.

1. What is `www.nyu.edu`'s canonical name? What are its authoritative name servers? Based on `dig`'s output, can you tell which DNS server answers this query? Is it a recursive query?

Answer: NYU's canonical name is WEB.nyu.edu. NYU's authoritative name servers are NS1.nyu.edu, NS2.nyu.edu, NYU-NS.BERKELEY.edu, and LAPIETRA.NYU.FLORENCE.IT. Which DNS server answers the query depends on where you issue it; for example, if you run the query at home, the DNS server your ISP assigned to you will answer it. Yes, it is a recursive query.

2. Instead of using your default name server, issue the query for `www.nyu.edu` to one of the root DNS servers (e.g. `a.root-servers.net`). Does this server accept recursive queries from you? If not, perform iterative queries yourself using `dig`, following the chain of referrals to obtain `www.nyu.edu`'s address. What is the sequence of name servers that you queried? Which domain is each name server responsible for?

Answer: No, it does not accept recursive queries. An example query chain:

a.root-servers.net (root)
H3.NSTLD.COM (edu)
NS2.nyu.edu (nyu.edu)

3.
Use multiple recursive DNS servers located in different geographical regions\(^1\) as well as your default name server to resolve `www.google.com`. Attach your `dig` output. In what geographical regions do those IP addresses reside? How quickly do the corresponding A and NS records expire? Why do A records expire so soon? Compare this setup using DNS with some alternative way of achieving the same goal.

(\(^1\) Here are two name servers that answer recursive queries: ns2.cna.ne.jp, ns2.suomen2g.fi.)

Answer: Many students misunderstood this question. It asks for the geographical locations of the web servers returned by different DNS servers, not the geographical regions of the DNS servers themselves. One common way to map an IP address to its geographic location is to use traceroute to obtain the domain names of the routers along the path to the address, in the hope of getting location hints from the routers' domain names. Unfortunately, traceroute did not work for Google's IP addresses, as most routers along the path do not have domain names with helpful location hints. Instead, I inferred each IP address's geographic location from RTTs obtained with ping. This can be quite accurate if you can ping from multiple machines in different geographical areas; I used a couple of PlanetLab machines to do so.
Here is what I got:

<table>
<thead>
<tr> <th></th> <th>local DNS server</th> <th>ns2.cna.ne.jp</th> <th>ns2.suomen2g.fi</th> </tr>
</thead>
<tbody>
<tr> <td>IP addresses of Google servers</td> <td>64.233.169.147<br>64.233.169.99<br>64.233.169.103<br>64.233.169.104</td> <td>66.249.89.104<br>66.249.89.99<br>66.249.89.147</td> <td>66.249.91.103<br>66.249.91.104<br>66.249.91.147<br>66.249.91.99</td> </tr>
<tr> <td>Location of ping source</td> <td>nyu</td> <td>Japan</td> <td>U.K.</td> </tr>
<tr> <td>RTT</td> <td>8 ms</td> <td>9 ms</td> <td>12 ms</td> </tr>
<tr> <td>Inferred location</td> <td>east coast</td> <td>east Asia</td> <td>west Europe</td> </tr>
</tbody>
</table>

As the table above shows, queries to DNS servers in different geographical regions return Google servers in different geographical regions. This is because Google uses DNS to perform server selection, so that clients can access a geographically close-by server with lower latency.

How quickly do the corresponding A and NS records expire? Why do A records expire so soon?

Answer: The TTL is up to 300 seconds for A records and 86400 seconds for NS records. A records expire relatively quickly to allow load balancing among servers: when a cached web server becomes heavily loaded, the DNS server will query again after 300 seconds and hopefully obtain a more lightly loaded server. An alternative is to load-balance at the application level: a load-balancer server receives all requests and redirects them to different application (e.g. web) servers according to the servers' load.

4. Alice works at a search engine startup whose main competitor is Google. She would like to crush her competitor in a "non-traditional" way by messing with DNS servers.
Recalling from her networking class that DNS servers cache A and NS records from DNS replies and referrals, Alice realizes she can configure her own DNS server to return incorrect results for arbitrary domains. If a resolver caches Alice's malicious results, it will return bad results to future DNS queries. Help Alice complete her master plan to hijack Google's domain name by writing down exactly what Alice's name server returns upon a DNS query. What must a robust DNS server implementation do to counter this attack?

Answer: Here is an attempt to hijack Google's domain name. When Alice's DNS server receives a query for www.alicestartup.com, it returns the following malicious results:

www.alicestartup.com  long-TTL  IN  NS  ns1.google.com
google.com            long-TTL  IN  NS  ns1.google.com
ns1.google.com        long-TTL  IN  A   w.x.y.z  (Alice's DNS server)

If a DNS server blindly caches everything, it will redirect all future queries for www.google.com to Alice's name server (w.x.y.z). A robust DNS server implementation should be less trustful of results returned by other DNS servers and cache only information that is directly relevant to the queried domain. In the above example, since google.com is not a subdomain of alicestartup.com, a correct DNS server implementation should ignore all information related to google.com in the results.

Figure 1: The architecture of Alice's mesh network. Round circles denote WiFi-based mesh nodes, with dotted lines representing wireless connectivity among them. Each mesh node has a mesh interface. Each gateway mesh node, denoted by a black circle, has an additional wired interface to a DSL or cable modem link for connecting to the rest of the Internet. Each mesh node also acts as an 802.11 base station with which a client node such as a user's laptop associates. Unlike mesh nodes, Alice does not control clients' software configuration and would (ideally) not want to install any custom software on them.
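The "cache only in-bailiwick records" rule from the answer above can be sketched as follows. This is a minimal illustration, not any real server's implementation; the record tuples are the hypothetical ones from Alice's example.

```python
# A minimal sketch of the bailiwick rule: a careful resolver caches a
# record from a reply only if the record's name falls inside the zone
# the queried server is responsible for.

def in_bailiwick(record_name: str, zone: str) -> bool:
    """True if record_name equals zone or is a subdomain of it."""
    r = record_name.lower().rstrip(".").split(".")
    z = zone.lower().rstrip(".").split(".")
    # A subdomain's label sequence ends with the zone's labels.
    return r[-len(z):] == z

def cacheable(records, zone):
    """Filter a reply down to the records safe to cache."""
    return [rec for rec in records if in_bailiwick(rec[0], zone)]

# Alice's malicious reply from the answer above:
malicious = [
    ("www.alicestartup.com", "NS", "ns1.google.com"),
    ("google.com",           "NS", "ns1.google.com"),  # out of bailiwick
    ("ns1.google.com",       "A",  "w.x.y.z"),         # out of bailiwick
]
# Only the first record survives the check:
print(cacheable(malicious, "alicestartup.com"))
```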
2 Setting up an urban mesh network

Alice got fired from her last job at the search engine company for being evil, and she has recently joined an effort to provide a community-supported WiFi mesh network in New York City. Figure 1 explains the basic setup of her network. In this problem, you will help Alice decide on a good addressing scheme for her mesh network. Each mesh gateway's wired interface is already assigned an IP address (via DHCP or statically) by its ISP, as shown in Figure 1. Alice's mesh software runs on each mesh node and automatically builds a routing table based on the mesh nodes' IP addresses; each mesh node therefore knows how to route a packet to any IP address belonging to another mesh interface. Alice thus needs an addressing scheme that assigns an IP address to each mesh interface (both mesh nodes and mesh gateways have a mesh interface). Explain how you would assign an IP address to a mesh interface. Additionally, describe how to assign an IP address to a client laptop. (Remember that you cannot expect to change a client's software.)

Answer: A solution submitted by many students is to designate one mesh node as a DHCP server that assigns IP addresses to the other mesh nodes. However, this does not work: mesh nodes cannot route to the DHCP server unless they already have IP addresses, because Alice's routing software builds routing tables from nodes' IP addresses. One working scheme is to manually assign an IP address to each mesh node and laptop, but this is obviously labor intensive. A better scheme is to let each mesh node randomly assign an IP address to itself. Since the assigned IP address space (e.g. 10.*.*.*) is fairly large, it is highly unlikely that two independently chosen random IP addresses happen to be the same. The exact probability can be calculated by observing that this problem is a variant of the famous "birthday paradox" problem.
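The hash-based self-assignment and the birthday-bound calculation can be sketched as follows. This is a minimal sketch, assuming SHA-256 as the collision-resistant hash and the 24-bit 10.x.y.z host space; the MAC address is made up.

```python
import hashlib

# Sketch of the self-assignment scheme: derive 24 pseudo-random bits
# from a collision-resistant hash of the node's MAC address and use
# them as the host part of a 10.x.y.z address, so that the address is
# stable across reboots (same MAC -> same IP).

def mesh_ip_from_mac(mac: str) -> str:
    digest = hashlib.sha256(mac.lower().encode()).digest()
    host = int.from_bytes(digest[:3], "big")  # 24 bits of the hash
    return f"10.{(host >> 16) & 0xFF}.{(host >> 8) & 0xFF}.{host & 0xFF}"

def collision_probability(n_nodes: int, bits: int = 24) -> float:
    """Exact birthday bound: 1 - prod_{i=0}^{n-1} (2^bits - i) / 2^bits."""
    space = 2 ** bits
    p_none = 1.0
    for i in range(n_nodes):
        p_none *= (space - i) / space
    return 1.0 - p_none

# Same MAC always yields the same address:
print(mesh_ip_from_mac("00:1b:44:11:3a:b7"))
# A few hundred nodes collide with negligible probability:
print(f"{collision_probability(200):.6f}")  # well under 1%
```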
Specifically, when there are no more than a few hundred nodes ($\ll \sqrt{2^{24}} = 4096$), the collision probability is negligible. To generate random numbers in a deterministic fashion, we can use a collision-resistant hash of a node's MAC address, so that a node's IP address remains the same across reboots. A number of students suggested simply using the last 24 bits of a node's MAC address as its IP address. This is not desirable because there is no guarantee that those bits are truly random. Once each mesh node has an IP address, it can act as a DHCP and NAT server, assigning IP addresses to laptop clients from another private address space (e.g. 192.168.*.*).

Suppose the client laptop sends an IP packet destined for Google (216.239.51.104) (Figure 1). Describe the source and destination IP address fields of the packet as it traverses the sequence of path segments h1, h2, h3, h4.

Answer: The destination field never changes along the full path, while the source field is changed by the NAT servers along the way. Suppose the mesh node that the laptop associates with has IP address 10.1.1.23 and assigns the laptop the private address 192.168.1.10, and assume the gateway mesh node has mesh IP address 10.0.0.13 and public IP address 216.27.133.9.
Then the source and destination IP address fields on each path segment, for a packet travelling from the laptop to the Internet host (216.239.51.104), are:

<table>
<thead>
<tr> <th>Segment</th> <th>Source IP</th> <th>Destination IP</th> </tr>
</thead>
<tbody>
<tr> <td>h1</td> <td>192.168.1.10</td> <td>216.239.51.104</td> </tr>
<tr> <td>h2</td> <td>10.0.0.13</td> <td>216.239.51.104</td> </tr>
<tr> <td>h3</td> <td>10.0.0.13</td> <td>216.239.51.104</td> </tr>
<tr> <td>h4</td> <td>216.27.133.9</td> <td>216.239.51.104</td> </tr>
</tbody>
</table>

The source/destination fields for a packet travelling in the reverse direction, from 216.239.51.104 to the laptop, are:

<table>
<thead>
<tr> <th>Segment</th> <th>Source IP</th> <th>Destination IP</th> </tr>
</thead>
<tbody>
<tr> <td>h4</td> <td>216.239.51.104</td> <td>216.27.133.9</td> </tr>
<tr> <td>h3</td> <td>216.239.51.104</td> <td>10.0.0.13</td> </tr>
<tr> <td>h2</td> <td>216.239.51.104</td> <td>10.0.0.13</td> </tr>
<tr> <td>h1</td> <td>216.239.51.104</td> <td>192.168.1.10</td> </tr>
</tbody>
</table>

Does your addressing scheme support seamless mobility? (That is, can the client laptop keep its ongoing connections while moving its association from one mesh node to another?) If not, can you sketch a different addressing scheme that does?

Answer: The above scheme does not support seamless mobility. Packets intended for the laptop will still be routed to the original mesh gateway (10.0.0.13) after the laptop has moved to another mesh node. One way to support seamless mobility is to assign each laptop an IP address in the same subnet (10.*.*.*) as the mesh nodes and rely on the underlying routing protocol to automatically update routes as laptops roam. However, a big disadvantage of this scheme is that client laptops' software must be updated to run Alice's mesh routing protocol. Another proposal uses a scheme inspired by Mobile IP. Alice could (manually) assign a mesh node (home agent) to each laptop. When a laptop associates with a new mesh node (foreign agent), that mesh node informs the laptop's home agent of the laptop's new location. Packets from the laptop are then routed directly through its foreign agent to Internet hosts.
However, packets destined to the laptop are always routed to its home agent first, which then forwards them to the laptop's current foreign agent.

3 TCP checksum

If you look at TCP headers carefully in any standard textbook, you will notice that TCP has a checksum field that covers parts of the IP header (the source address, destination address, and length fields).

1. Why does the TCP checksum include parts of the IP header when IP already computes a separate checksum covering its own header?

Answer: The TCP checksum tries to ensure integrity on an end-to-end basis, whereas the IP checksum only ensures hop-to-hop integrity. The IP header may be modified, and a new checksum recalculated, at each IP forwarding hop; for example, routers decrease each packet's TTL at every hop and therefore recompute the IP checksum. If a router happens to corrupt the source IP address during TTL processing, the corruption cannot be detected with the IP checksum, since that checksum is recalculated. The TCP checksum, however, will detect the corruption, and the corrupted packets will be discarded.

2. When a TCP receiver detects an incorrect checksum, it can either (a) discard the segment and send a cumulative ACK for the expected in-sequence byte, or (b) discard the segment and do nothing else. Which action is preferable? Why?

Answer: It is preferable to discard the segment and do nothing else. When a TCP receiver detects an incorrect checksum, it has no way to decide whether the source address is correct, and sending an error message to the wrong source makes no sense. Since senders eventually retransmit unacknowledged packets, the receiver can simply drop corrupted packets.

4 Designing a transport protocol for RPCs

Remote Procedure Call (RPC) is a popular paradigm for programming distributed systems. (It is also sometimes called Remote Method Invocation, as in Java.)
RPC emulates the semantics of a local procedure call, in which a caller makes a call into a procedure and blocks until the call returns. It is implemented using a request/reply message-passing paradigm between an RPC client and server (see Figure 2). The pseudocode gives an example in which a client writes a file to the server using the `writefile` RPC. In this problem, you will design a simple protocol for transporting RPC messages between the client and server using the standard UDP datagram socket interface.

We start with a design called SimpleRPC. In SimpleRPC, each UDP message consists of an RPC header with three fields: the message type (indicating whether the message is a request or a reply), a unique identifier (UID), and a procedure identifier. The RPC data contains the marshalled procedure arguments or return values. For each RPC request, the client generates a new UID (e.g. by incrementing a counter), suspends the current running thread, and waits for a corresponding reply with the same UID from the server. If a reply arrives with a UID for which a blocked thread is waiting, the client resumes the execution of that thread. (If a reply arrives with a UID that no thread is waiting for, the client simply discards it.) If no matching RPC reply arrives within 20 ms, the client retransmits the request. The RPC server is completely stateless: it simply invokes the desired function based on the procedure identifier for each received RPC request and sends back the corresponding reply.

1. Explain the significance of the fixed timeout threshold of 20 ms. Under what deployment scenarios do you expect it to work well? Alternatively, under what circumstances does a 20 ms fixed timeout become problematic?

Answer: The fixed timeout works well when the client-server RTT plus the server computation time is always less than 20 ms. A typical such scenario is a LAN (< 1 ms RTT) with fast RPC procedures (e.g. reading or writing 32K blocks from a lightly loaded file server).

2.
An RPC system possesses at-most-once semantics if it guarantees that no procedure is executed more than once at the server as a result of the same RPC invocation. Is SimpleRPC at-most-once? If not, do you think this affects the correctness of all applications using SimpleRPC? Give some concrete examples. For example, does duplicate execution affect the procedure `writefile` in the pseudocode?

Answer: SimpleRPC is not at-most-once. For example, if the server's RPC reply is lost, the client will re-send the same RPC request, causing the server to execute the same request twice. Duplicate execution of `writefile` does not affect correctness: the offset argument in the pseudocode causes any duplicate execution to overwrite the same range of the file with the same contents. However, duplicate execution of other kinds of procedures does cause correctness problems, e.g. a procedure that increments a counter or one that appends to a file.

Someone suggests you add a small buffer at the server to remember the UIDs and corresponding results of recently executed RPCs. If an RPC request arrives with a UID already present in this buffer, the server simply replies with the corresponding saved result. Does this solve the problem of potential duplicate execution of RPCs? If not, give a design that guarantees at-most-once execution of RPCs. What about a design guaranteeing exactly-once? (In addition to message losses, you also need to consider untimely server or client crashes.)

Answer: Caching some UIDs does not solve the problem of duplicate RPC execution, because the server has only a finite buffer and thus cannot remember all previously executed RPC requests. A simple design guaranteeing at-most-once is one that never retransmits RPC requests. Obviously that is not very interesting, as what many applications really need is exactly-once execution.
One could imagine some fixes to SimpleRPC for exactly-once execution. For example, we could require each client to send an RPC reply-ack message upon receiving an RPC reply, allowing the server to remove acknowledged UIDs from its cache; together with some basic flow-control mechanisms, this ensures clients do not overwhelm the server's finite buffer. Unfortunately, if the server or client crashes unexpectedly, it will forget previously executed RPC requests. Thus, unless one logs each RPC execution persistently to disk, there is no way to guarantee exactly-once execution across machine failures. As you can see, guaranteeing exactly-once execution can be quite expensive, and not all applications need such semantics from the RPC system; it may be better to leave such strong semantics to the individual applications themselves.

3. The RPC arguments and results might be arbitrarily large. In order to avoid extra design and implementation effort, you propose to rely on IP fragmentation to deal with big RPC requests/replies. Why might this not be a good idea?

Answer: Here is a concrete example of why IP fragmentation is bad. If a large message is fragmented into 100 IP packets, the loss of any one of them forces the sender to retransmit the entire set of 100 fragments. If each fragment is lost with probability 1%, the retransmission probability reaches $1 - (1 - 1\%)^{100} = 63.4\%$.

4. You start to get ambitious and want to use SimpleRPC over the wide-area network. Apart from having to change the timeout threshold, you realize that you will also have to incorporate congestion control. Why is it okay to overlook congestion control on a local-area network?

Answer: A typical local-area network has only a few machines, each with a 10/100 Mbps network interface card, and a switching capacity of 100/1000 Mbps.
Furthermore, the RPC system can probably handle less than 10 Mbps of traffic (depending on how efficient your implementation is), so you are unlikely to see much congestion on a local-area network. (Of course, this assumes a typical LAN setup; with a huge number of machines, it is a different story.) Things are very different on a WAN. Without congestion control, your client can push out 10/100 Mbps from its network interface card, most of which will be dropped immediately at your access link (a T1 link has only 1.5 Mbps) before even traveling across the wide-area Internet and encountering congested links with even less available bandwidth.

5. You decide it is too much hassle to implement TCP-like functionality such as careful RTO calculation and congestion control in SimpleRPC, and instead implement the RPC system on top of TCP rather than UDP. Why might TCP-based RPC not perform well?

Answer: TCP imposes in-order delivery. If one RPC request is lost, TCP will not deliver any of the subsequently received RPC requests to the server until a retransmission of the lost request arrives. This is undesirable: the server sits idle, wasting CPU and disk resources, while it could be processing the other requests it has already received. This situation is referred to as head-of-line blocking.

### Summary

#### A Statistics of the Students' Grades

<table>
<thead>
<tr> <th>Grades</th> <th>Number of students</th> </tr>
</thead>
<tbody>
<tr> <td>30</td> <td>1</td> </tr>
<tr> <td>25~29</td> <td>6</td> </tr>
<tr> <td>20~24</td> <td>7</td> </tr>
<tr> <td>&lt;20</td> <td>3</td> </tr>
</tbody>
</table>
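As a side note, the retransmission-probability arithmetic in problem 4.3 is easy to check numerically (a small sketch, not part of the original key):

```python
# Check of the fragmentation argument in problem 4.3: a message split
# into n fragments must be retransmitted unless every fragment arrives,
# so with independent per-fragment loss probability p the retransmission
# probability is 1 - (1 - p)^n.

def retransmit_probability(n_fragments: int, p_loss: float) -> float:
    return 1.0 - (1.0 - p_loss) ** n_fragments

print(f"{retransmit_probability(100, 0.01):.3f}")  # 0.634, matching the key
```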
{"Source-Url": "http://www.news.cs.nyu.edu/~jinyang/fa07/ps1_key.pdf", "len_cl100k_base": 4941, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 20636, "total-output-tokens": 5274, "length": "2e12", "weborganizer": {"__label__adult": 0.000484466552734375, "__label__art_design": 0.00038743019104003906, "__label__crime_law": 0.0007381439208984375, "__label__education_jobs": 0.0220184326171875, "__label__entertainment": 0.00019669532775878904, "__label__fashion_beauty": 0.0002560615539550781, "__label__finance_business": 0.0006437301635742188, "__label__food_dining": 0.0005764961242675781, "__label__games": 0.0010814666748046875, "__label__hardware": 0.005443572998046875, "__label__health": 0.0009222030639648438, "__label__history": 0.0007386207580566406, "__label__home_hobbies": 0.00022268295288085935, "__label__industrial": 0.000812530517578125, "__label__literature": 0.0008182525634765625, "__label__politics": 0.0003426074981689453, "__label__religion": 0.0006561279296875, "__label__science_tech": 0.398681640625, "__label__social_life": 0.00042510032653808594, "__label__software": 0.08160400390625, "__label__software_dev": 0.48095703125, "__label__sports_fitness": 0.0003590583801269531, "__label__transportation": 0.0013532638549804688, "__label__travel": 0.0003757476806640625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21413, 0.05371]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21413, 0.43449]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21413, 0.90785]], "google_gemma-3-12b-it_contains_pii": [[0, 2357, false], [2357, 5740, null], [5740, 6562, null], [6562, 8826, null], [8826, 12330, null], [12330, 13894, null], [13894, 15409, null], [15409, 19282, null], [19282, 21208, null], [21208, 21413, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2357, true], [2357, 
5740, null], [5740, 6562, null], [6562, 8826, null], [8826, 12330, null], [12330, 13894, null], [13894, 15409, null], [15409, 19282, null], [19282, 21208, null], [21208, 21413, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21413, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21413, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21413, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21413, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 21413, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21413, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21413, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21413, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21413, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21413, null]], "pdf_page_numbers": [[0, 2357, 1], [2357, 5740, 2], [5740, 6562, 3], [6562, 8826, 4], [8826, 12330, 5], [12330, 13894, 6], [13894, 15409, 7], [15409, 19282, 8], [19282, 21208, 9], [21208, 21413, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21413, 0.25472]]}
olmocr_science_pdfs
2024-11-28
2024-11-28