Conferencing protocols and Petri net analysis
E. ANTONIDAKIS
Department of Electronics, Technological Educational Institute of Crete, GREECE
Abstract: During a computer conference, users desire to view the same computer generated information and displays. This is often accomplished by communicating this information over wideband links to provide real-time displays. A concept was developed for communicating inputs (e.g., keystrokes) instead of outputs (e.g., displays). Techniques were designed to allow two computers to execute the same application program, get the same inputs at the same relative point in their execution, and produce the same outputs. Models were created to describe and analyze the way in which inputs are entered into executing programs. Protocols were written to distribute and synchronize the inputs between the computers. Petri nets were used to validate the synchronization of the inputs. Timing analysis was performed to guarantee simultaneity in execution.
Key-Words: computer conferencing, protocols, simultaneous execution, program synchronization, Petri nets
1. Introduction
Computer users at geographically separated sites may desire to view the same computer generated information and displays simultaneously, and thus have a conference. This is often accomplished by communicating this information over wideband links to provide real-time displays. This paper presents a new approach, which provides the same services with greatly reduced communication requirements. The philosophy is one of "Compute rather than Communicate." To some degree, the use of computer image compression techniques and those of transmitting only the changed portion of images follows this philosophy but not to a significant enough extent.
A concept was developed for communicating inputs (e.g., keystrokes) instead of outputs (e.g., displays). Techniques were designed to allow two computers to execute the same application program, get the same inputs at the same relative point in their execution, and produce the same outputs. Models were created to describe and analyze the way in which inputs are entered into executing programs. Protocols were written to distribute and synchronize the inputs between the computers. Petri nets were used to validate the synchronization of the inputs. Timing analysis was performed to guarantee simultaneity in execution.
The SPE technique is based on the creation of a Shell running at each computer and the transmission of messages between the Shells. Only a low bandwidth connection with quick response time is required between the computer systems. The Shell is constructed at each computer system, in software, and resides between the operating system and the executing program of the two computers. In this paper, Petri nets are constructed and analyzed that model the synchronization of simultaneous program execution between two computers. The interactions between the Shell, the operating system and the application program of the two computers, as well as the messages sent between them, are modeled with Petri nets. The analysis of the Petri nets indicates proper operation of the system. The analysis covers safeness, conservativeness and liveness. Programs may execute on computers of different speeds, which results in a delay in the execution of the faster computer. Timing analysis was performed to calculate the delays and the effect they have on the performance of the system.
2. The Shell and the Inputs
The Shell can be considered an extension to the operating system that handles the remote execution. It accepts all the inputs to the system, checks whether they are valid, and transmits them to one of the computers, called the Master. The Master collects all the inputs, places them in order, appends some synchronization information and distributes them to the Slave computer. The Shell is created between the application program and the operating system and between the inputs and the operating system. The Shell:
1. runs on both computers
2. gets activated only by an input interrupt and takes the action shown in 3 or 4 below.
3. accepts the inputs and hides them from the operating system. If the inputs are valid, it stores them in Temporary Input Buffers (TIB).
4. accepts all requests for inputs and distributes and synchronizes the inputs already in the TIBs, if any, one at a time, by communicating with the Shell running at the other computer.
The assumption has been made that programs requesting inputs, as well as inputs entering programs, go through the operating system. This is not a severe restriction, since most high-level programs in single-user systems and all programs in multi-user systems are written in this manner. An application program can request input in a synchronous or an asynchronous manner. The application program can either wait until an input is entered, in which case the input always enters the program at the same point in its execution and is synchronous, or proceed with its execution and periodically check for the presence of input, in which case it is asynchronous. The number of times C that the application program checks for the presence of an input (through requests to the operating system) until the input is entered designates the exact point in program execution at which the input entered, and is used for the synchronization of the asynchronous inputs between the two computer systems. The Shell updates the value of the counter C each time the application program checks for the presence of an input. The Shells running on the two computers make sure that the input is presented to the application programs on the same C count.
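The counter mechanism above can be sketched in a few lines. This is my own illustrative code, not the paper's implementation; the class and method names are hypothetical, and it shows only a single machine's Shell counting polls and buffering a key in its TIB.

```python
# Hypothetical sketch of a Shell that counts input polls (the counter C)
# and hides pressed keys from the OS in a Temporary Input Buffer (TIB).
class Shell:
    def __init__(self):
        self.c = 0       # counter C: polls since the last synchronized input
        self.tib = []    # Temporary Input Buffer

    def key_pressed(self, key):
        # The Shell intercepts the key and stores it, hiding it from the OS.
        self.tib.append(key)

    def check_for_input(self):
        # Called each time the application asks the OS whether input is present.
        self.c += 1
        if self.tib:
            key = self.tib.pop(0)
            self.c = 0   # counters are reset once an input is put into effect
            return key
        return None

shell = Shell()
assert shell.check_for_input() is None   # polling, no input yet
shell.key_pressed("a")
assert shell.check_for_input() == "a"    # input delivered on a later poll
```

In the full protocol the key would only be delivered once both machines reach the same C count; that handshake is omitted here for brevity.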
Asynchronous inputs may be entered from either one of the two computers. An input accessing method is required that designates what inputs are valid from each one of the two computers at any time. For example, the keyboards of the two computers may not be active at the same time. The user can specify the input accessing method that they prefer at the beginning of the session. All the inputs from both computers are presented to the Master where they get validated and ordered. The Master considers all valid inputs as if they were its own inputs and distributes them to the Slave, with some synchronization information. The synchronization information is the count C. When an input gets presented to the application programs on the two computers, the count of the Master, Cm, must be equal to the count of the Slave, Cs.
Synchronous inputs may be entered from either of the two computers in which case they must be distributed to the other computer. Synchronization is not required but validation is. There are cases where synchronous inputs may originate from data that exists on both computers, like reading of files that exist on both computers. In such cases the application program can itself get the input (through the operating system) without any action to be taken by the Shells. The rest of the paper is about the asynchronous inputs.
3. Asynchronous Input Synchronization
Petri nets are used for modeling the synchronization of the asynchronous inputs. One of the most important use of Petri nets is the modeling of asynchronous systems, especially ones that experience concurrency, asynchronism and nondeterminism [1]. The system under construction possesses all three conditions. The system experiences concurrency since there is execution taking place at two computers. It experiences asynchronism since some inputs are entered in an asynchronous way. Finally, it experiences nondeterminism, in its execution, since the inputs are entered interactively, which means that the execution takes a path related to the user response. In Petri net language, this is called decision or data-dependency. For the analysis that will follow we adopt basic Petri net theory, notation and formulation of Coolahan and Roussopoulos[2].
Figure 1 shows the Petri net model of a running process that checks for an input. Place p1 represents the application process that is executing. When the process wants to check for an input, transition t1 fires and the token goes to place p2. Transition t1 is the call to the operating system to check for an input. In p2 the check for an input is performed by the operating system. If no input is available, transition t2 fires and the token returns to p1, where the user process resumes execution. The token loops through p1, t1, p2, t2 until an input is entered. After an input is entered, the next time the token gets to p2, transition t25 will fire and the loop will terminate. The token goes to p23, where the user process gets the input. Then transition t26 fires and the token returns to p1, where the user process returns to processing. This scenario continues as long as the process is executing. When the process terminates, transition t3 fires and the token returns to the operating system, place p3. The numbering of places and transitions in figure 1 is not consecutive, since this net will be part of the bigger picture in figure 3.
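The polling loop of figure 1 can be played as a token game. The following sketch is mine (the transition names follow the paper, but the dict encoding and single-token marking-as-set are assumptions for illustration):

```python
# Token-game sketch of the figure 1 net. Each transition maps its set of
# input places to its set of output places; the marking is a set of places,
# which suffices because this net is safe (at most one token per place).
net = {
    "t1":  ({"p1"}, {"p2"}),    # process calls the OS to check for input
    "t2":  ({"p2"}, {"p1"}),    # no input available: resume execution
    "t25": ({"p2"}, {"p23"}),   # input available: leave the polling loop
    "t26": ({"p23"}, {"p1"}),   # process takes the input and resumes
    "t3":  ({"p1"}, {"p3"}),    # process terminates, control back to the OS
}

def fire(marking, t):
    ins, outs = net[t]
    if not ins <= marking:
        raise ValueError(f"{t} is not enabled")
    return (marking - ins) | outs

m = {"p1"}
m = fire(m, "t1")    # poll
m = fire(m, "t2")    # no input, loop back
m = fire(m, "t1")    # poll again
m = fire(m, "t25")   # an input arrived in the meantime
m = fire(m, "t26")
assert m == {"p1"}   # back to normal execution
```

The same firing rule drives the larger net of figure 3; only the place and transition inventory grows.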
Fig. 1 Petri net of a User Process checking for inputs
The handshake of messages between the two computers for synchronizing one asynchronous input is shown in figure 2. The messages exchanged are symbolized “msgX” and they are not numbered consecutively or in order. If the input is from the Slave, the Slave sends the input to the Master with msg6 and proceeds with its execution. The Master receives msg6 and checks if the Slave has access to the inputs. If it does, the Master accepts the Slave’s input and considers it as one of its own; otherwise it discards the key.
Fig. 2 Handshaking for Asynchronous Input Synchronization
If the Master has an input, it sends a msg5 to the Slave with the input and its counter, Cm. If that input was a Slave input that arrived at the Master with a msg6, then the message is msg9 instead of msg5. When the Slave receives msg5 or msg9 with the input and Cm, it compares Cm with its own count Cs. If Cs<=Cm, the Slave continues execution to catch up with the Master (until Cs=Cm) and then sends msg3 to the Master. If Cs>Cm, the Slave sends msg4 to the Master with its count Cs and waits. When the Master receives msg3, it puts the input into effect, sends msg7 to the Slave and resumes execution. If the Master received msg4 instead, it continues execution until Cm=Cs; then it puts the input into effect, sends msg7 to the Slave and resumes execution. When the Slave receives msg7, it puts the input into effect and resumes execution. The above actions of transmitting and receiving messages, comparing counts, etc., take place in the Shells running at the Master and the Slave. This is shown in detail in figure 3.
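The count comparisons in this handshake can be written out as two small decision functions. This is my own encoding of the rules just described, not the authors' code; the function names and return labels are hypothetical.

```python
# Sketch of the count-comparison logic of the figure 2 handshake.
def slave_on_msg5(cs, cm):
    """Slave receives the input and Cm; decide which reply to send."""
    if cs <= cm:
        # Slave is behind (or level): execute until Cs == Cm, then acknowledge.
        return "msg3"
    # Slave is ahead: report Cs and wait.
    return "msg4"

def master_on_reply(reply, cm, cs):
    """Master decides when to put the input into effect."""
    if reply == "msg3":
        return "apply-now"                 # counts already match
    # reply == "msg4": Master keeps executing until Cm == Cs.
    return "apply-when-caught-up" if cm < cs else "apply-now"

assert slave_on_msg5(cs=3, cm=5) == "msg3"   # Slave catches up, then msg3
assert slave_on_msg5(cs=7, cm=5) == "msg4"   # Slave ahead: msg4 and wait
```

Either way, both machines end up applying the input at the same C count before msg7 closes the exchange.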
A methodology for using Petri nets to represent communication protocols was found in [3] and [4]. Figure 3 displays the Petri net that represents the communication protocol for the synchronization of the inputs. This Petri net is divided into three parts by the dotted lines: the Master, the Slave and the Communication (COMM). This Petri net models the handshake of figure 2. Figure 1 shows how a program requests and gets an input, and figure 3 shows how two programs request and get the same input at the same point in their executions with the help of the Shell. The input accessing method modeled in this Petri net is that the inputs from the Master and the Slave are allowed in the order in which they arrive at the Master. No inputs are deleted unless the TIBs local to each machine get full. The inputs are processed one at a time.
Fig. 3 Petri net for synchronizing the asynchronous inputs
The places on the Petri net represent processing or checking of simple conditions. The processing can be part of the application (user) process, of the Shell process, or of the operating system. The transitions on the Petri net represent events happening.
The Petri net of figure 3 consists mainly of: a) five loops b) four messages c) two wait states d) some extra processing states.
The five loops are, first: p1, t1, p2, t2; second: p4, t4, p5, t5; third: p9, t9, p10, t10; fourth: p18, t18, p17, t19; fifth: p21, t23, p22, t24.
The four messages are: first is "msg6": p7, t8, p8; second is "msg5" or "msg9": p14, t15, p15; third is "msg3" or "msg4": p19, t21, p20; and fourth is "msg7": p24, t27, p25.
Messages consist of three fields:
1. type of message (or msg number)
2. input info
3. local counter (Cm or Cs).
In the case of keyboard inputs, the message type field is one byte long, the input info field is two bytes (one byte for the ASCII code of the key), and the counter field can be of fixed or variable size. A variable-length counter field requires the message to have a fourth field containing the message length.
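For illustration, the fixed-size variant of this message layout can be packed with Python's `struct` module. The one-byte type and two-byte input info fields follow the paper; the 4-byte counter width and big-endian byte order are my own assumptions.

```python
# Packing/unpacking the fixed-size message layout (field widths partly assumed).
import struct

MSG_FMT = ">B2sI"   # 1-byte type, 2-byte input info, 4-byte counter (big-endian)

def pack_msg(msg_type, key, count):
    return struct.pack(MSG_FMT, msg_type, key, count)

def unpack_msg(data):
    return struct.unpack(MSG_FMT, data)

wire = pack_msg(5, b"a\x00", 12)          # a msg5 carrying key 'a' and Cm=12
assert len(wire) == 7                      # within the 10-byte bound of section 5
assert unpack_msg(wire) == (5, b"a\x00", 12)
```

A 4-byte counter keeps every message at 7 bytes, consistent with the later remark that all messages are less than 10 bytes long.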
The two wait states are: first: t14, p13, t22; second: t20, p18, t28.
In the first loop, in p1, the user process is running; when it checks for input, transition t1 fires and the token comes to p2. As seen in figure 1, this would be a call to the operating system checking for inputs, but in this case p2 is the Shell, which intercepts the calls of the user process to the operating system concerning input. In p2 the Shell checks whether a msg6 has arrived from the Slave holding an input (token in p8), in which case transition t11 fires. If no msg6 has arrived from the Slave, then if a local input has been entered at the Master, t13 fires; otherwise t2 fires and the token returns to p1, where the Master user process resumes execution. While a token is ready at p2, priority is given first to t11 to fire, then to t13 and last to t2.
All loops work in a similar manner. Looking at these five loops in figure 3, the top place of each loop is execution of the user process, and counter C increments by one each time the token passes through. The bottom place is execution in the shell that checks conditions and priorities. All actions required to synchronize the inputs are taken while executing in the Shell.
In the second loop, in place p5, priority is given first to t17 to fire if a message has arrived from the Master. If no message from the Master is available and a Slave input is available, t7 will fire. Otherwise, t5 will fire.
In the third loop, in place p10, priority is given first to t16 to fire if a message from Master has arrived, else t10 will fire.
In the fourth loop, in place p17, t20 will fire if Cs>=Cm+1; else (if Cs<Cm+1) t19 will fire.
In the fifth loop, in place p22, t24 will fire if Cm<Cs; if Cm=Cs, t25 will fire.
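These per-loop priorities are deterministic choices, so they can be written as plain selector functions. The encoding below is mine, a sketch for a few of the loops; the transition names follow the paper.

```python
# Selector sketches for the firing priorities of three of the five loops.
def choose_at_p2(msg6_waiting, local_input):
    # First (Master) loop: t11 before t13 before t2.
    if msg6_waiting:
        return "t11"
    if local_input:
        return "t13"
    return "t2"

def choose_at_p17(cs, cm):
    # Fourth loop: keep looping (t19) until Cs >= Cm+1, then leave via t20.
    return "t20" if cs >= cm + 1 else "t19"

def choose_at_p22(cm, cs):
    # Fifth loop: keep looping (t24) until Cm == Cs, then fire t25.
    return "t25" if cm == cs else "t24"

assert choose_at_p2(True, True) == "t11"     # msg6 always wins
assert choose_at_p17(6, 5) == "t20"          # Cs = Cm+1: ready to reply
assert choose_at_p22(5, 5) == "t25"          # counts match: apply input
```

Encoding the priorities this way makes the net's conflict resolution explicit, which the graphical Petri net leaves to side annotations.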
Examples of the Execution of the Petri net.
Case 1: Master gets input (key located in one of its TIBs).
The user program starts at p1 for the Master and at p4 for the Slave. While the Master is waiting for a key (the Master keyboard TIB is empty), the Master is looping at p1, t1, p2, t2 and the Slave is looping at p4, t4, p5, t5. A key is pressed at the Master (the key is placed in the TIB by the Shell, which hides it from the operating system). The next time the Master gets to p2, t13 will fire and p12 will prepare msg5, containing the key and Cm, to send to the Slave. When t14 fires, msg5 is sent to the Slave and the Master waits at p13. The transmission of msg5 takes place in p14, and t15 fires when msg5 arrives at the Slave. In p15 the computer where the Slave process is running receives msg5. The Slave process is executing in the loop p4, t4, p5, t5, and the next time the token gets to p5, t17 will be enabled and will fire to p16. At p16 the Slave user process resumes, and the Slave will loop in p16, t18, p17, t19 until Cs>=Cm+1. During this loop the Slave user process has the chance to catch up with the Master user process if necessary. While at p17, t20 will fire and the Slave will wait at p18 while a msg3 (if Cs=Cm+1) or a msg4 (if Cs>Cm+1) is sent to the Master. The transmission takes place at p19. When t21 fires, msg3 or msg4 has arrived at the Master and is received at p20. A ready token at p20 ends the wait of the Master at p13: t22 fires and the Master user process resumes execution. The Master loops at p21, t23, p22, t24 until Cm=Cs. This gives the Master user process a chance to catch up with the Slave user process. While at p22, t25 will fire (Cm=Cs). A token goes to p23, where the input is passed to the Master user process and Cm is set to zero; eventually t26 fires and the token returns to the Master user process at p1. Another token is sent to p24, which is msg7 transmitted to the Slave. When t27 fires, msg7 has arrived at the Slave and is received at p25. A ready token at p25 ends the wait of the Slave at p18. When t28 fires, the token goes to p26, where the input is passed to the Slave user process and Cs is set to zero; eventually t29 fires and the token returns to the Slave user process at p4.
Note that the input passed to the Slave user process (p26, t29) could take place during the wait (t20, p18, t28). In other words p26 is executed before p18. Thus the wait is replaced by t20, p26, t29, p18, t28 and the firing of t28 returns the token to p4.
(Figure, for i=23 and for i=26: places P with delays d_P; legend: User Process, Operating System, Shell Process, all three.)
At p23 and p26 the input was passed to the Master or Slave user process respectively. The Shell presents the input, which in this case is a keystroke entered in the keyboard buffer of the Master or the Slave. A call to the operating system is invoked from the Shell to check for an input (key). The call to the operating system returns that an input is present. Since an input is present, a call from the user process to the operating system may follow, to read and process the input. The token returns to p1 or p4 to start processing the next input.
Case 2: Slave gets an input (keypress).
The Master is looping at p1, t1, p2, t2 and the Slave is looping at p4, t4, p5, t5, waiting for a key. When the Slave gets an input, the next time the token gets to p5, t7 will fire. A token goes to p7, where msg6 is transmitted to the Master, and another token goes to p9, making the Slave loop in p9, t9, p10, t10. When msg6 arrives at the Master, t8 fires and a token goes to p8, where the Master receives msg6. The token at p8 will break the loop p1, t1, p2, t2 the next time the token comes around to p2: t11 will fire and the token goes to the place where the Slave's input is considered as the Master's input. t12 fires and the token goes to p12, where the Master assembles msg9 (instead of msg5, since it is the Slave's input) and, when t14 fires, transmits it to the Slave. The Master waits at p13. When msg9 arrives at the Slave, t15 fires and a token comes to p15, where the Slave receives msg9. Since the Slave is looping at p9, t9, p10, t10, the next time it comes around to p10, t16 fires and a token comes to p16. From then on, the execution is the same as in Case 1 above.
4. Petri Net Analysis
Analysis on the Petri net was performed to ensure the proper operation of the system. The reachability tree was constructed and studied. Conclusions of the reachability tree are:
1. All the places in the Petri net are **safe** except place p8, which is 2-bounded. That is not a problem, since p8 is a buffer that can hold two messages.
Safeness is a property that must hold in order for the system to work properly. Each place represents the execution of a routine. Safeness states that there is never more than one token at any place. Assume that a token arrives at a place, meaning that a routine starts executing. If another token arrives at the same place before the first routine finishes, another instance of the same routine starts executing and the first one stops, leaving the system in an unknown and unwanted state from which it cannot recover.
2. The Petri net is not **conservative**, since the number of tokens in the net does not remain constant. Even though the number of tokens is not constant, there is one "resident" token at the Master and one at the Slave at all times, showing where the Master and the Slave currently execute. In addition, some extra tokens are generated and deleted upon the transmission and reception of messages; these designate processing at the communications processors.
3. The Petri net is **live**, which means that all the transitions are live and that there are no deadlocks.
The protocol is written in such a way that no message can be transmitted unless all previous messages have been received properly. There exist communications processors that handle the transmission and reception of messages and the retransmissions if necessary. This takes place at a lower level not seen and not affecting this Petri net. At this level, we assume that all messages arrive properly.
For further analysis on the Petri net and on other Petri nets modeling for different input accessing methods refer to [5].
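The boundedness conclusions above come from the reachability tree. The following is a generic sketch of that kind of check, written by me on a toy net rather than the paper's net of figure 3: it explores reachable markings breadth-first and records the peak token count seen at each place.

```python
# Generic reachability-based boundedness check (toy example, not the paper's net).
from collections import deque

def max_tokens_per_place(initial, transitions, limit=1000):
    """Explore reachable markings; return the peak token count per place.

    `initial` is a tuple of token counts; each transition is a pair of
    (consume, produce) vectors over the places. `limit` caps exploration
    so an unbounded net cannot loop forever.
    """
    seen = {initial}
    peak = list(initial)
    queue = deque([initial])
    while queue and len(seen) < limit:
        m = queue.popleft()
        for consume, produce in transitions:
            if all(m[i] >= consume[i] for i in range(len(m))):
                nxt = tuple(m[i] - consume[i] + produce[i] for i in range(len(m)))
                if nxt not in seen:
                    seen.add(nxt)
                    peak = [max(p, v) for p, v in zip(peak, nxt)]
                    queue.append(nxt)
    return peak

# Two-place toy net: a transition moves the token from place 0 to 1 and back.
trans = [((1, 0), (0, 1)), ((0, 1), (1, 0))]
assert max_tokens_per_place((1, 0), trans) == [1, 1]   # every place 1-bounded (safe)
```

A place whose peak is 1 is safe; a peak of 2, like p8 in the paper's net, means 2-bounded.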
5. Timing Analysis
Suppose that the Master executes three times faster than the Slave. The Master process is checking for the presence of inputs three times faster than the Slave process. Figure 4 shows the requests for input of the Master and Slave processes to the operating system. These are denoted as markings on the Master and Slave axis respectively. Also shown are all the messages to synchronize a key that was pressed by the user at the Slave machine.

On the left side of figure 4, msg7 designates the end of the synchronization of the previous input. After the last synchronization, the Master and the Slave keep track of how many times the application process asks the operating system to check for the presence of inputs, in counters Cm and Cs respectively. A key is pressed at the Slave when Cs=2 and, the next time the Shell gets access (Cs becomes 3), msg6 is sent to the Master. The Master receives msg6 at Cm=11, and it is processed when the Shell at the Master gets access (Cm becomes 12). The Master compares the counts and sends msg5; the Slave receives it at Cs=4 and will process it when Cs=5. Then the Slave has to catch up, and the Master has to wait for the Slave. At Cs=13 the Slave sends msg3 to the Master, ending the Master's wait. On the next Master count, Cm=13, the Master sends msg7 and a new synchronization phase starts.
It is noted that the Master process had to wait idle for an interval of $8T_s$, where $T_s$ is the time interval between two consecutive checks for inputs by the Slave, while the Slave catches up. This time interval is called the Catch-Up Time (CUT) of the slower process.
The CUT must be such that the responsiveness of the system, from the moment an input is entered until the moment the input is registered by the process, stays within values acceptable to the users. That value is called the Maximum Allowable Wait Interval (MAWI).
It is of interest to note that, even if the Master were executing idle loops waiting for an input, the Slave would still have to do "idle catch up". In other words, during the CUT the Slave does not perform any useful processing. The Slave's idle catch up cannot be eliminated, since there is no way of knowing what the user process is executing without interfering with the process.
The more often the computers communicate, the closer they remain synchronized and the smaller the CUT required to regain synchronization on the next handshake. On the other hand, the more often the computers communicate, the higher the bandwidth required and the less processing achieved.
When some time passes without handshake (a timer expires), the Master computer initiates communication by introducing a null key which is removed after it gets synchronized. The users can set the value of this timer and tune the responsiveness of the system to their needs. In other words, they can set up the value for the MAWI. When the counter at the Master or the Slave approaches overflow a handshake with a null key is forced, initiated from the process whose counter approaches overflow.
The worst case response (WCR) time of the system is equal to the CUT plus four communication delays (ComDelay), where ComDelay is the transmission time of a message. A Slave input gets synchronized with three messages to the Master and with four messages to the Slave. A Master input gets synchronized with two messages to the Master and with three messages to the Slave. All messages are less than 10 bytes long.
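The WCR formula can be worked through with the figure 4 scenario. The formula is the paper's; the concrete numbers (a 10 ms Slave polling interval and a 5 ms per-message delay) are my own illustrative assumptions.

```python
# Worked example of WCR = CUT + 4 * ComDelay (numbers are illustrative).
def worst_case_response(cut, com_delay):
    return cut + 4 * com_delay

Ts = 0.010                  # assumed Slave polling interval: 10 ms
cut = 8 * Ts                # the 8*Ts catch-up interval observed in figure 4
wcr = worst_case_response(cut, com_delay=0.005)
assert abs(wcr - 0.100) < 1e-9   # 80 ms catch-up + 4 * 5 ms = 100 ms
```

With these numbers the response stays well under typical interactive-latency expectations, which is the point of tuning the MAWI timer.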
6. Conclusion
The SPE technique helps to reduce communications in cases such as computer conferencing. Application programs can execute synchronously at geographically separated computers. Conferencing can take place through an application program. For example, if the conference is about the design of a building, the conference supporting program could be AutoCad™. The inputs to the program, originating from either computer according to an input accessing method, and some minimal synchronization information, such as the counts, are the only data that has to be communicated between the two computer systems.

With existing methods of computer conferencing, execution takes place on one computer and, every time the screen changes, the screens or the changes to the screens have to be communicated to the other computer. When the SPE technique is applied to computer conferencing, only the inputs to the program, such as the keystrokes, have to be communicated. Teamwork is promoted even if team members are physically separated. Collaboration can take place with the use of existing, off-the-shelf software, and without any added hardware. Tutoring the use of software, debugging software, and tutoring a subject through an application program are some activities that can take place remotely with the proposed approach.

Application programs that produce many screens at a high frequency are currently unable to be used remotely unless high bandwidth is provided. The proposed approach may allow communication over existing telephone lines. Graphical programs ordinarily require considerable bandwidth; the SPE approach allows them to run remotely over low speed lines. Security is also embedded in the communications when done with the proposed method: there are many circumstances where the inputs that generate the screens are not classified, while the generated screens (outputs) are classified. Many computer users who would like to have a session through their computers will benefit greatly from the results of this work.
References:
Generating various contexts from permissions for testing Android applications
Kwang sik Song, Ah-Rim Han, Sehun Jeong, Sungdeok Cha
Department of Computer Science and Engineering
Korea University
Seoul, South Korea
{kwangsik_song, arhan, gifaranga, scha}@korea.ac.kr
Abstract—Context-awareness of mobile applications raises several issues for testing, since mobile applications should be testable in any environment and with any contextual input. In previous studies of testing Android applications as event-driven systems, many researchers have focused on generated test cases that consider only GUI events. However, with such test cases it is difficult to detect failures caused by changes in the context in which applications run. It is important to consider various contexts, since mobile applications adapt to and use the novel features and sensors of mobile devices. In this paper, we provide a method for systematically generating various executing contexts from permissions. By referring to the lists of permissions, the resources that Android applications use can be inferred easily. The various contexts of an application can be generated by permuting resource conditions, and the permutations of the contexts are prioritized. We have evaluated the usefulness and effectiveness of our method by showing that it contributes to detecting faults.
Keywords—Android application testing, permissions, various contexts, context-aware application, mobile application testing
I. INTRODUCTION
The proliferation of the novel features and sensors of mobile devices (i.e., operating systems, hardware platforms, and device sensors) has enabled the development of mobile applications that can provide rich, highly-localized, context-aware content to users [1]. In particular, the market for disease diagnostic systems is growing fast due to the development of mobile applications that log personal health data (e.g., blood glucose, blood pressure, and heart rate) by using the sensors, cameras, additional simple adapters (or accessories) in mobile devices and sending the results to the system in real-time. For instance, in the mobile application called Peek Vision [2], medical images can be captured by using a clip-on camera adapter that gives high quality images of the back of the eye and can be sent to the system so diagnosis can be done remotely. The mobile application has been designed to be aware of the computing context in which it runs and to adapt and react according to its findings; therefore, it belongs to the category of context-aware applications [3].
The context-awareness of mobile applications yields several issues for testing [4] because the mobile applications should be testable in any environment and with any contextual input [5]. These applications are notified of a change to their context by means of events, and the variability in the running conditions of a mobile application depends on the possibility of using it in variable contexts. A context represents the overall environment that the application is able to perceive [6]. More precisely, Abowd et al. [7] define a context as: “any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.”
In previous studies of testing of Android applications as event-driven systems, many researchers focused on using the generated test cases considering only GUI events. However, it is difficult to detect failures in the changes in the contexts, which can be influenced by context events, in which applications run. Even in studies that considered context events, the specific event sequences generated based on a limited number of scenarios were considered. This has limitations in terms of finding bugs that occur in various complex contexts. It is important to discover the unacceptable behaviors of an app (such as crashes or freezes) that are often reported in the bug reports of mobile apps and appear when the app is impulsively solicited by contextual events, such as the alerts for the connection/disconnection of a plug (e.g., USB and headphone), an incoming phone call, GPS signal loss, etc. Therefore, for testing mobile applications, we need a systematic testing method to take into account the various conditions in context-aware systems. This is increasingly needed given trends in mobile applications due to the advancement in the novel features and sensors of mobile devices, which reveal new trends of bug types [8].
To access resources from Android devices, each Android application includes a manifest file, AndroidManifest.xml, which lists the permissions [9] that the application requires for its execution and through which it requests permissions for resources. By referring to the lists of permissions, the resources that the applications use when running can be inferred easily. Context events occur from/by those identified resources, and the state of each condition can be changed by those context events. Thus, we use the permissions to generate the various contexts used for testing Android applications.
To test mobile applications in various contexts, we provide a method for systematically generating various executing contexts from permissions. In our paper, an executing context represents a permutation of resource conditions that have variable states, and Graphical User Interface (GUI) event based generated test cases [10] can be run in those contexts. The state of each condition can be changed/sensed/perceived according
to several types of context events, such as:
- events coming from the external environment and sensed by device sensors (e.g., Wi-Fi and GPS);
- events generated by the device hardware platform (e.g., battery and other external peripheral port, such as USB, headphone, and network receiver/sender); and
- events typical of mobile phones (e.g., the arrival of a phone call or a SMS message).
The brief procedure for generating various executing contexts using permissions is as follows. First, the related resources and their possible states are identified from the permissions. Then, the various executing contexts are generated by permuting the resource conditions that have variable states. Finally, the executing contexts are prioritized and a subset of them is selected. We applied our testing method to two open-source projects, Open Camera [11] and Subsonic [12]. Experiments reveal that the proposed method is significantly effective in detecting faults.
The rest of this paper is organized as follows: Section II contains a discussion of related studies. Section III explains the definition of permissions and the need to use them when testing Android applications, and identifies the related resources that can be inferred from the permissions. Section IV explains the procedure to generate various contexts from the permissions. In Section V, we present an experiment to evaluate the proposed approach and discuss the results. We conclude and discuss future research in Section VI.
II. RELATED WORK
Mobile applications are event-driven systems, but, unlike other traditional event-driven software systems, GUI [10], [13]–[15] or web applications [16], they are able to sense and react to a wide range of events. In the following subsections, we discuss the related studies that provide methods for testing Android applications as event-driven systems.
A. GUI Testing
Random testing. The UI/Application Exerciser Monkey [13] is part of the Android SDK and generates random user input. Originally designed for stress-testing Android applications, it generates pseudo-random streams of user events such as clicks, touches, or gestures, as well as a number of system-level events. Monkey testing is a random and automated unit test. The test is not scripted and is run mainly to check whether a system or an application will crash. It is easy to set up and can be used in any application, and the cost of using it is relatively small. However, it can detect only a few bugs.
Model-based testing. AndroidRipper [14] is an automated technique implemented in a tool that tests Android applications using a GUI model. AndroidRipper is based on a user interface-driven ripper that automatically explores the application’s GUI to exercise the application in a structured manner. More specifically, it dynamically analyses the application’s GUI for obtaining sequences of events that are fireable through the GUI widgets. Each sequence provides an executable test case. During its operation, AndroidRipper maintains a state machine model of the GUI (called a GUI Tree). The GUI Tree contains the set of GUI states and state transitions encountered during the ripping process. However, by using generated test cases that consider only GUI events, it is difficult to find failures that could otherwise be detected by considering the changes in the context, which can be influenced by context events, in which applications run.
B. Context-aware Testing
Amalfitano et al. [6] took into account both context events and GUI events for testing Android applications. They manually define reusable event patterns—representations of event sequences that abstract meaningful test scenarios. These event patterns are manually defined after a preliminary analysis is conducted on the bug reports of open source applications. Based on the defined event patterns, test cases are generated using three scenario-based mobile testing approaches that (1) manually generate test cases, (2) mutate existing test cases, and (3) support the systematic exploration of the behavior of an application (an extension of the GUI ripping technique presented in [14]). For dynamically recognizing the context events that the application is able to sense and react to at a given time, events can be deduced from event handlers. In this work, they also use a set of Intent Messages to figure out the events that are managed by other application components. This set can be obtained by means of static analysis of the Android manifest file of the application.
The methodology proposed by Amalfitano et al. has some limitations. The number of scenarios that define relevant ways of exercising an application is limited because only specific event sequences are considered. With manual analysis by experts, the events that could trigger a faulty behavior may not be properly identified. When analyzing bug history, a sequence of events that has never occurred might not be chosen, even though such sequences may cause catastrophic failures. These event patterns may need to be redefined when testing other types of applications. From the perspective of triggering the context events, the source code also needs to be analyzed and altered. Moreover, the effectiveness of the testing approach is evaluated only by measuring code coverage; statement coverage may not be a sufficient indicator of fault detection capability [17]. In our paper, we provide a systematic method of generating various contexts. Since this method may produce many test cases to run, we provide a prioritization technique to rank the test cases in order of their likelihood of detecting faults.
III. INFERRING RESOURCES FROM PERMISSIONS
A. Permissions in Android Application
Android uses a system of permissions [9] to control how applications access sensitive devices and data stores. More specifically, to ensure security and privacy, Android uses a permission-based security model to mediate access to sensitive data (e.g., location, phone call logs, contacts, emails, or photos) and potentially dangerous device functionalities (e.g., Internet, GPS, and camera) [18].
To access resources from Android devices, each Android app requests permissions for resources by listing the permissions. Each Android application includes a manifest file,
TABLE I: List of permissions and related resources with their possible states.
<table>
<thead>
<tr>
<th>Permission</th>
<th>Allows an App to</th>
<th>Related Resources [Possible States]</th>
<th>Android Version</th>
</tr>
</thead>
<tbody>
<tr>
<td>ACCESS_FINE_LOCATION</td>
<td>Access precise location from location sources</td>
<td>Wi-Fi [on/off], GPS [on/off], Radio [on/off]</td>
<td>Android 1.0 ~</td>
</tr>
<tr>
<td>INTERNET</td>
<td>Open network sockets</td>
<td>Wi-Fi [on/off], Radio [on/off]</td>
<td>Android 1.0 ~</td>
</tr>
<tr>
<td>CAMERA</td>
<td>Be able to access the camera device</td>
<td>Camera [on/off], SD card [free/full], Bluetooth [on/off]</td>
<td>Android 1.0 ~</td>
</tr>
<tr>
<td>BLUETOOTH</td>
<td>Connect to paired bluetooth devices</td>
<td>Bluetooth [on/off]</td>
<td>Android 1.0 ~</td>
</tr>
<tr>
<td>WRITE_EXTERNAL_STORAGE</td>
<td>Write (but not read) the user’s contacts data</td>
<td>Radio [on/off]</td>
<td>Android 4.0.3 ~</td>
</tr>
<tr>
<td>WRITE_INTERNAL_STORAGE</td>
<td>Write to external storage</td>
<td>SD card [on/off]</td>
<td>Android 1.5 ~</td>
</tr>
<tr>
<td>BND_FILESYSTEM_ADMIN</td>
<td>Ensure that only the system can interact with device</td>
<td>Camera [on/off], Flash [on/off], SD card [free/full], Wi-Fi [on/off]</td>
<td>Android 2.2.x ~</td>
</tr>
<tr>
<td>VIBRATE</td>
<td>Access to the vibrator</td>
<td>Vibrate [on/off]</td>
<td>Android 1.0 ~</td>
</tr>
<tr>
<td>NFC</td>
<td>Perform I/O operations over NFC</td>
<td>NFC [on/off]</td>
<td>Android 2.3 ~</td>
</tr>
<tr>
<td>FLASHLIGHT</td>
<td>Access to the flashlight</td>
<td>Flash [on/off]</td>
<td>Android 1.0 ~</td>
</tr>
<tr>
<td>CHANGE_NETWORK_STATE</td>
<td>Change network connectivity state</td>
<td>Wi-Fi [on/off], GPS [on/off], Radio [on/off]</td>
<td>Android 1.0 ~</td>
</tr>
<tr>
<td>CAPTURE_VIDEO_OUTPUT</td>
<td>Capture video output</td>
<td>LCD [on/off], Camera [on/off]</td>
<td>Android 1.0 ~</td>
</tr>
</tbody>
</table>
AndroidManifest.xml [19], which lists the permissions that the application requires for its execution. When the user wants to install an app, this list of permissions is presented and confirmation is requested. When the user confirms the access, the app will have the requested permissions at all times (until the app is uninstalled). If an application requests the resource without having the appropriate permission, then the Android OS may throw a Security Exception or simply not grant the requested resource [20]. These permission-protected resources are accessed through the Android API and other classes resident on the phone. For example, having the ACCESS_FINE_LOCATION permission will give the application access to a number of Android API calls that use resources such as GPS, Wi-Fi, and Radio.
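As an illustration, a manifest excerpt declaring such permissions might look like the following. This is a hypothetical sketch; the package name is invented, but the permission identifiers are the standard Android ones:

```xml
<!-- Hypothetical AndroidManifest.xml excerpt; the package name is illustrative. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.cameraapp">
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.BLUETOOTH" />
</manifest>
```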
B. Identification of Related Resources from Permissions
We have used the permissions in an app’s manifest file for generating various context used for testing Android applications. By referring to the lists of permissions, the resources that the applications would (potentially) use for running Android applications can be inferred easily, without analyzing source codes of the applications. The context events occur from/by those identified resources, and the state for each condition can be changed by those context events. Thus, by using permissions, we can generate various executing contexts that represent permutations of resource conditions that have variable states.
The latest Android platform release contains a list of 152 permissions. Among them, we focus on the permissions related to communicating with the environment, because they are more critical for making context-aware apps. For each permission, the related resources with their possible states are identified in Table I. It is intuitive to identify the related resources for permissions such as BLUETOOTH or CAMERA. Meanwhile, the ACCESS_FINE_LOCATION permission covers multiple resources such as GPS, Wi-Fi, and Radio. To consider the variable states of resource conditions, the possible states are defined in terms of availability (i.e., on or off). It is also worth noting that the table is independent of the features of an app and is thus reusable.
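With the mapping in Table I, resource inference reduces to a table lookup. The sketch below is a minimal Python illustration; the map is a partial, hand-transcribed excerpt of Table I, not part of the authors' tooling:

```python
# Partial permission-to-resource map, transcribed from Table I (illustrative).
PERMISSION_RESOURCES = {
    "ACCESS_FINE_LOCATION": {"Wi-Fi", "GPS", "Radio"},
    "INTERNET": {"Wi-Fi", "Radio"},
    "CAMERA": {"Camera", "SD card", "Bluetooth"},
    "BLUETOOTH": {"Bluetooth"},
    "VIBRATE": {"Vibrate"},
    "NFC": {"NFC"},
}

def infer_resources(permissions):
    """Union of the resources behind every permission an app declares."""
    resources = set()
    for p in permissions:
        resources |= PERMISSION_RESOURCES.get(p, set())
    return resources

# An app declaring location and camera access touches six resources.
print(sorted(infer_resources(["ACCESS_FINE_LOCATION", "CAMERA"])))
```

Note that no source code of the app is consulted; only the declared permission list is needed.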
IV. TESTING ANDROID APPLICATIONS IN VARIOUS CONTEXTS
Fig. 1 represents the overall procedure for generating various executing contexts using permissions.
A. Generating Various Executing Contexts
The executing contexts of an app can be generated by permuting resource conditions. For instance, if the resources that an app uses are $r_1$, $r_2$, ..., $r_n$, and the numbers of possible states for those corresponding resource conditions are $N(r_1)$, $N(r_2)$, ..., $N(r_n)$, then the total number of generated executing contexts is $N(r_1) \times N(r_2) \times \ldots \times N(r_n)$. For example, if an app’s permissions are linked with Bluetooth, GPS, and Wi-Fi, then the executing contexts include eight different permutations because each resource condition has two candidate states.
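This enumeration is a plain Cartesian product over the states of each resource condition. A minimal Python sketch (not the authors' implementation) follows:

```python
from itertools import product

def generate_contexts(resource_states):
    """Enumerate executing contexts as the Cartesian product of
    every resource condition's possible states."""
    names = sorted(resource_states)
    return [dict(zip(names, states))
            for states in product(*(resource_states[n] for n in names))]

# Bluetooth, GPS, and Wi-Fi each have two states, so 2 x 2 x 2 = 8 contexts.
contexts = generate_contexts({
    "Bluetooth": ["on", "off"],
    "GPS": ["on", "off"],
    "Wi-Fi": ["on", "off"],
})
print(len(contexts))  # 8
```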
B. Prioritizing Contexts
While the generation of executing contexts is straightforward and easy to automate, the number of generated executing contexts increases as the number of considered resources increase. An app is executed on every test case for all the
generated contexts, and the number of test runs increases exponentially. Thus, we need to prioritize the executing contexts to select the contexts to be tested first.
We suggest a two-level prioritization strategy to rank the generated executing contexts. The first step is weighting each resource condition according to the testing objectives (e.g., testing normal or unacceptable behaviors). To test the normal behaviors of an app, the executing contexts in which more resources are used should be ranked more highly. Thus, for example, weights can be assigned to resource conditions as follows: Wi-Fi[on]=1, GPS[on]=1, Camera[on]=1, and SD card[free]=1. If the objective of the testing is to detect unacceptable behaviors of an app, then executing contexts related to exceptional scenarios should be ranked more highly; thus, the resource conditions constituting those executing contexts need to be weighted, such as Wi-Fi[off]=1, GPS[off]=1, Camera[off]=1, and SD card[full]=1. To obtain the score of each generated executing context, the weights of the resource conditions of the executing context are summed.
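The first-level scoring can be sketched as a simple weight sum. The weights below follow the exceptional-scenario example in the text; the helper function itself is hypothetical:

```python
def score_context(context, weights):
    """First-level score: sum the weights assigned to the context's
    resource conditions; unweighted conditions contribute 0."""
    return sum(weights.get((resource, state), 0)
               for resource, state in context.items())

# Exceptional-scenario weighting from the text.
weights = {("Wi-Fi", "off"): 1, ("GPS", "off"): 1,
           ("Camera", "off"): 1, ("SD card", "full"): 1}

ctx = {"Wi-Fi": "off", "GPS": "on", "Camera": "off", "SD card": "full"}
print(score_context(ctx, weights))  # 3
```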
In the second step, to distinguish the executing contexts that have the same scores, we provide the method to assign weights to individual or combinatorial resources residing in an executing context. We suggest three criteria—frequency, user controllability, and minimum required resource conditions—as follows.
- **Frequency.** It represents how much a resource is required via permissions and is to be used in an app. It counts the identified number of each resource over the lists of permissions. For example, let an app have the permissions in Table I, then the frequency of the Radio resource is four. Thus, frequently identified resources need to be weighted to test more used resource-related behaviors of an app first.
- **User controllability.** It indicates how easily a user can control a resource. For example, users can enable or disable GPS or Wi-Fi but do not control hardware-related sensors directly. Thus, resources that are more user controllable can be weighted to test usable resource-related behaviors of an app first.
- **Minimum required resource conditions.** Certain combinations of resource conditions need to be weighted to test permission-related behaviors first. The rationale for this criterion comes from the observation that several permissions are related to multiple resources and require minimum resource conditions to provide the expected services to an app. For example, the ACCESS_FINE_LOCATION permission uses three resources, GPS, Wi-Fi, and Radio; among the three resources, GPS[on] and Wi-Fi[on] are the necessary and sufficient resource conditions to provide the service, which is to access precise location from location sources. On the other hand, if we focus on detecting faults, a combination of states that could trigger a faulty behavior (e.g., GPS[on] and Wi-Fi[off]) could be weighted more highly.
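The frequency criterion above, for instance, is a simple count over the permission list. The sketch below reuses a partial hand-transcription of the Table I rows whose related resources include Radio; it is illustrative, not the authors' code:

```python
from collections import Counter

# Partial Table I transcription: permission -> related resources (illustrative).
PERMISSION_RESOURCES = {
    "ACCESS_FINE_LOCATION": ["Wi-Fi", "GPS", "Radio"],
    "INTERNET": ["Wi-Fi", "Radio"],
    "WRITE_EXTERNAL_STORAGE": ["Radio"],
    "CHANGE_NETWORK_STATE": ["Wi-Fi", "GPS", "Radio"],
}

def resource_frequency(declared_permissions):
    """Frequency criterion: count each resource over the permission list."""
    counts = Counter()
    for p in declared_permissions:
        counts.update(PERMISSION_RESOURCES.get(p, []))
    return counts

freq = resource_frequency(list(PERMISSION_RESOURCES))
print(freq["Radio"])  # 4, matching the Radio example in the text
```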
V. EVALUATION
We investigated two research questions in our experiment.
TABLE II: Experimental subjects.
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
<th>Method #</th>
<th>LOC #</th>
</tr>
</thead>
<tbody>
<tr>
<td>Subsonic for Android [4.4] [12]</td>
<td>Playing music and video by receiving media files from the stream server (e.g., personal PC) and supports offline mode and bitrates</td>
<td>265</td>
<td>16,064</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Fault No.</th>
<th>Open Camera Bug ID. (refer in [21])</th>
<th>Fault No.</th>
<th>Subsonic Bug ID. (refer in [22])</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1 150</td>
<td>2</td>
<td>126</td>
</tr>
<tr>
<td>2</td>
<td>2 64</td>
<td>3</td>
<td>3 64</td>
</tr>
<tr>
<td>3</td>
<td>9 102</td>
<td>4</td>
<td>4 39</td>
</tr>
<tr>
<td>4</td>
<td>20 38</td>
<td>5</td>
<td>5 38</td>
</tr>
<tr>
<td>5</td>
<td>3 82</td>
<td>6</td>
<td>6 82</td>
</tr>
<tr>
<td>7</td>
<td>30 46</td>
<td>8</td>
<td>8 39</td>
</tr>
<tr>
<td>8</td>
<td>31 35</td>
<td>9</td>
<td>9 35</td>
</tr>
<tr>
<td>9</td>
<td>37 32</td>
<td>10</td>
<td>10 32</td>
</tr>
<tr>
<td>10</td>
<td>42 21</td>
<td>12</td>
<td>12 21</td>
</tr>
<tr>
<td>12</td>
<td>33 83</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
TABLE IV: Bugs that can be detected using our approach.
**RQ1.** Is our method of generating various executing contexts from permissions effective in detecting faults?
**RQ2.** Is our prioritization technique effective in detecting faults?
A. Experimental Design
Two open source projects are chosen as experimental subjects: Open Camera [11] and Subsonic [12]. We selected these as subjects because they are open source projects and their development histories (such as bug issues) are accessible. They contain a relatively large number of classes and methods (large size) as well. Table II summarizes characteristics of each subject.
The testing is performed by running the test cases under each context. In other words, the same test cases were run iteratively, once for each of the (selected) contexts. We first executed test cases generated by the Android GUI Ripper tool [10]. These provide the sequences of events associated with GUI tree paths that link the root node to the leaves of the tree, but the resulting statement code coverage on the experimental subjects was low (i.e., on average from 45% to 47%). Since GUI-based approaches have limitations in covering all components, we additionally performed testing focusing on the scenarios that users exercise more frequently and in which faulty behaviors are more likely to occur.
Table III shows the generated and used executing contexts for Open Camera and Subsonic. As mentioned in Section IV-A, the executing contexts to be tested first need to be prioritized and selected because too many test runs are required, which is computation-intensive. To test the normal scenario, we select the context where all resources are on (active). On the other hand, to test the exceptional scenario, we also select the context where all resources are off (inactive). The contexts that might
cause faulty behavior are also ranked highly. For example, faults may occur more often in situations where the SD card is full (which may cause a problem in file processing) and where GPS is on while Wi-Fi is off (which may result in logging wrong location information). When setting these contexts, other resources not involved in these situations are set to inactive. As a result, we selected four out of 32 and six out of 128 executing contexts in Open Camera and Subsonic, respectively.
Since the subjects are open source projects, we can access the bug histories of Open Camera [21] and Subsonic [22]. We manually analyzed the issues in those repositories and extracted the faults that could have been detected if our testing technique had been used. Then, we executed the test cases in the contexts generated using our approach and discovered unacceptable behaviors of the apps (such as crashes or freezes). These bugs were matched to the corresponding faults extracted from the repositories. For example, we found a crash (i.e., a runtime exception: fail to connect to camera service) by executing test cases in the context where the camera is disabled. This crash can be matched to the faults of camera malfunctions or exceptions.
To evaluate the effectiveness of our method of prioritizing contexts, we compare three different sequences of contexts in which test cases run: the generated order T, the reversed order T\(_r\), and the prioritized sequence T\(_p\) (which is ranked according to our prioritization method). The order T is the order generated from our approach but not prioritized. The generated order T and the reversed order T\(_r\) can be regarded as random sequences.
To quantify the capability of the contexts for fault detection, we use a metric called APFD (Average Percentage of Faults Detected) [23]. The APFD is calculated by taking the weighted average of the percentage of faults detected over the life of the suite; higher numbers imply a faster (better) fault detection rate. Let T be a test suite containing n contexts in which test cases run, and let F be a set of m faults revealed by T. Let TF\(_i\) be the position of the first context in an ordering T\(^*\) of T that reveals fault i. The APFD for the ordering T\(^*\) is given by the equation:
\[ APFD = 1 - \frac{TF_1 + TF_2 + \ldots + TF_m}{mn} + \frac{1}{2n}. \]
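As a sanity check, the formula transcribes directly into code. The helper below is a sketch, not the authors' tooling:

```python
def apfd(first_detect, n):
    """APFD for one ordering. first_detect[i] is the 1-based position of
    the first context that reveals fault i; n is the number of contexts."""
    m = len(first_detect)
    return 1 - sum(first_detect) / (m * n) + 1 / (2 * n)

# Toy example: 2 faults over 4 contexts, first revealed at positions 1 and 2:
# 1 - (1+2)/(2*4) + 1/(2*4) = 0.75
print(apfd([1, 2], 4))  # 0.75
```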
We also measure the fault detection rate according to the order of executing contexts.
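A fault-detection-rate curve can be computed from the positions at which each fault is first revealed; the helper below is a minimal, hypothetical sketch:

```python
def detection_rate_curve(first_detect, n):
    """Cumulative percentage of faults detected after the first k contexts,
    for k = 1..n; first_detect[i] is the 1-based position of the first
    context that reveals fault i."""
    m = len(first_detect)
    return [100.0 * sum(1 for pos in first_detect if pos <= k) / m
            for k in range(1, n + 1)]

# 2 faults first revealed by contexts 1 and 3, out of 4 contexts.
print(detection_rate_curve([1, 3], 4))  # [50.0, 50.0, 100.0, 100.0]
```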
B. Results
**Number of detected bugs.** We found 12 out of 38 bugs in Open Camera, and 14 out of 151 bugs in Subsonic (see Table IV). These results show that our testing is useful in detecting faults.
**APFD measure.** For Open Camera, the number of contexts run with test cases (n) is 32, and the number of faults (m) is 12. The APFD values for the three orders (i.e., T (generated order), T\(_r\) (reversed order), and T\(_p\) (prioritized order using our approach)) are calculated as follows:
\[ T: 0.92 = 1-(6+1+1+1+1+9+9+3+1+1+1+1)/384, \]
\[ T_r: 0.62 = 1-(3+9+1+7+1+1+1+1+32+17+17)/384, \]
\[ T_p: 0.97 = 1-(4+1+1+1+1+2+1+1+1+1)/384. \]
For Subsonic, the number of contexts run with test cases (n) is 128, and the number of faults (m) is 14. The APFD values for the three orders are calculated as follows:
\[ T: 0.96 = 1-(2+1+1+1+2+1+1+6+6+1+1+1+1)/1536, \]
\[ T_r: 0.92 = 1-(10+1+1+1+1+1+1+1+5+5+1+1+1+1)/1536, \]
\[ T_p: 0.98 = 1-(1+1+1+1+1+1+1+1+1+1+1+1+3+11)/1536. \]
In both projects, the APFD for T\(_p\) is the highest. Note that, in Subsonic, the number of generated executing contexts is large (i.e., 128) and many of the faults are detected by a small number of the executing contexts; thus, the APFD measures do not differ much across the three orders.
**Fault detection rate.** We present the graphs of the fault detection rate for Open Camera and Subsonic in Fig. 2(a) and Fig. 2(b), respectively. The graphs show the percentage of faults detected versus the fraction of the contexts used, for each compared sequence. For Open Camera, our prioritized sequence (T\(_p\)) reached a 100% fault detection rate after running four executing contexts, while the generated order (T) and the reversed order (T\(_r\)) required 9 and 32 executing contexts, respectively. For Subsonic, our prioritized sequence (T\(_p\)) reached a 100% fault detection rate after running six executing contexts, while the generated order (T) and the reversed order (T\(_r\)) required 10 and 20 executing contexts, respectively. From the results, we can conclude that the order prioritized using our prioritization method results in the earliest detection of the faults.
VI. CONCLUSION AND FUTURE WORK
In our paper, we provide a method for systematically generating various executing contexts from permissions to
test Android applications. To generate the various contexts, the related resources and their possible states are identified from the permissions. Then, the various executing contexts are generated by permuting resource conditions, and the executing contexts are prioritized and selected. We applied our testing method to two open-source projects and showed the method is effective in fault detection.
For future work, we plan to consider more permissions and identify the relations between the resources and those permissions. We also plan to perform a more detailed experiment considering context events in event-based testing of mobile applications.
ACKNOWLEDGMENT
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2014R1A1A2054098). This research was also supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2015-H8501-15-1012) supervised by the IITP (Institute for Information & communications Technology Promotion).
REFERENCES
Abstract—Requirements prioritization plays an important role in driving project success during software development. Literature reveals that existing requirements prioritization approaches ignore vital factors such as interdependency between requirements. Existing requirements prioritization approaches are also generally time-consuming and involve substantial manual effort. These approaches also show substantial limitations in terms of the number of requirements under consideration. There is some evidence suggesting that models could have a useful role in the analysis of requirements interdependency and their visualization, contributing towards the improvement of the overall requirements prioritization process. However, to date, just a handful of studies are focused on model-based strategies for requirements prioritization, considering only conflict-free functional requirements. This paper uses a meta-model-based approach to help the requirements analyst to model the requirements, stakeholders, and inter-dependencies between requirements. The model instance is then processed by our modified PageRank algorithm to prioritize the given requirements. An experiment was conducted, comparing our modified PageRank algorithm’s efficiency and accuracy with five existing requirements prioritization methods. We also compared our results with a baseline prioritized list of 104 requirements prepared by 28 graduate students. Our results show that our modified PageRank algorithm was able to prioritize the requirements more effectively and efficiently than the other prioritization methods.
Keywords—requirement prioritization, requirements interdependencies, meta-model, page-rank
I. INTRODUCTION
Software development is a time- and budget-intensive process. A successfully developed software system depends not only on its correct functioning but also on the value delivered to the stakeholders [1]. In today’s software development culture, where requirement changes are frequent and continuous, the decision of which requirements will be considered first (i.e., requirements prioritization (RP)) in a specific iteration is very important. Requirements prioritization is defined as a process of prioritizing a set of requirements based on different parameters of interest, e.g., risk, cost, time, and inter-dependencies [1]. The prioritized list (if produced for a software release) is developed and used, with the same process being repeated for upcoming iterations. For example, the RP results can be used to plan and select the optimal set of requirements for development in the next release.
A number of RP techniques (e.g., [2][3][4]) have been proposed in the literature, together with some empirical evidence on their evaluation. Genetic Algorithms (GAs) are also being used for prioritizing requirements (e.g., [5][6]) with promising results. In addition, the Analytic Hierarchy Process (AHP) is one of the most discussed prioritization approaches in the research community and is actively being used for RP [7][8][9]. The use of fuzzy logic-based techniques for prioritization of requirements has also been investigated in the literature (e.g., [9][10]). Unfortunately, most of the existing techniques focus only on prioritizing conflict-free requirements (for instance in [11]) and, in many cases, these approaches ignore requirements interdependencies (e.g., [7][12][13][14]). Requirements dependencies play a huge role in the overall software engineering process, and researchers have tried to compute them automatically [15].
Model-Driven Engineering (MDE) is an efficient and effective way of managing software complexity, as well as providing a basis for the systematic development of software at various abstraction levels. MDE has been applied in Requirements Engineering (RE) for structuring, formalizing, and visualizing the requirements in the form of models (e.g., [16][17][18]). The resulting models can be used for generating design models and executable code by using Model-Driven Development (MDD) tools and technologies, and are useful to aid in software analysis during the whole development process (e.g., trade-off analysis [19]). Extending the potential of such models, one can use them for defining the dependencies between requirements, with the goal of automatically performing dependency-based prioritization. A more recent work suggests that the use of the PageRank algorithm [20] for RP [11] is effective for ranking requirements based on dependencies. However, these kinds of approaches do not take non-functional requirements into account and consider only optional requirements without any requirement conflicts. This impedes the use of such RP approaches in realistic scenarios.
To alleviate the aforementioned restrictions, we proposed a meta-model based approach to facilitate the modeling, visualization, and prioritization of requirements and their related test cases [21]. The proposed meta-model borrows concepts from System Modeling Language (SysML) which can be found in other meta-models in the literature (for
instance representing stakeholder information [22], requirements [17] and their relationships [17][23]). The proposed meta-model is capable of modeling requirements along with interdependencies between them and other factors (e.g., risk, cost, time to develop and business value) that are significant for RP. The meta-model is supported by a tool that aids the visualization, modeling, and prioritization of the requirements. These models are integrated with RP and we propose the use of a modified version of the PageRank algorithm in which the initial rank is assigned differently, and it can distinguish conflicting edges. To provide meaningful experimental evidence on the use of such an approach, we evaluated our proposed prioritization algorithm in terms of the following questions:
**RQ1**: Does the modified PageRank algorithm effectively prioritize a set of requirements?
**RQ2**: Does the modified PageRank algorithm efficiently prioritize a set of requirements?
Based on these research questions, our experiment compares our proposed approach with a list of 104 requirements (onwards called the baseline) prioritized by 28 graduate students registered in the Advanced Requirements Engineering course at a private university.
The rest of the paper is structured as follows: in Section II we discuss the related work, while in Section III we discuss our proposed model-based RP approach. In Section IV we demonstrate our proposed model-based requirements prioritization approach on a small example case, while in Section V we evaluate our modified PageRank algorithm by comparing the results with the baseline and other algorithms. In Section VI we discuss the results of this paper, and in Section VII we discuss the threats to validity. Finally, in Section VIII we describe the limitations of our work and conclude the paper.
**II. RELATED WORK**
Recently, RP techniques using AI have been proposed and deployed as a key component of an efficient and effective process. For example, fuzzy logic and evolutionary algorithms, along with traditional ones like AHP [24], are widely used both independently and in combination [25][26]. An evolutionary algorithm called Interactive Genetic Algorithm (IGA) [27] was used to prioritize forty-nine functional requirements based on a real case. The algorithm was compared with AHP to determine the reduction in pairwise comparison effort, and the results showed that IGA outperformed AHP by decreasing the elicitation effort by 10%. Another GA-based technique, the Least-Square-Based Random Genetic Algorithm (LSRGA), was proposed to prioritize requirements and was empirically evaluated to measure its performance in comparison to IGA [5]. Gradient Descent Ranking (GDRanking) [28] is a machine learning approach for prioritization of requirements elicited through Quality Function Deployment (QFD) [28]. This approach has two distinct phases for pairing and balancing both the functional and non-functional requirements. The proposed method was evaluated for four pairs of both functional and non-functional requirements. However, issues like requirement dependencies, updating the requirements’ ranks when new requirements are added, and scalability are not considered. Another machine learning based approach called Case-Based Ranking (CB-Ranking) is proposed for requirements prioritization [29]. CB-Ranking uses pair-wise comparisons (e.g., AHP), and the elicitation of the candidate priority between requirements relies on Boolean values. This gives less noisy data and concrete priority values for the pairs. Their results showed that CB-Ranking is more effective and accurate than AHP as the size of the dataset increases. Further, the experimental results showed that CB-Ranking performs better than AHP by reducing the number of disagreements.
Nevertheless, this work relies on a rather small data set and does not take into account the dependency factor between requirements. Interested readers can have a look at the comprehensive review on RP techniques [30].
AI techniques are helping RP by providing better support for handling non-functional RP, prioritizing a large number of requirements for large-scale software systems, and tackling the precision and accuracy issues. However, there is a need to evaluate RP techniques in more realistic situations by taking into account factors like the interdependency between requirements and conflicting requirements. In this context, a recent study proposed an i* model-based requirement prioritization technique using the PageRank algorithm [11]. This work considered only optional and conflict-free functional requirements. To the best of our knowledge, only one recent study [11] considered the information available in the models for prioritization. Moreover, our approach seeks to improve previous approaches by taking into account other factors for requirements prioritization such as risk, business value, cost, dependencies, and even conflicts. SysML alone is also not applicable in our case, since it lacks support for relevant factors like risk, business value, and cost. Our approach is also independent of the type (i.e., functional/non-functional) of requirements being prioritized. Prioritizing requirements with conflicts can help in providing decision support for deciding the resolution of conflicting requirements based on priorities.
**III. PROPOSED APPROACH**
This section describes our proposed approach for requirements prioritization using the PageRank algorithm and the meta-model (developed in Ecore) behind the approach. We also provide (prototype) tool support that aims to aid requirement visualization, analysis, and prioritization.
Our tool-supported approach allows the generation of an instance model from a spreadsheet exported from a requirements management tool (e.g., DOORS). At the moment, the requirements dependencies have to be entered manually by the requirements analyst, either in a spreadsheet or visually in our tool; we aim to automate this step through natural language processing. We also considered other vital factors for RP, such as risk, cost, business value, and time to develop. These optional factors can be modeled manually or provided in the spreadsheet. The generated model is based on the meta-model shown in Fig. 1, which allows modeling of both functional and non-functional requirements with dependencies.
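As a sketch, the spreadsheet-import step described above might look like the following (the column names "ID", "Title", "Risk", "Cost", "BV", and "DependsOn" are our assumptions for illustration, not the tool's actual export format):

```python
import csv
import io

# Hedged sketch of the spreadsheet-import step: each row becomes one
# requirement record, and a semicolon-separated "DependsOn" cell becomes
# one depends edge per referenced id. Column names are assumptions.
def load_requirements(csv_text):
    reqs, edges = {}, []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rid = row["ID"].strip()
        # Keep the optional factors (risk, cost, business value) if present.
        reqs[rid] = {k: row[k] for k in ("Title", "Risk", "Cost", "BV") if row.get(k)}
        for dep in row.get("DependsOn", "").split(";"):
            if dep.strip():
                edges.append((rid, dep.strip(), "depends"))
    return reqs, edges
```

The returned requirement dictionary and edge list correspond to the model elements and `RequirementRelationship` instances of the meta-model.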
**A. The Meta-Model and Concrete Syntax**
Our requirements model borrows concepts from SysML and other models in the literature. In our case, each requirement has an id, title, description, rationale, and other optional properties vital to the requirements prioritization process. The optional properties (i.e., risk, cost, and businessValue) can take values between one and ten, representing the risk factor associated with a requirement, the expected development cost, and the value it adds to the overall project, respectively.
An optional initial priority (StakeHolderPriority) is assigned to the requirements using the MoSCoW (Must have, Should have, Could have and Would have) priorities [31]. In our approach, it is recommended to model the optional properties (if available) for better prioritization. The mbrpPriority attribute is used for the automated priority calculation by our modified PageRank algorithm.
Stakeholder(s) information can also be modeled and linked to requirements. Each Requirement contains multiple instances of RequirementRelationship which can represent different types of dependencies (e.g. depends, conflicts, derives, defines, refines and realizes). A Requirement can also be linked to multiple test cases. Note here that it can be extended to support the requirements analysis phase in a more comprehensive way and with more factors.
We have developed a concrete syntax for our instance model in Sirius. We used Sirius because it allows the generation of Eclipse-based model editors. The tool allows end-users to model the requirements and view the model visually. TABLE I. shows our concrete syntax with respect to the meta-model elements.
Functional requirements are represented as yellow notes carrying the id and title information. Non-functional requirements are represented as purple notes with the same information. A Stakeholder is represented with a user icon. In addition, a TestCase is represented as an orange note with its id and priority. The different types of dependencies are labeled on the edges and are also color coded as follows: Depends (black arrow), Derives (grey arrow), Refines (blue arrow), Conflicts (red arrow) and Realizes (black arrow with “<realizes>” label). The association of requirements with the respective stakeholder(s) is shown through dotted green lines.
TABLE I. CONCRETE SYNTAX
<table>
<thead>
<tr>
<th>Meta-Model Element</th>
<th>Concrete Syntax</th>
<th>Source</th>
<th>Target</th>
</tr>
</thead>
<tbody>
<tr>
<td>Requirement</td>
<td>Requirement</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>NonFunctionalRequirement</td>
<td>Requirement</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Stakeholder</td>
<td>Stakeholder</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>TestCase</td>
<td>Test Case</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>linkedRequirements</td>
<td>linkedRequirements</td>
<td>Stakeholder</td>
<td>Req.</td>
</tr>
<tr>
<td>Depends</td>
<td>&lt;&lt;depends&gt;&gt;</td>
<td>Req.</td>
<td>Req.</td>
</tr>
<tr>
<td>Derives</td>
<td>&lt;&lt;derives&gt;&gt;</td>
<td>Req.</td>
<td>Req.</td>
</tr>
<tr>
<td>Refines</td>
<td>&lt;&lt;refines&gt;&gt;</td>
<td>Req.</td>
<td>Req.</td>
</tr>
<tr>
<td>Conflicts</td>
<td>&lt;&lt;conflicts&gt;&gt;</td>
<td>Req.</td>
<td>Req.</td>
</tr>
<tr>
<td>Realizes</td>
<td>&lt;&lt;realizes&gt;&gt;</td>
<td>Req.</td>
<td>Req.</td>
</tr>
</tbody>
</table>
B. Requirements Prioritization
Requirements are prioritized by following the steps shown in Fig. 2. The requirement model is fed into the tool and, for each requirement, the initial rank, cost contribution, risk contribution, and links contribution are calculated. All the calculated values are summed, and the sum is assigned as the priority of the requirement. These steps are repeated for each requirement, and the tool then sorts the list based on the resultant priorities, producing a prioritized list of requirements. We further explain each step in detail in this section.
The initial rank is assigned as shown in eq. (1), where \( N_{req} \) represents the total number of requirements in the requirements model and the constant 0.625 controls the degree to which the priorities are dictated by the interdependencies.
\[
R_{i} = N_{req} \times 0.625 \quad (1)
\]

The cost contribution to the priority is also calculated for each requirement and is added to the priority of the requirement. The cost contribution (to the priority) of a requirement is calculated by taking the ratio of the \( \text{businessValue} \) and \( \text{cost} \) attributes of the Requirement. Note that if the values required for the calculation of the cost contribution are not modeled, the algorithm will just add one to the priority of the respective requirement.
The algorithm adds the risk contribution to the priority of the requirement. The risk contribution is calculated by taking the ratio of \( \text{businessValue} \) and \( \text{riskFactor} \) attributes of the Requirement. The algorithm will increment the priority value by one if the required values for the calculation of risk contribution to the priority are missing.
The optional initial stakeholder priority is also added to the priority of the requirements. These priorities are modeled using the StakeholderPriority enumeration in the meta-model. As in the MoSCoW method, the literals contribute to the priority in an ordinal way: \( \text{MustHave} \) contributes 9.0 to the priority, \( \text{ShouldHave} \) contributes 6.0, \( \text{CouldHave} \) contributes 3.0, and \( \text{WouldHave} \) contributes 1.0. Based on the literal selected for the requirement, the corresponding contribution is added to the priority of the requirement.
For the calculation of the link contributions, the algorithm extracts all the dependencies among dependent requirements. Each dependency edge (incoming) is weighted by dividing the current priority of the source (of the dependency) requirement equally among all dependency edges except for the edges representing a Conflict. The weighting for each dependency edge is done as shown in eq (2).
\[
L_{cont_i} = \frac{\text{Priority}_{req}}{\text{len}(\text{relatedReqs})} \quad (2)
\]
In case of conflict edges, the contribution of the edge to the requirement priority is evaluated as per the following conditions:
- If the source requirement of the edge has a higher priority than the target of the edge, then half of the weight of the edge is subtracted from the priority of the target requirement.
- Otherwise, half of the weight of the edge is added to the priority of the edge’s source requirement.
The link contribution is added to the overall priority if the edge is not representing a conflict.
These steps (mirrored as steps 1, 2, 3, and 4 in Fig. 2) are repeated for each requirement; the links are then weighted, and the link contributions are added to the priority of each requirement. The algorithm then sorts the requirements based on the mbrpPriority and generates a prioritized list of requirements.
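A minimal sketch of these steps, assuming simple attribute names and interpreting the conflict rule exactly as the two conditions above (this is an illustration, not the authors' implementation):

```python
# Sketch of the prioritization steps: initial rank, MoSCoW, cost and risk
# contributions, then link contributions split over outgoing edges.
def prioritize(reqs, edges, n_req=None):
    """reqs: id -> dict with optional 'bv', 'cost', 'risk', 'moscow';
    edges: list of (source_id, target_id, kind) dependency edges."""
    MOSCOW = {"MustHave": 9.0, "ShouldHave": 6.0, "CouldHave": 3.0, "WouldHave": 1.0}
    n = n_req if n_req is not None else len(reqs)
    prio = {}
    for rid, r in reqs.items():
        p = n * 0.625                                  # initial rank, eq. (1)
        p += MOSCOW.get(r.get("moscow"), 0.0)          # optional MoSCoW contribution
        # Cost and risk contributions default to 1 when attributes are missing.
        p += r["bv"] / r["cost"] if "bv" in r and "cost" in r else 1.0
        p += r["bv"] / r["risk"] if "bv" in r and "risk" in r else 1.0
        prio[rid] = p
    # Link contributions: split each source's current priority equally over
    # its outgoing edges, eq. (2); conflict edges add/subtract half the weight.
    contrib = {rid: 0.0 for rid in reqs}
    for rid in reqs:
        out = [e for e in edges if e[0] == rid]
        if not out:
            continue
        w = prio[rid] / len(out)
        for src, dst, kind in out:
            if kind == "conflicts":
                if prio[src] > prio[dst]:
                    contrib[dst] -= w / 2              # penalize the lower-priority target
                else:
                    contrib[src] += w / 2              # boost the lower-priority source
            else:
                contrib[dst] += w
    final = {rid: prio[rid] + contrib[rid] for rid in reqs}
    return sorted(final.items(), key=lambda kv: -kv[1])
```

For a single requirement with businessValue 6, cost 2, risk 2 (the risk value is inferred from the stated risk contribution of 3), and "Must Have" in a four-requirement model, this yields 2.5 + 9.0 + 3 + 3 = 17.5, matching R1's values in the worked example of Section IV.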
IV. DEMONSTRATION OF THE PROPOSED APPROACH
To illustrate our proposed approach, we present a small example requirements dataset with four requirements. TABLE II. shows the IDs of the requirements and their relations (Link column), business value (BV column), cost, initial MoSCoW priority (IP column), and risk factors associated with a requirement (Risk).

TABLE II. REQUIREMENTS DATA

<table>
<thead>
<tr>
<th>ID</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td>R1</td>
<td>-</td>
</tr>
<tr>
<td>R2</td>
<td>Depends on 1</td>
</tr>
<tr>
<td>R3</td>
<td>Refines 2, Depends on 1</td>
</tr>
<tr>
<td>R4</td>
<td>Depends on 1, Depends on 3</td>
</tr>
</tbody>
</table>
All four requirements have the initial MoSCoW priority “Must Have”. As per our algorithm, an initial rank is assigned to each requirement per eq. (1); in this case, every requirement gets an initial rank of 4 × 0.625 = 2.5. The MoSCoW priorities are then added to the initial rank; since all the requirements are tagged “Must Have”, each contributes 9.0 to the priority (making the overall priority 2.5 + 9.0 = 11.5, shown in the PI column of TABLE III.). After this step, the risk and cost contributions are summed (shown in the CR column of TABLE III.) and added to the overall priority of each requirement. The cost and risk contributions are calculated as discussed in Section III (e.g., for R1 the cost contribution is 6/2 = 3 and the risk contribution is also 3, eventually contributing 6 to the priority).
The priorities resulting so far are treated as the current priorities and are shown in the Px column of TABLE III. To calculate the link contributions, the algorithm takes the requirements one by one and divides each current priority among the outgoing edges. R1 has no outgoing links, so this step is skipped. For R2, the priority value (15.8) is assigned to the edge (R2 to R1) and added to the priority of R1. For R3, the current priority (15.3) is divided between its two outgoing edges, incrementing the priorities of R1 and R2 by 7.6 each. For R4, the current priority (14.7) is divided equally between its two edges, resulting in an increment of 7.3 in the priorities of R1 and R3. The total edge contribution for each requirement is shown in the EC column of TABLE III. The last column (Pf) shows the final priority of each requirement as calculated by our modified PageRank algorithm.
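The edge-splitting arithmetic above can be replayed in a few lines (the Px values are those stated in the text; R1's 17.5 is its PI of 11.5 plus its cost/risk contribution of 6, and the text rounds 7.65 and 7.35 to 7.6 and 7.3):

```python
# Current priorities after the initial rank, MoSCoW, cost and risk steps.
px = {"R1": 17.5, "R2": 15.8, "R3": 15.3, "R4": 14.7}
# Outgoing dependency edges from TABLE II: source -> list of targets.
out = {"R2": ["R1"], "R3": ["R2", "R1"], "R4": ["R1", "R3"]}

ec = {r: 0.0 for r in px}              # edge contribution received per requirement
for src, targets in out.items():
    w = px[src] / len(targets)         # eq. (2): split the priority over the edges
    for t in targets:
        ec[t] += w

pf = {r: px[r] + ec[r] for r in px}    # final priorities (Pf)
```

Sorting `pf` in descending order reproduces the ranking R1 > R2 > R3 > R4 discussed below.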
TABLE III. REQUIREMENTS PRIORITIES

<table>
<thead>
<tr>
<th>ID</th>
<th>PI</th>
</tr>
</thead>
<tbody>
<tr>
<td>R1</td>
<td>11.5</td>
</tr>
<tr>
<td>R2</td>
<td>11.5</td>
</tr>
<tr>
<td>R3</td>
<td>11.5</td>
</tr>
<tr>
<td>R4</td>
<td>11.5</td>
</tr>
</tbody>
</table>
TABLE II. shows that R4 is of the highest value to the stakeholder, but it has dependencies and thus requires R1 and R3 to be developed first. In this case, our algorithm correctly ranked the example dataset of requirements.
V. EVALUATION
For the evaluation of our approach, this section compares our results with existing algorithms and with a baseline obtained from an experiment performed with human subjects. The baseline is required because we wanted to evaluate our proposed approach for both accuracy and efficiency. We conducted an experiment in which 30 graduate students prioritized a dataset of 104 requirements; the 28 valid lists were averaged, and the average priority of each requirement was taken as the baseline. The same 104-requirement dataset was then prioritized using AHP, Analytic Network Process (ANP), fuzzy AHP (FAHP), fuzzy ANP (FANP), IGA, and our modified PageRank algorithm. The obtained lists were compared with the baseline using a statistical test. Fig. 4 shows the overall flow of our experiment. The rest of the section explains each step of the experiment in more detail.
A. Preparing the Baseline
The evaluation of our proposed approach for RP is done on a dataset of 104 requirements used in the development of a smart home system. The dataset contained the information required for requirements prioritization (such as dependencies, expected development time, etc.). In order to compare the results of our approach, we conducted an experiment to obtain a baseline prioritized list of requirements. The experiment was conducted in a purely academic setup in which 30 graduate students participated. The students were enrolled in the “Advanced Software Requirements Engineering” course and had already worked on at least one real software development project. A one-hour session was conducted prior to the experiment in which the students were briefed about the requirements dataset. The students assigned the initial stakeholder priorities, risk and cost contributions, and dependency factors as per their own understanding. For each requirement, all the mentioned values were summed, and the sum was assigned as the priority of the requirement. All students prioritized the dataset, but two submissions were incomplete and were excluded. Finally, a total of 28 submissions were considered, and the baseline list was obtained by averaging the 28 priorities for each requirement.
B. Evaluation Experiment Execution
We prioritized the same dataset (used for preparing the baseline) with AHP [24], ANP [26], FAHP, FANP, IGA [27], and our modified PageRank algorithm. We selected these techniques since they are widely used for multi-criteria decision making and deal with quantitative data. For example, AHP is a pair-wise comparison technique used for prioritization and is among the most widely used and studied requirements prioritization techniques.
The tool that we used for AHP is named “Super Decisions”. It automates the manual input of data into models and helps in performing the pair-wise comparisons. The tool has been used for AHP prioritization in many fields of business and marketing [2] and is also used for requirements prioritization, especially in multi-criteria decision-making processes. We considered several optional factors: stakeholder’s priority, risk, and cost, and prioritized the requirements dataset according to them. The tool allowed us to enter these factors as criteria, to enter all the requirements in the form of clusters, and to specify their relation to the criteria. We created different clusters of the modules to represent the requirements as the third level of the hierarchy. After creating this model, we performed the pair-wise comparisons of the requirements according to the criteria and obtained the prioritized list of requirements.
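As an illustration of the core AHP computation that such tools automate, priority weights can be derived from a reciprocal pairwise-comparison matrix via its principal eigenvector (the matrix values below are invented):

```python
import numpy as np

# Toy AHP step: derive priority weights from a reciprocal
# pairwise-comparison matrix via its principal eigenvector.
def ahp_weights(pairwise):
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)   # principal eigenvector
    return v / v.sum()                               # normalize to sum to 1

# Three criteria: the first moderately preferred to the second and
# strongly preferred to the third (values on Saaty's 1-9 scale).
A = [[1,   2,   5],
     [1/2, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(A)
```

The resulting weight vector sums to one, with the largest weight on the most preferred criterion; a full AHP run repeats this per criterion and aggregates over the hierarchy.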
The Analytic Network Process (ANP) is a generalization of AHP and also a multi-criteria decision-making technique used for prioritization. In ANP, the number of comparisons is almost double that of AHP because of the bidirectional relationships between the clusters. We implemented the ANP process using the “Super Decisions” tool, which supports the bidirectional relation between the clusters and the requirements. The bidirectional arrow defines the relationship of each factor to the elements of the alternative cluster, and the elements in the alternative cluster also depend on the factors of the criteria cluster. The tool supported all pair-wise comparisons of the clusters according to the model and allowed us to rank the comparisons. After ranking the comparisons, the tool calculated the normalized priorities.
We also used Fuzzy AHP (FAHP), the fuzzy version of AHP. This technique is also a multi-criteria decision-making technique and handles both qualitative and quantitative data. We used a fuzzy inference system (FIS) in MATLAB to rank the dataset with Fuzzy-AHP.
Fuzzy-ANP is the fuzzy extension of ANP, just as Fuzzy-AHP extends AHP. For Fuzzy-ANP, we created different FIS files corresponding to the Fuzzy-ANP clusters and then ran our dataset through them to obtain the priorities.
**Fig. 4. Process of the evaluation experiment**
The IGA was run with its default setup, and the fitness function checked whether the top 7 known requirements appeared in the top 20 of the obtained lists. The IGA would stop either when all of the top 7 requirements were listed at the top of the obtained list or when the predefined number of iterations was completed. In our case, we limited the iterations to 50,000. This took around 60 minutes on a 2nd-generation Intel Core i3 (2.20 GHz) machine with 4 GB of RAM. Note that, as an initial population, we provided ten lists obtained from the students.
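The stopping criterion described above can be sketched as follows (function names are ours, not the IGA implementation's):

```python
# The run halts once all known top requirements appear in the top `window`
# positions of a candidate list, or once the iteration budget is spent.
def converged(candidate_list, known_top, window=20):
    return set(known_top).issubset(candidate_list[:window])

def should_stop(candidate_list, known_top, iteration, max_iterations=50_000):
    return converged(candidate_list, known_top) or iteration >= max_iterations
```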
The proposed PageRank algorithm was used to prioritize the created requirement model by using our own tool implementation. Our tool has the option of loading a requirement list from a comma-separated values file. We loaded the dataset from a file and the model was automatically generated by our tool. We manually verified the correctness of the generated model. We then prioritized the requirements model and a sorted prioritized list was generated and written to a file within seconds.
**C. Experimental Results and Analysis**
To answer our RQ1 (Does the modified PageRank algorithm effectively prioritize a set of requirements?) we first checked the normality of the prioritized lists obtained from AHP, ANP, FAHP, FANP, IGA, and modified PageRank. Since our observations are drawn from an unknown distribution, we applied the Wilcoxon signed-rank test [32] to evaluate whether there is any significant difference between the generated RP lists and the baseline, without making any assumption about the distribution.
We found that all the lists obtained from the selected techniques (including modified PageRank) produced different results as compared to the baseline. The p-values of the applied statistical test are listed in TABLE IV. From our results, we infer that there is a statistical difference between all data sets at the 0.05 level. To quantify the actual difference, we computed Cohen’s d effect size [33]; the result for each technique is shown in TABLE IV. Cohen’s d compares two data sets and returns the standardized difference expected between a sample from one set and a randomly selected sample from the other. According to Cohen [33], the effect can be small (<0.2), medium (around 0.5), or large (>0.8).
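A sketch of this comparison in Python, assuming SciPy's implementation of the Wilcoxon signed-rank test and a pooled-standard-deviation form of Cohen's d (one of several common variants):

```python
import numpy as np
from scipy.stats import wilcoxon

# Cohen's d with a pooled standard deviation (one common variant).
def cohens_d(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return abs(a.mean() - b.mean()) / pooled

# Paired, distribution-free comparison of a technique's priorities
# against the baseline priorities for the same requirements.
def compare_to_baseline(priorities, baseline):
    stat, p = wilcoxon(priorities, baseline)
    return p, cohens_d(priorities, baseline)
```

For each technique, the returned p-value answers whether the list differs from the baseline, and the effect size quantifies by how much.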
**TABLE IV. THE P-VALUES AND THE EFFECT SIZE**
<table>
<thead>
<tr>
<th>Technique</th>
<th>P-Value (Wilcoxon signed-rank)</th>
<th>Effect Size</th>
</tr>
</thead>
<tbody>
<tr>
<td>PageRank</td>
<td>0.004</td>
<td>0.1</td>
</tr>
<tr>
<td>AHP</td>
<td>p &lt; 2.2e-16</td>
<td>1.41</td>
</tr>
</tbody>
</table>
**Fig. 5. Comparison of the accuracy results**
We found that the list produced by PageRank is closer to the baseline than the lists produced by the other techniques; we therefore conclude that our modified PageRank algorithm effectively prioritized the list of 104 requirements. Fig. 5 shows the exact differences of each technique from the baseline.
To answer our RQ2 (Does the modified PageRank algorithm efficiently prioritize a set of requirements?) we recorded the time taken by each technique to prioritize our dataset of 104 requirements. Our results show that ANP and AHP took the most time, because these techniques require substantial manual effort.
IGA took more than one hour to converge upon a solution, while FANP and FAHP outperformed IGA in terms of time. The FAHP technique was even more efficient in terms of time when directly compared to the FANP technique. We have also recorded the rule coding time for FANP, and the results are shown in Fig. 6.
Our proposed approach produced the prioritized list of requirements in less than a second. Note that our approach generates the requirements model automatically from the .csv file. Based on the time results for each technique, we can clearly see that the modified PageRank algorithm efficiently prioritized the list of 104 requirements. The time (in minutes) taken by each technique is shown in Fig. 6.
TABLE IV. THE P-VALUES AND THE EFFECT SIZE (CONTINUED)

<table>
<thead>
<tr>
<th>Technique</th>
<th>P-Value (Wilcoxon signed-rank)</th>
<th>Effect Size</th>
</tr>
</thead>
<tbody>
<tr>
<td>ANP</td>
<td>p &lt; 2.2e-16</td>
<td>1.37</td>
</tr>
<tr>
<td>FAHP</td>
<td>p = 0.003</td>
<td>0.2</td>
</tr>
<tr>
<td>FANP</td>
<td>p &lt; 2.6e-13</td>
<td>1.2</td>
</tr>
<tr>
<td>IGA</td>
<td>p &lt; 2.9e-11</td>
<td>1.71</td>
</tr>
</tbody>
</table>

Fig. 6. Comparison of time taken by all techniques
VI. DISCUSSION
In this paper, we used the PageRank algorithm on models for RP and evaluated our approach. The results obtained from this evaluation show that all the selected techniques produced different results when compared to the baseline. Some techniques (e.g., Fuzzy-AHP and PageRank) were able to produce results closer to the baseline than the others. Our results suggest that prioritization techniques that consider requirement dependencies can produce results closer to those of human prioritizers. In cases where the requirements dependencies are hard to determine, automated dependency extraction approaches can be used. Since most of the approaches have no way to represent conflicting requirements or competing stakeholder interests, a multi-criteria decision support system with the ability to use requirement-level dependency information and conflict resolution should be considered for requirements prioritization.
In addition, the results obtained from the experimental evaluation show that manual techniques (even the tool-supported ones) are not efficient and consume a huge amount of computational time. It is important to mention that evolutionary algorithms might not converge upon a solution (in the case of larger datasets) in a reasonable amount of time.
VII. THREATS TO VALIDITY
The internal validity threats are related to our experimental design. The experiment was not conducted in a real industrial setup but in an academic environment. The participants of the experiment were students rather than industry professionals, which could affect the final results. Nevertheless, we ensured that the students were familiar with the requirements prioritization topic and had worked on a real software development project. While it is possible that prioritizations created by industrial engineers would yield different results, there is some scientific evidence [34] supporting the use of students in software engineering experiments.
Some external validity threats were addressed by selecting a diverse set of requirements. We argue that having access to a realistic dataset with a reasonably large number of requirements can be representative. The size of the dataset is nevertheless a limitation that makes our results less generalizable to large-scale scenarios where thousands of requirements are to be considered. More studies are needed to generalize these results to other domains and RP methods.
VIII. CONCLUSION
In this study, we considered dependencies for requirements prioritization with the help of a meta-model and the PageRank algorithm. Our approach helps in modeling requirement dependencies and prioritizing the requirements based on them. Our approach is implemented in a tool that helps in visualizing the requirements along with their dependencies. We evaluated our approach on a dataset of 104 requirements. We conducted an experiment to obtain a baseline so that we could compare the results of our algorithm and other state-of-the-art techniques to a ground truth (baseline). We compared the results of our PageRank-based prioritization with the baseline and also with five other techniques (i.e., AHP, ANP, FAHP, FANP, and IGA). We found that our modified PageRank algorithm prioritized the list of requirements effectively and efficiently while taking into account the dependencies between requirements.
As future work, our research target includes the automated extraction of dependencies based on natural language requirements. Adding more representations and viewpoints to enhance the visualization and analysis of requirements is also one of our future concerns.
ACKNOWLEDGMENT
This work has been supported by and received funding from the XIVT project (https://itea3.org/project/xivt.html). This work was also supported by the Electronic Component Systems for European Leadership Joint Undertaking under grant agreement No. 737494 (MegaM@Rt2).
REFERENCES
A Portable Toolkit for Providing Straightforward Access to Medical Image Data\textsuperscript{1}
Robert J.T. Sadleir, BEng
Paul F. Whelan, PhD
Padraic MacMathuna, MD
Helen M. Fenlon, MB
This work was supported by the Irish Cancer Society, the Health Research Board (of Ireland) and the Science Foundation Ireland.
Address correspondence to R.J.T.S.
Vision Systems Laboratory,
School of Electronic Engineering
Dublin City University
Dublin 9,
Ireland.
Tel: +353 1 7008592
Fax: +353 1 7005508
E-mail: Robert.Sadleir@dcu.ie
Exhibit Space Number: 9316
\textsuperscript{1} From the Vision Systems Group, School of Electronic Engineering, Dublin City University, Ireland. (R.J.T.S., P.F.W.); the Gastrointestinal Unit, Mater Misericordiae Hospital, Dublin 7, Ireland. (P.MacM); and the Department of Radiology, Mater Misericordiae Hospital, Dublin 7, Ireland. (H.M.F.);
A Portable Toolkit for Providing Straightforward Access to Medical Image Data
Abstract
Computer aided medical image analysis is emerging as an important area in radiology research. Analysis of medical images usually involves the development of custom software applications that interpret, process and ultimately display medical image data. The interpretation stage is generally the same for all such applications and involves decoding image data and presenting it to the application developer for further processing. This article introduces a toolkit created specifically for interpreting medical image data and thus acting as a platform for medical imaging application development. The toolkit, which will be referred to as NeatMed, is intended to reduce development time by removing the need for the application developer to deal directly with medical image data. NeatMed was implemented using Java, a programming language with a range of attractive features including portability, ease of use and extensive support material. NeatMed was developed specifically for use in a research environment. It is straightforward to use, well documented and is intended as an alternative to commercially available medical imaging toolkits. NeatMed currently provides support for both the digital imaging and communications in medicine (DICOM) and Analyze medical image file formats and its development is ongoing. Support material including sample source code is available via the Internet and links to related resources are also provided. Most importantly NeatMed is freely available and its continuing development is motivated by requests and suggestions received from end users.
One sentence summary: This article introduces a freely available and user-friendly Java toolkit that is intended to act as a platform for the development of custom medical imaging applications.
Introduction
Custom medical imaging applications are becoming more common place. This is primarily driven by the increasing role of computer aided image processing and analysis in radiology research. An important and common stage in the development of such applications is the interpretation of medical image data. This data is generally stored in accordance with the Digital Imaging and Communications in Medicine (DICOM) standard (1) (summarised by Mildenberger et al. (2)). Interpretation of medical images involves decoding the relevant DICOM data and making it readily available to the application developer for analysis and display. Our group has developed a Java based medical imaging toolkit to facilitate the rapid development and deployment of medical imaging applications in a research environment. Over the past number of years this toolkit has been used internally as a platform for the development of a number of research applications in the areas of CT colonography (3, 4), MRCP (5) and more recently functional MRI. In the course of this work the toolkit has been exposed to a wide range of medical images obtained from various imaging modalities.
This medical imaging toolkit, which will be referred to as the NeatMed application developer's interface (API), was developed using the Java programming language (Sun Microsystems, Mountain View, Calif.). The use of Java has previously been demonstrated in the development of custom medical imaging applications. Fernandez-Bayo et al. describe a web based Java viewer developed for use with a custom web server (6), Bernarding et al. describe a framework for the development of DICOM server applications implemented using Java (7) and Mikolajczyk et al. describe a Java environment for the analysis of positron emission tomography (PET) image data (8).
The choice of Java for implementing the NeatMed API was also influenced by a number of its key features:
- **Ease of use:** Java is a modern programming language that was designed with simplicity in mind. Many of the complexities that are associated with other programming languages have been omitted while much of the power and flexibility has been retained. This makes Java very easy to learn and use, particularly in the case of novice programmers.
- **Level of support:** Although Java is a relatively new programming language, there is a significant amount of support material available. Numerous texts have been written dealing with all aspects of the language. In addition: tutorials, sample source code, API documentation and freely available integrated development environments (IDEs) can all be accessed via the Internet.
- **Portability:** Java is a multi-platform programming language. This means that a Java application developed on one operating system (e.g. Mac OS) can be deployed on a number of other different operating systems (e.g. Windows, Solaris and Linux). This is sometimes referred to as the “write once, run anywhere” paradigm. In theory, the use of a multi-platform programming language significantly increases the potential user base with little or no extra development overhead.
- **Core functionality:** The core Java libraries encapsulate an extensive range of functionality that can be easily reused to create reliable, diverse and powerful applications. Some of the main capabilities supported by the core libraries include: networking, file IO, image processing, database access and graphical user interface development.
- **Extension APIs & toolkits**: There are a large number of extension APIs and toolkits that can be used in conjunction with the core Java libraries. These extensions usually deal with a particular speciality such as advanced image processing, 3D graphics or speech recognition.
The concept of a Java based medical imaging toolkit is not entirely new. In fact there are a number of commercial toolkits already available: the DICOM Image I/O Plugin (Apteryx, Issy-les-Moulineaux, France), the LEADTOOLS Medical Imaging suite (LEAD Technologies, Charlotte, N.C.) and the Java DICOM toolkit (Softlink, Halle Zoersel, Belgium). NeatMed was developed as a freely available alternative to these commercial toolkits. It is primarily intended for use in a research environment, specifically for dealing with offline medical image data. A major benefit associated with NeatMed is that it was designed with simplicity and ease of use in mind. This is especially important as it makes NeatMed accessible to all those involved in medical imaging research, including radiologists, computer scientists and engineers. Other freely available toolkits, e.g. ImageJ (9) and jViewbox (10), facilitate the development of applications that deal with DICOM images; however, they lack the low-level functionality required to create applications that deal with all aspects of the DICOM file format.
The NeatMed API was released prior to the 88th scientific assembly and annual meeting of the Radiological Society of North America. This article is intended to provide an introduction to the NeatMed API and deals with implementation issues, toolkit structure, sample applications, support material and current API status. The reader is encouraged to visit the official NeatMed website (9) for further information about the NeatMed API and associated resources. The NeatMed API is distributed in accordance with the terms and conditions laid out in the GNU Lesser General Public License. This licence was selected to ensure that the NeatMed API is accessible to all potential users.
**API Implementation**
NeatMed was developed using the Java programming language. Java was created by Sun Microsystems in the early '90s. It was initially intended for the development of software for use in the consumer electronics industry (e.g. set-top boxes). Unfortunately, the Java development team was unable to find a target market for their new programming language. Forced to reconsider, they decided to redeploy Java as an Internet technology. Java programs known as Applets first appeared on the Internet in 1994. These interactive Applets were embedded in standard hypertext mark-up language (HTML) web pages and greatly enhanced the previously static Internet. The Java programming language can also be used for the development of standalone client-side applications, which can run independently of a web browser and are not subjected to the same security restrictions as Applets. The core Java libraries maintained by Sun Microsystems are used as the foundation for the development of any Java application. These libraries can be used in conjunction with an extension API in order to develop specialised applications. See (11) for further information about the Java programming language.
An extension API is a set of classes that can be instantiated by a programmer to create a particular type of application, thus facilitating software reuse. NeatMed is an example of an extension API that can be used for the development of applications that deal with offline medical image data. A large number of Java APIs exist; these deal with a broad range of applications, ranging from communicating with the serial and parallel ports to advanced image processing. The set of classes representing an API is deployed in some type of library. Java provides a packaging tool that can be used to package a set of class files and associated resources into a Java archive or JAR file. In order to be useful, an API must be well documented. Java provides a documentation tool called Javadoc that allows an API developer to document software as it is being written. The resulting documentation provides detailed information about each class, method and variable that is defined in the associated API. The structure of Javadoc documentation is more or less the same for every API. This makes it very easy for programmers to familiarise themselves with a new API once they are comfortable with the basic Javadoc documentation structure. The API documentation is generated in HTML and can be viewed using any standard web browser.
Java has a wide range of benefits associated with it; however, there are also some limitations. One example is performance: because Java is a multi-platform programming language, the byte code (i.e. binary form) that represents a Java program is interpreted and not executed directly. This reduces the performance of a Java program compared to a natively executed program. Overall, however, the benefits associated with Java (listed previously) far outweigh the drawbacks, hence its selection for the development of NeatMed.
API Overview
The NeatMed API is a group of core and support classes that can be used to interpret, represent and manipulate images and related data that are stored in DICOM compliant files. The central class in the API is the DICOMImage class. A DICOMImage object can be instantiated by specifying a reference to a suitable data source in the constructor. The constructor will accept data from a number of sources (e.g. local file, data stream and remote URL). Once constructed, a DICOMImage object provides direct access to all of the data elements stored within the specified DICOM source. Other classes in the API are used to represent individual components within a DICOM file, such as DICOMTag: a data element tag (i.e. group number and element number combination) or DICOMDataElement: a special wrapper class created for storing data associated with a particular value representation. There is a subclass of DICOMDataElement for each of the 26 value representations; examples include DICOMAgeString, DICOMPTypeName and DICOMUniqueIdentifier. The API also provides utility classes such as DICOMDataDictionary: a data dictionary containing value representation definitions for all of the data elements defined in the DICOM 3.0 standard (required when the value representation is not specified explicitly). This section describes how the DICOMImage class operates and how it interacts with other classes in the API to provide access to DICOM encoded data.
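The wrapper-class pattern described above, one data-element subclass per value representation, can be sketched as follows. The class and method names here are hypothetical simplifications for illustration, not the actual NeatMed declarations.

```java
// One subclass of a common data-element base class per DICOM value
// representation; callers cast the base type to the concrete wrapper
// before reading the value.
abstract class DataElement {
    private final int group, element;
    DataElement(int group, int element) { this.group = group; this.element = element; }
    int group()   { return group; }
    int element() { return element; }
    abstract String vr();  // two-letter value representation code
}

class AgeString extends DataElement {
    private final String value;           // e.g. "045Y"
    AgeString(int g, int e, String v) { super(g, e); value = v; }
    String vr()    { return "AS"; }
    String value() { return value; }
}

class UnsignedShort extends DataElement {
    private final int value;              // stored widened to int
    UnsignedShort(int g, int e, int v) { super(g, e); value = v; }
    String vr()  { return "US"; }
    int value()  { return value; }
}

public class WrapperDemo {
    public static void main(String[] args) {
        // Bits Allocated (0x0028, 0x0100) stored as an unsigned short.
        DataElement bits = new UnsignedShort(0x0028, 0x0100, 16);
        System.out.println(((UnsignedShort) bits).value()); // prints 16
    }
}
```

The cast from the base type to the concrete wrapper mirrors the access pattern the API documentation describes for retrieved data elements.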
Data Interpretation
When a DICOMImage object is constructed it reads and decodes all the data from the specified source. The decoder automatically determines the type of data and the transfer syntax to be used. If the value representation is not specified explicitly then the required value representation for a particular data element is obtained from the pre-programmed data dictionary (i.e. DICOMDataDictionary). Each data element in the DICOM file is subsequently decoded and stored using the relevant wrapper class. Data stored within a wrapper class can easily be accessed for further processing. The image data (0x7fe0, 0x0010) is a special type of data element that can be stored using one of two possible value representations: other byte string or other word string. The image data is always packed and sometimes compressed using either joint photographic experts group (JPEG) (13, 14) or run length encoding (RLE) compression. Information required to unpack and decompress the image data is usually stored within group 0x0028 data elements. The DICOM decoder automatically unpacks and decodes the image data and stores it internally as a one dimensional array of signed 32-bit integer primitives.
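The unpacking step can be sketched as below: raw little-endian 16-bit pixel bytes widened into a one-dimensional array of signed 32-bit ints. The bit depth, byte order, and the `unpack16LE` helper are assumptions for illustration; the real decoder consults the group 0x0028 data elements for this information.

```java
// Widen raw little-endian 16-bit samples into signed 32-bit ints,
// matching the internal one-dimensional array described in the text.
public class PixelUnpacker {
    public static int[] unpack16LE(byte[] raw) {
        int[] pixels = new int[raw.length / 2];
        for (int i = 0; i < pixels.length; i++) {
            int lo = raw[2 * i] & 0xFF;        // low byte first (little-endian)
            int hi = raw[2 * i + 1] & 0xFF;
            pixels[i] = (hi << 8) | lo;        // unsigned 16-bit value as int
        }
        return pixels;
    }

    public static void main(String[] args) {
        byte[] raw = { 0x10, 0x00, (byte) 0xFF, 0x03 };
        int[] px = unpack16LE(raw);
        System.out.println(px[0] + " " + px[1]); // prints "16 1023"
    }
}
```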
Data Management
Once decoded, all data elements are stored internally using a modified hash table structure. A hash table allows information to be stored as key/value pairs. In this case the key is the data element tag, which consists of a group number and an element number, and the value is the data element associated with the relevant tag. A tag is represented by a DICOMTag class and all data elements are represented by dedicated wrapper classes which are subclasses of DICOMDataElement. A specific data element value can be retrieved from the hash table by specifying the relevant group number and element number combination. The method employed for storing and querying decoded data elements is illustrated in Figure 1.
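The tag-keyed storage scheme above can be sketched with a hash table keyed by a (group, element) pair. The `Tag` and `ElementStore` classes here are simplified stand-ins for DICOMTag and the internal table, not the actual NeatMed implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Data elements stored as key/value pairs: the key is the (group,
// element) tag and the value is the decoded data element.
public class ElementStore {
    static final class Tag {
        final int group, element;
        Tag(int group, int element) { this.group = group; this.element = element; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Tag)) return false;
            Tag t = (Tag) o;
            return t.group == group && t.element == element;
        }
        @Override public int hashCode() { return (group << 16) | element; }
    }

    private final Map<Tag, Object> table = new HashMap<>();

    public void put(int group, int element, Object value) {
        table.put(new Tag(group, element), value);
    }
    public Object get(int group, int element) {
        return table.get(new Tag(group, element));
    }

    public static void main(String[] args) {
        ElementStore store = new ElementStore();
        store.put(0x0010, 0x0020, "PATIENT-001");      // hypothetical patient ID
        System.out.println(store.get(0x0010, 0x0020)); // prints PATIENT-001
    }
}
```

Because group and element numbers are each 16-bit quantities, packing them into a single int gives a collision-free hash code for the table.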
Data Access/Manipulation
Data can be accessed at several levels. At the most basic level a data element can be retrieved from the hash table using the `getDataElement()` method of the `DICOMImage` class. There are two versions of the `getDataElement()` method. The first version takes a single argument of type `DICOMTag` which encapsulates the group number and element number of the desired data element and the second version takes two integer arguments, one for the group number and the second for the element number. In either case the `getDataElement()` method will return an object of type `DICOMDataElement` representing the requested data element. This object must be cast (i.e. converted) into the relevant subclass of `DICOMDataElement` before the actual data element value can be accessed. An existing data element can be modified or a new data element can be added to a `DICOMImage` object by calling the `setDataElement()` method. This method takes arguments that represent a tag and the new data element value to be associated with that tag. The basic level representation does not provide direct access to the actual image data. This data is packed and in some cases compressed and further decoding is required in order to facilitate pixel level operations.
There are an extremely large number of data elements defined by the DICOM standard. It is possible for any of these data elements to be present in a DICOM file. Some data elements occur more frequently than others and have particular significance to the application developer. In order to facilitate straightforward access to important and frequently used data elements, a number of special accessor methods are defined by the `DICOMImage` class. This set of methods represents a higher level of access than that provided by the `getDataElement()` and `setDataElement()` methods and simplifies access to particular frequently used or important data elements. Internally, each accessor method calls the `getDataElement()` method with the relevant `DICOMKey` argument and casts the returned data element value into the relevant Java primitive (or object). Some examples of the accessor methods provided by the `DICOMImage` class are as follows:
- **String getPatientID()** retrieves the data element value associated with the key `(0x0010, 0x0020)` from the hash table. The `DICOMLongString` object at this location is converted into a Java `String` object which is then returned.
- **int getSeriesNumber()** retrieves the data element value associated with the key `(0x0020, 0x0011)` from the hash table. The `DICOMIntegerString` object at this location is converted into a signed 32-bit integer primitive and returned.
- **int getBitsAllocated()** retrieves the data element value associated with the key `(0x0028, 0x0100)` from the hash table. The `DICOMUnsignedShort` object at this location is converted into a signed 32-bit integer primitive and returned.
Accessor methods are also provided for adding new data element values or modifying existing data element values. These methods take a single argument which represents the new value of the relevant data element. Internally these methods call `setDataElement()` with the relevant `DICOMKey` and new value arguments. Examples include `setPatientID(String ID)`, `setSeriesNumber(int seriesNumber)` and `setBitsAllocated(int bitsAllocated)`.
The final level of abstraction is used to provide direct access to image data. As mentioned previously, the image data is stored in data element `(0x7fe0, 0x0010)`. After the initial decoding/interpretation stage this data is still packed and sometimes compressed. The final stage of decoding performed by the `DICOMImage` class automatically unpacks and decompresses the image data, which is ultimately stored within the `DICOMImage` class as a one-dimensional array of signed integer values. Individual pixel values can be accessed/modified using special accessor methods: `getSample()` and `setSample()`. In either case the (x, y) coordinates of the relevant pixel must be specified in the argument list of the accessor method. In the case of multi-frame image data (i.e. where the number of frames is greater than one) a time coordinate must also be specified, and for multi-plane image data (e.g. ARGB or CMYK) a colour plane index must also be specified. Some examples of the pixel level accessor methods provided by the `DICOMImage` class are as follows:
- `getSample(int x, int y)` returns an integer primitive that represents the pixel value at the specified (x, y) coordinates.
- `getSample(int x, int y, int t)` returns an integer primitive that represents the pixel value at the specified (x, y, t) coordinates.
- `setSample(int x, int y, int value)` sets the value of the pixel at the specified (x, y) coordinates to be the same as that indicated by the `value` argument.
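The accessor methods above can be sketched by mapping (x, y) and (x, y, t) coordinates into a one-dimensional array. The row-major layout with frames stored consecutively is an assumption for illustration, and the `PixelIndex` class is a hypothetical stand-in, not the actual `DICOMImage` internals.

```java
// Map 2-D and 3-D pixel coordinates into the one-dimensional sample
// array: frames are stored one after another, each in row-major order.
public class PixelIndex {
    final int width, height;
    final int[] samples;

    PixelIndex(int width, int height, int frames) {
        this.width = width;
        this.height = height;
        this.samples = new int[width * height * frames];
    }
    int getSample(int x, int y)        { return samples[y * width + x]; }
    int getSample(int x, int y, int t) { return samples[(t * height + y) * width + x]; }
    void setSample(int x, int y, int value) { samples[y * width + x] = value; }

    public static void main(String[] args) {
        PixelIndex img = new PixelIndex(4, 3, 2);
        img.setSample(2, 1, 99);
        System.out.println(img.getSample(2, 1));    // prints 99
        // Frame 0 of the 3-D accessor aliases the 2-D accessor.
        System.out.println(img.getSample(2, 1, 0)); // prints 99
    }
}
```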
**Data Storage**
In many cases an application developer will only need to read from a DICOM file. However, there are some cases where it may be useful to modify the contents of a DICOM file. In order to facilitate this operation the `DICOMImage` class provides a `save()` method. This method saves the current state of the associated `DICOMImage` object and takes a single argument which represents the output stream where the data will be written. The saving operation involves writing all of the data elements represented by the `DICOMImage` object (including the packed pixel information) to the specified output stream. The saved data will be stored according to the specified transfer syntax, i.e. data element `(0x0002, 0x0010)`. If this data element is not present then the default transfer syntax will be used. The data storage feature completes the range of functionality provided by the NeatMed API.
## Sample Applications
The NeatMed API can be used to develop a wide variety of medical imaging applications. This section describes a number of sample applications that demonstrate the power, flexibility and ease of use of the NeatMed API (see Figure 3). In some cases NeatMed is used in conjunction with extension APIs and toolkits in order to demonstrate its development potential. The source code and associated instructions for the compilation and execution of each sample application can be downloaded directly from the NeatMed website.
### Simple DICOM viewer
A simple DICOM image viewer application can be created quite easily. The source code for the application is extremely compact (see Figure 3). The application takes a single argument which represents the name of the image to be displayed. This filename is subsequently used to construct a `DICOMImage` object. A `BufferedImage` object representation of the `DICOMImage` object is then obtained. This `BufferedImage` object is ultimately displayed using Swing graphical user interface (GUI) components (see `DICOMViewer.java`).
Sequence Viewer
The simple DICOM viewer application can be extended to deal with multi-frame DICOM images. This extension is facilitated by adding two buttons to the application GUI. These buttons allow the user to sequence backwards and forwards through the available frames. Internally the buttons update the value of a counter: the backwards button decrements the counter and the forwards button increments it. After either button is pressed, the frame whose index corresponds to the counter value is displayed. This application demonstrates how user interaction is handled by Java (see SequenceViewer.java).
Volume Rendering
The NeatMed API can be used in conjunction with the visualisation toolkit (VTK) (15) to provide three dimensional volume or surface renderings of medical image data. NeatMed is used to read in axial slices from a volumetric dataset. The image data from the slices is used to populate an array of scalar values that represents the volumetric data. Two transfer functions (opacity and colour) are specified to indicate how the volume should be displayed. The opacity transfer function indicates the opacity value associated with a particular voxel intensity. The colour transfer function indicates the RGB colour value associated with a particular voxel intensity; this can be used for pseudo-colouring the volume data. The rendering of the volume is handled by the VTK and the user can interact with the resulting model using the mouse. Supported operations include rotate, zoom and translate. A sample volume rendering using this application is illustrated in Figure 4 (see VolumeRendering.java).
Anonymising Data
A DICOM file usually contains a number of data elements that hold sensitive patient information. It is important to be able to anonymise this information in certain cases in order to protect the patient’s identity. The anonymisation process involves constructing a `DICOMImage` object from a suitable source, overwriting all of the sensitive data element values (e.g. patient’s name, patient’s birth date and patient’s address) and saving the modified `DICOMImage` object to an output stream using the `save()` method. The saved file represents the anonymised data. Each of the sensitive data element values can be overwritten using the `setDataElement()` method or special accessor methods (where available). Note that this is a command-line-only application; there is no GUI (see `Anonymise.java`).
**Image Processing**
The NeatMed API can be used for the development of medical image analysis applications. This can be achieved using any of the well documented image processing operations that are described in the literature, see (11) for examples of image processing algorithms in Java. The threshold operation is a simple example of a point operation that is based on a raster scan of the input image. Thresholding converts a grey scale image into a binary (black & white) image based on a user defined threshold value. Any pixel that is greater than or equal to the threshold is set to white while any pixel less than the threshold is set to black. The threshold operation performs a rough segmentation of the image and demonstrates both looping and conditional data processing. The threshold operation is illustrated in Figure 5 (see `Threshold.java`).
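The threshold operation described above can be written as a short self-contained sketch: a raster scan mapping each grey-scale sample to black (0) or white (255) against a user-defined threshold. The `Threshold` class and its `apply` method are illustrative names, not part of the sample application's actual source.

```java
// Threshold a grey-scale image: pixels >= threshold become white (255),
// all others become black (0), yielding a rough binary segmentation.
public class Threshold {
    public static int[] apply(int[] grey, int threshold) {
        int[] out = new int[grey.length];
        for (int i = 0; i < grey.length; i++) {
            out[i] = (grey[i] >= threshold) ? 255 : 0;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] result = apply(new int[] {12, 130, 128, 255}, 128);
        System.out.println(java.util.Arrays.toString(result)); // prints [0, 255, 255, 255]
    }
}
```

In a complete application the input array would come from the decoded pixel data and the output would be written back sample by sample.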
As mentioned earlier, NeatMed has been used internally over the past number of years as a platform for the development of a number of medical imaging research applications. Examples of this research are illustrated in Figure 6.
**Support Material**
All support material for the NeatMed API is accessible through the official NeatMed website. In addition to providing access to the latest version of the NeatMed API the website also provides access to documentation, sample source code and contact information for the NeatMed development team.
The NeatMed API documentation is generated using Javadoc. The NeatMed API documentation (see Figure 7) can be browsed online or downloaded as a ZIP file for subsequent offline access. In either case the documentation can be viewed using a standard web browser. The documentation itself provides an overview of the entire API. Each class and its associated methods and variables are described in detail. The relationship between classes is indicated and hyperlinks are provided to facilitate easy navigation through the documentation.
Fully commented source code for each of the sample applications described earlier in this paper can be downloaded from the NeatMed website. These applications can be compiled and executed using the Java 2 Software Developer Kit (J2SDK) from Sun Microsystems. The latest version of the J2SDK (currently v1.4.2) can be downloaded at no cost from the official Java website (16). All sample applications require the
NeatMed API to be included in the Java class path and some applications also require additional APIs or toolkits such as the VTK to be included. The NeatMed website should be consulted for detailed information about compiling and executing the sample applications. Links to sites providing DICOM images that can be used in conjunction with the sample applications can also be accessed via the NeatMed website.
Finally, the NeatMed team can be contacted by email either for general queries or to suggest additional functionality that may be useful to other NeatMed users. All queries regarding the NeatMed API should be directed to neatmed@eeng.dcu.ie.
**Current Status**
The NeatMed API is constantly evolving. Its ongoing development is driven by internal requirements and external requests for additional features. NeatMed supports data that conforms to version 3 of the DICOM standard as well as the older American College of Radiology – National Electrical Manufacturers Association (ACR-NEMA) standard. NeatMed currently supports all of the uncompressed DICOM transfer syntaxes as well as the lossless RLE transfer syntax (1.2.840.10008.1.2.5). Support for JPEG compressed image data is currently being incorporated into the API. Dedicated wrapper classes are provided for storing each of the 26 possible data element value representations defined in the DICOM standard. The pixel data decoder is extremely flexible and supports any valid pixel encoding (packing) scheme, and the majority of commonly used photometric interpretations are also supported. The recent addition of the basic data write feature completes the range of functionality available for dealing with DICOM data.
NeatMed has recently been updated to support version 7.5 of the ANALYZE file format. As with DICOM data, the ANALYZE file pair (image & header) is automatically interpreted and all encoded information is made readily available to the programmer. The ANALYZE file format has a finite number of header fields and direct access is provided to each of these. Individual image sample values can be obtained by simply specifying the relevant 2D, 3D or 4D co-ordinates. The ANALYZE section of the API can also be used in conjunction with other toolkits and APIs to create powerful medical imaging applications.
The NeatMed API is intended for use by programmers interested in developing medical imaging applications, particularly for computer aided analysis. There is no reason why nonprogrammers should be excluded from developing such applications. In order to facilitate nonprogrammers, NeatMed has recently been incorporated into the NeatVision visual programming environment (11, 17). NeatVision is a free, Java based software development environment designed specifically for use in image analysis applications. NeatVision is both user-friendly and powerful, providing high-level access to over 300 image manipulation, processing and analysis algorithms. A visual program can be created by simply instantiating the required algorithms (blocks) and creating data paths (interconnections) between the inputs and outputs to indicate the data flow within the program. This combination of NeatVision and NeatMed makes a vast library of image processing operators easily accessible to the medical image analysis community.
**Summary**
The interpretation of medical image data is an important and common stage in the development of any medical imaging application. NeatMed removes the need to deal directly with encoded medical image data, thus increasing productivity and allowing the developer to concentrate on other aspects of application development. NeatMed is written in Java, a multiplatform programming language with a large amount of freely available support material that is straightforward to learn and use. These and other features of Java make NeatMed accessible to a large group of potential users. NeatMed is well supported with documentation and sample code available through the NeatMed website. Most importantly, NeatMed is a freely available research tool whose ongoing development is driven by the needs and requirements of its users.
Acknowledgements: The authors wish to thank members of the Department of Radiology and the Gastrointestinal Unit at the Mater Misericordiae Hospital, particularly Dr. John F. Bruzzi and Dr. Alan C. Moss.
**References**
10. JViewbox, Laboratory of Neuro Imaging, UCLA. Available at http://www.loni.ucla.edu/Software/Software_Detail.jsp?software_id=1. Accessed December 5, 2003.
**Illustrations**
**Figure 1:** A representation of how data is stored and queried at the most basic level of the **DICOMImage** class.
**Figure 2:** An overview of how the NeatMed API should be used in the development of medical imaging applications.
import java.awt.*;
import javax.swing.*;
import neatmed.DICOM.*;

public class ImageViewer
{
    public static void main(String[] args)
    {
        new ImageViewer(args);
    }

    public ImageViewer(String[] args)
    {
        try
        {
            // Create a new DICOMImage object...
            DICOMImage image = new DICOMImage(args[0]);
            // Create a GUI to display the image...
            JLabel label = new JLabel(new ImageIcon(image.getAsBufferedImage()));
            JFrame frame = new JFrame("DICOM Image Viewer");
            frame.getContentPane().setLayout(new BorderLayout());
            frame.getContentPane().add(label);
            frame.pack();
            frame.setVisible(true);
        }
        catch(Exception e)
        {
            System.out.println(e.toString());
        }
    }
}
**Figure 3:** Sample Java source code for the DICOM image viewer application.
**Figure 4:** A volume rendering of the human colon generated from an abdominal CTC dataset using a combination of the NeatMed API and the VTK.
**Figure 5:** The threshold image processing operation. A CT image (a) before and (b) after application of the threshold.
**Figure 6:** Advanced image processing facilitated by NeatMed. (a) Centreline calculation at CT colonography. (b) Segmentation of the biliary tree from MRCP data.
**Figure 7:** The HTML Javadoc documentation for the NeatMed API.
Section 04: Solutions
Section Problems
1. Valid BSTs and AVL Trees
For each of the following trees, state whether the tree is (i) a valid BST and (ii) a valid AVL tree. Justify your answer.
(a)
```
  7
 / \
6   43
    |
    2
```
Solution:
This is not a valid BST! The 2 is located in the right sub-tree of 7, which breaks the BST property. Remember that the BST property applies to every node in the left and right sub-trees, not just the immediate child!
All AVL trees are BSTs. Because of this, this tree can't be a valid AVL tree either.
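The reasoning above, that the BST property constrains entire subtrees via bounds inherited from every ancestor, can be sketched as a recursive check (the `Node` class here is a minimal stand-in):

```java
// Minimal stand-in node class and a recursive BST validity check.
// Key idea: each node must lie within (min, max) bounds inherited from
// ALL of its ancestors, not just its immediate parent.
public class BstCheck {

    static class Node {
        int key;
        Node left, right;
        Node(int key, Node left, Node right) {
            this.key = key; this.left = left; this.right = right;
        }
    }

    public static boolean isValidBst(Node n) {
        return check(n, Long.MIN_VALUE, Long.MAX_VALUE);
    }

    private static boolean check(Node n, long min, long max) {
        if (n == null) return true;
        if (n.key <= min || n.key >= max) return false;  // violates an ancestor's bound
        return check(n.left, min, n.key) && check(n.right, n.key, max);
    }

    public static void main(String[] args) {
        // Tree (a): 2 sits in the right subtree of 7, so this is not a BST.
        Node badTree = new Node(7, new Node(6, null, null),
                                   new Node(43, new Node(2, null, null), null));
        System.out.println(isValidBst(badTree));  // false
    }
}
```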
(b)
```
  7
 / \
6   43
|
3
```
Solution:
This tree is a valid BST! If we check every node, we see that the BST property holds at each of them.
However, this is not a valid AVL tree. We see that some nodes (for example, the 42) violate the balance condition, which is an extra requirement compared to BSTs. Because the heights of 42’s left and right sub-trees differ by more than one, this violates the condition.
(c)
Solution:
This tree is a valid BST! If we check every node, we see that the BST property holds at each of them.
This tree is also a valid AVL tree! If we check every node, we see that the balance condition also holds at each of them.
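For completeness, the AVL balance condition itself can be checked with a similar recursive sketch (again using a minimal stand-in `Node` class; this checks balance only, not the BST property):

```java
// Sketch of the AVL balance condition check on a plain tree node.
// A tree is AVL-balanced if, at EVERY node, the heights of the left and
// right subtrees differ by at most one.
public class AvlCheck {

    static class Node {
        int key;
        Node left, right;
        Node(int key, Node left, Node right) {
            this.key = key; this.left = left; this.right = right;
        }
    }

    // Returns the height of the subtree (empty tree = -1), or
    // Integer.MIN_VALUE if some node below violates the balance condition.
    private static int heightOrFail(Node n) {
        if (n == null) return -1;
        int hl = heightOrFail(n.left);
        int hr = heightOrFail(n.right);
        if (hl == Integer.MIN_VALUE || hr == Integer.MIN_VALUE) return Integer.MIN_VALUE;
        if (Math.abs(hl - hr) > 1) return Integer.MIN_VALUE;
        return 1 + Math.max(hl, hr);
    }

    public static boolean isBalanced(Node n) {
        return heightOrFail(n) != Integer.MIN_VALUE;
    }

    public static void main(String[] args) {
        // A left chain 7 -> 6 -> 3: the root's subtree heights differ by 2.
        Node chain = new Node(7, new Node(6, new Node(3, null, null), null), null);
        System.out.println(isBalanced(chain));  // false
    }
}
```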
2. Constructing AVL trees
Draw an AVL Tree as each of the following keys are added in the order given. Show intermediate steps.
(a)
Solution:
```
                fowl
               /    \
            bat      penguin
           /   \     /     \
      badger   cat  moth    shrew
                    /  \    /   \
                lion otter raven stork
```
(b)
{6, 43, 7, 42, 59, 63, 11, 21, 56, 54, 27, 20, 36}
Solution:
```
              43
            /    \
          21      59
         /  \    /  \
        7   36  56   63
       / \  / \ /
      6 11 27 42 54
           /
          20
```
Note: The Section slides have a step-by-step walkthrough of this one!
(c)
Solution:
3. AVL tree rotations
Consider this AVL tree:
```
    6
   / \
  2   10
        \
         14
```
Give an example of a value you could insert to cause:
(a) A single rotation
Solution:
Any value greater than 14 will cause a single rotation around 10 (since 10 will become unbalanced, but we'll be in the line case).
(b) A double rotation
Solution:
Any value between 10 and 14 will cause a double rotation around 10 (since 10 will be unbalanced, and we'll be in the kink case).
(c) No rotation
Solution:
Any value less than 10 will cause no rotation (since we can't cause any node to become unbalanced with those values).
4. **Inserting keys and computing statistics**
In this problem, we will see how to compute certain statistics of the data, namely, the minimum and the median of a collection of integers stored in an AVL tree. Before we get to that, let us recall insertion of keys in an AVL tree. Consider the following AVL tree:
![AVL Tree Diagram]
(a) We now add the keys \{21, 14, 20, 19\} (in that order). Show where these keys are added to the AVL tree. Show your intermediate steps.
**Solution:**
![Modified AVL Tree Diagram]
(b) Recall that if we use an unsorted array to store \(n\) integers, it will take us \(O(n)\) runtime to compute the minimum element in the array. This can be done by running a loop that scans the array from the first index to the last index, keeping track of the minimum element seen so far. Now we will see how to compute the minimum element of a set of integers stored in an AVL tree which runs *much* faster than the procedure described above.
i. Given an AVL tree storing numbers, like the one above, describe a procedure that will return the minimum element stored in the tree.
Solution:
Remember that an AVL tree satisfies the BST property, i.e. for any node, all keys in the left sub-tree below that node must be smaller than all the keys in the right sub-tree. Since the minimum is the smallest element in the tree, it must lie in the left sub-tree below the root. By the same reasoning, the minimum must also lie in the left sub-tree below the left node connected to the root and so on and so forth.
Proceeding this way, we can set \( l_0 \) to be the root of the tree and for all \( i \geq 1 \), we can set \( l_i \) to the left node connected to \( l_{i-1} \). By our reasoning above, the minimum lies in the subtree below \( l_i \) for every \( i \). Hence, we can simply start at the root, i.e. \( l_0 \), and keep following the edges towards the nodes \( l_1, l_2, \ldots \) until we reach a node with no left child. That node must be the minimum, as there is no left subtree below it.
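The procedure can be sketched in a few lines of Java (the `Node` class is a minimal stand-in):

```java
// Sketch of the minimum-finding procedure: starting at the root,
// repeatedly follow left children until there is no left child.
public class TreeMin {

    static class Node {
        int key;
        Node left, right;
        Node(int key, Node left, Node right) {
            this.key = key; this.left = left; this.right = right;
        }
    }

    public static int findMin(Node root) {
        if (root == null) throw new IllegalArgumentException("empty tree");
        Node n = root;
        while (n.left != null) {  // O(height) = O(log n) steps in an AVL tree
            n = n.left;
        }
        return n.key;
    }

    public static void main(String[] args) {
        Node root = new Node(8, new Node(3, new Node(1, null, null), null),
                                new Node(10, null, null));
        System.out.println(findMin(root));  // 1
    }
}
```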
ii. Supposing an AVL tree has \( n \) elements, what is the runtime of the above procedure in terms of \( n \)? How does this runtime compare with the \( O(n) \) runtime of the linear scan of the array?
Solution:
The above procedure is essentially a loop that starts at the root and stops when it reaches a leaf. The length of any path from the root to a leaf in an AVL tree with \( n \) elements is at most \( O(\log n) \). Hence, the above procedure has runtime \( O(\log n) \). This runtime is exponentially better than the linear scan, which takes \( O(n) \) time!
(c) In the next few problems, we will see how to compute the median element of the set of elements stored in the AVL tree. The median of a set of \( n \) numbers is the element that appears in the \( \lceil n/2 \rceil \)-th position, when this set is written in sorted order. When \( n \) is even, \( \lceil n/2 \rceil = n/2 \) and when \( n \) is odd, \( \lceil n/2 \rceil = (n + 1)/2 \). For example, if the set is \{3, 2, 1, 4, 6\} then the set in sorted order is \{1, 2, 3, 4, 6\}, and the median is 3.
If we were to simply store \( n \) integers in an array, one way to compute the median element would be to first sort the array and then look up the element at the \( \lceil n/2 \rceil \)-th position in the sorted array. This procedure has a runtime of \( O(n \log n) \), even when we use a clever sorting algorithm like Mergesort. We will now see how to compute the median, when the data is stored in a rather modified AVL tree *much* faster.
For the time being, assume that we have a modified version of the AVL tree that lets us maintain not just the key, but also the number of elements that occur below the node at which the key is stored, plus one (for that node). The use of this will become apparent very soon. As an example, the modified version of the AVL tree above would look like the following (the number after the semi-colon in each node accounts for the number of nodes below that node plus one).
![AVL Tree Diagram]
i. We now again add the keys \{21, 14, 20, 19\} (in that order) to the modified AVL tree. How does the modified AVL tree look after the insertions are done?
ii. Given a modified AVL tree, like the one above, describe a procedure that will return the median element stored in the tree. Note that in the modified tree, you can access the number of elements lying below a node in addition to the number stored in that node. Can you use this extra information to find the median more quickly?
We will actually show that using a modified AVL tree, we can compute the \( k \)-th smallest element for any \( k \). The \( k \)-th smallest element of a set of \( n \) numbers is the number at index \( k \) when the set is written in sorted order. Note that this problem is more general than computing the median! If we plug \( k = \lceil n/2 \rceil \), we can compute the median!
Similar to the strategy that we used to compute the minimum, we start by setting \( l_0 \) to be the root of the tree. At this point, we check the number of nodes that lie below the left and right nodes connected to the root. Let these numbers be \( x_{l_0,0} \) and \( x_{l_0,1} \) respectively. We consider three cases below.
I. Let us suppose for the moment that \( x_{l_0,0} = k - 1 \). We observe that if the elements in the AVL tree were to be written in sorted order, all the elements in the left subtree below root would appear before the root, which itself would appear before the elements in the right sub-tree. Since there are \( k - 1 \) elements in the left subtree, the index of the root is \( k \), which is the desired element.
II. Now suppose \( x_{l_0,0} < k - 1 \). Again, if we were to write the elements in the AVL tree in sorted order, the \( k \)-th smallest element would now lie in the subtree below the right node.
III. Finally, if \( x_{l_0,0} > k - 1 \), the \( k \)-th smallest number would lie in subtree below the left node.
The upshot of this is that by checking the number of nodes in the left and right subtrees below a given node, we were able to find out which subtree the \( k \)-th smallest element lies in! We can repeat this, recursing in the appropriate subtree. For example, if \( x_{l_0,0} < k - 1 \) then we recurse in the right subtree. However, the \( k \)-th smallest element in the entire tree may not be the \( k \)-th smallest element in the right subtree!
We want to find out what element we need to look for in the right subtree in order to find the \( k \)-th smallest element in the entire tree. Let us suppose that the \( k \)-th smallest element in the entire tree is in fact the \( k' \)-th smallest element in the right subtree. If we were to write out the elements in the AVL tree in sorted order, it follows that the \( k' \)-th smallest element in the right subtree is at position \((x_{l_0,0} + 1) + k'\) in the entire tree. But this position is also the position of the \( k \)-th smallest element. It follows that
\[
(x_{l_0,0} + 1) + k' = k \implies k' = k - (x_{l_0,0} + 1).
\]
Therefore, in order to locate the \( k \)-th smallest element in the entire tree, it suffices to locate the \((k - (x_{l_0,0} + 1))\)-th smallest element in the right subtree, which we can do as detailed above.
We repeat the procedure until we either find the \( k \)-th smallest element at a node, as in Case I, or we hit a leaf.
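Assuming a size-augmented node (the "; n" annotation described above), the three cases translate directly into code. This is an illustrative sketch, not a full AVL implementation:

```java
// Sketch of the order-statistic lookup on nodes that store the size of
// their subtree (the "; n" annotation in the figures above).
public class KthSmallest {

    static class Node {
        int key, size;          // size = nodes in this subtree, including this one
        Node left, right;
        Node(int key, Node left, Node right) {
            this.key = key; this.left = left; this.right = right;
            this.size = 1 + size(left) + size(right);
        }
    }

    static int size(Node n) { return n == null ? 0 : n.size; }

    // Returns the k-th smallest key (1-indexed), following the three cases:
    // left size == k-1 -> this node; left size < k-1 -> recurse right with
    // k' = k - (leftSize + 1); otherwise recurse left with the same k.
    public static int kth(Node n, int k) {
        if (n == null || k < 1 || k > n.size) throw new IllegalArgumentException();
        int leftSize = size(n.left);
        if (leftSize == k - 1) return n.key;                           // Case I
        if (leftSize < k - 1) return kth(n.right, k - (leftSize + 1)); // Case II
        return kth(n.left, k);                                         // Case III
    }

    public static void main(String[] args) {
        // The set {3, 2, 1, 4, 6} from the problem statement:
        Node root = new Node(4,
                new Node(2, new Node(1, null, null), new Node(3, null, null)),
                new Node(6, null, null));
        System.out.println(kth(root, 3));  // 3, the median of {1, 2, 3, 4, 6}
    }
}
```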
iii. Supposing a modified AVL tree has \( n \) elements, what is the runtime of the above procedure in terms of \( n \)? How does this runtime compare with the \( O(n \log n) \) runtime described earlier?
**Solution:**
The above procedure is essentially a loop that starts at the root and stops when it reaches a leaf. The length of any path from the root to a leaf in an AVL tree with \( n \) elements is at most \( O(\log n) \). Hence, the above procedure has runtime \( O(\log n) \). This runtime is far better than the solution based on sorting, which takes \( O(n \log n) \) time; we managed to shave off the linear term in the latter expression!
iv. Bonus: After every insertion, the number of nodes that lie below a given node need not remain the same. For example, after four insertions, the number of nodes below the root increased and the number of nodes below the node where the key "29" was stored, decreased. Describe a procedure that takes as input a modified AVL tree \( T \) with \( n \) nodes, an integer key \( k \) and, returns the modified AVL \( T' \), that has the key \( k \) inserted in \( T \). What is the runtime of this procedure?
5. Hash table insertion
For each problem, insert the given elements into the described hash table. Do not worry about resizing the internal array.
(a) Suppose we have a hash table that uses separate chaining and has an internal capacity of 12. Assume that each bucket is a linked list where new elements are added to the front of the list.
Insert the following elements in the EXACT order given using the hash function \( h(x) = 4x \):
\[0, 4, 7, 1, 2, 3, 6, 11, 16\]
Solution:
To make the problem easier for ourselves, we first start by computing the hash values and initial indices:
<table>
<thead>
<tr>
<th>key</th>
<th>hash</th>
<th>index (pre probing)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>4</td>
<td>16</td>
<td>4</td>
</tr>
<tr>
<td>7</td>
<td>28</td>
<td>4</td>
</tr>
<tr>
<td>1</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>2</td>
<td>8</td>
<td>8</td>
</tr>
<tr>
<td>3</td>
<td>12</td>
<td>0</td>
</tr>
<tr>
<td>6</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>11</td>
<td>44</td>
<td>8</td>
</tr>
<tr>
<td>16</td>
<td>64</td>
<td>4</td>
</tr>
</tbody>
</table>
The state of the internal array will be:
\[6 \rightarrow 3 \rightarrow 0 \quad / \quad / \quad / \quad 16 \rightarrow 1 \rightarrow 7 \rightarrow 4 \quad / \quad / \quad / \quad 11 \rightarrow 2 \quad / \quad / \quad /\]
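The front-insertion behaviour can be simulated directly; this sketch reproduces the bucket contents above using `LinkedList.addFirst`:

```java
import java.util.LinkedList;

// Simulation of the insertions above: capacity 12, h(x) = 4x, separate
// chaining with new elements added to the FRONT of each bucket's list.
public class ChainingDemo {

    public static LinkedList<Integer>[] insertAll(int capacity, int[] keys) {
        @SuppressWarnings("unchecked")
        LinkedList<Integer>[] buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
        for (int key : keys) {
            int index = (4 * key) % capacity;   // h(x) = 4x, then mod capacity
            buckets[index].addFirst(key);       // front insertion
        }
        return buckets;
    }

    public static void main(String[] args) {
        LinkedList<Integer>[] b = insertAll(12, new int[]{0, 4, 7, 1, 2, 3, 6, 11, 16});
        for (int i = 0; i < b.length; i++) {
            System.out.println(i + ": " + b[i]);
        }
        // Bucket 0 holds [6, 3, 0], bucket 4 holds [16, 1, 7, 4], bucket 8 holds [11, 2].
    }
}
```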
(b) Suppose we have a hash table implemented using separate chaining. This hash table has an internal capacity of 10. Its buckets are implemented using a linked list where new elements are appended to the end. Do not worry about resizing.
Show what this hash table internally looks like after inserting the following key-value pairs in the order given using the hash function \( h(x) = x \):
\[(1, a), (4, b), (2, c), (17, d), (12, e), (9, e), (19, f), (4, g), (8, c), (12, f)\]
Solution:
```
0: /
1: (1, a)
2: (2, c) → (12, f)
3: /
4: (4, g)
5: /
6: /
7: (17, d)
8: (8, c)
9: (9, e) → (19, f)
```
Note that the second insertions for keys 4 and 12 update the existing values (b becomes g, e becomes f) rather than adding new pairs.
6. Evaluating hash functions
Consider the following scenarios.
(a) Suppose we have a hash table with an initial capacity of 12. We resize the hash table by doubling the capacity. Suppose we insert integer keys into this table using the hash function \( h(x) = 4x \).
Why is this choice of hash function and initial capacity suboptimal? How can we fix it?
**Solution:**
Notice that the hash function will initially cause every key to hash to one of only three indices (0, 4, or 8): the multiplier 4 evenly divides the capacity 12.
This means that the likelihood of a key colliding with another one dramatically increases, decreasing performance.
This situation does not improve as we resize, since the hash function will continue to map to only a fourth of the available indices.
We can fix this by either picking a new hash function that's relatively prime to 12 (e.g. \( h(x) = 5x \)), by picking a different initial table capacity, or by resizing the table using a strategy other than doubling (such as picking the next prime that's roughly double the initial size).
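A quick empirical check of this argument: counting how many distinct indices \( h(x) = cx \bmod 12 \) can ever reach for different multipliers \( c \) (the class name is illustrative):

```java
import java.util.HashSet;
import java.util.Set;

// Quick check of the claim above: with a table of capacity 12, h(x) = 4x
// can only ever reach 12 / gcd(4, 12) = 3 distinct indices, while a
// multiplier relatively prime to 12 (such as 5) can reach all 12.
public class HashSpread {

    public static int distinctIndices(int multiplier, int capacity) {
        Set<Integer> indices = new HashSet<>();
        for (int x = 0; x < 1000; x++) {
            indices.add((multiplier * x) % capacity);
        }
        return indices.size();
    }

    public static void main(String[] args) {
        System.out.println(distinctIndices(4, 12));  // 3  (only 0, 4, 8)
        System.out.println(distinctIndices(5, 12));  // 12 (all indices reachable)
    }
}
```

Doubling the capacity to 24 does not help: `distinctIndices(4, 24)` is 6, still only a fourth of the slots.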
7. **Hash tables**
(a) Consider the following key-value pairs.
\((6, a), (29, b), (41, d), (34, e), (10, f), (64, g), (50, h)\)
Suppose each key has a hash function \( h(k) = 2k \). So, the key 6 would have a hash code of 12. Insert each key-value pair into the following hash tables and draw what their internal state looks like:
(i) A hash table that uses separate chaining. The table has an internal capacity of 10. Assume each bucket is a linked list, where new pairs are appended to the end. Do not worry about resizing.
**Solution:**
```
0: (10, f) → (50, h)
1: /
2: (6, a) → (41, d)
3: /
4: /
5: /
6: /
7: /
8: (29, b) → (34, e) → (64, g)
9: /
```
8. Analyzing dictionaries
(a) What are the constraints on the data types you can store in an AVL tree?
Solution:
The keys need to be orderable because AVL trees (and BSTs too) need to compare keys with each other to decide whether to go left or right at each node. (In Java, this means they need to implement Comparable). Unlike a hash table, the keys do not need to be hashable. (Note that in Java, every object is technically hashable, but it may not hash to something based on the object's value. The default hash function is based on reference equality.)
The values can be any type because AVL trees are only ordered by keys, not values.
(b) When is using an AVL tree preferred over a hash table?
Solution:
(i) You can iterate over an AVL tree in sorted order in \( O(n) \) time.
(ii) AVL trees never need to resize, so you don’t have to worry about insertions occasionally being very slow when the hash table needs to resize.
(iii) In some cases, comparing keys may be faster than hashing them. (But note that AVL trees need to make \( O(\log n) \) comparisons while hash tables only need to hash each key once.)
(iv) AVL trees may be faster than hash tables in the worst case since they guarantee \( O(\log n) \), compared to a hash table's \( O(n) \) if every key is added to the same bucket. But remember that this only applies to pathological hash functions. In most cases, hash tables have better asymptotic runtime (\( O(1) \)) than AVL trees, and in practice \( O(1) \) and \( O(\log n) \) have roughly the same performance.
(c) When is using a BST preferred over an AVL tree?
Solution:
One of AVL tree’s advantages over BST is that it has an asymptotically efficient find even in the worst case.
However, if you know that insert will be called more often than find, or if you know the keys will be inserted in a random enough order that the BST will stay balanced, you may prefer a BST since it avoids the small runtime overhead of checking tree balance properties and performing rotations. (Note that this overhead is a constant factor, so it doesn’t matter asymptotically, but may still affect performance in practice.)
BSTs are also easier to implement and debug than AVL trees.
(d) Consider an AVL tree with \( n \) nodes and a height of \( h \). Now, consider a single call to get(\ldots). What’s the maximum possible number of nodes get(\ldots) ends up visiting? The minimum possible?
Solution:
The max number is \( h + 1 \) (remember that height is the number of edges, so we visit \( h + 1 \) nodes going from the root to the farthest away leaf); the min number is 1 (when the element we’re looking for is just the root).
(e) **Challenge Problem:** Consider an AVL tree with \( n \) nodes and a height of \( h \). Now, consider a single call to `insert(...)`. What's the maximum possible number of nodes `insert(...)` ends up visiting? The minimum possible? Don’t count the new node you create or the nodes visited during rotation(s).
**Solution:**
The max number is \( h + 1 \). Just like a get, we may have to traverse to a leaf to do an insertion.
To find the minimum number, we need to understand which elements of AVL trees we can do an insertion at, i.e. which ones have at least one `null` child.
In a tree of height 0, the root is such a node, so we need only visit the one node.
In an AVL tree of height 1, the root can still have a (single) `null` child, so again, we may be able to do an insertion visiting only one node.
On taller trees, we always start by visiting the root, then we continue the insertion process in either a tree of height \( h - 1 \) or a tree of height \( h - 2 \) (this must be the case since the overall tree is height \( h \) and the root is balanced). Let \( M(h) \) be the minimum number of nodes we need to visit on an insertion into an AVL tree of height \( h \). The previous sentence lets us write the following recurrence
\[
M(h) = 1 + \min\{M(h - 1), M(h - 2)\}
\]
The 1 corresponds to the root, and since we want to describe the minimum needed to visit, we should take the minimum of the two subtrees.
We could simplify this recurrence and try to unroll it, but it's easier to see the pattern if we just look at the first few values:
\[
M(0) = 1, M(1) = 1, M(2) = 1 + \min\{1, 1\} = 2, M(3) = 1 + \min\{1, 2\} = 2, M(4) = 1 + \min\{2, 2\} = 3
\]
In general, \( M(h) \) increases by one every other time \( h \) increases, thus we should guess the closed-form has an \( h/2 \) in it. Checking against small values, we arrive at the closed form:
\[
M(h) = \lfloor h/2 \rfloor + 1
\]
which is our final answer.
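As a sanity check, the recurrence can be evaluated directly and compared against the closed form; note that with the base cases \( M(0) = M(1) = 1 \), the sequence 1, 1, 2, 2, 3, 3, … matches \( \lfloor h/2 \rfloor + 1 \):

```java
// Numerical check: the recurrence M(h) = 1 + min(M(h-1), M(h-2)) with
// M(0) = M(1) = 1 should agree with floor(h/2) + 1 for every h.
public class MinInsertVisits {

    public static int mRecurrence(int h) {
        if (h <= 1) return 1;
        return 1 + Math.min(mRecurrence(h - 1), mRecurrence(h - 2));
    }

    public static int mClosedForm(int h) {
        return h / 2 + 1;   // integer division = floor(h/2) for h >= 0
    }

    public static void main(String[] args) {
        for (int h = 0; h <= 20; h++) {
            if (mRecurrence(h) != mClosedForm(h)) {
                throw new AssertionError("mismatch at h = " + h);
            }
        }
        System.out.println("closed form matches recurrence for h = 0..20");
    }
}
```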
Note that we need a very special (as empty as possible) AVL tree to have a possible insertion visiting only \( \lfloor h/2 \rfloor + 1 \) nodes. In general, an AVL tree of height \( h \) might not have an element we could insert that visits only \( \lfloor h/2 \rfloor + 1 \) nodes. For example, a tree where all the leaves are at depth \( h \) is still a valid AVL tree, but any insertion would need to visit \( h + 1 \) nodes.
Module 4: Compound data: structures
Readings: Sections 6 and 7 of HtDP.
- Sections 6.2, 6.6, 6.7, 7.4, and 10.3 are optional readings; they use the obsolete draw.ss teachpack.
- The teachpacks image.ss and world.ss are more useful.
- Note that none of these particular teachpacks will be used on assignments or exams.
Compound data
Data may naturally be joined, but a function can produce only a single item.
A structure is a way of “bundling” several pieces of data together to form a single “package”.
We can
- create functions that consume and/or produce structures, and
- define our own structures, automatically getting (“for free”) functions that create structures and functions that extract data from structures.
A new type
Suppose we want to design a program for a card game such as poker or cribbage. Before writing any functions, we have to decide on how to represent data.
For each card, we have a suit (one of hearts, diamonds, spades, and clubs) and a rank (for simplicity, we will consider ranks as integers between 1 and 13). We can create a new structure with two fields using the following **structure definition**.
```
(define-struct card (rank suit))
```
Using the **Card** type
Once we have defined our new type, we can:
- Create new values using the **constructor** function `make-card`:
```scheme
(define h5 (make-card 5 "hearts"))
```
- Retrieve values of the individual fields using the **selector** functions `card-rank` and `card-suit`:
```scheme
(card-rank h5) ⇒ 5
(card-suit h5) ⇒ "hearts"
```
We can also
- Check if a value is of type Card using the type predicate function card?
(card? h5) ⇒ true
(card? "5 of hearts") ⇒ false
Once the new structure card has been defined, the functions make-card, card-rank, card-suit, card? are created by Racket. We do not have to write them ourselves.
We have grouped all the data for a single card into one value, and we can still retrieve the individual pieces of information.
More information about Card
The structure definition of Card does not provide all the information we need to use the new type properly. We will use a comment called a data definition to provide additional information about the types of the different field values.
(define-struct card (rank suit))
;; A Card is a
;; (make-card Nat (anyof "hearts" "diamonds" "spades" "clubs"))
;; requires
;; rank is between 1 and 13, inclusive
Functions using Card values
;; (pair? c1 c2) produces true if c1 and c2 have the same rank, and false otherwise
;; pair?: Card Card → Bool
(define (pair? c1 c2) (= (card-rank c1) (card-rank c2)))
;; (one-better c) produces a Card, with the same suit as c, but whose rank is one more than c (to a maximum of 13)
;; one-better: Card → Card
(define (one-better c)
(make-card (min 13 (+ 1 (card-rank c))) (card-suit c)))
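A few `check-expect` tests (illustrative examples, not from the original notes) confirm both functions, including the edge case where the rank is already 13:
```scheme
;; Tests for pair? and one-better (illustrative examples)
(check-expect (pair? (make-card 5 "hearts") (make-card 5 "clubs")) true)
(check-expect (pair? (make-card 5 "hearts") (make-card 6 "hearts")) false)
(check-expect (one-better (make-card 5 "hearts")) (make-card 6 "hearts"))
;; the rank is capped at 13:
(check-expect (one-better (make-card 13 "spades")) (make-card 13 "spades"))
```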
Posn structures
A Posn (short for Position) is a built-in structure that has two fields containing numbers intended to represent \( x \) and \( y \) coordinates. We might want to use a Posn to represent coordinates of a point on a 2-D plane, positions on a screen, or a geographical position.
The structure definition is built-in. We’ll use the following data definition.
```scheme
;; A Posn is a (make-posn Num Num)
```
Built-in functions for **Posn**
;;; make-posn: Num Num → Posn
;;; posn-x: Posn → Num
;;; posn-y: Posn → Num
;;; posn?: Any → Bool
Examples of use
```
(define myposn (make-posn 8 1))
(posn-x myposn) ⇒ 8
(posn-y myposn) ⇒ 1
(posn? myposn) ⇒ true
```
Substitution rules and simplified values
For any values \( a \) and \( b \)
\[(\text{posn-x } (\text{make-posn } a \ b)) \Rightarrow a\]
\[(\text{posn-y } (\text{make-posn } a \ b)) \Rightarrow b\]
The `make-posn` you type is a function application.
The `make-posn` DrRacket displays indicates that the value is of type `posn`.
`(make-posn (+ 4 4) (- 2 1))` yields `(make-posn 8 1)`, which cannot be further simplified.
Similar rules apply to our newly defined \text{card} structure as well.
Example: point-to-point distance
The Euclidean distance between the points \((x_1, y_1)\) and \((x_2, y_2)\) is
\[ \text{distance} = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \]
The function **distance**
;;; (distance posn1 posn2) produces the Euclidean distance
;;; between posn1 and posn2.
;;; distance: Posn Posn → Num
;;; example:
```scheme
(check-expect (distance (make-posn 1 1) (make-posn 4 5)) 5)

(define (distance posn1 posn2)
  (sqrt (+ (sqr (- (posn-x posn1) (posn-x posn2)))
           (sqr (- (posn-y posn1) (posn-y posn2))))))
```
Function that produces a Posn
;; (scale point factor) produces the Posn that results when the fields
;; of point are multiplied by factor
;; scale: Posn Num → Posn
;; Examples:
(check-expect (scale (make-posn 3 4) 0.5) (make-posn 1.5 2))
(check-expect (scale (make-posn 1 2) 1) (make-posn 1 2))
(define (scale point factor)
(make-posn (* factor (posn-x point))
           (* factor (posn-y point))))
When we have a function that consumes a number and produces a number, we do not change the number we consume.
Instead, we make a new number.
The function `scale` consumes a `Posn` and produces a new `Posn`.
It doesn’t change the old one.
Instead, it uses `make-posn` to make a new `Posn`.
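For instance (an illustrative check, not from the original notes), after scaling, the original `Posn` still holds its old field values:
```scheme
(define p (make-posn 4 2))
(define q (scale p 0.5))
(posn-x q) ⇒ 2
(posn-x p) ⇒ 4   ;; p is unchanged
```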
Misusing Posn
What is the result of evaluating the following expression?
```scheme
(define point1 (make-posn "Math135" "CS115"))
(define point2 (make-posn "Red" true))
(distance point1 point2)
```
This causes a run-time error, but possibly not where you think.
Racket does not enforce contracts or data definitions, which are just comments, and ignored by the machine.
Each value created during the running of the program has a type (Int, Bool, etc.).
Types are associated with values, not with constants or parameters.
```
(define p 5)
(define q (mystery-fn 5))
```
Racket uses dynamic typing
Dynamic typing: the type of a value bound to an identifier is determined by the program as it is run,
e.g. (define x (check-divide n))
Static typing: constants and what functions consume and produce have pre-determined types,
e.g. real distance(Posn posn1, Posn posn2)
While Racket does not enforce contracts, we will always assume that contracts for our functions are followed. Never call a function with data that violates the contract and requirements.
Pros and cons of dynamic typing
Pros:
- No need to write down pre-determined types.
- Flexible: the same definitions can be used for various types (e.g. lists in Module 5, functions in Module 10).
Cons:
- Contracts are not enforced by the computer.
- Type errors are caught only at run time.
Dealing with dynamic typing
Dynamic typing is a potential source of both flexibility and confusion. Writing a data definition for each new structure will help us avoid making mistakes with structures.
Data definition: a comment specifying a data type; for structures, include name of structure, number and types of fields.
Data definitions like this are also comments, and are not enforced by the computer. However, we will assume data definitions are always followed, unless explicitly told otherwise.
Any type defined by a data definition can be used in a contract.
Structure definitions for new structures
Structure definition: code defining a structure, and resulting in constructor, selector, and type predicate functions.
(define-struct sname (field1 field2 field3))
Writing this once creates functions that can be used many times:
- **Constructor**: make-sname
- **Selectors**: sname-field1, sname-field2, sname-field3
- **Predicate**: sname?
Design recipe modifications
Data analysis and design: design a data representation that is appropriate for the information handled in our function.
Determine which structures are needed and define them. Include
- a structure definition (code) and
- a data definition (comment).
Place the structure and data definitions immediately after the file header, before your constants and functions.
Structure for Movie information
Suppose we want to represent information associated with movies, that is:
- the name of the director
- the title of the movie
- the duration of the movie
- the genre of the movie (sci-fi, drama, comedy, etc.)
(define-struct movieinfo (director title duration genre))
;; A MovieInfo is a (make-movieinfo Str Str Nat Str)
;; where:
;; director is the name of director of the movie,
;; title is the name of movie,
;; duration is the length of the movie in minutes,
;; genre is the genre (type or category) of the movie.
Note: If all the field names for a new structure are self-explanatory, we will often omit the field-by-field descriptions.
The structure definition gives us:
- Constructor `make-movieinfo`
- Selectors `movieinfo-director`, `movieinfo-title`, `movieinfo-duration`, and `movieinfo-genre`
- Predicate `movieinfo?`
```scheme
(define et-movie
(make-movieinfo "Spielberg" "ET" 115 "Sci-Fi"))
(movieinfo-duration et-movie) ⇒ 115
(movieinfo? 6) ⇒ false
```
Templates and data-directed design
One of the main ideas of the HtDP text is that the form of a program often mirrors the form of the data.
A **template** is a general framework which we will complete with specifics. It is the starting point for our implementation.
We create a template once for each new form of data, and then apply it many times in writing functions that consume that data.
A template is derived from a data definition.
A template for **MovieInfo**
The template for a function that consumes a structure selects every field in the structure, though a specific function may not use all the selectors.
;; movieinfo-template: MovieInfo → Any
(define (movieinfo-template info)
(... (movieinfo-director info) ...
(movieinfo-title info) ...
(movieinfo-duration info) ...
(movieinfo-genre info) ...))
An example
;; (correct-genre oldinfo newg) produces a new MovieInfo
;; formed from oldinfo, correcting genre to newg.
;; correct-genre: MovieInfo Str → MovieInfo
;; example:
(check-expect
(correct-genre
(make-movieinfo "Spielberg" "ET" 115 "Comedy")
"Sci-Fi")
(make-movieinfo "Spielberg" "ET" 115 "Sci-Fi"))
Using templates to create functions
- Choose a template and examples that fit the type(s) of data the function consumes.
- For each example, figure out the values for each part of the template.
- Figure out how to use the values to obtain the value produced by the function.
- Different examples may lead to different cases.
- Different cases may use different parts of the template.
- If a part of a template isn’t used, it can be omitted.
- New parameters can be added as needed.
The function **correct-genre**
We use the parts of the template that we need, and add a new parameter.
```scheme
(define (correct-genre oldinfo newg)
(make-movieinfo (movieinfo-director oldinfo)
(movieinfo-title oldinfo)
(movieinfo-duration oldinfo)
newg))
```
We could have done this without a template, but the use of a template pays off when designing more complicated functions.
Additions to syntax for structures
The special form `(define-struct sname (field1 . . . fieldn))` defines the structure type `sname` and automatically defines a constructor function, selector functions for each field, and a type predicate function.
A **value** is a number, a string, a boolean, or is of the form `(make-sname v1 . . . vn)` for values `v1` through `vn`.
Additions to semantics for structures
The substitution rule for the $i$th selector is:
$$(\text{sname-field}_i \ (\text{make-sname} \ v_1 \ldots \ v_i \ldots \ v_n)) \Rightarrow v_i$$
The substitution rules for the type predicate are:
$$(\text{sname?} \ (\text{make-sname} \ v_1 \ldots \ v_n)) \Rightarrow \text{true}$$
$$(\text{sname}? \ V) \Rightarrow \text{false} \text{ for a value } V \text{ of any other type.}$$
An example using posns
Recall the definition of the function \texttt{scale}:
```scheme
(define (scale point factor)
  (make-posn (* factor (posn-x point))
             (* factor (posn-y point))))
```
Then we can make the following substitutions:
```scheme
(define myposn (make-posn 4 2))

(scale myposn 0.5)
⇒ (scale (make-posn 4 2) 0.5)
⇒ (make-posn (* 0.5 (posn-x (make-posn 4 2)))
             (* 0.5 (posn-y (make-posn 4 2))))
⇒ (make-posn (* 0.5 4)
             (* 0.5 (posn-y (make-posn 4 2))))
⇒ (make-posn 2 (* 0.5 (posn-y (make-posn 4 2))))
⇒ (make-posn 2 (* 0.5 2))
⇒ (make-posn 2 1)
```
Since \((\text{make-posn} \ 2 \ 1)\) is a value, no further substitutions are needed.
Another example
```
(define mymovie (make-movieinfo "Reiner" "Princess Bride" 98 "War"))
(correct-genre mymovie "Fantasy")
⇒ (correct-genre
(make-movieinfo "Reiner" "Princess Bride" 98 "War") "Fantasy")
⇒ (make-movieinfo
(movieinfo-director (make-movieinfo "Reiner" "Princess Bride" 98 "War"))
(movieinfo-title (make-movieinfo "Reiner" "Princess Bride" 98 "War"))
(movieinfo-duration (make-movieinfo "Reiner" "Princess Bride" 98 "War"))
"Fantasy")
```
⇒ (make-movieinfo
"Reiner"
(movieinfo-title (make-movieinfo "Reiner" "Princess Bride" 98 "War"))
(movieinfo-duration (make-movieinfo "Reiner" "Princess Bride" 98 "War"))
"Fantasy")
⇒ (make-movieinfo
"Reiner"
"Princess Bride"
(movieinfo-duration (make-movieinfo "Reiner" "Princess Bride" 98 "War"))
"Fantasy")
⇒ (make-movieinfo "Reiner" "Princess Bride" 98 "Fantasy")
Design recipe for compound data
Do this once per new structure type:
Data analysis and design: Define any new structures needed for the problem. Write structure and data definitions for each new type (include right after the file header).
Template: Create one template for each new type defined, and use for each function that consumes that type. Use a generic name for the template function and include a generic contract.
Do the usual design recipe steps for every function:
**Purpose:** Same as before.
**Contract and requirements:** Can use both built-in data types and defined structure names.
**Examples:** Same as before.
**Function Definition:** To write the body, expand the template based on the examples.
**Tests:** Same as before. Be sure to capture all cases.
Design recipe example
Suppose we wish to create a function `total-length` that consumes information about a TV series, and produces the total length (in minutes) of all episodes of the series.
Data analysis and design.
```scheme
(define-struct tvseries (title eps len-per))
;; A TVSeries is a (make-tvseries Str Nat Nat)
;; where
;; title is the name of the series
;; eps is the total number of episodes
;; len-per is the average length (in minutes) for one episode
```
The structure definition gives us:
- Constructor `make-tvseries`
- Selectors `tvseries-title`, `tvseries-eps`, and `tvseries-len-per`
- Predicate `tvseries?`
The data definition tells us:
- types required by `make-tvseries`
- types produced by `tvseries-title`, `tvseries-eps`, and `tvseries-len-per`
Templates for TVSeries
We can form a template for use in any function that consumes a single TVSeries:
;; tvseries-template: TVSeries → Any
(define (tvseries-template show)
  (... (tvseries-title show) ...
       (tvseries-eps show) ...
       (tvseries-len-per show) ...))
You might find it convenient to use constant definitions to create some data for use in examples and tests.
```
(define murdoch (make-tvseries "Murdoch Mysteries" 168 42))
(define friends (make-tvseries "Friends" 236 22))
(define fawlty (make-tvseries "Fawlty Towers" 12 30))
```
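Completing the design recipe example, one possible definition of `total-length` (a sketch that follows the template; the test uses the `fawlty` constant above, with 12 episodes of 30 minutes each):
```scheme
;; (total-length show) produces the total length (in minutes)
;; of all episodes of show
;; total-length: TVSeries → Nat
;; Example:
(check-expect (total-length fawlty) 360)

(define (total-length show)
  (* (tvseries-eps show) (tvseries-len-per show)))
```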
Mixed data and structures
Consider writing functions that consume a streaming video (a movie or a TV series). This mixed type does not require any new structure definitions.
(define-struct movieinfo (director title duration genre))
;; A MovieInfo is a (make-movieinfo Str Str Nat Str)
(define-struct tvseries (title eps len-per))
;; A TVSeries is a (make-tvseries Str Nat Nat)
;; A StreamingVideo is one of:
;; * a MovieInfo or
;; * a TVSeries.
The template for **StreamingVideo**
The template for mixed data is a `cond` with one question for each type of data.
```scheme
;; streamingvideo-template: StreamingVideo → Any
(define (streamingvideo-template info)
(cond [(movieinfo? info) ...]
      [else ...]))
```
We use type predicates in our questions.
Next, expand the template to include more information about the structures.
;; streamingvideo-template: StreamingVideo → Any
(define (streamingvideo-template info)
  (cond [(movieinfo? info)
         (... (movieinfo-director info) ...
              (movieinfo-title info) ...
              (movieinfo-duration info) ...
              (movieinfo-genre info) ...)]
        [else
         (... (tvseries-title info) ...
              (tvseries-eps info) ...
              (tvseries-len-per info) ...)]))
An example: StreamingVideo
;; (streamingvideo-title info) produces title of info
;; streamingvideo-title: StreamingVideo → Str
;; Examples:
(check-expect (streamingvideo-title
(make-movieinfo "Spielberg" "ET" 115 "Sci-Fi")) "ET")
(check-expect (streamingvideo-title
(make-tvseries "Friends" 236 22)) "Friends")
(define (streamingvideo-title info) ...)
The definition of streamingvideo-title
(define (streamingvideo-title info)
(cond
[(movieinfo? info) (movieinfo-title info)]
[else (tvseries-title info)]))
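A second function over StreamingVideo (an illustrative sketch, not from the original notes) uses different parts of each template branch: the total running time in minutes.
```scheme
;; (streamingvideo-length info) produces the total length
;; of info in minutes
;; streamingvideo-length: StreamingVideo → Nat
;; Examples:
(check-expect (streamingvideo-length
               (make-movieinfo "Spielberg" "ET" 115 "Sci-Fi")) 115)
(check-expect (streamingvideo-length
               (make-tvseries "Friends" 236 22)) 5192)

(define (streamingvideo-length info)
  (cond [(movieinfo? info) (movieinfo-duration info)]
        [else (* (tvseries-eps info) (tvseries-len-per info))]))
```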
Reasons for the design recipe and the template design
- to make sure that you understand the type of data being consumed and produced by the function
- to take advantage of common patterns in code
Reminder: **anyof** types
If a consumed or produced value for a function can be one of a restricted set of types or values, we will use the notation
`(anyof type1 type2 ... typeK v1 ... vT)`
For example, if we hadn’t defined `StreamingVideo` as a new type, we could have written the contract for `streamingvideo-title` as
`;; streamingvideo-title: (anyof MovieInfo TVSeries) → Str`
A nested structure
(define-struct doublefeature (first second start-hour))
;; A DoubleFeature is a
;; (make-doublefeature MovieInfo MovieInfo Nat),
;; requires:
;; start-hour is between 0 and 23, inclusive
An example of a DoubleFeature is
```scheme
(define classic-movies
  (make-doublefeature
   (make-movieinfo "Welles" "Citizen Kane" 119 "Drama")
   (make-movieinfo "Kurosawa" "Rashomon" 88 "Mystery")
   20))
```
- Develop the function template.
- What is the title of the first movie?
- Do the two movies have the same genre?
- What is the total duration for both movies?
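Possible answers to the exercises above (sketches; note that the selectors nest, because the `first` and `second` fields are themselves `MovieInfo` structures):
```scheme
;; doublefeature-template: DoubleFeature → Any
(define (doublefeature-template df)
  (... (doublefeature-first df) ...
       (doublefeature-second df) ...
       (doublefeature-start-hour df) ...))

;; Title of the first movie:
(movieinfo-title (doublefeature-first classic-movies)) ⇒ "Citizen Kane"

;; Do the two movies have the same genre?
(string=? (movieinfo-genre (doublefeature-first classic-movies))
          (movieinfo-genre (doublefeature-second classic-movies))) ⇒ false

;; Total duration of both movies:
(+ (movieinfo-duration (doublefeature-first classic-movies))
   (movieinfo-duration (doublefeature-second classic-movies))) ⇒ 207
```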
Goals of this module
You should be comfortable with these terms: structure, field, constructor, selector, type predicate, dynamic typing, static typing, data definition, structure definition, template.
You should be able to write functions that consume and produce structures, including Posns.
You should be able to create structure and data definitions for a new structure, determining an appropriate type for each field.
You should know what functions are defined by a structure definition, and how to use them.
You should be able to write the template associated with a structure definition, and to expand it into the body of a particular function that consumes that type of structure.
You should understand the use of type predicates and be able to write code that handles mixed data.
XEP-0122: Data Forms Validation
Matthew Miller
mailto:linuxwolf@outer-planes.net
xmpp:linuxwolf@outer-planes.net
2018-03-21
Version 1.0.2
<table>
<thead>
<tr>
<th>Status</th>
<th>Type</th>
<th>Short Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>Draft</td>
<td>Standards Track</td>
<td>xdata-validate</td>
</tr>
</tbody>
</table>
This specification defines a backwards-compatible extension to the XMPP Data Forms protocol that enables applications to specify additional validation guidelines related to a form, such as validation of standard XML datatypes, application-specific datatypes, value ranges, and regular expressions.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
1 Introduction
Data Forms (XEP-0004) \(^1\) ("x:data") provides a simple and interoperable way to request and present information for both applications and humans. However, the simple nature of "x:data" requires the form interpreter at times to guess as to exactly what type of information is being requested or provided. This document builds upon "x:data" to provide this additional validation.
2 Requirements
The requirements for this document are:
- Backwards compatible with the existing "x:data" protocol.
- Provide extended datatypes (such as dates, times, and numbers).
- Allow for multiple validation methods.
- Allow for user-defined datatypes.
- Accommodate value ranges.
- Accommodate regular expression matching.
3 Use Cases
This document defines a new namespace, "http://jabber.org/protocol/xdata-validate". The root element for this namespace is `<validate/>`, and MUST be contained within a `<field/>` element (qualified by the 'jabber:x:data' namespace) for each Data Forms field that possesses additional validation information.
3.1 Datatype Validation
The simplest usage is to provide a more-granular datatype for a `<field/>` element used in Data Forms. To provide this datatype information, a `<validate/>` element is included whose 'datatype' attribute specifies the data type of any `<value/>` contained within the `<field/>` element:
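A minimal sketch of such a field (mirroring the field reused in Listing 2, with no validation method child) is:

Listing 1: Datatype validation
```xml
<field var='evt.date' type='text-single' label='Event Date/Time'>
  <validate xmlns='http://jabber.org/protocol/xdata-validate'
            datatype='xs:dateTime'/>
  <value>2003-10-06T11:22:00-07:00</value>
</field>
```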
The preceding example demonstrates a field that is expected to contain a date/time value. The 'datatype' attribute specifies the datatype. This attribute is OPTIONAL, and defaults to "xs:string". It MUST meet one of the following conditions:
- Start with "xs:", and be one of the "built-in" datatypes defined in XML Schema Part 2
- Start with a prefix registered with the XMPP Registrar
- Start with "x:", and specify a user-defined datatype.
Note that while "x:" allows for ad-hoc definitions, its use is NOT RECOMMENDED.
3.2 Validation Methods
In addition to datatypes, the validation method can also be provided. The method is specified via a child element. The validation methods defined in this document are:
- <basic/> for validation only against the datatype itself
- <open/> for open-ended validation against the datatype
- <range/> for validation against a given min/max and the datatype
- <regex/> for validation against a given regular expression and the datatype
If no validation method is specified, form processors MUST assume <basic/> validation. The <validate/> element SHOULD include one of the above validation method elements, and MUST NOT include more than one.
Any validation method applied to a field of type "list-multi", "list-single", or "text-multi" (other than <basic/>) MUST imply the same behavior as <open/>, with the additional constraints defined by that method.
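For illustration, a hedged sketch of this rule (the field and values are invented for this example): a "list-single" field carrying a `<range/>` method behaves as `<open/>` plus the range constraint, so a user may enter any value within the range, not only the listed options:

```xml
<field var='evt.priority' type='list-single' label='Priority'>
  <validate xmlns='http://jabber.org/protocol/xdata-validate'
            datatype='xs:int'>
    <range min='1' max='9'/>
  </validate>
  <option><value>1</value></option>
  <option><value>5</value></option>
  <option><value>9</value></option>
</field>
```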
---
1. XML Schema Part 2: Datatypes [http://www.w3.org/TR/xmlschema11-2/].
2. The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation. For further information, see [https://xmpp.org/registrar/].
3.2.1 <basic/> Validation
Building upon the earlier example, to indicate that the value(s) should simply match the field type and datatype constraints, the <validate/> element shall contain a <basic/> child element.
Listing 2: Basic validation
```xml
<field var='evt.date' type='text-single' label='Event Date/Time'>
<validate xmlns='http://jabber.org/protocol/xdata-validate' datatype='xs:dateTime'>
<basic/>
</validate>
<value>2003-10-06T11:22:00-07:00</value>
</field>
```
Using <basic/> validation, the form interpreter MUST follow the validation rules of the datatype (if understood) and the field type.
3.2.2 <open/> Validation
For "list-single" or "list-multi", to indicate that the user may enter a custom value (matching the datatype constraints) or choose from the predefined values, the <validate/> element shall contain an <open/> child element:
Listing 3: Open validation
```xml
<field var='evt.category' type='list-single' label='Event Category'>
<validate xmlns='http://jabber.org/protocol/xdata-validate' datatype='xs:string'>
<open/>
</validate>
<option><value>holiday</value></option>
<option><value>reminder</value></option>
<option><value>appointment</value></option>
</field>
```
The <open/> validation method applies to "text-multi" differently; it hints that each value for a "text-multi" field shall be validated separately. This effectively turns "text-multi" fields into an open-ended "list-multi", with no options and all values automatically selected.
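A sketch of this behavior (the field name and URIs are invented for illustration): each `<value/>` of the "text-multi" field below would be validated separately against the datatype:

```xml
<field var='evt.links' type='text-multi' label='Related Links'>
  <validate xmlns='http://jabber.org/protocol/xdata-validate'
            datatype='xs:anyURI'>
    <open/>
  </validate>
  <value>http://www.example.com/agenda</value>
  <value>http://www.example.com/minutes</value>
</field>
```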
3.2.3 <range/> Validation
To indicate that the value should fall within a certain range, the <validate/> element shall contain a <range/> child element:
Listing 4: Range validation
```xml
<field var='evt.date' type='text-single' label='Event Date/Time'>
<validate xmlns='http://jabber.org/protocol/xdata-validate'
datatype='xs:dateTime'>
<range min='2003-10-05T00:00:00-07:00'
max='2003-10-24T23:59:59-07:00'/>
</validate>
<value>2003-10-06T11:22:00-07:00</value>
</field>
```
The 'min' and 'max' attributes of the `<range/>` element specify the minimum and maximum allowable values, respectively. Both attributes are OPTIONAL, and the interpretation of their values depends on the datatype in use. The `<range/>` element SHOULD possess either a 'min' or 'max' attribute, and MAY possess both. If neither attribute is included, the processor MUST assume that there are no range constraints.
3.2.4 `<regex/>` Validation
To indicate that the value should be restricted to a regular expression, the `<validate/>` element shall contain a `<regex/>` child element:
Listing 5: Regular expression validation
```xml
<field var='ssn' type='text-single' label='Social Security Number'>
  <desc>This field should be your SSN, including '-' (e.g. 123-12-1234)</desc>
  <validate xmlns='http://jabber.org/protocol/xdata-validate'
            datatype='xs:string'>
    <regex>([0-9]{3})-([0-9]{2})-([0-9]{4})</regex>
  </validate>
</field>
```
The XML character data of this element is the pattern to apply. The syntax of this content MUST be that defined for POSIX extended regular expressions\(^4\), including support for Unicode\(^5\). The `<regex/>` element MUST contain character data only.
\(^4\) The "best" definition of this syntax can be found in the `re_format(7)` man page
\(^5\) Guidelines for adapting regular expressions to support Unicode is defined at [http://www.unicode.org/reports/tr18/](http://www.unicode.org/reports/tr18/)
3.3 Selection Ranges in "list-multi"
For "list-multi", validation can indicate (via the <list-range/> element) that a minimum and maximum number of options should be selected and/or entered. This selection range MAY be combined with the other methods to provide more flexibility.
Listing 6: Selection Range validation
```xml
<field var='evt.notify-methods' type='list-multi' label='Notify me by'>
<validate xmlns='http://jabber.org/protocol/xdata-validate' datatype='xs:string'>
<basic/>
<list-range min='1' max='3'/>
</validate>
<option><value>e-mail</value></option>
<option><value>jabber/xmpp</value></option>
<option><value>work phone</value></option>
<option><value>home phone</value></option>
<option><value>cell phone</value></option>
</field>
```
The <list-range/> element SHOULD be included only when the <field/> is of type "list-multi" and SHOULD be ignored otherwise.
The 'max' attribute specifies the maximum allowable number of selected/entered values. This attribute is OPTIONAL. The value MUST be a positive integer.
The 'min' attribute specifies the minimum allowable number of selected/entered values. This attribute is OPTIONAL. The value MUST be a positive integer.
The <list-range/> element SHOULD possess either a 'min' or 'max' attribute, and MAY possess both. If neither attribute is included, the processor MUST assume that there are no selection constraints.
4 Implementation Notes
4.1 Required to Support
At a minimum, implementations MUST support the following:
- Datatype validation
- The <basic/> validation method
If an implementation does not understand the specified datatype, it MUST validate according to the default "xs:string" datatype. If an implementation does not understand the specified
method, it MUST validate according to the `<basic/>` method.
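As an illustration of this fallback (the "x:temperature" datatype is invented for this sketch), a processor that does not recognize the datatype below simply validates the value as an "xs:string":

```xml
<field var='sensor.reading' type='text-single' label='Reading'>
  <validate xmlns='http://jabber.org/protocol/xdata-validate'
            datatype='x:temperature'>
    <basic/>
  </validate>
  <value>21.5</value>
</field>
```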
4.2 Namespacing
While all elements associated with this document MUST be qualified by the 'http://jabber.org/protocol/xdata-validate' namespace, explicitly declaring the default namespace for each instance can be overly verbose. However, Jabber/XMPP implementations have historically been very lax regarding namespacing, thus requiring some careful use of prefixes.
The use of namespace prefixes is RECOMMENDED for large forms, to reduce the data size. To maintain the highest level of compatibility, implementations sending the form using prefixes SHOULD use the namespace prefix "xdv", and SHOULD declare the namespace prefix mapping in the ancestor `<x xmlns='jabber:x:data'/>` element:
Listing 7: Example of recommended namespace prefixing
```xml
<x xmlns='jabber:x:data'
xmlns:xdv='http://jabber.org/protocol/xdata-validate'
type='form'>
<title>Sample Form</title>
<instructions>
Please provide information for the following fields...
</instructions>
<field type='text-single' var='name' label='Event Name'/>
<field type='text-single' var='date/start' label='Starting Date'>
<xdv:validate datatype='xs:date'>
<xdv:basic/>
</xdv:validate>
</field>
<field type='text-single' var='date/end' label='Ending Date'>
<xdv:validate datatype='xs:date'>
<xdv:basic/>
</xdv:validate>
</field>
</x>
```
4.3 Internationalization/Localization
This document relies on the internationalization/localization mechanisms provided by XMPP Core. As much as possible, all datatype formats MUST be locale-independent.
4.4 Form Submissions
Form processors MUST NOT assume that a form with validation has actually been validated when submitted. There is no realistic expectation that form interpreters honor validation.
4.5 Existing Protocols
While this document is compatible with the existing "x:data" definition, form providers SHOULD first determine support for it, using either Entity Capabilities (XEP-0115) if presence-aware or Service Discovery (XEP-0030). This is especially important for limited-connection and/or limited-capabilities devices, such as cell phones.
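A sketch of such a check using Service Discovery (the JIDs and stanza id are invented; support is assumed to be advertised via the protocol namespace as a disco feature):

```xml
<iq type='get' from='romeo@montague.net/home' to='shakespeare.lit' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>

<iq type='result' from='shakespeare.lit' to='romeo@montague.net/home' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <feature var='http://jabber.org/protocol/xdata-validate'/>
  </query>
</iq>
```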
4.6 Display Considerations
Although primarily intended for validating form submission, validation MAY have an impact on display, and MAY be applied to data forms that are not submitted (e.g. "result" type forms). The following table outlines which field types a particular validation method is or is not appropriate for, and how a display SHOULD interpret the validation methods if considered:
<table>
<thead>
<tr>
<th>Validation Method</th>
<th>SHOULD be Allowed</th>
<th>SHOULD NOT be Allowed</th>
<th>Display Suggestions</th>
</tr>
</thead>
<tbody>
<tr>
<td>basic</td>
<td>fixed</td>
<td>list-multi</td>
<td>list-single</td>
</tr>
<tr>
<td>open</td>
<td>jid-multi</td>
<td>list-multi</td>
<td>list-single</td>
</tr>
</tbody>
</table>
If a particular field type is not listed, the display MAY include validation support, but is not expected to do so.
4.7 Validation Ranges
The `<range/>` validation method MUST be used only with datatypes that have finite quantities. Within the standard datatype set, it MUST NOT be used with "xs:string".
5 Security Considerations
This document introduces no security concerns above and beyond those specified in XEP-0004: Data Forms.
6 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA).
7 XMPP Registrar Considerations
7.1 Protocol Namespaces
The XMPP Registrar includes 'http://jabber.org/protocol/xdata-validate' in its registry of protocol namespaces.
7.2 Registries
7.2.1 Datatype Prefixes Registry
The XMPP Registrar maintains a registry of datatype prefixes used in the context of Data Forms Validation (see <https://xmpp.org/registrar/xdv-prefixes.html>), where each prefix denotes a group of related datatypes.
In order to submit new values to this registry, the registrant shall define an XML fragment of the following form and either include it in the relevant XMPP Extension Protocol or send it to the email address registrar@xmpp.org:
```xml
<datatype-prefix>
<prefix>the prefix token (e.g., "xs")</prefix>
<desc>a natural-language description of the datatype family</desc>
<doc>the document in which datatype family is specified</doc>
</datatype-prefix>
```
The registrant may register more than one prefix at a time, each contained in a separate <datatype-prefix/> element.
As part of this document, the following datatype prefixes shall be registered:
```xml
<datatype-prefix>
<prefix>x</prefix>
<desc>An ad-hoc datatype</desc>
<doc>XEP-0122</doc>
</datatype-prefix>
<datatype-prefix>
<prefix>xs</prefix>
</datatype-prefix>
```
9 The Internet Assigned Numbers Authority (IANA) is the central coordinator for the assignment of unique parameter values for Internet protocols, such as port numbers and URI schemes. For further information, see <http://www.iana.org/>.
7.2.2 Datatypes Registry
The XMPP Registrar maintains a registry of datatypes used in the context of Data Forms Validation (see <https://xmpp.org/registrar/xdv-datatypes.html>), where each datatype name includes the relevant prefix (e.g., "xs:anyURI").
In order to submit new values to this registry, the registrant shall define an XML fragment of the following form and either include it in the relevant XMPP Extension Protocol or send it to the email address registrar@xmpp.org:
```xml
<datatype>
<name>the full datatype name (e.g., "xs:string")</name>
<desc>a natural-language description of the datatype</desc>
<methods>the validation methods that may apply to the datatype</methods>
<min>the minimum value for the datatype (if any)</min>
<max>the maximum value for the datatype (if any)</max>
</datatype>
```
The registrant may register more than one datatype at a time, each contained in a separate `<datatype/>` element.
The following submission contains the built-in datatypes defined in XML Schema Part 2 that are deemed most likely to be useful in the context of the Data Forms protocol; additional datatypes defined therein, as well as other datatypes not defined in XML Schema Part 2, may be registered via separate submissions in the future.
```xml
<datatype>
<name>xs:anyURI</name>
<desc>a Uniform Resource Identifier Reference (URI)</desc>
<methods>basic regex</methods>
<min>N/A</min>
<max>N/A</max>
</datatype>
<datatype>
<name>xs:byte</name>
<desc>an integer with the specified min/max</desc>
<methods>basic range</methods>
<min>-128</min>
<max>127</max>
</datatype>
<datatype>
  <name>xs:date</name>
  <desc>a calendar date</desc>
  <methods>basic range regex</methods>
  <min>N/A</min>
  <max>N/A</max>
</datatype>
<datatype>
  <name>xs:dateTime</name>
  <desc>a specific instant of time</desc>
  <methods>basic range regex</methods>
  <min>N/A</min>
  <max>N/A</max>
</datatype>
<datatype>
  <name>xs:decimal</name>
  <desc>an arbitrary-precision decimal number</desc>
  <methods>basic range</methods>
  <min>none</min>
  <max>none</max>
</datatype>
<datatype>
  <name>xs:double</name>
  <desc>an IEEE double-precision 64-bit floating point type</desc>
  <methods>basic range</methods>
  <min>none</min>
  <max>none</max>
</datatype>
<datatype>
  <name>xs:int</name>
  <desc>an integer with the specified min/max</desc>
  <methods>basic range</methods>
  <min>-2147483648</min>
  <max>2147483647</max>
</datatype>
<datatype>
  <name>xs:integer</name>
  <desc>a decimal number with no fraction digits</desc>
  <methods>basic range</methods>
  <min>none</min>
  <max>none</max>
</datatype>
<datatype>
  <name>xs:language</name>
  <desc>a language identifier as defined by RFC 1766</desc>
  <methods>basic regex</methods>
  <min>N/A</min>
  <max>N/A</max>
</datatype>
<datatype>
  <name>xs:long</name>
  <desc>an integer with the specified min/max</desc>
  <methods>basic range</methods>
  <min>-9223372036854775808</min>
  <max>9223372036854775807</max>
</datatype>
```
8 XML Schema
```xml
<?xml version='1.0' encoding='UTF-8'?>
<xs:schema
    xmlns:xs='http://www.w3.org/2001/XMLSchema'
    targetNamespace='http://jabber.org/protocol/xdata-validate'
    xmlns='http://jabber.org/protocol/xdata-validate'
    elementFormDefault='qualified'>

  <xs:element name='validate'>
    <xs:complexType>
      <xs:sequence>
        <xs:choice minOccurs='0' maxOccurs='1'>
          <xs:element ref='basic'/>
          <xs:element ref='open'/>
          <xs:element ref='range'/>
          <xs:element ref='regex'/>
        </xs:choice>
        <xs:element ref='list-range' minOccurs='0' maxOccurs='1'/>
      </xs:sequence>
      <xs:attribute name='datatype'
                    type='xs:string'
                    use='optional'
                    default='xs:string'/>
    </xs:complexType>
  </xs:element>

  <xs:element name='basic' type='empty'/>
  <xs:element name='open' type='empty'/>

  <xs:element name='range'>
    <xs:complexType>
      <xs:simpleContent>
        <xs:extension base='empty'>
          <xs:attribute name='min' use='optional'/>
          <xs:attribute name='max' use='optional'/>
        </xs:extension>
      </xs:simpleContent>
    </xs:complexType>
  </xs:element>

  <xs:element name='regex' type='xs:string'/>

  <xs:element name='list-range'>
    <xs:complexType>
      <xs:simpleContent>
        <xs:extension base='empty'>
          <xs:attribute name='min' type='xs:unsignedInt' use='optional'/>
          <xs:attribute name='max' type='xs:unsignedInt' use='optional'/>
        </xs:extension>
      </xs:simpleContent>
    </xs:complexType>
  </xs:element>

  <xs:simpleType name='empty'>
    <xs:restriction base='xs:string'>
      <xs:enumeration value=''/>
    </xs:restriction>
  </xs:simpleType>

</xs:schema>
```
A method for generating a representation of multimedia content by first segmenting the multimedia content spatially and temporally to extract objects. Feature extraction is applied to the objects to produce semantic and syntactic attributes, relations, and a containment set of content entities. The content entities are coded to produce directed acyclic graphs of the content entities, where each directed acyclic graph represents a particular interpretation of the multimedia content. Attributes of each content entity are measured and the measured attributes are assigned to each corresponding content entity in the directed acyclic graphs to rank order the multimedia content.
11 Claims, 8 Drawing Sheets
U.S. PATENT DOCUMENTS
5,873,811 A * 2/1999 Harel 707/3
6,002,803 A * 12/1999 Qian et al. 382/242
6,516,690 B1 * 2/2003 Lennon et al. 382/173
6,660,603 B1 * 12/2003 Vahid et al. 709/247
OTHER PUBLICATIONS
E. Wold et al., “Content-Based Classification, Search, and Retrieval of Audio,” IEEE, Fall 1999, pp. 27-36.
* cited by examiner
[Drawing sheets: FIGS. 1a-1c (prior art) show the Audio-Visual DS containing a Syntactic DS, a Semantic DS, and a Syntactic/Semantic Relation Graph DS; the Syntactic DS containing a Segment DS, a Region DS, and a Segment/Region Relation Graph DS; and the Semantic DS containing an Event DS, an Object DS, and an Event/Object Relation Graph DS. FIG. 2 and FIG. 3b show description schemes according to the invention, including a "Commentary" entity with attributes and properties (speaker: Bob Costas; text: "Clavin Schtaldi winds up with a fast ball"). FIG. 4 shows a flow diagram of the method.]
METHOD FOR REPRESENTING AND COMPARING MULTIMEDIA CONTENT ACCORDING TO RANK
CROSS-REFERENCE TO RELATED APPLICATION
This is a Continuation-in-Part application of U.S. patent application Ser. No. 09/385,169, “Method for Representing and Comparing Multimedia Content” filed on Aug. 30, 1999 now U.S. Pat. No. 6,546,135 by Lin et al.
FIELD OF THE INVENTION
This invention relates generally to processing multimedia content, and more particularly, to representing and comparing ranked multimedia content.
BACKGROUND OF THE INVENTION
There exist many standards for encoding and decoding multimedia content. The content can include audio signals in one dimension, images with two dimensions in space, video sequences with a third dimension in time, text, or combinations thereof. Numerous standards exist for audio and text.
For images, the best known standard is JPEG, and for video sequences, the most widely used standards include MPEG-1, MPEG-2 and H.263. These standards are relatively low-level specifications that primarily deal with the spatial compression in the case of images, and spatial and temporal compression for video sequences. As a common feature, these standards perform compression on a frame basis. With these standards, one can achieve high compression ratios for a wide range of applications.
Newer video coding standards, such as MPEG-4, see “Information Technology—Generic coding of audio/visual objects,” ISO/IEC 14496-2 (MPEG4 Visual), November 1998, allow arbitrary-shaped objects to be encoded and decoded as separate video object planes (VOP). This emerging standard is intended to enable multimedia applications, such as interactive video, where natural and synthetic materials are integrated, and where access is universal. For example, one might want to “cut-and-paste” a moving figure or object from one video to another. In this type of scenario, it is assumed that the objects in the multimedia content have been identified through some type of segmentation algorithm, see for example, U.S. patent application Ser. No. 09/326,750 “Method for Ordering Image Spaces to Search for Object Surfaces” filed on Jun. 4, 1999 by Lin et al.
The most recent standardization effort taken on by the MPEG committee is that of MPEG-7, formally called “Multimedia Content Description Interface,” see “MPEG-7 Context, Objectives and Technical Roadmap,” ISO/IEC N2729, March 1999. Essentially, this standard plans to incorporate a set of descriptors and description schemes that can be used to describe various types of multimedia content. The descriptor and description schemes are associated with the content itself and allow for fast and efficient searching of material that is of interest to a particular user. It is important to note that this standard is not meant to replace previous coding standards. Rather, it builds on other standard representations, especially MPEG-4, because the multimedia content can be decomposed into different objects and each object can be assigned a unique set of descriptors. Also, the standard is independent of the format in which the content is stored. MPEG-7 descriptors can be attached to compressed or uncompressed data.
Descriptors for multimedia content can be used in a number of ways, see for example “MPEG-7 Applications,” ISO/IEC N2728, March 1999. Most interesting, for the purpose of the description below, are database search and retrieval applications. In a simple application environment, a user may specify some attributes of a particular object. At this low-level of representation, these attributes may include descriptors that describe the texture, motion and shape of the particular object. A method of representing and comparing shapes has been described in U.S. patent application Ser. No. 09/326,759 “Method for Ordering Image Spaces to Represent Object Shapes” filed on Jun. 4, 1999 by Lin et al. One of the drawbacks of this type of descriptor is that it is not straightforward to effectively combine this feature of the object with other low-level features. Another problem with such low-level descriptors, in general, is that a high-level interpretation of the object or multimedia content is difficult to obtain. Hence, there is a limitation in the level of representation.
To overcome the drawbacks mentioned above and obtain a higher-level of representation, one may consider more elaborate description schemes that combine several low-level descriptors. In fact, these description schemes may even contain other description schemes, see “MPEG-7 Description Schemes (V0.5),” ISO/IEC N2844, July 1999.
As shown in FIG. 1a, a generic description scheme (DS) has been proposed to represent multimedia content. This generic audio-visual DS 100 includes a separate syntactic DS 101, and a separate semantic DS 102. The syntactic structure refers to the physical and logical signal aspects of the content, while the semantic structure refers to the conceptual meaning of the content. For a video sequence, the syntactic elements may be related to the color, shape and motion of a particular object. On the other hand, the semantic elements may refer to information that cannot be extracted from low-level descriptors, such as the time and place of an event or the name of a person in the multimedia content. In addition to the separate syntactic and semantic DSs, a syntactic-semantic relation graph DS 103 has been proposed to link the syntactic and semantic DSs.
The major problem with such a scheme is that the relations and attributes specified by the syntactic and semantic DS are independent, and it is the burden of the relation graph DS to create a coherent and meaningful interpretation of the multimedia content. Furthermore, the DSs mentioned above are either tree-based or graph-based. Tree-based representations provide an efficient means of searching and comparing, but are limited in their expressive ability; the independent syntactic and semantic DSs are tree-based. In contrast, graph-based representations provide a great deal of expressive ability, but are notoriously complex and prone to error for search and comparison.
For the task at hand, it is crucial that a representation scheme is not limited to how multimedia content is interpreted. The scheme should also provide an efficient means of comparison. From a human perspective, it is possible to interpret multimedia content in many ways; therefore, it is essential that any representation scheme allows multiple interpretations of the multimedia content. Although the independent syntactic and semantic DS, in conjunction with the relation graph DS, may allow multiple interpretations of multimedia content, it would not be efficient to perform comparisons.
As stated above, it is possible for a DS to contain other DSs, in the same way that the generic DS includes a syntactic DS, a semantic DS, and a syntactic/semantic relation graph DS. It has been proposed that the syntactic DS
directed acyclic graphs of the content entities. Edges of the directed acyclic graphs represent the content entities, and nodes represent breaks in the segmentation. Each directed acyclic graph represents a particular interpretation of the multimedia content.
In one aspect the multimedia content is a two dimensional image, and in another aspect the multimedia content is a three dimensional video sequence.
In a further aspect of the invention, representations for different multimedia contents are compared based on similarity scores obtained for the directed acyclic graphs. Attributes of each content entity are measured and the measured attributes are assigned to each corresponding content entity in the directed acyclic graphs to rank order the multimedia content.
In another aspect of the invention, attributes of each content entity are measured, and the entities are ranked according to the measured attributes. The rank list can be culled for desirable permutations of primary content entities as well as secondary entities associated with the primary entities. By culling desirable permutations, one can summarize, browse or traverse the multimedia content. For example, the most active and least active video segments of a video sequence form a summary that has the desirable attribute of conveying the dynamic range of action contained in the video sequence.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1a-1c are block diagrams of prior art description schemes;
FIG. 2 is a block diagram of a description scheme for a general content entity according to the invention;
FIGS. 3a-3c are block diagrams of description schemes for example content entities;
FIG. 4 is a flow diagram of a method for generating the description scheme according to the invention;
FIG. 5 is a flow diagram for a method for comparing the description schemes according to the invention;
FIG. 6 is a block diagram of a client accessing multimedia on a server according to the invention;
FIG. 7 is a ranked graph;
FIG. 8 is a summary of the graph of FIG. 7; and
FIG. 9 is a ranked graph with secondary content entities.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Introduction
We describe methods for representing and comparing multimedia content according to a ranking of the content. The methods are based on a new generic data structure, which includes a directed acyclic graph (DAG) representation. In the following, we describe objects in our scheme and the advantages of the DAG representation. It is the DAG representation that allows the scheme to infer multiple interpretations of multimedia content, yet still be efficient in the comparison with other multimedia content. In fact, when we score with respect to a probability likelihood function, the computations are not only tractable, but also optimal.
Besides describing the generic data structure, we also describe three important functions that allow us to realize this efficient representation and perform comparisons. The first function will be referred to as a DAG-Coder. The DAG-Coder is responsible for taking individual content entities contained in the object and producing a DAG-
Composition. The second function is an Object-Compare. The Object-Compare efficiently compares two content entities by determining a similarity score. The third function is Content Ranker. This function ascribes a ranking score to content entities so that DAG-Compositions can be traversed, browsed, or summarized according to rank. The traversing, browsing, and summarizing can be an increasing or decreasing rank order.
After the data structure and three functions mentioned above have been described, we review and elaborate on applications that are enabled by our representation scheme. An integrated application system that performs feature extraction, database management and object comparison is described. Also described is an application system for traversing, browsing, and summarizing multimedia content according to a ranking of the content.
Generic Description Scheme of a Content Entity
To introduce our scheme of representing content objects, we define generic object types, and restrictions on instantiations of such generic object types.
As shown in FIG. 2, a content entity, for example, a video entity 200 is the main part of our scheme. The content entity is a data object that relates contained objects together. The content entity is a recursive data structure divided into four parts: attributes (properties) 201, relations 202, DAG-Compositions 203, and a containment set 204.
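As an illustrative sketch only (the class and field names are ours, not drawn from the specification), the four-part recursive structure might be modeled as:

```python
from dataclasses import dataclass, field

# Illustrative model of the content entity 200; the numeric references
# in comments correspond to the parts named in FIG. 2.
@dataclass
class ContentEntity:
    attributes: dict = field(default_factory=dict)        # properties 201
    relations: list = field(default_factory=list)         # relations 202
    dag_compositions: list = field(default_factory=list)  # DAG-Compositions 203
    containment: list = field(default_factory=list)       # containment set 204
```

The structure is recursive in that the containment set holds further `ContentEntity` instances.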
Attributes
The attributes 201 form the basis within our recursive description scheme. Attributes are an unordered set that contains properties that may provide details about parts of the entity or summarize the entity as a whole. Attributes are global to the object and may refer to such syntactic properties as color and motion, or other semantic properties of the object such as time and place. The attributes provide basic, low-level information without any structure; however, after structure is added, it is these properties that actually contribute to the degree of similarity. Also, as we will describe later, attributes can define an ordering that helps to compose and interpret the individual entities contained within the object. It should be noted that these properties are inherent qualities of the content entity that contains them and instantiations of this entity should be accessible/visible through the content entity itself.
As an example, a video sequence of an airplane landing on a runway may contain the semantic attributes of place, date, time, and temperature, along with the caption, “airplane (767) landing.” Some syntactic attributes that may be attached to this multimedia content are the trajectory of descent. Attached to the airplane object may be the color and shape of the airplane itself. Here, we make an important distinction between attributes of the multimedia content and attributes of the objects. The reason that the trajectory is an attribute of the multimedia content is because trajectory is relative to the ground. Therefore, it does not make sense as an attribute of the plane alone, whereas color and shape do make sense.
Relations
The relations (R) 202 are objects that detail relationships between content entities (VE). It is important to note that the context of the relations is given by the containing content entity. The reason is that multimedia content that is segmented differently will produce different relations. Essentially, the relation can be viewed as a hyperlink between a contained object and something else, for example, another content entity. Types of relations are global, and instantiations of relations should only be accessible within the content entity itself. One of the utilities of relations is that they may be useful in guiding a search. Returning to our example of the airplane landing, several relations can be identified: the plane is landing on the runway, the lights are guiding the plane, and the runway is located at a particular airport with a particular orientation.
The relations are different from containment, described below, in that the related object may not be completely contained by the content entity and therefore is not considered in similarity comparisons. However, relations allow a user to search for other relevant objects to the content entity in question. All the relations in the content entity must have one argument that is contained within the content entity.
DAG-Compositions
In general, the DAG-Compositions 203 are directed acyclic graphs 205 where edges 206 represent content entities and nodes 207 correspond to breakpoints in the segmentation. The DAG-Composition allows us to infer multiple interpretations of the same multimedia content. Because DAGs operate on 1D spaces, segmentation in this context refers to the delineation of some 1D process. For instance, if we consider a spatio-temporal multimedia content, then the temporal segmentation is a 1D process that defines points in time where several successive events may begin and end. Hence, we may have a DAG-Composition that corresponds to temporal actions. In the spatial domain, we may define an order from left to right across an image. In this way, we may have a DAG-Composition that corresponds to object positions from left to right. Of course, we may define other orderings such as a counter-clockwise spatial ordering, which may serve a totally different purpose.
In U.S. patent application Ser. Nos. 09/326,750 and 09/326,759, incorporated herein by reference, Voronoi ordering functions were respectively defined over the exterior and interior image space with respect to an object boundary. The ordering on the interior space was particularly useful in obtaining a skeleton-like representation of the object shape, then forming a partially ordered tree (POT), which made use of the DAG representation.
It should be emphasized though that the method of ordering 2D images or 3D video sequences to achieve DAG-Compositions is not the focus here, rather we are concerned with techniques that use the DAG-Composition to infer higher-level interpretations of a particular multimedia content.
Containment Set
The containment set 204 includes pointers to other content entities that are strictly contained temporally and/or spatially within the content entity 200. The restriction on the containment set is that one object cannot contain another object that contains the first object, i.e., containment induces a directed acyclic graph. The content entities need not be mutually exclusive and there is no ordering within the containment set. For example, in the video sequence of the airplane landing, the containment set includes pointers to each content entity. Some possibilities include pointers to the plane, the runway, the runway lights, the plane touching down, radio communications, etc.
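The acyclicity restriction on containment can be checked with a standard depth-first search; the sketch below (our own illustration, with containment given as a name-to-names map) returns false exactly when some entity transitively contains an entity that contains it:

```python
def containment_is_acyclic(contains):
    """contains: dict mapping an entity name to the names it strictly contains.
    Returns True iff the containment relation induces a directed acyclic graph."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / finished
    color = {e: WHITE for e in contains}

    def visit(e):
        color[e] = GRAY
        for child in contains.get(e, ()):
            if color.get(child, WHITE) == GRAY:
                return False  # back edge: the entity transitively contains itself
            if color.get(child, WHITE) == WHITE and not visit(child):
                return False
        color[e] = BLACK
        return True

    return all(visit(e) for e in list(contains) if color[e] == WHITE)
```

For the airplane example, the video contains the plane, the runway, and the runway lights, and no cycle arises.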
DAG-Coder
The DAG-Compositions are the result of different DAG-Coders applied to the content entity. In other words, given the content entities in the containment set and their relations, different DAG-Coders produce different interpretations of the multimedia content. This function is further described in the following.
A DAG-Coder is a function that segments a given content entity into its components by inducing an ordering over the content entity components. The DAG-Coder produces the DAG-Compositions 203. The DAG-Coder is global to the database and can be applied to any content entity. The DAG-Coder provides a perspective on the spatio-temporal content space and makes similarity calculations between objects more tractable. A path in the DAG represents an interpretation of the content entity 200. This DAG representation becomes a framework for the description scheme that can interchange syntactic and semantic information at any level. Furthermore, the complexity of the description scheme is hidden from the user.
Multiple Paths Through a DAG
The DAG-Coder produces multiple interpretations of the multimedia content through such DAG-Compositions. This is achieved through the multiple path structure of the DAG. In the following, we focus on what these multiple paths really mean in terms of the multimedia content.
FIGS. 3a-3c illustrate multiple paths in terms of an example “baseball video” entity 300. In FIG. 3a, the content entity 300 includes attributes 301, relations 302, DAG-compositions 303, and a containment set 304. In FIG. 3b, a content entity 310 includes attributes 311, relations 312, DAG-Compositions 313, and a containment set 314.
As illustrated, a temporal DAG can represent equivalent interpretations of the same event. For instance, as shown in FIGS. 3a and 3b, in the baseball video, a pitching and hitting sequence, or the inning that is being played, may be recognizable through the observation of syntactic elements, such as motion, color and/or activity. However, as an alternate means of representation as shown in FIG. 3c, such a sequence or event can also be summarized by attributes 321 of the commentary of the announcer 320. So, from this example, it is evident that multiple temporal interpretations of multimedia content are possible and that they may or may not occur simultaneously.
In the case of spatial DAGs, multiple paths can also represent equivalent interpretations, and in some sense can add a higher level of expressiveness. This added level of expressiveness is achieved by a grouping of individual objects into a composite object, then realizing that this composite object can be interpreted with a different semantic meaning. Usually, this new semantic interpretation is higher than before since more information is considered as a whole.
As an example, consider several objects: a gasoline pump, a gas attendant and a car. Individually, these objects have their own set of attributes and are distinct in their semantic meaning. Put together though, these individual objects can obviously be interpreted as a gas station. These multiple paths are efficiently represented by the DAG structure. On the syntactic side, various interpretations of the shape of an object for example may be deduced in a similar manner.
Generating Multimedia Content Description
FIG. 4 illustrates a method 400 for generating a description scheme 409 from a multimedia content 401. The multimedia content can be a 2D image or a 3D video sequence. First, spatial and temporal segmentation 410 is applied to the multimedia content to extract objects 411. Next, feature extraction 420 is applied to the objects to obtain a set of all content entities 429. Feature extraction includes attribute extraction 421, containment extraction 422, and relations extractions 423. The DAG-Coder 430, according to an ordering 431, generates the DAG-Compositions for the entities 429 to form the multimedia content description 409 according to the invention.
Comparing Different Multimedia Content
FIG. 5 shows a method for comparing two different multimedia contents, content 1 501 and content 2 502. The method generates 400 two description schemes, DS 503 and DS 504. The descriptions are compared 510 to produce a similarity score 509. Given two types of objects, the object comparator returns a similarity score in terms of the probabilistic likelihood that the two objects are the same. The Object-Compare function 510 may recursively call other Object-Compare functions. The Object-Compare is very similar to the algorithm used for comparing Partially Ordered Trees (POTs) as described in U.S. patent application Ser. No. 09/326,759, incorporated herein by reference. The key points are reviewed below.
We consider the matching algorithms used to compare Ordered Trees. Because trees are recursive structures, we can do optimal comparisons recursively and base the comparisons upon single node trees. Let us score our trees in the range of 0 to 1. Two single node trees are assigned a score of 1, while a single node tree and any tree with more than one node is assigned a score of 0.
For our inductive step, we note that each node corresponds to a sequence of edges and their respective children. To compare trees, we merely find the best correspondence between the sequences, while recursively comparing their corresponding children. A Longest Common Subsequence (LCS) matching can be used for this step. The Object-Compare method allows efficient, robust and optimal comparison of objects at the same complexity as the Ordered Tree comparisons.
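The recursive scoring described above can be sketched as follows; this is our own minimal illustration (trees as `(label, children)` tuples), not the claimed Object-Compare, and it scores structure only, leaving attribute comparison out of the base case:

```python
def tree_similarity(a, b):
    """Score two ordered trees in [0, 1]: two single-node trees score 1,
    a single-node tree against a larger tree scores 0, and the inductive
    step finds the best correspondence between child sequences with an
    LCS-style dynamic program, recursing on matched children."""
    ca, cb = a[1], b[1]
    if not ca and not cb:
        return 1.0   # base case: two single-node trees
    if not ca or not cb:
        return 0.0   # base case: single-node vs. multi-node tree
    m, n = len(ca), len(cb)
    # dp[i][j] = best total score matching the first i and first j children
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = dp[i - 1][j - 1] + tree_similarity(ca[i - 1], cb[j - 1])
            dp[i][j] = max(match, dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / max(m, n)   # normalize back into [0, 1]
```

In a full comparator, the base case would also weigh attribute similarity rather than returning a fixed 1.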
To handle the extra freedom in the expressiveness of DAGs, we use a DAG-Compare algorithm; see Lin et al., "Coding and Comparison of DAGs as a novel neural structure with application to on-line handwritten recognition," IEEE Trans. Signal Processing, 1996, incorporated herein by reference. We find the two best-matching paths between two DAGs. Although more general, the DAG-Compare is of the same order of complexity as the LCS search. Lastly, we should mention that the constraints on the containment hierarchy (as a DAG) allow us to use the POT-Compare algorithm, but the POT is merely a subset of our generic content entity.
Applications
The content description scheme described above is not only an expressive means of describing content entities, but also provides a robust similarity measure that is computationally efficient and can seamlessly integrate various descriptions, both semantic and syntactic. Within the description scheme according to our invention, content entities, their attributes and their relations form a basic hyperlink network such as available from the HTTP standard.
By constraining the graph structures over the hierarchy of our content entities and their descriptions to directed acyclic graphs, we gain extra expressiveness over ordered trees while keeping the computational complexity of robust comparison between content entities equivalent to that of an ordered tree comparison.
Freedom in Expressiveness
There is no strict hierarchy of content entities: any object may strictly contain another object as long as the containment is not contradictory. Instead of a tree hierarchy, the containment relation over the content entities induces a directed acyclic graph. Acyclicity is maintained by disallowing contradictory containment. The restriction on cycles enables an efficient recursive formulation of comparison.
Focusing on the DAG structure, we map the DAG structure of a DAG-Composition as follows: edges represent content entities, and nodes correspond to breakpoints in the segmentation. We can structure the object as a configuration of contained content entities within DAGs according to a predefined topological order. The restriction on a DAG, compared to a general graph structure, is its topological ordering. This order may be temporal or spatial, but it must be 1D. By following the order and obeying connectivity, a subgraph of the DAG structure leads to a new concept: an ordered path represents a particular interpretation of multimedia content, i.e., a representative view of the content entity as an ordered subset of its contained entities.
Because a DAG can contain multiple ordered paths, the DAG becomes a compact representation of the multiple interpretations of the data. The DAG data structure allows for the concept of parallel paths; thus, the DAG may integrate both semantic and syntactic elements through this parallel structure. The semantic and syntactic elements are not necessarily equivalent, but, within the context of the DAG structure, they can be made interchangeable by placing them on these parallel paths under a common ordering.
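As a minimal illustration of how a DAG compactly encodes multiple ordered paths, the sketch below (our own example; edges given as `(from_node, to_node, entity)` triples) enumerates every source-to-sink path, each of which is one interpretation of the content:

```python
def dag_paths(edges, source, sink):
    """Enumerate every source-to-sink path in a DAG whose edges are
    labeled with content entities; each path is one interpretation."""
    # adjacency map: node -> list of (next_node, entity_label)
    adj = {}
    for u, v, label in edges:
        adj.setdefault(u, []).append((v, label))

    paths = []

    def walk(node, labels):
        if node == sink:
            paths.append(labels)
            return
        for nxt, label in adj.get(node, ()):
            walk(nxt, labels + [label])

    walk(source, [])
    return paths
```

For the baseball example, a syntactic path ("pitch", then "hit") and a parallel semantic path (the announcer's commentary) can coexist between the same two breakpoints.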
These functionalities are a subset of a generic graph structure. However, as most graph matching problems are still open, these restrictions will allow us to compare these expressive structures. Although this ordering constrains the expressiveness of a DAG-composition, it does allow for element alignment in robust comparison of content entities.
Universal Multimedia Access
Because our description scheme is capable of representing and comparing multiple interpretations of multimedia content, it fits very well with the concept of Universal Multimedia Access (UMA). The basic idea of UMA, as shown in FIG. 6, is to enable client devices 601 with limited communication, processing, storage, and display capabilities to access, via a network 602, rich multimedia content 603 maintained by a server device 604.
Recently, several solutions have focused on adapting the multimedia content to the client devices. UMA can be provided in two basic ways: the first is by storing, managing, selecting, and delivering different versions of the media objects (images, video, audio, graphics and text) that comprise the multimedia presentations. The second way is by manipulating the media objects on-the-fly, such as by using methods for text-to-speech translation, image and video transcoding, media conversion and summarization. This allows the multimedia content delivery to adapt to the wide diversity of client device capabilities in communication, processing, storage, and display.
Our description scheme can support UMA through the first item mentioned above, that is, depending on the client-side capabilities, the server-side may choose to send a more elaborate interpretation of the multimedia content or simply send a brief summary of the multimedia content. In this way, our description scheme acts as a managing structure that helps decide which interpretation of the multimedia content is best suited for the client-side devices. As part of the attributes for a content entity, the requirements may include items such as the size of each image or video frame, the number of video frames in the multimedia content, and other fields that pertain to resource requirements.
Ranking
As an additional feature, the content entities in a DAG can have associated ranks. FIG. 7 shows a DAG 700 including edges 701-709 having associated ranks (R) 711-719. The ranking is according to the attributes, e.g., semantic intensity, syntactic direction, spatial, temporal, and so forth. The ranking can be in an increasing or decreasing order depending on some predetermined scale, for example a scale of one to ten, or alternatively, ten to one.
For example, the various segments of an "adventure-action" movie video can be ranked on a scale of 1-10 as to the intensity of the "action" in the movie. Similarly, the segments of a sports video, such as a football match, can be ranked, where a scoring opportunity receives a relatively high score, and an "injury" on the field receives a relatively low score. Segments of gothic romance videos can be ranked on the relative level of "romantic" activity, horror films on the level of fright-inducing scenes, comedies on their level of humor, rock videos on their loudness, and so forth. It should be understood that the measurements can be based on the semantic and/or the syntactic properties of the content.
The ranking can be manual or machine generated. For example, a high number of short segments in a row would generally indicate a high level of activity, whereas long segments would tend to indicate a lower level of activity. See Yeo et al., "Rapid Scene Analysis on Compressed Video," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 5, No. 6, December 1995, pages 533-544, for one way of measuring content attributes.
Once the various segments have been ranked, as shown in FIG. 8, it becomes possible to traverse the DAG 800 according to the rank-ordering. The traversal can be considered a permutation of the content. In FIG. 8, the arrows 801 indicate "skips," and the bolded edges indicate the only segments that are traversed. For example, here the ranking is based on "action," and only segments having an "action" ranking of eight or greater are traversed. It should be apparent that the traversing can be according to other rank orderings of the content.
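A hypothetical sketch of such a rank-based traversal (the segment names and helper functions are illustrative, not from the specification): segments below the threshold become the "skips," and the kept subsequence is the traversal.

```python
def rank_summary(segments, threshold):
    """Traverse (name, rank) segments in order, keeping only those whose
    rank meets the threshold; the kept subsequence is the summary."""
    return [name for name, rank in segments if rank >= threshold]

def top_k_segments(segments, k):
    """Cull the k highest-ranked segments, e.g. for a fixed-length summary."""
    return sorted(segments, key=lambda s: s[1], reverse=True)[:k]
```

For instance, with an "action" threshold of eight, only the high-action segments survive the traversal, in their original temporal order.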
Summary
Specifying a particular rank-based traversal in effect allows one to summarize a video. The "summary" shown in FIG. 8 is a "high-action" summary. Thus, if summaries for two different videos are extracted based on the same ranking criteria, the summaries can be compared with the scheme as shown in FIG. 5. The advantage here is that when the videos are fairly lengthy, extraneous segments not germane to the comparison can be rapidly skipped and ignored to provide a more meaningful and faster comparison.
In another embodiment, as shown in FIG. 9, some or all of the "primary content" entities 711-719 have associated secondary content entities (2nd) 901-909. A secondary content entity characterizes its associated primary entity in a different manner. For example, a fifteen-minute interview clip of a person speaking can be associated with just one frame of the segment, a still image of the same person, or perhaps text containing the person's name and a brief description of what the person is saying. Now, a traversal can be via the primary or associated secondary content entities, and a summary can be the primary content entities, or the secondary content entities, or a mix of either. For example, a low-bandwidth summary of a video would include only textual secondary entities in its traversal or selected permutations, and perhaps a few still images.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore,
it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
We claim:
1. A computer implemented method for ordering multimedia content, comprising the steps of:
- segmenting the multimedia content to extract video objects, in which the objects are video object planes;
- extracting and associating features of the video object to produce content entities, wherein the content entities are recursive data structures comprising features, relations, directed acyclic graphs and containment sets;
- coding the content entities to produce directed acyclic graphs of the content entities, each directed acyclic graph representing a particular interpretation of the multimedia content;
- measuring high-level temporal attributes of each content entity;
- assigning the measured high-level temporal attributes to each corresponding content entity in the directed acyclic graphs to order the content entities of the multimedia content;
- comparing the ordered content entities in a plurality of the directed acyclic graphs to determine similar interpretations of the multimedia content;
- traversing the multimedia content according to the directed acyclic graph and the measured attributes assigned to the content; and
- summarizing the multimedia content according to the directed acyclic graph and the measured attributes assigned to the content entities.
2. The method of claim 1 wherein the measured attributes include intensity attributes.
3. The method of claim 1 wherein the measured attributes include direction attributes.
4. The method of claim 1 wherein the measured attributes include spatial attributes and the order is spatial.
5. The method of claim 1 wherein the measured attributes include temporal attributes and the order is temporal.
6. The method of claim 1 wherein the measured attributes are arranged in an increasing rank order.
7. The method of claim 1 wherein the measured attributes are arranged in a decreasing rank order.
8. The method of claim 1 wherein the multimedia content is a three dimensional video sequence.
9. The method of claim 1 wherein nodes of the directed acyclic graphs represent the content entities and edges represent breaks in the segmentation, and the measured attributes are associated with the corresponding edges.
10. The method of claim 1 wherein at least one secondary content entity is associated with a particular content entity, and wherein the secondary content entity is selected during the traversing.
11. The method of claim 1 wherein a summary of the multimedia is a selected permutation of the content entities according to the associated ranks.
* * * * *
Abstract—Code search techniques are well-known as one of the techniques that helps code reuse. If developers input queries that represent functionality that they want, the techniques suggest code fragments that are related to the query. Generally, code search techniques suggest code at the component level of programming language such as class or file. Due to this, developers occasionally need to search necessary code in the suggested area. As a countermeasure, there is a code search technique where code is suggested based on the past reuse. The technique ignores structural code blocks, so that developers need to add some code to the pasted code or remove some code from it. That is, the advantages and disadvantages of the former technique are disadvantages and advantages of the latter one, respectively. In this research, we have conducted a comparative study to reveal which level of code suggestion is more useful for code reuse. In the study, we also compared a hybrid technique of the two techniques with them. As a result, we revealed that component-level suggestions were able to provide reusable code more precisely. On the other hand, reuse-level suggestions were more helpful to reuse larger code.
Keywords—Code search, Code reuse, Code clone
I. INTRODUCTION
Software reuse is known as a promising way to promote efficient software development. Keyword-based code search systems are a kind of frameworks that support code reuse [1], [2], [3]. Such systems return source code (code fragments) related to a given query (keywords that a developer inputs). However, existing keyword-based code search systems have a drawback. They suggest code at a component level of the programming language. Thus, they sometimes suggest extra (unnecessary) code together with necessary one. Developers need to seek code that they actually want to reuse from the suggested code. Most of the existing systems suggest code at file-level or class-level, so that developers need to search a small piece of reusable code from a large amount of suggested code if they want to implement small functions. Besides, developers do not always need code at the same level [4]. Sometimes they need a whole class but at other times they need only several lines of code. Search systems should suggest code at the level of developers’ demands.
As a way to solve the above issue, Ishihara et al. proposed a code search technique that suggests code at the level of past reuse [5]. This technique suggests code that has been reused before. In the technique, code clones (hereafter, clones) across software systems are regarded as reused code; that is, detecting clones among systems is regarded as identifying code reuse among them. To date, several techniques have been proposed to detect clones across systems [6], [7], [8]. However, suggesting code at reuse-level requires manual code addition or deletion after developers copy and paste the suggested code as it is. That is because reused code does not necessarily match structural code blocks such as a class, a method, or a simple block within a method. Developers do not need to seek the code that they actually want to reuse within a large amount of suggested code, but they do need to add some code to the pasted code or remove some code from it. The advantage and disadvantage of the component-level suggestion are the disadvantage and advantage of the reuse-level suggestion, respectively.
It is unclear which of the two techniques supports developers more efficiently and more effectively than the other one. Consequently, in this research, we have conducted a comparative study on the following techniques.
Technique-A: the first technique is a component-level code suggestion [1], which suggests code at method-level1. We do not describe this technique in further detail due to space limitations.
Technique-B: the second technique is a code suggestion based on past reuse [5].
Technique-C: the third technique is a revised version of the second technique. That is, suggested code is identified based on the past reuse, but they are reshaped to match with the structural units of programming language like the first technique. The third technique is our proposed technique in this paper.
The remainder of this paper is organized as follows: In Section II, we introduce our previous technique (Technique-B), and in Section III we explain our proposed technique (Technique-C); Section IV reports the experimental result, and we discuss it in Section V; threats to validity in the experiment are described in Section VI; lastly, we conclude this paper in Section VII.
II. TECHNIQUE-B: REUSE-LEVEL CODE SEARCH
Ishihara et al. proposed a code search technique that suggests code reused in the past [5]. Their technique firstly detects past code reuse in advance. When a developer inputs a query, it suggests reused code related to the query. The highlighted area in Figure 1(a) is an example of code suggested by their technique.
---
1The technique originally suggests code at class-level. But, in this experiment, we developed a tool that suggests code at method-level with the component-level code search technique because the class-level suggestion is too coarse in this comparative study.
Contrary to Technique-A, their technique ignores the borders of code blocks.
The technique leverages index-based clone detection [9] to detect past code reuse quickly. Code reuse is the most common reason that clones occur in source code [10], [11]. Thus, detecting clones can identify past code reuse.
Ishihara’s technique consists of two procedures, Code Analysis and Code Suggestion. The Code Analysis procedure detects clones from a given set of source files and extracts keywords from each of the detected clones. Clones and keywords are stored in a database. The Code Suggestion procedure suggests code related to queries input by developers, utilizing the database created by the Code Analysis procedure.
III. TECHNIQUE-C: REUSE-LEVEL CODE SEARCH CONSIDERING BLOCK BORDERS
The proposed technique is an enhancement of Ishihara’s technique. We explain the proposed technique by using Java code examples because our current implementation handles the Java language. However, it is not difficult to extend it to other programming languages.
Ishihara’s technique ignores block borders in source code, so that suggested code occasionally includes unnecessary program statements or lacks necessary ones for completing tasks. In Figure 1(a), the hatched area is the code suggested by Ishihara’s technique, and the dashed rectangles mean their block borders that are split by the suggested code. Thus, if we copy and paste the code as it is, the pasted code requires further modifications such as deleting unnecessary program statements or adding necessary ones.
Hereafter, we use the term decoupled block, which means a code block in which a border of a given clone exists. The two if-blocks that start at lines 240 and 245 are decoupled blocks of the clone in Figure 1(a).
The proposed technique suggests cloned code with consideration for code block borders: it adjusts the suggested code range based on decoupled blocks. More concretely, the proposed technique has two ways to adjust code ranges.
- Enlarging code range so as to include a decoupled block
- Shrinking code range so as to exclude a decoupled block
Decoupled blocks can occur at the top and the bottom of clones. For each decoupled block, the proposed technique adjusts code range with the above two strategies.
Figure 1 shows an example of code range adjustment. In Figure 1(a), the highlighted code is a clone, and the two if-blocks that start at lines 240 and 245 are its decoupled blocks. If we adopt the enlarging strategy, the highlighted code in Figure 1(b) is suggested to developers. Conversely, if we adopt the shrinking strategy, the highlighted code in Figure 1(c) is suggested. Neither of the suggested code ranges contains a partially-included sub-block.
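The two adjustment strategies can be sketched in a few lines of Java. The class, helper names, and line ranges below are illustrative (they are not part of our tool, and the ranges are not those of Figure 1): a block counts as decoupled when it overlaps the clone but neither range contains the other, i.e., a clone border falls inside the block.

```java
import java.util.Arrays;
import java.util.List;

public class RangeAdjust {
    // Inclusive range of source lines [start, end].
    static final class Range {
        final int start, end;
        Range(int start, int end) { this.start = start; this.end = end; }
        boolean contains(int line) { return start <= line && line <= end; }
        @Override public boolean equals(Object o) {
            return o instanceof Range && ((Range) o).start == start
                                      && ((Range) o).end == end;
        }
        @Override public int hashCode() { return 31 * start + end; }
        @Override public String toString() { return "[" + start + "-" + end + "]"; }
    }

    // A block is decoupled when it overlaps the clone but neither range
    // contains the other, i.e. a border of the clone falls inside the block.
    static boolean isDecoupled(Range block, Range clone) {
        boolean overlap = block.start <= clone.end && clone.start <= block.end;
        boolean blockInClone = clone.start <= block.start && block.end <= clone.end;
        boolean cloneInBlock = block.start <= clone.start && clone.end <= block.end;
        return overlap && !blockInClone && !cloneInBlock;
    }

    // Strategy 1: enlarge the range so every decoupled block is included.
    static Range enlarge(Range clone, List<Range> blocks) {
        int s = clone.start, e = clone.end;
        for (Range b : blocks) {
            if (isDecoupled(b, clone)) {
                s = Math.min(s, b.start);
                e = Math.max(e, b.end);
            }
        }
        return new Range(s, e);
    }

    // Strategy 2: shrink the range so every decoupled block is excluded.
    static Range shrink(Range clone, List<Range> blocks) {
        int s = clone.start, e = clone.end;
        for (Range b : blocks) {
            if (!isDecoupled(b, clone)) continue;
            if (b.contains(clone.start)) s = Math.max(s, b.end + 1);
            if (b.contains(clone.end))   e = Math.min(e, b.start - 1);
        }
        return new Range(s, e);
    }

    public static void main(String[] args) {
        Range clone = new Range(242, 247);                 // hypothetical clone
        List<Range> blocks = Arrays.asList(
                new Range(240, 243), new Range(245, 249)); // two cut if-blocks
        System.out.println(enlarge(clone, blocks));        // [240-249]
        System.out.println(shrink(clone, blocks));         // [244-244]
    }
}
```

With the two if-blocks cut at the top and bottom of the clone, enlarging absorbs both blocks while shrinking keeps only the lines between them.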
A. Procedures
Figure 2 shows an overview of the proposed technique. The procedure of the proposed technique is similar to Ishihara’s technique [5], but it has some additional operations. The hatched items show the additional operations.
The proposed technique consists of two procedures, Code Analysis and Code Suggestion. The Code Analysis procedure includes the following 5 steps.
STEP-A1: detecting clones from given source files. The number of clones for each clone group is counted.
STEP-A2: extracting keywords from the detected clones.
STEP-A3: computing CRS (component rank scores) for methods that include the detected clones.
STEP-A4: identifying code blocks in given source files.
STEP-A5: adjusting code ranges for each of the clones by comparing it with the code blocks.
The Code Suggestion procedure includes the following 3 steps.
STEP-S1: ranking code fragments that include keywords related to a given query.
STEP-S2: selecting an adjusted code range for a code fragment selected by the developer.
STEP-S3: suggesting the adjusted range to the developer.
The remainder of this section explains the added steps.
B. STEP-A4: Block Detection
The start and end lines of code blocks in given source files are identified. In our implementation, we use Java Development Tools (JDT) for obtaining the block positions in Java source code. The tool traverses the abstract syntax tree of each method to identify the following types of code blocks:
do-while, for, foreach, if, switch, synchronized, try, and while. If a node is recognized as one of those types, the start and end lines are computed by using APIs in JDT.
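Our implementation obtains block positions from the JDT AST, but the core idea of STEP-A4 can be illustrated without it: record the line of each opening brace and pair it with the line of the matching closing brace. The toy sketch below (not our tool; it ignores block types, strings, and comments, which the JDT-based version handles) shows the start/end-line bookkeeping.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class BlockLines {
    // Returns one [startLine, endLine] pair (1-based, inclusive) for each
    // brace-delimited block in the source, via simple brace matching.
    static List<int[]> blockLines(String source) {
        List<int[]> blocks = new ArrayList<>();
        Deque<Integer> open = new ArrayDeque<>();
        int line = 1;
        for (char c : source.toCharArray()) {
            if (c == '\n') line++;
            else if (c == '{') open.push(line);
            else if (c == '}' && !open.isEmpty())
                blocks.add(new int[]{open.pop(), line});
        }
        return blocks;
    }

    public static void main(String[] args) {
        String src = "for (int i = 0; i < n; i++) {\n"
                   + "    if (a[i] > 0) {\n"
                   + "        count++;\n"
                   + "    }\n"
                   + "}\n";
        for (int[] b : blockLines(src))
            System.out.println(b[0] + "-" + b[1]);   // prints 2-4, then 1-5
    }
}
```

Inner blocks are emitted first because their closing brace is reached first; the JDT traversal yields the same ranges directly from AST nodes.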
C. STEP-A5: Code Adjustment
Each clone is compared with the code blocks. If the top border or bottom border of the clone divides a sub-block, the code ranges that include the sub-block and that exclude it are both obtained. In the case of Figure 1(a), the two adjusted code ranges shown in Figures 1(b) and 1(c) are obtained.
D. STEP-S2: Selecting Code Range
In this step, the proposed technique ranks adjusted code ranges of a clone selected by the developer. The ranking is performed by computing a metric misalignment. Here, we assume that \( L(c) \) is a set of lines included in a given code fragment \( c \), and \( c_o, c_1, c_2, \cdots, c_n \) are a clone and its adjusted code ranges.
Misalignment \( (m) \) between a clone \((c_o)\) and its adjusted code range \((c_1)\) is computed with the following formula.
\[
m(c_o, c_1) = |L(c_1) \setminus L(c_o)| + |L(c_o) \setminus L(c_1)| \quad (1)
\]
Just after a developer selects a clone, the adjusted code range whose misalignment is the smallest is selected as the default code range. She/he can see other adjusted code ranges by operating the front-end GUI.
In the case of Figure 1, the two adjusted code ranges have the following misalignment values.
\[
m(c_o, c_1) = 18, \quad m(c_o, c_2) = 6
\]
\( c_o, c_1, \) and \( c_2 \) mean code fragments in Figures 1(a), 1(b), and 1(c), respectively. In this case, adjusted code range \( c_2 \) has higher priority than \( c_1 \).
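Misalignment is simply the size of the symmetric difference between the two line sets: lines the adjusted range adds plus lines it drops. A self-contained sketch (the line numbers below are hypothetical, not those of Figure 1):

```java
import java.util.Set;
import java.util.TreeSet;

public class Misalignment {
    // m(c_o, c_i) = |L(c_i) \ L(c_o)| + |L(c_o) \ L(c_i)|
    static int misalignment(Set<Integer> clone, Set<Integer> adjusted) {
        int m = 0;
        for (int line : adjusted) if (!clone.contains(line)) m++;  // added lines
        for (int line : clone) if (!adjusted.contains(line)) m++;  // dropped lines
        return m;
    }

    // Convenience: the set of lines in the inclusive range [from, to].
    static Set<Integer> lines(int from, int to) {
        Set<Integer> s = new TreeSet<>();
        for (int i = from; i <= to; i++) s.add(i);
        return s;
    }

    public static void main(String[] args) {
        Set<Integer> clone = lines(242, 247);     // hypothetical clone c_o
        Set<Integer> enlarged = lines(240, 249);  // adds 4 lines
        Set<Integer> shrunk = lines(244, 246);    // drops 3 lines
        System.out.println(misalignment(clone, enlarged)); // 4
        System.out.println(misalignment(clone, shrunk));   // 3
    }
}
```

Here the shrunk range has the smaller misalignment, so it would be shown as the default code range.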
IV. EXPERIMENT
In this section, we describe the experiment in which we compared the three code search techniques A, B, and C.
A. Experimental Design
Nine research participants attended the experiment. Each of them was given nine tasks, and she/he implemented Java code for each task with an assigned technique. The technique assignment was performed as follows: firstly, the participants were divided into three groups; then, we assigned one of the three techniques to each group.
Before the participants’ implementation, the authors prepared several test cases for each task and gave a lecture to let the participants know how to use the tools of the techniques. In a given task, the participants first read the specifications and then implemented code with the assigned tool. The participants decided the keywords by themselves. We imposed no restriction on the number of code searches; they were able to input as many queries as they wanted. Their screens were captured as motion pictures. If any one of the following conditions was satisfied, the task was regarded as finished.
1) For the non-GUI tasks, the implemented code passed all the test cases. Herein, a non-GUI task means a task that does not make any GUI component.
2) For the GUI tasks, the participants checked the GUI of their implementations by using the printed GUI examples for the tasks. If they regarded their GUI matched with the example, then the authors checked the behavior of the GUI by operating them.
3) The participants searched for code fragments with the specified tool for 5 minutes but were not able to find reusable code. This condition was introduced because participants had only 20 minutes for each task. If they spent too much time searching for code, it would be difficult for them to complete the task within the time limit even if they eventually found reusable code.
4) Twenty minutes passed before any of the above 3 conditions was satisfied.
In the cases of (1) and (2), we regarded that the task was successfully finished. But in the cases of (3) and (4), we regarded that the task failed.
We imposed two restrictions on the participants.
- They were able to use only one of the tools for a given task. Using two or three tools in a task was prohibited.
- They had to reuse at least 1 line of code suggested by the tool.
B. Research Participants
The research participants in this experiment were two undergraduate students, five master's course students, and two Ph.D. candidates in the department of computer science at Osaka University. All the participants had at least half a year of experience in Java programming, and each had developed at least 5,000 lines of Java code in the past, in exercise lessons and in their research activities. The nine participants were divided into three groups with consideration for their Java experience, so that the average programming skills of the three participants in each group were approximately the same.
C. Source Code Used for Making Database
Techniques-BC need a database to suggest code when they take keywords as their inputs (see Figure 2). In this experiment, we made a database from the UCI dataset [7]. The size of the UCI dataset is huge: it includes 13,192 projects, 2,127,877 Java source files, and 20,449,896 methods.
We detected clones with Ishihara’s technique [5], and then we detected the block borders for each of the clones. The detected clones and their block borders were registered in the database. It took about 8 hours to finish these operations on the UCI dataset. The database creation was performed before we decided the tasks because we needed the database to decide them.
D. Tasks
In this experiment, the authors prepared 9 tasks, each of which consisted of implementing a Java method that met the given specifications. When we were deciding the tasks, we made sure that all 3 techniques were able to suggest (a part of) reusable code if they took appropriate keywords as their inputs. Concretely speaking, we browsed the source code of many clones in the database one by one and, where possible, created a task for which the cloned code or its surrounding code was reusable. On the other hand, Technique-A can suggest code in any method if it takes keywords used in the method. By deciding tasks based on clones in the database, we were able to ensure that all three techniques suggested reusable code for the tasks given appropriate keywords.
The participants were given the signatures of the methods and Javadoc comments including the specifications. They implemented the bodies of the methods. For the non-GUI tasks, the authors prepared test cases. For the GUI tasks, the authors implemented the tasks, executed the programs, captured the screens, and printed them for distribution to the participants. The whole bodies of the methods, except the signatures and Javadoc comments, were regarded as the participants’ implementations.
The following list is the collection of tasks that the participants performed in the experiment.
- **T1**: sorting the numbers included in a given string.
- **T2**: removing vowels and changing capitals into small letters in a given string.
- **T3**: implementing a simple window with three buttons (red, yellow, and blue) by using Swing, which is a Java GUI library. If a button is clicked, the background of the window is changed to the color.
- **T4**: storing a given string (the 1st parameter) into a file with a given name (the 2nd parameter).
- **T5**: sorting string in the alphabetical order.
- **T6**: performing a multiplication of two matrices.
- **T7**: counting the number of words in a given text file.
- **T8**: implementing a simple window with three labels by using Swing. The strings on the labels are given by parameters.
- **T9**: performing exclusive-OR operation on given two byte arrays.
Each group did the tasks with the tools shown in Table I. Unfortunately, there were tasks where the participants failed to complete their implementations. Table II shows the number of failed tasks for each of the tools. There were 81 tasks in this experiment, and 34 of them failed.
E. Measures
In this experiment, we leveraged the following three measures to investigate to what extent each of the techniques was able to support code reuse. Those measures were computed by watching the motion pictures carefully.
- **time**: this is a difference between starting time and finishing time. Starting time is a clock time when a participant inputs a first character to the method body or a query to the tool for searching code. Finishing time is a clock time when all the test cases were passed. All the test cases were able to run by batched processing because the authors had built an Ant task for the test cases.
- **usage rate**: this measure represents how accurately the tool suggests reusable code to participants. It is a fraction whose denominator is the number of suggested program statements and whose numerator is the number of reused program statements among the suggested ones. Usage rate can be represented with the following formula.
\[
\text{Usage Rate} = \frac{\# \text{ of used statements}}{\# \text{ of suggested statements}}
\]
- **contribution rate**: this measure represents how much code reuse contributes to the implementation. It is a fraction whose denominator is the number of all program statements in a participant's code and whose numerator is the number of reused program statements in that code. Contribution rate can be represented with the following formula.
\[
\text{Contribution Rate} = \frac{\# \text{ of reused statements}}{\# \text{ of all statements}}
\]
<table>
<caption>TABLE I: ASSIGNMENT OF TOOLS FOR GROUPS</caption>
<thead>
<tr><th>Group</th><th>Tool</th><th>Tasks</th></tr>
</thead>
<tbody>
<tr><td>G1</td><td>Technique-A</td><td>T1, T2, T3</td></tr>
<tr><td>G2</td><td>Technique-C</td><td>T4, T5, T6</td></tr>
<tr><td>G3</td><td>Technique-B</td><td>T7, T8, T9</td></tr>
</tbody>
</table>
<table>
<caption>TABLE II: NUMBER OF FAILED TASKS FOR EACH TOOL</caption>
<thead>
<tr><th>Tool</th><th>T1</th><th>T2</th><th>T3</th><th>T4</th><th>T5</th><th>T6</th><th>T7</th><th>T8</th><th>T9</th></tr>
</thead>
<tbody>
<tr><td>Technique-A</td><td>1</td><td>1</td><td>2</td><td>3</td><td>1</td><td>2</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>Technique-C</td><td>1</td><td>1</td><td>1</td><td>1</td><td>2</td><td>1</td><td>1</td><td>1</td><td>1</td></tr>
<tr><td>Technique-B</td><td>1</td><td>2</td><td>2</td><td>1</td><td>1</td><td>3</td><td>2</td><td>0</td><td>0</td></tr>
</tbody>
</table>
Figure 3 is an example of computing usage rate and contribution rate. The left window is a code suggestion by the proposed technique, and the right window is a source file that the developer is implementing. The rectangular region in the left window is the suggested code, and the hatched region is the reused code. The code is pasted into the developer's code three times. The developer implements 19 program statements, and 9 of them are reused code. In this case, usage rate becomes 0.6 (3/5) and contribution rate becomes 0.47 (9/19).
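The two rates can be checked with a few lines of Java; the counts below are the ones stated for Figure 3 (3 of 5 suggested statements reused; 9 of 19 implemented statements came from reuse).

```java
public class ReuseRates {
    // usage rate = reused suggested statements / suggested statements
    static double usageRate(int usedStatements, int suggestedStatements) {
        return (double) usedStatements / suggestedStatements;
    }

    // contribution rate = reused statements / all implemented statements
    static double contributionRate(int reusedStatements, int allStatements) {
        return (double) reusedStatements / allStatements;
    }

    public static void main(String[] args) {
        // The Figure 3 example: 3 of 5 suggested statements were reused,
        // and 9 of the 19 implemented statements came from reuse.
        System.out.println(usageRate(3, 5));          // 0.6
        System.out.println(contributionRate(9, 19));  // ~0.47
    }
}
```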
F. Results
Figure 4 shows the values of usage rate for all the tasks. The tasks where the height of bar graph is 0 are failed ones. The graph also shows the average of usage rate for each technique on each task. The right-most graph shows the average on all the tasks. The average of all the tasks shows that Technique-C marked a higher average than Techniques-AB.
Figure 5 shows the values of contribution rate for the tasks in the same fashion as Figure 4. The value of Technique-C is greater than the other two techniques in 4 out of the 9 tasks and the average of Technique-C on all the tasks is higher than Techniques-AB.
Figure 6 shows the values of time for the tasks. Regarding the average on all the tasks, Technique-C took less time than Technique-A but it took more time than Technique-B.
To wrap up, Technique-C suggested reusable code more accurately than the other two techniques and code reuse with Technique-C was the most helpful to implement tasks. However, the time for implementation with Technique-C was longer than Technique-B.
V. DISCUSSION
In this section, we discuss some factors that affected the results shown in Figures 4, 5, and 6.
A. Long Implementation Time with Technique-C
While both the usage rate and the contribution rate of Technique-C were higher than those of the other two techniques, its implementation time was not the shortest of the three. The reason was that the participants took longer to operate the GUI of Technique-C than the GUIs of the other techniques. The GUI of Technique-C had buttons for changing the suggested code ranges of a selected clone, and in some tasks the participants pushed these buttons many times while looking for reusable code. On the other hand, the GUIs of Techniques-AB are very simple: they just suggest the code of the selected code fragment. From this finding, we can say that the default code range is very important for shortening the implementation time.
B. Some Cases Where Technique-C's Usage and Contribution Rates are Low
There are some tasks where both the usage rate and the contribution rate of Technique-C were lower than those of the other two techniques. The reason was that the participants overlooked reusable code outside the suggested range.
Figure 7 shows such a case. The solid-rectangle area was suggested to the participant, and the dotted-rectangle area is reusable code. The solid-rectangle area is the clone, but the proposed technique excluded the available code by adjusting the code range. The available code is just below the suggested area. Even though the available code was visible in the viewer of Technique-C, it was overlooked because it was not highlighted.
One promising way to avoid overlooking is using data dependencies between program statements [12]. In the case of Figure 7, the cloned code includes some variable declarations (“converted”, “spaces”, “ats”, and “linefeeds”). Those variables are referenced in the reusable code. If the proposed technique adjusted code ranges with consideration for such data dependencies, the code area including the reusable code will get higher priority than the one excluding it. In such a situation, the participant will not overlook the available code.

C. Usage and Contribution Rates
Technique-A took second place in usage rate, but it was worst in contribution rate. These facts mean that the block-level suggestion provides reusable code more precisely than the reuse-level suggestion, but the latter is more helpful for reusing as much code as possible. Consequently, we can say that the hybrid technique has the advantages of both techniques.
D. Developers’ Faith In Code Suggestion
If a developer believes that suggested code is really reusable, she/he might insist on reusing it. In such a case, implementation with code reuse may take longer than implementing code from scratch. In this experiment, we required the participants to reuse at least 1 line of code for each task. This restriction was intended to collect enough data on code reuse, but it can be interpreted as strong faith in the code suggestions.
If a developer does not trust the code suggestions very much, she/he will give up searching for reusable code after inputting different keywords a few times. There should have been some cases in the experiment where the participants would have been able to finish the tasks without code reuse.
VI. THREATS TO VALIDITY

Here, we describe threats to validity in the experiment.

Research participants: all the research participants had at least half a year of experience in Java programming and each of them had developed at least 5,000 lines of code. If participants with different experiences had joined the experiment, we might have gotten a different result.

Experimental restriction: we required the participants to reuse at least 1 line of suggested code in each task. This restriction was intended to collect enough data on code reuse. However, it may have affected the values of the three measures.

Clones: we detected clones for Techniques-BC. Those clones were detected by a detection tool, so the detected clones must include false positives. We had not removed the false positives before the participants performed the tasks. False positives are not reused code, but they are nonetheless duplicated code. Hence, we do not think the presence of false positives had a negative impact on Techniques-BC in the experiment.

Task: the three groups implemented the 9 tasks in different orders. The order of the tasks might have affected the three measures.

Tool: the three groups used the three techniques in different orders. The order of using the techniques might have impacted the experimental result.

Database: all the tools used in the experiment suggest code that had been registered to the database in advance. Consequently, the suggested code varies if the database has a different set of source files. Besides, code reuse generates clones between different software systems. If we reuse unreliable code, the pasted code may require many modifications in the future. Thus, it is very important to build a database with reliable code.

VII. CONCLUSION

In this research, we conducted an experiment where three kinds of keyword-based code search techniques were compared: the first technique suggested code at method-level; the second suggested code that had been reused in the past; the third technique is a hybrid of the first and second techniques. That is, the third technique suggests code based on past reuse, but it adjusts the suggested code range with consideration for the code blocks.

In the experiment, we compared the three techniques from three points. The first point is how accurately the techniques were able to suggest reusable code. The second point is to what extent code reuse contributes to the implementations. The third point is the time required for implementing code with the techniques.

The hybrid technique marked the best scores on the first and second points, but it ranked second in implementation time. The experimental result indicates that the method-level code suggestions are able to provide reusable code more precisely than the reuse-level ones. On the other hand, the reuse-level code suggestions are more helpful to reuse larger code than the method-level ones.

In the future, we are going to introduce data dependencies for deciding appropriate code ranges for suggestion. This is because we found some cases where the code range adjustments were not appropriate and the participants overlooked reusable code. Introducing data dependencies will be able to recognize cohesive code chunks in the source code. Suggesting cohesive code chunks will be more helpful in assisting code reuse.

Fig. 7. A code suggestion where a developer overlooked reusable code.
public static String convertTabs(String text) {
    boolean preformatted = false;
    String converted = "";
    int spaces = 0;
    int ats = 0;
    String linefeeds = "\n";
    for (int i = 0; i < text.length(); i++) {
        if (text.charAt(i) == ' ')
            spaces++;
        else if (text.charAt(i) == '\n')
            ats = 0;
    }
    return converted;
}
Using BPMN to model Internet of Things behavior within business process
Dulce Domingos
LaSIGE, Faculdade de Ciências, Universidade de Lisboa
Campo Grande, Lisboa, 1749-016
Portugal
www.shortbio.org/mddomingos@fc.ul.pt
Francisco Martins
LaSIGE, Faculdade de Ciências, Universidade de Lisboa e
Faculdade de Ciências e Tecnologia, Universidade dos Açores
Rua da Mãe de Deus, Ponta Delgada, 9500-321
Portugal
www.shortbio.org/fmartins@acm.org
Abstract:
Whereas, traditionally, business processes use the Internet of Things (IoT) as a distributed source of information, the increase in the computational capabilities of IoT devices provides them with the means to also execute parts of the business logic, reducing the amount of exchanged data and central processing. Current approaches based on Business Process Model and Notation (BPMN) already support modelers in defining both business processes and IoT device behavior at the same level of abstraction. However, they are not restricted to standard BPMN elements and they generate IoT-device-specific low-level code. The work we present in this paper exclusively uses standard BPMN to define the central as well as the IoT behavior of business processes. In addition, the BPMN that defines the IoT behavior is translated into platform-neutral programming code. The deployment and execution environments use Web services to support the communication between the process execution engine and the IoT devices.
Keywords:
Internet of Things; Business Process modelling; BPMN; IoT-aware business process.
DOI: 10.12821/ijispm050403
Manuscript received: 3 July 2017
Manuscript accepted: 17 November 2017
1. Introduction
In the last years, organizations have been using business processes more and more to capture, manage, and optimize their activities. A business process is a collection of inter-related events, activities, and decision points that involve actors and resources and that collectively lead to an outcome that is of value to an organization or a customer [1]. In areas such as supply chain management, intelligent transport systems, domotics, or remote healthcare [2], business processes can gain a competitive edge by using the information and functionalities provided by Internet of Things (IoT) devices. The IoT is a global infrastructure that interconnects things (physical and virtual). IoT devices connect things with communication networks. These devices can also have capabilities of sensing, actuation, data capture, data storage, and data processing [3].
Business processes can use IoT information to incorporate real-world data, take informed decisions, optimize their execution, and adapt themselves to context changes [4]. Moreover, the increase in the processing power of IoT devices enables them to become active participants: IoT devices can aggregate and filter data and make decisions locally, executing parts of the business logic whenever central control is not required and thereby reducing both the amount of exchanged data and the central processing [5]. Indeed, sensors and actuators can be combined to implement local flows without needing central coordination.
However, decentralizing business processes into IoT devices presents two main challenges. First, IoT devices are heterogeneous by nature. They differ in terms of communication protocols, interaction paradigms, and computing and storage power. In addition, business modelers define processes using high-level languages (such as Business Process Model and Notation version 2.0 [6], henceforth simply referred to as BPMN), as they must know the domain, but do not need to have specific knowledge to program IoT devices, nor do they want to deal with their heterogeneity. Therefore, this decentralization requires design-time as well as execution-time support.
At design time, current approaches allow modelers to define both business processes and IoT devices behavior at the same level of abstraction, using, for instance, BPMN-based approaches [7], [8], [9], [10], [11], [12]. BPMN already provides the concepts to define the behavior of various participants, by using different pools. The interaction amongst participants is specified through collaboration diagrams. Supporting the execution of these hybrid processes requires bridging the gap between high-level BPMN and the programming code that IoT devices can execute. These approaches use a three-step procedure: (1) translation of the process model to a neutral intermediate language; (2) translation of the intermediate code to a platform specific executable code; and (3) deployment of the executable code into IoT devices.
By taking advantage of these approaches, business modelers can define both business processes and IoT behavior at the same (high) level of abstraction. However, they still use non-standard BPMN to integrate, for instance, IoT device information into business processes, and they generate IoT-device-specific code, which must be generated again for each different type of IoT device.
The proposal we present in this paper only uses standard BPMN to define both central and IoT behavior of business processes. Besides using pools and collaboration diagrams, we use the BPMN resource class to integrate the information about IoT devices into the model, and we use the BPMN performer class to define the IoT devices that will be participants of the process. In addition, the BPMN that defines the IoT behavior is translated into Callas bytecode [13]. We use the Callas sensor programming language as an alternative to the target platform-specific languages taken by previous proposals, since it can be executed in every IoT device for which there is a Callas virtual machine available. This way, we abstract hardware specificities and make executable code portable among IoT devices from different manufacturers. Business process and IoT devices communicate via web services (directly or indirectly through gateways). In addition, Callas also supports remote IoT devices reprogramming, a feature that is the first step to support ad-hoc changes [14] in the parts of business processes that define IoT behavior. A preliminary version of this work can be found elsewhere [15].
This paper is organized as follows. Section 2 presents the related work. Our proposal is described in the following two sections: whereas section 3 describes how we support modelling IoT behavior within BPMN business processes, section 4 focuses on the implementation aspects, including the description of the translation procedure into Callas source code, and the overview of the deployment and execution phases. Finally, section 5 concludes the paper and points to future work directions.
2. Background
Business modelers define business processes with languages such as the Web Services Business Process Execution Language (WS-BPEL) [16] or BPMN, which use an abstraction level closer to the domain being specified. At this level, modelers should not deal with IoT device heterogeneity and specificities: IoT devices use different operating systems (e.g., TinyOS, Contiki [17]), different programming languages (e.g., nesC [18], Protothreads), and different communication protocols. Traditionally, web services are used to provide IoT information and functionalities, abstracting and encapsulating low-level details. This way, web services are the glue between the IoT and business processes, as modelling languages already support their usage. In addition, more recent approaches take a step forward by supporting IoT behavior definition within the business process [19], [20].
2.1 IoT as web services – the centralized approach
Current IoT technology exposes IoT information and functionalities as web services, facilitating interoperability and encapsulating heterogeneity and specificities of IoT devices. Zen, Guo, and Cheng survey two approaches to implement IoT web services [21]. Some works provide web services directly in IoT devices: they simplify, adapt, and optimize Service-Oriented Architecture (SOA) tools and standards to deal with the well-known limitations of resource-constrained devices. Other approaches provide web services indirectly through middleware systems. This way, IoT devices that do not support web services can still be accessed.
Taking a step forward on integrating IoT into business processes, some authors propose the explicit integration of IoT concepts into business process models. Domingos et al. [22], [23] and George and Ward [24] extend WS-BPEL with context variables to monitor IoT information, abstracting the set of operations to interact with IoT devices. The IoT-A project proposes some BPMN extensions to explicitly include IoT devices and their services in an IoT-aware process model, as well as some characteristics of IoT devices, such as uncertainty and availability [25], [26]. The uBPMN project adds ubiquitous elements to BPMN: it defines the BPMN Task extension for Sensor, Reader, Collector, Camera, and Microphone, as well as an IoT-driven Data Object to represent the data transmitted from IoT devices [27], [28].
GWELS [29] provides a graphical user interface to design IoT processes and send them automatically to IoT devices as a sequence of operation calls that have been uploaded to IoT devices in advance. It uses proprietary communication protocols to interact with IoT devices. IoT processes are provided as web services and, in this way, can also be integrated into business processes.
The above approaches assume a centralized control of the process execution, where a single central system executes and coordinates processes and communicates with IoT devices using web services. However, business modelers are unable to define the behavior of IoT devices; they can only use services whose behavior is previously defined.
2.2 IoT as active participants of business processes – a decentralized approach
In a decentralized approach, IoT devices can work together to execute parts of business processes, reducing the number of exchanged messages and promoting the scalability of central process engines, since information is processed locally. Another important advantage, present in many scenarios, is that the lower network traffic between the central engine and IoT devices improves battery lifetime of IoT devices. To model business processes according to this decentralized approach, business modelers need a unified framework where they can specify the behavior of IoT devices as well as
their interactions with the central system. BPMN already provides the concepts to define the behavior of various participants by using different pools; their interactions are specified through collaboration diagrams.
Following a decentralized approach, Caracas and Bernauer [7], [8], [9] use standard BPMN to model both central and IoT behavior. However, IoT device information is integrated into the BPMN model in a non-standard way, by appending it to the pool name or with additional attributes added to the pool element. They translate the BPMN that defines the IoT behavior into target IoT-device-specific code. The authors state that the sensor code they generate does not perform much worse than hand-written code.
Casati et al. [10] propose the makeSense framework. They extend BPMN with attributes to support the new intra-WSN participant, which contains the part of the process that IoT devices will execute. To model IoT behavior, makeSense uses a second meta model [10], [30], [12]. The translation procedure into executable code for IoT devices uses two models: the application capability model has information about available sensors and actuators and their operations, while the system capability model has additional information about the target IoT devices, which is used to generate different code based on IoT device capabilities. MakeSense uses its own message format and transmission encoding to support the communication between the central process engine and IoT devices [12].
Whereas in these two proposals IoT devices execute device-specific code, Pryss et al. [31] follow a different approach by executing process engines in IoT devices. Despite the advantage of avoiding the generation of executable code for IoT devices, this option is only applicable to IoT devices with higher computational capabilities.
In our work, we exclusively use standard BPMN to define all the business process, and IoT device information is added to the model by using the BPMN resource class. We translate the BPMN that defines the IoT behavior into Callas bytecode [13], a non-platform-specific language that IoT devices with an available Callas virtual machine can execute.
3. Modelling the behavior of IoT devices
This section describes how we use the BPMN language to model IoT behavior within business processes, both at the same level of abstraction. It starts by presenting the use case scenario we chose to illustrate the application of our proposal.
3.1 Use case scenario
Fig. 1 presents our use case scenario, a simplified process for automatic irrigation control. The process includes three participants: the central process (named Irrigation) and two IoT devices, the IoT irrigation device and the IoT read rainfall device. The behavior of each participant is modelled in a separate pool.
The IoT read rainfall device periodically wakes up. Its process starts by reading the rainfall sensor, and only sends a message to the IoT irrigation device if it is not raining; otherwise, it stops. When the IoT irrigation device receives the message from the IoT read rainfall device, it starts the irrigation by activating an actuator, which lasts for a pre-defined period. Upon finishing the irrigation, the IoT irrigation device reads the flowmeter sensor to make sure the water valve is sealed. If water still flows, it sends an alert to the central process. This way, the central process receives a notification when the IoT irrigation device detects a water leak that needs human intervention to be fixed.
This simplified process omits many details, such as the functionalities to change both timers (the irrigation interval and the irrigation duration).
Using BPMN to model Internet of Things behavior within business process
3.2 Using BPMN to model the behavior of IoT devices
To model the whole business process, including the behavior of IoT devices, we only use standard BPMN elements. BPMN already provides the concepts to define the behavior of various participants, by using different pools, as well as the interaction amongst participants, through collaboration diagrams. This approach is illustrated using the use case scenario described in the previous subsection.
We select a subset of BPMN to model the behavior of IoT devices, avoiding the use of two different meta models. The selection of the subset considers two main factors:
- To model the behavior of IoT devices, business modelers do not need all BPMN elements, as stated by Caracas [9]. This way, we include in our subset the BPMN elements that Caracas uses to model the IoT programming patterns, and
- Callas already considers general IoT device limitations; for instance, it does not support parallel tasks. In addition, it is a block-structured language and, consequently, it also does not support unstructured control flow, unlike BPMN, which allows gateways to loop back and forward. Fig. 2 illustrates an example of a control flow that we do not support in IoT behavior definitions, since there is no way to represent the flow from Task B to Task A.
The BPMN subset includes the following elements:
- Flow control: events (message received, timers, and the start and end events), activities (script task, send task, and receive task), and gateways (exclusive gateway and merging exclusive gateway);
- Connecting objects: sequence flow, message flow, and data associations; and
- Data: data objects.
Fig. 1. BPMN use case scenario
We use script tasks to define the tasks that correspond to invocations of hardware functionalities of IoT devices. For instance, in our use case example, the IoT read rainfall device has a rainfall sensor, and we use the script task Read rainfall to model the sensor data acquisition. In a similar way, within the process of the IoT irrigation device, we also use a script task (named Start irrigation) to model the activation of the actuator that starts the irrigation.
The BPMN Resource class and the BPMN Performer class are used to define the IoT device that will execute the processes, avoiding the integration of this information into BPMN models in a non-standard way. The Resource class is used to specify resources (i.e., IoT devices) that can be referenced by processes, whereas the Performer class defines the resource that will perform the processes. Fig. 3 presents a simplified example based on our use case scenario, which can be reused in other business processes. It includes the definition of the resource named IoTdevice with three parameters: deviceType, address, and operations. The performer definition exemplifies the way we apply the deviceType resource parameter in queries for resource assignment. We use the operations parameter to access the list of hardware functionalities of the IoT device and their signatures, i.e., each function's name, return type, and parameter types.
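The resource and performer definitions of Fig. 3 can be sketched in BPMN 2.0 XML roughly as follows. Only the three parameter names (deviceType, address, operations) come from the description above; the attribute values and the query expression are illustrative assumptions:

```xml
<!-- Hypothetical sketch based on Fig. 3: a reusable IoT resource definition -->
<bpmn:resource id="IoTdevice" name="IoTdevice">
  <bpmn:resourceParameter id="deviceType" name="deviceType" type="xsd:string"/>
  <bpmn:resourceParameter id="address"    name="address"    type="xsd:string"/>
  <bpmn:resourceParameter id="operations" name="operations" type="xsd:string"/>
</bpmn:resource>

<!-- Performer referencing the resource; the query on deviceType selects
     the IoT device that will execute the process -->
<bpmn:performer id="performer_1" name="IoT read rainfall performer">
  <bpmn:resourceRef>IoTdevice</bpmn:resourceRef>
  <bpmn:resourceAssignmentExpression>
    <bpmn:formalExpression>deviceType == "rainfall"</bpmn:formalExpression>
  </bpmn:resourceAssignmentExpression>
</bpmn:performer>
```

The `resource`, `resourceParameter`, `performer`, and `resourceAssignmentExpression` elements are standard BPMN 2.0, which is what allows this information to be added without extending the language.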
4. Implementing the behavior of IoT devices
The implementation phase includes the translation of BPMN processes into Callas bytecode (the executable program for IoT devices), as well as the deployment and execution of this bytecode in IoT devices.
4.1 Translating BPMN to Callas code
Unlike related work approaches that use platform-specific code to specify the behavior of IoT devices, we translate it into Callas bytecode [13]. By choosing the Callas programming language, we take advantage of some of its characteristics and functionalities specifically tailored to address IoT devices. For instance, Callas follows the path of the Java programming language and proposes a virtual machine for IoT devices that abstracts hardware specificities and makes executable code portable among IoT devices from different manufacturers. The Callas language is founded on a well-established formal semantics and, along with its virtual machine, statically guarantees that well-typed programs are free from certain runtime errors (type-safety). Moreover, the Callas virtual machine is sound, that is, its semantics corresponds to that of the Callas language. It includes domain-specific IoT operations, such as sending and receiving messages from the network. These Callas operations are supported directly at the Callas virtual machine level, and may have different implementations depending on the hardware where the virtual machine is deployed. Currently, the Callas virtual machine is available for SunSpot, TinyOS, Arduino, VisualSense, and Unix platforms (more information can be found on the Callas website http://www.dcc.fc.up.pt/callas/). Other interactions with the hardware of IoT devices are performed by calling external functions of the language. This typically corresponds to operating system calls or to direct implementations (on the bare metal) of functions on the Callas virtual machine. The operations made available by each kind of device are described by a type in the Callas language, allowing the compiler to verify whether the source code is adequate to run on a specific target device. A distinguishing feature of the language is its ability to deploy executable code, which we use to install the code in IoT devices remotely. We also consider this feature as the first step to support ad-hoc changes [14] in IoT business process parts.
Fig. 4 presents the Callas source code that implements the behavior of the IoT devices of our case study. The left column has the source code that corresponds to the behavior of the IoT read rainfall device, whereas the right column has the source code that corresponds to the behavior of the IoT irrigation device. Programs start by declaring two module types: Nil, an empty module type used to represent void function returns; and a second module that defines the message signatures that flow on IoT devices.
The implementation of the IoT_read_rainfall_device spans from line 9 to line 17. Each function is implemented using the def construct. The checkIrrigation function reads the rainfall sensor (using the readRainfall external function) and, when it is not raining, sends the startIrrigation message to the IoT irrigation device. The implementation of the IoT_irrigation_device spans from line 22 to line 30.
```callas
# module type declarations (shared by both devices)
defmodule Nil:
    pass
defmodule IoT_irrigation_system:
    Nil startIrrigation ()
    Nil waterLeakAlert ()

# IoT_read_rainfall_device
module m of IoT_irrigation_system:
    def checkIrrigation (self):
        isRaining = readRainfall ()
        if not isRaining:
            send startIrrigation ()

# load the device memory, update it with module m,
# and replace the device memory
mem = load
newMem = mem || m
store newMem
# invoke checkIrrigation every day
checkIrrigation () every 24*60*60*1000

# IoT_irrigation_device
module m of IoT_irrigation_system:
    def startIrrigation (self):
        startIrrigation ()
        delay 20*60*1000
        stopIrrigation ()
        curFlowmeter = readFlowmeter ()
        if curFlowmeter > 0:
            send waterLeakAlert ()

# load the device memory, update it with module m,
# and replace the device memory
mem = load
newMem = mem || m
store newMem
```
Within the implementation of the IoT_irrigation_device, upon receiving the startIrrigation message, it invokes the startIrrigation external function. The system irrigates for 20 minutes, stops by calling the stopIrrigation function, and checks whether the water valve is sealed. It alerts the central process in case there is a water leak (send waterLeakAlert).
The translation from BPMN to Callas is divided into three phases: (1) parse of BPMN XML files; (2) syntactic and semantic validations of BPMN processes; and (3) translation into Callas.
The parsing phase takes the BPMN XML file and creates an abstract syntax tree (AST) representation. At this phase, we rule out programs with constructs that are not compliant with the BPMN subset we define for modelling IoT devices’ behavior.
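As a sketch of this filtering step (the function and element-set names are our assumptions, not the authors' implementation), the subset check can be performed directly on the parsed BPMN XML:

```python
# Sketch of the parsing-phase subset check: read a BPMN process definition
# and report the elements that fall outside the subset chosen in section 3.2.
import xml.etree.ElementTree as ET

BPMN_NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"

# The BPMN subset for IoT behavior: events, activities, gateways,
# connecting objects, and data objects (names from the BPMN 2.0 schema)
IOT_SUBSET = {
    "startEvent", "endEvent", "intermediateCatchEvent",
    "scriptTask", "sendTask", "receiveTask",
    "exclusiveGateway", "sequenceFlow", "dataObject",
    "dataObjectReference", "dataInputAssociation", "dataOutputAssociation",
}

def check_iot_pool(process_xml: str) -> list:
    """Return the tags of elements not allowed in an IoT pool."""
    root = ET.fromstring(process_xml)
    offending = []
    for elem in root.iter():
        tag = elem.tag.replace(BPMN_NS, "")
        if tag not in IOT_SUBSET and tag != "process":
            offending.append(tag)
    return offending

example = """
<process xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL" id="p1">
  <startEvent id="s"/>
  <scriptTask id="t" name="Read rainfall"/>
  <parallelGateway id="g"/>
  <endEvent id="e"/>
</process>"""
print(check_iot_pool(example))  # parallelGateway is outside the subset
```

A model containing any of the reported elements would be ruled out before translation.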
The second phase traverses the AST multiple times for validations: it verifies that (a) the control flow is plausible for being translated into a block-structured language, and that (b) the domain-specific script tasks are valid. The former checks that tasks and events only have one input and one output sequence flow and that every possible control flow path from an outgoing exclusive gateway arrives at the corresponding incoming exclusive gateway. Otherwise, it denotes a control flow only possible to represent using a goto statement (absent in Callas). This validation is challenging, since we need to figure out the correspondence between the outgoing and the incoming exclusive gateways, and the control flow may include various exclusive gateways (corresponding to nested if statements in structured languages). As for the latter, valid domain-specific script tasks have well-known names fixed in advance, associated with information specific to each task.
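The core of the structuredness check can be sketched as follows for the simple, non-nested case (all names are our assumptions, and branch bodies are assumed acyclic; handling nested gateways, as the real validation must, requires matching splits and merges recursively):

```python
# Minimal sketch of the structuredness check for one exclusive gateway:
# every outgoing branch must reach the same merging gateway, otherwise
# the flow would need a goto, which Callas lacks.

def find_merge(split, successors, merge_nodes):
    """Follow each branch of `split` until a node in `merge_nodes`;
    return the common merge gateway, or None if the flow is unstructured."""
    reached = set()
    for node in successors[split]:
        while node not in merge_nodes:
            nxt = successors.get(node, [])
            if not nxt:            # branch ends without merging
                return None
            node = nxt[0]
        reached.add(node)
    return reached.pop() if len(reached) == 1 else None

# g1 splits into tasks t1 and t2; both flow into merging gateway m1
succ = {"g1": ["t1", "t2"], "t1": ["m1"], "t2": ["m1"], "m1": ["end"]}
print(find_merge("g1", succ, {"m1"}))  # -> m1
```

When `find_merge` returns None for some gateway, the model cannot be expressed with block-structured control flow and is rejected.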
Semantic validation checks that the domain-specific tasks are used correctly with respect to the types of the data objects being used and that these types are in conformance with the domain-specific task signatures (also provided in advance).
The translation from the AST into Callas proceeds as follows:
- Event elements: (a) message received start events are translated into function calls and the processes triggered by these events are translated into function definitions. We figure out the function signature from the types of the values in the message. The function name can be set arbitrarily, since the whole interaction process is going to be generated automatically at compile time and set for both participants (the BPMN engine or the IoT devices). In Fig. 4 we do not use arbitrary function names for the sake of legibility. Upon message reception, the Callas virtual machine takes the responsibility of invoking the correct message handler function. For instance, upon reception of an irrigation message by the IoT device, the Callas virtual machine dispatches it to the startIrrigation function that implements the behavior of the IoT irrigation device; (b) there are two distinct behaviors for business process timers, depending on whether they are attached to start events or used in the middle of processes to model the passing of time. The former are directly supported by a Callas timer that invokes a function encoding the timer behavior (vide line 24, Fig. 4). The latter are naturally implemented using the Callas delay function; (c) the end event is twofold: when it happens inside a process, it is translated into a return statement inside the function that models the process; when it marks the end of the top-level process, there are multiple ways to interpret it. The simplest is just to ignore the event, since we can think of an IoT program as a never-ending program, always ready to handle new requests. The other extreme is to end the Callas virtual machine, but then we need to get explicit access to the IoT device hardware to reset it, or put in place an additional software procedure to reset the IoT devices remotely. We have decided to follow the first option and simply ignore the end event;
- Activity elements: send and receive tasks have a direct correspondence with the send and receive constructs of the language. The script tasks we support are predefined and implemented as part of a Callas library for the BPMN subset. As an example, the Read rainfall script task is translated into a call to the hardware functionality named readRainfall (vide line 11, Fig. 4);
- Gateway elements: exclusive gateways are represented by if statements. In case the gateway has more than two alternatives, it is translated into a series of nested if–else statements. Merging exclusive gateways are ignored during the translation process, since their behavior is captured by the Callas sequential composition statement; they are only used during semantic validation as described above;
- Connecting object elements: sequence flows are modelled by the Callas control flow mechanisms, namely sequential composition, branching, looping, and function calls. The validation phase guarantees that the IoT BPMN model can be represented by these control flow primitives. Message flows are initiated with a send task, concluded with a receive task, and the flow itself is supported by the underlying data layer of the communication protocol stack of IoT devices;
- Data elements: data objects and their associations are modelled by Callas program variables that store objects and, when used in expressions, represent data associations, capturing the data flows specified at the BPMN model.
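The exclusive gateway rule above can be sketched as a small code generator (function and parameter names are our assumptions; the if/elif chain below is the flattened equivalent of the nested if–else statements the translation produces):

```python
# Sketch of the gateway translation rule: an exclusive gateway with a
# list of (condition, body) alternatives becomes a chain of conditional
# statements in the generated block-structured code.

def translate_gateway(branches, indent="    "):
    """branches: (condition, body) pairs; condition None marks the default."""
    lines = []
    keyword = "if"
    for cond, body in branches:
        if cond is None:
            lines.append("else:")
        else:
            lines.append(f"{keyword} {cond}:")
            keyword = "elif"
        lines.append(indent + body)
    return "\n".join(lines)

print(translate_gateway([
    ("curFlowmeter > 0", "send waterLeakAlert ()"),
    (None, "pass"),
]))
```

The example mirrors the leak check of the IoT irrigation device: one guarded branch and a default branch that does nothing.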
During the whole translation procedure, we keep track of the identification of each BPMN element, defined in the XML model file. We use these identifiers to map the errors we uncover during the validation and translation phases and report them to modelers in the context of the BPMN design tool.
4.2 Deployment and execution
Deployment and execution phases include the installation of Callas bytecode in IoT devices as well as the creation of the web services to support the communication between the process execution engine (jBPM) and IoT devices.
The steps our deployment algorithm performs are the following:
- Generate the Callas code and deploy it to the IoT device by invoking the install code service. For that, we provide the target IoT device identification taken from the IoT device database, by using a query based on the parameters of the resource, and the bytecode produced during the Callas compilation;
- Remove IoT pools from the BPMN model file, since this behavior is going to be performed by IoT devices, instead of the jBPM server;
- Update the BPMN model file by setting send message tasks (or throw events) that target the IoT pool to use IoT web services, providing its address; and
- For each receive message task (or catch event) that originates at an IoT pool, we take its address and pass it to the IoT devices so they can deliver messages using the jBPM RESTful API.
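The pool-removal step of the algorithm can be sketched as follows (a minimal illustration with assumed names; the real procedure also rewrites the send/receive tasks to use the IoT web service addresses):

```python
# Sketch of the model-rewriting step: delete the IoT pools (processes)
# from the BPMN XML file, since their behavior will run on the IoT
# devices rather than on the jBPM server.
import xml.etree.ElementTree as ET

NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"
ET.register_namespace("", NS)

def strip_iot_pools(model_xml: str, iot_process_ids: set) -> str:
    """Return the model with the given IoT processes removed."""
    root = ET.fromstring(model_xml)
    for proc in list(root):
        if proc.tag == f"{{{NS}}}process" and proc.get("id") in iot_process_ids:
            root.remove(proc)
    return ET.tostring(root, encoding="unicode")

model = """<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <process id="Irrigation"/>
  <process id="IoT_irrigation_device"/>
  <process id="IoT_read_rainfall_device"/>
</definitions>"""

out = strip_iot_pools(model, {"IoT_irrigation_device", "IoT_read_rainfall_device"})
print("IoT_irrigation_device" in out)  # False: the IoT pools were removed
```

After this step, only the central Irrigation process remains for deployment on the jBPM server.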
4.3 Prototype
In our prototype, we use the Eclipse IDE [32] and jBPM [33], a BPMN server from RedHat. Our irrigation use case is composed of two types of components: the IoT devices and the application. For the IoT side, the one installed in the garden, we devised a hardware board for controlling irrigation. The board uses the ATmega644PA processor from Atmel corporation. We adapted the Callas virtual machine for this processor and programmed a firmware that controls the garden's irrigation following a programmable schedule. The devices address jBPM directly via its RESTful API; likewise, the application can address each IoT device using its service address. The application includes several irrigation-related processes modeled and deployed with our proposal. We currently have the prototype deployed at Avenida da Liberdade in collaboration with the Lisbon city council, where we control four electrovalves managing a total of 40 sprinklers. The project has been running successfully for three months.
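As a hedged illustration of the device-to-engine leg of this communication, an IoT device (or its gateway) could assemble a message-delivery request for the jBPM RESTful API as below. The URL layout and parameter names are assumptions for illustration only; the concrete path depends on the jBPM version and deployment, and no request is actually sent here:

```python
# Illustrative sketch: build (but do not send) an HTTP request that an IoT
# device could use to signal an event to the central process via jBPM's
# RESTful API.  The endpoint path is a hypothetical example.

def build_signal_request(host, deployment_id, instance_id, signal, payload):
    url = (f"http://{host}/jbpm-console/rest/runtime/{deployment_id}"
           f"/process/instance/{instance_id}/signal")
    return {"method": "POST",
            "url": url,
            "params": {"signal": signal},
            "json": payload}

req = build_signal_request("server:8080", "irrigation", 42,
                           "waterLeakAlert", {"device": "valve-3"})
print(req["url"])
```

In the prototype, this address is the one passed to the IoT devices during deployment so they can deliver, for instance, the water leak alert.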
5. Conclusion and future work
The IoT opens an opportunity to create a new generation of business processes that can benefit from IoT ubiquity, taking advantage of their computational power, networking, and sensing capabilities. IoT devices can even execute parts of the business logic.
The work we present in this paper allows modelers to define IoT behavior within business processes at the same level of abstraction, by only using standard BPMN. By translating the IoT behavior part of the BPMN model into Callas, we generate neutral-platform portable code, which can also be sent to IoT devices remotely. Our approach opens new opportunities to bring together process modelling and the information and functionalities provided by IoT devices.
Modelers do not need to cope with IoT specificities, and use BPMN without any extensions; Callas abstracts from the hardware, making the generated code able to run on different devices, provided these devices offer the required functionalities. Moreover, the Callas ability for remote reprogramming facilitates code deployment and adds support for dynamic ad-hoc business process changes.
As future work, we want to support the automatic decomposition of business processes to determine which process parts can be executed by IoT devices.
In addition, this decentralized approach brings new challenges considering security, as we need to assure confidentiality and authenticity between central process and IoT devices as well as between IoT devices.
Acknowledgments
This work is partially supported by National Funding from FCT - Fundação para a Ciência e a Tecnologia, under the projects PTDC/EEI-ESS/5863/2014 and UID/CEC/00408/2013. We thank Fábio Ferreira for implementing part of the prototype.
References
Biographical notes
Dulce Domingos
She received the BSc in "Informática" from Faculdade de Ciências da Universidade de Lisboa, Portugal, in 1993, the MSc degree in "Engenharia Electrotécnica e de Computadores" from Instituto Superior Técnico da Universidade Técnica de Lisboa, Portugal, in 1997, and the PhD degree in "Informática" from Faculdade de Ciências da Universidade de Lisboa, Portugal, in 2005. She is an Assistant Professor at the Departamento de Informática, Faculdade de Ciências, Universidade de Lisboa and a senior researcher of the Large Scale Computer Systems Laboratory (LaSIGE). Her current research interests include security, business processes, and Internet of Things (IoT). She is the coordinator of the master program in information security of Faculdade de Ciências, Universidade de Lisboa and Pró-rector at Universidade de Lisboa.
www.shortbio.org/mddomingos@fc.ul.pt
Francisco Martins
Francisco Martins is an Assistant Professor at University of Azores. Until September 2017, he was an Assistant Professor at the Department of Informatics, Faculty of Sciences, University of Lisbon. Until September 2006, he was Assistant Professor at the Department of Mathematics, University of Azores where he began teaching (as teaching assistant) in October 1997. Previously he was an I. T. manager at Banco Comercial dos Açores since 1990. Francisco received his Ph.D. in Computer Science at University of Lisbon (Faculty of Sciences) in 2006, his M.Sc. (by research) in Computer Science at University of Azores in 2000, and his B.Sc. in Mathematics and Informatics at the University of Azores in 1995.
www.shortbio.org/fmartins@acm.org
Integrating CLIPS Applications into Heterogeneous Distributed Systems
Richard M. Adler
Symbiotics, Inc.
875 Main Street Cambridge, MA 02139 (617) 876-3633
Abstract. SOCIAL is an advanced, object-oriented development tool for integrating intelligent and conventional applications across heterogeneous hardware and software platforms. SOCIAL defines a family of "wrapper" objects called Agents, which incorporate predefined capabilities for distributed communication and control. Developers embed applications within Agents and establish interactions between distributed Agents via non-intrusive message-based interfaces. This paper describes a predefined SOCIAL Agent that is specialized for integrating CLIPS-based applications. The Agent's high-level Application Programming Interface supports bidirectional flow of data, knowledge, and commands to other Agents, enabling CLIPS applications to initiate interactions autonomously, and respond to requests and results from heterogeneous, remote systems. The design and operation of CLIPS Agents is illustrated with two distributed applications that integrate CLIPS-based expert systems with other intelligent systems for isolating and managing problems in the Space Shuttle Launch Processing System at the NASA Kennedy Space Center.
INTRODUCTION
The central problems of developing heterogeneous distributed systems include:
- communicating across a distributed network of diverse computers and operating systems in the absence of uniform interprocess communication services;
- specifying and translating information (i.e., data, knowledge, commands), across applications, programming languages and development shells with incompatible native representational models, programmatic data and control interfaces;
- coordinating problem-solving across heterogeneous applications, both intelligent and conventional, that were designed to operate as independent, standalone systems;
- accomplishing these integration tasks non-intrusively, to minimize re-engineering costs for existing systems and to ensure maintainability and extensibility of new systems.
SOCIAL is an innovative distributed computing tool that provides a unified, object-oriented solution to these difficult problems (Adler 1991). SOCIAL provides a family of "wrapper" objects called Agents, which supply predefined capabilities for distributed communication, control, and information management. Developers embed applications in Agents, using high-level, message-based interfaces to specify interactions between programs, their embedding Agents, and other application Agents. These message-based Application Programming Interfaces (APIs) conceal low-level complexities of distributed computing, such as network protocols and platform-specific interprocess communication models (e.g., remote procedure calls, pipes, streams). This means that distributed systems can be developed by programmers who lack expertise in system-level communications (e.g., Remote Procedure Calls, TCP/IP, ports and sockets, platform-specific data architectures). Equally important, SOCIAL's high-level APIs enforce a clear separation between
application-specific functionality and generic distributed communication and control capabilities. This partitioning promotes modularity, maintainability, extensibility, and portability.
This paper describes a particular element of the SOCIAL development framework called a CLIPS Knowledge Gateway Agent. Knowledge Gateways are SOCIAL Agents that are specialized for integrating intelligent systems implemented using standardized AI development shells such as CLIPS and KEE. Knowledge Gateways exploit object-oriented inheritance to isolate and abstract a shell- and application-independent model for distributed communication and control. Particular subclasses of Knowledge Gateway Agents, such as the CLIPS Gateway, add a dedicated high-level API for transporting information and commands across the given shell's data model and data and control interfaces. To integrate a CLIPS application, a developer simply (a) creates a subclass of the CLIPS Gateway Agent class, and (b) specializes it using the high-level CLIPS Gateway API to define the desired message-based interactions between the program, its embedding Gateway, and other application Agents.
The remainder of the paper is divided into three major parts. The first section provides an overview of SOCIAL, emphasizing the lower-level distributed computing building blocks underlying Gateway Agents. The second section describes the architecture and functionality of Knowledge Gateway Agents. Structures and behaviors specific to the CLIPS Gateway are used for illustration. The third section presents two examples of SOCIAL applications that integrate CLIPS-based expert systems with other intelligent systems for isolating and managing problems in the Space Shuttle Launch Processing System at the NASA Kennedy Space Center.
OVERVIEW OF SOCIAL
SOCIAL consists of a unified collection of object-oriented tools for distributed computing, depicted below in Figure 1. Briefly, SOCIAL's predefined distributed processing functions are bundled together in objects called Agents. Agents represent the active computational processes within a distributed system. Developers assemble distributed systems by (a) selecting Agents with suitable integration behaviors from SOCIAL's library of predefined Agent classes, and (b) using dedicated APIs to embed individual application elements within Agents and to establish the desired distributed interactions among their embedded applications. A separate interface allows developers to create entirely new Agent classes by combining (or extending) lower-level SOCIAL elements to satisfy unique application requirements (e.g., supporting a custom, in-house development tool). These new Agent types can be incorporated into SOCIAL's Agent library for subsequent reuse or adaptation. The following subsections review SOCIAL's major subsystems.

Distributed Communication
SOCIAL's distributed computing utilities are organized in layers, enabling complex functions to be built up from simpler ones. The base or substrate layer of SOCIAL is the MetaCourier tool, which provides a high-level, modular distributed communications capability for passing information between applications based on heterogeneous languages, platforms, operating systems, networks, and network protocols (Symbiotics 1990). The basic Agent objects that integrate software programs or information resources are defined at SOCIAL's MetaCourier level. Developers use the MetaCourier API to pass messages between applications and their embedding Agents, as well as among application Agents. Messages typically consist of (a) commands that an Agent passes directly into its embedded application, such as database queries or calls to execute signal processing programs; (b) data arguments to program commands that an Agent might call to invoke its embedded application; or (c) symbolic flags or keywords that signal the Agent to invoke one or another fully preprogrammed interactions with its embedded application.
For example, a high-level MetaCourier API call issued from a local LISP-based application Agent such as (Tell :agent 'sensor-monitor :sys 'Symbl '(poll measurement-Z)) transports the message contents, in this case a command to poll measurement-Z, from the calling program to the Agent sensor-monitor resident on platform Symbl. The Tell function initiates a message transaction based on an asynchronous communication model; once the message is issued, the application Agent can immediately move on to other processing tasks. The MetaCourier API also provides a synchronous "Tell-and-Block" message function for "wait-and-see" processing models.
Agents contain two procedural methods that control the processing of messages, called :in-filters and :out-filters. In-filters parse incoming messages, based on a contents structure that is specified when the Agent is defined. After parsing a message, an :in-filter typically either invokes the Agent's embedded application, or passes the message (which it may modify) on to another Agent. The MetaCourier semantic model entails a directed acyclic computational graph of passed messages. When no further passes are required, the :in-filter of the terminal Agent runs to completion. This Agent's :out-filter method is then executed to prepare a message reply, which is automatically returned (and possibly modified) through the :out-filters of intermediate Agents back to the originating Agent. Developers specify the logic of :in-filters and :out-filters to meet their particular requirements for application interactions.

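The :in-filter/:out-filter flow described above can be sketched as a chain of objects that forward a message toward a terminal Agent and pass the reply back through each out-filter. The `Agent` class and its `handler` and `next_agent` attributes are hypothetical stand-ins for illustration, not the actual SOCIAL/MetaCourier API:

```python
# Minimal sketch of the :in-filter/:out-filter message model (illustrative
# names only; this is not the real MetaCourier interface).

class Agent:
    def __init__(self, name, next_agent=None, handler=None):
        self.name = name
        self.next_agent = next_agent   # Agent to pass the message on to, if any
        self.handler = handler         # embedded application entry point

    def in_filter(self, message):
        # Either pass the (possibly modified) message on to another Agent,
        # or, at the terminal Agent, run the embedded application.
        if self.next_agent is not None:
            reply = self.next_agent.in_filter(message)
        else:
            reply = self.handler(message)
        # Replies flow back through each Agent's out-filter.
        return self.out_filter(reply)

    def out_filter(self, reply):
        # Post-process the reply on its way back to the originating Agent.
        return reply

terminal = Agent("sensor-monitor", handler=lambda m: ("polled", m))
router = Agent("router", next_agent=terminal)
print(router.in_filter("measurement-Z"))   # ('polled', 'measurement-Z')
```

The directed acyclic graph of passed messages in the MetaCourier model corresponds here to the chain of `next_agent` links; real deployments would route across hosts rather than in-process calls.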
A MetaCourier runtime kernel resides on each application host. The kernel provides (a) a uniform message-passing interface across network platforms; and (b) a scheduler for managing messages and Agent processes (i.e., executing :filter methods). Each Agent contains two attributes (slots) that specify associated Host and Environment objects. These MetaCourier objects define particular hardware and software execution contexts for Agents, including the host processor type, operating system, network type and address, language compiler, linker, and editor. The MetaCourier kernel uses the Host and Environment associations to manage the hardware and software platform specific dependencies that arise in transporting messages between heterogeneous, distributed Agents (cf. Figure 2).

MetaCourier's high-level message-based API is basically identical across different languages such as C, C++, and Lisp. Equally important, MetaCourier's communication model is also symmetrical or "peer-to-peer." In contrast, client-server computing, a popular alternative model for distributed communication, is asymmetric: clients are active (i.e., only clients can initiate communication) while servers are passive. Moreover, while multiple clients can interact with a particular server, a specific client process can only interact with a single (hardwired) server. MetaCourier's communication model eliminates these restrictions on interprocess interactions.
Data Specification and Translation
A major difficulty in getting heterogeneous applications and information resources to interact with one another is the basic incompatibility of their underlying models for representing data, knowledge, and commands. These problems are compounded when applications are distributed across heterogeneous computing platforms with different data architectures (e.g., opposing byte ordering conventions).
SOCIAL applies a uniform "plug compatible" approach to these issues. This approach consists of two elements, a design methodology and a set of tools to support that methodology. SOCIAL defines a uniform application-independent information model. In the case of Knowledge Gateways, the information model defines a set of common data elements commonly used in intelligent systems, including facts, fact-groups, frames/objects, and rules. SOCIAL's Data Management Subsystem (DMS) provides tools (a) for defining canonical structures to represent these data types, and (b) for accessing and manipulating application-specific examples of these structures. These tools are essentially uniform across programming languages. Equally important, DMS tools encode and decode basic data types transparently across different machine architectures (e.g., character, integer, float).
Developers use SOCIAL's DMS tools to construct intermediate-level APIs for the Gateway Agent class that integrates particular applications. This API establishes mappings between SOCIAL's "neutral exchange" structures and the native representational model for the target application or application shell. For example, the API for the CLIPS Knowledge Gateway Agent translates between DMS frames and CLIPS deftemplates or fact-groups (e.g., Make-CLIPS-fact-group-from-frame Frame-x). Similarly, the KEE Gateway API transparently converts DMS frames to KEE units and KEE units back into frames. If necessary, new DMS data types and supporting API enhancements can be defined to extend SOCIAL's neutral exchange model. This uniform mapping approach simplifies the problem of interconnecting N disparate systems from O(N*N) to O(N), as illustrated in Figure 3.
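The O(N*N)-to-O(N) reduction can be made concrete with a toy sketch: each shell supplies one converter pair to and from a canonical form, and any-to-any translation composes through that form. The function names and data shapes below are invented for illustration and do not reflect the actual DMS API:

```python
# Toy "neutral exchange" sketch: N converter pairs instead of N*N
# pairwise translators. A canonical fact is modeled as a plain dict.

def clips_to_dms(fact_string):
    # e.g. "(status cpu-3 failed)" -> canonical fact
    parts = fact_string.strip("()").split()
    return {"relation": parts[0], "args": parts[1:]}

def dms_to_clips(fact):
    return "(" + " ".join([fact["relation"]] + fact["args"]) + ")"

def kee_to_dms(unit):
    # a KEE-style unit modeled as ("status", ["cpu-3", "failed"])
    return {"relation": unit[0], "args": list(unit[1])}

def dms_to_kee(fact):
    return (fact["relation"], fact["args"])

# Any-to-any translation composes through the canonical form:
kee_unit = ("status", ["cpu-3", "failed"])
print(dms_to_clips(kee_to_dms(kee_unit)))  # (status cpu-3 failed)
```

Adding an (N+1)th system then requires only one new converter pair to the canonical form, which is the point of Figure 3.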
SOCIAL integrates DMS with MetaCourier to obtain transparent distributed communication of complex data structures across heterogeneous computer platforms as well as across disparate applications: developers embed DMS API function calls within the :in-filter and :out-filter methods of interacting Agents, using MetaCourier messages to transport DMS data structures across applications residing on distributed hosts. DMS API functions decode and encode message contents, mapping information to and from the native representational models of source and target applications and DMS objects. SOCIAL thereby separates distributed communication from data specification and translation, and cleanly partitions both kinds of generic functionality from application-specific processing.
Distributed Control (Specialized Agents and Agent APIs)
SOCIAL's third layer of object-oriented tools establishes a library of predefined Agent classes and associated high-level API interfaces that are specialized for particular integration or coordination functionality. MetaCourier and DMS API functions are used to construct Agent API data and control interfaces. These high-level Agent APIs largely conceal lower-level MetaCourier and DMS interfaces from SOCIAL users. Thus, developers typically use specialized Agent classes as the primary building blocks for constructing distributed systems, accessing the functionality of each such Agent type through its dedicated high-level API. If necessary, developers can define new Agent classes and APIs by specializing (e.g., modifying or extending) existing ones.
Currently, SOCIAL's library defines Gateway and Manager Agent classes. Gateways, as noted earlier, simplify the integration of applications based on development tools such as AI shells, DBMSs, CASE tools, 4GLs, and so on. Manager Agents are specialized to coordinate application Agents to work together cooperatively. The HDC-Manager (for Hierarchical Distributed Control) functions much like a human manager, mediating interactions among "subordinate" application Agents and between subordinates and the outside world. The Manager acts as an intelligent router of task requests, based on a directory knowledge base that identifies available services (e.g., data, problem-solving skills) and the application Agents that support them. The Manager also provides a global, shared-memory "bulletin-board." Application Agents are only required to know the names of services within the Manager's scope and the high-level API for interacting with the Manager; they do not need to know about the functionality, structure, location, or even the existence of particular application Agents. The Manager establishes a layer of control abstraction, decoupling applications from one another. This directory-driven approach to Agent interaction promotes maintainability and extensibility, and is particularly valuable in complex distributed systems that evolve as applications are enhanced or added over an extended lifecycle.
KNOWLEDGE GATEWAY AGENTS
Knowledge Gateway Agents combine several important SOCIAL tools and design concepts:
- MetaCourier's high-level, message-based distributed communication capabilities for remote interactions across disparate hardware and software environments;
- DMS data modeling and mapping facilities for transparently moving data, knowledge, and control structures across disparate applications and shells;
- a modular object-oriented architecture that defines a uniform partitioning of integration functionality;
- a non-intrusive design methodology for programming specific, discrete interactions between the application being integrated, its embedding Gateway Agent, and other application Agents;
- extensibility to encompass generalized hooks for security, error management, and session management utilities.
Gateway functional capabilities are distributed across the class hierarchy of Gateways to exploit object-oriented inheritance of behaviors of common utility across Agent subclasses. The partitioning and inheritance of behaviors are summarized below in Figure 4.
Figure 4. Inheritance of SOCIAL Knowledge Gateway Agent Behaviors
The root Knowledge Gateway Agent, KNOWL-GW, defines the overall structure and functional behavior of all Agent subclasses that are developed to integrate shell-based (or custom) intelligent applications. In particular, KNOWL-GW establishes:
- the uniform MetaCourier/DMS message format structure for communicating with all Knowledge Gateway Agents;
- the Agent :in-filter method for parsing and managing incoming messages;
- the Agent :out-filter method for post-processing results;
- default (stub) API methods that are overridden at the Gateway Agent subclass level.
Knowledge-based systems, like most conventional systems, typically function as servers, responding to programmatic (or user) queries or commands. In a server configuration, data and control flow in, while data (results) alone flows out. However, intelligent systems can also initiate control activities autonomously, in response to dynamic, data-driven reasoning. This active (client) role entails "derived" requirements for capabilities to process results returned in response to previous outgoing messages. Since intelligent systems can be configured to act as clients, servers, or play both roles within the same distributed application, any generalized integration technology such as Knowledge Gateways must support bidirectional flow of data and control.
Accordingly, the generic :in-filter method handles two cases: (a) a MetaCourier message coming in from some external application Agent to be handled by the Knowledge Gateway's embedded application, and (b) a message from the embedded application that is to be passed via the Knowledge Gateway Agent to some external application Agent. KNOWL-GW distinguishes the two cases automatically, based on the message's target Agent. Similarly, the generic :out-filter method handles two cases: (a) dispatching the embedded application's reply via the Knowledge Gateway Agent to the external requesting Agent, and (b) processing the response from an external application Agent to a message passed by the Gateway from its embedded application and injecting it back into the embedded application via the shell. This simple dual logic in the :filter methods enables Gateway Agents to function as clients or servers, as required by particular messages.
KNOWL-GW also establishes (a) a uniform, top-level Gateway API; and (b) a uniform model for applying or invoking this API in the filter methods. Specifically, the KNOWL-GW Agent class establishes stub versions of the top-level Gateway API methods. Gateway Agent subclasses for specific development shells override the stubs with method definitions tailored to the corresponding knowledge model, and data and control interfaces. The top-level API consists of the following five methods:
- :extract-data;
- :inject-data;
- :initialize-shell;
- :process-request;
- :process-response.
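The stub-and-override pattern behind this five-method API can be sketched as a small class hierarchy: the root class defines stubs, a shell-specific subclass fills in the data and control methods, and an application subclass supplies only the request/response logic. All class and method names below are illustrative Python analogues, not the actual SOCIAL implementation:

```python
# Sketch of the Gateway API inheritance pattern (hypothetical names).

class KnowlGW:
    # Root Gateway class: all five top-level API methods are stubs.
    def extract_data(self, msg=None): raise NotImplementedError
    def inject_data(self, data): raise NotImplementedError
    def initialize_shell(self, opts): raise NotImplementedError
    def process_request(self, msg): raise NotImplementedError   # server role
    def process_response(self, msg): raise NotImplementedError  # client role

class ClipsGW(KnowlGW):
    # Shell-specific subclass: overrides the data/control stubs once,
    # for every CLIPS-based application.
    def __init__(self):
        self.facts = []                 # stands in for the CLIPS fact-list
    def initialize_shell(self, opts):
        self.facts.clear()              # analogous to a clear/reset
    def inject_data(self, data):
        self.facts.append(data)         # analogous to asserting a fact
    def extract_data(self, msg=None):
        return list(self.facts)

class XSwitcherGW(ClipsGW):
    # Application subclass: only the application-specific behavior
    # needs to be written.
    def process_request(self, msg):
        self.initialize_shell({})
        self.inject_data(msg)           # load data, "run" the engine,
        return self.extract_data()      # then extract the results

gw = XSwitcherGW()
print(gw.process_request("(failure cpu-3)"))  # ['(failure cpu-3)']
```

The payoff, as the surrounding text explains, is that every application built on the same shell inherits the shell-level methods and overrides only the two application-specific ones.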
The first three top-level API methods are defined for each Gateway Agent subclass in terms of intermediate-level DMS API functions, which vary to reflect differences in shell architectures. However, each subclass API contains elements from the following categories:
- external interface functions;
- data interface functions (for data and knowledge access control);
- shell control functions.
The first API category encompasses shell-specific functions for passing data and commands from an application out to the Knowledge Gateway. Depending on the shell in question, the Gateway API may be more or less elaborate. For example, the Gateway for CLIPS V4.3 defines a single external API function to initiate interactions between rules in a CLIPS application and its embedding Agent, which hides MetaCourier and DMS API functions completely. The CLIPS Gateway API will be extended to reflect the procedural and object-oriented programming extensions in CLIPS Version 5.
Functions in the second category combine (a) the DMS API, which maps data and knowledge between SOCIAL's canonical DMS structures and the representational model native to a specific shell with (b) the shell-specific programmatic data interface used to generate, modify, and access data and knowledge structures in the native representational format. Examples include asserting and retracting structures, sending object-oriented messages, and modifying object attributes. For example, Make-CLIPS-fact-from-fact converts a DMS fact into a string, which is automatically inserted into the current CLIPS facts-list using the CLIPS assert function. The third category encompasses shell-specific control interface capabilities such as start, clear, run, reset, exit, and saving and loading code and/or knowledge base files.
The top-level :extract-data and :inject-data Gateway methods consolidate the intermediate-level data interface API functions. Typically, :inject-data and :extract-data consist of program Case statements that invoke conversion functions for translating between different types defined in the native information model for a given shell and structures in the SOCIAL/DMS model. For example, :inject-data may call a DMS-level API function to map and insert a DMS frame structure as a fact-group into a CLIPS knowledge base and another to insert a DMS fact. Access direction (reading or writing) is implicitly reflected in the developer's choice of Extract (read) or Inject (write). Both methods are preprogrammed to dispatch automatically on data type, with options to override defaults (e.g., to map a DMS frame into a CLIPS fact-group instead of a deftemplate). Similarly, the :initialize-shell method represents the locus for control interface functions. Behavior is again classified by case and dependent on the target tool or program. For example, CLIPS employs different API functions to load textual and compiled knowledge bases.
The remaining pair of top-level Knowledge Gateway API methods, :process-request and :process-response, are application-specific. A shell-based application is integrated into a distributed system by specializing the Gateway Agent subclass for the relevant shell. Specialization here consists of overriding the stub versions of :process-request and :process-response inherited from KNOWL-GW and defining the required integration behaviors. Developers redefine these two methods by employing the generic API functions :extract-data, :inject-data, and :initialize-shell to pass information and control into and out of the target application via its associated shell.
The Gateway model is particularly powerful for integrating shell-based applications, in that the shell-specific methods (viz., :inject-data, :extract-data, :initialize-shell), are defined only once, namely in a KNOWL-GW subclass for the given shell. Application developers do not have to modify these API elements unless API extensions are necessary. Any application based on that shell can be embedded in a Gateway that is a subclass of the shell-specific Gateway Agent. The application Gateway inherits the generic tool-specific API interface, which means that the developer only has to program the methods :process-request (for server behaviors) and :process-response (for client behaviors). Individual interactions with the shell are specified using the inherited API to extract or inject particular data and to control the shell.
For custom applications, all five API methods are defined in one and the same Gateway Agent, namely the KNOWL-GW Agent subclass level. Therefore, inheritance does not play as powerful a role in assisting the application integrator as it does for multiple programs based on a common shell interface. Nevertheless, the generic :in-filter and :out-filter methods are inherited, providing the standardized message control model for peer-to-peer interactions. Moreover the Gateway model is useful as a methodological template in that it prescribes a uniform and intuitive partitioning of interface functionality: specific interactions between an application, its Gateway, and external systems are isolated in :process-request and :process-response, which invoke the utility API functions such as :inject-data as appropriate.
CLIPS Knowledge Gateway
CLIPS-GW is a subclass of the KNOWL-GW Agent class. As with all other Knowledge Gateway Agent subclasses, it inherits the KNOWL-GW message structure, :in-filter and :out-filter, and stub API methods. CLIPS-GW defines custom :inject-data, :extract-data, and :initialize-shell methods tailored to the CLIPS knowledge model, data and control interfaces. These custom methods are built up from a set of intermediate-level API functions, which are summarized in Table 1. Specifically, :inject-data is based on Load-CLIPS-Data, which depends on CLIPS-Dispatch, and Load-CLIPS-Files. :extract-data relies on the function gw-return. :initialize-shell invokes the basic shell control API functions, based on keyword symbols specified in incoming messages. Analogous APIs are defined for Knowledge Gateways for other AI shells, such as KEE.
<table>
<thead>
<tr>
<th>Category</th>
<th>Function</th>
<th>Behavior</th>
</tr>
</thead>
<tbody>
<tr>
<td>Shell Control</td>
<td>CLIPS-Start</td>
<td>starts CLIPS and sets a global flag</td>
</tr>
<tr>
<td></td>
<td>CLIPS-Clear</td>
<td>clears all facts from CLIPS fact-list</td>
</tr>
<tr>
<td></td>
<td>CLIPS-Init</td>
<td>if flag is set, calls CLIPS-Clear, otherwise calls CLIPS-Start</td>
</tr>
<tr>
<td></td>
<td>CLIPS-Run-Appl</td>
<td>runs CLIPS rule engine to completion, accepts optional integer to limit # of rule firings</td>
</tr>
<tr>
<td></td>
<td>CLIPS-Load-Appl</td>
<td>loads a specified rule base file into CLIPS</td>
</tr>
<tr>
<td></td>
<td>CLIPS-Assert</td>
<td>asserts a fact (string) into CLIPS fact-list</td>
</tr>
<tr>
<td></td>
<td>CLIPS-Retract</td>
<td>retracts fact (C pointer) from CLIPS fact-list</td>
</tr>
<tr>
<td></td>
<td>CLIPS-Display-Facts</td>
<td>displays facts to output stream</td>
</tr>
<tr>
<td>Data Interface</td>
<td>CLIPS-Dispatch</td>
<td>calls a C dispatch routine to translate DMS structures and create CLIPS facts, fact-groups, deftemplates, or rules, as appropriate.</td>
</tr>
<tr>
<td></td>
<td>Load-CLIPS-Data</td>
<td>reads DMS data from composite DMS structure and calls CLIPS-Dispatch on each one (except files)</td>
</tr>
<tr>
<td></td>
<td>Load-CLIPS-Files</td>
<td>reads the composite DMS object for file pathname strings and calls CLIPS-Load-Appl</td>
</tr>
<tr>
<td>External Interface</td>
<td>gw-return</td>
<td>an external/user function defined to CLIPS for passing data from rules back to Agent</td>
</tr>
</tbody>
</table>
Table 1. Intermediate Level API for the CLIPS Gateway Agent (Version 4.3)
The CLIPS Gateway API defines an external user function that provides a high-level interface between a CLIPS application and its embedding Gateway. This function, called gw-return, enables CLIPS applications to pass data and/or control information to their Gateway Agents by stuffing a stream buffer that is unpacked using the top-level :extract-data command. gw-return function calls appear as consequent clauses, as illustrated in the example rule shown below. gw-return takes two arguments - a DMS structure type such as a Fact and a string or pointer. The first item is used to parse the datum and convert it into the specified type of DMS structure. Multiple gw-return clauses can be placed into the right-hand side of a single rule. Also, multiple rules can contain gw-return clauses.
(defrule TALK-BACK-TO-GATEWAY
  "rule that passes desired result, a fact, that has been asserted into appl KB back through the Gateway to requesting Agent"
  ?requestor <- (requestor ?appl-agent)
  ?answer <- (answer $?result)
  =>
  (printout t "Notifying " ?requestor " of result " $?result crlf)
  (gw-return FACT (str-implode $?result)))
Messages to Knowledge Gateway Agents contain five elements, the target Agent, Environment, Host, data, and command options. In server mode (responding to messages from other application Agents), the CLIPS-GW :in-filter executes :initialize-shell for the specified command options to prepare CLIPS, invokes :process-request for the incoming data, and sets the results. Typically :process-request injects data, which includes loading rule bases, runs the rule engine, and extracts results. The :out-filter translates the :in-filter results into SOCIAL neutral exchange format, which constitute the reply that MetaCourier returns to the requesting Agent.
In client mode, a CLIPS application initiates a message to some external application Agent via its embedding CLIPS Agent. Here, the :in-filter invokes :extract-data and passes the message contents and any specified command options to the target application Agent. The :out-filter then invokes :process-response to deal with the reply. Typically, :process-response invokes :inject-data to introduce response data into the CLIPS fact-list and restarts the CLIPS rule engine to resume reasoning.
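The client-mode round trip just described can be sketched in a few lines: the embedded application emits a request through its Gateway, and :process-response injects the reply into the fact-list and resumes reasoning. The `ClientClipsGW` class and its attributes are invented for illustration and stand in for the real Gateway machinery:

```python
# Sketch of a Gateway in the client role (all names hypothetical).

class ClientClipsGW:
    def __init__(self, remote):
        self.facts = []        # stands in for the CLIPS fact-list
        self.remote = remote   # callable standing in for the external Agent

    def send_from_application(self, request):
        # The :in-filter extracts the outgoing message and forwards it
        # to the target application Agent.
        reply = self.remote(request)
        # The :out-filter hands the reply to :process-response.
        return self.process_response(reply)

    def process_response(self, reply):
        self.facts.append(reply)   # :inject-data into the fact-list,
        return self.run_engine()   # then restart the rule engine

    def run_engine(self):
        return f"resumed with {len(self.facts)} fact(s)"

gw = ClientClipsGW(remote=lambda req: ("bus-status", req, "nominal"))
print(gw.send_from_application("launch-data-bus-1"))
# resumed with 1 fact(s)
```

The same object could also expose a `process_request` method for the server role, giving the bidirectional behavior the paper describes.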
EXAMPLE APPLICATIONS OF CLIPS GATEWAY AGENTS
This section of the paper describes two demonstration systems that employ CLIPS Gateways to integrate expert systems for operations support for the Space Shuttle fleet. Processing, testing, and launching of Shuttle vehicles takes place at facilities dispersed across the Kennedy Space Center complex. The Launch Processing System (LPS) provides the sole direct, real-time interface between Shuttle engineers, Orbiter vehicles and payloads, and associated Ground Support Equipment to support these activities (Heard 1987). The locus of control for the LPS is the Firing Room, an integrated network of computers, software, displays, controls, switches, data links and hardware interface devices. Firing Room computers are configured to perform independent LPS functions through application software loads. Shuttle engineers use Console computers to monitor and control specific vehicle and Ground Support systems. These computers are connected to data buses and telemetry channels that interface with Shuttles and Ground Support Equipment. The Master Console is a computer that is dedicated to operations support of the Firing Room itself.
Integrating Configuration and Fault Management
The first application illustrates the use of a CLIPS Gateway in a server role to integrate expert systems that automate configuration and fault management operations support tasks (Adler 1990). X-Switcher is a prototype expert system that supports operators of the Switching Assembly used to manage computer configurations in Firing Rooms. X-Switcher was implemented using CLIPS V4.3 on a Sun workstation. OPERA (for Operations Analyst) is an integrated collection of expert systems that automates critical operations support functions for the Firing Room (Adler 1989). In essence, OPERA retrofits the Master Console with automated, intelligent capabilities for detecting, isolating and managing faults in the Firing Room. The system is implemented in KEE and runs on a Texas Instruments Explorer Lisp Machine. PRACA, NASA's Problem Reporting and Corrective Action database, was simulated using the Oracle relational DBMS, again on a Sun workstation.
A distributed system prototype was constructed with SOCIAL, using appropriate library Agents to integrate these three applications - a CLIPS Gateway for X-Switcher, a KEE Gateway for OPERA, and an Oracle Gateway for PRACA. The prototype executes the following scenario. First, OPERA receives LPS error messages that indicate a failure in a Firing Room computer subsystem. OPERA then requests a reconfiguration action from X-Switcher via the OPERA KEE Gateway Agent. This request is conveyed via a MetaCourier message to the CLIPS Gateway Agent. The message contains a DMS fact-group that specifies the observed computer problem, the pathname for the X-Switcher rule base on the Sun platform, and the current Firing Room Configuration Table. OPERA models the Configuration Table as a unit, which is KEE's hybrid frame-object knowledge structure. The OPERA Agent automatically unpacks slot data from the Table unit and appends it to the DMS fact-group via KEE Gateway API calls.
Upon receiving the KEE Gateway's message, the CLIPS Gateway Agent executes the following sequence of tasks. First, CLIPS is loaded, if necessary, and initialized. Second, the X-Switcher expert system rule base is loaded. Third, the DMS OPERA data object from the KEE Gateway message is translated and asserted as a CLIPS fact-group. Fourth, CLIPS is reset and the rule engine is run. X-Switcher rules derive a set of candidate replacement CPUs for the failed Firing Room computer and prompt the user to select a CPU. X-Switcher then displays specific instructions for reconfiguring the Switching Assembly to connect the designated CPU and prompts the user to verify successful completion of the switching activity. Finally, X-Switcher interacts with its CLIPS Gateway to reply to OPERA that reconfiguration of the specified CPU succeeded or failed. This process is triggered when CLIPS executes an X-Switcher rule containing a consequent clause of the form (gw-return result). The CLIPS Gateway converts result into a DMS fact, which is transmitted to the OPERA KEE Gateway, which asserts the fact as an update value in a subsystem status slot in the Configuration Table KEE unit. The OPERA Gateway then formulates an error report, which is dispatched in a message to the Oracle Gateway Agent, which updates the simulated PRACA Problem-Tracking Database.
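The Gateway's server-side task sequence can be sketched as follows. This is an illustrative Python model, not the actual SOCIAL implementation: `ClipsEngine`, `handle_gateway_request`, and the fabricated result tuple are hypothetical stand-ins for the real embedded CLIPS API calls.

```python
# Hypothetical sketch of the CLIPS Gateway's server-side task sequence
# (initialize engine, load rule base, assert facts, run, return result).
# The ClipsEngine stub stands in for the real CLIPS API.

class ClipsEngine:
    """Minimal stand-in for an embedded CLIPS rule engine."""
    def __init__(self):
        self.rulebase = None
        self.facts = []
        self.result = None

    def load(self, rulebase_path):
        self.rulebase = rulebase_path

    def assert_facts(self, fact_group):
        self.facts.extend(fact_group)

    def reset_and_run(self):
        # A real engine would fire rules; here we fabricate a reply
        # so the Gateway's control flow can be exercised.
        self.result = ("reconfiguration", "succeeded")

def handle_gateway_request(message):
    """Steps 1-4 from the text: init CLIPS, load rules, assert the
    DMS data as a CLIPS fact-group, then reset and run the engine."""
    engine = ClipsEngine()                      # 1. load/initialize CLIPS
    engine.load(message["rulebase"])            # 2. load X-Switcher rules
    engine.assert_facts(message["fact_group"])  # 3. translate DMS -> facts
    engine.reset_and_run()                      # 4. reset and run
    return engine.result                        # gw-return equivalent

request = {"rulebase": "/x-switcher/rules.clp",
           "fact_group": [("problem", "cpu-failure"),
                          ("config-table", "FR1")]}
print(handle_gateway_request(request))
```

The reply tuple models the succeeded/failed result that the real Gateway converts into a DMS fact for the KEE side.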
Figure 5. CLIPS Application configured as a Server
Coordinating Independent Systems to Enhance Fault Diagnosis Capabilities
The second distributed application illustrates the use of a SOCIAL CLIPS Gateway Agent in a client role (Adler 1991). GPC-X is a prototype expert system for isolating faults in the Shuttle vehicle's on-board computer systems, or GPCs. GPC-X was implemented using CLIPS V4.3 on a Sun workstation. One type of memory hardware fault in GPC computers manifests itself during switchovers of Launch Data Buses. These buses connect GPCs to Firing Room Console computers until just prior to launch, when communications are transferred to telemetry links. Unfortunately, the data stream that supplies the GPC-X expert system does not provide any visibility into the occurrence of Launch Data Bus switchovers (or the health of the GPC Console Firing Room computer). Thus, GPC-X can propose but not test certain fault hypotheses about GPC problems, which seriously restricts the expert system's overall diagnostic capabilities.
However, Launch Data Bus switchover events are monitored automatically by the LPS Operating System, which triggers warning messages that are detected and processed by the OPERA system discussed above. CLIPS and KEE Gateway Agents were used to integrate GPC-X and OPERA, as before. A SOCIAL Manager Agent was used to mediate interactions between these application Gateway Agents to coordinate their independent fault isolation and test activities.
Specifically, GPC-X, at the appropriate point in its rule-based fault isolation activities, issues a request via its embedding Agent to check for Launch Data Bus switchovers to the Manager. The request is initiated by a gw-return consequent clause in the CLIPS rule that proposes the memory fault hypothesis. When this rule fires, CLIPS executes the gw-return function, which sends a message to the GPC-X CLIPS Gateway Agent. This Agent formulates a message to the Manager which contains a Manager API task request for the LDB-Switchover-Check service.
The Manager searches its directory for an appropriate server Agent for LDB-Switchover-Check, reformulates the task data into a suitable DMS-based message, and passes it to the OPERA KEE Gateway Agent. The :process-request method for this application Agent performs a search of the knowledge base used by OPERA to store interpreted LPS Operating System error messages. The objective is to locate error messages, represented as KEE units, indicative of LDB switchover events. The OPERA Gateway :out-filter uses the Manager API to translate search results into a suitable DMS structure, which is posted back to the Manager. In this situation, the OPERA Gateway Agent contains all of the request processing logic: OPERA itself is a passive participant that continues its monitoring and fault isolation activities without significant interruption.
Next, the Manager returns the results of the LDB-Switchover-Check request back to GPC-X's CLIPS Gateway Agent. The Agent :in-filter executes the :process-response method, which transparently converts the Manager DMS object into a CLIPS fact that is asserted into the GPC-X fact base. Finally, the GPC-X Agent re-activates the CLIPS rule engine to complete GPC fault diagnosis. Obviously, new rules had to be added to GPC-X to exploit the newly available hypothesis test data. However, all of the basic integration and coordination logic is supplied by the embedding GPC-X Gateway Agent or the Manager.

This SOCIAL prototype demonstrates non-intrusive system-level coordination of distributed applications that solve problems at the subsystem level. Neither application is capable of full diagnosis individually. GPC-X can generate GPC fault candidates, but lacks data concerning other LPS subsystems that is necessary for testing these hypotheses. OPERA automatically detects LPS error messages that are relevant to GPC-X's candidate test requirements. However, it lacks the contextual knowledge about GPC computers, and awareness of GPC-X's capabilities and current activities, to recognize the potential significance of specific LPS data as a test for GPC-X fault hypotheses. Gateway Agents integrate the two systems, supplying communication and data mapping capabilities. The Manager establishes the logical connections required to combine and utilize the fragmented subsystem-specific knowledge of the two applications to enhance diagnostic capabilities. This coordination architecture is non-intrusive in that neither system was modified to include direct knowledge of the other, its interfaces, knowledge model, or platform. The Manager directory and routing capabilities introduce an isolating layer of abstraction, enhancing the "plug-compatible" character of the integration architecture.
RELATED WORK
The research most closely related to SOCIAL CLIPS Gateway Agents is the AI Bus (Schultz 1990), a framework for integrating rule-based CLIPS applications in a distributed environment. SOCIAL and AI Bus both rely on modular message-based communications. AI Bus uses a client-server model based on remote procedure calls that is currently restricted to Unix hosts. SOCIAL's MetaCourier layer supports a fully peer-to-peer model that is transparent across diverse platforms. SOCIAL and AI Bus integrate applications using Agents, whose API functionality is roughly comparable. Each Agent has a dedicated message control module, and Agents can communicate directly with one another. Indirect interactions are mediated by a dedicated organizational Agent, the SOCIAL Manager or the AI Bus Blackboard. It appears that AI Bus Agents are currently restricted primarily to CLIPS-based knowledge sources, while SOCIAL Gateways provide broader support for KEE, CLIPS, and other tool-based and custom applications.
Other tools for developing heterogeneous distributed intelligent systems include GBB (Corkill 1986), ERASMUS (Jagannathan 1988), MACE (Gasser 1987), and ABE (Hayes-Roth 1988). These systems lack SOCIAL's modular, layered architecture, and are considerably less extensible below the top-level developer interfaces. GBB and ERASMUS impose a blackboard control architecture for integrating distributed applications. ABE allows multiple kinds of interaction models (e.g., transaction, data flow, blackboard), but it is not clear how easily these can be combined within a single system. MACE provides few organizational building blocks for developing complex architectures beyond a relatively simple routing Agent. None of these frameworks provides a predefined integration interface to CLIPS, although ABE and GBB include simple "black box" mechanisms, such as external or foreign function calls, with which to build one.
STATUS AND FUTURE DEVELOPMENT
The original CLIPS Gateway Agent was implemented for CLIPS V4.3 in Franz Common Lisp, using a foreign function interface to the CLIPS API, which is written in C. Within the next year, we intend to develop a full C implementation of the Agent. This Agent will also be extended to reflect enhancements in CLIPS V5.0, most notably, procedural programming and the CLIPS Object-Oriented Language.
CONCLUSIONS
CLIPS was designed to facilitate embedding intelligent applications within more complex systems. However, because CLIPS lacks built-in support for distributed communications, applications implemented with it are generally "hardwired" directly to other software systems residing either on the same platform or on a parallel multi-processor. Moreover, CLIPS integration interfaces are typically custom-built by systems-level programmers who are experienced with the mechanics of interprocess communication. SOCIAL CLIPS Gateway Agents provide a generalized, high-level approach to integrating CLIPS applications with other intelligent and conventional programs across heterogeneous hardware and software platforms. Gateways exploit object-oriented inheritance to partition generic distributed communication and control capabilities, shell-specific APIs, and application-specific functionality. Developers need only learn the high-level APIs to integrate CLIPS applications with other application Agents. SOCIAL's modular and extensible integration technologies promote a uniform, "plug-compatible" model for non-intrusive, peer-to-peer interactions among heterogeneous distributed systems.
ACKNOWLEDGMENTS
Development of SOCIAL, including the CLIPS Gateway Agent, was sponsored by the NASA Kennedy Space Center under contract NAS10-11606. MetaCourier was developed by Robert Paslay, Bruce Nilo, and Robert Silva, with funding support from the U.S. Army Signals Warfare Center under Contract DAAB10-87-C-0053. Rick Wood designed and implemented the C portions of the CLIPS Gateway API. Bruce Cottman developed the prototypes for SOCIAL's data management tools and the Oracle Gateway Agent.
REFERENCES
Decentralized Data Flows in Algebraic Service Compositions for the Scalability of IoT Systems
DOI:
10.1109/WF-IoT.2019.8767238
Document Version
Accepted author manuscript
Published in:
IEEE 5th World Forum on Internet of Things
Damian Arellanes and Kung-Kiu Lau
School of Computer Science
The University of Manchester
Manchester M13 9PL, United Kingdom
{damian.arellanesmolina, kung-kiu.lau}@manchester.ac.uk
Abstract—With the advent of the Internet of Things, scalability becomes a significant concern due to the huge amounts of data involved in IoT systems. A centralized data exchange is not desirable as it leads to a single performance bottleneck. Although a distributed exchange removes the central bottleneck, it has network performance issues as data passes among multiple coordinators. A decentralized data flow exchange is the only solution that fully enables the realization of efficient IoT systems, as there is no single performance bottleneck and the network overhead is minimized. In this paper, we present an approach that leverages the algebraic semantics of DX-MAN for realizing decentralized data flows in IoT systems. As data flows are not mixed with control flows in algebraic service compositions, we developed an algorithm that smoothly analyzes data dependencies in order to generate a direct relationship between data consumers and data producers. The result prevents passing data alongside control among multiple coordinators, because data is only read and written on a data space. We validated our approach using the Blockchain as the data space and conducted experiments to evaluate its scalability. Our results show that our approach scales well with the size of IoT systems.
Index Terms—Internet of Things, decentralized data flows, Blockchain, DX-MAN, exogenous connectors, scalability, separation between control and data, algebraic service composition
I. INTRODUCTION
The Internet of Things (IoT) envisions a world where everything will be interconnected through distributed services. As new challenges are forthcoming, this paradigm requires a shift in our way of building software systems. With the rapid advancement in hardware, the number of connected things is increasing considerably, to the extent that scalability becomes a significant concern due to the huge amounts of data involved in IoT systems. Thus, IoT services shall exchange data over the Internet using efficient approaches.
Although a centralized data exchange approach has been successful in enterprise systems, it will easily cause a bottleneck in IoT systems, which potentially generate huge amounts of data continuously. To avoid the bottleneck, a distributed approach can be used to distribute the load of data over multiple coordinators. However, this would introduce unnecessary network overhead as data is passed among many loci of control.
A decentralized data exchange approach is the most efficient solution to tackle the imminent scale of IoT systems, as it achieves better response time and throughput by minimizing network hops [1], [2], [3], [4], [5]. However, exchanging data among loosely-coupled IoT services is challenging, especially in resource-constrained environments where things have poor network connections and low disk space.
Moreover, constructing data dependency graphs is not trivial when control flow and data flow are tightly coupled. The separation of such concerns would allow a separate reasoning, monitoring, maintenance and evolution of both control and data [6]. Consequently, an efficient data exchange approach can be realized without considering control flow. Thus, the number of messages transmitted over the Internet can be reduced considerably.
This paper proposes an approach that leverages the algebraic semantics of DX-MAN [7], [8] for the realization of decentralized data flows in IoT systems. The algebraic semantics of DX-MAN allows a well-defined structure of data flows which are smoothly analyzed by an algorithm, in order to form a direct relationship between data consumers and data producers. For this analysis, the algorithm particularly takes advantage of the fact that DX-MAN separates control flow and data flow.
The rest of the paper is organized as follows. Sect. II introduces the composition semantics of the DX-MAN model. Sect. III describes its data flow dimension. Sect. IV presents the algorithm that analyzes data flows. Sect. V presents the implementation of our approach. Sect. VI outlines a quantitative evaluation of our approach. Finally, we present the related work in Sect. VII and the conclusions in Sect. VIII.
II. DX-MAN MODEL
DX-MAN is an algebraic model for IoT systems where services and exogenous connectors are first-class entities. An exogenous connector is a variability operator that defines multiple workflows with explicit control flow, while a DX-MAN service is a distributed software unit that exposes a set of operations through a well-defined interface.
An atomic service provides a set of operations and is formed by connecting an invocation connector with a computation unit. A computation unit represents an actual service implementation (e.g., a RESTful Microservice or a WS-* service) and is not allowed to call other computation units. The red arrows in Fig. 1(a) show that, as a consequence of the algebraic semantics, the interface of an atomic service has all the operations in the computation unit. An invocation connector defines the most primitive workflow, which is the invocation of one operation in the computation unit.
Our notion of algebraic service composition is inspired by algebra, where functions are hierarchically composed into a new function of the same type. The resulting function can be further composed with other functions so as to yield a more complex one. Algebraic service composition is the operation by which a composition connector is used as an operator to compose multiple services, resulting in a (hierarchical) composite service whose interface has all sub-service operations. Thus, a top-level composite will always contain the operations of all atomic services. Fig. 1(c) illustrates this concept. In particular, there are composition connectors for sequencing, branching and parallelism. A sequencer connector enables infinite workflows for the sequential invocation of sub-service operations. A selector connector defines $2^n$ branching workflows and chooses the sub-service operations to invoke, such that $n$ is the number of operations in the composite service interface. A parallel connector defines $2^n$ parallel workflows and executes sub-service operations in parallel according to user-defined tasks.
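The algebraic interface rule described above (a composite exposes the union of all sub-service operations, so every atomic operation surfaces at the top level) can be illustrated with a minimal sketch. `Service` and `compose` are hypothetical names for illustration only, not the DX-MAN API.

```python
# Illustrative sketch (not the DX-MAN API): composition connectors act
# as operators over services, and a composite's interface is the union
# of all sub-service operations.

class Service:
    def __init__(self, name, operations):
        self.name = name
        self.operations = set(operations)

def compose(connector, name, *services):
    """Composition connector ('seq', 'sel' or 'par') applied to
    services, yielding a composite of the same type."""
    ops = set()
    for s in services:
        ops |= s.operations        # interface = union of sub-interfaces
    composite = Service(name, ops)
    composite.connector = connector
    return composite

a = Service("A", {"opA1", "opA2"})
b = Service("B", {"opB1"})
c = compose("seq", "C", a, b)                       # composite C
top = compose("par", "Top", c, Service("D", {"opD1"}))
print(sorted(top.operations))   # every atomic operation is exposed
```

Because composition is hierarchical and type-preserving, `top` can itself be composed further, mirroring the algebraic semantics of the model.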
Fig. 1(d) shows that an adapter can be connected with only one exogenous connector. A looping adapter iterates over a sub-workflow while a condition holds true, and a guard adapter invokes a sub-workflow whenever a condition holds true. There are also adapters for sequencing, branching and parallelism over the operations of an individual atomic service.
Fig. 1(e) shows that control, data and computation are orthogonal dimensions in DX-MAN. Exogenous connectors enable the separation between control flow and computation, since they decouple service implementations from the (hierarchical) composition structure. Unlike existing composition approaches, data flow never follows control flow as exogenous connectors only pass control to coordinate workflow executions. For further details about the control flow dimension, we refer the reader to our previous papers [7, 8].
### III. DATA CONNECTORS
A DX-MAN operation is a set of input parameters and output parameters. An input parameter defines the required data to perform a computation, while an output parameter is the resulting data from a specific computation. Although exogenous connectors do not provide any operation (because they do not perform any computation), some of them require data. In particular, selector connectors, looping adapters and guard adapters require input values to evaluate boolean conditions. Connectors do not have any parameters by default since designers define the parameters they require when choosing a workflow. Workflow selection is out of the scope of this paper, but we refer the reader to our previous paper on workflow variability [8].
In addition to the operations created on algebraic composition, custom operations can be defined in composite services. This is particularly useful when designers want to hide the operations created during algebraic composition or when designers want to create a unified interface for a composite service.
A data connector defines explicit data flow by connecting a source parameter with a destination parameter. Fig. 2 shows that an algebraic data connector is automatically created during composition and is available for all the workflows defined by a composite. In particular, an algebraic data connector connects two parameters vertically, i.e., in a bottom-up way for outputs or in a top-down fashion for inputs. The top-down approach connects a parameter of a composite service operation to a parameter of a sub-service operation, whilst the bottom-up approach means the other way round. Fig. 3 shows the data connection rules, where we can see that algebraic data connectors are defined in four different ways. A custom data connector is manually created by a designer for only one workflow. Custom data connectors connect two parameters either vertically or horizontally. A horizontal approach connects the parameters of two sub-service operations, or an operation parameter with an exogenous connector input. A quick glance at Fig. 3 reveals that a designer is allowed to connect parameters in 16 different ways.
A designer uses custom data connectors to define data flows for a particular workflow. Currently, DX-MAN supports the most common patterns: sequencing and map-reduce. For the sequencing pattern, the parameters of two different operations are horizontally connected. Fig. 4 shows an example of this pattern, where operation $OpB$ requires data from operation $OpA$. In particular, a custom data connector links the output $A0$ with the input $B0$, while another custom data connector connects the output $A1$ with the input $B1$. To improve readability, we ignore algebraic data connectors.
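The sequencing pattern just described can be sketched with data connectors modeled as (source, destination) parameter pairs; the `"Operation.parameter"` string naming is an assumption made purely for illustration.

```python
# Hedged sketch of custom data connectors as (source, destination)
# parameter pairs, mirroring the sequencing pattern of Fig. 4:
# OpA's outputs feed OpB's inputs.

custom_connectors = [
    ("OpA.A0", "OpB.B0"),   # output A0 -> input B0
    ("OpA.A1", "OpB.B1"),   # output A1 -> input B1
]

def destinations_for(source, connectors):
    """All parameters that directly consume a given source parameter."""
    return [dst for src, dst in connectors if src == source]

def sources_for(destination, connectors):
    """All parameters that directly supply a given destination."""
    return [src for src, dst in connectors if dst == destination]

print(destinations_for("OpA.A0", custom_connectors))
print(sources_for("OpB.B1", custom_connectors))
```

Representing connectors as explicit pairs keeps the data-flow structure separate from control flow, which is what later makes the dependency analysis straightforward.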
A data processor is particularly useful when data preprocessing needs to be done before executing an operation. It waits until all input values have been received, then performs some computation and returns transformed data in the form of outputs. A mapper executes a user-defined function on each input value received. A reducer takes the result from a mapper and executes a user-defined reduce function on inputs. A reducer can also be used in isolation to perform straightforward computation such as combining data into a list. Fig. 5 shows
an example of the map-reduce pattern, where operation \( opB \) requires the pre-processing of data generated by operation \( opA \). In particular, two custom data connectors link the input \( A0 \) and the output \( A1 \) with the inputs of the mapper. The output of the mapper is connected to the input of the reducer and, similarly, the output of the reducer is connected to the input \( B0 \). Please note that \( A0 \) can only be connected from the composite service operation, according to the rules shown in Fig. 3.
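The mapper/reducer semantics described above can be sketched as follows. `Mapper`, `Reducer`, and the doubling/summing functions are illustrative assumptions, not the platform's actual data-processor classes.

```python
# Sketch of map-reduce data processors: a mapper applies a
# user-defined function to each input value it receives, and a
# reducer folds the mapper's results into a single output.
from functools import reduce

class Mapper:
    def __init__(self, fn):
        self.fn = fn
    def process(self, values):
        return [self.fn(v) for v in values]

class Reducer:
    def __init__(self, fn):
        self.fn = fn
    def process(self, values):
        return reduce(self.fn, values)

# A0 and A1 arrive as the mapper's inputs (cf. Fig. 5); the reducer's
# single output is wired, via a custom data connector, to input B0.
mapper = Mapper(lambda v: v * 2)
reducer = Reducer(lambda acc, v: acc + v)
b0 = reducer.process(mapper.process([3, 4]))   # (3*2) + (4*2)
print(b0)
```

As the text notes, a reducer can also be used in isolation, e.g. `Reducer(lambda acc, v: acc + [v])` over list-wrapped inputs to combine data into a list.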
In some workflows, algebraic data connectors may not be useful. For that reason, such connectors can be removed manually at the discretion of the designer. For example, in Fig. 5 all algebraic connectors were removed because data is only needed for the realization of the map-reduce pattern. A quick glance at Fig. 4 reveals that composite services encapsulate data flows to ensure reusability. Thus, composite services are black boxes that are not aware of the data flows of other composites.
IV. ANALYSIS OF DATA CONNECTORS
Algebraic service composition and the separation of concerns are key enablers for the realization of decentralized data flows. The separation between control and data allows a separate reasoning about these dimensions. In particular, exogenous connectors provide a hierarchical control flow structure that is completely separated from the data flow structure enabled by data connectors. The data connections in a composite service form a well-structured data dependency graph that is analyzed at deployment-time by means of Algorithm 1. To understand this algorithm, it is necessary to underline some formal definitions.
### Algorithm 1 Algorithm for the analysis of data connectors
1. **procedure** ANALYZE(\( dc \))
2. \( X_w \leftarrow \emptyset \)
3. \( Y_r \leftarrow \emptyset \)
4. if \( \Pi_1(dc) \notin PD \land \Pi_1(dc) \in dom(R) \) then
5. \( X_w \leftarrow R(\Pi_1(dc)) \)
6. else
7. \( X_w \leftarrow \{ \Pi_1(dc) \} \)
8. if \( \Pi_2(dc) \notin PD \land \Pi_2(dc) \in dom(W) \) then
9. \( Y_r \leftarrow W(\Pi_2(dc)) \)
10. for each \( y \in Y_r \) do
11. \( R \leftarrow R \uplus \{ y \mapsto (R(y) \setminus \{ \Pi_2(dc) \}) \cup X_w \} \)
12. else
13. \( Y_r \leftarrow \{ \Pi_2(dc) \} \)
14. for each \( y \in Y_r \) do
15. \( R \leftarrow R \uplus \{ y \mapsto R(y) \cup X_w \} \)
16. for each \( x \in X_w \) do
17. \( W \leftarrow W \uplus \{ x \mapsto W(x) \cup Y_r \} \)
Let \( D \) be the data type, \( PD \) the type of processor parameters, \( OD \) the type of operation parameters and \( CD \) the type of exogenous connector inputs, such that \( PD \subseteq D \), \( OD \subseteq D \) and \( CD \subseteq D \). A data connector is then a tuple of type \( DC : D \times D \) that connects a source parameter \( \in D \) with a destination parameter \( \in D \).
Reader parameters are the entities that directly consume data produced by writer parameters. \( I_r \) is the set of inputs that read data during a workflow execution, namely the inputs of atomic service operations, the inputs of exogenous connectors and the inputs of data processors. \( O_r \) is the set of operation outputs in the top-level composite, useful for reading data resulting from a workflow execution. The set \( I_w \) represents the required data for a workflow execution, which are the inputs of operations in the top-level composite. \( O_w \) is the set of outputs that write data during a workflow execution, namely the outputs of atomic service operations and the outputs of data processors.
Basically, Algorithm 1 analyzes the data connectors of all composite services, in order to create a relationship between reader parameters and writer parameters, while ignoring those parameters that do not need to manipulate data. It receives a data connector \( dc \in DC \) as an input, and uses \( R : I_r \cup O_r \mapsto \{ w \mid w \subseteq I_w \cup O_w \} \) for mapping a reader parameter to a set of writer parameters, and \( W : I_w \cup O_w \mapsto \{ r \mid r \subseteq I_r \cup O_r \} \) for mapping a writer parameter to a set of reader parameters.
Algorithm 1 creates two empty sets $\mathbf{X}_w$ and $\mathbf{Y}_r$, in order to analyze the endpoints of a data connector $dc \in DC$. $\mathbf{X}_w$ is the set of parameters connected to the source parameter $\Pi_1(dc)$ iff $\Pi_1(dc)$ is not a data processor parameter and $\Pi_1(dc)$ has incoming data connectors; otherwise, $\mathbf{X}_w$ only contains $\Pi_1(dc)$. Similarly, if the destination parameter $\Pi_2(dc)$ is not a data processor parameter and $\Pi_2(dc)$ has outgoing data connectors, then $\mathbf{Y}_r$ is the set of parameters connected from $\Pi_2(dc)$, and $\mathbf{X}_w$ replaces $\Pi_2(dc)$ in the writers of each element $y \in \mathbf{Y}_r$; otherwise, $\mathbf{Y}_r$ only contains $\Pi_2(dc)$ and $\mathbf{X}_w$ is added into its writers. Finally, the set $\mathbf{Y}_r$ is added into the readers of each element $x \in \mathbf{X}_w$. The result of the algorithm is a mapping of reader parameters to writer parameters.
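Under the reading above (R maps reader parameters to writer sets, W maps writer parameters to reader sets, PD is the set of data-processor parameters), Algorithm 1 can be rendered directly in Python. The dictionary-of-sets representation and the parameter names are assumptions for illustration.

```python
# Direct Python rendering of Algorithm 1 under the stated assumptions.
# R: reader -> set of writers; W: writer -> set of readers;
# PD: set of data-processor parameters; dc = (Pi1, Pi2).

def analyze(dc, R, W, PD):
    src, dst = dc
    # Resolve the effective writers behind the source endpoint.
    if src not in PD and src in R:
        X_w = set(R[src])
    else:
        X_w = {src}
    # Resolve the effective readers behind the destination endpoint.
    if dst not in PD and dst in W:
        Y_r = set(W[dst])
        for y in Y_r:              # re-route y's writers past dst
            R[y] = (R.get(y, set()) - {dst}) | X_w
    else:
        Y_r = {dst}
        for y in Y_r:
            R[y] = R.get(y, set()) | X_w
    for x in X_w:                  # symmetric update of readers
        W[x] = W.get(x, set()) | Y_r
    return R, W

# A writer output 'A0' connected to a reader input 'B0':
R, W = analyze(("A0", "B0"), {}, {}, PD=set())
print(R)   # {'B0': {'A0'}}
print(W)   # {'A0': {'B0'}}

# Chaining a vertical connector B0 -> C0 re-routes C0 directly to A0,
# bypassing the intermediate parameter:
R, W = analyze(("B0", "C0"), R, W, PD=set())
print(R["C0"])   # {'A0'}
```

The second call shows the decentralization effect: after analysis, the ultimate consumer `C0` reads directly from the producer `A0` on the data space, with no coordinator in between.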
V. IMPLEMENTATION
We implemented our approach on top of the DX-MAN Platform [9], and we used the Blockchain as the underlying data space for persisting parameter values while leveraging the capabilities provided by these decentralized platforms, such as performance, security and auditability. Furthermore, the Blockchain ensures that every service is the owner of its own data, while data provenance is provided to discover data flows (i.e., how data is moved between services) or to find out how parameters change over time. In particular, we defined three smart contracts using Hyperledger Composer 0.20.0 for executing transactions on Hyperledger Fabric 1.2. We do not show the source code due to space constraints, but it is available at .
The DX-MAN platform provides an API to support the three phases of a DX-MAN system lifecycle: design-time, deployment-time and run-time. Composite service templates only contain algebraic data connectors, as they represent a general design with multiple workflows. Using API constructs, a designer chooses a workflow and defines custom data connectors (and perhaps data processors) for every composite service involved. Data processor functions are defined by designers using API constructs.
Algorithm 1 analyzes the data connectors defined at design-time in order to construct the readers map at deployment-time. In particular, the map is a Java HashMap where the keys are reader parameter UUIDs and the values are lists of writer parameter UUIDs. After getting the map for a given workflow, reader parameters (with their respective lists of writers) are stored as assets in the Blockchain by means of the transaction CreateParameters.
At run-time, exogenous connectors pass control using CoAP messages. In particular, an invocation connector performs five steps to invoke an operation, as shown in Fig. 6. Although the rest of exogenous connectors behave similarly, they only perform the first two steps. First, the invocation connector uses the transaction readParameters to read all input values from the Blockchain. For a given input, the Blockchain reads values directly from the writers list. As there might be multiple writer parameters, this transaction returns a list of the most recent input values that were updated during the workflow execution. Hence, a timestamp is set whenever a parameter value is updated. readParameters returns an exception if there are no input values. Output values are written onto the data space as soon as they are available, even before control reaches data consumers. Thus, having concurrent connectors (e.g., a parallel connector) may lead to synchronization issues during workflow execution. To solve this, control flow blocks in the invocation connector until all input values are read.
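The blocking behavior of the first two steps can be sketched as below. This is a minimal illustration under assumptions: the data space is modeled as a dictionary of writer parameter id → (value, timestamp), and the function paraphrases the readParameters transaction rather than reproducing it.

```python
import time

def read_inputs_blocking(data_space, input_ids, readers_map,
                         timeout=5.0, poll=0.05):
    """Block until every input has at least one value from its writers;
    return the most recently written value per input (by timestamp).

    data_space: writer parameter id -> (value, timestamp).
    readers_map: reader parameter id -> list of writer parameter ids.
    """
    deadline = time.monotonic() + timeout
    values = {}
    while len(values) < len(input_ids):
        for pid in input_ids:
            if pid in values:
                continue
            written = [(data_space[w][1], data_space[w][0])
                       for w in readers_map.get(pid, ())
                       if w in data_space]
            if written:
                values[pid] = max(written)[1]  # latest timestamp wins
        if len(values) < len(input_ids):
            if time.monotonic() > deadline:
                raise TimeoutError("no input values available")
            time.sleep(poll)
    return values
```

The `max(written)` selection mirrors the paper's rule that, with multiple writers, the most recently updated value is returned; the timeout stands in for the exception raised when no input values exist.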

Once all inputs are ready, the invocation connector invokes the implementation of an operation by passing the respective input values. Then, the operation performs some computation and returns the result in the form of outputs. Finally, the invocation connector writes the output values onto the Blockchain using the transaction updateParameters.
An UpdateParameterEvent is published whenever a new parameter value has been updated. During deployment, the platform automatically subscribes data processor instances to the events produced by the respective writer parameters. Thus, a data processor instance waits until it receives all events, before performing its respective designer-defined computation. Although our current implementation supports only mappers and reducers, more data processors can be introduced using the semantics of a data processor presented in Sect. III, e.g., we can add a shuffler to sort data by key.
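The wait-for-all-events behavior of a data processor instance can be sketched as follows; the class and callback names are illustrative, not the platform's API.

```python
class DataProcessor:
    """Illustrative data processor: waits until every subscribed writer
    parameter has published an UpdateParameterEvent, then applies a
    designer-defined function to the collected values."""

    def __init__(self, writer_ids, compute):
        self.pending = set(writer_ids)  # events still awaited
        self.values = {}                # writer id -> last published value
        self.compute = compute          # designer-defined computation
        self.result = None

    def on_update_event(self, writer_id, value):
        """Callback invoked for each UpdateParameterEvent."""
        self.values[writer_id] = value
        self.pending.discard(writer_id)
        if not self.pending:            # all events received: compute
            self.result = self.compute(self.values)

# A reducer that sums the values of two writer parameters.
reducer = DataProcessor(['p1', 'p2'], lambda vs: sum(vs.values()))
reducer.on_update_event('p1', 3)   # still waiting for p2
reducer.on_update_event('p2', 4)   # triggers the computation
```

A shuffler, in the same scheme, would simply be a `compute` function that sorts the collected values by key.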
Our approach enables transparent data exchange, as data routing is embodied in the Blockchain. Thus, reader parameters are not aware of where the data comes from, and writer parameters do not know who reads the data they produce. Furthermore, the map generated by Algorithm 1 avoids the inefficient approach of passing values through data connectors during workflow execution: exogenous connectors and data processors read data directly from the Blockchain, onto which writer parameters only write values. Overall, this enables a transparent, decentralized data exchange.
VI. EVALUATION
In this section, we present a comparative evaluation between distributed data flows and decentralized data flows for a DX-MAN composition. In the former approach, data is passed over the network through data connectors, whereas the second approach is our solution. Our evaluation intends to answer two major research questions: (A) Does the approach scale with the number of data connectors? and (B) Under which conditions is decentralized data exchange beneficial?
As a DX-MAN composition has a multi-level hierarchical structure, an algebraic data connector passes a data value vertically in a bottom-up way (for inputs) or in a top-down fashion (for outputs) while a custom data connector passes values horizontally or vertically. For our evaluation, we only consider vertical routing through algebraic data connectors.
Let

$$M_p = \{\lambda_j \mid \lambda_j \in \mathbb{R} \}$$

be the set of network message costs for vertically routing the value of a parameter $p$, where $\lambda_j$ is the cost of passing that value through an algebraic data connector $j$. Likewise, $\Gamma_p$ and $\omega_p$ are the costs of reading and writing the value on the data space, respectively.
Equations 1 and 2 calculate the total message cost of routing a value with a distributed approach. In particular, equation 1 is used for input values, whilst equation 2 is used for output values. As the decentralized approach does not pass values through data connectors, the total message cost of routing the value of $p$ is $\Gamma_p$ for inputs, and $\omega_p$ for outputs.

$$\Gamma_p + \sum_{j=0}^{|M_p|-1} \lambda_j \tag{1}$$

$$\omega_p + \sum_{j=0}^{|M_p|-1} \lambda_j \tag{2}$$
Fig. 7 depicts the DX-MAN composition that we consider for our evaluation, which has three levels, three atomic services and two composite services. The composites ServiceD and ServiceE have three and five data connectors, respectively. Fig. 7 shows that a data connector has a $\lambda_j\in[0,7]$ cost of passing a value over the network. Then, the vertical routing sets for the parameters are $M_{A0} = \{\lambda_3\}$, $M_{A1} = \{\lambda_4\}$, $M_{B0} = \{\lambda_0, \lambda_5\}$, $M_{B1} = \{\lambda_1, \lambda_6\}$ and $M_{C0} = \{\lambda_2, \lambda_7\}$.
For clarity, we assume that the DX-MAN composition interacts with an external application through a shared data space. So, we can ignore the cost of passing data between the application and the composition. The costs of reading the inputs $A0$, $B0$ and $C0$ are $\Gamma_{A0}$, $\Gamma_{B0}$ and $\Gamma_{C0}$, respectively, and the costs of writing the outputs $A1$ and $B1$ are $\omega_{A1}$ and $\omega_{B1}$, respectively.
Suppose that a specific workflow requires the invocation of the operations $opA$ and $opC$. Using a distributed approach would require passing and reading values for two inputs, and returning and writing one output value. Therefore, according to equations 1 and 2, the total message cost would be $\lambda_3 + \lambda_4 + \lambda_2 + \lambda_7 + \Gamma_{A0} + \omega_{A1} + \Gamma_{C0}$. Remarkably, the total message cost using the decentralized approach would be $\Gamma_{A0} + \omega_{A1} + \Gamma_{C0}$.
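The worked example can be checked numerically. The sketch below assumes, purely for illustration, that every $\lambda_j$ and every data-space access costs 1; the two functions encode equations 1 and 2 and their decentralized counterpart.

```python
def distributed_cost(vertical_costs, access_cost):
    """Equations 1 and 2: one data-space access plus every vertical hop
    through an algebraic data connector (vertical_costs = M_p)."""
    return access_cost + sum(vertical_costs)

def decentralized_cost(vertical_costs, access_cost):
    """Decentralized approach: no data connector hops, only the access."""
    return access_cost

# Workflow invoking opA and opC: read input A0 (M = {l3}), write output
# A1 (M = {l4}), read input C0 (M = {l2, l7}). All costs assumed to be 1.
lam = 1.0
dist = (distributed_cost([lam], 1.0)            # read A0
        + distributed_cost([lam], 1.0)          # write A1
        + distributed_cost([lam, lam], 1.0))    # read C0
dec = (decentralized_cost([lam], 1.0)
       + decentralized_cost([lam], 1.0)
       + decentralized_cost([lam, lam], 1.0))
```

With unit costs, the distributed route pays 7 units while the decentralized one pays only the 3 data-space accesses, matching the cost expressions in the text.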
A. RQ1: Does the approach scale with the number of data connectors?
We conducted an experiment that dynamically increases the number of data connectors of the DX-MAN composition depicted in Fig. 7. The experiment is carried out in 100000 steps with $\Gamma_{A0} = \omega_{A1} = \Gamma_{B0} = \omega_{B1} = \Gamma_{C0} = 1$.
For each step of the experiment, we add a new parameter in a random atomic operation. As a consequence of algebraic composition, another parameter is added in the respective composite operation and a data connector links these parameters.
In this experiment, we particularly compare the cost of the distributed approach vs. the cost of the decentralized approach. Rather than computing the costs for the invocation of specific operations, we compute the total costs for the DX-MAN composition using $\Gamma_{A0} + \omega_{A1} + \Gamma_{B0} + \omega_{B1} + \Gamma_{C0} + \sum_{j=0}^{7} \lambda_j$. Fig. 8 shows that the costs grow linearly with the number of data connectors, and that the decentralized approach outperforms its counterpart by reducing costs by a factor of 2.67 on average.
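The experiment can be reproduced in miniature as follows. The sketch uses fewer steps than the paper's 100000 and assumes the added connector costs are drawn uniformly from $[0, 7]$, the same range as the $\lambda_j$ labels in Fig. 7; the five data-space access costs stay fixed at 1 as stated above.

```python
import random

def total_costs(lambdas, n_access):
    """Total composition cost: the distributed approach pays every
    vertical hop plus the data-space accesses; the decentralized
    approach pays only the accesses."""
    distributed = n_access + sum(lambdas)
    decentralized = float(n_access)
    return distributed, decentralized

random.seed(42)
lambdas = [random.uniform(0, 7) for _ in range(8)]  # Fig. 7 connectors
n_access = 5                                        # the five unit Gamma/omega costs
history = []
for _ in range(1000):       # 1000 of the paper's 100000 steps
    lambdas.append(random.uniform(0, 7))            # one new data connector
    dist, dec = total_costs(lambdas, n_access)
    history.append(dist / dec)                      # cost ratio as connectors grow
```

The distributed cost grows linearly with the number of connectors while the decentralized cost stays constant, so the ratio `dist / dec` grows without bound, consistent with Fig. 8.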
Fig. 7. Distributed Data Flow
Fig. 8. Impact of increasing the number of data connectors in a DX-MAN composition.
B. RQ2: Under which conditions is decentralized data exchange beneficial?
We conducted an experiment of 100000 steps to see the benefit of the decentralized approach as the number of levels of the composition increases. We particularly consider the total costs for the input $A0$ and assume that $\Gamma_{A0} = 1$. At each step, the number of levels is increased by 1, which also increases $|M_{A0}|$ by 1, and the sum of vertical costs $\sum_{j=0}^{|M_{A0}|-1} \lambda_j$ is increased by 0.0004. As the sum of vertical costs grows, the ratio $\frac{\sum_{j=0}^{|M_{A0}|-1} \lambda_j}{\Gamma_{A0} + \sum_{j=0}^{|M_{A0}|-1} \lambda_j}$ approaches 1. The improvement rate of the decentralized data exchange is $1 - \frac{\Gamma_{A0}}{\Gamma_{A0} + \sum_{j=0}^{|M_{A0}|-1} \lambda_j}$.
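The improvement rate, i.e. the fraction of the distributed routing cost that the decentralized approach saves for one input parameter, can be computed directly; the loop below mirrors the experiment's parameters ($\Gamma_{A0} = 1$, vertical-cost increment 0.0004 per step).

```python
def improvement_rate(gamma, vertical_sum):
    """Fraction of the distributed routing cost saved by reading from
    the data space instead of routing vertically:
    1 - gamma / (gamma + sum of vertical hop costs)."""
    return 1.0 - gamma / (gamma + vertical_sum)

gamma_A0 = 1.0
rates = [improvement_rate(gamma_A0, step * 0.0004)
         for step in range(100000)]
```

The rate starts at 0 (no vertical hops, nothing to save) and climbs monotonically toward 1 as the vertical cost dominates, which is the trend Fig. 9 reports.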
Fig. 9 shows the results of this experiment, where it is clear that the benefit of the decentralized approach becomes more evident as the number of levels of the composition increases. This is because the number of data connectors increases with the number of levels, and so does the cost of the distributed approach. The only way a distributed approach can outperform the decentralized one is when the cost of performing operations on the data space exceeds the total cost of passing values vertically. In particular, for our experiment the DX-MAN composition gets a benefit only if $\Gamma_{A0} < \sum_{j=0}^{|M_{A0}|-1} \lambda_j$.
VII. RELATED WORK
To the best of our knowledge, there are no solutions to enable decentralized data flows in IoT systems. In this section we present SOA-based solutions as they are applicable to IoT.
We classified our findings into three categories, depending on the composition semantics the approaches are built on: orchestration (with central control flows and decentralized data flows), decentralized orchestration and data flows, and choreographies.
Approaches belonging to the first category [10], [1] partially separate data from control so as to enable P2P data exchanges. To do so, an orchestrator coordinates the exchanges by passing data references alongside control. Thus, extra network traffic is introduced as data references (and acknowledge messages) are transferred over the network. These approaches are typically based on proxies that keep data, thus representing an issue for things with low disk space. By contrast, DX-MAN does not require any coordinator for the data exchange, and exogenous connectors do not store data. Besides, exogenous connectors do not exchange references, thanks to the separation of concerns.
Only few approaches discuss data decentralization using the semantics of decentralized orchestration. [11] stores data and control in distributed tuple spaces which may become a bottleneck in IoT environments that continuously generate huge amount of data. [3] solves that issue by storing references instead of values. However, references are needed because data is mixed with control. Moreover, [3] requires the maintenance of tuple spaces for passing references and databases for storing data. DX-MAN only reads and writes onto the data space.
Although distributed data flows [12] allocate flows over different things, a master engine coordinates data exchange for the slave engines. Hence, this approach introduces extra network hops as data is passed among multiple engines. Although Service Invocation Triggers [2] exchange data directly, they rely on workflows that contain no loops or conditionals. This limitation arises from the fact that it is not trivial to analyze data dependencies when control is mixed with data.
A choreography describes interactions among participants using decentralized message exchanges (a.k.a. conversations). Workflow participants [13] pass data among multiple engines, leading to network degradation. Although services may exchange data through direct message passing, they are not reusable because data and control are mixed [6]. [4] uses peers to exchange data and invoke services, thus separating control and computation. However, peers pass data alongside control according to predefined conversations, leading to the issues discussed in [5]. Although [14] proposes the separation between control and data for choreographies, it relies on a middleware which may become a central bottleneck.
VIII. CONCLUSIONS
In this paper, we presented an approach on top of DX-MAN to enable decentralized data flows in IoT systems. At design-time, the algebraic semantics of DX-MAN enables a well-defined structure of data connections. As data connections are not mixed with control flow structures, an algorithm can straightforwardly analyze them at deployment-time. The result is a mapping between reader parameters and writer parameters, which avoids passing values through data connectors. In our current implementation, the Blockchain embodies this mapping to manage data values at run-time.
DX-MAN is currently the only service model that provides the separation between data flow, control flow and computation; thus, allowing a separate reasoning, monitoring, maintenance and evolution of these concerns. In particular, separating data flow from control flow prevents passing data alongside control among exogenous connectors, and enables the use of different technologies to handle data flows and control flows separately.
Our experiments confirm that our approach scales well with the number of data connectors and the number of levels of a DX-MAN composition. They also suggest that our approach performs best when the cost of performing operations on the data space is less than the cost of passing data over the network. Thus, our approach is especially beneficial for IoT systems comprising many services.
REFERENCES
A Real-Time Middleware for Networked Control Systems and Application to an Unstable System
Kyoung-Dae Kim and P. R. Kumar
Abstract—A well-designed software framework is important for the rapid implementation of reliable and evolvable networked control applications, and facilitates the proliferation of networked control by enhancing its ease of deployment. In this paper we address the problem of developing such a framework for networked control that is both real-time and extensible. We enhance Etherware, a middleware developed at the University of Illinois, so that it is suitable for time-critical networked control applications. We introduce a notion of Quality of Service (QoS) for the execution of a component. We also propose a real-time scheduling mechanism so that the execution of components can not only be concurrent but also be prioritized based on the specified QoS of each execution. We have implemented this framework in Etherware. We illustrate the applicability of this software framework by deploying it for the control of an unstable system, a networked version of an inverted pendulum control system, and verify the performance of the enhanced Etherware. We also exhibit sophisticated runtime functionalities, such as runtime controller upgrade and migration, to demonstrate the flexible and temporally predictable capabilities of the enhanced Etherware. Overall, Etherware thus facilitates rapid development of control system applications with temporally predictable behavior, so that physical properties such as stability are maintained.
Index Terms—Networked Control Systems, Middleware, Real-Time Systems, Unstable System.
I. INTRODUCTION
At a time when networked control systems are on the cusp of a takeoff, a software framework which enables the rapid implementation of a reliable and evolvable networked control application is indeed one of the most important factors for the proliferation of such networked control systems [1]. This motivates the development of a middleware specifically for networked control systems that can support component-based application development. Such a framework can facilitate an application to be developed easily through the composition of a set of relevant software components. Moreover, by providing service components, it can further simplify the development of networked applications which typically require expertise to address the issues caused by the existence of a not perfectly reliable communication network that mediates interactions between components.
Such a middleware, called Etherware, has been developed at the University of Illinois, which is also flexible in that it supports runtime system evolution through its component model. The ACE ORB (TAO) [2] is another software platform that can be used for networked control systems. Its architecture is based on RT-CORBA [3], which is a real-time extension of the CORBA specification for distributed object computing from the Object Management Group (OMG). Also, Costa et al. [4] have proposed a component-based middleware, called RUNES middleware, for reconfigurable, ubiquitous, networked embedded systems. As in Etherware, one of its useful features is that it supports runtime reconfiguration via a middleware service, called the Logical Mobility Service. ROS [5] is another development platform which is designed for rapid implementation of large-scale robot system applications. Even though there are several common concepts in ROS and Etherware, such as message-based interaction and microkernel-based design, the fundamental design goals of ROS are quite different from those of Etherware. For example, while ROS is designed to be language neutral and to make it easy to reuse existing code for robotic applications, Etherware is designed to support component-based application development for general purpose networked control systems.
In this paper, we address critical additional aspects necessary for networked control applications. Among them, we are especially interested in providing Etherware with real-time capability. This is especially important for a middleware for networked control systems, since many control systems are in fact time-critical systems, with the right action being required to be executed at the right time, for otherwise the performance of a control system can be degraded or even become unstable.
As an approach to real-time capability of Etherware, we first introduce a notion of the Quality of Service (QoS) for the execution of a component. More precisely, it is the QoS of a Message for a component that gets executed when it receives a Message. In general, the QoS of a component execution can contain any information, such as the period of the execution, or the deadline of each instance of execution, etc., all of which can potentially be used in scheduling decisions within Etherware. The scheduling mechanism of Etherware is enhanced so that multiple components can be executed concurrently while the preemption between executions is allowed based on the priority that is determined by the QoS of each component’s execution. Thus, the enhanced Etherware provides real-time performance through the prioritized and concurrent Message delivery based on the QoS of each Message.
We assess the performance of the real-time middleware for networked control by experimentation on a networked
inverted pendulum system. We also assess the flexibility of the framework through experiments including runtime controller upgrade and runtime controller migration. Together, these demonstrate the capabilities of the enhanced Etherware that are enabled by the combination of the flexibility that is provided by Etherware design and the temporal predictability which is provided through the enhancement in this paper. Thereby, we show how the incorporation of the real-time mechanisms with the other flexible mechanisms of Etherware leads to a powerful framework for networked control systems.
In Section II, we address the characteristics of networked control systems and describe some requirements for a software framework. Then we introduce Etherware in Section III. Section IV discusses the design of Etherware mechanisms to support the timeliness requirement of the domain. Some implementation related issues are discussed in Section V. Then in Section VI, the deployment of Etherware on a networked inverted pendulum control system is described and the experiment results are presented.
II. NETWORKED CONTROL SYSTEMS
A. Domain Characteristics
There are many potential examples of networked control systems, including smart power grids, intelligent air/ground traffic control systems, and automatic warehouse management systems. Even though these examples are in different application areas, they share some common characteristics as networked control systems:
1) Large-scale: Due to the existence of a communication network, the scale of networked control systems is typically larger than that of classical control systems. It is large not only because the number of entities to be controlled is large but also because the structure of networked control systems is complex due to multi-scale control objectives. As an example, in an automated warehouse system, each robot is controlled to move some objects from one place to another based on a given command that is determined to meet a high level control objective. In addition to these, there also should be another control layer for safety such as collision avoidance between robots while they are moving around, which adds complexity to the overall system.
2) Openness: In a networked environment, a system configuration which forms a control-loop of a system can easily be changed at runtime. More specifically, a new entity can join or an existing entity can leave the system at any point of time. Moreover, it is also possible that an existing entity can be replaced or even be migrated to another computing node at runtime depending on the system states.
3) Time-criticality: In most cases, a control system is a time-critical system in which given actions such as sensing and control are required to occur at the right time. Failure to do so can degrade the system’s performance or can even cause the system to become unstable.
4) Safety-criticality: A safety-critical system is one in which the cost of system failure is very expensive, causing severe damage or harm to people, equipment or the environment. It is easy to see that many control systems are indeed safety-critical systems since they are employed on physical systems.
B. Domain Requirements
The following are some of the requirements for a middleware framework for networked control systems.
1) Operational Requirements: The fact that the overall system is distributed over a communication network makes it potentially much harder and more time-consuming to develop an application. Also, clock differences between computing nodes accentuate difficulties in developing a networked control application. Therefore, it is necessary to have mechanisms in a middleware framework which can resolve the problems of both location difference and time discrepancy. Besides these two requirements, a mechanism which supports semantic addressing (or context-aware addressing) is also a desirable feature since it can significantly improve the portability and reusability of the application code.
2) Management Requirements: Owing to the openness feature, a networked control application is typically subject to change and evolution after its deployment. However it is not always possible to stop the whole system to implement these changes. Therefore, it is necessary that a middleware framework provides mechanisms for runtime system management which allow continuous system evolution.
3) Non-functional Requirements: The time-critical feature of a control system requires a middleware framework to provide mechanisms which guarantee timeliness of action. Also, the safety-criticality of a control system requires that the middleware framework itself be error-free, and also provide some mechanisms to tolerate faults of an application program, so as to achieve overall reliability.
III. ETHERWARE
We begin by describing the initial capabilities of Etherware [6], that was developed at the University of Illinois. In this section, we describe the basic features and see how they satisfy the domain requirements.
A. Etherware Architecture
From the viewpoint of its software architecture, Etherware can be decomposed into roughly two parts, Kernel and Components. Components can interact with each other by exchanging objects, called Messages, each of which is a well-defined XML document object. As shown in Fig. 1, a Message contains three XML elements. The name of the receiver component of a Message is specified in Profile, the time when a Message is created is specified in Time Stamp, and any information concerning the interaction semantics can be specified in Contents.
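Such a Message can be assembled with standard XML tooling, as sketched below. The element names follow Fig. 1; the exact schema and attribute conventions of Etherware are assumptions here.

```python
import time
import xml.etree.ElementTree as ET

def make_message(profile, contents):
    """Build an illustrative Etherware-style Message: an XML document
    with Profile, TimeStamp and Contents elements."""
    msg = ET.Element("Message")
    ET.SubElement(msg, "Profile").text = profile       # receiver's name
    ET.SubElement(msg, "TimeStamp").text = str(time.time())  # creation time
    ET.SubElement(msg, "Contents").text = contents     # interaction semantics
    return ET.tostring(msg, encoding="unicode")
```

For example, `make_message("controller of car 1", "angle=0.02")` yields a document whose Profile element carries the semantic name used for addressing.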
The Kernel is indeed the main part of Etherware since it provides essential functionalities for middleware operations. A component can be created by the Kernel and also be destroyed when it is necessary. Moreover, the Kernel is responsible for delivering Messages among components. When the Kernel delivers a Message from one component to another, it first
Etherware consists of a set of components, which can be classified further into application components and service components. Application components are the components that need to be developed by a control application developer for an application, while service components are the components that are provided as part of the Etherware infrastructure to make it easy to develop an application. Section III-C discusses service components in more detail.
B. Etherware Component Model
To make it easy to develop Etherware components, Etherware provides a template of components, in an Etherware Component Model, which is shown in Fig. 2. As shown in the figure, it consists of roughly three different parts, Shell, Component Logic, and Component State, with each of them being designed based on several software design patterns [7] such as Facade, Strategy, Memento, respectively. The Shell is a class object that provides an interface between the Etherware Kernel and the Component Logic that is a class object which implements a user defined application logic. The Component State is a class object where the runtime execution states of the Component Logic are stored and maintained.
An Etherware component can easily be developed with the Etherware Component Model since the Shell is already implemented and provided as part of the Etherware infrastructure. Moreover, a template class is provided so that an object of the Component State can easily be created. Hence, an application developer needs to implement only an interface for the Component Logic, called Component, to develop an application Etherware component. One additional notable feature of the Etherware Component Model is that, due to the software design patterns used in designing the component model, it indeed supports runtime system management, making possible capabilities such as runtime component replacement and runtime component migration. We illustrate these two capabilities in Section VI.
C. Etherware Services
There are several functionalities that can commonly be useful in any networked control application. Etherware provides such functionalities as Etherware service components as shown in Fig. 2.
ProfileRegistry provides a naming service which enables a component to send a Message to other components without knowing the physical address of a recipient component. Instead, a sender component can specify the Profile, a semantic name of a component such as ‘controller of car 1’.
NetworkMessenger maintains the network connections between Etherwares over a communication network. It is also responsible for sending and receiving Messages over the network. Thus, all the details about the network are hidden from application components by the NetworkMessenger.
In general, every computer in a distributed system has a different time clock. To address this issue, Etherware provides the NetworkTime service component which estimates the time difference between computing nodes. It also translates the time stamp of a Message from the clock of the remote computing node to that of the local machine whenever a new Message arrives over a network from a remote computing node. In many control systems, some activities such as sensing and control action have to be taken based on time. For such situations, Etherware provides the Notifier which generates and sends time-driven Messages so that a component can be activated at the time that it is supposed to.
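The time-stamp translation performed by NetworkTime can be sketched as an offset correction; the sketch below assumes an NTP-style single request/reply exchange for estimating the offset, which is our illustration rather than Etherware's actual estimation protocol.

```python
def estimate_offset(t_send_local, t_recv_remote, t_reply_remote, t_recv_local):
    """NTP-style estimate of (remote clock - local clock) from one
    request/reply exchange, assuming roughly symmetric network delay."""
    return ((t_recv_remote - t_send_local)
            + (t_reply_remote - t_recv_local)) / 2.0

def translate_timestamp(remote_ts, offset):
    """Translate a Message time stamp from the remote node's clock
    to the local clock, given the estimated offset."""
    return remote_ts - offset
```

With a remote clock 5 s ahead and a symmetric one-way delay of 0.1 s, the exchange (0.0, 5.1, 5.2, 0.3) recovers an offset of 5.0, so a remote stamp of 5.1 translates to local time 0.1.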
D. Domain Requirement Analysis
Table I summarizes the domain requirements and the functionalities supported by the existing Etherware. As shown in the table, Etherware satisfies many important domain requirements, especially the requirements relevant to the operation of a distributed application and flexible system management. However, it still needs to satisfy some critical requirements which are essential for control systems. In this paper, we focus our attention on the real-time related issues. We now discuss in more detail the design and implementation of Etherware’s mechanisms for real-time guarantees.
IV. DESIGN OF ETHERWARE MECHANISMS FOR REAL-TIME GUARANTEES
A real-time system is not a system that is fast, but is rather a system which is temporally predictable [8]. Here, what we
TABLE I
Domain Requirements vs. Etherware Implementation
<table>
<thead>
<tr>
<th>Domain Requirements</th>
<th>Etherware Implementation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Operational</td>
<td></td>
</tr>
<tr>
<td>Location transparency</td>
<td>NetworkMessenger</td>
</tr>
<tr>
<td>Hiding time discrepancy</td>
<td>NetworkTime</td>
</tr>
<tr>
<td>Semantic addressing</td>
<td>ProfileRegistry</td>
</tr>
<tr>
<td>Management</td>
<td></td>
</tr>
<tr>
<td>System evolution</td>
<td>Component model</td>
</tr>
<tr>
<td>Non-functional</td>
<td></td>
</tr>
<tr>
<td>Timeliness</td>
<td></td>
</tr>
<tr>
<td>Reliability</td>
<td></td>
</tr>
</tbody>
</table>
mean by the temporal predictability\(^2\) of a system is that if a task is specified with some information relevant to its execution such as release time, finish time, and so on, then a system should provide a guarantee of supporting such specifications when it executes a task. In general, the temporal predictability of an entire system relies on the temporal predictability of every subsystem, including H/W platform, communication network, operating system, etc. However, in the following discussion, we only consider the problem of how to improve the temporal predictability at the level of middleware, especially in Etherware, which is a software framework interposed between the operating system and application programs. In particular, we do not consider issues such as packet delay, jitter, and omission in the communication layer. Hence, the overall temporal predictability of Etherware-based systems is restricted by the features supported by the underlying platforms such as operating systems and communication networks.
A. Quality of Service (QoS) of Message Delivery
For the temporal predictability of Etherware, we first need to develop a mechanism that can be used to specify execution-relevant information. In this paper, we call such a collection of information the Quality of Service (QoS). The first design decision is then: Where does this QoS specification need to be embedded so that it can be used by the Etherware Scheduler? Notice that Etherware is an event-driven system, which means that a component is executed by a Dispatcher only when it receives a Message. Hence, the Message class object is the right place to specify QoS. Now, the second design decision is: What information is to be specified in the QoS? A particular choice of the set of information used in defining the QoS is shown in Fig. 3. Once the QoS of a Message is specified by a component when the Message is created, the Etherware Scheduler can utilize the QoS when it makes a scheduling decision for Message delivery.
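As a concrete illustration, the QoS fields might be embedded in a Message along the following lines. This is a minimal sketch: the field names (`releaseTimeMs`, `deadlineMs`, `criticality`) and class layouts are assumptions for illustration, not Etherware's actual API.

```java
// Sketch of QoS information carried by a Message (field names are
// assumptions, not Etherware's actual classes).
final class QoS {
    final long releaseTimeMs;  // earliest time the Message may be delivered
    final long deadlineMs;     // absolute deadline for delivery
    final int criticality;     // higher value = more critical Message

    QoS(long releaseTimeMs, long deadlineMs, int criticality) {
        this.releaseTimeMs = releaseTimeMs;
        this.deadlineMs = deadlineMs;
        this.criticality = criticality;
    }
}

// A Message addressed by Profile (semantic name), carrying its QoS so the
// Scheduler can use it when making a delivery decision.
final class Message {
    final String recipientProfile;  // e.g. "controller of car 1"
    final Object payload;
    final QoS qos;

    Message(String recipientProfile, Object payload, QoS qos) {
        this.recipientProfile = recipientProfile;
        this.payload = payload;
        this.qos = qos;
    }
}
```

Because the QoS object travels inside the Message itself, the Scheduler can inspect it at delivery time without any out-of-band lookup.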
B. Priority-based and Concurrent Scheduling
In terms of temporal predictability, there are two main issues with the prior existing Etherware [6] that need to be addressed. The first is to support a true notion of ordering among Messages when they are scheduled for delivery, rather than simply scheduling them in first-in first-out (FIFO) order within a Dispatcher. For temporal correctness of the overall system's behavior, as well as to give priority to properties such as stability over other second order objectives, it is necessary to order Messages based on extra information specified within each Message when making scheduling decisions for delivery. In Section IV-A, we have already described how such information can be specified in a Message. The application designer then has to confront the question of how to order Messages based on the QoS specifications, i.e., what real-time scheduling policy\(^3\) is to be used. The particular choice of a real-time scheduling policy depends critically on the application that is being developed over the Etherware platform. Hence, it is desirable to design an Etherware scheduling mechanism that is independent of any specific real-time scheduling policy. That is, we separate mechanism from policy, and design Etherware specifically to provide mechanisms for real-time control, leaving the schedulability analysis to the application designer.
A second feature to support in Etherware is concurrent Message delivery. Concurrency is in fact an essential feature of any real-time scheduler, since it allows preemption so that an important Message (with respect to some notion of ordering) can be delivered before other, less important Messages.
To address these issues, a real-time scheduling mechanism, called a hierarchical scheduling mechanism, is proposed as shown in Fig. 4. As shown in the figure, concurrency is supported through a module, called Dispatching Module, consisting of multiple Dispatchers with each of them executing independently to deliver Messages. Preemption between Dispatchers occurs based on the priority assigned to each of
---
\(^2\)We use the terms ‘real-time’ and ‘temporally predictable’ interchangeably.
\(^3\)In general, a real-time scheduling policy is a rule governing how to order the executions of multiple tasks that are running concurrently.
the Dispatchers, provided the underlying platform of Etherware supports priority-based thread\(^4\) scheduling. Notice that most real-time operating systems do indeed support priority-based scheduling. Moreover, the job queue in each of the Dispatchers is also modified to become a prioritized job queue, so that jobs in each queue are automatically ordered based on an attribute of a job. An example of such an attribute is the absolute deadline of a Message. Thus, Messages within a Dispatching Module can be ordered by the priority of a Dispatcher and the attribute of a job. To determine the right position for a Message within a Dispatching Module, the Etherware Scheduler uses an object, called a Job Placement Rule (JPR), that consists of the ID of a Dispatcher and the value of the attribute used within that Dispatcher's job queue.

As mentioned above, it is the scheduling policy that decides what QoS specification of a Message to use, and how to use it, to determine a corresponding JPR. To make the scheduling mechanism independent of a specific scheduling policy, an interface is defined to communicate between the Scheduler and a module, called JPR Implementation, which is an implementation of a scheduling policy. Thus a scheduling policy can be chosen and its implementation can be provided to the Etherware’s overall scheduling process as a separate module. We note that there are many scheduling policies that can be supported via the hierarchical scheduling mechanism. For example, the rate-monotonic (RM) static scheduling policy, or a non-preemptive version of the earliest-deadline-first (EDF), or the combination of these two can all be supported. Algorithm 1 is an example of such scheduling policy implemented as a JPR implementation. In addition to the flexibility with respect to the choice of a scheduling policy, the proposed real-time scheduling mechanism also supports flexibility with respect to the configuration of the Dispatching Module. In fact, the number of Dispatchers and the priority for each Dispatcher\(^5\) can be specified as a separate rule, called Thread Scheduling Rule (TSR).
---
\(^4\) A thread of execution (or thread in short) is the smallest unit of processing that can be scheduled by an operating system [9].
\(^5\) The specific priority set is given by the underlying software platform.
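The deadline-ordered job queue inside a single Dispatcher, and the JPR that places a job into it, can be sketched with a standard priority queue. The `Job` and `JobPlacementRule` names mirror the text, but the fields and the earliest-deadline-first comparator below are illustrative assumptions, not Etherware's implementation.

```java
import java.util.concurrent.PriorityBlockingQueue;

// A job wraps a Message delivery; the ordering attribute used here is the
// absolute deadline (one example of "an attribute of a job").
final class Job implements Comparable<Job> {
    final String messageId;
    final long deadlineMs;  // attribute used for in-queue ordering

    Job(String messageId, long deadlineMs) {
        this.messageId = messageId;
        this.deadlineMs = deadlineMs;
    }

    @Override public int compareTo(Job other) {
        // earliest absolute deadline first
        return Long.compare(this.deadlineMs, other.deadlineMs);
    }
}

// A Job Placement Rule: which Dispatcher's queue to use, and the attribute
// value that orders the job inside that queue.
final class JobPlacementRule {
    final int dispatcherId;
    final long attribute;

    JobPlacementRule(int dispatcherId, long attribute) {
        this.dispatcherId = dispatcherId;
        this.attribute = attribute;
    }
}
```

With one such `PriorityBlockingQueue<Job>` per Dispatcher, the Scheduler applies a JPR by offering the job to the queue of `dispatcherId`; the queue then maintains the deadline order within that Dispatcher, while preemption between Dispatchers is handled by their thread priorities.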
V. DISCUSSION ON IMPLEMENTATION
While Etherware was originally developed using the Java programming language [10], we implement the proposed Etherware real-time mechanisms in the Sun Java Real-Time System (Sun Java RTS) [11]. Java provides several important advantages over other programming languages, especially when developing a software framework for distributed system applications such as Etherware. First, it is a well-designed Object-Oriented Programming (OOP) language with many useful built-in class packages for networking, data structures, and multithreading. Second, it provides a garbage collection mechanism for automatic memory management during the execution of a program. Third, it supports platform-independent application development. Thus it is easy to write a distributed software program in Java.
However, Java was originally designed and optimized for performance in terms of overall throughput rather than for temporal predictability. Specifically, the dynamic loading, linking, and initialization of classes or interfaces, just-in-time (JIT) compilation, and the garbage collection of the Java Virtual Machine (JVM) can cause unpredictable delays in program execution. As stated in Section IV, the temporal predictability of the underlying platform is necessary to provide real-time performance of the overall Etherware-based networked control system. Therefore, to implement the proposed real-time mechanisms, we use Sun Java RTS [11], an implementation of the Real-Time Specification for Java (RTSJ) [12] that has been developed for better predictability of Java programs. One notable enhancement of the RTSJ compared to the standard JVM specification is that it supports hard real-time thread scheduling via a fixed-priority scheduling mechanism. Hence, unlike in a standard JVM, in Sun Java RTS a control task can be configured to run with a priority higher than those of other tasks, such as the garbage collector, so that its execution is not affected at runtime.
VI. NETWORKED INVERTED PENDULUM CONTROL SYSTEM
We demonstrate the flexibility along with the temporal predictability of the enhanced Etherware on an unstable system, the inverted pendulum control system shown in Fig. 5, when it is controlled over a network.
A. Inverted Pendulum Control System
An inverted pendulum system is chosen to illustrate the real-time performance, as well as the flexibility in application development, made possible by Etherware: due to its open-loop instability, it is representative of control systems that require strict predictability with respect to their sensing and control actions. We first describe the configuration of the system.
1) System Configuration: Fig. 6 shows the schematic of the inverted pendulum control system. As shown in the figure, the inverted pendulum has two joints, with a link attached to each joint. Joint 1, at the base of the system, is actuated by an attached DC motor, while joint 2 is passive. Thus, the DC motor has to be controlled in an appropriate manner to keep link 2 upright, i.e., to regulate \( \theta \) close to zero in our selected coordinate frame. A DSP board is used to read encoder values from the two joints upon request by a controller. It is also used to apply a PWM signal to the DC motor based on the control value delivered from a controller.
To control the inverted pendulum, a controller is developed as a component running on Etherware, which in turn runs on a separate PC that has a serial connection to the DSP board. In our implementation, the controller component is activated every 15 ms by Etherware. In Etherware, a component can be periodically activated when it receives a periodic time-driven message, called a Tick, from the Notifier service. Hence, the Tick for the controller is set to a 15 ms period. At each activation, the controller first sends a request to the DSP board to read the angles of the two joints. Once it receives the data, it computes a control output value and sends it back to the DSP board. The DSP board then applies the control action right after it receives the control command from the controller.
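A periodic Tick of this kind can be approximated on a standard JVM with a scheduled executor. This is a sketch under the assumption that plain `ScheduledExecutorService` timing is adequate for illustration; Etherware's actual Notifier, combined with Sun Java RTS scheduling, provides stronger timing guarantees.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal Notifier-style periodic Tick source (illustrative, not Etherware's API).
final class TickSource {
    private final ScheduledExecutorService exec =
            Executors.newSingleThreadScheduledExecutor();

    // Fire onTick immediately and then every periodMs milliseconds.
    void start(Runnable onTick, long periodMs) {
        exec.scheduleAtFixedRate(onTick, 0, periodMs, TimeUnit.MILLISECONDS);
    }

    void stop() {
        exec.shutdownNow();
    }

    // Demo: run a tick source until n ticks have fired (or a timeout elapses)
    // and report how many ticks were observed.
    static long observeTicks(long periodMs, int n) {
        TickSource ticks = new TickSource();
        CountDownLatch latch = new CountDownLatch(n);
        ticks.start(latch::countDown, periodMs);
        try {
            latch.await(2, TimeUnit.SECONDS);
        } catch (InterruptedException ignored) {
            Thread.currentThread().interrupt();
        }
        ticks.stop();
        return n - latch.getCount();
    }
}
```

In the experiment described above, the controller component plays the role of the `onTick` callback with `periodMs = 15`, performing one sense-compute-actuate cycle per activation.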
Table II shows the TSR specification that is used to configure Etherware’s real-time scheduler. Dispatcher 0 is the default Dispatcher for Etherware’s operations that are not necessary to be temporally predictable, while the others are configured for real-time execution. The range of the priority levels is based on the priorities provided by the underlying Sun JavaRTS platform. The higher the number is, the higher the priority. Thus, a task executed by Dispatcher 1 has the highest priority and can preempt any other tasks executed by the other Dispatchers. In terms of the real-time scheduling policy that is used in this experiment, we use the JPR implementation as shown in Algorithm 1.
2) Controller Design: A rotating type inverted pendulum system can generally be modeled as a dynamic system in the following form [13].
\[
M\ddot{q} + C\dot{q} + G = \tau \tag{1}
\]
\[
M = \begin{bmatrix}
J_1 + m(l_1^2 + l_2^2 \sin^2 \theta_2) & ml_1 l_2 \cos \theta_2 \\
ml_1 l_2 \cos \theta_2 & J_2 + m l_2^2
\end{bmatrix}
\]
\[
C = \begin{bmatrix}
ml_2^2 \dot{\theta}_2 \sin \theta_2 \cos \theta_2 & ml_2^2 \dot{\theta}_1 \sin \theta_2 \cos \theta_2 - ml_1 l_2 \dot{\theta}_2 \sin \theta_2 \\
-ml_2^2 \dot{\theta}_1 \sin \theta_2 \cos \theta_2 & 0
\end{bmatrix}
\]
\[
G = \begin{bmatrix}
0 \\
-ml_2 g \sin \theta_2
\end{bmatrix}
\]
where \( q = [\theta_1, \theta_2]^T \), \( \tau = [u, 0]^T \), \( u \) is a control input, \( J_i \) is the moment of inertia of link \( i \) about its center of mass. The values for the parameters of the inverted pendulum are \( m = 0.2(kg) \), \( l_1 = 8(cm) \), \( l_2 = 20(cm) \), \( J_1 \approx 0.0064(kg \cdot m^2) \), and \( J_2 \approx 0.0002(kg \cdot m^2) \).
Let \( x = [x_1, x_2, x_3, x_4]^T = [\theta_1, \dot{\theta}_1, \theta_2, \dot{\theta}_2]^T \) be the state vector. Then the dynamics of the inverted pendulum in (1) can be represented by a nonlinear differential equation in the state space form of \( \dot{x} = f(x, u) \). Linearizing the nonlinear equations about the equilibrium point \( x_0 := [\theta_1^0, 0, 0, 0] \) for some \( \theta_1^0 \in [0, 2\pi) \) that is a set position for \( \theta_1 \), gives us the following linear differential equation,
\[
\dot{x} = Ax + Bu, \tag{2}
\]
where
\[
A = \left. \frac{\partial f}{\partial x} \right|_{x=x_0, u=u_0} \quad \text{and} \quad B = \left. \frac{\partial f}{\partial u} \right|_{x=x_0, u=u_0}.
\]
Notice that \( u_0 = 0 \) at the equilibrium state \( x_0 \), and that \( A \in \mathbb{R}^{4 \times 4} \) and \( B \in \mathbb{R}^{4 \times 1} \). To stabilize the linearized system (2), the control input \( u \) is designed as a full state feedback controller of the form \( u = -Kx \). Here, the gain \( K \) is determined through the pole placement technique.
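Once \( K \) has been computed offline, the control law evaluated at each activation is a single dot product. The sketch below shows this computation; the gain values in the usage example are illustrative placeholders, not the gain actually used in the experiments.

```java
// Full state feedback u = -K x for the linearized system (2).
final class StateFeedback {
    // K and x are both length-4 vectors: x = [theta1, dtheta1, theta2, dtheta2].
    static double control(double[] K, double[] x) {
        double u = 0.0;
        for (int i = 0; i < K.length; i++) {
            u -= K[i] * x[i];  // u = -(K1*x1 + K2*x2 + K3*x3 + K4*x4)
        }
        return u;
    }
}
```

At each 15 ms activation, the controller would evaluate `control(K, x)` on the latest measured state and send the result to the DSP board as the motor command.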
B. Periodic Control Under Computational Stress
To verify that the enhanced Etherware indeed supports temporal predictability, we control an inverted pendulum under a stress condition on the computational system. Fig. 7
illustrates how the system is configured in this experiment. As shown in the figure, we run an extra periodic task, called the stressing task component, along with the periodic control component. In our experiment, the stressing task is activated every 5 seconds, and its computation takes about 1 second to finish at each activation. Furthermore, the stressing task is activated upon receiving a Tick whose QoS has low criticality, while the Tick for the controller has high criticality in its QoS. Thus the system is configured so that the controller executes at a higher priority than the stressing task. However, it is important to notice that, even with this configuration, if the executions of the two periodic tasks are not scheduled appropriately by Etherware, then the inverted pendulum cannot be controlled successfully, since the 15 ms period of the controller is very short compared to the 1 second average execution time of the stressing task.
Fig. 7. System configuration for the stress test where C and S represent the controller component and stressing task component, respectively.
Fig. 8 shows the serial communication signal between the PC and the DSP board that is captured while the inverted pendulum is being controlled by the controller under the computational stress condition. In each cycle of sensing and control action, the first signal from the PC is sent by the controller to request joint angles, and this request is immediately followed by a response from the DSP board to return the measured joint angles to the controller. Then the controller sends a control value that is the second signal from the PC to the DSP board. As shown in the figure, this sensing and control action cycle is indeed periodic, which implies that the execution of the controller is not affected at all by the execution of the stressing task in the PC, which in turn demonstrates that the controller is successfully scheduled by Etherware to execute at each of its execution periods regardless of the execution of the stressing task. The measured joint angles are plotted in Fig. 9. As shown in the result, the inverted pendulum is successfully stabilized at its upright position, \( \theta_2 = 0 \), even under the stressing condition. This shows that real-time temporal predictability is indeed achieved by Etherware.
Fig. 8. Periodic sensing and control action over RS-232C communication.
Fig. 9. Periodic control of an inverted pendulum under stress.
C. Runtime System Management
Now we show how Etherware makes possible the ready development of rather sophisticated application functionalities. Specifically, we show that it is feasible to perform runtime migration, which allows real-time reconfiguration of a control system. Such a functionality can be very useful, for example, in enhancing the reliability of a control system when some component fails during operation. This is but one example of the higher level functionalities which Etherware makes readily feasible. The networked inverted pendulum control system is implemented as shown in Fig. 10 to demonstrate the suitability of the enhanced Etherware as a software platform for networked control systems. More specifically, we demonstrate Etherware's capabilities to support runtime reconfiguration of a control system, such as controller upgrade and controller migration, which typically require both flexible and temporally predictable execution behavior of the underlying execution platform. One notable difference from the system configuration in Section VI-B is that a component, called DSPProxy, runs at the computational node that has a serial connection to the inverted pendulum system, mediating the interaction between the DSP board and a controller that typically runs at a remote computing node. In fact, a component such as DSPProxy is necessary to allow the controller to run at a remote computer.
1) Controller Upgrade: In this experiment, the goal is to upgrade a controller while the inverted pendulum is being controlled. More precisely, while the inverted pendulum is being controlled by a networked controller at computing node 2, another component, called Requester, requests the Etherware at node 2 to upgrade the running controller component. Upon receiving such a request, the Etherware at node 2 first terminates the running controller and then initializes a new controller component to continue to control the inverted pendulum over the network. The inverted pendulum is an inherently open-loop unstable system, and it is controlled with a 15 ms period. Hence, if Etherware cannot complete this upgrade procedure in a timely manner, then link 2 of the inverted pendulum cannot be kept at its upright position during the controller transition.
Fig. 11 shows the results of the experiment. In this experiment, a Requester component initiates the controller upgrade procedure at around 30 seconds of elapsed time by sending a request message. Moreover, the request message is specified with a QoS that has high criticality, so that the controller can be upgraded at the highest priority level by Etherware. As shown in the figure, the controller is indeed replaced at around 30 seconds, and the inverted pendulum is successfully controlled around its upright position during the controller upgrade. Notice that, in this experiment, to clearly visualize the effect of the runtime controller upgrade, the control gains of the controller before and after the upgrade are intentionally tuned to yield different control performance. As we can see in Fig. 11, the control performance of the new controller is better than that of the replaced one.
2) Controller Migration: In this next experiment, the goal is to move a controller from one computing node to another at runtime. Such a capability can be important in optimizing the behavior of control systems; for example, if network congestion causes excessive delays, one may want to change the node where the control law is computed. As in Section VI-C1, a Requester component sends a request message to the Etherware at node 2, where a controller is running to control the inverted pendulum. However, in this experiment, the request message asks the Etherware to migrate the controller from node 2 to node 1. To complete this migration procedure, the two Etherware processes at nodes 1 and 2 have to interact with each other in (i) initializing a new controller at node 1, (ii) terminating the currently running controller at node 2, and (iii) migrating the runtime state of the terminated controller from node 2 to the new controller at node 1. Thus, migration is a more complicated procedure than upgrade, and hence it is more difficult to preserve the stability of the inverted pendulum while a controller is being migrated.
Fig. 12 shows the results of this experiment. In this experiment, a migration request message is sent by a Requester component at around 40 seconds elapsed time. At the moment when the migration request is received by Etherware, the controller running at node 2 is automatically terminated, migrated from node 2 to node 1, and then restarted at node 1. As shown in the figure, there are no noticeable changes in the motion of the inverted pendulum due to the timely controller migration. Thus the stability of the system is well preserved during the migration of a control component through Etherware, which in turn demonstrates that Etherware indeed provides both temporal predictability and flexibility in complex application development.
VII. CONCLUSION
In this paper, we have demonstrated the importance of a well-designed software framework for the development of networked control applications. We have enhanced Etherware [6] so that it can support the temporally predictable execution behavior that is necessary for safety-critical applications. A notion of Quality of Service (QoS) of Message delivery is introduced, and the scheduling mechanisms of Etherware are enhanced in such a way that a Message with a QoS specification is delivered with the priority determined by its QoS.
We have also implemented a networked inverted pendulum control system to experimentally verify the real-time performance as well as the application development flexibility of the enhanced Etherware. Through experiments including the runtime controller upgrade and migration, which necessarily require both flexibility and temporal predictability of the software platform, we have demonstrated that the enhanced Etherware indeed exhibits satisfactory performance in both respects.
Fig. 12. Joint angles of the inverted pendulum during a runtime controller migration.
REFERENCES
How validation can help in testing business processes orchestrating web services
Damian Grela, Krzysztof Sapiecha, Joanna Strug
Department of Computer Science, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
Abstract – Validation and testing are important in developing correct and fault-free SOA-based systems. BPEL is a high level language that makes it possible to implement business processes as an orchestration of web services. In general, testing requires many more test scenarios than validation. However, in the case of BPEL processes, which have a very simple and well structured implementation, test scenarios limited to validation may also be efficient. The paper describes an experiment that aims at answering the question of whether or not validation test scenarios are also adequate for testing an implementation of BPEL processes. The experiment employs a Software Fault Injector for BPEL Processes that is able to inject faults while the test scenarios are running. The results of the experiment seem very promising. Hence, it seems that validation tests might give strong support for testing.
1 Introduction
Recently, SOA (Service Oriented Architecture) [1] has become the most promising architecture for IT systems. It offers a way of composing systems from loosely coupled and interoperable services. The services are independent business functions made accessible over a network by remote suppliers. A developer of a SOA-based system should only select the most appropriate services and coordinate them into business processes that cover specification requirements for the system.
BPEL (Business Process Execution Language) [2] is a high level language that makes it possible to implement business processes as an orchestration of web services. The
orchestration consists in the successive invocation of the web services by a special element of the process, called its coordinator. It leads to a very simple and structured SOA in which only the coordinator and the communication links between the coordinator and the services need to be tested. The correctness of the services may be assumed, as they are provided as ready-to-use components and should be tested by their developers before being shared.
Both, validation and testing may be performed with the help of test scenarios. In [3, 4] a method of generation of test scenarios for validation of a BPEL process was given. Test scenarios obtained by means of the method cover all functional requirements for the process and provide high validation accuracy [4]. This paper presents a case study that aims at answering a question to what extent such test scenarios are adequate for testing an implementation of the process. To this end an experiment employing Software Fault Injector for BPEL Processes (SFIBP) was carried out and fault coverage for the test scenarios was calculated.
The paper is organised as follows. In Section 2 related work is briefly described. In Section 3 the problem is formulated. Section 4 defines fault coverage for the test scenarios. Section 5 contains a description of a case study. The paper ends with conclusions.
2 Related work
The problem of testing SOA-based systems is not new, but most researchers have focused on test generation [5, 6, 7, 8, 9, 10, 11, 12]. Their works fall loosely into two categories: developing efficient algorithms for the selection of adequate tests [6, 7, 8, 9] and automation of the selection process [10, 11, 12]. Y. Yuan and Y. Yan [6, 7] proposed graph-based approaches to handle concurrency activities of BPEL processes, in addition to basic and structured activities. Their approach was extended, combined with other techniques, and implemented by several other researchers [8, 9]. M. Palomo-Duarte, A. Garcia-Dominguez, and I. Medina-Bulo based their approaches on traditional white-box testing methods [10, 11, 12] and used formal methods and hybrid approaches along with ActiveBPEL [13] and the BPELUnit [14] test library for generating tests. However, these works do not include any studies concerning the adequacy of the generated tests for both validation and testing of BPEL processes.
The adequacy of tests can be measured with regard to some predefined metrics, or by injecting faults and observing whether they are detected or not [15]. Fault injection is a popular technique that has already been applied in the context of SOA-based systems [16, 17, 18, 19], and it has often been used for test generation [15]. PUPPET (Pick UP Performance Evaluation Test-bed) [16] is a tool for the automatic generation of test-beds to empirically evaluate the QoS [17] features of a web service under development. GENESIS [18] generates executable web services from a description provided by the user and provides an environment in which the services can be tested prior to deployment in a production system. Another fault injection tool, WSInject [19], is a
script-driven fault injector that is able to inject interface and communication faults. WSInject works at the SOAP level and intercepts SOAP messages.
All of these approaches concern web-services or communication between a BPEL process and web-services (i.e., a fault is injected when a Web service is invoked). In the case of business processes various types of faults (e.g., replacement of input values) may appear. Therefore, SFIBP should be easily configurable to inject a rich variety of faults appearing in the very specific operational environment.
## 3 Problem statement
Validation aims to determine whether a software system satisfies requirements specification or not [20]. Requirements specification defines, in a formal way, what the system is expected to do. Test scenarios derived from such specification may be successfully used for the validation. In [3] an effective method for generation of test scenarios for validation of BPEL processes against specification requirements defined in SCR [21] was given. However, specification requirements should not contain anything that is not of interest for a user. Thus, test scenarios derived from the specification can check all specified requirements, but not necessarily implementation details that are introduced in further stages of development of the system. Therefore, the system should be tested to detect implementation errors. As generation of tests is usually time consuming, it is of high importance to find out to what extent the validation test scenarios are useful for the testing. To this end, an experiment might be performed and the implementation error coverage for the test scenarios could be calculated.
In general, the testing requires much more test scenarios than the validation. However, in the case of BPEL processes, which have very simple and well structured implementation, test scenarios limited to the validation may also be efficient. To measure the coverage of implementation errors by the validation test set, Software Fault Injector [22] for BPEL Processes will be applied. Implementation errors of BPEL process will be simulated by injecting faults when the test is running.
## 4 Faults in the SOA-based systems
In SOA-based systems faults may arise for two reasons:
1. incorrect interaction between web-services, and
2. incorrect internal logic of the system components (web-services and/or coordinator).
Interaction faults affect communication between different web-services or between the coordinator and the web-services. Internal logic errors are introduced by human developers or production facilities when components of the system are implemented. Eight types of interaction faults and four types of internal logic errors were identified [23]. Three of them concern systems orchestrating web services. These are the following:
1. Misbehaving execution flow. The fault occurs when a programmer invokes an improper web-service\(^1\) (i.e. one different from the specified one). Fig. 1 gives an example of an improper web-service invocation error (a) and a fault-free version of the code (b).

Fig. 1. Improper (a) and fault-free (b) web-service invocation.
2. Incorrect response. The fault is caused by incorrect processing, within a coordinator, of a correct response of a web-service (other causes related to incorrect internal logic of a web-service, as defined in [23], are not considered due to the assumption of correctness of web-services). Incorrect processing means that:
- a response from a wrong output port is used (Fig. 2),
- a response is assigned to a wrong variable (Fig. 3), or
- a response is not assigned at all (Fig. 4).
\(^1\)The invoked web-service should exist and the invocation should be correct with regard to the specification of the web-service (otherwise such error will be reported by the compiler).
3. **Parameter incompatibility**. It occurs when a web-service receives, as input data, incorrect arguments or arguments of incorrect types. The following four errors introduced into the implementation of a coordinator cause such a fault:
- a different operation of a web-service is invoked (Fig. 5). The operation must still belong to the web-service (otherwise such an error would be reported by the compiler),
- a wrong input port is used (Fig. 6). The port used must still be a valid port of the web-service (otherwise such an error would be reported by the compiler),
- a wrong output port is used (Fig. 6), or
- a wrong value is assigned to an input port (Fig. 7).
Fig. 5. Different (a) and proper (b) operations of a web-service are invoked.
Fig. 6. Wrong (a) and correct (b) input and output ports are used.
Fig. 7. Wrong (a) and correct (b) values are assigned to an input port.
Effects of the faults are visible because they make the external behaviour of the coordinator differ from the expected one. The cause-effect table is shown in Fig. 8.
Fig. 8. Implementation errors, interaction and development faults and their effects.
All other faults defined in [23] are not relevant for this work. These faults are either related to a physical layer or caused by providers of web-services (incorrectness of web-services or interaction between web-services).
## 5 Case study
The goal of the case study is to evaluate the adequacy of validation test scenarios for testing BPEL processes. The test scenarios are evaluated based on their fault coverage calculated with respect to the faults generated by the SFIBP. The SFIBP generates the following three types of faults:
1. replacing web-service output parameters (OP),
2. replacing values of web-service input parameters (IP),
3. replacing requested web-service with another one (WS).
The faults generated by SFIBP give the same observable effects as those described in Section 4, but their injection does not require the implementation of a coordinator to be changed.
The fault coverage for a set of test scenarios (FC) is expressed as the percentage of detected faults among all injected faults:
\[
FC = \frac{F_D}{F_I} \cdot 100\%,
\]
where:
- \( F_D \) is the number of detected faults,
- \( F_I \) is the number of injected faults.
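As a minimal numeric illustration of the measure, the function below computes FC; the function name is ours, and the sample figures are the TicketRS IP column of Table 3.

```python
def fault_coverage(detected: int, injected: int) -> float:
    """FC: percentage of detected faults among all injected faults."""
    if injected == 0:
        raise ValueError("no faults were injected")
    return detected / injected * 100.0

# TicketRS, IP faults (Table 3): 295 detected out of 304 injected -> ~97%
fc = fault_coverage(295, 304)
```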
As the faults are artificially generated and injected, their total number is known. However, it is not possible to determine the number and the types of all errors that might be the real source of the faults. Nevertheless, this is not a shortcoming of the approach, because only the coverage is of real interest.
The subsequent subsections describe briefly SFIBP that was used in the experiment to generate and inject faults (Section 5.1), an example system and test scenarios generated for the system (Section 5.2), and the experiment and its results (Section 5.3).
### 5.1 Software Fault Injector for the BPEL Processes
SFIBP is an execution-based injector [15], which is able to inject faults into the BPEL processes when test scenarios are running.
The SFIBP has been implemented as a special local service that is invoked instead of the proper web-service. Such an approach helps reduce costs of the experiment, as the faults are injected without changing the implementation of a coordinator. A configuration file produced by the SFIBP defines three parameters of the proper web-services:
- identifiers of all methods provided by the web-services (ID),
- names of the methods,
- the number and names of parameters of the methods.
It also includes predefined values of input and output parameters, values of alternative web-services IDs that are used to generate faults and the probability that a fault will be injected. Information about the injected faults is stored in a log file.
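A hypothetical rendering of such a configuration entry is sketched below; the actual SFIBP file format and field names are not given in the text, so everything here is illustrative only.

```python
# Illustrative stand-in for one SFIBP configuration entry; the real file
# format and field names are assumptions, not the tool's actual schema.
sfibp_config = {
    "TicketRS": {
        # method IDs mapped to names and parameter lists
        "methods": {"checkAvailability": {"params": ["Date"]}},
        "output_values": ["Yes", "No"],   # predefined replacement outputs
        "input_values": [],               # predefined replacement inputs
        "alternative_services": ["HotelBS", "PlaneTR", "TrainTR"],
        "fault_probability": 0.33,        # chance a fault is injected
    },
}
```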
### 5.2 Football Reservation System
Football Reservation System (FRS) is a simple system allowing its users to book tickets for football games, hotels to stay during the games and plane or train tickets to arrive at the games.
The system was implemented as a BPEL process orchestrating five web-services. Each of the services is accessible on a different server and the whole process of reservation is coordinated through a central coordinator (Fig. 9).
Short descriptions of the web-services and their input and output parameters are given in Table 1. Types of the parameters are placed in brackets next to the parameter names.
A set of test scenarios generated for the system consists of 4 test scenarios having between two and five input/output events. The total number of the events is 16. The test scenarios were generated by means of the checking path method presented in [3]. Their usage provided high validation accuracy for the system.
Table 1
<table>
<thead>
<tr>
<th>web-service ID</th>
<th>description</th>
<th>Parameters</th>
</tr>
</thead>
<tbody>
<tr>
<td>Client</td>
<td>retrieves data from the client and sends information about order</td>
<td>input: Date [String]</td>
</tr>
<tr>
<td>TicketRS</td>
<td>checks an availability of a football ticket at the given date</td>
<td>input: Date [String]</td>
</tr>
<tr>
<td>HotelBS</td>
<td>checks an availability of a hotel room at the given date</td>
<td>input: Date [String]</td>
</tr>
<tr>
<td>TrainTR</td>
<td>checks an availability of a train at the given date</td>
<td>input: Date [String]</td>
</tr>
<tr>
<td>PlaneTR</td>
<td>checks an availability of a plane at the given date</td>
<td>input: Date [String]</td>
</tr>
</tbody>
</table>
### 5.3 The experiment
The experiment consisted in:
1. implementing a fault free BPEL process for FRS and generating validation test scenarios,
2. configuring the SFIBP,
3. starting the SFIBP and running the BPEL process with the test scenarios,
4. comparing the outputs generated by the BPEL process with the expected ones given by test scenarios,
5. saving the results,
6. calculating the fault coverage.
Steps 3, 4 and 5 were repeated 1000 times. At each iteration randomly generated faults were injected into the BPEL process.
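Steps 3–5 can be sketched as a toy loop. This is only a sketch under strong simplifying assumptions: only output-parameter (OP) faults are injected, and the test scenario observes every service output, so each injected fault is detected. Output values are taken from Table 2.

```python
import random

EXPECTED = {"TicketRS": "Yes", "HotelBS": "OK",
            "TrainTR": "Success", "PlaneTR": "True"}
FAULTY = {"TicketRS": "No", "HotelBS": "No",
          "TrainTR": "Failure", "PlaneTR": "False"}

def run_experiment(iterations: int, p_fault: float, rng: random.Random):
    injected = detected = 0
    for _ in range(iterations):
        outputs, faults = {}, 0
        for svc, ok in EXPECTED.items():
            if rng.random() < p_fault:        # step 3: inject an OP fault
                outputs[svc], faults = FAULTY[svc], faults + 1
            else:
                outputs[svc] = ok
        injected += faults
        if outputs != EXPECTED:               # step 4: compare with expected
            detected += faults                # all OP faults observable here
    return injected, detected                 # step 5: save the tallies

inj, det = run_experiment(1000, 0.33, random.Random(42))
```

In this toy model FC is trivially 100% whenever any fault is injected; the real experiment differs because IP and WS faults need not change an observed output.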
Table 2 shows the settings for all web-services of the FRS. The first row of the table shows the IDs of the web-services. The next two rows show the values of output and input parameters that are used to replace the proper ones when the faults are injected. IDs of web-services that are invoked instead of the proper ones are shown in the last row. The probability that a fault will occur was set to 33% for all faults.

Table 2
<table>
<thead>
<tr>
<th>Web-service</th>
<th>TicketRS</th>
<th>HotelBS</th>
<th>TrainTR</th>
<th>PlaneTR</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>output parameter</strong></td>
<td>“Yes”, “No”</td>
<td>“OK”, “No”</td>
<td>“Success”, “Failure”</td>
<td>“True”, “False”</td>
</tr>
<tr>
<td><strong>input parameter</strong></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>alternative web-services</strong></td>
<td>“HotelBS”, “PlaneTR”, “TrainTR”</td>
<td>“TicketRS”, “PlaneTR”, “TrainTR”</td>
<td>“TicketRS”, “HotelBS”, “PlaneTR”</td>
<td>“TicketRS”, “HotelBS”, “TrainTR”</td>
</tr>
</tbody>
</table>
The outputs generated by TicketRS, HotelBS, TrainTR and PlaneTR depend on an interval between a date of reservation and a date of football match. If the interval is equal or longer than it was assumed, then the respective web-service generates positive answer, otherwise the answer is negative. The intervals were set as follows: 15 days for TicketRS, 5 days for HotelBS, 1 day for TrainTR and 30 days for PlaneTR. These rules were introduced into the implementation of the web-services.
In the experiment the reservation date is an actual date (a day on which the process was invoked) and the date of the football match is the date that was specified by the user during the FRS invocation.
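The interval rules above can be written down directly; the threshold values come from the text, while the function name and the sample dates are illustrative.

```python
from datetime import date

# Minimal reservation intervals, in days, as set in the experiment
THRESHOLDS = {"TicketRS": 15, "HotelBS": 5, "TrainTR": 1, "PlaneTR": 30}

def available(service: str, reservation: date, match_day: date) -> bool:
    """Positive answer iff the interval meets the service's threshold."""
    return (match_day - reservation).days >= THRESHOLDS[service]
```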
During the experiment the SFIBP could generate various combinations of the three types of faults (Section 5) or not introduce any fault. This gives eight different configurations of faults for each of the web-services and about 4000 for the whole system.
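The configuration count follows from three independent fault types per service:

```python
from itertools import product

FAULT_TYPES = ("OP", "IP", "WS")
# Each type can independently be injected or not: 2^3 = 8 per service
per_service = list(product((False, True), repeat=len(FAULT_TYPES)))
# Four fault-injectable services: 8^4 = 4096, the "about 4000" of the text
total = len(per_service) ** 4
```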
At the end of the experiment its results were analyzed and the fault coverage for the test scenarios was calculated. Table 3 summarises the results. It reports, for each of the web-services, the total number of faults injected and detected. The fault numbers are grouped by fault type.
Table 3
<table>
<thead>
<tr>
<th rowspan="2">Faults</th>
<th colspan="3">TicketRS</th>
<th colspan="3">HotelBS</th>
<th colspan="3">TrainTR</th>
<th colspan="3">PlaneTR</th>
</tr>
<tr>
<th>IP</th><th>OP</th><th>WS</th>
<th>IP</th><th>OP</th><th>WS</th>
<th>IP</th><th>OP</th><th>WS</th>
<th>IP</th><th>OP</th><th>WS</th>
</tr>
</thead>
<tbody>
<tr>
<td>injected</td>
<td>304</td><td>212</td><td>348</td><td>144</td>
<td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td>detected</td>
<td>295</td><td>208</td><td>348</td><td>140</td>
<td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td>FC</td>
<td>97%</td><td>98%</td><td>100%</td><td>97%</td>
<td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
</tbody>
</table>
Due to the nature of the example, the majority of the injected faults are related to the first web-service (TicketRS) and the minority to the last one (PlaneTR). Almost all injected faults were detected by the test scenarios. The average fault coverage calculated from the results of the experiments was 98%.
## 6 Conclusions
The paper describes a statistical experiment carried out to evaluate the test scenarios generated for validation of BPEL processes in the context of testing the processes. Test generation is a time-consuming activity; thus the possibility of having one set of test scenarios providing accurate results for both validation and testing was worth investigating.
The experiment was performed on a small example orchestrating five web-services. For this system, the SFIBP was able to generate three types of faults giving in total 4096 different fault configurations. For more complex systems the number of different fault configurations may be much higher than for the FRS. That is why statistical rather than exhaustive testing was performed. It illustrates a general approach to the problem.
The experimental results seem very promising. The calculated fault coverage shows that almost all injected faults (98%) were detected by the test scenarios. The results confirmed the earlier assumption that in the case of BPEL processes validation test scenarios may be adequate also when they are used for testing. Hence, it seems that validation tests might give strong support for testing. However, the experiment was carried out on only one simple system and focused on faults that only simulate implementation errors. More experiments are needed in order to make the conclusions more general. This will be one of the main goals of our further research.
References
Web Service Composition Automation based on Timed Automata
Hu Jingjing\(^1\), Zhu Wei\(^1\), Zhao Xing\(^2\) and Zhu Dongfeng\(^3\)
\(^1\) School of Software, Beijing Institute of Technology, Beijing 100081 P. R. China
\(^2\) School of Mathematics, Capital Normal University, Beijing 100037 P. R. China
\(^3\) School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100081 P. R. China
Received: 29 Aug. 2013, Revised: 1 Dec. 2013, Accepted: 2 Dec. 2013
Published online: 1 Jul. 2014
Abstract: Web service composition is a new direction in the research of service computing. To make composition practical, the key problem is to achieve an efficient and automatic composition process. We propose a web service composition model based on timed automata. Within this computing framework, we design the formal model and its construction algorithm, and provide a web service interface description language and a composition automation engine. To validate its performance, we used UPPAAL as the service composition simulator and realized the automated process from independent web services to composite ones. The experimental results verify the feasibility of automatic service composition and the effectiveness of the proposed configuration.
Keywords: Web service composition; Timed automata; Automation
1 Introduction
Web services are the most famous implementation of service-oriented architectures, allowing the construction and sharing of independent and autonomous software. Web service composition (WSC) consists in combining web services, developed by different organizations and offering diverse functional, behavioral and non-functional properties, to offer more complex services. The spread of services and of WSC increases the difficulty and the time needed for their application. The key requirement of WSC is therefore a solution that can perform composition more efficiently and automatically.
Several web service composition models have been put forward. On the one hand, WSC based on workflow is a commonly recognized approach [1]. This solution allows creating flows of composed activities, models the composed workflow with logic, and provides the abilities of service calling, data processing and exception checking. BPEL4WS is a commercial business process execution language designed for web services, and provides a method to describe a workflow framework [2]. It has become a standard in web service composition. However, it is a static portfolio requiring manual intervention, and it is difficult for it to provide a real-time combination with non-functional requirements [3,4]. On the other hand, WSC based on AI planning provides another way [5]. OWL-S is used to model non-functional properties, and every single web service is modeled as an action in AI planning, each of which contains an initial state, a target and some possible state-transition paths. Thus, WSC based on AI planning is a dynamic portfolio supporting complex behavioral and non-functional properties, but it lacks verification [6].
Although there are lots of methods on WSC, the approach to automatic composition is urgent because it can improve the efficiency and correctness of service composition, while it is also hard to actualize because it is difficult to perform standard operation for modeling web service and logical composition.
Timed automata provide a theory for modeling and verification of real-time systems [7]. A timed automaton is a finite automaton extended with a set of real-valued variables modeling clocks. Constraints on the clock variables are used to restrict the behavior of an automaton, and accepting conditions are used to enforce progress properties. Timed automata can describe real-time systems formally and provide high-efficiency property verification of real-time systems.
\(^*\) Corresponding author e-mail: hujingjing@bit.edu.cn
However, the state space explosion problem exists due to clock resets [8].
Timed automata can be combined with web service composition [9]. Research in this field has mainly focused on modeling and property verification, which is the most straightforward way to approach service composition [10]. However, these modes cannot perform the whole process from single web services to a service composition with logic flow; that is, they cannot achieve WSC directly.
We propose how to use timed automata to model composed web services and implement web service composition automation. We provide a formal model built on timed automata for web service composition, and construct an algorithm to build this model automatically. The model and algorithm are shown to be feasible and efficient.
The rest of the paper is organized as follows. In the next section, the timed automata model for WSC and its construction algorithm are proposed. Section 3 presents a web service interface language and the implementation of the automation engine. Section 4 describes the test cases and performance evaluation of the model and methods. Finally, a brief conclusion and acknowledgement are given.
2 WSC model based on Timed Automata
Definition 1 (Timed automata, TA [11]): A timed automata model is a tuple \( \langle N, l_0, E, I \rangle \) where \( N \) is a finite set of locations (or nodes), \( l_0 \in N \) is the initial location, \( E \subseteq N \times \beta(C) \times \Sigma \times 2^C \times N \) is the set of edges, and \( I : N \rightarrow \beta(C) \) assigns invariants to locations. We shall write \( l \xrightarrow{g,a,r} l' \) when \( \langle l, g, a, r, l' \rangle \in E \). \( C \) is a set of real-valued variables, or clocks, ranged over by \( x, y, z \), etc. \( \Sigma \) is a set of actions ranged over by \( a, b, c \), etc. An atomic clock constraint is a formula of the form \( x \sim n \) or \( x - y \sim n \), where \( x, y \in C \), \( \sim \in \{\le, <, =, \ge, >\} \) and \( n \in \mathbb{N} \). A clock constraint is a conjunction of atomic clock constraints, ranged over by guards \( g, D \), etc. \( \beta(C) \) is the set of clock constraints.
The theory is the outstanding work of Alur and Dill [12]. Many verification tools (like UPPAAL) are built on it [13].
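Definition 1 admits a minimal executable reading, sketched below under the simplifying assumption that each guard is a single atomic constraint \( x \sim n \); all names are ours.

```python
from dataclasses import dataclass

# Atomic clock constraint x ~ n, with ~ in {<=, <, ==, >=, >}
OPS = {"<=": lambda a, b: a <= b, "<": lambda a, b: a < b,
       "==": lambda a, b: a == b, ">=": lambda a, b: a >= b,
       ">": lambda a, b: a > b}

@dataclass(frozen=True)
class Edge:
    src: str           # l
    guard: tuple       # (clock, op, n), e.g. ("x", ">=", 2)
    action: str        # a
    resets: frozenset  # r: clocks set back to 0 on firing
    dst: str           # l'

@dataclass
class TimedAutomaton:
    locations: frozenset  # N
    initial: str          # l0 in N
    edges: tuple          # E
    invariants: dict      # I: location -> guard

    def can_fire(self, edge: Edge, clocks: dict) -> bool:
        """Check whether the edge's guard holds under a clock valuation."""
        clock, op, n = edge.guard
        return OPS[op](clocks[clock], n)

e = Edge("l0", ("x", ">=", 2), "invoke", frozenset({"x"}), "l1")
ta = TimedAutomaton(frozenset({"l0", "l1"}), "l0", (e,), {})
```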
Definition 2 (Timed Automata for Web Service, TAW): An atomic web service can be modeled as a timed automaton \( \langle N, l_0, E, I \rangle \) where \( N \) is a finite, non-empty set of states of the web service, \( l_0 \) is the initial state, \( E \) is the set of transition functions representing the evolution from one state to another, and \( I \) assigns the clock constraints to service calling. The clocks indicate the cost along the current migration routes. In particular, the TAW head model is the starting point of a TAC.
Definition 3 (Timed Automata for WSC, TAC): It is an integration of timed automata describing the whole composed web service, in which a TAW head model is added to start the TAC, the branch constraints are reduced and the global clock is reset.
2.1 Algorithm for web service composition
The algorithm generates a timed automaton model for each web service interface, and the models are synchronized through branches and end tags. The algorithm for constructing the TAC model for web service composition (A-TAC) is shown in Tab. 1.
The equivalent graph is a topology which connects web service interfaces by the equivalence relation. The equivalent tree is a loop-free data structure generated by breadth-first traversal of the equivalent graph.
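The breadth-first construction of the equivalent tree can be sketched as follows, assuming an adjacency-list representation of the equivalent graph (function and variable names are ours).

```python
from collections import deque

def equivalent_tree(graph: dict, root: str) -> dict:
    """Breadth-first traversal of the equivalent graph; returns a parent map
    describing a loop-free spanning tree (the 'equivalent tree')."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, ()):
            if nbr not in parent:   # skipping visited nodes removes loops
                parent[nbr] = node
                queue.append(nbr)
    return parent

# A three-interface equivalent graph containing the cycle A-B-C-A
g = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
tree = equivalent_tree(g, "A")
```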
2.2 TAW for single node in equivalence graph
The algorithm A-TAW to get TAW model for a single node in equivalence graph is shown as Tab 2.
For the parameters of a web service interface represented by the nodes of the equivalent tree, the value intervals can be divided into several parts, each of which corresponds to a jump from one node to several nodes respectively. This part of the TAW is called the fission graph.
In algorithm A-TAC, the global clock records the total cost. The minimal cost computed by the TAC model remains in the global clock, and the path realizing this clock value is the best solution for the WSC.
3 Service composition automation
The automation of WSC is implemented by the web service composition automation engine, whose framework is shown in Fig. 1. Internally, the engine is a pipeline whose final result is a TAC model.
3.1 Web service interface description language
There needs to be a simple and specific language to describe the web service interface in order to implement service composition automation. This section presents the web service interface description language (WSIL), denoted by a context-free grammar, which is shown in Tab. 3.
WSIL is a structured language able to describe web service interfaces. It contains the equivalence relations of parameters. Thus, WSIL provides the input needed by the WSC automation engine to build the equivalent graph. The engine is essentially a compiler whose grammar is that of WSIL itself and whose semantic analysis produces data structures such as the equivalent graph or equivalent tree for web service interfaces. The compiler is constructed with an integrated context-free grammar tool that wraps Lex and YACC [14].
3.2 Semantics parser of automation engine
The flow of analysis and parsing in the web service composition automation engine is shown in Fig. 2.
Table 3: Web service interface description language (the context-free grammar of WSIL).
© 2014 NSP
Natural Sciences Publishing Co.
The semantics parser gets semantic information by traversing the syntax tree of the web service interface language and produces an instance of the TAC model.
In the process of recursive traversal, keyword matching (on the fixed identifiers of class, interface, etc. in the grammar) is used to distinguish the information of parameters and web service interfaces, which is filled into independent data structures. The class diagram implementing this feature is shown in Fig. 3.
The semantics parser analyzes the syntax tree and gets all information needed for the NTA. The NTA is the model that can be read and verified by the TA tools. The algorithm to generate the NTA is given in Tab. 4.
UPPAAL is able to read the NTA model directly after it is generated. The whole process from atomic web services to a composed one is finished automatically.
4 Performance evaluation
The feasibility and effectiveness of the presented WSC model and its automation engine are evaluated in this section using UPPAAL.
UPPAAL version 4.0 was used, together with JRE 6.0. The running environment was an Intel 2.40 GHz CPU with 3.0 GB of RAM.
4.1 Test case design
We propose a template-based solution for test cases, which meets the validity and integrity requirements of comprehensive cases run through the model. Different instances of all kinds of TAC models can be generated by adjusting the template parameters, which also makes it convenient to implement automatic testing.
The parameters of the test case template are listed below according to the characteristics of the TAC model.
(1) The count of TAW
The count of TAW models determines the scale of a test case for the TAC model, and it is represented by ‘N’.
(2) The count of nodes in TAW
The count of nodes can reflect the number of parameters in a TAW model, which sets the scale of test cases indirectly. The count of nodes in a TAW is represented by ‘L’ and the count of parameters in the
```
Table 4: Algorithm A-NTA
01 Insert the zero interface to NTA model.
02 Generate the fission graph for each parameter of every web service interface.
03 Link neighbor parameters for each web service interface.
04 Insert a starting node for each tree graph representing a web service interface and set the launch condition.
05 Insert an ending node for each tree graph representing a web service interface and set the update event.
06 Set corresponding integer clock variables for each parameter as branch signals in global declaration.
07 Create the global clock as the container to store total cost in global declaration.
08 Create an instance for each TAW model in system configuration, and the instance is set to start with the system.
```
TAW is represented by ‘x’. Their relation can be denoted by
\[ L = \sum_{i=1}^{x} Sem(i) + x + 2 \] (1)
Here Sem(i) denotes the number of semantic partitions of the corresponding parameter i.
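As a quick sanity check, Eq. (1) can be evaluated directly. This is a minimal sketch; reading the ‘+ x + 2’ term as one extra node per parameter plus the starting and ending nodes is our assumption, based on algorithm A-NTA rather than stated explicitly in the text:

```python
def taw_node_count(sem):
    """Node count L of a TAW model per Eq. (1).

    sem -- list of semantic-partition counts Sem(i), one entry per
    parameter, so x = len(sem). The '+ x + 2' term is read here as one
    extra node per parameter plus the starting and ending nodes
    (an assumption based on algorithm A-NTA in Table 4).
    """
    x = len(sem)
    return sum(sem) + x + 2

# Two parameters with 3 and 2 semantic partitions: L = 5 + 2 + 2 = 9
print(taw_node_count([3, 2]))
```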
(3) **Association strength of TAC**
Given the topology structure of WSC, it is not feasible to set every connection relation of each TAC model one by one. On the one hand, the automated test would be difficult to design and implement; on the other hand, the key point of the feasibility and effectiveness tests is the overall complexity and association strength of the TAC. We therefore propose the association intensity index (Ave) to represent this property. For each Ave value there may be several corresponding topology graphs. Let ‘P’ be the count of parameters in a TAC model, \( CP(i) \) the number of occurrences of parameter ‘i’ in all TAW models of the TAC, and \( TA(i) \) the count of TAW models in which at least one parameter remains after parameter ‘i’ is removed. Then
\[ Ave = \frac{\sum_{i=1}^{P} CP(i)^2}{\sum_{i=1}^{P} TA(i)} \] (2)
The ‘Ave’ ranges from 0 to 1: the association strength is weaker when Ave is closer to 0 and stronger when it is closer to 1.
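Eq. (2) can be computed directly from the CP and TA counts. A minimal sketch follows; the function name and list encoding are ours, and no normalisation beyond the formula itself is applied:

```python
def association_strength(cp, ta):
    """Association intensity Ave per Eq. (2).

    cp -- CP(i): occurrences of parameter i in all TAW models of the TAC
    ta -- TA(i): count of TAW models with at least one parameter left
          after parameter i is removed
    Both lists have length P, the count of parameters in the TAC model.
    """
    return sum(c * c for c in cp) / sum(ta)

# Each of two parameters appears once; removing either leaves both of
# two TAW models non-empty: Ave = (1 + 1) / (2 + 2) = 0.5
print(association_strength([1, 1], [2, 2]))
```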
(4) **Association diversity of TAC**
The diversity of the parameter counts in a TAC and of the association strengths of different TAW models yields varied test cases. To complement the association strength, we define ‘E’ as the association diversity factor of the TAW models, where \( P(i) \) is the count of parameters in the i-th TAW model and \( Pa \) is the average number of parameters over all TAW models. Then
\[ E = \frac{\sum_{i=1}^{P} (P(i) - Pa)^2}{P * Pa^2} \] (3)
The ‘E’ ranges from 0 to 1: the association diversity is small when E is closer to 0 and large when E is closer to 1.
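Eq. (3) is the variance of the per-TAW parameter counts normalised by \( P \cdot Pa^2 \). A minimal sketch follows; taking the sum over the TAW models is our reading of the index in Eq. (3):

```python
def association_diversity(p_counts):
    """Association diversity E per Eq. (3).

    p_counts -- P(i): number of parameters in the i-th TAW model.
    Pa is the average parameter count; the result lies in [0, 1] for
    the cases discussed in the text.
    """
    n = len(p_counts)
    pa = sum(p_counts) / n
    return sum((p - pa) ** 2 for p in p_counts) / (n * pa ** 2)

# Identical TAW models give E = 0; [1, 3] gives (1 + 1) / (2 * 4) = 0.25
print(association_diversity([2, 2]), association_diversity([1, 3]))
```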
In conclusion, the scale of a test case is regulated by the count of TAW models, its complexity is set by the number of parameters in each TAW, and the association strength and diversity adjust the distribution of the topology structure.
4.2 **The result of tests**
The WSC model based on timed automata is capable of describing composite web services with inner logic and workflow. However, the state-space explosion problem restricts its feasibility as the number of clocks increases [15]. Zone automata are an alternative. The test results are shown as follows.
(1) **The simulator of zone automata**
In this test, the parameters \( N = 1, L = 2, Ave = 0.5, E = 0.25 \) were set to shield unrelated parameters. The results are shown in Fig.4. The state space still grows exponentially with zone automata, but much more slowly than with region automata.

The results show that zone automata partition the state space less finely than region automata while possessing the same ability to describe the TAC model. This illustrates that time zones perform better than the regional division of equivalence and relieve the state-space combinatorial explosion problem, so the zone-automata simulator was adopted.
(2) **Integer clock**
The parameters \( N = 1, L = 2, Ave = 0.5, E = 0.25 \) were also set to measure the performance of different clocks. The calculation times are shown in Fig.5.

Because the advantage of integer clocks over real-valued clocks cannot be compared in isolation, the figure reports the average over the two clock types.
The running time with integer clocks is lower than with real-valued clocks, which improves the effectiveness of WSC based on the TAC model. Although the difference is not evident when the number of clocks is small, the simulation finishes in much less time with integer clocks as the number of clocks increases, showing a clear computational advantage.
(3) Scale of TAW models
The parameter values were set to \( L \geq 2, \text{Ave} = 0.5, E = 0.25 \). The result in Fig.6 shows the tendency of the running time as the number of TAW models increases.
As the number of TAW models grows, the time complexity of WSC is basically linear with a slightly accelerating trend. This indicates that the composition is completed in linear time and that the count of TAW models does not raise the time-complexity order of the TAC.
(4) Scale of nodes in TAW models
In this test, the parameter values were set to \( N \geq 2, \text{Ave} = 0.5, E = 0.25 \). The running times of the TAC with different numbers of nodes in the TAW models are shown in Fig.7.
There is a slow growth as the count of nodes in the TAW models increases. The difference from the previous test is that only the total count of nodes in the TAW models changes while the count of TAW models may stay the same. The composition time is approximately linear, which indicates that a large number of nodes in the TAW models has little influence on the feasibility of the TAC.
(5) Association strength and diversity of TAC model
The association strength and diversity of TAC are related to each other. The test results for the two parameters in various combinations of the TAC are shown in Fig.8.
The TAC runs in less time as the association strength increases, while only slight fluctuations in time exist when the association diversity changes. That is to say, the stronger the association strength, the higher the execution efficiency of the TAC, and the association diversity has little influence on the effectiveness of the TAC.
In summary, the above test results show that, for the same TAC model, zone automata require less state space than region automata; the TAC with integer clocks runs faster than with real-valued clocks for a large number of clocks; and the computational complexity of the TAC does not reach \( O(N^2) \) as the parameter values increase, which verifies its effectiveness.
5 Conclusions
In this paper, we presented an automated web service composition method based on timed automata, providing both the composition model and the implementation algorithm. The innovation is mainly reflected in three aspects.
Firstly, we proposed the TAC model within the computing framework of timed automata, a construction method for WSC. It provides the algorithms for building TAW and TAC models, implementing the whole process from independent web services to a composed service automatically.
Secondly, ‘zero service’ and ‘integer clock’ were introduced into the framework model. The former makes the model easier to convert to a unified form which is convenient to be generated automatically, and the latter reduces the complexity of computing for WSC.
Finally, we implemented the web service composition automation engine, which can receive web service interface description language (i.e., WSIL) and automatically generate TAC model for performing composite service. The engine has strong expansibility and can be applied to different fields.
The performance evaluation indicates that zone automata require less state space than region automata while sharing the same ability to describe the TAC model; that the computing time with integer clocks is lower than with real-valued clocks, improving the efficiency of WSC based on the TAC model; and that the complexity of the TAC grows between linear and quadratic as the parameters vary, which verifies its feasibility and effectiveness.
Acknowledgement
This work has been supported by the National Science Foundation of China (Grant No. 61101214, 61371195), the Key Project of National Defense Basic Research Program of China (Grant No. B1120132031) and the Fundamental Research Funds for the Central Universities (Grant No. 20120842003, 20110842001).
References
Hu Jingjing received the Ph.D. degree in computer science from Beijing Institute of Technology, Beijing, China. She is currently a lecturer in the School of Software of Beijing Institute of Technology. Her research interests are in the areas of service computing, multi-agent systems, and GPU-based computer tomography.
Zhu Wei is a postgraduate in the School of Software, Beijing Institute of Technology, China. His research interests include artificial intelligence, services computing, software engineering, etc.
Zhao Xing received the Ph.D. degree in computer science from University of Science & Technology of China, Hefei, China. He is currently an associate professor in the School of Mathematical Sciences of Capital Normal University. His research interests are in the areas of computer tomography and service computing.
Zhu Dongfeng is a Ph.D. candidate in the School of Computer, Beijing Institute of Technology, China. He received his B.S. degree in computer science from Beijing Jiaotong University. His research interests include P2P computing, services computing, delay tolerant networks, etc.
To: MTB Distribution
From: Gary M. Palter
Date: 1 September 1980
Subject: HASP Workstation Simulator
Purpose
This memo describes the Multics HASP workstation simulator facility.
Overview
Several sites have requested the capability to use Multics to submit jobs to and receive output from other computer systems. The majority of these systems can communicate with remote job entry (RJE) stations employing the HASP communications protocol. Thus, an effort was undertaken to permit Multics to simulate a HASP workstation.
One of the major advantages of the HASP protocol over other RJE protocols is that it permits a workstation to have multiple card readers, punches, and line printers all operating simultaneously. To take advantage of this feature on Multics, it is necessary to have a separate process for each device being simulated. However, all communications with the remote system takes place on a single physical channel which normally can be accessed only by a single Multics process. To solve this problem, a ring-0 multiplexer was developed which splits the single channel into separate logical channels for each device allowing for the desired independent operation of simulated devices.
The structure chosen to simulate the devices of the workstation is the I/O daemon structure. A new I/O daemon driver program, similar to the remote_driver_module, was developed to permit an I/O daemon process to simulate a device of the workstation. Unlike other I/O daemon drivers, however, this driver takes requests from a queue and sends them through a card reader to the remote system; it also receives files from the remote system through card punches and line printers and queues these files for local printing or punching.
A feature was added to the I/O daemon driver to allow output files received from the remote system to be placed into the system pool. When this feature is enabled, users are able to retrieve the output of their foreign jobs and examine it online. In order to use this feature, however, it is necessary to require that special control records, similar to the card input control records in use today, be placed in each output file to identify which Multics user "owns" the file. As these control records must be included by the remote system, possibly requiring modifications to the remote system itself, the decision was made that the default mode of operation for the simulator would be automatic printing or punching of the received files.
Multics Project internal working documentation. Not to be reproduced or distributed outside the Multics Project.
Organization of this Document
The remainder of this document is a draft of the additions to the Bulk I/O manual and the MFM Reference Guide which describe this facility.
Multics provides a facility for the simulation of a remote job entry (RJE) workstation using the HASP communications protocol. Through this facility, Multics users can request that job decks be transmitted to a remote system for execution and the resulting output be returned to Multics for printing/punching or online perusal.
A HASP workstation is composed of card readers, card punches, line printers, and an operator's console. Each device to be simulated by Multics is configured as a separate sub-channel of a physical communications channel defined in the CMF as a HASP multiplexer channel. (See MAM Communications for details on configuring a HASP multiplexer.) Up to eight card readers may be configured in a workstation; a total of no more than eight line printers and card punches may be configured; exactly one operator's console must be configured.
Structure of the Simulator
The I/O daemon driver module hasp_ws_sim_driver_ simulates the operation of a workstation's card readers, line printers, and card punches; the command hasp_host_operators_console simulates the console. A separate process is used to simulate each device to permit all devices to operate asynchronously, thus achieving maximum throughput over the communications line.
The simulated operator's console is used to establish the identity of the workstation with the remote system. Subsequently, it may be used to control the operation of the workstation, request status on jobs executing on the remote system, and examine the queues of output files waiting for transmission to Multics.
Card decks are transmitted from Multics through the simulated card readers to the remote system. These decks are normally jobs to be executed by the remote system. On Multics, each card deck must be contained in a segment. A Multics user requests that a deck be transmitted by issuing the dpunch command; a separate request type is used for each remote system.
The remote system transmits output files to Multics through the simulated line printers and card punches. By default, the simulator automatically issues dprint or dpunch requests for these files as appropriate. However, a site may choose to have these output files placed into the system pool storage for subsequent retrieval by Multics users. To use this option, the driver process must be instructed to expect control records in each output file, and the remote system must include these Multics control records to indicate which Multics user owns the file. Adding control records to an output file may involve modifications to the remote computer's operating system, the JCL of each job submitted for remote execution, the programs executed by each job, or a combination of the above. (See MPM Reference for a description of the format of these control records.)
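As a purely illustrative sketch (the exact field layout is defined in MPM Reference, not here, and the identifiers below are hypothetical), an output file destined for online perusal would begin with a control record naming its Multics owner, along the lines of:

```plaintext
++IDENT Person_id Project_id
```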
Defining a HASP Workstation Simulator
To define a workstation simulator, the local administrator(s) must:
- Define the configuration of the workstation being simulated: the number of card readers, line printers, and card punches must be agreed upon with the remote system's administrator(s).
- Determine if the remote system requires that a SIGNON control record be transmitted to establish the identity of the workstation. (The SIGNON record is a special record defined by the HASP protocol to enable the host system to establish the identity of the workstation. Many operating systems do not require this control record, but validate the workstation in other ways.) If a SIGNON record is required, its exact content must be determined for use in the attach descriptions described below.
- Define the HASP multiplexer channel as described in MAM Communications.
- Define a major device for each simulated device except the operator's console and a request type for the submission of card decks in the system iod_tables.
- Create an ACS segment for each sub-channel of the HASP multiplexer channel, give the process which will attach that sub-channel rw access to the ACS and the dialok attribute in the PDT. (See MAM Communications and MAM System.) It is recommended that the process which attaches the simulated operator's console not be registered on the SysDaemon project.
- Determine the printer channel stops used in output files returned from the remote system and insure that the Multics request type(s) used to print those files include the appropriate logical channel stops in their RQTI segments. (See "Request Type Info Segments" in section 2 of this manual.) For example, many systems use channel stop #1 to represent the top of a page; the RQTI segments should specify "Line (1): 1;" to insure correctly formatted output.
IOD_TABLES
With the exception of the operator's console, each simulated device is controlled by an I/O daemon using the hasp_ws_sim_driver_module. A separate major device with exactly one minor device must be defined in the iod_tables for each simulated device.
The major device definition must include a line statement specifying the sub-channel of the simulated device; the "line: variable;" construct is not
allowed. Additionally, an args statement must be included specifying a station ID and use of the hasp_host_ terminal I/O module (see MPM Communications).
The minor device specification must include a minor_args statement which specifies the type of device being simulated. Additional keywords may be used in this statement as described below.
See "I/O Daemon Tables" in section 2 of this manual for a description of the iod_tables source language.
Sample iod_tables Definition
The iod_tables entries to simulate a HASP workstation with a card reader, card punch, and two line printers follows:
```plaintext
Device: cdc_rdr1; /* Card reader */
line: a.h014.rdr1;
driver_module: hasp_ws_sim_driver_;
args: "station= CDC, desc= -terminal hasp_host_ -comm hasp";
minor_device: rdr1;
minor_args: "dev= reader_out";
default_type: cdc_jobs;
Device: cdc_prt1; /* Line printer #1 */
line: a.h014.prt1;
driver_module: hasp_ws_sim_driver_;
args: "station= CDC, desc= -terminal hasp_host_ -comm hasp";
minor_device: prt1;
minor_args: "dev= printer_in, request_type= cdc_output";
default_type: dummy;
Device: cdc_prt2; /* Line printer #2 */
line: a.h014.prt2;
driver_module: hasp_ws_sim_driver_;
args: "station= CDC, desc= -terminal hasp_host_ -comm hasp";
minor_device: prt2;
minor_args: "dev= printer_in, auto_queue= no";
default_type: dummy;
Device: cdc_pun1; /* Card punch */
line: a.h014.pun1;
driver_module: hasp_ws_sim_driver_;
args: "station= CDC, desc= -terminal hasp_host_ -comm hasp";
minor_device: pun1;
minor_args: "dev= punch_in";
default_type: dummy;
```
Additions to Bulk I/O Manual
```plaintext
Request_type: cdc_jobs; /* Request type for submitting card decks to remote CDC system */
generic_type: punch;
max_queues: 1;
device: cdc_rdr1.rdr1;
Request_type: dummy; /* Required by line printers and card punches to avoid errors */
generic_type: dummy;
max_queues: 1;
device: cdc_prt1.prt1;
device: cdc_prt2.prt2;
device: cdc_pun1.pun1;
```
args Statement Keywords
station= <station_id>
identifies returned output files when said files are printed/punched automatically. This keyword is required; the same value should be used for all devices of a workstation simulator.
desc= <attach_description>
specifies the attach description used to attach the terminal/device I/O module. This keyword is required. The attach description must include the "-terminal hasp_host_" and "-comm hasp" options; the "-tty" option is provided automatically by the driver process. If the remote system requires a SIGNON record, the "-signon" option must be included for all devices of the workstation. (See MPM Communications for a description of the hasp_host_ I/O module.)
minor_args Statement Keywords
dev= <device_type>
specifies the type of device being simulated by this driver process. This keyword is required. The acceptable values for device_type are:
reader_out
simulates a card reader for sending card decks to the remote system.
printer_in
simulates a line printer for receiving output files from the remote system.
punch_in
simulates a card punch for receiving card decks from the remote system.
auto_queue= <switch_value>
specifies whether output files received by this driver are (1) automatically printed or punched locally or (2) scanned for Multics control records and made available for online perusal as described above. The possible values for switch_value are:
yes
automatically queue the files for printing/punching; do not scan for control records, or
no
scan the output files for Multics control records and store them in system pool storage for online perusal; do not automatically queue files for printing/punching.
This keyword can not be given if "dev= reader_out" is specified. This keyword is optional; the default value is "yes" (automatically queue output files).
request_type= <rqt_name>
rqt= <rqt_name>
specifies the Multics request type to be used for automatically printing or punching output files. The request type specified must be of generic type "printer" if "dev= printer_in" is given or generic type "punch" if "dev= punch_in" is given; this keyword can not be given if "dev= reader_out" is specified. This keyword is optional; the default request type used is the default specified for the appropriate generic type.
Operating a HASP Workstation Simulator
SIMULATOR INITIALIZATION
To start a HASP workstation simulator:
- If necessary, issue the initializer "load_mpx" command described in the MOH to cause the HASP multiplexer channel to wait for a connection.
- Login the process which is to run the simulated operator's console of the workstation and issue the hasp_host_operators_console (hhoc) command, described below, to wait for the connection to be completed. If the remote system requires a SIGNON record as part of the connection procedure, include the "-signon" option on the hhoc command line.
- Complete the physical connection to the remote system.
- When the process running the operator's console prints the message "Input:" indicating that the physical connection is established, perform any logon sequence required to identify the workstation to the remote system. The exact sequence used, if any, should be determined from the remote system's administrative staff.
- Login each of the driver processes for the other simulated devices. The sequence used to login a driver process is described in "Login and Initialization of Device Drivers" in section 3 of this manual.
- On the terminal of the process running the operator's console, issue any commands to the remote system required to ready all the devices of the workstation.
- For each driver process running a simulated card reader, issue the commands:
```
ready
pun_control autopunch
go
```
These commands will start the transmission of card decks to the remote system.
- Issue the "receive" command for each driver process running a simulated line printer or card punch. This command will cause these drivers to wait for output files to be sent by the remote system. As each output file is received, it is processed according to the specifications given in the minor_args statement of the driver as described above.
**SPECIAL INSTRUCTIONS FOR RUNNING THE PRINTER AND PUNCH SIMULATORS**
In addition to the commands described in this section, the only other I/O daemon commands which may be used in the driver process of a simulated line printer or card punch are: logout, hold, new_device, inactive_time, x, start, help, status, reinit, release, and clean_pool. These commands are described in section 3 of this manual.
After use of the "receive" command described below, the driver only recognizes pending commands while it is between output files. If it is necessary to execute a command while a file is being received, a QUIT must be issued to the driver to bring the driver to QUIT command level. The "hold" command can then be used to cause the driver to remain at QUIT level; the "release" command can be used to abort receiving the file and return to normal command level; and the "start" command can be used to resume receiving the file.
The receive command causes the driver to wait for output files to be transmitted from the remote system. A message is issued at the start and end of each file received. If automatic queueing of output files is enabled for this simulated device, output files will be locally printed or punched after they have been successfully received; otherwise, the output files will be placed into system pool storage as specified by the ++IDENT control records which must be present in the files.
Usage
receive
-----------------
auto_queue
-----------------
Name: auto_queue
The auto_queue command controls whether output files received by this driver are (1) automatically printed or punched locally or (2) scanned for Multics control records and placed in system pool storage for online perusal.
Usage
auto_queue <switch_value>
where:
1. switch_value
must be chosen from:
yes
automatically queue the files for printing/punching; do not scan for control records, or
no
scan the output files for Multics control records and store them in system pool storage for online perusal; do not
automatically queue files for printing/punching.
request_type
Name: request_type, rqt
The request_type command is used to specify the request type to be used for the automatic queuing of output files received by this device.
**Usage**
rqt <rqt_name>
where:
1. **rqt_name**
- is the name of the request type to be used for automatic queuing. The generic type of this request type must agree with the type of device being simulated ("printer" for simulated line printers, etc). This parameter is optional; the default value is the request type specified in the iod_tables definition of this driver.
**SIMULATOR SHUTDOWN**
To shutdown a HASP workstation simulator:
- Issue the "halt" command for each process running a simulated card reader. This command will cause these drivers to stop transmitting card decks after the current deck (if any) has completed transmission.
- On the terminal of the process simulating the operator's console, issue any commands to the remote system to request it to stop transmitting additional output files.
When all driver processes are idle, issue the "logout" command to each driver.
On the terminal of the process simulating the operator's console, issue any commands to the remote system to indicate that this workstation is signing off. After giving these commands, issue the "!quit" request to the hhoc program and logout the console process.
Break the physical connection.
If desired, shutdown the HASP multiplexer by using the initializer "dump_mpx" command described in the MOH.
The hasp_host_operators_console command is used to simulate the operation of the operator's console of a HASP workstation. The operator's console is used to identify a workstation to a remote system, to issue commands governing the operation of the workstation, and to receive status information from the remote system.
**Usage**
```bash
hhoc tty_channel {attach_arguments}
```
where:
1. `tty_channel` is the name of the terminal channel to be attached as the operator's console. This channel must be configured as the console sub-channel of a HASP multiplexer channel (eg: a.h014.op). See MAM Communications for a further description of the HASP multiplexer.
2. `attach_arguments` are options acceptable to the hasp_host_ I/O module. This command supplies the `-comm`, `-tty`, and `-device` options automatically; these options need not be given on the command line. (See MPM Communications for a description of the hasp_host_ I/O module.)
**Notes**
If the remote system requires a SIGNON record be transmitted before normal operations of a workstation may commence, the `-signon` option should be supplied on the command line specifying the exact SIGNON record to be transmitted.
For example, the command line:
```bash
hhoc a.h014.opr -signon "/*SIGNON REMOTE7"
```
may be used to attach the channel a.h014.opr as the operator's console of a remote IBM system expecting a connection from the workstation named REMOTE7.
After attaching the channel specified on the command line, hasp_host_operators_console prompts the user for terminal input with the string "Input:".
DRAFT: MAY BE CHANGED
Input from the terminal is transmitted directly to the remote system unless the line begins with the request character, an exclamation mark (!); lines beginning with the request character are interpreted by this command. The valid requests are described below.
Any text received from the remote system is displayed directly on the terminal without any interpretation by hasp_host_operators_console.
**HASP_HOST_OPERATORS_CONSOLE REQUESTS**
The following requests are recognized by hasp_host_operators_console when given at the beginning of a line of terminal input:
!..
the rest of the line is passed to the Multics command processor for execution as ordinary commands.
!.
prints a message of the form:
hasp_host_operators_console N.N; connected to channel NAME.
where N.N is the current version of this program and NAME identifies the channel connected as a console to the remote system.
!quit
causes the command to hangup the operator's console channel and return to Multics command level.
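The dispatch behavior described above (ordinary lines transmitted to the remote system, request-character lines interpreted locally) can be sketched roughly as follows. This is a hypothetical illustration only: the function name, return values, and default channel name are not part of the actual Multics command.

```python
# Hypothetical sketch of the console input loop described above; the
# function name and return values are illustrative, not part of the
# actual Multics command.

def classify_input(line, version="N.N", channel="a.h014.opr"):
    """Return an (action, payload) pair for one line of terminal input."""
    if not line.startswith("!"):
        return ("transmit", line)      # sent directly to the remote system
    if line.startswith("!quit"):
        return ("hangup", channel)     # hang up the console channel
    if line == "!!":
        return ("status", "hasp_host_operators_console %s; "
                "connected to channel %s." % (version, channel))
    # any other "!" request: pass the remainder to the command processor
    return ("command", line.lstrip("!. ").strip())
```

For example, `classify_input("display queues")` yields a transmit action, while `classify_input("!quit")` yields a hangup action for the attached channel.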
Multics provides facilities for users to submit card decks to a remote computer system for execution and to receive output from that execution for either printing/punching locally or online perusal. This section describes the mechanisms available for using this facility.
Submitting Card Decks to a Remote System
Each card deck to be transmitted to a remote system for execution must be contained in a separate Multics segment. This segment can be created using an editor, bulk card input, or any other appropriate mechanism.
The segment must consist of ASCII text only; no binary data (object segments, etc.) may be included. The exact format of the contents of the segment is dependent on the remote system being accessed and should be determined from the appropriate documentation for the remote system.
To transmit the segment to the remote system, issue the dpunch command specifying the mcc conversion mode and the request type established by your system administrator(s) explicitly for this purpose. A separate request type will be used for each remote system to which card decks can be submitted.
For example, to submit the card deck contained in the segment "sample.cdc" in the working directory to a remote CDC system, deleting the deck after it is successfully transmitted, issue the command:
```
dpunch -mcc -rqt cdc_jobs -dl sample.cdc
```
where "cdc_jobs" is the request type established by your system administrator(s) to submit decks to the CDC system.
Receiving Output from a Remote System
By default, printed and punched output returned by a remote system to Multics is automatically printed or punched locally. However, your system administrator(s) may decide that the returned output should be made available to users for online perusal.
If output is to be available for online perusal, each output file must contain Multics control records which establish the identity of the user who owns the file. Either the job control language (JCL) submitted to the remote system or the program(s) executed on the remote system must be modified to cause the required control records to appear in the output files.
Consult your system administrator(s) to determine which mechanism must be used for each remote system.
Returned output files which are to be available for online perusal are placed in system pool storage where they may be retrieved using the `copy_cards` command described in MPM Commands. Output files must be copied in a reasonable time, as they are periodically deleted from the system pool.
Format of an Output File Transmitted to Multics for Online Perusal
```
++IDENT FILE_NAME PERSON_ID PROJECT_ID
++FORMAT MODES
++CONTROL OVERWRITE AUTO_QUEUE
++INPUT
(output data)
<EOF record>
```
The only user-supplied control records required are `++IDENT` and `++INPUT`. For an explanation of these control records, refer to Appendix H of this manual.
Each output file is delimited by an end-of-file (EOF) record supplied automatically by the remote system. All control records in the output file from `++IDENT` through `++INPUT` inclusive and the EOF record are removed from the file before it is placed into pool storage.
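The stripping described above can be sketched as a small function. The representation of the file as a list of record strings and the EOF marker value are assumptions for illustration; the actual system operates on its own record format.

```python
def strip_control_records(records, eof_marker="<EOF>"):
    """Remove the ++IDENT..++INPUT control records (inclusive) and the
    EOF record, returning only the user's output data. Records before
    ++IDENT are discarded, as described in the text."""
    out, seen_ident, seen_input = [], False, False
    for rec in records:
        if rec == eof_marker:
            break                      # the EOF record is removed as well
        if not seen_input:
            if rec.startswith("++IDENT"):
                seen_ident = True
            if seen_ident and rec.startswith("++INPUT"):
                seen_input = True      # everything after this is data
            continue                   # control records (and junk) dropped
        out.append(rec)                # ++-style records after ++INPUT stay
    return out
```

Note that records after `++INPUT` are kept even if they look like control records, matching the ++INPUT notes in Appendix H.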
For printed output, each paper motion command in the file is translated into the character sequence which will best simulate the requested motion when (and if) the file is printed locally via the `dprint` command. The exact character sequences used are given in Table 5-1.
One of the paper motion commands that may be received is a request to skip to a specific printer channel stop. This command is converted to a logical channel slew sequence as defined in "Vertical Format Control" earlier in this section. The user should check the RQTI segment of the request type used for printing the output file to determine which channel stops may be used in the output file. (The program executed on the remote system is responsible for placing this particular paper motion command in the output file. The exact mechanism used to do this should be determined from the appropriate documentation for the remote system.)
## Table 5-1: Translations of Paper Motion Commands in Output Files
<table>
<thead>
<tr>
<th>Paper motion command</th>
<th>Character sequence</th>
</tr>
</thead>
<tbody>
<tr>
<td>Slew zero lines</td>
<td>CR (octal 015) (1)</td>
</tr>
<tr>
<td>Slew one line</td>
<td>NL (octal 012)</td>
</tr>
<tr>
<td>Skip to channel (N)</td>
<td>ESC c <N> ETX (2)</td>
</tr>
</tbody>
</table>
(1) Overprint the current line with the previous line.
(2) This sequence is octal 033, octal 143, the decimal representation of the channel number encoded as ASCII characters (e.g., octal 061, octal 065 for channel #15), and octal 003.
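The translations in Table 5-1 can be expressed directly in code. This sketch assumes the channel number is rendered as decimal ASCII digits, as footnote (2) describes; the command names are illustrative.

```python
# Sketch of the paper-motion translations in Table 5-1.
ESC, ETX = "\033", "\003"   # octal 033 and 003

def translate_motion(command, channel=None):
    """Map a paper motion command to the character sequence of Table 5-1."""
    if command == "slew_zero":       # overprint: carriage return, no line feed
        return "\r"                  # CR, octal 015
    if command == "slew_one":        # ordinary newline
        return "\n"                  # NL, octal 012
    if command == "skip_to_channel": # ESC, "c", decimal digits, ETX
        return "%sc%d%s" % (ESC, channel, ETX)
    raise ValueError("unknown paper motion command: %r" % command)
```

For channel #15 the skip sequence is octal 033, 143, 061, 065, 003, exactly as footnote (2) spells out.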
APPENDIX H
RETURNED OUTPUT CONTROL RECORDS
This appendix defines the control records which are permitted in output files returned by a remote system to Multics for online perusal by Multics users.
All characters on a control record are converted to lower case except those immediately following the escape character (backslash, \). For example, \SMITH.\SYS\MAINT is mapped into Smith.SysMaint.
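The case-mapping rule can be sketched as a small function (the function name is illustrative). Note how the escape character itself is removed while the character following it keeps its case:

```python
def map_control_text(text, escape="\\"):
    """Lowercase every character of a control record except one
    immediately following the escape character; the escape character
    itself is removed from the output."""
    out, keep_next = [], False
    for ch in text:
        if ch == escape and not keep_next:
            keep_next = True           # next character keeps its case
            continue                   # escape is dropped from the output
        out.append(ch if keep_next else ch.lower())
        keep_next = False
    return "".join(out)
```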
Control record format is as follows:
- Columns one and two contain ++,
- A keyword appears starting in column three,
- Remainder of the record is free form,
- Continuation of control records is not permitted; the entire record must be contained within one punch or printer record.
**++IDENT**
**Name:** ++IDENT
This control record identifies the Multics user who is to receive the output file. All records in the output file before the ++IDENT control record are discarded. All three fields of this control record must be specified in the order shown.
**Usage**
```
++IDENT <FILE_NAME> <PERSON_ID> <PROJECT_ID>
```
where:
1. **FILE_NAME**
   is the name used to identify the output file in system pool storage. It should be unique among the user's recently received output files. In the event of a name duplication, the system output receiving process appends a numeric component to the end of the supplied name and creates a separate segment for the file, unless the OVERWRITE control option is specified on the ++CONTROL record.
2. **PERSON_ID**
   is the registered person name of the owner of this output file. Only this person is able to copy the file from the pool.
3. **PROJECT_ID**
   is the registered project name of the owner.
**Notes**
Multics person and project names normally begin with uppercase letters. Such names must have the escape character before each uppercase letter, since all letters in a control record are mapped to lowercase except those immediately following the escape character (backslash).
Angle brackets in the "Usage" line indicate information supplied by the user.
**++CONTROL**

**Name:** ++CONTROL

**Usage**

++CONTROL <CTL_KEYS>
where:
1. CTL_KEYS
specifies the operating modes of the software and may be one of the following:
OVERWRITE
specifies that if an output file already exists with the name
given on the ++IDENT control record, the old file is to be deleted before the new file is received. The default action is described under the FILE_NAME argument of the ++IDENT control record.
**AUTO_QUEUE** specifies that the output file is to be automatically queued for printing or punching locally as appropriate. The default action is to not queue the file.
**REQUEST_TYPE <RQT_NAME>**, **RQT <RQT_NAME>**
specifies use of the RQT_NAME print/punch queue if this output file is automatically printed/punched. RQT_NAME must identify a request type whose generic type is "printer" for print files and "punch" for punch files. (See the description of print_request_types in MPM Commands.) If this ctl_key is not given and automatic queuing is requested, the request type established by the system administrator(s) for output from this remote system will be used. This ctl_key is ignored unless the AUTO_QUEUE ctl_key is also given.
**++FORMAT**
**Name:** ++FORMAT
This control record is used to specify the conversion modes used to format the data in the output file. This record is optional.
**Usage**
++FORMAT <MODES>
where:
1. MODES may be any of the following modes. The meaning of these modes is discussed in "Card Conversion Modes" in Appendix C.
TRIM
NOTRIM (default)
LOWERCASE
NOCONVERT (default)
ADDNL
NOADDNL (default)
CONTIN
NOCONTIN (default)
**++INPUT**
**Name:** ++INPUT
This control record marks the end of the control records and is required for all output files. The next record is the first record of the user's output file to be placed into system pool storage.
**Usage**
++INPUT
There are no fields following the key on this control record.
**Notes**
The system treats all records received after the ++INPUT record as data and places them into the output file even if they have control record syntax as described above.
Effective Management of Bugs in Software Repositories
Jeemol P Thomas¹, Jincy Anna George², Kaviya V³, Reshma K Rajan⁴, Jyolsna Mary P⁵
U.G. Student, Department of Computer Engineering, MBC Engineering College, Kuttikanam, Kerala, India¹
U.G. Student, Department of Computer Engineering, MBC Engineering College, Kuttikanam, Kerala, India²
U.G. Student, Department of Computer Engineering, MBC Engineering College, Kuttikanam, Kerala, India³
U.G. Student, Department of Computer Engineering, MBC Engineering College, Kuttikanam, Kerala, India⁴
Associate Professor, Department of Computer Engineering, MBC Engineering College, Kuttikanam, Kerala, India⁵
ABSTRACT: Software organizations spend a large share of their cost on handling software bugs. An inevitable step in fixing bugs is bug triage, which aims to assign the right developer to a new bug. To reduce the time cost of manual work, text classification techniques are applied to perform automatic bug triage. In this paper, we address the problem of data reduction for bug triage, i.e., how to reduce the scale and improve the quality of bug data. We combine instance selection with feature selection to simultaneously reduce the data scale on the bug dimension and the word dimension. To determine the order of applying instance selection and feature selection, we extract attributes from historical bug data sets and build a predictive model for a new bug data set. Our work provides an approach to leveraging data-processing techniques to form reduced and high-quality bug data for software development and maintenance.
KEYWORDS: Mining software repositories, application of data preprocessing, data management in bug repositories, bug data reduction, feature selection, instance selection, bug triage, prediction of reduction orders.
I. INTRODUCTION
Mining software repositories is an interdisciplinary domain which aims to employ data mining to address software engineering problems. In modern software development, software repositories are large-scale databases for storing the output of software development, e.g., source code, bugs, e-mails, and specifications. Traditional software analysis is not completely suitable for the large-scale and complex data in software repositories. Data mining has emerged as a promising means of handling software data.

By using data mining techniques, mining software repositories can uncover interesting information in software repositories and solve real-world software problems. A bug repository (a typical software repository for storing details of bugs) plays an important role in managing software bugs. Software bugs are inevitable and fixing bugs is expensive in software development. Software companies spend more than 45 percent of cost on fixing bugs. Large software projects deploy bug repositories (also called bug tracking systems) to support information collection and to assist developers in handling bugs [2].

In a bug repository, a bug is maintained as a bug report, which records the textual description of reproducing the bug and is updated according to the status of bug fixing [27]. A bug repository provides a data platform to support many kinds of tasks on bugs, e.g., fault prediction, bug localization, and reopened-bug analysis [26]. In this paper, bug reports in a bug repository are called bug data. There are two challenges related to bug data that may affect the effective use of bug repositories in software development tasks, namely the large scale and the low quality.

On one hand, due to the daily reported bugs, a large number of new bugs are stored in bug repositories. A set of users can upload files after registering on the site. Developers who are inactive are deactivated by the classifier. Developers resolve bug reports by specifying the solution in the designated area. New bugs are added to the bug data set by the classifier [10].

Two typical characteristics of low-quality bugs are noise and redundancy. Noisy bugs may mislead related developers [27], while redundant bugs waste the limited time of bug handling [22].
A time-consuming step of handling software bugs is bug triage, which aims to assign a correct developer to fix a new bug [1], [7], [13]. In traditional software development, new bugs are manually triaged by an expert developer, i.e., a human triager. Due to the large number of daily bugs and the lack of expertise on all of them, manual bug triage is expensive in time cost and low in accuracy. To avoid the expensive cost of manual bug triage, existing work [1] has proposed an automatic bug triage approach, which applies text classification techniques to predict developers for bug reports. In this approach, a bug report is mapped to a document and a related developer is mapped to the label of the document. Bug triage is thereby converted into a problem of text classification and is automatically solved with mature text classification techniques. Based on the results of text classification, a human triager assigns new bugs by incorporating his or her expertise.

To improve the accuracy of text classification techniques for bug triage, some further techniques have been investigated. However, large-scale and low-quality bug data in bug repositories hinder techniques for automatic bug triage. Since software bug data are a kind of free-form text data (generated by developers), it is necessary to produce well-processed bug data to facilitate the application [29]. In this paper, we address the problem of data reduction for bug triage, i.e., how to reduce the bug data to save the labor cost of developers and improve the quality to facilitate the process of bug triage. Data reduction for bug triage aims to build a small-scale and high-quality set of bug data by removing bug reports and words which are redundant or non-informative.

In our work, we combine existing techniques of instance selection and feature selection to simultaneously reduce the bug dimension and the word dimension. The reduced bug data contain fewer bug reports and fewer words than the original bug data while providing comparable information. We evaluate the reduced bug data according to two criteria: the scale of a data set and the accuracy of bug triage. To avoid the bias of a single algorithm, we empirically examine the results of four instance selection algorithms and four feature selection algorithms.

Given an instance selection algorithm and a feature selection algorithm, the order of applying these two algorithms may affect the results of bug triage. In this paper, we propose a predictive model to determine the order of applying instance selection and feature selection. We refer to this determination as prediction of reduction orders. Drawing on experience with software metrics, we extract attributes from historical bug data sets. We then train a binary classifier on bug data sets with the extracted attributes and predict the order of applying instance selection and feature selection for a new bug data set.

Experimental results show that applying instance selection to a data set can reduce the number of bug reports, although the accuracy of bug triage may decrease; applying feature selection can reduce the number of words in the bug data while the accuracy can increase. Based on the attributes from historical bug data sets, our predictive model achieves an accuracy of 71.8 percent in predicting the reduction order.
Based on a top-node analysis of the attributes, the results show that no individual attribute determines the reduction order by itself, while each attribute is helpful to the prediction. The primary contributions of this paper are as follows:

1) We present the problem of data reduction for bug triage. This problem aims to augment the data set of bug triage in two aspects, namely a) to simultaneously reduce the scales of the bug dimension and the word dimension and b) to improve the accuracy of bug triage.

2) We propose a combined approach to addressing the problem of data reduction. This can be viewed as an application of instance selection and feature selection in bug repositories.

3) We build a binary classifier to predict the order of applying instance selection and feature selection.

To our knowledge, the order of applying instance selection and feature selection has not been investigated in related domains. This paper is an extension of our previous work [25]. In this extension, we add new attributes extracted from bug data sets, prediction of reduction orders, and experiments on four instance selection algorithms, four feature selection algorithms, and their combinations.
II. BACKGROUND AND MOTIVATION
A. BACKGROUND
Bug repositories are widely used for maintaining software bugs, e.g., the popular open-source bug repository Bugzilla. Once a software bug is found, a reporter (typically a developer, a tester, or an end user) records this bug in the bug repository. In a bug report, the summary and the description are the two key items of information about the bug, recorded in natural language. As their names suggest, the summary gives a general statement for identifying the bug while the description gives the details for reproducing it.

Some other items are recorded in a bug report to facilitate identification of the bug, such as the product, the platform, and the importance. Once a bug report is formed, a human triager assigns the bug to a developer, who will try to fix it. This developer is recorded in the assigned-to item. The assigned-to item changes to another developer if the previously assigned developer cannot fix the bug. The process of assigning a correct developer for fixing the bug is called bug triage.

A developer who is assigned a new bug report starts to fix the bug based on knowledge of historical bug fixing [12], [27]. Typically, the developer makes an effort to understand the new bug report and to examine historically fixed bugs as a reference (e.g., searching for similar bugs [54] and applying existing solutions to the new bug [10]). The status item of a bug report is changed according to the current result of handling the bug until the bug is completely fixed. Changes to a bug report are stored in the history item. One bug, for instance, was assigned to three developers, and only the last developer could handle it correctly.
Fig 2.1: Reduction of bug data for bug triage
Changing developers went on for more than seven months, while actually fixing the bug cost only three days. Manual bug triage by a human triager is time-consuming and error-prone, since the number of daily bugs is too large to assign easily and a human triager can hardly master the knowledge about all the bugs [12]. Existing work employs approaches based on text classification to assist bug triage, e.g., [1], [7]. In such approaches, the summary and the description of a bug report are extracted as the textual content while the developer who can fix the bug is marked as the label for classification.

As shown in Fig 2.1, the classifier creates the bug data set, which contains attributes such as bug name, category, and bug description. A developer can assign a bug report to another developer if he or she is unable to solve it. Developers solve the bug reports assigned to them and return them to the classifier. In detail, existing bug reports with their developers form a training set to train a classifier, and new bug reports are treated as a test set to examine the classification results. The basic framework of bug triage based on text classification can be outlined as a matrix: each row of the matrix denotes one bug report while each column denotes one word. To avoid the low accuracy of bug triage, a recommendation list of size k is used to give a list of k developers who have the top-k likelihood of fixing the new bug.
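As a rough illustration of this classification framework, the following toy sketch (not the authors' implementation, which uses mature text classifiers) trains per-developer word counts from historical reports and returns a top-k recommendation list:

```python
from collections import Counter, defaultdict

def train(reports):
    """reports: list of (text, developer) pairs, i.e., labeled documents.
    Builds per-developer word counts as a stand-in for a real classifier."""
    model = defaultdict(Counter)
    for text, dev in reports:
        model[dev].update(text.lower().split())
    return model

def recommend(model, new_text, k=2):
    """Score each developer by how often they have seen the words of the
    new report, and return the top-k recommendation list described above."""
    words = new_text.lower().split()
    scores = {dev: sum(cnt[w] for w in words) for dev, cnt in model.items()}
    return [d for d, _ in sorted(scores.items(), key=lambda x: -x[1])][:k]
```

A human triager would then pick one developer from the returned list using his or her own expertise.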
III. MOTIVATION
Real-world data always include noise and redundancy. Noisy data may mislead data analysis techniques [29], while redundant data may increase the cost of data processing [4]. In bug repositories, all bug reports are filled in by developers in natural language. Low-quality bugs accumulate in bug repositories as they grow in scale. Such large-scale and low-quality bug data may degrade the effectiveness of fixing bugs [10], [27].

To illustrate a noisy bug report, we take the bug report of bug 201598 as Example 2 (note that both the summary and the description are included). Example 2 (Bug 201598): "3.3.1 about says 3.3.0. Build id: M20070829-0800. 3.3.1 about says 3.3.0." This bug report exhibits an error in the version dialog, but the details are not clear. Unless a developer is very familiar with the background of this bug, it is hard to find the details. According to the history item, this bug was fixed by the developer who reported it. However, the summary of this bug may confuse other developers.

Moreover, from the perspective of data processing, especially automatic processing, the words in this bug could be removed since they are not helpful for identifying it. Thus, it is necessary to remove noisy bug reports and words for bug triage. To illustrate redundancy between bug reports, we list two bug reports of bugs 200019 and 204653 in Example 3 (the description items are omitted). Example 3 (Bugs 200019 and 204653): (Bug 200019) "Argument popup not highlighting the correct content . . ."; (Bug 204653) "Argument highlighting wrong . . .". In bug repositories, the bug report of bug 200019 is marked as a duplicate of bug 204653 (a duplicate bug report denotes a bug report describing a software fault that has the same root cause as an existing bug report [22]).

The textual contents of these two bug reports are similar. Hence, one of the two bug reports can be chosen as the representative one, and a technique is needed to remove the other. Thus, a technique to remove extra bug reports for bug triage is required. In light of the above examples, it is necessary to propose an approach for decreasing the scale and increasing the quality of bug data.
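A simple way to flag candidate duplicates like those in Example 3 is word-level Jaccard similarity between summaries. This is offered only as an illustrative sketch; the paper does not prescribe this particular measure, and the threshold below is arbitrary.

```python
def jaccard(a, b):
    """Word-level Jaccard similarity between two bug summaries."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def is_probable_duplicate(a, b, threshold=0.4):
    """Flag a pair of summaries as a candidate duplicate (illustrative
    threshold; real systems combine several signals)."""
    return jaccard(a, b) >= threshold
```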
IV. DATA REDUCTION FOR BUG TRIAGE
Motivated by the examples in Section 2.2, we propose bug data reduction to reduce the scale and improve the quality of data in bug repositories. Fig 2.1 illustrates the bug data reduction in our work, which is applied as a phase in data preparation for bug triage. We combine existing techniques of instance selection and feature selection to remove certain bug reports and words. A problem in reducing the bug data is determining the order of applying instance selection and feature selection, which is denoted as the prediction of reduction orders. In this section, we first present how to apply instance selection and feature selection to bug data, i.e., data reduction for bug triage. Then we list the benefits of the data reduction. The details of the prediction of reduction orders are shown in Section 4.
Algorithm 1: Data reduction based on FS -> IS

Input: training set T with n words and m bug reports;
reduction order FS -> IS; final number n' of words;
final number m' of bug reports.

Output: reduced data set T' for bug triage

1) apply FS to the n words of T and compute objective values for all the words;
2) select the top n' words of T and generate a training set T'';
3) apply IS to the m bug reports of T'';
4) terminate IS when the number of bug reports is equal to or less than m' and generate the final training set T'.
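The steps above can be sketched minimally as follows. Word frequency stands in for a real feature selection objective, and simple truncation stands in for an instance selection algorithm such as ICF; both substitutions are illustrative assumptions, not the paper's method.

```python
from collections import Counter

def reduce_fs_then_is(reports, n_words, m_reports):
    """FS -> IS order from Algorithm 1: keep the n_words most frequent
    words, drop reports emptied by feature selection, then truncate to
    m_reports (a stand-in for a real instance selection algorithm)."""
    freq = Counter(w for text, _dev in reports for w in text.split())
    kept = {w for w, _ in freq.most_common(n_words)}   # steps 1-2: FS
    filtered = []
    for text, dev in reports:
        words = [w for w in text.split() if w in kept]
        if words:                                      # blank reports removed
            filtered.append((" ".join(words), dev))
    return filtered[:m_reports]                        # steps 3-4: IS
```

The IS -> FS order would simply apply the two stages in the opposite sequence.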
i. Applying Instance Selection and Feature Selection
In bug triage, a bug informational collection is changed over into a content lattice with two measurements, specifically the bug measurement and the word measurement. In our work, we use the blend of example determination and highlight choice to create a decreased bug informational collection. We supplant the first informational index with the decreased informational collection for bug triage. Occurrence determination and highlight choice are broadly utilized methods in information preparing. For a given informational index in a certain application, case choice is to get a subset of pertinent occasions (i.e., bug reports in bug information) while include choice plans to acquire a subset of significant elements (i.e., words in bug information) [4]. In our work, we utilize the blend of occasion choice and highlight choice. To recognize the requests of applying example choice and highlight choice, we give the accompanying indication. Given an occurrence choice calculation IS and an element determination calculation FS, we utilize FS -> IS to indicate the bug information decrease, which first applies FS and afterward IS; then again, IS -> FS indicates first applying IS and afterward FS. In Algorithm 1, we quickly show how to diminish the bug information in light of FS -> IS.
Given a bug informational index, the yield of bug information diminishment is another and lessened informational collection. Two calculations FS and IS are connected consecutively. Take note of that in Step 2), some of bug reports might be clear amid highlight i.e., every one of the words in a bug report are evacuated. Such clear bug reports are likewise expelled in the component choice. In our work, FS -> IS and IS -> FS are seen as two requests of bug information diminishment. To dodge the inclination from a solitary calculation, we inspect aftereffects of four common calculations of case choice and highlight determination, individually.
We briefly introduce these algorithms as follows. Instance selection is a technique to reduce the number of instances by removing noisy and redundant instances. An instance selection algorithm can provide a reduced data set by removing non-representative instances [28]. Based on an existing comparison study and an existing review, we choose four instance selection algorithms, namely Iterative Case Filter (ICF), Learning Vector Quantization (LVQ) [9], Decremental Reduction Optimization Procedure (DROP), and Patterns by Ordered Projections (POP) [14]. Feature selection is a preprocessing technique for selecting a reduced set of features for large-scale data sets [4]. The reduced set is considered representative of the original feature set. Since bug triage is converted into text classification, we focus on feature selection algorithms for text data. In this paper, we choose four well-performing algorithms on text data [16] and software data, namely Information Gain (IG) [6], the χ2 statistic (CH), Symmetrical Uncertainty attribute evaluation (SU) [51], and Relief-F Attribute selection (RF) [15]. In feature selection, words in bug reports are sorted according to their feature values, and a given number of words with large values are selected as representative features.
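As one concrete scoring function, information gain for a single word can be computed as sketched below. This is the generic textbook formulation IG(w) = H(D) - [p(w present)·H(D|present) + p(w absent)·H(D|absent)], where D is the developer label of each bug report; it is not necessarily the paper's exact implementation.

```python
# Sketch: scoring a word by information gain (IG) over developer labels.
import math
from collections import Counter

def entropy(labels):
    if not labels:
        return 0.0
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(word, reports):
    """reports: list of (word_set, developer) pairs."""
    labels = [dev for _, dev in reports]
    present = [dev for words, dev in reports if word in words]
    absent = [dev for words, dev in reports if word not in words]
    n = len(reports)
    conditional = (len(present) / n) * entropy(present) \
                + (len(absent) / n) * entropy(absent)
    return entropy(labels) - conditional
```

Words are then ranked by this score, and the top k are kept as representative features.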
ii. Benefit of Data Reduction
In our work, to save the labor cost of developers, the data reduction for bug triage has two goals:
1) reducing the data scale, and
2) improving the accuracy of bug triage.
In contrast to existing work that models the textual content of bug reports, our approach reduces the data set itself.
iii. Reducing the Data Scale
We reduce the scale of data sets to save the labor cost of developers. Bug dimension. As mentioned in Section 2.1, the aim of bug triage is to assign developers for bug fixing. Once a developer is assigned to a new bug report, the developer can examine historically fixed bugs to form a solution to the current bug report [12], [27]. For example, historical bugs are checked to identify whether the new bug is a duplicate of an existing one [22]; moreover, existing solutions to bugs can be searched for and applied to the new bug [10]. Thus, we consider removing duplicate and noisy bug reports to reduce the number of historical bugs.
In practice, the labor cost of developers (i.e., the cost of examining historical bugs) can be saved by reducing the number of bugs through instance selection. Word dimension. We use feature selection to remove noisy or duplicate words in a data set. After feature selection, the reduced data set can be handled more easily by automatic techniques (e.g., bug triage approaches) than the original data set. Besides bug triage, the reduced data set can be further used for other software tasks after bug triage.
iv. Improving the Accuracy
Accuracy is an important evaluation criterion for bug triage. In our work, data reduction detects and removes noisy or duplicate information in data sets (see the examples in Section 2.2). Bug dimension. Instance selection can remove uninformative bug reports; meanwhile, we observe that the accuracy may decrease when bug reports are removed.
<table>
<thead>
<tr>
<th>Table 1: Bug Data Set</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Category</strong></td>
</tr>
<tr>
<td>User Interface Error</td>
</tr>
<tr>
<td>Error Handling</td>
</tr>
<tr>
<td>Boundary Related Errors</td>
</tr>
<tr>
<td>Calculation Errors</td>
</tr>
<tr>
<td>Control flow Errors</td>
</tr>
</tbody>
</table>
V. PREDICTION FOR REDUCTION ORDERS
Following Section 3.1, given an instance selection algorithm IS and a feature selection algorithm FS, FS -> IS and IS -> FS are viewed as two orders for applying the reduction techniques. Hence, a challenge is how to determine the order of reduction techniques, i.e., how to choose between FS -> IS and IS -> FS. We refer to this problem as the prediction of reduction orders.
i. Reduction Orders
To apply data reduction to each new bug data set, we would have to check the accuracy of both orders (FS -> IS and IS -> FS) and choose the better one. To avoid the time cost of manually checking both reduction orders, we consider predicting the reduction order for a new bug data set based on historical data sets. As shown in Fig 2.1, we convert the problem of predicting reduction orders into a binary classification problem. A bug data set is mapped to an instance, and the associated reduction order (either FS -> IS or IS -> FS) is mapped to the label of a class of instances. Note that the classifier needs to be trained only once when facing many new bug data sets.
That is, training such a classifier once can predict the reduction orders for all the new data sets without checking both orders. To date, the problem of predicting reduction orders of applying feature selection and instance selection has not been investigated in other application scenarios. From the perspective of software engineering, predicting the reduction order for bug data sets can be viewed as a kind of software metrics, which involves activities for measuring some property of a piece of software (Fig.: steps of predicting reduction orders for bug triage). However, the attributes in our work are extracted from the bug data set, while the features in existing work on software metrics describe individual software artifacts, e.g., an individual bug report or an individual piece of code. In this paper, to avoid ambiguous terminology, an attribute refers to an extracted characteristic of a bug data set, while a feature refers to a word of a bug report.
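The mapping above can be sketched as follows. The attribute vector (number of reports, vocabulary size, average report length) and the nearest-neighbour classifier are illustrative assumptions; the paper's actual model may use different attributes and a different learner.

```python
# Sketch: predicting the reduction order (FS->IS vs IS->FS) as binary
# classification over summary attributes of historical bug data sets.
import math

def attributes(reports):
    """reports: list of (words, developer). Returns an attribute vector."""
    n_reports = len(reports)
    vocab = {w for words, _ in reports for w in words}
    avg_len = sum(len(words) for words, _ in reports) / max(n_reports, 1)
    return (n_reports, len(vocab), avg_len)

def predict_order(history, new_reports):
    """history: list of (attribute_vector, label) pairs, where the label is
    "FS->IS" or "IS->FS" (whichever order performed better on that data set).
    A 1-nearest-neighbour rule stands in for the trained classifier."""
    x = attributes(new_reports)
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, v)))
    return min(history, key=lambda h: dist(h[0]))[1]
```

Training happens once over the historical data sets; each new data set then gets an order prediction without running both reduction pipelines.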
VI. DISCUSSION
In this paper, we propose the problem of data reduction for bug triage to reduce the scale of bug data sets and to improve the quality of bug reports. We use techniques of instance selection and feature selection to reduce noise and redundancy in bug data sets. However, not all the noise and redundancy are removed. The reason is that it is hard to accurately detect noise and redundancy in real-world applications. On one hand, due to the large scale of bug repositories, there are no adequate labels to mark whether a bug report or a word belongs to noise or redundancy; on the other hand, since all the bug reports in a bug repository are recorded in natural language, even noisy and redundant data may contain useful information for bug fixing. In our work, we propose data reduction for bug triage; the remaining difficulty is caused by the complexity of bug triage, which we explain as follows. First, in bug reports, statements in natural language may be hard to understand clearly; second, there are many potential developers in bug repositories; third, it is hard to cover all the knowledge of bugs in a software project, and even human triagers may assign developers by mistake. Our work can be used to assist human triagers rather than replace them. In this paper, we build a predictive model to determine the reduction order for a new bug data set based on historical bug data sets. Attributes in this model are statistical measurements of bug data sets, e.g., the number of words or the length of bug reports. No descriptive terms of bug data sets are extracted as attributes. We plan to study more detailed attributes in future work.
In our work, we present a method to determine the reduction order of applying instance selection and feature selection. Our work is not a perfect solution to the prediction of reduction orders and can be viewed as a step towards automatic prediction. We can train the predictive model once and predict reduction orders for each new bug data set. The cost of such prediction is low, compared with trying each of the orders on the bug data sets. Another potential issue is that bug reports are not reported at the same time in real-world bug repositories. In our work, we extract attributes of a bug data set and consider that all the bugs in this data set are reported within a certain period. Compared with the time span of bug triage, the time range of a bug data set can be ignored. Thus, the extraction of attributes from a bug data set can be applied to real-world applications.
VII. CONCLUSION
Bug triage is an expensive step of software maintenance in both labor cost and time cost. In this paper, we combine feature selection with instance selection to reduce the scale of bug data sets as well as improve the data quality. To determine the order of applying instance selection and feature selection for a new bug data set, we extract attributes of each bug data set and train a predictive model based on historical data sets. We empirically investigate the data reduction for bug triage in the bug repositories of two large open source projects, namely Eclipse and Mozilla. Our work provides an approach to leveraging data processing techniques to form reduced and high-quality bug data in software development and maintenance. In future work, we plan to improve the results of data reduction in bug triage, to explore how to prepare a high-quality bug data set, and to handle domain-specific software tasks. In our system, a set of users can upload files after registering on the site; inactive developers are deactivated by the classifier; developers resolve bug reports by specifying the solution in the designated area; and new bugs are added into the bug data set by the classifier [10]. For predicting reduction orders, we plan to investigate the potential relationship between the attributes of bug data sets and the reduction orders.
Durham Research Online
Deposited in DRO:
01 November 2010
Version of attached file:
Published Version
Peer-review status of attached file:
Peer-reviewed
Citation for published item:
Further information on publisher’s website:
http://dx.doi.org/10.1109/ICEBE.2005.33
Publisher’s copyright statement:
© 2005 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Abstract
E-commerce development and applications have been bringing the Internet to business and marketing and reforming our current business styles and processes. The rapid development of the Web, in particular, the introduction of the semantic web and web service technologies, enables business processes, modeling and management to enter an entirely new stage. Traditional web based business data and transactions can now be analyzed, extracted and modeled to discover new business rules and to form new business strategies, let alone mining the business data in order to classify customers or products. In this paper, we investigate and analyze the business integration models in the context of web services using a micro-payment system because a micro-payment system is considered to be a service intensive activity, where many payment tasks involve different forms of services, such as payment method selection for buyers, security support software, product price comparison, etc. We will use the micro-payment case to discuss and illustrate how the web services approaches support and transform the business process and integration model.
1. Introduction
Use of the Internet has reshaped people's lives in many ways. At the beginning, people used the Web as a huge information repository to search for what they needed. Nowadays, people do much more on the Web. They advertise, they publish articles, they play games, they go shopping, and they even gamble. Two decades ago, people started doing business through the Internet. EDI (Electronic Data Interchange), established in the 1980s, is a standard for exchanging business documents electronically with trading partners in a standardized format. It facilitates the exchange of information between different organizations and reduces the cost and time involved in manual data entry. Nowadays, the Internet and the Web are reshaping people's business lives. Terms like e-commerce, e-procurement, e-payment, and e-marketplace illustrate a large number of different requirements, which need to be met by various e-business models and technologies [9]. In this paper, we attempt to investigate the business process and models in the context of web services.
1.1. Business process and integration model
The rapid development of the Web, in particular, the introduction of the semantic web and web service technologies enables business processes, models, and patterns to enter an entirely new stage. Traditional web based business data and transactions can now be extracted and modeled to discover business rules and business strategies, let alone mining the business data in order to classify customers or products.
A modern business model is a very complicated system which, in general, consists of information processing, collaborative work, enterprise analysis, and service management. These components represent a horizontal view of the business structure, while the service management component itself also provides a refinement mechanism that can be roughly expressed in three intersecting facets in the vertical dimension, that is, the business level, the component level, and the functional level. At the business level, we describe general business objectives and requirements. At the component level, we refine the business objectives into a number of concepts or processes, which are modeled in information system development. These concepts or processes can be further decomposed. At the functional level, we focus mainly on implementation issues. The high-level components are now described in various functions and programming modules.
Most of the web services components are concepts, processes, or functions. For example, services described in OWL-S have an ontology, which contains a service profile, a service process, and a service grounding. Here the service profile provides the concepts describing the service, such as business contacts; the service process describes how the service works (including input and output parameters), which is very close to a functional description of the service [8]. Another approach, proposed in [11], suggests viewing software components as services. This concept is significant for modeling, communicating, and composing services. However, it is quite difficult to build a repository to manage this huge number of software components.
The introduction of semantic web methodology, such as ontology definition, gives us a powerful tool to describe the meaning of business processes, business structures, and business rules. It is useful to build various specific ontologies for business objects within a business organization, such as personnel, products, and management hierarchies. For example, to model access rights for business, RBAC has been produced [2].
1.2. Web service issues
Web Service is a piece of software that makes itself available over the Web and uses a standardized XML messaging system [1]. Its significance lies in composition of software for various purposes for users who attempt to accomplish certain tasks with the composed services through the Web. A web based application system requires various software components, such as security supporting programs and payment methods, to work together to accomplish the required tasks. In other words, it is required that a dynamic construction of various related services and software be performed to meet a particular user’s specific need.
Many pieces of such software exist on the Web. The Web Services methods provide various supporting facilities to describe the software for sharing and reuse. Among others, two issues are significant for web based application system development. The first is service description and selection of software components. Web Service Description Language (WSDL) and Universal Description, Discovery, and Integration (UDDI) are proposed standards for software component description. The purpose of the description is for the users to select the right and suitable software components. The second is service integration or composition of the selected software components. The integration issue is based on the service description. During the integration process, the service components, which fit together to serve the users' requirements, are composed to form the unified software for particular tasks.
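The selection step can be illustrated with a toy registry. Real systems query UDDI registries over WSDL descriptions; here a plain list of dictionaries stands in, selection is simple keyword matching, and every service name and field below is hypothetical.

```python
# Sketch: selecting software components from a toy service registry.
REGISTRY = [
    {"name": "PayLite", "keywords": {"payment", "micropayment", "wallet"}},
    {"name": "SecureSign", "keywords": {"security", "signature"}},
    {"name": "PriceScan", "keywords": {"price", "comparison", "product"}},
]

def select_services(required_keywords):
    """Return the names of registered services matching every required keyword."""
    required = set(required_keywords)
    return [s["name"] for s in REGISTRY if required <= s["keywords"]]
```

The integration step would then compose the selected components into the unified software for the task at hand.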
In this paper, a web based application system, the micro-payment system, is a software and service intensive activity, where many payment tasks involve different forms of services, such as payment method selection for buyers, security support software, product price comparison, etc. Using the web service technology will greatly enhance the quality and performance of the micro-payment system.
1.3. Business model for micro-payments
A key business issue in applying micro-payment techniques in the context of web services is that more and more services distributed over the Web will be organized dynamically to meet a task given by a user. These services comprise a collection of software components or functions from various sources. Together with the data and resources provided (and changed) by the content/service providers, the services or software pieces will also be charged for. For example, a customer may like to know the one-day cost for her mobile phone when she was abroad.
Currently, the description of a service mainly focuses on simple semantic description and ontology using UDDI, and on profiles and processes using WSDL. Where payment is concerned, a suitable description of services in terms of business processes and transactions is indispensable. In [4], BPEL4WS has been developed for describing business processes for web services. However, implementing semantic description in web services modeling, and hence representing the business objectives and transactions, is crucial because we need to compose web services to form a greater task for a given goal, where the description for payment is considered a dynamic part of the composition process of web services.
1.4. Paper overview
The paper is organized as follows. In the next section, we describe general business integration model, focusing on business integration model, business ontologies, and integration issues. Then, in section 3, we describe the case of micro-payment, discussing its procedure, system, and components. We will also discuss what the business process model issues are considered. In section 4, the web service approaches to the business model of micro-payment system are discussed, where we propose some business service patterns and semantic description model for micro-payment systems. Finally we conclude the paper in section 6 by proposing our future work in this direction.
2. The business integration model
According to [4], a business process model can be viewed as a formal definition of composing objects, which provides the description of the behavior and interactions of a process instance relative to its partners and resources through Web services interfaces. It provides a standard XML language to express business processes consisting of defined functions. The business process model considers both design-time and runtime uses. At design time, development or modeling tools can use, import, or export the proposed models and objects, allowing business analysts to specify processes and developers to refine them and bind process steps to specific service implementations. At runtime, a business workflow engine can use the models and objects to control the execution of processes and invoke the services required to implement them.
From the viewpoint of business integration, the business process model describes a set of requirements for an integration framework that enables the high level requirements and the low level functions to match each other. Through web services support, a business process can be effectively mapped into their functions via a logical process model. The business process can be exported and translated into the business workflow and functions, which are further implemented as web services. In a distributed environment, web services are used to enable the business integration technology, so un-interoperable software components can work together.
One of the goals of web services is universal interoperability between applications by using web standards. The Business Integration technology can be used for this purpose, to deliver a loosely coupled integration model, allowing flexible integration of heterogeneous systems within an enterprise or in a variety of domains, including a business-to-business or business-to-consumer process model.
2.1. Business integration model
The business integration consists of four components, Information processing, Collaboration, Enterprise modeling, and Service self-management, which can be further decomposed into business functions. These four components are generally described as follows:
- Information processing component is to handle various sources of data from the service providers and aggregate the data for the end users. The Web services technology will be used to make the content of the data available to the customers.
- Collaboration component is to deal with the individuals, programs, and organizations to work together to make payment transactions done. Access controls and management are the key issue, which should be processed in the Web Services infrastructure.
- Enterprise component is to handle the business processes, systems requirements, and basic services for the payment systems. The entire business value chain will be modeled and supported by using the Web Service technology.
- Service self-management component is to deal with all the services that provide support to the above three components. The services will be selected and integrated based on the availability and on-demand principles.
2.2. Ontologies
As depicted in Fig. 1, the business ontology group contains four specific ontologies, which inherit the characteristics from the upper level ontology. These four ontologies, described below, attempt to represent various dimensions in a business process and integration model.
- Service ontology provides various ontologies for the application domains. These domain ontologies vary from one application to another. For example, payment service has its own payment ontology, while messaging service provides its specific messaging ontology.
- Task ontology provides conceptual description for various flows, which express the relationships between business tasks and their sub-tasks.
- Organization ontology describes structural conceptions and relationships. It contains departmental issues, personnel issues, across-organizational issues, and various internal and external agent issues.
- Economy and finance ontology is actually a specific ontology for our application case, the micro-payment system. Its aim is to indicate that it is the business ontology beyond the business domain.
Fig. 1 Ontology group for web based business system
2.3. Integration issues
The integration issues involve the following four aspects, i.e., platform integration, ontology integration, virtual organization, and metadata model integration.
- Platform integration deals with integration of various platforms, such as e-Wallet in mobile devices, server-managers, and web service interfaces.
- Ontology integration considers different views represented in different ontologies. Examples include the ontologies given in the section above.
- Virtual organization indicates a collaborative work required in order to accomplish a given task. For example, a payment transaction may need to co-use the micro-payment method and credit card payment method.
- Metadata model integration copes with different representations of business requirements and functions due to using different metadata modeling languages, such as RDF, OWL, WSRF, WML.
3. Case Study: a Micro-payment system
Let us consider how a micro-payment system works, see Fig. 2. We assume that all transactions take place over the Internet. Similar to ordinary marketplace, where people visit, select, and buy things, now we have an electronic marketplace. You can enter the e-marketplace, choose what you want and pay for them. Consequently, there are at least two basic components. One is a collection of goods (probably services or pieces of information) in the e-marketplace. The other is a wallet holding some money (usually small amount) enabling you to pay for your purchase. This wallet is called electronic wallet or e-wallet. As a matter of fact, there are another three parties involved in the e-marketplace and micro-payment. The first one is Internet Service Providers (ISPs), which enable you to connect to the Internet, such as telecom companies. The second one is Internet Content Providers (ICPs), which provide products, goods, services, or other items for buyers [3, 6]. The third one is a bank or a financial unit to certify the wallet you hold is valid and has a certain amount of electronic cash (e-money).
3.1. System structure
A micro-payment system consists of these components: an e-wallet, a billing system at ISP/bank, and a billing system at merchant. In the following, we describe these components and their connections.
The e-wallet component itself has three parts: an Internet connection to ISP and ICP, a small billing mechanism, and an interactive interface. The Internet connection allows it to access the required certificate data from ISP or a bank (of course, ISP has already provided the user the Internet services) when necessary, and to ICP for shopping. The small billing mechanism (compared with the large billing system maintained at the ISP side) lets the user know how much he has spent and how much left in the e-wallet. The mechanism also provides a list of items the user has bought and other related information. The interactive interface allows the user to communicate with the system, for example, login messages.
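The small billing mechanism described above can be sketched as a class that tracks the remaining balance and the list of items bought. The field names and the currency unit (cents) are our assumptions, not part of the system described in this paper.

```python
# Sketch of the e-wallet's small billing mechanism.
class EWallet:
    def __init__(self, certified_balance_cents):
        self.balance = certified_balance_cents  # set when ISP/bank certifies the wallet
        self.purchases = []                     # list of (item, price) the user bought

    def can_pay(self, price):
        """Tell the user whether enough e-money is left for this price."""
        return price <= self.balance

    def record_payment(self, item, price):
        """Update the wallet-side billing record after a certified payment."""
        if not self.can_pay(price):
            raise ValueError("insufficient e-money")
        self.balance -= price
        self.purchases.append((item, price))
```

The Internet connection and the interactive interface would wrap around this state, fetching certificates from the ISP or bank and reporting spending to the user.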
3.2. Transaction flows
In this section, we describe a collection of possible certification transactions between the micro-payment parties (see Fig. 3). First of all, we assume that the buyer's wallet, which has been initially certified by the ISP or bank, holds some e-money. This is called pre-certified payment. When a buyer has chosen some item to buy, by a click of payment he issues a “certify payment” to the ICP. Then the ICP sends a “certify check” message to the ISP. This is the first round of certification: from the buyer, to the ICP, and on to the ISP.
The ISP checks the buyer's ID and the amount the buyer has in his wallet. If the buyer's ID is correct and the amount is sufficient for this payment, the ISP sends a “certify reply” message back to the ICP. This is the second round of certification: from the ISP back to the ICP. When the ICP gets “yes” from the ISP, it delivers the item to the buyer.
During the second round, the ISP, in this order, updates the billing records at the buyer’s wallet, its own billing records, and the ICP’s billing records.
If the buyer’s ID is wrong or the amount is not sufficient, the ISP will send messages to both the buyer and the ICP, informing “no certify reply”.
Fig. 3 Transaction flow among Micro-Payment components
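The two certification rounds can be sketched as below. The data structures (a dict of ISP account balances, a list as the ICP ledger) and the message strings are illustrative, not the system's actual protocol messages.

```python
# Sketch of the certification rounds: buyer -> ICP ("certify payment"),
# ICP -> ISP ("certify check"), ISP -> ICP ("certify reply" or "no certify reply").
def certify_check(isp_accounts, buyer_id, amount):
    """ISP side: validate the buyer's ID and balance, then debit on success."""
    balance = isp_accounts.get(buyer_id)
    if balance is None or balance < amount:
        return "no certify reply"
    isp_accounts[buyer_id] = balance - amount    # update ISP billing record
    return "certify reply"

def certify_payment(isp_accounts, icp_ledger, buyer_id, item, amount):
    """ICP side: forward the buyer's request to the ISP; deliver on 'yes'."""
    reply = certify_check(isp_accounts, buyer_id, amount)
    if reply == "certify reply":
        icp_ledger.append((buyer_id, item, amount))  # update ICP billing record
        return "delivered"
    return "rejected"
```

In the real system the ISP also updates the buyer's wallet-side billing record during the second round; that third update is omitted here for brevity.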
3.3. Payment description
In the micro-payment system, data transfer through the Internet is most important. Therefore, how to describe the transferring data so that such description can meet the current and future requirements of the Internet data exchange and maintenance is critical. Since W3C proposed XML to be a standard for various data representation in the Web, XML has been widely accepted and used for description of a great variety of the Web resources. XML has the advantages of extensibility, separation of content from presentation, strict syntax, well-formedness, etc.
Recently, based on the XML specifications, a markup language specification for describing micro-payments has been proposed to the W3C [15]. This specification provides an extensible way to embed in a Web page all the information necessary to initialize a micro-payment (amounts and currencies, payment systems, etc.). This embedding allows different micro-payment electronic wallets to coexist in an interoperable manner.
This specification defines a set of tags for description of payment related information to be transferred on the Web.
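As a hedged illustration of such embedding, the snippet below builds and parses a small XML payment fragment with Python's standard library. The tag names (`payment`, `amount`, `paymentsystem`, `item`) are hypothetical placeholders, not the actual vocabulary of the W3C micro-payment markup.

```python
# Illustrative only: these tags are invented, not the W3C-defined set.
import xml.etree.ElementTree as ET

payment = ET.Element("payment")
ET.SubElement(payment, "amount", currency="USD").text = "0.50"
ET.SubElement(payment, "paymentsystem").text = "e-wallet"
ET.SubElement(payment, "item").text = "song.mp3"

# Serialize the fragment as it might be embedded in a Web page.
xml_text = ET.tostring(payment, encoding="unicode")
print(xml_text)

# A wallet implementation would parse the embedded fragment back out:
parsed = ET.fromstring(xml_text)
print(parsed.find("amount").text, parsed.find("amount").get("currency"))
```

Because the payment details travel as structured markup rather than free text, different wallets can interpret the same page interoperably.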
3.4. Business process in Micro-payment
The micro-payment system provides a good case in which various business issues arise. On one hand, the requirements coming from the end users (such as buyers) and from experts in the finance domain are usually very general, informal, and imprecise, yet critical and essential. These requirements directly motivate the business process and high-level service descriptions, as well as various subjects and concepts.
On the other hand, the development of the micro-payment system requires a large number of existing and reusable software components and data sources, which are available as various types of services via the Web. An effective description framework for the business integration model is indispensable, because these services must match our goals and must be coupled and integrated together in order to accomplish a given task.
Still another aspect is the middle level of requirements and description, which is also captured by the business process and integration model. In this model, we need to specify the business tasks and workflows, which represent the general users’ requirements and objectives, embody the business strategies, structure, and nature, and motivate the functional and dynamic requirements and specifications for the software components and data structures.
In brief, the micro-payment case makes it quite obvious that a business process and integration model with a web services mechanism is required to describe a vertical business refinement process as well as a horizontal integration process. In the next section, we discuss how these issues are addressed with web services.
4. Web Service in the business model of micro-payment
In this section, we discuss how web services support the business process and integration model in the micro-payment case. Micro-payment is a complex, software-component-based, distributed system. It involves different parties for different functions. To accomplish one micro-payment transaction, many services are required, such as setting up appropriate security channels, checking the payer’s identity, and keeping daily purchase records. These business services form different patterns [12], serving different users’ requirements and purposes. In order to describe the businesses and services in a semantically rich manner, we need to build a semantic description framework for the business services. This framework is also used to describe the end user’s requirements and profile. In the following we first discuss the semantic description framework for web services, and then illustrate three aspects of business services: business combination patterns, semantic description, and requirements matching.
4.1. Semantic description framework
The proposed semantic description framework model, based on the W3C recommendations RDF and RDF Schema, contains the following components: a characteristic-based semantic description modeling component for representing the businesses and general services; a process-based description modeling component for expressing service functions and processes; and a structured requirement modeling component to represent the end users’ requirements and requests, as well as their profiles. In addition, we also use an ontology model for representing the business ontology and service ontology, where the domain knowledge is structurally expressed.
The semantic description framework contains three major components: a semantic descriptor, a pattern constructor, and a user request formation. The semantic descriptor is for business, service, and function description. As discussed earlier, the business description specifies business services and the service description specifies service functions. In addition, ontology support for general business and service is also maintained within this component.
The pattern constructor uses the service and pattern repository to collect the matched services discovered through the matchmaking and semantic mapping mechanism, and to form the three service patterns: integration, aggregate, and composite. For the integration pattern, the service function description is critical, because coupling different services together into a chain of services for a specific purpose requires accurate function specifications.

The user request formation applies the criteria supplied by the semantic description framework and the ontology structure to re-structure user requests, so that the requests carry more semantics for later matchmaking with the service descriptions in the registry component. Currently we are developing a graph matching algorithm, in which a user request is seen as a sub-graph and the service description schema is considered to be a large service graph [7].
4.2. Business combination patterns
In a payment environment, users employ a number of payment services, for example pay-by-e-cash, pay-by-e-wallet, or pay-by-e-card, or payment services provided by different service providers. Each payment method can be decomposed into a number of services. These services and sub-services work together in one way or another to meet the users’ requirements, and these ways of working can be considered to follow certain kinds of service patterns.
Service patterns suggest the forms in which services are organized to serve certain purposes or to accomplish certain tasks. Such a form or model is particularly significant in situations where many distributed services are linked, integrated, and coordinated together with their data sources, and even human resources, to achieve a large common goal. Such organizations are called “virtual organizations”.
These service patterns can be considered in three forms: a service integration pattern, a service aggregate pattern, and a service composite pattern. The service integration pattern maintains a set of services or software components organized in parallel. When a service request comes, a “broker” checks the requirements of the request and finds the most suitable service to meet it. To the users, it appears as if there were only one service, for example a single payment service, that they need to know about.
The service composite pattern has been widely discussed and is mainly considered to be a line of services. Each service depends on its preceding service for inputs and supplies outputs to its follower. A special situation arises when one service requests or calls another service in order to fulfill its own tasks. The key issue here is the dynamic composition of services with a secure or fault-tolerant mechanism.
The service aggregate pattern covers situations where a set of services is available and one of them is selected according to the users’ requirements. These services serve almost the same purposes and perform the same functions; their differences may lie in, for example, low cost but light security support, or better usability but less functionality. This pattern emphasizes service availability, whereas the composite pattern focuses on demands.
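The composite and aggregate patterns can be illustrated in a few lines of Python. This is a minimal sketch under invented names: the service functions (`check_identity`, `charge_wallet`) and the selection keys are assumptions, not part of the paper's framework.

```python
# Composite pattern: a line of services, each consuming the previous
# service's output (service functions are hypothetical).
def compose(services):
    def pipeline(request):
        for service in services:
            request = service(request)
        return request
    return pipeline

def check_identity(req):
    return {**req, "id_ok": True}

def charge_wallet(req):
    return {**req, "charged": req["amount"]}

pay = compose([check_identity, charge_wallet])
print(pay({"amount": 0.5}))  # {'amount': 0.5, 'id_ok': True, 'charged': 0.5}

# Aggregate pattern: several near-equivalent services, one selected
# according to the user's stated preference.
candidates = {"low-cost": charge_wallet, "high-security": charge_wallet}
selected = candidates["low-cost"]
```

The integration pattern would add a broker in front of `candidates` that inspects each request and picks the match itself, so the user sees only one service.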
How these business patterns for payments are developed depends heavily on a good semantic description method, which can provide rich semantics for services and better information exchange over the Web. The aim of proposing these patterns is to suggest business models of payments that better serve the users with web service techniques.
4.3. Service description
According to the triangle architecture for web services [13, 14], a web service involves three parties: a registry where the service is registered and “described”; a service provider, which also supplies service specifications; and a service consumer or user, who makes a service request together with some form of service description. The service description mainly contains a business description and a service description. The business description is about the business situation of the service provider, and the service description is about the services that the business is providing. The proposed semantic description framework aims at enriching the business services with richer semantics for better matching between the users’ requirements and the service descriptions.
**Payment service description.** We attempt to maintain a semantic description framework that provides a full description of a payment service. The framework includes a static part, which describes general transaction methods and general payment processes, and a dynamic part, which stipulates operations such as match bindings. The following is an example of the semantic description framework (static part) for a payment transaction.
A transaction object includes transaction ID, transaction description, transaction properties, and transaction ontology. Transaction ID is the unique number for the transaction, which is used to identify the transaction. Transaction Description is a short description for the transaction for human understanding and NL semantic processing. Transaction Properties are RDF based description for the transaction. The properties include, for example, name, relationships with other transactions, transaction types, etc. Transaction ontology describes a set of hierarchical structures, such as conceptual taxonomy, process hierarchy structure, etc.
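The transaction object just described can be rendered as a simple data structure. The sketch below uses a Python dataclass; the field names follow the text, while the concrete types and example values are assumptions for illustration.

```python
# Minimal sketch of the transaction object: ID, description,
# RDF-style properties, and ontology structures (types assumed).
from dataclasses import dataclass, field

@dataclass
class Transaction:
    transaction_id: str                                # unique identifier
    description: str                                   # short human-readable summary
    properties: dict = field(default_factory=dict)     # RDF-based properties
    ontology: dict = field(default_factory=dict)       # hierarchical structures

t = Transaction(
    transaction_id="tx-0001",
    description="Buy one song for 0.50 USD",
    properties={"name": "pay-song", "type": "micro-payment"},
    ontology={"taxonomy": ["payment", "micro-payment"]},
)
print(t.transaction_id, t.properties["type"])
```

In a real framework the `properties` dict would hold RDF triples and the `ontology` dict would point into shared taxonomy and process-hierarchy structures rather than inline lists.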
In Fig. 5, we illustrate a payment object with its description elements. The description is based on RDF. The pay object is related to a number of values through properties like “From” and “To”. It is also related to a function property called “PAY”. The object describes a “previous” payment, which includes the payment amount, payment methods, and security requirements. The above description fragment is, however, only our first attempt to formalize the description of payment transactions.

**Service function description.** The service function is currently not a well-described part of service description, yet it is useful for improving user requirement matching. Since it is difficult for a business to describe its services in a normalized form, such descriptions are in most cases written in natural language with very informal structure. In our framework, we therefore propose a formal specification for service function description. This formal description mainly concerns what tasks the services accomplish, what processes they perform, what features they possess, and what other processes and services they involve. For example, the functions of a payment service include access checking, identity verification, accounting, clearance, etc.
**User request description.** As the user is a most important party in a business service, the description of user requests heavily influences the quality of requirement matching. However, this work is currently still left to the end users. The end users have to input keywords and select ontology terms for an application domain at a service registry in order to search for the services they request. In most cases, users do not know their exact requirements for services and may not even know how to structure their requirements; their requests can be very vague and ambiguous. To support the end users with their payment requests, the users’ requirements can, for example, be transferred in a certain structure together with their profiles.
Suppose that a user wants to download a single song from a Web-based music shop and to pay 50 cents for it. The user may demand a payment service for this transaction that uses his “e-wallet” and is “secure”. His “e-wallet” provides e-cash, an e-card, and a phone-bill account. By “secure” he means that a security channel should be set up over SSL. According to these requirements, a payment service is searched for in the service registry and a matchmaking mechanism is applied to find the best match. Here we need to emphasize that structuring the users’ requirements and using a semantic framework to describe the users’ profiles is extremely important: the user’s profile gives us a good understanding of what “secure” means.
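One hedged way to structure such a request and match it against a registry is sketched below. All field names and registered services are invented for the example; a real matchmaker would use the semantic framework rather than exact field equality.

```python
# The user's vague demand ("e-wallet", "secure") structured into fields,
# with "secure" resolved to SSL from the user's profile. Hypothetical data.
user_request = {
    "item": "single song",
    "amount": 0.50,
    "method": "e-wallet",
    "security": "SSL",
}

registry = [
    {"name": "quickpay",  "method": "e-card",   "security": "none"},
    {"name": "walletpay", "method": "e-wallet", "security": "SSL"},
]

def match(request, services):
    """Return the services satisfying the structured request fields."""
    keys = ("method", "security")
    return [s for s in services
            if all(s[k] == request[k] for k in keys)]

print(match(user_request, registry))  # matches only the 'walletpay' service
```

The point of the sketch is the structuring step: once "secure" is translated into a concrete field via the profile, matching becomes mechanical.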
4.4. Service integration
Initially, the service integration process searches for and finds services in the service registry according to the users’ requests. However, users’ requests are currently expressed only in terms of service names or keywords. Many approaches have been proposed to enrich the semantics of services to meet the users’ requirements [5, 7, 14]. The main feature of these approaches is to introduce an ontological structure for the description of services and businesses. For example, in order to provide more semantic support for the description and search of services in the service registry, a set of mapping rules can be established between the user query entries described by the semantic description framework and the service registry. Each service description item is also mapped onto a registry entry. The ontology model can be used to identify services, and therefore a richer semantic description of businesses and services, as well as of the users’ requirements, can be achieved using the semantic framework.
The semantic description is also useful in matching techniques, which are used for finding required services or for selecting the best match among a number of service candidates. Currently, we are investigating a formal match method, which we call graph match. Suppose that $a$ and $b$ are two service nodes with an edge $ab$ indicating a possible relationship between them, say $a$ depending-on $b$, and that $G$ is a service description schema. There may be a sub-graph $G'$ in $G$ which contains the nodes $a$ and $b$, as well as a chain of edges linking $a$ and $b$ that contains $ab$. With support from the semantic relativity approach [11] and the large number of services available, this graph match technique becomes more effective for selecting the best services.
5. Conclusion
By analyzing business models through an application case, we can clearly see the various components in the business model. In this paper, we have built on the business process and integration model initiated by IBM [4] from the viewpoint of web services technology, which will extend the business infrastructure. Through an analysis of the development of micro-payment, we described some business and service patterns for the business model of a micro-payment system. We believe that with the gradual deployment of web services techniques, the business integration model and its implementation can be better achieved, as web services will bridge the gap between general business requirements and the descriptions of software components and functions.
However, as this is our initial investigation, we realize that there are many holes in the model. For instance, the stream of vertical refinement is still unclear to us. We know that the process of turning general end users’ requirements into a business process description is very difficult, let alone defining a model or framework that introduces a formal and precise method for this process. Nevertheless, we consider this a step forward.
Our next step is to study further business models with web services and to investigate how web services methodologies will support business process and integration. More concretely, we will pursue the work in the following three aspects. First, we will develop a semantics-based web service description model that is able to cope with both high-level requirements and low-level functions. Second, we intend to describe a workflow or business process in this description model. Third, we will build a business and service pattern repository where each pattern corresponds to a pool of software components and functions.
References
LXI SECURITY EXTENDED FUNCTION
REVISION HISTORY
22 LXI SECURITY EXTENDED FUNCTION
22.1 PURPOSE AND SCOPE OF THIS DOCUMENT
22.1.1 Purpose
22.1.2 Scope
22.2 DEFINITION OF TERMS
22.3 RELATIONSHIP TO OTHER LXI STANDARDS
22.4 REFERENCES
22.5 TERMINOLOGY
22.5.1 LXI Security
22.5.2 Command-and-Control Interface
22.6 ACRONYMS
22.7 COMPLIANCE REQUIREMENTS
22.8 RULE – LXI SECURITY WEB INTERFACE
22.8.1 RULE – LXI Security Web Page Unsecure Mode Indication
22.9 RULE – LXI SECURITY XML IDENTIFICATION DOCUMENT
22.10 OPERATION
22.10.1 RULE – Unsecure Mode
22.10.2 RULE – Multiple LAN Interfaces supporting LXI Security
22.11 INTERFACE REQUIREMENTS
22.11.1 RULE – Support IPv4 Secure Configuration
22.11.2 RULE – Support IPv6 Secure Configuration
22.11.3 RULE – Ignore mDNS Unicast Queries From Outside the Local Link
22.11.4 HTTPS Changes from Device Specification
22.11.5 RULE – Use TLS Version Specified by NIST 800-52
22.11.6 RULE – Use Cipher Suites Permitted by NIST 800-52
22.12 PKI REQUIREMENTS
22.12.1 RULE – IEEE 802.1AR Compliance
22.12.2 RULE – Use the Most Recently Provisioned DevID
22.12.3 IDevID Requirements
22.13 COMMAND-AND-CONTROL REQUIREMENTS
22.13.1 RULE – Secure Command-and-Control Interface
22.13.2 RULE – Client Authentication Configuration
22.13.3 RULE – Unsecure Command-and-Control Interfaces
22.13.4 RULE – HiSLIP Devices Supported SASL Mechanisms
22.13.5 RULE – Devices Shall Support IVI 6.5, SASL Mechanism Specification
22.14 RULE – LXI API SECURITY METHODS
Notices
**Notice of Rights** All rights reserved. This document is the property of the LXI Consortium. It may be reproduced, unaltered, in whole or in part, provided the LXI copyright notice is retained on every document page.
**Notice of Liability** The information contained in this document is subject to change without notice. “Preliminary” releases are for specification development and proof-of-concept testing and may not reflect the final “Released” specification.
The LXI Consortium, Inc. makes no warranty of any kind with regard to this material, including but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The LXI Consortium, Inc. shall not be liable for errors or omissions contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
LXI Standards Documents are developed within the LXI Consortium and LXI Technical Working Groups sponsored by the LXI Consortium Board of Directors. The LXI Consortium develops its standards through a consensus development process modeled after the American National Standards Institute, which brings together volunteers representing varied viewpoints and interests to achieve the final product. Volunteers are not necessarily members of the Consortium and serve without compensation. While the LXI Consortium administers the process and establishes rules to promote fairness in the consensus development process, the LXI Consortium does not exhaustively evaluate, test, or verify the accuracy of any of the information contained in its standards.
Use of an LXI Consortium Standard is wholly voluntary. The LXI Consortium and its members disclaim liability for any personal injury, property or other damage, of any nature whatsoever, whether special, indirect, consequential, or compensatory, directly or indirectly resulting from the publication, use of, or reliance upon this, or any other LXI Consortium Standard document.
The LXI Consortium does not warrant or represent the accuracy or content of the material contained herein, and expressly disclaims any express or implied warranty, including any implied warranty of merchantability or fitness for a specific purpose, or that the use of the material contained herein is free from patent infringement. LXI Consortium Standards documents are supplied “as is”. The existence of an LXI Consortium Standard does not imply that there are no other ways to produce, test, measure, purchase, market, or provide other goods and services related to the scope of the LXI Consortium Standard. Furthermore, the viewpoint expressed at the time a standard is approved and issued is subject to change brought about through developments in the state of the art and comments received from users of the standard. Every LXI Consortium Standard is subjected to review at least every five years for revision or reaffirmation. When a document is more than five years old and has not been reaffirmed, it is reasonable to conclude that its contents, although still of some value, do not wholly reflect the present state of the art. Users are cautioned to check to determine that they have the latest edition of any LXI Consortium Standard.
In publishing and making this document available, the LXI Consortium is not suggesting or rendering professional or other services for, or on behalf of, any person or entity. Nor is the LXI Consortium undertaking to perform any duty owed by any other person or entity to another. Any person utilizing this, and any other LXI Consortium Standards document, should rely upon the advice of a competent professional in determining the exercise of reasonable care in any given circumstances.
This specification is the property of the LXI Consortium, a Delaware 501c3 corporation, for the use of its members.
**Interpretations** Occasionally questions may arise regarding the meaning of portions of standards as they relate to specific applications. When the need for interpretations is brought to the attention of LXI Consortium, the Consortium will initiate action to prepare appropriate responses. Since LXI Consortium
Standards represent a consensus of concerned interests, it is important to ensure that any interpretation has also received the concurrence of a balance of interests. For this reason, LXI Consortium and the members of its working groups are not able to provide an instant response to interpretation requests except in those cases where the matter has previously received formal consideration. Requests for interpretations of this standard must be sent to interpretations@lxistandard.org using the form “Request for Interpretation of an LXI Standard Document”. This document plus a list of interpretations to this standard are found on the LXI Consortium’s Web site: http://www.lxistandard.org
**Trademarks** Product and company names listed are trademarks or trade names of their respective companies. No investigation has been made of common-law trademark rights in any work.
LXI is a registered trademark of the LXI Consortium
**Patents**: Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. A holder of such patent rights has filed a copy of the document “Patent Statement and Licensing Declaration” with the Consortium. By publication of this standard, no position is taken with respect to the existence or validity of any patent rights in connection therewith. Other patent rights may exist for which the LXI Consortium has not received a declaration in the form of the document “Patent Statement and Licensing Declaration”. The LXI Consortium shall not be held responsible for identifying any or all such patent rights, for conducting inquiries into the legal validity or scope of patent rights, or determining whether any licensing terms or conditions are reasonable or non-discriminatory. Users of this standard are expressly advised that determination of the validity of any patent rights, and the risk of infringement of such rights, is entirely their own responsibility.
**Conformance** The LXI Consortium draws attention to the document “LXI Consortium Policy for Certifying Conformance to LXI Consortium Standards”. This document specifies the procedures that must be followed to claim conformance with this standard.
**Legal Issues** Attention is drawn to the document “LXI Consortium Trademark and Patent Policies”. This document specifies the requirements that must be met in order to use registered trademarks of the LXI Consortium.
### Revision history
<table>
<thead>
<tr>
<th>Revision</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.0 2022-05-10</td>
<td>Initial Version</td>
</tr>
<tr>
<td>1.0 2022-05-24</td>
<td>This revision updates the term ‘insecure’ to ‘unsecure’ throughout the document.</td>
</tr>
<tr>
<td>1.1 2023-01-26</td>
<td>Added rules regarding NIST 800-52 (TLS version and cipher suites)</td>
</tr>
</tbody>
</table>
22 LXI Security Extended Function
The LXI Security Extended Function adds support for securing the Command-and-Control Interface to the device, making secure connections and authentication with the device Web Server, and adding a REST API for configuring security settings.
22.1 Purpose and Scope of this Document
This document is an extension of the LXI Device Specification 2021. The numbering for sections, RULES, and RECOMMENDATIONS is consistent with the hierarchy of the LXI Device Specification 2021.
22.1.1 Purpose
The purpose of the LXI Security Extended Function is:
- The LXI Security Extended Function specifies device behavior that enables clients to establish secure connections to devices. The required device capabilities include:
- Protocols and capabilities to support secure communication
- Requirements that devices permit more extensive configuration of protocols required by other LXI specifications
- APIs to configure the capabilities of interest to secure applications and provision certificates to devices
- The primary goals for security within industrial networks follow the key principles of Confidentiality, Integrity, and Authenticity. Confidentiality ensures that data transported over the network cannot be read by anyone but the intended recipient. Integrity means that any message received is confirmed to be exactly the message that was sent. Finally, Authenticity ensures that a message that claims to be from a given source is, in fact, from that source.
- Secure communication between test computers and LXI devices requires encryption and authentication. The LXI Security Extended Function supports authenticated and encrypted communication with T&M instruments and also addresses security for LXI-device-hosted web pages, confirming device authenticity and providing secure communication.
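As a hedged client-side illustration (not an LXI device implementation), Python's standard `ssl` module can be configured to enforce a TLS floor consistent with NIST SP 800-52, which this extended function's rules reference for TLS versions and cipher suites. The certificate paths in the comment are placeholders.

```python
# Sketch: a client TLS context restricted to TLS 1.2+ with server
# certificate verification, in the spirit of the NIST SP 800-52 rules.
import ssl

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # NIST 800-52 floor
context.check_hostname = True                     # confirm device identity
context.verify_mode = ssl.CERT_REQUIRED           # reject unauthenticated peers

# For mutual authentication the client would also load its own credentials
# (file names are placeholders):
# context.load_cert_chain("client-cert.pem", "client-key.pem")
```

Wrapping a socket with this context gives encryption (confidentiality and integrity from TLS) plus authentication of the device via its certificate, matching the three principles listed above.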
22.1.2 Scope
This document defines a set of RULES and RECOMMENDATIONS for constructing an LXI Device conformant with the LXI Security Extended Function. Whenever possible these specifications use existing standards.
The standard specifies:
1. LXI Security Extended Function Operation Rules
2. LXI Security Extended Function Interface Requirements
3. LXI Security Extended Function PKI Requirements
4. LXI Security Extended Function Command-and-Control Requirements
5. LXI API Extended Function Requirements (this extended function is defined in an additional document)
Copyright 2021 LXI Consortium, Inc. All rights reserved
22.2 Definition of Terms
This document contains both normative and informative material. Unless otherwise stated the material in this document shall be considered normative.
NORMATIVE: Normative material shall be considered in determining whether an LXI Device is conformant to this standard. Any section or subsection designated as a RULE or PERMISSION is normative.
INFORMATIVE: Informative material is explanatory and is not considered in determining the conformance of an LXI Device. Any section or subsection designated as RECOMMENDATION, SUGGESTION, or OBSERVATION is informative. Unless otherwise noted examples are informative.
RULE: Rules SHALL be followed to ensure compatibility for LAN-based devices. A rule is characterized by the use of the words SHALL and SHALL NOT. These words are not used for any purpose other than stating rules.
RECOMMENDATION: Recommendations consist of advice to implementers that will affect the usability of the final device. Discussions of particular hardware to enhance throughput would fall under a recommendation. These should be followed to avoid problems and to obtain optimum performance.
SUGGESTION: A suggestion contains advice that is helpful but not vital. The reader is encouraged to consider the advice before discarding it. Suggestions are included to help the novice designer with areas of design that can be problematic.
PERMISSION: Permissions are included to clarify the areas of the specification that are not specifically prohibited. Permissions reassure the reader that a certain approach is acceptable and will cause no problems. The word MAY is reserved for indicating permissions.
OBSERVATION: Observations spell out implications of rules and bring attention to things that might otherwise be overlooked. They also give the rationale behind certain rules, so that the reader understands why the rule must be followed. Any text that appears without a heading should be considered description of the specification.
22.3 Relationship to other LXI Standards
This specification impacts several of the rules and recommendations in the LXI Device Specification and other LXI Extended Functions. In every case, the rules and recommendations in this specification supersede the other LXI standards. Devices that claim compliance to the LXI Security Extended Function shall follow all the rules specified in this document.
The LXI specifications with rules that are either extended or modified by this specification are:
- LXI Device Specification
- LXI HiSLIP Extended Function
- LXI IPv6 Extended Function
22.4 References
This specification relies heavily on security standards created by other organizations such as the Internet Engineering Task Force (IETF) and the Institute of Electrical and Electronic Engineers (IEEE).
Several of these specifications are under continual improvement. Generally, where a referenced standard has a successor, LXI devices should comply with the most recent version of the specification.
Several of the specifications noted below have extensions as well. Generally, LXI devices should be designed to account for the complete specification and any extensions.
Some of the key references are:
- IEEE802.1AR Secure Device Identity Specification
- RFC5280 Internet X.509 Public Key Infrastructure Certificate and CRL Profile
- RFC 7235 HTTP Authentication Framework
- RFC 7616 HTTP Digest Mechanism
- RFC 7617 HTTP Basic Mechanism
- TPM Specs Trusted Computing Group Trusted Platform Module (TPM) specifications
- HiSLIP IVI Foundation HiSLIP specification (IVI-6.1)
- SCPI IVI Foundation SCPI specification
- NIST 800-52 NIST Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations
22.5 Terminology
The following terms are used throughout this specification.
22.5.1 LXI Security
The full reference to this specification is the LXI Security Extended Function. For clarity and brevity, it is generally referred to as LXI Security in this document.
22.5.2 Command-and-Control Interface
LXI devices usually provide a mechanism for remote clients to programmatically control, read data from or write data to the device to perform LXI device functions. Popular approaches include:
- Instrument command interfaces (such as SCPI) via TCP, HiSLIP, or VXI-11
- REST interfaces via HTTP or HTTPS
In this specification, these mechanisms are referred to as Command-and-Control interfaces. Command-and-Control interfaces include SCPI, REST, and other interfaces that provide the end user programmatic access to the LXI device.
When the term is used in this document, it refers to Ethernet-based communication. Devices frequently implement other communication interfaces such as USB and GPIB.
22.6 Acronyms
The following acronyms are used in this specification:
- **API** Application Program Interface
- **CA** Certificate Authority
- **CMS** Cryptographic Message Syntax, as defined by RFC 5652 or its successors
- **CSR** Certificate Signing Request
- **DevID** Device Identifier as defined by IEEE 802.1AR. When used in this document, the clarifications in section 22.12.1, *RULE – IEEE 802.1AR Compliance*, are assumed.
- **EST** Enrolment over Secure Transport (RFC 7030)
- **HiSLIP** High-speed LAN Instrument Protocol
- **HTTP** Hyper-text transfer protocol
- **HTTPS** Hyper-text transfer protocol performed over a TLS connection
- **IDevID** Initial Device Identifier as defined by IEEE 802.1AR. When used in this document, the clarifications in section 22.12.1, *RULE – IEEE 802.1AR Compliance*, are assumed.
- **IVI** IVI Foundation (responsible for various standards referenced here)
- **LCI** LAN Connection Initialize
- **LDevID** A locally significant Device Identifier, as defined by IEEE 802.1AR, this is a DevID provisioned to the instrument by the end-customer. When used in this document, the clarifications in section 22.12.1, *RULE – IEEE 802.1AR Compliance*, are assumed.
- **LXI** LAN Extensions for Instruments
- **PEM** Stands for Privacy Enhanced Mail, although the use in this specification is to refer to the conventional PEM File format for representing X.509 certificates.
- **PKI** Public Key Infrastructure
- **REST** Refers to an HTTP API. Stylistically, an API organized as a Representational State Transfer API.
- **SCPI** Standard Commands for Programmable Instruments
- **TCP** Transmission Control Protocol
- **TLS** Transport Layer Security
- **TPM** Trusted platform module
- **XML** Extensible Mark-up Language
22.7 Compliance Requirements
For a device to comply with this specification the device is required to comply with:
- **LXI Device specification version 1.6 or later**
- **The rules called out in this specification**
This includes requirements explicitly called out as rules and any behavior or requirement that states that devices *shall* behave in a certain fashion or provide a certain capability.
22.8 RULE – LXI Security Web Interface
Devices implementing the LXI Security Extended Function shall include ‘LXI Security’ in the ‘LXI Extended Functions’ display item of the welcome web page.
22.8.1 RULE – LXI Security Web Page Unsecure Mode Indication
LXI Secure devices shall provide an indication on the LXI welcome web page if they are currently operating in the Unsecure Mode.
Observation
The LXI welcome page is not necessarily the instrument welcome page nor the instrument landing page. See the LXI Device specification for details.
22.9 RULE – LXI Security XML Identification Document
Devices implementing the LXI Security Extended Function shall include a Function element for the LXI Security Extended Function. The Function element is contained in the XML Device element, with a FunctionName attribute of "LXI Security" and a Version attribute containing the version number of this document.
The Function element shall have a child element, CryptoSuites, which indicates the cryptographic suites supported by the device in a comma-separated list.
Example:
```
<Function FunctionName="LXI Security" Version="1.0">
<CryptoSuites>name,name</CryptoSuites>
</Function>
```
Observation
The names of the listed crypto suites may be sent to the instrument using the LXI API extended function to specify the crypto suite to use for a certificate.
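As an informal illustration, a client can locate this Function element in the identification document and read the advertised suites. The sketch below assumes a namespace-free document for brevity (real LXI identification documents are namespaced) and uses placeholder suite names:

```python
# Sketch: reading the CryptoSuites list from the Function element shown above.
# Namespaces are omitted for brevity; real identification documents use the
# LXI instrument-identification XML namespace. Suite names are placeholders.
import xml.etree.ElementTree as ET

IDENTIFICATION_XML = """
<Device>
  <Function FunctionName="LXI Security" Version="1.0">
    <CryptoSuites>suiteA,suiteB</CryptoSuites>
  </Function>
</Device>
"""

def security_crypto_suites(xml_text):
    """Return the crypto suites advertised by the LXI Security Function element."""
    root = ET.fromstring(xml_text)
    for fn in root.iter("Function"):
        if fn.get("FunctionName") == "LXI Security":
            suites = fn.findtext("CryptoSuites", default="")
            return [s.strip() for s in suites.split(",") if s.strip()]
    return []

print(security_crypto_suites(IDENTIFICATION_XML))  # ['suiteA', 'suiteB']
```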
22.10 Operation
This section contains rules and recommendations related to general device operation.
22.10.1 RULE – Unsecure Mode
An LXI Secure device is regarded as operating unsecurely if its configuration enables protocols or behaviors that are known to be unsecure. If any part of a device configuration is known to explicitly enable unsecure operation, the device is operating in the Unsecure Mode.
If any device setting is in an unsecure configuration, the device is operating unsecurely. The LXI API Extended function identifies LXI-specific device configuration settings that are unsecure. In addition, devices shall determine if device-specific settings put the instrument into an unsecure mode. Generally:
A device is operating in a known unsecure mode if a client can change the device measurement/stimulus/routing configuration or measurement results over an ethernet connection that does not authenticate the device (server) and provide encryption.
Devices provide an API that reports whether the device's current configuration is in an Unsecure Mode. See the LXI API Extended Function.
Observation
The LXI API Extended Function reports if individual Ethernet interfaces are operating in Unsecure Mode. The device unsecure state is the logical OR of each interface's unsecure mode. Devices presumably will report the device unsecure state from the human interface.
Observation
LXI does not assert that a device is in a secure operating mode if it is NOT in the Unsecure Mode. Such an assertion cannot be made without qualifications that are beyond the scope of LXI. LXI assertions are therefore limited to asserting that the device is not in a known unsecure mode; that may not meet all of a customer's needs for security.
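The interface-OR rule can be sketched with a hypothetical device model (the class and field names below are illustrative, not part of the LXI API):

```python
# Sketch of the Unsecure Mode aggregation rule: the device-level indication is
# the logical OR of each interface's unsecure state. Hypothetical data model.
from dataclasses import dataclass

@dataclass
class LanInterface:
    name: str
    unsecure: bool  # True if any setting on this NIC enables unsecure operation

def device_unsecure(interfaces):
    # The device is in Unsecure Mode if ANY interface is operating unsecurely.
    return any(nic.unsecure for nic in interfaces)

nics = [LanInterface("eth0", unsecure=False), LanInterface("eth1", unsecure=True)]
print(device_unsecure(nics))  # True
```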
22.10.1.1 RULE – Vendors Shall Indicate Unsecure for non-LXI Device Settings
Devices shall also indicate they are operating in an Unsecure Mode if settings beyond the scope of LXI Security are considered by the device manufacturer to be unsecure.
22.10.2 RULE – Multiple LAN Interfaces Supporting LXI Security
If multiple LAN network interface cards (NICs) are present in an LXI Secure device, those that are LXI compliant shall support the LXI Security Extended Function.
Observation
The security settings may be independently configured for each LXI compliant NIC.
22.11 Interface Requirements
22.11.1 RULE – Support IPv4 Secure Configuration
All LXI Devices implement IPv4. LXI Secure devices shall implement the secure requirements for IPv4 in this section and the required security methods of the LXI API Extended Function specification.
22.11.2 RULE – Support IPv6 Secure Configuration
Devices that implement IPv6 capability and LXI Security shall implement the secure requirements for IPv6 in this section and the required security methods of the LXI API Extended Function specification. This requirement shall be followed regardless of whether a device complies with the LXI IPv6 Extended Function.
22.11.3 **RULE – Ignore mDNS Unicast Queries From Outside the Local Link**
Since it is possible for an mDNS unicast query to be received from a machine outside the local link, LXI Secure devices shall check that the source address in the mDNS query packet matches the local subnet for that link (or, in the case of IPv6, the source address has an on-link prefix) and silently ignore the packet if not. This behavior is as recommended in RFC6762.
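The on-link check can be sketched with the standard ipaddress module; the interface addressing below is illustrative:

```python
# Sketch of the RFC 6762 source-address check: accept an mDNS unicast query
# only when the sender is on the local link, otherwise silently ignore it.
import ipaddress

def accept_mdns_query(source_ip, interface_network):
    """True when the query's source address lies within the link's prefix."""
    src = ipaddress.ip_address(source_ip)
    net = ipaddress.ip_network(interface_network, strict=False)
    return src in net

print(accept_mdns_query("192.168.1.20", "192.168.1.10/24"))  # True
print(accept_mdns_query("10.0.0.5", "192.168.1.10/24"))      # False
```

The same mechanism covers IPv6: an on-link prefix is simply passed as the interface network.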
22.11.4 HTTPS Changes from Device Specification
The LXI Device specification rev 1.6 or later requires that devices provide an HTTP and HTTPS server (HTTP over TLS) and that certain privileged operations be forwarded to HTTPS and protected using the LXI password. However, devices that also comply with LXI Security differ in that they do not have the LXI password defined in the LXI Device specification. Instead, they use the username/password pairs managed by the LXI Security API. For details, see the LXI Common Configuration API as referenced in 22.14, RULE – LXI API Security Methods.
22.11.5 RULE – Use TLS Version Specified by NIST 800-52
Device TLS implementations shall be able to restrict the TLS version to those permitted by the TLS version guidelines for TLS servers in the version of NIST 800-52 that is current at the time the device goes through LXI compliance testing.
If a device is awarded LXI conformance based on technical justification, it shall be able to restrict the TLS versions to those required by this rule at the time that the new device is presented for conformance.
22.11.6 RULE – Use Cipher Suites Permitted by NIST 800-52
Devices shall have the ability to restrict the cipher suites to those permitted for TLS servers in the version of NIST 800-52 that is current at the time that the device goes through LXI compliance testing.
If a device is awarded LXI conformance based on technical justification, it shall be able to restrict accepted cipher suites to those required by this rule at the time the new device is presented for conformance.
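As a non-normative illustration, a server-side TLS context restricted along these lines could be configured as follows; the cipher string is an example only, and the current edition of NIST 800-52 remains the authority for what is permitted:

```python
# Sketch: restricting a server TLS context. Assumes NIST 800-52's current
# guidance permits TLS 1.2/1.3; the cipher string is illustrative.
import ssl

def make_restricted_server_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Refuse protocol versions older than TLS 1.2.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Limit TLS 1.2 cipher suites (TLS 1.3 suites are negotiated separately).
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
    return ctx

ctx = make_restricted_server_context()
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```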
22.12 PKI Requirements
This section has requirements associated with how a device participates in a public key infrastructure to authenticate its identity.
Note that central to these requirements are the LXI Security APIs which provide the capability to acquire a certificate signing request, provision a certificate, and manage certificates.
22.12.1 RULE – IEEE 802.1AR Compliance
Devices shall comply with the device requirements stated in IEEE 802.1AR with the following caveats:
1. IEEE 802.1AR has a detailed description of the DevID module. In general, LXI Secure device software has no such module externally visible, thus those requirements do not directly bear on an LXI device although the device implementation is expected to substantially follow those requirements. This may be ideally accomplished through either a physical or firmware TPM in conjunction with the LXI Security API.
LXI Security does require an API that includes several certificate management features similar to the DevID Module requirements, see the LXI API Extended Function.
2. IEEE 802.1AR 6.4 implies that DevID certificates can be validated using a CA root certificate as the trust anchor. Although not clearly in conflict with IEEE 802.1AR, LXI Security explicitly permits devices to use self-signed certificates in their DevID, thus making the self-signed certificate itself the trust anchor.
3. IEEE 802.1AR section 5.5, Supplier Requirements, places several requirements on the supplier which are beyond the scope of LXI and are not placed on the device vendor by LXI.
22.12.2 RULE – Use the Most Recently Provisioned DevID
If any LDevID has been provisioned to the device, the IDevID shall not be used, regardless of the cryptographic suite of the LDevID.
Unless explicitly configured otherwise, devices shall use the most recently provisioned valid certificate for each cryptographic suite that the device supports to authenticate itself regardless of the protocol being used.
Observation
Note that 22.12.2, RULE – Use the Most Recently Provisioned DevID, requires that devices initially use the IDevID.
Observation
Most browsers will warn about the IDevID. It potentially warns for the following reasons:
- The IDevID never expires
- Depending on how the IDevID is signed and the authorities in the browser’s trust store, the browser may not trust the root authority.
Observation
There may be multiple ways to provision DevIDs to a device, including the LXI API and device-specific mechanisms such as EST, SCEP, or physically copying certificates to the device.
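The selection logic implied by 22.12.2 can be sketched as follows (the data model is hypothetical; real devices track certificates however their DevID storage dictates):

```python
# Sketch of the DevID selection rule: the most recently provisioned valid
# LDevID wins per crypto suite; the IDevID is barred once ANY LDevID exists.
from dataclasses import dataclass

@dataclass
class DevId:
    suite: str        # cryptographic suite name (illustrative)
    provisioned: int  # provisioning order (monotonic counter)
    is_idevid: bool
    valid: bool = True

def active_devid(certs, suite):
    if any(c.valid and not c.is_idevid for c in certs):
        # At least one LDevID exists: the IDevID shall not be used.
        candidates = [c for c in certs
                      if c.suite == suite and c.valid and not c.is_idevid]
        return max(candidates, key=lambda c: c.provisioned) if candidates else None
    return next((c for c in certs if c.suite == suite and c.is_idevid), None)

store = [DevId("rsa", 0, True), DevId("rsa", 1, False), DevId("rsa", 2, False)]
print(active_devid(store, "rsa").provisioned)  # 2
```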
22.12.3 IDevID Requirements
This section has requirements related to the IDevID defined by IEEE 802.1AR.
The information for the Subject DN attributes must be provided by the LXI manufacturer so that a unique IDevID can be installed on each LXI Device during the manufacturing process.
LXI permits multiple OUs.
22.12.3.1 RULE – Distinguished Name
The Subject Distinguished Name (DN) field shall have the following attributes:
<table>
<thead>
<tr>
<th>Attribute Name</th>
<th>Description</th>
<th>Max Size</th>
<th>Example</th>
<th>Data Type</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>CN</td>
<td>Common Name</td>
<td>64</td>
<td>XYZ Oscilloscope 54321D – 123456</td>
<td>UTF8</td>
<td>LXI default mDNS description name (Alternative: Instrument serial number)</td>
</tr>
<tr>
<td>O</td>
<td>Organization Name</td>
<td>64</td>
<td>Keysight</td>
<td>UTF8</td>
<td>This is the LXI manufacturer; a static field for each LXI manufacturer</td>
</tr>
<tr>
<td>OU</td>
<td>Organization Unit Name</td>
<td>64</td>
<td>MXA9020B,Opt U3</td>
<td>UTF8</td>
<td>Instrument model name</td>
</tr>
<tr>
<td>Serial Number</td>
<td>Instrument Serial Number</td>
<td>64</td>
<td>CO0123456789</td>
<td>UTF8</td>
<td>This is the instrument serial number</td>
</tr>
</tbody>
</table>
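For illustration, the DN attributes above can be composed into an RFC 4514-style string. The values below are illustrative, and escaping of special characters (e.g. commas inside an OU) is omitted for brevity:

```python
# Sketch: composing an IDevID Subject DN string from the attributes in the
# table above. Values are illustrative; RFC 4514 escaping is omitted.
def subject_dn(cn, o, ous, serial):
    parts = [f"CN={cn}", f"O={o}"]
    parts += [f"OU={ou}" for ou in ous]  # LXI permits multiple OUs
    parts.append(f"serialNumber={serial}")
    return ",".join(parts)

print(subject_dn("XYZ Oscilloscope 54321D - 123456", "Keysight",
                 ["MXA9020B"], "CO0123456789"))
# CN=XYZ Oscilloscope 54321D - 123456,O=Keysight,OU=MXA9020B,serialNumber=CO0123456789
```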
22.12.3.2 RULE – Subject Alternate Name
Devices that have a hardware or firmware TPM shall have a SAN field that contains:
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
<th>Max Size</th>
<th>Example</th>
<th>Data Type</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>SAN</td>
<td>HW Module Name (HMN)</td>
<td>NA (as required)</td>
<td><hex-encoded TPM identifier></td>
<td>DER Encoded</td>
<td>This is a two-field ASN.1 entry that identifies the TPM version and serial number. It is required for both hardware and firmware TPMs. The entry is identified by an OID specified by the Trusted Computing Group and cited in IEEE 802.1AR. Note that there are versions for TPM 1.2 and 2.0.</td>
</tr>
</tbody>
</table>
Observation
LXI does not require either OCSP or CRLs for IDevIDs. LDevIDs are more appropriate to ascribe a high degree of trust to a device, so complex revocation infrastructure for IDevIDs is only provided at the vendor’s discretion.
22.13 Command-and-Control Requirements
This section has general rules regarding the device Command-and-Control interfaces.
22.13.1 RULE – Secure Command-and-Control Interface
Devices shall provide at least one secure Command-and-Control interface, that is, a protocol that provides encryption and server authentication (e.g., IVI HiSLIP rev 2.0, HTTPS).
22.13.2 RULE – Client Authentication Configuration
At least one Command-and-Control protocol shall provide a configuration that requires client authentication.
22.13.3 RULE – Unsecure Command-and-Control Interfaces
Devices implementing unsecure Command-and-Control interfaces shall provide settings to control which of these protocols are enabled.
Observation
Non-Ethernet unsecure interfaces like USBTMC and GPIB connections are permitted and do not require settings to disable them.
22.13.4 RULE – HiSLIP Devices Supported SASL Mechanisms
Devices that implement the HiSLIP extended function shall support client authentication using the SASL mechanisms of ANONYMOUS, PLAIN, and SCRAM.
Additional SASL mechanisms may be supported.
22.13.5 RULE – Devices Shall Support IVI 6.5, SASL Mechanism Specification
IVI 6.5, SASL Mechanism Specification contains requirements on usernames, passwords and SASL mechanism implementation. Devices shall follow these requirements for usernames and passwords. Devices that implement SCRAM shall comply with the SCRAM requirements.
22.14 RULE – LXI API Security Methods
Devices shall provide the following APIs as defined in the LXI API Extended Function:
<table>
<thead>
<tr>
<th>URL</th>
<th>HTTP Method</th>
<th>Summary</th>
</tr>
</thead>
<tbody>
<tr>
<td>/lxi/identification</td>
<td>GET</td>
<td>Returns identity information about the device (and connected devices). This is an unsecure API. Note that compliant implementations might not include the Content-Type response header.</td>
</tr>
<tr>
<td>/lxi/api/common-configuration OR /lxi/common-configuration</td>
<td>GET</td>
<td>Returns the overall device LXI configuration and capabilities. This API is available over both secure and unsecure connections.</td>
</tr>
<tr>
<td>/lxi/api/common-configuration</td>
<td>PUT</td>
<td>Configures the overall device LXI configuration. The network settings managed by this API can usually be applied to all devices in a system.</td>
</tr>
<tr>
<td>/lxi/api/device-specific-configuration OR /lxi/device-specific-configuration</td>
<td>GET</td>
<td>Returns device-specific configuration and capabilities. This API is available over both secure and unsecure connections. The two endpoints behave identically.</td>
</tr>
<tr>
<td>/lxi/api/device-specific-configuration</td>
<td>PUT</td>
<td>Configures device-specific network settings. The network settings managed by this API are potentially unique to a particular device.</td>
</tr>
<tr>
<td>/lxi/api/certificates</td>
<td>GET</td>
<td>Returns a list of certificate GUIDs.</td>
</tr>
<tr>
<td>/lxi/api/certificates</td>
<td>POST</td>
<td>Places a PKCS#7 style certificate or certificate chain on the device to use with its LDevID. The certificate must be based on a CSR acquired from the device. The response XML has the GUID that is used to identify this certificate.</td>
</tr>
<tr>
<td>/lxi/api/certificates/<GUID></td>
<td>GET</td>
<td>Returns the PKCS#7 certificate, certificate chain, or PKCS#10 CSR identified by <GUID>.</td>
</tr>
<tr>
<td>/lxi/api/certificates/<GUID></td>
<td>DELETE</td>
<td>Deletes the certificate, certificate chain, or CSR identified by <GUID>.</td>
</tr>
<tr>
<td>/lxi/api/get-csr</td>
<td>GET</td>
<td>Acquires a PKCS#10 CSR from the device based on the request parameters.</td>
</tr>
<tr>
<td>/lxi/api/create-certificate</td>
<td>PUT</td>
<td>Tells the device to create a self-signed certificate (also known as an LDevID) based on the request parameters.</td>
</tr>
<tr>
<td>/lxi/api/certificates/<GUID>/enabled</td>
<td>PUT</td>
<td>Used to enable or disable the certificate identified by <GUID>.</td>
</tr>
<tr>
<td>/lxi/api/certificates/<GUID>/enabled</td>
<td>GET</td>
<td>Used to determine whether the certificate identified by <GUID> is enabled or disabled.</td>
</tr>
</tbody>
</table>
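For orientation only, the table can be summarized from the client side as method/path pairs; the host below is a placeholder, the operation names are invented for this sketch, and the `/lxi` path prefix is assumed:

```python
# Sketch: a client-side summary of the security API table above. Operation
# names and the host are illustrative; only the method/path pairs matter.
SECURITY_API = {
    "identification":     ("GET", "/lxi/identification"),
    "get_common_config":  ("GET", "/lxi/api/common-configuration"),
    "set_common_config":  ("PUT", "/lxi/api/common-configuration"),
    "list_certificates":  ("GET", "/lxi/api/certificates"),
    "get_csr":            ("GET", "/lxi/api/get-csr"),
    "create_certificate": ("PUT", "/lxi/api/create-certificate"),
}

def request_line(host, op):
    method, path = SECURITY_API[op]
    return f"{method} https://{host}{path}"

print(request_line("device.example.com", "get_csr"))
# GET https://device.example.com/lxi/api/get-csr
```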
Supporting the preservation lifecycle in repositories
Luis Faria lfaria@keep.pt
KEEP SOLUTIONS www.keep-solutions.com
Open Repositories 2013
Charlottetown, PEI, Canada, 2013-07-09
http://goo.gl/V6142
KEEP SOLUTIONS
- Company specialized in information management
- Digital preservation experts
- Open source: RODA, KOHA, DSpace, Moodle, etc.
- Scientific research
- SCAPE: large-scale digital preservation environments
- 4C: digital preservation cost modeling
http://www.keep-solutions.com
This work was partially supported by the SCAPE Project. The SCAPE project is co-funded by the European Union under FP7 ICT-2009.4.1 (Grant Agreement number 270137).
The past: RODA 1.0.0
- Presented in Open Repositories 2009
- Open source digital repository
- Based on Fedora Commons
- Modern web interface
- For archives
- For digital preservation
Disseminations of 20 - "National Geographic"
- JPEG (original)
- TIFF (normalized)
- Photo preview
- Book preview
The present: RODA Community
- Adapted to be a true open-source project
- For users
- Easy to install
- Easy to test (virtual machine)
- Support mailing lists and documentation
- Free or paid support
- For developers
- Development and translation guidelines
- Easy build (maven)
- Available on GitHub
- Support mailing lists
- Plenty more documentation
- More info: http://www.roda-community.org
RODA
The world’s most advanced open-source digital repository
An open-source digital repository designed for preservation
RODA is a complete digital repository that delivers functionality for all the main units of the OAIS reference model. RODA is capable of ingesting, managing and providing access to the various types of digital content produced by large corporations or public bodies. RODA is based on open-source technologies and is supported by existing standards such as the OAIS, METS, EAD and PREMIS.
Conforms to open standards
RODA follows open standards using EAD for description metadata, PREMIS for preservation metadata, METS for structural metadata, several standards for technical metadata (e.g. NISO Z39.87 for digital still images).
Vendor independent
RODA is 100% built on top of open-source technologies. The entire infrastructure required to support RODA is vendor independent. This means that you may use the hardware and Linux distributions that best fit your institutional needs.
Scalable
The service-oriented nature of RODA’s architecture allows the system to be highly scalable.
Embedded preservation actions
Preservation actions and management within RODA are handled by a task scheduler. The task scheduler is designed to allow users to run preservation actions, check preservation state, and log preservation actions.
Current practice problems
- Repository has content
- Organization has policies in place (e.g. no compression allowed)
P1: Does the content conform to policies? Are there any risks? Even as content, policies, and the environment change?
- Found a preservation risk!
P2: How to easily and trustworthily decide which action to take?
Current practice problems
• Know what action to take
P3: How to ensure and monitor the quality of chosen action and that the decision assumptions remain valid?
• Content grows exponentially in volume, heterogeneity and complexity
P4: How to do digital preservation in large-scale environments?
Preservation lifecycle
Environment and users
access, ingest, harvest
Repository
Preservation lifecycle
Environment and users
Watch
monitored environment and users
monitored content and events
access, ingest, harvest
Repository
Preservation lifecycle
Environment and users
Watch
Planning
Repository
monitored environment and users
create/re-evaluate plans
access, ingest, harvest
monitored content and events
Preservation lifecycle
Environment and users
Repository
access, ingest, harvest
monitored environment and users
Watch
monitored content and events
Policies
monitored actions
Operations
execute action plan
Planning
create/re-evaluate plans
deploy plan
Preservation lifecycle (in practice)
Environment and users
Repository
Operations
Watch
Policies
Planning
monitored environment and users
monitored content and events
monitored actions
create/re-evaluate plans
deploy plan
access, ingest, harvest
execute action plan
Preservation lifecycle (in practice)
Environment and users
Repository
Operations
Workflow engine
Scout
Watch
Policies
Planning
SCAPE Preservation Suite
Environment and users
- access, ingest, harvest
Repository
- execute plan
Workflow engine
- Workflow engine API
Data Connector API
- monitored events and actions
Plan management API
- monitored content
Scout Adaptors
- Scout Web UI & Email notification
Scout
- Planner
- create/re-evaluate plans
Plato
- Plato Web UI
- deploy plan
Small scale: Taverna
Large scale: SCAPE platform
http://www.taverna.org.uk
http://wiki.opf-labs.org/display/SP/SCAPE+Platform
SCAPE Preservation Suite
Tools and APIs
SCAPE Digital object model
• Standard model for representing digital objects
• Based on METS and PREMIS
• Specifies intellectual entity (SIP, AIP and DIP)
• Specification:
https://github.com/openplanets/scape-platform-api
Data Connector API
- Access and modify content on the repository
- HTTP REST API
- Methods:
- **Retrieve** intellectual entity, metadata, representation, file or named bit stream
- **Ingest** intellectual entity (sync or async)
- **Update** intellectual entity, representation or file
- **Search** intellectual entities, representations or files (SRU)
- API specification: [https://github.com/openplanets/scape-platform-api](https://github.com/openplanets/scape-platform-api)
- Ref. implementation: Fall 2013 in Fedora 4 and RODA
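As a rough illustration, a client of the Data Connector API mainly needs to construct the right resource URLs for retrieve, ingest, update and search. The sketch below assumes a hypothetical base path and route layout; the real routes are defined in the API specification linked above.

```python
from urllib.parse import urlencode

# Hypothetical repository endpoint; the actual base path depends on
# the Data Connector implementation (e.g. Fedora 4 or RODA).
BASE = "http://repo.example.org/scape"

def entity_url(entity_id, version=None):
    """URL for retrieving an intellectual entity, optionally a given version."""
    url = f"{BASE}/entity/{entity_id}"
    if version is not None:
        url += f"/{version}"
    return url

def sru_search_url(query):
    """URL for an SRU searchRetrieve over intellectual entities."""
    params = {"operation": "searchRetrieve", "query": query, "version": "1.2"}
    return f"{BASE}/sru/entities?" + urlencode(params)
```

Ingest and update would then be plain HTTP POST/PUT requests against URLs of this kind, carrying the METS/PREMIS representation of the object.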
Report API
- Provides access to repository events
- Events:
- **Ingest** started and finished
- **Viewed** or **downloaded** descriptive metadata or representation
- Preservation **plan executed**
- OAI-PMH data provider
- PREMIS events metadata
- Agent: **who** triggered the event
- Date/time: **when** did the event occur
- Details: **what** happened
- API specification: [https://github.com/openplanets/scape-platform-api](https://github.com/openplanets/scape-platform-api)
- Ref. implementation: [https://github.com/openplanets/roda](https://github.com/openplanets/roda)
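The who/when/what structure of these PREMIS events can be shown with a short parsing sketch. The sample record below is fabricated for illustration and simplified relative to real Report API output; only the namespace URI is the standard PREMIS v2 one.

```python
import xml.etree.ElementTree as ET

PREMIS_NS = "info:lc/xmlns/premis-v2"

# Fabricated sample event, loosely shaped like a PREMIS event record.
SAMPLE = f"""
<event xmlns="{PREMIS_NS}">
  <eventType>ingest finished</eventType>
  <eventDateTime>2013-07-09T10:15:00Z</eventDateTime>
  <linkingAgentIdentifier>
    <linkingAgentIdentifierValue>archivist01</linkingAgentIdentifierValue>
  </linkingAgentIdentifier>
</event>
"""

def summarize_event(xml_text):
    """Extract the who/when/what fields from a PREMIS event record."""
    root = ET.fromstring(xml_text)
    ns = {"p": PREMIS_NS}
    return {
        "what": root.findtext("p:eventType", namespaces=ns),
        "when": root.findtext("p:eventDateTime", namespaces=ns),
        "who": root.findtext(
            "p:linkingAgentIdentifier/p:linkingAgentIdentifierValue",
            namespaces=ns),
    }
```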
Scout: a preservation watch system
- Monitors aspects of the world to detect preservation risks and opportunities
- Triple store
- Adaptors
- Data Connector & Report API
- SCAPE Policy model
- PRONOM
- Web semantic extraction
- Renderability experiments
- Web interface
- Triggers: templates and SPARQL
- Email notifications
Dashboard
All about your own information.
My triggers
You have no triggers defined, create one now!
+ Create trigger
My policies
<table>
<thead>
<tr>
<th>Objective</th>
<th>Measure</th>
<th>Description</th>
<th>Modality</th>
<th>Qualifier</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Running costs per object</td>
<td>Running operational costs of an action in € per object.</td>
<td>MUST</td>
<td>LT</td>
<td>0.24</td>
</tr>
<tr>
<td>1</td>
<td>elapsed time per MB</td>
<td>elapsed processing time per Megabyte of input data, measured in milliseconds</td>
<td>MUST</td>
<td>LT</td>
<td>2000</td>
</tr>
<tr>
<td>2</td>
<td>stability judgement</td>
<td>Judgement of the stability of an action</td>
<td>SHOULD</td>
<td>stable</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td>ease of integration</td>
<td>Assessment of how easy it is to integrate an action into a particular server environment.</td>
<td>SHOULD</td>
<td>good</td>
<td></td>
</tr>
<tr>
<td>4</td>
<td>software licence source code</td>
<td>Indicates if and in which way the source code of the software is accessible.</td>
<td>MUST</td>
<td>openSource</td>
<td></td>
</tr>
<tr>
<td>5</td>
<td>ease of use</td>
<td>Assessment of how easy it is to use an action in operations</td>
<td>SHOULD</td>
<td>openSource</td>
<td></td>
</tr>
<tr>
<td>6</td>
<td>image width equal</td>
<td>true if image width has been preserved.</td>
<td>MUST</td>
<td>true</td>
<td></td>
</tr>
</tbody>
</table>
Collection size The overall size
43.97 GB
Data type: Very big integer number (File or storage size).
Property history: There are 8 different values of this property, this is number 7 (starts at 0).
Value provenance: Current value was measured 1 time by 1 source.
Property history
This property has changed in time as represented in the chart below. Click on the chart dots for more information.
Format distribution
<table>
<thead>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tagged Image File Format</td>
<td>160</td>
</tr>
<tr>
<td>Hypertext Markup Language</td>
<td>23</td>
</tr>
<tr>
<td>Portable Document Format</td>
<td>17</td>
</tr>
<tr>
<td>Plain text</td>
<td>16</td>
</tr>
<tr>
<td>XLS</td>
<td>16</td>
</tr>
<tr>
<td>FPX</td>
<td>9</td>
</tr>
<tr>
<td>Microsoft Word</td>
<td>7</td>
</tr>
<tr>
<td>Extensible Markup Language</td>
<td>2</td>
</tr>
<tr>
<td>Extensible Hypertext Markup Language</td>
<td>2</td>
</tr>
<tr>
<td>Postscript</td>
<td>2</td>
</tr>
<tr>
<td>Macromedia Flash data (compressed), version 6</td>
<td>1</td>
</tr>
<tr>
<td>Macromedia Flash data, version 5</td>
<td>1</td>
</tr>
<tr>
<td>PPT</td>
<td>1</td>
</tr>
<tr>
<td>news or mail, ASCII text</td>
<td>1</td>
</tr>
</tbody>
</table>
Property history
This property has changed in time as represented in the chart below. Click on the chart dots for more information.
### Category
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>format</td>
<td>Represents a file format</td>
</tr>
</tbody>
</table>
#### Entities
1-20 of 843
<table>
<thead>
<tr>
<th>Name</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>Broadcast WAVE[audio/x-wav; version=0]</td>
<td></td>
</tr>
<tr>
<td>Broadcast WAVE[audio/x-wav; version=1]</td>
<td></td>
</tr>
<tr>
<td>Graphics Interchange Format[image/gif; version=1987a]</td>
<td></td>
</tr>
<tr>
<td>Graphics Interchange Format[image/gif; version=1989a]</td>
<td></td>
</tr>
<tr>
<td>Audio/Video Interleaved Format[video/x-msvideo]</td>
<td></td>
</tr>
<tr>
<td>Waveform Audio[audio/x-wav]</td>
<td></td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Name</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Minimum preservation action execution time</td>
<td>1.5002512</td>
</tr>
<tr>
<td>Average preservation action execution time</td>
<td>1.8746954</td>
</tr>
<tr>
<td>Maximum preservation action execution time</td>
<td>2.3340003</td>
</tr>
<tr>
<td>Ingest average time (ms)</td>
<td>1092798.0</td>
</tr>
</tbody>
</table>
Advanced query
Use SPARQL to make your own query
Target
- Category
- Property
- Entity
- Value
- Measurement
Snippets
- Relations
- Resources
SPARQL
```
SELECT ?s WHERE { ?s rdf:type watch:EntityType . }
```
[+ Create trigger] [Search]
Query
Select a pre-made question template or go to **advanced query**.
**Query templates**
- Check collection policy conformance
- Collection size limit
**Check collection policy conformance**
Check if selected collection conforms to the defined policy (only compression scheme policy is checked right now)
**Collection**
The ID from the URL
Your collection profile already inserted into scout
[Search] [Create trigger]
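Conceptually, a watch trigger pairs a SPARQL condition over the triple store with a notification target. The classes and query shape below are hypothetical stand-ins, not Scout's real trigger model, which lives behind the web UI shown above.

```python
from dataclasses import dataclass

@dataclass
class Trigger:
    name: str
    sparql: str   # condition evaluated against the triple store
    notify: str   # e-mail address alerted when the condition matches

def compression_policy_trigger(collection_id, email):
    """Fire when any object in the collection uses a compression scheme.

    The watch: predicates below are illustrative placeholders.
    """
    query = (
        "SELECT ?obj WHERE { "
        f"?obj watch:collection \"{collection_id}\" . "
        "?obj watch:compressionScheme ?scheme . }"
    )
    return Trigger(name="no-compression policy", sparql=query, notify=email)
```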
Plan management API
- Deploy and manage preservation plans in the repository
- HTTP REST API
Methods:
- **Search** and **retrieve** plans
- **Deploy** a new plan
- **Retrieve** or **add** a plan execution state (in progress, success or fail)
- **Update** plan lifecycle status (enabled or disabled)
Implementation can use:
- Workflow engine: Taverna or SCAPE platform
- Data connector API
API specification: [https://github.com/openplanets/scape-platform-api](https://github.com/openplanets/scape-platform-api)
Ref. implementation: Fall 2013 for Fedora 4 and RODA
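The plan lifecycle the API exposes can be summarised as a small state model: a deployed plan is enabled or disabled, and accumulates execution states over time. The in-memory model below is a sketch for illustration, not the API itself.

```python
# Execution states named in the Plan Management API description above.
EXEC_STATES = {"in progress", "success", "fail"}

class Plan:
    """Illustrative in-memory model of a deployed preservation plan."""

    def __init__(self, plan_id):
        self.plan_id = plan_id
        self.enabled = False   # lifecycle status: enabled or disabled
        self.executions = []   # history of execution states

    def set_lifecycle(self, enabled):
        """Update the plan lifecycle status."""
        self.enabled = enabled

    def add_execution_state(self, state):
        """Record an execution state, rejecting unknown values."""
        if state not in EXEC_STATES:
            raise ValueError(f"unknown execution state: {state}")
        self.executions.append(state)
```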
Plato: a preservation planning tool
http://ifs.tuwien.ac.at/dp/plato
- Systematic planning
- Traceable, documented, trustworthy
- Integrated:
- Data Connector API (Content)
- Scout (Watch, Content profile, sampling)
- SCAPE Policy model
- Plan management API (Operations)
- Taverna compatible workflows
What is Plato?
Digital content is short-lived, yet may prove to have value in the future. How can we keep it alive? Finding the right action to enable future access to digital content in a transparent way is the task of Plato.
The mission of digital preservation is to ensure continued, authentic long-term access to digital objects in a usable form for specific user communities. This requires preservation actions to be carried out when the original environment of digital objects is unavailable. A variety of preservation actions exist, but each shows specific peculiarities, and a variety of factors influence the decision.
The **mission of preservation planning** is to ensure authentic future access for a specific set of objects and designated communities by defining the actions needed to preserve it.
The planning tool **Plato** is a decision support tool that implements a solid preservation planning process and integrates services for content characterisation, preservation action and automatic object comparison in a service-oriented architecture to provide maximum support for preservation planning endeavours.
What's new?
**June 2013: Plato 4.2**
We are pleased to announce the release of Plato 4.2, including:
- Better policy integration
- Improved selection of alternatives
- Improved tree views
- Initial MyExperiment migration service allows passing parameters
For further details check the [Github milestone](https://github.com/ifs-tuwien/plato/releases).
Preservation lifecycle (in practice)
- Environment and users
- monitored environment and users
- Repository
- access, ingest, harvest
- monitored content and events
- execute action plan
- Operations
- Workflow engine
- monitored actions
- Scout
- Policies
- create/re-evaluate plans
- Planning
- deploy plan
- Plato
Conclusions
P1: Does the content **conform to policies**? Are there any **risks**? Even with **changing** content, policies and environment?
S1: Use Scout: preservation watch system
P2: How to easily and **trustworthily decide** which action to take?
S2: Use Plato: preservation planning tool
Conclusions
P3: How to ensure and monitor the **quality** of chosen action and that the **decision assumptions** remain valid?
S3: Q&A in preservation plans (Plato), monitoring of Q&A (Report API & Scout), automatic Scout triggers created by Plato
P4: How to do digital preservation in large-scale environments?
S4: Automation and end-to-end integration of preservation processes.
Roadmap
• Scout:
• User support
• More adaptors
• More trigger templates
• Plato:
• Automatically create Scout triggers
• Automatic deployment using the plan management API
• Repository reference implementations: RODA and Fedora 4
Conclusions
- All APIs published
- Ref. implementations in RODA and Fedora 4 in Fall 2013
- All tools available in Github
Add preservation to your repository now!
Supporting the preservation lifecycle in repositories
Luis Faria lfaria@keep.pt
KEEP SOLUTIONS www.keep-solutions.com
Open Repositories 2013
Charlottetown, PEI, Canada, 2013-07-09
http://goo.gl/V6142
RFC 8952
Captive Portal Architecture
Abstract
This document describes a captive portal architecture. Network provisioning protocols such as DHCP or Router Advertisements (RAs), an optional signaling protocol, and an HTTP API are used to provide the solution.
Status of This Memo
This document is not an Internet Standards Track specification; it is published for informational purposes.
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are candidates for any level of Internet Standard; see Section 2 of RFC 7841.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc8952.
Copyright Notice
Copyright (c) 2020 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
1. Introduction
In this document, "Captive Portal" is used to describe a network to which a device may be
voluntarily attached, such that network access is limited until some requirements have been
fulfilled. Typically, a user is required to use a web browser to fulfill requirements imposed by the
network operator, such as reading advertisements, accepting an acceptable-use policy, or
providing some form of credentials.
Implementations of captive portals generally require a web server, some method to allow/block
traffic, and some method to alert the user. Common methods of alerting the user in
implementations prior to this work involve modifying HTTP or DNS traffic.
This document describes an architecture for implementing captive portals while addressing most
of the problems arising for current captive portal mechanisms. The architecture is guided by
these requirements:
- Current captive portal solutions typically implement some variations of forging DNS or HTTP
responses. Some attempt man-in-the-middle (MITM) proxy of HTTPS in order to forge
responses. Captive portal solutions should not have to break any protocols or otherwise act
in the manner of an attacker. Therefore, solutions **MUST NOT** require the forging of responses from DNS or HTTP servers or from any other protocol.
- Solutions **MUST** permit clients to perform DNSSEC validation, which rules out solutions that forge DNS responses. Solutions **SHOULD** permit clients to detect and avoid TLS man-in-the-middle attacks without requiring a human to perform any kind of "exception" processing.
- To maximize universality and adoption, solutions **MUST** operate at the layer of Internet Protocol (IP) or above, not being specific to any particular access technology such as cable, Wi-Fi, or mobile telecom.
- Solutions **SHOULD** allow a device to query the network to determine whether the device is captive, without the solution being coupled to forging intercepted protocols or requiring the device to make sacrificial queries to "canary" URIs to check for response tampering (see Appendix A). Current captive portal solutions that work by affecting DNS or HTTP generally only function as intended with browsers, breaking other applications using those protocols; applications using other protocols are not alerted that the network is a captive portal.
- The state of captivity **SHOULD** be explicitly available to devices via a standard protocol, rather than having to infer the state indirectly.
- The architecture **MUST** provide a path of incremental migration, acknowledging the existence of a huge variety of pre-existing portals and end-user device implementations and software versions. This requirement is not to recommend or standardize existing approaches, but rather to provide device and portal implementors a path to a new standard.
A side benefit of the architecture described in this document is that devices without user interfaces are able to identify parameters of captivity. However, this document does not describe a mechanism for such devices to negotiate for unrestricted network access. A future document could provide a solution to devices without user interfaces. This document focuses on devices with user interfaces.
The architecture uses the following mechanisms:
- Network provisioning protocols provide end-user devices with a Uniform Resource Identifier (URI) [RFC3986] for the API that end-user devices query for information about what is required to escape captivity. DHCP, DHCPv6, and Router Advertisement options for this purpose are available in [RFC8910]. Other protocols (such as RADIUS), Provisioning Domains [CAPPORT-PVD], or static configuration may also be used to convey this Captive Portal API URI. A device **MAY** query this API at any time to determine whether the network is holding the device in a captive state.
- A Captive Portal can signal User Equipment in response to transmissions by the User Equipment. This signal works in response to any Internet protocol and is not done by modifying protocols in band. This signal does not carry the Captive Portal API URI; rather, it provides a signal to the User Equipment that it is in a captive state.
- Receipt of a Captive Portal Signal provides a hint that User Equipment could be captive. In response, the device **MAY** query the provisioned API to obtain information about the network state. The device can take immediate action to satisfy the portal (according to its configuration/policy).
The architecture attempts to provide confidentiality, authentication, and safety mechanisms to the extent possible.
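To make the device-side logic concrete, here is a small sketch of how User Equipment might interpret a Captive Portal API response. The JSON keys follow the companion API specification (RFC 8908); fetching the response and validating TLS are omitted, and treating a missing "captive" key as captive is a conservative choice made here, not a requirement of the architecture.

```python
import json

def interpret_api_response(body):
    """Decide what to do from a Captive Portal API JSON response.

    Keys such as "captive" and "user-portal-url" come from the
    companion API document (RFC 8908); absence of "captive" is
    conservatively treated as still captive.
    """
    state = json.loads(body)
    if state.get("captive", True):
        # Still captive: direct the user to the User Portal so the
        # Captive Portal Conditions can be satisfied.
        return ("captive", state.get("user-portal-url"))
    # Session is open; no portal interaction is needed.
    return ("open", None)
```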
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
1.2. Terminology
Captive Portal
A network that limits the communication of attached devices to restricted hosts until the user has satisfied Captive Portal Conditions, after which access is permitted to a wider set of hosts (typically the Internet).
Captive Portal Conditions
Site-specific requirements that a user or device must satisfy in order to gain access to the wider network.
Captive Portal Enforcement Device
The network equipment that enforces the traffic restriction. Also known as "Enforcement Device".
Captive Portal User Equipment
A device that has voluntarily joined a network for purposes of communicating beyond the constraints of the Captive Portal. Also known as "User Equipment".
User Portal
The web server providing a user interface for assisting the user in satisfying the conditions to escape captivity.
Captive Portal API
An HTTP API allowing User Equipment to query information about its state of captivity within the Captive Portal. This information might include how to obtain full network access (e.g., by visiting a URI). Also known as "API".
Captive Portal API Server
A server hosting the Captive Portal API. Also known as "API Server".
Captive Portal Signal
A notification from the network used to signal to the User Equipment that the state of its captivity could have changed.
Captive Portal Signaling Protocol
The protocol for communicating Captive Portal Signals. Also known as "Signaling Protocol".
Captive Portal Session
Also referred to simply as the "Session", a Captive Portal Session is the association for a particular User Equipment instance that starts when it interacts with the Captive Portal and gains open access to the network and ends when the User Equipment moves back into the original captive state. The Captive Network maintains the state of each active Session and can limit Sessions based on a length of time or a number of bytes used. The Session is associated with a particular User Equipment instance using the User Equipment’s identifier (see Section 3).
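The Session described above can be sketched as a small record limited by time or by bytes; the field names below are illustrative, not taken from this document:

```python
import time
from dataclasses import dataclass

@dataclass
class Session:
    """Illustrative Captive Portal Session state for one User Equipment.

    The document only says a Session can be limited by a length of time
    or a number of bytes used; these field names are hypothetical.
    """
    identifier: str      # User Equipment identifier (Section 3)
    expires_at: float    # absolute expiry time, epoch seconds
    byte_quota: int      # total bytes the Session may use
    bytes_used: int = 0

    def is_active(self, now=None):
        now = time.time() if now is None else now
        return now < self.expires_at and self.bytes_used < self.byte_quota
```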
2. Components
2.1. User Equipment
The User Equipment is the device that a user desires to be attached to a network with full access to all hosts on the network (e.g., to have Internet access). The User Equipment communication is typically restricted by the Enforcement Device, described in Section 2.4, until site-specific requirements have been met.
This document only considers devices with web browsers, with web applications being the means of satisfying Captive Portal Conditions. An example of such User Equipment is a smartphone.
The User Equipment:
• **SHOULD** support provisioning of the URI for the Captive Portal API (e.g., by DHCP).
• **SHOULD** distinguish Captive Portal API access per network interface, in the manner of Provisioning Domain Architecture [RFC7556].
• **SHOULD** have a non-spoofable mechanism for notifying the user of the Captive Portal.
• **SHOULD** have a web browser so that the user may navigate to the User Portal.
• **SHOULD** support updates to the Captive Portal API URI from the Provisioning Service.
• **MAY** prevent applications from using networks that do not grant full network access. For example, a device connected to a mobile network may be connecting to a captive Wi-Fi network; the operating system could avoid updating the default route to a device on the captive Wi-Fi network until network access restrictions have been lifted (excepting access to the User Portal) in the new network. This has been termed "make before break".
None of the above requirements are mandatory because (a) we do not wish to say users or devices must seek full network access, (b) the requirements may be fulfilled by manually visiting the captive portal web application, and (c) legacy devices must continue to be supported.
If User Equipment supports the Captive Portal API, it **MUST** validate the API Server's TLS certificate (see [RFC2818]) according to the procedures in [RFC6125]. The API Server's URI is obtained via a network provisioning protocol, which will typically provide a hostname to be used in TLS server certificate validation, against a DNS-ID in the server certificate. If the API Server is identified by IP address, the IPAddress subjectAltName is used to validate the server certificate.
An Enforcement Device **SHOULD** allow access to any services that User Equipment could need to contact to perform certificate validation, such as Online Certificate Status Protocol (OCSP) responders, Certificate Revocation Lists (CRLs), and NTP servers; see Section 4.1 of [RFC8908] for more information. If certificate validation fails, User Equipment **MUST NOT** make any calls to the API Server.
The User Equipment can store the last response it received from the Captive Portal API as a cached view of its state within the Captive Portal. This state can be used to determine whether its Captive Portal Session is near expiry. For example, the User Equipment might compare a timestamp indicating when the Session expires to the current time. Storing state in this way can reduce the need for communication with the Captive Portal API. However, it could lead to the state becoming stale if the User Equipment’s view of the relevant conditions (byte quota, for example) is not consistent with the Captive Portal API’s.
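A hedged sketch of how User Equipment might use that cached state, assuming the stored response included an absolute expiry time (the 60-second margin is an invented policy):

```python
import time

def session_near_expiry(cached_expiry_epoch, margin_seconds=60.0, now=None):
    """Decide from cached API state whether to re-query the API.

    cached_expiry_epoch would come from the last stored API response;
    the 60-second margin is an invented policy, not from this document.
    """
    if now is None:
        now = time.time()
    return (cached_expiry_epoch - now) <= margin_seconds
```

Only when this returns true does the device need to contact the API, reducing the communication the document mentions; a byte-quota check would work analogously but is subject to the same staleness caveat.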
2.2. Provisioning Service
The Provisioning Service is primarily responsible for providing a Captive Portal API URI to the User Equipment when it connects to the network, and later if the URI changes. The Provisioning Service could also be the same service that is responsible for provisioning the User Equipment for access to the Captive Portal (e.g., by providing it with an IP address). This section discusses two mechanisms that may be used to provide the Captive Portal API URI to the User Equipment.
2.2.1. DHCP or Router Advertisements
A standard for providing a Captive Portal API URI using DHCP or Router Advertisements is described in [RFC8910]. The captive portal architecture expects this URI to indicate the API described in Section 2.3.
2.2.2. Provisioning Domains
[CAPPORT-PVD] proposes a mechanism for User Equipment to be provided with Provisioning Domain (PvD) Bootstrap Information containing the URI for the API described in Section 2.3.
2.3. Captive Portal API Server
The purpose of a Captive Portal API is to permit a query of Captive Portal state without interrupting the user. This API thereby removes the need for User Equipment to perform cleartext “canary” (see Appendix A) queries to check for response tampering.
The URI of this API will have been provisioned to the User Equipment. (Refer to Section 2.2.)
This architecture expects the User Equipment to query the API when the User Equipment attaches to the network and multiple times thereafter. Therefore, the API **MUST** support repeated queries from the same User Equipment and return the state of captivity for the equipment.
At minimum, the API **MUST** provide the state of captivity. Further, the API **MUST** be able to provide a URI for the User Portal. The scheme for the URI **MUST** be “https” so that the User Equipment communicates with the User Portal over TLS.
If the API receives a request for state that does not correspond to the requesting User Equipment, the API **SHOULD** deny access. Given that the API might use the User Equipment's identifier for authentication, this requirement motivates Section 3.2.2.
A caller to the API needs to be presented with evidence that the content it is receiving is for a version of the API that it supports. For an HTTP-based interaction, such as in [RFC8908], this might be achieved by using a content type that is unique to the protocol.
When User Equipment receives Captive Portal Signals, the User Equipment **MAY** query the API to check its state of captivity. The User Equipment **SHOULD** rate-limit these API queries in the event of the signal being flooded. (See Section 6.)
The API **MUST** be extensible to support future use cases by allowing extensible information elements.
The API **MUST** use TLS to ensure server authentication. The implementation of the API **MUST** ensure both confidentiality and integrity of any information provided by or required by it.
This document does not specify the details of the API.
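RFC 8908 specifies a concrete API of this kind. As an illustrative sketch only, the parser below assumes an RFC 8908-style JSON body (`captive`, `user-portal-url`) delivered with the `application/captive+json` content type, which also satisfies the version check described above:

```python
import json

EXPECTED_CONTENT_TYPE = "application/captive+json"  # content type used by RFC 8908

def parse_api_response(content_type, body):
    """Parse a Captive Portal API response into (captive, user_portal_uri).

    Raises ValueError when the content type does not identify a
    supported API version, as the architecture asks callers to verify.
    """
    if content_type != EXPECTED_CONTENT_TYPE:
        raise ValueError("unsupported API content type: %r" % (content_type,))
    state = json.loads(body)
    # Treat a missing "captive" key conservatively as captive.
    return bool(state.get("captive", True)), state.get("user-portal-url")
```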
2.4. Captive Portal Enforcement Device
The Enforcement Device component restricts the network access of User Equipment according to the site-specific policy. Typically, User Equipment is permitted access to a small number of services (according to the policies of the network provider) and is denied general network access until it satisfies the Captive Portal Conditions.
The Enforcement Device component:
• Allows traffic to pass for User Equipment that is permitted to use the network and has satisfied the Captive Portal Conditions.
• Blocks (discards) traffic according to the site-specific policy for User Equipment that has not yet satisfied the Captive Portal Conditions.
• Optionally signals User Equipment using the Captive Portal Signaling Protocol if certain traffic is blocked.
• Permits User Equipment that has not satisfied the Captive Portal Conditions to access necessary APIs and web pages to fulfill requirements for escaping captivity.
• Updates allow/block rules per User Equipment in response to operations from the User Portal.
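The responsibilities above reduce to a per-packet decision. The sketch below is illustrative (identifier and host names are invented), with the always-allowed set modeling the small number of services reachable while captive, such as the API Server, User Portal, and certificate-validation services:

```python
def enforcement_decision(src_identifier, allowed_identifiers, dst_host,
                         always_allowed_hosts):
    """Per-packet allow/block sketch for the Enforcement Device.

    allowed_identifiers holds User Equipment that has satisfied the
    Captive Portal Conditions; always_allowed_hosts models services
    reachable even while captive. All names here are illustrative.
    """
    if dst_host in always_allowed_hosts:
        return "allow"
    if src_identifier in allowed_identifiers:
        return "allow"
    return "block"  # optionally also emit a Captive Portal Signal
```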
2.5. Captive Portal Signal
When User Equipment first connects to a network, or when there are changes in status, the Enforcement Device could generate a signal toward the User Equipment. This signal indicates that the User Equipment might need to contact the API Server to receive updated information. For instance, this signal might be generated when the end of a Session is imminent or when network access was denied. For simplicity, and to reduce the attack surface, all signals **SHOULD** be considered equivalent by the User Equipment as a hint to contact the API. If future solutions have multiple signal types, each type **SHOULD** be rate-limited independently.
An Enforcement Device **MUST** rate-limit any signal generated in response to these conditions. See Section 6.4 for a discussion of risks related to a Captive Portal Signal.
2.6. Component Diagram
The following diagram shows the communication between each component in the case where the Captive Portal has a User Portal and the User Equipment chooses to visit the User Portal in response to discovering and interacting with the API Server.

**Figure 1: Captive Portal Architecture Component Diagram**
In the diagram:
• During provisioning (e.g., DHCP), and possibly later, the User Equipment acquires the Captive Portal API URI.
• The User Equipment queries the API to learn of its state of captivity. If captive, the User Equipment presents the portal user interface from the User Portal to the user.
• Based on user interaction, the User Portal directs the Enforcement Device to either allow or deny external network access for the User Equipment.
• The User Equipment attempts to communicate to the external network through the Enforcement Device.
• The Enforcement Device either allows the User Equipment's packets to the external network or blocks the packets. If blocking traffic and a signal has been implemented, it may respond with a Captive Portal Signal.
The Provisioning Service, API Server, and User Portal are described as discrete functions. An implementation might provide multiple functions within a single entity. Furthermore, these functions, combined or not, as well as the Enforcement Device, could be replicated for redundancy or scale.
3. User Equipment Identity
Multiple components in the architecture interact with both the User Equipment and each other. Since the User Equipment is the focus of these interactions, the components must be able to both identify the User Equipment from their interactions with it and agree on the identity of the User Equipment when interacting with each other.
The methods by which the components interact restrict the type of information that may be used as an identifying characteristic. This section discusses the identifying characteristics.
3.1. Identifiers
An identifier is a characteristic of the User Equipment used by the components of a Captive Portal to uniquely determine which specific User Equipment instance is interacting with them. An identifier can be a field contained in packets sent by the User Equipment to the external network. Or, an identifier can be an ephemeral property not contained in packets destined for the external network, but instead correlated with such information through knowledge available to the different components.
3.2. Recommended Properties
The set of possible identifiers is quite large. However, in order to be considered a good identifier, an identifier **SHOULD** meet the following criteria. Note that the optimal identifier will likely change depending on the position of the components in the network as well as the information available to them. An identifier **SHOULD**:
• uniquely identify the User Equipment
• be hard to spoof
• be visible to the API Server
• be visible to the Enforcement Device
An identifier might only apply to the current point of network attachment. If the device moves to a different network location, its identity could change.
3.2.1. Uniquely Identify User Equipment
The Captive Portal **MUST** associate the User Equipment with an identifier that is unique among all of the User Equipment interacting with the Captive Portal at that time.
Over time, the User Equipment assigned to an identifier value **MAY** change. Allowing the identified device to change over time ensures that the space of possible identifying values need not be overly large.
Independent Captive Portals **MAY** use the same identifying value to identify different User Equipment instances. Allowing independent captive portals to reuse identifying values allows the identifier to be a property of the local network, expanding the space of possible identifiers.
3.2.2. Hard to Spoof
A good identifier does not lend itself to being easily spoofed. At no time should it be simple or straightforward for one User Equipment instance to pretend to be another User Equipment instance, regardless of whether both are active at the same time. This property is particularly important when the User Equipment identifier is referenced externally by devices such as billing systems or when the identity of the User Equipment could imply liability.
3.2.3. Visible to the API Server
Since the API Server will need to perform operations that rely on the identity of the User Equipment, such as answering a query about whether the User Equipment is captive, the API Server needs to be able to relate a request to the User Equipment making the request.
3.2.4. Visible to the Enforcement Device
The Enforcement Device will decide on a per-packet basis whether the packet should be forwarded to the external network. Since this decision depends on which User Equipment instance sent the packet, the Enforcement Device requires that it be able to map the packet to its concept of the User Equipment.
3.3. Evaluating Types of Identifiers
To evaluate whether a type of identifier is appropriate, one should consider every recommended property from the perspective of interactions among the components in the architecture. When comparing identifier types, choose the one that best satisfies all of the recommended properties. The architecture does not provide an exact measure of how well an identifier type satisfies a given property; care should be taken in performing the evaluation.
3.4. Example Identifier Types
This section provides some example identifier types, along with some evaluation of whether they are suitable types. The list of identifier types is not exhaustive; other types may be used. An important point to note is that whether a given identifier type is suitable depends heavily on the capabilities of the components and where in the network the components exist.
3.4.1. Physical Interface
The physical interface by which the User Equipment is attached to the network can be used to identify the User Equipment. This identifier type has the property of being extremely difficult to spoof: the User Equipment is unaware of the property; one User Equipment instance cannot manipulate its interactions to appear as though it is another.
Further, if only a single User Equipment instance is attached to a given physical interface, then the identifier will be unique. If multiple User Equipment instances are attached to the network on the same physical interface, then this type is not appropriate.
Another consideration related to uniqueness of the User Equipment is that if the attached User Equipment changes, both the API Server and the Enforcement Device **MUST** invalidate their state related to the User Equipment.
The Enforcement Device needs to be aware of the physical interface, which constrains the environment; it must either be part of the device providing physical access (e.g., implemented in firmware), or packets traversing the network must be extended to include information about the source physical interface (e.g., a tunnel).
The API Server faces a similar problem, implying that it should co-exist with the Enforcement Device or that the Enforcement Device should extend requests to it with the identifying information.
3.4.2. IP Address
A natural identifier type to consider is the IP address of the User Equipment. At any given time, no two devices on the network can have the same IP address without causing the network to malfunction, so it is appropriate from the perspective of uniqueness.
However, it may be possible to spoof the IP address, particularly for malicious reasons where proper functioning of the network is not necessary for the malicious actor. Consequently, any solution using the IP address **SHOULD** proactively try to prevent spoofing of the IP address. Similarly, if the mapping of IP address to User Equipment is changed, the components of the architecture **MUST** remove or update their mapping to prevent spoofing. Demonstrations of return routability, such as that required for TCP connection establishment, might be sufficient defense against spoofing, though this might not be sufficient in networks that use broadcast media (such as some wireless networks).
Since the IP address may traverse multiple segments of the network, more flexibility is afforded to the Enforcement Device and the API Server; they simply must exist on a segment of the network where the IP address is still unique. However, consider that a NAT may be deployed between the User Equipment and the Enforcement Device. In such cases, it is possible for the components to still uniquely identify the device if they are aware of the port mapping.
In some situations, the User Equipment may have multiple IP addresses (either IPv4, IPv6, or a dual-stack [RFC4213] combination) while still satisfying all of the recommended properties. This raises some challenges to the components of the network. For example, if the User Equipment tries to access the network with multiple IP addresses, should the Enforcement Device and API Server treat each IP address as a unique User Equipment instance, or should it tie the multiple addresses together into one view of the subscriber? An implementation MAY do either. Attention should be paid to IPv6 and the fact that it is expected for a device to have multiple IPv6 addresses on a single link. In such cases, identification could be performed by subnet, such as the /64 to which the IP belongs.
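The /64 grouping suggested above can be sketched with Python's `ipaddress` module; mapping IPv4 addresses one-to-one is just one possible policy, not something the architecture mandates:

```python
import ipaddress

def ue_key_for_address(addr):
    """Map an IP address to a User Equipment key.

    IPv6 addresses are grouped by the /64 they belong to, reflecting
    the expectation that one device holds multiple IPv6 addresses on a
    link; IPv4 addresses are used as-is. Illustrative policy only.
    """
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        net = ipaddress.ip_network((addr, 64), strict=False)
        return str(net.network_address) + "/64"
    return str(ip)
```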
3.4.3. Media Access Control (MAC) Address
The MAC address of a device is often used as an identifier in existing implementations. This document does not discuss the use of MAC addresses within a captive portal system, but they can be used as an identifier type, subject to the criteria in Section 3.2.
3.5. Context-Free URI
A Captive Portal API needs to present information to clients that is unique to that client. To do this, some systems use information from the context of a request, such as the source address, to identify the User Equipment.
Using information from context rather than information from the URI allows the same URI to be used for different clients. However, it also means that the resource is unable to provide relevant information if the User Equipment makes a request using a different network path. This might happen when User Equipment has multiple network interfaces. It might also happen if the address of the API provided by DNS depends on where the query originates (as in split DNS [RFC8499]).
Accessing the API **MAY** depend on contextual information. However, the URIs provided in the API **SHOULD** be unique to the User Equipment and not dependent on contextual information to function correctly.
Though a URI might still correctly resolve when the User Equipment makes the request from a different network, it is possible that some functions could be limited to when the User Equipment makes requests using the Captive Portal. For example, payment options could be absent or a warning could be displayed to indicate the payment is not for the current connection.
URIs could include some means of identifying the User Equipment in the URIs. However, including unauthenticated User Equipment identifiers in the URI may expose the service to spoofing or replay attacks.
4. Solution Workflow
This section aims to improve understanding by describing a possible workflow of solutions adhering to the architecture. Note that the section is not normative; it describes only a subset of possible implementations.
4.1. Initial Connection
This section describes a possible workflow when User Equipment initially joins a Captive Portal.
1. The User Equipment joins the Captive Portal network via a DHCP lease, Router Advertisement, or similar mechanism, acquiring provisioning information.
2. The User Equipment learns the URI for the Captive Portal API from the provisioning information (e.g., [RFC8910]).
3. The User Equipment accesses the Captive Portal API to receive parameters of the Captive Portal, including the User Portal URI. (This step replaces the clear-text query to a canary URL.)
4. If necessary, the user navigates to the User Portal to gain access to the external network.
5. If the user interacted with the User Portal to gain access to the external network in the previous step, the User Portal indicates to the Enforcement Device that the User Equipment is allowed to access the external network.
6. The User Equipment attempts a connection outside the Captive Portal.
7. If the requirements have been satisfied, the access is permitted; otherwise, the "Expired" behavior occurs.
8. The User Equipment accesses the network until conditions expire.
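Steps 2 through 6 of this workflow can be sketched as a function with injected callables (all names are illustrative; `query_api` returns a `(captive, user_portal_uri)` pair and `visit_portal` models the user satisfying the Captive Portal Conditions):

```python
def initial_connection(provisioned_uri, query_api, visit_portal):
    """Sketch of the initial-connection workflow; names are invented.

    query_api(uri) -> (captive, user_portal_uri)
    visit_portal(uri) -> True when the user satisfied the conditions
    """
    if provisioned_uri is None:
        return "legacy"                   # no API URI provisioned
    captive, portal_uri = query_api(provisioned_uri)   # step 3
    if not captive:
        return "open"                     # already granted access
    if visit_portal(portal_uri):          # steps 4-5
        return "open"
    return "captive"
```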
4.2. Conditions about to Expire
This section describes a possible workflow when access is about to expire.
1. Precondition: the API has provided the User Equipment with a duration over which its access is valid.
2. The User Equipment is communicating with the outside network.
3. The User Equipment detects that the length of time left for its access has fallen below a threshold by comparing its stored expiry time with the current time.
4. The User Equipment visits the API again to validate the expiry time.
5. If expiry is still imminent, the User Equipment prompts the user to access the User Portal URI again.
6. The user accepts the prompt displayed by the User Equipment.
7. The user extends their access through the User Portal via the User Equipment's user interface.
8. The User Equipment's access to the outside network continues uninterrupted.
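Steps 3 through 5 of this workflow, sketched with injected callables (illustrative names; `query_api` returns the revalidated seconds remaining):

```python
def handle_expiry_check(seconds_remaining, threshold, query_api, prompt_user):
    """Sketch of the expiry-handling steps above; names are invented.

    query_api() -> refreshed seconds remaining (step 4)
    prompt_user() is invoked only if expiry is still imminent (step 5)
    """
    if seconds_remaining >= threshold:
        return "no-action"
    refreshed = query_api()       # step 4: validate with the API
    if refreshed >= threshold:
        return "refreshed"        # cached view was stale; access still valid
    prompt_user()                 # step 5: direct the user to the portal
    return "prompted"
```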
4.3. Handling of Changes in Portal URI
A different Captive Portal API URI could be returned in the following cases:
- If DHCP is used, a lease renewal/rebind may return a different Captive Portal API URI.
- If RA is used, a new Captive Portal API URI may be specified in a new RA message received by end User Equipment.
When the Provisioning Service updates the Captive Portal API URI, the User Equipment can retrieve updated state from the URI immediately, or it can wait as it normally would until the expiry conditions it retrieved from the old URI are about to expire.
5. IANA Considerations
This document has no IANA actions.
6. Security Considerations
6.1. Trusting the Network
When joining a network, some trust is placed in the network operator. This is usually considered to be a decision by a user on the basis of the reputation of an organization. However, once a user makes such a decision, protocols can support authenticating that a network is operated by the party that claims to be operating it. The Provisioning Domain Architecture [RFC7556] provides some discussion on authenticating an operator.
The user makes an informed choice to visit and trust the Captive Portal URI. Since the network provides the Captive Portal URI to the User Equipment, the network **SHOULD** do so securely so that the user’s trust in the network can extend to their trust of the Captive Portal URI. For example, the DHCPv6 AUTH option can sign this information.
If a user decides to incorrectly trust an attacking network, they might be convinced to visit an attacking web page and unwittingly provide credentials to an attacker. Browsers can authenticate servers but cannot detect cleverly misspelled domains, for example.
Further, the possibility of an on-path attacker in an attacking network introduces some risks. The attacker could redirect traffic to arbitrary destinations. The attacker could analyze the user’s traffic leading to loss of confidentiality, or the attacker could modify the traffic inline.
6.2. Authenticated APIs
The solution described here requires that when the User Equipment needs to access the API Server, the User Equipment authenticates the server; see Section 2.1.
The Captive Portal API URI might change during the Captive Portal Session. The User Equipment can apply the same trust mechanisms to the new URI as it did to the URI it received initially from the Provisioning Service.
6.3. Secure APIs
The solution described here requires that the API be secured using TLS. This is required to allow the User Equipment and API Server to exchange secrets that can be used to validate future interactions. The API **MUST** ensure the integrity of this information, as well as its confidentiality.
An attacker with access to this information might be able to masquerade as a specific User Equipment instance when interacting with the API, which could then allow them to masquerade as that User Equipment instance when interacting with the User Portal. This could give them the ability to determine whether the User Equipment has accessed the portal, deny the User Equipment service by ending their Session using mechanisms provided by the User Portal, or consume that User Equipment’s quota. An attacker with the ability to modify the information could deny service to the User Equipment or cause them to appear as different User Equipment instances.
6.4. Risks Associated with the Signaling Protocol
If a Signaling Protocol is implemented, it may be possible for any user on the Internet to send signals in an attempt to cause the receiving equipment to communicate with the Captive Portal API. This has been considered, and implementations may address it in the following ways:
- The signal only signals to the User Equipment to query the API. It does not carry any information that may mislead or misdirect the User Equipment.
- Even when responding to the signal, the User Equipment securely authenticates with API Servers.
- The User Equipment limits the rate at which it accesses the API, reducing the impact of an attack attempting to generate excessive load on either the User Equipment or API. Note that because there is only one type of signal and one type of API request in response to the signal, this rate-limiting will not cause loss of signaling information.
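The rate-limiting described in the last bullet could be a simple token bucket on the User Equipment; the capacity and refill rate below are invented values, and dropping excess signals loses nothing because a signal carries no information beyond "re-query the API":

```python
class SignalRateLimiter:
    """Token bucket limiting signal-triggered API queries (illustrative)."""

    def __init__(self, capacity=3, refill_per_second=0.1):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_second = refill_per_second
        self.last = 0.0

    def allow_query(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_second)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```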
6.5. User Options
The Captive Portal Signal could signal to the User Equipment that it is being held captive. There is no requirement that the User Equipment do something about this. Devices **MAY** permit users to disable automatic reaction to Captive Portal Signal indications for privacy reasons. However, there would be the trade-off that the user doesn't get notified when network access is restricted. Hence, end-user devices **MAY** allow users to manually control captive portal interactions, possibly on the granularity of Provisioning Domains.
6.6. Privacy
Section 3 describes a mechanism by which all components within the Captive Portal are designed to use the same identifier to uniquely identify the User Equipment. This identifier could be abused to track the user. Implementers and designers of Captive Portals should take care to ensure that identifiers, if stored, are stored securely. Likewise, if any component communicates the identifier over the network, it should ensure the confidentiality of the identifier on the wire by using encryption such as TLS.
There are benefits to choosing mutable anonymous identifiers. For example, User Equipment could cycle through multiple identifiers to help prevent long-term tracking. However, if the components of the network use an internal mapping to map the identity to a stable, long-term value in order to deal with changing identifiers, they need to treat that value as sensitive information; an attacker could use it to tie traffic back to the originating User Equipment, despite the User Equipment having changed identifiers.
7. References
7.1. Normative References
7.2. Informative References
Appendix A. Existing Captive Portal Detection Implementations
Operating systems and user applications may perform various tests when network connectivity is established to determine if the device is attached to a network with a captive portal present. A common method is to attempt to make an HTTP request to a known, vendor-hosted endpoint with a fixed response. Any other response is interpreted as a signal that a captive portal is present. This check is typically not secured with TLS, as a network with a captive portal may intercept the connection, leading to a host name mismatch. This has been referred to as a “canary” request because, like the canary in the coal mine, it can be the first sign that something is wrong.
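Only the comparison logic of such a canary probe is sketched here; a real client would fetch a vendor-hosted URL, and the expected fixed body below is an invented placeholder:

```python
def canary_indicates_portal(status_code, body, expected_body="success"):
    """Interpret a cleartext canary probe result (sketch).

    Any deviation from the known fixed response is treated as a hint
    that a captive portal is intercepting traffic.
    """
    return not (status_code == 200 and body == expected_body)
```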
Another test that can be performed is a DNS lookup to a known address with an expected answer. If the answer differs from the expected answer, the equipment detects that a captive portal is present. DNS queries over TCP or HTTPS are less likely to be modified than DNS queries over UDP due to the complexity of implementation.
The different tests may produce different conclusions, varying by whether or not the implementation treats both TCP and UDP traffic and by which types of DNS are intercepted.
Malicious or misconfigured networks with a captive portal present may choose to pass these canary requests through unmodified, or may impersonate the expected endpoint, leading the device to a false negative (i.e., concluding that no captive portal is present).
Acknowledgments
The authors thank Lorenzo Colitti for providing the majority of the content for the Captive Portal Signal requirements.
The authors thank Benjamin Kaduk for providing the content related to TLS certificate validation of the API Server.
The authors thank Michael Richardson for providing wording requiring DNSSEC and TLS to operate without the user adding exceptions.
The authors thank various individuals for their feedback on the mailing list and during the IETF 98 hackathon: David Bird, Erik Kline, Alexis La Goulette, Alex Roscoe, Darshak Thakore, and Vincent van Dam.
**Authors' Addresses**
**Kyle Larose**
Agilicus
Email: kyle@agilicus.com
**David Dolson**
Email: ddolson@acm.org
**Heng Liu**
Google
Email: liucougar@google.com
|
## Contents
**Preface**
- Audience
- Documentation Accessibility
- Related Documents
- Conventions
1.1 Oracle Fusion Middleware Overview
1.2 About Oracle Enterprise Data Quality
1.3 Understanding the Software Components
1.3.1 What Are the Client Applications?
1.3.1.1 Where Is Data Stored?
1.3.1.2 Network Communications
1.3.2 How is Data Stored in the EDQ Repository?
1.3.2.1 What Is the Config Schema?
1.3.2.2 What Is the Results Schema?
1.3.3 Where does EDQ Store Working Data on Disk?
1.3.4 What Is the Business Layer?
2.1 Understanding EDQ Terms
2.2 What Is Data Capture?
2.2.1 Understanding Network Communications and CPU Load
2.3 What Is General Data Processing?
2.3.1 What Is Streaming?
2.3.2 What Is Work Sharing?
2.3.3 What Is Whole Record Set Processing?
2.3.4 What Are Run Labels?
2.4 What Is Match Processing?
2.5 What Is Real-Time Processing?
Preface
This guide presents an overview of Oracle Enterprise Data Quality, its main terms, how it processes data, and the basic steps for using it.
Audience
This guide is intended for anyone interested in an overview of the key concepts and architecture of Oracle Enterprise Data Quality.
Documentation Accessibility
For information about Oracle’s commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Access to Oracle Support
Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Related Documents
For more information, see the following documents in the Oracle Enterprise Data Quality documentation set.
EDQ Documentation Library
The following publications are provided to help you install and use EDQ:
- Oracle Fusion Middleware Release Notes for Enterprise Data Quality
- Oracle Fusion Middleware Installing and Configuring Enterprise Data Quality
- Oracle Fusion Middleware Administering Enterprise Data Quality
- Oracle Fusion Middleware Understanding Enterprise Data Quality
- Oracle Fusion Middleware Integrating Enterprise Data Quality With External Systems
- Oracle Fusion Middleware Securing Oracle Enterprise Data Quality
- Oracle Enterprise Data Quality Address Verification Server Installation and Upgrade Guide
- Oracle Enterprise Data Quality Address Verification Server Release Notes
Find the latest version of these guides and all of the Oracle product documentation at http://docs.oracle.com
Online Help
Online help is provided for all Oracle Enterprise Data Quality user applications. It is accessed in each application by pressing the F1 key or by clicking the Help icons. The main nodes in the Director project browser have integrated links to help pages. To access them, either select a node and then press F1, or right-click on an object in the Project Browser and then select Help. The EDQ processors in the Director Tool Palette have integrated help topics, as well. To access them, right-click on a processor on the canvas and then select Processor Help, or left-click on a processor on the canvas or tool palette and then press F1.
Conventions
The following text conventions are used in this document:
<table>
<thead>
<tr>
<th>Convention</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>boldface</strong></td>
<td>Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.</td>
</tr>
<tr>
<td><em>italic</em></td>
<td>Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.</td>
</tr>
<tr>
<td>monospace</td>
<td>Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.</td>
</tr>
</tbody>
</table>
Overview of Oracle Enterprise Data Quality
This chapter gives an overview of Oracle Fusion Middleware and Oracle Enterprise Data Quality.
This chapter includes the following sections:
- Section 1.1, "Oracle Fusion Middleware Overview"
- Section 1.2, "About Oracle Enterprise Data Quality"
- Section 1.3, "Understanding the Software Components"
1.1 Oracle Fusion Middleware Overview
Oracle Fusion Middleware is a collection of standards-based software products that spans a range of tools and services: from Java EE and developer tools, to integration services, business intelligence, and collaboration. Oracle Fusion Middleware offers complete support for development, deployment, and management of applications. Oracle Fusion Middleware components are monitored at run time using Oracle Enterprise Manager Fusion Middleware Control Console.
1.2 About Oracle Enterprise Data Quality
EDQ provides a comprehensive data quality management environment that is used to understand, improve, protect and govern data quality. EDQ facilitates best practice master data management, data integration, business intelligence, and data migration initiatives. EDQ provides integrated data quality in customer relationship management and other applications.
Following are the key features of EDQ:
- Integrated data profiling, auditing, cleansing and matching
- Browser-based client access
- Ability to handle all types of data (for example, customer, product, asset, financial, and operational)
- Connection to any Java Database Connectivity (JDBC) compliant data sources and targets
- Multi-user project support (role-based access, issue tracking, process annotation, and version control)
- Services Oriented Architecture (SOA) support for designing processes that may be exposed to external applications as a service
- Designed to process large data volumes
- A single repository to hold data along with gathered statistics and project tracking information, with shared access
- Intuitive graphical user interface designed to help you solve real world information quality issues quickly
- Easy, data-led creation and extension of validation and transformation rules
- Fully extensible architecture allowing the insertion of any required custom processing
1.3 Understanding the Software Components
EDQ is a Java Web Application that uses a Java Servlet Engine, a Java Web Start graphical user interface, and a Structured Query Language (SQL) relational database management system (RDBMS) for data storage.
EDQ has a client-server architecture. It comprises several client applications that are Graphical User Interfaces (GUIs), a data repository, and a business layer. This section provides details on the architecture of these components, their data storage, data access, and I/O requirements.
1.3.1 What Are the Client Applications?
EDQ provides a number of client applications that are used to configure and operate the product. Most are Java Web Start applications, and the remainder are simple web pages. The following table lists all the client applications, how they are started, and what each does:
<table>
<thead>
<tr>
<th>Application Name</th>
<th>Starts In</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td>Director</td>
<td>Web Start</td>
<td>Design and test data quality processing</td>
</tr>
<tr>
<td>Server Console</td>
<td>Web Start</td>
<td>Operate and monitor jobs</td>
</tr>
<tr>
<td>Match Review</td>
<td>Web Start</td>
<td>Review match results and make manual match decisions</td>
</tr>
<tr>
<td>Dashboard</td>
<td>Browser</td>
<td>Monitor data quality key performance indicators and trends</td>
</tr>
<tr>
<td>Case Management</td>
<td>Web Start</td>
<td>Perform detailed investigations into data issues through configurable workflows</td>
</tr>
<tr>
<td>Case Management Administration</td>
<td>Web Start</td>
<td>Configure workflows and permissions for Case Management</td>
</tr>
<tr>
<td>Web Service Tester</td>
<td>Browser</td>
<td>Test EDQ Web Services</td>
</tr>
<tr>
<td>Configuration Analysis</td>
<td>Web Start</td>
<td>Report on configuration and compare differences between versions of configuration</td>
</tr>
<tr>
<td>Issue Manager</td>
<td>Web Start</td>
<td>Manage a list of DQ issues</td>
</tr>
<tr>
<td>Administration</td>
<td>Browser</td>
<td>Administer the EDQ server (users, groups, extensions, launchpad configuration)</td>
</tr>
<tr>
<td>Change Password</td>
<td>Browser</td>
<td>Change password</td>
</tr>
</tbody>
</table>
The client applications can be accessed from the EDQ Launchpad on the EDQ server. When a client launches one of the Java Web Start applications, such as Director, the
application is downloaded, installed, and run on the client machine. The application communicates with the EDQ server to instantiate changes and receive messages from the server, such as information about tasks that are running and changes made by other users.
Since EDQ is an extensible system, it can be extended to add further user applications when installed to work for a particular use case. For example, Oracle Watchlist Screening extends EDQ to add a user application for screening data against watchlists.
---
**Note:** Many of the client applications are available either separately (for dedicated use) or within another application. For example, the Configuration Analysis, Match Review and Issue Manager applications are also available in Director.
---
#### 1.3.1.1 Where Is Data Stored?
The client computer only stores user preferences for the presentation of the client applications, while all other information is stored on the EDQ server.
#### 1.3.1.2 Network Communications
The client applications communicate over either a Hypertext Transfer Protocol (HTTP) or a Secure Hypertext Transfer Protocol (HTTPS) connection, as determined by the application configuration on start-up. For simplicity, this connection is referred to as 'the HTTP connection' in the remainder of this document.
### 1.3.2 How is Data Stored in the EDQ Repository?
EDQ uses a repository that contains two database schemas: the Config schema and the Results schema.
---
**Note:** Each EDQ server must have its own Config and Results schemas. If multiple servers are deployed in a High Availability architecture, then the configuration cannot be shared by pointing both servers to the same schemas.
---
#### 1.3.2.1 What Is the Config Schema?
The Config schema stores configuration data for EDQ. It is generally used in the typical transactional manner common to many web applications: queries are run to access small numbers of records, which are then updated as required.
Normally, only a small amount of data is held in this schema. In simple implementations, it is likely to be in the order of several megabytes. In the case of an exceptionally large EDQ system, especially where Case Management is heavily used, the storage requirements could reach 10 GB.
Access to the data held in the Config schema is typical of configuration data in other relational database management system (RDBMS) applications. Most database access is in the form of read requests, with relatively few data update and insert requests.
#### 1.3.2.2 What Is the Results Schema?
The Results schema stores snapshot, staged, and results data. It is highly dynamic, with tables being created and dropped as required to store the data handled by processors running on the server. Temporary working tables are also created and
dropped during process execution to store any working data that cannot be held in the available memory.
The amount of data held in the Results schema will vary significantly over time, and data capture and processing can involve gigabytes of data. Data may also be stored in the Results database on a temporary basis while a process or a job runs. In the case of a job, several versions of the data may be written to the database during processing.
The Results schema shows a very different data access profile to the Config schema, and is extremely atypical of a conventional web-based database application. Typically, tables in the Results schema are:
- Created on demand
- Populated with data using bulk JDBC application programming interfaces (APIs)
- Queried using full table scans to support process execution
- Indexed
- Queried using complex SQL statements in response to user interactions with the client applications
- Dropped when the process or snapshot they are associated with is run again
The dynamic nature of this schema means that it must be handled carefully. For example, it is often advisable to mount redo log files on a separate disk.
**1.3.3 Where does EDQ Store Working Data on Disk?**
EDQ uses two configuration directories, which are separate from the installation directory that contains the program files. These directories are:
- The base configuration directory: This directory contains default configuration data. This directory is named `oedq.home` in an Oracle WebLogic installation but can be named anything in an Apache Tomcat installation.
- The local configuration directory: This directory contains overrides to the base configuration, such as data for extension packs or overrides to default settings. EDQ looks in this directory first, for any overrides, and then looks in the base directory if it does not find the data it needs in the local directory. The local configuration directory is named `oedq.local.home` in an Oracle WebLogic installation but can be named anything in an Apache Tomcat installation.
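The "local overrides base" lookup described above can be sketched as follows. This is an illustration of the resolution order only, not EDQ's actual code; the file names used in the usage example are hypothetical.

```java
import java.util.Set;

// Sketch of EDQ's configuration lookup: the local configuration directory
// (oedq.local.home) is consulted first, and the base configuration directory
// (oedq.home) is used as a fallback when no override exists. The string-based
// result ("local:" / "base:") is illustrative.
public class ConfigLookup {
    /** Returns where a configuration file would be read from, or null. */
    static String resolve(String name, Set<String> localFiles, Set<String> baseFiles) {
        if (localFiles.contains(name)) {
            return "local:" + name;   // override in the local directory wins
        }
        if (baseFiles.contains(name)) {
            return "base:" + name;    // fall back to the default in the base directory
        }
        return null;                  // not present in either directory
    }
}
```

For example, a file present in both directories resolves to the local copy, while a file present only in the base directory resolves to the base copy.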
Some of the files in the configuration directories are used when processing data from and to file-based data stores. Other files are used to store server configuration properties, such as which functional packs are enabled, how EDQ connects to its repository databases, and other critical information.
The names and locations of the home and local home directories are important to know in the event that you need to perform any manual updates to templates or other individual components.
These directories are created when you install EDQ. For more information, see *Oracle Fusion Middleware Installing and Configuring Enterprise Data Quality*.
**1.3.4 What Is the Business Layer?**
The business layer fulfills three main functions:
- Provides the API that the client applications use to interact with the rest of the system.
- Notifies the client applications of server events that may require client applications updates.
- Runs the processes that capture and process data.
The business layer stores configuration data in the Config schema, and working data and results in the Results schema.
When passing data to and from the client application, the business layer behaves in a manner common to most traditional Java Web Applications. The business layer makes small database transactions and sends small volumes of information to the front-end using the HTTP connection. This is somewhat unusual in that the application front-ends are mostly rich GUIs rather than browsers. Therefore the data sent to the client application consists mostly of serialized Java objects rather than the more traditional HTML.
However, when running processes and creating snapshots, the business layer behaves more like a traditional batch application. In its default configuration, it spawns multiple threads and database connections in order to handle potentially very large volumes of data, and uses all available CPU cores and database I/O capacity.
It is possible to configure EDQ to limit its use of available resources, but this has clear performance implications. For further information, see the EDQ Installation Guide and EDQ Admin Guide.
Understanding Key Concepts of Enterprise Data Quality
This chapter provides information about key EDQ concepts.
This chapter includes the following sections.
- Section 2.1, "Understanding EDQ Terms"
- Section 2.2, "What Is Data Capture?"
- Section 2.3, "What Is General Data Processing?"
- Section 2.4, "What Is Match Processing?"
- Section 2.5, "What Is Real-Time Processing?"
### 2.1 Understanding EDQ Terms
The most important terms used in EDQ are:
**Project**
A group of related processes working on a common set, or sets, of data using shared reference data.
**Data Store**
A connection to a store of data, whether the data is stored in a database or in one or more files. The data store may be used as the source of data for a process, or you may export the written Staged Data results of a process to a data store, or both.
**Process**
Specifies a set of actions to be performed on some specified data. It comprises a series of **processors**, each specifying how data is to be handled and the rules that should be applied to it. A process may produce:
- **Staged data**: data or metrics produced by processing the input data and choosing to write output data to the results database.
- **Results data**: metric information summarizing the results of the process. For example, a simple validation process may record the number of records that failed and the number of records that passed validation.
**Processor**
A logical element that performs some operation on the data. Processors can perform statistical analysis, audit checks, transformations, matching, or other operations. Processors are chained together to form processes.
Published Processors
Processors, in addition to those in the Processor Library, that have been installed from an Extension Pack (such as the Customer Data pack) or published onto the server by EDQ users. They are included in the Project Browser so that they can be packaged like other objects. There are three types of Published processors: Template, Reference, and Locked Reference.
Reference Data
Consists of lists and maps that can be used by a processor to perform checking, matching, transformations and so on. Reference data can be supplied as part of EDQ or by a third party, or can be defined by the user.
Staged Data
Consists of data snapshots and data written by processes and is stored within the Results schema.
Snapshot
A captured copy of external data stored within the EDQ repository.
Job
A configured and ordered set of tasks that may be instigated either by EDQ or externally. Examples of tasks include executions of file downloads, snapshots, processes, and exports.
Images
Customizing the icon associated with a processor.
Exports
There are two types of exports:
- A prepared export (Staged Data Export or Results Book Export) that uses a saved configuration.
- An ad-hoc export of the current results from the Results Browser to an Excel file.
Web Services
Used to deploy processes (as configured in Director) to provide easy integration with source systems for real time data auditing, cleansing, and matching (for example, for real time duplicate prevention).
Run Profiles
Optional templates that specify configuration settings that you can use to override the default job run settings.
Issues
Allows users to keep a record of their key findings when analyzing data, and also provides a way for work on a project to be allocated and tracked amongst several users.
Project Notes
Allows you to use Director as the definitive repository for all information associated with the project. Any information that needs to be made available to all users working on a project can be added as a note throughout the progress of a project.
2.2 What Is Data Capture?
The data capture process begins with retrieving the data to be captured from an external data source. Data can be captured from databases, text files, XML files, and so on.
Depending on the type of data source, data capture may involve:
- Running a single SQL query on the source system.
- Sequentially processing a delimited or fixed format file.
- Processing an XML file to produce a stream of data.
As the data is retrieved, it is processed by a single thread. This involves:
- Assigning an internal sequence number to each input record. This is usually a monotonically increasing number for each row.
- Batching the rows into work units. Once a work unit is filled, it is passed into the results database work queue.
The database work queue is made up of work requests — mostly data insertion or indexing requests — to be executed on the database. The queue is processed by a pool of threads that retrieve work units from the queue, obtain a database connection to the appropriate database, and execute the work. In the case of snapshotting, the work will consist of using the JDBC batch API to load groups of records into a table.
Once all the data has been inserted for a table, the snapshot process creates one or more indexing requests and adds them to the database work queue. At least one indexing request will be created per table to index the unique row identifier, but depending on the volume of data in the snapshot and the configuration of the snapshot process other columns in the captured data may also be used to generate indexes into the snapshot data.
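The sequence-numbering and batching step described above can be sketched as follows. This is a simplified, single-threaded illustration of the idea, not EDQ's implementation; the work-unit size of 500 is an assumption.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the capture step: each incoming record is given a monotonically
// increasing internal sequence number, and records are batched into fixed-size
// work units destined for the database work queue.
public class Capture {
    static List<List<String>> toWorkUnits(List<String> rows, int unitSize) {
        List<List<String>> workUnits = new ArrayList<>();
        List<String> current = new ArrayList<>();
        long sequence = 0;
        for (String row : rows) {
            sequence++;                          // assign internal sequence number
            current.add(sequence + ":" + row);
            if (current.size() == unitSize) {    // work unit full: hand it to the queue
                workUnits.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) {
            workUnits.add(current);              // flush the final, partial unit
        }
        return workUnits;
    }
}
```

In the real system, each completed work unit is placed on the database work queue, where a pool of threads loads it into the repository using the JDBC batch API.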
Figure 2–1 The Data Capture Process
2.2.1 Understanding Network Communications and CPU Load
Snapshotting is expected to generate:
- I/O and CPU load on the machine hosting the data source while data is read
2.3 What Is General Data Processing?
Once the data has been captured, it is ready for processing. The reader processor provides the downstream processors with managed access to the data, and the downstream processors produce results data. If any writer processors are present, they will write the results of processing back to the staged data repository.
Running a process causes the web application server to start a number of process execution threads. The default configuration of EDQ will start as many threads as there are cores on the EDQ application server machine.
2.3.1 What Is Streaming?
Instead of capturing data in a snapshot and storing it in the results database (other than temporarily during collation), it can be pulled from a source and pushed to targets as a stream.
2.3.2 What Is Work Sharing?
Each process execution thread is assigned a subset of the data to process. When the input data for a process is a data set of known size, such as snapshot or staged data, each thread will execute a query to retrieve a subset of the data, identified by the unique row IDs assigned during snapshotting. So, in the example scenario from Section 2.2.1, "Understanding Network Communications and CPU Load," of processing 10,000 records of data on a 4-core machine, four queries would be issued against the Results schema. The queries would be of the form:
```sql
SELECT record_id, column1, column2, ..., column10
FROM DN_1
WHERE record_id > 0 AND record_id <= 2500;
```
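The row-id partitioning behind queries of this form can be sketched as follows; this is an illustration of the idea (10,000 records split across 4 process execution threads), not EDQ's code.

```java
// Sketch of partitioning a known-size data set into per-thread row-id ranges,
// each range producing one subset query of the form shown above.
public class WorkSharing {
    /** Per-thread {low, high} bounds: low is exclusive, high inclusive. */
    static long[][] ranges(long totalRecords, int threads) {
        long[][] bounds = new long[threads][2];
        long chunk = totalRecords / threads;
        for (int t = 0; t < threads; t++) {
            bounds[t][0] = t * chunk;
            bounds[t][1] = (t == threads - 1) ? totalRecords : (t + 1) * chunk;
        }
        return bounds;
    }

    /** Builds the subset query issued by one process execution thread. */
    static String query(long[] range) {
        return "SELECT record_id, column1, column2, ..., column10 FROM DN_1"
                + " WHERE record_id > " + range[0]
                + " AND record_id <= " + range[1] + ";";
    }
}
```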
In the case where the process is not run against a data set of known size, such as a job scheduled to run directly against a data source, records are shared among the process execution threads by reading all records into a queue, which is then consumed by the process execution threads.
Each process execution thread is also made aware of the sequence of processors that comprise the process. The process execution threads pass the records through each of the appropriate processors. As the processors work, they accumulate results that need to be stored in the Results schema and, in the case of writer processors, they may also accumulate data that needs to be written to staged data. All this data is accumulated into insertion groups and added into database work units, which are processed as described in Section 2.2, "What Is Data Capture?".
Once an execution thread has processed all its assigned records, it waits for all other process execution threads to complete. The process execution threads then enter a **collation phase**, during which the summary data from the multiple copies of the process are accumulated and written to the Results database by the database work queue.
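The wait-for-all-threads-then-collate behaviour can be sketched with a `CyclicBarrier`. This illustrates the pattern only; it is not EDQ's implementation, and the per-thread work here is a trivial stand-in.

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicLong;

// Each process execution thread accumulates a partial result and then waits at
// the barrier; the barrier action (the "collation phase") runs exactly once,
// after every thread has finished its assigned records.
public class Collation {
    static long run(int threads, int recordsPerThread) {
        AtomicLong partials = new AtomicLong();
        AtomicLong collated = new AtomicLong();
        CyclicBarrier barrier = new CyclicBarrier(threads,
                () -> collated.set(partials.get()));   // collation phase
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                partials.addAndGet(recordsPerThread);  // process assigned subset
                try {
                    barrier.await();                   // wait for the other threads
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            workers[t].start();
        }
        try {
            for (Thread w : workers) w.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return collated.get();
    }
}
```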
The following behavior is expected during batch processing:
- Read load on the Results schema as the captured data is read.
- CPU load on the web application server as the data is processed.
- Significant write load on the Results schema as results and staged data are written to the schema.
- Reduced CPU load as the collation phase is entered.
- A small amount of further database work as any outstanding database work units are processed and accumulated results written.
- Further write load on the Results schema at the end of the collation phase, in the form of requests to index the results and staged data tables, as necessary. The size and number of the index requests will vary, depending on data volumes and system configuration.
Processes that are heavily built around cleaning and validation operations will tend to be bound by the I/O capacity of the database. Some processors consume significant CPU resource, but generally the speed of operation is determined by how quickly data can be provided from and written to the Results schema.
### 2.3.3 What Is Whole Record Set Processing?
There are a number of processors, such as the Record Duplication Profiler and Duplicate Check processors, that require access to the whole record set in order to work. If these processors only had access to a subset of the data, they would be unable to detect duplicate records with any accuracy. These processes use multiple threads to absorb the input records and build them into a temporary table. Once all the records have been examined, they are re-emitted by distributing the records amongst the various process execution threads. There is no guarantee that a record will be emitted on the same process execution thread that absorbed it.
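The absorb-then-re-emit requirement can be illustrated with a duplicate check: a record can only be flagged once the whole set has been seen. EDQ builds a temporary table for this; the in-memory map below is purely an illustrative sketch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a whole-record-set processor: nothing can be emitted until every
// record has been absorbed, because duplicate status depends on the full set.
public class DuplicateCheck {
    /** Absorb phase: count occurrences of each value across the whole set. */
    static Map<String, Integer> absorb(List<String> records) {
        Map<String, Integer> counts = new HashMap<>();
        for (String record : records) {
            counts.merge(record, 1, Integer::sum);
        }
        return counts;
    }

    /** Re-emit phase: flag each record using knowledge of the whole set. */
    static List<String> emit(List<String> records, Map<String, Integer> counts) {
        List<String> out = new ArrayList<>();
        for (String record : records) {
            out.add(record + (counts.get(record) > 1 ? ":duplicate" : ":unique"));
        }
        return out;
    }
}
```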
2.3.4 What Are Run Labels?
During the design phase of a project, processes and jobs are typically run interactively using Director. When a job is run in Director, results will typically be written for inspection by the user so that the configuration of processes and jobs can be iterated to work optimally with the in-scope data. The amount of data to write can be controlled in Director.
However, when a job is deployed to production such detailed results are not normally required, and jobs are typically run with Run Labels, either from the Server Console or from the command line. When run with a Run Label, a job will only write the staged data and results views that the user has configured to be staged in the job, for better performance efficiency.
---
**Note:** Jobs may be designed independently of any specific source or target of data. Such jobs will normally be run with a set of command line parameters, or a stored Run Profile that sets the same parameters, that dynamically change key configuration points such as the physical source of the data to read, key processing options, and the physical source of the data to write. Such jobs need to be run with a Run Label so that the written data and results are clearly separated from other runs of the job on different data. Server Console allows users to inspect results by Run Label.
---
2.4 What Is Match Processing?
EDQ match processors are handled in a significantly different way from the simpler processors. Due to the nature of the work carried out by match processors, multiple passes through the data are required.
A match processor is executed by treating it as a series of sub-processes. For example, consider a process designed to match a customer data snapshot against a list of prohibited persons. The process contains a match processor that is configured to produce a list of customer reference numbers and related prohibited person identifiers. Each data stream that is input to, or output from, the match processor, is considered to be a sub-process of the match processor. Therefore, there are three sub-processes in this example, representing the customer data input stream, the prohibited persons input stream and the output data stream of the match processor. The match processor itself forms a fourth sub-process, which effectively couples the data inputs to its outputs. Each sub-process is assigned the normal quota of process execution threads, so on a 4-core machine, each sub-process would have four process execution threads.
When execution of the match processor begins, the input data sub-processes run first, processing the input data. At this point, there is no work available for the match or match output sub-processes, which remain dormant. The input data sub-processes generate cluster values for the data streams and store the cluster values and incoming records in the Results schema, using the normal database work units mechanism.
Once the input data sub-processes have processed all the available records, they terminate and commence collation of their sub-process results. Meanwhile, the match sub-process will become active. The match sub-process then works through a series of stages, with each process execution thread waiting for all the other process execution threads to complete each stage before progressing to the next. Each time a new stage begins, the work will be subdivided amongst the process execution threads in a manner that is appropriate at that stage. The processing stages are:
<table>
<thead>
<tr>
<th>Phase</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Comparison phase</td>
<td>The customer data and prohibited persons data are retrieved, ordered by cluster values. The data is gathered into groups of equal cluster values, queued and passed to the match process threads to compare the records. Where relationships are found between records, the relationship information is written to the Results schema.</td>
</tr>
</tbody>
</table>
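As a rough illustration (not EDQ code), the comparison phase can be sketched in Python: records from both inputs are grouped by cluster value, and only records that share a cluster value are compared pairwise. The names `cluster_key` and `compare` are placeholders for whatever clustering and comparison functions a process would configure.

```python
from collections import defaultdict
from itertools import product

def comparison_phase(customers, prohibited, cluster_key, compare):
    """Sketch of a match comparison phase: records from both inputs are
    grouped by cluster value, and only records sharing a cluster value
    are compared pairwise."""
    clusters = defaultdict(lambda: ([], []))
    for rec in customers:
        clusters[cluster_key(rec)][0].append(rec)
    for rec in prohibited:
        clusters[cluster_key(rec)][1].append(rec)

    relationships = []
    for left, right in clusters.values():
        # Each group of equal cluster values would be queued to a match
        # process thread; here we compare in-line.
        for a, b in product(left, right):
            if compare(a, b):
                relationships.append((a["id"], b["id"]))
    return relationships
```

Clustering keeps the phase tractable: without it, every customer record would have to be compared against every prohibited person record.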
This completes the match sub-process, and so the match processor execution threads now move into their collation phase.
At this point, the sub-process associated with the output of match data becomes active. The output data is divided amongst the process execution threads for the output sub-process and passed to the processors downstream of the match processor. From this point onwards, the data is processed in the normal batch processing way.
Benchmarks and production experience have shown that the comparison phase of a match processor is one of the few EDQ operations likely to become CPU bound. When anything other than very simple comparison operations is performed, the CPU's ability to handle the comparison load limits the process. The comparison operations scale very well and are perfectly capable of utilizing all CPU cycles available to the EDQ Web Application Server.
**Tip:** Oracle recommends that readers familiarize themselves with the material contained in the "Advanced Features: Matching Concept Guide" in the Online Help.
### 2.5 What Is Real-Time Processing?
EDQ is capable of processing messages in real time. Currently, EDQ supports messaging using:
- Web Services
- JMS-enabled messaging software
When configured for real-time message processing, the server starts multiple process execution threads to handle messages as they are received. An incoming message is handed to a free process execution thread, or placed in a queue to await the next process execution thread to become free. Once the message has been processed, any staged data will be written to the Results database, and the process execution thread will either pick up the next message from the queue, if one exists, or become available for the next incoming message.
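The queue-and-thread-pool arrangement described above can be sketched as follows. This is an illustrative Python sketch, not the EDQ implementation; all names (`start_workers`, `process_message`) are invented for the example.

```python
import queue
import threading

def start_workers(n_threads, process_message, results):
    """Start a fixed set of worker threads that pull messages from a
    shared queue, mirroring the 'free thread or queue' behavior: an
    incoming message is handled by whichever worker becomes free."""
    inbox = queue.Queue()

    def worker():
        while True:
            msg = inbox.get()
            if msg is None:            # shutdown signal
                break
            results.append(process_message(msg))
            inbox.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    return inbox, threads
```

A caller enqueues messages with `inbox.put(msg)`, then sends one `None` per thread to shut the pool down and joins the threads.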
When processing data in real time, the process may be run in **interval mode**. Interval mode allows the process to save results at set intervals so that they can be inspected by a user and published to the EDQ Dashboard. The interval can be determined either by the number of records processed or by a time limit. When an interval limit is reached, EDQ starts a new set of process execution threads for the process. Once all the new process execution threads have completed any necessary initialization, any new incoming messages are passed to the new threads. Once the old set of process execution threads has finished processing any outstanding messages, the system directs those threads to enter the collation phase and save any results, after which the old process execution threads are terminated and the data is available for browsing.
5. Examples of Specifications and Implementations
This handout is a supplement for the first two lectures. It contains several example specifications and implementations, all written using Spec.
Section 1 contains a specification for sorting a sequence. Section 2 contains two specifications and one implementation for searching for an element in a sequence. Section 3 contains specifications for a read/write memory. Sections 4 and 5 contain implementations for a read/write memory based on caching and hashing, respectively. Finally, Section 6 contains an implementation based on replicated copies.
1. Sorting
The following specification describes the behavior required of a program that sorts sets of some type \( T \) with a "<=" comparison method. We do not assume that "<=" is antisymmetric; in other words, we can have \( t_1 \leq t_2 \) and \( t_2 \leq t_1 \) without having \( t_1 = t_2 \), so that "<=" is not enough to distinguish values of \( T \). For instance, \( T \) might be the record type [name: String, salary: Int] with "<=" comparison of the salary field. Several \( T \)'s can have different names but the same salary.
```
APROC Sort(a: SET T) -> SEQ T = << VAR b: SEQ T |
    (ALL i: T | a.count(i) = b.count(i)) /\ Sorted(b) => RET b >>
```
This specification uses the auxiliary function \( \text{Sorted} \), defined as follows.
```
FUNC Sorted(a: SEQ T) -> Bool = RET (ALL i :IN a.dom - {0} | a(i-1) <= a(i))
```
If we made Sort a FUNC rather than an APROC, what would be wrong? What could we change to make it a FUNC?
We could have written this more concisely as
```
APROC Sort(a: SET T) -> SEQ T = << VAR b :IN a.perms | Sorted(b) => RET b >>
```
using the \( \text{perms} \) method for sets that returns a set of sequences that contains all the possible permutations of the set.
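Because the spec is nondeterministic, it is best read as a postcondition on the result. A small Python checker (an illustration, not Spec) makes this concrete: any output with the same element counts as the input that is sorted under a possibly non-antisymmetric `le` satisfies the spec, so several different outputs can all be correct.

```python
from collections import Counter

def satisfies_sort_spec(a, b, le):
    """Check the Sort postcondition: b has the same elements as a
    (counted with multiplicity) and is sorted under le, where le need
    not be antisymmetric."""
    same_elements = Counter(a) == Counter(b)
    sorted_b = all(le(b[i - 1], b[i]) for i in range(1, len(b)))
    return same_elements and sorted_b
```

With the name/salary example, two outputs that order equal-salary records differently both satisfy the spec.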
2. Searching
**Search specification**
We begin with a specification for a procedure to search an array for a given element. Again, this is an \( \text{APROC} \) rather than a \( \text{FUNC} \) because there can be several allowable results for the same inputs.
```
APROC Search(a: SEQ T, x: T) -> Int RAISES {NotFound} =
    << IF VAR i | (0 <= i /\ i < a.size /\ a(i) = x) => RET i [*] RAISE NotFound FI >>
```
Or, equivalently but slightly more concisely:
```
APROC Search(a: SEQ T, x: T) -> Int RAISES {NotFound} =
    << IF VAR i | a(i) = x => RET i [*] RAISE NotFound FI >>
```
**Sequential search implementation**
Here is an implementation of the \( \text{Search} \) specification given above. It uses sequential search, starting at the first element of the input sequence.
```
APROC SeqSearch(a: SEQ T, x: T) -> Int RAISES {NotFound} = << VAR i := 0 |
    DO i < a.size => IF a(i) = x => RET i [*] i := i + 1 FI OD; RAISE NotFound >>
```
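A direct Python analogue of SeqSearch (illustrative only): it returns the index of the first occurrence, which is one of the results that the nondeterministic Search spec allows.

```python
class NotFound(Exception):
    """Raised when the element is absent, mirroring the spec's exception."""
    pass

def seq_search(a, x):
    """Sequential search: return the index of the first occurrence of x
    in a, raising NotFound when x is absent."""
    for i, v in enumerate(a):
        if v == x:
            return i
    raise NotFound
```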
**Alternative search specification**
Some searching algorithms, for example, binary search, assume that the input argument sequence is sorted. Such algorithms require a different specification, one that expresses this requirement.
```
APROC Search1(a: SEQ T, x: T) -> Int RAISES {NotFound} = <<
    IF Sorted(a) => IF VAR i | a(i) = x => RET i [*] RAISE NotFound FI
    [*] HAVOC
    FI >>
```
You might consider writing the specification to raise an exception when the array is not sorted:
```
APROC Search2(a: SEQ T, x: T) -> Int RAISES {NotFound, NotSorted} = <<
    IF Sorted(a) => IF VAR i | a(i) = x => RET i [*] RAISE NotFound FI
    [*] RAISE NotSorted
    FI >>
```
This is not a good idea. The whole point of binary search is to obtain \( O(\log n) \) time performance (for a sorted input sequence). But any implementation of the \( \text{Search2} \) specification requires an \( O(n) \) check, even for a sorted input sequence, in order to verify that the input sequence is in fact sorted.
This is a simple but instructive example of the difference between defensive programming and efficiency. If \( \text{Search} \) were part of an operating system interface, it would be intolerable to have \( \text{HAVOC} \) as a possible transition, because the operating system is not supposed to go off the deep end no matter how it is called (though it might be OK to return the wrong answer if the input isn't sorted; what would that specify?). On the other hand, the efficiency of a program often depends on assumptions that one part of it makes about another, and it's appropriate to express such an assumption in a spec by saying that you get \( \text{HAVOC} \) if it is violated. We don't care to be more specific about what happens because we intend to ensure that it doesn't happen. Obviously a program written in this style will be more prone to undetected or obscure errors than one that checks the assumptions, as well as more efficient.
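For comparison, here is a Python binary search in the spirit of Search1: it assumes the input is sorted and makes no attempt to verify it, so its behavior on unsorted input is simply unspecified (the spec's HAVOC). This is what buys the \( O(\log n) \) bound.

```python
class NotFound(Exception):
    pass

def binary_search(a, x):
    """Binary search over a sequence assumed sorted under <=.
    If a is not sorted, the result is unconstrained, matching the
    Search1 spec rather than the defensive Search2 spec."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < x:
            lo = mid + 1
        elif x < a[mid]:
            hi = mid
        else:
            return mid
    raise NotFound
```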
3. Read/write memory
The simplest form of read/write memory is a single read/write register, say of type \( D \) (for data), with arbitrary initial value. The following Spec module describes this:
```
MODULE Register [D] EXPORT Read, Write =

VAR x: D                               % arbitrary initial value

APROC Read() -> D = << RET x >>
APROC Write(d) = << x := d >>

END Register
```
Now we give a specification for a simple addressable memory with elements of type \( D \). This is like a collection of read/write registers, one for each address in a set \( A \). In other words, it’s a function from addresses to data values. For variety, we include new \text{Reset} and \text{Swap} operations in addition to \text{Read} and \text{Write}.
```
MODULE Memory [A, D] EXPORT Read, Write, Reset, Swap =

TYPE M = A -> D

VAR m := Init()

APROC Init() -> M = << VAR m' | (ALL a | m'!a) => RET m' >>
    % Choose an arbitrary function that is defined everywhere.

APROC Read(a) -> D = << RET m(a) >>

APROC Write(a, d) = << m(a) := d >>

APROC Reset(d) = << m := M{* -> d} >>
    % Set all memory locations to d.

APROC Swap(a, d) -> D = << VAR d' := m(a) | m(a) := d; RET d' >>
    % Set location a to the input value and return the previous value.

END Memory
```
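As a concrete (if much less abstract) analogue, the Memory spec can be transcribed into Python, with the arbitrary initial values replaced by a caller-supplied initial value:

```python
class Memory:
    """Executable analogue of the Memory spec: a total map from a fixed
    address set to data values."""
    def __init__(self, addresses, init):
        self._m = {a: init for a in addresses}

    def read(self, a):
        return self._m[a]

    def write(self, a, d):
        self._m[a] = d

    def reset(self, d):
        # Set all memory locations to d.
        for a in self._m:
            self._m[a] = d

    def swap(self, a, d):
        # Set location a to d and return the previous value.
        prev = self._m[a]
        self._m[a] = d
        return prev
```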
The next three sections describe implementations of Memory.
4. Write-back cache implementation
Our first implementation is based on two memory mappings, a main memory \( m \) and a write-back cache \( c \). The implementation maintains the invariant that the number of addresses at which \( c \) is defined is constant. A real cache would probably maintain a weaker invariant, perhaps bounding the number of addresses at which \( c \) is defined.
```
MODULE WBCache [A, D] EXPORT Read, Write, Reset, Swap =
    % implements Memory

TYPE M = A -> D
     C = A -> D

CONST CSize: Int := ...                % cache size

VAR m := InitM()
    c := InitC()

APROC InitM() -> M = << VAR m' | (ALL a | m'!a) => RET m' >>
    % Returns an M with arbitrary values.

APROC InitC() -> C = << VAR c' | c'.dom.size = CSize => RET c' >>
    % Returns a C that has exactly CSize entries defined, with arbitrary values.

APROC Read(a) -> D = << Load(a); RET c(a) >>

APROC Write(a, d) = << IF ~c!a => FlushOne() [*] SKIP FI; c(a) := d >>
    % Makes room in the cache if necessary, then writes to the cache.

APROC Reset(d) = ...                   % exercise for the reader

APROC Swap(a, d) -> D = << VAR d' | Load(a); d' := c(a); c(a) := d; RET d' >>

% Internal procedures.

APROC Load(a) = << IF ~c!a => FlushOne(); c(a) := m(a) [*] SKIP FI >>
    % Ensures that address a appears in the cache.

APROC FlushOne() = ...
    % Removes one (arbitrary) address from the cache, writing the data value
    % back to main memory if necessary.

FUNC Dirty(a) -> Bool = RET c!a /\ c(a) # m(a)
    % Returns true if the cached value at a is more up-to-date than main memory.

END WBCache
```
The following Spec function is an abstraction function mapping a state of the \text{WBCache} module to a state of the \text{Memory} module. It’s written to live inside the module. It says that the contents of location \( a \) is \( c(a) \) if \( a \) is in the cache, and \( m(a) \) otherwise.
```
FUNC AF() -> M = RET (LAMBDA (a) -> D = IF c!a => RET c(a) [*] RET m(a) FI)
```
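The same design can be sketched in Python. This sketch maintains the weaker bounded-size invariant mentioned above rather than the exact-CSize invariant, picks an arbitrary victim in `_flush_one`, and expresses the abstraction function as a dictionary merge:

```python
import random

class WBCache:
    """Sketch of the write-back cache: reads and writes go through a
    bounded cache c; a flush picks an arbitrary resident address and
    writes it back to main memory m only when dirty."""
    def __init__(self, addresses, init, csize):
        self.m = {a: init for a in addresses}
        self.c = {}
        self.csize = csize

    def _flush_one(self):
        a = random.choice(list(self.c))      # arbitrary victim
        if self.c[a] != self.m[a]:           # Dirty(a)
            self.m[a] = self.c[a]
        del self.c[a]

    def _load(self, a):
        # Ensure address a appears in the cache.
        if a not in self.c:
            if len(self.c) >= self.csize:
                self._flush_one()
            self.c[a] = self.m[a]

    def read(self, a):
        self._load(a)
        return self.c[a]

    def write(self, a, d):
        if a not in self.c and len(self.c) >= self.csize:
            self._flush_one()
        self.c[a] = d

    def af(self):
        # Abstraction function: c(a) if cached, else m(a).
        return {**self.m, **self.c}
```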
5. Hash table implementation
Our second implementation of Memory uses a hash table for the representation.
```
MODULE HashMemory [A WITH {hf: A -> Int}, D] EXPORT Read, Write, Reset, Swap =
    % Implements Memory.
    % The module expects that the hash function A.hf is total and that
    % its range is 0 .. n for some n.

TYPE Pair  = [a, d]
     B     = SEQ Pair                  % Bucket in hash table
     HashT = SEQ B

VAR nb := NumBs()                      % Number of Buckets
    m  := HashT.fill(B(), nb)          % Memory hash table; initially empty
    default: D                         % arbitrary default value

APROC Read(a) -> D = << VAR b := m(a.hf), i: Int |
    i := FindEntry(a, b) EXCEPT NotFound => RET default; RET b(i).d >>

APROC Write(a, d) = << VAR b := DeleteEntry(a, m(a.hf)) |
    m(a.hf) := b + {Pair(a, d)} >>

APROC Reset(d) = << m := HashT.fill(B(), nb); default := d >>

APROC Swap(a, d) -> D = << VAR d' | d' := Read(a); Write(a, d); RET d' >>

% Internal procedures.

FUNC NumBs() -> Int =
    % Returns the number of buckets needed by the hash function;
    % HAVOC if the hash function is not as expected.
    IF VAR n: Nat | A.hf.rng = (0 .. n).set => RET n + 1 [*] HAVOC FI

APROC FindEntry(a, b) -> Int RAISES {NotFound} =
    % If a appears in a pair in b, returns the index of some pair
    % containing a; otherwise raises NotFound.
    << VAR i :IN b.dom | b(i).a = a => RET i [*] RAISE NotFound >>

APROC DeleteEntry(a, b) -> B = << VAR i: Int |
    % Removes some pair with address a from b, if any exists.
    i := FindEntry(a, b) EXCEPT NotFound => RET b;
    RET b.sub(0, i-1) + b.sub(i+1, b.size-1) >>

END HashMemory
```
Note that FindEntry and DeleteEntry are APROCs because they are not deterministic when given arbitrary \( b \) arguments.
The following is a key invariant that holds between invocations of the operations of HashMemory:
```
FUNC Inv() -> Bool = RET
    (   nb > 0
     /\ m.size = nb
     /\ (ALL a | a.hf IN m.dom)
     /\ (ALL i :IN m.dom, p :IN m(i).rng | p.a.hf = i)
     /\ (ALL a | {j :IN m(a.hf).dom | m(a.hf)(j).a = a}.size <= 1) )
```
This says that the number of buckets is positive, that the hash function maps all addresses to actual buckets, that a pair containing address \( a \) appears only in the bucket at index \( a.hf \) in \( m \), and that at most one pair for an address appears in the bucket for that address. Note that these conditions imply that in any reachable state of HashMemory, each address appears in at most one pair in the entire memory.
The following Spec function is an abstraction function between states of the HashMemory module and states of the Memory module:
```
FUNC AF() -> M = RET
    (LAMBDA (a) -> D =
        IF VAR i :IN m.dom, p :IN m(i).rng | p.a = a => RET p.d [*] RET default FI)
```
That is, the data value for address \( a \) is any value associated with address \( a \) in the hash table; if there is none, the data value is the default value. Spec says that a function is undefined at an argument if its body can yield more than one result value. The invariants given above ensure that the \text{LAMBDA} is actually single-valued for all the reachable states of HashMemory.
Of course HashMemory is not a fully detailed implementation. Its main deficiency is that it doesn’t explain how to maintain the variable-length bucket sequences, which is usually done with a linked list. However, the implementation does capture all the essential details.
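A Python transcription of the essential details (using a list per bucket, and `% nb` in place of the spec's assumption that the range of `hf` is exactly 0 .. n):

```python
class HashMemory:
    """Sketch of the hash-table memory: nb buckets, each a list of
    (address, data) pairs; an address lives only in the bucket selected
    by the hash function, and at most one pair per address exists."""
    def __init__(self, nb, default, hf):
        self.nb = nb
        self.hf = hf
        self.default = default
        self.m = [[] for _ in range(nb)]

    def _bucket(self, a):
        return self.m[self.hf(a) % self.nb]

    def read(self, a):
        for addr, d in self._bucket(a):
            if addr == a:
                return d
        return self.default

    def write(self, a, d):
        b = self._bucket(a)
        # DeleteEntry: remove any existing pair for a, then append.
        b[:] = [(addr, x) for addr, x in b if addr != a]
        b.append((a, d))

    def reset(self, d):
        self.m = [[] for _ in range(self.nb)]
        self.default = d

    def swap(self, a, d):
        prev = self.read(a)
        self.write(a, d)
        return prev
```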
6. Replicated copies
Our final implementation is based on some number \( k \geq 1 \) of copies of each memory location. Initially, all copies have the same default value. A \text{Write} operation modifies only some majority of the copies, and a \text{Read} operation examines only some majority of the copies. In order to allow the \text{Read} to determine which value is the most recent, each \text{Write} records not only its value, but also a sequence number.
For simplicity, we just show the module for a single read/write register. The constant \( k \) determines the number of copies.
```
MODULE MajorityRegister [D] =          % implements Register

CONST k := 5

TYPE N    = Nat
     KInt = N SUCHTHAT (\ i | i IN 1 .. k)          % ints between 1 and k
     Maj  = SET KInt SUCHTHAT (\ m | m.size > k/2)  % all majority subsets of KInt
     P    = [d: D, seqno: N]                        % Pair
     M    = KInt -> P                               % Memory
     S    = SET P

VAR default: D
    m := M{* -> P{d := default, seqno := 0}}

APROC Read() -> D = << RET ReadPair().d >>

APROC Write(d) = << VAR i: Int, maj |
    % Determines the highest sequence number i, then writes d paired with
    % i+1 to some majority maj of the copies.
    i := ReadPair().seqno;
    DO VAR j :IN maj | m(j).seqno # i+1 => m(j) := P{d := d, seqno := i+1} OD >>
```
```
% Internal procedures.

APROC ReadPair() -> P = << VAR s := ReadMaj() |
    % Returns a pair with the largest sequence number from some majority
    % of the copies.
    VAR p :IN s | p.seqno = (p' :IN s | p'.seqno).max => RET p >>

APROC ReadMaj() -> S = << VAR maj | RET {i :IN maj | m(i)} >>
    % Returns the set of pairs belonging to some majority of the copies.

END MajorityRegister
```
We could have written the body of ReadPair as

```
VAR s := ReadMaj() | RET s.fmax((\ p1, p2 | p1.seqno <= p2.seqno)) >>
```

except that fmax always returns the same maximal \( p \) from the same \( s \), whereas the VAR in ReadPair chooses one non-deterministically.
The following is a key invariant for MajorityRegister.

```
FUNC Inv(m: M) -> Bool = RET
    (   (ALL p :IN m.rng, p' :IN m.rng | p.seqno = p'.seqno ==> p.d = p'.d)
     /\ (EXISTS maj | (ALL i :IN maj, p :IN m.rng | m(i).seqno >= p.seqno)) )
```
The first conjunct says that any two pairs having the same sequence number also have the same data. The second conjunct says that the highest sequence number appears in some majority of the copies.
The following Spec function is an abstraction function between states of the MajorityRegister module and states of the Register module.

```
FUNC AF() -> D = RET m.rng.fmax((\ p1, p2 | p1.seqno <= p2.seqno)).d
```
That is, the abstract register data value is the data component of a copy with the highest sequence number. Again, because of the invariants, there is only one \(p.d\) that will be returned.
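The majority-register algorithm can be sketched in Python; `random.sample` stands in for the nondeterministic choice of a majority, and the correctness argument is exactly the invariant above: any two majorities intersect, so a read always sees the highest sequence number.

```python
import random

class MajorityRegister:
    """Sketch of the replicated register: k copies of (data, seqno) pairs.
    A write reads some majority to find the highest seqno, then installs
    (d, seqno + 1) at some majority; a read returns the value of the pair
    with the highest seqno among some majority of copies."""
    def __init__(self, k, default):
        assert k >= 1
        self.k = k
        self.copies = [(default, 0)] * k

    def _majority(self):
        # An arbitrary majority subset of the copy indices.
        return random.sample(range(self.k), self.k // 2 + 1)

    def _read_pair(self):
        maj = self._majority()
        return max((self.copies[i] for i in maj), key=lambda p: p[1])

    def read(self):
        return self._read_pair()[0]

    def write(self, d):
        seqno = self._read_pair()[1] + 1
        for i in self._majority():
            self.copies[i] = (d, seqno)
```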
A Model of Feedback Relationships between Software Maintenance and Information Systems Staff Management: A Case of an E-government System
Gunadi*, Geoffrey A. Sandy, and G. Michael McGrath**
* Department of Information Systems, Faculty of Engineering and Informatics, Gajayana University, Malang, Indonesia
** School of Management and Information Systems, Victoria University, Melbourne, Australia
ABSTRACT
To ensure a system's sustainability in delivering services, software maintenance (SM) is a necessary condition for resolving emerging errors and satisfying new requirements during operation. This maintenance involves a great variety of interdependent elements and processes, and requires competent and motivated staff. This research aims at developing a model capable of explicitly explaining the complexity of the causal feedback relationships between the elements and processes of SM and factors related to information systems (IS) staff management. A preliminary causal loop diagram, using the system dynamics method, was developed from the literature. A successful e-government system of a Ministry in Indonesia was selected as the case. IS staff and managers of the selected case were interviewed in depth using questionnaires developed from the preliminary model. The collected data were used to validate and refine the preliminary model. The resulting empirical causal loop model explicitly describes how various factors, which might not be close in space and time, influence the software availability level over time (and hence IS sustainability) through chains of causal relationships, and how this level in turn influences those factors. The model therefore gives management further insight into aligning an important domain of information technology with people to attain system sustainability.
Copyright © 2013 Information Systems International Conference. All rights reserved.
Corresponding Author:
Gunadi,
Department of Information Systems, Faculty of Engineering and Informatics,
Gajayana University, Malang,
Jalan Mertojoyo Blok L, Merjosari, Malang 65144, East Java, Indonesia.
Email: gunadi3374@gmail.com
1. INTRODUCTION
While most current knowledge on IS is associated with and dominated by system development, a system is in fact expected to spend most of its lifetime in the operation and maintenance stage, delivering services. The importance of the maintenance stage is emphasised by van Vliet [1]: "compared to software development, software maintenance has more impact on the well-being of an organization". In addition, the IS literature indicates that a large portion of the IS budget is allocated to SM [2]. During this stage, the emergence of unresolved errors and unsatisfied new user requirements can jeopardise the sustainability of a system's capability to serve users.
In order to achieve the system sustainability an organisation needs to ensure a high software availability level over time (sustainable software) which requires productive and quality SM [3]. Sustainable software can operate consistently to deliver services for a long period, at least as long as its designed lifetime. During this period of sustainability, emerging errors and new functionality requirements can be addressed successfully until it is no longer feasible to maintain the software but rather it must be replaced.
Considering the SM process, any SM activity is initiated by a maintenance request (MR), which is caused by software errors and/or new user requirements. Conceptually, a maintenance process is similar to a system development life cycle, but at a much smaller scale [4]. A completed SM task necessarily increases software complexity [5], and any undertaken maintenance might cause recurring maintenance because of possible ripple effects [6]. This, in turn, affects the availability level of the software or systems and triggers further maintenance requests. Therefore, to attain a high level of software availability, it is necessary to understand the feedback relationships between SM elements and activities.
In addition, an organisation cannot ignore the role of the IS staff who undertake the maintenance. Managing their performance necessarily determines SM completion and quality levels [7]. In achieving SM success, factors associated with staff motivation are among the most important issues [3, 4, 8]. SM has been regarded as a second-class job compared with software development, and maintainers are usually in the early stages of their information technology careers [9]. In the more general context of IS management, the critical role of internal IS human resources has been emphasised in much of the IS literature, for example by Acuna, Juristo and Moreno [10], and by Madachy [11].
Previous research has paid much attention to the motivation and performance of human resources [12], especially IS staff [13, 14]. Within the SM context, the performance of IS staff might be measured by their productivity in completing SM and by the quality of the resulting SM. According to expectancy theory [12], IS staff performance relates dynamically, in feedback fashion, to effort, expected rewards and actual rewards [13]. Performance in SM is also affected by software complexity [15]. In addition, IS staff performance both influences and is influenced by staff competence [11] and experience [16]. The environment surrounding IS staff also has an impact on their performance [13]. Organisational rewards, which are normally determined by staff performance, relate to some degree to absenteeism [17]. In the information technology area, competence naturally degrades over time unless it is continuously improved through appropriate training [18].
These previous studies indicate the existence of complex factors and feedback relationships affecting the availability level of software over time. Little has been done in previous research to visually model this complexity and accommodate the dynamic feedback relationships between those factors. It is important to understand how elements and processes of software maintenance relate to each other and to IS staff management factors in order to assist SM management in achieving successful SM.
2. RESEARCH METHOD
2.1. Qualitative system dynamics
This research applies the qualitative system dynamics (SD) approach [19-21], represented visually as a causal loop diagram (CLD), to model the relationships between elements and processes of SM and IS staff management related factors. The approach was chosen because the resulting model can reveal chains of feedback relationships between variables that form causal loops, indicating how two or more variables relate to each other even when they are not close in time and space, and showing how an effect variable in turn influences its cause variable. The CLD can therefore generate insight into the dynamic structure of the system [11, 22]. The method is a particularly appropriate modelling approach where time and feedback loops are important, and where considerable complexity, ambiguity and uncertainty exist [23].
Technically, a CLD consists of words or phrases linked by curved arrows, each of which carries a polarity and, where relevant, a time delay symbol [22, 24]. An arrow represents a causal relationship between two factors. The polarity is symbolised by ‘+’, indicating that the two related variables change in the same direction, or ‘-’, showing that the two linked variables vary in opposite directions; a time delay is shown by ‘//’ crossing the arrow. A CLD is formulated by referring to the relevant theories and previous research associated with the formulated problem, as well as to the researcher’s mental model of the research problem [22].
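As a small illustration of these conventions, loop polarity can be computed mechanically: by the usual SD convention, the polarity of a loop is the product of its link polarities, so a loop with an even number of ‘-’ links is reinforcing (R) and one with an odd number is balancing (B). The following sketch (the variable names are hypothetical, not taken from the model) encodes links as (cause, effect, polarity) triples:

```python
# Illustrative sketch: a causal loop as a list of (cause, effect,
# polarity) links; the loop's polarity is the product of the link
# polarities: +1 -> reinforcing (R), -1 -> balancing (B).

def loop_type(links):
    """links: list of (cause, effect, polarity) with polarity +1 or -1."""
    polarity = 1
    for _cause, _effect, p in links:
        polarity *= p
    return "reinforcing" if polarity > 0 else "balancing"

# A hypothetical three-factor loop with a single negative link:
loop = [("effort", "delivery level", +1),
        ("delivery level", "effort gap", -1),
        ("effort gap", "effort", +1)]
print(loop_type(loop))  # -> balancing
```

The same helper applied to a loop with zero or two negative links returns "reinforcing", matching the R/B labelling used later in the CLD.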
2.2. The Case
A unit of a Ministry of the Republic of Indonesia was selected as the Case, for the following reasons. The unit plans, develops, operates, and maintains the information systems of the Ministry, which include software, hardware, people, data, and computer networks, to support the attainment of the Ministry’s mission. The unit won an e-Government Award in several consecutive years. Currently, it runs a number of application systems and is supported by 16 staff members who have skills and knowledge in computer programming, database management, and computer network installation, operation and maintenance. The website of the Ministry and most of the application systems currently in operation were developed in-house. The unit has successfully run and maintained many of these application systems for more than five years, serving and supporting organisational needs in the central office and in other units all over the country.
2.3. Data collection, model development and validation
A CLD was constructed as a preliminary model based on previous research and the researcher’s mental model [25]. Data collected from the Case, which reflects the real world, was used to validate the preliminary model.
Development of the preliminary model from previous research started by identifying the dynamic behaviour of system software sustainability and introducing endogenous factors from within the SM and IS staff management domains that cause this dynamic. Included in the identification process is the time delay taken by a factor in influencing other factor(s). An identified factor can be a condition, situation, action, decision or physical element’s condition within the domains that can influence and be influenced by other success factors; both quantitative and qualitative success factors are possible.
Data was collected through in-depth interviews guided by the preliminary CLD and structured open-ended questionnaires. This CLD, along with the relevant existing studies, was used to prepare the questionnaires.
The interviewees consisted of a senior manager, an application system manager, and nine IS staff of the unit. The senior manager was asked to evaluate whether the factors and dynamic feedback relationships of the preliminary model reflect reality. The model was presented in a stage-wise fashion, starting from the simplest model drawn on a single page through to the most complex model on a final page. The model was revised accordingly whenever the senior manager did not confirm a factor or relationship in the preliminary model. In addition, the senior manager was interviewed using a questionnaire focusing on managerial aspects of SM and IS staff. The application system manager was interviewed using a second questionnaire addressing operational matters of SM and IS staff management under his responsibility. The IS staff were interviewed using a third questionnaire in order to elicit their actual practice in doing SM, their effort, competence, and rewards received.
Findings from the interviews with the senior manager were used to revise and improve the preliminary model, while those elicited from the application systems manager and IS staff were used to corroborate the ones obtained from the senior manager. Further clarification was sought from the interviewees when discrepancies were found during the interviews.
3. RESULTS AND ANALYSIS
Data collected from the Case indicates that the availability levels of the system software have been very high for a number of consecutive years, as described in Figure 1. Management and staff of the Unit have been able to maintain and sustain the software they operate.

Figure 2 is the resulting CLD showing the model of feedback relationships between the elements and processes of SM and the IS staff management related factors of the e-government system of the Case. The model can be decomposed into maintenance process, motivational factors, and competence development subsets.
**Maintenance process.** Causal feedback relationships in this subset of the CLD in Figure 2 involve some key factors: software availability level, maintenance requests, required effort and competence, effort gap and competence gap, delivery level and maintenance quality, software complexity and recurrent maintenance. This subset also includes factors which are not part of the causal loop: software quality resulting from the development project, environmental pressure for enhancements, and environmental support factors.
System software sustainability is indicated by a high software availability level over time. This dynamic level is influenced by three factors. First, it is positively influenced by the software quality resulting from the development project. Second, it is negatively affected by environmental pressure for enhancements. Third, the level of software availability is also negatively influenced by the maintenance itself, through increasing software complexity and recurrent maintenance requests as well as the maintenance delivery level. In turn, the software availability level negatively influences the MRs. Maintenance requests occur randomly, and a large number of maintenance requests means a large maintenance workload. The workload depends both on the degree of difficulty of each maintenance problem and on the number of requests in a particular time unit.
Solving the maintenance workload needs two things simultaneously: staff effort and competence. These represent two crucial dynamic factors: the motivation and the ability of the SM staff. However, as the levels of these factors are dynamic, there are always possible gaps between the required and actual levels. These gaps affect the completed maintenance delivery level and quality. The delivery level is also positively affected by environmental support factors, such as smooth communication with other competent colleagues and with users to obtain their support. Over time, as the number of completed maintenance tasks increases, the software complexity also increases. In turn, this software complexity negatively affects the software availability level, forming a balancing closed loop of dynamic feedback relationships, B1. Hence, this results in a reduction of software availability. Much research in software engineering has shown that complex software, especially software with many source lines of code (SLOC), has a higher probability of error occurrence than simpler software.
On the other hand, the quality level of maintenance has a negative effect on recurrent maintenance. Recurrent maintenance especially takes place when a fault caused by delivered maintenance becomes evident at some point in the future (ripple effect). It also happens when requesters repeat their request due to cancellation, incomplete delivery or rescheduling of maintenance. The more recurrent maintenance occurs, the lower the software availability; this forms a reinforcing loop, R1. Observing this subset of the CLD, SM management can comprehend the importance of the motivation and competence of the IS staff in ensuring the software availability needed to deliver services. The relationships between these factors are masked by chains of causal relationships.
**Motivational factors.** Another subset of the CLD in Figure 2 presents feedback relationships between factors related to IS staff motivation. This subset is a representation and implementation of the expectancy theory and other related factors in an e-government SM context, and involves: maintenance quality and delivery level, actual organisational rewards and staff expectation of rewards, staff satisfaction with the rewards provided, staff perception of their needs and goals fulfilment, absence level and actual individual effort, effort gap, and total actual effort. This subset also includes the total staff factor and the VIP requests and other duty factors, which are not involved in a causal loop.
The delivery level and maintenance quality factors, which represent maintenance performance, positively influence both actual organisational rewards and staff expectation of rewards. These influences may take time. Any discrepancy between the two will affect staff satisfaction with the rewards provided. As underlined by the expectancy theory, the value of the actual rewards depends on how the IS staff perceive them. In turn, this satisfaction positively affects staff perception of their needs and goal fulfilment. A high level of this perception improves actual individual effort and reduces the absenteeism level, which in turn increases the total actual effort. Conversely, over time, staff may be present in the workplace but not exert their full effort in maintaining the software if they perceive that their effort does not lead to the fulfilment of their needs and goals. An increase in the absenteeism level may be read as the staff’s attempt to fulfil their needs and goals elsewhere. The chain of relationships involving the actual organisational rewards factor forms a reinforcing loop, R4, while the one containing staff expectation of rewards creates a balancing loop, B3.
The effort factor of the expectancy theory is represented by actual individual effort, total actual effort and the effort gap. Individual effort is defined as the amount of time (hours) for a period of time (such as one day) spent by IS staff to work on an assigned maintenance task. Total effort represents the organisational effort in terms of its staff spent in performing maintenance tasks for a specified period of time. It is a multiplicative function of individual effort and total staff. As the actual individual effort increases, the total actual effort also increases, assuming the total staff is constant, which in turn reduces the effort gap.
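The multiplicative relationship described above can be sketched in a few lines (the exact formulas are our illustrative assumptions; the CLD itself is qualitative):

```python
# Illustrative arithmetic for the effort factors described in the text
# (the formulas are assumed, not given in the paper).

def total_actual_effort(individual_effort_hours, total_staff):
    # multiplicative function of individual effort and head count
    return individual_effort_hours * total_staff

def effort_gap(required_effort_hours, actual_effort_hours):
    # the portion of required effort not covered by actual effort
    return max(0, required_effort_hours - actual_effort_hours)

# 16 staff (as in the Case unit) each spending 5 hours/day on maintenance:
actual = total_actual_effort(5, 16)
print(actual, effort_gap(100, actual))  # -> 80 20
```

Raising individual effort (or head count) shrinks the gap, which is exactly the pruning action of the balancing loop B2 described next.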
The expectancy theory also mentions that staff members understand that the rewards can only be obtained by performance. An observation made by staff towards their performance could lead to a change in their exerted effort level, which in turn influences the performance level. The staff members tend to raise their effort – for example, by increasing the number of working hours solely dedicated to maintenance – if their normal effort does not lead to their expected performance, and the other way around. These relationships are expressed as a balancing loop B2 of delivery level $\rightarrow$ actual individual effort $\rightarrow$ total actual effort $\rightarrow$ effort gap $\rightarrow$ delivery level.
**Competence development.** In addition to motivation, another subset of the CLD in Figure 2 presents feedback causal relationships of IS staff competence-related factors. In this subset, some key success factors are average current competence level, maintenance quality and delivery level, learning and experience level, needs for training, training gap, fraction of less competent and competent staff. Factors included in the subset which are not part of a causal loop are actual training, new staff, and competence obsolescence due to technological change.
The average current competence level expresses the average nominal amount of maintenance tasks that can be completed for a particular period of time by a staff member when there is no loss factor. That is when the staff is fully motivated and the supporting facilities to perform maintenance are perfect. The average current competence level is negatively influenced by an exogenous factor called competence obsolescence due to IT advancement, but the decrease in the competence level takes time. There is a reinforcing loop R2 which links the average current competence level, competence gap, delivery level and learning and experience levels. To a degree, any completed maintenance delivery level improves staff’s learning and experience levels. Through time, as the exposure to various challenges of maintenance increases in the course of the increase of the delivery level, then the staff’s experience also increases. The increase in staff experience causes the average current competence level to increase as well.
The average current competence level also relates to training, especially in response to the competence obsolescence due to IT advancement. The increase in the average current competence level, through narrowing the competence gap, improves maintenance quality which in turn reduces the needs for training or, at least, the level of training needed. The need for training and the actual training, which responds to the need for training, will determine the training gap which represents the value or effectiveness of training to the improvement of the average competence level. Assuming the actual training and the competence obsolescence factors are at a constant level, a higher need for training will widen the training gap which indicates a decrease in the average current competence level. This chain of relationships forms a reinforcing loop R3. On the other hand, the need for training is negatively influenced by the average current competence level.
Additionally, the fractions of less competent and competent staff, which are exogenous factors, negatively influence the average current competence level and positively affect the need for training. Newly recruited staff determine these fractions.
4. CONCLUSION
This study has developed an empirical CLD, based on an e-government system Case, which explicitly presents the way elements and processes of SM and IS staff management factors, even when they are not close in space and time, relate to each other in influencing the software availability level over time. The model describes how the factors influence the availability level and how that level in turn influences the factors through chains of causal relationships. The CLD facilitates exploration and explanation of the dynamic feedback relationships between factors; for example, the model shows how dynamic rewards eventually influence the software availability level and vice versa. The model can therefore assist management in establishing fruitful policies for aligning an important domain of information technology with people to attain system sustainability.
5. ACKNOWLEDGMENT
A draft of this paper was proofread by Sri Wahyuni, PhD.
REFERENCES
Copyright © 2013 ISICO
BIBLIOGRAPHY OF AUTHORS
Gunadi is a lecturer in Information Systems at the Department of Information Systems, Gajayana University, Malang. He received a PhD degree in Information Systems from Victoria University, Melbourne, Australia. His research interests are e-government, information systems management, information systems human resource management, system dynamics modelling, etc.
Geoffrey A. Sandy was an Associate Professor of Information Systems, School of Information Systems, Victoria University, Melbourne, Australia.
G. Michael McGrath is a Professor of Information Systems at the School of Management and Information Systems, Victoria University, Melbourne, Australia.
An Approach for Achieving Local Consistency in Binary CSP Involving Disjunctive Constraints
Carlos Castro
INRIA Lorraine & CRIN
615, rue du Jardin Botanique, BP 101,
54602 Villers-lès-Nancy Cedex, France
e-mail: Carlos.Castro@loria.fr
Abstract
In this paper we present an approach for the systematic manipulation of disjunctive constraints when verifying local consistency in CSP. We propose to decompose a set of constraints into several subproblems, making an explicit distinction between elementary and disjunctive constraints. Local consistency is then verified for each subproblem and the results, eliminations of impossible values for the variables, are communicated through membership constraints. The algorithm stops when there are no more changes in these constraints. We use constructive disjunction to deal with disjunctive constraints in order to prune the search space. We prove that if we can decompose the set of disjunctive constraints into several subproblems whose sets of variables are disjoint, we can do better than existing approaches such as choice points. This general approach for solving constraints fits very well the particular case of scheduling problems, one of the most successful applications of constraint programming.
Keywords: Artificial Intelligence, Constraint Satisfaction Problems, Disjunctive Constraints, Constructive Disjunction.
1 Introduction
One of the most difficult problems in constraint solving comes from disjunctive constraints. Several techniques have been proposed by the Artificial Intelligence community to deal with this kind of constraint. Because of its theoretical and practical interest, the manipulation of disjunctive constraints in Constraint Satisfaction Problems (CSP) is a hot research topic [13, 4, 16]. In this work we present an approach for the systematic manipulation of disjunctive constraints when verifying local consistency. We first propose to decompose a set of constraints into two subproblems, the first one with all the elementary constraints and the second one with all the disjunctive constraints. We then apply a graph decomposition algorithm to the second subproblem in order to obtain several sets of disjunctive constraints whose sets of variables are disjoint. We verify local consistency for each subproblem and the results, eliminations of impossible values for the variables, are communicated through membership constraints. The algorithm stops when there are no more changes in the membership constraints. We prove that if we can decompose the set of disjunctive constraints into at least two subproblems, we can do better than existing approaches such as choice points. We have implemented these ideas in a prototype for solving CSP and have carried out some simple benchmarks to validate this theoretical result. We have found that this general approach to manipulating CSP fits very well the case of scheduling problems, one of the most successful applications of constraint programming [3]. This paper is organised as follows. Section 2 presents CSP: their definition and a brief description of techniques used to solve them. Section 3 introduces disjunctive constraints and presents different approaches used by the CSP community to deal with them. In section 4 we present our approach in detail. Finally, in section 5 we conclude the paper.
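Since constructive disjunction is central to the approach, a minimal sketch may help: each disjunct is propagated separately and a value survives only if at least one disjunct supports it, so the reduced domain is the union of the per-disjunct reductions and no choice point is created. The single-variable setting and the names below are illustrative assumptions, not the prototype's code:

```python
# Sketch of the domain pruning achieved by constructive disjunction for
# a single variable: a value is kept iff at least one disjunct accepts
# it, i.e. the new domain is the union of the per-disjunct reductions.

def constructive_disjunction(domain, disjuncts):
    """domain: set of candidate values; disjuncts: list of predicates."""
    pruned = set()
    for check in disjuncts:
        pruned |= {v for v in domain if check(v)}
    return pruned

# Toy disjunctive constraint "x <= 2 or x >= 8" on domain 0..9:
dom = set(range(10))
print(sorted(constructive_disjunction(dom, [lambda v: v <= 2,
                                            lambda v: v >= 8])))
# -> [0, 1, 2, 8, 9]; values 3..7 are pruned without any choice point
```

The values 3..7 satisfy neither disjunct, so they can be removed deterministically, which is exactly the pruning that a choice-point treatment would only discover by search.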
2 CSP
In this section we present a formal definition of CSP and briefly describe techniques used to solve them. More details can be found in [2].
2.1 Definitions
An elementary constraint $c^?$ is an atomic formula built on a signature $\Sigma = (\mathcal{F}, \mathcal{P})$, where $\mathcal{F}$ is a set of ranked function symbols and $\mathcal{P}$ a set of ranked predicate symbols, and a denumerable set $\mathcal{X}$ of variable symbols $^1$. Elementary constraints are combined with the usual first-order connectives. We denote the set of constraints built from $\Sigma$ and $\mathcal{X}$ by $\mathcal{C}(\Sigma, \mathcal{X})$. Given a structure $\mathcal{D} = (D, I)$, where $I$ is an interpretation function and $D$ the domain of this interpretation, a $(\Sigma, \mathcal{X}, \mathcal{D})$-CSP is any set $C = (c^?_1 \land \ldots \land c^?_n)$ such that $c^?_i \in \mathcal{C}(\Sigma, \mathcal{X})$ for all $i = 1, \ldots, n$. An assignment is a mapping from $\mathcal{X}$ to $D$ that associates to each variable $x \in \mathcal{X}$ an element of $D$. A solution of $C$ is an assignment such that all constraints $c^?_i \in C$ are satisfied in $\mathcal{D}$. Given a variable $x \in \mathcal{X}$ and a non-empty set $D_x \subseteq D$, the membership constraint of $x$ is the relation $x \in^? D_x$. We use these membership constraints to make the domain reduction process explicit during constraint solving. In practice, the sets $D_x$ are set to $D$ at the beginning of the constraint solving process, and constraint propagation eventually reduces them. As all first-order connectives can be expressed in terms of conjunctions and disjunctions, we consider the set of constraints $C$ as follows
$$C = \bigwedge_{x \in \mathcal{X}} (x \in^? D_x) \land \bigwedge_{i \in I} (c^?_i) \land \bigwedge_{j \in J} (c^?_{1j} \lor c^?_{2j})$$
$^1$For clarity, constraints are syntactically distinguished from formulae by a question mark exponent on their predicate symbols.
where \( I \) indexes the elementary constraints and \( J \) the disjunctive constraints. For simplicity we consider only disjunctive constraints that are disjunctions of exactly two elementary constraints. We use \( e \), \( n \), and \( a \) to denote the number of constraints, the number of variables and the size of the variables' domains, respectively, in a CSP, and we denote by \( \text{Var}(c^?) \) the set of variables in a constraint \( c^? \). In this work we only consider binary CSP, i.e., problems where at most two variables are involved in each constraint \(^2\).
### 2.2 Solving CSP
Typical tasks defined in connection with CSP are to determine whether a solution exists, and to find one or all the solutions. In this section we present three categories of techniques used in processing CSP: Searching Techniques, Problem Reduction, and Hybrid Techniques. Kumar's work \([7]\) is an excellent survey on this topic.
**Searching Techniques in CSP** Searching consists of techniques for the systematic exploration of the space of all solutions. The simplest brute-force algorithm, *generate-and-test*, also called *trial-and-error search*, is based on the idea of testing every possible combination of values to obtain a solution of a CSP. The generate-and-test algorithm is correct but faces an obvious combinatorial explosion. To avoid this poor performance, the basic algorithm commonly used for solving CSP is the *simple backtracking* search algorithm, also called *standard backtracking* or *depth-first search with chronological backtracking*, a general search strategy that has been widely used in problem solving. Although backtracking is much better than generate-and-test, one can almost always observe pathological behaviour. Bobrow and Raphael have called this class of behaviour *thrashing* \([1]\). Thrashing can be defined as the repeated exploration of subtrees of the backtrack search tree that differ only in inessential features, such as the assignments to variables irrelevant to the failure of the subtrees. The time complexity of backtracking is \( O(a^n) \), i.e., the time taken to find a solution tends to be exponential in the number of variables \([9]\). In order to avoid the resolution of this kind of complex problem, the notion of problem reduction has been developed.
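The simple backtracking scheme described above can be sketched as follows (the encoding of variables and binary constraints is an illustrative assumption, not taken from the cited algorithms):

```python
# A minimal simple-backtracking (depth-first search with chronological
# backtracking) solver for a binary CSP, for illustration only.

def consistent(var, value, assignment, constraints):
    """Check a candidate value against constraints whose variables
    are all assigned under the trial assignment."""
    trial = dict(assignment)
    trial[var] = value
    return all(pred(trial[x], trial[y])
               for x, y, pred in constraints
               if x in trial and y in trial)

def backtrack(assignment, variables, domains, constraints):
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            solution = backtrack(assignment, variables, domains, constraints)
            if solution is not None:
                return solution
            del assignment[var]  # chronological backtracking
    return None  # dead end: undo and try the next value one level up

# Toy binary CSP: x < y and y != z over domains {1, 2, 3}.
sol = backtrack({}, ["x", "y", "z"],
                {v: [1, 2, 3] for v in "xyz"},
                [("x", "y", lambda a, b: a < b),
                 ("y", "z", lambda a, b: a != b)])
print(sol)  # -> {'x': 1, 'y': 2, 'z': 1}
```

Note that the consistency check only looks at constraints between already-assigned variables, which is precisely why thrashing can occur: a doomed partial assignment may be extended many times before its failure is rediscovered.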
**Problem Reduction in CSP** Problem reduction techniques transform a CSP to an equivalent problem by reducing the values that the variables can take. Problem reduction is often referred to as *consistency maintenance* \([12]\). Consistency concepts have been defined in order to identify in the search space classes of combinations of values which could not appear together in any set of values satisfying the set of constraints. Mackworth \([8]\) proposes three levels of consistency: node, arc and path-consistency. These names come from the fact that general graphs have been used to represent binary CSP \([12]\). The most widely used level of consistency is arc consistency whose definition is the following:
Given the variables \( x_i, x_j \in \mathcal{X} \) and the constraints \( c^?_i(x_i), c^?_j(x_j), c^?_k(x_i, x_j) \in C \), the arc associated to \( c^?_k(x_i, x_j) \) is consistent if
\[
\forall a \in D_{x_i} \; \exists a' \in D_{x_j} : a \in \text{Sol}_{\mathcal{D}}(x_i \in^? D_{x_i} \land c^?_i(x_i)) \Rightarrow a' \in \text{Sol}_{\mathcal{D}}(x_j \in^? D_{x_j} \land c^?_j(x_j) \land c^?_k(a, x_j)).
\]
A network of constraints is *arc-consistent* if all its arcs are consistent. In \([10]\) Mohr and Henderson propose the algorithm AC-4 whose worst-case time complexity is \( O(ea^2) \) and they prove its optimality in terms of time.
\(^2\)As a disjunction is considered itself as a constraint we do not allow more than two variables involved in a disjunctive constraint.
It is important to realize that the varying forms of consistency algorithms can be seen as **approximation algorithms**, in that they impose **necessary** but not always **sufficient** conditions for the existence of a solution on a CSP, that is why they are often referred to as local consistency algorithms.
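As an illustration of local consistency enforcement, the following sketch implements arc consistency with the simpler AC-3 scheme (the text cites the optimal AC-4; AC-3 is shown here only because it is shorter, and the encoding of arcs as predicates is our assumption):

```python
# Sketch of arc consistency enforcement in the AC-3 style: repeatedly
# revise arcs, removing unsupported values, until a fixed point.

from collections import deque

def revise(domains, x, y, pred):
    """Remove values of x that have no support in y under pred(vx, vy)."""
    removed = False
    for vx in list(domains[x]):
        if not any(pred(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, arcs):
    """arcs: dict (x, y) -> pred. Returns False on a domain wipe-out."""
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, arcs[(x, y)]):
            if not domains[x]:
                return False  # local inconsistency detected
            # re-examine arcs pointing at the revised variable
            queue.extend((u, w) for (u, w) in arcs if w == x)
    return True

# Constraint x < y over domains {1, 2, 3}, both arc directions:
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
arcs = {("x", "y"): lambda a, b: a < b,
        ("y", "x"): lambda b, a: a < b}
ok = ac3(doms, arcs)
print(ok, sorted(doms["x"]), sorted(doms["y"]))  # -> True [1, 2] [2, 3]
```

The value 3 for \( x \) and the value 1 for \( y \) have no support and are removed, but as the text notes this is only a necessary condition: arc consistency alone does not guarantee a solution exists.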
**Hybrid Techniques**
As backtracking suffers from thrashing and consistency algorithms can only eliminate local inconsistencies, hybrid techniques, which embed consistency techniques inside backtracking algorithms, have been developed. In this way we obtain a complete algorithm that can solve all problems and in which thrashing is reduced. Hybrid techniques integrate constraint propagation algorithms into backtracking in the following way: whenever a variable is instantiated \(^3\), a new CSP is created; a constraint propagation algorithm can then be applied to remove local inconsistencies from this new CSP [17]. A lot of research has been done on algorithms that essentially fit this format. In particular, Nadel [11] empirically compares the performance of the following algorithms: Generate and Test, Simple Backtracking, Forward Checking, Partial Lookahead, Full Lookahead, and Really Full Lookahead. These algorithms differ primarily in the degree of arc consistency performed at the nodes of the search tree.
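Forward checking, the simplest of the hybrid algorithms listed above, can be sketched as a single pruning step performed after each instantiation (illustrative code, not taken from the cited papers):

```python
# Sketch of the forward-checking step: after instantiating a variable,
# prune the domains of the not-yet-assigned variables it constrains and
# fail immediately on a domain wipe-out.

def forward_check(var, value, domains, constraints, assigned):
    """Return reduced copies of the domains, or None on a wipe-out.
    constraints: list of (x, y, pred) with pred(value_x, value_y)."""
    new_domains = {v: set(d) for v, d in domains.items()}
    new_domains[var] = {value}
    for x, y, pred in constraints:
        if x == var and y not in assigned:
            new_domains[y] = {vy for vy in new_domains[y] if pred(value, vy)}
            if not new_domains[y]:
                return None
        if y == var and x not in assigned:
            new_domains[x] = {vx for vx in new_domains[x] if pred(vx, value)}
            if not new_domains[x]:
                return None
    return new_domains

# Assigning x = 3 under x < y wipes out y's domain {1, 2, 3}, so the
# failure is detected now rather than deep in the search tree:
result = forward_check("x", 3, {"x": {1, 2, 3}, "y": {1, 2, 3}},
                       [("x", "y", lambda a, b: a < b)], {"x"})
print(result)  # -> None
```

Assigning x = 1 instead would merely shrink y's domain to {2, 3}, letting the search continue with a smaller space.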
### 3 Disjunctive Constraints
The combination of two elementary constraints with a disjunction operator is called a disjunctive constraint. Many combinatorial problems involve this kind of constraint. For example, in scheduling problems these constraints come from the fact that several tasks must use the same resource and the limited capacity of that resource does not allow all tasks to be performed at the same time [14]. Let \( Task_{ij} \) be the start time of task \( i \) of job \( j \) and \( d_{ij} \) the duration of task \( i \) of job \( j \). On a machine performing a single task at a time, the capacity constraints enforce mutual exclusion for each pair of tasks assigned to the same machine. If we consider task \( k \) of jobs \( i \) and \( j \), the fact that on machine \( k \) job \( i \) runs before job \( j \) or vice versa can be expressed by the following disjunctive constraint
\[
Task_{kj} \geq Task_{ki} + d_{ki} \lor Task_{ki} \geq Task_{kj} + d_{kj}
\]
In order to perform all tasks using the same resource, a sequential order must be established; these precedence relations, which are not known a priori, are determined by the solution to the scheduling problem. This feature changes the nature of the problem, and no efficient polynomial algorithm can be exhibited for solving all problems involving disjunctive constraints. In this section we present some techniques to deal with disjunctive constraints.
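For fixed start times, the mutual-exclusion disjunction above reduces to a simple feasibility check (the numeric values below are illustrative):

```python
# The disjunctive capacity constraint on machine k, as a check: one of
# the two tasks must finish before the other starts.

def non_overlapping(start_i, dur_i, start_j, dur_j):
    return start_j >= start_i + dur_i or start_i >= start_j + dur_j

print(non_overlapping(0, 3, 3, 2))  # -> True  (job i runs, then job j)
print(non_overlapping(0, 3, 2, 2))  # -> False (the two tasks overlap)
```

The difficulty discussed in the following subsections is not checking the disjunction but searching over which disjunct holds in a full schedule.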
#### 3.1 Choice Point
The first approach used by the Constraint Logic Programming (CLP) community to deal with disjunctive constraints was to choose one disjunct during the search process, i.e., an a priori choice is made and one disjunct is posted; if the resulting set of constraints is inconsistent then the other disjunct is chosen and posted. This approach is based on the general idea of backtracking: the search space is not reduced actively but only when a clause is non-deterministically chosen, possibly leading to combinatorial explosion. In the worst case \( 2^{ND} \) combinations of constraints have to be analysed, where \( ND \) stands for the number of disjunctions. So, for many problems such an approach introduces too many choice points and yields unsatisfactory performance.
\(^3\)Variable instantiation is also called labelling process.
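The choice-point strategy can be sketched as a recursive search that posts one disjunct and falls back to the other on failure. This is an illustrative sketch (the predicate encoding of constraints is our own, not from the paper); it makes the \(2^{ND}\) worst case visible, since each disjunction doubles the search tree:

```python
# Minimal choice-point search over disjunctive constraints. A constraint is
# a predicate over an assignment; a disjunction is a pair of constraints.
# The encoding is illustrative; it shows why the worst case is 2^ND leaves.

def consistent(posted, assignment):
    """Check every posted elementary constraint under the assignment."""
    return all(c(assignment) for c in posted)

def solve(disjunctions, posted, assignment):
    """Post one disjunct per disjunction; backtrack to the other on failure."""
    if not disjunctions:
        return consistent(posted, assignment)
    first, second = disjunctions[0]
    rest = disjunctions[1:]
    # Each disjunction doubles the tree: 2^ND leaves in the worst case.
    return (solve(rest, posted + [first], assignment)
            or solve(rest, posted + [second], assignment))

# Example: mutual exclusion of two unit-duration tasks with fixed starts.
disj = [(lambda s: s["A"] + 1 <= s["B"], lambda s: s["B"] + 1 <= s["A"])]
print(solve(disj, [], {"A": 0, "B": 1}))  # True: A can run before B
```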
#### 3.2 Binary Variables
Another technique to deal with disjunctive constraints, widely used by the Operational Research community, is to introduce binary \((0,1)\) variables \([15]\). Each of the two values activates one disjunct and deactivates the other. A constraint is said to be activated if it is trivially satisfied for all value combinations of its variables. This gives the same effect as setting the constraint at a choice point, but in this case the labelling routine can select the variable for labelling at the best point in the search. Considering the disjunctive constraint
\[
\text{Task}_{kj} \geq \text{Task}_{ki} + d_{ki} \lor \text{Task}_{ki} \geq \text{Task}_{kj} + d_{kj}
\]
which establishes that on machine \(k\) job \(i\) runs before job \(j\) (first disjunct) or job \(j\) runs before job \(i\) (second disjunct). We introduce a binary variable \(X_{ij} \in \{0,1\}\) in the following way
\[
\begin{align*}
(1 - X_{ij}) \cdot M + \text{Task}_{kj} & \geq \text{Task}_{ki} + d_{ki} \\
X_{ij} \cdot M + \text{Task}_{ki} & \geq \text{Task}_{kj} + d_{kj}
\end{align*}
\]
where \(M\) is a large enough number. In this way we have transformed a disjunctive constraint into a conjunction of two elementary constraints. If \(X_{ij} = 1\) then the second constraint is active, trivially satisfied, and the first disjunct will constrain the values of the variables; in other words, job \(i\) runs before job \(j\) on machine \(k\).
As soon as a value is assigned to the binary variable during the solving process, one disjunct will be entailed by the set of constraints and the other one will be used to reduce the variables' domains. In the worst case the labelling process will try both values of each binary variable, using it as a choice point as in the first approach, i.e., in the worst case we also have to analyse \(2^{ND}\) cases.
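The effect of the big-M reformulation can be checked directly: for each value of the binary variable one inequality is binding and the other is absorbed by the \(M\) term. A minimal sketch with illustrative durations and start times (not taken from the paper):

```python
# Big-M reformulation of "job i before job j, or j before i" on machine k.
# For X = 1 the first inequality is binding and the second is absorbed by
# the M term; for X = 0 the roles swap. Durations and start times below
# are illustrative values, not taken from the paper.

M = 10**6            # "large enough": exceeds any feasible start-time gap
d_ki, d_kj = 3, 4    # durations of task k of jobs i and j

def feasible(x, start_i, start_j):
    c1 = (1 - x) * M + start_j >= start_i + d_ki   # binding when x == 1
    c2 = x * M + start_i >= start_j + d_kj         # binding when x == 0
    return c1 and c2

print(feasible(1, 0, 3))  # True:  i (duration 3) runs before j
print(feasible(0, 4, 0))  # True:  j (duration 4) runs before i
print(feasible(1, 0, 2))  # False: j starts before i finishes
```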
#### 3.3 Constructive Disjunction
A rather new approach is called constructive disjunction, which lifts common information from the alternatives \([13]\). We explain this idea using an example taken from \([16]\). Consider the following set of constraints that enforces the mutual exclusion of jobs \(A\) and \(B\) on machine \(i\)
\[
\begin{align*}
\text{Task}_{iA} + 7 \leq \text{Task}_{iB} & \lor \text{Task}_{iB} + 7 \leq \text{Task}_{iA} \\
\text{Task}_{iA} & \in \{1, \ldots , 10\} \\
\text{Task}_{iB} & \in \{1, \ldots , 10\}
\end{align*}
\]
The first disjunct constrains \(\text{Task}_{iA} \in \{1,2,3\}\) and \(\text{Task}_{iB} \in \{8,9,10\}\); the second disjunct constrains \(\text{Task}_{iA} \in \{8,9,10\}\) and \(\text{Task}_{iB} \in \{1,2,3\}\). Thus, independently of which alternative succeeds, we know that neither \(\text{Task}_{iA}\) nor \(\text{Task}_{iB}\) can take the values \(\{4,5,6,7\}\). The common information that we can deduce from both alternatives is \(\text{Task}_{iA} \in \{1,2,3,8,9,10\}\) and \(\text{Task}_{iB} \in \{1,2,3,8,9,10\}\), the union of the sets of remaining values for the variables in each disjunct. This is the essence of constructive disjunction: extract common information from the alternatives and thus allow other parts of the computation to benefit from this extra information; disjunctive constraints are used actively, without a priori choices. In the worst case, if no extra information can be extracted from the disjunctions, the labelling process will analyse \(2^{ND}\) cases.
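The pruning in this example can be reproduced mechanically: propagate each disjunct against the current domains in isolation, then take the union of the surviving values. A minimal sketch over finite domains (the `prune` helper is our own single-pass bounds reasoning, not the paper's propagation procedure):

```python
# Constructive disjunction on the mutual-exclusion example:
#   Task_iA + 7 <= Task_iB   or   Task_iB + 7 <= Task_iA,  domains 1..10.
# The prune helper is a single-pass bounds-reasoning step written for this
# illustration (it is not the paper's propagation procedure).

def prune(dom_a, dom_b, gap):
    """Keep the values consistent with a + gap <= b."""
    a = {v for v in dom_a if v + gap <= max(dom_b, default=0)}
    b = {v for v in dom_b if v >= min(dom_a, default=gap) + gap}
    return a, b

dom = set(range(1, 11))
a1, b1 = prune(dom, dom, 7)   # disjunct 1: A runs before B
b2, a2 = prune(dom, dom, 7)   # disjunct 2: B runs before A (roles swapped)

# Constructive disjunction: a variable keeps the union of the values that
# survive in each alternative.
print(sorted(a1 | a2))        # [1, 2, 3, 8, 9, 10]
print(sorted(b1 | b2))        # [1, 2, 3, 8, 9, 10]
```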
### 4 Systematic Manipulation of Disjunctive Constraints
In this section we present our approach for achieving local consistency in CSPs involving disjunctive constraints. The level of local consistency most widely used by the CSP community, arc consistency, can be achieved in polynomial time for a set of elementary constraints, but when we consider disjunctive constraints the problem becomes harder. The general idea we propose is to decompose a problem into two subproblems, the first one with all the elementary constraints and the second one with all the disjunctive constraints. Local consistency can be achieved efficiently for the first subproblem, and the second subproblem is used to extract extra information using a constructive disjunction approach. As both subproblems share variables, information about the values of the variables must be communicated; that is done through the use of membership constraints. Figure 1 presents the general schema, considering the set of constraints in the form that we explained in section 2.

Figure 1: General schema for manipulating the set of constraints
But, as verifying local consistency for a set of disjunctive constraints is a hard problem, we propose to decompose the set of disjunctive constraints as much as possible. In order to do that we use a graph decomposition algorithm which detects the maximum number of subproblems whose sets of variables are disjoint. In this way we deal with a set of easier problems, and the added cost is not significant since the decomposition algorithm has linear time complexity. Figure 2 presents the refined general schema.

Figure 2: Refined general schema
Once we apply the graph decomposition algorithm on the set of disjunctive constraints we obtain \( M \) subsets
\[ J = \bigcup_{i=1}^{M} J_i \]
so the initial set \( C \) of disjunctive constraints can be expressed by
\[ \bigwedge_{i=1}^{M} \bigwedge_{j \in J_i} (c_1^j \lor c_2^j) \]
and such that
\[ \forall i \in J_k, \forall j \in J_l : k \neq l \Rightarrow (\text{Var}(c_1^i) \cup \text{Var}(c_2^i)) \cap (\text{Var}(c_1^j) \cup \text{Var}(c_2^j)) = \emptyset \]
In the diagram of figure 2, the coordination level is in charge of decomposing the set of disjunctive constraints and adding the adequate membership constraints coming from the subproblem with elementary constraints. In the same way, the coordination level sends to this subproblem the results obtained from the set of disjunctive constraints. Local consistency is verified for each subproblem and the results are communicated through the membership constraints; the algorithm stops when there are no more changes in the set of membership constraints. For clarity we express the sets of elementary and disjunctive constraints in the following way
\[ C_1 = \bigwedge_{i \in I} c_i' \]
\[ C_2 = \bigwedge_{i=1}^{M} C_{2,i} \]
where
\[ C_{2,i} = \bigwedge_{j \in J_i} (c_1^j \lor c_2^j) \]
These ideas are expressed in the following algorithm
1: begin
2: Get \( C_1 \) and \( C_2 \) from the set of constraints \( C \)
3: repeat
4: Verify local consistency for \( C_1 \)
5: Decompose \( C_2 \) as \( \bigwedge_{i=1}^{M} C_{2,i} \)
6: for each \( C_{2,i} \) do
7: Verify local consistency using a constructive disjunction approach
8: if one of the constraints, \( c_1^j \) and \( c_2^j \) for \( j \in J_i \), is inconsistent with the associated membership constraints then
9: Eliminate the disjunctive constraint from \( C_2 \)
10: Add the other elementary constraint to \( C_1 \)
11: end if
12: end do
13: until The set of membership constraints does not change
14: end
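The algorithm above can be sketched concretely if we encode membership constraints as set-valued domains and constraints as propagators that shrink them. The encoding is illustrative, not the paper's implementation; line references in the comments point to the pseudocode above:

```python
# Sketch of the algorithm with membership constraints encoded as set-valued
# domains and constraints encoded as propagators (illustrative encoding,
# not the paper's implementation). A propagator shrinks the domains in
# place and returns False on a domain wipe-out.

def ge(var, k):                       # elementary constraint: var >= k
    def p(dom):
        dom[var] = {v for v in dom[var] if v >= k}
        return bool(dom[var])
    return p

def le(var, k):                       # elementary constraint: var <= k
    def p(dom):
        dom[var] = {v for v in dom[var] if v <= k}
        return bool(dom[var])
    return p

def cd(d1, d2, dom):
    """Constructive disjunction (line 7): run each disjunct on a copy of
    the domains and keep the union of the survivors. Returns the sole
    consistent disjunct when the other one fails (lines 8-10), else None."""
    copies = []
    for d in (d1, d2):
        c = {v: set(s) for v, s in dom.items()}
        copies.append(c if d(c) else None)
    alive = [c for c in copies if c is not None]
    if not alive:
        raise ValueError("no solution")        # both disjuncts wiped out
    for v in dom:
        dom[v] &= set.union(*(c[v] for c in alive))
    if copies[0] is None:
        return d2
    if copies[1] is None:
        return d1
    return None

def solve(C1, C2, dom):
    """Lines 3-13: alternate elementary propagation and constructive
    disjunction until the membership constraints (domains) stop changing."""
    while True:
        snap = {v: set(s) for v, s in dom.items()}
        for c in C1:                           # line 4
            c(dom)
        for pair in list(C2):                  # lines 6-12
            survivor = cd(pair[0], pair[1], dom)
            if survivor is not None:
                C2.remove(pair)                # line 9
                C1.append(survivor)            # line 10
        if dom == snap:                        # line 13
            return dom

# Tiny example: x in 0..9 with x >= 3 and the disjunction x <= 1 or x >= 5.
# The first disjunct is inconsistent, so the disjunction is eliminated and
# x >= 5 is promoted to an elementary constraint.
dom = {"x": set(range(10))}
solve([ge("x", 3)], [(le("x", 1), ge("x", 5))], dom)
print(sorted(dom["x"]))                        # [5, 6, 7, 8, 9]
```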
Theorem 1 The algorithm terminates and it is correct.
Proof: Termination of local consistency algorithms is well known, so we only have to prove the termination of the repeat loop. In the worst case, after verification of local consistency for all \( C_{2,i} \), the resulting membership constraints differ because only one element has been eliminated; as we have at most \( n \) variables in the set of disjunctive constraints and each variable can take \( m \) values, the maximal number of iterations is \( nm \), so after at most \( nm \) iterations the algorithm terminates. Correctness is evident because we only eliminate values when they are locally inconsistent, so we do not eliminate any solution.
Theorem 2 If we can decompose a set of constraints in \( M \) subgraphs the worst case time complexity of our algorithm is bounded by \( M2^{ND-M+1} \), where \( ND \) is the total number of disjunctive constraints.
Proof: If \( ND_i, i = 1 \ldots M \), denotes the number of disjunctive constraints in subgraph \( i \), in each iteration we have to verify local consistency for \( \sum_{i=1}^{M} 2^{ND_i} \) sets of disjunctive constraints. As \( ND = \sum_{i=1}^{M} ND_i \) and every subgraph contains at least one disjunction, each \( ND_i \leq ND - M + 1 \), so \( \sum_{i=1}^{M} 2^{ND_i} \leq M2^{ND-M+1} \). In the worst case \( M = 1 \), so the worst case time complexity is \( O(2^{ND}) \), the same result as the choice point approach.
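The inequality used in the proof can be sanity-checked numerically: since every subgraph holds at least one disjunction, \(ND_i \leq ND - M + 1\), hence each term \(2^{ND_i} \leq 2^{ND-M+1}\). A quick exhaustive check for small \(ND\) (our own verification script, not part of the paper):

```python
# Numeric check of the bound in Theorem 2:
#   sum(2^ND_i) <= M * 2^(ND - M + 1)
# over all ways of splitting ND disjunctions into M non-empty subgraphs.
from itertools import combinations

def partitions(nd, m):
    """All compositions of nd into m positive parts."""
    for cuts in combinations(range(1, nd), m - 1):
        bounds = (0,) + cuts + (nd,)
        yield [bounds[i + 1] - bounds[i] for i in range(m)]

for nd in range(2, 10):
    for m in range(1, nd + 1):
        for part in partitions(nd, m):
            assert sum(2 ** p for p in part) <= m * 2 ** (nd - m + 1)
print("bound holds for all ND < 10")
```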
Evidently the usefulness of our approach depends on the specific characteristics of the problem. First, the more we can decompose the set of disjunctive constraints the more efficient the use of constructive disjunction will be, because we will deal with easier problems (this is the meaning of the expression \( M2^{ND-M+1} \)). Second, the benefits of using constructive disjunction depend on the extra information we can extract from the disjunctions: the more restricted the constraints are, the more information we can extract from them, i.e., more impossible values will be eliminated. That is why our approach must be seen as a preprocessing step, after which another technique, such as choice point, should be used.
In [3] we have applied our approach to solve job-shop problems, where the general idea of decomposing a set of disjunctive constraints corresponds to manipulating as a whole the constraints related to a particular machine, but independently for different machines; so for problems involving \( M \) machines we are sure that we can decompose the set of disjunctive constraints into \( M \) subproblems.
Finally, it is important to note that our approach can naturally be implemented using several solvers in parallel to verify local consistency for the sets of disjunctive constraints.
#### 4.1 Examples
Two advantages of the approach presented in this work with respect to the choice point approach can be better understood through the following examples. We consider the following problem:
\[
\begin{align*}
z & \leq 5 \\
y & < z \\
x & < y \\
x & \geq 1 \lor x \leq 6 \\
y & \geq 2 \lor y \leq 7 \\
z & \leq 1 \lor z \geq 4 \\
x, y, z & \in [0, \ldots, 12]
\end{align*}
\]
Constraints 1, 2 and 3 correspond to elementary constraints; constraints 4, 5 and 6 correspond to disjunctive constraints. If we verify local consistency for the set of elementary constraints we obtain the modified membership constraints \( x \in [0, \ldots, 3] \), \( y \in [1, \ldots, 4] \) and \( z \in [2, \ldots, 5] \). Applying a decomposition algorithm on the set of disjunctive constraints (constraints 4–6) we obtain three subproblems, so we verify local consistency for each of them using a constructive disjunction approach.
- Verifying local consistency for the first disjunctive constraint (constraint 4) and the associated membership constraint \( x \in [0, 3] \), we obtain \( x \in [1, 3] \) from the first disjunct and \( x \in [0, 3] \) from the second. So, the resulting membership constraint using a constructive disjunction approach is \( x \in [0, 3] \); it is unchanged.
- Verifying local consistency for the second disjunctive constraint (constraint 5) and the associated membership constraint \( y \in [1, 4] \), we obtain \( y \in [2, 4] \) from the first disjunct and \( y \in [1, 4] \) from the second. So, the resulting membership constraint is \( y \in [1, 4] \); it is unchanged.
- Verifying local consistency for the third disjunctive constraint (constraint 6) and the associated membership constraint \( z \in [2, 5] \), the first disjunct has no solution and the second one gives \( z \in [4, 5] \). So, the resulting membership constraint is \( z \in [4, 5] \).
But, as additional information we know that only one disjunct is possible in the third disjunction. We can post the second disjunct of the third disjunctive constraint (constraint 6), the only possible alternative in that disjunction, as an elementary constraint, eliminate that disjunctive constraint, and verify again local consistency for the set of elementary constraints.
Another interesting situation occurs if we replace the third disjunctive constraint (constraint 6) by the following:
\[
z \geq 6 \lor z \leq 1
\]
When we verify local consistency for this constraint and the membership constraint \( z \in [2, 5] \), both disjuncts are inconsistent with the membership constraint, so no value remains possible for the variable \( z \) and the problem has no solution. It is interesting to note that this situation could also have been detected using a choice point approach, but it is well known that the number of leaves visited by that approach depends on the order in which the constraints are posted, whereas the performance of our approach does not depend on that order. If we create the choice points posting the disjunctive constraints in the order 4, 5 and 6 we will visit 8 leaves before detecting that the problem has no solution, but if we create the choice points posting the third disjunctive constraint first we have to visit only 2 leaves to verify that the problem has no solution.
We can see how our approach allows reducing the search space and also eliminating some disjunctive constraints.
#### 4.2 Implementation
In general, we are interested in solving CSPs using computational systems, a logical framework integrating rewrite rules and strategies [5]. We have implemented a prototype of a solver for CSPs which is currently executable in the ELAN system [6], an interpreter of computational systems. We have integrated into this prototype the ideas presented in this work as a preprocessing phase that carries out local consistency verification. Once we have decomposed the set of disjunctive constraints into several subproblems we run a solver for each subproblem in order to verify local consistency in parallel; in this way we can fully profit from the advantages of this approach. All details about the prototype can be obtained at [http://www.loria.fr/~castro/CSP/csp.html](http://www.loria.fr/~castro/CSP/csp.html).
\(^1\)To verify local consistency for these subproblems we use a choice point approach that generates two branches, one for each disjunct.
### 5 Conclusion
We have presented an approach to deal systematically with disjunctive constraints in CSPs when verifying local consistency. We have shown how this approach can reduce the search space and eliminate some disjunctions. We have proved that when we can decompose the set of disjunctive constraints we can do better than existing approaches, like choice point. In real-life problems, like scheduling, it is often possible to decompose the set of disjunctive constraints, so we think that our approach can be an interesting contribution for practical applications of CSP techniques. As future work we are interested in estimating the benefits of applying this approach in terms of parameters of the set of constraints, and in integrating these ideas into the search process.
References
A Framework for Software Reference Architecture Analysis and Review
Silverio Martínez-Fernández, Claudia Ayala, Xavier Franch, David Ameller
GESSI Research Group, Universitat Politècnica de Catalunya, Barcelona, Spain
{smartinez,cayala,franch,dameller}@essi.upc.edu
Abstract. Tight time-to-market needs push software companies and IT consulting firms (ITCFs) to continuously look for techniques to improve their IT services in general, and the design of software architectures in particular. The use of software reference architectures allows ITCFs to reuse architectural knowledge and components in a systematic way. In return, ITCFs face the need to analyze the return on investment in software reference architectures for organizations, and to review these reference architectures in order to ensure their quality and incremental improvement. Little support exists to help ITCFs face these challenges. In this paper we present an empirical framework aimed at supporting the analysis and review of software reference architectures and their use in IT projects by harvesting relevant evidence from the wide spectrum of involved stakeholders. Such a framework comes from an action research approach held in an ITCF, and we report the issues found so far.
Keywords: Software architecture, reference architecture, architecture analysis, architecture evaluation, empirical software engineering.
1 Introduction
Nowadays, the size and complexity of information systems, together with critical time-to-market needs, demand new software engineering approaches to design software architectures (SAs) [17]. One of these approaches is the use of software reference architectures (RAs), which allow knowledge and components to be reused systematically when developing a concrete SA [8][13].
As defined by Bass et al. [3], a reference model (RM) is “a division of functionality together with data flow between the pieces” and an RA is “a reference model mapped onto software elements (that cooperatively implement the functionality defined in the reference model) and the data flows between them”.
A more detailed definition of RAs is given by Nakagawa et al. [17]. They define an RA as “an architecture that encompasses the knowledge about how to design concrete architectures of systems of a given application [or technological] domain; therefore, it must address the business rules, architectural styles (sometimes also defined as architectural patterns that address quality attributes in the reference architecture), best practices of software development (for instance, architectural decisions, domain constraints, legislation, and standards), and the software elements that support development of systems for that domain. All of this must be supported by a unified, unambiguous, and widely understood domain terminology”.
In this paper, we use these two RA definitions. We show the relationships among an RM, an RM-based RA and an RA-based concrete SA in Fig. 1. Throughout the paper, we use the term RA to refer to RM-based RAs and SA to refer to RA-based concrete SAs. Angelov et al. have identified the generic nature of RAs as the main feature that distinguishes them from concrete SAs: every application has its own unique SA, which is derived from an RA. This is possible because RAs are abstract enough to allow their usage in differing contexts [2].
The motivations behind RAs are: to facilitate reuse, and thereby harvest potential savings through reduced cycle times, cost, risk and increased quality [8]; to help with the evolution of a set of systems that stem from the same RA [13]; and to ensure standardization and interoperability [2]. Due to this, RAs are becoming a key asset of organizations [8].
However, although the adoption of an RA might have plenty of benefits for an organization, it also implies several challenges, such as the need for an initial investment [13] and ensuring its adequacy for the organization’s portfolio of applications. Hence, in order to use RAs, software companies and information technology consulting firms (ITCFs) face two fundamental questions:
- Is it worth to invest on the adoption of an RA?
- Once adopted, how can the suitability of an RA for deriving concrete SAs for an organization’s applications be ensured?
Currently, organizations lack support for dealing with these questions. On the one hand, there is a shortage of economic models to precisely evaluate the benefit of architecture projects [6] in order to take informed decisions about adopting an RA in an organization. On the other hand, although there are qualitative evaluation methods for RAs [1][12][14], they do not systematize how these RAs should be evaluated regarding certain quality attributes (for instance, their capability to satisfy the variability in applications developed from RAs [18]).
In this context, the goal of this research is to devise a framework that supports organizations in dealing with the aforementioned questions by providing procedural guidelines for setting up and carrying out empirical studies aimed at extracting evidence for: 1) supporting organizations in assessing whether it is worth adopting an RA, and 2) ensuring the suitability of an RA for deriving concrete SAs for an organization’s applications. It is worth mentioning that this research has its origin in an ongoing action-research initiative between our research group and the Center of Excellence on Software Architectures (ARCHEX) of Everis, a multinational consulting company based in Spain. ARCHEX faced the fundamental questions stated above, and the framework proposed in this paper was mainly originated and shaped throughout our involvement in helping ARCHEX envisage a suitable solution. The idea behind devising such a framework is twofold: to help other organizations deal with problems similar to ARCHEX’s; and to improve the guidelines of the framework through the experience gained in each application, in order to consolidate architectural knowledge from industrial practice.
The paper is structured as follows. In Section 2 we describe the fundamental aspects of RAs that are suggested to be assessed. In Section 3 we describe the empirical studies that compose the framework. In Section 4 we present the context of ITCFs and show how the framework can be applied in the context of an ITCF. In Sections 5 and 6 we present preliminary results of two studies of the framework applied in Everis. In Section 7 we end up with conclusions and future work.
2 Practical Review Criteria for Reference Architectures
A commonly accepted set of criteria to assess RAs does not exist [1][12-14]. Thus, in this section we identify important aspects for assessing RAs, drawn from practice and from the literature. The framework presented in this paper envisages these aspects as a primary input for their further refinement based on evidence from organizations.
Table 1. Summary of relevant aspects for software reference architecture assessment
<table>
<thead>
<tr>
<th>Aspect</th>
<th>Description of the Architectural Aspect</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Overview: functionalities [1], origin, utility and adaptation [13]</td>
</tr>
<tr>
<td>2</td>
<td>Requirements and quality attributes analysis [1][10][12]</td>
</tr>
<tr>
<td>3</td>
<td>Architectural knowledge and decisions [10][12][17]</td>
</tr>
<tr>
<td>4</td>
<td>Business qualities [1] and architecture competence [4]</td>
</tr>
<tr>
<td>5</td>
<td>Software development methodology [10][17]</td>
</tr>
<tr>
<td>6</td>
<td>Technologies and tools [10][17]</td>
</tr>
<tr>
<td>7</td>
<td>Benefits and costs metrics to derive SAs from RAs [10]</td>
</tr>
</tbody>
</table>
In [1], Angelov et al. state that SAs and RAs have to be assessed for the same aspects. For this reason, we started by analyzing some available works on SA assessment [4][10]. However, existing evaluation methods for SAs are not directly applicable to RAs because they do not cover the generic nature of RAs [1]. Therefore, we elaborated further on this analysis considering both the specific characteristics of RAs as described in [1][12-13][17] and our own experience in the field. The resulting aspects for assessing RAs are detailed below and summarized in Table 1.
Aspect 1 refers to the need of having an overview of the RA. It includes an analysis of its generic functionalities, its domain [1], its origin and motivation, its correctness and utility, and its support for efficient adaptation and instantiation [13]. Since RAs are defined to abstract from certain contextual specifics allowing its usage in differing contexts [2], their support for efficient adaptation and instantiation while deriving concrete SAs is an aspect to assess [13].
Many prominent researchers [1][7][10][12] highlight the importance of quality attributes, as well as architectural decisions for the SA design process and the architectural assessment. These two aspects should also be considered for the RA assessment because, as we said, SAs and RAs have to be assessed for the same aspects [1]. Thus, we considered them as Aspects 2 and 3 respectively. However, since an RA has to address more architectural qualities than an SA (e.g., applicability) [1], this analysis could be wider for RAs in this sense. A list of quality attributes that are strictly determined by SAs is defined in [7]. This list consists of the following ten quality attributes: performance, reliability, availability, security, modifiability, portability, functionality, variability, subsetability and conceptual integrity.
SAs also address business qualities [1] (e.g., cost, time-to-market), which are business goals that affect their competence [4]. This is considered as Aspect 4.
To improve the SA design process, there also exist supportive technologies such as methods, techniques and tools [10][17]. Thus, it is important to collect data to assess not only an RA’s design process, but also its supportive technologies, which are covered by Aspects 5 and 6.
As stated in [10], a crucial aspect to define the goodness of a SA is related to the ROI. The optimal set of architectural decisions is usually the one that maximizes the ROI. Aspect 7 is intended to quantify benefits and costs of RAs to calculate their ROI.
We recommend gathering evidence about all these aspects, which are summarized in Table 1, while assessing an RA. Existing methods for SA assessment have been previously applied for RA assessment, such as in [1][12][14]. However, none of them cover all the aspects of Table 1, especially Aspect 7. Hence, new approaches to assess RAs considering these aspects altogether are required. This has motivated our work.
These architectural aspects can be divided into two areas of different nature. First, Aspects 1 to 6 are qualitative architectural concerns. Second, Aspect 7 consists of quantitative metrics to calculate the benefits and costs of deriving SAs from RAs.
3 An Empirical Framework to Review Reference Architectures
In this section, we present the ongoing version of our empirical framework. In order to design the framework, we have followed the guidelines for conducting empirical studies in software engineering recommended by Wohlin et al. [21]. It is composed of
an assortment of empirical studies. Each empirical study reviews a subset of the relevant architectural aspects presented in Table 1.
Current economic models [16] and RA evaluation methods (e.g., [1][12-14]) suggest gathering Aspect 7 and Aspects 1-6, respectively, directly from the organizations. However, none of them provides support or guidelines for doing so. Thus, our framework is aimed at providing support for such a gathering process, while suggesting the application of any of the existing methods to evaluate the economic costs of adopting an RA, or its quality, based on the empirically obtained data. The selection of the method used in each situation would depend on the organization context [5].
Regarding Aspect 7, an economic model becomes necessary. The data needed to feed such an economic model depends on the existing value-driven data in the organization (see Sections 3.1 and 5). Such data may be gathered by conducting post-mortem studies that collect real metrics or, when the organization does not have previous experience with RAs, by estimating these metrics.
In order to gather data to cover Aspects 1-6, our framework suggests conducting surveys (see Sections 3.3 and 6). These studies gather information not only from RA projects, but also from SA projects, as they are a direct outcome of the RA usage. This allows analyzing the RA’s suitability for producing the SAs of enterprise applications in organizations, as well as detecting improvement opportunities.
Fig. 2 summarizes the studies that compose the framework. The studies are classified by their approach to assessing the RA (qualitative or quantitative), depending on which question of Section 1 they answer. It is important to note that the studies suggested by our framework are complementary and support each other; the framework benefits from this combination of studies. For instance, the results of a preceding empirical study can be corroborated or developed further by a later one (e.g., results from the survey to check existing value-driven data in organizations indicate the data that is useful for calculating the ROI). For this reason, the suggested studies can be conducted sequentially.
The assortment of studies that has been envisaged for our framework is detailed below. In order to explain each study, a similar same structure as in [10] has been used: context and motivation, objectives, and method.
**Fig. 2.** Empirical studies of the framework to assess RAs. The figure shows two complementary tracks. In the quantitative track ("What is the value of an RA?"), a survey to check existing value-driven data in organizations feeds an economic model to calculate the ROI of adopting an RA. In the qualitative track ("How well does an RA support key aspects?"), a survey to understand the impact of using an RA (yielding evidence about RA practices and impact in the organization, refined review criteria, and the context of the organization) feeds an architectural evaluation method specific for RAs.
3.1 Surveys to check existing value-driven data in RA projects
**Context.** Typically, organizations do not have resources to compare the real cost of creating applications with and without an RA. Thus, alternatives should be envisaged.
**Objective.** To discover existing data that organizations have to quantitatively calculate the costs and benefits of adopting an RA in an organization.
**Method.** Exploratory surveys with personalized questionnaires applied to relevant stakeholders (e.g., manager, architect, developer) to find out the quantitative data that has been collected in RA projects and application projects.
3.2 Applying an economic model to calculate the ROI of adopting an RA
**Context.** Before deciding to launch an RA, organizations need to analyze whether or not to undertake the investment. Offering them an economic model that has been used in former projects can help them make more informed decisions.
**Objective.** To assess whether it is worth investing in an RA.
**Method.** Depending on the maturity of the organization, two approaches can be applied. If the organization does not have an RA, the economic model should be fed with estimated data. When the organization already has an RA, real data can be gathered by means of an exploratory quantitative post-mortem analysis. The economic model then quantifies the potential advantages and limitations of using an RA. Related works explain how to calculate the ROI of a product [11] and of software reuse [19]. We suggest using the economic model for RAs presented in [14].
3.3 Surveys to understand the impact of using an RA
**Context.** To refine the set of review criteria for RAs, it is necessary to understand an RA's characteristics, as well as its potential benefits and limitations. Assessing previous RA projects is a feasible way to start gaining such an understanding.
**Objective.** To understand the impact and suitability of an RM for the elaboration of RAs, and of an RA for the creation of SAs. Improvement insights can also be identified from different stakeholders.
**Method.** Exploratory surveys with personalized questionnaires applied to relevant stakeholders (e.g., architects, developers) to gather their perceptions and needs.
3.4 Applying an architectural evaluation method to prove RA effectiveness
**Context.** Architecture is the product of the early design phase [7]. RA evaluation is a way to find potential problems before implementing RA modules, and to gain confidence in the RA design provided to SA projects.
**Objective.** To analyze the RA strengths and weaknesses and to determine which improvements should be incorporated in the RA.
**Method.** An existing evaluation method specific for RAs such as [1][12-14]. The selection of the method would depend on the organization context [5].
4 Use of the framework in an IT consulting firm
4.1 Context of Information Technology Consulting Firms
Motivation. We are interested in the case in which an ITCF has designed an RA with the purpose of deriving concrete SAs for each application of a client organization. This usually happens when the ITCF is regularly contracted to create or maintain information systems in client organizations. Each information system is built upon the RA and includes many enterprise applications (see Fig. 3).
As Angelov et al. point out, an RA can be designed with an intended scope of a single organization or multiple organizations that share a certain property. Although Fig. 3 shows RAs that are used for the design of concrete SAs in a single organization, there also exist RAs for multiple organizations that share a market domain or a technological domain such as enterprise web applications [2].
The use of RAs allows ITCFs to reuse the architectural knowledge of their RM, as well as software components (normally associated with particular technologies), for the design of SAs in client organizations. RAs thus inherit best practices from previous successful experiences and a certain level of quality. The goal of these RAs is to provide a baseline that facilitates standardization and interoperability, as well as the attainment of business goals, during the development and maintenance of enterprise applications.
Kinds of projects. There are three kinds of projects with different targets (Fig. 3): 1) RM projects; 2) RA projects; and 3) SA projects.
Stakeholders for RA analysis. Stakeholders need to be clearly defined for RA assessment purposes [1]. The people involved in an RA assessment are the evaluation team, which conducts the empirical studies of the framework, and stakeholders from the architectural projects. For the three kinds of projects defined above, as performed by ITCFs, we consider the following five stakeholders essential for RA assessment: project business manager, project technological manager, software architect, architecture developer, and application builder. Each of these stakeholders has a vested interest in different architectural aspects, which are important for analyzing and reasoning about the appropriateness and quality of the three kinds of projects [12]. However, more people could be involved in an architectural evaluation, as Clements et al. indicate in [7]. Below, we describe the kind of project each stakeholder belongs to and their interests.
RM projects. These are staffed by software architects from the ITCF who worked in previous successful RA projects and are specialized in architectural knowledge management. Their goal is to gather the best practices from previous RA projects' experiences in order to design and/or improve the corporate RM.
RA projects. RA projects involve people from the ITCF and possibly from the client organization. Their members (project technological managers, software architects, and architecture developers) are specialized in architectural design and have medium knowledge of the organization's business domain.
Project technological managers from the ITCF are responsible for meeting the schedule and for interfacing with the project business managers from the client organization.
Software architects (also called RA managers) usually come from the ITCF, although the client organization may have software architects on whom its managers rely. In the latter case, software architects from both sides work cooperatively to figure out a solution that accomplishes the desired quality attributes and architecturally significant requirements.
Architecture developers come from the ITCF and are responsible for coding, maintaining, integrating, testing, and documenting the RA software components.
SA projects. Enterprise application projects can involve people from the client organization and/or subcontracted ITCFs (which may even differ from the RM owner), whose members are usually very familiar with the specific organization domain. The participation of the client organization in RA and SA projects is one possible strategy for ensuring the continuity of its information systems without much dependency on subcontracted ITCFs.
Project business managers (i.e., customer) come from client organizations. They have the power to speak authoritatively for the project, and to manage resources. Their aim is to provide their organization with useful applications that meet the market expectations on time.
Application builders take the RA reusable components and instantiate them to build an application.
![Diagram of stakeholders for RM, RA, and SA projects]
Fig. 3. Relevant stakeholders for the assessment of RM, RA and SA projects.
4.2 Instantiation of the Framework
The presented empirical framework is currently being applied at the Architecture Centre of Excellence of Everis, a multinational ITCF. The main motivation of Everis for conducting the empirical studies is twofold: 1) strategic: providing quantitative evidence to their clients about the potential benefits of applying an RA; 2) technical: identifying strengths and weaknesses of an RA.
Everis fits into the context described in Section 4.1 (e.g., they carry out the three types of projects described there). Following the criteria found in [1], RAs created by Everis can be seen as Practice RAs, since they are defined from the accumulation of practical knowledge (the architectural knowledge of their corporate RM). According to the classification of [2], they are also classical, facilitation architectures designed to be implemented in a single organization. They are classical because their creation is based on experiences, and their aim is to facilitate guidelines for the design of systems, specifically for the information system domain.
All the studies suggested in Section 3 are planned to be conducted to understand and evaluate RAs defined by Everis. In this paper, we present the protocol and preliminary results of the two surveys of the understanding step. Section 5 describes the value-driven data available in projects, needed to create or choose an economic model for calculating the ROI of adopting an RA. Section 6 presents an excerpt of the survey protocol, which has already been designed and reviewed. The survey is still in the analysis step; however, the data about Aspect 4 (business qualities) have already been processed. Preliminary results on this aspect show the benefits of, and aspects to consider for, organizations that adopt RAs.
Table 2 shows how the roles are covered by the different studies in the Everis case.
Table 2. Stakeholders of the Everis case
<table>
<thead>
<tr>
<th>Project</th>
<th>Business Manager</th>
<th>Technical Manager</th>
<th>Software Architect</th>
<th>Architecture Developer</th>
<th>Application Builder</th>
</tr>
</thead>
<tbody>
<tr>
<td>RM</td>
<td>n/a</td>
<td>n/a</td>
<td>S1, ROI, S2</td>
<td>n/a</td>
<td>n/a</td>
</tr>
<tr>
<td>RA</td>
<td>S2, Eva</td>
<td>S1, S2, Eva</td>
<td>ROI, S2, Eva</td>
<td>S2, Eva</td>
<td>S2, Eva</td>
</tr>
<tr>
<td>SA</td>
<td>n/a</td>
<td>n/a</td>
<td>n/a</td>
<td>n/a</td>
<td>S1, S2</td>
</tr>
</tbody>
</table>
a. Legend: Survey to study existing data (S1), Economic model for RA’s ROI (ROI), Survey to understand RA projects (S2), and RA evaluation (Eva).
5 Survey to check existing value-driven data in projects
5.1 Protocol
Objectives of this study. The objective of this survey is to identify the quantitative information that can be retrieved from past projects in order to perform a cost-benefit analysis. The cost-benefit analysis, which is the evaluation step of the framework, needs this kind of data to calculate the ROI of adopting an RA in an organization.
Sampling. A sample of four RA projects and several enterprise applications built upon them has been selected.
The main perceived economic benefit of using RAs is the cost savings in the development and maintenance of systems, due to the reuse of software elements and the adoption of software development best practices that increase the productivity of developers [16]. We use online questionnaires to ask project technical managers and application builders about the information available from past projects for calculating these cost savings. When the organization does not have any experience with RAs, these data can be estimated.
5.2 Preliminary results: costs and benefits metrics for RAs
In this section we describe the information that typically is available in order to calculate the costs and benefits of adopting an RA. We divide existing information in two categories: effort and software metrics. On the one hand, the invested effort from the tracked activities allows the calculation of the costs of the project. On the other hand, software metrics indicate the benefits that can be found in the source code.
Effort metrics to calculate projects' costs. The effort of the training, development, and maintenance activities is usually tracked. The training effort is the total number of hours invested in training the SA teams; this training may be delivered through workshops, user manuals, wikis, web portals, and/or other activities. The development effort is the total number of hours invested in developing the RA and the SAs of applications; it can be extracted from the time spent on each development activity of the projects. The maintenance effort is the total number of hours invested in maintaining the RA and the SAs of applications; maintenance activities include changes, incidents, support, and consultations.
Software metrics to calculate benefits in reuse and maintainability. The analysis of the code from RA and SA projects allows quantifying the size of these projects in terms of LOC or function points (number of methods). Having calculated the project costs as indicated above, we can compute the average cost of a LOC or a function point. Since the cost of developing and maintaining applications is lower because of the reuse of RA modules, we can calculate the benefits of an RA by estimating the benefits of reusing those modules. Poulin defines a model for measuring the benefits of software reuse [19]. Maintenance savings due to a modular design can be calculated with design structure matrices (DSM) [15]. For a detailed explanation of how such metrics can be used in a cost-benefit analysis, the reader is referred to [16].
5.3 Lessons learned
Architecture improvements are extremely difficult to evaluate in an analytic and quantitative fashion, as is the efficacy of the business functions (sales, marketing, and manufacturing) [6]. This is because software development is a naturally low-validity environment, and reliable expert intuition can only be acquired in a high-validity environment [9]. To evaluate RAs with an economics-driven approach, software development needs to move toward a high-validity environment. The good news is that this can be done with the help of good practices such as time tracking, continuous feedback, test-driven development, and continuous integration, and there are tools that support them. To obtain the metrics defined in Section 5.2, JIRA\(^1\) and Redmine\(^2\) support managing tasks and the time invested in them, while general software metrics (such as LOC) and percentages of test and rule compliance can be calculated with Sonar\(^3\) and Jenkins\(^4\). We think that adopting good practices for collecting data is the basis for moving software development toward a high-validity environment and, consequently, for performing an accurate cost-benefit analysis.
6 Survey to understand the impact of using an RA
6.1 Protocol
**Objectives of this survey.** The purpose of the survey is to understand the impact of using RAs for designing the SAs of the applications of an information system of a client organization. This is a descriptive survey that measures what occurred while using RAs rather than why. The following research questions are important in order to review relevant Aspects 1 to 6 of RAs (defined in Section 2):
1. How is an RA adapted for creating SAs of an organization’s applications?
2. What is the state of practice on requirements engineering for RAs?
3. What is the state of practice on architectural design for RAs?
4. How does the adoption of RAs provide observable benefits to the different involved actors?
5. What methodologies are currently being used in RA projects by Everis?
6. Which tools and technologies are currently being used in RA projects by Everis?
**Sampling.** The target population of this survey consists of RA projects and SA projects. A representative sample of these projects in nine organizations in Europe (seven from Spain) has been selected.
**Approach for data collection.** On the one hand, semi-structured interviews are used for project technological managers, software architects, and the client's project business managers. Interviews are used because these roles have deeper knowledge than the others about the architectural aspects of Table 1 (or, in the case of the client's project business managers, a different perspective), so we want to collect as much information as possible from them. Prior to the interviews, questionnaires are delivered to collect personal information about the interviewees and to inform them about the interview. On the other hand, online questionnaires are used for RA developers and application builders, since most of their questions concern supportive technologies and their responses can be listed in advance, simplifying the data collection process.
This is an excerpt of the survey protocol; the complete version is available at http://www.essi.upc.edu/~gessi/papers/eselaw13-survey-protocol.pdf.
---
\(^1\) JIRA, http://www.atlassian.com/es/software/jira/overview
\(^2\) Redmine, http://www.redmine.org/
\(^3\) Sonar, http://www.sonarsource.org/
\(^4\) Jenkins, http://jenkins-ci.org/
6.2 Preliminary results: strengths and weaknesses of RAs
In this section we present preliminary results for the business-quality section of the survey, which answers the fourth research question of the protocol: "How does the adoption of RAs provide observable benefits to the different involved actors?" Below, we report the benefits and the aspects to consider that we identified.
Benefits. The benefits of using an RA for the creation of applications are mainly:
- Increased quality of the enterprise applications.
  - An RA helps to accomplish business needs by improving key quality attributes.
  - An RA helps to improve the business processes of an organization.
  - An RA reuses architectural knowledge from previous successful experiences.
- Reduced development time and faster delivery of applications.
  - An RA allows starting application development from the first day by following architectural decisions already taken.
  - An RA decreases the development time of applications, since the RA modules that implement needed functionality are reused in the application.
- Increased productivity of application builders.
  - An RA provides material and tools for the development, testing, and documentation of applications, and for training application builders.
  - An RA generates or automates the creation of code in the applications.
  - An RA states the guidelines to be followed by the application builders.
  - An RA reduces the complexity of application development, because part of the functionality is already resolved in the RA.
  - An RA facilitates the configuration of its modules and their integration with legacy or external systems.
- Cost savings in the maintenance of the applications.
  - An RA increases control over applications through their homogeneity.
  - Services reused by all applications are maintained only once, in the RA.
  - An RA allows adding, changing, or deleting functionality by means of a modular design.
  - An RA establishes standards and "de facto" technologies that will be supported in future versions.
Aspects to consider. The adoption of an RA for the creation of applications implies considering the following aspects, which can eventually become risks:
- Initial investment. An RA implies an initial investment of time before applications can be developed. This initial investment may be reduced by using an RM. It is also advisable to invest in an RA only when the organization has a wide portfolio of applications that can be based on it.
- Additional learning curve. An RA implies additional training on its own tools and modules, even if its technologies are standard or "de facto" ones already known to the application builder.
- Dependency on the RA. Applications depend on the reused modules of the RA. If it is necessary to change a reused module of the RA or to add new functionality, application builders have to wait for the RA developers to include it in the RA for all the applications.
- Limited flexibility of the applications. Using an RA implies following its guidelines during application development and adopting its architectural design. If business needs require a different kind of application, the RA would limit that application's flexibility.
6.3 Lessons learned
During the pilot of the survey, we learnt the following lessons about its design:
- The same term can have slightly different meanings in academia and in industry (for instance, the term "enterprise architecture" is sometimes used in industry to mean "software reference architecture for a single organization").
- Questions that deal with several variables disconcert the interviewee and make the analysis more difficult. It is better to split them so that each covers only one variable.
- If a survey targets several stakeholders, their questionnaires should be designed taking into account their knowledge of, and interest in, architectural concerns.
- In online questionnaires, it is advisable to let the interviewee write comments or clarifications in a free-text field and to include an "n/a" option when necessary. Besides, a "previous" button is useful for changing answers to earlier questions.
7 Conclusions and Future Work
Conducting empirical studies is becoming one of the main channels of communication between practitioners and academia. The main contribution of this ongoing work is intended to be the formulation of a framework for conducting empirical studies that support decision making and assessment related to RAs. It consists of a list of relevant aspects for RA assessment and an assortment of four complementary empirical studies that allow understanding and evaluating these aspects. It is a practical framework that can be adapted to the specific context of software companies and ITCFs. Consequently, organizations that apply the framework can benefit from a common reference framework for reviewing RAs.
The framework is being applied at Everis. This allows us to get feedback for assessing its effectiveness and to gather industrial evidence. Preliminary results of this application indicate the importance of good practices such as time tracking, continuous feedback, test-driven development, and continuous integration for evaluating RAs quantitatively. Another result is that the adoption of an RA implies cost savings in the development and maintenance of applications.
Future work proceeds in two directions. In terms of validation, we are conducting the evaluation step of the framework at Everis. With respect to this first version of the framework, we aim to extend it by considering Wohlin's improvement step (see Section 3) in order to build preliminary guidelines for improving RAs in ITCFs.
References
11. Forrester Research, “Forrester’s Total Economic Impact (TEI)”, available online at: www.forrester.com/TEI.
NAG Library Function Document
nag_dsytrs (f07mec)
1 Purpose
nag_dsytrs (f07mec) solves a real symmetric indefinite system of linear equations with multiple right-hand sides,
\[ AX = B, \]
where \( A \) has been factorized by nag_dsytrf (f07mdc).
2 Specification
```c
#include <nag.h>
#include <nagf07.h>
void nag_dsytrs (Nag_OrderType order, Nag_UploType uplo, Integer n,
Integer nrhs, const double a[], Integer pda, const Integer ipiv[],
double b[], Integer pdb, NagError *fail)
```
3 Description
nag_dsytrs (f07mec) is used to solve a real symmetric indefinite system of linear equations \( AX = B \); this function must be preceded by a call to nag_dsytrf (f07mdc), which computes the Bunch–Kaufman factorization of \( A \).
If \( \text{uplo} = \text{Nag\_Upper} \), \( A = PUDU^T P^T \), where \( P \) is a permutation matrix, \( U \) is an upper triangular matrix and \( D \) is a symmetric block diagonal matrix with 1 by 1 and 2 by 2 blocks; the solution \( X \) is computed by solving \( PUDY = B \) and then \( U^TP^TX = Y \).
If \( \text{uplo} = \text{Nag\_Lower} \), \( A = PLDL^T P^T \), where \( L \) is a lower triangular matrix; the solution \( X \) is computed by solving \( PLDY = B \) and then \( L^TP^TX = Y \).
4 References
5 Arguments
1: order – Nag_OrderType
On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by order = Nag_RowMajor. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument.
Constraint: order = Nag_RowMajor or Nag_ColMajor.
2: uplo – Nag_UploType
On entry: specifies how \( A \) has been factorized.
uplo = Nag_Upper
\( A = PUDU^T P^T \), where \( U \) is upper triangular.
uplo = Nag_Lower
\( A = PLDL^T P^T \), where \( L \) is lower triangular.
Constraint: uplo = Nag_Upper or Nag_Lower.
3: n – Integer Input
On entry: \( n \), the order of the matrix \( A \).
Constraint: \( n \geq 0 \).
4: nrhs – Integer Input
On entry: \( r \), the number of right-hand sides.
Constraint: \( \text{nrhs} \geq 0 \).
5: a[dim] – const double Input
Note: the dimension, dim, of the array a must be at least \( \max(1, \text{pda} \times n) \).
On entry: details of the factorization of \( A \), as returned by nag_dsytrf (f07mdc).
6: pda – Integer Input
On entry: the stride separating row or column elements (depending on the value of order) of the matrix in the array a.
Constraint: \( \text{pda} \geq \max(1, n) \).
7: ipiv[dim] – const Integer Input
Note: the dimension, dim, of the array ipiv must be at least \( \max(1, n) \).
On entry: details of the interchanges and the block structure of \( D \), as returned by nag_dsytrf (f07mdc).
8: b[dim] – double Input/Output
Note: the dimension, dim, of the array b must be at least \( \max(1, \text{pdb} \times \text{nrhs}) \) when order = Nag_ColMajor, and \( \max(1, n \times \text{pdb}) \) when order = Nag_RowMajor.
The \( (i, j) \)th element of the matrix \( B \) is stored in b[(j - 1) × pdb + i - 1] when order = Nag_ColMajor, and in b[(i - 1) × pdb + j - 1] when order = Nag_RowMajor.
On entry: the \( n \) by \( r \) right-hand side matrix \( B \).
On exit: the \( n \) by \( r \) solution matrix \( X \).
9: pdb – Integer Input
On entry: the stride separating row or column elements (depending on the value of order) in the array b.
Constraints:
if order = Nag_ColMajor, \( \text{pdb} \geq \max(1, n) \);
if order = Nag_RowMajor, \( \text{pdb} \geq \max(1, \text{nrhs}) \).
10: fail – NagError * Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 3.2.1.2 in the Essential Introduction for further information.
NE_BAD_PARAM
On entry, argument \(\langle value\rangle\) had an illegal value.
NE_INT
On entry, \(n = \langle value\rangle\).
Constraint: \(n \geq 0\).
On entry, \(\text{nrhs} = \langle value\rangle\).
Constraint: \(\text{nrhs} \geq 0\).
On entry, \(\text{pda} = \langle value\rangle\).
Constraint: \(\text{pda} > 0\).
On entry, \(\text{pdb} = \langle value\rangle\).
Constraint: \(\text{pdb} > 0\).
NE_INT_2
On entry, \(\text{pda} = \langle value\rangle\) and \(n = \langle value\rangle\).
Constraint: \(\text{pda} \geq \max(1, n)\).
On entry, \(\text{pdb} = \langle value\rangle\) and \(n = \langle value\rangle\).
Constraint: \(\text{pdb} \geq \max(1, n)\).
On entry, \(\text{pdb} = \langle value\rangle\) and \(\text{nrhs} = \langle value\rangle\).
Constraint: \(\text{pdb} \geq \max(1, \text{nrhs})\).
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
An unexpected error has been triggered by this function. Please contact NAG.
See Section 3.6.6 in the Essential Introduction for further information.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 3.6.5 in the Essential Introduction for further information.
7 Accuracy
For each right-hand side vector \(b\), the computed solution \(x\) is the exact solution of a perturbed system of equations \((A + E)x = b\), where
\[
\text{if } \texttt{uplo} = \text{Nag\_Upper}, \ |E| \leq c(n)\epsilon P|U||D||U^T|P^T; \\
\text{if } \texttt{uplo} = \text{Nag\_Lower}, \ |E| \leq c(n)\epsilon P|L||D||L^T|P^T,
\]
\(c(n)\) is a modest linear function of \(n\), and \(\epsilon\) is the \textit{machine precision}.
If \(\hat{x}\) is the true solution, then the computed solution \(x\) satisfies a forward error bound of the form
\[
\frac{\|x - \hat{x}\|_\infty}{\|x\|_\infty} \leq c(n) \text{cond}(A, x)\epsilon
\]
where \(\text{cond}(A, x) = \| |A^{-1}||A||x| \|_\infty / \|x\|_\infty \leq \text{cond}(A) = \| |A^{-1}||A| \|_\infty \leq \kappa_\infty(A)\).
Note that \( \text{cond}(A, x) \) can be much smaller than \( \text{cond}(A) \).
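The relation between the two condition numbers can be illustrated numerically. The NumPy sketch below (not part of the NAG interface) evaluates both componentwise condition numbers for the example matrix of Section 10, taking \(x\) to be the first solution column:

```python
import numpy as np

# Example matrix from Section 10; x is the first solution column.
A = np.array([[ 2.07,  3.87,  4.20, -1.15],
              [ 3.87, -0.21,  1.87,  0.63],
              [ 4.20,  1.87,  1.15,  2.06],
              [-1.15,  0.63,  2.06, -1.81]])
x = np.array([-4.0, -1.0, 2.0, 5.0])

Ainv = np.linalg.inv(A)
# cond(A, x) = || |A^-1| |A| |x| ||_inf / ||x||_inf
cond_Ax = (np.linalg.norm(np.abs(Ainv) @ np.abs(A) @ np.abs(x), np.inf)
           / np.linalg.norm(x, np.inf))
# cond(A) = || |A^-1| |A| ||_inf
cond_A = np.linalg.norm(np.abs(Ainv) @ np.abs(A), np.inf)

print(cond_Ax, cond_A)
```

Since \(|x| \leq \|x\|_\infty\) elementwise, \( \text{cond}(A,x) \leq \text{cond}(A)\) always holds; the gap can be large when the solution has strongly varying components.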
Forward and backward error bounds can be computed by calling nag_dsyrfs (f07mhc), and an estimate for \( \kappa_1(A) \) (\( = \kappa_\infty(A) \), since \(A\) is symmetric) can be obtained by calling nag_dsycon (f07mgc).
8 Parallelism and Performance
\text{nag_dsytrs} (f07mec) is not threaded by NAG in any implementation.
\text{nag_dsytrs} (f07mec) makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users’ Note for your implementation for any additional implementation-specific information.
9 Further Comments
The total number of floating-point operations is approximately \( 2n^2r \).
This function may be followed by a call to nag_dsyrfs (f07mhc) to refine the solution and return an error estimate.
The complex analogues of this function are \text{nag_zhetrs} (f07msc) for Hermitian matrices and \text{nag_zsytrs} (f07nsc) for symmetric matrices.
10 Example
This example solves the system of equations \( AX = B \), where
\[
A = \begin{pmatrix}
2.07 & 3.87 & 4.20 & -1.15 \\
3.87 & -0.21 & 1.87 & 0.63 \\
4.20 & 1.87 & 1.15 & 2.06 \\
-1.15 & 0.63 & 2.06 & -1.81 \\
\end{pmatrix}
\quad \text{and} \quad
B = \begin{pmatrix}
-9.50 & 27.85 \\
-8.38 & 9.90 \\
-6.07 & 19.25 \\
-0.96 & 3.93 \\
\end{pmatrix}.
\]
Here \( A \) is symmetric indefinite and must first be factorized by \text{nag_dsytrf} (f07mdc).
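The expected solution can be verified with a short NumPy sketch. Note that `np.linalg.solve` uses a general LU factorization rather than the Bunch-Kaufman factorization of nag_dsytrf, but it returns the same \(X\):

```python
import numpy as np

A = np.array([[ 2.07,  3.87,  4.20, -1.15],
              [ 3.87, -0.21,  1.87,  0.63],
              [ 4.20,  1.87,  1.15,  2.06],
              [-1.15,  0.63,  2.06, -1.81]])
B = np.array([[-9.50, 27.85],
              [-8.38,  9.90],
              [-6.07, 19.25],
              [-0.96,  3.93]])

# General LAPACK-backed solve (LU, not Bunch-Kaufman).
X = np.linalg.solve(A, B)
print(X)   # columns (-4, -1, 2, 5) and (1, 4, 3, 2)
```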
10.1 Program Text

```c
/* nag_dsytrs (f07mec) Example Program.
 *
 * Copyright 2014 Numerical Algorithms Group.
 *
 * Mark 7, 2001.
 */
#include <stdio.h>
#include <nag.h>
#include <nag_stdlib.h>
#include <nagf07.h>
#include <nagx04.h>

int main(void)
{
  /* Scalars */
  Integer i, j, n, nrhs, pda, pdb;
  Integer exit_status = 0;
  NagError fail;
  Nag_UploType uplo;
  Nag_OrderType order;
  /* Arrays */
  char nag_enum_arg[40];
  Integer *ipiv = 0;
  double *a = 0, *b = 0;
```
```c
#ifdef NAG_LOAD_FP
  /* The following line is needed to force the Microsoft linker
   * to load floating point support */
  float force_loading_of_ms_float_support = 0;
#endif /* NAG_LOAD_FP */

#ifdef NAG_COLUMN_MAJOR
#define A(I, J) a[(J - 1) * pda + I - 1]
#define B(I, J) b[(J - 1) * pdb + I - 1]
  order = Nag_ColMajor;
#else
#define A(I, J) a[(I - 1) * pda + J - 1]
#define B(I, J) b[(I - 1) * pdb + J - 1]
  order = Nag_RowMajor;
#endif

  INIT_FAIL(fail);

  printf("nag_dsytrs (f07mec) Example Program Results\n\n");

  /* Skip heading in data file */
#ifdef _WIN32
  scanf_s("%*[^\n] ");
#else
  scanf("%*[^\n] ");
#endif
#ifdef _WIN32
  scanf_s("%" NAG_IFMT "%" NAG_IFMT "%*[^\n] ", &n, &nrhs);
#else
  scanf("%" NAG_IFMT "%" NAG_IFMT "%*[^\n] ", &n, &nrhs);
#endif
#ifdef NAG_COLUMN_MAJOR
  pda = n;
  pdb = n;
#else
  pda = n;
  pdb = nrhs;
#endif

  /* Allocate memory */
  if (!(ipiv = NAG_ALLOC(n, Integer)) ||
      !(a = NAG_ALLOC(n * n, double)) ||
      !(b = NAG_ALLOC(n * nrhs, double)))
  {
    printf("Allocation failure\n");
    exit_status = -1;
    goto END;
  }

  /* Read A and B from data file */
#ifdef _WIN32
  scanf_s(" %39s%*[^\n] ", nag_enum_arg, (unsigned)_countof(nag_enum_arg));
#else
  scanf(" %39s%*[^\n] ", nag_enum_arg);
#endif
  /* nag_enum_name_to_value (x04nac).
   * Converts NAG enum member name to value. */
  uplo = (Nag_UploType) nag_enum_name_to_value(nag_enum_arg);
  if (uplo == Nag_Upper)
  {
    for (i = 1; i <= n; ++i)
    {
      for (j = i; j <= n; ++j)
      {
#ifdef _WIN32
        scanf_s("%lf", &A(i, j));
#else
        scanf("%lf", &A(i, j));
#endif
      }
    }
#ifdef _WIN32
    scanf_s("%*[^\n] ");
#else
    scanf("%*[^\n] ");
#endif
  }
else
{
for (i = 1; i <= n; ++i)
{
for (j = 1; j <= i; ++j)
{
#ifdef _WIN32
scanf_s("%lf", &A(i, j));
#else
scanf("%lf", &A(i, j));
#endif
}
}
#ifdef _WIN32
    scanf_s("%*[^\n] ");
#else
    scanf("%*[^\n] ");
#endif
}
for (i = 1; i <= n; ++i)
{
for (j = 1; j <= nrhs; ++j)
{
#ifdef _WIN32
scanf_s("%lf", &B(i, j));
#else
scanf("%lf", &B(i, j));
#endif
}
#ifdef _WIN32
    scanf_s("%*[^\n] ");
#else
    scanf("%*[^\n] ");
#endif
}
/* Factorize A */
/* nag_dsytrf (f07mdc). */
/* Bunch-Kaufman factorization of real symmetric indefinite matrix */
nag_dsytrf(order, uplo, n, a, pda, ipiv, &fail);
if (fail.code != NE_NOERROR)
{
printf("Error from nag_dsytrf (f07mdc).\n%s\n", fail.message);
exit_status = 1;
goto END;
}
/* Compute solution */
/* nag_dsytrs (f07mec). */
/* Solution of real symmetric indefinite system of linear equations, multiple right-hand sides, matrix already factorized by nag_dsytrf (f07mdc) */
nag_dsytrs(order, uplo, n, nrhs, a, pda, ipiv, b, pdb, &fail);
if (fail.code != NE_NOERROR)
{
printf("Error from nag_dsytrs (f07mec).\n%s\n", fail.message);
exit_status = 1;
goto END;
}
/* Print solution */
/* nag_gen_real_mat_print (x04cac). */
/* Print real general matrix (easy-to-use) */
fflush(stdout);
nag_gen_real_mat_print(order, Nag_GeneralMatrix, Nag_NonUnitDiag, n, nrhs, b, pdb, "Solution(s)", 0, &fail);
  if (fail.code != NE_NOERROR)
  {
    printf("Error from nag_gen_real_mat_print (x04cac).\n%s\n", fail.message);
    exit_status = 1;
  }

END:
  NAG_FREE(ipiv);
  NAG_FREE(a);
  NAG_FREE(b);
  return exit_status;
}
```
10.2 Program Data
nag_dsytrs (f07mec) Example Program Data
4 2 :Values of n and nrhs
Nag_Lower :Value of uplo
2.07
3.87 -0.21
4.20 1.87 1.15
-1.15 0.63 2.06 -1.81 :End of matrix A
-9.50 27.85
-8.38 9.90
-6.07 19.25
-0.96 3.93 :End of matrix B
10.3 Program Results
nag_dsytrs (f07mec) Example Program Results
```
 Solution(s)
              1          2
 1      -4.0000     1.0000
 2      -1.0000     4.0000
 3       2.0000     3.0000
 4       5.0000     2.0000
```
<table>
<thead>
<tr>
<th>Problem</th>
<th>Points</th>
</tr>
</thead>
<tbody>
<tr>
<td>Problem 1</td>
<td>15</td>
</tr>
<tr>
<td>Problem 2</td>
<td>5</td>
</tr>
<tr>
<td>Problem 3</td>
<td>10</td>
</tr>
<tr>
<td>Problem 4</td>
<td>10</td>
</tr>
<tr>
<td>Problem 5</td>
<td>10</td>
</tr>
<tr>
<td>Problem 6</td>
<td>10</td>
</tr>
<tr>
<td>Problem 7</td>
<td>10</td>
</tr>
<tr>
<td>Problem 8</td>
<td>5</td>
</tr>
<tr>
<td>Problem 9</td>
<td>10</td>
</tr>
<tr>
<td>Problem 10</td>
<td>15</td>
</tr>
</tbody>
</table>
1 What do they Do?
(a) Complete the specification of the following function.
```python
def f(s):
"""
PreC: s is a string.
"""
t = s
nullstring = ''
for c in s:
if s.count(c)>1:
t = t.replace(c,nullstring)
return t
```
5 points:
Returns a string obtained from s by deleting all characters that appear more than once
-1 if "return" is not mentioned
(b) What is the output of the call F([30,40,10,20])?
```python
def F(x):
"""
PreCondition: x is a nonempty list of distinct ints
"""
n = len(x)
for k in range(n-1):
if x[k]>x[k+1]:
t = x[k]
x[k] = x[k+1]
x[k+1] = t
print x
```
5 points:
30 40 10 20
30 10 40 20
30 10 20 40
4 points
30 40 10 20
30 10 40 20
4 points
30 10 20 40
2 points for these 1-liners
30 10 40 20
30 10 10 20
10 20 30 40
40 30 20 10
(c) The following code displays 10,000 non-intersecting, randomly colored disks. Comment on the expected number of displayed red disks, the expected number of displayed white disks, and the expected number of displayed blue disks. FYI, `randu(a,b)` returns a float that is randomly chosen from the interval $[a, b]$.
```python
from random import uniform as randu
from simpleGraphics import *
MakeWindow(101)
r = 0.3
for i in range(100):
    for j in range(100):
        x = float(i)
        y = float(j)
        p = randu(0,1)
        if p <= .1:
            DrawDisk(x,y,r,RED)
        elif p <= .4:
            DrawDisk(x,y,r,WHITE)
        else:
            DrawDisk(x,y,r,BLUE)
ShowWindow()
```
5 points
1000 Red
3000 White
6000 Blue
5 points
10%
30%
60%
3 points
1000 Red
4000 White
5000 Blue
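The expected counts can be sanity-checked with a quick Monte Carlo run that replaces the drawing calls with counters (seeded here for reproducibility):

```python
from random import uniform, seed

seed(0)   # reproducible run
counts = {'RED': 0, 'WHITE': 0, 'BLUE': 0}
for i in range(100):
    for j in range(100):
        p = uniform(0, 1)
        if p <= .1:
            counts['RED'] += 1
        elif p <= .4:
            counts['WHITE'] += 1
        else:
            counts['BLUE'] += 1

print(counts)   # roughly 1000 red, 3000 white, 6000 blue
```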
2 Functions and Lists
Complete the following function so that it performs as specified
```python
def Trim(L):
    """ Returns a list of strings K that has four properties:
    (1) every entry in K is in L
    (2) every entry in L is in K
    (3) no entry in K is repeated
    (4) K is sorted.
    L is not modified.
    PreC: L is a nonempty list of strings
    """
Thus, if L = ['a', 'c', 'a', 'b', 'h', 'a', 'c']
then ['a', 'b', 'c', 'h'] is returned.
5 point solution:
    K = []
    for s in L:
        if s not in K:
            K.append(s)
    K.sort()
    return K

5 point solution:
    K = []
    for k in range(len(L)):
        if L[k] not in K:
            K.append(L[k])
    K.sort()
    return K

3 point solution:
    K = L
    for s in L:
        if s not in K:
            K.append(s)
    K.sort()
    return K
```
5 point solution (using count):

```python
K = []
for s in L:
    if K.count(s) == 0:
        K.append(s)
K.sort()
return K
```

5 point solution (using count):

```python
K = []
for k in range(len(L)):
    if K.count(L[k]) == 0:
        K.append(L[k])
K.sort()
return K
```

(-1 for using find on a list)
3 point solution:
```python
K = L
for s in L:
    if K.count(s) == 0:
        K.append(s)
K.sort()
return K
```
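The 5-point solution can be exercised directly on the example from the problem statement (wrapped as a complete function, Python 3 syntax):

```python
def Trim(L):
    """Returns a sorted, duplicate-free list of the strings in L.
    L is not modified."""
    K = []
    for s in L:
        if s not in K:
            K.append(s)
    K.sort()
    return K

L = ['a', 'c', 'a', 'b', 'h', 'a', 'c']
print(Trim(L))   # ['a', 'b', 'c', 'h']
print(L)         # L is unchanged
```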
3 Boolean Operations
(a) Implement the following function so that it performs as specified.
```python
def Q1(s1, s2, s3):
    """ Returns True if s1, s2, and s3 have a character in common
    and False otherwise.
    PreCondition: s1, s2, and s3 are nonempty strings
    """
```
5 point solutions
```python
for c in s1:
    if c in s2 and c in s3:
        return True
return False
```
-2 if "or" instead of "and". -2 if "True" part is right but "False" part is not. And vice versa.
3 point solution:
```python
for c in s1:
    if c in s2 and c in s3:
        return True
return False
```
1 point
No loop but some relevant Boolean expression
Note: It is possible to do this problem using find
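A find-based version of Q1, as the note suggests, might look like this (a sketch; `find` returns -1 when the character is absent):

```python
def Q1(s1, s2, s3):
    """Returns True if s1, s2, and s3 have a character in common
    and False otherwise."""
    for c in s1:
        # find(c) >= 0 means c occurs in the string
        if s2.find(c) >= 0 and s3.find(c) >= 0:
            return True
    return False

print(Q1('abc', 'xbz', 'qbr'))   # True  ('b' is common to all three)
print(Q1('abc', 'xyz', 'qbr'))   # False
```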
(b) Assume that $B_1, B_2, B_3, B_4$, and $B_5$ are initialized Boolean variables. Rewrite the following code so that it does not involve any nested ifs. The rewritten code must be equivalent to the given code, i.e., it must render exactly the same output no matter what the value of the five initialized Boolean variables.
```python
if B1:
    if B2:
        print 'A'
    elif B3:
        print 'B'
else:
    if B4 or B5:
        print 'C'
    else:
        print 'D'
```
5 points
```python
if B1 and B2:                  # 3 points for printing A and B correctly
    print 'A'
elif B1 and B3:
    print 'B'
elif (not B1) and (B4 or B5):  # 2 points for printing C and D correctly
    print 'C'
elif (not B1):                 # -2 if the not B1 is missing
    print 'D'
```
3 points
```python
if B1 and B2:
    print 'A'
elif B1 and B3:
    print 'B'
elif (B4 or B5):
    print 'C'
else:
    print 'D'
```
Typical 1 point solution
```python
if B1 and B2:
    print 'A'
if B1 and B3:
    print 'B'
if B4 or B5:
    print 'C'
else:
    print 'D'
```
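The equivalence demanded by the problem can be verified exhaustively: with five Booleans there are only 32 cases. The sketch below recasts each version as a function returning the printed letter ('' when nothing is printed) and compares them on every combination:

```python
from itertools import product

def nested(B1, B2, B3, B4, B5):
    if B1:
        if B2:
            return 'A'
        elif B3:
            return 'B'
    else:
        if B4 or B5:
            return 'C'
        else:
            return 'D'
    return ''   # B1 true, B2 and B3 false: nothing is printed

def flat(B1, B2, B3, B4, B5):
    if B1 and B2:
        return 'A'
    elif B1 and B3:
        return 'B'
    elif (not B1) and (B4 or B5):
        return 'C'
    elif not B1:
        return 'D'
    return ''

ok = all(nested(*bs) == flat(*bs) for bs in product([True, False], repeat=5))
print(ok)   # True -- equivalent on all 32 cases
```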
4 While Loops
(a) Rewrite the following code so that it does the same thing but with while-loops instead of for-loops.
```python
s = 'abcdefghijklmnopqrstuvwxyz'
for i in range(26):
    for j in range(0, i-1):
        for k in range(j, i):
            print s[k] + s[j] + s[i]
```
5 points
```python
s = 'abcdefghijklmnopqrstuvwxyz'
i = 0
while i < 26:
    j = 0
    while j < i-1:
        k = j
        while k < i:
            print s[k] + s[j] + s[i]
            k += 1
        j += 1
    i += 1
```
3 points
```python
s = 'abcdefghijklmnopqrstuvwxyz'
i = 0
j = 0
k = 0
while i < 26:
    while j < i-1:
        while k < i:
            print s[k] + s[j] + s[i]
            k += 1
        j += 1
    i += 1
```
2 points
```python
s = 'abcdefghijklmnopqrstuvwxyz'
i = 0
j = 0
k = 0
while i < 26:
    while j < i-1:
        while k < i:
            print s[k] + s[j] + s[i]
        j += 1
    i += 1
```
1 point same as preceding but no initialization
1 point same as preceding but no updates
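The 5-point while-loop rewrite really is equivalent to the for-loop original; collecting the generated triples from both versions (Python 3 syntax) shows they agree exactly:

```python
s = 'abcdefghijklmnopqrstuvwxyz'

# for-loop version, collecting instead of printing
for_triples = []
for i in range(26):
    for j in range(0, i - 1):
        for k in range(j, i):
            for_triples.append(s[k] + s[j] + s[i])

# while-loop rewrite: each loop variable is (re)initialized
# just before its loop and updated at the bottom of its body
while_triples = []
i = 0
while i < 26:
    j = 0
    while j < i - 1:
        k = j
        while k < i:
            while_triples.append(s[k] + s[j] + s[i])
            k += 1
        j += 1
    i += 1

print(for_triples == while_triples)   # True
```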
(b) Implement the following function so that it performs as specified.
```python
def OverBudget(A, M):
""" Returns the smallest k so that \( \text{sum}(|A[0:k,0]|) \geq M, \text{sum}(|A[0:k,1]|) \geq M, \text{and} \text{sum}(|A[0:k,2]|) \geq M \). If no such k exists, returns 0.
PreC: A is an n-by-3 numpy array of ints. M is an int.
"""
To illustrate, suppose
\[
A = \begin{bmatrix}
2 & 7 & 1 \\
1 & 0 & 4 \\
3 & 2 & 5 \\
0 & 1 & 4 \\
4 & 0 & 6
\end{bmatrix}
\]
If \( M = 3 \), then the value returned should be 2. If \( M = 10 \), then the returned value should be 5. If \( M = 100 \), then the returned value should be 0. You are not allowed to use the built-in function `sum` or `for-loops`.
5 point solution:
```python
k = 0
s0 = 0
s1 = 0
s2 = 0
(m, n) = A.shape
while k < m:
    s0 += abs(A[k,0])
    s1 += abs(A[k,1])
    s2 += abs(A[k,2])
    k += 1
    if s0 >= M and s1 >= M and s2 >= M:
        return k
return 0
```
5 point solution:
```python
k = 0
s0 = 0
s1 = 0
s2 = 0
(m, n) = A.shape
OneSumShort = (s0 < M or s1 < M or s2 < M)
while k < m and OneSumShort:
    s0 += abs(A[k,0])
    s1 += abs(A[k,1])
    s2 += abs(A[k,2])
    OneSumShort = (s0 < M or s1 < M or s2 < M)
    k += 1
if not OneSumShort:
    return k
else:
    return 0
```
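Running the first solution on the illustrative array confirms the three sample answers from the problem statement:

```python
import numpy as np

def OverBudget(A, M):
    """Smallest k such that all three column sums of abs(A[0:k,:])
    are >= M; returns 0 if no such k exists."""
    k = 0
    s0 = s1 = s2 = 0
    (m, n) = A.shape
    while k < m:
        s0 += abs(A[k, 0])
        s1 += abs(A[k, 1])
        s2 += abs(A[k, 2])
        k += 1
        if s0 >= M and s1 >= M and s2 >= M:
            return k
    return 0

A = np.array([[2, 7, 1],
              [1, 0, 4],
              [3, 2, 5],
              [0, 1, 4],
              [4, 0, 6]])
print(OverBudget(A, 3), OverBudget(A, 10), OverBudget(A, 100))   # 2 5 0
```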
5 Recursion
Binary search is a divide and conquer process that can be used to determine whether or not a given value is an entry in a sorted list. Here is an informal, recursive illustration of the process applied to finding a name in a phone book assuming that there is one name per page:
Look-Up Process:
- if the phone book has one page
- Report whether or not the name is on that page
- else
- Tear the phone book in half
- Apply the Look-Up Process to the relevant half-sized phonebook
Develop a recursive binary search implementation of the following function so that it performs as specified. You are not allowed to use the "in" operator.
```python
def BinSearch(x, a, L, R):
    """Returns True if x in a[L:R+1] is True and False otherwise.
    Precondition: a is a length-n list of distinct ints whose entries
    are sorted from smallest to largest. L and R are ints that satisfy
    0 <= L <= R < n. x is an int with the property that a[L] <= x <= a[R].
    """
    if R == L:
        return x == a[L]                     # -2 for len(a)==1 instead of R==L
                                             # -1 for x in a[L:R+1]
                                             # -1 for return L
    else:
        mid = (L+R)/2                        # -2 for (a[L]+a[R])/2
                                             # -2 for len(a)/2
        if x <= a[mid]:                      # -1 for x <= mid
            return BinSearch(x, a, L, mid)       # 2 points
        else:
            return BinSearch(x, a, mid+1, R)     # 2 points
                                             # -1 if "mid" and not "mid+1"
```
For very wrong solutions,
- 1 point for a single if-else
- 1 point if the if-part tries to deal with the base case
- 1 point if the else part tries to come up with a half-sized problem
- 1 point if there is a recursive Binsearch call and it recognizes that BinSearch returns a Boolean value
Better solution
```python
def BinSearch(x, a, L, R):
    if R == L:
        return True
    else:
        m = (L+R)/2
        if a[L] <= x <= a[m]:
            return BinSearch(x, a, L, m)
        elif a[m+1] <= x <= a[R]:
            return BinSearch(x, a, m+1, R)
        else:
            return False
```
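The reference solution can be exercised directly (Python 3, so integer division is written `//`):

```python
def BinSearch(x, a, L, R):
    """True iff x is in a[L:R+1].
    Precondition: a is sorted with distinct ints and a[L] <= x <= a[R]."""
    if R == L:
        return x == a[L]
    mid = (L + R) // 2
    if x <= a[mid]:
        return BinSearch(x, a, L, mid)       # search the left half
    else:
        return BinSearch(x, a, mid + 1, R)   # search the right half

a = [2, 5, 9, 14, 21]
print(BinSearch(9, a, 0, 4))    # True
print(BinSearch(10, a, 0, 4))   # False
```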
6 Function Execution
What is the output if the following Application Script is executed?
```python
def F(a):
    b = True
    for k in range(len(a)):
        b = D(a,k) and b
    return b

def D(a,k):
    a[k] = a[k]-1
    return a[k] >= 0

if __name__ == '__main__':
    a = [1,2,3,4]
    print F(a)
    print a
    print F(a)
    print a
```
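Transcribed to Python 3, the script's behavior can be checked directly by capturing each result:

```python
def F(a):
    b = True
    for k in range(len(a)):
        b = D(a, k) and b
    return b

def D(a, k):
    a[k] = a[k] - 1
    return a[k] >= 0

a = [1, 2, 3, 4]
r1 = F(a); a1 = list(a)   # snapshot after first call
r2 = F(a); a2 = list(a)   # snapshot after second call
print(r1, a1)   # True [0, 1, 2, 3]
print(r2, a2)   # False [-1, 0, 1, 2]
```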
Fact: D(a,k) subtracts 1 from a[k] and returns True iff the modified a[k] is nonnegative
Fact: F(a) subtracts 1 from every entry in a and returns True iff every entry in the modified a is nonnegative
The first call to F modifies a to [0,1,2,3] and returns True.
The second call to F modifies a to [-1,0,1,2] and returns False
So the 10 point solutions are
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>True</td>
<td>True, [0,1,2,3]</td>
</tr>
<tr>
<td>[0,1,2,3]</td>
<td>False, [-1,0,1,2]</td>
</tr>
<tr>
<td>False</td>
<td></td>
</tr>
<tr>
<td>[-1,0,1,2]</td>
<td></td>
</tr>
</tbody>
</table>
8 points
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>True</td>
<td>[0,1,2,3]</td>
</tr>
<tr>
<td>[0,1,2,3]</td>
<td>True</td>
</tr>
<tr>
<td>True</td>
<td>[-1,0,1,2]</td>
</tr>
<tr>
<td>[-1,0,1,2]</td>
<td>False</td>
</tr>
</tbody>
</table>
5 points
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>True</td>
<td>[0,2,3,4]</td>
</tr>
<tr>
<td>True</td>
<td>[0,1,3,4]</td>
</tr>
</tbody>
</table>
3 points
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>True</td>
<td>[1,2,3,4]</td>
</tr>
<tr>
<td>True</td>
<td>[1,2,3,4]</td>
</tr>
</tbody>
</table>
In any of the above, if there are extra lines of output (as if there was a print statement inside the functions) then -2
2 points if some good chit chat and output looks like
```
list
boolean
list
boolean
```
7 Short Answer
(a) Why is inheritance such an important aspect of object oriented programming?
4 points for any of these
With inheritance, it is legal for a method from an existing class to be applied to an object of the new class
2 point answers:
- Enables one to reuse software.
- Enables one to build on old software.
- Makes it easier to maintain software.
(b) What does it mean to say that an operator like "+" is overloaded?
3 point answers:
The operation performed depends on the operands.
Thus, x+y may mean concatenation if
x and y are strings and addition if x and y are floats
(c) The numpy module supports the addition of arrays. What does this mean?
3 point answers
If x and y are numerical arrays of the same size, then x+y creates a new array of the same size obtained by adding entries. (OK not to say "numerical")
An example like [1,2,3]+[4,5,6] = [5,7,9]
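The contrast with plain lists is worth seeing: `+` on NumPy arrays adds elementwise, while `+` on lists concatenates — the same overloading idea from part (b):

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
print(x + y)                    # [5 7 9] -- elementwise addition
print([1, 2, 3] + [4, 5, 6])    # [1, 2, 3, 4, 5, 6] -- lists concatenate
```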
8 Inverting a Dictionary
Implement the following function so that it performs as specified.
```python
def Invert(D):
""" Returns a dictionary that is obtained from D by swapping its keys and values."
PreC: D is a dictionary with the property that every value is either a string or a number, and no values are repeated throughout the entire dictionary.
Thus, if D = {1: 'x', 'z': 4, 'x': 'z'}, then the dictionary { 'x': 1, 4: 'z', 'z': 'x' } is returned. You are not allowed to use the dictionary methods keys or values.
5 points:
    E = {}
    for d in D:
        theKey = d
        theValue = D[d]
        E[theValue] = theKey
    return E

5 points:
    E = {}
    for d in D:
        E[D[d]] = d
    return E

2 points:
    E = {}
    for d in D:
        E.append(d)
    return E

5 points:
    Keys = []
    Values = []
    for d in D:
        Keys.append(d)
        Values.append(D[d])
    E = {}
    for k in range(len(Keys)):
        E[Values[k]] = Keys[k]
    return E
```
3 points If everything is OK but they overwrite D and that causes a screw up
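The short 5-point solution can be checked on the example from the problem statement:

```python
def Invert(D):
    """Returns a dictionary obtained from D by swapping its keys and
    values. Values are assumed hashable and distinct."""
    E = {}
    for d in D:
        E[D[d]] = d
    return E

D = {1: 'x', 'z': 4, 'x': 'z'}
print(Invert(D))   # {'x': 1, 4: 'z', 'z': 'x'}
```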
9 A Modified Energy Class
Consider the following modification of the class `Energy` that was part of A7:
```python
class EnergyMod:
Name: a string that is the name of the building
Image: a string that specifies the path of the building's jpeg image
E_rate: a length-24 numpy array where E-Rate[k] is the cost of electricity per
unit of consumption during the kth hour of the day, k in range(24)
S_rate: a length-24 numpy array where S-Rate[k] is the cost of steam per
unit of consumption during the kth hour of the day, k in range(24)
C_rate: a length-24 numpy array where C-Rate[k] is the cost of chilled water per
unit of consumption during the kth hour of the day, k in range(24)
A: a 35040-by-3 numpy array that houses all the energy consumption snapshots.
In particular, A[k,0], A[k,1], and A[k,2] house the
electricity, steam, and chilled water consumption during the kth 15-minute
period of the year.
TS_dict: a 35040-item time stamp index dictionary. If ts is a valid time stamp and
k is the value of TS_dict(ts), then A[k,:], houses the consumption data
associated with ts.
```
Notice that instead of a single consumption rate for each of the three energies we have a list of 24 rates, one for each hour in the day. ASSUME STANDARD TIME. And just to be clear about what we mean by “hour of the day”, if a consumption reading is associated with time stamp `dd-MMM-2014-hh-mm`, then the relevant hour of the day is `int(hh)`.
Implement a method `arbitraryBill(self, T1, T2)` for the `EnergyMod` class that returns the total cost of energy consumed by the building represented by `self` from time stamp `T1` up to time stamp `T2`. As an example,
```python
M = EnergyMod('Gates')
x = M.arbitraryBill('15-May-2014-08-00', '16-May-2014-11-45')
```
would assign to `x` the total energy cost of running Gates Hall from 8AM May 15 up to noon May 16. You are allowed to use the function `Invert` from Problem 8.
```python
def arbitraryBill(self, T1, T2):
    D = Invert(self.TS_dict)
    total = 0
    k1 = self.TS_dict[T1]
    k2 = self.TS_dict[T2]
    for k in range(k1, k2):
        TS = D[k]
        Hour = int(TS[12:14])
        E = self.E_rate[Hour]
        S = self.S_rate[Hour]
        C = self.C_rate[Hour]
        total += E*self.A[k,0] + S*self.A[k,1] + C*self.A[k,2]
    return total
```
-1 if they do not use the `self.` notation
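The billing loop can be exercised on a tiny hypothetical EnergyMod instance (the constructor, the two-snapshot data, and the flat rates below are invented for this sketch; the total is assumed to be rate × consumption summed over the three energies):

```python
def Invert(D):
    """Invert from Problem 8: swap keys and values."""
    E = {}
    for d in D:
        E[D[d]] = d
    return E

class EnergyMod(object):
    def __init__(self, E_rate, S_rate, C_rate, A, TS_dict):
        self.E_rate, self.S_rate, self.C_rate = E_rate, S_rate, C_rate
        self.A, self.TS_dict = A, TS_dict

    def arbitraryBill(self, T1, T2):
        D = Invert(self.TS_dict)
        total = 0
        k1 = self.TS_dict[T1]
        k2 = self.TS_dict[T2]
        for k in range(k1, k2):
            TS = D[k]
            Hour = int(TS[12:14])   # 'dd-MMM-yyyy-hh-mm' -> hh
            total += (self.E_rate[Hour] * self.A[k][0] +
                      self.S_rate[Hour] * self.A[k][1] +
                      self.C_rate[Hour] * self.A[k][2])
        return total

# Two 15-minute snapshots; flat rates of 2, 3, 5 per unit.
E_rate = [2] * 24; S_rate = [3] * 24; C_rate = [5] * 24
A = [[1, 1, 1], [2, 0, 1]]
TS = {'15-May-2014-08-00': 0, '15-May-2014-08-15': 1,
      '15-May-2014-08-30': 2}
M = EnergyMod(E_rate, S_rate, C_rate, A, TS)
print(M.arbitraryBill('15-May-2014-08-00', '15-May-2014-08-30'))
# (2+3+5) + (4+0+5) = 19
```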
10 Methods
Assume the availability of the following class.
```python
class Fraction:
    """ A class that can be used to represent fractions.
    Attributes:
      num: the numerator [int]
      den: the denominator [positive int]
    Invariant: num and den have no common factors larger than 1.
    """
    def __init__(self,p,q):
        """ Returns a Fraction object that represents p/q in lowest terms.
        PreC: p and q are ints and q is nonzero
        """
    def lowestTerms(self):
        """ Updates self so that its numerator and denominator are
        reduced to lowest terms.
        """
```
(a) Write a method AddOne(self) that updates self by adding one to the numerator and denominator of the fraction represented by self.
5 points 3 points 2 points
```python
def AddOne(self):
    self.num += 1
    self.den += 1
    self.lowestTerms()
```
At most -1 for syntax errors like
p.Fraction(q) instead of Fraction(p,q)
lowestTerms(self) instead of self.lowestTerms()
No points if they leave off the
def AddOne(self) header
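A minimal working Fraction (with lowestTerms implemented via gcd; this body is an illustration, not the course's hidden implementation, and it assumes q is positive) shows AddOne in action:

```python
from math import gcd

class Fraction(object):
    """num/den in lowest terms; assumes den is positive."""
    def __init__(self, p, q):
        self.num, self.den = p, q
        self.lowestTerms()

    def lowestTerms(self):
        g = gcd(abs(self.num), self.den)
        if g > 0:
            self.num //= g
            self.den //= g

    def AddOne(self):
        self.num += 1
        self.den += 1
        self.lowestTerms()

F = Fraction(1, 3)
F.AddOne()                 # 1/3 -> 2/4 -> reduced to 1/2
print(F.num, F.den)        # 1 2
```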
(b) Consider the class
```python
class pointFract:
    """
    A class that can be used to represent points whose
    x and y coordinates are fractions

    Attributes:
      x: x-coordinate [Fraction]
      y: y-coordinate [Fraction]
    """
    def __init__(self,F1,F2):
        """
        Returns a pointFract object that represents the point (F1,F2)
        PreC: F1 and F2 are Fractions
        """
```
Write a method `distToOrigin(self)` for this class that returns the distance of self to the origin. FYI, the distance of the point \((a, b)\) to the origin is given by \(\sqrt{a^2 + b^2}\). You may assume that `math.sqrt` is available.
8 point solutions
```python
def distToOrigin(self):
    F1 = self.x                                    # 1
    xfloat = float(F1.num)/float(F1.den)           # 1
    F2 = self.y                                    # 1
    yfloat = float(F2.num)/float(F2.den)           # 1
    d = math.sqrt(xfloat**2 + yfloat**2)           # 1
    return d                                       # 1

def distToOrigin(self):
    xfloat = float(self.x.num)/float(self.x.den)   # 2
    yfloat = float(self.y.num)/float(self.y.den)   # 2
    return math.sqrt(xfloat**2 + yfloat**2)        # 1
```
3 point solution
```python
def distToOrigin(self):
    a = self.x                        # 1
    b = self.y                        # 1
    return math.sqrt(a**2 + b**2)     # 1
```
2 point solution
```python
def distToOrigin(self):
    a = self.F1                       # 1
    b = self.F2                       # 1
    return math.sqrt(a**2 + b**2)     # 1
```
-1 if forget to use float
-1 (max) if syntax mistake like self(x)
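A runnable check of the 8-point solution, using a minimal stand-in for Fraction (just num and den attributes, invented here for the sketch):

```python
import math

class Frac(object):
    """Minimal stand-in for the Fraction class: just num and den."""
    def __init__(self, num, den):
        self.num, self.den = num, den

class pointFract(object):
    def __init__(self, F1, F2):
        self.x, self.y = F1, F2

    def distToOrigin(self):
        a = float(self.x.num) / float(self.x.den)
        b = float(self.y.num) / float(self.y.den)
        return math.sqrt(a ** 2 + b ** 2)

P = pointFract(Frac(3, 1), Frac(4, 1))
print(P.distToOrigin())   # 5.0 -- the classic 3-4-5 triangle
```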
(c) Consider the code
```python
F1 = Fraction(1,2)
F2 = Fraction(3,4)
P1 = pointFract(F1,F2)
P2 = P1
F2 = F1
```
P2 references a `pointFraction` object. What are the coordinates of the point represented by that object? For full credit, you must draw a state diagram that fully depicts all the references and objects.
**P2 represents the point (1/2,3/4)**
```
-------------- ---------
F1--->| num: 1 | | |
| den: 2 |<------ x |<------ P1
/ |
/ |
/ |
/ |<-------- P2
F2 | num: 3 |<-------- y |
| den: 4 | | |
-------------- ---------
```
2 points for correct point.
2pts: (.50,.75) 1pt: (.50,.50)
3 points for state diagram.
1 point for showing 3 objects.
1 point for arrows from P1, P2, F1 and F2
1 point for arrows from x and y
## Function Information
<table>
<thead>
<tr>
<th>Function</th>
<th>What It Does</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>len(s)</code></td>
<td>returns an <code>int</code> that is the length of string <code>s</code></td>
</tr>
<tr>
<td><code>s.count(t)</code></td>
<td>returns an <code>int</code> that is the number of occurrences of string <code>t</code> in string <code>s</code></td>
</tr>
<tr>
<td><code>s.find(t)</code></td>
<td>returns an <code>int</code> that is the index of the first occurrence of string <code>t</code> in the string <code>s</code>. Returns -1 if no occurrence.</td>
</tr>
<tr>
<td><code>s.replace(t1,t2)</code></td>
<td>returns a string that is obtained from <code>s</code> by replacing all occurrences of <code>t1</code> with <code>t2</code>.</td>
</tr>
<tr>
<td><code>floor(x)</code></td>
<td>returns a float whose value is the largest integer less than or equal to the value of <code>x</code>.</td>
</tr>
<tr>
<td><code>ceil(x)</code></td>
<td>returns a float whose value is the smallest integer greater than or equal to the value of <code>x</code>.</td>
</tr>
<tr>
<td><code>int(x)</code></td>
<td>If <code>x</code> has type <code>float</code>, converts its value into an <code>int</code>. If <code>x</code> is a string like <code>-123</code>, converts it into an <code>int</code> like -123.</td>
</tr>
<tr>
<td><code>float(x)</code></td>
<td>If <code>x</code> has type <code>int</code>, converts its value into a <code>float</code>. If <code>x</code> is a string like <code>'1.23'</code>, converts it into a <code>float</code> like 1.23.</td>
</tr>
<tr>
<td><code>str(x)</code></td>
<td>Converts the value of <code>x</code> into a string.</td>
</tr>
<tr>
<td><code>DrawDisk(x,y,r,c)</code></td>
<td>Draws a circle with center <code>(x, y)</code>, radius <code>r</code> and color <code>c</code>.</td>
</tr>
<tr>
<td><code>x.append(y)</code></td>
<td>adds a new element to the end of the list <code>x</code> and assigns to it the value referenced by <code>y</code>.</td>
</tr>
<tr>
<td><code>deepcopy(x)</code></td>
<td>creates a complete copy of the object that is referenced by <code>x</code>.</td>
</tr>
<tr>
<td><code>sum(x)</code></td>
<td>returns the sum of the values in list <code>x</code> assuming that all its entries are numbers.</td>
</tr>
<tr>
<td><code>(m,n) = A.shape</code></td>
<td>assigns the row and column dimensions of the numpy 2D array <code>A</code> to <code>m</code> and <code>n</code> resp.</td>
</tr>
<tr>
<td><code>x.sort()</code></td>
<td>modifies the list of numbers <code>x</code> so that its entries range from smallest to largest</td>
</tr>
</tbody>
</table>
Comp 401-F16
Course Overview
Instructor: Prasun Dewan (FB 150, help401-002-f16@cs.unc.edu)
GETTING STARTED
Course page:
http://www.cs.unc.edu/~dewan/comp401/current/
Contents
Course Overview
Course Syllabus in UNC Format
Instructor
Teaching Assistants
Lectures Time and Location
Recitations and Associated Material
Exams
Exam Schedule (Subject to change)
Old Exams
Exam Information
Resources
Getting Help and Class Discussion
Course Web Site (From CS)
Google (dewan comp401) to find page.
Also linked from my home page (google “dewan”)
Use index and local web search to find parts
Piazza link available from course page (Search piazza)
Sign up on Piazza asap, as all announcements will be made there
ASSIGNMENTS AND ASSOCIATED RESOURCES
First assignment is due in a week!
Search (assignments) for table column with assignments or go to link “Resources by Topics”
<table>
<thead>
<tr>
<th>Resources by Topics</th>
<th></th>
</tr>
<tr>
<th>Unit</th>
<th>Resources</th>
</tr>
</thead>
<tbody>
<tr>
<td>Course Information (8/18, 8/20)</td>
<td></td>
</tr>
<tr>
<td>Scanning</td>
<td>PowerPoint PDF, YouTube Mix, Docx PDF Drive, Scanning Visualization, Number Scanner Checks File, lectures.scanning Package</td>
</tr>
</tbody>
</table>
# Before Next Class
<table>
<thead>
<tr>
<th>JDK Download</th>
<th>PowerPoint PDF</th>
<th></th>
<th></th>
<th></th>
<th>✓</th>
</tr>
</thead>
<tbody>
<tr>
<td>Eclipse Install & Use</td>
<td>PowerPoint PDF</td>
<td>Installing JDK on Mac</td>
<td>PDF</td>
<td></td>
<td>✓</td>
</tr>
<tr>
<td>Checkstyle with UNC Checks Install</td>
<td>PowerPoint PDF</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ObjectEditor Install</td>
<td>PowerPoint PDF</td>
<td>PDF</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Importing Git Project (JavaTeaching)</td>
<td>PowerPoint PDF</td>
<td></td>
<td></td>
<td></td>
<td>✓</td>
</tr>
<tr>
<td>Specifying Main Args in Eclipse</td>
<td>PowerPoint PDF</td>
<td>PDF</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Debugging in Eclipse</td>
<td>PowerPoint PDF</td>
<td>PDF</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Relevant Java Syntax</td>
<td>PowerPoint PDF</td>
<td>PDF</td>
<td></td>
<td></td>
<td>lectures.java_syntax_overview_package ✓</td>
</tr>
<tr>
<td>Scanning</td>
<td>PowerPoint PDF</td>
<td>YouTube Mix</td>
<td>Docx PDF Drive</td>
<td>Scanning Visualization</td>
<td>Number Scanner Checks File</td>
</tr>
</tbody>
</table>
BACKGROUND?
- **Java vs. Non-Java?**
- Course is not about Java
- Expected to use only those Java features taught in class.
- **Object-Oriented vs. Conventional Programming**
- Assume background in conventional programming: Types, variables, assignment, constants, expression, conditionals and loops, input and output, arrays and/or strings, procedures/methods.
- Weeding out first assignment.
- Will teach all aspects of object-oriented programming.
- Repetition for those who know object-oriented programming.
### Course Content
[Chart: component complexity vs. number of components for 110 (Introductory Programming), 401 (Intermediate Programming), and 410 (Data Structures)]
- **Small number of simple code fragments connected by you**
- **Large number (~40) of simple code fragments connected by you**
- **Small number of complex code fragments connected by you**
Only the optional compiler course will involve more components!
**Layered Assignments = Project**
[Figure: layered assignment timeline — Course Information (8/18), Bridge Scene - 1st day (long), Bridge Scene - 2nd day (short)]
- **Assignment 11**
- **Assignment 2**
- **Assignment 1**
Weekly assignments will build on each other to create a semester project.
Each assignment with its own due date and points.
**Extra Credit**
---
**Extra Credit**
Allow (a) a number to be succeeded or preceded by a variable number of blanks as in " 2 4 5 6 2 5 3 0 0 0 " (b) an arbitrary number of numbers in a line. Do not terminate the program after encountering the first illegal (unexpected) character. Print the illegal character and continue scanning assuming the character had not been input.
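The extra-credit behavior above can be sketched as follows. This is a minimal illustration, not the official solution: the class name, method decomposition, and sample input are all made up for demonstration.

```java
import java.util.ArrayList;
import java.util.List;

public class ATolerantNumberScanner {
    // Returns the scanned output lines: parsed numbers, plus a note for each
    // illegal character, which is skipped as if it had not been input.
    public static List<String> scan(String input) {
        List<String> output = new ArrayList<>();
        int index = 0;
        while (index < input.length()) {
            char ch = input.charAt(index);
            if (ch == ' ') {
                index++;                      // blanks may precede or follow a number
            } else if (Character.isDigit(ch)) {
                int start = index;            // scan a maximal run of digits
                while (index < input.length() && Character.isDigit(input.charAt(index))) {
                    index++;
                }
                output.add(String.valueOf(Integer.parseInt(input.substring(start, index))));
            } else {
                output.add("Illegal character: " + ch);
                index++;                      // continue as if the character had not been input
            }
        }
        return output;
    }

    public static void main(String[] args) {
        for (String line : scan(" 2 45 62 5x3 ")) {
            System.out.println(line);
        }
    }
}
```

Because blanks are skipped one at a time and each number is a maximal digit run, any number of numbers per line and any amount of surrounding whitespace fall out of the same loop.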
---
- Students have varying interests and abilities
- Make up or insurance against bad grade in other assignments or exams
- Better to give early without extra credit than late with
- But if you are already late, might as well do extra credit to make up for late points
## CONSTRAINTS
**Constraints**
1. Java has libraries that make the writing of this program trivial. The only library functions you should use are the `Character.isDigit()`, `substring()` and the `Integer.parseInt()` functions. `Character.isDigit()` is like `Character.isUpperCase()` except that it tells us whether a character is a digit rather than whether it is an uppercase letter. `substring()`, applicable to any string, is explained in the class material. `Integer.parseInt()` takes a string consisting of digits and converts into an integer.
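The three permitted calls behave as follows; the class name and values in this snippet are made up purely for demonstration.

```java
public class AllowedCallsDemo {
    public static void main(String[] args) {
        String token = "463";
        // Character.isDigit: is this character a digit?
        System.out.println(Character.isDigit(token.charAt(0))); // prints true
        // substring: the characters from the given index to the end
        System.out.println(token.substring(1));                 // prints 63
        // Integer.parseInt: a string of digits converted to an int
        System.out.println(Integer.parseInt(token) + 1);        // prints 464
    }
}
```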
- Forbid use of certain Java libraries
- Goal is not to teach Java and its libraries
- It is to teach you how to build these libraries
- Usually Java features not covered in class will be banned
- Correctness is only one of the goals
- Require use of certain programming techniques
- Program must also be efficient and well crafted
Early Reward
Assignment 1:
Writing a Number Scanner
Date Assigned: Tue Aug 23, 2016
Completion Date: Fri Sep 2, 2016 (11:55 pm)
Early Submission Date: Wed Aug 31, 2016 (11:55 pm)
5% Extra credit if submitted early on a Wednesday
Normal submission date is a Friday
If you shoot for Wednesday you should be ready by Friday with TA help
End of day is 11:55pm for regular and late deadlines
## Late Penalty
- **5% late if submitted next Monday, 10% late if submitted next Friday**
- **20% late if submitted any day after that, *but no manual grading points***
- **0% for manual grading component – fairly high in later assignments**
- **No correction of auto grading results**
**Why Small Late Penalty?**
- Big difference between getting code working and almost working
- Big differences in grades also
- Very little partial credit if program not working
- Errors will accumulate because of layered assignments
- Better late than not working
- But being more than 1 week late for multiple assignments is recipe for failing
CODING VS DESIGN/DEBUGGING
The TAs and I are here to help you debug and design (but not write code)
Assignments may contain solution in English (read only if stuck)
Implementation Hints
Read this only if you have trouble developing your own solution. The small print is encouraging you to first think of the problem on your own.
As in the class example, you should define a variable that keeps track of the index of the start of each token. In the class example, the size of the token was constant (1) and there were no spaces in between tokens. This means that the startIndex of a token was always one more than the startIndex of the previous token. Now the tokens are of variable size. This means that in changing the startIndex, you must take into account the end of the variable-length token and the spaces in between tokens. Given a start index, the end of the token can be computed using the `indexOf()` function. As in the class example, make sure the startIndex does not go beyond the length of the string.
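The hinted loop structure might look like the sketch below. It assumes a single blank between tokens (the simple case from the class example); the class and variable names are illustrative, not from the assignment.

```java
import java.util.ArrayList;
import java.util.List;

public class ATokenWalker {
    // Returns the blank-separated tokens of input. startIndex tracks the first
    // character of the current token; indexOf(' ', startIndex) finds the blank
    // that ends it; startIndex then steps past that blank.
    public static List<String> tokens(String input) {
        List<String> result = new ArrayList<>();
        int startIndex = 0;
        while (startIndex < input.length()) {
            int endIndex = input.indexOf(' ', startIndex); // blank ending this token
            if (endIndex < 0) {
                endIndex = input.length();                 // last token: no trailing blank
            }
            result.add(input.substring(startIndex, endIndex));
            startIndex = endIndex + 1;                     // step past the separating blank
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(tokens("12 345 6")); // prints [12, 345, 6]
    }
}
```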
Can help each other with design and debugging as long as it does not lead to code sharing
Honor Code
Sharing of code is honor code violation
More or less same project as last time, but do not look at past solutions
Automatic grader may be extended with plagiarism detector
Why We Lie: TED Radio Hour: NPR
**Out of Office Help**
**Do**
- Use Piazza for out of class questions
- Use [help401-002-f16@cs.unc.edu](mailto:help401-002-f16@cs.unc.edu) for grades and other private queries
**Don’t**
- Send mail to individual instructors
Automated Checking
Warn against requirements not met
Indicate potential sources of error
AUTOMATIC CHECKING: CODE ANALYZER
Code analyzer runnable from Eclipse
package main;
Multiple markers at this line
- expectedDeclaredSignatures: (Assignment1.java:3) In type Assignment1, missing declared signature: processInput:->void
- typeDefined: (Assignment1.java:3) Class/Interface Assignment1 matching tag main.Assignment(.* defined
CODE ANALYZER: PROBLEMS Pane
0 errors, 135 warnings, 66 others (Filter matched 166 of 201 items)
- illegalMethodCall: (IllegalMethodCalls.java:123) called disallowed method bar---> String.split
- illegalMethodCall: (IllegalMethodCalls.java:126) called disallowed method bar---> String.split
Number Scanner
Checks File
<property name="expectedTypes" value="main.Assignment(.*, mp.scanner.Scanner)"/>
<module name="unc.tools.checkstyle.ANonCachingTreeWalker">
<property name="severity" value="warning"/>
<module name="ExpectedDeclaredSignatures">
<property name="severity" value="warning"/>
<property name="expectedSignatures" value="main.Assignment1 = processInput
indexOfNot: String; char; int->int // EC, grail.scanner.ScanningIterator = indexOfNot
module name="MissingMethodCall">
<property name="severity" value="warning"/>
<property name="expectedCalls" value="main.Assignment1 = processInput:->void
indexOfNot: String; char; int->void // EC AND (.*!hasNext:->boolean // EC AND (.*
>void AND indexOfNot: String; char; int->void // EC "/>
</module>
AUTOMATIC CHECKING: LOCAL CODE EXECUTOR
AUTOMATIC CHECKING: SERVER CODE ANALYZER AND CODE EXECUTOR
```java
//OFrame editor2 = ObjectEditor.edit(interpreterView);
OFrame editor = ObjectEditor.edit(bridgeScene);
bridgeScene.setOFrame(editor);
editor.hideMainPanel();
editor.setSize(800, 500);
pm.stepComplete();
sleep(2000);
```
Send Assignment
COMP 401-038
Assignment
Preconditions, Commands, Threads, Animation (Assignment 10)
Onyen vitkus
Password ●●●●●●●●●●●
Automatic Checking: Server Code Analyzer and Code Executor
Grading response for Preconditions, Commands, Threads, Animation (Assignment 10)
<table>
<thead>
<tr>
<th>Requirement</th>
<th>% Autograded</th>
<th>Points</th>
<th>Possible</th>
</tr>
</thead>
<tbody>
<tr>
<td>Precondition methods</td>
<td>100.0</td>
<td>6</td>
<td>12</td>
</tr>
<tr>
<td>Console view shows precond events</td>
<td>0.0</td>
<td>0</td>
<td>12</td>
</tr>
<tr>
<td>Say & move cmd objects</td>
<td>100.0</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>Move cmd constructor</td>
<td>100.0</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>Say cmd constructor</td>
<td>100.0</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>Say and move parsers</td>
<td>100.0</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>Command object invoked</td>
<td>100.0</td>
<td>0</td>
<td>5</td>
</tr>
<tr>
<td>Animating methods</td>
<td>100.0</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>Methods start new threads</td>
<td>100.0</td>
<td>0</td>
<td>10</td>
</tr>
<tr>
<td>Animating command classes</td>
<td>0.0</td>
<td>0</td>
<td>20</td>
</tr>
<tr>
<td>Animator classes</td>
<td>0.0</td>
<td>0</td>
<td>20</td>
</tr>
<tr>
<td>Guard animation</td>
<td>100.0</td>
<td>2</td>
<td>5</td>
</tr>
<tr>
<td>Precondition buttons</td>
<td>0.0</td>
<td>0</td>
<td>10</td>
</tr>
<tr>
<td>Awesome demo</td>
<td>0.0</td>
<td>0</td>
<td>5</td>
</tr>
</tbody>
</table>
Notes:
- Command object invoked
- Command objects returned from say and move are invoked.
- Couldn't find a parse invoke that called undo.
Send Assignment
COMP 401-038
Assignment
Preconditions, Commands, Threads, Animation (Assignment 10)
Onyen vitkus
Password: ***************
Send Assignment
```java
public Line getXAxis() {return xAxis;}
public Line getYAxis() {return yAxis;}
public StringShape getXLabel() {return xLabel;}
public StringShape getYLabel() {return yLabel;}
public int getAxesLength() {return axesLength;}
public void setAxesLength(int anAxesLength) {
    axesLength = anAxesLength;
    xAxis.setWidth(axesLength);
    yAxis.setHeight(axesLength);
    xAxis.setX(toXAxisX());
    xAxis.setY(toXAxisY());
    yAxis.setX(toYAxisX());
    yAxis.setY(toYAxisY());
    ...
}
```
OBJECT EDITOR: TRAINING WHEELS
RESEARCH VS REQUIRED TOOLS
Only ObjectEditor is required
Others are optional, research tools with consent form
package main;
Multiple markers at this line
- expectedDeclaredSignatures: (Assignment1.java:3) In type Assignment1, missing declared signature: processInput: -> void
- typeDefined: (Assignment1.java:3) Class/Interface Assignment1 matching tag main.Anonymous(,*) defined
# Downloads and Consent Form
| Downloads | File |
|---|---|
| ObjectEditor Version 3 (used in comp110) | oeall3 |
| ObjectEditor Version 19 (used last year) | oeall19 |
| ObjectEditor Version 21 | oeall21 |
| ObjectEditor Version 22 (latest, use this unless it fails on you) | oeall22 |
| Checkstyle | UNCChecks 6.5.0.jar, Checkstyle 6.5.zip |
| GraderBasics | GraderBasics |
| Consent Form | ConsentForm |
| Images | images.zip |
### Unusual Course
[Chart: the component complexity vs. number of components plot again, with courses 110, 401, and 410]
Only the optional compiler course will involve more components!
Some creativity in number and nature of components
Unusual course – no textbook at this level covers such large programs
LEARNING RESOURCES
No textbook!
Alternatives?
Java Program Structure
```java
package lectures.scanning;
public class AnUpperCasePrinter {
public static void main(String[] args) {
if (args.length != 1) {
System.out.println("Illegal number of arguments.
+ " + Terminating program.");
System.exit(-1);
}
String scannedString = args[0];
System.out.println("Upper Case Letters:");
int index = 0;
while (index < scannedString.length()) {
char nextLetter = scannedString.charAt(index);
if (Character.isUpperCase(nextLetter))
System.out.print(nextLetter);
index++;
}
System.out.println();
}
}
```
Must have this procedure header in executable program
Predefined internal library operations
Print on new vs. previous line
**PowerPoint of Slides**
---
**Java Program Structure**
```java
package lectures.scanning;
public class AnUpperCasePrinter {
public static void main(String[] args) {
if (args.length != 1) {
System.out.println("Illegal number of arguments."
+ " . Terminating program.");
System.exit(-1);
}
String scannedString = args[0];
System.out.println("Upper Case Letters:");
int index = 0;
while (index < scannedString.length()) {
char nextLetter = scannedString.charAt(index);
if (Character.isUpperCase(nextLetter))
System.out.print(nextLetter);
index++;
}
System.out.println();
}
}
```
- Must have this procedure header in executable program
- Predefined internal library operations
- Print on new vs. previous line
SLIDE SHOW ➔ SYNCHRONIZED RECORDING AND ANIMATIONS
JAVA PROGRAM STRUCTURE
```java
package lectures.scanning;
public class AnUpperCasePrinter {
public static void main(String[] args) {
if (args.length != 1) {
System.out.println("Illegal number of arguments:" + args.length + ". Terminating program.");
System.exit(-1);
}
String scannedString = args[0];
System.out.println("Upper Case Letters: ");
int index = 0;
while (index < scannedString.length()) {
char nextLetter = scannedString.charAt(index);
if (Character.isUpperCase(nextLetter))
System.out.print(nextLetter);
index += 1;
}
}
}
```
Can escape out into unsynchronized or no audio mode (WPS Office on Android will play synchronized audio)
POWERPOINT SLIDES WITH UNSYNCHRONIZED RECORDINGS AND MEDIA CONTROL
JAVA PROGRAM STRUCTURE
```java
package lectures.scanning;
public class AnUpperCasePrinter {
public static void main(String[] args) {
if (args.length != 1) {
System.out.println("Illegal number of arguments.");
System.exit(-1);
}
String scannedString = args[0];
System.out.println("Upper Case Letters:");
int index = 0;
while (index < scannedString.length()) {
char nextLetter = scannedString.charAt(index);
if (Character.isUpperCase(nextLetter))
System.out.print(nextLetter);
index++;
}
System.out.println();
}
}
```
Print on new vs. previous line
Must have this procedure header in executable program
Predefined internal library operations
Recorded YouTube Videos
Play 2X, rewind, pause, fast-forward to match understanding pace
YouTube videos are generated from the PPT recordings but do not allow slide-based browsing
PPT modes allow slide-based browsing but require downloading the PPT
OFFICE MIX
COMP 401
BASICS OF SCANNING AND JAVA
Instructor: Prasun Dewan (FB 150, dewan@unc.edu)
SLIDE-BASED BROWSING
Comp 401
Basics of Scanning and Java
Problem
Algorithm
Representation
Code
Programming Overview Through Example
Scanning Problem
- Scanning image for text
- Scanning frequencies for radio stations
- Finding words in a sentence
- Finding identifiers, operators, in a program
Algorithm
- Description of solution to a problem
- Can be in any "language"
Problem
Input stream
- tokens
- output stream
Long pauses, you may know the answer
Cannot hear student answer
Audio is not the fastest way to get information, especially when studying for an exam
Recordings of live lectures with q/a rather than 15 minute lessons
Can fast forward
You can get a clue from my answer
John F. Kennedy, marker = 1, output = none
We continue incrementing, without output, until the marker is 5, when we output J.
John F. Kennedy, marker = 5, output = F
Again the marker is incremented without output, until it reaches 8, at which point we output K.
John F. Kennedy, marker = 8, output = K
Again we increment the marker.
John F. Kennedy, marker = 9, output =
A visual scan of the string shows that there are no more upper case characters. The computer must similarly scan the string to make this determination. Thus, it keeps incrementing the marker, finding no upper case letters, until it reaches the end, at which point the process stops.
John F. Kennedy, marker = 14, output = none
**Scanning Java Program**
Below, we see the data structures and algorithm converted to a Java program.
```java
package lectures.scanning;
public class AnUpperCasePrinter {
public static void main(String[] args) {
if (args.length != 1) {
System.out.println("Illegal number of arguments:" + args.length + ", Terminating program.");
System.exit(-1);
}
}
}
```
Lots of (obvious) mistakes
Little graphics, designed for mobile reading on mobile computers
John F. Kennedy, marker = 1, output = none
We continue incrementing, without output, until the marker is 5, when we output J.
John F. Kennedy, marker = 5, output = F
Again the marker is incremented without output, until it reaches 8, at which point we output K.
John F. Kennedy, marker = 8, output = K
Again we increment the marker.
John F. Kennedy, marker = 9, output = none
A visual scan of the string shows that there are no more upper case characters. The computer must similarly scan the string to make this determination. Thus, it keeps incrementing the marker, finding no upper case letters, until it reaches the end, at which point the process stops.
John F. Kennedy, marker = 14, output = none
Scanning Java Program
Below, we see the data structures and algorithm converted to a Java program.
```java
package lectures.scanning;
public class AnUpperCasePrinter {
public static void main(String[] args) {
if (args.length != 1) {
System.out.println("Illegal number of arguments:" + args.length + ", Terminating program.");
System.exit(-1);
}
}
}
```
Prasun Dewan
package lectures.scanning;
import util.annotations.WebDocuments;
@WebDocuments({"Lectures/Scanning.pptx", "Lectures/Scanning.pdf", "Video"})
public class AnUpperCasePrinter {
public static void main(String[] args) {
if (args.length != 1) {
System.out.println("Illegal number of arguments:" + args[1] + ". Terminating program.");
System.exit(-1);
}
String scannedString = args[0];
System.out.println("Upper Case Letters:");
int index = 0;
while (index < scannedString.length()) {
char nextLetter = scannedString.charAt(index);
if (nextLetter >= 'A' && nextLetter <= 'Z')
System.out.print(nextLetter);
index++;
}
System.out.println();
}
}
Eclipse Java Project of Lecture Code on Git
DESIGN SPACE OF STUDY MODES!
- You Tube video
- Slides without audio
- Slides with unsynchronized audio
- Slides with synchronized audio
- Office Mix
- Word PDFs
- Shared Google Docs
- Eclipse Project
- Browsable Project
# Web Site Links
<table>
<thead>
<tr>
<th>Scanning</th>
<th>PowerPoint PDF YouTube Mix</th>
<th>Docx PDF Drive</th>
<th>Scanning Visualization</th>
<th>Number Scanner Checks File</th>
<th>lectures.scanning Package</th>
</tr>
</thead>
</table>
What do we do in class?
Live Lecture?
- Value
- Without resources
- With resources
Discussion is About Concrete Programs
Java Program Structure
```java
package lectures.scanning;
public class AnUpperCasePrinter {
public static void main(String[] args) {
if (args.length != 1) {
System.out.println("Illegal number of arguments:" + args.length + ". Terminating program.");
System.exit(-1);
}
String scannedString = args[0];
System.out.println("Upper Case Letters:");
int index = 0;
while (index < scannedString.length()) {
char nextLetter = scannedString.charAt(index);
if (nextLetter >= 'A' && nextLetter <= 'Z') {
System.out.println(nextLetter);
}
index++;
}
}
}
```
Pace at which you understand lecture in general and code in particular varies
PACING YOURSELF
A variety of sources with different amounts of information
Each source can be browsed at your own pace
WHAT DO WE DO IN CLASS?
Homework?
Deep thinking done solo?
Limited discussion with classmates?
WHAT DO WE DO IN CLASS?
- Quizzes: Test that you understood support material?
- One hour means deep testing, puts pressure
- May not have discipline to access material
WHAT DO WE DO IN CLASS?
I code in class, you watch.
I am not a touch typist!
You learn more as a driver than passenger.
```java
public class AConsoleReadingUpperCasePrinter {
    /**
     * MAIN METHOD HEADER
     * Syntax of main method shown below.
     * Methods correspond to procedures and functions in other languages.
     * Method names should be camel case starting with a lowercase letter.
     * Everything before the first curly brace is the method header.
     */
    public static void main(String[] args) {
        /*
         * What happens if you use the following header instead, can you execute the program?
         * Comment out the header above and uncomment the following to see what happens?
         * What is the difference between the two headers?
         */
        // public static void main(String args) {
        /*
         * METHOD BODY
         * The code between the outermost curly braces is the method body.
         * In the method body, you do all your code.
         */
    }
}
```
CLASS STRUCTURE
Introduction to Praxis
Do praxis with as large a group as possible for 50 minutes and answer associated Sakai quiz
Answer class questions on the material
Finish praxis based on the answers and further understanding
If you ask for help from us, you are pledging that you have done the praxis
ATTENDANCE
Do
- Come to class
Don’t
- Come to class late or leave class early
ATTENDANCE SAKAI “QUIZ”
Attendance
Table of Contents
Part 1 of 1 -
Question 1 of 3
Identify the dates on which you cannot attend class and give the reason.
Maximum number of characters (including HTML tags added by text editor): 32,000
[Count Characters] [Show/Hide Rich-Text Editor]
Part 1 of 1 -
Question 1 of 5
Explain how you contributed constructively to the class discussion, giving the date.
Maximum number of characters (including HTML tags added by text editor): 32,000
Count Characters
**Grade Distribution (Current Plan)**
<table>
<thead>
<tr>
<th>Component</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>Midterm</td>
<td>22%</td>
</tr>
<tr>
<td>Final</td>
<td>28%</td>
</tr>
<tr>
<td>In-Class Work (Recitations & Lecture Quizzes, Discussion, Attendance)</td>
<td>15%</td>
</tr>
<tr>
<td>Assignments</td>
<td>35%</td>
</tr>
<tr>
<td>Fudge Factor (Special participation, other distinguishing factors), particularly useful for borderline cases</td>
<td>5%</td>
</tr>
</tbody>
</table>
Package ‘RCDT’
October 31, 2023
Type Package
Title Fast 2D Constrained Delaunay Triangulation
Version 1.3.0
Maintainer Stéphane Laurent <laurent_step@outlook.fr>
Description Performs 2D Delaunay triangulation, constrained or unconstrained, with the help of the C++ library ‘CDT’. A function to plot the triangulation is provided. The constrained Delaunay triangulation has applications in geographic information systems.
License GPL-3
URL https://github.com/stla/RCDT
BugReports https://github.com/stla/RCDT/issues
Imports colorsGen, gplots, graphics, Polychrome, Rcpp (>= 1.0.8), rgl, Rvcg
Suggests knitr, rmarkdown, testthat (>= 3.0.0), uniformly, viridisLite
LinkingTo BH, Rcpp, RcppArmadillo
SystemRequirements C++ 17
VignetteBuilder knitr
Config/testthat/edition 3
Encoding UTF-8
RoxygenNote 7.2.3
NeedsCompilation yes
Author Stéphane Laurent [aut, cre],
Artem Amirkhanov [cph] (CDT library)
Repository CRAN
Date/Publication 2023-10-31 13:20:02 UTC
**R topics documented:**
RCDT-package
**Fast 2D Constrained Delaunay Triangulation**
## Description
Performs 2D Delaunay triangulation, constrained or unconstrained, with the help of the C++ library 'CDT'. A function to plot the triangulation is provided. The constrained Delaunay triangulation has applications in geographic information systems.
## Details
The DESCRIPTION file:
- **Type:** Package
- **Package:** RCDT
- **Title:** Fast 2D Constrained Delaunay Triangulation
- **Version:** 1.3.0
- **Authors@R:** c( person("Stéphane", "Laurent", , "laurent_step@outlook.fr", role = c("aut", "cre")), person("Artem", "Amirkhanov", role = "cph", comment = "CDT library") )
- **Maintainer:** Stéphane Laurent <laurent_step@outlook.fr>
- **Description:** Performs 2D Delaunay triangulation, constrained or unconstrained, with the help of the C++ library 'CDT'. A function to plot the triangulation is provided. The constrained Delaunay triangulation has applications in geographic information systems.
- **License:** GPL-3
- **URL:** https://github.com/stla/RCDT
- **BugReports:** https://github.com/stla/RCDT/issues
- **Imports:** colorsGen, gplots, graphics, Polychrome, Rcpp (>= 1.0.8), rgl, Rvcg
- **Suggests:** knitr, rmarkdown, testthat (>= 3.0.0), uniformly, viridisLite
- **LinkingTo:** BH, Rcpp, RcppArmadillo
- **SystemRequirements:** C++ 17
- **VignetteBuilder:** knitr
- **Encoding:** UTF-8
- **RoxygenNote:** 7.2.3
- **Author:** Stéphane Laurent [aut, cre]. Artem Amirkhanov [cph] (CDT library)
- **Archs:** x64
Index of help topics:
RCDT-package    Fast 2D Constrained Delaunay Triangulation
delaunay        2D Delaunay triangulation
delaunayArea    Area of Delaunay triangulation
plotDelaunay    Plot 2D Delaunay triangulation
---
The `delaunay` function is the main function of this package. It can build a Delaunay triangulation of a set of 2D points, constrained or unconstrained. The constraints are defined by the `edges` argument.
**Author(s)**
NA
Maintainer: Stéphane Laurent <laurent_step@outlook.fr>
---
**Description**
Performs a (constrained) Delaunay triangulation of a set of 2d points.
**Usage**
delaunay(points, edges = NULL, elevation = FALSE)
**Arguments**
- **points**: a numeric matrix with two or three columns (three columns for an elevated Delaunay triangulation)
- **edges**: the edges for the constrained Delaunay triangulation, an integer matrix with two columns; `NULL` for no constraint
- **elevation**: Boolean, whether to perform an elevated Delaunay triangulation (also known as 2.5D Delaunay triangulation)
**Value**
A list. There are three possibilities.
- **If the dimension is 2** and `edges=NULL`, the returned value is a list with three fields: `vertices`, `mesh`, and `edges`. The `vertices` field contains the given vertices. The `mesh` field is an object of class `mesh3d`, ready for plotting with the `rgl` package. The `edges` field provides the indices of the vertices of the edges, given by the first two columns of a three-column integer matrix. The third column, named `border`, contains only zeros and ones; a border (exterior) edge is labelled with a 1.
- **If the dimension is 2** and `edges` is not NULL, the returned value is a list with four fields: `vertices`, `mesh`, `edges`, and `constraints`. The `vertices` field contains the vertices of the triangulation. They coincide with the given vertices if the constraint edges do not intersect; otherwise they include the intersections in addition to the given vertices. The `mesh` and `edges` fields are similar to the previous case, the unconstrained Delaunay triangulation. The `constraints` field is an integer matrix with two columns, representing the constraint edges. These are not the same as the edges provided by the user if those intersect; if they do not intersect, the two generally coincide, except in some rare corner cases.
- **If `elevation=TRUE`**, the returned value is a list with five fields: `vertices`, `mesh`, `edges`, `volume`, and `surface`. The `vertices` field contains the given vertices. The `mesh` field is an object of class `mesh3d`, ready for plotting with the `rgl` package. The `edges` field is similar to the previous cases. The `volume` field provides a number, the sum of the volumes under the Delaunay triangles, that is to say the total volume under the triangulated surface. Finally, the `surface` field provides the sum of the areas of all triangles, thereby approximating the area of the triangulated surface.
Note
The triangulation can depend on the order of the points; this is shown in the examples.
Examples
```r
library(RCDT)
# random points in a square ####
set.seed(314)
library(uniformly)
square <- rbind(
c(-1, 1), c(1, 1), c(1, -1), c(-1, -1)
)
pts_in_square <- runif_in_cube(10L, d = 2L)
pts <- rbind(square, pts_in_square)
del <- delaunay(pts)
opar <- par(mar = c(0, 0, 0, 0))
plotDelaunay(
del, type = "n", xlab = NA, ylab = NA, asp = 1,
fillcolor = "random", luminosity = "light", lty_edges = "dashed"
)
par(opar)
# the order of the points matters ####
# the Delaunay triangulation is not unique in general;
# it can depend on the order of the points
points <- cbind(
c(1, 2, 1, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 4, 3, 4),
c(1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 2, 3, 4, 3, 4, 4)
)
del <- delaunay(points)
opar <- par(mar = c(0, 0, 0, 0))
plotDelaunay(
del, type = "p", pch = 19, xlab = NA, ylab = NA, asp = 1,
lwd_edges = 2, lwd_borders = 3
)
par(opar)
# now we randomize the order of the points
set.seed(666L)
points2 <- points[sample.int(nrow(points)), ]
del2 <- delaunay(points2)
opar <- par(mar = c(0, 0, 0, 0))
plotDelaunay(
del2, type = "p", pch = 19, xlab = NA, ylab = NA, asp = 1,
lwd_edges = 2, lwd_borders = 3
)
```
```r
# a constrained Delaunay triangulation: outer and inner dodecagons ####
# points
nsides <- 12L
angles <- seq(0, 2*pi, length.out = nsides+1L)[-1L]
points <- cbind(cos(angles), sin(angles))
points <- rbind(points, points/1.5)
# constraint edges
indices <- 1L:nsides
edges_outer <- cbind(
  indices, c(indices[-1L], indices[1L])
)
edges_inner <- edges_outer + nsides
edges <- rbind(edges_outer, edges_inner)
# constrained Delaunay triangulation
del <- delaunay(points, edges)
# plot
opar <- par(mar = c(0, 0, 0, 0))
plotDelaunay(
  del, type = "n", fillcolor = "yellow", lwd_borders = 2, asp = 1,
  axes = FALSE, xlab = NA, ylab = NA
)
par(opar)
# another constrained Delaunay triangulation: a face ####
V <- read.table(
  system.file("extdata", "face_vertices.txt", package = "RCDT")
)
E <- read.table(
  system.file("extdata", "face_edges.txt", package = "RCDT")
)
del <- delaunay(
  points = as.matrix(V)[, c(2L, 3L)], edges = as.matrix(E)[, c(2L, 3L)]
)
opar <- par(mar = c(0, 0, 0, 0))
plotDelaunay(
  del, type = "n", col_edges = NULL, fillcolor = "salmon",
  col_borders = "black", col_constraints = "purple",
  lwd_borders = 3, lwd_constraints = 3,
  asp = 1, axes = FALSE, xlab = NA, ylab = NA
)
par(opar)
```
delaunayArea Area of Delaunay triangulation
Description
Computes the area of a region subject to Delaunay triangulation.
Usage
delaunayArea(del)
Arguments
del an output of `delaunay` executed with `elevation=FALSE`
Value
A number, the area of the region triangulated by the Delaunay triangulation.
Examples
```r
library(RCDT)
# random points in a square ####
set.seed(666L)
library(uniformly)
square <- rbind(
c(-1, 1), c(1, 1), c(1, -1), c(-1, -1)
)
pts <- rbind(square, runif_in_cube(8L, d = 2L))
del <- delaunay(pts)
delaunayArea(del)
# a constrained Delaunay triangulation: outer and inner squares ####
innerSquare <- rbind( # the hole
c(-1, 1), c(1, 1), c(1, -1), c(-1, -1)
)
outerSquare <- 2*innerSquare # area: 16
points <- rbind(innerSquare, outerSquare)
# constraint edges: the vertex indices of the two squares
edges <- rbind(
  cbind(1L:4L, c(2L:4L, 1L)),  # inner square (the hole)
  cbind(5L:8L, c(6L:8L, 5L))   # outer square
)
del <- delaunay(points, edges = edges)
delaunayArea(del) # 16-4
```
---
plotDelaunay
**Plot 2D Delaunay triangulation**
Description
Plot a constrained or unconstrained 2D Delaunay triangulation.
Usage
```
plotDelaunay(
del,
col_edges = "black",
col_borders = "red",
  col_constraints = "green",
  fillcolor = "random",
  distinctArgs = list(seedcolors = c("#ff0000", "#00ff00", "#0000ff")),
  randomArgs = list(hue = "random", luminosity = "dark"),
  lty_edges = par("lty"),
  lwd_edges = par("lwd"),
  lty_borders = par("lty"),
  lwd_borders = par("lwd"),
  lty_constraints = par("lty"),
  lwd_constraints = par("lwd"),
  ...
)
```
Arguments
del an output of `delaunay` without constraints (edges=NULL) or with constraints
col_edges the color of the edges of the triangles which are not border edges nor constraint edges; NULL for no color
col_borders the color of the border edges; note that the border edges can contain the constraint edges for a constrained Delaunay triangulation; NULL for no color
col_constraints for a constrained Delaunay triangulation, the color of the constraint edges which are not border edges; NULL for no color
fillcolor controls the filling colors of the triangles: either NULL for no color, a single color, "random" to get multiple colors with `randomColor`, "distinct" to get multiple colors with `createPalette`, or a vector of colors, one color for each triangle; in this case the colors will be assigned in the order they are provided, but after the triangles have been circularly ordered (see the last example)
distinctArgs if fillcolor = "distinct", a list of arguments passed to `createPalette`
randomArgs if fillcolor = "random", a list of arguments passed to `randomColor`
lty_edges, lwd_edges graphical parameters for the edges which are not border edges nor constraint edges
lty_borders, lwd_borders graphical parameters for the border edges
lty_constraints, lwd_constraints in the case of a constrained Delaunay triangulation, graphical parameters for the constraint edges which are not border edges
... arguments passed to `plot` for the vertices, such as `type="n"`, `asp=1`, `axes=FALSE`, etc.
Value
No value, just renders a 2D plot.
See Also
The mesh field in the output of `delaunay` for an interactive plot. Other examples of `plotDelaunay` are given in the examples of `delaunay`.
Examples
```r
library(RCDT)
# random points in a square ####
square <- rbind(
c(-1, 1), c(1, 1), c(1, -1), c(-1, -1)
)
library(uniformly)
set.seed(314)
pts_in_square <- runif_in_cube(10L, d = 2L)
pts <- rbind(square, pts_in_square)
d <- delaunay(pts)
opar <- par(mar = c(0, 0, 0, 0))
plotDelaunay(
d, type = "n", xlab = NA, ylab = NA, axes = FALSE, asp = 1,
fillcolor = "random", lwd_borders = 3
)
par(opar)
```
```r
# a constrained Delaunay triangulation: pentagram ####
# vertices
R <- sqrt((5-sqrt(5))/10) # outer circumradius
r <- sqrt((25-11*sqrt(5))/10) # circumradius of the inner pentagon
k <- pi/180 # factor to convert degrees to radians
X <- R * vapply(0L:4L, function(i) cos(k * (90+72*i)), numeric(1L))
Y <- R * vapply(0L:4L, function(i) sin(k * (90+72*i)), numeric(1L))
x <- r * vapply(0L:4L, function(i) cos(k * (126+72*i)), numeric(1L))
y <- r * vapply(0L:4L, function(i) sin(k * (126+72*i)), numeric(1L))
vertices <- rbind(
  c(X[1L], Y[1L]),
  c(x[1L], y[1L]),
  c(X[2L], Y[2L]),
  c(x[2L], y[2L]),
  c(X[3L], Y[3L]),
  c(x[3L], y[3L]),
  c(X[4L], Y[4L]),
  c(x[4L], y[4L]),
  c(X[5L], Y[5L]),
  c(x[5L], y[5L])
)
# constraint edge indices (= boundary)
edges <- cbind(1L:10L, c(2L:10L, 1L))
# constrained Delaunay triangulation
del <- delaunay(vertices, edges)
# plot
opar <- par(mar = c(0, 0, 0, 0))
plotDelaunay(
  del, type = "n", asp = 1, fillcolor = "distinct", lwd_borders = 3,
  xlab = NA, ylab = NA, axes = FALSE
)
par(opar)
# interactive plot with 'rgl'
mesh <- del[["mesh"]]
library(rgl)
open3d(windowRect = c(100, 100, 612, 612))
shade3d(mesh, color = "red", specular = "orangered")
wire3d(mesh, color = "black", lwd = 3, specular = "black")
# plot only the border edges - we could find them in del[["edges"]]
# but we use the 'rgl' function getBoundary3d instead
open3d(windowRect = c(100, 100, 612, 612))
shade3d(mesh, color = "darkred", specular = "firebrick")
shade3d(getBoundary3d(mesh), lwd = 3)
# an example where 'fillcolor' is a vector of colors ####
n <- 50L # number of sides of the outer polygon
angles1 <- head(seq(0, 2*pi, length.out = n + 1L), -1L)
outer_points <- cbind(cos(angles1), sin(angles1))
m <- 5L # number of sides of the inner polygon
angles2 <- head(seq(0, 2*pi, length.out = m + 1L), -1L)
phi <- (1+sqrt(5))/2 # the ratio 2-phi will yield a perfect pentagram
inner_points <- (2-phi) * cbind(cos(angles2), sin(angles2))
points <- rbind(outer_points, inner_points)
# constraint edges
indices <- 1:n
edgesouter <- cbind(indices, c(indices[-1L], indices[1L]))
indices <- n + 1L:m
edgesinner <- cbind(indices, c(indices[-1L], indices[1L]))
edges <- rbind(edgesouter, edgesinner)
# constrained Delaunay triangulation
del <- delaunay(points, edges)
# there are 55 triangles:
del[["mesh"]]
# we make a cyclic palette of colors:
colors <- viridisLite::turbo(28)
colors <- c(colors, rev(colors[-1L]))
# plot
opar <- par(mar = c(0, 0, 0, 0))
plotDelaunay(
  del, type = "n", asp = 1, lwd_borders = 3, col_borders = "black",
  fillcolor = colors, col_edges = "black", lwd_edges = 1.5,
  axes = FALSE, xlab = NA, ylab = NA
)
par(opar)
```
Index

* package
  RCDT-package

createPalette
delaunay
delaunayArea
mesh3d
plot
plotDelaunay
randomColor
RCDT (RCDT-package)
RCDT-package
---
A Framework for Modeling, Building and Maintaining Enterprise Information Systems Software
Alexandre Cláudio de Almeida, Glauber Boff and Juliano Lopes de Oliveira
Informatics Institute - INF
Goiás Federal University - UFG
CP 131, Campus II, CEP 74001-970, Goiânia - Goiás, Brazil
{alexandre,glauber,juliano}@inf.ufg.br
Abstract—Enterprise Information System (EIS) software has three main aspects: data, which are processed to generate business information; application functions, which transform data into information; and business rules, which control and restrict the manipulation of data by functions. Traditional approaches to EIS software development consider only data and application functions. Rules are second-class citizens, embedded in the specification of either the data (as database integrity constraints) or the EIS functions (as part of the application software). This work presents a new, integrated approach for the development and maintenance of EIS software. The main ideas are to focus on the conceptual modeling of the three aspects of the EIS software - application functions, business rules, and database schema - and to automatically generate code for each of these software aspects. This improves software quality, reducing redundancies by centralizing the EIS definitions in a single conceptual model. Thanks to the automatic generation of code, this approach increases the productivity of the software engineering staff, making it possible to respond to the continuous changes in the business domain.
Keywords-Information Systems; Business Rules; Systems Maintenance; Model Driven Development; Schema Generation; Schema Evolution
I. INTRODUCTION
Modern organizations make intensive use of Enterprise Information Systems (EIS) to manage and control increasingly complex business processes and data in order to support both operational and decision making processes [1]. EIS support business processes by mediating the flow of information between the actors in the enterprise and providing accurate information to improve these actors' performance.
The development of EIS software requires thorough understanding of business domain and business rules (BR). The ability to connect business processes and business rules, making quality information available on time, defines the real value of the EIS software.
The construction and evolution of EIS software capable of providing this ability have challenged the Software Engineering community for many years. In spite of advances in software technology, which eliminated many incidental problems, EIS software is still built using the traditional approach: software requirements, business rules, and data models are separately analyzed, designed, and implemented. The object-oriented approach was not able to solve this problem, since persistence and performance issues frequently lead to the division of software responsibilities between the application software and the underlying database system, with business rules being embedded in one or both of these components.
While current EIS software is able to attend immediate business needs, it is very difficult to evolve or to adapt this software to the continuous changes in the business domain, since the concepts of this domain are spread among different components.
In this paper we present a software framework that solves this difficulty by adopting a model-based approach to EIS software development. In this approach, the application software, the business rules, and the underlying database schema are generated from a single conceptual model of the business domain concepts. Thus, evolving or adapting the EIS software to new business requirements is restricted to changing the conceptual model of the EIS software. The software, the rules, and the database are automatically constructed from this conceptual model.
Due to space limitations, we will not discuss the details of the user Interface component or the Service component. We focus on Metadata, Persistence and Business Rules components to explain our approach for code generation and database schema evolution based on EIS metadata.
To introduce the ideas of this approach and the framework which implements these ideas, this paper is organized as follows. Section II provides an overview of the framework that generates the EIS code from the system conceptual model. Section III discusses the conceptual modeling of databases and business rules for EIS. Section IV details the code generation for business rules and for the EIS database. Section V discusses aspects of EIS model evolution that are critical for the maintenance of EIS. Finally, Section VI presents concluding remarks, comparing our approach to related works and pointing directions for future work.
II. FRAMEWORK OVERVIEW
Our framework was inspired by the ideas of model-driven engineering [2]. The framework was built to support
automatic generation and maintenance of EIS software components using the EIS conceptual model as its input.
The macro-architecture of the framework, shown in Figure 1, has five main components. The Interface, Business Rules, and Persistence components contain transformation procedures and tools that map each aspect of the EIS conceptual model (application functions, business rules, and database schema, respectively) to software implementation models, generating the corresponding code.
The Metadata component is at the core of the framework. It supports all other components and is responsible for managing the EIS conceptual model. This conceptual model defines metadata that are used by the other framework components to build and manage the execution of the EIS software code (application functions, business rules, and database schema). This component does not require any service from the other components, but its interface provides services used in Object-Relational Mapping (ORM), such as obtaining the logical key attributes of an entity.
The Interface component of the architecture uses the EIS conceptual model (or simply the EIS metadata) to automatically generate the user interface widgets for the business applications. The Metadata component maintains, for each EIS concept, a User-Interface mapping describing how to present, in a user interface widget, each business concept.
The Service component provides tools and services to register and convert information from the Interface to the other components of the framework. Thus, it acts as a facade, isolating the user interface aspects of the EIS. For instance, the Interface component depends on the Service component to map user interface data to and from persistent data, as well as to evaluate business rules. The separation of the Interface component makes it easier to change the user interface without affecting the core of the EIS. The Service component needs services provided by the Business Rule component to validate user information, and services provided by the Metadata component that are used in ORM. It also provides services to manipulate entity instances: create, read, update, and delete.
The Business Rule component is responsible for managing a centralized business rules repository, which is stored in a database. To access the database, the Business Rule component uses the services provided in the Persistence component's interface. The main responsibilities of the Business Rule component are to translate the OCL rule definitions to platform-specific code, and to provide runtime facilities for evaluating and enforcing these rules.
The Persistence component maps the EIS conceptual model to the operational data model of the underlying DBMS and manages the evolution of the database schema according to the EIS conceptual metadata. This component also manages all access to persistent data, isolating the other framework components from changes in the database technology. It requires services provided by the Metadata component, which are used to set up and adjust stored procedure parameters, as well as the ORM functionalities provided by the Service component; in turn, it provides services to create and evolve the database schema.
A. Using the Framework to Build an EIS
The first step to build an EIS using our framework is registering Business Entity metadata. Entity metadata is divided in three types: representation, presentation, and rules. After this information is registered, tables and stored procedures (used by the Persistence component, PC) are generated automatically to store, manipulate, and validate the entity data (used by the Rules component, RC). After that, the software to manipulate the entity information is ready to use.
When a program is executed, the Interface component obtains from the Metadata component (MC) the metadata of the Business Entity. This metadata is used to assemble the User Interface (UI) that manipulates entity instances. Create, read, update, and delete are the functions provided by the UI. The entity instance obtained by the UI is passed to the Service component (SC). For example, if the user chooses the Create operation in the UI, the Service component passes this instance to the Rule component for validation. If validation succeeds, the SC obtains the metadata from the MC and performs the Object-Relational Mapping (ORM); the SC then passes the mapped data to the PC, which calls the specific stored procedures of the manipulated entity.
III. CONCEPTUAL MODELING OF EIS DATABASE AND BUSINESS RULES
The Business Rules (BR) of an EIS can be considered as statements that define or constraint any business aspect [3]. These rules formalize the business concepts, the relationships among these concepts, and the constraints that must be enforced to guarantee the integrity and consistency of business data and processes.
Since BR constrain business operations, they are also known as application domain rules [4]. The implementation of application programs allows EIS users to perform business processes and to manipulate business information according to the BR. Therefore, one can think of BR as abstract expressions that define and constrain the EIS, directing it to fulfill the information needs of the underlying business domain. From this perspective, business rules can be classified in four categories [5]:
1) definitions (of business terms);
2) facts (that connect terms);
3) constraints (on terms and/or facts); and
4) derivations (which infer new terms and/or facts from those already known).
The first two categories contain structural rules, which are well supported by current modeling tools (UML or the ER model, for instance).
Thus, our work focuses on the other two rule categories: constraints and derivations (which we call, respectively, validation and derivation rules). We refer to the set of rules in these two categories as action rules.
Traditionally, BR are represented and implemented as code embedded in the application programs or in the database schema. Thus, business rules are second-class citizens, subordinated to databases or application programs. Rules are analyzed, designed, and implemented as a dependent concept, from an implementation perspective of the system.
This approach has several drawbacks, mainly on the portability and maintainability of the EIS, due to the tight coupling between the definitions of what the system must do (specified by the BR) and how the system works (coded in the application programs and database constraints) [6]. To minimize these difficulties, EIS BR should be abstractly and independently represented, and should contain no implementation detail (platform or technological definitions, for instance). Defining rules in such a way is possible, but it demands an adequate infrastructure to manage the independent rules.
In our approach all BR properties are stored in a single Enterprise Information System Business Rule (EIS-BR) repository, implemented as a rules database. Thus, a database management system (DBMS) enforces security and provides accessibility for authorized users to access the BR repository. Only authorized staff has access to the centralized repository of rules definitions.
To conceptually represent the structural aspects of the EIS domain, such as business concepts, instances, relations, and static constraints, we implemented an object-oriented variation of the classic Entity-Relationship (ER) Conceptual Data Model. Using this model we can automatically generate the EIS database schema, using well-known ER to SQL mapping algorithms. However, the ER model provides appropriate support only to structural constraints; action rules, assertions and derivation rules are not directly supported by this model.
To define and evaluate these types of rules, which are not naturally represented with ER modeling primitives, we chose a language specifically designed to express rules within an object-oriented model. The OCL (Object Constraint Language) expressions allow the designer to define:
1) Invariant conditions (which must be satisfied in all system states);
2) Query expressions (covering all the model elements); and
3) Constraints on operations that change systems state (e.g., pre-conditions).
Combining the expressive power of the ER conceptual data model with OCL dynamic constraint expressions allows our framework to specify both structural and behavioral constraints in a high-level, abstract model of the EIS. In the example illustrated in Figure 2, an ER model represents a simplified enterprise domain. This simple conceptual model contains structural constraints, such as the following cardinality (or multiplicity) constraints: Rule 1) "An Employee must work in a single Department"; and Rule 2) "An Employee can manage at most one Department". However, the constraint expressivity of this simple model is limited, since only structural constraints can be represented. For example, suppose that the following business rule must be enforced: Rule 3) "An Employee can manage only the Department in which he works". This business rule is not represented in Figure 2, and due to the expressive power limitations of ER, it cannot be stated without modifying the EIS model structure.
Our solution to this problem is to use OCL for representing rules that cannot be expressed using ER modeling primitives, such as validation and derivation rules. The business rule mentioned above could be formally represented in OCL as the query expression shown in Code 1.
In this approach, rules are specified as an independent aspect, i.e., rules are first-class citizens in the EIS conceptual model. The separation of rules, data and functions is not complete, since rules influence data and functions; however, the specification of rules is not subordinated to the specification of data or functions. Moreover, rules are formally expressed, but without dependence on implementation technologies or specific platforms.
IV. CODE GENERATION MECHANISMS
The model-driven engineering approach on which our framework is based demands automatic code generation facilities from the abstract conceptual EIS model. In this work we focus on two main aspects concerning this code generation: EIS database definition and manipulation (Section IV-A) and business rules implementation (Section IV-B). User interface and application programs rely on the database and rules code, but due to space limitations are not discussed in this paper.
A. Schema generation and manipulation
A database schema transformation tool in the Persistence component generates a SQL schema for an EIS database through automatic translation of the EIS conceptual metadata. The specific operational data model of the DBMS defines the details of the mapping process, since there are variations of SQL features supported on different database management systems.
As we have said, the conceptual model used in our framework is based on the Entity-Relationship model. Our transformation tool is capable of generating SQL structures corresponding to the conceptual primitives of the source model, such as strong and weak entities, specialization hierarchies, relationship aggregation, and composite attributes.
There are several tools that perform the same kind of transformation, generating a SQL database schema from an ER conceptual schema. The new idea in our framework is to use behavior information that is also captured in our conceptual model (via action rules) to automatically generate stored procedures that provide basic CRUD (Create, Read, Update, Delete) operations on entity instances. Every conceptual entity type has a set of built-in procedures to perform these CRUD operations on its instances. The application functions benefit from this feature.
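The generation of per-entity CRUD procedures can be sketched as follows. This is an illustrative Python sketch, not the framework's actual Persistence component: the function name, metadata shape, and signature format are assumptions, and real output would be full plPgSQL bodies rather than signatures.

```python
# Hypothetical sketch: deriving CRUD stored-procedure signatures from
# entity metadata, in the spirit of the framework's Persistence
# component. Names and the metadata shape are illustrative only.

def crud_signatures(entity, attributes):
    """Return plpgsql-style procedure signatures for one entity type.

    `attributes` is a list of (name, sql_type) pairs taken from the
    conceptual metadata of the entity.
    """
    cols = ", ".join(f"p_{name} {sqltype}" for name, sqltype in attributes)
    return [
        f"FUNCTION insert_{entity}({cols}) RETURNS VOID",
        f"FUNCTION update_{entity}({cols}) RETURNS VOID",
        f"FUNCTION delete_{entity}(p_pk INTEGER) RETURNS VOID",
        f"FUNCTION select_{entity}(p_pk INTEGER) RETURNS SETOF {entity}",
    ]

for sig in crud_signatures("Employee", [("ssn", "INTEGER"), ("name", "VARCHAR")]):
    print(sig)
```

The point of the sketch is that the signatures are driven entirely by the conceptual metadata, so regenerating the schema also regenerates a consistent data-manipulation layer.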
To illustrate the operation of the database schema transformation tool, consider the conceptual schema shown in Figure 2. The purpose of this example is to show the operational data definition generated by the schema transformations for that conceptual schema.
In a typical database environment, the application developer (or the database administrator) has to translate the conceptual schema into the operational schema. This translation is supported by database schema transformation tools in the main DBMS. Our tool also makes this translation automatically, generating the SQL schema shown in Figure 3.
The main advantages of our approach compared to conventional DBMS tools are: a) the generated schema is integrated with the other generated aspects of the EIS software (rules and application functions); and b) data manipulation stored procedures are automatically incorporated into the database schema, providing an important facility to upper level application and user interface code.
The transformation tool generates the SQL-DDL schema and also the stored procedures (SQL-DML) to manipulate instances of all entities and relationships of the conceptual schema (which are all converted to SQL tables). Figure 4 shows the stored procedures created to manipulate the database schema in Figure 3. Only the signature of each stored procedure is shown due to space limitations, but our tool automatically generates the full procedure code for each basic data manipulation operation: select, insert, update and delete.
These automatically available stored procedures are used as basic building blocks for the construction of the EIS specific applications. All the persistence aspects of the application programs are encapsulated into these procedures. Thus, our framework enforces the data independence principle.
Figure 5 shows the ER Transformation package, which contains the main functions of the Persistence component. In this figure, the Metadata component is the same used in ER transformations. The MappingLibrary class uses the external xerces.jar [9] component, which is responsible for reading an XML file containing mappings from OCL to the target platform.
The Translator is an interface that contains all the essential methods for executing transformations between models. It uses the OCL Parser external component, which is responsible for parsing OCL expressions.
The following sequence of activities must be performed to transform an OCL expression into SQL code:
1) Get mappings between OCL and SQL: it is necessary to retrieve all mappings that are defined in the XML file. Each mapping indicates how to generate code in a specific target language (in our case, the PostgreSQL procedural language, plPgSQL, was adopted).
2) Get rule elements: the AST previously built contains all tokens in the OCL rule. Thus, we can use this activity to visit any node in the AST and to make it available for further activities. This step makes it easier to access important parts of the rule, such as its parameters.
3) Get business entities metadata: as we have explained, the metadata structure represents model elements (like entities, their attributes and relationships). Each business rule must have a rule context, which can be an entity or an attribute. It is necessary to get the rule context related metadata to understand the entity structure (attribute composition and relationships). This knowledge is necessary to navigate through the specific rule context within the model.
4) Translate rule parameters: when an OCL rule has input parameters, their types must be translated to the corresponding SQL data types, according to the mappings defined in the XML file. All parameters have the same name as the respective attributes in the metadata model, but they can be changed in this step according to the needs of the transformation process.
5) Translate rule expressions: this is the most critical activity for the rule transformation process. In this step we walk through the AST visiting each node and, according to its type, it is possible to know which mapping pattern, defined in the XML file, should be applied to generate the SQL code.
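Step 5 above can be sketched in miniature. In this hedged Python sketch, a plain dictionary stands in for the XML mapping file and a small tuple-based tree stands in for the OCL AST; the real framework targets plPgSQL and uses a full OCL parser, so every name here is an assumption.

```python
# Illustrative sketch of AST-driven code generation: each node type is
# mapped to a SQL pattern, and the tree is visited recursively. A dict
# stands in for the XML mapping file; tuples stand in for the OCL AST.

MAPPINGS = {                      # stand-in for the XML mapping file
    "=": "{0} = {1}",
    "<>": "{0} <> {1}",
    "and": "({0} AND {1})",
    "attr": "{0}.{1}",            # attribute access: entity.attribute
}

def emit_sql(node):
    """Recursively translate an AST node into a SQL expression string."""
    if isinstance(node, str):     # leaf: identifier or literal
        return node
    op, *args = node              # inner node: operator plus children
    return MAPPINGS[op].format(*(emit_sql(a) for a in args))

ast = ("and",
       ("=", ("attr", "e", "dept"), ("attr", "d", "id")),
       ("<>", ("attr", "e", "ssn"), "0"))
print(emit_sql(ast))  # (e.dept = d.id AND e.ssn <> 0)
```

The key design point mirrored here is that the target language is isolated in the mapping table, so retargeting the generator means swapping the mapping file, not the visitor.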
The generated rule should be executed when predefined operations are done in application programs. Following the active database principle, our mechanism is sensitive to operations that modify the state of the EIS database: insert, update and delete. When these operations occur within an application program, the rule evaluation process is triggered.
As an example of rule code generation for stored procedures, consider the business rule modeled in Section III: "An Employee can manage only the Department in which he works". This business rule was modeled in OCL, as shown in Code 1. After being processed by the transformation tool, the code in Code 2 was generated.
```
CREATE OR REPLACE FUNCTION validate_Employee(
employee_ssn INTEGER
) RETURNS BOOL AS $$
DECLARE
idDeptEmpWork INTEGER;
idDeptEmpManage INTEGER;
rv_Employee BOOL;
BEGIN
SELECT INTO idDeptEmpWork Department.id
FROM Employee JOIN department_work ON Employee.pk = department_work.pkEmployee
JOIN Department ON department_work.pkDepartment = Department.pk
WHERE Employee.ssn = employee_ssn;
SELECT INTO idDeptEmpManage Department.id
FROM Employee JOIN department_manage ON Employee.pk = department_manage.pkEmployee
JOIN Department ON department_manage.pkDepartment = Department.pk
WHERE Employee.ssn = employee_ssn;
IF (idDeptEmpManage IS NOT NULL AND idDeptEmpWork <> idDeptEmpManage) THEN RETURN FALSE;
ELSE RETURN TRUE;
END IF;
END;
$$ LANGUAGE plpgsql;
```
Code 2: Transformation of the OCL Business Rule of Figure 3 into SQL.
Figure 3. SQL Schema generated from the Conceptual Schema in Figure 2.
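To see the generated check in action without a PostgreSQL installation, the logic of Code 2 can be approximated with Python's built-in sqlite3 module. This is a simplified stand-in, not the generated code: the schema is reduced to the columns the check touches, and the function mirrors the two lookups and the final comparison.

```python
# Runnable approximation of the validate_Employee check from Code 2,
# using sqlite3 in place of PostgreSQL. Table and column names mirror
# the generated schema but are simplified for the demo.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Employee (pk INTEGER PRIMARY KEY, ssn INTEGER);
    CREATE TABLE Department (pk INTEGER PRIMARY KEY, id INTEGER);
    CREATE TABLE department_work (pkEmployee INTEGER, pkDepartment INTEGER);
    CREATE TABLE department_manage (pkEmployee INTEGER, pkDepartment INTEGER);
    INSERT INTO Employee VALUES (1, 111);
    INSERT INTO Department VALUES (1, 10), (2, 20);
    INSERT INTO department_work VALUES (1, 1);
    INSERT INTO department_manage VALUES (1, 2);  -- manages a dept he does not work in
""")

def validate_employee(ssn):
    """Rule 3: an employee may manage only the department he works in."""
    q = """SELECT Department.id FROM Employee
           JOIN {rel} ON Employee.pk = {rel}.pkEmployee
           JOIN Department ON {rel}.pkDepartment = Department.pk
           WHERE Employee.ssn = ?"""
    work = con.execute(q.format(rel="department_work"), (ssn,)).fetchone()
    manage = con.execute(q.format(rel="department_manage"), (ssn,)).fetchone()
    return manage is None or manage == work

print(validate_employee(111))  # False: works in dept 10, manages dept 20
```

As in Code 2, an employee who manages no department is valid; the rule only fires when the managed department differs from the working department.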
V. EVOLUTION OF THE EIS CONCEPTUAL MODEL
When the business context changes, it is necessary to evolve the EIS conceptual model to incorporate the new business concepts. In our approach to database schema evolution, the database transformation tool of the Persistence component receives a modified version of the EIS metadata. For each modified entity type, the component analyzes all modifications, comparing the given entity with the corresponding entity stored in the current metadata database.
Modifications on the conceptual model lead to a set of schema evolution operations that must be propagated to the operational model (including both SQL-DDL and stored procedures code).
After verifying the consistency of the new version of the entity type, the database evolution tool automatically modifies the SQL-DDL, the stored procedures, and all the
entity type instances stored in the database. Therefore, after the modification of a conceptual entity type, the whole schema is ready to be used to manipulate the entity type instances.
The design of the schema evolution mechanism defines all allowed modifications on a conceptual schema. Some modification operations can be executed directly on most Database Management Systems (DBMS); other operations are specific to our schema evolution mechanism.
For example, changing the domain of an integer attribute to alphanumeric is allowed by most DBMS, but not the contrary, because there may exist alphanumeric attribute values in the database instances that cannot be converted to integer. However, if all values in the current database state could be converted to the integer domain, the conversion operation could be allowed. This can only be decided at runtime, since it depends on the current state of the database instances.
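The runtime feasibility check described above can be sketched as a small predicate over the current column values. This is an illustrative sketch, not the framework's actual evolution mechanism; the function name is hypothetical.

```python
# Minimal sketch of the runtime check for an alphanumeric-to-integer
# domain change: the conversion is allowed only if every value in the
# current database state is convertible. Name is illustrative.

def can_convert_to_integer(values):
    """True iff every current value fits the integer domain."""
    for v in values:
        try:
            int(v)
        except (TypeError, ValueError):
            return False
    return True

print(can_convert_to_integer(["42", "7"]))    # True: conversion allowed
print(can_convert_to_integer(["42", "x7"]))   # False: must be rejected
```

In the framework this decision would be taken by the evolution tool before emitting the ALTER statement, since the answer depends on the database instances, not on the schema alone.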
Changing an attribute domain is relatively simple, but our component also supports complex schema evolution operations, such as changing an entity type from strong to weak (i.e., creating an identification dependency with another entity type). The component validates the modifications in the conceptual model and propagates the permitted operations to the operational schema, assuring the consistency of the EIS database. Therefore, the component prevents direct modification of the operational database schema by users or developers. Modifications are performed at the conceptual level and automatically propagated to the operational database schema.
The examples below use the EIS conceptual model shown in Figure 2. First, we will remove the manage relationship between entities Employee and Department. In order to remove this relationship, the following schema evolution operations are necessary:
1) Drop all stored procedures that manipulate the manage relationship instances.
2) Drop table manage.
Figure 4. Stored Procedures generated from the schema in Figure 2.
3) Remove relationship manage from the conceptual model.
The second example shows how the mechanism works when an attribute is added to an entity. We will add the type attribute into the entity Employee. This attribute has two possible values: manager and employee. Mandatory operations to include this attribute are:
1) Create attribute type on the Employee entity in the conceptual model.
2) Drop stored procedures that manipulate the entity Employee.
3) Create type attribute on the table Employee.
4) Create a constraint that checks if an Employee is manager or employee.
5) Assign the value 'employee' to attribute type in all instances of Employee.
6) Create all stored procedures to manipulate the entity Employee.
Figure 7 shows the conceptual schema after all these modifications. Operation 1 makes the conceptual model update. Operations 2, 3, 4 and 6 update the physical schema and operation 5 propagates modifications to instances in the database.
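Operations 3, 4 and 5 above can be sketched against a SQLite stand-in for the generated schema. This is a hedged illustration: the real target is PostgreSQL with regenerated stored procedures (operations 2 and 6), which SQLite does not support, so only the DDL and instance-propagation steps are shown.

```python
# Sketch of schema evolution operations 3-5 (add attribute, add check
# constraint, propagate a default to existing instances) on a SQLite
# stand-in for the generated Employee table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee (pk INTEGER PRIMARY KEY, ssn INTEGER)")
con.execute("INSERT INTO Employee (ssn) VALUES (111), (222)")

# Operation 3: create the type attribute on table Employee,
# Operation 4: constrained to the values manager / employee.
con.execute("""ALTER TABLE Employee ADD COLUMN type TEXT
               CHECK (type IN ('manager', 'employee'))""")

# Operation 5: assign the value 'employee' to all existing instances.
con.execute("UPDATE Employee SET type = 'employee'")

rows = con.execute("SELECT ssn, type FROM Employee ORDER BY ssn").fetchall()
print(rows)  # [(111, 'employee'), (222, 'employee')]
```

After these steps, the constraint structurally rejects any value outside the manager/employee domain, which is what makes the old OCL rule redundant in the next paragraph's discussion.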
Figure 8 shows the corresponding operational (SQL) schema re-generated from the updated conceptual schema.
In this example, after modifying the conceptual schema, we can note that a business rule specified before the modification ("An Employee can manage only the Department in which he works") is no longer necessary. The modified schema is now capable of structurally enforcing this business rule. Therefore, database schema evolution and business rules evolution have to be kept mutually consistent.
The obsolete rule must be eliminated, and it is now necessary to create other business rules to assure database
consistency. For example: "A department cannot have more than one employee whose type is manager". This new business rule can be easily modeled in OCL and implemented as stored procedures using our approach.
VI. CONCLUSIONS AND RELATED WORK
The framework presented in this paper was implemented in a research project developed from 2005 to 2008 with financial support from the Brazilian National Council for Scientific and Technological Development (CNPq). The final objective of this project is to build a complete framework to create and evolve Enterprise Information Systems (EIS) for agricultural business domains.
In our framework, application programs, database schemata and business rules are conceptually described in a single conceptual (metadata) model and automatically implemented as separated but interdependent aspects.
The software mechanism of the framework is implemented in Java and has approximately 67 thousand lines of code. The conceptual schema of the EIS developed as a proof of concept contains over 200 business entities from the agricultural business domain.
The EIS rules repository contains almost 150 business rules specified and implemented in the EIS software. Among these rules there are about 85 structural (validation) rules and 55 action and derivation rules.
The generated operational database schema contains over 560 tables and three thousand stored procedures, including those used for data manipulation and business rules.
Several tools use similar approaches to generate parts of the EIS software from model transformations [10]–[14]. Many of them generate the database schema from the conceptual model, but they do not generate stored procedures or other manipulation facilities for the conceptual entities, nor do they provide an integrated framework for EIS software development. Other approaches, like JPA [15], provide database mechanisms to automatically generate tables and manipulation facilities using annotations and EJB-QL (Enterprise Java Bean Query Language), but they offer no facilities for changing the conceptual model, for example, support for complex schema evolution operations such as changing an entity type from strong to weak.
In AndroMDA, for instance, it is possible to transform business rules expressed in OCL into other languages, such as HQL (Hibernate Query Language) [16] and EJB-QL. Our translation mechanism differs from AndroMDA in the choice of the transformation paradigm: while our transformation is based on a high level platform independent language, AndroMDA transformations are based on strings and regular expressions.
In the database evolution context, works like [17] suggest support for bidirectional changes on system models. Our architecture proposes that changes should be made only on the conceptual model, with automatic propagation of changes to the operational model. The idea is that direct changes to the operational schema should be prohibited, just as developers edit source code but are not expected to modify the generated machine code.
Some works, like [18], allow keeping several complete versions of the logical schema in the system, but our approach keeps only the latest version of the schema. For large EIS, keeping several versions of the database is impracticable due to the large storage capacity required and the low benefit of this practice. In other contexts, such as CAD software, full versioning may be worth the cost.
In the business rules context, several works investigated the automatic conversion of rules expressed in high level languages to software source code. The implementation in [19], for instance, shares rules between two rule languages from different domains: OCL together with UML and SWRL (Semantic Web Rule Language). In [20] there is a description of an OCL to SQL transformation and its tuning for web applications. [21] proposes a method for changing integrity constraint representations by changing its context, but without changing the constraint meaning.
Our solution is based on the Dresden OCL Toolkit, a modular software platform for OCL, providing facilities for the specification and evaluation of OCL constraints. The toolkit performs parsing and type-checking of OCL constraints and generates Java and SQL code [9].
We have reused many ideas from this toolkit, but we had to modify and adapt several features to fulfill the requirements of our mechanisms. One important modification is related to the target language. Our mechanism generates stored procedures to convert target business rules from OCL to SQL code while the original toolkit generates SQL code in form of database views [22].
The main advantages of our approach are the portability and maintainability of the EIS, along with automatic code generation, which reduces the programming effort required to build the EIS software.
The portability is improved because the business rules are represented with an abstract declarative language (OCL), which is an OMG standard, just like UML. The rules are defined in OCL (a platform independent model) and automatically converted to a specific platform using a software translator.
Our approach improves the availability of the business rules, since there is a single repository where all the EIS rules are stored. This repository is managed by a DBMS that offers browsing and querying facilities, besides security and access control capabilities. In traditional EIS, the rules are hard-wired either in the application program code or in the database schema as integrity constraints.
By applying the classic separation of concerns principle, the separation of business rules, application code, and database schemata improves the software maintainability. Rules are documented in a single model, and are not mixed with the application code. This centralized organization improves the code organization and makes it easier to evaluate the impact of changes on business rules.
As future work, it would be interesting to use the metamodeling concepts of model-driven development to adapt the framework presented in this paper, allowing different conceptual models, such as UML or ORM, to be used. In the business rules component, code generation performance can be improved. In addition, a workflow engine can be developed to control access to and changes of business rules, and also to manage their evaluation during IS execution.
ACKNOWLEDGMENT
This research project was developed with financial support from the Brazilian National Council for Scientific and Technological Development (CNPq), under register 505198/2004-5.
REFERENCES
IATI’s New Integrated Platform:
Where are we now?
Wendy Thomas - IATI Technical Lead
Alex Lydiate - IATI Senior Developer
Nik Osvalds - IATI Developer
Why an integrated platform?
- Core recommendation from the 2020 Technical Stocktake
- Decisions approved by Board
- Roadmap shared
- Implementing recommendations
1. System design recommendation
1. The Board is asked to approve a move for IATI towards an integrated architecture approach, as supported by Stocktake workshop participants.
➢ Link to IATI Strategic Plan Objective 3, Strengthen the IATI Standard by consolidating its technical core: ‘Review, consolidate, streamline and maintain IATI’s technical tools and core products, determining which of these need to be in-house and which ones out-sourced, to ensure that our technical infrastructure is fit for the achievement of IATI’s strategic objectives’.
What did the consultant recommend?: The key design recommendation from the Stocktake was that IATI should transition from a siloed set of applications to a single, integrated core architecture.
Current siloed architecture
Proposed integrated architecture
## Technical Roadmap
[Gantt-style timeline: months JAN-DEC arranged by quarter (Q1-Q4), with the API Gateway workstream shown first]
- **Validator V2 / Public API**
- Q1: [Start]
- Q2: [Start]
- Q3: [End]
- Q4: [End]
- **Semantic Data Layer**
- Q3: [Start]
- Q4: [End]
- **d-portal v2**
- Q4: [Start]
- **Publishing Tool Options**
- Q1: [Start]
- Q2: [End]
- **Publisher Statistics**
- Q4: [Start]
- **Consultation/Design**
- **Implementation/Development**
- **Launch**
Technical Roadmap
- **Datastore**
- Consultation and/or Documentation
- Maintenance
- **Registry**
- Consultation and/or Documentation
- Maintenance
- **d-portal**
- Consultation and/or Documentation
- Maintenance
- **Storing Historical Data**
- Consultation and/or Documentation
- Maintenance
- **Hosting Publisher XML**
- Consultation and/or Documentation
- Maintenance
API Gateway
APIs are specifications of possible interactions with a software component. The API Gateway will manage APIs across our platforms. From the Stocktake:
“Initially, current APIs are publicly published through the API Gateway. Over time additional APIs are added per this proposed architecture and driven by User Personas and Stories, just like the unified User Interface. These microservices are loosely coupled, independent microservices.”
- Complete - Validator API Contract Consultation
- Future - Registry API Contract Consultation
Why does IATI want to use an API Gateway?
- We want to protect our APIs from overuse and abuse, using rate limiting and, where applicable, API key authentication.
- We want to understand how people use our APIs, via analytics and monitoring tools.
- Through that analysis, we might identify real-world use cases which we could better accommodate in future releases.
- Users will be able to find all IATI API services, documentation and change logs in a single place.
Public Validator API Development Process
- Requirements gathering exercise
- Designing the draft API contract
- Community consultation on the draft contract
- **Implementing the API contract, which is the present phase**
- Testing the implementation, to include functional correctness, appropriate response times and appropriate load capacity
- Early access for a focussed group of API users
- Release of V2 of the IATI Validator, to include the implemented public API
Validator V2 + Public Validator API
Why?
● Version 2.0 - Migrate to modern, scalable cloud platform (Azure serverless functions) and implementation (Node.js)
○ Provide template for unified platform microservices
○ Enable a fast Synchronous API
● Public API - Give publishers and publishing tools a way to integrate validation into their workflows
○ Increase data quality
Live Demonstration
API Gateway / Developer Portal / Public Validator API / Validator V2
- In Development
- Not Yet Available for Public Use/Testing
- Subject to Change
Registry API
- Available for Public Use Directly (not through API Gateway)
Welcome to the IATI API Gateway!
Your source for information and access to the International Aid Transparency Initiative’s public APIs.
Sign Up Explore APIs
About us
IATI – the International Aid Transparency Initiative – brings together governments, multilateral institutions, private sector and civil society organisations and others to increase the transparency and openness of resources flowing into developing countries.
IATI data helps meet the needs of a wide range of stakeholders in international development. For example:
- Governments of developing countries need up-to-date information on which development and humanitarian organisations are operating in their country, so they can work with them effectively.
# APIs
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Echo API</td>
<td></td>
</tr>
<tr>
<td>IATI Registry</td>
<td>The IATI Registry is the place to register your organisation and IATI datasets.</td>
</tr>
<tr>
<td>IATI Validator</td>
<td>The IATI Validator Public API is a tool for checking if data aligns with the rules and guidance of the IATI Standard. It allows users to check and improve the quality of IATI data to ensure it is accessible and useful to anyone working with data on development...</td>
</tr>
</tbody>
</table>
IATI Registry
The IATI Registry is the place to register your organisation and IATI datasets.
Publisher List
Return a list of the names of IATI publishers. When the authorisation header is provided and the user has sysadmin privileges, the call also returns publishers with a Pending status.
Request
```
GET https://dev-iati-api-gateway.azure-api.net/registry/action/organization_list
```
IATI Registry
The IATI Registry is the place to register your organisation and IATI datasets.
Publisher Details
Return the details of a publisher. When the authorisation header and include_datasets parameter are provided, the call also returns private datasets.
Request
```
GET https://dev-iati-api-gateway.azure-api.net/registry/action/organization_show?id
```
Request parameters
<table>
<thead>
<tr>
<th>Name</th>
<th>In</th>
<th>Required</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>query</td>
<td>true</td>
<td>string</td>
<td>the id or name of the publisher</td>
</tr>
</tbody>
</table>
HTTP response
HTTP/1.1 200 OK
cache-control: public, must-revalidate, max-age=0
content-length: 2377
content-type: application/json; charset=utf-8
date: Thu, 08 Apr 2021 07:53:34 GMT
x-cached: MISS
{
"help": "https://iatiregistry.org/api/3/action/help_show?name=organization_show",
"success": true,
"result": {
"publisher_frequency": "as per donor reporting requirements for specific projects",
"package_count": 1,
"publisher_record_exclusions": "n/a",
"num_followers": 0,
"publisher_implementaion_schedule": "n/a",
"publisher_country": "CH",
"id": "c38dbb9d-ffeb-47bb-9e70-e00dafa127079",
"publisher_ref": "n/a",
"publisher_ui_url": "http://actalliance.org/",
"title": "ACT Alliance",
"publisher_units": "Activities represent individual funding streams",
"publisher_contact": "Ecumenical Center, 150 Route de Ferney, 1211 Geneva, Switzerland",
"state": "active",
"publisher_contact_email": "arnold.christophe@gmail.com",
"publisher_field_exclusions": "n/a",
"publisher_first_publish_date": ""
}
}
IATI Validator
The IATI Validator Public API is a tool for checking if data aligns with the rules and guidance of the IATI Standard. It allows users to check and improve the quality of IATI data to ensure it is accessible and useful to anyone working with data on development and humanitarian resources and results.
**validate**
Endpoint to send a full IATI File (either organisations or activities) and receive a JSON validation report.
**Request**
**POST** https://dev-iati-api-gateway.azure-api.net/validator/validate
**Request headers**
<table>
<thead>
<tr>
<th>Name</th>
<th>Required</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Accept</td>
<td>false</td>
<td>string</td>
<td>Indicates the type of response</td>
</tr>
<tr>
<td>Content-Type</td>
<td>false</td>
<td>string</td>
<td>Indicates the media type of the resource</td>
</tr>
</tbody>
</table>
**Request body**
IATI xml file
Response: 422 Unprocessable Entity
If no body is provided or it's in the wrong format
```json
{
"error": "No body"
}
```
Response: 200 OK
When the XML file is successfully processed by the IATI Validator it returns a JSON validation report.
Response headers
<table>
<thead>
<tr>
<th>Name</th>
<th>Required</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Content-Type</td>
<td>false</td>
<td>string</td>
<td>Content type of the response</td>
</tr>
</tbody>
</table>
valid IATI organisations file validation report
```json
{
"type": "object",
"properties": {
"schemaVersion": {
"type": "string"
},
"iatiVersion": {
"type": "string"
},
"summary": {
"type": "object",
"properties": {
"critical": {
"type": "integer"
},
"danger": {
"type": "integer"
},
"warning": {
"type": "integer"
},
"info": {
"type": "integer"
},
"success": {
"type": "integer"
}
}
}
}
}
```
IATI Validator / validate
**Authorization**
Subscription key
**Parameters**
+ Add parameter
**Headers**
- **Accept**
application/json
- **Content-Type**
application/xml
- **Cache-Control**
no-cache
**Body**
HTTP request
```plaintext
POST https://dev-iati-api-gateway.azure-api.net/validator/validate
Accept: application/json
Content-Type: application/xml
Cache-Control: no-cache
```
Send
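The console request above can also be built outside the portal. Below is a minimal Python sketch that constructs (but does not send) the same POST; the gateway host and endpoint come from the demo, while the subscription-key value is a placeholder — the service is still in development and subject to change.

```python
import urllib.request

# Gateway host and endpoint as shown in the demo; the key is a placeholder.
GATEWAY = "https://dev-iati-api-gateway.azure-api.net"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def build_validate_request(xml_payload: bytes) -> urllib.request.Request:
    """Build the POST request for the Validator's /validate endpoint."""
    return urllib.request.Request(
        url=f"{GATEWAY}/validator/validate",
        data=xml_payload,                        # the IATI XML file body
        method="POST",
        headers={
            "Accept": "application/json",        # ask for a JSON report
            "Content-Type": "application/xml",   # body is an IATI XML file
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        },
    )
```

Sending the prepared request with `urllib.request.urlopen` would then return the JSON validation report shown later in the demo.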
Sign up
Already a member? Sign in.
Email
e.g. name@example.com
Password
Confirm password
First name
e.g. John
Last name
e.g. Doe
User profile
Account details
Email
First name Nik
Last name Osvalds
Registration date 04/08/2021
Change name Change password Close account
Subscriptions
Subscription details
You don't have subscriptions.
# Subscriptions
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Exploratory</td>
<td>Subscribers will be able to run 5 calls/minute up to a maximum of 100 calls/week. No approval is required.</td>
</tr>
<tr>
<td>Full Access</td>
<td>Subscribers have unlimited access to Public IATI APIs with high rate limits. IATI Technical Team approval is required.</td>
</tr>
</tbody>
</table>
# User profile
## Account details
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Email</td>
<td><a href="mailto:nik@nikolaso.com">nik@nikolaso.com</a></td>
</tr>
<tr>
<td>First name</td>
<td>Nik</td>
</tr>
<tr>
<td>Last name</td>
<td>Osvalds</td>
</tr>
<tr>
<td>Registration date</td>
<td>04/08/2021</td>
</tr>
</tbody>
</table>
[Change name] [Change password] [Close account]
## Subscriptions
Subscription details
- Name: exploratorysubscription [Rename]
- Product: Exploratory
- Started on: 04/08/2021
- Primary key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX [Show | Regenerate]
- Secondary key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX [Show | Regenerate]
Authorization
Subscription key
- Exploratory
- Primary: exploratorysubscription
- Secondary: exploratorysubscription
HTTP request
POST https://dev-iati-api-gateway.azure-api.net/validator/validate HTTP/1.1
Accept: application/json
Content-Type: application/xml
Cache-Control: no-cache
Ocp-Apim-Subscription-Key: 9671140b32a54b66b17a8f2b7ff216df
<?xml version="1.0" encoding="UTF-8"?>
<iati-organisations version="2.02" generated-datetime="2018-11-07T05:25:55+00:00">
<iati-organisation last-updated-datetime="2018-11-07T05:25:55+00:00" xml:lang="en" default-currency="NPR">
<organisation-identifier>NP-SWC-1234</organisation-identifier>
<name>
<narrative>Test Org Nepal</narrative>
</name>
<reporting-org type="24" ref="NP-SWC-1234">
<narrative>Test Org Nepal</narrative>
</reporting-org>
</iati-organisation>
</iati-organisations>
HTTP response
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
date: Thu, 08 Apr 2021 08:06:40 GMT
request-context: appID=c1d-v177c97b7138c7-4b61-8c6b-abfad8d8b15
transfer-encoding: chunked
```json
{
  "schemaVersion": "1.0.1",
  "iatiVersion": "2.02",
  "summary": {
    "critical": 0,
    "danger": 0,
    "warning": 0,
    "info": 0,
    "success": 0
  },
  "filetype": "iati-organisations",
  "validation": "ok",
  "feedback": [{
    "category": "iati",
    "label": "IATI file",
    "messages": [{
      "id": "0.0.1",
      "text": "Congratulations! This IATI file has successfully passed IATI XML schema validation with no errors!",
      "rulesets": [{
        "src": "iati",
        "severity": "success"
      }],
      "context": [{
        "text": ""
      }]
    }]
  }],
  "organisations": [{
    "title": "Test Org Nepal",
    "identifier": "NP-SWC-1234",
    "publisher": "NP-SWC-1234",
    "feedback": []
  }]
}
```
Code samples: HTTP | Curl | C# | Java | JavaScript | PHP | Python | Ruby | Objective C
**HTTP request**
```text
POST https://dev-iati-api-gateway.azure-api.net/validator/validate HTTP/1.1
Accept: application/json
Content-Type: application/xml
Cache-Control: no-cache
Ocp-Apim-Subscription-Key: 9671140b32a54b66b17a8f2b7ff216df
[ iati-act-no-errors.xml ]
```
Reports
API calls
- Total requests
- Successful requests
- Failed requests
- Blocked requests
# Products
<table>
<thead>
<tr>
<th>Product</th>
<th>Successful calls</th>
<th>Blocked calls</th>
<th>Failed calls</th>
<th>Other calls</th>
<th>Total calls</th>
<th>Average response time</th>
<th>Bandwidth</th>
</tr>
</thead>
<tbody>
<tr>
<td>Exploratory</td>
<td>2</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>3</td>
<td>9 s</td>
<td>4 Kb</td>
</tr>
</tbody>
</table>
# Subscriptions
<table>
<thead>
<tr>
<th>Subscription</th>
<th>Successful calls</th>
<th>Blocked calls</th>
<th>Failed calls</th>
<th>Other calls</th>
<th>Total calls</th>
<th>Average response time</th>
<th>Bandwidth</th>
</tr>
</thead>
<tbody>
<tr>
<td>exploratorysubscription</td>
<td>2</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>3</td>
<td>9 s</td>
<td>4 Kb</td>
</tr>
</tbody>
</table>
# APIs
<table>
<thead>
<tr>
<th>API</th>
<th>Successful calls</th>
<th>Blocked calls</th>
<th>Failed calls</th>
<th>Other calls</th>
<th>Total calls</th>
<th>Average response time</th>
<th>Bandwidth</th>
</tr>
</thead>
<tbody>
<tr>
<td>IATI Validator</td>
<td>2</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>3</td>
<td>9 s</td>
<td>4 Kb</td>
</tr>
<tr>
<td>IATI Registry</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0 ms</td>
<td>0 bytes</td>
</tr>
<tr>
<td>Echo API</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0 ms</td>
<td>0 bytes</td>
</tr>
</tbody>
</table>
Discussion & Q&A
To find out more...
Technical Stocktake Newspost (Dec 2020):
iatistandard.org/en/news/technical-stocktake-next-steps-iati/
Technical Team Quarterly Update (w/b 19 April 2021)
Contact the Technical Team: support@iatistandard.org
Concurrency Control Approaches
- **Two-Phase Locking (2PL)**
- Determine serializability order of conflicting operations at runtime while txns execute.
- **Timestamp Ordering (T/O)**
- Determine serializability order of txns before they execute.
Today's Class
- Basic Timestamp Ordering
- Optimistic Concurrency Control
- Multi-Version Concurrency Control
- Partition-based T/O
- The Phantom Problem
- Weaker Isolation Levels
Timestamp Allocation
- Each txn Ti is assigned a unique fixed timestamp that is monotonically increasing.
- Let $\text{TS}(Ti)$ be the timestamp allocated to txn Ti
- Different schemes assign timestamps at different times during the txn.
- Multiple implementation strategies:
- System Clock.
- Logical Counter.
- Hybrid.
**T/O Concurrency Control**
- Use these timestamps to determine the serializability order.
- If $TS(Ti) < TS(Tj)$, then the DBMS must ensure that the execution schedule is equivalent to a serial schedule where $Ti$ appears before $Tj$.
**Basic T/O**
- Txns read and write objects without locks.
- Every object $X$ is tagged with timestamp of the last txn that successfully did read/write:
- $W-TS(X)$ – Write timestamp on $X$
- $R-TS(X)$ – Read timestamp on $X$
- Check timestamps for every operation:
- If txn tries to access an object “from the future”, it aborts and restarts.
**Basic T/O – Reads**
- If $TS(Ti) < W-TS(X)$, this violates timestamp order of $Ti$ w.r.t. writer of $X$.
- Abort $Ti$ and restart it (with same $TS$? why?)
- Else:
- Allow $Ti$ to read $X$.
- Update $R-TS(X)$ to $\max(R-TS(X), TS(Ti))$
- Have to make a local copy of $X$ to ensure repeatable reads for $Ti$.
**Basic T/O – Writes**
- If $TS(Ti) < R-TS(X)$ or $TS(Ti) < W-TS(X)$
- Abort and restart $Ti$.
- Else:
- Allow $Ti$ to write $X$ and update $W-TS(X)$
- Also have to make a local copy of $X$ to ensure repeatable reads for $Ti$.
Basic T/O – Example #1
- **TS(T1) = 1**
- **TS(T2) = 2**
**Database**
<table>
<thead>
<tr>
<th>Object</th>
<th>R-TS</th>
<th>W-TS</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>B</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
No violations so both txns are safe to commit.
Basic T/O – Example #2
- **TS(T1) = 1**
- **TS(T2) = 2**
**Violations:**
- **TS(T1) < W-TS(A)**
T1 cannot overwrite update by T2, so it has to abort+restart.
Basic T/O – Thomas Write Rule
- If **TS(Ti) < R-TS(X):**
- Abort and restart Ti.
- If **TS(Ti) < W-TS(X):**
- **Thomas Write Rule:** Ignore the write and allow the txn to continue.
- This violates timestamp order of Ti
- Else:
- Allow Ti to write X and update **W-TS(X)**
We do not update **W-TS(A)**
Ignore the write and allow T1 to commit.
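The read/write checks above, including the Thomas Write Rule as an option, can be sketched in a few lines. This is a toy model with integer timestamps and no actual data storage, not a full DBMS:

```python
# Minimal sketch of Basic T/O: each object carries R-TS and W-TS,
# and txns are identified by their (fixed, unique) timestamps.

class Abort(Exception):
    """Txn must abort and restart (normally with a new timestamp)."""

class BasicTO:
    def __init__(self, thomas_write_rule=False):
        self.r_ts = {}            # object -> largest reader timestamp
        self.w_ts = {}            # object -> timestamp of last writer
        self.twr = thomas_write_rule

    def read(self, ts, obj):
        # Reading an object written "in the future" violates T/O order.
        if ts < self.w_ts.get(obj, 0):
            raise Abort(f"T{ts} reads {obj} after write by T{self.w_ts[obj]}")
        self.r_ts[obj] = max(self.r_ts.get(obj, 0), ts)

    def write(self, ts, obj):
        if ts < self.r_ts.get(obj, 0):
            raise Abort(f"T{ts} writes {obj} already read by T{self.r_ts[obj]}")
        if ts < self.w_ts.get(obj, 0):
            if self.twr:
                return            # Thomas Write Rule: skip the stale write
            raise Abort(f"T{ts} writes {obj} after write by T{self.w_ts[obj]}")
        self.w_ts[obj] = ts
```

With `thomas_write_rule=True`, a late write such as T1's W(A) after T2's W(A) is silently ignored instead of aborting T1, matching Example #2 above.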
Basic T/O
- Ensures conflict serializability if you don’t use the Thomas Write Rule.
- No deadlocks because no txn ever waits.
- Possibility of starvation for long txns if short txns keep causing conflicts.
- Permits schedules that are not recoverable.
Recoverable Schedules
- Transactions commit only after all transactions whose changes they read have committed.
Recoverability
Schedule
```
TIME    T1          T2
 |      BEGIN
 |      W(A)
 |                  BEGIN
 |                  R(A)
 |                  W(B)
 |                  COMMIT
 v      ABORT
```
T2 is allowed to read the writes of T1.
This is not recoverable because we can’t restart T2.
T1 aborts after T2 has committed.
Basic T/O – Performance Issues
- High overhead from copying data to txn’s workspace and from updating timestamps.
- Long running txns can get starved.
- Suffers from timestamp bottleneck.
Today's Class
- Basic Timestamp Ordering
- Optimistic Concurrency Control
- Multi-Version Concurrency Control
- Partition-based T/O
- The Phantom Problem
- Weaker Isolation Levels
Optimistic Concurrency Control
- Assumption: Conflicts are rare
- Forcing txns to wait to acquire locks adds a lot of overhead.
- Optimize for the no-conflict case.
OCC Phases
- **Read:** Track the read/write sets of txns and store their writes in a private workspace.
- **Validation:** When a txn commits, check whether it conflicts with other txns.
- **Write:** If validation succeeds, apply private changes to database. Otherwise abort and restart the txn.
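The three phases can be sketched as toy code. This sketch assumes a simplified backward-style validation (a committing txn checks the write sets of txns that committed while it was running), which is one way to realise the Validation phase described above:

```python
# Toy OCC: txns buffer writes privately and validate at commit time
# against the write sets of txns that committed after they started.

class OCCDatabase:
    def __init__(self, data):
        self.data = dict(data)
        self.committed = []          # (commit_no, write_set) history
        self.next_commit_no = 0

    def begin(self):
        return {"start": self.next_commit_no,
                "reads": set(), "writes": {}}

    def read(self, txn, key):
        if key in txn["writes"]:     # read your own buffered write
            return txn["writes"][key]
        txn["reads"].add(key)
        return self.data[key]

    def write(self, txn, key, value):
        txn["writes"][key] = value   # goes to the private workspace

    def commit(self, txn):
        # Validation: did any txn that committed while we were running
        # write something we read?
        for commit_no, wset in self.committed:
            if commit_no >= txn["start"] and wset & txn["reads"]:
                return False         # conflict -> abort and restart
        # Write phase: apply the private workspace to the database.
        self.data.update(txn["writes"])
        self.committed.append((self.next_commit_no, set(txn["writes"])))
        self.next_commit_no += 1
        return True
```

A real implementation would run `commit` inside a critical section, as the slides note below.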
OCC – Example
Schedule
```
T1              T2
BEGIN
READ(A)
WRITE(A)
                BEGIN
                READ(A)
                WRITE(A)
```
[Figure: after its WRITE(A), T1's private workspace holds A = 456 while the database copy of A still holds 123; nothing is applied to the database until the Write phase.]
• Need to guarantee only serializable schedules are permitted.
• At validation, Ti checks other txns for RW and WW conflicts and makes sure that all conflicts go one way (from older txns to younger txns).
• Maintain global view of all active txns.
• Record read set and write set while txns are running and write into private workspace.
• Execute Validation and Write phase inside a protected critical section.
• Each txn’s timestamp is assigned at the beginning of the validation phase.
• Check the timestamp ordering of the committing txn with all other running txns.
• If $TS(Ti) < TS(Tj)$, then one of the following three conditions must hold…
• Ti completes all three phases before Tj begins.
OCC – Validation #1
[Figure: T1 runs BEGIN, Read, Validate, Write, COMMIT to completion before T2 begins.]
OCC – Validation #2
• Ti completes before Tj starts its Write phase, and Ti does not write to any object read by Tj.
– WriteSet(Ti) ∩ ReadSet(Tj) = Ø
OCC – Validation #2
Schedule
[Figure: the database holds A = 123 with W-TS = 0; T1's private workspace holds A = 456 with W-TS = ∞ from its pending write; T2 reads A while T1 validates.]
T1 has to abort even though T2 will never write to the database.
OCC – Validation #2
Schedule
[Figure: the same state (database A = 123, W-TS = 0; T1's workspace A = 456, W-TS = ∞), but T1 validates after T2's Read phase has finished, so T2's read set is fully known.]
Safe to commit T1 because we know that T2 will not write.
OCC – Validation #3
- Ti completes its **Read** phase before Tj completes its **Read** phase
- And Ti does not write to any object that is either read or written by Tj:
- $\text{WriteSet}(Ti) \cap \text{ReadSet}(Tj) = \emptyset$
- $\text{WriteSet}(Ti) \cap \text{WriteSet}(Tj) = \emptyset$
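The three validation conditions can be written as a single predicate over the two txns' read/write sets. The boolean phase-ordering flags here are stand-ins for the real phase clock, an assumption made for the sketch:

```python
# Validation of an older txn ti against a younger txn tj:
# any one of the three conditions from the slides suffices.

def validates(ti, tj):
    # Condition 1: ti finished all three phases before tj began.
    if ti["finished_before_tj_began"]:
        return True
    # Condition 2: ti finished before tj's Write phase and did not
    # write anything that tj read.
    if (ti["finished_before_tj_writes"]
            and not (ti["write_set"] & tj["read_set"])):
        return True
    # Condition 3: ti finished its Read phase before tj did, and ti's
    # writes touch nothing that tj reads or writes.
    if (ti["read_phase_first"]
            and not (ti["write_set"] & tj["read_set"])
            and not (ti["write_set"] & tj["write_set"])):
        return True
    return False
```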
Faloutsos/Pavlo CMU SCS 15-415/615 30
OCC – Observations
- **Q:** When does OCC work well?
- **A:** When # of conflicts is low:
- All txns are read-only (ideal).
- Txns access disjoint subsets of data.
- If the database is large and the workload is not skewed, then there is a low probability of conflict, so again locking is wasteful.
OCC – Performance Issues
- High overhead for copying data locally.
- **Validation/Write** phase bottlenecks.
- Aborts are more wasteful because they only occur *after* a txn has already executed.
- Suffers from timestamp allocation bottleneck.
Multi-Version Concurrency Control
- Writes create new versions of objects instead of in-place updates:
- Each successful write results in the creation of a new version of the data item written.
- Use write timestamps to label versions.
- Let $X_k$ denote the version of $X$ where for a given txn $T_i$: $W-\text{TS}(X_k) \leq \text{TS}(T_i)$
MVCC – Reads
- Any read operation sees the latest version of an object from right before that txn started.
- Every read request can be satisfied without blocking the txn.
- If $\text{TS}(T_i) > R-\text{TS}(X_k)$:
- Set $R-\text{TS}(X_k) = \text{TS}(T_i)$
MVCC – Writes
- If $\text{TS}(T_i) < R-\text{TS}(X_k)$:
- Abort and restart $T_i$.
- If $\text{TS}(T_i) = W-\text{TS}(X_k)$:
- Overwrite the contents of $X_k$.
- Else:
- Create a new version of $X_{k+1}$ and set its write timestamp to $\text{TS}(T_i)$.
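These read/write rules can be sketched with an explicit version chain per object. This is toy code illustrating the scheme, not any particular DBMS's implementation:

```python
# Toy MVCC: each object keeps a list of versions tagged with W-TS and
# R-TS; a read picks the newest version with W-TS <= TS(Ti).

class Version:
    def __init__(self, value, w_ts):
        self.value, self.w_ts, self.r_ts = value, w_ts, 0

class MVCC:
    def __init__(self):
        self.versions = {}   # object -> [Version, ...]

    def _visible(self, ts, obj):
        # Latest version Xk such that W-TS(Xk) <= TS(Ti).
        return max((v for v in self.versions[obj] if v.w_ts <= ts),
                   key=lambda v: v.w_ts)

    def read(self, ts, obj):
        xk = self._visible(ts, obj)
        xk.r_ts = max(xk.r_ts, ts)       # never blocks
        return xk.value

    def write(self, ts, obj, value):
        vs = self.versions.setdefault(obj, [Version(None, 0)])
        xk = self._visible(ts, obj)
        if ts < xk.r_ts:
            raise RuntimeError("abort: a newer txn already read Xk")
        if ts == xk.w_ts:
            xk.value = value             # overwrite our own version
        else:
            vs.append(Version(value, ts))  # create X(k+1) with W-TS = ts
```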
MVCC – Example #1
- TS(T1) = 1, TS(T2) = 2

Schedule
```
T1: BEGIN  R(A)  W(A)                      R(A)  COMMIT
T2:                    BEGIN  R(A)  W(A)   COMMIT
```
Database
<table>
<thead>
<tr>
<th>Object</th>
<th>Value</th>
<th>R-TS</th>
<th>W-TS</th>
</tr>
</thead>
<tbody>
<tr>
<td>A0</td>
<td>123</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>A1</td>
<td>456</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>A2</td>
<td>789</td>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
T1 reads version A1 that it wrote earlier.
MVCC
• Can still incur cascading aborts because a txn sees uncommitted versions from txns that started before it did.
• Old versions of tuples accumulate.
• The DBMS needs a way to remove old versions to reclaim storage space.
MVCC – Example #2
Schedule
T1 T2
BEGIN
R(A)
COMMIT
BEGIN
R(A)
COMMIT
Database
<table>
<thead>
<tr>
<th>Object</th>
<th>Value</th>
<th>R-TS</th>
<th>W-TS</th>
</tr>
</thead>
<tbody>
<tr>
<td>A0</td>
<td>123</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>A1</td>
<td>456</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>A2</td>
<td>789</td>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
TS(T1)=1 TS(T2)=2
Violation:
TS(T1) < R-TS(A0)
T1 is aborted because T2 “moved” time forward.
MVCC Implementations
• Store versions directly in main tables:
– Postgres, Firebird/Interbase
• Store versions in separate temp tables:
– MSFT SQL Server
• Only store a single master version:
– Oracle, MySQL
Garbage Collection – Postgres
- Never overwrites older versions.
- New tuples are appended to table.
- Deleted tuples are marked with a tombstone and then left in place.
- Separate background threads (VACUUM) have to scan tables to find tuples to remove.
Garbage Collection – MySQL
- Only one “master” version for each tuple.
- Information about older versions is put in a temp rollback segment and then pruned over time with a single thread (PURGE).
- Deleted tuples are left in place and the space is reused.
MVCC – Performance Issues
- High abort overhead cost.
- Suffers from timestamp allocation bottleneck.
- Garbage collection overhead.
- Requires stalls to ensure recoverability.
MVCC+2PL
- Combine the advantages of MVCC and 2PL together in a single scheme.
- Use different concurrency control scheme for read-only txns than for update txns.
MVCC+2PL – Reads
• Use MVCC for read-only txns so that they never block on a writer
• Read-only txns are assigned a timestamp when they enter the system.
• Any read operations see the latest version of an object from right before that txn started.
MVCC+2PL – Writes
• Use strict 2PL to schedule the operations of update txns:
– Read-only txns are essentially ignored.
• Txns never overwrite objects:
– Create a new copy for each write and set its timestamp to $\infty$.
– Set the correct timestamp when txn commits.
– Only one txn can commit at a time.
MVCC+2PL – Performance Issues
• All the lock contention of 2PL.
• Suffers from timestamp allocation bottleneck.
Today's Class
• Basic Timestamp Ordering
• Optimistic Concurrency Control
• Multi-Version Concurrency Control
• Partition-based T/O
• The Phantom Problem
• Weaker Isolation Levels
Observation
- When a txn commits, all previous T/O schemes check to see whether there is a conflict with concurrent txns.
- This requires locks/latches/mutexes.
- If you have a lot of concurrent txns, then this is slow even if the conflict rate is low.
Partition-based T/O
- Split the database up into disjoint subsets called partitions (aka shards).
- Only check for conflicts between txns that are running in the same partition.
Database Partitioning
Schema
```
WAREHOUSE        ITEM
    ↓              ↓
DISTRICT         STOCK
    ↓
CUSTOMER
    ↓
ORDERS
    ↓
ORDER_ITEM
```
Schema Tree
```
WAREHOUSE
↓
DISTRICT
↓
CUSTOMER
↓
ORDERS
↓
ORDER_ITEM
```
Replicated
```
ITEM
```
Database Partitioning
[Figure: the schema tree (WAREHOUSE → DISTRICT → CUSTOMER → ORDERS → ORDER_ITEM) is split into disjoint partitions P1–P5, and the replicated ITEM table is copied into every partition.]
Partition-based T/O
- Txns are assigned timestamps based on when they arrive at the DBMS.
- Partitions are protected by a single lock:
- Each txn is queued at the partitions it needs.
- The txn acquires a partition’s lock if it has the lowest timestamp in that partition’s queue.
- The txn starts when it has all of the locks for all the partitions that it will read/write.
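The queueing rule above can be sketched with one priority queue per partition; a txn may start only when it holds the lowest timestamp at every partition it declared. A minimal sketch, with integer timestamps standing in for txns:

```python
import heapq

# Each partition keeps a min-heap of waiting txn timestamps; the txn
# at the front of a queue holds that partition's lock.

class PartitionScheduler:
    def __init__(self, n_partitions):
        self.queues = [[] for _ in range(n_partitions)]

    def arrive(self, ts, partitions):
        # Queue the txn at every partition it will read/write.
        for p in partitions:
            heapq.heappush(self.queues[p], ts)

    def can_start(self, ts, partitions):
        # Lowest timestamp in every queue it needs -> owns all locks.
        return all(self.queues[p] and self.queues[p][0] == ts
                   for p in partitions)

    def finish(self, ts, partitions):
        # Release the locks so the next-lowest timestamps can proceed.
        for p in partitions:
            self.queues[p].remove(ts)
            heapq.heapify(self.queues[p])
```

A single-partition txn only ever waits on one queue, which is why this protocol is so cheap in the common case noted below.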
Partition-based T/O – Reads
- Do not need to maintain multiple versions.
- Txns can read anything that they want at the partitions that they have locked.
- If a txn tries to access a partition that it does not hold the lock for, it is aborted and restarted.
Partition-based T/O – Writes
- All updates occur in place.
- Maintain a separate in-memory buffer to undo changes if the txn aborts.
- If a txn tries to access a partition that it does not hold the lock for, it is aborted and restarted.
Partition-based T/O – Performance Issues
- Partition-based T/O protocol is very fast if:
- The DBMS knows what partitions the txn needs before it starts.
- Most (if not all) txns only need to access a single partition.
- Multi-partition txns cause partitions to be idle while the txn executes.
Today's Class
- Basic Timestamp Ordering
- Optimistic Concurrency Control
- Multi-Version Concurrency Control
- Partition-based T/O
- The Phantom Problem
- Weaker Isolation Levels
Dynamic Databases
- Recall that so far we have only been dealing with transactions that read and update data.
- But now if we have insertions, updates, and deletions, we have new problems…
The Phantom Problem
Schedule
```
T1                                T2
BEGIN
SELECT MAX(age)
  FROM sailors
 WHERE rating = 1     → 72
                                  BEGIN
                                  INSERT INTO sailors
                                    (age = 96, rating = 1)
                                  COMMIT
SELECT MAX(age)
  FROM sailors
 WHERE rating = 1     → 96
COMMIT
```
How did this happen?
- Because T1 locked only existing records and not ones under way!
- Conflict serializability on reads and writes of individual items guarantees serializability only if the set of objects is fixed.
- Solution?
Predicate Locking
- Lock records that satisfy a logical predicate:
- Example: `rating = 1`.
- In general, predicate locking has a lot of locking overhead.
- **Index locking** is a special case of predicate locking that is potentially more efficient.
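The conflict test can be illustrated for simple equality predicates: two predicate locks conflict when some record could satisfy both predicates and at least one lock is exclusive. Representing predicates as dicts of column-to-value is an assumption made for this toy sketch:

```python
# Toy predicate-lock conflict check for equality predicates only.
# A predicate is a dict like {"rating": 1}; a lock is (predicate, mode).

def predicates_overlap(p1, p2):
    # Equality predicates can both match some record unless they
    # demand different values for a shared column.
    return all(p1[c] == p2[c] for c in p1.keys() & p2.keys())

def conflicts(lock1, lock2):
    (pred1, mode1), (pred2, mode2) = lock1, lock2
    if mode1 == "S" and mode2 == "S":
        return False                 # shared locks never conflict
    return predicates_overlap(pred1, pred2)
```

In the phantom example above, T1's shared predicate lock on `rating = 1` conflicts with T2's exclusive insert of a `rating = 1` record, which is exactly the insert that predicate locking blocks.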
Index Locking
- If there is a dense index on the `rating` field then the txn can lock the index page containing the data with `rating = 1`.
- If there are no records with `rating = 1`, the txn must lock the index page where such a data entry would be, if it existed.
Locking without an Index
- If there is no suitable index, then the txn must obtain:
- A lock on every page in the table to prevent a record's `rating` from being changed to 1.
- The lock for the table itself to prevent records with `rating = 1` from being added or deleted.
Today's Class
- Basic Timestamp Ordering
- Optimistic Concurrency Control
- Multi-Version Concurrency Control
- Partition-based T/O
- The Phantom Problem
- Weaker Isolation Levels
Weaker Levels of Consistency
• Serializability is useful because it allows programmers to ignore concurrency issues.
• But enforcing it may allow too little concurrency and limit performance.
• We may want to use a weaker level of consistency to improve scalability.
Isolation Levels
• Controls the extent that a txn is exposed to the actions of other concurrent txns.
• Provides for greater concurrency at the cost of exposing txns to uncommitted changes:
– Dirty Reads
– Unrepeatable Reads
– Phantom Reads
Isolation Levels
<table>
<thead>
<tr>
<th></th>
<th>Dirty Read</th>
<th>Unrepeatable Read</th>
<th>Phantom</th>
</tr>
</thead>
<tbody>
<tr>
<td>SERIALIZABLE</td>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>REPEATABLE READ</td>
<td>No</td>
<td>No</td>
<td>Maybe</td>
</tr>
<tr>
<td>READ COMMITTED</td>
<td>No</td>
<td>Maybe</td>
<td>Maybe</td>
</tr>
<tr>
<td>READ UNCOMMITTED</td>
<td>Maybe</td>
<td>Maybe</td>
<td>Maybe</td>
</tr>
</tbody>
</table>
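The table above can be encoded as a lookup, which also makes visible that the levels are totally ordered — each weaker level permits a superset of anomalies:

```python
# SQL-92 isolation levels and the anomalies each one permits.
ANOMALIES = {
    "SERIALIZABLE":     set(),
    "REPEATABLE READ":  {"phantom"},
    "READ COMMITTED":   {"phantom", "unrepeatable read"},
    "READ UNCOMMITTED": {"phantom", "unrepeatable read", "dirty read"},
}

def permits(level, anomaly):
    return anomaly in ANOMALIES[level]

print(permits("REPEATABLE READ", "phantom"))      # True
print(permits("REPEATABLE READ", "dirty read"))   # False
```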
Isolation Levels
- **SERIALIZABLE**: Obtain all locks first; plus index locks, plus strict 2PL.
- **REPEATABLE READS**: Same as above, but no index locks.
- **READ COMMITTED**: Same as above, but S locks are released immediately.
- **READ UNCOMMITTED**: Same as above, but allows dirty reads (no S locks).
SQL-92 Isolation Levels
- Default: Depends...
- Not all DBMS support all isolation levels in all execution scenarios (e.g., replication).
```
SET TRANSACTION ISOLATION LEVEL <isolation-level>;
```
Access Modes
- You can also provide hints to the DBMS about whether a txn will modify the database.
- Only two possible modes:
- **READ WRITE**
- **READ ONLY**
### Default and Maximum Isolation Levels by DBMS
<table>
<thead>
<tr>
<th>Database</th>
<th>Default</th>
<th>Maximum</th>
</tr>
</thead>
<tbody>
<tr>
<td>Actian Ingres 10.0/10S</td>
<td>SERIALIZABLE</td>
<td>SERIALIZABLE</td>
</tr>
<tr>
<td>Aerospike</td>
<td>READ COMMITTED</td>
<td>READ COMMITTED</td>
</tr>
<tr>
<td>Greenplum 4.1</td>
<td>READ COMMITTED</td>
<td>SERIALIZABLE</td>
</tr>
<tr>
<td>MySQL 5.6</td>
<td>REPEATABLE READS</td>
<td>SERIALIZABLE</td>
</tr>
<tr>
<td>MemSQL 1.0b</td>
<td>READ COMMITTED</td>
<td>READ COMMITTED</td>
</tr>
<tr>
<td>MS SQL Server 2012</td>
<td>READ COMMITTED</td>
<td>SERIALIZABLE</td>
</tr>
<tr>
<td>Oracle 11g</td>
<td>READ COMMITTED</td>
<td>SNAPSHOT ISOLATION</td>
</tr>
<tr>
<td>Postgres 9.2.2</td>
<td>READ COMMITTED</td>
<td>SERIALIZABLE</td>
</tr>
<tr>
<td>SAP HANA</td>
<td>READ COMMITTED</td>
<td>SERIALIZABLE</td>
</tr>
<tr>
<td>ScaleDB 1.02</td>
<td>READ COMMITTED</td>
<td>READ COMMITTED</td>
</tr>
<tr>
<td>VoltDB</td>
<td>SERIALIZABLE</td>
<td>SERIALIZABLE</td>
</tr>
</tbody>
</table>
Source: Peter Bailis, *When is "ACID" ACID? Rarely*. January 2013
SQL-92 Access Modes
- **Default:** *READ WRITE*
- Not all DBMSs will optimize execution if you set a txn to *READ ONLY* mode.
```
SET TRANSACTION <access-mode>;
```
**Postgres + MySQL 5.6**
```
START TRANSACTION <access-mode>;
```
Which CC Scheme is Best?
- Like many things in life, it depends…
- How skewed is the workload?
- Are the txns short or long?
- Is the workload mostly read-only?
Real Systems
<table>
<thead>
<tr>
<th>DBMS</th>
<th>Scheme</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ingres</td>
<td>Strict 2PL</td>
</tr>
<tr>
<td>Informix</td>
<td>Strict 2PL</td>
</tr>
<tr>
<td>IBM DB2</td>
<td>Strict 2PL</td>
</tr>
<tr>
<td>Oracle</td>
<td>MVCC</td>
</tr>
<tr>
<td>Postgres</td>
<td>MVCC</td>
</tr>
<tr>
<td>MS SQL Server</td>
<td>Strict 2PL or MVCC</td>
</tr>
<tr>
<td>MySQL (InnoDB)</td>
<td>MVCC+2PL</td>
</tr>
<tr>
<td>Aerospike</td>
<td>OCC</td>
</tr>
<tr>
<td>SAP HANA</td>
<td>MVCC</td>
</tr>
<tr>
<td>VoltDB</td>
<td>Partition T/O</td>
</tr>
<tr>
<td>MemSQL</td>
<td>MVCC</td>
</tr>
<tr>
<td>MS Hekaton</td>
<td>MVCC+OCC</td>
</tr>
</tbody>
</table>
Summary
- Concurrency control is hard.
Progress in Linear and Integer Programming and Emergence of Constraint Programming
Dr. Irvin Lustig
Manager, Technical Services
Optimization Evangelist
ILOG Direct
Outline
- Mathematical Programming
- Improvements in Performance
- Constraint Programming
- A Quick Tutorial
- Constraint Programming Successes
Mathematical Programming
Some material courtesy of Bob Bixby
Mathematical Programs
Linear Programming
Minimize \( c^T x \)
Subject to \( Ax = b \)
\[ l \leq x \leq u \]
Example:
Maximize
\[ x_1 + 2 \ x_2 + 3 \ x_3 \]
Subject To
\[ - \ x_1 + \ x_2 + \ x_3 \leq 20 \]
\[ \ x_1 - 3 \ x_2 + \ x_3 \leq 30 \]
\[ 0 \leq x_1 \leq 40 \]
\[ x_2, \ x_3 \geq 0 \]
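Optimality of a point for the example LP above can be certified by hand via weak duality: any nonnegative multipliers on the constraints whose combination dominates the objective bound every feasible point from above. The point \( x^* = (40, 17.5, 42.5) \) and multipliers \( y = (2.75, 0.25, 3.5) \) below are my own worked numbers, not from the slides:

```python
# Weak-duality certificate for: max x1 + 2*x2 + 3*x3
# s.t. -x1 + x2 + x3 <= 20,  x1 - 3*x2 + x3 <= 30,  0 <= x1 <= 40, x2,x3 >= 0

x = (40.0, 17.5, 42.5)
c = (1.0, 2.0, 3.0)                        # objective coefficients
obj = sum(ci * xi for ci, xi in zip(c, x))

# Primal feasibility of x*
assert -x[0] + x[1] + x[2] <= 20 + 1e-9
assert x[0] - 3 * x[1] + x[2] <= 30 + 1e-9
assert 0 <= x[0] <= 40 and x[1] >= 0 and x[2] >= 0

# Dual feasibility: y >= 0 and A^T y >= c, column by column
# (third multiplier belongs to the bound row x1 <= 40)
y = (2.75, 0.25, 3.5)
assert -y[0] + y[1] + y[2] >= c[0] - 1e-9   # column of x1
assert  y[0] - 3 * y[1]    >= c[1] - 1e-9   # column of x2
assert  y[0] + y[1]        >= c[2] - 1e-9   # column of x3

# Weak duality: the dual objective bounds every primal objective.
dual_obj = 20 * y[0] + 30 * y[1] + 40 * y[2]
print(obj, dual_obj)   # both 202.5, so x* is provably optimal
```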
Linear Programming
- George Dantzig, 1947
- Introduced LP and recognized it as more than a conceptual tool: computing the answer mattered.
- Invented the “primal” simplex algorithm.
- First LP solved: Laderman, 9 constraints, 77 variables, 120 man-days.
- What is the single most important event in LP since Dantzig?
- We have (since ~1990) 3 algorithms to solve LPs
- Primal Simplex Algorithm (Dantzig, 1947)
- Dual Simplex Algorithm (Lemke, 1954)
- Barrier Algorithm (Karmarkar, 1984 and others)
PDS Models
<table>
<thead>
<tr>
<th>MODEL</th>
<th>ROWS</th>
<th>CPLEX1.0</th>
<th>CPLEX5.0</th>
<th>CPLEX8.0</th>
<th>SPEEDUP</th>
</tr>
</thead>
<tbody>
<tr>
<td>pds02</td>
<td>2953</td>
<td>0.4</td>
<td>0.1</td>
<td>0.1</td>
<td>4.0</td>
</tr>
<tr>
<td>pds06</td>
<td>9881</td>
<td>26.4</td>
<td>2.4</td>
<td>0.9</td>
<td>29.3</td>
</tr>
<tr>
<td>pds10</td>
<td>16558</td>
<td>208.9</td>
<td>13.0</td>
<td>2.6</td>
<td>80.3</td>
</tr>
<tr>
<td>pds20</td>
<td>33874</td>
<td>5268.8</td>
<td>232.6</td>
<td>20.9</td>
<td>247.3</td>
</tr>
<tr>
<td>pds30</td>
<td>49944</td>
<td>15891.9</td>
<td>1154.9</td>
<td>39.1</td>
<td>406.4</td>
</tr>
<tr>
<td>pds40</td>
<td>66844</td>
<td>58920.3</td>
<td>2816.8</td>
<td>79.3</td>
<td>743.0</td>
</tr>
<tr>
<td>pds50</td>
<td>83060</td>
<td>122195.9</td>
<td>8510.9</td>
<td>114.6</td>
<td>1066.3</td>
</tr>
<tr>
<td>pds60</td>
<td>99431</td>
<td>205798.3</td>
<td>7442.6</td>
<td>160.5</td>
<td>1282.2</td>
</tr>
<tr>
<td>pds70</td>
<td>114944</td>
<td>335292.1</td>
<td>21120.4</td>
<td>197.8</td>
<td>1695.1</td>
</tr>
</tbody>
</table>
(CPLEX 1.0 times above used the primal simplex; CPLEX 5.0 and 8.0 used the dual simplex.)
Not just faster — growth with model size was quadratic *then* and is linear *now*.
Figure: solve time vs. number of time periods for PDS02 – PDS70; the CPLEX 1.0 curve (axis up to ~400,000 seconds) grows quadratically, while the CPLEX 8.0 curve (axis up to ~400 seconds) grows roughly linearly.
Big Test: The testing methodology
- Not possible for one test to cover 10+ years: Combined several tests.
- The biggest single test:
- Assembled 680 real LPs (up to 7 million constraints)
- Test runs: Using a time limit (4 days per LP), two chosen methods would be compared as follows:
- Run method 1: Generate 680 solve times
- Run method 2: Generate 680 solve times
- Compute 680 ratios and form geometric mean (not arithmetic mean!)
The same methodology was applied throughout.
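Why the geometric mean of the ratios rather than the arithmetic mean? A simplified illustration (my own toy numbers): with the arithmetic mean, whichever method serves as the denominator looks worse, while the geometric mean treats speedups and slowdowns symmetrically.

```python
import math

# Method A is 10x faster on one LP and 10x slower on another: a tie.
ratios = [10.0, 0.1]

arith = sum(ratios) / len(ratios)
geo = math.exp(sum(math.log(r) for r in ratios) / len(ratios))

print(arith)  # 5.05 -- wrongly suggests A is ~5x faster overall
print(geo)    # 1.0  -- correctly reports the tie
```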
Linear Programming
Progress: 1988 – Present
- **Algorithms**
  - Best simplex: **960x**
  - Best simplex + barrier: **2360x**
- **Machines**
  - Simplex algorithms: **800x**
  - Barrier algorithms: **13000x**
Algorithm comparison: Extracted from the previous results …
- Dual simplex vs. primal: Dual 2x faster
- Best simplex vs. barrier: About even
- Best of three vs. primal: Best 7.5x faster
Mixed Integer Programming
Minimize \( c^T x \)
Subject to \( Ax = b \)
\[ l \leq x \leq u \]
Some \( x \) are integer
Maximize \( x_1 + 2x_2 + 3x_3 + x_4 \)
Subject to
\[-x_1 + x_2 + x_3 + 10x_4 \leq 20\]
\[x_1 - 3x_2 + x_3 \leq 30\]
\[x_2 - 3.5x_4 = 0\]
\[0 \leq x_1 \leq 40\]
\[x_2, x_3 \geq 0\]
\[2 \leq x_4 \leq 3\]
\( x_4 \) integer
(MIP)
Computational History: 1950 –1998
- 1954 Dantzig, Fulkerson, S. Johnson: 42 city TSP
- Solved to optimality using cutting planes and LP
- 1957 Gomory
- Cutting plane algorithm: A complete solution
- 1960 Land, Doig, 1965 Dakin
- Branch-and-bound (B&B)
- 1971 MPSX/370, Benichou et al.
- 1972 UMPIRE, Forrest, Hirst, Tomlin (Beale)
- 1972 – 1998 Good B&B remained the state-of-the-art in commercial codes, in spite of
- 1973 Padberg
- 1974 Balas (disjunctive programming)
- 1983 Crowder, Johnson, Padberg: PIPX, pure 0/1 MIP
- 1987 Van Roy and Wolsey: MPSARX, mixed 0/1 MIP
- Grötschel, Padberg, Rinaldi … TSP (120, 666, 2392 city models solved)
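The branch-and-bound idea from Land, Doig, and Dakin can be sketched on a tiny 0/1 knapsack (my own toy, not one of the codes above): the LP relaxation — here the fractional knapsack — supplies the upper bound used to prune the search tree.

```python
# Minimal branch-and-bound for a 0/1 knapsack, pruning with the LP bound.

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50

def lp_bound(i, value, room):
    """Fractional-knapsack (LP relaxation) bound for items i.. onward."""
    for v, w in sorted(zip(values[i:], weights[i:]),
                       key=lambda t: t[0] / t[1], reverse=True):
        if w <= room:
            value, room = value + v, room - w
        else:
            return value + v * room / w   # take a fraction of this item
    return value

best = 0
def branch(i, value, room):
    global best
    if value > best:
        best = value                      # new incumbent
    if i == len(values) or lp_bound(i, value, room) <= best:
        return                            # prune: relaxation can't beat incumbent
    if weights[i] <= room:                # branch x_i = 1
        branch(i + 1, value + values[i], room - weights[i])
    branch(i + 1, value, room)            # branch x_i = 0

branch(0, 0, capacity)
print(best)   # 220 (take the second and third items)
```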
Mixed Integer Programming
1998... A new generation of MIP codes
- Linear programming
- Stable, robust dual simplex
- Variable/node selection
- Influenced by traveling salesman problem
- Primal heuristics
- 8 different tried at root
- Retried based upon success
- Node presolve
- Fast, incremental bound strengthening (very similar to Constraint Programming)
- Presolve – numerous small ideas
- Probing in constraints:
- $\sum x_j \leq (\sum u_j) y, \ y = 0/1$
- $\Rightarrow x_j \leq u_jy$ (for all j)
- Cutting planes
- Gomory, knapsack covers, flow covers, mix-integer rounding, cliques, GUB covers, implied bounds, path cuts, disjunctive cuts
- Various extensions
- Aggregation
Computational Results I: 964 models (30 hour time limit)
Solving to Optimality
- CPLEX 5.0
- 56%
- CPLEX 8.0
- 74%
Finding Feasible Solutions
- CPLEX 8.0
- 98% (19 MIPs)
Setting: "MIP emphasis feasibility"
- Integer Solution with > 10% Gap
- Integer Solution with < 10% Gap
- Solved to provable optimality
Computational Results II: 651 models (all solvable to optimality)
- Ran for 30 hours using defaults
- Relative speedups:
- All models (651): 12x
- CPLEX 5.0 > 1 second (447): 41x
- CPLEX 5.0 > 10 seconds (362): 87x
- CPLEX 5.0 > 100 seconds (281): 171x
Through a combination of advances in algorithms and computing machines, combined with developments in data availability and modern modeling languages, what is possible today could only have been dreamed of even 10 years ago.
The result is that whole new application domains have been enabled
- Larger, more accurate models and multiple scenarios
- Tactical and day-of-operations are possible, not just planning
- Disparate components of the extended enterprise can now be “optimized” in concert.
Constraint Programming
Problem Definition
- Minimize (or maximize) an **Objective Function**
- Subject to **Constraints**
- Over a set of values of **Decision Variables**
**Usual Requirements**
- Objective function and constraints have closed mathematical forms (linear, quadratic, nonlinear, etc.)
- Decision variables are real or integer-valued
- Each variable takes values over an interval
Problem Types
- Linear Program
- (Mixed) Integer Program
- Quadratic Program
- Nonlinear Program
- ...
A program is a problem
Computer Programming
- Knuth, 1968, The Art of Computer Programming
- “An expression of a computational method in a computer language is called a program.”
- Programming Paradigms
- Procedural Programming
- Object-oriented Programming
- Functional Programming
- Logic Programming
- ....
Definition
- A computer programming methodology
- Solves
- Constraint satisfaction problems
- Combinatorial optimization problems
- Methodology
- Represent a model of a problem in a computer programming language
- Describe a search strategy for solving the problem
Constraint Programming
Constraint Satisfaction Problems
- Find a **Feasible Solution**
- Subject to **Constraints**
- Over a set of values of **Decision Variables**
**Usual Requirements**
- Constraints are easy to evaluate
- Closed mathematical forms or table lookups
- Decision variables take values over a discrete set
Combinatorial Optimization Problems
- Minimize (or maximize) an **Objective Function**
- Subject to **Constraints**
- Over a set of values of **Decision Variables**
**Usual Requirements**
- Objective Function and Constraints are easy to evaluate
- Closed mathematical forms or table lookups
- Decision variables take values over a discrete set
What is a potential representation?
- Let $x_1, x_2, \ldots, x_n$ be the *decision variables*.
- Each $x_j$ (for $j = 1, 2, \ldots, n$) has a domain $D_j$ of allowable values.
- Note that a domain may be finite or infinite.
- A domain may have “holes” (e.g., even numbers between 0 and 100).
- The allowable values could be elements of a particular set.
- A *constraint* is a function $f$
\[ f(x_1, x_2, \ldots, x_n) \in \{0, 1\} \]
- The function may just be a table of values!
A constraint satisfaction problem is
Find values of $x_1, x_2, \ldots, x_n$ such that
\[
x_j \in D_j \quad (j = 1, 2, \ldots, n)
\]
\[
f_k (x_1, x_2, \ldots, x_n) = 1 \quad (k=1,\ldots,m)
\]
A solution of this problem is any set of values satisfying the above conditions.
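The definition transcribes almost directly into code. A naive sketch (pure enumeration, with no propagation — real CP solvers do far better): enumerate the Cartesian product of the domains and keep the assignments on which every constraint function \( f_k \) returns 1 (i.e. True).

```python
from itertools import product

def solve_csp(domains, constraints):
    """Return every assignment satisfying all 0/1 constraint functions."""
    return [x for x in product(*domains)
            if all(f(*x) for f in constraints)]

# Example: x1, x2 in {1..4}, with x1 < x2 and x1 + x2 = 5.
domains = [range(1, 5), range(1, 5)]
constraints = [lambda x1, x2: x1 < x2,
               lambda x1, x2: x1 + x2 == 5]
print(solve_csp(domains, constraints))   # [(1, 4), (2, 3)]
```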
Optimization Problem
- Suppose you have an *objective function* $g(x_1, x_2, ..., x_n)$ that you wish to minimize.
- Optimization Problem is then
minimize $g(x_1, x_2, ..., x_n)$
subject to
$x_j \in D_j$ \hspace{1cm} (j = 1, 2, ..., n)
$f_k(x_1, x_2, ..., x_n) = 1$ \hspace{1cm} (k=1,...,m)
Examples of Constraints
- **Logical constraints**
- If $x$ is equal to 4, then $y$ is equal to 5
- Either "Activity a" precedes "Activity B" OR "Activity B" precedes "Activity A"
- **Global constraints**
- All of the values in the array $x$ are different
- Element $i$ of the array $\text{card}$ is the number of times that the $i$th element of the array $\text{value}$ appears in the array $\text{base}$
- **Meta constraints**
- The number of times that the array $x$ has the value 5 is exactly 3
- **Element constraint**
- The cost of assigning person $i$ to job $j$ is $\text{cost}[\text{job}[i]]$, when $\text{job}[i]$ is $j$
Constraint Programming Provides:
- **A modeling methodology** for stating decision variables, constraints, and objective functions
- **A programming language** for stating a search algorithm for finding values of the variables that satisfy the constraints and optimize the objective
- **A programming system** that includes
- Predefined constraints with powerful *filtering algorithms* for reducing the size of the search space
- Functionality to allow definitions of new constraints and filtering algorithms
Examples of Constraints
- **Logical constraints**
- \((x = 4) \Rightarrow (y = 5)\)
- \((a.\text{end} \leq b.\text{start}) \lor (b.\text{end} \leq a.\text{start})\)
- **Global constraints**
- `alldifferent(x)`
- `distribute(card, value, base)`
- `card[i]` is the number of times `value[i]` appears in `base`
- **Meta constraints**
- `sum(i in S) (x[i] = 5) = 3;`
- **Element constraint**
- `z = y[x[i]]`
Have a list of countries
```
enum Country {Belgium, Denmark, France, Germany, Netherlands, Luxembourg};
```
Have a set of colors to use on a map to color the countries
```
enum Colors {blue, red, yellow, gray};
```
Want to decide how to assign the colors to the countries so that no two bordering countries have the same color
```
var Colors color[Country];
```
The decision variables are values from a set
```plaintext
enum Country {Belgium, Denmark, France, Germany, Netherlands, Luxembourg};
enum Colors {blue, red, yellow, gray};
var Colors color[Country];
solve {
   color[France] <> color[Belgium];
   color[France] <> color[Luxembourg];
   color[France] <> color[Germany];
   color[Luxembourg] <> color[Germany];
   color[Luxembourg] <> color[Belgium];
   color[Belgium] <> color[Netherlands];
   color[Belgium] <> color[Germany];
   color[Germany] <> color[Netherlands];
   color[Germany] <> color[Denmark];
};
```
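The same model can be searched in plain Python (a sketch, not how a CP engine works internally): backtracking with the filtering idea in miniature — a country is only assigned a color not already used by a colored neighbor.

```python
# Backtracking search for the map-coloring model above.

borders = [("France", "Belgium"), ("France", "Luxembourg"),
           ("France", "Germany"), ("Luxembourg", "Germany"),
           ("Luxembourg", "Belgium"), ("Belgium", "Netherlands"),
           ("Belgium", "Germany"), ("Germany", "Netherlands"),
           ("Germany", "Denmark")]
countries = ["Belgium", "Denmark", "France", "Germany",
             "Netherlands", "Luxembourg"]
colors = ["blue", "red", "yellow", "gray"]

neighbors = {c: set() for c in countries}
for a, b in borders:
    neighbors[a].add(b)
    neighbors[b].add(a)

def color_map(assignment, remaining):
    if not remaining:
        return assignment
    country, rest = remaining[0], remaining[1:]
    for c in colors:
        # Filtering step: skip colors already taken by a colored neighbor.
        if all(assignment.get(n) != c for n in neighbors[country]):
            result = color_map({**assignment, country: c}, rest)
            if result is not None:
                return result
    return None   # dead end: backtrack

solution = color_map({}, countries)
print(solution)
```

All four colors are genuinely needed here: France, Belgium, Luxembourg, and Germany are pairwise adjacent.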
**Example**
### Optimization
```plaintext
enum Country {Belgium, Denmark, France, Germany, Netherlands, Luxembourg};
enum Colors {blue, red, yellow, gray};
var Colors color[Country];
solve {
color[France] <> color[Belgium];
color[France] <> color[Luxembourg];
color[France] <> color[Germany];
color[Luxembourg] <> color[Germany];
color[Luxembourg] <> color[Belgium];
color[Belgium] <> color[Netherlands];
color[Belgium] <> color[Germany];
color[Germany] <> color[Netherlands];
color[Germany] <> color[Denmark];
}
var int colorcount[Colors] in 0..card(Country);
maximize colorcount[yellow]
subject to {
forall (i in Colors)
colorcount[i] = sum(j in Country) (color[j] = i);
}
```
Custom Pilot Chemical Company is a chemical manufacturer that produces batches of specialty chemicals to order. Principal equipment consists of eight interchangeable reactor vessels, five interchangeable distillation columns, four large interchangeable centrifuges, and a network of switchable piping and storage tanks. Customer demand comes in the form of orders for batches of one or more specialty chemicals, normally to be delivered simultaneously for further use by the customer.
An order consists of a set of jobs. Each job has an optional precedence requirement, arrival week of the job, duration of the job in weeks, the week that the job is due, the number of reactors required, distillation columns required, and centrifuges required.
Find a schedule of the orders and jobs to minimize the completion time of all orders.
### Problem Data
<table>
<thead>
<tr>
<th>Order Number</th>
<th>Job number</th>
<th>Precedence relations</th>
<th>Arrival Week</th>
<th>Duration in weeks</th>
<th>Week due</th>
<th>Reactors</th>
<th>Distillation columns</th>
<th>Centrifuges</th>
</tr>
</thead>
<tbody>
<tr>
<td>AK14</td>
<td>1</td>
<td>None</td>
<td>15</td>
<td>4</td>
<td>22</td>
<td>5</td>
<td>3</td>
<td>2</td>
</tr>
<tr>
<td></td>
<td>2</td>
<td>1</td>
<td>15</td>
<td>3</td>
<td>22</td>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>3</td>
<td>None</td>
<td>15</td>
<td>3</td>
<td>22</td>
<td>2</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<td>AK15</td>
<td>1</td>
<td>None</td>
<td>16</td>
<td>3</td>
<td>23</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>2</td>
<td>None</td>
<td>16</td>
<td>2</td>
<td>23</td>
<td>2</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td></td>
<td>3</td>
<td>1</td>
<td>16</td>
<td>2</td>
<td>23</td>
<td>2</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>AK16</td>
<td>1</td>
<td>None</td>
<td>17</td>
<td>5</td>
<td>23</td>
<td>2</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>2</td>
<td>None</td>
<td>17</td>
<td>1</td>
<td>23</td>
<td>1</td>
<td>3</td>
<td>0</td>
</tr>
</tbody>
</table>
Production Scheduling
Data input
```cpp
struct JobIndex {
string ordernumber;
int jobnum;
};
struct JobInfo {
int jobprec;
int arrival;
int duration;
int weekdue;
int reactors;
int columns;
int centrifuges;
};
struct JobData {
JobIndex ind;
JobInfo info;
};
setof (JobData) jobs = ...;
```
Data organization
```plaintext
setof(JobIndex) joblist = { i | <i,j> in jobs };
assert( card(joblist) = card(jobs) );
JobInfo datarray[joblist];
initialize {
   forall (j in jobs)
      datarray[j.ind] = j.info;
};
```
The resulting data:
```plaintext
datarray[<"AK14",1>] = <0, 15, 4, 22, 5, 3, 2>
datarray[<"AK14",2>] = <1, 15, 3, 22, 0, 1, 1>
datarray[<"AK14",3>] = <0, 15, 3, 22, 2, 0, 2>
datarray[<"AK15",1>] = <0, 16, 3, 23, 1, 1, 1>
datarray[<"AK15",2>] = <0, 16, 2, 23, 2, 0, 0>
datarray[<"AK15",3>] = <1, 16, 2, 23, 2, 2, 0>
datarray[<"AK16",1>] = <0, 17, 5, 23, 2, 1, 1>
datarray[<"AK16",2>] = <0, 17, 1, 23, 1, 3, 0>
```
Resource capacities:
```plaintext
int reactors = ...;
int columns = ...;
int centrifuges = ...;
```
Production Scheduling
Model
```plaintext
scheduleOrigin  = min(j in jobs) j.info.arrival;
scheduleHorizon = max(j in jobs) j.info.weekdue;

Activity makespan(0);
Activity a[j in joblist](datarray[j].duration);
DiscreteResource Reactors(reactors);
DiscreteResource Columns(columns);
DiscreteResource Centrifuges(centrifuges);

minimize makespan.end
subject to {
   forall (j in joblist) {
      a[j] precedes makespan;
      if (datarray[j].jobprec > 0) then
         a[<j.ordernumber, datarray[j].jobprec>] precedes a[j]
      endif;
      a[j] requires (datarray[j].reactors) Reactors;
      a[j] requires (datarray[j].columns) Columns;
      a[j] requires (datarray[j].centrifuges) Centrifuges;
      a[j].start >= datarray[j].arrival;
      a[j].end <= datarray[j].weekdue;
   };
};
```
**Solution for activities**
Optimal Solution with Objective Value: 22
makespan = [22 -- 0 --> 22]
<table>
<thead>
<tr>
<th>Order Number</th>
<th>Job Number</th>
<th>Start Time</th>
<th>End Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>AK14</td>
<td>1</td>
<td>15</td>
<td>19</td>
</tr>
<tr>
<td>AK14</td>
<td>2</td>
<td>19</td>
<td>22</td>
</tr>
<tr>
<td>AK14</td>
<td>3</td>
<td>19</td>
<td>22</td>
</tr>
<tr>
<td>AK15</td>
<td>1</td>
<td>16</td>
<td>19</td>
</tr>
<tr>
<td>AK15</td>
<td>2</td>
<td>19</td>
<td>21</td>
</tr>
<tr>
<td>AK15</td>
<td>3</td>
<td>19</td>
<td>21</td>
</tr>
<tr>
<td>AK16</td>
<td>1</td>
<td>17</td>
<td>22</td>
</tr>
<tr>
<td>AK16</td>
<td>2</td>
<td>21</td>
<td>22</td>
</tr>
</tbody>
</table>
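The reported solution can be verified independently (my own checker, not part of the slides): replay the start/end weeks from the table against the capacities stated in the problem — eight reactors, five distillation columns, four centrifuges — and confirm that the makespan is 22.

```python
# Feasibility check for the reported schedule.
# (order, job): (start, end, reactors, columns, centrifuges)
schedule = {
    ("AK14", 1): (15, 19, 5, 3, 2),
    ("AK14", 2): (19, 22, 0, 1, 1),
    ("AK14", 3): (19, 22, 2, 0, 2),
    ("AK15", 1): (16, 19, 1, 1, 1),
    ("AK15", 2): (19, 21, 2, 0, 0),
    ("AK15", 3): (19, 21, 2, 2, 0),
    ("AK16", 1): (17, 22, 2, 1, 1),
    ("AK16", 2): (21, 22, 1, 3, 0),
}
capacity = (8, 5, 4)   # reactors, columns, centrifuges

makespan = max(end for _, end, *_ in schedule.values())

# For every week and resource, total demand must not exceed capacity.
feasible = all(
    sum(need[r] for start, end, *need in schedule.values()
        if start <= week < end) <= capacity[r]
    for week in range(15, makespan)
    for r in range(3)
)
print(makespan, feasible)   # 22 True
```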
Production Scheduling
Resource allocation (text)
Reactors = Discrete Resource
required by a[#<ordernumber:"AK16",jobnum:2>#] over [21,22] in capacity 1
required by a[#<ordernumber:"AK16",jobnum:1>#] over [17,22] in capacity 2
required by a[#<ordernumber:"AK15",jobnum:3>#] over [19,21] in capacity 2
required by a[#<ordernumber:"AK15",jobnum:2>#] over [19,21] in capacity 2
required by a[#<ordernumber:"AK15",jobnum:1>#] over [16,19] in capacity 1
required by a[#<ordernumber:"AK14",jobnum:3>#] over [19,22] in capacity 2
required by a[#<ordernumber:"AK14",jobnum:1>#] over [15,19] in capacity 5
Columns = Discrete Resource
required by a[#<ordernumber:"AK16",jobnum:2>#] over [21,22] in capacity 3
required by a[#<ordernumber:"AK16",jobnum:1>#] over [17,22] in capacity 1
required by a[#<ordernumber:"AK15",jobnum:3>#] over [19,21] in capacity 2
required by a[#<ordernumber:"AK15",jobnum:1>#] over [16,19] in capacity 1
required by a[#<ordernumber:"AK14",jobnum:2>#] over [19,22] in capacity 1
required by a[#<ordernumber:"AK14",jobnum:1>#] over [15,19] in capacity 3
Centrifuges = Discrete Resource
required by a[#<ordernumber:"AK16",jobnum:1>#] over [17,22] in capacity 1
required by a[#<ordernumber:"AK15",jobnum:1>#] over [16,19] in capacity 1
required by a[#<ordernumber:"AK14",jobnum:3>#] over [19,22] in capacity 2
required by a[#<ordernumber:"AK14",jobnum:2>#] over [19,22] in capacity 1
required by a[#<ordernumber:"AK14",jobnum:1>#] over [15,19] in capacity 2
Resource allocation (graphs)
Comparing CP and MP
Which is BETTER????
- It depends upon the data
- It depends on the search strategy
- It depends on the combinatorial nature of the problem
- For general applications, you need tools that allow you to try both methodologies!
What is a solution?
- Linear programs and integer programs always have objective functions.
- A constraint satisfaction problem may simply be a feasibility problem.
- It may have many possible solutions!
- People in constraint programming say that they have a “solution” when people in mathematical programming would say they have a “feasible solution”.
## Comparing CP and MP
### Vocabulary Differences
<table>
<thead>
<tr>
<th>Mathematical Programming</th>
<th>Constraint Programming</th>
</tr>
</thead>
<tbody>
<tr>
<td>Feasible Solution</td>
<td>Solution</td>
</tr>
<tr>
<td>Optimal Solution</td>
<td>Optimized Solution</td>
</tr>
<tr>
<td>Decision Variable</td>
<td>Constrained Variable</td>
</tr>
<tr>
<td>Fixed Variable</td>
<td>Bound Variable</td>
</tr>
<tr>
<td>Bound Strengthening</td>
<td>Domain Reduction (a superset)</td>
</tr>
<tr>
<td>Iterative Presolve</td>
<td>Constraint Propagation</td>
</tr>
</tbody>
</table>
Constraint Programming Successes
Optimization Successes
DaimlerChrysler
- Centralized Vehicle Scheduler: for vehicle production
- Results: Competitive advantage & savings
- 10-20% improvement in purge rates
- Increased production by 4,000 cars/year/plant
- Estimated savings of $27 million annually
First Union
- **Loan Arranger:** Searches for loan that best meets each customer’s requirements
- **Results:** Competitive advantage & savings
- 4 x increase in monthly loan volume
- 15% increase in average loan size
- Reduced “time to funding” from 21 to 8 days
- Reduced underwriting costs by 78%
Optimization Successes
SNCF Railways
- Rolling Stock Maintenance Operations
- Schedule Operations Efficiently
- Save 10% of 2,000 maintenance workers
Nissan (UK)
- **Challenge:** Build 3rd car model with 2 existing production lines
- **Results:** Europe’s already most efficient car production facility is even more productive
- No need to add any new production line and no significant investment needed
- Production capacity increased by 30%
- Schedule adherence rose from 3% to 90%
Constraint Programming
Applications
- Scheduling
- Dispatching
- Configuration
- Enumeration
- Sequencing
Conclusions
- Optimization technologies have significantly improved over the past 15 years
- Multiple techniques
- Traditional Mathematical Programming
- Newer Constraint Programming
- An explosion of applications
|
olmocr_science_pdfs
|
2024-12-11
|
2024-12-11
|
9d08cb6cda138524d1c97a846a92396fbd5bc83f
|
Insights into Continuous Integration Build Failures
Md Rakibul Islam
University of New Orleans, USA
Email: mislam3@uno.edu
Minhaz F. Zibran
University of New Orleans, USA
Email: zibran@cs.uno.edu
Abstract—Continuous integration is prevalently used in modern software engineering to build software systems automatically. Broken builds hinder developers’ work and delay project progress, so it is important to identify the factors that cause build failures.
This paper presents a large empirical study that identifies factors which potentially affect build results: the complexity of a task, the build strategy, the contribution model (i.e., direct push vs. pull request), and project-level attributes (i.e., the sizes of projects and teams). We have studied 3.6 million builds over 1,090 open-source projects. The derived results add to our understanding of the role of these factors in build results, which can be used to minimize build failures.
### I. Introduction
Continuous integration (CI) systems provide facilities for automatic compilation, building, testing and deployment of software [7], usually triggered by a unit of changes committed to a larger code base. Since its inception in 1991 as one of the twelve Extreme Programming (XP) practices [4], CI has become a widely accepted practice in the software development community [5]. Although CI is continuously gaining popularity in software engineering, it has received very little attention from the research community [7], [6]. Despite a few quantitative studies [7], [9], [10], [11] on CI, the research community lacks quantifiable evidence on the implications of the adoption and use of CI [6].
While the use of CI improves the productivity and quality of software products [11], broken builds halt the development work of an entire team for significant amounts of time [8]. Thus, we must identify the factors that cause build failures. If we can identify those factors, developers can take precautionary measures to minimize their impacts and thus substantially reduce development time. In this work, we quantitatively examine various development factors that cause broken builds. In particular, we address the following three research questions.
RQ1: Is there any relationship between the complexity of a task and CI build failure i.e., broken build?
— We hypothesize that a task with higher complexity increases the likelihood of a build failure. Kerzazi et al. [8] reported that the probability of a build failure increases with the number of changed code lines and the number of changed files (which are among the factors we use to measure the complexity of a task) included in the push initiating the build. We verify their findings by conducting an in-depth analysis using a larger dataset.
RQ2: Do the build strategies, and the developers’ contribution models have any impact on CI build failures?
— A build strategy (i.e., the choice of a build tool to run a build at a specific time in a particular build branch) can affect build results. For example, developers typically take more care when writing changes to a master branch than to a non-master branch, which can be related to a higher number of successful builds in the master branch [7].
Again, the different features (e.g., readability and simplicity of the build configuration files) offered by different build tools may play a role in build results (e.g., successful or broken). Similarly, developers' contribution models, such as direct push and pull request, can be related to build results. Here, we examine all these causal relationships using quantitative analyses. Findings from our examination can help a development manager set a build strategy and choose a suitable contribution model for a project to prevent build failures.
RQ3: Do the sizes of teams and projects have any correlation with CI build failure?
— Dependencies among team members, and even among code components, can increase when the sizes of the team and the project are large. Such dependencies, particularly in code, are susceptible to broken builds [9]. Identifying any correlation between build failures and the sizes of teams and projects will help developers be more cautious in preventing build failures when those sizes grow.
### II. Data Collection
To address the aforementioned research questions, we use the snapshot revision travisTorrent_11_1_2017.csv.gz of a large dataset prepared by Beller et al. [6]. The collected dataset (i.e., snapshot) consists of source code and build information for 1,300 software projects developed in Java and Ruby.
All this information is collected from three different sources: (1) Travis CI, a CI tool; (2) GIT, a version control system; and (3) GitHub, a collaboration platform. The collected information is then combined to prepare the dataset. For example, for a particular build, the dataset contains attributes such as build ID, build status/result, build duration and build tool, which are collected from Travis CI. These are combined with other attributes, such as the number of changed code lines and changed files and the commit IDs and number of commits of that build, which are collected from GIT and GitHub. Further details about the dataset can be found elsewhere [6].
To ensure that a project has used a CI service sufficiently, we select from the aforementioned dataset only those projects which meet two criteria: (1) the project has used a CI service for at least one year, and (2) the project has at least 100 builds, regardless of their success or failure status. In this way, we select 1,090 software projects comprising 3.6 million builds for the study.
**Categorization of Build Results:** In the final dataset we find five types of build results: passed, failed, errored, canceled and start. We exclude all builds with start status, as their final results are unknown [5]. We consider the build result passed as successful; the remaining build results are termed unsuccessful throughout the paper.
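The selection and labeling steps above can be sketched in Python. This is an illustrative sketch only; the field names `ci_duration` and `builds` are assumptions for the example, not the dataset's actual column names:

```python
from datetime import timedelta

def select_projects(projects):
    """Keep projects that (1) used a CI service for at least one year
    and (2) have at least 100 builds, regardless of build status."""
    return [p for p in projects
            if p["ci_duration"] >= timedelta(days=365)
            and len(p["builds"]) >= 100]

def label_builds(builds):
    """Drop builds with 'start' status (final result unknown), then
    binarize: 'passed' is successful, everything else (failed,
    errored, canceled) is unsuccessful."""
    kept = [b for b in builds if b != "start"]
    return ["successful" if b == "passed" else "unsuccessful"
            for b in kept]
```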
### III. Analysis and Findings
The research questions RQ1, RQ2 and RQ3 are addressed in Section III-A, Section III-B and Section III-C, respectively. To verify the statistical significance of the results derived from our quantitative analysis, we apply the Mann-Whitney-Wilcoxon (MWW) test [3] with $\alpha = 0.05$ for RQ1 and RQ3. We use the Chi-squared test of independence [12] with the same value of $\alpha$ for RQ2. To measure effect sizes, we use Cohen’s $d$ [1] and Cramer’s $V$ [2] along with the MWW test and the Chi-squared test, respectively.
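The two effect-size measures can be computed directly from their textbook definitions; the significance tests themselves are available in standard statistics packages (e.g., `scipy.stats.mannwhitneyu` and `scipy.stats.chi2_contingency`). A minimal sketch of the effect sizes:

```python
import math

def cohens_d(xs, ys):
    """Cohen's d: difference of sample means divided by the pooled
    standard deviation of the two samples."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled

def cramers_v(chi2, n, rows, cols):
    """Cramer's V for an r x c contingency table, given the
    chi-squared statistic and the total sample size n."""
    return math.sqrt(chi2 / (n * min(rows - 1, cols - 1)))
```

As a sanity check, plugging Table II's $\chi^2 = 3971.14$ with roughly 3.6 million builds into `cramers_v` for a 2×2 table gives $V \approx 0.033$, matching the reported value.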
#### A. Analysis of Complexity of a Task
We consider the complexity of a task to be higher if the number of code churns, the number of changed files and the number of built commits in a single push for that task are higher. In our investigation, we treat source code churn and test code churn separately, which gives deeper insight into the impact of code churn on build results.
**Source Code Churn (SCC).** SCC is defined as the number of changed lines (i.e., lines added, deleted and modified) in the files of a build. We investigate whether the amount of SCC in a single build has any relationship with unsuccessful builds. To do so, for each of the selected projects we compute the average SCC per build in both successful and unsuccessful builds. The box-plots in Figure 1a present the distributions of the computed averages of SCC per build, in logarithmic scale, for each of the projects in each type of build result (i.e., successful and unsuccessful). The ‘x’ mark indicates the average SCC per build over all the projects. We notice from Figure 1a that both the median and the average of SCC over all the projects are higher in unsuccessful builds than in successful builds.
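The per-project aggregation described above can be sketched as follows (the `(result, churn)` input format is a hypothetical illustration):

```python
from collections import defaultdict
from statistics import mean

def avg_churn_per_build(builds):
    """Average churn per build, split by build result, for one
    project.  `builds` is a list of (result, churn) pairs with the
    result already binarized as in Section II."""
    by_result = defaultdict(list)
    for result, churn in builds:
        by_result[result].append(churn)
    return {result: mean(churns) for result, churns in by_result.items()}
```

Applying this to every project yields one average per project and build result; the two resulting per-project distributions are what Figure 1a plots and what the MWW test compares.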
To determine the statistical significance of this observation, we conduct an MWW test between the distributions of the average SCC scores in each type of build result over all the projects. The computed P-value ($P = 2.74 \times 10^{-15}, P < \alpha$) reveals that the difference is statistically significant. Moreover, the Cohen’s $d$ value of 0.34 suggests a medium effect size between the two distributions.
**Test Code Churn (TCC).** Again, we compute the average number of TCC per build in both successful and unsuccessful builds for each of the projects. The box-plots in Figure 1b present the distributions of computed average scores of TCC per build for each of the projects in both successful and unsuccessful builds. Again, from Figure 1b we see that both the median and average of TCC over all the projects are higher in unsuccessful builds compared to those in successful builds.
To measure the statistical significance, we conduct an MWW test between the distributions of average TCC in successful and unsuccessful builds for all the projects. The resulting P-value ($P = 3.31 \times 10^{-7}, P < \alpha$) implies that the difference is statistically significant; however, the computed Cohen’s $d$ value of 0.213 indicates a small effect size.
**File Level Change (FLC).** We calculate the number of changed files, i.e., FLC, by summing the numbers of files added, deleted and modified in the push or pull request to a development branch that initiates the build. Then, we compute the average FLC per build in both successful and unsuccessful builds for all the projects and plot those averages in Figure 1c. We observe from Figure 1c that both the median and the average of FLC over all the projects are higher in unsuccessful builds than in successful builds. The computed P-value ($P = 2.2 \times 10^{-16}, P < \alpha$) of an MWW test between the distributions of average FLC scores in successful and unsuccessful builds for all the projects implies that the difference is significant. Moreover, the resulting Cohen’s $d$ value of 0.36 suggests a medium effect size.
**Built Commits in a Build (BCB).** To identify the impact of the number of BCB on build results, we compute the average BCB in both successful and unsuccessful builds for each of the projects. The box-plots in Figure 1d present the computed average BCB scores for each of the projects in both successful and unsuccessful builds. Again, we see from Figure 1d that both the median and the average BCB score over all the projects are higher in unsuccessful builds than in successful builds. The computed P-value ($P = 2.2 \times 10^{-16}, P < \alpha$) of an MWW test between the distributions of average BCB scores in successful and unsuccessful builds indicates that the difference is statistically significant. Moreover, the computed Cohen’s $d$ value of 0.61 suggests that the effect size is large.
Based on the observations and statistical analyses, we derive the answer to the RQ1 as follows:
**Ans. to RQ1:** The complexity of a task, i.e., the number of built commits, the amount of source code churn and the number of changed files in the task, has a statistically significant relationship with unsuccessful builds.
#### B. Analysis of Build Strategy and Contribution Model
In this study we consider a build strategy to comprise two decisions: (1) the build tool, such as Ant, Maven, Gradle, Ruby, or Plain, and (2) the build branch, i.e., the master branch versus all other development branches, termed non-master branches. Based on the way developers commit their changes to the development branches, we consider two types of contribution models: (1) direct push and (2) pull request. While direct push is the most prevalent contribution model, pull request is gaining significant popularity in open-source projects. Here, we examine the impacts of build tools, build branches and contribution models on build results.
#### Types of Build Tools (TBT)
Table I presents the frequencies of builds, by result, for the different build tools. To get a fuller picture, we also calculate the proportion of build results for each build tool, shown in the rightmost two columns of Table I. The tool Gradle shows the highest proportion of successful builds, followed by Ruby (i.e., rake). The tools Ant and Maven show almost equal proportions of successful builds. While for every other tool the proportion of successful builds is higher than the proportion of unsuccessful builds, interestingly, an exception can be observed when Plain (i.e., shell or other languages’ scripts) is used to run the builds. The same tool also shows the highest proportion of unsuccessful builds, followed by Ant and Maven. A Chi-squared test ($\chi^2 = 93680$, df $= 4$, $P = 2.2 \times 10^{-16}, P < \alpha$) also indicates statistical significance of the relationship between build tools and build results, with a medium effect size (Cramer’s $V = 0.168$).
#### Types of Development Branches (TDB)
Table II presents the frequencies and proportions of builds according to their results found in both master and non-master branches of development. We see from Table II that both the frequency and the proportion of successful builds are higher in the master branch compared to the non-master branch. A Chi-squared test ($\chi^2 = 3971.14$, df $= 1$, $P = 2.2 \times 10^{-16}, P < \alpha$) indicates statistical significance of the relationship between the development branches and build results, although the effect size is weak (as Cramer’s $V = 0.0331$).
#### Types of Development Model (TDM)
Table III presents the frequencies and proportions of builds, by result, for both the direct push and pull request models of contribution. Although the push model has a higher number of successful builds, the proportion of successful builds is higher in the pull request model. A Chi-squared test ($\chi^2 = 1301.5$, df $= 1$, $P = 2.2 \times 10^{-16}, P < \alpha$) indicates statistical significance of the relationship between the development models and the build results, with a weak effect size (Cramer’s $V = 0.019$). Based on the observations and the statistical analyses, we derive the answer to RQ2 as follows:
**Ans. to RQ2:** The choice of build tool has a statistically significant impact on build results.
#### C. Analysis of Project Level Attributes
We examine whether project-level attributes, such as the source code size in terms of lines of code (SLOC) and the test code size per 1,000 SLOC, have any correlation with build results. We also check whether team size, measured as the number of contributors to a project, relates to build results.
#### Size of Source Code (SSC).
We calculate the average SSC per build in successful and unsuccessful builds for each of the projects. The distributions of those averages are presented in Figure 2a, where the median and the average score over all the projects are found to be almost equal for successful and unsuccessful builds. The computed P-value ($P = 0.881, P > \alpha$) of an MWW test between the average SSC scores in successful and unsuccessful builds for all the projects also implies that the difference is not statistically significant.
#### Size of Test Code (STC).
Similar to SSC, for each of the projects we compute the average number of STC per build in successful and unsuccessful builds. The box-plots in Figure 2b present the distributions of computed averages of STC per build for each of the projects in successful and unsuccessful builds. The computed P-value ($P = 0.708, P > \alpha$) of a MWW test indicates no significant difference between averages of STC in successful and unsuccessful builds.
#### Team Size of a Project (TSP).
Again, for each of the projects we compute the average TSP per build in both successful and unsuccessful builds. The box-plots in Figure 2c present the distributions of the computed averages of TSP per build for each of the projects in successful and unsuccessful builds. The computed P-value ($P = 0.529, P > \alpha$) of an MWW test indicates no statistically significant difference between the averages of TSP in **successful** and **unsuccessful** builds. Based on the analyses, we derive the answer to RQ3 as follows:
**Ans. to RQ3:** Sizes of projects and teams have no correlation with build results.
### IV. Threats to Validity
The exclusion of 193 projects from the dataset can be questioned, as can the exclusion of builds with `start` status. Despite all such exclusions, our studied dataset consists of 1,090 projects and 3.6 million builds, which are significantly large numbers for our quantitative analysis. We consider a single push that initiates a build to represent a task, which may not always be true, as a task can comprise multiple pushes.
All the studied projects are developed in either Java or Ruby, so the generalizability of the findings can be considered a threat to the validity of the study. The methodology of data collection and analysis, as well as the results, are well documented in this paper; hence, it should be possible to reproduce the study.
### V. Related Work
Beller et al. [5] identified that testing is the single most important reason why builds fail. While their work mainly focused on impacts of tests on build failures, we have considered many other important factors, which have statistically significant relationships with build results.
The work of Kerzazi et al. [8] is the most relevant to ours: they examined the impacts of the number of code churns, the number of changed files and team sizes on build results, in addition to a qualitative study. While both works agree on the negative impact of higher numbers of code churns and changed files on build results, contradictory results can be observed on the relationship between team sizes and unsuccessful builds. We have more confidence in our results, as our study covers 1,090 projects and 3.6 million builds, while the former study was based on only one project and 3,214 builds.
Vasilescu et al. [10] identified a higher number of successful builds in the pull request model than in the direct push model (using 223 GitHub projects), although we have not found any statistically significant difference in the numbers of successful and unsuccessful builds between the direct push and pull request models using a larger dataset.
Hilton et al. [7] conducted a study to identify costs and benefits (e.g., productivity) for projects using CI. Vasilescu et al. [11] also examined the productivity and quality (measured in terms of detecting bugs early, before releases) of projects that use CI. Both works found positive impacts of CI on productivity, while the latter claimed that the increased productivity comes without a negative effect on product quality. Instead of measuring the effects of using CI, we have examined which factors cause unsuccessful builds.
### VI. Conclusion
In this paper, we have presented a large-scale quantitative empirical study on the impacts of various development factors on build results. We have studied 3.6 million builds over 1,090 open-source projects.
In our study, we have found that build results are significantly affected by the number of changed lines of code, the number of changed files, and the number of built commits in tasks. We have also identified a correlation between build tools and build results. However, the number of changed test code lines, the development branches and the contribution models have no significant impact on build results.
The findings from this work are validated in the light of statistical significance. The results of this study substantially advance our understanding of the impacts of development factors on build results, although contradictory results (as discussed in Section V) indicate the need for further investigation in those directions. In the future, we also plan to conduct more studies on the impacts of using CI in projects.
### References
Constraint Satisfaction Problems
• The constraint network model
– Variables, domains, constraints, constraint graph, solutions
• Examples:
– graph-coloring, 8-queen, cryptarithmetic, crossword puzzles, vision problems, scheduling, design
• The search space and naive backtracking,
• The constraint graph
Maybe:
• Consistency enforcing algorithms
– arc-consistency, AC-1, AC-3
Next time:
• Backtracking strategies
– Forward-checking, dynamic variable orderings
• Special case: solving tree problems
• Local search for CSPs
Constraint satisfaction problems (CSPs)
Standard search problem:
- **state** is a “black box”—any old data structure
that supports goal test, eval, successor
CSP:
- **state** is defined by **variables** $X_i$ with **values** from **domain** $D_i$
**goal test** is a set of **constraints** specifying
allowable combinations of values for subsets of variables
Simple example of a **formal representation language**
Allows useful **general-purpose** algorithms with more power
than standard search algorithms
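As an illustration of such a general-purpose algorithm, here is a minimal naive backtracking solver over the variables/domains/constraints formulation. This is an illustrative sketch, not a production solver; binary constraints are given as predicates over pairs of values:

```python
def backtrack(assignment, variables, domains, constraints):
    """Naive backtracking search over a binary CSP.

    `constraints` maps a pair of variables (u, w) to a predicate over
    their values; only pairs present in the dict are constrained.
    """
    if len(assignment) == len(variables):
        return assignment                      # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        trial = {**assignment, var: value}
        # Check every constraint whose two endpoints are both assigned.
        consistent = all(pred(trial[u], trial[w])
                         for (u, w), pred in constraints.items()
                         if u in trial and w in trial)
        if consistent:
            result = backtrack(trial, variables, domains, constraints)
            if result is not None:
                return result
    return None                                # dead end: backtrack
```

For the map-coloring example below, each constraint is simply the not-equal predicate on two adjacent regions.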
Constraint Satisfaction
**Example: map coloring**
Variables - countries (A, B, C, etc.)
Values - colors (e.g., red, green, yellow)
Constraints: $A \neq B$, $A \neq D$, $D \neq E$, etc.
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
</tr>
</thead>
<tbody>
<tr>
<td>red</td>
<td>green</td>
</tr>
<tr>
<td>red</td>
<td>yellow</td>
</tr>
<tr>
<td>green</td>
<td>red</td>
</tr>
<tr>
<td>green</td>
<td>yellow</td>
</tr>
<tr>
<td>yellow</td>
<td>green</td>
</tr>
<tr>
<td>yellow</td>
<td>red</td>
</tr>
</tbody>
</table>
Example: Map-Coloring
Variables \( WA, NT, Q, NSW, V, SA, T \)
Domains \( D_i = \{\text{red}, \text{green}, \text{blue}\} \)
Constraints: adjacent regions must have different colors
e.g., \( WA \neq NT \) (if the language allows this), or
\( (WA, NT) \in \{(\text{red}, \text{green}), (\text{red}, \text{blue}), (\text{green}, \text{red}), (\text{green}, \text{blue}), \ldots\} \)
Example: Map-Coloring contd.
**Solutions** are assignments satisfying all constraints, e.g.,
\[
\{WA = \text{red}, NT = \text{green}, Q = \text{red}, NSW = \text{green}, V = \text{red}, SA = \text{blue}, T = \text{green}\}
\]
**Constraint graph**
*Binary CSP*: each constraint relates at most two variables
*Constraint graph*: nodes are variables, arcs show constraints
General-purpose CSP algorithms use the graph structure to speed up search. E.g., Tasmania is an independent subproblem!
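The independent-subproblem observation corresponds to computing connected components of the constraint graph; a small sketch:

```python
def connected_components(variables, constraint_pairs):
    """Split a constraint graph into its connected components, i.e.,
    the independent subproblems (such as Tasmania in the map above)."""
    adj = {v: set() for v in variables}
    for u, w in constraint_pairs:
        adj[u].add(w)
        adj[w].add(u)
    seen, components = set(), []
    for v in variables:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:                  # iterative depth-first search
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        components.append(comp)
    return components
```

On the Australia map, the mainland regions form one component and Tasmania forms another, so each can be solved separately.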
Sudoku
Each row, column and major block must be all different
“Well posed” if it has unique solution: 27 constraints
Variables: 81 slots
Domains = \{1,2,3,4,5,6,7,8,9\}
Constraints:
- 27 alldifferent (9 rows, 9 columns, 9 blocks)
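The 27 alldifferent scopes can be constructed mechanically; a sketch, with the 81 slots indexed 0–80 in row-major order:

```python
def sudoku_groups():
    """Return the 27 alldifferent scopes: 9 rows, 9 columns and
    9 major blocks, over slots indexed 0..80 row-major."""
    rows = [[9 * r + c for c in range(9)] for r in range(9)]
    cols = [[9 * r + c for r in range(9)] for c in range(9)]
    blocks = [[9 * (3 * br + r) + 3 * bc + c
               for r in range(3) for c in range(3)]
              for br in range(3) for bc in range(3)]
    return rows + cols + blocks

def consistent(grid, groups):
    """Check a fully filled 81-entry grid: every group must
    contain nine distinct values."""
    return all(len({grid[i] for i in g}) == 9 for g in groups)
```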
Constraint propagation
A network of binary constraints
- Variables $X_1, \ldots, X_n$
- Domains $D_1, \ldots, D_n$
- sets of discrete values,
- Binary constraints $R_{ij}$
- represent the list of allowed pairs of values; $R_{ij} \subseteq D_i \times D_j$
- Constraint graph:
- A node for each variable and an arc for each constraint
- Solution:
- An assignment of a value to each variable such that no constraint is violated.
- A network of constraints represents the relation of all solutions.
$\text{Solutions} = \{ (X_1, \ldots, X_n) \mid (X_i, X_j) \in R_{ij}, X_i \in D_i, X_j \in D_j \}$
Varieties of constraints
• **Unary** constraints involve a single variable,
– e.g., SA \neq \text{green}
• **Binary** constraints involve pairs of variables,
– e.g., SA \neq \text{WA}
• **Higher-order** constraints involve 3 or more variables,
– e.g., cryptarithmetic column constraints
Cryptarithmetic
• Each letter represents a different digit
• They should satisfy the addition constraint
\[
\begin{array}{ccccc}
  &   & T & W & O \\
+ &   & T & W & O \\
\hline
  & F & O & U & R \\
\end{array}
\]
Variables: \( F, T, U, W, R, O, X_1, X_2, X_3 \)
Domains: \( \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\} \)
Constraints
\( \text{alldiff}(F, T, U, W, R, O) \)
\( O + O = R + 10 \cdot X_1, \text{ etc.} \)
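The constraint model above can be checked by brute force over digit assignments; this sketch enumerates all injective assignments with nonzero leading digits:

```python
# Brute-force check of TWO + TWO = FOUR with all letters distinct
# and leading digits nonzero.
from itertools import permutations

solutions = []
for F, T, U, W, R, O in permutations(range(10), 6):
    if F == 0 or T == 0:          # leading digits must be nonzero
        continue
    two = 100 * T + 10 * W + O
    four = 1000 * F + 100 * O + 10 * U + R
    if two + two == four:
        solutions.append((two, four))

print((734, 1468) in solutions)  # True: 734 + 734 = 1468 is one solution
```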
Examples
• Cryptarithmetic: SEND+MORE = MONEY
• n-Queens
• Crossword puzzles
• Graph coloring
• Vision problems
• Scheduling
– Assignment (who teaches what); timetable (where & when)
– Transportation scheduling, factory assembly, etc.
Example 1: The 4-queen problem
Place 4 queens on a 4x4 chess board such that no two queens reside in the same row, column or diagonal.
Standard CSP formulation of the problem:
- **Variables**: each row is a variable, \( X_1, X_2, X_3, X_4 \).
- **Domains**: \( D_i = \{1,2,3,4\} \).
- **Constraints**: There are \( \binom{4}{2} = 6 \) constraints involved:
\[
\begin{align*}
R_{12} &= \{(1,3)(1,4)(2,4)(3,1)(4,1)(4,2)\} \\
R_{13} &= \{(1,2)(1,4)(2,1)(2,3)(3,2)(3,4)(4,1)(4,3)\} \\
R_{14} &= \{(1,2)(1,3)(2,1)(2,3)(2,4)(3,1)(3,2)(3,4)(4,2)(4,3)\} \\
R_{23} &= \{(1,3)(1,4)(2,4)(3,1)(4,1)(4,2)\} \\
R_{24} &= \{(1,2)(1,4)(2,1)(2,3)(3,2)(3,4)(4,1)(4,3)\} \\
R_{34} &= \{(1,3)(1,4)(2,4)(3,1)(4,1)(4,2)\}
\end{align*}
\]
- **Constraint Graph**: a node for each of \( X_1, X_2, X_3, X_4 \) and an arc for each of the six constraints (the graph is complete).
Standard search formulation (incremental)
Let’s start with the straightforward, dumb approach, then fix it
States are defined by the values assigned so far
◊ **Initial state**: the empty assignment, \{\}
◊ **Successor function**: assign a value to an unassigned variable that does not conflict with current assignment.
\[\Rightarrow\text{ fail if no legal assignments (not fixable!)}\]
◊ **Goal test**: the current assignment is complete
1) This is the same for all CSPs!
2) Every solution appears at depth \(n\) with \(n\) variables
\[\Rightarrow\text{ use depth-first search}\]
3) Path is irrelevant, so can also use complete-state formulation
4) \(b = (n - \ell)d\) at depth \(\ell\), hence \(n!d^n\) leaves!!!!
Backtracking search
Variable assignments are commutative, i.e.,
\[ [WA = \text{red} \text{ then } NT = \text{green}] \text{ same as } [NT = \text{green} \text{ then } WA = \text{red}] \]
Only need to consider assignments to a single variable at each node
\[ \Rightarrow b = d \text{ and there are } d^n \text{ leaves} \]
Depth-first search for CSPs with single-variable assignments is called backtracking search
Backtracking search is the basic uninformed algorithm for CSPs
Can solve \( n \)-queens for \( n \approx 25 \)
The search space
\[ X_1, \ldots, X_n \]
- **Definition**: given an ordering of the variables
- **a state**:
- is an assignment to a subset of variables that is consistent.
- **Operators**:
- add an assignment to the next variable that does not violate any constraint.
- **Goal state**:
- a consistent assignment to all the variables.
Backtracking example
Backtracking search
function Backtracking-Search(csp) returns solution/failure
return Recursive-Backtracking({}, csp)
function Recursive-Backtracking(assignment, csp) returns soln/failure
if assignment is complete then return assignment
var ← Select-Unassigned-Variable(Variables[csp], assignment, csp)
for each value in Order-Domain-Values(var, assignment, csp) do
if value is consistent with assignment given Constraints[csp] then
add {var = value} to assignment
result ← Recursive-Backtracking(assignment, csp)
if result ≠ failure then return result
remove {var = value} from assignment
return failure
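The pseudocode above translates almost line for line into Python; here it is run on the Australia map-coloring CSP, with the variable and value orderings left as simple first-unassigned / fixed-order placeholders:

```python
# A direct Python rendering of Recursive-Backtracking, run on the
# map-coloring CSP (Australia).

neighbors = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
colors = ["red", "green", "blue"]

def consistent(var, value, assignment):
    return all(assignment.get(n) != value for n in neighbors[var])

def backtrack(assignment):
    if len(assignment) == len(neighbors):        # assignment is complete
        return assignment
    var = next(v for v in neighbors if v not in assignment)  # Select-Unassigned-Variable
    for value in colors:                         # Order-Domain-Values
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                  # undo and try next value
    return None                                  # failure

solution = backtrack({})
print(solution is not None)  # True
```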
Dependence on variable ordering
Example 2: Given the following constraint network:
Variables: $Z, X, Y$
Domains: $D_Z = \{5, 2\} \quad D_X = \{2, 4\} \quad D_Y = \{5, 2\}$
Constraints: $R_{ZX} \equiv Z \text{ divides } X, \quad R_{ZY} \equiv Z \text{ divides } Y,$
With the ordering $d = \{Z, X, Y\}$, the search space explored is:
With the ordering $d = \{X, Z, Y\}$, the search space explored is:
Recap
• Constraint Satisfaction problem (e.g., map coloring)
– Variables: (regions of a map)
– Domains: values that the variables can take (colors)
– Constraints: Restrictions on values that can be assigned simultaneously
• Backtracking search
– a state is an assignment to a subset of variables that is consistent.
– Operators: add an assignment to the next variable that does not violate any constraint.
– DFS in this state-space
• Select an unassigned variable and assign a value to it!!
Improving backtracking efficiency
General-purpose methods can give huge gains in speed:
1. Which variable should be assigned next?
2. In what order should its values be tried?
3. Can we detect inevitable failure early?
4. Can we take advantage of problem structure?
Which variable to choose next?
Minimum remaining values (MRV): choose the variable with the fewest legal values.
Degree heuristic (tie-breaker among MRV variables): choose the variable with the most constraints on remaining variables.
In what order should a variable’s values be tried?
Least constraining value: given a variable, choose the least constraining value, i.e., the one that rules out the fewest values in the remaining variables.
Combining these heuristics makes 1000 queens feasible.
Can we detect failures early?
Forward checking
Idea: Keep track of remaining legal values for unassigned variables. Terminate search when any variable has no legal values.
Constraint propagation
Forward checking propagates information from assigned to unassigned variables, but doesn’t provide early detection for all failures:
\[\text{WA} \rightarrow \text{NT} \rightarrow \text{Q} \rightarrow \text{NSW} \rightarrow \text{V} \rightarrow \text{SA} \rightarrow \text{T}\]
\(\text{NT}\) and \(\text{SA}\) cannot both be blue!
Constraint propagation repeatedly enforces constraints locally
Also called consistency enforcement techniques
Consistency enforcement
Consistency enforcement techniques
- Arc-consistency (Waltz, 1972)
- Path-consistency (Montanari 1974, Mackworth 1977)
- I-consistency (Freuder 1982)
- Transform the network into smaller and smaller networks.
Arc consistency
Simplest form of propagation makes each arc consistent.
\( X \rightarrow Y \) is consistent iff for every value \( x \) of \( X \) there is some allowed \( y \).
If \( X \) loses a value, neighbors of \( X \) need to be rechecked.
Arc consistency detects failure earlier than forward checking.
Can be run as a preprocessor or after each assignment.
Arc consistency algorithm
function AC-3 (csp) returns the CSP, possibly with reduced domains
inputs: csp, a binary CSP with variables \{X_1, X_2, \ldots, X_n\}
local variables: queue, a queue of arcs, initially all the arcs in csp
while queue is not empty do
\((X_i, X_j) \leftarrow \text{REMOVE-FIRST}(queue)\)
if \text{REMOVE-INCONSISTENT-VALUES}(X_i, X_j) then
for each \(X_k\) in \text{NEIGHBORS}[X_i] do
add \((X_k, X_i)\) to queue
function \text{REMOVE-INCONSISTENT-VALUES}(X_i, X_j) returns true iff succeeds
removed \leftarrow false
for each \(x\) in \text{DOMAIN}[X_i] do
if no value \(y\) in \text{DOMAIN}[X_j] allows \((x,y)\) to satisfy the constraint between \(X_i\) and \(X_j\)
then delete \(x\) from \text{DOMAIN}[X_i]; \ removed \leftarrow true
return removed
\(O(n^2d^3)\), can be reduced to \(O(n^2d^2)\) (but detecting all is NP-hard)
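The pseudocode translates to a short sketch; the four-variable numeric network used below matches the worked example that follows:

```python
# AC-3, transcribed from the pseudocode above, on a small network:
# 1 <= X, Y, Z, T <= 3 with X < Y, Y = Z, T < Z, X <= T.
from collections import deque

domains = {v: {1, 2, 3} for v in "XYZT"}
# For each directed arc (Xi, Xj), a test taking (value of Xi, value of Xj).
arcs = {
    ("X", "Y"): lambda x, y: x < y,  ("Y", "X"): lambda y, x: x < y,
    ("Y", "Z"): lambda y, z: y == z, ("Z", "Y"): lambda z, y: y == z,
    ("T", "Z"): lambda t, z: t < z,  ("Z", "T"): lambda z, t: t < z,
    ("X", "T"): lambda x, t: x <= t, ("T", "X"): lambda t, x: x <= t,
}

def remove_inconsistent_values(xi, xj):
    removed = False
    for x in set(domains[xi]):
        if not any(arcs[(xi, xj)](x, y) for y in domains[xj]):
            domains[xi].discard(x)   # x has no support in Dj
            removed = True
    return removed

queue = deque(arcs)                  # initially all arcs
while queue:
    xi, xj = queue.popleft()
    if remove_inconsistent_values(xi, xj):
        for (xk, xl) in arcs:        # recheck arcs into Xi
            if xl == xi and xk != xj:
                queue.append((xk, xl))

print(domains)  # X and T pruned to {1, 2}; Y and Z pruned to {2, 3}
```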
Arc-consistency
Example network: \( 1 \leq X, Y, Z, T \leq 3 \) with constraints
\[ X < Y, \quad Y = Z, \quad T < Z, \quad X \leq T \]
\( X \rightarrow Y \) is consistent iff for every value \( x \) of \( X \) there is some allowed \( y \).
• Incorporated into backtracking search
Problem structure
Tasmania and mainland are independent subproblems
Identifiable as connected components of constraint graph
Problem structure contd.
Suppose each subproblem has $c$ variables out of $n$ total
Worst-case solution cost is $\frac{n}{c} \cdot d^c$, linear in $n$
E.g., $n = 80$, $d = 2$, $c = 20$
$2^{80}$ nodes $\approx 4$ billion years at 10 million nodes/sec
$4 \cdot 2^{20}$ nodes $\approx 0.4$ seconds at 10 million nodes/sec
Tree-structured CSPs
Theorem: if the constraint graph has no loops, the CSP can be solved in $O(nd^2)$ time.
Compare to general CSPs, where worst-case time is $O(d^n)$.
This property also applies to logical and probabilistic reasoning: an important example of the relation between syntactic restrictions and the complexity of reasoning.
Algorithm for tree-structured CSPs
1. Choose a variable as root, order variables from root to leaves such that every node’s parent precedes it in the ordering
2. For $j$ from $n$ down to 2, apply $\text{REMOVEINCONSISTENT}(\text{Parent}(X_j), X_j)$
3. For $j$ from 1 to $n$, assign $X_j$ consistently with $\text{Parent}(X_j)$
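The three steps can be sketched on a path-shaped CSP (a path is a tree); the not-equal constraint and the tiny domains are illustrative choices:

```python
# The tree-CSP algorithm on the path A - B - C, domains {1, 2},
# with an illustrative not-equal constraint on each edge.

order = ["A", "B", "C"]          # step 1: root A; parents precede children
parent = {"B": "A", "C": "B"}
domains = {v: {1, 2} for v in order}

def remove_inconsistent(xi, xj): # prune values of xi with no support in xj
    for x in set(domains[xi]):
        if not any(x != y for y in domains[xj]):
            domains[xi].discard(x)

for child in reversed(order[1:]):              # step 2: leaves toward root
    remove_inconsistent(parent[child], child)

assignment = {}
for v in order:                                # step 3: root toward leaves
    assignment[v] = next(x for x in domains[v]
                         if v not in parent or x != assignment[parent[v]])

print(assignment)
```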
**Nearly tree-structured CSPs**
**Conditioning:** instantiate a variable, prune its neighbors’ domains
**Cutset conditioning:** instantiate (in all ways) a set of variables such that the remaining constraint graph is a tree
Cutset size $c \Rightarrow$ runtime $O(d^c \cdot (n - c)d^2)$, very fast for small $c$
Hill-climbing, simulated annealing typically work with "complete" states, i.e., all variables assigned
To apply to CSPs:
- allow states with unsatisfied constraints
- operators \texttt{reassign} variable values
Variable selection: randomly select any conflicted variable
Value selection by \texttt{min-conflicts} heuristic:
- choose value that violates the fewest constraints
- i.e., hillclimb with $h(n) = \text{total number of violated constraints}$
Example: 4-Queens
States: 4 queens in 4 columns \( (4^4 = 256 \text{ states}) \)
Operators: move queen in column
Goal test: no attacks
Evaluation: \( h(n) = \text{number of attacks} \)
Performance of min-conflicts
Given random initial state, can solve $n$-queens in almost constant time for arbitrary $n$ with high probability (e.g., $n = 10,000,000$)
The same appears to be true for any randomly-generated CSP except in a narrow range of the ratio
$$R = \frac{\text{number of constraints}}{\text{number of variables}}$$
Summary
CSPs are a special kind of problem:
states defined by values of a fixed set of variables
goal test defined by constraints on variable values
Backtracking = depth-first search with one variable assigned per node
Variable ordering and value selection heuristics help significantly
Forward checking prevents assignments that guarantee later failure
Constraint propagation (e.g., arc consistency) does additional work to constrain values and detect inconsistencies
The CSP representation allows analysis of problem structure
Tree-structured CSPs can be solved in linear time
Iterative min-conflicts is usually effective in practice
Propositional Satisfiability
Example: party problem
• If Alex goes, then Becky goes: \( A \rightarrow B \) (or, \( \neg A \lor B \))
• If Chris goes, then Alex goes: \( C \rightarrow A \) (or, \( \neg C \lor A \))
• Query:
Is it possible that Chris goes to the party but Becky does not?
\[ \varphi = \{ \neg A \lor B, \neg C \lor A, \neg B, C \} \]
is the proposition satisfiable?
Unit Propagation
• Arc-consistency for cnfs.
• Involve a single clause and a single literal
• Example: \((A \lor \neg B \lor C) \land (B) \rightarrow (A \lor C)\)
Look-ahead for SAT
(Davis-Putnam, Logemann and Loveland, 1962)
\[
\text{DPLL}(\varphi)
\]
Input: A cnf theory \( \varphi \)
Output: A decision of whether \( \varphi \) is satisfiable.
1. \( \text{Unit\_propagate}(\varphi) \);
2. If the empty clause is generated, return(\text{false});
3. Else, if all variables are assigned, return(\text{true});
4. Else
5. \( Q = \text{some unassigned variable} \);
6. return( DPLL(\varphi \land Q) \lor \text{DPLL}(\varphi \land \neg Q) )
Figure 5.13: The DPLL Procedure
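A minimal Python sketch of DPLL following the pseudocode, using integer literals (a negative integer is a negated variable), run on the party-problem theory $\varphi$ from earlier:

```python
# Compact DPLL: a clause is a frozenset of literals; an assignment is a
# frozenset of literals made true.

def unit_propagate(clauses, assignment):
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                       # clause already satisfied
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if not unassigned:
                return None                    # empty clause generated
            if len(unassigned) == 1:           # unit clause: forced literal
                assignment = assignment | {unassigned[0]}
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None
    variables = {abs(l) for c in clauses for l in c}
    free = [v for v in variables if v not in assignment and -v not in assignment]
    if not free:
        return assignment                      # all variables assigned
    q = free[0]                                # Q = some unassigned variable
    return dpll(clauses, assignment | {q}) or dpll(clauses, assignment | {-q})

# Party problem: A -> B, C -> A, not B, C   (1 = A, 2 = B, 3 = C).
phi = [frozenset({-1, 2}), frozenset({-3, 1}), frozenset({-2}), frozenset({3})]
print(dpll(phi))  # None: the theory is unsatisfiable
```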
Look-ahead for SAT: DPLL
Example: \((\neg A \lor B) \land (\neg C \lor A) \land (A \lor B \lor D) \land (C)\)
(Davis-Putnam, Logemann and Loveland, 1962)
GSAT – local search for SAT
(Selman, Levesque and Mitchell, 1992)
1. For i = 1 to MaxTries
2.   Select a random assignment A
3.   For j = 1 to MaxFlips
4.     if A satisfies all constraints, return A
5.     else flip the variable that maximizes the score
        (number of satisfied constraints; if no variable
        assignment increases the score, flip at random)
6.   end
7. end
Greatly improves hill-climbing by adding restarts and sideway moves
WalkSAT
(Selman, Kautz and Cohen, 1994)
Adds random walk to GSAT:
With probability $p$
random walk – flip a variable in some unsatisfied constraint
With probability $1-p$
perform a hill-climbing step
Randomized hill-climbing often solves large and hard satisfiable problems
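A WalkSAT sketch following the description above; the clause set, the walk probability $p$, and the flip budget are illustrative choices:

```python
# WalkSAT: with probability p flip a random variable of an unsatisfied
# clause (random walk); otherwise flip the variable of that clause that
# maximizes the number of satisfied clauses (hill-climbing).
import random
random.seed(1)

def satisfied(clause, model):
    return any(model[abs(l)] == (l > 0) for l in clause)

def walksat(clauses, p=0.5, max_flips=10000):
    variables = {abs(l) for c in clauses for l in c}
    model = {v: random.random() < 0.5 for v in variables}
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, model)]
        if not unsat:
            return model
        clause = random.choice(unsat)
        if random.random() < p:                  # random walk step
            v = abs(random.choice(clause))
        else:                                    # hill-climbing step
            def score(v):
                model[v] = not model[v]
                s = sum(satisfied(c, model) for c in clauses)
                model[v] = not model[v]
                return s
            v = max((abs(l) for l in clause), key=score)
        model[v] = not model[v]
    return None

clauses = [[1, 2], [-1, 3], [-2, -3], [2, 3]]
model = walksat(clauses)
print(model is not None)  # True: the formula is satisfiable
```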
More Stochastic Search
• Simulated annealing:
– A method for overcoming local minima
– Allows bad moves with some probability:
• With some probability related to a temperature parameter T the next move is picked randomly.
– Theoretically, with a slow enough cooling schedule, this algorithm will find the optimal solution. But so will searching randomly.
• Breakout method (Morris, 1990): adjust the weights of the violated constraints
Towards Composing Access Control Policies
Muhammad Shahzad
Department of Computer Science
North Carolina State University
Raleigh, NC 27606
Email: mshahza@ncsu.edu
Abstract—Existing access control languages assume a single monolithic specification of the entire access control policy. This assumption does not fit many real-world situations, where access control might need to combine independently stated restrictions in separate policies that should be enforced as one. Furthermore, due to the rapidly increasing number, size, and complexity of the access control policies in the enterprise and IoT communication networks, efficient evaluation of access requests against large number of policies is challenging. Thus, there is an imminent need to develop a framework that composes distributed access control policies into a single policy. In this paper, we propose ACE, an access control policy composition engine, which uses a novel algebraic framework to combine any pair of access control policies into a single resultant policy. Through repetitive application, ACE can generate a resultant policy from any arbitrary number of individual policies. ACE can perform three binary composition operations on access control policies, namely addition, conjunction, and subtraction. The decision to any arbitrary access control request by the composed policy is the same as the decision for that request obtained by combining the independently obtained decisions from all policies. We implemented ACE, integrated it with SUN’s implementation of XACML (a well-known access control language), and extensively evaluated it on a large number of policies. Our results show that the decisions obtained using the policy composed with ACE match the decision obtained from SUN’s implementation for 100% of the access requests. Our results also show that with ACE, the evaluation time of access requests reduces by at least an order of magnitude compared to when ACE is not used.
I. INTRODUCTION
Researchers have proposed several approaches to increase the expressiveness and flexibility of access control languages by supporting multiple policies within a single framework [3], [7]. Although all these approaches are based on powerful languages able to express different policies, they still assume a single monolithic specification of the entire policy. This assumption does not fit many real-world situations, where access control might need to combine independently stated restrictions that should be enforced as one. An example of such a real-world scenario is the smart city IoT infrastructure, where the infrastructure administrators and the infrastructure users impose their own independently formulated access restrictions on the data collected by the IoT infrastructure. In such a scenario, the infrastructure access control policy has to combine the access control policies from each of the administrators and users. Another example is represented by “dynamic coalition” scenarios where different parties, coming together for a common goal for a limited time, need to merge their access control policies for shared resources, such as the wireless network, in a controlled way while retaining their autonomy. It may also be desirable to make an aggregate access control policy by combining several small independently conceived policies. This situation calls for a policy composition framework, using which, different component policies can be integrated into a single policy while retaining their independence.
Another motivation to develop a policy composition framework is the rapidly increasing number, size, and complexity of the access control policies due to the explosive growth of resources and applications deployed in the enterprise and IoT communication networks. Typically, the access control policies are constructed independently by different entities in both enterprise and IoT communication networks. Efficient evaluation of access requests against even a single access control policy is already challenging [9]. Evaluation against multiple policies only exacerbates this challenge because a policy evaluation engine needs to retrieve all the access control policies, evaluate a request against each policy, and combine all decisions into a final decision based on some predefined mechanism for resolving the decision conflict. Therefore, it is very important to develop a framework that composes distributed access control policies into a single policy such that the policy evaluation engine only needs to evaluate a request against that single policy, which is much more efficient than evaluating a request against all distributed policies.
In this paper, we propose ACE, an access control policy composition engine, which uses a novel algebraic framework to combine any two arbitrary access control policies into a single aggregate access control policy. More specifically, given two access control policy specifications \(A\) and \(B\), ACE generates an aggregate composed policy specification \(R\) such that the access control decisions resulting from \(R\) are the same as the combined decisions resulting from using \(A\) and \(B\) separately. Through repetitive application, ACE can generate an aggregate access control policy from any arbitrary number of individual access control policies. It can perform three composition operations on access control policies, namely addition, conjunction, and subtraction. These are binary operations that take two policies and perform the desired composition operation to generate a resultant policy.
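As an illustration only, the decision-level effect of the three operations can be sketched with a toy permit/deny algebra; these combination rules are my own assumption for exposition, not ACE's actual algebraic definitions:

```python
# Illustrative sketch (assumed semantics, not ACE's): how the three
# binary operations might combine per-request decisions from two policies.

PERMIT, DENY = "Permit", "Deny"

def addition(d1, d2):      # permit if either policy permits
    return PERMIT if PERMIT in (d1, d2) else DENY

def conjunction(d1, d2):   # permit only if both policies permit
    return PERMIT if d1 == d2 == PERMIT else DENY

def subtraction(d1, d2):   # what policy 1 permits minus what policy 2 permits
    return PERMIT if d1 == PERMIT and d2 == DENY else DENY

print(addition(PERMIT, DENY),
      conjunction(PERMIT, DENY),
      subtraction(PERMIT, DENY))  # Permit Deny Permit
```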
The objective of composing multiple component policies into a single aggregate composed policy is very challenging due to two main reasons. First, the aggregate composed policy may lead to unforeseen decisions to certain access requests due to interference between the component policies. Second,
the composed policy may lead to the loss of control on the individual component policies and their autonomous maintainability. In our design of ACE, we have addressed both these challenges. Our approach ensures that in combining any pair of policies, all theoretically possible outcomes are identified and accounted for. It also allows us to update composed policies if the component policy from which the composed policy has been derived is changed.
In its current implementation, ACE composes policies specified in XACML (eXtensible Access Control Markup Language) [6]. XACML is a rich language proposed by OASIS [1] that can be used to implement access control policies for various applications, such as networking [5], web services [10], smart homes [8], and several more. In XACML, whenever a subject has to access a resource (e.g., a user application probing an IoT sensor), it sends the request to a policy enforcement point (PEP). PEP manages the access to the protected resources. PEP forwards this request to a policy decision point (PDP) to find out whether or not the subject has the privilege to access the resource. PDP checks the request against the XACML policy and determines whether the request should be permitted or denied. PDP sends its decision back to PEP which enforces it. Our ACE resides in PDP, where it takes all individually specified XACML policies as input and generates a single XACML policy, which the PDP checks the requests against. While the current implementation of ACE works with XACML, the algebraic framework that ACE employs uses generic primitives that can be seamlessly extended to most other access control languages.
II. RELATED WORK
We first describe prior work on optimizing the XACML PDP. After that, we introduce some existing algebras proposed for access control. While the functionalities provided by these algebras are relevant to XACML policy composition, all of them come with fundamental limitations that make them unusable for policy composition.
A. PDP Optimization
Sun provides an open-source implementation of an XACML PDP [2]. This implementation performs a brute-force search by comparing any given access request against all the rules specified in an XACML policy set. This, however, is not an efficient approach, and causes severe bottlenecks in large-scale systems at runtime. Liu et al. proposed XEngine, which introduces a new representation of access control policies that lets the PDP make decisions based only on the first applicable rule, instead of performing a brute-force search over the entire set of policies [9]. To achieve this, XEngine converts the string values in XACML access rules to a numerical format using an operation called policy numericalization. The numericalized XACML policy has a hierarchical structure with multiple matching rules. Next, it applies another operation, called policy normalization, to convert this numericalized policy into a flat numerical structure with only a single matching rule. Last, it converts the numericalized and normalized policy to a tree structure, which is used to efficiently process access requests in the PDP. While XEngine significantly improves the performance of the PDP, it still cannot compose multiple policies into a single one.
B. Policy Algebras for Access Control
Ni et al. introduced an algebra, namely “\(\mathcal{D}\)-algebra”, where \(\mathcal{D}\) stands for “decision”. \(\mathcal{D}\)-algebra is functionally complete, i.e., any possible decision matrix can be represented in it. The primary design objective of \(\mathcal{D}\)-algebra is to avoid unintended results in standard policy algorithms that arise from the lack of formal semantics in the decision model. The powerset interpretation of \(\mathcal{D}\)-algebra highlights existing drawbacks in XACML rule/policy evaluation truth tables and policy combining algorithms, and at the same time suggests a set of solutions to overcome these problems. Unfortunately, this algebra is not directly usable for our problem because it cannot perform operations such as addition and subtraction on individual policies. Bonatti et al. proposed another algebra of security policies along with the associated formal semantics [4]. Their framework formulates complex policies as expressions of the algebra and is flexible enough to support the composition process by organizing compound specifications into different levels of abstraction. More specifically, the authors analyze the problem of composing security policies in a modular and incremental fashion and propose an algebra of security policies as a composition language.
III. XACML OVERVIEW
In this section, we present a brief overview of XACML. The detailed description of XACML can be found in [6]. The fundamental entity in an XACML policy is a rule. A rule is made up of a target, an effect and optionally, a condition. The target is a predicate over subject(s) (such as an IoT infrastructure user), resource(s) (such as sensor data), and action(s) (such as delete sensor data) of the access requests. The effect of a rule is the decision made by that rule, which can either be permit or deny. The condition is used to further refine the applicability of the rule beyond the predicate specified by its target. The effect of a rule is returned in response to a request if and only if the request matches both the target and the condition of that rule.
On top of a set of rules exists a policy. A policy consists of a target, a set of rules, and a rule combining algorithm. The PDP checks a request against the rules of a policy only if the request satisfies the target of that policy. On top of a set of policies exists a policy-set. A policy-set consists of a sequence of policies or policy-sets, a policy combining algorithm, and a target. The PDP matches the targets of a policy and a policy-set in the same way as it matches the target of a rule.
The rule/policy combining algorithms resolve an access decision in the case of a conflict or redundancy within a policy or policy set. XACML supports four combining algorithms: (1) first-applicable, (2) only-one-applicable,
(3) permit-overrides, and (4) deny-overrides. If using first-applicable, PDP returns the effect of the first rule (policy) that matches the access request. If using only-one-applicable, PDP returns the effect of the only applicable rule (policy). It returns indeterminate if more than one rule (policy) match a request. If using permit-overrides, PDP returns permit if at least one rule (policy) that matches the request has the effect of permit. If using deny-overrides, PDP returns deny if at least one rule (policy) that matches the request has the effect of deny. If a request does not match against any rule (policy), then PDP returns not-applicable.
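The behavior of the four combining algorithms can be made concrete with a small sketch. Python is used purely for illustration here; the function and its argument names are ours, not part of the XACML specification:

```python
def combine(decisions, algorithm):
    """Combine the effects of the rules (or policies) that matched a request.

    `decisions` lists the effects ("permit"/"deny") of the applicable
    rules, in the order in which they appear in the policy.
    """
    if not decisions:
        return "not-applicable"  # no rule (policy) matched the request
    if algorithm == "first-applicable":
        return decisions[0]
    if algorithm == "only-one-applicable":
        return decisions[0] if len(decisions) == 1 else "indeterminate"
    if algorithm == "permit-overrides":
        return "permit" if "permit" in decisions else "deny"
    if algorithm == "deny-overrides":
        return "deny" if "deny" in decisions else "permit"
    raise ValueError(f"unknown combining algorithm: {algorithm}")
```

For instance, with matching effects `["deny", "permit"]`, first-applicable yields deny while permit-overrides yields permit.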
IV. ACCESS CONTROL POLICY COMPOSITION ENGINE
Next, we describe how ACE composes any two independent policies into a single policy. We use the term compose rather than combine because, in addition to combining any two given policies, ACE can also subtract the permissions of one policy from the other. Specifically, we present three fundamental policy composition operations: (1) addition, (2) conjunction, and (3) subtraction. These are binary operations that take two policies and perform the desired composition operation to generate a resultant policy. We first give an overview of ACE, followed by a description of each step that ACE performs.
A. ACE Overview
To compose any two given policies, ACE first needs to identify the decision of both policies for any valid request without actually having to enumerate all possible valid requests. To do so, it performs four steps. First, it represents each policy through a tree-like structure called a policy decision diagram (PDD). Second, it makes the tree structures of the two policies identical by applying a process that we call PDD shaping. Third, it applies the required composition operation (i.e., addition, conjunction, or subtraction) on the corresponding leaves of the two identical trees, and obtains a resultant PDD. Last, it regenerates rules from the resultant PDD according to the semantics of the language being used (in our case XACML). The policy evaluation engine uses the rules from this single resultant policy and arrives at the same decisions for any given request that it would arrive at had it used the two policies separately and then combined their results. Note that the four steps that ACE uses to compose the policies are generic and not tied to any particular access control language. Figure 1 shows a block diagram of the four steps involved in composing any two given policies into a single resultant policy. Next, we describe these four steps.
B. PDD Conversion
To convert an XACML policy to a PDD, we first numericalize all the attributes in its rules, mapping each distinct attribute value to a consecutive integer starting from 0. After numericalization, each rule can be represented as a range of integers. This step is followed by the conversion of these numericalized rules into a PDD. The basic idea of the PDD is to generate a directed tree such that any two overlapping rules in the given policy are split into multiple non-overlapping rules represented by distinct directed paths from the root to a leaf. Each directed path from the root to a leaf represents a distinct non-overlapping rule. The leaf contains the effect of the rule, which can be permit, deny, or not-applicable. Two rules are considered overlapping if they have at least one attribute value in common in the subject, resource, and/or action parts of their targets. Formally, a PDD for any given XACML policy with \(d\) attributes \(A_1, A_2, \ldots, A_d\) has the following 5 properties:
1) There is exactly one vertex with no incoming edges. It is called the root. The vertices with no outgoing edges are called terminal nodes.
2) Each vertex \(v\) has a label \(L(v)\). If \(v\) is a nonterminal node, then \(L(v) \in \{A_1, A_2, \cdots, A_d\}\). If \(v\) is a terminal node, then \(L(v) \in \{\text{permit}, \text{deny}, \text{not-applicable}\}\).
3) Each edge \(e: u \rightarrow v\) is labeled with a nonempty set of integers, denoted \(I(e)\), where \(I(e)\) is a subset of the domain of \(u\)'s label (i.e., \(I(e) \subseteq D(A(u))\)).
4) A directed path from the root to a terminal node is called a decision path. No two nodes on a decision path have the same label.
5) The set of all outgoing edges of a node \(v\), denoted \(E(v)\), satisfies the following two conditions: (a) consistency: \(I(e) \cap I(e') = \emptyset\) for any two distinct edges \(e\) and \(e'\) in \(E(v)\); (b) completeness: \(\bigcup_{e \in E(v)} I(e) = D(A(v))\).
Figures 2(a) and 2(b) show two example PDDs. \(S\) represents the subject, \(R\) represents the resource, and \(A\) represents the action. Note from the figures that the range for subject is from 0 to 3, which means that in the original XACML policy, subject takes on 4 distinct string values. Similarly, there are two distinct values each for resource and action. Each terminal node gives the decision of the rule represented by the directed path from root to that terminal node.
Fig. 1: Architecture of our access control policy composition engine
Fig. 2: Examples of policy decision diagrams
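The PDD structure and its evaluation semantics can be sketched concretely. The tree below is our own toy example over the numericalized attributes S, R, and A (it is not the PDD of Figure 2); properties 5(a) and 5(b) guarantee that exactly one outgoing edge matches any attribute value:

```python
# A PDD node is either ("leaf", effect) or ("node", attribute, edges),
# where edges is a list of (set_of_integer_values, child) pairs.
PDD = ("node", "S", [
    ({0, 1}, ("node", "R", [
        ({0}, ("leaf", "permit")),
        ({1}, ("leaf", "deny")),
    ])),
    ({2, 3}, ("leaf", "not-applicable")),
])

def evaluate(pdd, request):
    """Follow the unique decision path selected by the request's values."""
    if pdd[0] == "leaf":
        return pdd[1]
    _, attr, edges = pdd
    for values, child in edges:
        if request[attr] in values:
            return evaluate(child, request)
    raise ValueError("incomplete PDD")  # impossible if property 5(b) holds
```

A request is thus resolved by a single root-to-leaf walk rather than by scanning every rule.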
C. PDD Shaping
Next, we shape these PDDs so that they become semi-isomorphic while remaining functionally equivalent to their respective original PDDs. These semi-isomorphic PDDs, denoted SPDDs, have the following three properties: (1) if the labels are ignored, the two SPDDs are identical in structure, (2) the labels of the root nodes, all the internal nodes, and the directed arcs are the same across the two SPDDs, and (3) the labels of the corresponding terminal nodes across the two SPDDs can be different. To convert any pair of PDDs into two semi-isomorphic PDDs, ACE applies the following three operations:
1) Node Insertion: If along all the decision paths containing a node $v$, there is no node that is labeled with the field $A_i$, then insert a node $v'$ labeled $A_i$ above $v$ and make all the incoming edges of $v$ point to $v'$, add an arc from $v'$ to $v$, and label this arc with the domain of $A_i$.
2) Arc Splitting: For an arc $e$ from $v_1$ to $v_2$, if $I(e) = S_1 \cup S_2$, where neither $S_1$ nor $S_2$ is empty, split $e$ into two arcs by replacing $e$ by two edges from $v_1$ to $v_2$, and labeling one arc with $S_1$ and the other with $S_2$.
3) Subgraph Replication: If a node $v$ has $m$ ($m \geq 2$) incoming edges, make $m$ copies of the subgraph rooted at $v$ and make each incoming edge of $v$ point to the root of one distinct copy.
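Of the three operations, arc splitting is the simplest to picture. A minimal sketch over an edge-list encoding of our own (edges as `(value_set, child)` pairs) might look like:

```python
def split_arc(edges, index, s1, s2):
    """Replace edge `index`, labeled s1 ∪ s2, by two edges labeled s1 and s2.

    Both new edges point to the same child, so the PDD stays functionally
    equivalent; only its shape changes, which is what allows two PDDs to
    be aligned structurally.
    """
    values, child = edges[index]
    assert s1 and s2 and not (s1 & s2) and (s1 | s2) == values
    return edges[:index] + [(s1, child), (s2, child)] + edges[index + 1:]
```

Node insertion and subgraph replication preserve equivalence in the same spirit: they only restructure the tree without changing any decision path's outcome.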
Figures 3(a) and 3(b) show the SPDDs obtained after the shaping of the PDDs shown in Figure 2. We can see that the structures of both SPDDs are exactly the same. The only difference lies at the decision level in the greyed terminal nodes.
D. Policy Composition
Next, we describe how we apply the three fundamental composition operations of addition, conjunction, and subtraction on the SPDDs to obtain the resultant PDD.
1) Addition (+): The addition operation is useful for any scenario where access requests can be authorized if allowed by either of the two policies, i.e., addition enforces maximum privilege: for a given decision path in the SPDDs, the decision of the resultant policy for that path is permit if either of the decisions of the two SPDDs on that path is permit. The decision of a terminal node $i$ of the resultant PDD $R$ is given by equation 1.

\[ dec_i^R = dec_i^1 \lor dec_i^2 \]

(1)

where $\lor$ represents the standard OR operation.

2) Conjunction ($\cdot$): Consider an example where all the engineering departments share a resource, such as the financial documents of the engineering division of the university. An access to this kind of resource may be granted only if all the authorities that have a stake in such a resource agree on it. This means that in the conjunction operation, $dec_i^R$ is equal to 1 if and only if both $dec_i^1$ and $dec_i^2$ are equal to 1. While addition enforces maximum privilege, conjunction enforces minimum privilege, i.e., for a given decision path in the SPDDs, the decision of the resultant policy for that decision path can be permit if and only if none of the decisions in the two SPDDs on that path is deny. The decision of a terminal node $i$ of PDD $R$ is given by equation 2.
\[ dec_i^R = dec_i^1 \land dec_i^2 \]

(2)
where $\land$ represents the standard AND operation. As discussed in the previous section, $dec_i^1$ can also be equal to $n$, so we need to define the $\land$ operation for the case when one or both of the operands are $n$. Following the same argument as for the $\lor$ operator, if an operand $dec_i^j$ is $n$, it means that no rule applies to any request that matches the decision path towards the terminal node $i$ in SPDD$_j$, and the decision of the other policy (SPDD$_k$) should be used. Thus, $dec_i^k \land n = n \land dec_i^k = dec_i^k$. Figure 5 shows the resultant PDD after performing the conjunction operation on SPDD$_1$ and SPDD$_2$.
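The leaf-wise $\lor$ and $\land$ operators, extended with the not-applicable value $n$ as described (an $n$ operand defers to the other policy's decision), can be sketched as follows. This is our own encoding, with 1 standing for permit and 0 for deny:

```python
N = "n"  # stands for not-applicable

def add(d1, d2):
    """Eq. (1): logical OR; an n operand defers to the other decision."""
    if d1 == N:
        return d2
    if d2 == N:
        return d1
    return d1 | d2

def conj(d1, d2):
    """Eq. (2): logical AND; an n operand defers to the other decision."""
    if d1 == N:
        return d2
    if d2 == N:
        return d1
    return d1 & d2
```

Both operators are symmetric, and two $n$ operands yield $n$, matching the intuition that a path neither policy covers stays not-applicable.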
3) Subtraction ($-$): This operation is used in scenarios where a policy has to be restricted by eliminating all the accesses permitted by a second policy. For example, when we have to remove a graduated student’s access to the wireless network, this subtraction operation is needed. For any given decision path, if the terminal node of SPDD$_1$ is deny, then the corresponding terminal node of the resultant policy has to be deny because our task is to remove any permits of SPDD$_2$ from SPDD$_1$; therefore, we can never convert a deny of SPDD$_1$ to permit. For any given decision path, if the terminal node of SPDD$_1$ is permit, then the corresponding terminal node of the resultant policy is permit if and only if the corresponding terminal node of SPDD$_2$ is not permit. If the corresponding terminal node of SPDD$_2$ is permit, then we have to remove this permission from SPDD$_1$; therefore, we convert this permit to deny in the resultant policy, as per the goal of the subtraction operation. If the terminal node of a decision path in SPDD$_1$ is not-applicable, then the corresponding terminal node of the resultant policy has to be deny if the terminal node of SPDD$_2$ is either permit or deny. Table I shows the truth table for the subtraction operation.
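The subtraction behavior described above (summarized in Table I) amounts to the following leaf-wise function; this is a sketch of our own, with p/d/n abbreviating permit/deny/not-applicable:

```python
def subtract(d1, d2):
    """Leaf-wise decision of SPDD1 - SPDD2 as described in the text."""
    if d1 == "d":
        return "d"                        # a deny in SPDD1 is never upgraded
    if d1 == "p":
        return "d" if d2 == "p" else "p"  # strip permissions granted by SPDD2
    # d1 == "n": deny if SPDD2 decides either way, else stay not-applicable
    return "d" if d2 in ("p", "d") else "n"
```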
V. IMPLEMENTATION

We implemented ACE in Java and integrated it with Sun’s implementation of the XACML PDP [2]. The program takes any arbitrary number of policies in XACML format as input, along with a composition expression defining how the results of the policies should be composed. It applies the binary composition operations on the policies according to the composition expression to obtain the final composed policy. An example of a composition expression over four independent policies is $(P_1 + (P_2 \cdot P_3)) - P_4$. This expression implies that the decisions of policies $P_2$ and $P_3$ should first be composed using Eq. (2). The resultant should next be composed with the decision of policy $P_1$ using Eq. (1). The new resultant should finally be composed with the decision of policy $P_4$ using Eq. (3). Given a set of policies, our program applies the three steps described in Sections IV-B, IV-C, and IV-D to compose pairs of policies in the sequence specified in the composition expression. The final composed policy is still in the form of a PDD. Therefore, ACE applies the step described in Section IV-E on this final composed PDD to enumerate all rules and convert them into XACML format, which Sun’s PDP uses to evaluate requests.

To keep the implementation as flexible as the algebra of ACE itself, in composing any pair of policies, our program generates the PDDs of the two policies independently. This can lead to the assignment of different numerical values to subjects, resources, and actions across the two policies during the numericalization of attributes in the step described in Section IV-B. To solve this problem, we need to make the numerical labels of the PDDs consistent. For this, our program performs an additional standardization step between the two steps described in Sections IV-B and IV-C.
During the standardization step, ACE first splits all the edges in the PDD of policy B in such a way that each edge has a numerical label with a single value instead of a range. Next, ACE replaces the numerical labels of the attributes in the PDD of B with the numerical labels of those attributes in the PDD of A. For any attributes in the PDD of B for which there are no numerical labels in the PDD of A, i.e., attributes that never appeared in policy A, ACE sequentially assigns numerical labels that are larger than the maximum numerical label in the PDD of A. After this, it sorts all edges according to the values of their numerical labels and adds the not-applicable branches to the PDD. These branches are needed to make the PDD complete, because the PDD must have a path from the root node to a leaf node for any arbitrary request. By this point, the PDD of B has become significantly large, which can negatively affect the performance of evaluating access requests. Thus, ACE traverses the PDD of B in reverse breadth-first order and combines adjacent branches that have the same decisions at the leaf level. Figure 7 visualizes the standardization step.
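The label-remapping part of standardization can be sketched as follows. This is a simplified illustration with hypothetical attribute strings; the real step additionally splits edges, sorts them, adds not-applicable branches, and re-merges adjacent branches:

```python
def standardize_labels(labels_a, attrs_b):
    """Map policy B's attribute strings onto policy A's numerical labels.

    Attributes unseen in A get fresh consecutive labels past A's maximum,
    mirroring the sequential assignment described in the text.
    """
    mapping = dict(labels_a)  # attribute string -> integer label from A
    next_label = max(labels_a.values(), default=-1) + 1
    for attr in attrs_b:
        if attr not in mapping:
            mapping[attr] = next_label
            next_label += 1
    return mapping
```

After this remapping, identical attribute values carry identical integers in both PDDs, so the shaping step can align the two trees edge by edge.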
VI. EVALUATION
We carried out our experiments on a desktop PC running Windows 10 with 16 GB of memory and an Intel i7-6700 processor. We evaluated ACE from two perspectives: correctness and efficiency. For correctness, we compared the decisions for access requests made by the standard implementation of Sun’s PDP and by Sun’s PDP using the composed XACML policies generated by ACE. Our results show that the decisions of the two implementations matched 100% of the time, which demonstrates that the final composed XACML policy generated by ACE is functionally the same as evaluating the individual policies separately and then combining their decisions. For efficiency, we compared the time it takes each implementation to evaluate an access request. Our results show that when using the composed policy generated by ACE, the evaluation time of the PDP is an order of magnitude smaller than the evaluation time of the standard PDP implementation. Furthermore, the difference in evaluation time between the PDP with ACE and the PDP without ACE grows almost linearly with the number of policies. We also measured the preprocessing time that ACE takes to compose policies, which includes the time for performing the four steps described in Sections IV-B through IV-E. Our results show that the preprocessing time increases linearly with the number of policies. For 1000 independently generated policies, each with 1000 rules, the preprocessing time is just one minute.
Figure 8 plots the ratio of the access request processing time of the standard PDP implementation to that of the PDP implementation augmented with ACE, for different numbers and sizes of policies. We observe from this figure that the access request processing time of the PDP, when augmented with ACE, is an order of magnitude smaller than without ACE. Furthermore, the difference in processing times increases both with the number of policies and with the number of rules per policy. Each data point in this figure is obtained by averaging the request processing times of 1000 randomly generated requests. The vertical lines on each data point show the standard deviation of the ratios.
Figure 9 plots the average time taken by ACE to generate a composed policy from a given set of policies, along with the standard deviation of the preprocessing times at each data point. We observe that as the number of policies and the number of rules per policy increase, the preprocessing time increases linearly. However, the preprocessing time is very small: even for 1000 independently generated policies with 1000 rules each, ACE takes just one minute to generate the composed policy. Note that, in practice, the preprocessing step is required only when a component policy changes, which happens infrequently. Consequently, the one-minute runtime of ACE does not become a bottleneck when implementing new policies in real-world settings.
VII. CONCLUSION
In this paper, we presented ACE, which can compose any arbitrary number of policies into a single resultant policy that is functionally equivalent to the individual policies. The key technical depth of this paper lies in the methods to convert policies specified in XACML into PDDs, shape them to make them structurally identical, apply composition operations on them, and regenerate an XACML policy from the resultant PDD. We implemented ACE, integrated it with Sun’s implementation of XACML, and extensively evaluated it on a large number of policies. Our results show that the decisions obtained using the policy composed with ACE are the same as the decisions obtained from Sun’s implementation 100% of the time. Our results also show that with ACE, the evaluation time of access requests is reduced by at least an order of magnitude compared to when ACE is not used to compose the policies.
REFERENCES
Assignment 52-61 Sample Solution
7.13 Given the relation $R_1 = (A, B, C, D)$, the set of functional dependencies $F_1 = \{A \rightarrow B, C \rightarrow D, B \rightarrow C\}$ allows three distinct BCNF decompositions.
$$R_1 = \{(A, B), (C, D), (B, C)\}$$
is in BCNF, as are
$$R_2 = \{(A, B), (C, D), (A, C)\}$$
$$R_3 = \{(B, C), (A, D), (A, B)\}$$
7.14 Suppose $R$ is in 3NF according to the textbook definition. We show that it is in 3NF according to the definition in the exercise. Let $A$ be a nonprime attribute in $R$ that is transitively dependent on a key $\alpha$ for $R$. Then there exists $\beta \subseteq R$ such that $\beta \rightarrow A, \alpha \rightarrow \beta, A \not\in \alpha, A \not\in \beta$, and $\beta \rightarrow \alpha$ does not hold. But then $\beta \rightarrow A$ violates the textbook definition of 3NF since
- $A \not\in \beta$ implies $\beta \rightarrow A$ is nontrivial
- Since $\beta \rightarrow \alpha$ does not hold, $\beta$ is not a superkey
- $A$ is not in any candidate key, since $A$ is nonprime
Now, we show that if $R$ is in 3NF according to the exercise definition, it is in 3NF according to the textbook definition. Suppose $R$ is not in 3NF according to the textbook definition. Then there is an FD $\alpha \rightarrow \beta$ that fails all three conditions. Thus
- $\alpha \rightarrow \beta$ is nontrivial.
- $\alpha$ is not a superkey for $R$.
- Some attribute $A$ in $\beta$ is not in any candidate key.
This implies that $A$ is nonprime and $\alpha \rightarrow A$. Let $\gamma$ be a candidate key for $R$. Then $\gamma \rightarrow \alpha, \alpha \rightarrow \gamma$ does not hold (since $\alpha$ is not a superkey), $A \not\in \alpha$, and $A \not\in \gamma$ (since $A$ is nonprime). Thus $A$ is transitively dependent on $\gamma$, violating the exercise definition.
7.15 Referring to the definitions in Practice Exercise 7.14, a relation schema $R$ is said to be in 3NF if there is no nonprime attribute $A$ in $R$ that is transitively dependent on a key for $R$. We can also rewrite the definition of 2NF given here as: “A relation schema $R$ is in 2NF if no nonprime attribute $A$ is partially dependent on any candidate key for $R$.” To prove that every 3NF schema is in 2NF, it suffices to show that if a nonprime attribute $A$ is partially dependent on a candidate key $\alpha$, then $A$ is also transitively dependent on the key $\alpha$. Let $A$ be a nonprime attribute in $R$. Let $\alpha$ be a candidate key for $R$. Suppose $A$ is partially dependent on $\alpha$.
- From the definition of a partial dependency, we know that for some proper subset $\beta$ of $\alpha$, $\beta \rightarrow A$.
- Since $\beta \subset \alpha$, $\alpha \rightarrow \beta$. Also, $\beta \rightarrow \alpha$ does not hold, since $\alpha$ is a candidate key.
- Finally, since $A$ is nonprime, it cannot be in either $\beta$ or $\alpha$.
Thus we conclude that $A$ is transitively dependent on $\alpha$. Hence, we have proved that every 3NF schema is also in 2NF.
7.16 The relation schema $R = (A, B, C, D, E)$ and the set of multivalued dependencies
$$A \twoheadrightarrow BC$$
$$B \twoheadrightarrow CD$$
$$E \twoheadrightarrow AD$$
constitute a schema in BCNF that is clearly not in 4NF. (It is in BCNF because all FDs are trivial.)
7.18 Certain functional dependencies are called trivial functional dependencies because they are satisfied by all relations.
7.22 Computing \( B^+ \) by the algorithm in Figure 8.8, we start with \( \textit{result} = \{B\} \). Considering FDs of the form \( \beta \rightarrow \gamma \) in \( F \), we find that the only dependencies satisfying \( \beta \subseteq \textit{result} \) are \( B \rightarrow B \) and \( B \rightarrow D \). Therefore \( \textit{result} = \{B, D\} \). No more dependencies in \( F \) apply now. Therefore \( B^+ = \{B, D\} \).
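The attribute-closure algorithm referenced above can be sketched in a few lines. The exercise does not restate \( F \), so the set below is our assumption, taken from the textbook's running example \( F = \{A \rightarrow BC,\ CD \rightarrow E,\ B \rightarrow D,\ E \rightarrow A\} \):

```python
def closure(attrs, fds):
    """Compute attrs+ under fds, where fds is a list of (lhs, rhs) strings."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # apply beta -> gamma whenever beta is contained in result
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

F = [("A", "BC"), ("CD", "E"), ("B", "D"), ("E", "A")]  # assumed F
```

Under this \( F \), only \( B \rightarrow D \) ever fires when starting from \( \{B\} \), giving \( B^+ = \{B, D\} \) as in the solution.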
7.23
Following the hint, use the following example of \( r \):
<table>
<thead>
<tr><th>A</th><th>B</th><th>C</th><th>D</th><th>E</th></tr>
</thead>
<tbody>
<tr><td>\( a_1 \)</td><td>\( b_1 \)</td><td>\( c_1 \)</td><td>\( d_1 \)</td><td>\( e_1 \)</td></tr>
<tr><td>\( a_2 \)</td><td>\( b_2 \)</td><td>\( c_1 \)</td><td>\( d_2 \)</td><td>\( e_2 \)</td></tr>
</tbody>
</table>
With \( R_1 = \{A, B, C\} \), \( R_2 = \{C, D, E\} \):
a. \( \Pi_{R_1}(r) \) would be:
<table>
<thead>
<tr><th>A</th><th>B</th><th>C</th></tr>
</thead>
<tbody>
<tr><td>\( a_1 \)</td><td>\( b_1 \)</td><td>\( c_1 \)</td></tr>
<tr><td>\( a_2 \)</td><td>\( b_2 \)</td><td>\( c_1 \)</td></tr>
</tbody>
</table>
b. \( \Pi_{R_2}(r) \) would be:
<table>
<thead>
<tr><th>C</th><th>D</th><th>E</th></tr>
</thead>
<tbody>
<tr><td>\( c_1 \)</td><td>\( d_1 \)</td><td>\( e_1 \)</td></tr>
<tr><td>\( c_1 \)</td><td>\( d_2 \)</td><td>\( e_2 \)</td></tr>
</tbody>
</table>
c. \( \Pi_{R_1}(r) \bowtie \Pi_{R_2}(r) \) would be:
<table>
<thead>
<tr><th>A</th><th>B</th><th>C</th><th>D</th><th>E</th></tr>
</thead>
<tbody>
<tr><td>\( a_1 \)</td><td>\( b_1 \)</td><td>\( c_1 \)</td><td>\( d_1 \)</td><td>\( e_1 \)</td></tr>
<tr><td>\( a_1 \)</td><td>\( b_1 \)</td><td>\( c_1 \)</td><td>\( d_2 \)</td><td>\( e_2 \)</td></tr>
<tr><td>\( a_2 \)</td><td>\( b_2 \)</td><td>\( c_1 \)</td><td>\( d_1 \)</td><td>\( e_1 \)</td></tr>
<tr><td>\( a_2 \)</td><td>\( b_2 \)</td><td>\( c_1 \)</td><td>\( d_2 \)</td><td>\( e_2 \)</td></tr>
</tbody>
</table>
Clearly, \( \Pi_{R_1}(r) \bowtie \Pi_{R_2}(r) \neq r \). Therefore, this is a lossy join.
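The lossy join above can be checked mechanically. The sketch below projects \( r \) onto \( R_1 \) and \( R_2 \) and recombines the projections with a natural join on the shared attribute C:

```python
r = {("a1", "b1", "c1", "d1", "e1"),
     ("a2", "b2", "c1", "d2", "e2")}

def project(rel, idxs):
    """Project a set of tuples onto the attribute positions in idxs."""
    return {tuple(t[i] for i in idxs) for t in rel}

r1 = project(r, [0, 1, 2])  # Pi_{A,B,C}(r)
r2 = project(r, [2, 3, 4])  # Pi_{C,D,E}(r)

# natural join on C (position 2 of r1, position 0 of r2)
joined = {t1 + t2[1:] for t1 in r1 for t2 in r2 if t1[2] == t2[0]}
```

The join yields four tuples: the two original ones plus two spurious ones, confirming that the decomposition is lossy.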
7.26 BCNF is not always dependency preserving. Therefore, we may want to choose another normal form (specifically, 3NF) in order to make checking dependencies easier during updates. This would avoid joins to check dependencies and increase system performance.
7.29
\( A \twoheadrightarrow BC \) holds on the following relation \( r \):
<table>
<thead>
<tr><th>A</th><th>B</th><th>C</th><th>D</th></tr>
</thead>
<tbody>
<tr><td>\( a_1 \)</td><td>\( b_1 \)</td><td>\( c_1 \)</td><td>\( d_1 \)</td></tr>
<tr><td>\( a_1 \)</td><td>\( b_2 \)</td><td>\( c_2 \)</td><td>\( d_2 \)</td></tr>
<tr><td>\( a_1 \)</td><td>\( b_1 \)</td><td>\( c_1 \)</td><td>\( d_2 \)</td></tr>
<tr><td>\( a_1 \)</td><td>\( b_2 \)</td><td>\( c_2 \)</td><td>\( d_1 \)</td></tr>
</tbody>
</table>
Suppose \( A \twoheadrightarrow B \) held. Then for every pair of tuples \( t_1, t_2 \) (all tuples agree on \( A \)) there must exist a tuple \( t_3 \in r \) with \( t_3[B] = t_1[B] \), \( t_3[C] = t_2[C] \), and \( t_3[D] = t_2[D] \). Take \( t_1 = r_1 \): then \( t_3[B] = b_1 \), forcing \( t_3 = r_1 \) or \( t_3 = r_3 \), both of which have \( C \)-value \( c_1 \). Now take \( t_2 = r_2 \) (or \( t_2 = r_4 \)), which requires \( t_3[C] = c_2 \).
Therefore the condition \( t_3[C] = t_2[C] \) cannot be satisfied, so \( A \twoheadrightarrow B \) does not hold, and the conjecture that \( A \twoheadrightarrow BC \) implies \( A \twoheadrightarrow B \) is false.
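The multivalued-dependency check can also be run directly. Below is a minimal sketch (the `mvd_holds` helper is illustrative, not a standard API) testing both dependencies on the counterexample relation:

```python
from itertools import product

def mvd_holds(rel, schema, lhs, rhs):
    """X ->-> Y holds iff for every pair t1, t2 agreeing on X, the tuple
    taking t1's Y-values and t2's values everywhere else is also in rel."""
    L = [schema.index(a) for a in lhs]
    R = [schema.index(a) for a in rhs]
    for t1, t2 in product(rel, repeat=2):
        if all(t1[i] == t2[i] for i in L):
            t3 = list(t2)          # t3 agrees with t2 outside lhs+rhs
            for i in R:
                t3[i] = t1[i]      # ...and with t1 on rhs
            if tuple(t3) not in rel:
                return False
    return True

schema = ["A", "B", "C", "D"]
r = {("a1", "b1", "c1", "d1"),
     ("a1", "b2", "c2", "d2"),
     ("a1", "b1", "c1", "d2"),
     ("a1", "b2", "c2", "d1")}

print(mvd_holds(r, schema, ["A"], ["B", "C"]))  # True:  A ->-> BC holds
print(mvd_holds(r, schema, ["A"], ["B"]))       # False: A ->-> B fails
```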
7.30
4NF is more desirable than BCNF because it reduces the repetition of information. If we consider a BCNF schema not in 4NF, we observe that decomposition into 4NF does not lose information provided that a lossless join decomposition is used, yet redundancy is reduced.
15.1 Even in this case the recovery manager is needed to perform roll-back of aborted transactions.
15.5 Most of the concurrency control protocols (protocols for ensuring that only serializable schedules are generated) used in practice are based on conflict serializability; in fact, they permit only a subset of the conflict serializable schedules. The general form of view serializability is very expensive to test, and only a very restricted form of it is used for concurrency control.
15.6 There is a serializable schedule corresponding to the precedence graph below, since the graph is acyclic. A possible schedule is obtained by doing a topological sort, that is, $T_1, T_2, T_3, T_4, T_5$.
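Since the precedence graph itself is not reproduced here, the sketch below assumes a simple chain of precedence edges consistent with the stated answer; the standard-library `graphlib` module (Python 3.9+) performs the topological sort:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical precedence edges (the figure is not reproduced here):
# each key maps to the set of transactions that must precede it.
preceded_by = {"T2": {"T1"}, "T3": {"T2"}, "T4": {"T3"}, "T5": {"T4"}}
order = list(TopologicalSorter(preceded_by).static_order())
print(order)  # ['T1', 'T2', 'T3', 'T4', 'T5']
```

Any topological order of an acyclic precedence graph is a valid equivalent serial schedule.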
15.7 A cascadeless schedule is one where, for each pair of transactions $T_i$ and $T_j$ such that $T_j$ reads data items previously written by $T_i$, the commit operation of $T_i$ appears before the read operation of $T_j$. Cascadeless schedules are desirable because the failure of a transaction does not lead to the aborting of any other transaction. Of course this comes at the cost of less concurrency. If failures occur rarely, the price of occasional cascading aborts may be worth paying for the increased concurrency, and non-cascadeless schedules might then be desirable.
15.8 The ACID properties, and the need for each of them are:
* **Consistency**: Execution of a transaction in isolation (that is, with no other transaction executing concurrently) preserves the consistency of the database. This is typically the responsibility of the application programmer who codes the transactions.
* **Atomicity**: Either all operations of the transaction are reflected properly in the database, or none are. Clearly lack of atomicity will lead to inconsistency in the database.
* **Isolation**: When multiple transactions execute concurrently, it should be the case that, for every pair of transactions $T_i$ and $T_j$, it appears to $T_i$ that either $T_j$ finished execution before $T_i$ started, or $T_j$ started execution after $T_i$ finished. Thus, each transaction is unaware of other transactions executing concurrently with it. The user view of a transaction system requires the isolation property, and the property that concurrent schedules take the system from one consistent state to another. These requirements are satisfied by ensuring that only serializable schedules of individually consistency preserving transactions are allowed.
* **Durability**: After a transaction completes successfully, the changes it has made to the database persist, even if there are system failures.
15.9 The possible sequences of states are:
a. **active** \( \rightarrow \) **partially committed** \( \rightarrow \) **committed**. This is the normal sequence a successful transaction will follow. After executing all its statements it enters the partially committed state. After enough recovery information has been written to disk, the transaction finally enters the committed state.
b. **active** \( \rightarrow \) **partially committed** \( \rightarrow \) **aborted**. After executing the last statement of the transaction, it enters the partially committed state. But before enough recovery information is written to disk, a hardware failure may occur, destroying the memory contents. In this case the changes which it made to the database are undone, and the transaction enters the aborted state.
c. **active** \( \rightarrow \) **failed** \( \rightarrow \) **aborted**. After the transaction starts, if it is discovered at some point that normal execution cannot continue (either due to internal program errors or external errors), it enters the failed state. It is then rolled back, after which it enters the **aborted** state.
15.10 A schedule in which all the instructions belonging to one single transaction appear together is called a **serial schedule**. A **serializable schedule** has a weaker restriction that it should be **equivalent** to some serial schedule. There are two definitions of schedule equivalence – conflict equivalence and view equivalence. Both of these are described in the chapter.
15.11
a. There are two possible executions: \( T_{13} T_{14} \) and \( T_{14} T_{13} \).
<table>
<thead>
<tr>
<th>Case 1:</th>
<th>A</th>
<th>B</th>
</tr>
</thead>
<tbody>
<tr>
<td>initially</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>after ( T_{13} )</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>after ( T_{14} )</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
Consistency met: \( A = 0 \lor B = 0 = T \lor F = T \)
<table>
<thead>
<tr>
<th>Case 2:</th>
<th>A</th>
<th>B</th>
</tr>
</thead>
<tbody>
<tr>
<td>initially</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>after ( T_{14} )</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>after ( T_{13} )</td>
<td>1</td>
<td>0</td>
</tr>
</tbody>
</table>
Consistency met: \( A = 0 \lor B = 0 = F \lor T = T \)
b. Any interleaving of \( T_{13} \) and \( T_{14} \) results in a non-serializable schedule.
<table>
<thead>
<tr>
<th>( T_{13} )</th>
<th>( T_{14} )</th>
</tr>
</thead>
<tbody>
<tr>
<td>read(A)</td>
<td>read(B)</td>
</tr>
<tr>
<td>read(B)</td>
<td>read(A)</td>
</tr>
<tr>
<td>if ( A = 0 ) then ( B = B + 1 )</td>
<td>if ( B = 0 ) then ( A = A + 1 )</td>
</tr>
<tr>
<td>write(B)</td>
<td>write(A)</td>
</tr>
</tbody>
</table>
c. There is no parallel execution resulting in a serializable schedule. From part a. we know that a serializable schedule results in \( A = 0 \lor B = 0 \). Suppose we start with \( T_{13} \) **read(A)**. Then, no matter when the steps of \( T_{14} \) run, \( B = 1 \) when the schedule ends. Now suppose \( T_{14} \) starts executing prior to completion of \( T_{13} \). Then \( T_{14} \) **read(B)** will give \( B \) a value of 0, so when \( T_{14} \) completes, \( A = 1 \). Thus \( A = 1 \land B = 1 \), violating the consistency requirement \( A = 0 \lor B = 0 \). The argument is symmetric if we start with \( T_{14} \) **read(B)**.
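The claim that only the two serial orders preserve consistency can be verified by brute force. In this sketch the step encoding is my own; the conditional update and the write are merged into one atomic step, which does not change the reachable final states since the local computation touches no shared data:

```python
from itertools import combinations

def run(order):
    """Run one interleaving; order is a sequence of 'x' (T13) / 'y' (T14)."""
    A = B = 0
    a13 = b13 = a14 = b14 = 0   # values read into local variables
    s13 = s14 = 0               # per-transaction program counters
    for who in order:
        if who == "x":          # T13: read(A); read(B); if A=0: B := B+1
            if s13 == 0:
                a13 = A
            elif s13 == 1:
                b13 = B
            elif s13 == 2 and a13 == 0:
                B = b13 + 1     # conditional update and write, merged
            s13 += 1
        else:                   # T14: read(B); read(A); if B=0: A := A+1
            if s14 == 0:
                b14 = B
            elif s14 == 1:
                a14 = A
            elif s14 == 2 and b14 == 0:
                A = a14 + 1
            s14 += 1
    return A, B

consistent = 0
for xs in combinations(range(6), 3):    # choose positions of T13's 3 steps
    order = ["y"] * 6
    for i in xs:
        order[i] = "x"
    A, B = run(order)
    if A == 0 or B == 0:
        consistent += 1

# Only the two serial orders xxxyyy and yyyxxx preserve A = 0 or B = 0.
print(consistent)  # 2
```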
A recoverable schedule is one where, for each pair of transactions $T_i$ and $T_j$ such that $T_j$ reads data items previously written by $T_i$, the commit operation of $T_i$ appears before the commit operation of $T_j$. Recoverable schedules are desirable because failure of a transaction might otherwise bring the system into an irreversibly inconsistent state. Non-recoverable schedules may sometimes be needed when updates must be made visible early due to time constraints, even if they have not yet been committed, which may be required for very long duration transactions.
Transaction-processing systems usually allow multiple transactions to run concurrently. It is far easier to insist that transactions run serially. However there are two good reasons for allowing concurrency:
- Improved throughput and resource utilization. A transaction involves both I/O activity and CPU activity, and the CPU and the disks in a computer system can operate in parallel. This can be exploited to run multiple transactions in parallel: while a read or write on behalf of one transaction is in progress on one disk, another transaction can be running in the CPU. This increases the throughput of the system.
- Reduced waiting time. If transactions run serially, a short transaction may have to wait for a preceding long transaction to complete. If the transactions are operating on different parts of the database, it is better to let them run concurrently, sharing the CPU cycles and disk accesses among them. It reduces the unpredictable delays and the average response time.
The recovery scheme using a log with deferred updates has the following advantages over the recovery scheme with immediate updates:
- The scheme is easier and simpler to implement since fewer operations and routines are needed, i.e., no UNDO.
- The scheme requires less overhead since no extra I/O operations need to be done until commit time (log records can be kept in memory the entire time).
- Since the old values of data do not have to be present in the log-records, this scheme requires less log storage space.
The disadvantages of the deferred modification scheme are:
- When a data item needs to be accessed, the transaction can no longer directly read the correct page from the database buffer, because a previous write by the same transaction to the same data item may not have been propagated to the database yet. It might have updated a local copy of the data item and deferred the actual database modification. Therefore finding the correct version of a data item becomes more expensive.
- This scheme allows less concurrency than the recovery scheme with immediate updates. This is because write-locks are held by transactions till commit time.
- For long transactions with many updates, the memory space occupied by log records and local copies of data items may become too large.
The first phase of recovery is to undo the changes done by the failed transactions, so that all data items which have been modified by them get back the values they had before the first of the failed transactions started. If several of the failed transactions had modified the same data item, forward processing of log-records for undo-list transactions would make the data item get the value which it had before the last failed transaction to modify that data item started. This is clearly wrong, and we can see that reverse processing gets us the desired result.
The second phase of recovery is to redo the changes done by committed transactions, so that all data items which have been modified by them are restored to the value they had after the last of the committed transactions finished. It can be seen that only forward processing of log-records belonging to redo-list transactions can guarantee this.
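The direction of the two scans can be illustrated with a toy log. This sketch uses an illustrative record format `(txn, item, old, new)`, not any particular system's:

```python
def undo(log, failed, db):
    """Restore old values of failed transactions, newest record first."""
    for txn, item, old, new in reversed(log):   # backward pass
        if txn in failed:
            db[item] = old
    return db

def redo(log, committed, db):
    """Reapply new values of committed transactions, oldest record first."""
    for txn, item, old, new in log:             # forward pass
        if txn in committed:
            db[item] = new
    return db

# T1 and T2 both updated X; T1 ran first.
log = [("T1", "X", 5, 10), ("T2", "X", 10, 20)]

# Backward undo of both failed transactions restores the value before
# T1 started; forward processing would wrongly leave X = 10 instead.
print(undo(log, {"T1", "T2"}, {"X": 20}))  # {'X': 5}

# Forward redo restores the value after the last committed update.
print(redo(log, {"T1", "T2"}, {"X": 5}))   # {'X': 20}
```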
Consider the bank account $A$ with balance $100$. Consider two transactions $T_1$ and $T_2$ each depositing $10$ in the account. Thus the balance would be $120$ after both these transactions are executed. Let the transactions execute in sequence: $T_1$ first and then $T_2$. The log records corresponding to the updates of $A$ by transactions $T_1$ and $T_2$ would be $<T_1, A, 100, 110>$ and $<T_2, A, 110, 120>$ respectively.
Say we wish to undo transaction $T_1$. The normal transaction undo mechanism replaces the value in question—$A$ in this example—by the old-value field in the log record. Thus if we undo transaction $T_1$ using the normal transaction undo mechanism, the resulting balance would be $100$ and we would, in effect, undo both transactions, whereas we intend to undo only transaction $T_1$.
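A minimal sketch of this scenario (the record format is illustrative):

```python
# Log records are (txn, item, old_value, new_value), as in the example.
log = [("T1", "A", 100, 110), ("T2", "A", 110, 120)]
A = 120                       # balance after both deposits

for txn, item, old, new in reversed(log):
    if txn == "T1":           # undo only transaction T1
        A = old               # physical undo: restore the old-value field

print(A)  # 100 -- but the intended result of undoing only T1 is 110
```

A logical undo (subtracting the deposited 10) would instead yield the intended 110.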
Let the erroneous transaction be $T_e$.
1. Identify the latest checkpoint, say $C$, in the log before the log record $<T_e, START>$.
2. Redo all log records starting from the checkpoint $C$ up to the log record $<T_e, COMMIT>$. Some transactions other than $T_e$ may be active at the commit time of transaction $T_e$; let $S_1$ be the set of such transactions.
3. Rollback $T_e$ and the transactions in the set $S_1$.
4. Scan the log further starting from the log record $<T_e, COMMIT>$ till the end of the log. Note the transactions that were started after the commit point of $T_e$. Let the set of such transactions be $S_2$. Re-execute the transactions in set $S_1$ and $S_2$ logically.
17.6
We can maintain the LSNs of such pages in an array in a separate disk page. The LSN entry of a page on the disk is the sequence number of the latest log record reflected on the disk. In the normal case, since the LSN of a page resides in the page itself, the page and its LSN are always in a consistent state. But in the modified scheme, since the LSN of a page resides in a separate page, it may not be written to disk at the same time as the actual page, and thus the two may be in an inconsistent state.
If a page is written to the disk before its LSN is updated on the disk and the system crashes, then during recovery the page LSN read from the LSN array on the disk is older than the sequence number of the latest log record reflected on the disk. Thus some updates on the page will be redone unnecessarily, but this is fine as the updates are idempotent. But if the page LSN is written to the disk before the actual page is written and the system crashes, then some of the updates to the page may be lost: the sequence number of the log record corresponding to the latest update that actually made it to the disk is older than the page LSN in the LSN array, and all updates to the page between the two LSNs are lost.
Thus the LSN of a page should be written to the disk only after the page itself has been written; we can ensure this as follows: before writing a page containing the LSN array to the disk, we flush the corresponding data pages to the disk. (We can separately maintain, for each page in the buffer, the page LSN at the time of its last flush, and avoid flushing pages that have already been flushed.)
17.7
Volatile storage is storage which fails when there is a power failure. Cache, main memory, and registers are examples of volatile storage. Non-volatile storage is storage which retains its content despite power failures. An example is magnetic disk. Stable storage is storage which theoretically survives any kind of failure (short of a complete disaster!). This type of storage can only be approximated by replicating data.
In terms of I/O cost, volatile memory is the fastest and non-volatile storage is typically several times slower. Stable storage is slower than non-volatile storage because of the cost of data replication.
17.8
a. Stable storage cannot really be implemented because all storage devices are made of hardware, and all hardware is vulnerable to mechanical or electronic device failures.
b. Database systems approximate stable storage by writing data to multiple storage devices simultaneously. Even if one of the devices crashes, the data will still be available on a different device. Thus data loss becomes extremely unlikely.
17.11
Consider a banking scheme and a transaction which transfers $50 from account $A$ to account $B$. The transaction has the following steps:
a. $\text{read}(A,a_1)$
b. $a_1 := a_1 - 50$
c. $\text{write}(A,a_1)$
d. $\text{read}(B,b_1)$
e. $b_1 := b_1 + 50$
f. $\text{write}(B,b_1)$
Suppose the system crashes after the transaction commits, but before its log records are flushed to stable storage. Further assume that at the time of the crash the update of $A$ in the third step alone had actually been propagated to disk whereas the buffer page containing $B$ was not yet written to disk. When the system comes up it is in an inconsistent state, but recovery is not possible because there are no log records corresponding to this transaction in stable storage.
17.14
a. Two-very-safe is suitable here because it guarantees durability of updates by committed transactions, though it can proceed only if both primary and backup sites are up. Availability is low, but it is mentioned that this is acceptable.
b. One-safe committing is fast as it does not have to wait for the logs to reach the backup site. Since data loss can be tolerated, this is the best option.
c. With two-safe committing, the probability of data loss is quite low, and commits can proceed as long as at least the primary site is up. Thus availability is high. Commits take more time than in the one-safe protocol, but that is mentioned as acceptable.
1. The read-committed isolation level ensures that a transaction reads only the committed data. A transaction $T_i$ cannot read a data item $X$ which has been modified by a yet uncommitted concurrent transaction $T_j$. This makes $T_i$ independent of the success or failure of $T_j$. Hence, the schedules which follow read committed isolation level become cascade free.
2.
a. Read Uncommitted:
<table>
<thead>
<tr>
<th>( T_1 )</th>
<th>( T_2 )</th>
</tr>
</thead>
<tbody>
<tr>
<td>write(A)</td>
<td></td>
</tr>
<tr>
<td></td>
<td>read(A)</td>
</tr>
<tr>
<td></td>
<td>write(A)</td>
</tr>
<tr>
<td>read(A)</td>
<td></td>
</tr>
</tbody>
</table>
In the above schedule, $T_2$ reads the value of $A$ written by $T_1$ even before $T_1$ commits. This schedule is not serializable since $T_1$ also reads a value written by $T_2$, resulting in a cycle in the precedence graph.
b. Read Committed:
<table>
<thead>
<tr>
<th>( T_1 )</th>
<th>( T_2 )</th>
</tr>
</thead>
<tbody>
<tr>
<td>read(A)</td>
<td></td>
</tr>
<tr>
<td></td>
<td>write(A)</td>
</tr>
<tr>
<td></td>
<td>commit</td>
</tr>
<tr>
<td>read(A)</td>
<td></td>
</tr>
</tbody>
</table>
In the above schedule, the first time \( T_1 \) reads \( A \), it sees the value of \( A \) before it was written by \( T_2 \), while the second read \( (A) \) by \( T_1 \) sees the value written by \( T_2 \) (which has already committed). The first read results in \( T_1 \) preceding \( T_2 \), while the second read results in \( T_2 \) preceding \( T_1 \), and thus the schedule is not serializable.
c. Repeatable Read:
Consider the following schedule, where \( T_1 \) reads all tuples in \( r \) satisfying predicate \( P \); to satisfy repeatable read, it must also share-lock these tuples in a two-phase manner.
Suppose that the tuple \( t \) inserted by \( T_2 \) satisfies \( P \); then the insert by \( T_2 \) causes \( T_2 \) to be serialized after \( T_1 \), since \( T_1 \) does not see \( t \). However, the final read \( (A) \) operation of \( T_1 \) forces \( T_2 \) to precede \( T_1 \), causing a cycle in the precedence graph.
3.
a. The repeatable read schedule in the preceding question is an example of a schedule exhibiting the phantom phenomenon and is non-serializable.
b. Consider the schedule
Suppose that tuple \( t \) deleted by \( T_2 \) is from relation \( r \), but does not satisfy predicate \( P \), for example because its \( A \) value is 3. Then, there is no phantom conflict between \( T_1 \) and \( T_2 \), and \( T_2 \) can be serialized before \( T_1 \).
Chapter 3
Tree Drawing
3.1 Rooted Trees
Rooted trees are at the center of many problems and applications in computer science. Information systems, multimedia documents databases, or virtual reality scene descriptions are only a few examples in which they are used. Their widespread use is most probably the result of the fact that they capture and reflect the way humans often organize information. A visual representation of these structures is often a major tool to help the user find his/her way in exploring data; hence the importance of graph drawing and exploration in information visualization.
3.1.1 Tree definition
We first briefly review some basic concepts about trees. A tree is a connected acyclic graph. Trees can be divided into rooted trees and free trees. A rooted tree \( T \) has a specific vertex \( r \in T \) which is the root of the tree \( T \). By contrast, a free tree has no prespecified root vertex.
Trees can also be divided into binary trees and multiway trees. A binary tree of \( n \) nodes, \( n \geq 0 \), either is empty, if \( n = 0 \), or consists of a root node \( u \) and two binary trees \( T_1 \) and \( T_2 \) of \( n_1 \) and \( n_2 \) nodes, respectively, such that \( n = 1 + n_1 + n_2 \). We say that \( T_1 \) is the left subtree of \( T \), and \( T_2 \) is the right subtree of \( T \). A multiway tree of \( n \) internal nodes, \( n \geq 0 \), either is empty, if \( n = 0 \), or consists of a root node \( u \), an integer \( d_u > 1 \), which is the degree of \( u \), and multiway trees \( T_1, \ldots, T_{d_u} \) of \( n_1, \ldots, n_{d_u} \) nodes, respectively, such that \( n = 1 + n_1 + \ldots + n_{d_u} \).
If \( u_1, \ldots, u_{d_u} \) are the roots of \( T_1, \ldots, T_{d_u} \) respectively, then we say that \( u \) is the parent of \( u_1, \ldots, u_{d_u} \), and \( u_1, \ldots, u_{d_u} \) are the children of \( u \) and the siblings of each other. Every node in a tree is at a specific level that can be defined by using the following node-numbering scheme. Number the root node 0, and number every other node to be one more than its parent; then the number of a node \( u \) is that node’s level.
3.1.2 Layering
A tree is drawn to give us an intuitive understanding of the relationships appearing among the data during the solution of a problem. Tree drawings are common in books, articles, and reports. There are many different ways to draw a tree, but they are not all equally appropriate. Several aesthetic rules have been proposed in an attempt to define a well-shaped drawing of a tree. Aesthetic rules 1 through 5 described in the following are presented by Wetherell and Shannon [WS79]; rule 6 is presented by Reingold and Tilford [RT81].
Aesthetic Rules
1. Trees impose a distance on the nodes; no node should be closer to the root than any of its ancestors.
2. Nodes at the same level of the tree should lie along a straight line, and the straight lines corresponding to the levels should be parallel.
3. The relative order of nodes on any level should be the same as in the level order traversal of the tree.
4. For a binary tree, a left child should be positioned to the left of its parent and a right child to the right.
5. A parent should be centred over its children.
6. A subtree of a given tree should be drawn the same way regardless of where it occurs in the tree.
The basic task in drawing a tree is to assign a pair of coordinates \((x, y)\) to each node of the tree. Since we physically draw trees vertically, the \(y\)-coordinates of nodes are easy to determine from their levels. The most difficult task is to decide the \(x\)-coordinates of the nodes. An easy method to do this is to assign at each node a number proportional to its rank in the inorder traversal (Algorithm 3.1) of the tree, like the example tree in Figure 3.1.
The Visit\((v)\) function of Algorithm 3.1 performs the numbering of node \(v\). Starting from the root node, the first step of INORDER_TRAVERSAL\((r)\) is to call itself recursively for the left child of node \(r\) (node 5). Again at node 5, it is called for its left child (node 1). Node 1 does not have a left child, so the first step of the algorithm fails and execution continues at step 2, which numbers the node. Since it is the first node numbered, it gets value 1. Execution then continues at step 3, the call for the right child of node 1, which exists (node 3), and so on.
**Algorithm 3.1 Inorder Traversal**
*Input:* The root node \(r\) of binary tree \(T\)
*Output:* An inorder numbering of the nodes of \(T\)
\[
\text{INORDER\_TRAVERSAL}(v)
\]
1. \text{INORDER\_TRAVERSAL}(v \rightarrow LeftChild)
2. \text{Visit}(v)
3. \text{INORDER\_TRAVERSAL}(v \rightarrow RightChild)
---
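Algorithm 3.1 can be sketched as runnable code (the `Node` class and coordinate fields are illustrative), assigning x by inorder rank and y by depth:

```python
class Node:
    """Minimal binary-tree node with layout coordinates."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right
        self.x = self.y = None

def layout(root):
    """Naive layout: x = inorder rank of the node, y = its depth."""
    counter = {"x": 0}
    def inorder(v, depth):
        if v is None:
            return
        inorder(v.left, depth + 1)     # step 1: left subtree
        v.x = counter["x"]             # step 2: Visit(v) = number the node
        v.y = depth
        counter["x"] += 1
        inorder(v.right, depth + 1)    # step 3: right subtree
    inorder(root, 0)

# Small example: a root with two leaf children.
l, r_ = Node(), Node()
root = Node(l, r_)
layout(root)
print((l.x, l.y), (root.x, root.y), (r_.x, r_.y))  # (0, 1) (1, 0) (2, 1)
```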

While this simplistic approach satisfies basic aesthetic rules (Aesthetic rules 1 through 4 above), the tree drawings it generates are not well structured since they do not satisfy other aesthetic rules; a parent vertex is not necessarily centred over its children and the drawing is much wider than necessary.
Reingold and Tilford [RT81] presented a *divide and conquer* approach to determine the position of nodes. The algorithm of Reingold and Tilford (RT algorithm) takes a modular approach to the positioning of nodes. The relative positions of the nodes in a sub-tree are calculated independently of the rest of the tree. After the relative positions of two sub-trees have been calculated, they can be joined as siblings in a larger tree by placing them together as close as possible and centering the parent node above them. Imagine that the two sub-trees of a binary node have been drawn and cut out of paper along their contours. Then, starting with the two sub-trees superimposed at their roots, move them apart until a minimal agreed-upon distance between the trees is obtained at each level. This can be done gradually and can be described as shown in Figure 3.2. Initially, their roots are separated by some agreed-upon minimum distance; then, at the next level, they are pushed apart until the minimum separation is established. This process is continued at successively lower levels until the last level of the shorter sub-tree is reached. When the process is complete, the position of the sub-trees is fixed relative to their parent, which is centered over them.

Concisely the steps of the algorithm are presented below:
**Algorithm 3.2 Layered-Binary-Tree-Draw**
- **Input:** A binary tree \( T \)
- **Output:** A layered drawing of \( T \)
- **Base**
- If \( T \) has only one vertex, the drawing is trivial.
- **Divide**
- Recursively apply the algorithm to draw the left and right subtrees of tree \( T \).
- **Conquer**
- Move the drawings of subtrees until their horizontal distance equals 2. At the end, place the root \( r \) of \( T \) vertically one level above and horizontally half way between its children. If there is only one child, place the root at horizontal distance 1 from the child.
Figure 3.3: Various steps of the RT algorithm. (a) Node $u$ is placed at distance 1 from subtree $T_1$ because it has only one child (the root of $T_1$). Node $v$ is placed at distance 2 from subtree $T_2$ and the node $r$ (parent of node $v$ and the root of $T_2$) is placed halfway between its children. (b) Node $u$ is placed at distance 1 from subtree $T_1$ because it has only one child (the root of $T_1$). (c) Subtrees $T_1$ and $T_2$ are placed at distance 2 and the parent is placed halfway between their roots ($r$'s children) resulting in the tree shown at Figure 3.4.
Figure 3.4: Drawing of the same binary tree after the RT algorithm. Note that the width of the tree is now 6 against 10 (in Figure 3.1)
Note that at any level two subtrees can never be moved closer; they can only be moved apart. Also note that once a subtree is laid out, its shape is fixed. The RT algorithm satisfies all six aesthetic rules presented above. Using the RT algorithm, the tree shown in Figure 3.1 is now redrawn as shown in Figure 3.4.
The above algorithm can be implemented in two traversals of the input binary tree, giving O(N) complexity, where N is the number of nodes of the tree to be drawn. The first traversal (postorder) sets the positions of child nodes relative to their parent: for each vertex v, it recursively computes the horizontal displacement of the left and right children of v with respect to v. The second traversal (preorder) fixes absolute positions by accumulating the displacements on the path from each vertex to the root for the x-coordinate, and by considering the depth of each vertex for the y-coordinate.
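The two traversals can be sketched as follows. This toy version is my own condensation, not the authors' code: it keeps explicit per-level extents instead of threaded contours, so it runs in quadratic rather than linear time:

```python
class RTNode:
    """Binary-tree node carrying a displacement relative to its parent."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right
        self.offset = 0.0   # horizontal displacement relative to parent
        self.x = self.y = 0.0

def build(v):
    """Postorder pass: set child offsets; return per-level (min, max)
    horizontal extents of the subtree, with v placed at x = 0."""
    if v is None:
        return []
    L, R = build(v.left), build(v.right)
    if L and R:
        # Push the subtrees apart until they are 2 units apart at the
        # closest of the levels they share.
        half = (max(L[i][1] - R[i][0]
                    for i in range(min(len(L), len(R)))) + 2) / 2
        v.left.offset, v.right.offset = -half, half
        ext = [(0.0, 0.0)]
        for i in range(max(len(L), len(R))):
            lo = L[i][0] - half if i < len(L) else R[i][0] + half
            hi = R[i][1] + half if i < len(R) else L[i][1] - half
            ext.append((lo, hi))
        return ext
    if L or R:
        child = v.left or v.right
        child.offset = -1.0 if v.left else 1.0  # distance 1 from the root
        d = child.offset
        return [(0.0, 0.0)] + [(a + d, b + d) for a, b in (L or R)]
    return [(0.0, 0.0)]

def assign(v, x=0.0, depth=0):
    """Preorder pass: accumulate offsets into absolute coordinates."""
    if v is None:
        return
    v.x, v.y = x, depth
    for c in (v.left, v.right):
        if c is not None:
            assign(c, x + c.offset, depth + 1)

# Root with two leaf children: the leaves end up 2 apart, parent centred.
l2, r2 = RTNode(), RTNode()
root2 = RTNode(l2, r2)
build(root2)
assign(root2)
print((l2.x, l2.y), (root2.x, root2.y), (r2.x, r2.y))
```

With real contours and threads, the merge in `build` touches only the levels shared by the two subtrees, which is what yields the linear bound described next.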
The crucial idea of the algorithm is to keep track of the contour of the sub-trees by special pointers, called threads, such that whenever two sub-trees are joined, only the top part of the trees down to the lowest level of the smaller tree need to be taken into account. The nodes are positioned on a fixed grid and are considered to have zero width.
Part of the postorder-traversal recursion is the merging of the contours of the two subtrees. The left contour of a binary tree T with height h is the sequence of vertices v₀, ..., vₕ such that vᵢ is the leftmost vertex of T with depth i. The right contour is defined similarly.
The construction of the contour of the resulting tree can be done in the following way. Suppose that we have two subtrees T₁ and T₂ and the root vertex r, where T₁ and T₂ are the left and right subtrees of r respectively. Every subtree has a unique left and right contour, and the left and right contours of the resulting tree can be derived from the initial contours of the two subtrees. During the construction, we can have one of the following three cases:
1 If both subtrees have the same height h, then the left contour of the resulting tree will be the left contour of T₁ (left subtree) plus the rooted vertex r, and respectively the right contour will be the right contour of T₂ (right subtree) plus the vertex r.
2 If the height of the left subtree is less than the height of the right subtree, then the contours of the resulting tree are derived as follows. The right contour of the resulting tree is the right contour of the right subtree plus the rooted vertex r. The left contour is the concatenation of two portions plus the rooted vertex r: let h be the height of the left subtree, let u be the bottommost vertex of its left contour, and let w be the vertex of depth h+1 on the left contour of the right subtree. Then the left contour consists of the rooted vertex r, the left contour of the left subtree, and the portion of the left contour of the right subtree from vertex w down to its bottommost vertex. This case is illustrated in Figure 3.5.
3 The case in which the left subtree has greater height than the right subtree is analogous to the previous one.
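Putting the two traversals and the contour merge together, the layout can be sketched as follows. This is a simplified illustration, not the real RT implementation: it rebuilds contours as explicit per-level lists instead of using threads, so it can be quadratic in the worst case, but the postorder/preorder structure and the contour-merge cases are the same.

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right
        self.offset = 0          # horizontal displacement from the parent
        self.x = self.y = 0      # absolute coordinates (second pass)

def first_pass(v):
    """Postorder: fix each child's offset relative to v and return the
    subtree's left and right contours as per-level x-coordinates
    (relative to v)."""
    if v is None:
        return [], []
    ll, lr = first_pass(v.left)
    rl, rr = first_pass(v.right)
    if v.left and v.right:
        # Push the subtrees apart until the facing contours are at
        # horizontal distance >= 2, then center them under v.
        sep = max((a - b for a, b in zip(lr, rl)), default=0) + 2
        v.left.offset, v.right.offset = -sep / 2, sep / 2
    elif v.left:
        v.left.offset = -1
    elif v.right:
        v.right.offset = 1
    sl  = [c + v.left.offset for c in ll] if v.left else []
    slr = [c + v.left.offset for c in lr] if v.left else []
    srl = [c + v.right.offset for c in rl] if v.right else []
    sr  = [c + v.right.offset for c in rr] if v.right else []
    # Merge: prefer the left subtree for the left contour (and the right
    # subtree for the right one), then follow whichever subtree is deeper.
    return [0] + sl + srl[len(sl):], [0] + sr + slr[len(sr):]

def second_pass(v, x=0, depth=0):
    """Preorder: accumulate offsets into absolute coordinates."""
    if v is None:
        return
    v.x, v.y = x, -depth
    for c in (v.left, v.right):
        if c is not None:
            second_pass(c, x + c.offset, depth + 1)
```

The contour returns make the three merge cases above explicit: when one subtree is shallower, the slice past its height is taken from the other subtree's (shifted) contour.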

Incidentally, the modular approach taken by the RT algorithm is the reason it does not always produce drawings of minimum width among those satisfying the six aesthetic rules. As we can see in Figure 3.6, the drawing of the tree constructed by the RT algorithm has width 14. But as shown in Figure 3.7, the same tree can be drawn more narrowly (width 13) while still satisfying all six aesthetic rules. The local horizontal compaction at each conquer step of the RT algorithm does not always compute a drawing of minimal width. This problem can be solved in polynomial time using linear programming, but it becomes NP-hard if a grid drawing with integer coordinates is required.

**Figure 3.6**: Example tree derived from the RT algorithm with non-optimal area occupation.

**Figure 3.7**: A narrower drawing of the same tree with Figure 3.6.
The properties for the Layered-Binary-Tree-Draw algorithm are summarized in the theorem below.
**Theorem 3.1**: The Layered-Binary-Tree-Draw algorithm constructs a drawing of a binary tree $T$ with $n$ vertices in linear time that is:
- Layered (the $y$-coordinate of each vertex is equal to minus its depth)
- Planar, straight-line and strictly downward.
- Occupies $O(n^2)$ area
- Two vertices are at horizontal and vertical distance at least 1
- Isomorphic subtrees have congruent drawing up to a translation
- Parent vertex is centered with respect to its children
Although the RT algorithm only draws binary trees, it can be straightforwardly extended to draw multiway trees (Algorithm 3.3). A small imbalance problem arises, however, with the $x$-coordinate of a parent vertex that has more than two children: because the algorithm processes the children in left-to-right order, the resulting layered drawings can be imbalanced. So, as we can see in Figure 3.8, the drawing that results from applying the algorithm to this particular rooted tree is imbalanced.
**Algorithm 3.3**
*Layered-Tree-Draw*
**Input**: A tree $T$ with subtrees $T_1, T_2, ..., T_m$
**Output**: A layered drawing of $T$
- **Base**
- If $T$ has only one vertex, the drawing is trivial.
- **Divide**
- Recursively apply the algorithm to draw every subtree $T_i$.
- **Conquer**
- Move the drawings of the subtrees $T_1, \ldots, T_m$ until consecutive drawings are at horizontal distance 2. At the end, place the root vertically one level above and horizontally halfway between the roots of $T_1$ and $T_m$. If there is only one child, place the root at horizontal distance 1 from the child.
---
**Figure 3.8**: Imbalanced layered drawing of a tree.
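The conquer step of Layered-Tree-Draw can be sketched as follows. This is a hypothetical, simplified implementation: the `(label, children)` tuple format and the per-level contour lists are choices made for this illustration, and the single-child case simply centers the root over its child rather than using the distance-1 rule above.

```python
def layered_draw(node):
    """node is (label, [children]). Return (positions, left_contour,
    right_contour), with positions mapping label -> (x, y) relative to
    this subtree's root and contours as per-level x-extents."""
    label, kids = node
    if not kids:
        return {label: (0.0, 0)}, [0.0], [0.0]
    subs = [layered_draw(k) for k in kids]
    offsets, mleft, mright = [], None, None
    for _, l, r in subs:
        if mright is None:
            off, mleft, mright = 0.0, l[:], r[:]
        else:
            # Slide this subtree right until it is at horizontal
            # distance 2 from everything already placed.
            off = max(a - b for a, b in zip(mright, l)) + 2
            mleft = mleft + [c + off for c in l[len(mleft):]]
            sr = [c + off for c in r]
            mright = sr + mright[len(sr):]
        offsets.append(off)
    # Center the root between the first and last child roots, one level up.
    root_x = (offsets[0] + offsets[-1]) / 2
    pos = {label: (0.0, 0)}
    for (p, _, _), off in zip(subs, offsets):
        for k, (x, y) in p.items():
            pos[k] = (x + off - root_x, y - 1)
    return (pos,
            [0.0] + [c - root_x for c in mleft],
            [0.0] + [c - root_x for c in mright])
```

Because the children are packed strictly left to right, uneven subtrees shift the root off-center relative to the visual mass of the drawing, which is exactly the imbalance visible in Figure 3.8.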
The properties for the Layered-Tree-Draw algorithm are now extended and summarized in the theorem below.
**Theorem 3.2:** The Layered-Tree-Draw algorithm constructs a drawing of a tree $T$ with $n$ vertices in linear time that is:
- Layered (the $y$-coordinate of each vertex is equal to minus its depth)
- Planar, straight-line and strictly downward.
- Occupies $O(n^2)$ area
- Two vertices are at horizontal and vertical distance at least 1
- Isomorphic subtrees have congruent drawing up to a translation
- Axially isomorphic subtrees have congruent drawings, up to a translation and a reflection in $y$-axis
### 3.1.2 Radial Drawing
*Radial drawing* is an alternative way to draw rooted and free trees (trees with no specified root). In a radial drawing, the root (or the node chosen to act as the root) is placed at the center, and all descendant nodes are placed on concentric rings around it, as shown in the example tree of Figure 3.9. Vertices of depth $i$ are placed on circle $C_i$; as $i$ increases, so does the radius $r(i)$ of circle $C_i$. A radial drawing looks as if you were viewing a tree from above, with the branches radiating from the center. An important consideration is that the branches of the tree must not overlap.

To ensure that the edges will not overlap, the subtree rooted at a vertex $v$ is drawn bounded by an area called *annulus wedge*, because of its shape. An example of an annulus wedge is shown in Figure 3.10. If the angle of the wedge is greater than a certain limit, then
edge crossing may occur because an edge with endpoints within the wedge can extend outside and intersect with other edges, as shown in Figure 3.11. To guarantee planarity, vertices must be restricted to a convex subset of the annulus wedge.
**Figure 3.10:** The annulus wedge of a subtree, and the concentric rings around the root of the same tree as in Figure 3.9.
**Figure 3.11:** Edge escaping from an annulus wedge.
Suppose that we have a subtree rooted at vertex $v$ which is drawn in annulus wedge $W_v$. Let $l(v)$ be the number of leaves in the subtree. As shown in Figure 3.12, $v$ lies on $C_i$ and the tangent to $C_i$ through $v$ intersects $C_{i+1}$ at points $a$ and $b$. The unbounded segment $F_v$ formed by the line segment $ab$ and the rays from the origin through $a$ and $b$ is convex, and the
descendants of $v$ will be drawn inside this area. The children of $v$ are arranged on $C_{i+1}$ according to the number of leaves in their respective subtrees. Specifically, the angle $\beta_u$ of the wedge $W_u$ of each child $u$ is

$$\beta_u = \min\left(\frac{\ell(u)}{\ell(v)}\beta_v, \tau\right)$$

where $\beta_v$ is the angle of $W_v$ and $\tau$ is the angle formed by the region $F_v$. Note that $\cos(\tau/2) = r(i)/r(i+1)$, where $r(i)$ is the radius of circle $C_i$.
**Figure 3.12:** Convex subset of the annulus wedge.
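The angle computation above can be sketched directly. Here `children_leaves`, `total_leaves`, and the radii arguments are assumptions standing in for whatever bookkeeping the surrounding layout code maintains.

```python
import math

def wedge_angles(children_leaves, total_leaves, beta_v, r_i, r_next):
    """Split the parent wedge angle beta_v among the children in
    proportion to their leaf counts, capping each child's wedge at tau
    (the angle of the convex region F_v) to keep the drawing planar.

    children_leaves: leaf count of each child's subtree
    total_leaves:    leaf count of the parent's subtree
    r_i, r_next:     radii of circles C_i and C_(i+1), r_i < r_next
    """
    # cos(tau / 2) = r(i) / r(i+1), so:
    tau = 2 * math.acos(r_i / r_next)
    return [min(l * beta_v / total_leaves, tau) for l in children_leaves]
```

When the proportional share exceeds `tau`, the cap kicks in and the subtree is drawn in a narrower wedge than its leaf count alone would suggest, preventing the edge-escape situation of Figure 3.11.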
For a free tree, the root is selected such that the height of the resulting rooted tree is the minimum possible. A simple pruning algorithm can be used to find the center of the tree in linear time:
---
**Algorithm 3.4 Tree-Pruning**
*Input:* A tree $T$
*Output:* The root of tree $T$ such that the height of $T$ is the minimum possible.
1. *If the tree has at most two vertices, the center(s) have been found*
2. *Remove all the leaves, and goto 1*
---
If the pruning ends with a single vertex, that vertex is the unique center; if it ends with two adjacent vertices, the root can be taken to correspond to the midpoint of the edge joining them.
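Algorithm 3.4 can be implemented by repeatedly stripping the current layer of leaves until at most two vertices remain. The adjacency-list input format below is an assumption for this sketch.

```python
from collections import defaultdict

def tree_centers(n, edges):
    """n vertices labeled 0..n-1, edges as (u, v) pairs.
    Returns the one or two center vertices of the tree."""
    if n <= 2:
        return list(range(n))
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(range(n))
    leaves = [v for v in remaining if len(adj[v]) == 1]
    while len(remaining) > 2:
        new_leaves = []
        for leaf in leaves:                 # remove the whole layer at once
            remaining.discard(leaf)
            for nb in adj[leaf]:
                adj[nb].discard(leaf)
                if nb in remaining and len(adj[nb]) == 1:
                    new_leaves.append(nb)
            adj[leaf].clear()
        leaves = new_leaves
    return sorted(remaining)
```

Each vertex is removed exactly once and each edge is touched a constant number of times, which gives the linear running time claimed above.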
### 3.1.3 HV-Drawing
The drawing of a rooted binary tree using the hv-drawing convention is a planar grid drawing in which tree nodes are represented as points (with integer coordinates) in the plane and tree edges as non-overlapping vertical or horizontal line segments. Moreover, each node is placed immediately to the right of or immediately below its parent, and the drawings of subtrees rooted at nodes with the same parent are non-overlapping. Figure 3.13 shows an example of an hv-drawing of a binary tree.
**Figure 3.13:** An hv-drawing of a binary tree.
Different hv-drawings of the same tree can be of different quality. The quality (or cost) is a function of the drawing. The most commonly used cost function is the area of the enclosing rectangle of the drawing. For a general binary tree, it is possible to construct an hv-drawing that is optimal with respect to one of several cost measures, including area and perimeter, in \( O(n^2) \) time. We can compute an optimal hv-drawing of a tree with \( n \) nodes with respect to a cost function \( w(x, y) \) which is non-decreasing in both parameters \( x, y \) where \( x \) and \( y \) are the width and the height of the enclosing rectangle of the drawing, respectively. Algorithm 3.4 is a general divide-and-conquer algorithm for constructing hv-drawings.
Algorithm 3.4 HV-Tree-Draw
**Input:** A rooted binary tree \( T \)
**Output:** An hv-drawing of \( T \)
- **Base**
If \( T \) has only one vertex, the drawing is trivial.
- **Divide**
Recursively construct hv-drawings for both left and right subtrees.
- **Conquer**
Perform either a horizontal combination, as shown in Figure 3.14.a or a vertical combination, as shown in Figure 3.14.b.
At the conquer step of Algorithm 3.4, we have two options for drawing the subtrees of a node $u$, as shown in Figure 3.14. In a horizontal combination, one child of node $u$ is horizontally aligned with and to the right of $u$, while the other child is vertically aligned with and immediately below $u$, as shown in Figure 3.14.a. In a vertical combination, one child of $u$ is vertically aligned with and below $u$, while the other child is horizontally aligned with and immediately to the right of $u$, as shown in Figure 3.14.b.
It is also easy to verify that if, in the horizontal combination, every subtree is placed to the left of every other subtree, then the width of the final hv-drawing is at most $n-1$, where $n$ is the total number of vertices of all the subtrees. The same holds for the vertical combination.
During the construction of the hv-drawing we may choose to perform only horizontal combinations, which leads to a drawing of non-optimal area. A better way is to use both horizontal and vertical combinations: for example, horizontal combinations for subtrees rooted at vertices of odd depth, and vertical combinations for the others. This leads to a balanced drawing of area $O(n)$ and aspect ratio $O(1)$ (the shape of the occupied area tends toward a square).
There is a simple specialization of the above algorithm which is called Right-Heavy-HV-Tree-Draw (Algorithm 3.5). In this approach, at the conquer step we perform only horizontal combinations and place the largest subtree to the right of the smallest subtree. Figure 3.15
shows an example of an hv-drawing of a binary tree constructed by Algorithm *Right-Heavy-HV-Tree-Draw*. For a binary tree $T$ with $n$ vertices, the height of the drawing of $T$ constructed by *Right-Heavy-HV-Tree-Draw* is $O(\log n)$.
---
**Algorithm 3.5** Right-Heavy-HV-Tree-Draw
*Input:* A binary tree $T$
*Output:* An hv-drawing of $T$
- **Base**
*If $T$ has only one vertex, the drawing is trivial.*
- **Divide**
Recursively construct hv-drawings for both left and right subtrees.
- **Conquer**
Perform a horizontal combination by placing the subtree with the largest number of vertices to the right of the other one.
---
**Figure 3.15:** An example hv-drawing constructed by Algorithm 3.5 Right-Heavy-HV-Tree-Draw.
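Algorithm 3.5 can be sketched as follows. The `(label, left, right)` tuple format and the returned bounding-box convention (root at the origin, $x$ growing rightward, $y$ growing downward) are choices made for this illustration.

```python
def right_heavy_hv(tree):
    """tree is (label, left, right), with None for an absent child.
    Return (positions, width, height) of the hv-drawing, where positions
    maps label -> (x, y) with the root at (0, 0)."""
    label, left, right = tree
    subs = [right_heavy_hv(t) for t in (left, right) if t is not None]
    if not subs:
        return {label: (0, 0)}, 1, 1
    if len(subs) == 1:
        # A single child goes immediately to the right of the root.
        (sub, w, h), = subs
        pos = {label: (0, 0)}
        pos.update({k: (x + 1, y) for k, (x, y) in sub.items()})
        return pos, w + 1, h
    subs.sort(key=lambda s: len(s[0]))          # smaller subtree first
    (small, sw, sh), (big, bw, bh) = subs
    pos = {label: (0, 0)}
    # Horizontal combination: smaller subtree below the root, larger
    # subtree to the right of the smaller one, aligned with the root.
    pos.update({k: (x, y + 1) for k, (x, y) in small.items()})
    pos.update({k: (x + sw, y) for k, (x, y) in big.items()})
    return pos, sw + bw, max(sh + 1, bh)
```

Since the larger subtree never contributes to the height beyond its own drawing, each level of recursion at most halves the vertex count along the vertical direction, which is where the $O(\log n)$ height bound comes from.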
**Theorem 3.3:** Right-Heavy-HV-Tree-Draw algorithm constructs a drawing of a tree $T$ with $n$ vertices in linear time that is:
- **Downward, planar, grid, straight-line, and orthogonal, in other words, an hv-drawing.**
- Occupies $O(n\log n)$ area
- Its width is at most $n-1$
- Its height is at most $\log n$
- Axially isomorphic subtrees have congruent drawings, up to a translation and a reflection in $y$-axis
In general, the main difficulty in constructing an hv-drawing of a tree is deciding how many times to apply the horizontal and the vertical combination, since this choice determines the area of the resulting hv-drawing. Because of the imbalanced aspect ratio of the drawings constructed by the Right-Heavy-HV-Tree-Draw algorithm, a good approach is to use horizontal combinations for subtrees rooted at vertices of odd depth, and vertical combinations for subtrees rooted at vertices of even depth. The resulting drawing occupies $O(n)$ area.
Algorithm 3.5 Right-Heavy-HV-Tree-Draw can be easily extended from binary trees to general rooted trees, as shown in Figure 3.16. In this case, slanted lines are allowed to connect vertices of different levels.
**Figure 3.16**: Extended version of Algorithm 3.5 Right-Heavy-HV-Tree-Draw to draw general rooted trees.
**Theorem 3.4**: There exists an algorithm which constructs a drawing of a tree $T$ with $n$ vertices in linear time that is:
- Downward, planar, grid and straight-line
- Occupies $O(n \log n)$ area
- Its width is at most $n-1$
- Its height is at most $\log n$
- Axially isomorphic subtrees have congruent drawings, up to a translation and a reflection in $y$-axis
HTTP Transport for Trusted Execution Environment Provisioning: Agent-to-TAM Communication
draft-ietf-teep-otrp-over-http-03
Abstract
The Trusted Execution Environment Provisioning (TEEP) Protocol is used to manage code and configuration data in a Trusted Execution Environment (TEE). This document specifies the HTTP transport for TEEP communication where a Trusted Application Manager (TAM) service is used to manage TEEs in devices that can initiate communication to the TAM. An implementation of this document can (if desired) run outside of any TEE, but interacts with a TEEP implementation that runs inside a TEE.
Status of This Memo
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on May 7, 2020.
Copyright Notice
Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
1. Introduction
Trusted Execution Environments (TEEs), including environments based on Intel SGX, ARM TrustZone, Secure Elements, and others, enforce that only authorized code can execute within the TEE, and any memory used by such code is protected against tampering or disclosure outside the TEE. The Trusted Execution Environment Provisioning (TEEP) protocol is designed to provision authorized code and configuration into TEEs.
To be secure against malware, a TEEP implementation (referred to as a TEEP "Agent" on the client side, and a "Trusted Application Manager (TAM)" on the server side) must themselves run inside a TEE. However, the transport for TEEP, along with the underlying TCP/IP stack, does not necessarily run inside a TEE. This split allows the set of highly trusted code to be kept as small as possible, including
allowing code (e.g., TCP/IP) that only sees encrypted messages, to be kept out of the TEE.
The TEEP specification [I-D.tschofenig-teep-protocol] (and its predecessors [I-D.ietf-teep-opentrustprotocol] and [GP-OTrP]) describes the behavior of TEEP Agents and TAMs, but does not specify the details of the transport. The purpose of this document is to provide such details. That is, a TEEP-over-HTTP (TEEP/HTTP) implementation delivers messages up to a TEEP implementation, and accepts messages from the TEEP implementation to be sent over a network. The TEEP-over-HTTP implementation can be implemented either outside a TEE (i.e., in a TEEP "Broker") or inside a TEE.
There are two topological scenarios in which TEEP could be deployed:
1. TAMs are reachable on the Internet, and Agents are on networks that might be behind a firewall, so that communication must be initiated by an Agent. Thus, the Agent has an HTTP Client and the TAM has an HTTP Server.
2. Agents are reachable on the Internet, and TAMs are on networks that might be behind a firewall, so that communication must be initiated by a TAM. Thus, the Agent has an HTTP Server and the TAM has an HTTP Client.
The remainder of this document focuses primarily on the first scenario as depicted in Figure 1, but some sections (Section 4 and Section 8) may apply to the second scenario as well. A fuller discussion of the second scenario may be handled by a separate document.
```
+------------------+ TEEP +------------------+
| TEEP Agent | <----------------------> | TAM |
+------------------+ +------------------+
| |
+------------------+ TEEP-over-HTTP +------------------+
| TEEP/HTTP Client | <----------------------> | TEEP/HTTP Server |
+------------------+ +------------------+
| |
+------------------+ HTTP +------------------+
| HTTP Client | <----------------------> | HTTP Server |
+------------------+ +------------------+
```
Figure 1: Agent-to-TAM Communication
2. Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
This document also uses various terms defined in [I-D.ietf-teep-architecture], including Trusted Execution Environment (TEE), Trusted Application (TA), Trusted Application Manager (TAM), TEEP Agent, and TEEP Broker, and Rich Execution Environment (REE).
3. TEEP Broker Models
Section 6 of the TEEP architecture [I-D.ietf-teep-architecture] defines a TEEP "Broker" as being a component on the device, but outside the TEE, that facilitates communication with a TAM. As depicted in Figure 2, there are multiple ways in which this can be implemented, with more or fewer layers being inside the TEE. For example, in model A, the model with the smallest TEE footprint, only the TEEP implementation is inside the TEE, whereas the TEEP/HTTP implementation is in the TEEP Broker outside the TEE.
Figure 2: TEEP Broker Models
In other models, additional layers are moved into the TEE, increasing the TEE footprint, with the Broker either containing or calling the topmost protocol layer outside of the TEE. An implementation is free to choose any of these models, although model A is the one we will use in our examples.
Passing information from an REE component to a TEE component is typically spoken of as being passed "in" to the TEE, and information passed in the opposite direction is spoken of as being passed "out". In the protocol layering sense, information is typically spoken of as being passed "up" or "down" the stack. Since the layer at which information is passed in/out may vary by implementation, we will generally use "up" and "down" in this document.
3.1. Use of Abstract APIs
This document refers to various APIs between a TEEP implementation and a TEEP/HTTP implementation in the abstract, meaning the literal syntax and programming language are not specified, so that various concrete APIs can be designed (outside of the IETF) that are compliant.
Some TEE architectures (e.g., SGX) may support API calls both into and out of a TEE. In other TEE architectures, there may be no calls out from a TEE, but merely data returned from calls into a TEE. This document attempts to be agnostic as to the concrete API architecture for Broker/Agent communication. Since in model A, the Broker/Agent communication is done at the layer between the TEEP and TEEP/HTTP implementations, and there may be some architectures that do not support calls out of the TEE (which would be downcalls from TEEP in model A), we will refer to passing information up to the TEEP implementation as API calls, but will simply refer to "passing data" back down from a TEEP implementation. A concrete API might pass data back via an API downcall or via data returned from an API upcall.
This document will also refer to passing "no" data back out of a TEEP implementation. In a concrete API, this might be implemented by not making any downcall, or by returning 0 bytes from an upcall, for example.
4. Use of HTTP as a Transport
This document uses HTTP [I-D.ietf-httpbis-semantics] as a transport. When not called out explicitly in this document, all implementation recommendations in [I-D.ietf-httpbis-bcp56bis] apply to use of HTTP by TEEP.
Redirects MAY be automatically followed, and no additional request headers beyond those specified by HTTP need be modified or removed upon following such a redirect.
Content is not intended to be treated as active by browsers and so HTTP responses with content SHOULD have the following headers as explained in Section 4.12 of [I-D.ietf-httpbis-bcp56bis] (replacing the content type with the relevant TEEP content type per the TEEP specification):
- Content-Type: <content type>
- Cache-Control: no-store
- X-Content-Type-Options: nosniff
- Content-Security-Policy: default-src 'none'
- Referrer-Policy: no-referrer
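The recommended headers can be collected in one place. A minimal sketch follows; the media type string "application/teep+cbor" is an assumption standing in for whatever content type the TEEP specification registers.

```python
# Hypothetical placeholder for the TEEP media type; substitute the
# content type defined by the TEEP specification.
TEEP_MEDIA_TYPE = "application/teep+cbor"

def tam_response_headers(content_type=TEEP_MEDIA_TYPE):
    """Headers for an HTTP response whose body is a TEEP message, per
    the recommendations above (Section 4.12 of httpbis-bcp56bis)."""
    return {
        "Content-Type": content_type,
        "Cache-Control": "no-store",
        "X-Content-Type-Options": "nosniff",
        "Content-Security-Policy": "default-src 'none'",
        "Referrer-Policy": "no-referrer",
    }
```

A TAM implementation would emit these on every non-empty response so that a browser accidentally pointed at a TAM URI never treats the content as active.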
Only the POST method is specified for TAM resources exposed over HTTP. A URI of such a resource is referred to as a "TAM URI". A TAM URI can be any HTTP(S) URI. The URI to use is configured in a TEEP Agent via an out-of-band mechanism, as discussed in the next section.
When HTTPS is used, TLS certificates MUST be checked according to [RFC2818].
5. TEEP/HTTP Client Behavior
5.1. Receiving a request to install a new Trusted Application
In some environments, an application installer can determine (e.g., from an app manifest) that the application being installed or updated has a dependency on a given Trusted Application (TA) being available in a given type of TEE. In such a case, it will notify a TEEP Broker, where the notification will contain the following:
- A unique identifier of the TA
- Optionally, any metadata to provide to the TEEP implementation. This might include a TAM URI provided in the application manifest, for example.
- Optionally, any requirements that may affect the choice of TEE, if multiple are available to the TEEP Broker.
When a TEEP Broker receives such a notification, it first identifies in an implementation-dependent way which TEE (if any) is most appropriate based on the constraints expressed. If there is only one TEE, the choice is obvious. Otherwise, the choice might be based on factors such as capabilities of available TEE(s) compared with TEE requirements in the notification. Once the TEEP Broker picks a TEE, it passes the notification to the TEEP/HTTP Client for that TEE.
The TEEP/HTTP Client then informs the TEEP implementation in that TEE by invoking an appropriate "RequestTA" API that identifies the TA needed and any other associated metadata. The TEEP/HTTP Client need not know whether the TEE already has such a TA installed or whether it is up to date.
The TEEP implementation will either (a) pass no data back, (b) pass back a TAM URI to connect to, or (c) pass back a message buffer and TAM URI to send it to. The TAM URI passed back may or may not be the same as the TAM URI, if any, provided by the TEEP/HTTP Client, depending on the TEEP implementation’s configuration. If they differ, the TEEP/HTTP Client MUST use the TAM URI passed back.
5.1.1. Session Creation
If no data is passed back, the TEEP/HTTP Client simply informs its caller (e.g., the application installer) of success.
If the TEEP implementation passes back a TAM URI with no message buffer, the TEEP/HTTP Client attempts to create session state, then sends an HTTP(S) POST to the TAM URI with an Accept header and an
empty body. The HTTP request is then associated with the TEEP/HTTP Client’s session state.
If the TEEP implementation instead passes back a TAM URI with a message buffer, the TEEP/HTTP Client attempts to create session state and handles the message buffer as specified in Section 5.2.
Session state consists of:
- Any context (e.g., a handle) that identifies the API session with the TEEP implementation.
- Any context that identifies an HTTP request, if one is outstanding. Initially, none exists.
5.2. Getting a message buffer back from a TEEP implementation
When a TEEP implementation passes a message buffer (and TAM URI) to a TEEP/HTTP Client, the TEEP/HTTP Client MUST do the following, using the TEEP/HTTP Client’s session state associated with its API call to the TEEP implementation.
The TEEP/HTTP Client sends an HTTP POST request to the TAM URI with Accept and Content-Type headers with the TEEP media type in use, and a body containing the TEEP message buffer provided by the TEEP implementation. The HTTP request is then associated with the TEEP/HTTP Client’s session state.
5.3. Receiving an HTTP response
When an HTTP response is received in response to a request associated with a given session state, the TEEP/HTTP Client MUST do the following.
If the HTTP response body is empty, the TEEP/HTTP Client’s task is complete, and it can delete its session state.
If instead the HTTP response body is not empty, the TEEP/HTTP Client passes (e.g., using "ProcessOTrPMessage" API as mentioned in Section 6.2 of [I-D.ietf-teep-opentrustprotocol] if OTrP rather than TEEP is used for provisioning) the response body up to the TEEP implementation associated with the session. The TEEP implementation will then either pass no data back, or pass back a message buffer.
If no data is passed back, the TEEP/HTTP Client’s task is complete, and it can delete its session state, and inform its caller (e.g., the application installer) of success.
If instead the TEEP implementation passes back a message buffer, the TEEP/HTTP Client handles the message buffer as specified in Section 5.2.
5.4. Handling checks for policy changes
An implementation MUST provide a way to periodically check for TEEP policy changes. This can be done in any implementation-specific manner, such as:
A) The TEEP/HTTP Client might call up to the TEEP implementation at an interval previously specified by the TEEP implementation. This approach requires that the TEEP/HTTP Client be capable of running a periodic timer.
B) The TEEP/HTTP Client might be informed when an existing TA is invoked, and call up to the TEEP implementation if more time has passed than was previously specified by the TEEP implementation. This approach allows the device to go to sleep for a potentially long period of time.
C) The TEEP/HTTP Client might be informed when any attestation attempt determines that the device is out of compliance, and call up to the TEEP implementation to remediate.
The TEEP/HTTP Client informs the TEEP implementation by invoking an appropriate "RequestPolicyCheck" API. The TEEP implementation will either (a) pass no data back, (b) pass back a TAM URI to connect to, or (c) pass back a message buffer and TAM URI to send it to. Processing then continues as specified in Section 5.1.1.
5.5. Error handling
If any local error occurs where the TEEP/HTTP Client cannot get a message buffer (empty or not) back from the TEEP implementation, the TEEP/HTTP Client deletes its session state, and informs its caller (e.g., the application installer) of a failure.
If any HTTP request results in an HTTP error response or a lower layer error (e.g., network unreachable), the TEEP/HTTP Client calls the TEEP implementation’s "ProcessError" API, and then deletes its session state and informs its caller of a failure.
6. TEEP/HTTP Server Behavior
6.1. Receiving an HTTP POST request
When an HTTP POST request is received with an empty body, the TEEP/HTTP Server invokes the TAM’s "ProcessConnect" API. The TAM will then pass back a (possibly empty) message buffer.
When an HTTP POST request is received with a non-empty body, the TEEP/HTTP Server passes the request body to the TAM (e.g., using the "ProcessOTrPMessage" API mentioned in [I-D.ietf-teep-opentrustprotocol] if OTrP rather than TEEP is used for provisioning). The TAM will then pass back a (possibly empty) message buffer.
6.2. Getting an empty buffer back from the TEEP implementation
If the TEEP implementation passes back an empty buffer, the TEEP/HTTP Server sends a successful (2xx) response with no body.
6.3. Getting a message buffer from the TEEP implementation
If the TEEP implementation passes back a non-empty buffer, the TEEP/HTTP Server generates a successful (2xx) response with a Content-Type header with the appropriate media type in use, and with the message buffer as the body.
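Sections 6.2 and 6.3 together amount to a simple mapping from the buffer returned by the TAM to an HTTP response. A non-normative sketch, with a hypothetical helper name:

```python
# Sketch of Sections 6.2-6.3: map the (possibly empty) buffer passed
# back by the TAM to an HTTP status, headers and body.
def make_tam_response(buffer: bytes, media_type: str):
    if not buffer:
        # Empty buffer: successful response with no body
        # (e.g., 204 No Content).
        return 204, {}, b""
    # Non-empty buffer: successful response carrying the message,
    # with the appropriate media type.
    return 200, {"Content-Type": media_type,
                 "Content-Length": str(len(buffer))}, buffer
```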
6.4. Error handling
If any error occurs where the TEEP/HTTP Server cannot get a message buffer (empty or not) back from the TEEP implementation, the TEEP/HTTP Server generates an appropriate HTTP error response.
7. Sample message flow
The following shows a sample TEEP message flow that uses application/teep+json as the Content-Type.
1. An application installer determines (e.g., from an app manifest) that the application has a dependency on TA "X", and passes this notification to the TEEP Broker. The TEEP Broker picks a TEE (e.g., the only one available) based on this notification, and passes the information to the TEEP/HTTP Client for that TEE.
2. The TEEP/HTTP Client calls the TEEP implementation’s "RequestTA" API, passing TA Needed = X.
3. The TEEP implementation finds that no such TA is already installed, but that it can be obtained from a given TAM. The TEEP Agent passes the TAM URI (e.g., "https://example.com/tam")
to the TEEP/HTTP Client. (If the TEEP implementation already had a cached TAM certificate that it trusts, it could skip to step 9 instead and generate a QueryResponse.)
4. The TEEP/HTTP Client sends an HTTP POST request to the TAM URI:
```plaintext
POST /tam HTTP/1.1
Host: example.com
Accept: application/teep+json
Content-Length: 0
User-Agent: Foo/1.0
```
5. On the TAM side, the TEEP/HTTP Server receives the HTTP POST request, and calls the TEEP implementation’s "ProcessConnect" API.
6. The TEEP implementation generates a TEEP message (where typically QueryRequest is the first message) and passes it to the TEEP/HTTP Server.
7. The TEEP/HTTP Server sends an HTTP successful response with the TEEP message in the body:
```plaintext
HTTP/1.1 200 OK
Content-Type: application/teep+json
Content-Length: [length of TEEP message here]
Server: Bar/2.2
Cache-Control: no-store
X-Content-Type-Options: nosniff
Content-Security-Policy: default-src 'none'
Referrer-Policy: no-referrer

[TEEP message here]
```
8. Back on the TEEP Agent side, the TEEP/HTTP Client gets the HTTP response, extracts the TEEP message, and passes it up to the TEEP implementation.
9. The TEEP implementation processes the TEEP message, and generates a TEEP response (e.g., QueryResponse) which it passes back to the TEEP/HTTP Client.
10. The TEEP/HTTP Client gets the TEEP message buffer and sends an HTTP POST request to the TAM URI, with the TEEP message in the body:
```plaintext
POST /tam HTTP/1.1
Host: example.com
Accept: application/teep+json
Content-Type: application/teep+json
Content-Length: [length of TEEP message here]
User-Agent: Foo/1.0

[TEEP message here]
```
11. The TEEP/HTTP Server receives the HTTP POST request, and passes the payload up to the TAM implementation.
12. Steps 6-11 are then repeated until the TEEP implementation passes no data back to the TEEP/HTTP Server in step 6.
13. The TEEP/HTTP Server sends an HTTP successful response with no body:
```plaintext
HTTP/1.1 204 No Content
Server: Bar/2.2
```
14. The TEEP/HTTP Client deletes its session state.
8. Security Considerations
Although TEEP is protected end-to-end inside of HTTP, there is still value in using HTTPS for transport, since HTTPS can provide additional protections as discussed in Section 6 of [I-D.ietf-httpbis-bcp56bis]. As such, TEEP/HTTP implementations MUST support HTTPS. The choice of HTTP vs HTTPS at runtime is up to policy, where an administrator configures the TAM URI to be used, but it is expected that real deployments will always use HTTPS TAM URIs.
9. IANA Considerations
This document has no actions for IANA.
10. References
10.1. Normative References
[I-D.ietf-httpbis-semantics]
[I-D.ietf-teep-opentrustprotocol]
[I-D.tschofenig-teep-protocol]
10.2. Informative References
[I-D.ietf-httpbis-bcp56bis]
Author’s Address
Dave Thaler
Microsoft
EMail: dthaler@microsoft.com
ABSTRACT
Conceptual modeling is of great importance not only to Information Systems and Software Engineering, but also to Simulation Engineering. It is concerned with identifying, analyzing and describing the essential concepts and constraints of a real-world domain with the help of a (diagrammatic) modeling language that is based on a set of basic modeling concepts (forming a metamodel). In this tutorial, we introduce the ontologically well-founded conceptual modeling language Onto-UML and show how to use it for making conceptual simulation models as the basis of model-driven simulation engineering.
1 INTRODUCTION
Even though there is a common agreement that conceptual modeling is an important first step in a simulation engineering project, at the same time it is thought to be the least understood part of simulation engineering (Tako et al. 2010). In a recent panel discussion on conceptual modeling in simulation (Zee et al. 2010), the participants agreed that there is a lack of “standards, on procedures, notation, and model qualities”.
In Section 2, we therefore first resort to the disciplines of Information Systems and Software Engineering (IS&SE) for better understanding the possible purposes, languages and techniques of conceptual modeling and for answering the question of what is a conceptual model (CM). Essentially, conceptual modeling is an activity performed in the analysis phase of a software or simulation engineering project. Its main purpose is to capture, as faithfully as possible, a relevant part of the real-world domain under consideration, using a well-defined (typically diagrammatic) modeling language for making a CM in the form of a digital artifact.
This tutorial is based on our previous research on ontological foundations of conceptual modeling, reported in (Guizzardi et al. 2003, Guizzardi 2005, Guizzardi and Halpin 2008, Guizzardi and Wagner 2010a, Guizzardi and Wagner 2010b, Guizzardi and Wagner 2011a, Guizzardi and Wagner 2011b, Guizzardi 2011).
The main benefit obtained from establishing the ontological foundations of the core concepts of a conceptual modeling language is a clarification of its real world semantics. A clearly defined semantics of the conceptual model of a domain leads to a higher overall quality of the simulation software program built upon that model with respect to comprehensibility, maintainability, interoperability and evolvability.
2 MODEL-DRIVEN SOFTWARE AND SIMULATION ENGINEERING
Model-driven Engineering (MDE), also called model-driven development, is a well-established paradigm in IS&SE, see, e.g., the Model-Driven Architecture proposal of the Object Management Group (MDA 2012). Since simulation engineering can be viewed as a special case of software engineering, it is natural to apply the ideas of MDE also to simulation engineering. There have been several proposals of using an
MDE approach in Modeling and Simulation (M&S), see, e.g., the overview given in (Cetinkaya and Verbraeck 2011).
2.1 Models in Software Engineering and Information Systems Engineering
Historically, research in conceptual modeling has first been carried out in the computer science field of Database Systems. It started with two proposals for a conceptual data modeling language: the semantic model proposed by (Abrial, 1974) and the entity-relationship (ER) model proposed by (Chen, 1976), which triggered the series of ER conferences (ER 2012) starting in 1979. Later it was noticed that conceptual modeling, e.g., in the forms of enterprise modeling and business process modeling, plays an important role in software engineering, in general.
In MDE there is a clear distinction between three kinds of models as engineering artifacts resulting from corresponding activities in the analysis, design and implementation phases:
1. **domain models** (also called ‘computation-independent’ models);
2. platform-independent **design models**;
3. platform-specific **implementation models**.
Domain models are solution-independent descriptions of a problem domain produced in the analysis phase of a software engineering project. The term ‘domain model’ is synonymous with the term ‘conceptual model’. A domain model may include both descriptions of the domain’s state structure (in conceptual information models) and descriptions of its processes (in conceptual process models). They are solution-independent, or ‘computation-independent’, in the sense that they are not concerned with making any system design choices or with other computational issues. Rather, they focus on the perspective and language of the subject matter experts for the domain under consideration.
In the design phase, first a platform-independent design model, as a general computational solution, is developed on the basis of the domain model. The same domain model can potentially be used to produce a number of (even radically) different design models. Then, by taking into consideration a number of implementation issues ranging from architectural styles, nonfunctional quality criteria to be maximized (e.g., performance, adaptability) and target technology platforms, one or more platform-specific implementation models are derived from the design model.
In the implementation phase, an implementation model is encoded in the programming language of the target platform. Finally, after testing and debugging, the implemented solution is then deployed in a target environment.
A model for a software (or information) system, which may be called a ‘software system model’, does not consist of just one model diagram including all viewpoints or aspects of the system to be developed (or to be documented). Rather, it consists of a set of models, one (or more) for each viewpoint. The two most important viewpoints, crosscutting all three modeling levels: domain, design and implementation, are
1. **information modeling**, which is concerned with the state structure of the domain;
2. **process modeling**, which is concerned with the dynamics of the domain.
In the computer science field of database engineering, which is only concerned with information modeling, domain information models have been called ‘conceptual models’, information design models have been called ‘logical design models’, and database implementation models have been called ‘physical design models’.
Examples of widely used languages for information modeling are Entity Relationship Diagrams and UML Class Diagrams, which subsume the former. Examples of widely used languages for process modeling are (Colored) Petri Nets, UML Activity Diagrams and the Business Process Modeling Notation (BPMN). Some modeling languages, such as UML Class Diagrams and BPMN, can be used on all three modeling levels in the form of tailored variants. Other languages have been designed for being used on one or two of these three levels only. E.g. Petri Nets cannot be used for conceptual process modeling, since they lack the required expressivity.
We illustrate the distinction between the three modeling levels with an example in Fig. 1. In a simple conceptual information model of a person, expressed as a UML class diagram, we require that any person has exactly one mother and one father (according to our understanding of reality), expressed by corresponding binary many-to-one associations, and we may not care about the data types of attributes, while we do care about the data types of attributes in the design model where we also make the design decision that it is not required that information about the father or mother of a person is available and, hence, turn the multiplicity of the father and mother association ends from “exactly one” to “zero or one”. Finally, in the Java implementation model, we specify Java-specific data types for attributes and we express the binary associations mother and father with corresponding reference properties.
[Figure 1: Information models of a Person at the conceptual, design and (Java) implementation levels.]
The fact that the mother/father associations in Fig. 1 are mandatory in the conceptual model, while they are optional in the design model, shows that the conceptual model is concerned with the real world, i.e. it takes an ontological perspective, while the design model is concerned with the representation of information about the real world, taking an epistemological perspective.
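The design-level decision discussed above can be illustrated in code. Fig. 1 uses Java for the implementation model; the sketch below uses Python purely for brevity and is not part of the tutorial's example: the optional mother/father associations become nullable reference properties.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative translation of Fig. 1's design decision: mother/father
# are mandatory (exactly one) in the conceptual model, but optional
# ("zero or one") at the design/implementation level, so information
# about them need not be available.
@dataclass
class Person:
    name: str
    mother: Optional["Person"] = None  # multiplicity 0..1 in the design model
    father: Optional["Person"] = None  # multiplicity 0..1 in the design model

alice = Person("Alice")                # parents unknown: allowed here
bob = Person("Bob", mother=alice)      # mother reference set
```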
2.2 Models in Simulation Engineering
Unlike in IS&SE, there is no agreed upon definition, or common understanding, of what is a CM in M&S. Unfortunately, the results achieved in the conceptual modeling field of IS&SE are often ignored by M&S researchers.
A recent panel discussion (Zee et al 2010) revealed that there are at least three different definitions of what is a CM in M&S:
1. **A document that states “what you will and will not include in the simulation and why”,** or “a repository of high-level conceptual constructs and knowledge specified in a variety of communicative forms (e.g., animation, audio, chart, diagram, drawing, equation, graph, image, text, and video)” intended to assist in the design of a simulation, as proposed by (Balci et al 2008);
2. **“A formal specification of a conceptualization”,** or “an ontological representation of the simulation that implements it” as proposed in (Turnitsa et al 2010), corresponding to what is called a domain model in MDE;
3. **“The specification of an executable simulation model”,** or “a non-software specific description of the computer simulation model” as proposed by (Robinson 2008), corresponding to what is called a design model in MDE;
Definition 1 reflects a view that is widespread in the M&S community, according to which conceptual modeling is not a well-defined activity resulting in one or more model diagrams expressed in conceptual modeling languages with a well-defined semantics, but rather a loosely defined term referring to all the activities that precede the implementation of a simulation model. In particular, in this view, the issue of information modeling is typically neglected, and only process models are used, often in the form of ad-hoc flow diagrams that are not expressed in a well-defined language. This approach is exemplified by (Ingalls 2008), where entity types are only discussed, but not modeled (e.g., in an Entity Relationship Diagram), and only an ad-hoc process model (a “logic flow” diagram) is presented.
Definition 2 comes closest to the view taken in this tutorial, while definition 3 seems to presuppose that there is nothing like a domain model and the modeling process starts right away with design modeling.
In the MDE approaches of (McGinnis and Ustun 2009) and (Cetinkaya et al 2011), it is proposed that the CM is to be transformed to a design model or to a simulation program. However, it should be clear from the nature of a CM as a solution-independent description of a domain, that a CM cannot be automatically transformed into a computational specification without human assistance.
Model-driven simulation engineering is based on the same kinds of models as model-driven software engineering: going from a domain model via a simulation design model to a simulation implementation model for the simulation platform of choice (or to several implementation models if there are several target simulation platforms). The specific concerns of simulation engineering, like, e.g., the concern to capture certain parts of the overall system dynamics with the help of random variables, do not affect the applicability of MDE principles. However, they may affect the modeling languages to be used.
We disagree with (Robinson 2011) who states that conceptual modeling “is not a science, but an art” suggesting to make a conceptual model in the form of a set of documents about the M&S project objectives, requirements and design assumptions, completely ignoring the results achieved in IS&SE. Rather, conceptual modeling, both in software and simulation engineering, should be considered an engineering discipline based on scientific research results and best practices.
2.3 Conceptual Modeling Languages
There are general purpose (domain-independent) modeling languages and domain-specific modeling languages. In the sequel, we simply say ‘modeling language’ instead of ‘general purpose modeling language’.
Discrete event simulation (DES) is concerned with the simulation of real-world systems that are conceived as discrete event systems (or ‘discrete dynamic systems’). Such conceptualizations of discrete systems are immaterial entities that only exist in the mind of the user (or a community of users) of a language. In order to be documented, communicated and analyzed they must be captured, i.e. represented in the form of a concrete artifact with the help of a language. The representation of a conceptualization in a language is called a model and the language used for its creation is called a modeling language. Notice that a discrete event system is placed within a real-world domain, which can be viewed as a system of systems. A domain model may, therefore, also be called a conceptual system model.
One of the main success factors of a conceptual modeling language lies in its ability to provide a set of modeling constructs that enable its users to directly express relevant domain concepts in an unambiguous manner. A conceptual (or domain) modeling language is a representation of a meta-conceptualization of (a viewpoint of) the real world. We could also say that the meta-conceptualization, which exists in the mind of the language designer, provides an interpretation of the domain modeling language.
A domain (or system) conceptualization, which exists in the mind of the modeler and contains a number of domain (or system) concepts, instantiates a meta-conceptualization. It is represented in the form of a domain (or conceptual system) model expressed in the conceptual modeling language representing the meta-conceptualization. This is illustrated by the diagram shown in Fig. 2.
For instance, for the simple conceptual model of a person shown on the left side of Fig. 1, which is a representation of a corresponding domain conceptualization of persons, a subset of UML class diagrams containing language elements for the meta-concepts of classes, attributes and binary associations only, is sufficient as a conceptual modeling language.
Due to their great expressivity and their wide adoption as modeling standards, UML Class Diagrams and BPMN seem to be the best choices for conceptual information and process modeling. However, since they have not been specifically designed for this purpose, we may have to restrict, modify and extend them in a suitable way. In fact, both an analysis of UML with respect to its suitability for conceptual modeling in (Guizzardi 2005) and an analysis of BPMN with respect to its suitability for agent-based DES modeling in (Guizzardi and Wagner 2011) have revealed a number of ambiguities and shortcomings that will have to be resolved and fixed for making these languages fit for conceptual modeling. This issue is further discussed in the next section.
Several authors, e.g. (Wagner et al 2009), (Cetinkaya et al 2011) and (Onggo and Karpat 2011), have proposed to use BPMN for discrete event simulation modeling and for agent-based modeling.
2.4 Section Summary
- It is natural to apply the general methodology of Model-Driven Engineering (MDE) also to simulation engineering.
- Unfortunately, the results obtained in the conceptual modeling field of IS&SE have largely been ignored in M&S. Still today, conceptual modeling is often confused with design modeling in M&S.
- A conceptual model is a solution-independent description of a problem domain expressed in a well-defined (diagrammatic) modeling language.
- In model-driven simulation engineering, we first make a conceptual system model, from which we derive a (platform-independent) simulation design model, which is then transformed into one or more (platform-specific) simulation models.
- A conceptual modeling language is a representation of a meta-conceptualization of a viewpoint of the real world. A system conceptualization, which exists in the mind of the modeler and contains a number of system concepts, instantiates a meta-conceptualization. It is represented in the form of a conceptual system model expressed in the conceptual modeling language representing the meta-conceptualization.
- We propose to use suitable variants of UML class modeling and BPMN, which are based on a foundational ontology, for conceptual information and process modeling.
3 ONTOLOGICAL FOUNDATIONS OF CONCEPTUAL SIMULATION MODELING
In the DES literature, it is often stated that DES is based on the concept of “entities flowing through the system”. E.g., this is the paradigm of an entire class of simulation software such as ARENA (ARENA 2012). However, the loose metaphor of a “flow” only applies to entities of certain types: events, messages, and physical objects may, in some sense, flow, while many entities of other types, such as buildings or organizations, do not flow in any sense. Also, subsuming these three different kinds of flows under one common term “entity flow” obscures their meanings. It is therefore highly questionable to associate DES with a “flow of entities”. Rather, one may say that DES is based on a flow of events.
A discrete event system (or discrete dynamic system) consists of:
- **objects** (of various types) whose states may be changed by
- **events** (of various types) occurring at times from a discrete set of time points.
For modeling a discrete event system, we have to do the following:
1. describe its **object types** and **event types**;
2. for any event type, specify the **state changes** of objects and the **follow-up events** caused by the occurrence of an event of that type.
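The two modeling steps above can be illustrated with a deliberately tiny, hypothetical discrete event system (a door toggled by button-push events); the object type, event type and causal law below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Door:                      # an object type with a state
    is_open: bool = False

@dataclass
class PushButton:                # an event type
    time: int                    # occurrence time (discrete)
    door: Door                   # participating object

def on_push_button(ev: PushButton):
    """Causal law for PushButton: toggle the participating door's
    state; no follow-up events in this tiny example."""
    ev.door.is_open = not ev.door.is_open
    return []                    # list of caused follow-up events

door = Door()
on_push_button(PushButton(time=1, door=door))  # door is now open
```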
In **Ontology**, which is the philosophical study of what there is, the following fundamental distinctions are made:
- there are entities (or individuals) and entity types, which are called ‘universals’ in philosophy;
- there are the following categories of entities: objects, trope individuals (existentially dependent entities such as qualities and relationships) and events.
We have discussed these ontological distinctions in depth in (Guizzardi 2005) and in (Guizzardi and Wagner 2010a), where we present our proposal of a Unified Foundational Ontology (UFO 2012) based on theories from Formal Ontology, Cognitive Psychology, Linguistics, Philosophy of Language and Philosophical Logics. In (Guizzardi and Wagner 2010b, Guizzardi and Wagner 2011a) we propose the discrete event system ontology DESO and its agent-based extension ABDESO, which are foundational ontologies based on UFO and tailored to the domain of (agent-based) DES.
For pragmatic reasons, we use the term ‘object’ ambiguously, both for objects in the narrow sense of Aristotelian substances, which are existentially independent entities that are founded on matter and may therefore be better called **physical objects**, and also for other kinds of objects in a broader sense, such as books and customer orders.
In the meta-model shown in Fig. 3 we summarize the ontological type categories of DESO that form the foundation of conceptual simulation modeling languages. Entity types classify entities, which are said to be their instances. An entity type may be the domain of attributes and reference properties, which are also entity types since their instances, attributions and references, are entities (in fact, they are trope individuals). The range of an attribute is a datatype, which is an abstract thing (namely a structure consisting of a symbol set as the datatype’s lexical space, an abstract set as its value space and a mapping from the lexical space to the value space). The range of a reference property is an entity type. Reference properties are (binary) relationship types. Since objects may participate in events, an event type may have a number of object types as participant types. Causal laws define the dynamics of a discrete event system (formally, they may be viewed as transition functions). Each causal law has one type of triggering event and zero or more types of resulting events.
In addition to a theory of object type categories, summarized in Section 3.1, UFO/DESO also contains theories of attribution, relationships, parthood, causality and agency. All of these theories are relevant to conceptual simulation modeling and are the basis for special modeling elements in Onto-UML, but for lack of space, we cannot report on them in this paper.
3.1 Different Categories of Object Types
Any object type is endowed with an application condition that allows us to use the object type for classifying objects, that is, to judge if an object is an instance of it. For being able to understand the issues of identity and dynamic classification, we need to make a number of distinctions between different categories of object types. We summarize the theory of object type categories presented in chapter 4 of (Guizzardi 2005).
### 3.1.1 Sortal Types and Mixin Types
A sortal type is an object type that is endowed with an object identity condition for its instances allowing us to judge if two of its instances are the same. The object types Person, Car, Dog, Child and Student are examples of sortal types.
Object types that are not sortal types are called mixin types. Examples of mixin types are RedThing and InsurableItem, as these object types do not provide any identity conditions for their instances, so we could not tell, for instance, if two red objects perceived at different times are the same or not.
We have the following two postulates about mixin types:
- **A mixin type cannot have direct instances.** This means that a mixin type M must have sortal subtypes, which are directly instantiated by the instances of M.
- **A mixin type cannot be a subtype of a sortal type.** This is a consequence of the fact that all subtypes of a sortal type are again sortal types since they inherit its object identity condition.
### 3.1.2 Kinds and Subkinds
We define the modal notions of rigidity and non-rigidity for being able to distinguish different categories of object types. An object type O is **rigid** if all instances of O are necessarily instances of O (as long as they exist). In other words, if x instantiates O in some possible world, then x must instantiate O in all possible worlds in which x exists.
A rigid sortal type may have rigid subtypes, which inherit its object identity condition. A top node in such a rigid sortal type hierarchy is called a **kind**, while a rigid subtype of a kind is called a **subkind**.
An important postulate of UFO is:
• **Every object must instantiate exactly one kind.**
Examples of kinds are Planet, Person and Organization. Examples of subkinds are FemalePerson, which is a rigid subtype of Person, and University, which is a rigid subtype of Organization.
### 3.1.3 Role Types and Phase Types
An object type $O$ is **anti-rigid** if no instance of $O$ is necessarily an instance of $O$. In other words, if $x$ instantiates $O$ in some possible world, then there is another possible world, in which $x$ exists, but does not instantiate $O$.
An anti-rigid sortal type is a subtype of a kind or a subkind and may be either a **phase type** or a **role type**. In the case of a phase type $P$, the specialization condition only depends on attributes of $P$. For instance, the phase type Child classifies persons within a certain age range. For a role type $R$, in contrast, the specialization condition depends on a relationship type involving $R$ and one or more other sortal types that participate in this relationship type. For instance, the role type Student classifies persons who are enrolled in a school. Here, the relationship type is ‘enrolled in’ and the other involved sortal type is School.
In the special case of an anti-rigid mixin type that is partitioned into role subtypes, we speak of a **role mixin**.
As a direct consequence of the above definitions, we obtain the following postulate:
- **A rigid object type cannot be a subtype of an anti-rigid object type.**
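Since rigidity and anti-rigidity are defined by quantifying over possible worlds, they can be checked mechanically on a finite world model. The following sketch is our own illustration (not part of UFO or its tooling): each possible world is a map from objects to the set of types they instantiate there, and absence from a world means the object does not exist in it.

```python
def is_rigid(type_name, worlds, objects):
    """O is rigid iff every object that instantiates O in some world
    instantiates O in every world in which it exists."""
    for x in objects:
        if any(type_name in w.get(x, set()) for w in worlds):
            for w in worlds:
                if x in w and type_name not in w[x]:
                    return False
    return True

def is_anti_rigid(type_name, worlds, objects):
    """O is anti-rigid iff no instance of O is necessarily an instance of O:
    every instance exists in some world without instantiating O."""
    for x in objects:
        if any(type_name in w.get(x, set()) for w in worlds):
            if not any(x in w and type_name not in w[x] for w in worlds):
                return False
    return True

# Two worlds: Alice is a Person in both, but a Student only in w1.
w1 = {"alice": {"Person", "Student"}}
w2 = {"alice": {"Person"}}
worlds, objects = [w1, w2], ["alice"]

print(is_rigid("Person", worlds, objects))        # True
print(is_rigid("Student", worlds, objects))       # False
print(is_anti_rigid("Student", worlds, objects))  # True
```

On this toy model, Person comes out rigid (a kind) and Student anti-rigid (a role type), matching the examples above.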

Notice that the particular object types chosen to exemplify the proposed type categories are used for illustration purposes only. For example, when categorizing the object type Person as a kind, we are not advocating that Person must be, in general, considered as a kind in conceptual modeling. Rather, the intention is to make the consequences of such a modeling choice explicit. The choice itself, however, is always left to the model designer.
### 3.2 The Conceptual Modeling Language Onto-UML
We use the UML extension mechanism of a **UML profile** for defining a conceptual modeling language whose elements represent the DESO type categories, including the different categories of object types discussed in Section 3.1. It is important to emphasize, however, that the language defined does not depend on UML, which is used here only for exploiting the convenience of its built-in profile extension mechanism and due to its wide adoption in computer science and its practical relevance. Alternatively, we could have proposed a new modeling language based on the same concepts. For an introduction to UML Class Diagrams, which is a prerequisite for understanding Onto-UML, the reader is referred to (???).
The proposed **Onto-UML profile** contains a set of stereotyped classes that support the design of ontologically well-founded conceptual models according to UFO and (AB)DESO. Moreover, the profile also
contains a number of constraints that are derived from the postulates stated above restricting the way the modeling elements can be related.
Almost all of the basic type concepts of DESO shown in Fig. 3 are directly supported by UML Class Diagrams, where relationship types are called ‘associations’, as shown by the mapping in Table 1. Only the three concepts of physical object types, event types and causal laws are not supported. Consequently, we have to add them to the Onto-UML profile in the form of the class stereotypes «physical object type», «event type» and «causal law».
The postulates of UFO lead to the following Onto-UML modeling guidelines:
- Any subkind must be a subtype of a kind.
- Any anti-rigid sortal type must be a subtype of a kind.
- A kind must not be a subtype of a subkind, a phase, a role or a role mixin.
- A subkind must not be a subtype of a phase, a role or a role mixin.
- Mixin types must be represented as abstract classes (which are rendered in UML with class names in italics).
- A mixin type must not be a subtype of a kind, a subkind, a phase or a role.
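These guidelines are purely structural, so they lend themselves to automated checking. The sketch below is a hypothetical validator (the stereotype encoding, function names and data layout are our assumptions, not the actual Onto-UML profile machinery) that flags generalization edges violating the postulates:

```python
FORBIDDEN = {  # subtype stereotype -> forbidden supertype stereotypes
    "kind":    {"subkind", "phase", "role", "role mixin"},
    "subkind": {"phase", "role", "role mixin"},
    "mixin":   {"kind", "subkind", "phase", "role"},
}
NEEDS_KIND = {"subkind", "phase", "role"}  # must (transitively) specialize a kind

def check_model(stereotype, edges):
    """stereotype: class name -> stereotype; edges: list of (subtype, supertype)."""
    supers = {}
    for sub, sup in edges:
        supers.setdefault(sub, []).append(sup)
    errors = []
    for sub, sup in edges:
        if stereotype[sup] in FORBIDDEN.get(stereotype[sub], set()):
            errors.append(f"{sub} must not be a subtype of {sup}")
    for c, st in stereotype.items():
        if st in NEEDS_KIND and not _reaches_kind(c, stereotype, supers):
            errors.append(f"{c} ({st}) has no kind among its supertypes")
    return errors

def _reaches_kind(c, stereotype, supers):
    stack, seen = list(supers.get(c, [])), set()
    while stack:
        p = stack.pop()
        if p in seen:
            continue
        seen.add(p)
        if stereotype[p] == "kind":
            return True
        stack.extend(supers.get(p, []))
    return False

model = {"Person": "kind", "Student": "role", "Child": "phase"}
edges = [("Student", "Person"), ("Child", "Person")]
print(check_model(model, edges))  # [] - no violations
```

A model placing, say, a mixin below a kind would produce a corresponding error message instead of the empty list.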
Table 1: Mapping DESO type concepts to corresponding UML elements
<table>
<thead>
<tr>
<th>DESO type concept</th>
<th>Corresponding UML element</th>
</tr>
</thead>
<tbody>
<tr>
<td>Entity type</td>
<td>Class</td>
</tr>
<tr>
<td>Datatype</td>
<td>Datatype</td>
</tr>
<tr>
<td>Attribute</td>
<td>Property whose type is a datatype</td>
</tr>
<tr>
<td>Reference property</td>
<td>Property whose type is a class</td>
</tr>
<tr>
<td>Relationship type</td>
<td>Association</td>
</tr>
<tr>
<td>Object type</td>
<td>Class stereotyped «object type»</td>
</tr>
<tr>
<td>Physical object type</td>
<td>Class stereotyped «physical object type»</td>
</tr>
<tr>
<td>Kind</td>
<td>Class stereotyped «kind»</td>
</tr>
<tr>
<td>Subkind</td>
<td>Class stereotyped «subkind»</td>
</tr>
<tr>
<td>Role type</td>
<td>Class stereotyped «role»</td>
</tr>
<tr>
<td>Phase type</td>
<td>Class stereotyped «phase»</td>
</tr>
<tr>
<td>Mixin type</td>
<td>Class stereotyped «mixin»</td>
</tr>
<tr>
<td>Role mixin</td>
<td>Class stereotyped «role mixin»</td>
</tr>
<tr>
<td>Event type</td>
<td>Class stereotyped «event type»</td>
</tr>
<tr>
<td>Causal law</td>
<td>Class stereotyped «causal law»</td>
</tr>
</tbody>
</table>
3.3 The Conceptual Process Modeling Language Onto-BPMN
Conceptual process models are based on the event types and causal laws as modeled in a conceptual information model. They are expressed in a variant of BPMN, which we call Onto-BPMN, where a causal law takes the form of an event subprocess, which describes a process type the instances of which are triggered by events of a certain type as parallel threads. While a conceptual information model describes the informational aspects of events and causal laws, the corresponding conceptual process model describes the dynamic aspects, including the succession of events. As opposed to Onto-UML, which has originally been proposed in (Guizzardi 2005) and has been used and validated in many modeling projects since then, Onto-BPMN has not been presented before and is still under development by the authors of this tutorial.
In the same way as our Onto-UML information modeling concepts do not depend on the language of UML Class Diagrams, the concepts of Onto-BPMN do not depend on BPMN, which is used only due to its wide adoption in computer science and information systems, and its practical relevance. For an introduction to BPMN, which is a prerequisite for understanding Onto-BPMN, the reader is referred to (???)
Ontologically, BPMN activities, including tasks and subprocesses, are special events. However, this subsumption of activities under events is not supported by BPMN. It is one of the departures of Onto-BPMN from standard BPMN.
4 MAKING CONCEPTUAL MODELS – EXAMPLES
When we model a particular discrete system, including the many different things that make up the system, do we model these particular things and this particular system or do we model this type of system with all the types of things that make up such a type of system? The answer to this question seems to vary from case to case. In the case of modeling a machine, it is clear that we are interested in a model of this type of machine, and not in a model of a particular machine. But in the case of modeling an organization, it seems that we want a model of this particular organization only, and we are not really interested in considering the more general case of the type of organization that is instantiated by this particular organization. However, we argue that we should always model types, and not individuals. Even in the case of modeling a particular organization o, we should adopt the view that there is not only o, but there is also a corresponding type of organization O, which is instantiated by o, and the goal of our modeling project is to model O, and not o.
4.1 Example: A Service Queue System
In the service queue system example, as implemented in the Simurena Library (Simurena 2012), customers arrive at random times at a service desk where they have to wait in a queue when the service desk is busy. Otherwise, when the service desk is not busy, they are immediately served by the service clerk. Whenever a service is completed, the next customer from the queue will be served, if there is any.
4.1.1 The Conceptual Information Model
A naïve conceptual information model of this system may look as shown in Fig. 5. There are one-to-one binary relationship types between the object types ServiceDesk and ServiceClerk and between ServiceDesk and ServiceQueue. The fact that a service queue is composed of zero or more private customers is modeled with the help of the UML composition relationship (rendered with a solid ‘diamond’ at the side of the aggregate type).

Typically, in a simulation design model we would make several simplifications and, for instance, abstract away from the object type ServiceClerk, but in a conceptual system model, we include all entity types that are relevant for understanding the real-world system, independently from the simplifications we make in the solution design and implementation models.
The Onto-UML modeling guidelines require that we identify, which object types are kinds, role types or phase types, and that we identify suitable kinds to be added as supertypes of role types and phase types, for making the conceptual model ontologically complete. As a result of following these guidelines we obtain the improved model shown in Fig. 6. One of the improvements achieved is that a service clerk may now also be a customer, which is a special case of the real-world possibility that an employee may also be a customer.
Guizzardi and Wagner
Figure 6: An improved version of the model of Fig. 5.
The model of Fig. 6 can be further improved by adding event types and related causal laws, as shown in Fig. 7. There are three types of events in this system: customer arrival events, service start events and service end events (also called ‘customer departure events’).
Figure 7: Adding event types and causal laws.
For simplicity, we have omitted service start events, since they can be modeled to coincide either with customer arrival events, when the service queue is empty, or with customer departure events, when the queue is non-empty. In the extended model shown in Fig. 7, both customer arrival events and customer departure events have exactly two participants: a customer and the service desk. The CustomerArrivalLaw is triggered by a customer arrival event and causes a corresponding customer departure event. Similarly, the CustomerDepartureLaw, which is not shown in the diagram of Fig. 7, is triggered by a customer departure event and causes another customer departure event (for the next customer), if the queue is non-empty, as shown in Fig. 8.
Figure 8: The customer departure law is triggered by a customer departure event.
4.1.2 The Conceptual Process Model
The conceptual process model for the service queue system consists of two event subprocesses, one for the CustomerArrivalLaw that is triggered by CustomerArrival events, and one for the CustomerDepartureLaw that is triggered by CustomerDeparture events, as shown in Fig. 9. The CustomerArrivalLaw subprocess includes the BPMN task “perform service for arrived customer”, while the CustomerDepartureLaw subprocess includes the BPMN task “perform service for next customer”.
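To make the causal-law reading concrete, the two event subprocesses can be sketched as branches of a classical event-scheduling simulation loop. The code below is our own minimal illustration; the exponential rates, function name and state variables are assumptions, not taken from the Simurena implementation:

```python
import heapq
import random

def simulate(n_arrivals=1000, seed=42):
    """Event-scheduling loop: the agenda holds (time, event_type) pairs."""
    random.seed(seed)
    agenda = [(random.expovariate(1.0), "CustomerArrival")]
    queue_len, busy, served, arrivals = 0, False, 0, 1
    while agenda:
        clock, ev = heapq.heappop(agenda)
        if ev == "CustomerArrival":
            # CustomerArrivalLaw: start a service if the desk is free,
            # otherwise the customer joins the queue.
            if busy:
                queue_len += 1
            else:
                busy = True
                heapq.heappush(agenda,
                               (clock + random.expovariate(1.5), "CustomerDeparture"))
            if arrivals < n_arrivals:  # schedule the next arrival
                arrivals += 1
                heapq.heappush(agenda,
                               (clock + random.expovariate(1.0), "CustomerArrival"))
        else:
            # CustomerDepartureLaw: serve the next queued customer, if any.
            served += 1
            if queue_len > 0:
                queue_len -= 1
                heapq.heappush(agenda,
                               (clock + random.expovariate(1.5), "CustomerDeparture"))
            else:
                busy = False
    return served

print(simulate())  # 1000: every arrived customer is eventually served
```

Each branch of the loop corresponds to one of the two event subprocesses of Fig. 9: the arrival branch either starts a service or enqueues, and the departure branch serves the next queued customer.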
Figure 9: A conceptual process model of the service queue system.
4.2 Example: A Drive-Thru Restaurant
In the Drive Thru example, as presented in (Ingalls 2008) and implemented in the Simurena Library (Simurena 2012), cars enter a drive thru from the street and the drivers decide whether or not to get in line. If the driver decides to leave the restaurant, he counts as a lost customer. If he decides to get in line, he waits until the menu board is available. At that time, he gives the order to the order taker at the menu board. After the order is taken, two things occur simultaneously:
1. The driver moves forward if there is room, otherwise he has to wait at the menu board until there is room to move forward.
2. The order is sent back to the kitchen where it is prepared with some delay.
As soon as the driver reaches the pickup window, he pays and picks up his food, if it is ready. If the food is not yet ready, he has to wait until his order is delivered to the pickup window. In (Ingalls 2008), neither a conceptual information model nor an information design model is presented. The only model presented is a “logic flow” model expressed in a diagram language without a clearly defined semantics.
4.2.1 The Conceptual Information Model
Our conceptual information model shown in Fig. 10 contains DriveThru as a subkind of Restaurant, which is a subkind of the kind Organization. The role types OrderTaker, Cook and PickupWindowClerk specialize the role type Employee, which is a subtype of the kind Person.

Figure 10: A conceptual information model of a drive thru.
REFERENCES
AUTHOR BIOGRAPHIES
GIANCARLO GUIZZARDI is Associate Professor at the Computer Science Department, Federal University of Espírito Santo (UFES), Brazil, and senior member of the Ontology and Conceptual Modeling Research Group (NEMO). His work is focused in the development of domain ontologies and foundational ontologies and their application in computer science and, primarily, in the area of conceptual modeling and organizational modeling. He has been involved in a number of industrial projects in domains such as off-shore software development, petroleum and gas, medical informatics, telecommunications and news information management. His email address is gguizzardi@inf.ufes.br.
GERD WAGNER is Professor of Internet Technology at the Department of Informatics, Brandenburg University of Technology, Germany. His research interests include agent-oriented modeling and agent-based simulation, foundational ontologies, (business) rule technologies and the Semantic Web. In recent years, he has been focusing his research on the development of an agent-based discrete event simulation framework, called ER/AOR Simulation (see www.AOR-Simulation.org) He can be reached at http://www.informatik.tu-cottbus.de/~gwagner/.
Bridging the Gap Between Requirements Document and Formal Specifications using Development Patterns
Imen Sayar, Jeanine Souquières
To cite this version:
Imen Sayar, Jeanine Souquières. Bridging the Gap Between Requirements Document and Formal Specifications using Development Patterns. IEEE 27th International Requirements Engineering Conference Workshops (REW), Sep 2019, Jeju Island, South Korea. 10.1109/REW.2019.00026 . hal-02962897
HAL Id: hal-02962897
https://hal.univ-lorraine.fr/hal-02962897
Submitted on 9 Oct 2020
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Bridging the Gap Between Requirements Document and Formal Specifications using Development Patterns
Imen Sayar
*Université de Lorraine, CNRS, LORIA*
F-54000 Nancy, France
imen.sayar@loria.fr
Jeanine Souquières
*Université de Lorraine, CNRS, LORIA*
F-54000 Nancy, France
jeanine.souquieres@loria.fr
Abstract—Guaranteeing the correctness of critical and complex software and systems is a challenge that needs to be tackled right from the requirements engineering phase. This paper introduces two development patterns linked to the shape of requirements. The first one makes it possible to automatically formalize a constraint and introduce it into an existing system. The second one addresses requirements describing a sequence of operations. The verification activity is partly automated and the validation becomes easier to manage. The approach based on these development patterns allows an incremental development of formal specifications and their associated requirements, linked by a glossary. The case study of a hemodialysis system is used as a running example throughout this paper.
I. INTRODUCTION
A requirements document serves as a bridge between the clients and suppliers of software and systems development industry. The development of formal specifications is a manual activity based on the cognitive skills of the person in charge of developing them out of the informally described user requirements. Although there has been considerable focus on making requirements more understandable by reducing the ambiguity, there is hardly any support for the development of formal specifications. Approaches that propose the use of controlled natural language for the description of requirements improve the clarity of the requirements, but do not contribute directly to the development of formal specifications. Patterns are used in software programming to document solutions, facilitating their application to new problems [7]; they are templates for how to solve a problem that can be used in many different situations. Patterns are also used in software specifications, describing recurrent specification structures [9]. We propose development patterns to formalize requirements describing two concepts: constraints and sequences. A pattern uses a system of already developed formal specifications, their corresponding requirements and the trace links between them. They automate the development of parts of the formal specification. This work is an evolution of our previous work published in [15].
The remainder of the paper is organized as follows. We first present our approach, together with the existing tools it relies on, in Section II. The first pattern, presented in Section III, concerns the formalization of constraints described in the requirements; Section IV defines a sequential pattern. We have applied our patterns to several case studies, such as the landing gear system [3], the hemodialysis machine [13] and the Hybrid ERTMS/ETCS Level 3 standard [4]. In this paper, we use the hemodialysis machine case study. Section V discusses our contribution, Section VI compares our work with existing approaches and, finally, Section VII concludes and sketches future work directions.
II. THE APPROACH
A. Description
Our work is situated in the context of bridging the gap between two different levels of formalism, i.e. the requirements document of the client and the formal specification expressed by the computer specialist. The first document contains informal or semi-formal artifacts describing the user’s point of view of the system under development, and the second one contains artifacts describing the developer’s point of view of the same system in a formal manner. A system is defined by:
- *Reqs.* They represent a list of rewritten user requirements following Abrial’s approach [2]. Our methodology does not depend on a specific tool; however, we use the ProR tool [11] for documenting the requirements.
- *Spec.* It denotes the specification of the future system, described by a formal method approach like the Event-B method based on the refinement technique. An Event-B specification is composed of two constructs:
- the *context* describes the static part of the model, using sets, constants, theorems and axioms,
- the *machine* describes the behavior of the model. It contains the system variables, invariants and events. An event is the dynamic element of the machine. It is composed of guards and actions.
- *Glossary.* It describes the traceability links between the two previous documents, associating the formal terms coming from the Spec to their corresponding informal descriptions in the Reqs.
[https://www.southampton.ac.uk/abz2018/information/case-study.page]
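As a rough illustration of the <Reqs, Glossary, Spec> triple (our own toy encoding, not the ProR/Rodin data model), the glossary can be seen as a lookup table from formal terms to their informal descriptions:

```python
from dataclasses import dataclass, field

@dataclass
class System:
    reqs: dict = field(default_factory=dict)      # requirement id -> informal text
    spec: dict = field(default_factory=dict)      # formal term -> formal definition
    glossary: dict = field(default_factory=dict)  # formal term -> informal description

    def trace(self, formal_term):
        """Follow the traceability link from a Spec term back to the Reqs vocabulary."""
        return self.glossary.get(formal_term, "<no trace link>")

s = System()
s.reqs["R-5"] = "if the pressure at the VP transducer exceeds the upper pressure limit ..."
s.spec["upper_press_limit"] = "constant upper_press_limit"
s.glossary["upper_press_limit"] = "the upper pressure limit"
print(s.trace("upper_press_limit"))  # the upper pressure limit
```

The point of the glossary is exactly this kind of two-way traceability: a formal identifier can always be resolved to the informal wording the client agreed on.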
The development patterns arise from the informal requirements. They are used to perform specification development activities while simplifying the verification and validation activities. A development pattern concerns the three components of the <Reqs, Glossary, Spec> system. We present two patterns, Dev-if and Dev-seq. The first one serves to automatically introduce constraints into an existing system. The second one aims at automatically defining an order between unordered operations of a system. The constraints and the sequences are described in the informal client requirements document.
B. Tools
1) Managing requirements: We use ProR, a plug-in of Rodin[^1], to edit requirements and link them with formal specifications. Requirements can be organized in a hierarchical structure following:
- a brother-brother relation in which a requirement is at the same level as a "brother" requirement, or
- a parent-child relation where a "child" requirement details a "parent" one, see Figure 9.
Three requirements of the hemodialysis machine case study [13] are shown in Figure 1.
<table>
<thead>
<tr>
<th>ID</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>R-5</td>
<td>During initiation, if the software detects that the pressure at the VP transducer falls below the lower pressure limit, then the software shall stop the BP and execute an alarm signal.</td>
</tr>
<tr>
<td>R-6</td>
<td>During initiation, if the software detects that the pressure at the AP transducer exceeds the upper pressure limit, then the software shall stop the BP and execute an alarm signal.</td>
</tr>
<tr>
<td>R-8</td>
<td>During initiation, if the software detects that the pressure at the VP transducer falls below the lower pressure limit, then the software shall stop the BP and execute an alarm signal.</td>
</tr>
</tbody>
</table>
Fig. 1. Some requirements from the hemodialysis case study
2) Verification: It concerns the specification and answers the question "are we constructing the system correctly?". The feedback of this activity gives indications about shortcomings in the requirements document, such as contradictions or oversights [1]. The semantics of specifications are given by proof obligations (POs) ensuring that machines meet essential system properties, such as safety or invariant preservation. The POs are generated by proof obligation generators and discharged using the automatic and interactive provers of the Rodin platform. The correctness of the specification of the patterns has been proved once and for all. The verification task is automatically done for the Event-B specifications in the same manner as proposed in [9].
3) Validation: It checks if the developed specification is coherent with the client requirements and focuses on responding to the question "are we developing the correct system?". ProB [12] is used for the animation and model-checking of the specification. In [16], we have proposed an approach for validating the formal specification with respect to the user requirements, using the glossary, which represents the links between these two documents. The validation starts from the requirements analysis phase, during which we extract validation elements from the client requirements. A validation element refers to formal or informal terms according to which the Spec will be validated. It concerns:
- the data that should be present in the system,
- the expected functionalities or services provided by the system,
- the conditions or obligations under which the system works. They are compared to the preconditions and post-conditions of Hoare [10], and
- the behavior defined using existing functionalities and described as a sequence of operations or services.
III. CONDITIONAL PATTERN
It formalizes informal constraints, described in the client requirements, into formal elements of an existing system.
A. Problem
Given a system <Reqs, Glossary, Spec> and a requirement R-new, the problem is how to automatically introduce a constraint in this system. The resulting system should be correct. This saves development effort and reduces the risk of oversights. We define a requirement describing a constraint as follows:
R-new: env_vars if condition then action
where:
- env_vars is a set of values of variables describing the state of the environment of the user requirement,
- condition is a boolean expression describing the constraints on variables of the system and
- action designates the instructions that modify the state of system variables.
B. Solution and formalization
The development pattern Dev-if operates on the requirement R-new that describes a constraint on the system’s functioning, see Figure 2.
[^1]: A platform based on Eclipse and available at http://www.event-b.org/
1) **Reqs**: The requirements document evolves by introducing **R-new'**, a formalized form of the original **R-new**. We mention that the latter is kept in the requirements document for traceability reasons and is preceded by an *. R-new' contains formal terms - put between square brackets [ ] - coming from the specification and replacing the informal terms.
<table>
<thead>
<tr>
<th>ID</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>R-new</td>
<td>env_vars if condition then action</td>
</tr>
<tr>
<td>R-new</td>
<td>[env_vars_val] if [condition] then [action_vars]</td>
</tr>
</tbody>
</table>
2) **Spec**: It is described by two concepts:
- The **invariants**. **Dev-if** formalizes system properties via predicates:
- **R-new'**-prop1 is an invariant expressing the constraint described in **R-new**. This invariant means that if the system’s action variables change their values, then one can be sure that this system ensures the imposed condition and respects the environment variables values, see Figure 3.
- **Dev-if** ensures the preservation of the abstract properties of the formal specification by its concrete version by introducing the gluing invariant **glue-R-new'**. “…” expresses the parts that should be completed by the developer.
**NB.** The operator “∧” expresses the conjunction between abstract and concrete states of variables, and the exclusive disjunction operator “⊕” describes the exclusion between these states. The expertise of the developer is needed to accomplish this task.

3) **Glossary**: New formal terms attached with their informal descriptions are automatically added to the glossary [8].
C. **Activities**
1) **Verification**: The proposed pattern is described as an element of an Event-B specification. Six POs related to this pattern are automatically generated using the tools associated with the Rodin platform. When applying it to an existing system, these POs will be automatically instantiated and discharged since the proving activity has already been done for the pattern.
2) **Validation**: The **Dev-if** pattern offers elements related to the validation activity. These elements are generic, meaning they will be automatically instantiated when applying the pattern. They concern:
- data like the **action_vars** representing the variables involved in the action of the condition and the **env_vars** describing the variables of the system environment,
- functionalities such as the event **treatment_R-new'** which will be added to the **Spec** and
- obligations as the invariant **R-new'**-prop1 (see Figure [3]) constraining the system’s functioning. This obligation is generated and checked automatically when applying this pattern.
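The instantiation of these generic elements amounts to a template expansion over the pattern parameters. The sketch below illustrates this idea in Python; the template, the function, and the placeholder values are assumptions for illustration only, not part of the Rodin tooling.

```python
# Hypothetical sketch: expanding the Dev-if pattern's generic parameters
# (env_vars, condition, action) into an Event-B-style event text.
EVENT_TEMPLATE = """Event treatment_{rid}
when
  grd1 : {env_guard}
  grd2 : {condition}
then
{actions}
end"""

def instantiate_dev_if(rid, env_guard, condition, actions):
    # Each action of the requirement becomes a numbered assignment.
    acts = "\n".join(f"  {rid}-act{i} : {a}"
                     for i, a in enumerate(actions, start=1))
    return EVENT_TEMPLATE.format(rid=rid, env_guard=env_guard,
                                 condition=condition, actions=acts)

# Generic instantiation with placeholder names from the pattern:
event_text = instantiate_dev_if(
    "R-new'", "env_vars_val", "condition",
    ["action_var1 := v1", "action_var2 := v2"])
print(event_text)
```

The point of the sketch is that the POs and the validation elements attach to the template once, so every instantiation inherits them.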
D. **Case study**
The hemodialysis case study consists of a technical part and a safety-requirements part. The latter is composed of general requirements and 36 software requirements, most of which describe constraints on the system components in a repetitive way. Our starting point is the following system [7].
- **Reqs**: It represents the informal requirements document containing **R-5'**, a rewritten form of **R-5**. We use the recommendations of Abrial [2] and the ProR tool [11] to realize this requirement. **R-5'** contains formal terms coming from the **Spec**. The **Reqs** document is described as follows:
<table>
<thead>
<tr>
<th>ID</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>R-5</td>
<td>During initiation, if the software detects that the pressure at the VP transducer exceeds the upper pressure limit, then the software shall stop the BP and execute an alarm signal.</td>
</tr>
<tr>
<td>R-5'</td>
<td>[initiat] if [vp] exceeds [upper_press_limit] then stop [BP] and execute [ALM_excess_vp]</td>
</tr>
</tbody>
</table>
- **Spec**: It is described by the machine **R-5'_Mch**, which refines the machine **Common_Mch** and sees a context **R-5'_Ctx**. An overview of this specification is provided in Figure 5.
- **Glossary**: It contains the available pairs:
<table>
<thead>
<tr>
<th>Formal term</th>
<th>Informal description</th>
</tr>
</thead>
<tbody>
<tr>
<td>initiat</td>
<td>During initiation</td>
</tr>
<tr>
<td>vp</td>
<td>the pressure at the VP transducer</td>
</tr>
<tr>
<td>ALM_excess_vp</td>
<td>an alarm signal</td>
</tr>
</tbody>
</table>
1) Introducing a condition: Let's start the development of the requirement R-6 relative to the existing system, which includes the development of R-5 (see Figure 5). R-new corresponds to R-6. Table I shows the values of the parameters of the requirement in question.
### TABLE I
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>env_vars</td>
<td>During initiation</td>
</tr>
<tr>
<td>condition</td>
<td>the pressure at the VP transducer falls below the lower pressure limit</td>
</tr>
<tr>
<td>action</td>
<td>- stop the BP<br>- execute an alarm signal</td>
</tr>
</tbody>
</table>
This requirement uses the variable vp, the pressure at the VP transducer. The Dev-if pattern takes as inputs the existing development of R-5 and the requirement R-6. It updates the system with new details coming from R-6:
- **Reqs.** The rewritten requirement R-6' is added to the existing requirements document. It is described in Figure 6 as a brother of both R-5 and R-6. The two requirements share the same variable vp.
- **Spec.** A new event treatment_R-6' is automatically generated and added to the R-5'_Mch:
```plaintext
Event treatment_R-6'
when
  grd1 : phase = initiat
  grd2 : vp < lower_press_limit
then
  R-6'-act1 : BP := stopped
  R-6'-act2 : alarm_vp := ALM_deficit_vp
end
```
Note that the new constant ALM_deficit_vp is added to the context R-5'_Ctx.
- **Glossary.** It is updated by adding a new pair (ALM_deficit_vp, an alarm signal).
The verification and validation activities are fulfilled as follows:
- the POs associated to this specification are automatically discharged and
- the validation is performed using the tool ProB. We successfully animate the updated machine R-5'_Mch using the following scenario:
```plaintext
INITIALISATION → start_BP → change_vp → treatment_R-6'
```
in which the events start_BP and change_vp refer to respectively the operations of starting the blood pumping and changing the value of the pressure at the VP transducer of the hemodialysis machine.
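What such an animation checks can be mimicked outside ProB with a minimal sketch in which each event is a guarded update on a shared state. The variable names follow the case study, but the animator itself and the numeric pressure values are assumptions for illustration.

```python
# Minimal scenario animator (illustrative; ProB performs this on the
# real Event-B machine). Each event checks its guards, then updates state.
def start_BP(s):
    return {**s, "BP": "started"}

def change_vp(s):
    return {**s, "vp": 10}  # assumed new pressure reading

def treatment_R6(s):
    # grd1 and grd2 of treatment_R-6'
    assert s["phase"] == "initiat"
    assert s["vp"] < s["lower_press_limit"]
    return {**s, "BP": "stopped", "alarm_vp": "ALM_deficit_vp"}

def animate(init, scenario):
    state = dict(init)
    for event in scenario:
        state = event(state)  # fails if a guard does not hold
    return state

init = {"phase": "initiat", "BP": "idle", "vp": 50, "lower_press_limit": 20}
final = animate(init, [start_BP, change_vp, treatment_R6])
```

If a guard does not hold at its step, the animation fails, which is exactly what ProB reports when a scenario cannot be replayed.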
2) Introducing the glue: We refine the development issued from Section III-D by taking into account the requirement R-8 (see Figure 5). The Dev-if pattern ensures the preservation of the abstract Spec previously described and automatically generates new elements. It updates the requirements document by a rewritten form of R-8 and updates the glossary.
*Dev-if* generates "…" which concerns the glue between R-8 and R-6. This concept expresses the state of the system when both the pressure at the AP transducer and the pressure at the VP transducer fall below the lower pressure limit. The developer completes the "…" by:
- **Reqs.** A requirement glue-R-8' is added. The completed version of this requirement is a brother of both R-5' and R-6'.
<table>
<thead>
<tr>
<th>ID</th>
<th>Description</th>
</tr>
</thead>
</table>
- **Spec.** A gluing invariant is introduced.
```plaintext
glue_R-8' : alarm_vp = ALM_deficit_ap ∧
            alarm_vp = ALM_deficit_vp ⇒
            ap < lower_press_limit ∧ phase = initiat ∧
            vp < lower_press_limit
```
- **Glossary.** No new pairs are added.
The verification and validation of this system are automatically performed using the Rodin tools.
### IV. SEQUENTIAL PATTERN
This pattern helps the developer to automatically introduce an order between existing operations of a given system. It concerns the formal Event-B specification, the requirements, and the glossary.
A. Problem
In the documents of the three case studies previously mentioned (see Section I), the requirements describe a sequence of operations using this form:
\[ R\text{-new}: (\text{env\_var})^{*}\ \text{sequence}\ (\text{operation})^{+} \]
in which:
- env_var is a set of variables describing the environment of the requirement. The asterisk following this element means that these variables may be absent, and
- operation represents a series of ordered actions. The plus sign means that it contains at least one action.
Given an existing system containing unordered operations and a requirement \text{R-new} describing a sequence, the problem is how to automatically introduce order between this system’s operations.
B. Solution and formalization
We formalize the pattern Dev-seq in Event-B using the Rodin platform and in TLA+ using the TLA Toolbox\(^4\), an integrated development environment. In the Event-B language, every operation of R-new is translated into an event. The order is expressed by the guards and the assignments of the events: the assignment of an event is the guard of the next one. By analogy with the Hoare triple (precondition {instructions} postcondition), each action in a sequence has:
- a precondition representing the result of the previous action and
- a postcondition resulting from its execution.
Dev-seq introduces the order between the operations of an existing system. Its use is described in the following figure. It takes as inputs the existing system, composed of several unordered operations or events evt_i, i ∈ {1..n}, and a requirement R-new describing a sequence. The generated output is a system strengthened by taking into account the order between the previous events/operations.

C. Activities
Verification and validation activities are automatically performed using Dev-seq. This pattern generates scenarios describing a chain of ordered operations/events. For instance, INITIALISATION → evt_1 → … → evt_n is one of these generic scenarios.
D. Case study
We treat the following requirement R-th of Figure 7, taken from the section "Connecting the patient and starting therapy" of [13]:
<table>
<thead>
<tr>
<th>ID</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>R-th</td>
<td>- The patient is connected arterially.<br>- The BP is started by pressing the START/STOP button on the UI.<br>- The blood flow is set.<br>- The blood tubing system is filled with blood. The BP stops automatically when blood is detected on the VRD in the SAD.<br>- The patient is connected venously.<br>- The blood pump is started and the prescribed blood flow is set.<br>- The machine is taken out of bypass mode. The HD machine switches to main flow and bicarbonate running. The signal lamps on the UI switch to green.</td>
</tr>
</tbody>
</table>

This requirement implicitly describes a sequence. In order to make this sequential form explicit, we rewrite it as follows:
\[ R\text{-th}: \text{sequence}\ \text{therapy} \]
in which therapy is the series of actions presented by each item of Figure 7. The env_var parameter is empty.
1) Existing system:
- **Reqs**. They represent the informal requirements containing R-th', a rewritten form of R-th, in which formal terms are introduced between brackets "[ ]". This requirement is partially described as follows:
<table>
<thead>
<tr>
<th>ID</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>R-th'</td>
<td>sequence<br>- [patient] is [connected_art].<br>- [BP] is [started] by pressing the START/STOP button on the UI.<br>- [blood_flow] is [set].<br>- ...</td>
</tr>
</tbody>
</table>
- **Spec**. It is described by the following seven events. Each event formalizes one sentence of R-th. There is no order between them:
connect_patient_art, start_BP, set_blood_flow, stop_BP_flow, connect_patient_ven, restart_BP, start_HD
- **Glossary**. Some pairs of (formal term, informal description) are shown in the following:
<table>
<thead>
<tr>
<th>Formal term</th>
<th>Informal description</th>
</tr>
</thead>
<tbody>
<tr>
<td>blood_flow</td>
<td>Blood flow</td>
</tr>
<tr>
<td>BP</td>
<td>BP</td>
</tr>
</tbody>
</table>
2) Taking into account the order: Dev-seq introduces an order between the events of the previously described Spec. Their updated version is shown in Figure 8 where:
- the result of act1 of event connect_patient_art is a guard grd1 of the event start_BP and
- the result of act1 of event start_BP is a guard grd1 in the event set_blood_flow.

Fig. 8. Introducing an order between events
Thanks to the concepts of guards and actions in the events, the events are executed as a chain:
connect_patient_art → start_BP → set_blood_flow
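Dev-seq's chaining rule above, where the value assigned by act1 of one event is checked as guard grd1 of the next, can be sketched as follows. The state values are informal stand-ins for the Event-B variables, chosen for illustration.

```python
# Sketch of the Dev-seq ordering rule: the result assigned by one
# event (its act1) becomes the guard (grd1) of the next event.
def connect_patient_art(s):
    return {**s, "patient": "connected_art"}      # act1

def start_BP(s):
    assert s["patient"] == "connected_art"        # grd1 = previous act1
    return {**s, "BP": "started"}                 # act1

def set_blood_flow(s):
    assert s["BP"] == "started"                   # grd1 = previous act1
    return {**s, "blood_flow": "set"}

state = {"patient": "disconnected", "BP": "idle", "blood_flow": "unset"}
for event in (connect_patient_art, start_BP, set_blood_flow):
    state = event(state)
```

Running the events out of order fails at the guard assertion, which mirrors how the generated guards block out-of-order execution in the Event-B machine.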
The requirements document is updated in Figure 9 by:
- decomposing R-th’ into sub-requirements and
- introducing an order between sub-requirements using numbers.

Fig. 9. Updated requirements document
The verification and validation activities are realized automatically. For example, the sub-scenario
connect_patient_art → start_BP → set_blood_flow
is associated with R-th' and some of its children, and is instantiated by Dev-seq. Using the ProB tool, the resulting machine successfully animates this scenario.
### V. RECAPITULATIONS AND LESSONS LEARNED
These two patterns automate parts of the development:
- Dev-if allows taking into account constraints of a system. It updates an existing system either at the same abstraction level or by refining it.
- Dev-seq introduces an order between the operations or events of an existing system.
Using these patterns for the landing gears system and the hemodialysis case studies, we obtain the results shown in Table II:
- The landing gears system: four models of the formal specification are developed using the Dev-seq pattern. In total, 310 POs are generated for these specifications, of which 256 are discharged automatically, i.e., 82% of the total POs.
- The hemodialysis machine: nine specifications are developed using Dev-if, for which 301 of 341 POs are discharged automatically.
### TABLE II
<table>
<thead>
<tr>
<th></th>
<th>Landing gears</th>
<th>Hemodialysis</th>
</tr>
</thead>
<tbody>
<tr>
<td>Number of specs</td>
<td>4</td>
<td>9</td>
</tr>
<tr>
<td>Automatic POs</td>
<td>256</td>
<td>301</td>
</tr>
<tr>
<td>Total of POs</td>
<td>310</td>
<td>341</td>
</tr>
<tr>
<td>Percentage</td>
<td>82%</td>
<td>88%</td>
</tr>
</tbody>
</table>
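The percentages in the table are simply the share of automatically discharged POs, truncated to a whole percent; this small check reproduces them:

```python
# Check of Table II: automatic POs as a share of total POs,
# truncated to a whole percent as reported in the text.
def automatic_share(automatic, total):
    return int(100 * automatic / total)

assert automatic_share(256, 310) == 82  # landing gears system
assert automatic_share(301, 341) == 88  # hemodialysis machine
```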
The application of these two patterns made us aware of the importance of tools for managing requirements and for verifying and validating the formal specifications. Without such tools, the task of using our patterns would be hard to accomplish.
This work is the beginning of a future project for constructing a library of development patterns. While working on these patterns, several questions arose:
- In the Reqs: How should the rewritten form of the original requirement be introduced? Is it a brother, a father, or a son of the other requirements?
- In the Spec: How will a new requirement be integrated? What kinds of effort and skill does the developer need when applying these patterns? How can the pattern decide whether or not to add the glue?
- In the Glossary: How can this document help to ensure the "good" application of the development patterns? Does it allow detecting errors generated by the patterns in the Reqs and the Spec?
The Glossary is important in our approach. It permanently links the formal elements with their informal descriptions in the requirements. The trace described by this document facilitates access to elements of the Reqs while validating the Spec. The Glossary is described in our patterns and is instantiated when applying them.
### VI. RELATED WORK
A pattern enables the description of an identified subproblem and its solution by reusing knowledge acquired through experience. Our idea is to predefine development solutions and incorporate them in the requirements document, using Event-B and the Rodin platform. They allow the reuse of the specification models and their correctness in terms of proofs. Hoang et al. [9] define patterns for Event-B specifications in order to reuse an existing formal model and to reduce the proving effort. We have exploited the patterns emerging from the writing styles of the requirement descriptions.
KAOS [17] proposes a goal-oriented approach for requirements modeling and refinement. It enables the identification of the system goals and their gradual refinement, until obtaining constraints, using formal refinement patterns. The authors of [14] consider the interactions between the artifacts of the requirements. The authors of [6] define preliminary work on a language dedicated to combining requirements with formal specifications. They use several approaches, such as KAOS and OCL, to describe the requirements and to formalize them. In our approach, we evolve the three components (Reqs, Spec and the Glossary) at the same time. We use tools such as ProR, ProB, and the provers of the Rodin platform at any moment of the development.
The authors of [5] define three refinement patterns for algebraic state-transition diagrams (ASTDs). These patterns are complementary to the specification patterns [9] by gradually introducing details in the specification using refinement. The authors compare their patterns with CSP and Event-B refinement. Our patterns are used for requirements and specifications and take into account the refinement by automatically generating the glue between abstract and concrete models.
### VII. CONCLUSION AND FUTURE WORK
This paper presents an approach based on well-proven generic schemes, the development patterns, used to write down predefined forms of the requirements. These patterns concern informal requirements described in a repetitive form, a conditional one, and a sequential one. They take into account the refinement technique and are proved complete and correct. We demonstrate our approach by applying it to the hemodialysis machine case study and to the landing gears system. Tools incorporated in the Rodin platform are used in all the development steps: ProR for managing requirements and their links with the formal specification, automatic and interactive provers for checking the correctness of the specification, ProB for animating and model-checking the formal models, and graphical tools like Event-B state machine[5] and Project Diagram[6] for showing relations between machines and contexts.
Dev-if is formalized in the Event-B language. Dev-seq is formalized in both Event-B and TLA+ in order to show that our patterns are independent of the specification language. One limitation of our patterns is that their use is not yet automated; it is done manually. As future work, we plan to extract other requirement patterns according to the context of the problem at hand and to provide support for them in our methodology. A plug-in retrieving documents from ProR and the Rodin editor could help to implement and automatically apply our patterns.
Improving Software Process in Agile Software Development Projects: Results from Two XP Case Studies
Outi Salo
VTT Technical Research Centre of Finland
P.O. Box 1100, FIN-90571 Oulu, Finland
Outi.Salo@vtt.fi
Abstract
One of the Agile principles is that software development teams should regularly reflect on how to improve their practices to become more effective. Some systematic approaches have been proposed on how to conduct such a self-reflection process, but little empirical evidence yet exists. In this paper, empirical results are reported from two XP (Extreme Programming) projects where the project teams conducted "post-iteration workshops" after all process iterations in order to improve and optimize their working methods. Both qualitative and quantitative data from a total of eight post-iteration workshops is presented in order to evaluate and compare the findings of the two projects. The results show a decline of both positive and negative findings, as well as a narrower variation of negative findings and process improvement actions towards the end of both projects. In both projects, the data from the post-iteration workshops indicates increased satisfaction and learning of the project teams.
1. Introduction
Agile methodologies and principles\(^1\) place emphasis on incremental software development with short iterations, adaptation to changing requirements, close communication, and simplicity, for example. One specific agile principle closely relates to software process improvement (SPI): "regular reflection of teams on how and where to improve". Furthermore, agile proponents have noted that "each situation calls for a different methodology" [1, p. 184]. Thus, when using any of the agile approaches, continuous improvement, tuning, and adjusting of the software development process is required.
Although individuals and interactions are placed over processes and tools in the Agile Manifesto\(^2\), at least Extreme Programming (XP) claims to be a disciplined process [2] and may actually be characterized as such from a Capability Maturity Model for Software (The Software CMM) [3] viewpoint, for example.
In the CMMI staged model, reaching maturity level 2 (Managed) [4] already includes the implementation of the PPQA (Process and Product Quality Assurance) process area, amongst six other process areas. It includes, for example, the evaluation of performed processes and the identification of lessons learned that could improve processes. In XP, some references to such activities can be found. For example, in the death phase of XP it is instructed to "Imagine with the team how they would run things differently next time" [5, p. 137]. However, the XP practices [5] do not include detailed procedures on how to actually carry out such activities to improve the software development process.
Recently, some systematic approaches have been proposed on how to improve the software development process in an agile context for an individual project. Cockburn suggests a methodology-growing technique for "on-the-fly methodology construction and tuning" [1, p. 185] that embodies a reflection workshop technique for mid- or post-increment workshops. Also, Dingsøyr and Hanssen [6] have suggested a workshop technique called postmortem reviews to be used as an extension to agile software development methods. It aims to make good use of the experiences of project participants at the end of the short iterations to enhance the development process, also across project boundaries.
However, only a very limited amount of empirical evidence can be found on applying team reflection workshops, lightweight postmortem reviews [6], or any other SPI efforts in agile software development projects. This paper presents a comparison of empirical results from two case studies (eXpert and zOmbie) conducted at the Technical Research Centre of Finland. Two consecutive projects adopted XP and conducted post-iteration workshops systematically after each iteration to improve their software development process. The post-iteration workshops included elements from both the lightweight postmortem review technique [6] and the methodology-growing technique [1] and focus on project-level SPI.
\(^1\) www.agilealliance.org
\(^2\) www.agilemanifesto.org
This paper presents the analysis of the post-iteration workshop data (i.e., the number and content of negative and positive findings, and the number of SPI actions) from the two case studies. The aim is to present the consistencies and deviations in the data of two somewhat similar, yet also divergent, projects, and underlying causes for these findings are also suggested. Thus, in this paper, the findings of the post-iteration workshops are discussed in the light of the different characteristics of the two case projects. A further goal is to either support or revise the early conclusions previously drawn based on the eXpert case study [7].
The focus of this paper is to analyze the quantitative and qualitative findings generated during the post-iteration workshops and the quantity of the implemented process improvement actions. However, these two case studies do not offer extensive enough data for firm generalizations, but some conclusions can be brought forward for further evaluation.
This paper is composed as follows. The next section presents the research design including the method, the research target and settings, with the presentation of the similar and divergent characteristics of the two case studies. The paper continues by presenting the results and analysis of post-iteration workshops, and ends with a discussion and acknowledgements.
2. Research Design
In this section, the research method, data collection, and the research setting are described for both the eXpert and zOmbie projects.
2.1. Research Method and Data Collection
The research method used in this study was action research [8], which can be seen as one form of case study. The focus in action research is more on what practitioners do than on what they say they do [9]. The resulting knowledge should guide the practice [10]. In action research, the modification of reality requires the possibility to intervene [11]. In the post-iteration workshops the researchers acted in the role of moderators, participated in the generation of positive and negative findings, and enhanced the process together with the project team. Another role of the researchers was to provide the boundaries within which the project team was allowed to enhance the process.
In both projects, quantitative and qualitative research data was collected on a) the effort used on workshops, b) the quantity of findings, c) their content, d) the quantity of suggested and actual process enhancements (i.e., action points), and e) their content. Furthermore, the developers maintained diaries to record their negative and positive perceptions. Also, a post-project workshop and a group interview were held with the project team at the end of both projects.
2.2. Research Target: Post-Iteration Workshop technique
This research aims to study how well a short iterative reflection session suits self-adapting and improving practices during an Agile software development project. Thus, the focus is SPI at the project level.
The post-iteration workshop technique applied in both of the case studies was evolved by combining attributes from both of the existing reflection techniques (i.e., lightweight postmortem review and team reflection workshop techniques) as described in more detail in [7]. In short, as suggested in postmortem review technique, the problem-solving brainstorming method called the KJ method [12] was adopted in the post-iteration workshops for generating, collecting and structuring positive and negative experiences. The project team recorded positive and negative issues concerning the previous iteration on post-it notes. These notes were then grouped, and negative issues were discussed to generate process enhancements.
Both existing techniques suggested prioritizing the negative findings and analyzing only the most important ones. In the post-iteration workshops, however, all the negative findings were considered equal and all of them were included for further discussion. Then, the actual software process improvement actions (hereafter referred to as SPI actions) were decided together by the project team and the researchers. This data was collected in action point lists by a project team member during the post-iteration workshops. Finally, the previous action point list was revised to find out which improvements had actually taken place and which ones were not implemented for one reason or another.
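The workshop bookkeeping described above can be sketched as follows; the findings and action points are invented examples, not data from the case projects:

```python
# Illustrative sketch of post-iteration workshop bookkeeping:
# KJ-style grouping of post-it findings by theme, turning every
# negative finding into an action point candidate, and revising the
# previous action point list against what was actually implemented.
from collections import defaultdict

notes = [  # (theme, polarity, text) -- invented example findings
    ("pair programming", "+", "knowledge spread quickly"),
    ("testing", "-", "unit tests written too late"),
    ("planning", "-", "task estimates too optimistic"),
]

groups = defaultdict(list)
for theme, polarity, text in notes:
    groups[theme].append((polarity, text))

# All negative findings are kept for discussion (none are dropped
# by prioritization, unlike in the two source techniques).
candidates = [text for _, polarity, text in notes if polarity == "-"]

# Revising the previous action point list: which SPI actions were
# actually implemented during the iteration?
previous_actions = {"write tests first": True, "shorter planning game": False}
implemented = [a for a, done in previous_actions.items() if done]
```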
The quantitative as well as qualitative data from the post-iteration workshops is the central research data presented in this paper. This includes the positive and negative findings, and the implemented SPI actions.
The results from the first case study (eXpert) were earlier reported in [7]. It was suggested that post-iteration workshops concretely help to improve and optimize practices, and enhance the learning and satisfaction of the project team. This argument is evaluated in this paper by strengthening the case with the comparative analysis of eXpert and zOmbie case studies. Another target of this research is to seek out any consistencies and deviations between the two projects, and to find underlying factors behind them.
2.3. Research setting
The two case studies presented in this paper are the first ones in the ongoing series of Agile software development case studies at VTT Electronics. As this paper presents results from two case studies, i.e. eXpert and zOmbie, their characteristics need to be addressed in order to offer a framework for the interpretation of results. Thus, the similarities and divergences of the two projects are described in this sub-section.
Similarities of eXpert and zOmbie case studies
Both case studies were conducted in a co-located development environment. In fact, the projects worked in exactly the same open office space. Also, the common tools that were not dependent on the developed application type were identical in the two projects. These included configuration management, data collection, and documentation tools.
Intensive two-day training was given to both of the teams, covering XP practices, configuration management, and data collection issues. The teams were advised to follow the XP process as suggested by Beck [5], including the planning game, small releases, metaphor, simple design, testing practices, refactoring, pair programming, collective ownership, continuous integration, 40-hour week, and coding standards. However, other practices, such as SPI activities, were also employed to support software development.
The two projects had an identical calendar time (nine weeks) and length of working week (a 4-day week of 24 hours). As proposed by the 40-hour week rule, no overtime was recommended. Possible overtime was compensated in the following iteration.
Differences of eXpert and zOmbie case studies
Table 1 presents the central differences between the eXpert and zOmbie projects that should be taken into consideration when interpreting the data from the post-iteration workshops.
<table>
<thead>
<tr>
<th>Characteristic</th>
<th>eXpert</th>
<th>zOmbie</th>
</tr>
</thead>
<tbody>
<tr>
<td>Team size</td>
<td>4 developers</td>
<td>4 developers, 1 project manager</td>
</tr>
<tr>
<td>Type of end product</td>
<td>Intranet application</td>
<td>Mobile application</td>
</tr>
<tr>
<td>Experience in XP/Agile</td>
<td>4 novice</td>
<td>1 experienced, 4 novice</td>
</tr>
<tr>
<td>Experience in the end product development</td>
<td>1 experienced, 3 novice</td>
<td>5 novice</td>
</tr>
<tr>
<td>Experience in coding</td>
<td>2 experienced, 2 novice</td>
<td>4 experienced, 1 novice</td>
</tr>
<tr>
<td>Iterations</td>
<td>3 x two weeks, 3 x one week</td>
<td>1 x one week, 3 x two weeks, 2 x one week</td>
</tr>
<tr>
<td>Size of end product</td>
<td>10 000 LOC</td>
<td>7 000 LOC</td>
</tr>
<tr>
<td>Software development tools</td>
<td>Eclipse, Apache Tomcat, MySQL, Java + JSP</td>
<td>Eclipse, Apache Tomcat, MySQL, J2ME</td>
</tr>
<tr>
<td>XP practices</td>
<td>On-site customer</td>
<td>Off-site customer</td>
</tr>
</tbody>
</table>
Firstly, the team size was slightly different. Both project teams included four software developers, all university students at the final stage of their information processing science studies. However, zOmbie also employed a project manager from the previous eXpert team to provide expertise on the Agile development approach and XP. The project manager worked a shorter week than the rest of the project team, i.e., about 2/3 of their effort. Despite this, he participated in all the post-iteration workshops and is therefore reflected in the quantity of the findings.
Also, the end products being developed differed between the two projects. The eXpert team implemented an intranet application for managing the research data of a Finnish research institute, while the zOmbie team implemented a mobile application for managing transactions in a stock exchange. Naturally, the distinct application types also caused some differences in the software development tools and languages (see Table 1). Furthermore, the size of the end product varied from eXpert's 10 000 LOC to zOmbie's 7 000 LOC.
Furthermore, the experience of the team members differed between the two projects. In eXpert, the whole team was novice in using agile software development methods. The zOmbie project had the advantage that the project manager was one of the developers from the previous eXpert project. As such, he was a valuable onboard "coach" [5] for the zOmbie team and may also have influenced some of the project practices based on his experiences from eXpert. However, as mentioned earlier, all the actual SPI decisions were to be made solely in the post-iteration workshops.
The project teams' experience also varied in end product development, as well as in coding skills. In eXpert, one team member was experienced in the development of intranet applications, whereas the zOmbie team totally lacked knowledge of mobile software development. Conversely, two of the four eXpert team members were experienced coders, while in zOmbie three of the developers plus the project manager could be regarded as experienced coders, with only one novice. In this context, a novice in coding is taken to mean "no industrial experience". Thus, the zOmbie project had more advanced coders, but their experience in the end product development was lacking.
The length of the iterations differed between the two projects. Both projects consisted of six iterations. In eXpert, the project started with three two-week iterations and finished with three one-week iterations, the last one being a corrective iteration. In zOmbie, the first and the last two iterations lasted one week each, while the 2nd, 3rd, and 4th iterations lasted two weeks each. As this paper analyzes the first four iterations of both projects, the difference in the length of the first and fourth iterations should be noted in the interpretations.
Probably the biggest difference strictly related to XP practices was the role of the customer. The eXpert project included an on-site customer working in the same office space with the software developers, as suggested by XP [5]. In the zOmbie project, however, one of the research targets was to study how an off-site customer would suit an XP project. This matter is outside the scope of this paper, but it is clearly one of the factors to be noted when comparing the findings of the case studies reported here.
3. Case Study Results
In this section, the analysis of the post-iteration workshop findings is presented with interpretations. Only the first four post-iteration workshops are included in the analysis. The fifth workshops in both projects concentrated on the experiences from the entire project instead of on the previous iteration and are thus not comparable. However, these post-project workshops are valuable from the viewpoint of organizational-level SPI and will be reported elsewhere in the near future.
3.1. Post-Iteration Workshop Findings
Table 2 presents the costs of holding post-iteration workshops in terms of duration per workshop and the percentage of effort spent on the workshops, calculated from the total iteration effort.
<table>
<thead>
<tr>
<th>Iteration</th>
<th>eXpert Duration</th>
<th>eXpert Effort %</th>
<th>zOmbie Duration</th>
<th>zOmbie Effort %</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2.68 h</td>
<td>5.5%</td>
<td>2.18 h</td>
<td>7.8%</td>
</tr>
<tr>
<td>2</td>
<td>1.83 h</td>
<td>3.8%</td>
<td>2.35 h</td>
<td>5.5%</td>
</tr>
<tr>
<td>3</td>
<td>1.0 h</td>
<td>2.1%</td>
<td>1.63 h</td>
<td>3.0%</td>
</tr>
<tr>
<td>4</td>
<td>0.93 h</td>
<td>3.3%</td>
<td>1.13 h</td>
<td>2.2%</td>
</tr>
<tr>
<td>Avg</td>
<td>1.6 h</td>
<td>3.7%</td>
<td>1.82 h</td>
<td>4.1%</td>
</tr>
</tbody>
</table>
The results show that both the duration and the percentage of effort decrease from iteration to iteration in both projects. Several factors explain the seemingly high effort percentage in both cases. First, it should be noted that each software developer worked a 24-hour week instead of a 'normal' 40-hour week; in the latter case, the corresponding figures would be substantially lower. This, however, assumes that a 40-hour week does not increase the duration of post-iteration workshops. Also, in the zOmbie project, the project manager worked about a 16-hour week, which further increases the percentage of effort spent on post-iteration workshops. Secondly, it should be noted that in both cases the shorter iteration (4th in eXpert and 1st in zOmbie) causes the proportion of effort to rise even though the actual effort spent may be lower.
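The effect of the length of the working week on the effort percentage follows from simple arithmetic. The sketch below assumes the effort percentage is workshop person-hours divided by total iteration person-hours; the attendee count and iteration length are illustrative assumptions, not figures from the projects' data collection:

```python
def workshop_effort_pct(duration_h, attendees, team_size, hours_per_week, weeks):
    """Percentage of total iteration effort spent in one workshop.

    Assumes effort% = workshop person-hours / iteration person-hours.
    All concrete values passed below are illustrative, not project data.
    """
    workshop_effort = duration_h * attendees
    iteration_effort = team_size * hours_per_week * weeks
    return 100.0 * workshop_effort / iteration_effort

# The same 1.6 h workshop under a 24-hour week vs. a 'normal' 40-hour week:
short_week = workshop_effort_pct(1.6, 4, 4, 24, 2)
long_week = workshop_effort_pct(1.6, 4, 4, 40, 2)
print(short_week, long_week)  # the 40-hour week yields a lower percentage
```

All else being equal, lengthening the working week only grows the denominator, which is why the reported percentages would be substantially lower under a 40-hour week.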
In Dingsøyr and Hanssen's [6] study, the effort spent on lightweight postmortem reviews was around 4.7% and the duration of one workshop was roughly 1.4 hours (calculated from their data). Cockburn [1] estimates a minimal duration of two to four hours. In these two cases the average effort percentage was 3.7% in eXpert and 4.1% in zOmbie, whereas the average duration was 1.6 hours in eXpert and 1.8 hours in zOmbie. One interesting observation from Table 2 is that in both projects the duration of the workshop halved from the 2nd iteration to the 4th iteration.
Furthermore, it can be presumed that learning the post-iteration workshop technique took some time during the first workshops, which also contributes to the declining trend in the effort and duration data. However, as can be seen in Table 2, the duration of all but the first workshop was longer in the zOmbie project than in eXpert. One reason for this is the clearly larger number of negative findings (Figure 2), as well as the topics behind these findings (Figure 3). Thus, the discussion and decision-making during the post-iteration workshops evidently took more time in zOmbie than in eXpert. The long duration of the first post-iteration workshop in the eXpert project can be explained by the fact that the technique was applied for the first time and thus took some time for the moderator (i.e., the researcher) to learn as well.
Quantitative data from the four post-iteration workshops is presented in Figures 1 and 2.
**Fig. 1. Number of positive findings from eXpert and zOmbie post-iteration workshops**
The first four iterations of eXpert produced a total of 93 positive findings whereas the corresponding number in zOmbie was 102 (Figure 1). This is a total of 23.3 positive findings per person in eXpert and 20.4 in zOmbie. In both cases the trend seems to be a decline in positive findings towards the end of the project.
One reason for this is the frequent occurrence of post-iteration workshops (one to two weeks apart). Thus, the project team may not always find it necessary to repeat either the positive or the negative findings, even though they may still be valid. In fact, one comment made by a software developer during the 3rd post-iteration workshop in zOmbie was: "The charm of novelty is gone. Trifles don't make one so happy anymore". At the time, he found it hard to think of any (new) positive findings. Thus, as the team becomes more accustomed to the adopted practices, their weaknesses and rewards may be taken for granted.
**Fig. 2. Number of negative findings from eXpert and zOmbie post-iteration workshops**
The first four iterations of eXpert produced a total of 52 negative findings; the corresponding number in zOmbie was 91 (Figure 2). It should be noted that although the zOmbie team had one "extra" team member, the number of negative findings per person is still noticeably higher (13 in eXpert and 18.2 in zOmbie).
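The per-person figures quoted above follow directly from the finding counts in Figures 1 and 2 and the head counts in Table 1 (eXpert: 4 developers; zOmbie: 4 developers plus the project manager). A quick check:

```python
# Finding counts from Figures 1 and 2; team sizes from Table 1.
teams = {
    "eXpert": {"members": 4, "positive": 93, "negative": 52},
    "zOmbie": {"members": 5, "positive": 102, "negative": 91},
}
per_person = {
    name: {kind: t[kind] / t["members"] for kind in ("positive", "negative")}
    for name, t in teams.items()
}
print(per_person)
```

This reproduces the paper's figures (93/4 rounds to 23.3 positive findings per person in eXpert, 102/5 = 20.4 in zOmbie, and 13 vs. 18.2 for the negative findings).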
Also, the negative findings in both projects seem to follow the same declining trend as the positive findings. This supports Cockburn's [1] argument that the changes needed in the process will be much smaller after the second and subsequent increments. Also, the trend lines in both positive and negative findings indicate that the duration of the iteration does not affect the number of findings generated in post-iteration workshops.
A closer examination of the research data reveals that as the topics causing negative findings became fewer during both projects, they also drew closer to each other (Figure 3). In other words, the criticism of the project team became more focused.
**Fig. 3. Number of topics behind negative findings**
The above zOmbie data clearly supports the earlier reported eXpert case study results [7]. The declining trends of negative findings and the topics behind them (Figure 3) support the argument that the process actually adapted to the needs of the project team and increased their satisfaction with the process [7].
Table 3 illustrates the central topics behind the positive and negative findings in eXpert and zOmbie.
<table>
<thead>
<tr>
<th rowspan="2">Rank</th>
<th colspan="2">Top 5 positive findings</th>
</tr>
<tr>
<th>eXpert</th>
<th>zOmbie</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Pair programming</td>
<td>Team spirit</td>
</tr>
<tr>
<td>2</td>
<td>Short iterations</td>
<td>Working environment</td>
</tr>
<tr>
<td>3</td>
<td>Continuous integration</td>
<td>Technical environment</td>
</tr>
<tr>
<td>4</td>
<td>On-site customer</td>
<td>Planning game</td>
</tr>
<tr>
<td>5</td>
<td>Refactoring</td>
<td>Pair programming</td>
</tr>
</tbody>
</table>
Interestingly, all the top five positive topics in eXpert focus on the practices of XP. In zOmbie, the positive findings concentrated more on human and environmental aspects.
A closer examination of the negative findings in the two projects discloses some project-specific problem areas, especially regarding the zOmbie project. One clearly emerging issue is the off-site customer, which was applied only in the zOmbie project: 20% of the negative findings in the first iteration related to the off-site customer being too busy. Changing the off-site customer, which may often be impossible, solved much of the problem in this case. Also, the communication practices with the customer were improved throughout the project. Another clearly project-specific problem area in the zOmbie project was testing in the mobile software development environment: 12% of the 92 negative findings throughout the project related to this problem. Specifically, this problem concerned (test-first) testing in the client-side (i.e., simulator) environment and was not solved during the project. Also, the set-up of the technical environment was found more complicated in mobile software than in intranet application development, as it required, for example, firewall configurations and setting up an IP server. In zOmbie, 8.7% of the negative findings related to environmental problems, whereas this topic resulted in zero findings in eXpert.
Common to both projects among the negative findings was the lack of clear exit criteria for tasks, i.e., criteria to verify whether a task is actually done. Data collection was also found problematic: due to the research character of the projects, the collection of measurement data was heavy and time tracking detailed. However, both of these findings appeared strongly after the first iteration and declined sharply towards the end of the project. The importance of coding standards, as well as their proper use, was an issue that came up in both projects. Furthermore, the estimation of tasks was found clearly problematic in both projects: 15% of all the negative findings in zOmbie related to this topic, and 17% in eXpert.
Two topics that were reported only in the eXpert project were test-first and short iterations. The test-first approach [13] was clearly problematic because no expert on this approach was available. In the zOmbie project, however, the project manager had been a member of the previous eXpert team and thus had some experience and knowledge of the topic. In eXpert, the negative findings related to testing refer to the decreased motivation of the external testing team, which was reflected on the development team as a lack of testing results and feedback.
Figure 4 presents the number of actual process improvements that were carried out after the iterations.
As can be seen, the post-iteration workshops of the zOmbie project resulted in 56 implemented improvement actions, whereas eXpert implemented only 16. A thorough analysis of the improvement actions and their effects is outside the scope of this paper; one reason for this is that the existing project-level SPI techniques lack a detailed procedure for the follow-up of SPI actions, as well as their support with, for example, measurement data. However, quantitative data on SPI actions is interesting for evaluating how the number of negative findings in each workshop relates to the number of process improvements (Figure 5).
Figure 5 illustrates how the gap between the number of negative findings and the number of process improvement actions is widest after the first iteration in both case studies. For one, this data indicates that the project teams were cautious about making any process changes at the beginning of the project. One reason for this was the novelty of the methods and techniques used, which made it impossible to evaluate whether the negativity was caused by the method itself or by the team's lack of ability to apply it. For example, in eXpert, the test-first approach caused negative findings during the first two iterations, but these actually turned into positive findings towards the end of the project as the team learned to apply the technique.
Learning also took place during the post-iteration workshops themselves. For example, some of the negative findings could be identified as misunderstandings or problems in communication. These issues needed no specific actions but were solved by discussion within the team. Also, it can be interpreted from Figure 5 that the implemented SPI actions helped decrease the number of negative findings after the following iteration. This, again, points to the increased satisfaction of the project team with the enhanced process. For example, as the data collection tools were improved both in eXpert and zOmbie, the negative findings on this topic dried up.
In fact, the SPI actions, though relatively small at times, seemed to produce positive findings even on annoying topics such as data collection.
4. Conclusions and Further Work
Agile principles suggest that the software development team should regularly reflect on how to become more effective and tune and adjust its behaviour accordingly. Some systematic approaches have been proposed on how to execute this self-reflection process effectively but little empirical evidence yet exists. This paper presents a comparison of empirical results of two case studies where two known self-reflection approaches were combined [1, 6] and four post-iteration workshops were held in two XP projects.
The goal was to study how the post-iteration workshop results from two similar, yet divergent projects vary, in order to strengthen the conclusions of the earlier reported eXpert results [7] and to broaden the study to find coherences and deviations in the research data. The data includes the quantity and quality of positive and negative findings from the post-iteration workshops, as well as the quantity of the actual SPI actions made based on the findings. Although these two case studies do not offer extensive enough data to draw any generalizations, some conclusions can be brought forward for further evaluation.
Firstly, several consistencies could be found in the data of the two projects. For one, the number of both positive and negative findings decreased quite rapidly towards the end of the projects. This indicates the acclimatization of the project team to the new tools and practices, and the decrease in negative findings in particular reflects the increased satisfaction of the project team towards the end of the project. In other words, the findings support the assumption that post-iteration workshops were effective in improving the process to suit the development team. Secondly, the number of negative findings and the number of SPI actions clearly drew closer together towards the end of the projects, as the number of needed improvements clearly decreased. This data also speaks for the successful adaptation of the software process and the effectiveness of post-iteration workshops.
Thirdly, the data from both projects shows that the effort needed for post-iteration workshops clearly decreases from iteration to iteration. This partly indicates learning of the technique, but it also correlates with the increased satisfaction of the project team. The lower number of negative findings in consecutive iterations shortened the time spent on discussion and decision-making concerning the SPI actions for the next iteration.
Some deviations could also be found when comparing the research data from the two projects. The zOmbie case study produced a clearly larger number of negative post-iteration workshop findings. The underlying causes were found in several factors. One of these was the complexity of the project: in the mobile software development project (zOmbie), a few clearly complex factors, such as environment set-up and testing on a mobile device, were found to increase the number of negative findings compared to eXpert. Another clear factor increasing the number of negative findings was the off-site customer in zOmbie. Naturally, the larger number of negative findings is also visible in the longer duration of the workshops in zOmbie. In other words, factors such as the complexity of the project and the suitability of the used software process for the specific team affect the time spent on post-iteration workshops and the number of changes needed in the process.
The effort used for post-iteration workshops decreases towards the end of the project in both case studies; it could be calculated to be as low as 2.2% (4th zOmbie workshop). Taking into consideration the shorter working week (i.e., 24 hours per week), the effort needed for post-iteration workshops is quite tolerable, especially considering the immeasurable value of the increased satisfaction and learning of the project team. Still, the effectiveness of post-iteration workshops with regard to effort and duration should be further increased, especially during the first iterations. The research data presented provides some normative basis for estimating how much effort organizations should allocate for holding post-iteration workshops.
According to the software developers of eXpert and zOmbie, the rapid visibility of the SPI actions and the concrete possibility to influence the working practices increase the satisfaction of the project team. These strong indications of the benefit of the post-iteration workshops were found in the positive remarks made by both development teams in the final interviews.
This paper does not report the quality of the actual SPI actions made during the two case studies, nor their effect on, for example, the quality of the end product. One reason for this is that the existing project-level SPI techniques lack a detailed procedure for the follow-up of SPI actions, as well as their support with, for example, measurement data. Also, the existing techniques lack important aspects in enhancing extensive learning in future projects. Based on this observation, the post-iteration workshop technique has since been evolved and is currently being applied for further evaluation in two subsequent case studies (bAmbie and uniCorn) at the VTT Technical Research Centre of Finland.
Overall, based on the data presented in this paper, the second case study (zOmbie) is in line with the results of eXpert and thus supports the early conclusions presented in [7]. Accordingly, the iterative gathering of workshops is a concrete way to improve and adapt Agile software processes during the iterative cycles of software development. Thus, post-iteration workshops should be regarded as a useful method to be included in Agile software development projects, especially when supplemented with follow-up and validation of process improvements.
5. Acknowledgements
The research work presented in this paper has been carried out in the ICAROS project funded by TEKES (the National Technology Agency of Finland). Thanks are due to the eXpert and zOmbie teams for their thorough participation in the process improvement activities, to Dr. Pekka Abrahamsson for his co-operation and support, and to Annukka Mäntyniemi for her valuable comments.
6. References
COMPARATIVE EVALUATION OF A MAXIMIZATION AND MINIMIZATION APPROACH FOR TEST DATA GENERATION WITH GENETIC ALGORITHM AND BINARY PARTICLE SWARM OPTIMIZATION
Ankur Pachauri and Gursaran
Department of Mathematics, Dayalbagh Educational Institute, Agra 282110
ankurpachauri@gmail.com, gursaran.db@gmail.com
ABSTRACT
In search based test data generation, the problem of test data generation is reduced to that of function minimization or maximization. Traditionally, for branch testing, the problem of test data generation has been formulated as a minimization problem. In this paper we define an alternate maximization formulation and experimentally compare it with the minimization formulation. We use genetic algorithm and binary particle swarm optimization as the search technique and in addition to the usual operators we also employ a branch ordering strategy, memory and elitism. Results indicate that there is no significant difference in the performance or the coverage obtained through the two approaches and either could be used in test data generation when coupled with the branch ordering strategy, memory and elitism.
KEYWORDS
Search based test data generation, program test data generation, genetic algorithm, software testing
1. INTRODUCTION
Search-based software test data generation has emerged [1, 2, 3, 4, 5, 6] as a significant area of research in software engineering. In search-based test data generation, the problem of test data generation is reduced to that of function minimization or maximization. The source code is instrumented to collect information about the program as it executes. The collected information is used to heuristically measure how close the test data is to satisfying the test requirements. The measure is then used to modify the input parameters to progressively move towards satisfying the test requirement. It is here that the application of metaheuristic search techniques has been explored. Traditionally, for branch testing, the problem of test data generation has been formulated as a minimization problem. In this paper we define an alternate maximization formulation and experimentally compare it with the traditional minimization formulation.
During testing, program P under test is executed on a test set of test data - a specific point in the input domain - and the results are evaluated. The test set is constructed to satisfy a test adequacy criterion that specifies test requirements [7, 8]. The branch coverage criterion is a test adequacy criterion that is based on the program flow graph. More formally, a test set T is said to satisfy the branch coverage criterion if on executing P on T, every branch in P’s flow graph is traversed at least once.
Metaheuristic techniques such as genetic algorithms [9], quantum particle swarm optimization [10], scatter search [11], and others have been applied to the problem of automated test data generation, and there is evidence of their successful application. Several of these works have addressed test data generation with program-based criteria [10], and in particular the branch coverage criterion [10, 11, 12, 13, 14, 15, 16, 17, 18]. Further, [12, 13, 14, 19, 20] have formulated the problem as a minimization problem. In this paper we consider an alternate maximization formulation and compare it with the minimization strategy. We use both Genetic Algorithm and Binary Particle Swarm Optimization as the search strategies.
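To make the two formulations concrete, consider a branch guarded by a condition such as `a == b`. The sketch below uses the standard branch-distance idea as an illustration of the general scheme, not the authors' exact objective functions: the distance is zero when the branch is taken, and the maximization variant simply inverts it so that larger values are better.

```python
def branch_distance(a, b):
    """Distance to satisfying the branch condition `a == b` (0 when taken)."""
    return abs(a - b)

def minimization_fitness(a, b):
    # Traditional formulation: smaller is better; 0 means the branch is covered.
    return branch_distance(a, b)

def maximization_fitness(a, b):
    # Alternate formulation: larger is better; 1.0 means the branch is covered.
    return 1.0 / (branch_distance(a, b) + 1.0)

# Test data (5, 5) covers the branch; (5, 9) is farther from covering it than (5, 6).
print(minimization_fitness(5, 5), maximization_fitness(5, 5))
print(minimization_fitness(5, 9), maximization_fitness(5, 9))
```

Both formulations rank candidate test data identically (just in opposite directions), which is why a search technique driven by either can be compared on equal footing.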
In this paper, in Section 2 we describe the Genetic Algorithm (GA) and in Section 3 we describe the Particle Swarm Optimization (PSO) and Binary Particle Swarm Optimization (BPSO). We outline the basic maximization and minimization strategies for test data generation in Section 4 and also describe the other strategies employed by us. In Section 5 we present the experimental setup and in Section 6 we discuss the results of the experiments. Section 7 concludes the paper.
2. GENETIC ALGORITHM
Genetic Algorithm (GA) is a search algorithm based on the ideas of genetics and evolution, in which new and fitter string individuals are created by combining portions of the fittest string individuals of the parent population [21]. A genetic algorithm execution begins with a random initial population of candidate solutions \(\{s_i\}\) to an objective function \(f(s)\). Each candidate \(s_i\) is generally a vector of parameters to the function \(f(s)\) and usually appears as an encoded binary string (or bit string) called a chromosome or a binary individual. An encoded parameter is referred to as a gene, and the parameter's values are the gene's alleles. If there are \(m\) input parameters with the \(i\)th parameter expressed in \(n_i\) bits, then the length of the chromosome is simply \(\sum_i n_i\). In this paper each binary individual, or chromosome, represents an encoding of test data.
After creating the initial population, each chromosome is evaluated and assigned a fitness value. Evaluation is based on a fitness function that is problem dependent. From this initial population, the set of chromosomes iteratively evolves to one in which candidates satisfy some termination criteria or, as in our case, fail to make any forward progress. Each iteration step is also called a generation.
Each generation may be viewed as a two-stage process [21]. Beginning with the current population, selection is applied to create an intermediate population, and then recombination and mutation are applied to create the next population. The most common selection scheme is roulette-wheel selection, in which each chromosome is allocated a wheel slot of size proportional to its fitness. By repeatedly spinning the wheel, individual chromosomes are chosen using "stochastic sampling with replacement" to construct the intermediate population. Additionally, with elitism, the fittest chromosomes survive from one generation to the next.
After selection, crossover, i.e., recombination, is applied to randomly paired strings with a probability. Amongst the various crossover schemes are the one point, two point and the uniform crossover schemes [21]. In the one point case a crossover point is identified in the chromosome bit string at random and the portions of chromosomes following the crossover point, in the paired chromosomes, are interchanged. In addition to crossover, mutation is used to prevent permanent loss of any particular bit or allele. Mutation application also introduces genetic diversity. Mutation results in the flipping of bits in a chromosome according to a mutation probability which is generally kept very low.
The chromosome length, population size, and the various probability values in a GA application are referred to as the GA parameters in this paper. Selection, crossover and mutation are referred to as the GA operators.
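The generation cycle described above (roulette-wheel selection with replacement, one-point crossover, low-probability mutation, and elitism) can be sketched in Python. This is an illustrative sketch only, not the implementation used in the experiments; the function names and default parameter values are our own assumptions.

```python
import random

def roulette_select(pop, fits):
    """Stochastic sampling with replacement: slot size proportional to fitness."""
    total = sum(fits)
    r = random.uniform(0, total)
    acc = 0.0
    for chrom, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return chrom
    return pop[-1]

def one_point_crossover(a, b):
    # Pick a crossover point and swap the tails of the paired chromosomes.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(chrom, p_mut=0.01):
    # Flip each bit with a small, fixed mutation probability.
    return [bit ^ 1 if random.random() < p_mut else bit for bit in chrom]

def next_generation(pop, fitness, p_cross=1.0, p_mut=0.01):
    """One GA generation: selection, then crossover and mutation, with elitism."""
    fits = [fitness(c) for c in pop]
    elite = max(pop, key=fitness)   # elitism: the fittest chromosome survives
    new_pop = [elite]
    while len(new_pop) < len(pop):
        a, b = roulette_select(pop, fits), roulette_select(pop, fits)
        if random.random() < p_cross:
            a, b = one_point_crossover(a, b)
        new_pop.append(mutate(a, p_mut))
        if len(new_pop) < len(pop):
            new_pop.append(mutate(b, p_mut))
    return new_pop
```

For example, with a OneMax-style fitness (the number of 1-bits in the chromosome), repeated calls to `next_generation` drive the population toward the all-ones string, and elitism guarantees the best fitness never decreases between generations.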
3. PARTICLE SWARM OPTIMIZATION AND BINARY PARTICLE SWARM OPTIMIZATION
Particle Swarm Optimization (PSO) was initially proposed by Kennedy and Eberhart [22, 23] in 1995 to find optimal solutions for continuous space problems. In PSO the search starts with a randomly generated population of solutions, called the swarm of particles, in a \( d \)-dimensional solution space. Particle \( i \) is represented as \( X_i = (x_{i1}, x_{i2}, \ldots, x_{id}) \), which is called the position of particle \( i \) in \( d \)-dimensional space. With every particle \( i \) a velocity vector \( V_i = (v_{i1}, v_{i2}, \ldots, v_{id}) \) is associated; it plays an important role in deciding the next position of the particle and is updated in each iteration. For updating the velocity of each particle, the particle's best \( P_{ibest} = (p_{i1}, p_{i2}, \ldots, p_{id}) \), which is the best position achieved so far by particle \( i \), and the global best \( P_{gbest} = (p_{g1}, p_{g2}, \ldots, p_{gd}) \), which is the best position achieved so far by any particle of the swarm, are used.
Equations (1) and (2) below give the new velocity and position of particle \( i \) in iteration \( t+1 \).
\[
V_i(t+1)=w \cdot V_i(t)+c_1 \cdot \phi_1 \cdot (p_{ibest}-X_i(t))+c_2 \cdot \phi_2 \cdot (p_{gbest}-X_i(t)) \tag{1}
\]
\[
X_i(t+1)=X_i(t)+V_i(t+1) \tag{2}
\]
In equation (1), \( w \) is the inertia weight, which controls the impact of the previous history of velocity on the global and local search abilities of the particles [23]; \( c_1 \) and \( c_2 \) are positive learning constants which determine the rate at which the particle moves towards the individual's best position and the global best position. Usually, \( c_1 \) and \( c_2 \) are chosen so that their sum does not exceed 4; if it does, both the velocities and positions will explode toward infinity. \( \phi_1 \) and \( \phi_2 \) are random numbers drawn from the uniform probability distribution on \((0, 1)\). In this way the positions and velocities of the particles are evolved in each iteration until the optimal solution is obtained.
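Equations (1) and (2) translate directly into code. The sketch below updates a single particle; the values w = 0.7 and c1 = c2 = 2.0 (chosen so that c1 + c2 does not exceed 4) and the function name are illustrative assumptions, not values from the paper's experiments.

```python
import random

def pso_step(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One velocity/position update (equations (1) and (2)) for one particle."""
    phi1, phi2 = random.random(), random.random()   # uniform on (0, 1)
    # Equation (1): inertia term plus cognitive and social attraction terms.
    V_new = [w * v
             + c1 * phi1 * (pb - x)
             + c2 * phi2 * (gb - x)
             for x, v, pb, gb in zip(X, V, pbest, gbest)]
    # Equation (2): move the particle by its new velocity.
    X_new = [x + v for x, v in zip(X, V_new)]
    return X_new, V_new
```

Note that when a particle sits exactly at both its personal best and the global best with zero velocity, equations (1) and (2) leave it stationary, as the update terms all vanish.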
In 1997 Kennedy and Eberhart [24] introduced the binary particle swarm optimization (BPSO) algorithm. In the binary version every particle is represented by a bit string, and each bit is associated with a velocity that determines the probability of changing the bit to 1; this probability must be restricted to the range \([0, 1]\). Let \( P \) be the probability of changing a bit to 1; then \( 1-P \) is the probability of leaving it unchanged at 0. This probability can be represented as the following function:
\[
P(x_{id}(t) = 1) = f(x_{id}(t), v_{id}(t-1), p_{ad}, p_{gd}) \tag{3}
\]
where \( P(x_{id} = 1) \) is the probability that an individual particle \( i \) will choose 1 for the bit at the \( d \)-site in the bit string, \( x_{id}(t) \) is the current state of particle \( i \) at bit \( d \), \( v_{id}(t-1) \) is a measure of the string's current probability to choose a 1, \( p_{ad} \) is the best state found so far for bit \( d \) of individual \( i \) (a 1 or a 0), and \( p_{gd} \) is 1 or 0 depending on the value of bit \( d \) in the global best particle.
The most commonly used measure for \( f \) is the sigmoid function which is defined as follows:
\[
f(v_{id}(t)) = \frac{1}{1 + e^{-v_{id}(t)}} \tag{4}
\]
where,
\[
v_{id}(t)=w_{id}v_{id}(t-1)+(\varphi_1)(p_{ad}-x_{id}(t-1))+(\varphi_2)(p_{gd}-x_{id}(t-1)) \tag{5}
\]
Equation (5) gives the update rule for the velocity of each bit, where \( \varphi_1 \) and \( \varphi_2 \) are random numbers drawn from uniform distributions, sometimes chosen so that their sum is 4. The velocity \( v_{id} \) is usually limited so that the probability \( f(v_{id}) \) does not approach 0.0 or 1.0 too closely; for this, constant bounds \([V_{\text{min}}, V_{\text{max}}]\) are used. When \( v_{id} \) is greater than \( V_{\text{max}} \) it is set to \( V_{\text{max}} \), and when \( v_{id} \) is smaller than \( V_{\text{min}} \) it is set to \( V_{\text{min}} \). This simply limits the ultimate probability that bit \( x_{id} \) will take on a zero or one value. A higher value of \( V_{\text{max}} \) makes new vectors less likely. Thus \( V_{\text{max}} \) in the discrete particle swarm plays the role of limiting exploration after the population has converged [24]; that is, \( V_{\text{max}} \) controls the ultimate mutation rate, or temperature, of the bit vector, and a smaller \( V_{\text{max}} \) leads to a higher mutation rate [24]. This is explored in the experiments described in this paper.
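A per-bit update following equations (3)–(5) might look like the following sketch. Drawing φ1 and φ2 from (0, 2) so that their sum stays below 4, and the bounds Vmin = −4 and Vmax = 4, are assumptions for illustration only.

```python
import math
import random

def bpso_bit_update(x, v_prev, p_id, p_gd, w=1.0, v_min=-4.0, v_max=4.0):
    """Update one bit of a binary particle.

    x, p_id, p_gd are 0/1 values (current bit, personal best bit, global
    best bit); v_prev is the bit's previous velocity. Bounds and the
    phi-sampling scheme are illustrative assumptions.
    """
    phi1, phi2 = random.uniform(0, 2), random.uniform(0, 2)
    v = w * v_prev + phi1 * (p_id - x) + phi2 * (p_gd - x)   # equation (5)
    v = max(v_min, min(v_max, v))                             # clamp to [Vmin, Vmax]
    prob_one = 1.0 / (1.0 + math.exp(-v))                     # sigmoid, equation (4)
    new_x = 1 if random.random() < prob_one else 0            # equation (3)
    return new_x, v
```

Clamping the velocity to [Vmin, Vmax] keeps the sigmoid away from 0 and 1, so every bit retains a nonzero chance of flipping, which is exactly the residual-mutation behaviour described above.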
4. TEST DATA GENERATION FOR BRANCH COVERAGE
In search based test data generation, test data is generated to meet the requirements of a particular test adequacy criterion. The criterion in our case is the branch coverage criterion. The setup phase begins with the choice of a suitable representation for test data and the identification of a suitable fitness function.
The inputs for one execution of the program under test \(P\), i.e., a single test data, are represented in a binary string, also called a \textit{binary individual}. For instance, if the input to \(P\) is a pair of integers \((I_1, I_2)\), it is represented as a bit sequence of length \(\text{rep}(I_1)+\text{rep}(I_2)\), where \(\text{rep}(x)\) is the number of bits taken to represent \(x\). The lengths of the bit strings representing \(I_1\) and \(I_2\) are chosen to accommodate the largest legal value that can be input to \(P\). In the representation, the bit sequence representing \(I_1\) is followed by the bit sequence representing \(I_2\).
The fitness of a binary individual is computed as
\[
\text{Fitness}(x) = \text{Approximation Level} + \text{Normalized Branch Distance}
\]
Traditionally, the test data generation problem is formulated as a minimization problem, as in [12, 13, 14, 19, 20], in which the approximation level numbering starts from the target branch and the normalized branch distance is computed as
\[
\text{Normalized Branch Distance} = 1 - 1.001^{-\text{distance}}
\]
As opposed to this, the test data generation problem can also be formulated as a maximization problem. The definition of approximation level and normalized branch distance is also different from [2] although the basic idea is similar.
The \textit{Approximation Level} is a count of the number of predicate nodes in the shortest path from the first predicate node in the flow graph to the predicate node with the \textit{critical branch} (a branch that causes the target to be missed in a path through the program), as shown in Figure 1.
The \textit{Normalized Branch Distance} is computed according to the formula
\[
\text{Normalized Branch Distance} = (1/(1.001^{\text{distance}}))
\]
where, \textit{distance}, or \textit{branch distance}, as defined in [20, 25], is computed at the node with the critical branch using the values of the variables or constants involved in the predicates used in the conditions of the branching statement. Table 1 summarizes the computation of distance.
<table>
<thead>
<tr>
<th>Decision Type</th>
<th>Branch Distance</th>
</tr>
</thead>
<tbody>
<tr>
<td>(a < b)</td>
<td>(a - b)</td>
</tr>
<tr>
<td>(a \leq b)</td>
<td>(a - b)</td>
</tr>
<tr>
<td>(a > b)</td>
<td>(b - a)</td>
</tr>
<tr>
<td>(a \geq b)</td>
<td>(b - a)</td>
</tr>
<tr>
<td>(a == b)</td>
<td>(\text{Abs}(a - b))</td>
</tr>
<tr>
<td>(a != b)</td>
<td>(\text{Abs}(a - b))</td>
</tr>
<tr>
<td>(a && b)</td>
<td>(a + b)</td>
</tr>
<tr>
<td>(a || b)</td>
<td>(\text{min}(a, b))</td>
</tr>
</tbody>
</table>
Entries one through five are the same as in [19]. Table 1 also describes the computation of distance in the presence of logical operators AND (&&) and OR (||). In both these cases, the definition takes into account that branch distance is to be minimized whereas the fitness is to be maximized.
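Table 1 and the maximization fitness can be expressed as the following sketch. The relational cases assume numeric operand values, while `&&` and `||` take already-computed sub-distances of the operand predicates; the `min(a, b)` entry for `||` is our reading of the incomplete table entry, consistent with branch distance being minimized.

```python
def branch_distance(op, a, b):
    """Branch distance per Table 1. For '&&' and '||', a and b are the
    sub-distances of the two operand predicates, not raw operand values.
    The min(a, b) entry for '||' is an assumption (the source table entry
    is incomplete)."""
    table = {
        '<':  a - b,
        '<=': a - b,
        '>':  b - a,
        '>=': b - a,
        '==': abs(a - b),
        '!=': abs(a - b),
        '&&': a + b,
        '||': min(a, b),
    }
    return table[op]

def fitness(approximation_level, distance):
    """Maximization formulation: Approximation Level plus the normalized
    branch distance 1 / 1.001**distance, which lies in (0, 1] for
    distance >= 0."""
    return approximation_level + 1.0 / (1.001 ** distance)
```

A test datum that reaches the critical branch with a small branch distance scores close to approximation_level + 1, so fitter individuals sit nearer the target branch.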
Figure 1. Approximation Level and Branch Distance Computation
In general, in order to generate test data to satisfy the branch coverage criterion using GA and BPSO, the sequence in which the branches will be selected for coverage must be defined. A chosen branch may become difficult to cover if the corresponding branch predicate is not reached by any of the test data or individuals in the current population. One of the proposals made by Pachauri and Gursaran [26] for sequencing is the path prefix strategy. We adopt this strategy for the experiments described in this paper. Further, each time a branch is traversed for the first time, it may be necessary to store the test data that traverse the branch and inject these into the population when the sibling branch is selected for traversal. This is referred to as memory and is used in this paper. In order to ensure that individuals reaching the sibling branch of the target are not destroyed by the genetic algorithm operators, elitism is adopted. Up to 10% of fit individuals, with a minimum of one individual, are carried forward to the next generation. Furthermore, it is also possible to initialize the population each time a new branch is selected for coverage or leave it uninitialized. In the experiments described in this paper, the population is not initialized.
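The elitism rule described above (up to 10% of the fittest individuals, with a minimum of one, carried forward unchanged) is simple to state in code; this sketch is illustrative and the function name is our own.

```python
def elite_carry_forward(population, fitness, fraction=0.10):
    """Return the individuals to copy unchanged into the next generation,
    so that test data reaching the sibling branch of the target are not
    destroyed by crossover or mutation."""
    k = max(1, int(len(population) * fraction))   # at least one elite
    ranked = sorted(population, key=fitness, reverse=True)
    return ranked[:k]
```

Under the memory mechanism, test data stored on first traversal of a branch would be injected alongside these elites when the sibling branch becomes the target.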
Infeasibility may prevent test data from being generated to satisfy a coverage criterion. It may be dealt with as follows. If the search is attempting to traverse a particular branch, but is unable to do so over a sufficiently large, predetermined, number of iterations, then the search run is aborted and the branch is manually examined for infeasibility. If the branch is found to be infeasible then it is marked as traversed and the search is rerun.
5. **EXPERIMENTAL SETUP**
In this section we describe the experiments carried out to evaluate the performance of test data generation with the genetic algorithm and with binary particle swarm optimization.
5.1 Benchmark Programs
Benchmark programs chosen for the experiments have been taken from [11, 27]. These programs have a number of features such as real inputs, equality conditions with the AND operator and deeply nested predicates that make them suitable for testing different approaches for test data generation.
- **Line in a Rectangle Problem**: This program takes eight real inputs, four of which represent the coordinates of the rectangle and the other four the coordinates of the line. The program determines the position of the line with respect to the rectangle and generates one of four possible outputs:
A. The line is completely inside the rectangle;
B. The line is completely outside the rectangle;
C. The line is partially covered by the rectangle; and
D. Error: The input values do not define a line and/or a rectangle.
The maximum nesting level is 12. In total this program’s CFG has 54 nodes and 18 predicate nodes.
- **Number of Days between Two Dates Problem**: This program calculates the number of days between two given dates of the current century. It takes six integer inputs: three represent the first date (day, month, and year) and the other three represent the second date (day, month, and year). The CFG has 127 nodes, of which 43 are predicate nodes.
- **Calday**: This routine returns the Julian day number. There are three integer inputs to the program: the first represents the month, the second the day, and the third the year. Its CFG has 27 nodes with 11 predicate nodes. It has equality conditions and the remainder operator. The maximum nesting level is 8.
- **Complex Branch**: This routine accepts six short integer inputs. It contains complex predicate conditions with relational operators combined with AND and OR conditions; it also contains while loops and a SWITCH-CASE statement. Its CFG contains 30 nodes.
- **Meyer’s Triangle Classifier Problem**: This program classifies a triangle on the basis of its input sides as a non-triangle or as a triangle that is isosceles, equilateral or scalene. It takes three real inputs, all of which represent the sides of the triangle. Its CFG has 14 nodes with 6 predicate nodes. The maximum nesting level is 5. It has equality conditions with the AND operator, which make the branches difficult to cover.
- **Sthamer’s Triangle Classifier Problem**: This program also classifies a triangle on the basis of its input sides as a non-triangle or as a triangle that is isosceles, equilateral, right-angled or scalene. It takes three real inputs, all representing the sides of the triangle, but with different predicate conditions. Its CFG has 29 nodes with 13 predicate nodes. The maximum nesting level is 12. It has equality conditions with the AND operator and complex relational operators.
- **Wegener’s Triangle Classifier Problem**: This program also classifies a triangle on the basis of its input sides as a non-triangle or as a triangle that is isosceles, equilateral, orthogonal or obtuse-angled. It takes three real inputs, all representing the sides of the triangle, but with different predicate conditions. Its CFG has 32 nodes with 13 predicate nodes.
- **Michael’s Triangle Classifier Problem**: This program also classifies a triangle on the basis of its input sides as a non-triangle or as a triangle that is isosceles, equilateral or scalene. It takes three real inputs, all representing the sides of the triangle, but with different predicate conditions. Its CFG has 26 nodes with 11 predicate nodes. The maximum nesting level is 6.
### 5.2 GA Operator and Parameter Settings
Table 2 lists the various operator and parameter settings for the genetic algorithm used in this study.
<table>
<thead>
<tr>
<th>Parameter/ Operator</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 Population Size</td>
<td>6, 10, 16, 20, 26, …, 110.</td>
</tr>
<tr>
<td>2 Crossover type</td>
<td>Two point crossover</td>
</tr>
<tr>
<td>3 Crossover Probability</td>
<td>1.0</td>
</tr>
<tr>
<td>4 Mutation Probability</td>
<td>0.01</td>
</tr>
<tr>
<td>5 Selection Method</td>
<td>Binary tournament</td>
</tr>
<tr>
<td>6 Branch Ordering Scheme</td>
<td>Path Prefix Strategy</td>
</tr>
<tr>
<td>7 Fitness Function</td>
<td>As described in Section 4.</td>
</tr>
<tr>
<td>8 Population Initialization</td>
<td>Initialize once at the beginning of the GA run</td>
</tr>
<tr>
<td>9 Population Replacement Strategy</td>
<td>Elitism with up to 10% carry forward</td>
</tr>
<tr>
<td>10 Maximum Number of Generations</td>
<td>10⁷</td>
</tr>
<tr>
<td>11 Memory</td>
<td>Yes</td>
</tr>
</tbody>
</table>
Table 3 lists the various operator and parameter settings for the Binary Particle Swarm Optimization (BPSO) used in this study.
<table>
<thead>
<tr>
<th>Parameter/ Operator</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>ϕ₁, ϕ₂</td>
<td>Random numbers from the uniform distribution (0,4), such that ϕ₁ + ϕ₂ ≤ 4</td>
</tr>
<tr>
<td>w</td>
<td>[0.5 + (rnd/2.0)], where rnd is a random number drawn from the uniform distribution (0,1).</td>
</tr>
</tbody>
</table>
### 6. RESULTS
Experiments with the two approaches were carried out and compared independently for the Genetic Algorithm and for Binary Particle Swarm Optimization. For each population size, one hundred experiments were carried out and the following statistics were collected:
- Mean number of generations. It may be noted that the termination criterion for each experiment is either full branch coverage or 10⁷ generations, whichever occurs earlier. The number of generations to termination over the hundred experiments is used to compute the mean. The mean alone does not tell us whether all the branches were covered.
- Mean percentage coverage achieved.
Additionally, ANOVA was carried out using SYSTAT 9.0 to determine significant differences in means, for the experiments with the Genetic Algorithm only.
In all the experiments with GA and BPSO, full (100%) coverage was achieved for all population sizes, for all benchmark programs and for both maximization and minimization approaches. This implies that the differentiating factor would have to be the difference in the mean number of generations.
Figure 2 and Figure 3 plot the mean number of generations for both the maximization and minimization approach with Genetic Algorithm. Table 4 summarizes the results of ANOVA with F and p values for the GA based results. Considering a significance level of 0.05, it can be seen that the difference for all the benchmark programs is not significant except for some isolated cases which are not generalizable.
Similar results are also obtained for BPSO, as can be seen in Figures 4 and 5. Results with both approaches are comparable.
Further analysis in our case shows that with the path prefix strategy and memory, individuals are present in each generation that cause a traversal of the sibling branch of the target. This coupled with elitism may actually speed up the test data discovery process.
Figure 3 Plots of Mean Number of Generations for Benchmark Programs using Genetic Algorithm
Table 4. Results of test of ANOVA using Genetic Algorithm
<table>
<thead>
<tr>
<th>Population Size</th>
<th>Calday Program</th>
<th>Complex-Branch Program</th>
<th>Sthamer-Triangle Program</th>
<th>Wegener-Triangle Program</th>
</tr>
</thead>
<tbody>
<tr>
<td>F value</td>
<td>P value</td>
<td>F value</td>
<td>P value</td>
<td>F value</td>
</tr>
<tr>
<td>10</td>
<td>0.095</td>
<td>0.762</td>
<td>0.003</td>
<td>0.003</td>
</tr>
<tr>
<td>20</td>
<td>0.028</td>
<td>0.866</td>
<td>0.869</td>
<td>0.003</td>
</tr>
<tr>
<td>30</td>
<td>0.264</td>
<td>0.608</td>
<td>0.000</td>
<td>0.000</td>
</tr>
<tr>
<td>40</td>
<td>0.679</td>
<td>0.411</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>50</td>
<td>1.311</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>60</td>
<td>1.647</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>70</td>
<td>2.127</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>80</td>
<td>3.335</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>90</td>
<td>4.004</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>100</td>
<td>5.308</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>110</td>
<td>6.786</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>120</td>
<td>8.711</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>130</td>
<td>11.309</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>140</td>
<td>13.716</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
<tr>
<td>150</td>
<td>16.907</td>
<td>0.001</td>
<td>0.001</td>
<td>0.001</td>
</tr>
</tbody>
</table>
Figure 4 Plots of Mean Number of Generations for Benchmark Programs using Binary Particle Swarm Optimization (BPSO)
Figure 5 Plots of Mean Number of Generations for Benchmark Programs using Binary Particle Swarm Optimization (BPSO)
7. CONCLUSION
In search based test data generation, the problem of test data generation is reduced to one of function minimization or maximization. Traditionally, for branch testing, the problem has been formulated as a minimization problem. In this paper we have defined an alternative maximization formulation and experimentally compared it with the minimization formulation. We have used a genetic algorithm and binary particle swarm optimization as the search techniques and, in addition to the usual operators, we have employed the path prefix strategy for branch ordering together with memory and elitism. Results indicate that there is no significant difference in the performance or the coverage obtained through the two approaches, and either could be used for test data generation when coupled with the path prefix strategy, memory and elitism.
ACKNOWLEDGEMENTS
This work was supported by the UGC Major Project Grant F.No.36-70/2008 (SR) for which the authors are thankful.
REFERENCES
PRISMA2020: an R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis
Short title: PRISMA2020 tool for transparent systematic review flow diagrams
Neal R. Haddaway1,2,3*, Matthew J. Page4, Chris C. Pritchard5, Luke A. McGuinness6
1 Stockholm Environment Institute, Stockholm, Sweden
2 Mercator Research Institute on Global Commons and Climate Change, Berlin, Germany
3 Africa Centre for Evidence, University of Johannesburg, Johannesburg, South Africa
4 School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
5 Institute of Health and Allied Professions, School of Social Sciences, Nottingham Trent University, United Kingdom
6 Department of Population Health Sciences, Bristol Medical School, University of Bristol, United Kingdom
* Corresponding author: neal.haddaway@sei.org; neal_haddaway@hotmail.com
NOTE: This preprint reports new research that has not been certified by peer review and should not be used to guide clinical practice.
Abstract
Background
Reporting standards, such as PRISMA aim to ensure that the methods and results of systematic reviews are described in sufficient detail to allow full transparency. Flow diagrams in evidence syntheses allow the reader to rapidly understand the core procedures used in a review and examine the attrition of irrelevant records throughout the review process. Recent research suggests that use of flow diagrams in systematic reviews is poor and of low quality and called for standardised templates to facilitate better reporting in flow diagrams. The increasing options for interactivity provided by the Internet gives us an opportunity to support easy-to-use evidence synthesis tools, and here we report on the development of tools for the production of PRISMA 2020-compliant systematic review flow diagrams.
Methods and Findings
We developed a free-to-use, Open Source R package and web-based Shiny app to allow users to design PRISMA flow diagrams for their own systematic reviews. Our tools allow users to produce standardised visualisations that transparently document the methods and results of a systematic review process in a variety of formats. In addition, we provide the opportunity to produce interactive, web-based flow diagrams (exported as HTML files) that allow readers to click on boxes of the diagram and navigate to further details on methods, results or data files. We provide an interactive example here: https://driscoll.ntu.ac.uk/prisma/.
Conclusions
We have developed a user-friendly suite of tools for producing PRISMA 2020-compliant flow diagrams for users with coding experience and, importantly, for users without prior experience in coding by making use of Shiny. These free-to-use tools will make it easier to produce clear and PRISMA 2020-compliant systematic review flow diagrams. Significantly, users can also produce interactive flow diagrams for the first time, allowing readers of their reviews to smoothly and swiftly explore and navigate to further details of the methods and results of a review. We believe these tools will increase use of PRISMA flow diagrams, improve the compliance and quality of flow diagrams, and facilitate strong science communication of the methods and results of systematic reviews by making use of...
interactivity. We encourage the systematic review community to make use of these tools, and provide feedback to streamline and improve their usability and efficiency.
Keywords: evidence synthesis; flowchart; radical transparency; rapid review; scoping review; systematic literature review; data visualisation
Introduction
Evidence synthesis reporting standards
Evidence syntheses (e.g. systematic reviews and evidence maps) typically aim to reliably synthesise an evidence base, and are based on state-of-the-art methodologies designed to maximise comprehensiveness (or representativeness), procedural objectivity, and reproducibility, whilst minimising subjectivity and risk of bias (1, 2). Reproducibility is made possible through a high degree of transparency when reporting the planned or final methods used in a review protocol or final report. Reporting standards, such as PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses (3)) and ROSES (RepOrting standards for Systematic Evidence Syntheses (4)), aim to ensure that review methods and findings are described in sufficient detail.
In 2009, the PRISMA statement – a reporting guideline designed primarily for systematic reviews of health interventions – was released (3, 5). The guideline was developed by a consortium of systematic reviewers, methodologists and journal editors to address evidence of incomplete reporting in systematic reviews (6), with recommendations formed largely based on expert consensus obtained via Delphi surveys and consensus meetings. The PRISMA statement has been widely endorsed and adopted by journals, and evidence suggests use of the guideline is associated with more complete reporting of systematic reviews (7). However, to address the many innovations in methods for systematic reviews, changes in terminology, and new options to disseminate research evidence that have occurred since 2009, an update to the guideline (referred to now as PRISMA 2020 (8)) has recently occurred.
Review flow diagrams
Flow diagrams in evidence syntheses allow the reader to rapidly understand the core procedures used in a review and examine the attrition of irrelevant records throughout the
review process. The PRISMA flow diagram published in 2009 describes the sources, numbers and fates of all identified and screened records in a review (for more details, see the original flow diagram (3) and an update from 2014 (9)). A recent assessment of the quality and use of flow diagrams in systematic reviews found that only 50% of identified reviews made use of flow diagrams, with their quality generally being low and not significantly improving over time (quality defined by the presence of critical data on the flow of studies through a review (10)): as a result, the authors called for a standardised flow diagram template to improve reporting quality.
Several changes were made to the original PRISMA flow diagram in the 2020 update (8). The 2020 template: (i) recommends authors specify how many records were excluded before screening (e.g. because they were duplicate records that were removed, or marked as ineligible by automation tools); (ii) recommends authors specify how many full text reports were sought for retrieval and how many were not retrieved; (iii) gives authors the option to specify how many studies and reports included in a previous version of the review were carried over into the latest iteration of the review (if an updated review); and (iv) gives authors the option to illustrate the flow of records through the review as separated by type of source (e.g. bibliographic databases, websites, organisation and citation searching). Also, the phrase “studies included qualitative synthesis” has been replaced with “studies included in review”, given the former phrase has been incorrectly interpreted by some users as referring to syntheses of qualitative data. Furthermore, the recommendation to report in the flow diagram the number of studies included in quantitative synthesis (e.g. meta-analysis) has been removed, given a systematic review typically includes many quantitative syntheses, and the number of studies included in each varies (e.g. one meta-analysis might include 12 studies, another might include five).
Transparency and Open Science in evidence syntheses
Broadly speaking, the Open Science movement aims to promote research integrity, experimental and analytical repeatability and full transparency, from project inception to publication and communication. Various definitions and frameworks for Open Science have been proposed (e.g. Open Data, Open Methods, Open Access, Open Source proposed by Kraker et al. (11), and 44 components by Knoth and Pontika (12)). In addition, the FAIR principles (Findability, Accessibility, Interoperability and Reusability (13)) aim to ensure that available data can be readily retrieved and used. Furthermore, licensing can be used to specify what can be done with the data once it has been accessed (14).
The application of Open Science principles to evidence synthesis has been explored by Haddaway (15), defined as Open Synthesis: the concept has since been expanded to cover 10 proposed components (Open Synthesis Working Group 2020). Open Synthesis is important and beneficial for a number of key reasons (15): 1) there is a need to be able to access and verify methods used in reviews and allow interrogation of the fate of each record in the review process; 2) in order to reduce research waste, data collected within a review should be made publicly accessible and readily reusable in replications, updates and overlapping reviews; 3) capacity building via learning-by-doing is facilitated by having access to machine readable data and code from a review.
Interactivity and Web 2.0
Systematic review flow diagrams undoubtedly facilitate rapid comprehension of basic review methodology. However, they have far greater potential as a tool for communication and transparency when used not only as static graphics, but also as interactive ‘site maps’ for reviews. This is the essence of the concept ‘Web 2.0’: a rethinking of the internet as a tool for interactivity, rather than simply passive communication (16). Flow diagrams in their crudest
sense consist of inputs, processes and outputs, with the ‘nodes’ (i.e. boxes) in a systematic review flow diagram containing summaries of the numbers of records included or excluded at each stage, and ‘edges’ (i.e. arrows) indicating the ‘flow’ or movement of records from information sources, through the screening stages of the review, to the final set of included studies. For each node, there is a rich set of information relating both to the methods used and the respective associated records: for example, the number of records excluded at full text eligibility screening are presented alongside a summary of the reasons for exclusion.
In a static review document, it may require substantial effort to determine the methods used to process records or the underlying records themselves. Indeed, the difficulty in locating the relevant information (particularly if stored in supplementary data) often hampers peer-review and editorial assessment. This is one of the key reasons that reporting standards require authors to specify the location of relevant information in review protocols or reports (e.g. see the PRISMA checklist; http://www.prisma-statement.org/PRISMAStatement/Checklist). However, if we repurpose the flow diagram from a static element to an interactive ‘site map’ of the review, readers may immediately navigate to relevant information regarding review methods, inputs and outputs. Cross-linking between different elements of a review may help to facilitate the validation and assessment of systematic reviews and make it far easier to access and reuse their methods, data and code. Such interactivity could be achieved through hyperlinking within static digital files, such as PDF (portable document format) files, or through web-based visualisations that would facilitate updating or ‘living reviews’ (16).
Furthermore, by embedding and nesting relevant information behind an interactive visualisation such as a flow diagram, review authors could make use of a key concept in science communication: that of simplification. Simplification is a key principle in audio-visual science communication (17) and relies on prioritisation of information rather than ‘dumbing down’ (18). Extensive detail on the methods employed and on the reporting of information sources, data inputs and outputs could be accessed via hyperlinks, with core information placed front-and-centre. This layered or nested approach to science communication would
allow the reader to choose how much and what type of information to view, rather than the linear format currently used across science publishing.
Methods
Objectives
This project had the following aims:
1) to develop a novel package for the R programming environment (19) for producing systematic review flow diagrams that conform to the latest update of the PRISMA statement (8);
2) to adapt this code and publish a free-to-use, web-based tool (a Shiny app) for producing publication-quality flow diagram figures without any necessary prior coding experience;
3) to allow users to produce interactive versions of the flow diagrams that include hyperlinks to specific web pages, files or document sections.
The project was produced collaboratively as part of the Evidence Synthesis Hackathon (https://www.eshackathon.org) using a combination of languages (R, DOT, HTML and JavaScript) with the aim of being provided to the public as a free and open source R package and Shiny app. The project code was published and managed on GitHub ((20); https://github.com/nealhaddaway/PRISMA2020) and the Shiny app is hosted on a subscription-based Shiny server paid for by the Stockholm Environment Institute (https://estech.shinyapps.io/prisma_flowdiagram). Code has been annotated and documented in line with coding best practices and to facilitate understanding and reuse. At the time of submission, the PRISMA2020 package has been submitted to CRAN (the Comprehensive R Archive Network) for publication in their archive of R packages.
Results
In the following pages, we summarise the functionality of the R package and Shiny app, providing a summary in lay terms, along with a more detailed description for the code-savvy (‘Code detail’ boxes). Functions are indicated by courier font, whilst packages are indicated by italics.
The PRISMA2020 R package
Functionality:
1. Data import and cleaning
The data needed for the PRISMA_flowdiagram() function can be entered directly as a set of numbers or R objects, but data entry can be facilitated by using a template comma separated values (CSV) file (see Table 1). We recommend using a CSV file rather than manually inputting numbers, as this allows for better reproducibility and transparency: the underlying CSV can be shared. This file can be edited to a large extent, and the edits are incorporated into the text, numbers, hyperlinks and tooltips used to make the plot.
Table 1. Structure of the template CSV file used to supply the data, text, tooltips and hyperlinks for the flow diagram.
<table>
<thead>
<tr>
<th>data</th>
<th>node</th>
<th>box</th>
<th>description</th>
<th>boxtext</th>
<th>tooltips</th>
<th>url</th>
<th>n</th>
</tr>
</thead>
<tbody>
<tr>
<td>NA</td>
<td>node4</td>
<td>box1</td>
<td>Grey title box; Previous studies</td>
<td>Previous studies</td>
<td>Grey title box; Previous studies</td>
<td>prevstud.html</td>
<td>xxx</td>
</tr>
<tr>
<td>previous_studies</td>
<td>node5</td>
<td>box1</td>
<td>Studies included in previous version of review</td>
<td>Studies included in previous version of review</td>
<td>Studies included in previous version of review</td>
<td>previous_studies.html</td>
<td>xxx</td>
</tr>
<tr>
<td>previous_reports</td>
<td>NA</td>
<td>box1</td>
<td>Reports of studies included in previous version of review</td>
<td>Reports of studies included in previous version of review</td>
<td>NA</td>
<td>previous_reports.html</td>
<td>xxx</td>
</tr>
<tr>
<td>NA</td>
<td>node6</td>
<td>box1</td>
<td>Yellow title box; Identification of new studies via databases and registers</td>
<td>Identification of new studies via databases and registers</td>
<td>Identification of new studies via databases and registers</td>
<td>newstud.html</td>
<td>xxx</td>
</tr>
<tr>
<td>database_results</td>
<td>node7</td>
<td>box2</td>
<td>Records identified from: Databases</td>
<td>Databases</td>
<td>Records identified from: Databases</td>
<td>database_results.html</td>
<td>xxx</td>
</tr>
<tr>
<td>register_results</td>
<td>NA</td>
<td>box2</td>
<td>Records identified from: Registers</td>
<td>Registers</td>
<td>NA</td>
<td>NA</td>
<td>xxx</td>
</tr>
<tr>
<td>NA</td>
<td>node16</td>
<td>box1</td>
<td>Grey title box; Identification of new studies via other methods</td>
<td>Identification of new studies via other methods</td>
<td>Grey title box; Identification of new studies via other methods</td>
<td>othstud.html</td>
<td>xxx</td>
</tr>
<tr>
<td>website_results</td>
<td>node17</td>
<td>box11</td>
<td>Records identified from: Websites</td>
<td>Websites</td>
<td>Records identified from: Websites</td>
<td>website_results.html</td>
<td>xxx</td>
</tr>
<tr>
<td>organisation_results</td>
<td>NA</td>
<td>box11</td>
<td>Records identified from: Organisations</td>
<td>Organisations</td>
<td>NA</td>
<td>NA</td>
<td>xxx</td>
</tr>
<tr>
<td>citations_results</td>
<td>node11</td>
<td>box11</td>
<td>Records identified from: Citation searching</td>
<td>Citation searching</td>
<td>NA</td>
<td>NA</td>
<td>NA</td>
</tr>
<tr>
<td>duplicates</td>
<td>node8</td>
<td>box3</td>
<td>Duplicate records</td>
<td>Duplicate records</td>
<td>NA</td>
<td>duplicates.html</td>
<td>xxx</td>
</tr>
<tr>
<td>excluded_automatic</td>
<td>NA</td>
<td>box3</td>
<td>Records marked as ineligible by automation tools</td>
<td>Records marked as ineligible by automation tools</td>
<td>NA</td>
<td>NA</td>
<td>xxx</td>
</tr>
<tr>
<td>excluded_other</td>
<td>NA</td>
<td>box3</td>
<td>Records removed for other reasons</td>
<td>Records removed for other reasons</td>
<td>NA</td>
<td>NA</td>
<td>xxx</td>
</tr>
<tr>
<td>records_screened</td>
<td>node9</td>
<td>box4</td>
<td>Records screened (databases and registers)</td>
<td>Records screened (databases and registers)</td>
<td>Records screened (databases and registers)</td>
<td>records_screened.html</td>
<td>xxx</td>
</tr>
<tr>
<td>records_excluded</td>
<td>node10</td>
<td>box5</td>
<td>Records excluded (databases and registers)</td>
<td>Records excluded (databases and registers)</td>
<td>Records excluded (databases and registers)</td>
<td>records_excluded.html</td>
<td>xxx</td>
</tr>
<tr>
<td>dbr_sought_reports</td>
<td>node11</td>
<td>box6</td>
<td>Reports sought for retrieval (databases and registers)</td>
<td>Reports sought for retrieval (databases and registers)</td>
<td>Reports sought for retrieval (databases and registers)</td>
<td>dbr_sought_reports.html</td>
<td>xxx</td>
</tr>
<tr>
<td>dbr_notretrieved_reports</td>
<td>node12</td>
<td>box7</td>
<td>Reports not retrieved (databases and registers)</td>
<td>Reports not retrieved (databases and registers)</td>
<td>Reports not retrieved (databases and registers)</td>
<td>dbr_notretrieved_reports.html</td>
<td>xxx</td>
</tr>
<tr>
<td>other_sought_reports</td>
<td>node18</td>
<td>box12</td>
<td>Reports sought for retrieval (other)</td>
<td>Reports sought for retrieval (other)</td>
<td>Reports sought for retrieval (other)</td>
<td>other_sought_reports.html</td>
<td>xxx</td>
</tr>
<tr>
<td>other_notretrieved_reports</td>
<td>node19</td>
<td>box13</td>
<td>Reports not retrieved (other)</td>
<td>Reports not retrieved (other)</td>
<td>Reports not retrieved (other)</td>
<td>other_notretrieved_reports.html</td>
<td>xxx</td>
</tr>
<tr>
<td>dbr_assessed</td>
<td>node13</td>
<td>box8</td>
<td>Reports assessed for eligibility (databases and registers)</td>
<td>Reports assessed for eligibility (databases and registers)</td>
<td>Reports assessed for eligibility (databases and registers)</td>
<td>dbr_assessed.html</td>
<td>xxx</td>
</tr>
<tr>
<td>dbr_excluded</td>
<td>node14</td>
<td>box9</td>
<td>Reports excluded (databases and registers); [separate reasons and numbers using ; e.g. Reason1, xxx; Reason2, xxx; Reason3, xxx]</td>
<td>Reports excluded (databases and registers); [separate reasons and numbers using ; e.g. Reason1, xxx; Reason2, xxx; Reason3, xxx]</td>
<td>Reports excluded (databases and registers); [separate reasons and numbers using ; e.g. Reason1, xxx; Reason2, xxx; Reason3, xxx]</td>
<td>dbrexcludedrecords.html</td>
<td>Reason1, xxx; Reason2, xxx; Reason3, xxx</td>
</tr>
<tr>
<td>other_assessed</td>
<td>node20</td>
<td>box14</td>
<td>Reports assessed for eligibility (other)</td>
<td>Reports assessed for eligibility (other)</td>
<td>Reports assessed for eligibility (other)</td>
<td>other_assessed.html</td>
<td>xxx</td>
</tr>
<tr>
<td>other_excluded</td>
<td>node21</td>
<td>box15</td>
<td>Reports excluded (other); [separate reasons and numbers using ; e.g. Reason1, xxx; Reason2, xxx; Reason3, xxx]</td>
<td>Reports excluded (other); [separate reasons and numbers using ; e.g. Reason1, xxx; Reason2, xxx; Reason3, xxx]</td>
<td>Reports excluded (other); [separate reasons and numbers using ; e.g. Reason1, xxx; Reason2, xxx; Reason3, xxx]</td>
<td>other_excluded.html</td>
<td>Reason1, xxx; Reason2, xxx; Reason3, xxx</td>
</tr>
<tr>
<td>new_studies</td>
<td>node15</td>
<td>box10</td>
<td>New studies included in review</td>
<td>New studies included in review</td>
<td>New studies included in review</td>
<td>new_studies.html</td>
<td>xxx</td>
</tr>
<tr>
<td>new_reports</td>
<td>NA</td>
<td>box10</td>
<td>Reports of new included studies</td>
<td>Reports of new included studies</td>
<td>NA</td>
<td>NA</td>
<td>xxx</td>
</tr>
<tr>
<td>total_studies</td>
<td>node22</td>
<td>box16</td>
<td>Total studies included in review</td>
<td>Total studies included in review</td>
<td>Total studies included in review</td>
<td>total_studies.html</td>
<td>xxx</td>
</tr>
<tr>
<td>total_reports</td>
<td>NA</td>
<td>box16</td>
<td>Reports of total included studies</td>
<td>Reports of total included studies</td>
<td>NA</td>
<td>NA</td>
<td>xxx</td>
</tr>
<tr>
<td>identification</td>
<td>node1</td>
<td>identification</td>
<td>Blue identification box</td>
<td>Identification</td>
<td>Blue identification box</td>
<td>identification.html</td>
<td>xxx</td>
</tr>
<tr>
<td>screening</td>
<td>node2</td>
<td>screening</td>
<td>Blue screening box</td>
<td>Screening</td>
<td>Blue screening box</td>
<td>screening.html</td>
<td>xxx</td>
</tr>
<tr>
<td>included</td>
<td>node3</td>
<td>included</td>
<td>Blue included box</td>
<td>Included</td>
<td>Blue included box</td>
<td>included.html</td>
<td>xxx</td>
</tr>
</tbody>
</table>
The function `PRISMA_read()` reads in a template CSV file containing data to display in the flow diagram, including text contents, quantitative data (i.e. the number of records in each box), tooltips (i.e. the text that appears when the mouse hovers over a box), and hyperlinks for ‘on click’ functionality. The output is a list of named objects that can be read directly into `PRISMA_flowdiagram()`.
**Code detail:** The `PRISMA_read()` function uses text matching against a set of node (or box) names to assign the uploaded data to the appropriate box in the figure, for example:
```r
# extract the count ('n') for the row whose first column matches 'previous_studies'
previous_studies <- data[grep('previous_studies', data[, 1]), ]$n
```
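Taken together, a minimal end-to-end sketch might look as follows (the filename is hypothetical, and this assumes the *PRISMA2020* package is installed):

```r
library(PRISMA2020)

# Read the completed template CSV (hypothetical filename) into a named list
data <- PRISMA_read("PRISMA.csv")

# Produce the flow diagram from the imported data and display it
plot <- PRISMA_flowdiagram(data)
plot
```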
2. Creating a static flow diagram
The function `PRISMA_flowdiagram()` produces a PRISMA 2020-style flow diagram for systematic reviews. In summary, boxes are placed at specific locations across the graph, and they are automatically connected with arrows according to a specified set of connections.
**Code detail:** `PRISMA_flowdiagram()` uses the `grViz()` function from the *DiagrammeR* package (21) to plot a DOT graphic, using `layout = neato` to explicitly place ‘nodes’ (boxes) at particular locations and `splines = 'ortho'` to specify that axis-aligned edges are drawn between nodes. The label (including data) and tooltip for each node are read in within the main body of the function by using `paste()` to combine DOT strings and R objects.
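To illustrate the approach (a simplified sketch, not the package’s actual DOT source), nodes can be pinned to explicit coordinates under `neato` and connected with orthogonal edges:

```r
library(DiagrammeR)

# Pin two boxes at explicit x,y coordinates ('!' fixes the position under neato)
# and connect them with an axis-aligned (orthogonal) arrow
grViz("
digraph prisma_sketch {
  graph [layout = neato, splines = ortho]
  node  [shape = box]
  a [label = 'Records screened', pos = '0,1!']
  b [label = 'Records excluded', pos = '2,1!']
  a -> b
}")
```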
Along with the text, data, tooltips and hyperlinks, users can choose whether to plot the ‘previous studies’ arm and the ‘other studies’ arm of the flow diagram by setting the corresponding options in the `PRISMA_flowdiagram()` function.
In addition, the font, box fill, box line colour and line arrow head/tail can be altered as desired.
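For example (argument names follow the descriptions above but may differ between package versions, so treat this as a sketch):

```r
# Plot without the 'previous studies' arm, keeping the 'other methods' arm;
# 'fontsize' stands in for the styling options described in the text
plot <- PRISMA_flowdiagram(data,
                           previous = FALSE,
                           other = TRUE,
                           fontsize = 12)
```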
**Code detail:** Since text rotation is not supported in DOT or DiagrammeR, the vertical labels for the left-hand blue bars are added via JavaScript, using the `appendContent()` and `onStaticRenderComplete()` functions from the *htmlwidgets* package (22) to append a block of JavaScript to the HTML output.
First, within the R code, a placeholder label consisting of a single whitespace character is created. The JavaScript code uses `document.getElementById()` to locate each of the blue bar nodes and replace the whitespace with the appropriate label. A CSS transform is applied to rotate the label by 90 degrees, and the correct x and y coordinates for the new label are calculated from their previous values. This means that the label location adjusts to the presence or absence of the ‘previous’ and ‘other’ arms and can withstand future changes to the diagram format.
The function also includes the ability to plot an interactive version if the `interactive` parameter is set to `TRUE`, as described in Point 3, below.
The final plot output (see Figure 1) can be saved in a range of file formats (HTML, PDF, PNG, SVG, PS or WEBP).
Figure 1. The full output plot from the `PRISMA_flowdiagram()` function.
3. Creating an interactive flow diagram
Flow diagrams can be made interactive by specifying the additional parameter `interactive = TRUE` (this defaults to `FALSE`) in the `PRISMA_flowdiagram()` function. The resulting HTML output plot includes hyperlinks on click for each box, along with the tooltips specified in the main `PRISMA_flowdiagram()` function (see above).
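For example, reusing the data object returned by `PRISMA_read()` (a sketch assuming the package is installed):

```r
# Interactive HTML version with tooltips and on-click hyperlinks taken
# from the template CSV
plot <- PRISMA_flowdiagram(data, interactive = TRUE)
plot
```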
**Code detail:** The internal function `PRISMA_interactive_()` uses the `prependContent()` and `onStaticRenderComplete()` functions from the `htmlwidgets` package (22) to prepend a block of JavaScript to the HTML output. This JavaScript identifies each node in turn using `getElementById(id)` and inserts an HTML anchor element carrying the relevant hyperlink for each node using the internal function `PRISMA_add_hyperlink()`.
4. Saving the output as a file
The `PRISMA_save()` function allows the flow diagram to be saved as a standalone HTML file (with interactivity preserved), or as a PDF, PNG, SVG, PS or WEBP file (without interactivity). This function takes the plot produced by `PRISMA_flowdiagram()` and saves it to disk in the chosen format. A default filename is provided, but this can be overridden, as can the filetype, which is otherwise inferred from the file extension.
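For example (filenames are hypothetical, and `plot` is assumed to hold the output of `PRISMA_flowdiagram()`; the filetype is inferred from the extension):

```r
# Standalone interactive HTML
PRISMA_save(plot, filename = "prisma_flowdiagram.html")

# Static PDF for a manuscript (interactivity is lost)
PRISMA_save(plot, filename = "prisma_flowdiagram.pdf")
```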
**Code detail:** When saving as HTML, the `PRISMA_save()` function uses the `saveWidget()` function from the *htmlwidgets* package (22). When saving in other formats, the internal function `PRISMA_gen_tmp_svg()` is used: this first uses `saveWidget()` to create an HTML file in a temporary directory, and then uses the XML manipulation functions from the *xml2* package (23) to step through the HTML, using XPath (24) to find the SVG embedded within it.
As JavaScript is not supported in SVG files, the xml2 package is again used to add a rotate transformation and programmatically alter the x and y coordinates to create the blue vertical labels. Following this, the temporary SVG is either copied to its final destination, or the rsvg (25) package is used to convert it into the desired output format.
**The Shiny app**
Shiny is a package within the R environment that allows users to construct standalone web-based applications based on R functions (26). The ‘app’ can be interacted with by entering data, running functions with user-specified settings to plot figures, and downloading the resultant figures in a variety of formats.
The PRISMA2020 Shiny app is available free-of-charge and can be found through the PRISMA website (http://prisma-statement.org/). The app landing page (the ‘Home’ tab) describes the app and its background, linking to the PRISMA website and PRISMA 2020 statement (8) (see Figure 2). Users can enter their data either by uploading an edited template CSV file, or by
manually entering data in the ‘Create flow diagram’ tab. Once uploaded, users proceed to this tab to see the resultant figure.
a) PRISMA flow diagram Shiny app landing page
[Image of the PRISMA flow diagram Shiny app landing page]
b) Data entry and flow diagram page
[Image of the PRISMA flow diagram Shiny app data entry page]
Figure 2. Screenshot of the PRISMA2020 Shiny app a) landing page and b) data entry and diagram visualisation page.
On the ‘Create flow diagram’ tab, users can specify whether to include the ‘previous studies’ and ‘other studies’ arms of the flow diagram using check boxes; the resulting flow diagram updates accordingly (see Figure 3). The ‘Previous studies’ and ‘Other studies’ arms can also be toggled on and off via the ‘Data upload’ tab, and the plot responds reactively.
Figure 3. The possible layouts that can be obtained via the ‘previous studies’ and ‘other studies’ arms checkboxes. a) the full plot; b) other studies omitted; c) previous studies and other studies omitted; d) previous studies omitted.
Whilst interactivity is not possible within the Shiny app itself, the app allows an interactive HTML plot to be downloaded. The links themselves can only be customised through the upload of a custom CSV file, rather than through the web interface.
**Code detail:** Shiny does not support HTML appending or prepending (adding code before or after a given element) via DiagrammeR, so a different method is used to label the blue nodes. The JavaScript file `labels.js` is included via a script tag inserted in the `<head>` area of the Shiny HTML pages. This file contains several functions; the `renderLabel()` function adds a label to the given node, in the same way as the JavaScript appended in the DiagrammeR approach.
The `createLabels()` function is called just before the plot is re-rendered; this registers a `MutationObserver` that waits for the nodes to be created. Once the nodes are visible in the DOM (Document Object Model, a programming interface for HTML documents), the `renderLabel()` function is called once for each node to add the labels.
**Interactivity**
The interactivity described here represents an additional step: cross-linking and hosting the interactive PRISMA flow diagram alongside the relevant texts and data. This requires additional effort on the part of review authors, but has clear benefits for transparency and communication.
Interactivity in the `PRISMA2020` flow diagrams is provided in two ways. Firstly, mouse-over tooltips appear as the user’s mouse moves over a particular box. These popup boxes can contain user-specified text providing more information as desired: for example, a short elaboration of the numbers or text in each box to clarify their meaning. Alternatively, tooltips can explain the information that will be reached by clicking. Secondly, the boxes can be given hyperlinks so that the user can follow a predetermined link. These links can be anchors within a document or webpage, or data files or web pages stored on external or local repositories (e.g. supplementary files on a data repository such as figshare or Zenodo).
As described above, this interactivity conforms to the principle of science communication simplicity by prioritising information provision hierarchically (i.e. showing critical information first, with further details accessible on interrogation). Tooltips provide a semi-passive means of providing information: the user is exposed to further details as they move their mouse over the plot. Hyperlinks are an active means of requesting further information.
This nested, hierarchical provision of information may be particularly useful for complex systematic review methodology.
It is worth noting that users should ensure they do not breach bibliographic database (and other) Terms of Use or inadvertently infringe copyright by linking to directly exported search results or including copyrighted data such as abstracts. Many providers are unlikely to regard this form of transparency as outside acceptable use, but we encourage users to verify this for the resources they have used. One way to avoid the problem is to provide a digitised (e.g. comma-separated text file) list of digital object identifiers (DOIs) for all records linked to in an interactive version of the flow diagram: this would contravene neither copyright nor Terms of Use, and the list could be transformed into a full set of citations using freely accessible resources such as CrossRef.
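A DOI list of this kind can be produced with base R alone; in this sketch the DOIs and filename are placeholders:

```r
# Placeholder DOIs standing in for the records linked from the flow diagram
dois <- data.frame(doi = c("10.1000/example.001",
                           "10.1000/example.002"))

# Write a plain CSV that can be shared without reproducing copyrighted records
write.csv(dois, "included_record_dois.csv", row.names = FALSE)
```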
**Case study**
We have prepared a case study that demonstrates possible interactivity that can be employed in a web-based *PRISMA2020* flow diagram (see Figure 4). The example website is available at [https://driscoll.ntu.ac.uk/prisma/](https://driscoll.ntu.ac.uk/prisma/).
The website is based on data from an ongoing systematic review into ambulance clinician responses to adult male victims of intimate partner violence (27). The site uses a flowchart generated from this software, alongside bootstrap ([https://getbootstrap.com](https://getbootstrap.com)) to make a fully interactive experience, enabling users to interrogate various aspects of the review. As the review is currently underway, the site will be updated as the review progresses.
Figure 4. Screenshot of the case study website, showing the PRISMA flow diagram (a) and the resulting page linked to by clicking on the database exclusions box (b).
Discussion
The PRISMA 2020 update represents a significant development of the PRISMA statement, increasing the usability and level of detail needed in systematic reviews. The PRISMA2020 flow diagram similarly provides a clearer and more detailed template. We have developed a user-friendly suite of tools for producing PRISMA 2020-compliant flow diagrams for users with coding experience and, importantly, for users without prior experience in coding by making use of Shiny. These free-to-use tools will make it easier to produce clear and PRISMA 2020-compliant systematic review flow diagrams. Significantly, users can also produce interactive flow diagrams for the first time, allowing readers of their reviews to smoothly and swiftly explore and navigate to further details of the methods and results of a review.
In addition, the ability to produce flow diagrams using code in a data-driven approach carries a number of benefits, including: facilitating Open Science (specifically Open Code); reducing the risk of transcription errors; and opening up possibilities for reproducible documents such as executable research articles (28) and for communicating the results of living systematic reviews (17).
We believe these tools will increase use of PRISMA flow diagrams, improve the compliance and quality of flow diagrams, and facilitate strong science communication of the methods and results of systematic reviews by making use of interactivity. We encourage the systematic review community to make use of these tools, and provide feedback to streamline and improve their usability and efficiency.
Acknowledgements
We thank Jack Wasey for his work on an R package for PRISMA 2009-compliant flow diagrams. We also thank the many users of the beta version of the R package and Shiny App for feedback instrumental in improving the user experience. This work is a project of the Evidence Synthesis Hackathon (https://www.eshackathon.org/software/PRISMA2020.html).
Author contributions
This manuscript, the R package and the Shiny App were conceived and drafted by Neal Haddaway. Substantial code improvements were made by Luke McGuinness and Chris Pritchard. Matthew Page provided conceptual feedback. All authors contributed to the editing and revision of the manuscript, and have read and agreed to publication in its final form.
Conflicts of interest
Matthew Page co-led the development of the PRISMA 2020 statement and Luke McGuinness is a co-author of the PRISMA 2020 statement, but they have no commercial interest in the use of this reporting guideline.
References
Package ‘bumphunter’
January 21, 2017
Version 1.14.0
Title Bump Hunter
Description Tools for finding bumps in genomic data
Depends R (>= 2.10), S4Vectors (>= 0.9.25), IRanges (>= 2.3.23),
GenomeInfoDb, GenomicRanges, foreach, iterators, methods,
parallel, locfit
Suggests testthat, RUnit, doParallel, org.Hs.eg.db,
TxDb.Hsapiens.UCSC.hg19.knownGene
Imports matrixStats, limma, doRNG, BiocGenerics, utils,
GenomicFeatures, AnnotationDbi
License Artistic-2.0
LazyData yes
URL https://github.com/ririzzarr/bumphunter
biocViews DNAMethylation, Epigenetics, Infrastructure,
MultipleComparison
NeedsCompilation no
Author Rafael A. Irizarry [cre, aut],
Martin Aryee [aut],
Kasper Daniel Hansen [aut],
Hector Corrada Bravo [aut],
Shan Andrews [ctb],
Andrew E. Jaffe [ctb],
Harris Jaffe [ctb],
Leonardo Collado-Torres [ctb]
Maintainer Rafael A. Irizarry <rafa@jimmy.harvard.edu>
R topics documented:
annotateNearest
annotateTranscripts
bumphunter
clusterMaker
dummyData
getSegments
locfitByCluster
annotateNearest
Description
Annotate the results of nearest with more information about the type of match.
Usage
annotateNearest(x, subject, annotate = TRUE, ...)
Arguments
x The query. An IRanges or GenomicRanges object, or a data.frame with columns for start, end, and, optionally, chr or seqnames.
subject The subject. An IRanges or GenomicRanges object, or a data.frame with columns for start, end, and, optionally, chr or seqnames.
annotate Whether to annotate the result.
... Arguments passed along to nearest.
Details
This function runs nearest and then annotates the nearest hit. Note that the nearest subject range to a given query may not be unique and we arbitrarily chose one as done by default by nearest.
Value
A data frame with columns c("distance", "subjectHits", "type", "amountOverlap", "insideDistance", "size1", "size2") unless annotate is FALSE, in which case only the first two columns are returned as an integer matrix.
dist Signed distance to the nearest target. Queries downstream from (i.e. past) their nearest target are given a negative distance.
subjectHits The index of the nearest target.
type one of c("inside", "cover", "disjoint", "overlap").
amountOverlap The width of the overlap region, if any.
insideDistance When a query is contained in its nearest target, the signed minimum of the two distances target-start-to-query-start and query-end-to-target-end. The former is taken positive, and the latter, which wins in ties, negative. dist will be 0 in this case.
size1 equals width(x).
size2 equals width(subject).
annotateTranscripts
Author(s)
Harris Jaffee, Peter Murakami and Rafael A. Irizarry
See Also
nearest, matchGenes
Examples
```r
library(GenomicRanges)
library(bumphunter)
query <- GRanges(seqnames = 'chr1', IRanges(c(1, 4, 9), c(5, 7, 10)))
subject <- GRanges('chr1', IRanges(c(2, 2, 10), c(2, 3, 12)))
nearest(query, subject)
distanceToNearest(query, subject)

## showing 'cover' and 'disjoint', and 'amountOverlap'
annotateNearest(query, subject)

## showing 'inside' and 'insideDist', and 'amountOverlap'
annotateNearest(subject, query)
annotateNearest(GRanges('chr1', IRanges(3, 3)), GRanges('chr1', IRanges(2, 5)))
annotateNearest(GRanges('chr1', IRanges(3, 4)), GRanges('chr1', IRanges(2, 5)))
annotateNearest(GRanges('chr1', IRanges(4, 4)), GRanges('chr1', IRanges(2, 5)))
```
---
**annotateTranscripts**
*Annotate transcripts*
**Description**
Annotate transcripts
**Usage**
```r
annotateTranscripts(txdb, annotationPackage = NULL, by = c("tx","gene"), codingOnly=FALSE, verbose = TRUE, requireAnnotation = FALSE)
```
**Arguments**
- `txdb`: A TxDb database object such as TxDb.Hsapiens.UCSC.hg19.knownGene
- `annotationPackage`: An annotation data package from which to obtain gene/transcript annotation. For example `org.Hs.eg.db`. If none is provided the function tries to infer it from `organism(txdb)` and if it can’t it proceeds without annotation unless `requireAnnotation = TRUE`.
- `by`: Should we create a GRanges of transcripts (tx) or genes (gene).
- `codingOnly`: Should we exclude all the non-coding transcripts.
- `verbose`: logical value. If 'TRUE', it writes out some messages indicating progress. If 'FALSE' nothing should be printed.
- `requireAnnotation`: logical value. If 'TRUE' function will stop if no annotation package is successfully loaded.
Details
This function prepares a GRanges for the matchGenes function. It adds information, in particular exon information, to each gene/transcript.
Value
A GRanges object with an attribute description set to annotatedTranscripts. The following columns are added:
- **seqinfo**: The information returned by seqinfo.
- **CSS**: The coding region start.
- **CSE**: The coding region end.
- **Tx**: The transcript ID used in TxDb.
- **Entrez**: The Entrez ID.
- **Gene**: The gene symbol.
- **Refseq**: The RefSeq annotation.
- **Nexons**: The number of exons.
- **Exons**: An IRanges with the exon information.
Author(s)
Harris Jaffee and Rafael A. Irizarry
See Also
matchGenes
Examples
```r
## Not run:
library("TxDb.Hsapiens.UCSC.hg19.knownGene")
genes <- annotateTranscripts(TxDb.Hsapiens.UCSC.hg19.knownGene)
## and to avoid guessing the annotation package:
genes <- annotateTranscripts(TxDb.Hsapiens.UCSC.hg19.knownGene, annotation="org.Hs.eg.db")
## End(Not run)
```
bumphunter Bumphunter
Description
Estimate regions for which a genomic profile deviates from its baseline value. Originally implemented to detect differentially methylated genomic regions between two populations.
Usage
## S4 method for signature 'matrix'
bumphunter(object, design, chr = NULL, pos, cluster = NULL, coef = 2,
    cutoff = NULL, pickCutoff = FALSE, pickCutoffQ = 0.99, maxGap = 500,
    nullMethod = c("permutation", "bootstrap"), smooth = FALSE,
    smoothFunction = locfitByCluster, useWeights = FALSE,
    B = ncol(permutations), permutations = NULL, verbose = TRUE, ...)
bumphunterEngine(mat, design, chr = NULL, pos, cluster = NULL, coef = 2,
    cutoff = NULL, pickCutoff = FALSE, pickCutoffQ = 0.99, maxGap = 500,
    nullMethod = c("permutation", "bootstrap"), smooth = FALSE,
    smoothFunction = locfitByCluster, useWeights = FALSE,
    B = ncol(permutations), permutations = NULL, verbose = TRUE, ...)
## S3 method for class 'bumps'
print(x, ...)
Arguments
- **object**: An object of class matrix.
- **x**: An object of class bumps.
- **mat**: A matrix with rows representing genomic locations and columns representing samples.
- **design**: Design matrix with rows representing samples and columns representing covariates. Regression is applied to each row of mat.
- **chr**: A character vector with the chromosomes of each location.
- **pos**: A numeric vector representing the chromosomal position.
- **cluster**: The clusters of locations that are to be analyzed together. In the case of microarrays, the clusters are many times supplied by the manufacturer. If not available the function `clusterMaker` can be used to cluster nearby locations.
- **coef**: An integer denoting the column of the design matrix containing the covariate of interest. The hunt for bumps will only be done for the estimate of this coefficient.
- **cutoff**: A numeric value. Values of the estimate of the genomic profile above the cutoff or below the negative of the cutoff will be used as candidate regions. It is possible to give two separate values (upper and lower bounds). If one value is given, the lower bound is minus the value.
- **pickCutoff**: Should bumphunter attempt to pick a cutoff using the permutation distribution?
- **pickCutoffQ**: The quantile used for picking the cutoff using the permutation distribution.
- **maxGap**: If cluster is not provided this maximum location gap will be used to define cluster via the `clusterMaker` function.
- **nullMethod**: Method used to generate null candidate regions, must be one of ‘bootstrap’ or ‘permutation’ (defaults to ‘permutation’). However, if covariates in addition to the outcome of interest are included in the design matrix (ncol(design)>2), the ‘permutation’ approach is not recommended. See vignette and original paper for more information.
- **smooth**: A logical value. If TRUE the estimated profile will be smoothed with the smoother defined by `smoothFunction`.
- **smoothFunction**: A function to be used for smoothing the estimate of the genomic profile. Two functions are provided by the package: `loessByCluster` and `runmedByCluster`.
- **useWeights**: A logical value. If TRUE then the standard errors of the point-wise estimates of the profile function will be used as weights in the loess smoother `loessByCluster`. If the `runmedByCluster` smoother is used this argument is ignored.
- **B**: An integer denoting the number of resamples to use when computing null distributions. This defaults to 0. If `permutations` is supplied that defines the number of permutations/bootstrap and B is ignored.
- **permutations**: is a matrix with columns providing indexes to be used to scramble the data and create a null distribution when nullMethod is set to permutations. If the bootstrap approach is used this argument is ignored. If this matrix is not supplied and B>0 then these indexes are created using the function `sample`.
- **verbose**: logical value. If TRUE, it writes out some messages indicating progress. If FALSE nothing should be printed.
- **...**: further arguments to be passed to the smoother functions.
Details
This function performs the bump hunting approach described by Jaffe et al. International Journal of Epidemiology (2012). The main output is a table of candidate regions with permutation or bootstrap-based family-wide error rates (FWER) and p-values assigned.
The general idea is that for each genomic location we have a value for several individuals. We also have covariates for each individual and perform regression. This gives us one estimate of the coefficient of interest (a common example is case versus control). These estimates are then (optionally) smoothed. The smoothing occurs in clusters of locations that are ‘close enough’. This gives us an estimate of a genomic profile that is 0 when uninteresting. We then take values above the cutoff (in absolute value) as candidate regions. Permutations can then be performed to create null distributions for the candidate regions.
The simplest way to use permutations or bootstraps to create a null distribution is to set B. If the number of samples is large this can be set to a large number, such as 1000. Note that this will be slow and we have therefore provided parallelization capabilities. In cases where the user wants to define the permutations or bootstraps, for example cases in which all possible permutations/bootstraps can be enumerated, these can be supplied via the permutations argument.
Uncertainty is assessed via permutations or bootstraps. Each of the B permutations/bootstraps will produce an estimated ‘null profile’ from which we can define ‘null candidate regions’. For each observed candidate region we determine how many null regions are ‘more extreme’ (longer and higher average value). The ‘p.value’ is the percent of candidate regions obtained from the permutations/bootstraps that are as extreme as the observed region. These p-values should be interpreted with care as the theoretical properties are not well understood. The ‘fwer’ is the proportion of permutations/bootstraps that had at least one region as extreme as the observed region. We compute p.values and FWER for the area of the regions (as opposed to length and value as a pair) as well. Note that for cases with more than one covariate the permutation approach is not generally recommended; the nullMethod argument will coerce to ‘bootstrap’ in this scenario. See vignette and original paper for more information.
Parallelization is implemented through the foreach package.
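The permutations argument described above can be illustrated with a small sketch (not part of the original manual; the one-permutation-per-column layout follows the argument description):

```r
## Sketch: building an explicit 'permutations' matrix for bumphunter().
## Assumption (from the argument description above): each column holds
## one vector of scrambled sample indexes.
n_samples <- 4
set.seed(1)
perms <- replicate(5, sample(n_samples))  # 4 x 5 matrix, one permutation per column
## Each column is a permutation of 1:4:
all(apply(perms, 2, sort) == 1:4)  # TRUE
## It would then be passed as: bumphunter(..., permutations = perms),
## in which case B becomes ncol(perms).
```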
Value
An object of class bumps with the following components:
- tab: The table with candidate regions and annotation for these.
- coef: The single loci coefficients.
- fitted: The estimated genomic profile used to determine the regions.
- pvaluesMarginal: marginal p-value for each genomic location.
- null: The null distribution.
- algorithm: details on the algorithm.
Author(s)
Rafael A. Irizarry, Martin J. Aryee, Kasper D. Hansen, and Shan Andrews.
References
Jaffe AE, Murakami P, Lee H, Leek JT, Fallin MD, Feinberg AP, Irizarry RA. Bump hunting to identify differentially methylated regions in epigenetic epidemiology studies. International Journal of Epidemiology 41(1):200-209, 2012.
Examples
```r
dat <- dummyData()
# Enable parallelization
require(doParallel)
registerDoParallel(cores = 2)
# Find bumps
bumps <- bumphunter(dat$mat, design=dat$design, chr=dat$chr, pos=dat$pos,
                    cluster=dat$cluster, coef=2, cutoff=0.28, nullMethod="bootstrap",
                    smooth=TRUE, B=250, verbose=TRUE,
                    smoothFunction=loessByCluster)
bumps
# cleanup, for Windows
bumphunter:::foreachCleanup()
```
clusterMaker
Make clusters of genomic locations based on distance
Description
Genomic locations are grouped into clusters based on distance: locations that are close to each other
are assigned to the same cluster. The operation is performed on each chromosome independently.
Usage
clusterMaker(chr, pos, assumeSorted = FALSE, maxGap = 300)
boundedClusterMaker(chr, pos, assumeSorted = FALSE,
maxClusterWidth = 1500, maxGap = 500)
Arguments
chr A vector representing chromosomes. This is usually a character vector, but may be a factor or an integer.
pos A numeric vector with genomic locations.
assumeSorted This is a statement that the function may assume that the vector pos is sorted (within each chr). Allowing the function to make this assumption may increase the speed of the function slightly.
maxGap An integer. Genomic locations within maxGap from each other are placed into the same cluster.
maxClusterWidth An integer. A cluster larger than this width is broken into subclusters.
Details
The main purpose of the function is to group genomic locations into clusters that are close enough to perform operations such as smoothing. A genomic location is a combination of a chromosome (chr) and an integer position (pos). Specifically, genomic intervals are not handled by this function. Each chromosome is clustered independently from the others. Within each chromosome, clusters are formed in such a way that two positions belong to the same cluster if they are within maxGap of each other.
Value
A vector of integers to be interpreted as IDs for the clusters, such that two genomic positions with the same cluster ID are in the same cluster. Each genomic position receives one integer ID.
Author(s)
Rafael A. Irizarry, Hector Corrada Bravo
Examples
```r
N <- 1000
chr <- sample(1:5, N, replace=TRUE)
pos <- round(runif(N, 1, 10^5))
o <- order(chr, pos)
chr <- chr[o]
pos <- pos[o]
regionID <- clusterMaker(chr, pos)
regionID2 <- boundedClusterMaker(chr, pos)
```
---
dummyData
*Generate dummy data for use with bumphunter functions*
Description
This function generates a small dummy dataset representing samples from two different groups (cases and controls) that is used in bumphunter examples.
Usage
```r
dummyData(n1 = 5, n2 = 5, sd = 0.2, l = 100, spacing = 100,
clusterSpacing=1e5, numClusters=5)
```
Arguments
- **n1**: Number of samples in group 1 (controls)
- **n2**: Number of samples in group 2 (cases)
- **sd**: Within group standard deviation to be used when simulating data
- **l**: The number of genomic locations for which to simulate data
- **spacing**: The average spacing between locations. The actual locations have a random component so the actual spacing will be non-uniform
- **clusterSpacing**: The spacing between clusters. (Specifically, the spacing between the first location in each cluster.)
- **numClusters**: Divide the genomic locations into this number of clusters, each of which will contain locations spaced spacing bp apart.
getSegments
Value
A list containing data that can be used with various bumphunter functions.
mat A simulated data matrix with rows representing genomic locations and columns representing samples.
design Design matrix with rows representing samples and columns representing covariates.
chr A character vector with the chromosomes of each location.
pos A numeric vector representing the chromosomal position.
cluster A vector representing the cluster of each location.
n1 Number of samples in group 1
n2 Number of samples in group 2
Author(s)
Martin J. Aryee
Examples
dat <- dummyData()
names(dat)
head(dat$pos)
getSegments Segment a vector into positive, zero, and negative regions
Description
Given two cutoffs, L and U, this function divides a numerical vector into contiguous parts that are above U, between L and U, and below L.
Usage
getSegments(x, f = NULL, cutoff = quantile(abs(x), 0.99),
assumeSorted = FALSE, verbose = FALSE)
Arguments
x A numeric vector.
f A factor used to pre-divide x into pieces. Each piece is then segmented based on the cutoff. Setting this to NULL says that there is no pre-division. Often, `clusterMaker` is used to define this factor.
cutoff a numeric vector of length either 1 or 2. If length is 1, U (see details) will be cutoff and L will be -cutoff. Otherwise it specifies L and U. The function will furthermore always use the minimum of cutoff for L and the maximum for U.
assumeSorted This is a statement that the function may assume that the vector f is sorted. Allowing the function to make this assumption may increase the speed of the function slightly.
verbose Should the function be verbose?
getSegments
Details
This function is used to find the indexes of the ‘bumps’ in functions such as bumphunter.
x is a numeric vector, which is converted into three levels depending on whether x>=U (‘up’), L<x<U (‘zero’) or x<=L (‘down’), with L and U coming from cutoff. We assume that adjacent entries in x are next to each other in some sense. Segments, consisting of consecutive indices into x (i.e. values between 1 and length(x)), are formed such that all indices in the same segment belong to the same level of f and have the same discretized value of x.
In other words, we can use getSegments to find runs of x belonging to the same level of f and with all of the values of x either above U, between L and U, or below L.
Value
A list with three components, each a list of indices. Each component of these lists represents a segment and this segment is represented by a vector of indices into the original vectors x and f.
upIndex: a list with each entry an index of contiguous values in the same segment. These segments have values of x above U.
dnIndex: a list with each entry an index of contiguous values in the same segment. These segments have values of x below L.
zeroIndex: a list with each entry an index of contiguous values in the same segment. These segments have values of x between L and U.
Author(s)
Rafael A Irizarry and Kasper Daniel Hansen
See Also
clusterMaker
Examples
```r
x <- 1:100
y <- sin(8*pi*x/100)
chr <- rep(1, length(x))
indexes <- getSegments(y, chr, cutoff=0.8)
plot(x, y, type="n")
for(i in 1:3){
ind <- indexes[[i]]
for(j in seq(along=ind)) {
k <- ind[[j]]
text(x[k], y[k], j, col=i)
}
}
abline(h=c(-0.8,0.8))
```
locfitByCluster
Apply local regression smoothing to values within each spatially-defined cluster.
Description
Local regression smoothing with a Gaussian kernel is applied independently to each cluster of genomic locations. Locations within the same cluster are close together to warrant smoothing across neighbouring locations.
Usage
locfitByCluster(y, x = NULL, cluster, weights = NULL, minNum = 7, bpSpan = 1000, minInSpan = 0, verbose = TRUE)
Arguments
y A vector or matrix of values to be smoothed. If a matrix, each column represents a sample.
x The genomic location of the values in y
cluster A vector indicating clusters of locations. A cluster is typically defined as a region that is small enough that it makes sense to smooth across neighbouring locations. Smoothing will only be applied within a cluster, not across locations from different clusters.
weights weights used by the locfit smoother
minNum Clusters with fewer than minNum locations will not be smoothed
bpSpan The span used when locfit smoothing. (Expressed in base pairs.)
minInSpan Only smooth the region if there are at least this many locations in the span.
verbose Boolean. Should progress be reported?
Details
This function is typically called by smoother, which is in turn called by bumphunter.
Value
fitted The smoothed data values
smoothed A boolean vector indicating whether a given position was smoothed
smoother always set to ‘locfit’.
Author(s)
Rafael A. Irizarry and Kasper D. Hansen
See Also
smoother, runmedByCluster, loessByCluster
Examples
dat <- dummyData()
smoothed <- locfitByCluster(y=dat$mat[,1], cluster=dat$cluster, bpSpan = 1000,
    minNum=7, minInSpan=5)
---
loessByCluster
Apply loess smoothing to values within each spatially-defined cluster
Description
Loess smoothing is applied independently to each cluster of genomic locations. Locations within the same cluster are close together to warrant smoothing across neighbouring locations.
Usage
loessByCluster(y, x = NULL, cluster, weights = NULL, bpSpan = 1000,
minNum = 7, minInSpan = 5, maxSpan = 1, verbose = TRUE)
Arguments
y A vector or matrix of values to be smoothed. If a matrix, each column represents
a sample.
x The genomic location of the values in y
cluster A vector indicating clusters of locations. A cluster is typically defined as a
region that is small enough that it makes sense to smooth across neighbouring
locations. Smoothing will only be applied within a cluster, not across locations
from different clusters.
weights weights used by the loess smoother
bpSpan The span used when loess smoothing. (Expressed in base pairs.)
minNum Clusters with fewer than minNum locations will not be smoothed
minInSpan Only smooth the region if there are at least this many locations in the span.
maxSpan The maximum span. Spans greater than this value will be capped.
verbose Boolean. Should progress be reported?
Details
This function is typically called by smoother, which is in turn called by bumphunter.
Value
fitted The smoothed data values
smoothed A boolean vector indicating whether a given position was smoothed
smoother always set to ‘loess’.
Author(s)
Rafael A. Irizarry
**See Also**
smoother, runmedByCluster, locfitByCluster
**Examples**
dat <- dummyData()
smoothed <- loessByCluster(y=dat$mat[,1], cluster=dat$cluster, bpSpan = 1000,
    minNum=7, minInSpan=5, maxSpan=1)
---
**matchGenes**
*Find and annotate closest genes to genomic regions*
**Description**
Find and annotate closest genes to genomic regions
**Usage**
matchGenes(x, subject, type = c("any", "fiveprime"), promoterDist = 2500, skipExons = FALSE, verbose = TRUE)
**Arguments**
- **x**: An IRanges or GenomicRanges object, or a data.frame with columns for start, end, and, optionally, chr or seqnames.
- **subject**: An GenomicRanges object containing transcripts or genes that have been annotated by the function annotateTranscripts.
- **promoterDist**: Anything within this distance to the transcription start site (TSS) will be considered a promoter.
- **type**: Should the distance be computed to any part of the transcript or the five prime end.
- **skipExons**: Should the annotation of exons be skipped. Skipping this part makes the code slightly faster.
- **verbose**: logical value. If ‘TRUE’, it writes out some messages indicating progress. If ‘FALSE’ nothing should be printed.
**Details**
This function runs nearest and then annotates the relationship between the region and the transcript/gene that is closest. Many details are provided on this relationship as described in the next section.
**Value**
A data frame with one row for each query and with columns c("name", "annotation", "description", "region", "distance", "subregion", "insideDistance", "exonnumber", "nexons", "UTR", "strand", "geneL", "codingL", "Entrez", "subjectHits"). The first column is the _gene_ nearest the query, by virtue of it owning the transcript determined (or chosen by nearest) to be nearest the query. Note that the nearest gene to a given query may not be unique and we arbitrarily chose one as done by default by nearest.
The “distance” column is the distance from the query to the 5’ end of the nearest transcript, so may be different from the distance computed by nearest to that transcript, as a range.
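The manual provides no Examples section for matchGenes; the following usage sketch (not from the original manual) follows the pattern of the annotateTranscripts example. The query region below is hypothetical, and the TxDb.Hsapiens.UCSC.hg19.knownGene annotation package is assumed to be installed.

```r
## Not run:
library("TxDb.Hsapiens.UCSC.hg19.knownGene")
## First annotate the transcripts, then match a region against them.
genes <- annotateTranscripts(TxDb.Hsapiens.UCSC.hg19.knownGene)
## A hypothetical query region on chr1:
regions <- GRanges("chr1", IRanges(1000000, 1001000))
matchGenes(regions, genes)
## End(Not run)
```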
regionFinder
Find non-zero regions in vector
Description
Find regions for which a numeric vector is above (or below) predefined thresholds.
Usage
regionFinder(x, chr, pos, cluster = NULL, y = x, summary = mean,
ind = seq(along = x), order = TRUE, oneTable = TRUE,
maxGap = 300, cutoff=quantile(abs(x), 0.99),
assumeSorted = FALSE, verbose = TRUE)
regionFinder
Arguments
- **x**: A numeric vector.
- **chr**: A character vector with the chromosomes of each location.
- **pos**: A numeric vector representing the genomic location.
- **cluster**: The clusters of locations that are to be analyzed together. In the case of microarrays, the cluster is many times supplied by the manufacturer. If not available the function `clusterMaker` can be used.
- **y**: A numeric vector with same length as `x` containing values to be averaged for the region summary. See details for more.
- **summary**: The function to be used to construct a summary of the `y` values for each region.
- **ind**: an optional vector specifying a subset of observations to be used when finding regions.
- **order**: if `TRUE` then the resulting tables are ordered based on area of each region. Area is defined as the absolute value of the summarized `y` times the number of features in the regions.
- **oneTable**: if `TRUE` only one results table is returned. Otherwise, two tables are returned: one for the regions with positive values and one for the negative values.
- **maxGap**: If cluster is not provided this number will be used to define clusters via the `clusterMaker` function.
- **cutoff**: This argument is passed to `getSegments`. It represents the upper (and optionally the lower) cutoff for `x`.
- **assumeSorted**: This argument is passed to `getSegments` and `clusterMaker`.
- **verbose**: Should the function be verbose?
Details
This function is used in the final steps of `bumphunter`. While `bumphunter` does many things, such as regression and permutation, `regionFinder` simply finds regions that are above a certain threshold (using `getSegments`) and summarizes them. The regions are found based on `x` and the summarized values are based on `y` (which by default equals `x`). The summary is used for the ranking so one might, for example, use t-tests to find regions but summarize using effect sizes.
Value
If `oneTable` is `FALSE` it returns two tables otherwise it returns one table. The rows of the table are regions. Information on the regions is included in the columns.
Author(s)
Rafael A Irizarry
See Also
`bumphunter` for the main usage of this function, `clusterMaker` for the typical input to the `cluster` argument and `getSegments` for a function used within `regionFinder`.
Examples
```r
x <- seq(1:1000)
y <- sin(8*pi*x/1000) + rnorm(1000, 0, 0.2)
chr <- rep(c(1,2), each=length(x)/2)
tab <- regionFinder(y, chr, x, cutoff=0.8)
print(tab[tab$L>10,])
```
---
runmedByCluster
**Apply running median smoothing to values within each spatially-defined cluster**
Description
Running median smoothing is applied independently to each cluster of genomic locations. Locations within the same cluster are close together to warrant smoothing across neighbouring locations.
Usage
```r
runmedByCluster(y, x = NULL, cluster, weights = NULL, k = 5, endrule = "constant", verbose = TRUE)
```
Arguments
- `y` A vector or matrix of values to be smoothed. If a matrix, each column represents a sample.
- `x` The genomic location of the values in `y`.
- `cluster` A vector indicating clusters of locations. A cluster is typically defined as a region that is small enough that it makes sense to smooth across neighbouring locations. Smoothing will only be applied within a cluster, not across locations from different clusters.
- `weights` weights used by the smoother.
- `k` integer width of median window; must be odd. See `runmed`.
- `endrule` character string indicating how the values at the beginning and the end (of the data) should be treated. See `runmed`.
- `verbose` Boolean. Should progress be reported?
Details
This function is typically called by `smoother`, which is in turn called by `bumphunter`.
Value
- `fitted` The smoothed data values
- `smoothed` A boolean vector indicating whether a given position was smoothed
- `spans` The span used by the loess smoother. One per cluster.
- `clusterL` The number of locations in each cluster.
- `smoother` always set to ‘runmed’.
Author(s)
Rafael A. Irizarry
See Also
`smoother`, `loessByCluster`. Also see `runmed`.
Examples
dat <- dummyData()
smoothed <- runmedByCluster(y=dat$mat[,1], cluster=dat$cluster,
k=5, endrule="constant")
---
**smoother**
**Smooth genomic profiles**
Description
Apply smoothing to values typically representing the difference between two populations across genomic regions.
Usage
`smoother(y, x = NULL, cluster, weights = NULL, smoothFunction,
verbose = TRUE, ...)`
Arguments
- `y` A vector or matrix of values to be smoothed. If a matrix, each column represents a sample.
- `x` The genomic location of the values in `y`
- `cluster` A vector indicating clusters of locations. A cluster is typically defined as a region that is small enough that it makes sense to smooth across neighbouring locations. Smoothing will only be applied within a cluster, not across locations from different clusters.
- `weights` weights used by the smoother.
- `smoothFunction` A function to be used for smoothing the estimate of the genomic profile. Two functions are provided by the package: `loessByCluster` and `runmedByCluster`.
- `verbose` Boolean. Should progress be reported?
- `...` Further arguments to be passed to `smoothFunction`.
Details
This function is typically called by `bumphunter` prior to identifying candidate bump regions. Smoothing is carried out within regions defined by the `cluster` argument.
Value
- **fitted**: The smoothed data values
- **smoothed**: A boolean vector indicating whether a given position was smoothed
- **spans**: The span used by the loess smoother. One per cluster.
- **clusterL**: The number of locations in each cluster.
- **smoother**: The name of the smoother used
Author(s)
Rafael A. Irizarry and Martin J. Aryee
See Also
- loessByCluster, runmedByCluster
Examples
```r
dat <- dummyData()
# Enable parallelization
require(doParallel)
registerDoParallel(cores = 2)
## loessByCluster
smoothed <- smoother(y=dat$mat[,1], cluster=dat$cluster, smoothFunction=loessByCluster,
bpSpan = 1000, minNum=7, minInSpan=5, maxSpan=1)
## runmedByCluster
smoothed <- smoother(y=dat$mat[,1], cluster=dat$cluster, smoothFunction=runmedByCluster,
k=5, endrule="constant")
# cleanup, for Windows
bumphunter:::foreachCleanup()
```
TT Example data
Description
Example data
Format
- **txdb**: Has a TxDb example.
- **org**: Has an Org DB example.
- **transcripts**: Has example transcripts output.
Index
*Topic datasets
    TT
annotateNearest
annotateTranscripts
boundedClusterMaker (clusterMaker)
bumphunter
bumphunter,matrix-method (bumphunter)
bumphunter-methods (bumphunter)
bumphunterEngine (bumphunter)
clusterMaker
dummyData
getSegments
locfitByCluster
loessByCluster
matchGenes
nearest
print.bumps (bumphunter)
regionFinder
runmed
runmedByCluster
seqinfo
smoother
TT
2024-12-06
|
2024-12-06
|
bb51cbc72b768aaeeaf3216b331e4ef326fe6ffd
|
Developing a New Use-Case Approach to Improve Analysis of SRS for a System
Soumen Biswas, Nirmalya Mukhopadhyay
Dept. of CSE, B.A.C.E.T., Jamshedpur, India
Abstract: The System Requirement Specification is the first, foremost and most important artefact of the Software Development Life Cycle (often called SDLC). IEEE has set up the standards for writing a system requirement specification (known as SRS or SyRS). Developing an SRS includes the identification, organization, presentation, and modification of the requirements. IEEE also addressed the conditions for incorporating operational concepts, design constraints and design configuration requirements into the specification. In IEEE Standard 1233, 1998 Edition, certain formats have been designed for writing an SRS for a system, where Use-Case diagrams are used. The Use-Case diagram is the most significant modelling tool in UML. In this paper we present a new proposal for constructing Use-Cases in a different way, to enhance the documentation of the SRS. We hope this proposed construction will improve the specification by making it synchronized, indexed and user friendly.
Keywords: SDLC, SRS, Use-Case, UML, Functional Requirements
I. INTRODUCTION
The SRS document described in IEEE Std. 1233-1998 is divided into a number of recommended sections to ensure that information relevant to stakeholders is captured. This specification document serves as a reference point during the development process and captures requirements that need to be met by the software product. Basic issues addressed in the SRS include functionality, external interfaces, performance requirements, attributes and design constraints. It also serves as a contract between the supplier and customer with respect to what the final product would provide and help achieve. Although IEEE Std. 1233-1998 specifies the structure, it does not choose one representation of requirements over another, nor does it specify what techniques should be used to populate the various sections. The Use-Case approach has become the de facto standard for capturing functional requirements. Many of the sections of the SRS document contain information that would otherwise be collected in UML Use-Case artifacts. A significant amount of effort could be spared if the description of functionality captured in these Use-Case artifacts is used to populate the relevant SRS sections. For large projects, the number of Use-Cases and the amount of related documentation could quickly become unwieldy without an organization scheme. It is possible to systematically create and populate several of the SRS document sections if Use-Cases are documented using appropriate organization schemes. The advantage of systematic translation is avoiding duplicative specification effort. After all, if time and effort have been expended creating the Use-Case artifacts, it makes sense to reuse the results of those efforts when writing the SRS document. It would also lessen the possibility of introducing the inconsistencies that arise during duplication. Presently, there are no concrete techniques to identify and link Use-Cases to sections of the SRS.
This process is at best ad hoc, which generates inconsistencies in the final specification document. In this paper we show a systematic way to leverage our modified Use-Cases to populate the SRS document. We do this with the help of various schemes for managing and organizing the Use-Cases, and by linking specific Use-Case types to related SRS sections. Our method provides additional support to analysts in preparing a standards-compliant SRS document by avoiding redundant specification effort and through a reduction in cognitive load. We demonstrate how this approach is used to develop a standards-compliant SRS document with the help of a case study.
II. SYSTEM REQUIREMENT SPECIFICATION
A software requirements specification describes the essential behavior of a software product from a user's point of view. Success in software requirements, and hence success in software development, depends on getting the voice of the customer as close as possible to the ear of the developer (Wiegers, 1999).
Fig. 1 Context for developing an SRS
The purposes of the SRS are to:
- **Establish the basis for agreement between the customers and the suppliers on what the software product is to do.** The complete description of the functions to be performed by the software specified in the SRS will assist the potential user to determine if the software specified meets their needs or how the software must be modified to meet their needs.
- **Provide a basis for developing the software design.** The SRS is the most important document of reference in developing a design.
- **Reduce the development effort.** The preparation of the SRS forces the various concerned groups in the customer’s organisation to thoroughly consider all of the requirements before design work begins. A complete and correct SRS reduces effort wasted on redesign, recoding and retesting. Careful review of the requirements in the SRS can reveal omissions, misunderstandings and inconsistencies early in the development cycle when these problems are easier to correct.
- **Provide a basis for estimating costs and schedules.** The description of the product to be developed as given in the SRS is a realistic basis for estimating project costs and can be used to obtain approval for bids or price estimates.
- **Provide a baseline for validation and verification.** Organisations can develop their test documentation much more productively from a good SRS. As a part of the development contract, the SRS provides a baseline against which compliance can be measured.
- **Facilitate transfer.** The SRS makes it easier to transfer the software product to new users or new machines. Customers thus find it easier to transfer the software to other parts of their organisation and suppliers find it easier to transfer it to new customers.
- **Serve as a basis for enhancement.** Because the SRS discusses the product but not the project that developed it, the SRS serves as a basis for later enhancement of the finished product. The SRS may need to be altered, but it does provide a foundation for continued product evaluation.
III. ROLE OF SRS IN SDLC
SDLC is a process followed for a software project, within a software organization. It consists of a detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The life cycle defines a methodology for improving the quality of software and the overall development process.
Stage 1: System Analysis: System (requirement) analysis is the most important and fundamental stage in SDLC. It is performed by the senior members of the team with inputs from the customer, the sales department, market surveyors and domain experts in the industry. This information is then used to plan the basic project approach and to conduct product feasibility study in the economical, operational, and technical areas.
Planning for the quality assurance requirements and identification of the risks associated with the project is also done in the planning stage. The outcome of the technical feasibility study is to define the various technical approaches that can be followed to implement the project successfully with minimum risks.
Stage 2: System Requirements Specification: Once the requirement analysis is done the next step is to clearly define and document the product requirements and get them approved from the customer or the market analysts. This is done through ‘SRS’ – Software Requirement Specification document which consists of all the product requirements to be designed and developed during the project life cycle.
Stage 3: System Design: The SRS is the reference for product architects to come out with the best architecture for the product to be developed. Based on the requirements specified in the SRS, usually more than one design approach for the product architecture is proposed and documented in a DDS - Design Document Specification. This DDS is reviewed by all the important stakeholders, and based on various parameters such as risk assessment, product robustness, design modularity, and budget and time constraints, the best design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along with its communication and data flow representation with the external and third party modules (if any). The internal design of all the modules of the proposed architecture should be clearly defined, with all of the details captured in the DDS.
Stage 4: System Coding: In this stage of SDLC the actual development starts and the product is built. The programming code is generated as per DDS during this stage. If the design is performed in a detailed and organized manner, code generation can be accomplished without much hassle.
Developers have to follow the coding guidelines defined by their organization and programming tools like compilers, interpreters, debuggers etc are used to generate the code. Different high level programming languages such as C, C++, Pascal, Java, and PHP are used for coding. The programming language is chosen with respect to the type of software being developed.
Stage 5: System Testing: This stage is usually a subset of all the stages, as in modern SDLC models the testing activities are involved in all the stages of the SDLC. However, this stage also refers to the phase where product defects are reported, tracked, fixed and retested, until the product reaches the quality standards defined in the SRS.
Stage 6: System Implementation: Once the product is tested and ready to be deployed it is released formally in the appropriate market. Sometimes product deployment happens in stages as per the organization’s business strategy. The product may first be released in a limited segment and tested in the real business environment (UAT - User Acceptance Testing).
Then based on the feedback, the product may be released as it is or with suggested enhancements in the targeting market segment. After the product is released in the market, its maintenance is done for the existing customer base.
IV. USE-CASE: A MODELLING TOOL
In 1986 Ivar Jacobson first formulated textual, structural and visual modeling techniques for specifying Use-Cases. In 1992 his co-authored book helped to popularize the technique for capturing functional requirements, especially in software development. Originally he used the terms usage scenarios and usage case – the latter being a direct translation of his Swedish term “användningsfall” – but found that neither of these terms sounded natural in English, and eventually he settled on Use-Case. Since then, others have contributed to improving this technique, notably including Alistair Cockburn, Larry Constantine, Dean Leffingwell, Kurt Bittner and Gunnar Overgaard. In 2011, Ivar Jacobson published an update to Use-Cases called Use-Case 2.0, the intention being to incorporate many of his practical experiences of applying Use-Cases since its original inception.
The Use-case modelling is a hallmark of the Unified Modelling Language (Rumbaugh, Jacobson et al., 1999) referred to as UML. A Use-Case is a specification of sequences of actions, including variant sequences and error sequences, that a system, subsystem, or class can perform by interacting with outside actors (Rumbaugh, Jacobson et al., 1999).
A Use-Case describes the interactions between one or more Actors and the system, in order to provide an observable result of value for the initiating actor. The Use-Cases partition the system behavior into transactions, such that each transaction performs some useful action from the user’s point of view. Each transaction may involve either a single message or a multiple message exchange between the user and the system to complete itself.
The Use-Case model represents a function or process model of a system. The functionality of a system is defined by different Use-Cases, each of which represents a specific goal (to obtain the observable result of value) for a particular actor. A Use-Case diagram is a visual representation of the relationships between actors and Use-Cases together that documents the system’s intended behaviour. An Admission Use-Case diagram is shown below.
Fig. 4 Student Admission Use-Case Diagram
Arrows and lines are drawn between actors and Use-Cases, and between Use-Cases, to show their relationships. The default relationship between an actor and a Use-Case is the "communication" relationship, denoted by a line with a small circle. For example, the actor (Student) communicates with the Use-Case (Course Selection).
A Use-Case describes the interactions between the actor(s) and the system in the form of a dialog between the actor(s) and the system, structured as follows:
1. The actor «does something»
2. The system «does something in response»
3. The actor «does something else»
4. The system «does something else in response»
Each dialog of this form is called a “Flow of Events”. Each Use-Case will contain several flows, including one “Basic Flow of Events” and several “Alternative Flows”.
The Basic Flow of Events specifies the interactions between the actor(s) and the system for the ideal case, where everything goes as planned, and the actor’s goal (the observable result of value) is met. The basic flow represents the main capability provided by the system for this Use-Case. Alternative Flows specify alternative interactions associated with the same goal.
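The actor/system dialog above can be read as an alternating sequence of steps. A minimal Python sketch of this reading, in which the function name and the step texts are illustrative assumptions, not part of the UML notation:

```python
# Model a Use-Case flow of events as alternating actor/system steps.
# The step texts below are illustrative assumptions only.

def run_flow(steps):
    """Alternate actor actions and system responses, recording the trace."""
    trace = []
    for actor_action, system_response in steps:
        trace.append(("actor", actor_action))      # the actor does something
        trace.append(("system", system_response))  # the system responds
    return trace

# A basic flow of events for the admission example used in this paper
basic_flow = [
    ("selects a degree course", "displays the registration form"),
    ("submits the registration form", "generates a registration number"),
]
```

An alternative flow would simply be a second such list of steps, branching from a given point of the basic flow.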
A. Flow of Events – Structure:
The two main parts of the flow of events are basic flow of events and alternative flows of events. The basic flow of events should cover what "normally" happens when the Use-Case is performed. The alternative flows of events cover behavior of optional or exceptional character in relation to the normal behavior, and also variations of the normal behavior. The straight arrow in the following figure represents the basic flow of events, and the curves represent alternative paths in relation to the normal. Some alternative paths return to the basic flow of events, whereas others end the Use-Case.
Fig. 5 Typical Structure of a Use-Case Flow of Events
To clarify where an alternative flow of events fits in the structure, it is essential to describe the following for each detour from the basic flow of events:
- Where the alternative flow can be inserted in the basic flow of events;
- The condition that needs to be fulfilled for the alternative behaviour to start;
- How and where the basic flow of events is resumed, or how the Use-Case ends.
It might be tempting, if the alternative flow of events is very simple, to just describe it in the basic flow of events section (using some informal "if-then-else" construct). This should be avoided. Too many alternatives will make the normal behavior difficult to see. Also, including alternative paths in the basic flow of events section will make the text more pseudo-code-like and harder to read.
B. Preconditions and Post-conditions:
A precondition is the state of the system and its surroundings that are required before the Use-Case can be started. Post-Conditions are the states the system can be in after the Use-Case has ended. It can be helpful to use the concepts of precondition and post-condition to clarify how the flow of events starts and ends. The following figure shows an illustration of preconditions and resulting post-conditions.

Consider the above figure. A precondition for the Use-Case Student Admission: the student selects a degree course and enters the Registration Form.
A post-condition for the Use-Case Student Registration: after successful submission of the registration form, the student receives a system generated registration number.
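One way to read pre- and post-conditions operationally is as checks on system state before and after the flow of events. A hedged Python sketch of the admission example, where the state dictionary, its keys and the registration-number format are illustrative assumptions:

```python
# Pre- and post-conditions of the admission Use-Case as runtime checks.
# The state dictionary and the number format are illustrative assumptions.

def student_admission(state):
    # Precondition: the student has selected a degree course and
    # entered the Registration Form.
    assert state.get("course_selected"), "precondition violated"

    # ... flow of events: fill in and submit the registration form ...
    state["registration_number"] = "REG-00001"

    # Post-condition: the student receives a system generated
    # registration number.
    assert state.get("registration_number"), "post-condition violated"
    return state
```

Attempting to run the Use-Case without a selected course violates the precondition, mirroring the rule that registration can only follow a successful course selection.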
V. ROLE OF USE-CASE IN WRITING SRS
A Use-Case typically represents a major piece of functionality that is complete from beginning to end and captures a contract between the stakeholders of a system about its behaviour (Cockburn, 2000). The Use-Case model is an interpretation of the SRS. For ease of documentation, the Use-Case model along with the supplementary specifications document is used as the formal documentation for a project in the software industry. This may seem like an efficient system, but it cannot be substituted for a formal SRS. The need for an SRS document is usually mandated by industry standards. Under such circumstances, when an SRS standards document is unavailable, the Use-Case model is dissected and the Use-Case descriptions cannibalized in an attempt to populate the SRS. The functional requirements of any system can be converted into a modelling language by preparing Use-Cases for the required system. Changes in the functional requirements in the specification document need to be reflected in the Use-Case model and vice versa. We should also point out that the Use-Case model is an abstraction of the system model. It does not capture all the relevant aspects of the system, especially non-functional requirements, which are required for completing the product documentation. An unstructured process for using Use-Cases to populate an SRS is inefficient and lacks traceability. The SRS forms the basis for testing plans at a later stage, further boosting its importance in the software development process. So, to prepare an effective Use-Case, the first and foremost task is to understand the functional requirements of the system and create a Use-Case from the collected requirements. At the same time we can overcome various functional redundancies of the system, and thus the Use-Case is a good modelling tool.
VI. TRADITIONAL METHOD OF WRITING SRS
The SRS document usually contains all the user requirements in an informal form. Among all the documents produced during a software development life cycle, writing the SRS document is probably the toughest.
An SRS document should clearly document the following aspect of a system:
- Functional requirements:
The Functional requirements should discuss the functionalities required from the system. The functional requirements of the system as documented in the SRS document should clearly describe each function which the system would support along with corresponding input and output data set.
- Nonfunctional requirements:
The nonfunctional requirements deal with the characteristics of the system that cannot be expressed as functions. Examples of nonfunctional requirements include aspects concerning maintainability, portability and usability. They also include other aspects like reliability, accuracy of results, and human-computer interface issues.
- Goals of implementation:
The goals of implementation section documents issues such as revisions to the system functionalities that may be required in the future, new devices to be supported in the future, reusability issues, etc.
In this section we concentrate on the functional requirements of a system. To discuss the functionalities of a system we take the example of a “student admission process” for a university.

Fig. 7 Student Admission Process
A. SRS for Student Admission Process
R1: Student Admission
*Description:* The admission process function determines the type of degree course for which a student wants to take admission. The student selects the course type and department. After successful selection of the degree and course type, the system displays a registration form. The student has to fill in the form. After successful submission of the form, the system generates a unique Registration number for every student. Later the student can communicate with the college through this Registration number.
R1.1 Select Department and Degree
*Input:* The system prompts for Department (populated from a list box) and Degree (populated from a list box). The student can select a course type for admission.
*Output:* The system checks that the combination from the selection lists is valid and displays a Registration form.
R1.2 Fill in the Registration Form
*Input:* After successful course selection the system displays a Registration form. The student has to fill in the registration form and click on the Submit button.
*Output:* After successful submission of the registration form the system validates the form and generates a registration number.
R1.3 Get Registration Number
*Input:* The student gets a new registration number.
*Output:* For any updated information, the student can communicate with the University through this registration number.
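The functional requirements R1.1–R1.3 above can be sketched as a pair of functions. A minimal Python sketch, in which the course catalogue, the field names and the registration-number format are illustrative assumptions rather than part of the specification:

```python
import itertools

# R1.1: valid (Department, Degree) combinations, standing in for the
# list boxes populated from the master database (illustrative data).
VALID_COMBINATIONS = {
    ("CSE", "B.Tech"),
    ("CSE", "M.Tech"),
    ("ECE", "B.Tech"),
}

_numbers = itertools.count(1)  # assumed registration-number generator

def select_course(department, degree):
    """R1.1: validate the selection; a valid one yields a registration form."""
    if (department, degree) not in VALID_COMBINATIONS:
        return None  # invalid combination: "select another Degree Course"
    return {"department": department, "degree": degree}

def submit_registration(form):
    """R1.2/R1.3: validate the form and generate a unique Registration number."""
    if not form.get("name"):
        raise ValueError("Please enter the name")
    return f"REG-{next(_numbers):05d}"
```

Returning `None` from `select_course` corresponds to the alternative flow in which the system asks the student to select another degree course.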
VII. TRADITIONAL METHOD OF WRITING USE-CASE
As per IEEE Standard 1233, 1998 Edition, we follow the traditional method of writing a Use-Case to construct the “Student Admission Process”. The method is as follows:
A. Use-Case Specification: Student Admission
1. Use-Case Name
- Student Admission
1.1 Brief Description
- This Use-Case allows the student personnel to get registered for the admission process in a degree course. Based on the student’s information, the Administration personnel will allow the student to register and the system displays the necessary information. The Student Admission process handles two functionalities-
- i. Course Selection
- ii. Student Registration
2. Actors
- 2.1 Primary Actor
- 2.1.1 Student, Administrator
- 2.2 Supporting Actors
2.2.1 Registration Module.
3. Flow of Events
3.1 Basic Flow
The system window displays a list of buttons: Home, Department, Admission, Logout, About Us. Each button provides various functionalities. The user clicks on the Admission button to avail of the admission process.
The system prompts for the type of course selection (Degree/Department). After successful selection of a degree course, the system prompts for the registration process. The flow continues as in 3.1.1 and 3.1.2 respectively, depending on the user input.
3.1.1 Course Selection
3.1.1.1 The System window displays brief information about the admission process and how to apply for a course.
3.1.1.2 The system prompts with two list boxes of course-related information: Department (data populated from a list box) and Degree (data populated from a list box). The student personnel checks/un-checks any combination of the search options populated from the list boxes. Searching will not be initiated unless at least one search option is checked. A student can select any Department and Degree based on his/her interest.
3.1.1.3 After successful selection of a degree course and department, the student clicks on the “Select” button. The system then checks and verifies whether the selected degree course is available for the particular department. If a match is found, the system responds with a Registration form.
3.1.1.4 In case the selected degree course is not available for the particular department, the flow is as in 3.2.1.
3.1.2 Student Registration
This Use-Case begins when a user receives a Registration form after successful selection of a degree course for the admission process.
3.1.2.1 The system displays a Student Registration Form.
3.1.2.2 The system prompts for mandatory information related to the student: Name, Father’s Name, City, State, District, Country, Sex (system populated list box), E-mail, Phone, Occupation, Department, Degree, Date of birth (system populated list box in dd/mm/yyyy format), Upload your photo (browse button), Board, Year of passing, % of marks.
3.1.2.3 The system also provides Reset and Submit buttons. After filling in the registration form, the user clicks on the Submit button. The system then displays a preview of the student Registration form as filled in by the student.
3.1.2.4 The student personnel can edit/update the entered information by clicking on the “Reset” button, and finally clicks on “Submit” to save the student data.
3.1.2.5 After successful submission of the Registration form, the system displays a message box containing the student Registration number and other necessary information. The Registration number is generated by the system.
3.1.2.6 Later the student can check his/her status by entering the Registration number.
3.2 Alternative Flows
3.2.1 Invalid Course Selection
3.2.1.1 In the case of course selection, the system checks for a suitable combination of Degree and Department. If no match is found, it displays the message “select another Degree Course”.
4. Preconditions
4.1 <Successful course selection>
This Use-Case can occur only after successful course selection by the student personnel.
4.2 <Registration>
After successful course selection student can avail the Registration process.
5. Post Conditions
After successful registration student receives a system generated registration number.
VIII. MODIFIED WAY OF WRITING USE-CASE
Without violating the IEEE standard, we have designed the following proforma for writing a Use-Case for the same “Student Admission Process”. The process is described below:
A. Use-Case Specification: Admission
<table>
<thead>
<tr>
<th>Functionality Description</th>
<th>This Use-Case allows the student personnel to select a degree course and register for the admission process. Based on the student’s information the Administration personnel will allow the student to register and the system displays the necessary information. The functionalities handled by this Use-Case are-</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>1. Course Selection</td>
</tr>
<tr>
<td></td>
<td>2. Student Registration</td>
</tr>
<tr>
<td></td>
<td>Depending on the user input, the flow continues as in 1.1 and 1.2 respectively.</td>
</tr>
</tbody>
</table>
1.1 Course Selection
Description
This Use-Case begins when the student requests a new admission process.
Primary Actor
Student Personnel
Input Required (Mandatory/Optional)
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Source</th>
<th>Type/ Unit</th>
<th>Valid Range</th>
<th>Restrictions</th>
<th>Error Messages</th>
</tr>
</thead>
<tbody>
<tr>
<td>Department</td>
<td>Mouse/Keyboard</td>
<td>Selection</td>
<td>List box populated from master database</td>
<td>mandatory</td>
<td>“Please Select a Department” if not selected</td>
</tr>
<tr>
<td>Degree</td>
<td>Mouse/Keyboard</td>
<td>Selection</td>
<td>List box populated from master database</td>
<td>mandatory</td>
<td>“Please Select a Degree” if not selected</td>
</tr>
</tbody>
</table>
Mandatory Fields
Department, Degree.
Processing
1) The system prompts for different information related to the student (Department and Degree). The student personnel enters the relevant information as prompted by the system.
2) A student can select any Department and Degree based on his/her interest.
3) If the selected degree course is available for the particular department, the system responds with a Registration form.
4) This Registration Form contains various entries related to the qualifications and other necessary information of the student, which must be filled in carefully.
5) If the selected degree course is not available for the particular department, the system responds with the message “select another Degree Course”.
Preconditions
The user selects the Admission button. The system then prompts for Degree (system populated list box) and Department (system populated list box).
Post conditions
After successful selection of a degree course for a department, the system displays a Registration Form.
Exception Path
The attempt may be abandoned at any time.
1.2 Student Registration
Description
This Use-Case begins when the student personnel requests Registration in a degree course. After successful submission of the registration form, each student will be provided with a Registration number generated by the system. For any updated information given by the institution, the student has to communicate through this Registration number.
Primary Actor
Student Personnel
Input Required (Mandatory/Optional)
<table>
<thead>
<tr>
<th>Field</th>
<th>Source</th>
<th>Type/ Unit</th>
<th>Valid Range</th>
<th>Restrictions</th>
<th>Error Messages</th>
</tr>
</thead>
<tbody>
<tr>
<td>Name</td>
<td>Keyboard/Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Please enter the name</td>
</tr>
<tr>
<td>Father’s Name</td>
<td>Keyboard/Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Please enter the Father’s name</td>
</tr>
<tr>
<td>City</td>
<td>Keyboard/Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Please enter the City name</td>
</tr>
<tr>
<td>State</td>
<td>Keyboard/Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Please enter the state name</td>
</tr>
<tr>
<td>District</td>
<td>Keyboard/Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Please enter the district name</td>
</tr>
<tr>
<td>Country</td>
<td>Keyboard/Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Please enter the Country name</td>
</tr>
<tr>
<td>Sex</td>
<td>Keyboard/ Mouse</td>
<td>Numeric</td>
<td>List box populated from master data</td>
<td>mandatory</td>
<td>Please select a sex type</td>
</tr>
<tr>
<td>E-mail</td>
<td>Keyboard/ Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Please enter the E-mail address</td>
</tr>
<tr>
<td>Phone</td>
<td>Keyboard/ Mouse</td>
<td>Numeric</td>
<td>Up to 40 digit</td>
<td>mandatory</td>
<td>Please enter the phone number</td>
</tr>
<tr>
<td>Occupation</td>
<td>Keyboard/ Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Please enter the occupation</td>
</tr>
<tr>
<td>Department</td>
<td>Keyboard/ Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Kindly choose a Department</td>
</tr>
<tr>
<td>Degree</td>
<td>Keyboard/ Mouse</td>
<td>Character</td>
<td>Up to 40 character</td>
<td>mandatory</td>
<td>Kindly choose a Degree</td>
</tr>
<tr>
<td>Date of birth</td>
<td>Keyboard/Mouse</td>
<td>Selection</td>
<td>List box populated from master data</td>
<td>mandatory</td>
<td>Please enter date of birth</td>
</tr>
<tr>
<td>Upload your photo</td>
<td>Keyboard/Mouse</td>
<td>Selection</td>
<td>List box populated from master data</td>
<td>mandatory</td>
<td>Please choose a photograph</td>
</tr>
<tr>
<td>Qualification</td>
<td>Keyboard/Mouse</td>
<td>Character</td>
<td>Up to 40 characters</td>
<td>mandatory</td>
<td>Please enter the Board name</td>
</tr>
<tr>
<td>Year of passing</td>
<td>Keyboard/Mouse</td>
<td>Numeric</td>
<td>Up to 40 digits</td>
<td>mandatory</td>
<td>Enter the year of passing</td>
</tr>
<tr>
<td>% of marks</td>
<td>Keyboard/Mouse</td>
<td>Numeric</td>
<td>Up to 40 digits</td>
<td>mandatory</td>
<td>Please enter the % of marks</td>
</tr>
<tr>
<td>Reset</td>
<td>Keyboard/Mouse</td>
<td>Button</td>
<td>User dependent</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Submit</td>
<td>Keyboard/Mouse</td>
<td>Button</td>
<td>User dependent</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
**Mandatory Fields**
Name, Father’s Name, City, State, District, Country, Sex, E-mail, Phone, Occupation, Department, Degree, Date of birth, Upload your photo, Board, Year of passing, % of marks.
**Processing**
1) The system displays a Student Registration Form.
2) The system prompts for the mandatory information related to student registration. The student enters the information and clicks the **Submit** button. To edit the previously entered information, the student clicks the **Reset** button.
3) After successful submission of the registration form, the system displays all the information entered by the student and again prompts with the **Reset** and **Submit** buttons. To update the information, the student can click the **Reset** button; to save the data, the student can click the **Submit** button.
4) After successful submission of the registration form, each student is provided a Registration Number generated by the system.
5) The administrator checks all information entered by the student and sets a valid or invalid status.
6) Later, the student can check his/her status by entering the Registration Number. After a registration number is entered, the system verifies the status field and displays a "valid registration number" or "invalid registration number" message.
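The mandatory-field check in step 2 can be sketched as follows. This is our own illustration: the `validate_registration` helper and the exact error strings are assumptions, not part of the form specification.

```python
# Hypothetical sketch of the mandatory-field check from Processing step 2.
MANDATORY_FIELDS = ["Name", "Father's Name", "City", "State", "District",
                    "Country", "Sex", "E-mail", "Phone", "Occupation",
                    "Department", "Degree", "Date of birth"]

def validate_registration(form: dict) -> list:
    """Return one error message for every mandatory field left empty."""
    errors = []
    for field in MANDATORY_FIELDS:
        if not form.get(field, "").strip():
            errors.append(f"Please enter the {field}")
    return errors
```

A form missing "City" would yield the single message "Please enter the City", matching the per-field error-message column in the table above.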
**Precondition**
System displays a Registration Form.
**Post condition**
On successful submission of the Registration Form student receives a system generated Registration Number.
**Exception Path**
The attempt may be abandoned at any time.
IX. COMPARISON BETWEEN TRADITIONAL & PROPOSED METHODS OF WRITING USE-CASE
Our proposed method, compared with the existing one:
- Includes new input fields such as Field Name, Source, Type/Unit, Valid Range, and Error Message.
- Describes each Use-Case separately and precisely; because of its simple structure, a programmer can understand the relations between the various functionalities of the system.
- Communicates easily between the design part and the coding part.
- Helps both developers and users form a view of the new system, whereas in the traditional method Use-Cases are of little use when it comes to defining the user interface.
- Allows the developer to easily understand and take the necessary steps as and when required.
- Provides a brief classification of the input requirements irrespective of the design of the system.
- Makes it easy to create a database table view.
- Follows the chain: Use-Case description → Field Description → Mandatory Fields → Processing.
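One row of the proposed field-description format can be modeled as a small record type. This `FieldSpec` class is our own illustration of the five new columns, not part of the proposed notation:

```python
from dataclasses import dataclass

@dataclass
class FieldSpec:
    """One row of the proposed field-description table."""
    name: str           # Field name, e.g. "Name"
    source: str         # Source, e.g. "Keyboard/Mouse"
    type_unit: str      # Type/Unit, e.g. "Character"
    valid_range: str    # Valid range, e.g. "Up to 40 characters"
    mandatory: bool     # Whether the field is mandatory
    error_message: str  # Error message shown on invalid input

# The "Name" row of the registration-form table, expressed as a record:
name_field = FieldSpec("Name", "Keyboard/Mouse", "Character",
                       "Up to 40 characters", True, "Please enter the name")
```

A list of such records maps one-to-one onto both the Use-Case description and a database table view, which is the correlation the chain above describes.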
X. EFFECT ON SRS FOR THE NEWLY CONSTRUCTED USE-CASE
Preparing a Use-Case model from the SRS is not new. Our main aim is to construct a Use-Case model for a system in such a way that it is more functional and easily understandable by industry personnel as well as the client.
Fig. 8 Proposed Model of Our Construct
If we expand our proposed model using the above diagram, the flow goes as follows:
- Collect functional requirements from the user and write them in simple language understandable by industry personnel.
- To remove redundancies and understand the relationships between the various functionalities, the developer uses Use-Case modeling.
- To make the system more transparent, we use our proposed model, which correlates the system design and the database view.
- By using this proposed model, both developers and end users can easily understand and gain a brief knowledge of the whole system.
To examine and elaborate our proposed model, we have taken one example and implemented our newly constructed model through a one-to-one mapping technique. The diagram is shown below:

**Fig. 9** A Case Study for Implementation of Our New Model
XI. CONCLUSION
UML provides a language and notations for identifying, documenting, and communicating system requirements, and among these, Use-Case descriptions and diagrams are the most frequently used during the requirements-definition stage of a project. An SRS document prepared in compliance with IEEE Std 1233-1998 can ensure unambiguous communication between the users and the developers. The Use-Case model alone cannot serve as the core piece of documentation, as it is only an interpretation of the SRS document, but it can shorten the time required to generate a standards-compliant document if existing Use-Case descriptions can be reused in some modified manner. After considering various aspects of system requirement specification, we can say that the Use-Case diagram gives a better representation of the system, and it becomes easier to correlate the system design with the database view; overall, the SRS documentation as a whole has been enhanced. The new format for constructing Use-Cases described here provides an improved user view, developer view, and system view of the SDLC, and we have maintained all the standards according to the IEEE. We have modified the description part of the Use-Case to make it more understandable by the customer who will use the software. This modification will help developers get a better vision of "what to do" and enable them to more easily implement the "how to do" factors. We hope that this modified view of the Use-Case will open much future scope for enhancing the existing methodologies.
Our future work involves comparing the effectiveness of our technique with traditional ad-hoc approaches. This would involve evaluating the completeness of an SRS document prepared with our method in compliance with IEEE Std 1233-1998 and comparing it with one prepared using the existing method. The method will then be verified to ascertain the extent of cognitive load experienced in preparing the SRS document, which would serve as another evaluation metric.
Post installation configurations to ensure a smooth deployment of Ansible Automation Platform installation
Abstract
This guide provides instructions and guidance on post installation activities for Red Hat Ansible Automation Platform.
# Table of Contents
**PREFACE**
**MAKING OPEN SOURCE MORE INCLUSIVE**
**PROVIDING FEEDBACK ON RED HAT DOCUMENTATION**
**CHAPTER 1. ACTIVATING RED HAT ANSIBLE AUTOMATION PLATFORM**
1.1. ACTIVATE WITH CREDENTIALS
1.2. ACTIVATE WITH A MANIFEST FILE
**CHAPTER 2. OBTAINING A MANIFEST FILE**
2.1. CREATE A SUBSCRIPTION ALLOCATION
2.2. ADDING SUBSCRIPTIONS TO A SUBSCRIPTION ALLOCATION
2.3. DOWNLOADING A MANIFEST FILE
**CHAPTER 3. CONFIGURING PROXY SUPPORT FOR RED HAT ANSIBLE AUTOMATION PLATFORM**
3.1. ENABLE PROXY SUPPORT
3.2. KNOWN PROXIES
3.2.1. Configuring known proxies
3.3. CONFIGURING A REVERSE PROXY
3.4. ENABLE STICKY SESSIONS
**CHAPTER 4. CONFIGURING AUTOMATION CONTROLLER WEBSOCKET CONNECTIONS**
4.1. WEBSOCKET CONFIGURATION FOR AUTOMATION CONTROLLER
4.1.1. Configuring automatic discovery of other automation controller nodes
**CHAPTER 5. MANAGING USABILITY ANALYTICS AND DATA COLLECTION FROM AUTOMATION CONTROLLER**
5.1. USABILITY ANALYTICS AND DATA COLLECTION
5.1.1. Controlling data collection from automation controller
**CHAPTER 6. ENCRYPTING PLAINTEXT PASSWORDS IN AUTOMATION CONTROLLER CONFIGURATION FILES**
6.1. CREATING POSTGRESQL PASSWORD HASHES
6.2. ENCRYPTING THE POSTGRES PASSWORD
6.3. RESTARTING AUTOMATION CONTROLLER SERVICES
After installing Red Hat Ansible Automation Platform, your system might need extra configuration to ensure your deployment runs smoothly. This guide provides procedures for configuration tasks that you can perform after installing Red Hat Ansible Automation Platform.
MAKING OPEN SOURCE MORE INCLUSIVE
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
We appreciate your feedback on our technical content and encourage you to tell us what you think. If you’d like to add comments, provide insights, correct a typo, or even ask a question, you can do so directly in the documentation.
**NOTE**
You must have a Red Hat account and be logged in to the customer portal.
To submit documentation feedback from the customer portal, do the following:
1. Select the **Multi-page HTML** format.
2. Click the **Feedback** button at the top-right of the document.
3. Highlight the section of text where you want to provide feedback.
4. Click the **Add Feedback** dialog next to your highlighted text.
5. Enter your feedback in the text box on the right of the page and then click **Submit**.
We automatically create a tracking issue each time you submit feedback. Open the link that is displayed after you click **Submit** and start watching the issue or add more comments.
CHAPTER 1. ACTIVATING RED HAT ANSIBLE AUTOMATION PLATFORM
Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to authorize the use of Ansible Automation Platform. To obtain a subscription, you can do either of the following:
1. Use your Red Hat customer or Satellite credentials when you launch Ansible Automation Platform.
2. Upload a subscriptions manifest file either using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook.
1.1. ACTIVATE WITH CREDENTIALS
When Ansible Automation Platform launches for the first time, the Ansible Automation Platform Subscription screen automatically displays. You can use your Red Hat credentials to retrieve and import your subscription directly into Ansible Automation Platform.
Procedures
1. Enter your Red Hat username and password.
2. Click Get Subscriptions.
NOTE
You can also use your Satellite username and password if your cluster nodes are registered to Satellite through Subscription Manager.
3. Review the End User License Agreement and select I agree to the End User License Agreement.
4. The Tracking and Analytics options are checked by default. These selections help Red Hat improve the product by delivering you a much better user experience. You can opt out by deselecting the options.
5. Click Submit.
6. Once your subscription has been accepted, the license screen displays and navigates you to the Dashboard of the Ansible Automation Platform interface. You can return to the license screen by clicking the Settings icon and selecting the License tab from the Settings screen.
1.2. ACTIVATE WITH A MANIFEST FILE
If you have a subscriptions manifest, you can upload the manifest file either using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook.
Prerequisites
You must have a Red Hat Subscription Manifest file exported from the Red Hat Customer Portal. For more information, see Obtaining a manifest file.
Uploading with the interface
1. Complete steps to generate and download the manifest file
2. Log in to Red Hat Ansible Automation Platform.
3. If you are not immediately prompted for a manifest file, go to **Settings → License**.
4. Make sure the **Username** and **Password** fields are empty.
5. Click **Browse** and select the manifest file.
6. Click **Next**.
**NOTE**
If the **BROWSE** button is disabled on the License page, clear the **USERNAME** and **PASSWORD** fields.
Uploading manually
If you are unable to apply or update the subscription info using the Red Hat Ansible Automation Platform interface, you can upload the subscriptions manifest manually in an Ansible playbook using the **license** module in the **ansible.controller** collection.
```yaml
- name: Set the license using a file
  license:
    manifest: "/tmp/my_manifest.zip"
```
CHAPTER 2. OBTAINING A MANIFEST FILE
You can obtain a subscription manifest in the Subscription Allocations section of Red Hat Subscription Management. After you obtain a subscription allocation, you can download its manifest file and upload it to activate Ansible Automation Platform.
To begin, log in to the Red Hat Customer Portal using your administrator user account and follow the procedures in this section.
2.1. CREATE A SUBSCRIPTION ALLOCATION
Creating a new subscription allocation allows you to set aside subscriptions and entitlements for a system that is currently offline or air-gapped. This is necessary before you can download its manifest and upload it to Ansible Automation Platform.
Procedure
1. From the Subscription Allocations page, click New Subscription Allocation.
2. Enter a name for the allocation so that you can find it later.
3. Select Type: Satellite 6.8 as the management application.
4. Click Create.
2.2. ADDING SUBSCRIPTIONS TO A SUBSCRIPTION ALLOCATION
Once an allocation is created, you can add the subscriptions you need for Ansible Automation Platform to run properly. This step is necessary before you can download the manifest and add it to Ansible Automation Platform.
Procedure
1. From the Subscription Allocations page, click on the name of the Subscription Allocation to which you would like to add a subscription.
2. Click the Subscriptions tab.
3. Click Add Subscriptions.
4. Enter the number of Ansible Automation Platform Entitlement(s) you plan to add.
5. Click Submit.
Verification
After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status will show as Out of Compliance, indicating you have exceeded the number of hosts in your subscription.
Other important information displayed includes the following:
Hosts automated
Host count automated by the job, which consumes the license count
**Hosts imported**
Host count considering all inventory sources (does not impact hosts remaining)
**Hosts remaining**
Total host count minus hosts automated
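The relationship between these counters can be sketched with a small example (illustrative code of our own, with made-up numbers; this is not a controller API):

```python
def subscription_status(total_hosts: int, hosts_automated: int):
    """Hosts remaining = total host count minus hosts automated.

    The subscription shows Compliant while the automated host count
    stays within the subscribed total, and Out of Compliance otherwise.
    """
    remaining = total_hosts - hosts_automated
    state = "Compliant" if remaining >= 0 else "Out of Compliance"
    return remaining, state
```

For instance, a 100-host subscription with 40 hosts automated leaves 60 hosts remaining and remains Compliant; automating 120 hosts would exceed the count and show Out of Compliance.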
### 2.3. DOWNLOADING A MANIFEST FILE
After an allocation is created and has the appropriate subscriptions on it, you can download the manifest from Red Hat Subscription Management.
**Procedure**
1. From the Subscription Allocations page, click on the name of the Subscription Allocation to which you would like to generate a manifest.
2. Click the Subscriptions tab.
3. Click Export Manifest to download the manifest file.
**NOTE**
The file is saved to your default downloads folder and can now be uploaded to activate Red Hat Ansible Automation Platform.
CHAPTER 3. CONFIGURING PROXY SUPPORT FOR RED HAT ANSIBLE AUTOMATION PLATFORM
You can configure Red Hat Ansible Automation Platform to communicate with traffic using a proxy. Proxy servers act as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service or available resource from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. The following sections describe the supported proxy configurations and how to set them up.
3.1. ENABLE PROXY SUPPORT
To provide proxy server support, automation controller handles proxied requests (such as ALB, NLB, HAProxy, Squid, Nginx and tinyproxy in front of automation controller) via the REMOTE_HOST_HEADERS list variable in the automation controller settings. By default, REMOTE_HOST_HEADERS is set to ["REMOTE_ADDR", "REMOTE_HOST"].
To enable proxy server support, edit the REMOTE_HOST_HEADERS field in the settings page for your automation controller:
Procedure
1. On your automation controller, navigate to Settings → Miscellaneous System.
2. In the REMOTE_HOST_HEADERS field, enter the following values:
```
[
  "HTTP_X_FORWARDED_FOR",
  "REMOTE_ADDR",
  "REMOTE_HOST"
]
```
Automation controller determines the remote host’s IP address by searching through the list of headers in REMOTE_HOST_HEADERS until the first IP address is located.
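The lookup order can be sketched as follows. This is our own illustration of the behaviour just described, not the controller's actual implementation:

```python
def resolve_remote_host(headers: dict, remote_host_headers: list):
    """Return the first remote-host value found, walking the configured
    header list in order, as automation controller's lookup does."""
    for name in remote_host_headers:
        value = headers.get(name)
        if value:
            # X-Forwarded-For may carry a comma-separated proxy chain;
            # the left-most entry is the originating client.
            return value.split(",")[0].strip()
    return None
```

With `REMOTE_HOST_HEADERS = ["HTTP_X_FORWARDED_FOR", "REMOTE_ADDR", "REMOTE_HOST"]`, a request carrying `HTTP_X_FORWARDED_FOR` wins over `REMOTE_ADDR`, because it appears earlier in the list.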
3.2. KNOWN PROXIES
When automation controller is configured with REMOTE_HOST_HEADERS = ["HTTP_X_FORWARDED_FOR", "REMOTE_ADDR", "REMOTE_HOST"], it assumes that the value of X-Forwarded-For has originated from the proxy/load balancer sitting in front of automation controller. If automation controller is reachable without use of the proxy/load balancer, or if the proxy does not validate the header, the value of X-Forwarded-For can be falsified to fake the originating IP addresses. Using HTTP_X_FORWARDED_FOR in the REMOTE_HOST_HEADERS setting poses a vulnerability.
To avoid this, you can configure a list of known proxies that are allowed, using the PROXY_IP_ALLOWED_LIST field in the settings menu on your automation controller. Requests from load balancers and hosts that are not on the known proxies list are rejected.
3.2.1. Configuring known proxies
To configure a list of known proxies for your automation controller, add the proxy IP addresses to the PROXY_IP_ALLOWED_LIST field in the settings page for your automation controller.
Procedure
1. On your automation controller, navigate to Settings → Miscellaneous System.
2. In the PROXY_IP_ALLOWED_LIST field, enter IP addresses that are allowed to connect to your automation controller, following the syntax in the example below:
Example PROXY_IP_ALLOWED_LIST entry
```
[
"example1.proxy.com:8080",
"example2.proxy.com:8080"
]
```
IMPORTANT
- PROXY_IP_ALLOWED_LIST requires that the proxies in the list properly sanitize header input and correctly set an X-Forwarded-For value equal to the real source IP of the client. Automation controller can then rely on the IP addresses and hostnames in PROXY_IP_ALLOWED_LIST to provide non-spoofed values for the X-Forwarded-For field.
- Do not configure HTTP_X_FORWARDED_FOR as an item in `REMOTE_HOST_HEADERS` unless all of the following conditions are satisfied:
- You are using a proxied environment with SSL termination;
- The proxy provides sanitization or validation of the X-Forwarded-For header to prevent client spoofing;
- `/etc/tower/conf.d/remote_host_headers.py` defines PROXY_IP_ALLOWED_LIST that contains only the originating IP addresses of trusted proxies or load balancers.
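The combined effect of the allowed list can be sketched like this (our own illustration; `effective_client_ip` is a hypothetical helper, not a controller API): the X-Forwarded-For value is honored only when the directly connected peer is a known proxy.

```python
def effective_client_ip(peer_ip: str, xff, allowed_proxies: set):
    """Trust X-Forwarded-For only when the request arrived via a proxy
    on the allowed list; otherwise fall back to the socket peer IP."""
    if xff and peer_ip in allowed_proxies:
        # Left-most entry in the chain is the originating client.
        return xff.split(",")[0].strip()
    return peer_ip
```

A spoofed X-Forwarded-For header sent directly to the controller (bypassing the trusted proxy) is therefore ignored, which is exactly the vulnerability the allowed list closes.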
3.3. CONFIGURING A REVERSE PROXY
You can support a reverse proxy server configuration by adding HTTP_X_FORWARDED_FOR to the REMOTE_HOST_HEADERS field in your automation controller settings. The X-Forwarded-For (XFF) HTTP header field identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer.
Procedure
1. On your automation controller, navigate to Settings → Miscellaneous System.
2. In the REMOTE_HOST_HEADERS field, enter the following values:
```
[
"HTTP_X_FORWARDED_FOR",
"REMOTE_ADDR",
"REMOTE_HOST"
]
```
3. Add the lines below to `/etc/tower/conf.d/custom.py` to ensure the application uses the correct headers:
USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
3.4. ENABLE STICKY SESSIONS
By default, an Application Load Balancer routes each request independently to a registered target based on the chosen load-balancing algorithm. To avoid authentication errors when running multiple instances of automation hub behind a load balancer, you must enable sticky sessions. Enabling sticky sessions sets a custom application cookie that matches the cookie configured on the load balancer to enable stickiness. This custom cookie can include any of the cookie attributes required by the application.
Additional resources
- Refer to Sticky sessions for your Application Load Balancer for more information about enabling sticky sessions.
Disclaimer: Links contained in this note to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
CHAPTER 4. CONFIGURING AUTOMATION CONTROLLER WEBSOCKET CONNECTIONS
You can configure automation controller to align the websocket configuration with your nginx or load balancer configuration.
4.1. WEBSOCKET CONFIGURATION FOR AUTOMATION CONTROLLER
Automation controller nodes are interconnected through websockets to distribute all websocket-emitted messages throughout your system. This configuration setup enables any browser client websocket to subscribe to any job that might be running on any automation controller node. Websocket clients are not routed to specific automation controller nodes. Instead, any automation controller node can handle any websocket request, and each automation controller node must know about all websocket messages destined for all clients.
You can configure websockets at `/etc/tower/conf.d/websocket_config.py` in all of your automation controller nodes and the changes will be effective after the service restarts.
Automation controller automatically handles discovery of other automation controller nodes through the Instance record in the database.
IMPORTANT
Your automation controller nodes are designed to broadcast websocket traffic across a private, trusted subnet (and not the open Internet). Therefore, if you turn off HTTPS for websocket broadcasting, the websocket traffic, composed mostly of Ansible playbook stdout, is sent unencrypted between automation controller nodes.
4.1.1. Configuring automatic discovery of other automation controller nodes
You can configure websocket connections to enable automation controller to automatically handle discovery of other automation controller nodes through the Instance record in the database.
- Edit the automation controller websocket information for port and protocol, and confirm whether to verify certificates with `True` or `False` when establishing websocket connections.
```
BROADCAST_WEBSOCKET_PROTOCOL = 'http'
BROADCAST_WEBSOCKET_PORT = 80
BROADCAST_WEBSOCKET_VERIFY_CERT = False
```
CHAPTER 5. MANAGING USABILITY ANALYTICS AND DATA COLLECTION FROM AUTOMATION CONTROLLER
You can change how you participate in usability analytics and data collection from automation controller by opting out or changing your settings in the automation controller user interface.
5.1. USABILITY ANALYTICS AND DATA COLLECTION
Usability data collection is included with automation controller to collect data to better understand how automation controller users specifically interact with automation controller, to help enhance future releases, and to continue streamlining your user experience.
Only users installing a trial of automation controller or a fresh installation of automation controller are opted-in for this data collection.
Additional resources
- For more information, see the Red Hat privacy policy.
5.1.1. Controlling data collection from automation controller
You can control how automation controller collects data by setting your participation level in the User Interface tab in the Settings menu.
Procedure
1. Log in to your automation controller.
2. Navigate to Settings → User Interface.
3. Select the desired level of data collection from the User Analytics Tracking State drop-down list:
- **Off**: Prevents any data collection.
- **Anonymous**: Enables data collection without your specific user data.
- **Detailed**: Enables data collection including your specific user data.
4. Click **Save** to apply the settings or **Cancel** to discard the changes.
CHAPTER 6. ENCRYPTING PLAINTEXT PASSWORDS IN AUTOMATION CONTROLLER CONFIGURATION FILES
Passwords stored in automation controller configuration files are stored in plain text. A user with access to the `/etc/tower/conf.d/` directory can view the passwords used to access the database. Access to the directories is controlled with permissions, so they are protected, but some security findings deem this protection to be inadequate. The solution is to encrypt the passwords individually.
### 6.1. CREATING POSTGRESQL PASSWORD HASHES
**Procedure**
1. On your automation controller node, run the following:
```bash
# awx-manage shell_plus
```
2. Then run the following from the python prompt:
```python
>>> from awx.main.utils import encrypt_value, get_encryption_key
>>> postgres_secret = encrypt_value('$POSTGRES_PASS')
>>> print(postgres_secret)
```
**NOTE**
Replace the `$POSTGRES_PASS` variable with the actual plain text password you wish to encrypt.
The output should resemble the following:
```plaintext
$encrypted$UTF8$AESCBC$Z0FBQUFBQmtLdGRWVFJwGtkV1ZBR3hkNGVVbFFIU3hhY21UT081eXFkR09aUWZLcG9TSmpndmZYQXyRHFQ3ZYGSE15OUFuM1RHZHBoTFU3S0MyNEo2Y2JWUURSYktsdmc9PQ==
```
3. Copy the full values of these hashes and save them.
- The hash value begins with `$encrypted$`, and is not just the string of characters, as shown in the following example:
```plaintext
$encrypted$AESCBC$Z0FBQUFBQmNONU9BbGQ1VjJyNDJRTtRtKHFRFRI09l2U5TgdBYVrlcXFRjimdmpZNd0ZVvE21QRWViMmNDOGJaM0dPeHN2b194NVxQ1M5X3dSc1gxQ29TdBKRLkljWHc9PQ==
```
Note that the `$*_PASS` values are already in plain text in your inventory file.
These steps supply the hash values that replace the plain text passwords within the automation controller configuration files.
### 6.2. ENCRYPTING THE POSTGRES PASSWORD
The following procedure replaces the plain text passwords with encrypted values. Perform the following steps on each node in the cluster:
Procedure
1. Edit `/etc/tower/conf.d/postgres.py` using:
```bash
$ vim /etc/tower/conf.d/postgres.py
```
2. Add the following line to the top of the file.
```python
from awx.main.utils import decrypt_value, get_encryption_key
```
3. Remove the password value listed after `PASSWORD`: and replace it with the following line, replacing the supplied value of `$encrypted..` with your own hash value:
```python
decrypt_value(get_encryption_key('value'), '$encrypted$AESCBC$Z0FBQUFBQmNONU9BbGQ1VjYoNDJRVTRkaFRIR09ib2U5TGdaYVRfcXFXRjimmdmpZNjoZVpEZ21QRWViMmNDOGJaM0dPeHN2b194NUxvQ1M5X3dSc1gxQ29TdDBKRkljWHc9PQ=='),
```
**NOTE**
The hash value in this step is the output value of `postgres_secret`.
4. The full `postgres.py` resembles the following:
```python
# Ansible Automation platform controller database settings.
from awx.main.utils import decrypt_value, get_encryption_key

DATABASES = {
    'default': {
        'ATOMIC_REQUESTS': True,
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'awx',
        'USER': 'awx',
        'PASSWORD': decrypt_value(get_encryption_key('value'), '$encrypted$AESCBC$Z0FBQUFBQmNONU9BbGQ1VjYoNDJRVTRkaFRIR09ib2U5TGdaYVRfcXFXRjimmdmpZNjoZVpEZ21QRWViMmNDOGJaM0dPeHN2b194NUxvQ1M5X3dSc1gxQ29TdDBKRkljWHc9PQ=='),
        'HOST': '127.0.0.1',
        'PORT': 5432,
    }
}
```
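As a rough illustration of the pattern `postgres.py` relies on (an opaque token stored in the settings file, decrypted by a helper when the module is imported), here is a toy, stdlib-only sketch. The XOR/base64 scheme, the `TOY` tag, and the key-derivation function are inventions for illustration only; the controller's real `decrypt_value`/`get_encryption_key` are provided by awx itself and use AES-CBC.

```python
import base64

def get_encryption_key(salt: str) -> bytes:
    # Hypothetical key derivation for the sketch; awx derives its key
    # differently (from the installation's secret key material).
    return bytes(ord(c) % 256 for c in salt * 8)[:16]

def encrypt_value(key: bytes, plaintext: str) -> str:
    # Toy XOR cipher, NOT the controller's AES-CBC scheme.
    data = plaintext.encode()
    xored = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    return "$encrypted$TOY$" + base64.b64encode(xored).decode()

def decrypt_value(key: bytes, token: str) -> str:
    # Recover the plaintext from the opaque token at config-load time.
    payload = base64.b64decode(token.rsplit("$", 1)[-1])
    data = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    return data.decode()

key = get_encryption_key('value')
token = encrypt_value(key, "s3cret-db-pass")

# The settings file stores only the token; the helper runs on import.
DATABASES = {"default": {"PASSWORD": decrypt_value(key, token)}}
```

The point of the pattern is that an attacker reading the configuration file sees only the token, while the running process can still recover the password.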
### 6.3. RESTARTING AUTOMATION CONTROLLER SERVICES
Procedure
1. When encryption is completed on all nodes, perform a restart of services across the cluster using:
```bash
# automation-controller-service restart
```
2. Navigate to the UI, and verify you are able to run jobs across all nodes.
Lazy products
In class on Tuesday, we learned about eager products. When eliminating eager products, we first reduced the components from left to right until both were values. Only then did we deem the pair to be a value. This is because we wanted to simultaneously use both components in the `case` elimination form, and in a call-by-value semantics, variables should only ever be instantiated with values.
With lazy products, a pair of expressions is always a value. Instead of a `case` elimination form that simultaneously extracts both components of an ordered pair, lazy products have left and right projections that extract the single corresponding component from the pair. Before we can project out of a term that is not an ordered pair, we must first reduce that term until it becomes one.
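The contrast above can be sketched operationally by modelling components as thunks: an eager pair forces both components before the pair counts as a value, while a lazy pair is a value at once and a component is reduced only when projected. This is an informal Python analogue, not the formal dynamics the tasks below ask for.

```python
# Eager vs lazy pairs, with components represented as zero-argument thunks.
def eager_pair(t1, t2):
    return (t1(), t2())        # both components reduced to values first

def lazy_pair(t1, t2):
    return (t1, t2)            # the pair itself is already a value

def proj_l(p):
    return p[0]()              # reduce only the projected component

def proj_r(p):
    return p[1]()

log = []
def traced(v):
    # A thunk that records when it is actually evaluated.
    def thunk():
        log.append(v)
        return v
    return thunk

p = lazy_pair(traced(1), traced(2))
assert log == []               # nothing evaluated yet: the pair is a value
assert proj_l(p) == 1
assert log == [1]              # only the left component was reduced
```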
In this part of the assignment, you will explore the dynamics for lazy products and prove various properties about them. We explore lazy products in the context of the simple language we have seen so far. Its syntax, statics, and dynamics are given in the appendices.
We first extend our syntax:
\[
\tau ::= \cdots \quad | \tau_1 \& \tau_2 \quad \text{lazy product of } \tau_1 \text{ and } \tau_2
\]
\[
e ::= \cdots \quad | \langle e_1, e_2 \rangle \quad \text{ordered pair of } e_1 \text{ and } e_2
\]
\[
\mid e \cdot l \quad \text{left projection} \quad \mid \quad e \cdot r \quad \text{right projection}
\]
The introduction rule for lazy pairs is:
\[
\frac{\Gamma \vdash e_1 : \tau_1 \quad \Gamma \vdash e_2 : \tau_2}{\Gamma \vdash \langle e_1, e_2 \rangle : \tau_1 \& \tau_2} \quad (I-\&)
\]
Its elimination rules are:
\[
\frac{\Gamma \vdash e : \tau_1 \& \tau_2}{\Gamma \vdash e \cdot l : \tau_1} \quad (E-\&-L) \quad \frac{\Gamma \vdash e : \tau_1 \& \tau_2}{\Gamma \vdash e \cdot r : \tau_2} \quad (E-\&-R)
\]
**Task 1** (10 points, WB). Give rules capturing the intended dynamics for lazy products. That is, give rules defining the judgments \(e \text{ val}\) and \(e \mapsto e'\) for terms \(e\) of the form \(\langle e_1, e_2 \rangle\), \(e_0 \cdot l\), and \(e_0 \cdot r\). They should satisfy \(\langle e_1, e_2 \rangle \cdot l \mapsto^* e_1\) and \(\langle e_1, e_2 \rangle \cdot r \mapsto^* e_2\) and capture the intuitions we gave above.
**Task 2** (10 points, WB). State and prove the canonical forms theorem for lazy pairs. The statement should be of the following form: “if \(\cdot \vdash e : \tau_1 \& \tau_2\) and \(e \text{ val}\), then \(e\) is a term of the form(s) . . .”.
**Task 3** (10 points, WB). Show that the progress property still holds after the addition of lazy pairs. In particular, prove if \(\cdot \vdash e : \tau\) by (I-\&), (E-\&-L), or (E-\&-R), then either \(e \text{ val}\) or there exists an \(e'\) such that \(e \mapsto e'\).
**Task 4** (5 points, WB). Show that the preservation property still holds after the addition of lazy pairs. In particular, show that if \(e \mapsto e'\) by one of the rules you gave in task 1 and \(\cdot \vdash e : \tau\), then \(\cdot \vdash e' : \tau\).
**Task 5** (5 points, WB). Write down terms \(\xi\) and \(\xi^{-1}\) such that
\[
\cdot \vdash \xi : \tau \otimes \sigma \rightarrow \tau \& \sigma \quad \cdot \vdash \xi^{-1} : \tau \& \sigma \rightarrow \tau \otimes \sigma
\]
and for all values \(v\) of type \(\tau \otimes \sigma\), \(\xi^{-1}(\xi(v)) \mapsto^* v\) (you do not need to prove this).
Do these terms witness the isomorphism? If so, state this. If not, briefly explain why, and then briefly speculate on changes we could make to the definition of mutual inverses so that \(\xi\) and \(\xi^{-1}\) would qualify.
Hint. Consider the dynamics of eager and lazy products in the presence of non-termination.
A billion dollar mistake
A frequent source of bugs in programs is attempting to dereference null pointers. Languages subject to this class of errors feature a type $\pi$ of “pointers” with a distinguished pointer $\text{null}$ called the “null pointer”. The intention is that the null pointer represents a lack of value, while all other pointers can be “dereferenced” to produce a value. Any attempt to dereference a null pointer is necessarily an error. This class of errors is so rampant that Hoare, the inventor of the null pointer, has called null pointers his “billion dollar mistake” [2].
The crux of the problem is a mis-identification of the option type $\pi \text{ opt}$ with the type $\pi$. The option type $\tau \text{ opt}$ represents optional values of type $\tau$, where $\text{some } e$ captures the presence of a value $e$ of type $\tau$ and $\text{none}$ represents the lack of a value. We can test for the presence of a value using the construction “case $e_1 \{ \text{some } x \Rightarrow e_2 | \text{none} \Rightarrow e_3 \}$”. This checks if $e_1$ is $\text{none}$, in which case it produces $e_3$, and if $e_1$ is $\text{some } v_1$, then we produce $[v_1/x]e_2$. This can be encoded using sum types as follows:
$$
\begin{align*}
\tau \text{ opt} &= \tau + 1 \\
\text{some } e &= l \cdot e \\
\text{none} &= r \cdot \langle \rangle \\
\text{case } e_1 \{ \text{some } x \Rightarrow e_2 | \text{none} \Rightarrow e_3 \} &= \text{case } e_1 \{ l \cdot x \Rightarrow e_2 | r \cdot \langle \rangle \Rightarrow e_3 \}
\end{align*}
$$
The key point is that the type system stops us from directly using an optional value $e$ of type $\tau \text{ opt}$ in a context expecting a value of type $\tau$: the type system forces us to account for the fact that $e$ could be $\text{none}$. In contrast, $\pi$ treats the null pointer $\text{null}$ as a genuine pointer, allowing it to be used in any context expecting a genuine pointer.
The affected languages try to work around this problem by providing a function $\text{isnull} : \pi \rightarrow \text{bool}$ that produces $\text{true}$ if it is applied to $\text{null}$, and otherwise produces $\text{false}$. To avoid accidentally dereferencing a pointer, code is then littered with checks of the form:
$$
\text{if (isnull e) then ...handle error... else ...do something...}$$
Unfortunately, it is very easy to forget such a check and attempt to dereference the potentially $\text{null}$ value $e$. Implicitly, this work-around attempts to simulate the type $\tau \text{ opt}$ using $\text{bool} \otimes \tau$, where a value of type $\tau$ is tagged with a boolean that signals whether or not the value is actually present. The reader is referred to [1, pp. 92f.] for more details.
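The tagged-sum encoding of $\tau \text{ opt}$ can be sketched concretely; the tuple tags below stand in for the injections $l \cdot e$ and $r \cdot \langle \rangle$, and `case_opt` plays the role of the case construct that forces callers to handle absence. The names are illustrative, not part of the formal language.

```python
# tau opt as a tagged sum:  some e ~ l . e,  none ~ r . <>
def some(e):
    return ("some", e)

NONE = ("none",)

def case_opt(e1, on_some, on_none):
    # The case construct: the caller MUST supply both branches, so the
    # "none" case cannot be forgotten (unlike an isnull check).
    tag = e1[0]
    return on_some(e1[1]) if tag == "some" else on_none()
```

By contrast, simulating this with a (bool, value) pair requires some stand-in value to pair with False, which is exactly the gap Task 6 below probes.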
Task 6 (10 points, WB, [1] Ex. 11.2). Informally show that we cannot identify \( \text{bool} \otimes \tau \) and \( \tau_{\text{opt}} \) for arbitrary \( \tau \). Do so by attempting to give an implementation of \( \tau_{\text{opt}} \) in terms of \( \text{bool} \otimes \tau \) by sensibly completing the following chart:
\[
\begin{array}{l}
\text{none} = ? \\
\text{some } e = ? \\
\text{case } e_1 \{ \text{some } x \Rightarrow e_2 \mid \text{none} \Rightarrow e_3 \} = ?
\end{array}
\]
Where do you get stuck if you do not additionally assume the existence of a "null" value \( \text{null}_\tau \) for each type \( \tau \)? Argue that even by artificially assuming the existence of such values, you cannot complete the chart in a manner that captures the semantics of \( \tau_{\text{opt}} \) in their absence. (Please be brief: your answer must contain at most eight lines of prose.)
For your convenience, you may assume the existence of an \( \text{if} \ b \ \text{then} \ e_1 \ \text{else} \ e_2 \) construct as defined in lecture 6.
Gaining types
Task 7 (5 points, WB). Find terms \( e \) and \( e' \) and types \( \tau \) and \( \tau' \) such that \( \cdot \vdash e : \tau \), \( e \mapsto e' \), and \( \cdot \vdash e' : \tau' \), but not \( \cdot \vdash e : \tau' \).
Is zero a zero?
Task 8 (5 points, WB). Is the nullary sum type \( 0 \) the zero for \( \otimes \)? That is, prove or disprove that for all \( \tau \),
\[ \tau \otimes 0 \cong 0. \]
In the affirmative case, we only ask that you write down the two terms witnessing the isomorphism. In the negative case, you must provide a counter-example and explain why it is a counter-example.
Task 9 (0 points). How long did you spend on this assignment? Please list the questions that you discussed with classmates under the whiteboard policy.
References
A Syntax
Types \( \tau \) and terms \( e \) are given by the grammars:
\[
\begin{align*}
\tau &::= \alpha \mid \tau_1 \rightarrow \tau_2 \mid 0 \mid \tau_1 + \tau_2 \mid \tau_1 \otimes \tau_2 \mid \tau_1 \& \tau_2 \\
e &::= x \mid \lambda x.e \mid e_1 e_2 \mid \text{case } e \{ \} \mid l \cdot e \mid r \cdot e \mid \text{case } e \{ l \cdot x \Rightarrow e_1 \mid r \cdot x \Rightarrow e_2 \} \mid \langle \rangle \mid \text{case } e \{ \langle \rangle \Rightarrow e_1 \} \mid \langle e_1, e_2 \rangle \mid \text{case } e \{ \langle x_1, x_2 \rangle \Rightarrow e_1 \} \mid \langle | e_1, e_2 | \rangle \mid e \cdot l \mid e \cdot r
\end{align*}
\]
B Statics
\[
\begin{align*}
\frac{x : \tau \in \Gamma}{\Gamma \vdash x : \tau} \quad &\text{(VAR)} & \frac{\Gamma, x : \tau \vdash e : \tau'}{\Gamma \vdash \lambda x.e : \tau \rightarrow \tau'} \quad &\text{(LAM)} \\
\frac{\Gamma \vdash e_1 : \tau \rightarrow \tau' \quad \Gamma \vdash e_2 : \tau}{\Gamma \vdash e_1 e_2 : \tau'} \quad &\text{(APP)} \\
\frac{\Gamma \vdash e : \tau}{\Gamma \vdash l \cdot e : \tau + \tau'} \quad &\text{(I+L)} & \frac{\Gamma \vdash e : \tau'}{\Gamma \vdash r \cdot e : \tau + \tau'} \quad &\text{(I+R)} \\
\frac{\Gamma \vdash e : \tau_1 + \tau_2 \quad \Gamma, x : \tau_1 \vdash e_1 : \tau \quad \Gamma, x : \tau_2 \vdash e_2 : \tau}{\Gamma \vdash \text{case } e \{ l \cdot x \Rightarrow e_1 \mid r \cdot x \Rightarrow e_2 \} : \tau} \quad &\text{(E+)} \\
\frac{\Gamma \vdash e : 0}{\Gamma \vdash \text{case } e \{ \} : \tau} \quad &\text{(E-0)}
\end{align*}
\]
No (I-0) rule.
\[
\begin{align*}
\frac{}{\Gamma \vdash \langle \rangle : 1} \quad &\text{(I-1)} & \frac{\Gamma \vdash e : 1 \quad \Gamma \vdash e_1 : \tau}{\Gamma \vdash \text{case } e \{ \langle \rangle \Rightarrow e_1 \} : \tau} \quad &\text{(E-1)}
\end{align*}
\]
\[
\begin{align*}
\frac{\Gamma \vdash e_1 : \tau_1 \quad \Gamma \vdash e_2 : \tau_2}{\Gamma \vdash \langle e_1, e_2 \rangle : \tau_1 \otimes \tau_2} \quad &\text{(I-\otimes)} \\
\frac{\Gamma \vdash e_0 : \tau_1 \otimes \tau_2 \quad \Gamma, x_1 : \tau_1, x_2 : \tau_2 \vdash e_1 : \tau}{\Gamma \vdash \text{case } e_0 \{ \langle x_1, x_2 \rangle \Rightarrow e_1 \} : \tau} \quad &\text{(E-\otimes)} \\
\frac{\Gamma \vdash e_1 : \tau_1 \quad \Gamma \vdash e_2 : \tau_2}{\Gamma \vdash \langle e_1, e_2 \rangle : \tau_1 \& \tau_2} \quad &\text{(I-\&)} \\
\frac{\Gamma \vdash e : \tau_1 \& \tau_2}{\Gamma \vdash e \cdot l : \tau_1} \quad &\text{(E-\&-L)} & \frac{\Gamma \vdash e : \tau_1 \& \tau_2}{\Gamma \vdash e \cdot r : \tau_2} \quad &\text{(E-\&-R)}
\end{align*}
\]
C Dynamics
\[
\frac{}{\lambda x.e \text{ val}} \ (\rightarrow\text{-VAL}) \quad \frac{e_2 \text{ val}}{(\lambda x.e_1)\,e_2 \mapsto [e_2/x]e_1} \ (\text{APP-RED})
\]
\[
\frac{e_1 \mapsto e'_1}{e_1\,e_2 \mapsto e'_1\,e_2} \ (\text{APP-STEP-L}) \quad \frac{e_1 \text{ val} \quad e_2 \mapsto e'_2}{e_1\,e_2 \mapsto e_1\,e'_2} \ (\text{APP-STEP-R})
\]
\[
\frac{e \text{ val}}{l \cdot e \text{ val}} (\text{VAL/L}) \quad \frac{e \text{ val}}{r \cdot e \text{ val}} (\text{VAL/R})
\]
\[
\frac{e \mapsto e'}{l \cdot e \mapsto l \cdot e'} (\mapsto/ \text{L}) \quad \frac{e \mapsto e'}{r \cdot e \mapsto r \cdot e'} (\mapsto/ \text{R})
\]
\[
\frac{e \mapsto e'}{\text{case } e \{ \} \mapsto \text{case } e' \{ \}} \ (\mapsto\text{case}_0)
\]
\[
\frac{e \mapsto e'}{\text{case } e \{ l \cdot x \Rightarrow e_1 \mid r \cdot y \Rightarrow e_2 \} \mapsto \text{case } e' \{ l \cdot x \Rightarrow e_1 \mid r \cdot y \Rightarrow e_2 \}} \ (\mapsto\text{case}_1)
\]
\[
\frac{v_1 \text{ val}}{\text{case } l \cdot v_1 \{ l \cdot x \Rightarrow e_1 \mid r \cdot y \Rightarrow e_2 \} \mapsto [v_1/x]e_1} (\mapsto \text{case}_l)
\]
\[
\frac{v_2 \text{ val}}{\text{case } r \cdot v_2 \{ l \cdot x \Rightarrow e_1 \mid r \cdot y \Rightarrow e_2 \} \mapsto [v_2/y]e_2} (\mapsto \text{case}_r)
\]
\[
\frac{}{\langle \rangle \text{ val}} \ (1\text{-VAL}) \quad \frac{v_1 \text{ val} \quad v_2 \text{ val}}{\langle v_1, v_2 \rangle \text{ val}} \ (\text{PAIR-VAL})
\]
\[
\frac{e_1 \mapsto e'_1}{\langle e_1, e_2 \rangle \mapsto \langle e'_1, e_2 \rangle} \ (\text{STEP-L}) \quad \frac{v_1 \text{ val} \quad e_2 \mapsto e'_2}{\langle v_1, e_2 \rangle \mapsto \langle v_1, e'_2 \rangle} \ (\text{STEP-R})
\]
\[
\frac{e_0 \mapsto e'_0}{\text{case } e_0 \{ \langle \rangle \Rightarrow e_1 \} \mapsto \text{case } e'_0 \{ \langle \rangle \Rightarrow e_1 \}} (\text{STEP-SUBJ-1})
\]
\[
\frac{e_0 \mapsto e'_0}{\text{case } e_0 \{ \langle x_1, x_2 \rangle \Rightarrow e_1 \} \mapsto \text{case } e'_0 \{ \langle x_1, x_2 \rangle \Rightarrow e_1 \}} (\text{STEP-SUBJ-2})
\]
\[
\frac{}{\text{case } \langle \rangle \{ \langle \rangle \Rightarrow e_1 \} \mapsto e_1} \ (\text{STEP-CASE-1})
\]
\[
\frac{v_1 \text{ val} \quad v_2 \text{ val}}{\text{case } \langle v_1, v_2 \rangle \{ \langle x_1, x_2 \rangle \Rightarrow e_1 \} \mapsto [v_1, v_2/x_1, x_2]e_1} \ (\text{STEP-CASE-2})
\]
A Distributed Implementation of Many-to-Many Synchronous Channels
Maarten VAN STEEN and Mark POLMAN
Erasmus University, Department of Computer Science
P.O. Box 1738, 3000 DR, Rotterdam, {steen,polman}@cs.few.eur.nl
Abstract. Within the ESPRIT project Hamlet, we have developed a graphical-based Application Design Language (ADL). This language allows a developer to primarily focus on the high-level design of parallel applications in terms of processes communicating by means of message-passing. ADL has been tailored to allow for automated generation of efficient parallel target code. In most cases, this goal can be relatively easily met. However, ADL also supports some high-level communication constructs which may be quite difficult to implement in the general case. In this paper, we discuss one particular implementation aspect, namely that of synchronous channels that allow communication between multiple senders and multiple receivers. Our attention focuses on a distributed, scalable solution for transputer-based systems. This solution is, in fact, also applicable to occam-like languages that permit guarded output statements.
1. Introduction
1.1. Background
The graphical Hamlet Application Design Language (ADL) has been developed to support the construction of parallel applications [4, 5, 7]. The language is based on a notion of processes communicating by means of message-passing. Its primary goal is to support the logical and physical design of applications that are to be executed on transputer-based systems. In particular, advanced constructs are provided by which a developer can easily devise the intricate communication structures that are inherent to parallel applications, without being limited to specific support for expressing communication by the actual target language. Among these constructs are so-called replicators for specifying geometrically structured collections of communicating processes and various communication media with rich semantics for expressing the exchange of messages between processes.
Although focusing on the design rather than the implementation of a parallel application may be important from a software engineering point of view, it does not alleviate development problems if there is hardly or no support for deriving efficient implementations. To that aim, ADL has been carefully tailored to allow efficiently executable code to be derived automatically from a design. This is particularly easily established in those cases when communication constructs have an evident counterpart in target languages. Automated generation of efficient code may become a problem when the relation between an ADL construct and the target language is less obvious. In the case of ADL, this problem arises with the general implementation of so-called synchronous channels for transputer-based systems.
Synchronous channels in ADL are very similar to occam channels [2], with the exception that in ADL a channel can be shared between multiple senders and multiple receivers. As we shall explain, this in fact is equivalent to allowing alternative selection of output channels,
which is not permitted in occam. In this paper, we describe an efficient distributed solution of these synchronous channels for transputer-based systems. In the remainder of this section we shall first specify the semantics of ADL synchronous channels, and show that most forms can indeed be implemented efficiently. The core of the paper is presented in Section 2 where we outline a distributed implementation as a general solution. The computational complexity of our solution is discussed in an informal manner in Section 3. We conclude by putting our present research into context in Section 4.
1.2. Problem description
An ADL synchronous channel connects a collection S of senders to a collection R of receivers. The semantics of synchronous channels are such that if at time instant $T$, $M = |S_T|$ senders and $N = |R_T|$ receivers want to communicate, $\min\{M, N\}$ (sender,receiver)-pairs are selected nondeterministically and a message is transferred from sender to receiver. If a process cannot immediately communicate, it will block until communication is possible (assuming that the process is willing to communicate only by means of a single synchronous channel). Consequently, if $M > N$ a total of $M - N$ nondeterministically selected senders will remain blocked, and likewise, if $M < N$ a total of $N - M$ nondeterministically selected receivers will remain blocked.
The relationship between these semantics and those of occam channels is more obvious than one might initially suspect. For example, when $M = N = 1$, there is no difference between the two languages. This also means that we can efficiently implement ADL channels directly by means of occam channels. When $M > 1$ and $N = 1$, our semantics are the same as that of an occam program in which a single receiver can alternatively select amongst $M$ input channels, one for each sender. The opposite situation occurs when $M = 1$ and $N > 1$: in that case, a single sender should select between $N$ output channels, one for each receiver. An implementation in this case requires only two channels per receiver, adding up to a total of $2N$ occam channels. Per receiver, there is one channel from the sender to the receiver for passing the actual message, and one channel from the receiver to the sender by which the receiver can announce its willingness to communicate. The sender then first selects the communicating receiver by means of an alt-statement on the channels for announcements, and then proceeds by sending its message through the regular channel which connects it to the selected receiver.
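The $M = 1$, $N > 1$ scheme just described can be sketched with queues standing in for channels. For brevity, the per-receiver announcement channels are collapsed into a single merged queue, so the sender's alt-statement becomes "take whichever announcement arrives first"; a faithful rendering would use the full $2N$ channels.

```python
import queue
import threading
import time

N = 3
data = [queue.Queue() for _ in range(N)]   # sender -> receiver i (message)
ready = queue.Queue()                      # merged announcement channels

results = []

def receiver(i):
    ready.put(i)                           # announce willingness to communicate
    results.append((i, data[i].get()))     # block until the message arrives

threads = [threading.Thread(target=receiver, args=(i,), daemon=True)
           for i in range(N)]
for t in threads:
    t.start()

# Sender: "alt" over the announcements, then send on the selected
# receiver's regular channel.
chosen = ready.get()
data[chosen].put("msg")

time.sleep(0.2)                            # let the chosen receiver finish
```

As the semantics require, exactly one (sender,receiver)-pair communicates and the remaining $N - 1$ receivers stay blocked on their data channels.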
But when $M > 1$ and $N > 1$, we may find ourselves in a difficult spot. When $M$ and $N$ are not too large, a centralized solution, in which an additional process is responsible for (1) selecting a (sender,receiver)-pair and (2) subsequently forwarding the message, may be acceptable (see [1] for further details). But as soon as the number of senders and receivers increases, the central process may turn out to be a bottleneck. Our goal at this point is therefore to present a completely distributed and scalable solution that is suited for transputer-based systems.
2. A distributed solution
In order to come to an efficient distributed implementation of the semantics of ADL synchronous channels, we organize the senders and receivers into a logical ring and essentially adopt a token-based protocol for exchanging data. In particular, we let an envelope circulate counter-clockwise around the ring, whereas a process that requires the envelope will issue a request clockwise around the ring. The envelope can either be full or empty. This global architecture is shown in Figure 1.
Each process essentially consists of two components. The subprocess (called the main
2.1. The behavior of a process
The global behavior of our solution can now be explained by taking a closer look at a receiver and sender, respectively. To that aim, we say that a process becomes active if it wants to either send or receive data. Otherwise, the process is said to be inactive.
2.1.1. A receiver
When a receiver becomes active, it is prepared to accept any incoming data. In our solution, this means that the receiver should get a hold of a full envelope. Assuming that the envelope is currently not at the receiver, the receiver will issue a request for a full envelope to its left-hand neighbor. From that moment on, it simply waits until the envelope eventually arrives. As soon as the envelope arrives, or when the envelope was already located at the receiver, we need to distinguish two situations:
1. The envelope is full: in this case, the receiver can empty the envelope, and communication is considered to be finished. As soon as it knows that there is someone waiting for an empty envelope, the envelope is forwarded to its right-hand neighbor.
2. The envelope is empty: in this case, the receiver is in possession of an envelope that needs yet to be filled by a sender. Hence, it will first need to wait for a request from its right-hand neighbor, and then subsequently forward the envelope. In order to ensure that the envelope will eventually return, the receiver marks it.
The mechanisms for forwarding an envelope will be described below. An important observation is that we never forward the envelope unless there is a good reason to do so. In particular, the receiver should know for certain that there is a sender on the ring willing to fill the envelope. In this way, we avoid the situation of a continuously circulating envelope – a solution generally adopted for many token-based protocols.
2.1.2. A sender
The sender’s situation is almost symmetrical to that of a receiver. When becoming active, a sender should get hold of an empty envelope. Again, if we assume that the envelope is
not at the sender’s site, the sender will issue a request for the empty envelope to its left-hand neighbor and wait until the envelope arrives. When the envelope arrives, or when it was already available when the sender became active, two situations are to be considered:
1. The envelope is empty: in this case, the sender may fill the envelope, and subsequently wait until it receives a request for a full envelope (which can only come from a receiver). As soon as the sender is certain that there is a receiver on the ring, it forwards the envelope and the communication is considered to be finished.
2. The envelope is full: this can only happen when there was another sender on the ring as well, which had previously filled the envelope. In that case, the sender should forward the envelope to its right-hand neighbor. Similar to the case of a receiver, the sender marks the envelope and passes it on.
Again, note that we only forward the envelope if it is really needed. Another point that we shall explain further below, is that the sender will not issue a request when it finds the envelope already filled. Instead, the envelope is marked.
2.1.3. An inactive process
Of course, a process need not be active at all. In that case, the envelope is simply forwarded if there is a need to do so. In other words, if the envelope is empty there should be a sender on the ring, or when it is full forwarding only takes place when there is a receiver. Note, by the way, that an active process will always forward the envelope: the only reason it got to the process was because there was a request for it.
2.2. Forwarding the envelope
Let us now take a closer look at the way the envelope is circulated across the ring. To that aim, we consider how requests are forwarded, and how the envelope is forwarded. Also, we consider the actual acceptance of an envelope.
2.2.1. Forwarding requests
As we have mentioned, a process can issue two types of requests: one for an empty envelope and one for a full envelope. Whenever a process receives a request (which can only come from its right-hand neighbor), it will forward this request if and only if (1) the process currently does not have the envelope in its possession, and (2) it has not previously forwarded a similar request. In this way, requests do not accumulate around the ring: a process records only that the envelope has been requested, even though several processes may actually require it.
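The request-forwarding rule can be captured by a small predicate. `Node`, its fields, and the `"empty"`/`"full"` request kinds are illustrative names for this sketch, not taken from the actual implementation.

```python
class Node:
    """Per-process bookkeeping for the request-forwarding rule."""

    def __init__(self):
        self.has_envelope = False
        self.forwarded = {"empty": False, "full": False}

    def on_request(self, kind):
        """Return True iff the request should be passed to the left neighbor."""
        if self.has_envelope or self.forwarded[kind]:
            return False            # absorb: envelope is here, or duplicate
        self.forwarded[kind] = True  # remember we already forwarded this kind
        return True
```

Duplicate requests of the same kind are absorbed, and the envelope holder never forwards requests; both properties follow directly from the two conditions in the text.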
2.2.2. Forwarding the envelope
Whenever a process wants to forward the envelope, a necessary and sufficient condition is that the process is certain that someone is actually in need of the envelope. Let us first assume that this is the case, so that the process will forward the envelope. The following three situations need to be distinguished.
Case 1: The process itself is currently inactive. Assume the envelope is empty (the case of a full envelope is analogous), and that the process knows there is a sender in need of the envelope. If a request has arrived at the process for a full envelope, the envelope is marked by setting a boolean variable requestFull to true. This variable is located on the envelope. Likewise, there is also a boolean variable requestEmpty which is set whenever a
full envelope is forwarded, but when there is also an outstanding request at the forwarding process for an empty envelope. The envelope is then forwarded, and all administration local to the forwarding process regarding previously issued requests is cleared.
Case 2: The process is an active receiver. Again, let us first assume that the envelope is empty. In this case, the process will forward the envelope when it knows that there is a sender on the ring. However, it should also ensure that the envelope eventually returns, preferably having been filled in the meantime. To that aim, it increments a counter numOfRcvrs located on the envelope, indicating the number of active receivers that have passed the envelope on while it was empty. Consequently, as long as this counter is non-zero, it is known that there is a receiver somewhere on the ring that is in need of a full envelope.
If the envelope is full when it arrives at the receiver, the receiver will first empty it, and subsequently become inactive. Forwarding then proceeds according to Case 1.
Case 3: The process is an active sender. The special situation that we need to consider here is when the sender receives an already filled envelope. This can only happen if there was already a receiver on the ring, so that the envelope should always be forwarded. However, the sender should also indicate that it is still in need of an empty envelope. Analogously to the situation of an active receiver with an empty envelope, the sender will increment a counter numOfSndrs located on the envelope. This counter reflects the number of senders that have passed on a filled envelope but are in need of an empty one.
When an empty envelope is passed to the sender, it will fill it and wait for a receiver. The envelope is then forwarded and the process becomes inactive again.
The necessary and sufficient conditions for forwarding an envelope can now be stated more explicitly: (1) the envelope is empty, and either numOfSndrs is non-zero, or requestEmpty is true, or (2) the envelope is full, and either numOfRcvrs is non-zero, or requestFull is true. After possibly updating the values for the four markers on the envelope, the envelope is forwarded and local administration with respect to outstanding requests is cleared.
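The two conditions collapse into a single predicate; a sketch in Python (not the authors' code), with the envelope modelled as a plain dict carrying its four markers plus a fullness flag:

```python
# Forwarding condition of Section 2.2.2: an empty envelope moves only
# toward a sender, a full one only toward a receiver.
def should_forward(env):
    if env["full"]:
        return env["numOfRcvrs"] > 0 or env["requestFull"]
    return env["numOfSndrs"] > 0 or env["requestEmpty"]
```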
2.2.3. Acceptance of an envelope
The last behavioral aspect we need to consider is the actual acceptance of the envelope and updates of its markers. Again, we make a distinction between receivers and senders. If a filled envelope arrives at a receiver, the receiver will first decrement the counter numOfRcvrs if it had previously incremented it. Also, the marker requestFull is set to false if no request for a filled envelope had arrived from its right-hand neighbor. The envelope can then be emptied, after which forwarding is considered as described above. Likewise, if an empty envelope arrives at a sender, the process will decrement numOfSndrs if it had previously incremented it, and also clear the marker requestEmpty when there are no outstanding requests. Then, the envelope is filled and behavior proceeds as mentioned above.
2.3. Implementation aspects
As mentioned, the distributed solution can be implemented by distinguishing three threads per process, cooperating by means of shared data. The main thread represents the general behavior of a sender or receiver, whereas the two threads named put thread and get thread, respectively, form the core of the algorithm. The put thread is responsible for transmitting any information on the ring. In particular, it takes care of forwarding requests and the envelope. By contrast, the get thread is responsible for accepting the envelope from the process’ lefthand neighbor, or for receiving requests from its righthand neighbor. The reason for making an
explicit distinction between the two has everything to do with the synchronous nature of the communication links of our target language. To explain, consider the following situation.
Suppose a process $P_i$ has just become active, and decides to issue a request for the envelope by sending a request to its lefthand neighbor $P_{i-1}$. If, by that time, process $P_{i-1}$ has the envelope, which it should forward on account of the fact that either numOfRcvrs or numOfSndrs was non-zero, $P_{i-1}$ may simultaneously decide to forward the envelope to $P_i$. We then find ourselves in the situation that $P_i$ and $P_{i-1}$ want to send information to each other simultaneously. Because we are dealing with synchronous communication, we will have created a deadlock. Deadlock in this case can be avoided if we implement a form of asynchronous communication by allowing a separate thread to deal with all incoming information.
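The effect of that extra thread can be imitated in any language with buffered queues; a hypothetical Python sketch in which two neighbors "send simultaneously" without deadlocking:

```python
import queue
import threading

# Hypothetical illustration of the deadlock fix: each process owns an inbox
# drained independently, so a send never blocks on the peer being ready.
def make_process(name, inbox, peer_inbox, log):
    def run():
        peer_inbox.put((name, "envelope-or-request"))  # "send" without blocking
        log.append(inbox.get(timeout=1))               # then read own inbox
    return threading.Thread(target=run)

inbox_a, inbox_b, log = queue.Queue(), queue.Queue(), []
ta = make_process("A", inbox_a, inbox_b, log)
tb = make_process("B", inbox_b, inbox_a, log)
ta.start(); tb.start(); ta.join(); tb.join()
# Both messages are delivered although A and B sent at the same time.
```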
The algorithm has been implemented as a state-transition machine on a per-process basis of which state information is maintained by cooperation of the get and put thread. The state-transition diagram is depicted in Figure 2. The shaded states represent the situation that a process has the envelope in its possession; the other states reflect that the envelope is somewhere else.
Figure 2: State-transition diagram of the algorithm (shaded states: the process holds the envelope).
The following states and most important transitions are distinguished:
- **NULLNODE**: This represents an inactive process that will generally only forward requests and the envelope. The NULLNODE state is also the initial state of a process. Only one process will, of course, start in the situation that it possesses the envelope.
- **SNDR-I**: A process enters this state the instant it becomes an active sender. The process remains in this state until the envelope is in its possession and is ready to be filled by the main thread.
- **SNDR-II**: A sending process that is currently filling the envelope, or otherwise waiting for a receiver to announce itself, remains in this state. This state can only be entered from state SNDR-I. As soon as the filled envelope can be forwarded to a receiver, the process continues in state NULLNODE.
- **RCVR-I**: Similar to state SNDR-I, a receiver enters this state the instant it becomes active. It remains in state RCVR-I until the filled envelope has arrived.
- **RCVR-II**: While the envelope is being emptied, an active receiver remains in this state. Regardless of whether the envelope can be forwarded, the process continues in one of the two NULLNODE states.
The actual implementation of the algorithm is now straightforward. The only aspect that needs some attention is how we can combine the communication between threads (which is
through shared data), and the communication between processes (which is across links by means of message-passing). More specifically, we need to find a way by which a get thread in a particular process can synchronize by means of a single mechanism on (1) information from one of the other threads in that process, and (2) information from other processes. A solution is found by using a local channel between the main thread and the get thread. This channel is used to inform the get thread that the main thread wants to either communicate, or that it is finished with communication. Using an alt-statement, it is then possible for the get thread to selectively wait on any incoming information.
We shall not further discuss implementation details, as these are now straightforward. Instead, the interested reader is referred to [6] where detailed skeleton code is presented\(^1\).
3. Complexity analysis
In this section we come to an informal and experimental complexity analysis of the algorithm. To that aim, we make a distinction with respect to the number of processes that are willing to communicate at a certain time. As we have explained above, we assume that each process generally resides in either a state in which it has no need to send or receive a message, or in a state in which it requires to communicate. When most processes are not willing to communicate, there will hardly be any network traffic. On the other hand, when the behavior of processes is predominated by the fact that they want to communicate, network traffic will be considerable but also rather unpredictable. For example, when there are many senders and receivers on the ring, it can be expected that the number of hops that a filled envelope has to make in order to deliver a message from a sender to a receiver is relatively low. Likewise, the number of hops an empty envelope has to make before it reaches a sender that can subsequently fill it, can also be expected to be low.
3.1. Analysis of a lightly loaded system
In the case when processes are hardly ever willing to send or receive a message, the analysis of the complexity of the algorithm is rather straightforward. To that aim, denote by \( N_{\text{proc}} \) the total number of processes, which, of course, is also the length of the ring. Denote by \( \text{loc}(E) \) the location of the envelope on the ring when there are initially no processes willing to communicate. Locations on the ring are clockwise numbered \( 0 \ldots N_{\text{proc}} - 1 \). Similarly, we use the notations \( \text{loc}(S) \) and \( \text{loc}(R) \) to denote the locations of a sender and a receiver, respectively. We make a further distinction between the following two situations:
Figure 3: The SRE situation (a) and the SER situation (b).
SRE: In this case, we assume that, if we travel the ring clockwise starting at the sender, we encounter the receiver before the envelope, as shown in Figure 3(a).
\(^{1}\)The report is available on ftp-site ftp.cs.few.eur.nl.
SER: In this case, we assume the envelope is located between the sender and the receiver, as shown in Figure 3(b).
Let $\delta^+(R \rightarrow E)$ ($\delta^-(R \rightarrow E)$) denote the distance between the receiver and the envelope, expressed in the number of links that need to be crossed when we travel (counter-)clockwise from the receiver to the envelope. Similarly, we use the notations $\delta^+(S \rightarrow R)$ ($\delta^-(S \rightarrow R)$) and $\delta^+(S \rightarrow E)$ ($\delta^-(S \rightarrow E)$) to denote the (counter-)clockwise measured distance from the sender to the receiver, and from the sender to the envelope, respectively. Because we are assuming that there are initially no senders and receivers on the ring, the envelope at first instance will be empty.
Let us first consider situation SRE. SRE reflects that both the sender and the receiver are now on the ring and that the envelope was originally, i.e. when there were no communicating processes on the ring, located between the receiver and the sender. Because the locations of either sender, receiver, and envelope are arbitrary (provided the ordering dictated by SRE), we may assume that
$$\delta^+(R \rightarrow E) = \delta^+(E \rightarrow S) = \delta^+(S \rightarrow R) = \frac{1}{3}N_{proc}$$
Two cases need to be considered further:
- **SRE-a:** *The sender arrived before the receiver.* In this case, we may assume that the envelope reached the sender before the receiver entered the ring. This implies that the sender’s request for the envelope needed to travel a distance of $\delta^+(S \rightarrow E) = \frac{2}{3}N_{proc}$, whereas the receiver’s request for the envelope traveled a distance of $\delta^+(R \rightarrow S) = \frac{2}{3}N_{proc}$. Consequently, the two requests jointly traveled a total distance of $\frac{4}{3}N_{proc}$ links. The envelope, on the other hand, needed to travel a total distance of $\delta^-(E \rightarrow S) = \frac{2}{3}N_{proc}$ from its original location to the sender, and, after the receiver entered the ring, another distance of $\delta^-(S \rightarrow R) = \frac{2}{3}N_{proc}$ links from the sender to the receiver.
- **SRE-b:** *The receiver arrived before the sender.* In this case, the receiver’s request (for a full envelope) will first have to travel a distance of $\delta^+(R \rightarrow E) = \frac{1}{3}N_{proc}$ links where it arrives at the location of the envelope. At that point, nothing further happens due to the fact that the envelope is still empty. As soon as the sender enters the ring, its request for the empty envelope will have to travel a total distance of $\delta^+(S \rightarrow E) = \frac{2}{3}N_{proc}$. As soon as the request arrives, the envelope will be transferred over a total distance of $\delta^-(E \rightarrow S) = \frac{2}{3}N_{proc}$, marked with a request to forward it to the receiver as soon as it has been filled. After this has been done, it travels another $\delta^-(S \rightarrow R) = \frac{2}{3}N_{proc}$ links to the receiver adding up to a total distance of $\frac{4}{3}N_{proc}$.
In a completely analogous way we can derive the complexity for the SER situation. Using the same distinction between cases SER-a (when the sender arrives before the receiver) and SER-b (when the receiver arrives before the sender), we can summarize our analyses as shown in Table 1.
Either of the four cases (SRE-a, SRE-b, SER-a, and SER-b) can occur with equal probability. We can then draw the following conclusion:
Each message exchange between a sender and a receiver in a lightly loaded system, requires on average a total of $N_{proc}$ request transfers, and $N_{proc}$ envelope transfers.
Table 1: Average number of hops for requests and the envelope per message exchange.

| | requests | envelope |
|---|---|---|
| SRE-a | $\frac{4}{3}N_{\text{proc}}$ | $\frac{4}{3}N_{\text{proc}}$ |
| SRE-b | $N_{\text{proc}}$ | $\frac{4}{3}N_{\text{proc}}$ |
| SER-a | $\frac{2}{3}N_{\text{proc}}$ | $\frac{2}{3}N_{\text{proc}}$ |
| SER-b | $N_{\text{proc}}$ | $\frac{2}{3}N_{\text{proc}}$ |
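The stated average can be checked mechanically from the four cases (distances in units of $N_{proc}$):

```python
from fractions import Fraction

# Per-exchange travel distances from Table 1, as (requests, envelope),
# in units of N_proc.
cases = {
    "SRE-a": (Fraction(4, 3), Fraction(4, 3)),
    "SRE-b": (Fraction(1, 1), Fraction(4, 3)),
    "SER-a": (Fraction(2, 3), Fraction(2, 3)),
    "SER-b": (Fraction(1, 1), Fraction(2, 3)),
}
avg_requests = sum(r for r, _ in cases.values()) / len(cases)
avg_envelope = sum(e for _, e in cases.values()) / len(cases)
# Both averages come out at exactly 1, i.e. N_proc transfers of each kind.
```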
3.2. Analysis of heavily loaded systems
In the case of heavily loaded systems, we come to a completely different situation. On average, we may expect that the number of request and envelope transfers will decrease as more senders and receivers enter the ring. The reason is quite simple. In the first place, a request for an envelope need not always be forwarded to its initial destination. Instead, as soon as it reaches a process that had issued a similar request, its transfer halts. Likewise, the envelope may be successfully intercepted by a process that had entered the ring after the envelope had started to travel towards its initial destination. Rather than providing a mathematical analysis, we have run a number of simulations in order to get an impression of the behavior of the algorithm. How these simulations have been conducted is discussed in [6]. Here, we shall only briefly discuss their results.
Figure 4: Simulation results for variously loaded systems with sendprob = 0.50.
Figure 4 shows the result of our simulations when the number of senders and receivers was equally balanced. In Figure 4(a) we have visualized how many links the envelope will, on average, travel per message exchange between a sender and a receiver. The curve marked “A” shows the situation in a lightly loaded system. As can be seen, this result corresponds with our informal analysis given above. Curves “B” and “C” reflect the situation when network traffic increases. Note that in extremely heavily loaded systems, the number of envelope transfers per link tends to be almost constant. Figure 4(b) shows a similar case for the number of request transfers per link, for each message exchange.
4. Discussion
In this paper we have concentrated on the implementation of synchronous channels that are shared between multiple senders and receivers. In particular, attention has focused
on a distributed, scalable solution for transputer-based systems. This solution is, in fact, also a solution for an implementation of occam-like languages that support guarded output statements. In particular, it is not difficult to see that if our target language supported the alternative selection of output statements, we could have easily derived an implementation in the form of the following skeleton code (we adopt an occam-like notation; the pseudo-variable self refers to the identity of the process in which it is used):
```
process receiver ( [M][N] chan of message channel )
alt i = 0 for M
channel[i][self] ? data
process sender ( [M][N] chan of message channel )
alt j = 0 for M
channel[self][j] ! data
```
The question whether or not guarded output statements should be provided by a language has generally been a difficult one to answer. Because a generally efficient implementation is hard to derive, most language designers have omitted them. To date, only a few languages support guarded output statements (e.g. Joyce [3]). The reason for including a similar concept in ADL is that, from the perspective of application design, the availability of high-level communication constructs is a desirable feature. Later, when a developer is putting more effort into deriving an efficient implementation, he or she might choose to alter a design so that communication constructs that are difficult to implement are avoided altogether. This is the right thing to do during the implementation; it is not something that should bother a developer during the design phase.
A first version of ADL has been implemented as part of the Hamlet Design Entry System. This version allows for full generation of simulation code, by which the overall behavior of an application can be simulated for various hardware configurations. At present, our attention is directed towards the generation of actual parallel target code. Because many implementation decisions are application-dependent, code generation is supported in such a way that a developer can highly influence the generation process. The solution presented in this paper for the implementation of synchronous channels, will thus only be one out of several that can be selected when generating code.
References
Package ‘Rarr’
March 5, 2024
Title Read Zarr Files in R
Version 1.2.0
Description The Zarr specification defines a format for chunked, compressed, N-dimensional arrays. Its design allows efficient access to subsets of the stored array, and supports both local and cloud storage systems. Rarr aims to implement this specification in R with minimal reliance on external tools or libraries.
License MIT + file LICENSE
URL https://github.com/grimbough/Rarr
BugReports https://github.com/grimbough/Rarr/issues
Encoding UTF-8
Roxygen list(markdown = TRUE)
RoxygenNote 7.2.3
Imports jsonlite, httr, stringr, R.utils, utils, paws.storage, methods
Suggests BiocStyle, covr, knitr, tinytest, mockery
VignetteBuilder knitr
SystemRequirements GNU make
biocViews DataImport
git_url https://git.bioconductor.org/packages/Rarr
git_branch RELEASE_3_18
git_last_commit e9d1a91
git_last_commit_date 2023-10-24
Repository Bioconductor 3.18
Date/Publication 2024-03-04
Author Mike Smith [aut, cre] (<https://orcid.org/0000-0002-7800-3848>)
Maintainer Mike Smith <grimbough@gmail.com>
R topics documented:
- Rarr-package
- .compress_and_write_chunk
- .create_replace_call
- .decompress_chunk
- .format_chunk
- .get_credentials
- .normalize_array_path
- .parse_datatype
- compressors
- create_empty_zarr_array
- get_chunk_size
- read_array_metadata
- read_chunk
- read_zarr_array
- update_fill_value
- update_zarr_array
- write_zarr_array
Rarr-package Rarr: Read Zarr Files in R
Description
The Zarr specification defines a format for chunked, compressed, N-dimensional arrays. Its design allows efficient access to subsets of the stored array, and supports both local and cloud storage systems. Rarr aims to implement this specification in R with minimal reliance on external tools or libraries.
Author(s)
Maintainer: Mike Smith <grimbough@gmail.com> (ORCID)
See Also
Useful links:
- https://github.com/grimbough/Rarr
- Report bugs at https://github.com/grimbough/Rarr/issues
.compress_and_write_chunk
Compress and write a single chunk
Description
Compress and write a single chunk
Usage
```r
.compress_and_write_chunk(
input_chunk,
chunk_path,
compressor = use_zlib(),
data_type_size,
is_base64 = FALSE
)
```
Arguments
- **input_chunk**: Array containing the chunk data to be compressed. Will be converted to a raw vector before compression.
- **chunk_path**: Character string giving the path to the chunk that should be written.
- **compressor**: A "compressor" function that returns a list giving the details of the compression tool to apply. See compressors for more details.
- **data_type_size**: An integer giving the size of the original datatype. This is passed to the blosc algorithm, which seems to need it to achieve any compression.
- **is_base64**: When dealing with Py_unicode strings we convert them to base64 strings for storage in our intermediate R arrays. This argument indicates if base64 is in use, because the conversion to raw in .as_raw should be done differently for base64 strings vs other types.
Value
Returns TRUE if writing is successful. Mostly called for the side-effect of writing the compressed chunk to disk.
.create_replace_call Create a string of the form x[idx[[1]], idx[[2]]] <- y for an array x where the number of dimensions is variable.
Description
Create a string of the form x[idx[[1]], idx[[2]]] <- y for an array x where the number of dimensions is variable.
Usage
.create_replace_call(x_name, idx_name, idx_length, y_name)
Arguments
x_name Name of the object to have items replaced
idx_name Name of the list containing the indices
idx_length Length of the list specified in idx_name
y_name Name of the object containing the replacement items
Value
A character vector of length one containing the replacement commands. This is expected to be passed to parse() |> eval().
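The string-building itself is simple; a Python sketch (hypothetical, the package's real helper is written in R) that reproduces the documented output:

```python
# Build the R replacement call "x[idx[[1]], idx[[2]]] <- y" for an array
# whose number of dimensions is only known at run time.
def create_replace_call(x_name, idx_name, idx_length, y_name):
    subscripts = ", ".join(
        f"{idx_name}[[{i}]]" for i in range(1, idx_length + 1)
    )
    return f"{x_name}[{subscripts}] <- {y_name}"

create_replace_call("x", "idx", 2, "y")  # 'x[idx[[1]], idx[[2]]] <- y'
```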
.decompress_chunk Decompress a chunk in memory
Description
R has internal decompression tools for zlib, bz2 and lzma compression. We use external libraries bundled with the package for blosc and lz4 decompression.
Usage
.decompress_chunk(compressed_chunk, metadata)
Arguments
compressed_chunk Raw vector holding the compressed bytes for this chunk.
metadata List produced by read_array_metadata() with the contents of the .zarray file.
**Value**
An array with the number of dimensions specified in the Zarr metadata. In most cases it will have the same size as the Zarr chunk, however in the case of edge chunks, which overlap the extent of the array, the returned chunk will be smaller.
.format_chunk
**Description**
When a chunk is decompressed it is returned as a vector of raw bytes. This function uses the array metadata to select how to convert the bytes into the final datatype and then converts the resulting output into an array of the appropriate dimensions, including re-ordering if the original data is in row-major order.
**Usage**
`.format_chunk(decompressed_chunk, metadata, alt_chunk_dim)`
**Arguments**
- `decompressed_chunk` Raw vector holding the decompressed bytes for this chunk.
- `metadata` List produced by `read_array_metadata()` holding the contents of the .zarray file.
- `alt_chunk_dim` The dimensions of the array that should be created from this chunk. Normally this will be the same as the chunk shape in metadata, but when dealing with edge chunks, which may overlap the true extent of the array, the returned array should be smaller than the chunk shape.
**Value**
A list of length 2. The first element is the formatted chunk data. The second is an integer of length 1, indicating if warnings were encountered when converting types.
If "chunk_data" is larger than the space remaining in destination array i.e. it contains the overflowing elements, these will be trimmed when the chunk is returned to `read_data()`
.get_credentials
Description
This is a modified version of paws.storage:::get_credentials(). It is included to prevent using the ::: operator. Look at that function if things stop working.
Usage
.get_credentials(brand)
Arguments
- credentials: Content stored at .internal$config$credentials in an object created by paws.storage:::s3().
Value
A credentials list to be reinserted into a paws.storage s3 object. If no valid credentials are found this function will error, which is expected and is caught by .check_credentials.
.normalize_array_path
Description
Taken from https://zarr.readthedocs.io/en/stable/spec/v2.html#logical-storage-paths
Usage
.normalize_array_path(path)
Arguments
- path: Character vector of length 1 giving the path to be normalised.
Value
A character vector of length 1 containing the normalised path.
**.parse_datatype**
*Parse the data type encoding string*
**Description**
Parse the data type encoding string
**Usage**
```
.parse_datatype(typestr)
```
**Arguments**
- **typestr**
The datatype encoding string. This is in the Numpy array typestr format.
**Value**
A list of length 4 containing the details of the data type.
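The typestr format packs byte order, kind, and item size into one short string (e.g. "<f8" is a little-endian 8-byte float); a hypothetical Python equivalent of the parse:

```python
# Parse a NumPy typestr such as "<f8": byte order, type kind, size field.
BYTE_ORDER = {"<": "little", ">": "big", "|": "not applicable"}
KIND = {"b": "boolean", "i": "integer", "u": "unsigned", "f": "float",
        "S": "bytes", "U": "unicode"}

def parse_datatype(typestr):
    return {
        "byteorder": BYTE_ORDER[typestr[0]],
        "kind": KIND[typestr[1]],
        "size": int(typestr[2:]),  # the numeric size field
    }

parse_datatype("<f8")  # {'byteorder': 'little', 'kind': 'float', 'size': 8}
```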
---
**compressors**
*Define compression tool and settings*
**Description**
These functions select a compression tool and its setting when writing a Zarr file
**Usage**
```
use_blosc(cname = "lz4")
use_zlib(level = 6L)
use_gzip(level = 6L)
use_bz2(level = 6L)
use_lzma(level = 9L)
use_lz4()
```
**Arguments**
- **cname**
Blosc is a 'meta-compressor' providing access to several compression algorithms. This argument defines which compression tool should be used. Valid options are: 'lz4', 'lz4hc', 'blosclz', 'zstd', 'zlib', 'snappy'.
- **level**
Specify the compression level to use.
create_empty_zarr_array
Create an (empty) Zarr array
Description
Create an (empty) Zarr array
Usage
```r
create_empty_zarr_array(
  zarr_array_path,
  dim,
  chunk_dim,
  data_type,
  order = "F",
  compressor = use_zlib(),
  fill_value,
  nchar,
  dimension_separator = "."
)
```
**Arguments**
- `zarr_array_path` Character vector of length 1 giving the path to the new Zarr array.
- `dim` Dimensions of the new array. Should be a numeric vector with the same length as the number of dimensions.
- `chunk_dim` Dimensions of the array chunks. Should be a numeric vector with the same length as the `dim` argument.
- `data_type` Character vector giving the data type of the new array. Currently this is limited to standard R data types. Valid options are: "integer", "double", "character". You can also use the analogous NumPy formats: "<i4", "<f8", "|S". If this argument is not provided the `fill_value` will be used to determine the datatype.
- `order` Define the layout of the bytes within each chunk. Valid options are 'column', 'row', 'F' & 'C'. 'column' or 'F' will specify "column-major" ordering, which is how R arrays are arranged in memory. 'row' or 'C' will specify "row-major" order.
- `compressor` What (if any) compression tool should be applied to the array chunks. The default is to use zlib compression. Supplying `NULL` will disable chunk compression. See `compressors` for more details.
- `fill_value` The default value for uninitialized portions of the array. Does not have to be provided, in which case the default for the specified data type will be used.
- `nchar` For `datatype = "character"` this parameter gives the maximum length of the stored strings. It is an error not to specify this for a character array, but it is ignored for other data types.
- `dimension_separator` The character used to separate the dimensions in the names of the chunk files. Valid options are limited to "." and "/".
**Value**
If successful returns (invisibly) `TRUE`. However this function is primarily called for the side effect of initialising a Zarr array location and creating the `.zarray` metadata.
**See Also**
- `write_zarr_array()`, `update_zarr_array()`
**Examples**
```r
new_zarr_array <- file.path(tempdir(), "temp.zarr")
create_empty_zarr_array(new_zarr_array,
  dim = c(20, 10), chunk_dim = c(10, 5),
  data_type = "double"
)
```
### get_chunk_size
**Determine the size of chunk in bytes**
**Description**
Determine the size of chunk in bytes
**Usage**
```r
get_chunk_size(datatype, dimensions)
```
**Arguments**
- `datatype`: A list of details for the array datatype. Expected to be produced by `.parse_datatype()`.
- `dimensions`: A list containing the dimensions of the chunk. Expected to be found in a list produced by `read_array_metadata()`.
**Value**
An integer giving the size of the chunk in bytes
---
### read_array_metadata
**Read the .zarray metadata file associated with a Zarr array**
**Description**
Read the .zarray metadata file associated with a Zarr array
**Usage**
```r
read_array_metadata(path, s3_client = NULL)
```
**Arguments**
- `path`: A character vector of length 1. This provides the path to a Zarr array or group of arrays. This can either be on a local file system or on S3 storage.
- `s3_client`: A list representing an S3 client. This should be produced by `paws::s3()`.
**Value**
A list containing the array metadata
**read_chunk**
**Read a single Zarr chunk**
**Description**
Read a single Zarr chunk
**Usage**
```r
read_chunk(
zarr_array_path,
chunk_id,
metadata,
s3_client = NULL,
alt_chunk_dim = NULL
)
```
**Arguments**
- **zarr_array_path**
A character vector of length 1, giving the path to the Zarr array
- **chunk_id**
A numeric vector or single data.frame row with length equal to the number of dimensions of a chunk.
- **metadata**
List produced by `read_array_metadata()` holding the contents of the `.zarray` file. If missing this function will be called automatically, but it is probably preferable to pass the meta data rather than read it repeatedly for every chunk.
- **s3_client**
Object created by `paws.storage::s3()`. Only required for a file on S3. Leave as `NULL` for a file on local storage.
- **alt_chunk_dim**
The dimensions of the array that should be created from this chunk. Normally this will be the same as the chunk shape in `metadata`, but when dealing with edge chunks, which may overlap the true extent of the array the returned array should be smaller than the chunk shape.
**Value**
A list of length 2. The entries are named "chunk_data" and "warning". The first is an array containing the decompressed chunk values; the second is an integer indicating whether any overflow warnings were generated while reading the chunk into an R data type.
read_zarr_array
Description
Read a Zarr array
Usage
read_zarr_array(zarr_array_path, index, s3_client)
Arguments
zarr_array_path
Path to a Zarr array. A character vector of length 1. This can either be a location on a local file system or the URI to an array in S3 storage.
index
A list of the same length as the number of dimensions in the Zarr array. Each entry in the list provides the indices in that dimension that should be read from the array. Setting a list entry to NULL will read everything in the associated dimension. If this argument is missing the entirety of the Zarr array will be read.
s3_client
Object created by `paws.storage::s3()`. Only required for a file on S3. Leave as NULL for a file on local storage.
Value
An array with the same number of dimensions as the input array. The extent of each dimension will correspond to the length of the values provided to the index argument.
Examples
```
## Using a local file provided with the package
## This array has 3 dimensions
z1 <- system.file("extdata", "zarr_examples", "row-first", "int32.zarr", package = "Rarr")
## read the entire array
read_zarr_array(zarr_array_path = z1)
## extract values for first 10 rows, all columns, first slice
read_zarr_array(zarr_array_path = z1, index = list(1:10, NULL, 1))
## using a Zarr file hosted on Amazon S3
## This array has a single dimension with length 576
z2 <- "https://power-analysis-ready-datastore.s3.amazonaws.com/power_901_constants.zarr/lon/"
```
update_fill_value
Convert special fill values from strings to numbers
Description
Special case fill values (NaN, Inf, -Inf) are encoded as strings in the Zarr metadata. R will create arrays of type character if these are defined and the chunk isn’t present on disk. This function updates the fill value to be R’s representation of these special values, so numeric arrays are created. A "null" fill value implies no missing values. We set this to NA as you can’t create an array of type NULL in R. It should have no impact if there are really no missing values.
Usage
update_fill_value(metadata)
Arguments
metadata A list containing the array metadata. This should normally be generated by running read_json() on the .zarray file.
Value
Returns a list with the same structure as the input. The returned list will be identical to the input, unless the `fill_value` entry was one of: `NULL`, `"NaN"`, `"Infinity"` or `"-Infinity"`.
update_zarr_array
Update (a subset of) an existing Zarr array
Description
Update (a subset of) an existing Zarr array
Usage
update_zarr_array(zarr_array_path, x, index)
**Arguments**
- `zarr_array_path`
Character vector of length 1 giving the path to the Zarr array that is to be modified.
- `x`
The R array (or object that can be coerced to an array) that will be written to the Zarr array.
- `index`
A list with the same length as the number of dimensions of the target array. This argument indicates which elements in the target array should be updated.
**Value**
The function is primarily called for the side effect of writing to disk. Returns (invisibly) `TRUE` if the array is successfully updated.
**Examples**
```r
## first create a new, empty, Zarr array
new_zarry_array <- file.path(tempdir(), "new_array.zarr")
create_empty_zarr_array(
zarr_array_path = new_zarry_array, dim = c(20, 10),
chunk_dim = c(10, 5), data_type = "double"
)
## create a matrix smaller than our Zarr array
small_matrix <- matrix(runif(6), nrow = 3)
## insert the matrix into the first 3 rows, 2 columns of the Zarr array
update_zarr_array(new_zarry_array, x = small_matrix, index = list(1:3, 1:2))
## reading back a slightly larger subset,
## we can see only the top left corner has been changed
read_zarr_array(new_zarry_array, index = list(1:5, 1:5))
```
---
### write_zarr_array
**Write an R array to Zarr**
**Description**
Write an R array to Zarr
**Usage**
```r
write_zarr_array(
  x,
  zarr_array_path,
  chunk_dim,
  order = "F",
  compressor = use_zlib(),
  fill_value,
  nchar,
  dimension_separator = "."
)
```
**Arguments**
- `x` The R array (or object that can be coerced to an array) that will be written to the Zarr array.
- `zarr_array_path` Character vector of length 1 giving the path to the new Zarr array.
- `chunk_dim` Dimensions of the array chunks. Should be a numeric vector with the same length as the number of dimensions of `x`.
- `order` Define the layout of the bytes within each chunk. Valid options are 'column', 'row', 'F' & 'C'. 'column' or 'F' will specify "column-major" ordering, which is how R arrays are arranged in memory. 'row' or 'C' will specify "row-major" order.
- `compressor` What (if any) compression tool should be applied to the array chunks. The default is to use zlib compression. Supplying `NULL` will disable chunk compression. See `compressors` for more details.
- `fill_value` The default value for uninitialized portions of the array. Does not have to be provided, in which case the default for the specified data type will be used.
- `nchar` For character arrays this parameter gives the maximum length of the stored strings. If this argument is not specified the array provided to `x` will be checked and the length of the longest string found will be used, so no data are truncated. However this may be slow, and providing a value to `nchar` can give a modest performance improvement.
- `dimension_separator` The character used to separate the dimensions in the names of the chunk files. Valid options are limited to "." and "/".
**Value**
The function is primarily called for the side effect of writing to disk. Returns (invisibly) `TRUE` if the array is successfully written.
**Examples**
```r
new_zarr_array <- file.path(tempdir(), "integer.zarr")
x <- array(1:50, dim = c(10, 5))
write_zarr_array(
  x = x, zarr_array_path = new_zarr_array,
  chunk_dim = c(2, 5)
)
```
### zarr_overview
**Print a summary of a Zarr array**
Description
When reading a Zarr array using `read_zarr_array()` it is necessary to know its shape and size. `zarr_overview()` can be used to get a quick overview of the array shape and contents, based on the `.zarray` metadata file each array contains.
Usage
```r
zarr_overview(zarr_array_path, s3_client, as_data_frame = FALSE)
```
Arguments
- `zarr_array_path`: A character vector of length 1. This provides the path to a Zarr array or group of arrays. This can either be on a local file system or on S3 storage.
- `s3_client`: A list representing an S3 client. This should be produced by `paws.storage::s3()`.
- `as_data_frame`: Logical determining whether the Zarr array details should be printed to screen (FALSE) or returned as a `data.frame` (TRUE) so they can be used computationally.
Details
The function currently prints the following information to the R console:
- array path
- array shape and size
- chunk shape and size
- the number of chunks
- the datatype of the array
- codec used for data compression (if any)
If given the path to a group of arrays the function will attempt to print the details of all sub-arrays in the group.
Value
If `as_data_frame = FALSE` the function invisibly returns `TRUE` if successful. However it is primarily called for the side effect of printing details of the Zarr array(s) to the screen. If `as_data_frame = TRUE` then a `data.frame` containing details of the array is returned.
Examples
```r
## Using a local file provided with the package
z1 <- system.file("extdata", "zarr_examples", "row-first", "int32.zarr", package = "Rarr")
## print an overview of the array
zarr_overview(zarr_array_path = z1)
## using a file on S3 storage
## don't run this on the BioC Linux build - it's very slow there
is_BBS_linux <- nzchar(Sys.getenv("IS_BIOC_BUILD_MACHINE")) && Sys.info()["sysname"] == "Linux"
if (!is_BBS_linux) {
  z2 <- "https://uk1s3.embassy.ebi.ac.uk/idr/zarr/v0.4/idr0101A/13457539.zarr/1"
  zarr_overview(z2)
}
```
4.1, 4.2 Performance and Sorting
“As soon as an Analytic Engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will arise - By what course of calculation can these results be arrived at by the machine in the shortest time?” – Charles Babbage
Algorithmic Successes
N-body Simulation.
- Simulate gravitational interactions among N bodies.
- Brute force: \( N^2 \) steps.
- Barnes-Hut: \( N \log N \) steps, enables new research.
Running Time
[Images: Charles Babbage (1864) and the Analytic Engine]
Algorithmic Successes
Discrete Fourier transform.
• Break down waveform of $N$ samples into periodic components.
• Applications: DVD, JPEG, MRI, astrophysics, ....
• Brute force: $N^2$ steps.
• FFT algorithm: $N \log N$ steps, enables new technology.
Algorithmic Successes
Sorting
Sorting problem. Rearrange $N$ items in ascending order.
Applications. Binary search, statistics, databases, data compression, bioinformatics, computer graphics, scientific computing, (too numerous to list) ...
[Figure: two columns of names — Hanley, Haskell, Horn, Hayes, Hauser, Hornet, Hsu — illustrating the sorting problem]
Insertion Sort
Insertion sort.
• Brute-force sorting solution.
• Move left-to-right through array.
• Insert each element into final position by exchanging it with larger elements to its left, one-by-one.
<table>
<thead>
<tr>
<th>i</th>
<th>j</th>
<th>a</th>
</tr>
</thead>
<tbody>
<tr>
<td>6</td>
<td>6</td>
<td>and had him his was you the but</td>
</tr>
<tr>
<td>6</td>
<td>5</td>
<td>and had him his was the you but</td>
</tr>
<tr>
<td>6</td>
<td>4</td>
<td>and had him his the was you but</td>
</tr>
</tbody>
</table>
Inserting a[6] into position by exchanging with larger entries to its left
Insertion Sort: Java Implementation
```java
public class Insertion {
public static void sort(String[] a) {
int N = a.length;
for (int i = 1; i < N; i++)
for (int j = i; j > 0; j--)
if (a[j-1].compareTo(a[j]) > 0)
exch(a, j-1, j);
else
break;
}
private static void exch(String[] a, int i, int j) {
String swap = a[i];
a[i] = a[j];
a[j] = swap;
}
}
```
**Insertion Sort: Observation**
Observe and tabulate running time for various values of $N$.
- **Data source:** $N$ random numbers between 0 and 1.
- **Machine:** Apple G5 1.8GHz with 1.5GB memory running OS X.
- **Timing:** Skagen wristwatch.
<table>
<thead>
<tr>
<th>$N$</th>
<th>Comparisons</th>
<th>Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>5,000</td>
<td>6.2 million</td>
<td>0.13 seconds</td>
</tr>
<tr>
<td>10,000</td>
<td>25 million</td>
<td>0.43 seconds</td>
</tr>
<tr>
<td>20,000</td>
<td>99 million</td>
<td>1.5 seconds</td>
</tr>
<tr>
<td>40,000</td>
<td>400 million</td>
<td>5.6 seconds</td>
</tr>
<tr>
<td>80,000</td>
<td>1600 million</td>
<td>23 seconds</td>
</tr>
</tbody>
</table>
**Insertion Sort: Empirical Analysis**
**Data analysis.** Plot # comparisons vs. input size on log-log scale.
Hypothesis. # comparisons grows quadratically with input size $\sim N^2/4$.
**Analysis: Empirical vs. Mathematical**
**Empirical analysis.**
- Measure running times, plot, and fit curve.
- Easy to perform experiments.
- Model useful for predicting, but not for explaining.
**Mathematical analysis.**
- Analyze algorithm to estimate # ops as a function of input size.
- May require advanced mathematics.
- Model useful for predicting and explaining.
**Critical difference.** Mathematical analysis is independent of a particular machine or compiler; applies to machines not yet built.
Insertion Sort: Mathematical Analysis
**Worst case.** [descending]
- Iteration \(i\) requires \(i\) comparisons.
- Total = \((0 + 1 + 2 + ... + N-1) \approx N^2 / 2\) compares.

**Average case.** [random]
- Iteration \(i\) requires \(i / 2\) comparisons on average.
- Total = \((0 + 1 + 2 + ... + N-1) / 2 \approx N^2 / 4\) compares

---
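These counts are easy to confirm empirically. The harness below is an illustrative sketch (not lecture code): it instruments an integer insertion sort with a comparison counter and reproduces the exact worst-case count $N(N-1)/2$.

```java
public class InsertionCount {
    // Count comparisons made by insertion sort (sorts a in place).
    public static long sortAndCount(int[] a) {
        long compares = 0;
        for (int i = 1; i < a.length; i++) {
            for (int j = i; j > 0; j--) {
                compares++;                            // one comparison per inner step
                if (a[j-1] > a[j]) {
                    int swap = a[j-1]; a[j-1] = a[j]; a[j] = swap;
                } else break;
            }
        }
        return compares;
    }

    public static void main(String[] args) {
        int N = 1000;
        int[] worst = new int[N];
        for (int i = 0; i < N; i++) worst[i] = N - i;  // descending order: worst case
        // Worst case performs exactly N(N-1)/2 ~ N^2/2 comparisons.
        System.out.println(sortAndCount(worst));       // prints 499500
    }
}
```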
Insertion Sort: Lesson
**Lesson.** Supercomputer can’t rescue a bad algorithm.
<table>
<thead>
<tr>
<th>Computer</th>
<th>Comparisons Per Second</th>
<th>Thousand</th>
<th>Million</th>
<th>Billion</th>
</tr>
</thead>
<tbody>
<tr>
<td>laptop</td>
<td>$10^7$</td>
<td>instant</td>
<td>1 day</td>
<td>3 centuries</td>
</tr>
<tr>
<td>super</td>
<td>$10^{12}$</td>
<td>instant</td>
<td>1 second</td>
<td>2 weeks</td>
</tr>
</tbody>
</table>
---
Moore’s Law
**Moore’s law.** Transistor density on a chip doubles every 2 years.
**Variants.** Memory, disk space, bandwidth, computing power per $.
[Moore’s Law](http://en.wikipedia.org/wiki/Moore%27s_law)
---
Moore’s Law and Algorithms
**Quadratic algorithms do not scale with technology.**
- New computer may be 10x as fast.
- But, has 10x as much memory so problem may be 10x bigger.
- With quadratic algorithm, takes 10x as long!
“Software inefficiency can always outpace Moore’s Law. Moore’s Law isn’t a match for our bad coding.” – Jaron Lanier
**Lesson.** Need linear (or linearithmic) algorithm to keep pace with Moore’s law.
Exam 1 looms.
Written exam Tuesday 3/13 during your lecture time. Room TBD.
Programming exam Tuesday 3/13 or Wednesday 3/14 in your precept.
Review session will be held.
Rooms, rules, details on Exams page of website.
---
Mergesort
Mergesort:
- Divide array into two halves.
- Recursively sort each half.
- Merge two halves to make sorted whole.
Mergesort: Example
<table>
<thead>
<tr>
<th>input</th>
<th>was had him and you his the but</th>
</tr>
</thead>
<tbody>
<tr>
<td>sort left</td>
<td>and had him was you his the but</td>
</tr>
<tr>
<td>sort right</td>
<td>and had him was but his the you</td>
</tr>
<tr>
<td>merge</td>
<td>and but had him his the was you</td>
</tr>
</tbody>
</table>
Top-down mergesort
Merging. Combine two pre-sorted lists into a sorted whole.
How to merge efficiently? Use an auxiliary array.
Trace of the merge of the sorted left half with the sorted right half
Merging
• Keep track of smallest element in each sorted half.
• Choose smaller of two elements.
• Repeat until done.
Merge.
<table>
<thead>
<tr>
<th>i</th>
<th>j</th>
<th>k</th>
<th>aux[k]</th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>4</td>
<td>0</td>
<td>and</td>
<td>and</td>
<td>had</td>
<td>him</td>
<td>was</td>
<td>but</td>
<td>his</td>
<td>the</td>
<td>you</td>
</tr>
<tr>
<td>1</td>
<td>4</td>
<td>1</td>
<td>but</td>
<td>and</td>
<td>had</td>
<td>him</td>
<td>was</td>
<td>but</td>
<td>his</td>
<td>the</td>
<td>you</td>
</tr>
<tr>
<td>2</td>
<td>5</td>
<td>3</td>
<td>had</td>
<td>and</td>
<td>had</td>
<td>him</td>
<td>was</td>
<td>but</td>
<td>his</td>
<td>the</td>
<td>you</td>
</tr>
<tr>
<td>3</td>
<td>5</td>
<td>4</td>
<td>his</td>
<td>and</td>
<td>had</td>
<td>him</td>
<td>was</td>
<td>but</td>
<td>his</td>
<td>the</td>
<td>you</td>
</tr>
<tr>
<td>4</td>
<td>6</td>
<td>6</td>
<td>was</td>
<td>and</td>
<td>had</td>
<td>him</td>
<td>was</td>
<td>but</td>
<td>his</td>
<td>the</td>
<td>you</td>
</tr>
<tr>
<td>4</td>
<td>7</td>
<td>7</td>
<td>you</td>
<td>and</td>
<td>had</td>
<td>him</td>
<td>was</td>
<td>but</td>
<td>his</td>
<td>the</td>
<td>you</td>
</tr>
</tbody>
</table>
Mergesort: Java Implementation
```java
public static void sort(String[] a, int lo, int hi) {
    int N = hi - lo;
    if (N <= 1) return;
    int mid = lo + N/2;
    sort(a, lo, mid);
    sort(a, mid, hi);
    // Merge the two sorted halves via an auxiliary array.
    String[] aux = new String[N];
    int i = lo, j = mid;
    for (int k = 0; k < N; k++) {
        if      (i == mid)                  aux[k] = a[j++];
        else if (j == hi)                   aux[k] = a[i++];
        else if (a[j].compareTo(a[i]) < 0)  aux[k] = a[j++];
        else                                aux[k] = a[i++];
    }
    for (int k = 0; k < N; k++)
        a[lo + k] = aux[k];
}
```
How to merge efficiently? Use an auxiliary array.
Mergesort: Empirical Analysis
**Experimental hypothesis.** Number of comparisons $\approx 20N$.
![Graph showing comparisons vs. input size]
Mergesort: Prediction and Verification
**Experimental hypothesis.** Number of comparisons $\approx 20N$.
**Prediction.** 80 million comparisons for $N = 4$ million.
**Observations.**
<table>
<thead>
<tr>
<th>$N$</th>
<th>Comparisons</th>
<th>Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>4 million</td>
<td>82.7 million</td>
<td>3.13 sec</td>
</tr>
<tr>
<td>4 million</td>
<td>82.7 million</td>
<td>3.25 sec</td>
</tr>
<tr>
<td>4 million</td>
<td>82.7 million</td>
<td>3.22 sec</td>
</tr>
</tbody>
</table>
Agrees.
**Prediction.** 400 million comparisons for $N = 20$ million.
**Observations.**
<table>
<thead>
<tr>
<th>$N$</th>
<th>Comparisons</th>
<th>Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>20 million</td>
<td>460 million</td>
<td>17.5 sec</td>
</tr>
<tr>
<td>50 million</td>
<td>1216 million</td>
<td>45.9 sec</td>
</tr>
</tbody>
</table>
Not quite.
Mergesort: Mathematical Analysis
**Analysis.** To mergesort array of size $N$, mergesort two subarrays of size $N/2$, and merge them together using $\leq N$ comparisons.
we assume $N$ is a power of 2
**Mathematical analysis.**
<table>
<thead>
<tr>
<th>analysis</th>
<th>comparisons</th>
</tr>
</thead>
<tbody>
<tr>
<td>worst</td>
<td>$N \log_2 N$</td>
</tr>
<tr>
<td>average</td>
<td>$N \log_2 N$</td>
</tr>
<tr>
<td>best</td>
<td>$\frac{1}{2} N \log_2 N$</td>
</tr>
</tbody>
</table>
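The $\leq N \log_2 N$ bound can be checked with an instrumented mergesort. The following harness is an illustrative sketch (integer keys rather than the lecture's strings), counting one comparison per merged element:

```java
public class MergeCount {
    static long compares = 0;

    public static void sort(int[] a, int lo, int hi) {
        int N = hi - lo;
        if (N <= 1) return;
        int mid = lo + N / 2;
        sort(a, lo, mid);
        sort(a, mid, hi);
        int[] aux = new int[N];
        int i = lo, j = mid;
        for (int k = 0; k < N; k++) {
            if      (i == mid) aux[k] = a[j++];
            else if (j == hi)  aux[k] = a[i++];
            else {
                compares++;                          // one comparison per merge step
                aux[k] = (a[j] < a[i]) ? a[j++] : a[i++];
            }
        }
        for (int k = 0; k < N; k++) a[lo + k] = aux[k];
    }

    public static long sortAndCount(int[] a) {
        compares = 0;
        sort(a, 0, a.length);
        return compares;
    }

    public static void main(String[] args) {
        int N = 1024;                                // a power of 2, lg N = 10
        int[] a = new int[N];
        for (int i = 0; i < N; i++) a[i] = (i * 37) % N;   // a scrambled permutation
        long c = sortAndCount(a);
        System.out.println(c + " comparisons, bound N lg N = " + N * 10);
    }
}
```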
**Validation.** Theory agrees with observations.
<table>
<thead>
<tr>
<th>$N$</th>
<th>actual</th>
<th>predicted</th>
</tr>
</thead>
<tbody>
<tr>
<td>10,000</td>
<td>120 thousand</td>
<td>133 thousand</td>
</tr>
<tr>
<td>20 million</td>
<td>460 million</td>
<td>485 million</td>
</tr>
<tr>
<td>50 million</td>
<td>1,216 million</td>
<td>1,279 million</td>
</tr>
</tbody>
</table>
**Lesson.** Great algorithms can be more powerful than supercomputers.
<table>
<thead>
<tr>
<th>Computer</th>
<th>Comparisons Per Second</th>
<th>Insertion</th>
<th>Mergesort</th>
</tr>
</thead>
<tbody>
<tr>
<td>laptop</td>
<td>$10^7$</td>
<td>3 centuries</td>
<td>3 hours</td>
</tr>
<tr>
<td>super</td>
<td>$10^{12}$</td>
<td>2 weeks</td>
<td>instant</td>
</tr>
</tbody>
</table>
N = 1 billion
---
**Binary Search**
**Intuition.** Find a hidden integer.
<table>
<thead>
<tr>
<th>Interval</th>
<th>size</th>
<th>Q</th>
<th>A</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 - 128</td>
<td>128</td>
<td>< 64?</td>
<td>no</td>
</tr>
<tr>
<td>64 - 128</td>
<td>64</td>
<td>< 96?</td>
<td>yes</td>
</tr>
<tr>
<td>32 - 64</td>
<td>32</td>
<td>< 80?</td>
<td>yes</td>
</tr>
<tr>
<td>16 - 32</td>
<td>16</td>
<td>< 72?</td>
<td>no</td>
</tr>
<tr>
<td>8 - 16</td>
<td>8</td>
<td>< 76?</td>
<td>no</td>
</tr>
<tr>
<td>4 - 8</td>
<td>4</td>
<td>< 78?</td>
<td>yes</td>
</tr>
<tr>
<td>2 - 4</td>
<td>2</td>
<td>< 77?</td>
<td>no</td>
</tr>
<tr>
<td>1 - 2</td>
<td>1</td>
<td>= 77</td>
<td></td>
</tr>
</tbody>
</table>
**Idea:**
- Sort the array (stay tuned)
- Play "20 questions" to determine the index associated with a given key.
**Ex.** Dictionary, phone book, book index, credit card numbers, ...
**Binary search.**
- Examine the middle key.
- If it matches, return its index.
- Otherwise, search either the left or right half.
Binary Search
Binary search. Given a key and sorted array \( a[] \), find index \( i \) such that \( a[i] = \text{key} \), or report that no such index exists.
Invariant. Algorithm maintains \( a[lo] \leq \text{key} \leq a[hi-1] \).
Ex. Binary search for 33.

**Analysis.** To binary search in an array of size $N$: do one comparison, then binary search in an array of size $N/2$.
$N \rightarrow N/2 \rightarrow N/4 \rightarrow N/8 \rightarrow \ldots \rightarrow 1$
**Q.** How many times can you divide a number by 2 until you reach 1?
**A.** $\log_2 N$.
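The answer is easy to verify mechanically; the tiny helper below (an illustrative sketch, not lecture code) simply counts halvings:

```java
public class Halvings {
    // Count how many times N can be halved (integer division) before reaching 1.
    public static int lg(int N) {
        int count = 0;
        while (N > 1) {
            N = N / 2;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(lg(1024));   // prints 10, since 2^10 = 1024
        System.out.println(lg(128));    // prints 7
    }
}
```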

**Java library implementation:** Arrays.binarySearch()
**Binary Search: Java Implementation**
**Invariant.** Algorithm maintains $a[lo] \leq key \leq a[hi-1]$.
```java
public static int search(String key, String[] a) {
return search(key, a, 0, a.length);
}
public static int search(String key, String[] a, int lo, int hi) {
if (hi <= lo) return -1;
int mid = lo + (hi - lo) / 2;
int cmp = a[mid].compareTo(key);
if (cmp > 0) return search(key, a, lo, mid);
else if (cmp < 0) return search(key, a, mid+1, hi);
else return mid;
}
```
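For completeness, here is the same recursive search wrapped in a runnable class with a small driver; the array contents are made up for illustration:

```java
public class BinarySearchDemo {
    // Recursive binary search over a sorted String array,
    // maintaining the invariant a[lo] <= key <= a[hi-1].
    public static int search(String key, String[] a, int lo, int hi) {
        if (hi <= lo) return -1;
        int mid = lo + (hi - lo) / 2;
        int cmp = a[mid].compareTo(key);
        if      (cmp > 0) return search(key, a, lo, mid);
        else if (cmp < 0) return search(key, a, mid + 1, hi);
        else              return mid;
    }

    public static void main(String[] args) {
        // the array must already be sorted before searching
        String[] a = { "and", "but", "had", "him", "his", "the", "was", "you" };
        System.out.println(search("his", a, 0, a.length));   // prints 4
        System.out.println(search("cat", a, 0, a.length));   // prints -1 (not found)
    }
}
```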
**Order of Growth Classifications**

<table>
<thead>
<tr>
<th>order of growth</th>
<th>function</th>
<th>factor for doubling hypothesis</th>
</tr>
</thead>
<tbody>
<tr>
<td>constant</td>
<td>$1$</td>
<td>$1$</td>
</tr>
<tr>
<td>logarithmic</td>
<td>$\log N$</td>
<td>$1$</td>
</tr>
<tr>
<td>linear</td>
<td>$N$</td>
<td>$2$</td>
</tr>
<tr>
<td>linearithmic</td>
<td>$N \log N$</td>
<td>$2$</td>
</tr>
<tr>
<td>quadratic</td>
<td>$N^2$</td>
<td>$4$</td>
</tr>
<tr>
<td>cubic</td>
<td>$N^3$</td>
<td>$8$</td>
</tr>
<tr>
<td>exponential</td>
<td>$2^N$</td>
<td>$2^N$</td>
</tr>
</tbody>
</table>
Commonly encountered growth functions
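The "factor for doubling hypothesis" column can be observed directly from a closed form. For example, worst-case insertion sort performs exactly $N(N-1)/2$ comparisons, so doubling $N$ multiplies the count by roughly 4 (a small illustrative sketch, not lecture code):

```java
public class Doubling {
    // Worst-case insertion sort comparisons: N(N-1)/2, a quadratic function.
    public static long quadratic(long N) { return N * (N - 1) / 2; }

    public static void main(String[] args) {
        for (long N = 1000; N <= 8000; N *= 2) {
            double ratio = (double) quadratic(2 * N) / quadratic(N);
            // the ratio approaches 4, the doubling factor for N^2 growth
            System.out.printf("N=%d  doubling ratio=%.3f%n", N, ratio);
        }
    }
}
```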
**Order of Growth Classifications**
**Observation.** A small subset of mathematical functions suffice to describe running time of many fundamental algorithms.
```java
public static void g(int N) {
    if (N == 0) return;
    g(N/2);
    g(N/2);
    for (int i = 0; i < N; i++)
        ...
}
```
```java
public static void f(int N) {
if (N == 0) return;
f(N-1);
f(N-1);
...
}
```
From the slide's labels, `g` above is linearithmic ($N \log N$) and `f` is exponential ($2^N$); here $\lg N = \log_2 N$. Further common patterns and their orders of growth:

```java
// logarithmic: lg N iterations
while (N > 1) {
    N = N / 2;
    ...
}

// linear: N iterations
for (int i = 0; i < N; i++)
    ...

// quadratic: N^2 iterations
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        ...
```
**Summary**
Q. How can I evaluate the performance of my program?
A. Computational experiments, mathematical analysis
Q. What if it’s not fast enough? Not enough memory?
- Understand why.
- Buy a faster computer.
- Learn a better algorithm (COS 226, COS 423).
- Discover a new algorithm.
<table>
<thead>
<tr>
<th>attribute</th>
<th>better machine</th>
<th>better algorithm</th>
</tr>
</thead>
<tbody>
<tr>
<td>cost</td>
<td>$$$ or more.</td>
<td>$ or less.</td>
</tr>
<tr>
<td>applicability</td>
<td>makes "everything" run faster</td>
<td>does not apply to some problems</td>
</tr>
<tr>
<td>improvement</td>
<td>incremental quantitative improvements expected</td>
<td>dramatic qualitative improvements possible</td>
</tr>
</tbody>
</table>
INTRODUCTION OF OPENACC FOR DIRECTIVES-BASED GPU ACCELERATION
Jeff Larkin, NVIDIA Developer Technologies
AGENDA
- Accelerated Computing Basics
- What are Compiler Directives?
- Accelerating Applications with OpenACC
- Identifying Available Parallelism
- Exposing Parallelism
- Optimizing Data Locality
- Next Steps
ACCELERATED COMPUTING BASICS
WHAT IS ACCELERATED COMPUTING?
(Diagram: application execution split between a CPU, which provides high serial performance, and a GPU, which provides high data parallelism.)
SIMPLICITY & PERFORMANCE
- **Accelerated Libraries**
- Little or no code change for standard libraries; high performance
- Limited by what libraries are available
- **Compiler Directives**
- High Level: Based on existing languages; simple and familiar
- High Level: Performance may not be optimal
- **Parallel Language Extensions**
- Expose low-level details for maximum performance
- Often more difficult to learn and more time consuming to implement
CODE FOR PORTABILITY & PERFORMANCE
Libraries
• Implement as much as possible using portable libraries.
Directives
• Use directives to implement portable code.
Languages
• Use lower level languages for important kernels.
WHAT ARE COMPILER DIRECTIVES?
WHAT ARE COMPILER DIRECTIVES?
```c
int main() {
  do_serial_stuff();

  // A directive here marks this compute-intensive
  // loop for offload to the accelerator
  for(int i = 0; i < BIGN; i++)
  {
    ...compute intensive work
  }

  do_more_serial_stuff();
}
```
OPENACC: THE STANDARD FOR GPU DIRECTIVES
- **Simple:** Easy path to accelerate compute intensive applications
- **Open:** Open standard that can be implemented anywhere
- **Portable:** Represents parallelism at a high level making it portable to any architecture
OPENACC MEMBERS AND PARTNERS
ACCELERATING APPLICATIONS WITH OPENACC
Identify Available Parallelism
Parallelize Loops with OpenACC
Optimize Data Locality
Optimize Loop Performance
EXAMPLE: JACOBI ITERATION
- Iteratively converges to the correct value (e.g., temperature) by computing new values at each point from the average of neighboring points.
- Common, useful algorithm
Example: Solve Laplace equation in 2D: \( \nabla^2 f(x,y) = 0 \)
\[
A^{k+1}(i,j) = \frac{A^k(i-1,j) + A^k(i+1,j) + A^k(i,j-1) + A^k(i,j+1)}{4}
\]
```c
while ( err > tol && iter < iter_max ) {
  err = 0.0;

  for( int j = 1; j < n-1; j++) {
    for( int i = 1; i < m-1; i++) {
      Anew[j][i] = 0.25 * (A[j][i+1] + A[j][i-1] +
                           A[j-1][i] + A[j+1][i]);
      err = max(err, abs(Anew[j][i] - A[j][i]));
    }
  }

  for( int j = 1; j < n-1; j++) {
    for( int i = 1; i < m-1; i++ ) {
      A[j][i] = Anew[j][i];
    }
  }

  iter++;
}
```
Identify Available Parallelism
Parallelize Loops with OpenACC
Optimize Data Locality
Optimize Loop Performance
OPENACC DIRECTIVE SYNTAX
- **C/C++**
```c
#pragma acc directive [clause [,] clause] ...
```
...often followed by a structured code block
- **Fortran**
```fortran
!$acc directive [clause [,] clause] ...
```
...often paired with a matching end directive surrounding a structured code block:
```fortran
!$acc end directive
```
Don’t forget `acc`
OPENACC PARALLEL LOOP DIRECTIVE
**parallel** - Programmer identifies a block of code containing parallelism. Compiler generates a *kernel*.
**loop** - Programmer identifies a loop that can be parallelized within the kernel.
NOTE: parallel & loop are often placed together
```c
#pragma acc parallel loop
for (int i=0; i<N; i++)
{
y[i] = a*x[i]+y[i];
}
```
**Kernel:**
A function that runs in parallel on the GPU
```c
while ( err > tol && iter < iter_max ) {
  err = 0.0;

  #pragma acc parallel loop reduction(max:err)
  for( int j = 1; j < n-1; j++) {
    for( int i = 1; i < m-1; i++) {
      Anew[j][i] = 0.25 * (A[j][i+1] + A[j][i-1] +
                           A[j-1][i] + A[j+1][i]);
      err = max(err, abs(Anew[j][i] - A[j][i]));
    }
  }

  #pragma acc parallel loop
  for( int j = 1; j < n-1; j++) {
    for( int i = 1; i < m-1; i++ ) {
      A[j][i] = Anew[j][i];
    }
  }

  iter++;
}
```
```
$ pgcc -fast -acc -ta=tesla -Minfo=all laplace2d.c
main:
     40, Loop not fused: function call before adjacent loop
         Generated vector sse code for the loop
     51, Loop not vectorized/parallelized: potential early exits
     55, Accelerator kernel generated
         55, Max reduction generated for error
         56, #pragma acc loop gang /* blockIdx.x */
         58, #pragma acc loop vector(256) /* threadIdx.x */
     55, Generating copyout(Anew[1:4094][1:4094])
         Generating copyin(A[:][:])
         Generating Tesla code
     58, Loop is parallelizable
     66, Accelerator kernel generated
         67, #pragma acc loop gang /* blockIdx.x */
         69, #pragma acc loop vector(256) /* threadIdx.x */
     66, Generating copyin(Anew[1:4094][1:4094])
         Generating copyout(A[1:4094][1:4094])
         Generating Tesla code
     69, Loop is parallelizable
```
The **kernels** construct expresses that a region *may contain parallelism* and *the compiler determines* what can safely be parallelized.
```c
#pragma acc kernels
{
for(int i=0; i<N; i++)
{
x[i] = 1.0;
y[i] = 2.0;
}
for(int i=0; i<N; i++)
{
y[i] = a*x[i] + y[i];
}
}
```
The compiler identifies 2 parallel loops and generates 2 kernels.
```c
while ( err > tol && iter < iter_max ) {
  err = 0.0;

  #pragma acc kernels
  {
    for( int j = 1; j < n-1; j++) {
      for( int i = 1; i < m-1; i++) {
        Anew[j][i] = 0.25 * (A[j][i+1] + A[j][i-1] +
                             A[j-1][i] + A[j+1][i]);
        err = max(err, abs(Anew[j][i] - A[j][i]));
      }
    }

    for( int j = 1; j < n-1; j++) {
      for( int i = 1; i < m-1; i++ ) {
        A[j][i] = Anew[j][i];
      }
    }
  }

  iter++;
}
```
```
$ pgcc -fast -ta=tesla -Minfo=all laplace2d.c
main:
     40, Loop not fused: function call before adjacent loop
         Generated vector sse code for the loop
     51, Loop not vectorized/parallelized: potential early exits
     55, Generating copyout(Anew[1:4094][1:4094])
         Generating copyin(A[:][:])
         Generating copyout(A[1:4094][1:4094])
         Generating Tesla code
     57, Loop is parallelizable
     59, Loop is parallelizable
         Accelerator kernel generated
         57, #pragma acc loop gang /* blockIdx.y */
         59, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */
         63, Max reduction generated for error
     67, Loop is parallelizable
     69, Loop is parallelizable
         Accelerator kernel generated
         67, #pragma acc loop gang /* blockIdx.y */
         69, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */
```
OPENACC PARALLEL LOOP VS. KERNELS
PARALLEL LOOP
• Requires analysis by programmer to ensure safe parallelism
• Will parallelize what a compiler may miss
• Straightforward path from OpenMP
KERNELS
• Compiler performs parallel analysis and parallelizes what it believes safe
• Can cover larger area of code with single directive
• Gives compiler additional leeway to optimize.
Both approaches are equally valid and can perform equally well.
Why did OpenACC slow down here?
Intel Xeon E5-2698 v3 @ 2.30GHz (Haswell) vs. NVIDIA Tesla K40
- Very low compute/memcopy ratio: 5.0 s of compute vs. 62.2 s of memory copies
EXCESSIVE DATA TRANSFERS
```c
while ( err > tol && iter < iter_max )
{
  err = 0.0;

  #pragma acc kernels
  for( int j = 1; j < n-1; j++ ) {
    for( int i = 1; i < m-1; i++ ) {
      err = max(err, abs(Anew[j][i] - A[j][i]));
    }
  } // A and Anew are copied to and from the host here, on every iteration
  ...
}
...
```
```c
while ( err > tol && iter < iter_max ) {
  err = 0.0;

  #pragma acc kernels
  {
    for( int j = 1; j < n-1; j++ ) {
      for( int i = 1; i < m-1; i++ ) {
        Anew[j][i] = 0.25 * (A[j][i+1] + A[j][i-1] +
                             A[j-1][i] + A[j+1][i]);
        err = max(err, abs(Anew[j][i] - A[j][i]));
      }
    }

    for( int j = 1; j < n-1; j++ ) {
      for( int i = 1; i < m-1; i++ ) {
        A[j][i] = Anew[j][i];
      }
    }
  }

  iter++;
}
```
Identify Available Parallelism
Parallelize Loops with OpenACC
Optimize Data Locality
Optimize Loop Performance
The **data** construct defines a region of code in which GPU arrays remain on the GPU and are shared among all kernels in that region.
```c
#pragma acc data
{
#pragma acc parallel loop
...
#pragma acc parallel loop
...
}
```
Arrays used within the data region will remain on the GPU until the end of the data region.
### DATA CLAUSES
<table>
<thead>
<tr>
<th>Clause</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>copy (list)</td>
<td>Allocates memory on GPU and copies data from host to GPU when entering region and copies data to the host when exiting region.</td>
</tr>
<tr>
<td>copyin (list)</td>
<td>Allocates memory on GPU and copies data from host to GPU when entering region.</td>
</tr>
<tr>
<td>copyout (list)</td>
<td>Allocates memory on GPU and copies data to the host when exiting region.</td>
</tr>
<tr>
<td>create (list)</td>
<td>Allocates memory on GPU but does not copy.</td>
</tr>
<tr>
<td>present (list)</td>
<td>Data is already present on GPU from another containing data region.</td>
</tr>
</tbody>
</table>
and `present_or_copy[in|out]`, `present_or_create`, `deviceptr`.
ARRAY SHAPING
- Compiler sometimes cannot determine size of arrays
- Must specify explicitly using data clauses and array “shape”
**C/C++**
```c
#pragma acc data copyin(a[0:size]) copyout(b[s/4:3*s/4])
```
**Fortran**
```fortran
!$acc data copyin(a(1:end)) copyout(b(s/4:3*s/4))
```
- Note: data clauses can be used on `data, parallel, or kernels`
```c
#pragma acc data copy(A) create(Anew)
while ( err > tol && iter < iter_max ) {
  err = 0.0;

  #pragma acc kernels
  {
    for( int j = 1; j < n-1; j++ ) {
      for( int i = 1; i < m-1; i++ ) {
        Anew[j][i] = 0.25 * (A[j][i+1] + A[j][i-1] +
                             A[j-1][i] + A[j+1][i]);
        err = max(err, abs(Anew[j][i] - A[j][i]));
      }
    }

    for( int j = 1; j < n-1; j++ ) {
      for( int i = 1; i < m-1; i++ ) {
        A[j][i] = Anew[j][i];
      }
    }
  }

  iter++;
}
```
```
$ pgcc -fast -acc -ta=tesla -Minfo=all laplace2d.c
main:
     40, Loop not fused: function call before adjacent loop
         Generated vector sse code for the loop
     51, Generating copy(A[:][:])
         Generating create(Anew[:][:])
         Loop not vectorized/parallelized: potential early exits
     56, Accelerator kernel generated
         56, Max reduction generated for error
         57, #pragma acc loop gang /* blockIdx.x */
         59, #pragma acc loop vector(256) /* threadIdx.x */
     56, Generating Tesla code
     59, Loop is parallelizable
     67, Accelerator kernel generated
         68, #pragma acc loop gang /* blockIdx.x */
         70, #pragma acc loop vector(256) /* threadIdx.x */
     67, Generating Tesla code
     70, Loop is parallelizable
```
VISUAL PROFILER: DATA REGION
(Profiler timeline: two consecutive iterations inside the data region; an iteration was 104 ms before the data region was added.)
Speed-up (higher is better), Intel Xeon E5-2698 v3 @ 2.30GHz (Haswell) vs. NVIDIA Tesla K40:
- Single thread: 1.00x
- 2 threads: 1.82x
- 4 threads: 3.13x
- 6 threads: 3.90x
- 8 threads: 4.38x
- Socket/socket: 6.24x
- OpenACC: 27.30x
OPENACC PRESENT CLAUSE
It’s sometimes necessary for a data region to be in a different scope than the compute region.
When this occurs, the `present` clause can be used to tell the compiler data is already on the device.
Since the declaration of A is now in a higher scope, it’s necessary to shape A in the present clause.
High-level data regions and the present clause are often critical to good performance.
```c
int main(int argc, char **argv)
{
  #pragma acc data copy(A)
  {
    laplace2D(A, n, m);
  }
}

void laplace2D(double A[N][M], int n, int m)
{
  #pragma acc data present(A[0:n][0:m]) create(Anew)
  while ( err > tol && iter < iter_max ) {
    err = 0.0;
    ...
  }
}
```
Identify Available Parallelism
Parallelize Loops with OpenACC
Optimize Data Locality
Optimize Loop Performance
Watch S5195 - Advanced OpenACC Programming on gputechconf.com
NEXT STEPS
1. **Identify Available Parallelism**
- What important parts of the code have available parallelism?
2. **Parallelize Loops**
- Express as much parallelism as possible and ensure you still get correct results.
- Because the compiler *must* be cautious about data movement, the code will generally slow down.
3. **Optimize Data Locality**
- The programmer will *always* know better than the compiler what data movement is unnecessary.
4. **Optimize Loop Performance**
- Don’t try to optimize a kernel that runs in a few *µs* or *ms* until you’ve eliminated the excess data motion that is taking *many seconds*.
TYPICAL PORTING EXPERIENCE WITH OPENACC DIRECTIVES
Step 1: Identify Available Parallelism
Step 2: Parallelize Loops with OpenACC
Step 3: Optimize Data Locality
Step 4: Optimize Loops
Application Speed-up
Development Time
FOR MORE INFORMATION
- Check out [http://openacc.org/](http://openacc.org/)
- Share your successes at WACCPD at SC15.
- Email me: [jlarkin@nvidia.com](mailto:jlarkin@nvidia.com)
GPU strategies for the point_solve_5 kernel
POINT_SOLVE_5
PERFORMANCE COMPARISON
- CPU: One socket E5-2690 @ 3 GHz, 10 cores
- GPU: K40c, boost clocks, ECC off
- Dataset: DPW-Wing, 1M cells
- One call of point_solve5 over all colors
- No transfers
- 1 CPU core: 300ms
- 10 CPU cores: 44ms
```fortran
!$acc parallel loop private(f1, f2, f3, f4, f5)
rhs_solve : do n = start, end
  [...]
  istart = iam(n)
  iend   = iam(n+1)
  do j = istart, iend
    icol = jam(j)
    f1 = f1 - a_off(1,1,j)*dq(1,icol)
    f2 = f2 - a_off(2,1,j)*dq(1,icol)
    [...22 lines]
    f5 = f5 - a_off(5,5,j)*dq(5,icol)
  end do
  [...]
end do
```
OPENACC1 - A_OFF ACCESS PATTERN
PERFORMANCE COMPARISON
- CPU: 44 milliseconds
- OpenACC1 - L2: 78 milliseconds
- OpenACC1 - L1: 141 milliseconds
- OpenACC1 - tex: 22 milliseconds
```fortran
!$acc parallel loop collapse(2) private(fk)
rhs_solve : do n = start, end
  do k = 1,5
    [...]
    istart = iam(n)
    iend   = iam(n+1)
    do j = istart, iend
      icol = jam(j)
      fk = fk - a_off(k,1,j)*dq(1,icol)
      [... 3 lines]
      fk = fk - a_off(k,5,j)*dq(5,icol)
    end do
    dq(k,n) = fk
  end do
end do
```
(The forward/backward substitution is split off into an extra loop.)
OPENACC2 - A_OFF ACCESS PATTERN
CUDA FORTRAN - ADVANTAGES
- Shared Memory: as explicitly managed cache and for cooperative reuse
- Inter thread communication in a thread block with shared memory
- Inter thread communication in a warp with __shfl() intrinsic
```fortran
! Calculate n, l, k based on threadIdx
! Loop over a_off entries
istart = iam(n)
iend   = iam(n+1)
do j = istart, iend
  ftemp = ftemp - a_off(k,l,j)*dq(l,jam(j))
end do

! Reduction along the rows
fk = ftemp - __shfl( ftemp, k+1*5)
fk = fk    - __shfl( ftemp, k+2*5)
fk = fk    - __shfl( ftemp, k+3*5)
fk = fk    - __shfl( ftemp, k+4*5)
```
CUDA FORTRAN - 25 WIDE
(Diagram: 25 threads of a warp mapped to the 5x5 block; legend distinguishes active, done, cached, and uncached lanes.)
PERFORMANCE COMPARISON
<table>
<thead>
<tr>
<th>Time (ms)</th>
<th>CPU</th>
<th>OpenACC1</th>
<th>OpenACC2</th>
<th>CUDA Fortran</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>44</td>
<td>22</td>
<td>18</td>
<td>11.5</td>
</tr>
</tbody>
</table>
FUN3D CONCLUSIONS
- Unchanged code with OpenACC: 2.0x
- Modified code with OpenACC: 2.4x, modified code runs 50% slower on CPUs
- Highly optimized CUDA version: 3.7x
- Compiler options (e.g. how memory is accessed) have a huge influence on OpenACC results
- Possible compromise: CUDA for a few hotspots, OpenACC for the rest
- Very good OpenACC/CUDA interoperability: CUDA can use buffers managed with OpenACC data clauses
- Unsolved problem: data transfers in a partial port cause overhead
ICASE INTERIM REPORT 9
A SCHEME FOR SUPPORTING DISTRIBUTED DATA STRUCTURES ON MULTICOMPUTERS
Seema Hiranandani
Joel Saltz
Harry Berryman
Piyush Mehrotra
NASA Contract No. NAS1-18605
January 1990
INSTITUTE FOR COMPUTER APPLICATIONS IN SCIENCE AND ENGINEERING
NASA Langley Research Center, Hampton, Virginia 23665
Operated by the Universities Space Research Association
National Aeronautics and Space Administration
Langley Research Center
Hampton, Virginia 23665-5225
ICASE INTERIM REPORTS
ICASE has introduced a new report series to be called ICASE Interim Reports. The series will complement the more familiar blue ICASE reports that have been distributed for many years. The blue reports are intended as preprints of research that has been submitted for publication in either refereed journals or conference proceedings. In general, the green Interim Report will not be submitted for publication, at least not in its printed form. It will be used for research that has reached a certain level of maturity but needs additional refinement, for technical reviews or position statements, for bibliographies, and for computer software. The Interim Reports will receive the same distribution as the ICASE Reports. They will be available upon request in the future, and they may be referenced in other publications.
Robert G. Voigt
Director
1 Abstract
We propose a data migration mechanism that allows an explicit and controlled mapping of data to memory. While read or write copies of each data element can be assigned to any processor's memory, longer term storage of each data element is assigned to a specific location in the memory of a particular processor. Our proposed integration of a data migration scheme with a compiler is able to eliminate the migration of unneeded data that can occur in multiprocessor paging or caching. The overhead of adjudicating multiple concurrent writes to the same page or cache line is also eliminated. We present data that suggests that the scheme we suggest may be a practical method for efficiently supporting data migration.
*This work was supported by the U.S. Office of Naval Research under Grant N00014-86-K-0310, and under NASA contract NAS1-18605 while the authors were in residence at ICASE, NASA Langley Research Center.
2 Introduction
It is well known that data distribution and load balance play critical roles in determining the performance one can expect to obtain from distributed machines. Data must be moved from processor to processor in response to computational demands. One way of supporting data migration is to explicitly designate blocks of data that are to be prefetched into the memory of a given processor and to copy the data into customized data structures. Programs are written on each processor with intimate knowledge of the format used to store off-processor data. For some problems, this approach to data distribution can be extremely efficient. Programming in this manner can be very time consuming, and can lead to programs that are difficult to debug.
Mechanisms have been proposed to allow data to migrate in an automatic fashion. The physical memory of a multiprocessor is viewed as a single logical memory. Data migrates to the processors that refer to particular logical memory addresses. The following methods have been proposed to support data migration [1], [4], [3], [5]:
- Multiprocessor paging: A logically shared memory is divided into pages which are contiguous, equal sized ranges. Processors store copies of required pages in their local memories. A page table is used to find the page corresponding to a given address in logical memory.
- Multiprocessor Caching: Each processor stores copies of the contents of address ranges in a logically shared memory. A subset of address bits are used to determine the location of data in physical memory.
Before describing our methods, we will outline some of the well known shortcomings of each of these data migration mechanisms. One of the principal difficulties with multiprocessor paging is the problem of false sharing. In a given portion of code, most locations in logical memory may not be accessed by multiple processors. Since pages consist of ranges of contiguous memory locations, different portions of a given page may be accessed by different processors. At best, false sharing will cause processors to waste physical memory to store data that will not be used. False sharing has the potential for causing particularly severe performance degradation if multiple processors attempt to concurrently write to different memory locations on a given page. Decreasing page size will tend to reduce false sharing. However when page size is reduced, page table storage requirements and communications latency overheads increase.
In multiprocessor caching, we may also see false sharing when a large cache line size is chosen or alternately experience significant communications latency effects when small cache lines are employed. Furthermore, since the cost of obtaining off-processor data may be very high compared to the cost of fetching data from local memory or cache, we want to maintain an extremely high hit ratio. It is well known that it is easy to find patterns of data access that make very poor use of cache memory. It may not be possible to maintain an extremely high hit ratio in a data cache.
Finally, in both caching and paging schemes, maintaining coherency in large scale multiprocessors may be associated with very high overheads.
2.1 Overview of Hashed Cache Data Migration Scheme
We support distributed data structures in a way that allows an explicit and controlled mapping of data to memory. While read or write copies of each data element can be assigned to any processor's memory, longer term storage of each data element is assigned to a specific location in the memory of a particular processor. A processor needing to read or write an array element gets a copy of that element. We constrain the form of programs so that only parallel loops or sequential code can be specified. Each data element can be written to by at most one processor during the course of a single set of (nested) parallel loops. This constraint eliminates the need for hardware coherency support. At the end of a set of nested parallel loops, all modified data is copied back to its home location.
A loop is transformed into two parts by a compiler. One part is called an inspector, the other is called an executor. The inspector is responsible for determining what data elements are required by a loop, the executor carries out the actual computations.
In a distributed memory machine, execution time procedures carry out the actual fetching of data. Local copies of data are stored in a hash table, and access to the table must be very fast. The location in the hash table is determined by taking the low-order bits of a quantity that is analogous to an address in logical memory. Unlike a traditional cache, we cannot afford to throw data elements away just because too many required addresses share the same low-order bits. We instead use a linked list when more than a single data element hashes to a given location.
3 The Hashed Cache System
3.1 Support of Distributed Arrays
We support distributed multidimensional arrays in a way that allows an explicit and controlled mapping of data to memory. As stated above, long term storage of each data element is assigned to a specific location in the memory of a particular processor. Users are able to specify the following attributes in their distributed array initializations:
- The topology of the processor array on which the data arrays are to be embedded
- The dimensionality of the data arrays
- Subset of processors used to store array elements
- Mapping of array elements onto the specified processors
Once a distributed array is initialized, we can use the specified partitioning information to find, for any distributed array element, a unique processor P along with a unique location in that processor P's storage. In each processor, contiguous memory locations are used to store local elements of a given distributed array. The unique location in P's storage can thus be expressed as an integer offset O. Further details of our support for distributed arrays are beyond the scope of this note but can be found in [2], [6].
3.2 Details of the Hashed Cache System
The program in Figure 1 will be used as a running example in this discussion. This program performs a sequence of matrix vector multiplications. In order to compute $y[i]$ at each iteration, we need $yold[nbrs[i][j]]$. Both $y$ and $yold$ are to be defined as distributed arrays. When the loop is distributed, loop iterations may be assigned to processors in an arbitrary fashion. Consequently long term storage of elements of $y$ or $yold$ may not be assigned to the processors that execute code referring to those elements.
The global arrays are initialized at the start of the program. We proceed to describe the primitives that support the inspector and executor phases of the hashed-cache system. To best understand the details of the inspector and executor phases we describe them in the context of the example presented in Figure 1.
First, there are three different kinds of references to global arrays.
1. Local: The address of the reference corresponds to the local memory of the processor. The reference may be a read or a write.
2. Non-Local Read: The address of the reference corresponds to the local memory of some other processor. The element is only read.
3. Non-Local Write: The address of the reference corresponds to the local memory of some other processor; the element is written to.
3.3 The Inspector Phase
Figure 2 depicts the pseudo-code of the inspector phase for the sparse matrix vector multiply. During the inspector phase we go through the inner loop once to check for local and non-local global array accesses. If an array reference is local, we do nothing. However, if it is a non-local reference to a global array, we compute the processor on which the element resides and its offset. We need to store this information in such a way that accessing it is efficient. This is achieved by using a hashed cache scheme.
Initially, we allocate a certain amount of memory for a cache. We partition the cache into blocks, one for each globally defined distributed array. Each block is treated as a separate hash table. The location of a non-local distributed array element is determined by a hash function. Currently, we use a hashing function that simply masks the lower k bits of the key, where k depends on the size of the hash table. The key is formed by concatenating the processor-offset pair, (P, O), that corresponds to a distributed array reference. Each entry in the hash table consists of the following:
1. a reference to the non-local data item, i.e., the data item’s processor-offset pair,
2. whether the item is to be read (read flag),
3. whether the item is to be written (write flag),
4. the data value itself.
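A minimal sketch of the entry layout and hashing function described above. The 20-bit offset field and the table size are assumptions of the sketch; the actual field widths depend on the machine:

```python
class HashedCacheEntry:
    """One hash-table entry: a (P, O) reference, read/write flags, and the value."""
    def __init__(self, proc, offset):
        self.proc = proc
        self.offset = offset
        self.read = False    # element is to be gathered
        self.write = False   # element is to be scattered
        self.value = None

OFFSET_BITS = 20                     # assumed width of the offset field

def make_key(proc, offset):
    # Concatenate the processor-offset pair (P, O) into one integer key.
    return (proc << OFFSET_BITS) | offset

def hash_slot(key, k):
    # Mask off the lower k bits of the key; the table size is 2**k.
    return key & ((1 << k) - 1)
```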
If the data item is a non-local read reference R, it is processed by the process-global-read() routine. The routine is described as follows:
process-global-read()
1. Search for the reference R in the hashed-cache.
2. If R exists and the read flag is set, do nothing.
3. If R exists and the read flag is not set, set read flag.
4. If R is not found in the hashed cache, create an entry with read flag set and enter it in the hashed-cache.
5. In the latter two situations, increment a count variable that contains the number of non-local elements to be gathered from the processor P on which this element resides. The offset of this element is written to a list containing the offsets of all the elements to be gathered from P.
Non-local array references R that are written to are processed by the process-global-write() routine described below:
process-global-write()
1. Search for the reference R in the hashed-cache.
2. If R exists and the write flag is set, do nothing.
3. If R exists and the write flag is not set, set write flag.
4. If R is not found in the hashed cache, create an entry with write flag set and enter it in the hashed-cache.
5. In the latter two situations, increment a count variable containing the number of non-local elements to be scattered to P. The offset of R is written to a list containing the offsets of all the elements to be scattered to P.

Loop over local iterations i assigned to P
    do j = 0, m
        Compute (processor, offset) pair for element of yold
        If yold reference is to a non-local array element,
            process-global-read()
    end do
End loop over local iterations

Loop over local iterations k assigned to P
    If yold reference is to a non-local array element,
        process-global-write()
End loop over local iterations

Figure 2: Inspector: Sparse Matrix Vector Multiply
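process-global-read() and process-global-write() differ only in which flag they set and which per-processor list they extend; a combined sketch, using a Python dict in place of the hash table:

```python
from collections import defaultdict

cache = {}                           # key (P, O) -> [read_flag, write_flag]
gather_offsets = defaultdict(list)   # P -> offsets to gather from P
scatter_offsets = defaultdict(list)  # P -> offsets to scatter to P

def process_global(proc, offset, is_write):
    """Record one non-local reference during the inspector phase,
    following steps 1-5 of process-global-read()/process-global-write()."""
    flags = cache.get((proc, offset))          # step 1: search the cache
    flag_index = 1 if is_write else 0
    if flags is not None and flags[flag_index]:
        return                                 # step 2: already recorded
    if flags is None:                          # step 4: create a fresh entry
        flags = [False, False]
        cache[(proc, offset)] = flags
    flags[flag_index] = True                   # step 3: set read or write flag
    # Step 5: remember the offset in the per-processor gather/scatter list
    # (the list length doubles as the count variable).
    target = scatter_offsets if is_write else gather_offsets
    target[proc].append(offset)
```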
At the completion of the inspector phase we precompute the communication pattern required to efficiently gather or scatter all relevant non-local data referenced in the loop. This requires a global communication phase in which all processors participate. For a detailed description, see [6], [2].
3.4 The Executor Phase
Figure 3 depicts the pseudo-code of the executor phase for the sparse matrix vector multiply.
The non-local data required by the inner loop is first obtained from other processors and stored in the hashed-cache by the process-gather-data routine. We now proceed to execute the doall loop.
During the execution of the inner-loop we check each distributed array reference to decide whether it resides in the local array or not. If it does, we compute the offset of the element in the local array and fetch the data item from the appropriate memory location. If it does not, we fetch it from the hashed cache. If the array reference occurs on the left-hand side and is non-local, we enter the new value in the hashed cache. At the end of the execution of the inner-loop, each processor calls the process-scatter-data routine. This routine goes through the list of non-local offsets of elements to be scattered, searches for these elements in the hashed cache and writes each value to a list containing the new values to be written to the distributed memory. The data is then scattered to the distributed memory.
The operations for computing processor number and offset are computationally very cheap since we assume the distributed array may be partitioned in a block or block wrap fashion. The size of each block is a power of 2 and thus we need to perform simple integer operations such as shifts to compute the offset and processor number of a distributed array element.
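Since each block size is a power of 2, the quotient and remainder reduce to a shift and a mask; a minimal sketch, assuming a 4096-element block:

```python
BLOCK_BITS = 12   # block size 2**12 = 4096; the size is an assumption

def owner_and_offset_fast(global_index):
    """Compute the (processor, offset) pair of a block-distributed element
    using only a shift and a mask, as the text describes."""
    proc = global_index >> BLOCK_BITS                 # divide by block size
    offset = global_index & ((1 << BLOCK_BITS) - 1)   # modulo block size
    return proc, offset
```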
4 Experimental Results
The program depicted in Figure 1 exhibits greatly varying patterns of locality depending on how loop iterations are assigned to processors and on the contents of the integer array nbrs. The array nbrs can be viewed as a representation of a sparse matrix. We used a synthetic workload to generate a number of sparse matrices with differing dependency patterns. A square mesh in which each point was linked to four nearest neighbors was incrementally distorted. Random edges were introduced subject to the constraint that in the new mesh, each point still required information from four other mesh points.
Our workload generator makes the following assumptions:
1. The problem domain consists of a 2-dimensional mesh of points which are numbered using their natural ordering;
2. Each point is initially connected to its four nearest neighbors;
3. Each link produced in the above step is examined; with probability Pr, the link is replaced by a link to a randomly chosen point.
Once generated, this connectivity information is stored in integer array nbrs.
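The three workload-generator assumptions above can be sketched as follows. The wrap-around boundary handling and the seeding are assumptions of the sketch, not of the original generator:

```python
import random

def generate_nbrs(n, pr, seed=0):
    """Build the nbrs connectivity array for an n x n mesh of points in
    natural ordering. Each point starts with its four nearest neighbors
    (wrapping at the boundary, an assumption); each link is then replaced,
    with probability pr, by a link to a randomly chosen point."""
    rng = random.Random(seed)
    size = n * n
    nbrs = []
    for p in range(size):
        r, c = divmod(p, n)
        links = [((r - 1) % n) * n + c,      # up
                 ((r + 1) % n) * n + c,      # down
                 r * n + (c - 1) % n,        # left
                 r * n + (c + 1) % n]        # right
        # Step 3: each link is displaced with probability pr.
        nbrs.append([rng.randrange(size) if rng.random() < pr else l
                     for l in links])
    return nbrs
```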
To obtain an experimental estimate of the efficiency of the executor on the Intel iPSC/2, we carried out a sequence of sparse matrix-vector multiplications using a $128 \times 128$ matrix generated from a square mesh using the workload generator described above.
do iter=1, num
process-gather-data() - obtain needed yold values from other processors, put in hashed cache
Loop over local iterations k assigned to P
do j = 0,m
Perform calculation reading yold values or writing y values using local memory or hashed cache as is appropriate.
end do
End loop over local iterations
Loop over local iterations k assigned to P
Perform assignment reading y values or writing yold values using local memory or hashed cache as is appropriate.
End loop over local iterations
process-scatter-data() - scatter yold values from hashed cache to appropriate processors
end do
Figure 3: Executor: Sparse Matrix Vector Multiply
We define
- $p =$ Total Number of Processors
- $\text{BlockSize} = \frac{(128 \times 128)}{p}$
- $\text{ArraySize} = 128 \times 128$
We partitioned $\text{yold}$ in the following ways:
1. partition the array in contiguous blocks of size $\text{BlockSize}$, i.e. processor $i$ is assigned indices $i \times \text{BlockSize}$ through $(i + 1) \times \text{BlockSize} - 1$
2. partition the indices in an interleaved fashion, i.e. processor $i$ is assigned indices $i, i + p, i + 2p, \ldots, i + (\text{BlockSize} - 1) \times p$.
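Both partitionings assign exactly BlockSize indices to each processor and together cover the whole array; a sketch of the two index sets:

```python
def block_indices(i, block_size):
    # Contiguous: processor i owns indices i*BlockSize .. (i+1)*BlockSize - 1.
    return list(range(i * block_size, (i + 1) * block_size))

def interleaved_indices(i, p, block_size):
    # Interleaved: processor i owns i, i+p, i+2p, ..., i+(BlockSize-1)*p.
    return [i + j * p for j in range(block_size)]
```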
We first present the results obtained by partitioning $\text{yold}$ in contiguous blocks. Table 1 depicts the time required to carry out the inspector and executor loops along with the optimal time. We define the optimal time as the sequential time divided by the number of processors. The inspector took a time roughly equal to that required by one or two optimally parallelized iterations. Since the inner loops of most scientific codes consist of many repetitions of loops with invariant dependency patterns, this inspector overhead is not expected to be a serious performance bottleneck in many programs of practical interest.
We define parallel efficiency as $\frac{T_{\text{sequential}}}{P \times T_{\text{parallel}}}$ where $T_{\text{sequential}}$ is the time taken by a sequential program to run on a single processor, $P$ is the number of processors and $T_{\text{parallel}}$ is the time required to run the parallelized program on $P$ processors. The parallel efficiencies were 0.76, 0.73, 0.67 and 0.56 for problems run on 4, 8, 16 and 32 processors respectively. We obtained relatively high efficiencies because most of the required data resided in the local memory of the processor. We also ran the sparse matrix vector multiply on 32 processors with probabilities $Pr$ equal to 0.2 and 0.4 that an edge is randomly displaced. As we increased $Pr$, we encountered more non-local references. Efficiencies dropped from 0.56 to 0.30 when the probability was 0.2 and to 0.21 when the probability was 0.4.
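As a check, the quoted efficiencies follow from the executor times in Table 1 together with the measured sequential time of 64.4 ms, using efficiency = T_sequential / (P × T_parallel):

```python
T_SEQ = 64.4  # ms, measured sequential time quoted in the text

def efficiency(t_parallel_ms, p):
    # Parallel efficiency: T_sequential / (P * T_parallel).
    return T_SEQ / (p * t_parallel_ms)

executor_ms = {4: 21.3, 8: 11.1, 16: 6.0, 32: 3.6}  # executor times, Table 1
for p, t in executor_ms.items():
    print(p, round(efficiency(t, p), 2))  # reproduces 0.76, 0.73, 0.67, 0.56
```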
We next present results obtained using an interleaved partition of $\text{yold}$. When an interleaved partition is employed, most of the data required by each processor is non-local. We need to fetch large amounts of data from the hashed cache. Moreover, there are also non-local write data accesses to $\text{yold}$. Thus we have to write the new values to the distributed memory. The parallel efficiencies obtained using an interleaved partition of $\text{yold}$ are depicted in Table 2. As we increased the probability $Pr$, the number of edges displaced randomly increases. Due to the nature of the array partitioning, it is likely that the main effect of increasing $Pr$ is to increase the number of processors from which data needs to be fetched. We see a decrease in efficiency as we increase the probability of randomly displacing an edge.
Single processor timings of an optimized version were compared with the parallel code run on one processor. The sequential code required $T_{\text{sequential}} = 64.4$ milliseconds and the one processor parallel code required $T_{\text{parallel}} = 81.4$ milliseconds. The overhead due to the executor
Table 1: Matrix-Vector Multiply - Blocked Partitioning
<table>
<thead>
<tr>
<th>Processors</th>
<th>Inspector time (ms)</th>
<th>Executor time (ms)</th>
<th>Optimal time (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>4</td>
<td>11.8</td>
<td>21.3</td>
<td>16.1</td>
</tr>
<tr>
<td>8</td>
<td>7.3</td>
<td>11.1</td>
<td>8.05</td>
</tr>
<tr>
<td>16</td>
<td>4.8</td>
<td>6.0</td>
<td>4.03</td>
</tr>
<tr>
<td>32</td>
<td>4.3</td>
<td>3.6</td>
<td>2.01</td>
</tr>
</tbody>
</table>
Table 2: Matrix-Vector Multiply - Interleaved Partitioning
<table>
<thead>
<tr>
<th>Processors</th>
<th>$Pr = 0.0$ efficiency</th>
<th>$Pr = 0.2$ efficiency</th>
<th>$Pr = 0.4$ efficiency</th>
</tr>
</thead>
<tbody>
<tr>
<td>4</td>
<td>0.36</td>
<td>0.29</td>
<td>0.25</td>
</tr>
<tr>
<td>16</td>
<td>0.23</td>
<td>0.18</td>
<td>0.17</td>
</tr>
<tr>
<td>32</td>
<td>0.19</td>
<td>0.11</td>
<td>0.10</td>
</tr>
</tbody>
</table>
is approximately 26 percent. We expect substantially lower overheads on RISC based multiprocessors both because of the high prevalence of shift operations in hashed cache calculations and because of the potential for being able to concurrently schedule executor floating point and integer operations.
5 Conclusion
We propose a data migration mechanism that allows an explicit and controlled mapping of data to memory. While read or write copies of each data element can be assigned to any processor's memory, longer term storage of each data element is assigned to a specific location in the memory of a particular processor. Our proposed integration of a data migration scheme with a compiler is able to eliminate the migration of unneeded data that can occur in multiprocessor paging or caching. The overhead of adjudicating multiple concurrent writes to the same page or cache line is also eliminated. We present data suggesting that the proposed scheme may be a practical method for efficiently supporting data migration.
References
Introduction
This workplan presents the plan for the creation and delivery of Pilot, in detail, through release 3.0 on March 30, 1978. This workplan does not cover the period beyond the release of Pilot 3.0 in nearly as much detail. Pilot 2.0 will provide almost all of the functions called for in the Pilot Functional Specifications but will not necessarily have either the performance or the main memory consumption required for product release.
Common Software (anything not covered in the Pilot Functional Specs) is not covered by this workplan.
Very little in the way of post release support for Pilot 1.0 (Pilot on the Alto) is contemplated and the mechanisms for post release support of system software products in general have not yet been worked out. When they are, additional tasks will undoubtedly be identified.
OIS Mesa, as represented by the OIS Mesa Functional Specification, will disappear as an identifiable project, as will that document. The functions represented are being divided between the Pilot project and the Mesa project and the portion subsumed by Pilot is covered by this workplan. The remainder of the functions are to be covered by the Mesa workplans (see [Maxc]<Wick>Pilot-Mesa.Bravo).
Background data
productivity assumptions - document preparation
This section will list the productivity assumptions upon which this workplan is based. The background material that was used to arrive at these assumptions is contained in <Lynch>Productivity.memo. It is the intention in this project to track the actual productivity rates achieved, both to enlarge our data base of productivity information and to give early warning of potential schedule deviations.
The construction of written documentation is a very significant component of this project. We therefore need an estimate of the productivity of producing this documentation. I estimate a production rate of 4 pages/day if the material is already thought out and no background documents are required (RawPage). Otherwise the rate is .5 pages/day (FinishedPage). The thinking through of the material seems to require 3 SupportPages per FinishedPage. Each SupportPage is treated as a RawPage.
Each RawPage requires .67 hours of Alto usage (AltoHour). We therefore estimate 4.5 AltoHours/FinishedPage. Alto usage for document construction will be based on this figure.
document size distribution
In order to estimate the effort required to produce the necessary documents, we need to estimate the size of the documents. To this end I estimate that
The design spec is three times the size of the functional spec
[Is this borne out by experience?]
The test spec is 12 pages
The technical documentation is twice the size of the functional spec
productivity of coding and unit test
Again from <Lynch>Productivity.memo we estimate that our programmers will code and unit test 800 lines of Mesa source code (loc) per man month (mm.). At the well established conversion figure of 2.5 [This should be 3.5] bytes of object code per loc, we obtain a productivity of 2000 bytes/mm. These figures are also consistent with the estimate given in the SDD Software Development Procedures and Standards (P+S), which claims that coding and unit test represent 15% of the total effort and that each person will produce about 1500 loc per man year overall.
From the data presented in Productivity.memo I estimate that 0.16 AltoHours per loc are required and this plan is based on that figure.
Alto usage is not likely to be more than 50% effective. I will therefore assume that one Alto will yield 100 hrs. of usage per month.
program size distribution
In order to estimate the programming effort we need to estimate the size of the individual programs. Information of this sort is contained in <Lynch>PilotSizes.Memo. That document estimated the number of locs per procedure and turned that into a number of bytes generated. By doing a regression on the number of procedures per chapter of the Pilot Functional Specification we obtain a figure of one thousand bytes of code for every three pages of Pilot Functional Specification, after first deleting the introductory chapter and appendices and the first introductory 1.5 pages of each chapter.
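The sizing figures in this and the preceding sections combine into a simple estimation rule; a hedged sketch in which the function and the 15-page example are illustrative, and the constants are the workplan's own estimates:

```python
BYTES_PER_LOC = 2.5        # conversion figure (the text notes 3.5 may be better)
BYTES_PER_MM = 2000        # 800 loc/man-month at 2.5 bytes/loc
ALTO_HOURS_PER_LOC = 0.16  # from Productivity.memo

def estimate(spec_pages):
    """Estimate object code size, coding effort, and Alto usage from the
    number of (non-introductory) Pilot Functional Specification pages."""
    code_bytes = spec_pages / 3 * 1000   # 1000 bytes of code per 3 spec pages
    loc = code_bytes / BYTES_PER_LOC
    effort_mm = code_bytes / BYTES_PER_MM
    alto_hours = loc * ALTO_HOURS_PER_LOC
    return code_bytes, effort_mm, alto_hours

# A 15-page chapter: about 5000 bytes, 2.5 man-months, 320 Alto hours.
```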
phase distribution
We are already part way through the Pilot project, being just finished with the design phase and beginning the code and unit test phase. Of the remaining effort, the P+S indicates that we will have
33% in the code and unit test phase
67% in the system test phase
The system test phase will commence with the release of Pilot 2.0. All other phases will be completed before the release of Pilot 2.0.
Alto/Pilot - Pilot 1.0 (Internal release only)
definition
Alto/Pilot 1.0 is an internal pre-release of Pilot intended to be used for the unit testing of small to medium sized applications modules. It runs on a standard Alto and consists of a thin layer of additional software over the Mesa System 3.0 package. It implements a subset of the procedures described in the Pilot Functional Specifications. It includes essentially all of the file system and memory management procedures in addition to the process structure features described in Appendix B of the Pilot Functional Specifications. The implementation heavily uses the existing Mesa procedures.
Alto/Pilot 1.0 was released on December 15, 1977. No further releases are planned.
references
Memo of May 19, 1977 from D. DeSantis to Bill Lynch - subject: Desired Alto-based Pilot Functions
Memo of June 2, 1977 from J. Szelong to W. Lynch - subject: Alto/Pilot
documents
Alto/Pilot Functional Specification version 1.0 October 1977
Standard Release Description
Alto/Pilot 1.0 Test Specification
Pilot 2.0
Pilot 2.0 is the first release of Pilot on the D0. This release will implement essentially all of the procedures in the Pilot Functional Specification. The implementation will operate on a "bare" D0, in the sense that the implementation will not be on top of or rely on the Alto Mesa (at that time 4.0) runtime. Any Mesa System functions required for Pilot (e.g. the frame allocator) will be integrated with Pilot. Any additional Mesa System procedures required for an application will have to be converted to use Pilot and will not be supplied with Pilot.
At the time of the delivery of Pilot 2.0, the Mesa byte code interface of the D0 will be a superset of the Mesa byte code interface of the Alto.
Consideration is being given to concurrent delivery of an Alto/Pilot 2.0. Such a system would operate on the Alto, would not support virtual memory (Virtual-Real) and would support code swapping.
Pilot 2.0 will not lay particular emphasis on achieving the required levels of processor efficiency or real memory usage. Much of the work in this area will take place after the release of Pilot 2.0 and be reflected in Pilot 3.0.
Pilot 2.0 will require two disk drives to operate. It seems advisable to operate it on a 192k D0 as the space optimization will not be complete until Pilot 3.0.
**D0 Conversion**
An important dependency in this workplan is the reliance on the existence of a reasonable programming environment on the D0. The attainment of this situation has come to be known as the D0 conversion problem. The current plan is to achieve the D0 programming environment in a stepwise fashion. The crucial point is to create a set of microcode which will cause the D0 to emulate the Alto. The Mesa debugger will be constructed in such a fashion that the debugger itself will execute in the well debugged Alto world while debugging code operating in the D0 PrincOps world. The microcode will be swapped upon entering and leaving the Mesa debugger.
With these facilities, D0 programs can be debugged with the full power of the Mesa System available and without regard to how much or how little of the D0 system is working. This dual world system with the two sets of microcode will be released with Pilot 2.0 so that Applications can also enjoy the benefits of a reliable Mesa debugger in their initial D0 efforts.
**references**
Memo dated June 6, 1977 from Wendell Shultz to distribution. subject: Conversion plan to D(0)
<Johnsson>Conversion27Jun.bravo
see <Lynch>D0Conversion.Memo
[Iris]<Johnsson>Debugger-Pilot-24Apr.bravo
**Documents**
<table>
<thead>
<tr>
<th>name</th>
<th>prsn</th>
<th>date</th>
<th>size</th>
<th>effort</th>
<th>Alto time</th>
</tr>
</thead>
<tbody>
<tr>
<td>D15 - Project document List</td>
<td>L</td>
<td>7/30/77</td>
<td>2 pages</td>
<td>.2 mm.</td>
<td>9 hrs.</td>
</tr>
<tr>
<td>D12 - Preliminary Work Plan</td>
<td>L</td>
<td>8/15/77</td>
<td>8 pages</td>
<td>1 mm.</td>
<td>36 hrs.</td>
</tr>
<tr>
<td>D2 - Pilot Functional Specifications</td>
<td>U</td>
<td>8/30/77</td>
<td>75 pages</td>
<td>4 mm.</td>
<td>340 hrs.</td>
</tr>
<tr>
<td>D13 - Design Work Plan</td>
<td>L</td>
<td>9/15/77</td>
<td>10 pages</td>
<td>1 mm.</td>
<td>40 hrs.</td>
</tr>
<tr>
<td>D18 - Alto/Pilot Test Spec</td>
<td>L</td>
<td>9/15/77</td>
<td>4 pages</td>
<td>1 mw.</td>
<td>9 hrs.</td>
</tr>
<tr>
<td>D16 - Alto/Pilot Functional Spec</td>
<td>H</td>
<td>10/1/77</td>
<td>8 pages</td>
<td>1 mw.</td>
<td>9 hrs.</td>
</tr>
<tr>
<td>D13 - Pilot Design Specifications</td>
<td>R(GM)</td>
<td>11/1/77</td>
<td>75 pages</td>
<td>7 mm.</td>
<td>300 hrs.</td>
</tr>
<tr>
<td>D14 - Implementation Work Plan</td>
<td>L</td>
<td>11/15/77</td>
<td>10 pages</td>
<td>1 mm.</td>
<td>40 hrs.</td>
</tr>
<tr>
<td>D11 - Pilot test Plan</td>
<td>L</td>
<td>12/1/77</td>
<td>8 pages</td>
<td>.5 mm.</td>
<td>36 hrs.</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>name</th>
<th>prsn</th>
<th>date</th>
<th>size</th>
<th>effort</th>
<th>Alto time</th>
</tr>
</thead>
<tbody>
<tr><td>D19 - Alto/Pilot Standard Release Description</td><td>U</td><td>12/15/77 *</td><td>2 pages</td><td>1 mw.</td><td>9 hrs.</td></tr>
<tr><td>D21 - Pilot Functional Specifications - ver 2.0</td><td>U</td><td>5/1/78 *</td><td>90 pages</td><td>2 mm.</td><td>170 hrs.</td></tr>
<tr><td>D6 - Pilot Test Specs</td><td>L</td><td>6/1/78</td><td>8 pages</td><td>1 mm.</td><td>36 hrs.</td></tr>
<tr><td>D7 - Std Release Descriptions (2.0)</td><td>L</td><td>8/30/78</td><td>2 pages</td><td>1 mw.</td><td>9 hrs.</td></tr>
<tr><td>D1 - Pilot Concepts and Facilities</td><td>M</td><td>10/30/78</td><td>40 pages</td><td>1 mm.</td><td>60 hrs.</td></tr>
<tr><td>D8 - Std Release Descriptions (2.1)</td><td>L</td><td>11/31/78</td><td>2 pages</td><td>1 mw.</td><td>9 hrs.</td></tr>
<tr><td>D20 - Pilot Design Specifications - ver 2.0</td><td>RM</td><td>12/1/78</td><td>75 pages</td><td>3 mm.</td><td>100 hrs.</td></tr>
<tr><td>D9 - Std Release Descriptions (3.0)</td><td>L</td><td>3/1/79</td><td>2 pages</td><td>1 mw.</td><td>9 hrs.</td></tr>
<tr><td>D4 - Pilot Tech Manual</td><td>U</td><td>11/1/79</td><td>225 pages</td><td>8 mm.</td><td>600 hrs.</td></tr>
</tbody>
</table>
* means task completed
Programming Projects
<table>
<thead>
<tr>
<th>project name</th>
<th>prsn1 date</th>
<th>req size2</th>
<th>effort</th>
<th>Alto time</th>
</tr>
</thead>
<tbody>
<tr>
<td>P3 - Pilot 1.0 Memory mgmt.</td>
<td>G 9/77</td>
<td>* 1440</td>
<td>2 mm.</td>
<td>230 hrs.</td>
</tr>
<tr>
<td>P4 - Pilot 1.0 file system</td>
<td>M 10/77</td>
<td>* 2640</td>
<td>3 mm.</td>
<td>420 hrs.</td>
</tr>
<tr>
<td>P21 - D0 Test program</td>
<td>H 12/1</td>
<td>* 500</td>
<td>1 mm.</td>
<td>115 hrs.</td>
</tr>
<tr>
<td>P22 - Command Test prog</td>
<td>M 12/1</td>
<td>* 1000</td>
<td>1 mm.</td>
<td>115 hrs.</td>
</tr>
<tr>
<td>P7 - Pilot 2.0 process structure</td>
<td>R 2/78</td>
<td>* 2160</td>
<td>2 mm.</td>
<td>230 hrs.</td>
</tr>
<tr>
<td>P5 - Pilot 2.0 Memory mgmt.</td>
<td>M 6/78</td>
<td>1440</td>
<td>1 mm.</td>
<td>115 hrs.</td>
</tr>
<tr>
<td>P11 - Pilot 2.0 Mesa mods - trap handlers</td>
<td>P 6/78</td>
<td>1000</td>
<td>4 mm.</td>
<td>560 hrs.</td>
</tr>
<tr>
<td>P6 - Pilot 2.0 file system</td>
<td>RP 6/78</td>
<td>2640</td>
<td>2 mm.</td>
<td>230 hrs.</td>
</tr>
<tr>
<td>P12 - Pilot 2.0 Swapper - FPT</td>
<td>RM 6/78</td>
<td>2000</td>
<td>3 mm.</td>
<td>420 hrs.</td>
</tr>
<tr>
<td>P26 - Timers</td>
<td>H 6/78</td>
<td>300</td>
<td>.5 mm.</td>
<td>60 hrs.</td>
</tr>
<tr>
<td>P9 - Integrate 2.0 communications³</td>
<td>U 7/78</td>
<td>n.a.</td>
<td>1 mm.</td>
<td>115 hrs.</td>
</tr>
<tr>
<td>P10 - Integrate 2.0 I/O devices³</td>
<td>U 7/78</td>
<td>n.a.</td>
<td>1 mm.</td>
<td>115 hrs.</td>
</tr>
<tr>
<td>P13 - Pilot 2.0 Release construction</td>
<td>H 8/78</td>
<td>n.a.</td>
<td>1 mm.</td>
<td>300 hrs.</td>
</tr>
<tr>
<td>P16 - Integrate 3.0 I/O devices³</td>
<td>U 8/78</td>
<td>n.a.</td>
<td>1 mm.</td>
<td>115 hrs.</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>project name</th>
<th>prsn1 date</th>
<th>req size2</th>
<th>effort</th>
<th>Alto time</th>
</tr>
</thead>
<tbody>
<tr><td>P25 - Performance measurement</td><td>J 9/78</td><td>800</td><td>3 mm.</td><td>300 hrs.</td></tr>
<tr><td>P14 - Pilot 2.1 Release construction</td><td>H 11/78</td><td>n.a.</td><td>1 mm.</td><td>300 hrs.</td></tr>
<tr><td>P19 - Integrate 3.0 communications system³</td><td>U 11/78</td><td>n.a.</td><td>1 mm.</td><td>115 hrs.</td></tr>
<tr><td>P23 - Utilities Test</td><td>P 11/78</td><td>n.a.</td><td>2 mm.</td><td>600 hrs.</td></tr>
<tr><td>P24 - Regression Test</td><td>H 11/78</td><td>n.a.</td><td>2 mm.</td><td>600 hrs.</td></tr>
<tr><td>P8 - Pilot 3.0 configuration Install</td><td>H 12/78</td><td>2000</td><td>2 mm.</td><td>230 hrs.</td></tr>
<tr><td>P20 - Pilot 3.0 Release construction</td><td>H 2/79</td><td>n.a.</td><td>1 mm.</td><td>300 hrs.</td></tr>
<tr><td>P15 - Pilot 4.0 swapping over Xerox Wire</td><td>M 8/79</td><td>1000</td><td>1 mm.</td><td>140 hrs.</td></tr>
<tr><td>P17 - Pilot 4.0 multiple MDS support</td><td>R 8/79</td><td>300</td><td>.5 mm.</td><td>60 hrs.</td></tr>
<tr><td>P18 - Pilot 4.0 Star support</td><td>8/79</td><td></td><td></td><td></td></tr>
</tbody>
</table>
* means task completed
notes
1 G - Dave Gifford
H - Tom Horsley
J - Paul Jalics
L - Bill Lynch
M - Paul McJones
P - Steve Purcell
R - Dave Redell
U - Hugh Lauer
2 - All numbers are in lines of Mesa code (loc) except the microcode tasks which are in lines of microcode.
3 - Assumes the assistance of the supplying organization.
Systems Integration and Support
<table>
<thead>
<tr>
<th>project name</th>
<th>prsn</th>
<th>date</th>
<th>size</th>
<th>effort</th>
<th>Alto time</th>
</tr>
</thead>
<tbody>
<tr>
<td>S6 - 2.0 System Integration</td>
<td>MPR</td>
<td>8/78</td>
<td>4 mm.</td>
<td></td>
<td>400 hrs.</td>
</tr>
<tr>
<td>S5 - 2.0 Alpha test</td>
<td>MPRLHU</td>
<td>9/78</td>
<td>6 mm.</td>
<td></td>
<td>600 hrs.</td>
</tr>
<tr>
<td>S2 - Performance tuning</td>
<td>M</td>
<td>8/79</td>
<td>12 mm.</td>
<td></td>
<td>1342 hrs.</td>
</tr>
<tr>
<td>S3 - Residency reduction</td>
<td>P</td>
<td>8/79</td>
<td>12 mm.</td>
<td></td>
<td>1342 hrs.</td>
</tr>
<tr>
<td>S4 - Customer Aid</td>
<td>GHI.MPRU</td>
<td>6/79</td>
<td>6 mm.</td>
<td></td>
<td>600 hrs.</td>
</tr>
</tbody>
</table>
totals
pages 40 mm. 4284 hrs.
Overhead activities
<table>
<thead>
<tr>
<th>Project Name</th>
<th>Prsn</th>
<th>Date Req</th>
<th>Size</th>
<th>Effort</th>
<th>Alto Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>V - 1978 Vacations</td>
<td>HLPMRU</td>
<td>12/78</td>
<td></td>
<td></td>
<td>3.5 mm. 0 hrs.</td>
</tr>
<tr>
<td>O1 - 1978 Group management</td>
<td>L</td>
<td>12/78</td>
<td>150</td>
<td>6 mm.</td>
<td>100 hrs.</td>
</tr>
<tr>
<td>O2 - 1978 Productivity tracking</td>
<td>L</td>
<td>12/78</td>
<td>50</td>
<td>2 mm.</td>
<td>30 hrs.</td>
</tr>
<tr>
<td>O3 - 1978 Xerox University Affairs</td>
<td>L</td>
<td>12/78</td>
<td></td>
<td>.5 mm.</td>
<td>5 hrs.</td>
</tr>
<tr>
<td>C - 1978 Conferences</td>
<td>HLPMRU</td>
<td>12/78</td>
<td></td>
<td>3 mm.</td>
<td>0 hrs.</td>
</tr>
<tr>
<td>V - 1979 Vacations</td>
<td>HLPMRU</td>
<td>12/79</td>
<td></td>
<td>3.5 mm.</td>
<td>0 hrs.</td>
</tr>
<tr>
<td>O1 - 1979 Group management</td>
<td>L</td>
<td>12/79</td>
<td>150</td>
<td>6 mm.</td>
<td>100 hrs.</td>
</tr>
<tr>
<td>O2 - 1979 Productivity tracking</td>
<td>L</td>
<td>12/79</td>
<td>50</td>
<td>2 mm.</td>
<td>30 hrs.</td>
</tr>
<tr>
<td>O3 - 1979 Xerox University Affairs</td>
<td>L</td>
<td>12/79</td>
<td></td>
<td>.5 mm.</td>
<td>5 hrs.</td>
</tr>
<tr>
<td>C - 1979 Conferences</td>
<td>HLPMRU</td>
<td>12/79</td>
<td></td>
<td>3 mm.</td>
<td>0 hrs.</td>
</tr>
</tbody>
</table>
**Totals**: 400 pages, 30 mm., 270 hrs.
Schedules
**1978**
<table>
<thead>
<tr>
<th>Year</th>
<th>Month</th>
<th>Horsley</th>
<th>Jalics</th>
<th>Lauer</th>
<th>Lynch</th>
<th>McJones</th>
<th>Purcell</th>
<th>Redell</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td>P26</td>
<td>P13</td>
<td>P9</td>
<td>D6</td>
<td>P12</td>
<td>P11</td>
<td>P7</td>
</tr>
<tr>
<td></td>
<td></td>
<td>S5</td>
<td>VC</td>
<td>P10</td>
<td>D7</td>
<td>S5</td>
<td>S6</td>
<td>P6</td>
</tr>
<tr>
<td></td>
<td></td>
<td>VC</td>
<td></td>
<td>P16</td>
<td>S5</td>
<td>VC</td>
<td>VC</td>
<td>S6</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P14</td>
<td>P8</td>
<td>S2</td>
<td>D1</td>
<td>P14</td>
<td>P16</td>
<td>S5</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P19</td>
<td></td>
<td></td>
<td>D20</td>
<td>P19</td>
<td>P19</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td>P15</td>
<td></td>
<td></td>
<td>D20</td>
<td>P15</td>
<td>P15</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td>P16</td>
<td></td>
<td></td>
<td>D1</td>
<td>P16</td>
<td>P16</td>
<td></td>
</tr>
</tbody>
</table>
**Alto hours**
**1979**
<table>
<thead>
<tr>
<th>Year</th>
<th>Month</th>
<th>Horsley</th>
<th>Jalics</th>
<th>Lauer</th>
<th>Lynch</th>
<th>McJones</th>
<th>Purcell</th>
<th>Redell</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td>P26</td>
<td>P13</td>
<td>P9</td>
<td>D6</td>
<td>P12</td>
<td>P11</td>
<td>P7</td>
</tr>
<tr>
<td></td>
<td></td>
<td>S5</td>
<td>VC</td>
<td>P10</td>
<td>D7</td>
<td>S5</td>
<td>S6</td>
<td>P6</td>
</tr>
<tr>
<td></td>
<td></td>
<td>VC</td>
<td></td>
<td>P16</td>
<td>S5</td>
<td>VC</td>
<td>VC</td>
<td>S6</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P14</td>
<td>P8</td>
<td>S2</td>
<td>D1</td>
<td>P14</td>
<td>P16</td>
<td>S5</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P19</td>
<td></td>
<td></td>
<td>D20</td>
<td>P19</td>
<td>P19</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td>P15</td>
<td></td>
<td></td>
<td>D20</td>
<td>P15</td>
<td>P15</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td>P16</td>
<td></td>
<td></td>
<td>D1</td>
<td>P16</td>
<td>P16</td>
<td></td>
</tr>
</tbody>
</table>
Special Hardware
One Interim Rigid Disk Controller (IRDC) and two (2) Model 31 disk drives
One extra memory board
Dependencies
Delivery to the Pilot group of an accepted D0 (model 4C see memo from W. Klein dated April 19, 1978 entitled "STAR Venture Program Update") - by June 1, 1978
IRDC - dual model 31's
D0 ethernet
Alto compatible keyboard, mouse, and display for initial debugging
D0 Alto environment
Debugger - D0 debugger received by July 1, 1978
Mesa features
Loader (copy global frame) - June 1, 1978
Loader (new global frame) - July 15, 1978
BootMesa - June 1, 1978
Process structure - May 1, 1978
D0 compiler - October 1, 1978
Global frame size reduction - October 1, 1978
D0 Mesa Princeops environment - October 1, 1978
Default parameters in D0/Mesa - October 1, 1978
Cross Debugger - *October 1, 1978*
**Tools features**
Program librarian - *Installed and operational by May 1, 1978*
Compiler server - *Installed and operational by June 1, 1978*
Consistent Compilation tool - *Installed and operational by Oct. 1, 1978*
Availability of the Mesa group for consultation in the construction of the frame allocation and other traps.
Mesa group - Collection and delivery of those Mesa modules (such as the Signaller) to be integrated into Pilot.
Receipt by June 1, 1978 of the IOCS and device driver implementations.
Receipt by June 1, 1978 of the Ethernet, Xerox Wire, and RS232C device driver modules from the communications group.
Receipt by June 1, 1978 of the microcode for booting.
Implementation for co-delivery with Pilot 2.0 of Scavenger, Movedisk, and Copydisk programs.
**Disk Driver Integration**
To minimize the schedule risks inherent in sequential development, the following approach will be taken to the development of the Pilot file system and the disk driver software. During the initial development of the Pilot file system, a dummy disk driver will be constructed which stores the files in unoccupied real memory. Following this, another disk driver will be constructed which stores the files in the Alto BFS system, in a fashion similar to the way files are implemented in Pilot 1.0. Finally, the real disk driver supplied by the I/O group will replace these interim drivers and be integrated with the system. This approach decouples the debugging of the Pilot disk drivers from the development of the rest of Pilot.
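The interim-driver layering described above can be sketched as follows. This is a hypothetical illustration in modern Python, not Pilot's actual interfaces; the class and method names are ours:

```python
class Driver:
    """Minimal block-device interface the file system is written against."""
    def read(self, page: int) -> bytes:
        raise NotImplementedError
    def write(self, page: int, data: bytes) -> None:
        raise NotImplementedError

class MemoryDriver(Driver):
    """Interim 'dummy' driver: stores file pages in unoccupied real memory."""
    def __init__(self):
        self.pages = {}
    def read(self, page):
        return self.pages.get(page, b"")
    def write(self, page, data):
        self.pages[page] = data

class FileSystem:
    """Debugged first against MemoryDriver, then rebound unchanged to the
    Alto-BFS-backed driver and finally to the real disk driver."""
    def __init__(self, driver: Driver):
        self.driver = driver
    def store(self, page, data):
        self.driver.write(page, data)
    def fetch(self, page):
        return self.driver.read(page)

fs = FileSystem(MemoryDriver())   # phase 1: dummy driver
fs.store(0, b"pilot")
assert fs.fetch(0) == b"pilot"
```

Because the file system depends only on the driver interface, swapping in the real driver requires no file-system changes, which is the decoupling the plan relies on.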
**Unresolved Issues**
How much of the Alto/Mesa System shall we convert to run with Pilot? How will this be done?
What will be the mechanism by which customer software is updated after delivery? What impact does this have on the requirements for the Pilot Install feature?
**Releases**
Alto/Pilot - Pilot 1.0 - Dec. 15, 1977
Pilot 2.0 Alpha test - July 30, 1978
Pilot 2.0 - August 30, 1978
Pilot 2.1 - December 1, 1978
Pilot 3.0 - March 1, 1979
This is IT, the Star 1 release Pilot
Pilot 3.1 - June 1, 1979
Pilot 3.2 - Sept. 1, 1979
Pilot 4.0 - Dec. 1, 1979
Reviews
- Pilot Functional Specifications: September 15, 1977
- Pilot Design Specifications: June 1, 1978
- Pilot Test Specifications: June 1, 1978
- Pilot Release Plan: September 15, 1979
Milestones
All reviews and all releases
Collaborative, Dynamic & Complex Systems
Modeling, Provision & Execution
Vasilios Andrikopoulos, Santiago Gómez Sáez, Dimka Karastoyanova and Andreas Weiss
Institute of Architecture of Application Systems, University of Stuttgart, Stuttgart, Germany
Keywords: Collaborative, Dynamic & Complex Systems, Service Orchestration & Choreography, Pervasive Computing, Service Networks, Context-awareness.
Abstract: Service orientation has significantly facilitated the development of complex distributed systems spanning multiple organizations. However, different application areas approach such systems in domain-specific ways, focusing only on particular aspects relevant for their application types. As a result, we observe a very fragmented landscape of service-oriented systems, which does not enable collaboration across organizations. To address this concern, in this work we introduce the notion of Collaborative, Dynamic and Complex (CDC) systems and position them with respect to existing technologies. In addition, we present how CDC systems are modeled and the steps to provision and execute them. Furthermore, we contribute an architecture and prototypical implementation, which we evaluate by means of a case study in a Cloud-enabled context-aware pervasive application.
1 INTRODUCTION
Complex software systems, in which multiple independent partners and software components collaborate in order to achieve one or more goals, are predominant in the current IT landscape. Examples of such systems from different domains include business applications targeting the enactment of complex business transactions and service networks, scientific workflows providing one approach to scientific experimentation in eScience, and pervasive systems representing one flavor of ubiquitous computing. Based on our research work towards building support systems for the development and execution of such applications in these domains, we conclude that while all the above-mentioned application areas concentrate on creating complex systems with very specific features critical for the corresponding domain, there are requirements valid across all domains. Our experience also shows that synergies between these domains can be exploited and potential benefits realized through the reuse of research results and available software systems.
In this respect, in this work we investigate the requirements towards software systems in the above-mentioned application areas with the purpose of identifying overlaps and differences. As we are going to show, the overlaps are significant, and the differences are mainly due to the special focus on critical aspects in each domain, not because the solutions are irrelevant in the other domains. Based on these findings we introduce the innovative notion of Collaborative, Dynamic and Complex (CDC) systems, aiming to cover all identified requirements and to allow already existing technologies and software systems to be applied. CDC systems exhibit the aspects of modeling, provision and execution.
The contributions of this work target enabling the modeling, provision and execution of CDC systems and can be summarized by:
- The synthesis of existing technologies and approaches from the service-oriented computing paradigm and beyond, into a new, unified type of Collaborative, Dynamic and Complex (CDC) systems.
- The specification of the architecture for a framework that supports the various aspects (modeling, provision and execution) of CDC systems.
- The realization of this framework into a prototype called CoDyCo, and its use for the evaluation of our approach based on a case study.
The remaining paper is structured as follows. Section 2 looks into different application areas that deal with systems relevant to this work, in order to highlight their similarities and establish the minimum set
of requirements for our work. Section 3 presents our proposal for CDC systems and positions them with respect to existing approaches. Section 4 introduces the architecture of a CDC-supporting framework required for implementing CDC systems. Section 5 reports on the overview and current state of CoDyCo. Section 6 summarizes a case study we performed using CoDyCo for purposes of evaluation. Finally, Section 7 presents related works, and Section 8 concludes the paper.
2 MOTIVATION
In the following, we look into the areas of pervasive systems, service networks and scientific workflows. Our experience in research projects shows that, despite the differences, the available approaches from these areas have many commonalities.
Pervasive Systems. Pervasive systems strive towards enabling the paradigm of ubiquitous computing and have been a subject of interdisciplinary research. Advances in pervasive systems have focused on the aspect of context-awareness, i.e. taking into account the context of physical and virtual entities, which is in fact a view of the physical environment, and the influence of the context on the applications the entities are using or participating in (Baldauf et al., 2007). A major requirement in these systems is the ability to adapt their behavior with respect to the context. Another major challenge is the optimization of the distribution of applications based on context data and resource consumption. The distribution of pervasive applications across multiple software systems and hardware devices requires their integration and coordination towards enabling a collaboration among the participating devices and systems. Due to the dynamic characteristics of the environment of pervasive applications, with participants and devices appearing and disappearing constantly, supporting context sharing, adaptation, and scalability is particularly challenging. Recently, Cloud platforms have been investigated in the scope of the Internet of Things and Smart Systems initiatives from the point of view of enabling scalability, multi-tenancy and adaptability (Distefano et al., 2012). Later in this paper, we present a detailed description of a pervasive context-aware application for booking the taxi nearest to the location of a user, based on the information provided by mobile devices available to the taxi drivers. We also use the taxi application as a case study for purposes of evaluation.
Service Networks. Service Networks (SNs) (Caswell et al., 2008) are considered a specialized view on business processes, which focuses on assisting business experts in evaluating the value of participating in a collaborative business activity. SNs are modeled as networks of business services exchanging offerings; basically, the composite perceived value of the offerings exchanged with the other services determines the value of participating in the network for one participant. Typical examples of SNs are supply chains; the taxi application described later can also be viewed as an SN. There is a significant gap between the meta-models used by business experts when designing SNs and their technological realization; this gap needs to be bridged by means of software engineering techniques like model-driven development and code generation, whereby both top-down and bottom-up approaches are required. In addition, service networks are inherently collaborative activities and therefore imply efforts towards the integration of applications across organizations.
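The notion of a participant's perceived value from exchanged offerings can be illustrated with a small sketch. The additive value model below and the taxi-flavored edge values are our own simplifying assumptions, not taken from the cited work:

```python
def participant_value(offerings, participant):
    """offerings: (provider, consumer, value) edges of the service network.
    A participant's perceived value is modeled, simplistically, as the
    value of offerings received minus the value of offerings provided."""
    value = 0.0
    for provider, consumer, v in offerings:
        if consumer == participant:
            value += v
        if provider == participant:
            value -= v
    return value

# Hypothetical taxi service network: availability offering, booking, fee.
edges = [
    ("driver", "dispatcher", 5.0),
    ("dispatcher", "customer", 8.0),
    ("customer", "dispatcher", 6.0),
]
assert participant_value(edges, "dispatcher") == 3.0
assert participant_value(edges, "customer") == 2.0
```

A participant whose value turns negative under such a model would have an incentive to renegotiate offerings or leave the network, which is the kind of analysis SN modeling targets.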
A SOA-based realization of service networks, as well as a meta-model and graphical notation, are presented in (Danylevych et al., 2010) and (Bitsaki et al., 2008). The interoperability of service implementations in an SN has been addressed by means of Web services, and the high-level meta-model has been mapped onto choreographies of composite services (i.e. organization-specific business processes). Additionally, choreographies take over the role of coordinating the services in a network, which addresses another important requirement. Changes in the perceived value of a network to a participant may initiate changes in the individual partners or in the network as a whole, which have to be propagated to their technological realization. We therefore identify the need for adaptation of service networks; some preliminary attempts to support only some types of service network adaptation (Wagner et al., 2012) are already available. The value of an SN for a participant is not directly measurable, but can only be derived from monitoring data provided by the execution environment for choreographies, orchestrations and services. Approaches based on business activity monitoring, like (Guinea et al., 2011) and (Wetzstein et al., 2012), are only first steps towards the necessary technological support.
Scientific Workflows. Scientific workflows enable the modeling and execution of scientific experiments
and are part of the technology landscape in eScience (Sonntag and Karastoyanova, 2010). A major requirement in this field is first and foremost the user-friendliness of the approach, so that scientists do not face a high learning burden when using the experiment modeling tools. The division between the way scientists model an experiment and the meta-models used in the supporting IT systems is significant, and there are different approaches towards eliminating it (Sonntag and Karastoyanova, 2010). Both top-down and bottom-up approaches are required to enable the use of existing software and the development of experiments from scratch. The distributed nature of complex scientific experiments requires the integration and composition of scientific computing software, which presents an additional challenge due to the lack of clearly defined software engineering principles for building scientific applications, including scientific workflows. Reusability is hampered by the heterogeneous landscape of applications, and integration is of high complexity because of the large number of available techniques for composition. Since scientific discovery is based on exploring physical phenomena, huge amounts of data are collected via numerous types of mobile devices and sensors (e.g. for simulations of the distribution of CO₂ in the soil, weather forecasts, biological system simulations, simulations of manufacturing systems, etc.), and need to be processed. Computations in scientific workflows are time-consuming and also distributed, and most often they do not exhibit characteristics of pervasive applications. Adaptation during the modeling and execution of scientific workflows is a must, as evidenced by existing work (Sonntag and Karastoyanova, 2010), (Sonntag and Karastoyanova, 2012).
Despite the different focus of the application systems described above, they all exhibit overlapping characteristics that can be leveraged in a unified manner across the various areas. The following section presents our proposal toward this goal.
3 CDC SYSTEMS
We define Collaborative, Dynamic and Complex (CDC) systems as distributed systems enabling collaboration among participants across different organizations. Participants of CDC systems are services, representing software systems of different granularity, virtual and physical devices, and individuals. CDC participants join and leave the system at will in order to fulfill their individual goals. CDC systems are capable of adapting with respect to different triggers in the system and/or in their environment. CDC systems potentially consist of a large number of participants dealing with large amounts of data as part of multiple interactions between them, following one or more coordination protocols. CDC systems have three fundamental aspects: Modeling, Provision and Execution.
With respect to modeling, we use choreographies to define the high-level, domain-specific models of CDC systems. Choreographies describe the interaction protocol of the involved participants and the definitions of the participant roles. In SOA environments, individual participant roles are implemented by service orchestrations exposed as services, whose service interfaces are compliant with the participant role definitions modeled in the choreography. The services composed by the orchestrations are either available in the software landscape of the participating organizations, or are discoverable in global service registries. Utilizing these SOA-based approaches provides a flexible way of composing applications in complex systems and facilitates application integration. To enable context-awareness, choreographies and orchestrations, as well as the involved services, have to incorporate context information in their models and define its use and the reaction to potential changes. Since context information may be part of the correlation data of orchestrations belonging to an enacted choreography, a mapping between context and correlation mechanisms has to be in place.
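As a simple illustration of the context-to-correlation mapping mentioned above, consider the following sketch. Keying orchestration instances by a tuple of context properties is our own assumption for illustration, not a normative part of the CDC model, and the names are invented:

```python
class OrchestrationInstance:
    """One running instance of a participant's orchestration."""
    def __init__(self, key):
        self.key = key
        self.inbox = []

class CorrelationRouter:
    """Routes messages to orchestration instances using selected context
    properties as the correlation key."""
    def __init__(self, properties):
        self.properties = properties       # context fields forming the key
        self.instances = {}

    def key_of(self, message):
        return tuple(message[p] for p in self.properties)

    def route(self, message):
        key = self.key_of(message)
        inst = self.instances.setdefault(key, OrchestrationInstance(key))
        inst.inbox.append(message)
        return inst

router = CorrelationRouter(["tenant", "booking_id"])
a = router.route({"tenant": "t1", "booking_id": 7, "event": "request"})
b = router.route({"tenant": "t1", "booking_id": 7, "event": "confirm"})
assert a is b and len(a.inbox) == 2        # same context, same instance
c = router.route({"tenant": "t2", "booking_id": 7, "event": "request"})
assert c is not a                          # different tenant, new instance
```

The point of the sketch is that once context properties double as correlation data, a context change that touches a key field must be handled explicitly, which is why the mapping between the two mechanisms has to be defined in the model.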
Performance indicators, like KPIs, utility, value, etc. are an inseparable part of the CDC system models. On the one hand, they are used to define the indicators according to which users will measure and evaluate whether they achieve their goals in a collaboration. On the other hand, this is the information needed to derive the data to be monitored during the execution of the CDC system. Therefore choreographies, orchestrations and services models have to contain elements defining the necessary monitoring information. In order to enable the dynamic features of CDC systems, constructs accommodating adaptation mechanisms in the choreographies and orchestrations have to be incorporated. Available approaches from the fields of workflow adaptation, flexible scientific workflows and pervasive dynamic flows, e.g. (Wetzstein et al., 2012) or (Sonntag and Karastoyanova, 2010) can be applied individually or in combination. Change propagation across all levels of the CDC systems and thus adaptation of choreographies can be identified as a major research challenge.
As identified in Section 2, two types of modeling approaches are required: top-down and bottom-up. Top-down CDC system modeling entails starting the development of the system with a choreography representing a realization of a high-level (domain-specific) model, like an SN, a scientific workflow, or a pervasive application. Techniques required to map the choreography into orchestrations and services, like code generation and transformations, are available from software engineering and various existing SOA-enabling systems. The bottom-up approach involves deriving a meaningful choreography model based on existing orchestrations and/or services. In this case, deriving fault handling, monitoring and adaptation information is based on the corresponding capabilities of the involved services and on the correctness of the derived choreography.
The provisioning aspect of CDC systems entails the provision of the choreography, which in turn requires the deployment of orchestrations onto execution engines, their provisioning as services, populating the system with the corresponding context and correlation data, and configuring the monitoring infrastructure with the requirements from the CDC model. Mechanisms for the provisioning of orchestrations and services are already available in service composition systems and scientific workflows. Solutions for mapping monitoring requirements to monitoring probes are available in pervasive systems and service-based applications. The provision of a choreography results in adaptive and context-aware orchestrations available as services. The choreography can be initiated multiple times for multiple interactions and can be started by any of the participating orchestrations or services, if they are allowed to do so by the choreography definition. Any underlying infrastructure should therefore enable the sharing of resources across different CDC systems while correlating interactions to tenants and their users.
Running a choreography is therefore realized as a distributed execution of the collaboration among the participating orchestrations and services. Since context-awareness is inherent to the CDC system model, the execution environment has to be able to support this property. Adaptation mechanisms predefined in the system model (like abstract activities, binding strategies for services, reactions to context change, etc.) and ones that are orthogonal to the model (like manual adaptation, forced termination, substituting a service endpoint, etc.) need to be realized by the execution environment. Furthermore, the execution environment of CDC systems must scale with the number of participants and their interactions, as well as with the volumes of data exchanged. Monitoring information is necessary in order to enable such scaling.

4 CDC FRAMEWORK ARCHITECTURE
Figure 1 provides an overview of our proposal for a framework supporting the modeling, provision and execution of CDC systems. Starting from the modeling aspect, a Choreography Editor is required to create, visualize and manage the choreography models of the CDC systems. A Transformer component can then either generate orchestration templates that the CDC participants are meant to implement (in the top-down approach of the previous section), or derive possible choreographies from existing orchestrations (in the bottom-up approach). In either case, an Orchestration Editor (not necessarily, but preferably, in the same environment as the Choreography Editor) should be available for orchestration visualization and manipulation. The Transformer component requires as input the service descriptions of the used orchestrations in the bottom-up approach, or generates (abstract) service descriptions for the derived orchestrations. Moving to the provisioning aspect, the Deployment Manager allows the assignment of the computational resources necessary for operation to the orchestrations involved in the modeled choreographies. Beyond physically deploying the necessary artifacts on an Execution Engine, this additionally entails the creation of all service endpoints necessary for accessing the orchestration logic by the system participants. The Deployment Manager also handles the information needed for late and dynamic binding to concrete service endpoints and provides it to the ESB during the execution of orchestrations.
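The late and dynamic binding handled by the Deployment Manager can be sketched as a minimal registry: orchestrations refer to logical service names, which are resolved to concrete endpoints at invocation time. The names and the registry shape below are illustrative assumptions, not the component's actual interface:

```python
class DeploymentManager:
    """Maps logical service names to concrete endpoints; the ESB queries
    it at invocation time, so endpoints can be re-bound without
    redeploying the orchestrations that use them."""
    def __init__(self):
        self.bindings = {}

    def bind(self, logical_name, endpoint):
        self.bindings[logical_name] = endpoint

    def resolve(self, logical_name):
        try:
            return self.bindings[logical_name]
        except KeyError:
            raise LookupError(f"no endpoint bound for {logical_name!r}")

dm = DeploymentManager()
dm.bind("TaxiBooking", "http://nodeA/booking")
assert dm.resolve("TaxiBooking") == "http://nodeA/booking"
dm.bind("TaxiBooking", "http://nodeB/booking")   # re-bind, e.g. adaptation
assert dm.resolve("TaxiBooking") == "http://nodeB/booking"
```

Resolving at invocation time rather than deployment time is what allows adaptation actions such as endpoint substitution to take effect on already-running orchestrations.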
In principle, multiple organizational domains may be using the same instantiation of this framework for different CDC systems. It is therefore necessary to
offer multi-tenancy capabilities out of the box for all components in the provision and execution aspect of the framework. A Tenant Manager is responsible for this role, and implements administration and management capabilities for existing and new tenants (organizational domains) and their users (individuals or sub-systems in the same domain). The Tenant Manager is also meant to implement access control to both choreography and orchestration models, and to the computational resources corresponding to them during the execution of CDC systems, as assigned to them by the Deployment Manager. Only authorized parties should be allowed, for example, to participate in a given choreography. Furthermore, any collected contextual information relevant for tenants and users in terms of a representation of their environment, e.g. their physical location or the quality of observed data, is stored and accessed through the Tenant Manager.
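The Tenant Manager's access-control role described above can be sketched as follows. The class and rule shapes are our own illustrative assumptions, not the component's actual API:

```python
class TenantManager:
    """Scopes users to tenants and gates participation in choreographies."""
    def __init__(self):
        self.users = {}        # user -> tenant (organizational domain)
        self.acl = {}          # choreography -> set of authorized tenants

    def register(self, user, tenant):
        self.users[user] = tenant

    def authorize(self, choreography, tenant):
        self.acl.setdefault(choreography, set()).add(tenant)

    def may_participate(self, user, choreography):
        tenant = self.users.get(user)
        return tenant in self.acl.get(choreography, set())

tm = TenantManager()
tm.register("alice", "taxiCo")
tm.authorize("BookTaxi", "taxiCo")
assert tm.may_participate("alice", "BookTaxi")
assert not tm.may_participate("bob", "BookTaxi")   # unknown user denied
```

Keeping the user-to-tenant mapping in one place also gives context data a natural scope: contextual information stored per tenant is only visible through checks like the one above.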
While the Deployment and Tenant Managers play prominent roles in the provision aspect of CDC systems, they are also heavily involved during CDC system execution, since both of them need to interact with the actual Execution Engine that runs the orchestrations defined during modeling. Furthermore, the Execution Engine has to provide fault handling capabilities, both for pre-defined fault and compensation handlers in the orchestration models, and for failures during execution like service failures and unavailability of other components in the framework (e.g. access to the Deployment Manager).
The Adaptation Manager is responsible for triggering and managing the adaptivity features of CDC systems by providing mechanisms for different types of adaptations across the levels of the systems. It implements and/or coordinates the actions necessary to enable the adaptation constructs from the CDC system model, as well as the ones implemented only on the level of the execution environment. The Adaptation Manager also collaborates with the Deployment Manager when necessary, e.g. for re-binding service endpoints, and with the Execution Engine, e.g. for injecting a new activity and control connectors into an existing orchestration, or for deploying a new orchestration in case a choreography has been changed. The Adaptation Manager acts on information provided by the Monitor component, which monitors and analyses the behavior and performance of the executed orchestrations, of the enacted choreographies, and also of the execution components in the framework. The Monitor must be configurable based on the monitoring information required for the CDC system, and is responsible for providing to the users of choreographies and orchestrations personalized views of the relevant monitoring information on their devices.
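The interplay between the Monitor and the Adaptation Manager can be sketched as a publish/subscribe loop. The event shape and the single re-binding rule below are illustrative assumptions, not the actual event model:

```python
class Monitor:
    """Publishes engine events to its subscribers."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, fn):
        self.subscribers.append(fn)
    def publish(self, event):
        for fn in self.subscribers:
            fn(event)

class AdaptationManager:
    """Subscribes to monitoring events and records adaptation actions."""
    def __init__(self, monitor):
        self.actions = []
        monitor.subscribe(self.on_event)
    def on_event(self, event):
        # example rule: re-bind a service when one of its invocations fails
        if event["type"] == "service_fault":
            self.actions.append(("rebind", event["service"]))

mon = Monitor()
am = AdaptationManager(mon)
mon.publish({"type": "activity_completed", "service": "Booking"})
mon.publish({"type": "service_fault", "service": "Booking"})
assert am.actions == [("rebind", "Booking")]
```

Decoupling the two through event subscription is what lets the same monitoring stream drive personalized user views and automated adaptation rules alike.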
Leveraging the SOA paradigm, all components in the framework relevant to execution should be provided as services and communicate through an Enterprise Service Bus (ESB) solution to facilitate their integration. Furthermore, each component should be designed and implemented allowing for both types of scalability: horizontal (modify number of available instances as required) and vertical (adjustment of available computational resources for each component) (Vaquero et al., 2011).
5 IMPLEMENTATION
In this section we present CoDyCo, a realization approach of the CDC-supporting framework presented in the previous section. As shown in Fig. 2, two separate environments are distinguished in the CoDyCo architecture: a Modeling and Monitoring Toolset, and a Runtime Environment.
Starting from the Modeling and Monitoring Toolset, CDC choreographies are specified in the BPEL4Chor language using our BPEL4Chor Designer. BPEL4Chor was first introduced in (Decker et al., 2008) as an extension of the WS-BPEL language (OASIS, 2007), stemming from the business transactions field. However, BPEL4Chor choreographies are by definition not executable, and therefore the transformation of BPEL4Chor definitions into WS-BPEL process (orchestration) skeletons (Reimann, 2007) is supported in CoDyCo by a BPEL4Chor Transformer (Fig. 2). Based on the choreography topology, the participants’ grounding definitions, i.e. their WSDL interfaces, and their message links, this transformation generates the executable BPEL orchestrations for each participant in the choreography. The skeletons only model the interactions between partners, so that together they can enact the choreography. Manual refinements can be performed on the created orchestrations using our Mayflower BPEL Designer (Sonntag et al., 2012), developed in the context of the SimTech project² as an Eclipse-based BPEL designer. These refinements allow defining specific process logic for each participant, for example by reusing predefined process fragments from a process fragment library, as demonstrated in (Sonntag et al., 2012) and (Schumm et al., 2010). Both the choreography definition and transformation functionalities are wrapped as an Eclipse Graphical Editor, which provides a palette with the graphical elements of the choreography language (Weiß et al., 2013). Only the top-down modeling approach (see
---
2The SimTech project: http://www.simtech.uni-stuttgart.de
Section 3) is currently supported by the Modeling and Monitoring Toolset; supporting the bottom-up approach is part of our future work. For more information on the status of the tools the interested reader is referred to (Weiß et al., 2013).
The deployment and instantiation of the BPEL processes generated by the Modeling and Monitoring Toolset is done on our Mayflower BPEL Engine. This engine is an extended version of the open-source Apache ODE engine. It provides an interface for event publishing and configurable filtering and a BPEL event model (Khalaf et al., 2007), which have been specialized for the purposes of monitoring and triggering dynamic adaptation of process instances (Sonntag et al., 2012). The functionalities provided by the Mayflower BPEL Engine address the requirements of the CDC execution environment in terms of orchestration execution, storing of historical information, and failure and compensation handling, and in combination with the Adaptation Manager they also enable some dynamic adaptation patterns. For instance, dynamic adaptation of process instances triggered by humans is supported in CoDyCo through the Modeling and Monitoring Toolset. More specifically, the Mayflower BPEL Designer interacts with the Mayflower BPEL Engine and the Adaptation Manager, allowing users to view the status of process instances and to trigger their adaptation. Users can then adapt a process instance by changing its graphical representation.
The adaptation operations that can be performed on a process instance in our current implementation include, e.g., re-execution or forced iteration of activities (Sonntag and Karastoyanova, 2012), insertion and deletion of process elements, and their substitution. The changes made on the viewed process instance are then propagated to the Adaptation Manager component, which is responsible for performing the actual adaptation on the concrete process instance in the engine. This is made possible by auxiliary functions in the Mayflower BPEL Designer, mirrored in the Mayflower BPEL Engine, such as user subscriptions to monitoring events published by the engine (and hence retrieval of real-time information about a concrete process instance) and built-in actions per process instance such as stop, suspend, and resume (Sonntag et al., 2012). Automatic adaptation of running process instances, such as the injection of process fragments, is currently not supported by the Adaptation Manager in CoDyCo.
Communication between participants collaborating in the scope of a specific choreography instance, and between the different components comprising the runtime environment, is established through the multi-tenant open-source ESB solution ESB\textsuperscript{MT} (Strauch et al., 2012a), (Strauch et al., 2012b). ESB\textsuperscript{MT} enhances an existing ESB solution, Apache ServiceMix, with multi-tenant awareness both at the administration and management level and at the messaging level. Management and administration functionalities implemented by a Tenant Manager enable the dynamic deployment and configuration of service endpoints with tenant- and user-specific information, while tenant-aware messaging capabilities isolate tenants' messages routed to the service endpoints (Strauch et al., 2012b). As tenants represent organizational units, e.g. Taxi Company A, which has N taxi drivers,
---
3 ODE PGF: http://www.iaas.uni-stuttgart.de/forschung/projects/ODE-PGF/
4 Note that this is possible only manually, as described above.
5 Apache ServiceMix: http://servicemix.apache.org/
and M potential taxi customers, their communication in the scope of a choreography is supported in the ESB\textsuperscript{MT} through tenant-aware service endpoints. At the moment we are working on the integration of our execution engine with the ESB\textsuperscript{MT} as a JBI Service Engine, to enable multi-tenancy support for orchestrations and choreographies, too.
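The tenant-aware routing described above can be illustrated with a minimal sketch. The class and method names below are hypothetical and not part of ESB\textsuperscript{MT} or Apache ServiceMix; the sketch only shows the isolation idea: endpoints are registered per tenant, and a message is delivered only to endpoints of its own tenant.

```python
# Illustrative sketch of tenant-aware message routing (hypothetical API,
# not the actual ESB^MT implementation).
from collections import defaultdict

class TenantRouter:
    def __init__(self):
        # tenant_id -> {service name: handler}
        self.endpoints = defaultdict(dict)

    def register(self, tenant_id, service, handler):
        # dynamic deployment of a tenant-specific service endpoint
        self.endpoints[tenant_id][service] = handler

    def route(self, tenant_id, service, message):
        # messages of one tenant never reach another tenant's endpoints
        handler = self.endpoints[tenant_id].get(service)
        if handler is None:
            raise LookupError(f"no endpoint for tenant {tenant_id!r}/{service!r}")
        return handler(message)
```

A message sent on behalf of Taxi Company A is thus handled by Company A's endpoint only, even if Company B has registered a service under the same name.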
The Management Dashboard is the component in the Modeling and Monitoring Toolset responsible for providing analyzed and personalized monitoring information to users and tenants about the execution state of choreographies and orchestrations. Our prototype visualizes the execution status of orchestrations, the data they produce and consume, and the adaptations that have been performed (Sonntag et al., 2012). This is possible due to the interaction of the Management Dashboard with the Mayflower BPEL Engine via its event publishing interface, as described above. Our approaches for monitoring choreographies (Wagner et al., 2012) and KPIs (Wetzstein et al., 2012) are not yet integrated into CoDyCo.
The Context Management System stores the system context (i.e., a set of context properties of all active tenants) and constantly synchronizes its current configuration with the tenant-specific applications by retrieving context data from each of the tenant users created in the system, e.g., from a location provider embedded in a specific taxi cab. Context information is used in the execution of choreographies by the corresponding orchestrations through Context Integration Processes (CIPs) (Wieland et al., 2007) and may trigger their adaptation. Context-aware adaptations will be handled by a collaboration of the Adaptation Manager, the Mayflower BPEL Engine, the Context Management System (CMS), and the Tenant Manager; this is part of our future work on the implementation. The modeling tool also currently lacks features supporting the modeling of context in choreographies.
6 CASE STUDY
In the scope of project 4CaaSt\textsuperscript{6}, the Taxi Scenario use case has been defined, where a service provider offers a taxi management software as a service to different taxi companies, i.e., tenants. Taxi company customers, who are the users of the tenant, submit their taxi transportation requests to the company they are registered with. The taxi company uses the taxi management software to contact nearby taxi drivers. Once one of the contacted taxi drivers has confirmed the transportation request, the taxi management software sends a transport notification containing the estimated arrival time to the customer. As discussed in Section 2, the Taxi Scenario constitutes a pervasive context-aware application and therefore, in the scope of this work, an ideal candidate for the evaluation of our proposal.
Figure 3 shows the processes the Taxi Scenario application realizes. The simplified BPMN diagram has three lanes depicting the three participants of the application choreography: Customer, Taxi Company, and Taxi Service Provider. If a Customer wishes to book a taxi, he sends an initial request to the Taxi Company call center (usually through a Web GUI), which forwards it to the Taxi Service Provider. The Taxi Service Provider process determines the nearby available taxis and the contact information of the taxi drivers using CIPs (Wieland et al., 2007) (not shown here for brevity). Subsequently, the transport request is sent to each available taxi driver, and their responses are collected for a specified duration. The gathered transport information is sent back to the Customer. Implementing the Taxi Scenario required the manual design of all involved processes as orchestrations for each participant and their interactions, as well as the implementation of most services involved in them (except for those that already existed, e.g., a context provisioning framework exposed as a service (Knappmeyer et al., 2010)).
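The control flow just described can be summarized in a short sketch. All function names and the sample data are illustrative stand-ins, not part of the actual 4CaaSt implementation:

```python
# Hypothetical sketch of the Taxi Scenario control flow: request, nearby
# lookup, collection of driver confirmations, and the final notification.

def find_nearby_taxis(location):
    # stands in for the context-driven lookup performed via CIPs
    return [{'driver': 'driver-1', 'eta_minutes': 7},
            {'driver': 'driver-2', 'eta_minutes': 12}]

def collect_confirmations(taxis):
    # in the real choreography, driver responses are gathered for a
    # specified duration; here every contacted driver confirms at once
    return list(taxis)

def request_taxi(customer, location):
    # Customer -> Taxi Company -> Taxi Service Provider
    confirmations = collect_confirmations(find_nearby_taxis(location))
    if not confirmations:
        return None  # no taxi available
    best = min(confirmations, key=lambda t: t['eta_minutes'])
    # transport notification with the estimated arrival time
    return {'customer': customer, 'driver': best['driver'],
            'eta_minutes': best['eta_minutes']}
```

In the deployed system each of these steps is a message exchange between the choreography participants rather than a local function call.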
For the evaluation of our approach we started with the BPMN diagram in Fig. 3 which we translated into BPEL4Chor. Figure 4 shows the processes depicted in Fig. 3 as participants in our BPEL4Chor Designer. The rectangular shapes in the editor view in Fig. 4 stand for the choreography participants, whereas the message links and their directions are depicted by labeled arrows. The set of taxi drivers are represented by the Taxi Transmitters participant, standing for the devices carried by the drivers. Inside each participant, the control flow regarding its communication behavior such as receive or send activities is visible. The resulting choreography model was then transformed into a series of BPEL4Chor artifacts by the BPEL4Chor Designer: a participant topology specifying the involved participants in the choreography, the participant types, and the message links between participants.
The BPEL4Chor artifacts are used by the BPEL4Chor Transformer to generate Abstract BPEL processes and WSDL files. These WSDL files contain the technical information about the interfaces between the participants, i.e., the port types, operations, messages, and partner links. Each previously modeled participant was transformed into exactly one Abstract BPEL process. Basic executable completion of
\textsuperscript{6}The 4CaaSt project: http://www.4caast.eu
the Abstract BPEL processes, i.e. their transformation into executable ones, is supported by the BPEL4Chor Designer, as well as the manual refinement of the process logic in each participant that is not part of the choreography. The resulting (executable) processes were then deployed and executed successfully in our Mayflower BPEL Engine. Through this process we have also identified a number of technical issues with the current implementation of the BPEL4Chor Designer and Transformer (Weiß et al., 2013). For example, not all BPEL activities are currently supported, and the generated WSDL files need manual definition of the involved message types. We are already working on addressing these deficiencies.
7 RELATED WORK
As discussed in (Barker et al., 2009), the interaction between participants in a choreography can be modeled following the interaction, or interconnection modeling approaches. The former approach models atomic interactions between participants through interaction activities, while the latter interconnects the communication activities of each participant of the choreography. The WS-CDL\textsuperscript{7} language standard supports the interaction approach. Using the WS-CDL language as the basis, the Savara\textsuperscript{8} project aims to provide tooling support for a top-down choreography modeling approach. Interconnection modeling approaches are supported in the CHOREoS In-
\textsuperscript{7}WS-CDL: http://www.w3.org/TR/ws-cdl-10/
\textsuperscript{8}http://www.jboss.org/savara
The Open Knowledge framework employs a multi-agent protocol to control the interactions between participants in the choreography. Therefore, participants must be specified and deployed prior to the choreography enactment, and adaptation based on context modifications is not considered. As discussed in the previous sections, BPEL4Chor wraps the choreography specification in a layer atop WS-BPEL which contains the choreography control flow, the description of its participants and the message links between them, and the mapping support to their concrete communication descriptions (WSDL). BPEL4Chor does not support the explicit specification of rules for context-aware adaptation purposes, but it decouples the choreography specification from communication-specific details, enabling extensibility toward dynamic context-aware choreography adaptation.
Context-aware systems have been widely studied in the scope of Ubiquitous Computing. In (Baldauf et al., 2007) a set of context-aware systems is presented, and a comparison focusing on the architectural principles of context-aware middleware and frameworks that ease the development of context-aware applications is provided. The CoWSAMI middleware infrastructure utilizes Web services for managing location context in open ambient intelligence environments (Athanasopoulos et al., 2008). The utilization of an ESB as the central piece for communication support in context-aware systems is discussed in (Chanda et al., 2011), where a Context-aware ESB (CA-ESB) is proposed to discover and orchestrate services based on the users' location and the services available in specific regions.
Concerning different context views in pervasive environments, micro and macro context-awareness modeling approaches are presented in (Abdulrazak et al., 2010). The former describes the users' surroundings and aims to provide access to local context data, while the latter aggregates local context data to provide a global perspective of different spaces. Self-configuration operations in micro context-awareness models involve coordination of peers in a decentralized manner, making choreographies suitable for modeling the coordination between peers. Furthermore, in (Roy et al., 2008) high system availability is achieved by decentralizing the coordination of the entities collaborating in context construction and decision making activities in open intelligence spaces.
Context-aware workflows as an approach for easing the development of context-aware applications are presented in (Wieland et al., 2007). The authors propose Context4BPEL, a WS-BPEL extension for explicitly modeling the influence of context on workflows. However, WS-BPEL supports the orchestration of services within a business process, while choreography modeling approaches demand further semantic support for specifying process interactions from a global view. Further research on workflow flexibility has been conducted by integrating support for human interactions during the execution of scientific workflows (Karastoyanova et al., 2012). This approach triggers human interactions for non-automated activities via a framework supporting multi-protocol communication between a scientific workflow management system and pluggable communication devices. All these approaches focus on only one particular aspect of CDC systems.
8 CONCLUSIONS AND FUTURE WORK
Our investigation into different application areas like pervasive systems, service networks and scientific workflow systems that have been influenced by service-orientation illustrated a series of overlapping characteristics that have not been leveraged so far. Toward this purpose, in this work we introduced the notion of Collaborative, Dynamic and Complex (CDC) Systems as dynamic distributed systems that allow participants from different organizations to collaborate to fulfill their goals. We discussed three fundamental aspects of CDC systems: modeling, provision and execution, and presented the architecture of a framework that supports these aspects. We then showed the current status of the prototypical implementation of CoDyCo, a system that realizes this framework. A case study on a context-aware pervasive application was then presented for purposes of evaluating our proposal.
Currently, we are working on improving the state of the CoDyCo prototype by addressing the deficiencies identified during the case study. Future work is aimed at finalizing the different aspects of our proposal. Concerning the modeling and provision of CDC systems, the bottom-up modeling approach has to be realized, as well as enabling context-awareness in choreographies and orchestrations for both types of approaches. This also entails the realization of the Context Management System. In addition, multi-tenancy awareness has to be enabled for choreographies and orchestrations, and reflected in the execution engine. The Management Dashboard has to be integrated with approaches for monitoring KPIs and business transactions, which is a step towards enabling the monitoring of choreographies. In terms of adaptation, the available approaches for context-aware adaptation and automatic adaptation of orchestrations have to be integrated in CoDyCo. Finally, the scalability features of the CoDyCo components have to be investigated further in the scope of our Cloud computing research.
ACKNOWLEDGEMENTS
This work is funded by the projects FP7 EU-FET 600792 ALLOW Ensembles and the German DFG within the Cluster of Excellence (EXC310) in Simulation Technology.
REFERENCES
All links were last followed on January 29, 2014.
On the Semantics of Gringo
Amelia Harrison, Vladimir Lifschitz, and Fangkai Yang
University of Texas
Abstract
Input languages of answer set solvers are based on the mathematically simple concept of a stable model. But many useful constructs available in these languages, including local variables, conditional literals, and aggregates, cannot be easily explained in terms of stable models in the sense of the original definition of this concept and its straightforward generalizations. Manuals written by designers of answer set solvers usually explain such constructs using examples and informal comments that appeal to the user’s intuition, without references to any precise semantics. We propose to approach the problem of defining the semantics of GRINGO programs by translating them into the language of infinitary propositional formulas. This semantics allows us to study equivalent transformations of GRINGO programs using natural deduction in infinitary propositional logic.
1 Introduction
In this note, Gringo is the name of the input language of the grounder GRINGO, which is used as the front end in many answer set programming systems. Several releases of GRINGO have been made public, and more may be coming in the future; accordingly, we can distinguish between several “dialects” of the language Gringo. We concentrate here on Version 4, released in March of 2013. (It differs from Version 3, described in the User’s Guide dated October 4, 2010, in several ways, including the approach to aggregates—it is modified as proposed by the ASP Standardization Working Group.)
The basis of Gringo is the language of logic programs with negation as failure, with the syntax and semantics defined in [Gelfond and Lifschitz, 1988]. Our goal
1http://potassco.sourceforge.net/.
2The User’s Guide can be downloaded from the Potassco website (Footnote 1). It is posted also at http://www.cs.utexas.edu/users/vl/teaching/lbai/clingo_guide.pdf.
3https://www.mat.unical.it/aspcomp2013/ASPStandardization.
here is to extend that semantics to a larger subset of Gringo. Specifically, we would like to cover arithmetical functions and comparisons, conditions, and aggregates.\textsuperscript{4}
Our proposal is based on the informal and sometimes incomplete description of the language in the \textit{User’s Guide}, on the discussion of ASP programming constructs in [Gebser \textit{et al.}, 2012], on experiments with \textsc{gringo}, and on the clarifications provided in response to our questions by its designers.
The proposed semantics uses a translation from Gringo into the language of infinitary propositional formulas—propositional formulas with infinitely long conjunctions and disjunctions. Including infinitary formulas is essential, as we will see, when conditions or aggregates use variables ranging over infinite sets (for instance, over integers). The definition of a stable model for infinitary propositional formulas, given in [Truszczynski, 2012], is a straightforward generalization of the stable model semantics of propositional theories from [Ferraris, 2005].
The process of converting Gringo programs into infinitary propositional formulas defined in this note uses substitutions to eliminate variables. This form of grounding is quite different, of course, from the process of intelligent instantiation implemented in \textsc{gringo} and other grounders. Mathematically, it is much simpler than intelligent instantiation; as a computational procedure, it is much less efficient, not to mention the fact that sometimes it produces infinite objects. Like grounding in the original definition of a stable model [Gelfond and Lifschitz, 1988], it is modular, in the sense that it applies to the program rule by rule, and it is applicable even if the program is not safe. From this perspective, \textsc{gringo}'s safety requirement is an implementation restriction.
Instead of infinitary propositional formulas, we could have used first-order formulas with generalized quantifiers.\textsuperscript{5} The advantage of propositional formulas as the target language is that properties of these formulas, and of their stable models, are better understood. We may be able to prove, for instance, that two Gringo programs have the same stable models by observing that the corresponding infinitary formulas are equivalent in one of the natural deduction systems discussed in [Harrison \textit{et al.}, 2013]. We give here several examples of reasoning about Gringo programs based on this idea.
Our description of the syntax of Gringo disregards some of the features related to representing programs as strings of ASCII characters, such as using `:-` to separate the head from the body, using semicolons, rather than parentheses, to indicate the boundaries of a conditional literal, and representing falsity (which we denote here by `⊥`) as `#false`. Since the subset of Gringo discussed in this note does not include assignments, we can disregard also the requirement that equality be represented by two characters `==`.

\textsuperscript{4}The subset of Gringo discussed in this note includes also constraints, disjunctive rules, and choice rules, treated along the lines of [Gelfond and Lifschitz, 1991] and [Ferraris and Lifschitz, 2005]. The first of these papers introduces also “classical” (or “strong”) negation—a useful feature that we do not include. (Extending our semantics of Gringo to programs with classical negation is straightforward, using the process of eliminating classical negation in favor of additional atoms described in [Gelfond and Lifschitz, 1991, Section 4].)

\textsuperscript{5}Stable models of formulas with generalized quantifiers are defined by Lee and Meng [2012a, 2012b, 2012c].
2 Syntax
We begin with a signature `σ` in the sense of first-order logic that includes, among others,
(i) numerals—object constants representing all integers,
(ii) arithmetical functions—binary function constants `+`, `−`, `×`,
(iii) comparisons—binary predicate constants `<`, `>`, `≤`, `≥`.
We will identify numerals with the corresponding elements of the set `Z` of integers. Object, function, and predicate symbols not listed under (i)–(iii) will be called symbolic. A term is arithmetical if it does not contain symbolic object or function constants. A ground term is precomputed if it does not contain arithmetical functions.
We assume that in addition to the signature, a set of symbols called aggregate names is specified, and that for each aggregate name `α` a function `ˆα` from sets of tuples of precomputed terms to `Z∪{∞, −∞}` is given—the function denoted by `α`.
**Examples.** The functions denoted by the aggregate names `card`, `max`, and `sum` are defined as follows. For any set `T` of tuples of precomputed terms,
- `ˆcard(T)` is the cardinality of `T` if `T` is finite, and `∞` otherwise;
- `ˆmax(T)` is the least upper bound of the set of the integers `t_1` over all tuples `(t_1, \ldots, t_m) ∈ T` such that `t_1` is an integer;
- `ˆsum(T)` is the sum of the integers `t_1` over all tuples `(t_1, \ldots, t_m) ∈ T` such that `t_1` is a positive integer if there are finitely many such tuples, and `∞` otherwise.\(^6\)
\(^6\)To allow negative numbers in this example, we would have to define summation for a set that contains both infinitely many positive numbers and infinitely many negative numbers. For instance, we can define the sum to be 0 in this case. Admittedly, this is somewhat unnatural.
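For finite sets of tuples, the three aggregate functions defined above can be mirrored directly in Python (a sketch for illustration only; the infinite cases in the definitions cannot arise for finite Python sets):

```python
import math

def agg_card(tuples):
    # card: cardinality of the set (always finite for a Python set)
    return len(tuples)

def agg_max(tuples):
    # max: least upper bound of the integer first components t_1;
    # the least upper bound of the empty set of integers is -infinity
    ints = [t[0] for t in tuples if isinstance(t[0], int)]
    return max(ints) if ints else -math.inf

def agg_sum(tuples):
    # sum: sum of the first components t_1 that are positive integers
    return sum(t[0] for t in tuples if isinstance(t[0], int) and t[0] > 0)
```

For example, `agg_sum({(3, 'a'), (-2, 'b'), (5, 'c')})` counts only the positive first components 3 and 5.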
A literal is an expression of one of the forms
\[ p(t_1, \ldots, t_k), \ t_1 = t_2, \ \text{not } p(t_1, \ldots, t_k), \ \text{not } (t_1 = t_2) \]
where \( p \) is a symbolic predicate constant of arity \( k \), and each \( t_i \) is a term, or
\[ t_1 \prec t_2, \ \text{not } (t_1 \prec t_2) \]
where \( \prec \) is a comparison, and \( t_1, t_2 \) are arithmetical terms. A conditional literal is an expression of the form \( H : L \), where \( H \) is a literal or the symbol \( \bot \), and \( L \) is a list of literals, possibly empty. The members of \( L \) will be called conditions. If \( L \) is empty then we will drop the colon after \( H \), so that every literal can be viewed as a conditional literal.
**Example.** If available and person are unary predicate symbols then
\[ \text{available}(X) : \text{person}(X) \]
and
\[ \bot : (\text{person}(X), \text{not available}(X)) \]
are conditional literals.
An aggregate expression is an expression of the form
\[ \alpha\{t : L\} \prec s \]
where \( \alpha \) is an aggregate name, \( t \) is a list of terms, \( L \) is a list of literals, \( \prec \) is a comparison or the symbol \( = \), and \( s \) is an arithmetical term.
**Example.** If enroll is a unary predicate symbol and hours is a binary predicate symbol then
\[ \text{sum}\{H, C : \text{enroll}(C), \text{hours}(H, C)\} = N \]
is an aggregate expression.
A rule is an expression of the form
\[ H_1 | \cdots | H_m \leftarrow B_1, \ldots, B_n \]
(1)
\((m, n \geq 0)\), where each \( H_i \) is a conditional literal, and each \( B_i \) is a conditional literal or an aggregate expression. A program is a set of rules.
If \( p \) is a symbolic predicate constant of arity \( k \), and \( t \) is a \( k \)-tuple of terms, then
\[ \{p(t)\} \leftarrow B_1, \ldots, B_n \]
is shorthand for
\[ p(t) | \neg p(t) \leftarrow B_1, \ldots, B_n. \]
**Example.** For any positive integer \( n \),
\[
\{ p(i) \} \leftarrow \quad (i = 1, \ldots, n), \qquad \leftarrow p(X), p(Y), p(X+Y)
\]
(2)
is a program.
# 3 Semantics
We will define the semantics of Gringo using a syntactic transformation \( \tau \). It converts Gringo rules into infinitary propositional combinations of atoms of the form \( p(t) \), where \( p \) is a symbolic predicate constant, and \( t \) is a tuple of precomputed terms.\(^7\)
## 3.1 Semantics of Well-Formed Ground Literals
A term \( t \) is well-formed if it contains neither symbolic object constants nor symbolic function constants in the scope of arithmetical functions. For instance, all arithmetical terms and all precomputed terms are well-formed; \( c + 2 \) is not well-formed. The definition of “well-formed” for literals, aggregate expressions, and so forth is the same.
For every well-formed ground term \( t \), by \([t]\) we denote the precomputed term obtained from \( t \) by evaluating all arithmetical functions, and similarly for tuples of terms. For instance, \([f(2+2)] = f(4)\).
The translation \( \tau L \) of a well-formed ground literal \( L \) is defined as follows:
- \( \tau p(t) \) is \( p([t]) \);
- \( \tau (t_1 \prec t_2) \), where \( \prec \) is the symbol \( = \) or a comparison, is \( \top \) if the relation \( \prec \) holds between \([t_1]\) and \([t_2]\), and \( \bot \) otherwise;
- \( \tau (\text{not } A) \) is \( \neg \tau A \).
For instance, \( \tau (\text{not } p(f(2+2))) = \neg p(f(4)) \), and \( \tau (2+2 = 4) = \top \).
Furthermore, \( \tau \bot \) stands for \( \bot \), and, for any list \( \mathbf{L} \) of ground literals, \( \tau \mathbf{L} \) is the conjunction of the formulas \( \tau L \) for all members \( L \) of \( \mathbf{L} \).
\(^7\)As in [Truszczynski, 2012], infinitary formulas are built from atoms and the falsity symbol \( \bot \) by forming (i) implications and (ii) conjunctions and disjunctions of arbitrary sets of formulas. We treat \( \neg F \) as shorthand for \( F \rightarrow \bot \), and \( \top \) stands for \( \bot \rightarrow \bot \).
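As an illustrative sketch (not part of the paper), the evaluation \([t]\) of a well-formed ground term and the translation of ground comparisons can be coded as follows. Terms are modeled as Python ints (numerals), strings (symbolic object constants), or tuples `('f', arg1, ...)` for function applications, with `'*'` standing in for \(\times\):

```python
import operator

ARITH = {'+': operator.add, '-': operator.sub, '*': operator.mul}
COMPARISONS = {'<': operator.lt, '>': operator.gt, '<=': operator.le,
               '>=': operator.ge, '=': operator.eq}

def evaluate(term):
    """[t]: evaluate all arithmetical functions in a well-formed ground term."""
    if isinstance(term, tuple):
        f, *args = term
        vals = [evaluate(a) for a in args]
        if f in ARITH:
            return ARITH[f](*vals)
        return (f, *vals)  # symbolic function: evaluate the arguments only
    return term            # numeral or symbolic object constant

def tau_comparison(t1, rel, t2):
    # tau(t1 rel t2) is True (top) or False (bottom)
    return COMPARISONS[rel](evaluate(t1), evaluate(t2))
```

So `evaluate(('f', ('+', 2, 2)))` yields `('f', 4)`, matching the example \([f(2+2)] = f(4)\) above.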
3.2 Global Variables
About a variable we say that it is global
- in a conditional literal $H : L$, if it occurs in $H$ but does not occur in $L$;
- in an aggregate expression $\alpha\{t : L\} \prec s$, if it occurs in the term $s$;
- in a rule (1), if it is global in at least one of the expressions $H_i, B_i$.
For instance, the head of the rule
$$\text{total\_hours}(N) \leftarrow \text{sum}\{H, C : \text{enroll}(C), \text{hours}(H, C)\} = N$$
(3)
is a literal with the global variable $N$, and its body is an aggregate expression with the global variable $N$. Consequently $N$ is global in the rule as well.
A conditional literal, an aggregate expression, or a rule is closed if it has no global variables. An instance of a rule $R$ is any well-formed closed rule that can be obtained from $R$ by substituting precomputed terms for global variables. For instance,
$$\text{total\_hours}(6) \leftarrow \text{sum}\{H, C : \text{enroll}(C), \text{hours}(H, C)\} = 6$$
is an instance of rule (3). It is clear that if a rule is not well-formed then it has no instances.
3.3 Semantics of Closed Conditional Literals
If $t$ is a term, $x$ is a tuple of distinct variables, and $r$ is a tuple of terms of the same length as $x$, then the term obtained from $t$ by substituting $r$ for $x$ will be denoted by $t^x_r$. Similar notation will be used for the result of substituting $r$ for $x$ in expressions of other kinds, such as literals and lists of literals.
The result of applying $\tau$ to a closed conditional literal $H : L$ is the conjunction of the formulas
$$\tau(L^x_r) \rightarrow \tau(H^x_r)$$
where $x$ is the list of variables occurring in $H : L$, over all tuples $r$ of precomputed terms of the same length as $x$ such that both $L^x_r$ and $H^x_r$ are well-formed. For instance,
$$\tau(\text{available}(X) : \text{person}(X))$$
is the conjunction of the formulas $\text{person}(r) \rightarrow \text{available}(r)$ over all precomputed terms $r$;
$$\tau(\bot : p(2 \times X))$$
is the conjunction of the formulas \(\neg p(2 \times i)\) over all numerals \(i\).
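Over a finite set of terms, this translation can be pictured with a short script. The sketch below is purely illustrative: formulas are represented as strings, and the domain is truncated to two terms, whereas the definition ranges over all precomputed terms.

```python
# Illustrative sketch of translating a closed conditional literal H : L
# into the conjunctive terms tau(L[X/r]) -> tau(H[X/r]), restricted to a
# finite domain.  Formulas are plain strings, not logical objects.
def tau_conditional(head, cond, domain):
    """Return the conjunctive terms of tau(head : cond) over `domain`."""
    return [f"{cond.format(X=r)} -> {head.format(X=r)}" for r in domain]

# tau(available(X) : person(X)) restricted to two terms:
for f in tau_conditional("available({X})", "person({X})", ["alice", "bob"]):
    print(f)
# person(alice) -> available(alice)
# person(bob) -> available(bob)
```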
When a conditional literal occurs in the head of a rule, we will translate it in a different way. By \(\tau_h(H : L)\) we denote the disjunction of the formulas
\[
\tau(L_x^r) \land \tau(H_x^r)
\]
where \(x\) and \(r\) are as above. For instance,
\[
\tau_h(\text{available}(X) : \text{person}(X))
\]
is the disjunction of the formulas \(\text{person}(r) \land \text{available}(r)\) over all precomputed terms \(r\).
### 3.4 Semantics of Closed Aggregate Expressions
In this section, the semantics of ground aggregates proposed in [Ferraris, 2005, Section 4.1] is adapted to closed aggregate expressions.
Let \(E\) be a closed aggregate expression \(\alpha\{t : L\} \prec s\), and let \(x\) be the list of variables occurring in \(E\). A tuple \(r\) of precomputed terms of the same length as \(x\) is admissible (w.r.t. \(E\)) if both \(t^x_r\) and \(L^x_r\) are well-formed. About a set \(\Delta\) of admissible tuples we say that it justifies \(E\) if the relation \(\prec\) holds between \(\hat{\alpha}(\{t^x_r : r \in \Delta\})\) and \([s]\).
For instance, consider the aggregate expression
\[
\text{sum}\{H, C : \text{enroll}(C), \text{hours}(H, C)\} = 6.
\]
(4)
In this case, admissible tuples are arbitrary pairs of precomputed terms. The set \{(3, cs101), (3, cs102)\} justifies (4), because
\[
\hat{\text{sum}}(\{(H, C)^{H,C}_{3,cs101}, (H, C)^{H,C}_{3,cs102}\}) = \hat{\text{sum}}(\{(3, cs101), (3, cs102)\}) = 3 + 3 = 6.
\]
More generally, a set \(\Delta\) of pairs of precomputed terms justifies (4) whenever \(\Delta\) contains finitely many pairs \((h, c)\) in which \(h\) is a positive integer, and the sum of the integers \(h\) over all these pairs is 6.
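This justification condition can be checked mechanically. The sketch below is an assumption-laden illustration: it represents \(\Delta\) as a Python set of pairs and, following the convention described above, sums only the integer first components.

```python
# Sketch of the "justifies" check for sum{H,C : enroll(C), hours(H,C)} = 6.
# A set delta of admissible pairs (h, c) justifies the expression iff the
# sum of the integer first components equals the target (here 6);
# non-integer first components are ignored.
def justifies_sum_eq(delta, target=6):
    return sum(h for (h, c) in delta if isinstance(h, int)) == target

print(justifies_sum_eq({(3, "cs101"), (3, "cs102")}))  # True
print(justifies_sum_eq({(3, "cs101")}))                # False
```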
We define \(\tau E\) as the conjunction of the implications
\[
\bigwedge_{r \in \Delta} \tau(L_x^r) \rightarrow \bigvee_{r \in A \setminus \Delta} \tau(L_x^r)
\]
(5)
over all sets \(\Delta\) of admissible tuples that do not justify \(E\), where \(A\) is the set of all admissible tuples. For instance, if \(E\) is (4) then the conjunctive terms of \(\tau E\) are the formulas
\[
\bigwedge_{(h,c) \in \Delta} (\text{enroll}(c) \land \text{hours}(h,c)) \rightarrow \bigvee_{(h,c) \notin \Delta} (\text{enroll}(c) \land \text{hours}(h,c)).
\]
The conjunctive term corresponding to \{(3, cs101)\} as \(\Delta\) says: if I am enrolled in CS101 for 3 hours then I am enrolled in at least one other course.
3.5 Semantics of Rules and Programs
For any rule $R$, $\tau_R$ stands for the conjunction of the formulas
$$\tau B_1 \land \cdots \land \tau B_n \rightarrow \tau_h H_1 \lor \cdots \lor \tau_h H_m$$
for all instances (1) of $R$. A stable model of a program $\Pi$ is a stable model, in the sense of [Truszczynski, 2012], of the set consisting of the formulas $\tau_R$ for all rules $R$ of $\Pi$.
Consider, for instance, the rules of program (2). If $R$ is the rule \{p(i)\} then $\tau_R$ is
$$p(i) \lor \neg p(i)$$
(6)
($i = 1, \ldots, n$). If $R$ is the rule
$$\leftarrow p(X), p(Y), p(X+Y)$$
then the instances of $R$ are rules of the form
$$\leftarrow p(i), p(j), p(i+j)$$
for all numerals $i, j$. (Substituting precomputed ground terms other than numerals would produce a rule that is not well formed.) Consequently $\tau_R$ is in this case the infinite conjunction
$$\bigwedge_{i,j} \neg(p(i) \land p(j) \land p(i+j)).$$
(7)
The stable models of program (2) are the stable models of formulas (6), (7), that is, sets of the form \{p(i) : i \in S\} for all sum-free subsets $S$ of \{1, \ldots, n\}.
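For small \(n\), the description of the stable models as sum-free subsets can be verified by brute force. The enumeration below is only a sanity check on the combinatorics, not a stable-model computation:

```python
from itertools import combinations

# Brute-force check for n = 4: enumerate the sum-free subsets S of
# {1,...,n}, i.e. those containing no i, j (possibly equal) with i+j
# also in S.  These correspond to the stable models {p(i) : i in S}.
def is_sum_free(s):
    return all(i + j not in s for i in s for j in s)

n = 4
universe = range(1, n + 1)
sum_free = [set(c) for k in range(n + 1)
            for c in combinations(universe, k) if is_sum_free(set(c))]
print(sum_free)
```

For instance, \(\{1, 2\}\) is excluded because \(1 + 1 = 2\) belongs to the set.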
4 Reasoning about Gringo Programs
In this section we give examples of reasoning about Gringo programs on the basis of the semantics defined above. These examples use the results of [Harrison et al., 2013], and we assume here that the reader is familiar with that paper.
4.1 Simplifying a Rule from Example 3.7 of User’s Guide
The program in Example 3.7 of User’s Guide (see Footnote 2) contains the rule$^8$
$$\text{weekdays} \leftarrow \text{day}(X) : (\text{day}(X), \text{not weekend}(X)).$$
(8)
$^8$To be precise, the syntax of conditional literals in User’s Guide is somewhat different—it corresponds to an earlier version of GRINGO.
Replacing this rule with the fact *weekdays* within any program will not affect the set of stable models. Indeed, the result of applying translation $\tau$ to (8) is the formula
$$\bigwedge_r (\text{day}(r) \land \neg \text{weekend}(r) \rightarrow \text{day}(r)) \rightarrow \text{weekdays},$$
(9)
where the conjunction extends over all precomputed terms $r$. The formula
$$\text{day}(r) \land \neg \text{weekend}(r) \rightarrow \text{day}(r)$$
is intuitionistically provable. By the replacement property of the basic system of natural deduction from [Harrison et al., 2013], it follows that (9) is equivalent to *weekdays* in the basic system. By the main theorem of [Harrison et al., 2013], it follows that replacing (9) with the atom *weekdays* within any set of formulas does not affect the set of stable models.
### 4.2 Simplifying the Sorting Rule
The rule
$$\text{order}(X, Y) \leftarrow p(X), p(Y), X < Y, \neg p(Z) : (p(Z), X < Z, Z < Y)$$
(10)
can be used for sorting. It can be replaced by either of the following two simpler rules within any program without changing that program’s stable models.
$$\text{order}(X, Y) \leftarrow p(X), p(Y), X < Y, \perp : (p(Z), X < Z, Z < Y)$$
(11)
$$\text{order}(X, Y) \leftarrow p(X), p(Y), X < Y, \neg p(Z) : (X < Z, Z < Y)$$
(12)
Let’s prove this claim for rule (11). By the main theorem of [Harrison et al., 2013] it is sufficient to show that the result of applying $\tau$ to (10) is equivalent in the basic system to the result of applying $\tau$ to (11). The instances of (10) are the rules
$$\text{order}(i, j) \leftarrow p(i), p(j), i < j, \neg p(Z) : (p(Z), i < Z, Z < j),$$
and the instances of (11) are the rules
$$\text{order}(i, j) \leftarrow p(i), p(j), i < j, \perp : (p(Z), i < Z, Z < j)$$
where $i$ and $j$ are arbitrary numerals. The result of applying $\tau$ to (10) is the conjunction of the formulas
$$p(i) \land p(j) \land i < j \land \bigwedge_k (p(k) \land i < k \land k < j \rightarrow \neg p(k)) \rightarrow \text{order}(i, j)$$
(13)
for all numerals $i, j$. The result of applying $\tau$ to (11) is the conjunction of the formulas
$$p(i) \land p(j) \land i < j \land \bigwedge_k (p(k) \land i < k \land k < j \rightarrow \bot) \rightarrow \text{order}(i, j).$$
(14)
By the replacement property of the basic system, it is sufficient to observe that
$$p(k) \land i < k \land k < j \rightarrow \neg p(k)$$
is intuitionistically equivalent to
$$p(k) \land i < k \land k < j \rightarrow \bot.$$
The proof for rule (12) is similar. Rule (11), like rule (10), is safe; rule (12) is not.
4.3 Eliminating Choice in Favor of Conditional Literals
Replacing the rule
$$\{p(X)\} \leftarrow q(X)$$
(15)
with
$$p(X) \leftarrow q(X), \bot : \text{not } p(X)$$
(16)
within any program will not affect the set of stable models. Indeed, the result of applying translation $\tau$ to (15) is
$$\bigwedge_r (q(r) \rightarrow p(r) \lor \neg p(r))$$
(17)
where the conjunction extends over all precomputed terms $r$, and the result of applying $\tau$ to (16) is
$$\bigwedge_r (q(r) \land \neg \neg p(r) \rightarrow p(r)).$$
(18)
The implication from (17) is equivalent to the implication from (18) in the extension of intuitionistic logic obtained by adding the axiom schema
$$\neg F \lor \neg \neg F,$$
and consequently in the extended system presented in [Harrison et al., 2013, Section 7]. By the replacement property of the extended system, it follows that (17) is equivalent to (18) in the extended system as well.
4.4 Eliminating a Trivial Aggregate Expression
The rule
\[ p(Y) \leftarrow \text{card}\{X,Y : q(X,Y)\} \geq 1 \] (19)
says, informally speaking, that we can conclude \( p(Y) \) once we have established that there exists at least one \( X \) such that \( q(X,Y) \). Replacing this rule with
\[ p(Y) \leftarrow q(X,Y) \] (20)
within any program will not affect the set of stable models.
To prove this claim, we need to calculate the result of applying \( \tau \) to rule (19). The instances of (19) are the rules
\[ p(t) \leftarrow \text{card}\{X,t : q(X,t)\} \geq 1 \] (21)
for all precomputed terms \( t \). Consider the aggregate expression \( E \) in the body of (21). Any precomputed term \( r \) is admissible w.r.t. \( E \). A set \( \Delta \) of precomputed terms justifies \( E \) if
\[ \hat{\text{card}}(\{(r,t) : r \in \Delta\}) \geq 1, \]
that is to say, if \( \Delta \) is non-empty. Consequently \( \tau E \) consists of only one implication (5), with the empty \( \Delta \). The antecedent of this implication is the empty conjunction \( \top \), and its consequent is the disjunction \( \bigvee_u q(u,t) \) over all precomputed terms \( u \). Then the result of applying \( \tau \) to (19) is
\[ \bigwedge_t \left( \bigvee_u q(u,t) \rightarrow p(t) \right). \] (22)
On the other hand, the result of applying \( \tau \) to (20) is
\[ \bigwedge_{t,u} (q(u,t) \rightarrow p(t)). \]
This formula is equivalent to (22) in the basic system [Harrison et al., 2013, Example 2].
4.5 Replacing an Aggregate Expression with a Conditional Literal
Informally speaking, the rule
\[ q \leftarrow \text{card}\{X : p(X)\} = 0 \] (23)
says that we can conclude \( q \) once we have established that the cardinality of the set \( \{X : p(X)\} \) is 0; the rule
\[
q \leftarrow \bot : p(X)
\]
(24)
says that we can conclude \( q \) once we have established that \( p(X) \) does not hold for any \( X \). We'll prove that replacing (23) with (24) within any program will not affect the set of stable models. To this end, we'll show that the results of applying \( \tau \) to (23) and (24) are equivalent to each other in the extended system from [Harrison et al., 2013, Section 7].
First, we'll need to calculate the result of applying \( \tau \) to rule (23). Consider the aggregate expression \( E \) in the body of (23). Any precomputed term \( r \) is admissible w.r.t. \( E \). A set \( \Delta \) of precomputed terms justifies \( E \) if
\[
\hat{\text{card}}(\{r : r \in \Delta\}) = 0,
\]
that is to say, if \( \Delta \) is empty. Consequently \( \tau E \) is the conjunction of the implications
\[
\bigwedge_{r \in \Delta} p(r) \rightarrow \bigvee_{r \in A \setminus \Delta} p(r)
\]
(25)
for all non-empty subsets \( \Delta \) of the set \( A \) of precomputed terms. The result of applying \( \tau \) to (23) is
\[
\left( \bigwedge_{\emptyset \neq \Delta \subseteq A} \left( \bigwedge_{r \in \Delta} p(r) \rightarrow \bigvee_{r \in A \setminus \Delta} p(r) \right) \right) \rightarrow q.
\]
(26)
The result of applying \( \tau \) to (24), on the other hand, is
\[
\left( \bigwedge_{r \in A} \neg p(r) \right) \rightarrow q.
\]
(27)
The fact that the antecedents of (26) and (27) are equivalent to each other in the extended system can be established by essentially the same argument as in [Harrison et al., 2013, Example 7]. By the replacement property of the extended system, it follows that (26) is equivalent to (27) in the extended system as well.
5 Conclusion
**GRINGO User’s Guide** and the monograph [Gebser et al., 2012] explain the meaning of many programming constructs using examples and informal comments that
appeal to the user’s intuition, without references to any precise semantics. In
the absence of such a semantics, it is impossible to put the study of some impor-
tant issues on a firm foundation. This includes the correctness of ASP programs,
grounders, solvers, and optimization methods, and also the relationship between
input languages of different solvers (for instance, the equivalence of the semantics
of aggregate expressions in Gringo to their semantics in the ASP Core language and
in the language proposed in [Gelfond, 2002] under the assumption that aggregates
are used nonrecursively).
In this note we approached the problem of defining the semantics of Gringo by
reducing Gringo programs to infinitary propositional formulas. We argued that
this approach to semantics may allow us to study equivalent transformations of
programs using natural deduction in infinitary propositional logic.
Acknowledgements
Many thanks to Roland Kaminski and Torsten Schaub for helping us understand
the input language of gringo. Roland, Michael Gelfond, Yuliya Lierler, and
Joohyung Lee provided valuable comments on drafts of this note.
References
[Ferraris and Lifschitz, 2005] Paolo Ferraris and Vladimir Lifschitz. Weight constraints as nested expressions. Theory and Practice of Logic Programming, 5:45–74, 2005.
[Ferraris, 2005] Paolo Ferraris. Answer sets for propositional theories. In Proceedings of International Conference on Logic Programming and Nonmonotonic Reasoning, pages 119–131, 2005.
[Gebser et al., 2012] M. Gebser, R. Kaminski, B. Kaufmann, and T. Schaub. Answer Set Solving in Practice. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan and Claypool Publishers, 2012.
[Gelfond and Lifschitz, 1988] Michael Gelfond and Vladimir Lifschitz. The stable model semantics for logic programming. In Robert Kowalski and Kenneth Bowen, editors, Proceedings of International Logic Programming Conference and Symposium, pages 1070–1080, 1988.
\(^{10}\)http://www.cs.utexas.edu/users/vl/papers/etinf.pdf
Inference in rule-based systems
Inferencing
- The inference engine uses one of several available forms of inferencing.
- By inferencing we mean the method used in a knowledge-based system (KBS) to process the supplied data, and the stored knowledge, so as to produce correct conclusions.
Distinctive features of KBS (1)
- The need for heuristic reasoning.
- Conventional computer programs are built around algorithms: reasoning strategies which are guaranteed to find the solution to whatever the problem is, if there is such a solution.
- For the large, difficult problems with which KBS are frequently concerned, it may be necessary to employ heuristics: strategies that often lead to the correct solution, but which also sometimes fail.
Distinctive features of KBS (2)
- Humans use heuristics a great deal in their problem solving. Of course, if a heuristic does fail, the problem solver must either pick another heuristic or know that it is appropriate to give up.
- The rules found in the knowledge bases of rule-based systems are very often heuristics.
Distinctive features of KBS (3)
- Large solution spaces.
- One way to treat a problem is to describe the system concerned as a state space –
- a collection of states that it can get into, with a description of the transitions that take you from one state to another.
- Some of these states are solutions to the problem.
- In this approach, the way to solve a problem is to search in a systematic way until you have found a path from the start state to a goal state.
Distinctive features of KBS (4)
- Large solution spaces.
- In scheduling and design problems, there may be many millions of possible solutions to the problem as presented.
- It is not possible to consider each one in turn, to find the right (or best) solution; heuristically-guided search is required.
Distinctive features of KBS (5)
- Multiple solutions.
- In planning or design tasks, a single solution will probably be enough.
- In diagnostic tasks, all possible solutions are probably needed.
Distinctive features of KBS (6)
- Reasoning with uncertainty.
- Rules in the knowledge base may only express a probability that a conclusion follows from certain premises, rather than a certainty.
- This is particularly true of medicine and other life sciences.
- The items in the knowledge base must reflect this uncertainty, and the inference engine must process the uncertainties to give conclusions that are accompanied by a likelihood that they are true or correct.
Inference in rule-based systems
- Two control strategies: forward chaining and backward chaining
Inference in rule-based systems
- **Forward chaining**: working from the facts to a conclusion. Sometimes called the data-driven approach.
- To chain forward, match data in working memory against 'conditions' of rules in the rule-base.
- When one of them fires, this is liable to produce more data.
- So the cycle continues
Inference in rule-based systems
- **Backward chaining**: working from the conclusion to the facts. Sometimes called the goal-driven approach.
- To chain backward, match a goal in working memory against 'conclusions' of rules in the rule-base.
- When one of them fires, this is liable to produce more goals.
- So the cycle continues.
Forward & backward chaining
- e.g. Here are two rules:
If corn is grown on poor soil, then it will rot.
If soil hasn't enough nitrogen, then it is poor soil.
- Forward chaining: This soil is low in nitrogen; therefore this is poor soil; therefore corn grown on it will rot.
- Backward chaining: This corn is rotten; therefore it must have been grown on poor soil; therefore the soil must be low in nitrogen.
Forward chaining
- More realistically,
- the forward chaining reasoning would be: there's something wrong with this corn. So I test the soil. It turns out to be low in nitrogen. If that’s the case, corn grown on it will rot.
Backward chaining
- More realistically,
- the backward chaining reasoning would be: there's something wrong with this corn. Perhaps it is rotten; if so, it must have been grown on poor soil; if so, the soil must be low in nitrogen. So test for low nitrogen content in soil, and then we'll know whether the problem is rot.
Example- A rule-based system for the classification of animals
1. IF (Animal has hair) or (Animal drinks milk) THEN Animal is a mammal
2. IF (Animal has feathers) or ((Animal can fly) and (Animal lays eggs)) THEN Animal is a bird
3. IF (Animal is a mammal) and ((Animal eats meat) or ((Animal has pointed_teeth) and (Animal has claws) and (Animal has forward_pointed_eyes))) THEN Animal is a carnivore
4. IF (Animal is a carnivore) and (Animal has tawny_colour) and (Animal has dark_spots) THEN Animal is a cheetah
5. IF (Animal is a carnivore) and (Animal has tawny_colour) and (Animal has black_stripes) THEN Animal is a tiger
6. IF (Animal is a bird) and (Animal cannot fly) and (Animal can swim) THEN Animal is a penguin
7. IF (Animal is a bird) and (Animal has large_wingspan) THEN Animal is an albatross
Example- A rule-based system for the classification of animals
- Assume there are the following facts in the working memory
- Jimmy has hair, jimmy has pointed_teeth, jimmy has claws, jimmy has forward_pointed_eyes, jimmy has black_stripes, jimmy has tawny_colour
- Suppose that we want to see whether jimmy is a tiger using backward chaining. How will it work?
- Suppose that we want to see what jimmy is using forward chaining. How will it work?
Example- Backward chaining for the classification of animals
- Given the following facts:
Fact (a): Jimmy has hair,
Fact (b): jimmy has pointed_teeth,
Fact (c): jimmy has claws,
Fact (d): jimmy has forward_pointed_eyes,
Fact (e): jimmy has black_stripes,
Fact (f): jimmy has tawny_colour
- Backward chaining
- We are asking the question “Is jimmy a tiger?”
- First, we search for a fact that gives the answer, or a rule from which the answer could be inferred.
- Rule 5, if its conditions hold, would infer “jimmy is a tiger”.
Example- Backward chaining for the classification of animals
- Next, we check the conditions of rule 5.
- Is “jimmy has tawny_colour” true?
- Yes, given by fact (f).
- Is “jimmy has black_stripes” true?
- Yes, given by fact (e).
- Is “jimmy is a carnivore” true?
- None of the facts given can answer the question.
- However, Rule 3 is related to it.
Example- Backward chaining for the classification of animals
- We check the conditions for Rule 3.
- Is “jimmy eats meat” true? Or Is “jimmy has pointed teeth and claws and forward_pointing_eyes” true?
- Yes, given by facts (b), (c) & (d).
- Is “jimmy is a mammal” true?
- None of the facts given can answer the question.
- However, Rule 1 is related to it.
- We check the conditions for Rule 1.
- Is “jimmy has hair or jimmy drinks milk” true?
- Yes, jimmy has hair.
- The fact (a) directly gives the answer.
- Hence, Rules 1, 3, 5 are used and “Jimmy is a tiger” is concluded.
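The backward-chaining trace above can be sketched as a small recursive procedure. The encoding of the rules and facts as Python data is our own; each rule maps a conclusion to alternative sets of conditions (the outer list is a disjunction, each inner list a conjunction):

```python
# A minimal backward chainer for (a simplification of) the animal rules.
RULES = {
    "mammal":    [["has hair"], ["drinks milk"]],
    "bird":      [["has feathers"], ["can fly", "lays eggs"]],
    "carnivore": [["mammal", "eats meat"],
                  ["mammal", "has pointed_teeth", "has claws",
                   "has forward_pointed_eyes"]],
    "tiger":     [["carnivore", "has tawny_colour", "has black_stripes"]],
}

# Facts (a)-(f) about jimmy.
FACTS = {"has hair", "has pointed_teeth", "has claws",
         "has forward_pointed_eyes", "has black_stripes",
         "has tawny_colour"}

def prove(goal):
    """Try to establish goal from FACTS, recursing through RULES."""
    if goal in FACTS:
        return True
    return any(all(prove(g) for g in conj) for conj in RULES.get(goal, []))

print(prove("tiger"))  # True, via Rules 1, 3 and 5
```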
Example- Forward chaining for the classification of animals
Forward Chaining
- The fact (a) matches the condition for Rule 1.
- Jimmy has hair \(\Rightarrow\) jimmy is a mammal.
- The conclusion “jimmy is a mammal” gives a new fact.
- The new fact “jimmy is a mammal” and facts (b), (c) & (d) match the conditions for Rule 3.
- “jimmy is a mammal” and “jimmy has pointed_teeth and claws and forward_pointing_eyes” \(\Rightarrow\) jimmy is a carnivore.
- Again, the conclusion “jimmy is a carnivore” is a new fact.
Example- Forward chaining for the classification of animals
- The new fact “jimmy is a carnivore” and fact (f) matches the conditions for Rule 5.
- “jimmy is a carnivore” and “jimmy has tawny_colour and black_stripes” \(\Rightarrow\) jimmy is a tiger.
- No other facts match any of the rules.
- The conclusion “jimmy is a tiger” answers the question.
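The forward-chaining run can be sketched the same way: repeatedly fire any rule whose conditions are all in working memory until nothing new is derived. The data layout below is our own simplification of Rules 1, 3 and 5:

```python
# A minimal forward chainer: each rule is (set of conditions, conclusion).
FWD_RULES = [
    ({"has hair"}, "mammal"),
    ({"drinks milk"}, "mammal"),
    ({"mammal", "has pointed_teeth", "has claws",
      "has forward_pointed_eyes"}, "carnivore"),
    ({"carnivore", "has tawny_colour", "has black_stripes"}, "tiger"),
]

def forward_chain(facts):
    """Fire rules until working memory stops growing, then return it."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in FWD_RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

memory = forward_chain({"has hair", "has pointed_teeth", "has claws",
                        "has forward_pointed_eyes", "has black_stripes",
                        "has tawny_colour"})
print("tiger" in memory)  # True
```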
Forward & backward chaining
- The choice of strategy depends on the nature of the problem.
- Assume the problem is to get from facts to a goal (e.g. symptoms to a diagnosis).
Forward & backward chaining
Backward chaining is the best choice if:
- The goal is given in the problem statement, or can sensibly be guessed at the beginning of the consultation;
or:
- The system has been built so that it sometimes asks for pieces of data (e.g. "please now do the gram test on the patient's blood, and tell me the result"), rather than expecting all the facts to be presented to it.
Forward & backward chaining
Backward chaining
- This is because (especially in the medical domain) the test may be
- expensive,
- or unpleasant,
- or dangerous for the human participant
so one would want to avoid doing such a test unless there was a good reason for it.
Forward & backward chaining
Forward chaining is the best choice if:
- All the facts are provided with the problem statement;
- There are many possible goals, and a smaller number of patterns of data;
- There isn't any sensible way to guess what the goal is at the beginning of the consultation.
Forward & backward chaining
- Note also that
- a backward-chaining system tends to produce a sequence of questions which seems focused and logical to the user,
- a forward-chaining system tends to produce a sequence which seems random & unconnected.
- If it is important that the system should seem to behave like a human expert, backward chaining is probably the best choice.
Forward & backward chaining
- Some systems use *mixed chaining*, where some of the rules are specifically used for chaining forwards, and others for chaining backwards. The strategy is for the system to chain in one direction, then switch to the other direction, so that:
- the diagnosis is found with maximum efficiency;
- the system's behaviour is perceived as "human".
Problem decomposition into an and-or graph
- A technique for reducing a problem to a production system.
- One particular form of intermediate representation.
- A structured representation of the knowledge, which is not yet in the form of code that can be put into KBS’s knowledge base.
Problem decomposition into an and-or graph
- A technique for reducing a problem to a production system, as follows:
- The principle goal is identified; it is split into two or more sub-goals; these, too are split up.
- A goal is something you want to achieve. A sub-goal is a goal that must be achieved in order for the main goal to be achieved.
Problem decomposition into an and-or graph
- A graph is drawn of the goal and sub-goals.
- Each goal is written in a box, called a node, with its subgoals underneath it, joined by links.
Problem decomposition into an and-or graph
- The leaf nodes at the bottom of the tree - the boxes at the bottom of the graph that don’t have any links below them - are the pieces of data needed to solve the problem.
Problem decomposition into an and-or graph
A goal may be split into 2 (or more) sub-goals, **BOTH** of which must be satisfied if the goal is to succeed; the links joining the goals are marked with a curved line, like this:
Problem decomposition into an and-or graph
- Or a goal may be split into 2 (or more) sub-goals, **EITHER** of which must be satisfied if the goal is to succeed; the links joining the goals aren't marked with a curved line:
Problem decomposition into an and-or graph
Example
"The function of a financial advisor is to help the user decide whether to invest in a savings account, or the stock market, or both. The recommended investment depends on the investor's income and the current amount they have saved:"
Problem decomposition into an and-or graph
- Individuals with inadequate savings should always increase the amount saved as their first priority, regardless of income.
- Individuals with adequate savings and an adequate income should consider riskier but potentially more profitable investment in the stock market.
Problem decomposition into an and-or graph
- Individuals with low income who already have adequate savings may want to consider splitting their surplus income between savings and stocks, to increase the cushion in savings while attempting to increase their income through stocks.
Problem decomposition into an and-or graph
- The adequacy of both savings and income is determined by the number of dependants an individual must support. There must be at least £3000 in the bank for each dependant.
An adequate income is a steady income, and it must supply at least £9000 per year, plus £2500 for each dependant."
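The advisor's rules can be written directly as a small function. The function and argument names are our own, and the boundary cases (exactly £3000 per dependant, etc.) are resolved with ≥, which the informal description ("at least") suggests:

```python
# The advisor's logic as plain Python conditions.  Thresholds come from
# the text: at least £3000 saved per dependant, and a steady income of
# at least £9000 + £2500 per dependant.
def advise(savings, income, steady, dependants):
    savings_adequate = savings >= 3000 * dependants
    income_adequate = steady and income >= 9000 + 2500 * dependants
    if not savings_adequate:
        return "savings"          # first priority: build up savings
    return "stocks" if income_adequate else "mixture"

print(advise(savings=22000, income=25000, steady=True, dependants=3))  # stocks
print(advise(savings=5000, income=25000, steady=True, dependants=3))   # savings
print(advise(savings=22000, income=15000, steady=True, dependants=3))  # mixture
```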
Problem decomposition into an and-or graph
- How can we turn this information into an and-or graph?
- Step 1: decide what the ultimate advice that the system should provide is. It’s a statement along the lines of “The investment should be X”, where X can be any one of several things.
Start to draw the graph by placing a box at the top:
Advise user: investment should be X
Step 2: decide what sub-goals this goal can be split into.
In this case, X can be one of three things: savings, stocks or a mixture.
Add three sub-goals to the graph. Make sure the links indicate “or” rather than “and”.
Advise user: investment should be X
- X is savings
- X is stocks
- X is mixture
Steps 3a, 3b and 3c: decide what sub-goals each of the goals at the bottom of the graph can be split into.
- It’s only true that “X is savings” if “savings are inadequate”. That provides a subgoal under “X is savings”.
- It’s only true that “X is stocks” if “savings are adequate” and “income is adequate”. That provides two subgoals under “X is stocks” joined by “and” links.
- Similarly, there are two subgoals under “X is mixture” joined by “and” links.
Advise user: investment should be X
- X is savings
  - Savings are inadequate
- X is stocks
  - Savings are adequate
  - Income is adequate
- X is mixture
  - Savings are adequate
  - Income is inadequate
The next steps (4a, 4b, 4c, 4d & 4e) mainly involve deciding whether something’s big enough.
- **Step 4a**: savings are only inadequate if they are smaller than a certain figure (let’s call it Y).
- **Step 4b**: savings are only adequate if they are bigger than this figure (Y).
- **Step 4c**: income is only adequate if it is bigger than a certain figure (let’s call it W), and also steady.
- **Step 4d** is the same as 4b. **Step 4e** is like 4c, but “inadequate”, “smaller” and “not steady”.
Advise user: investment should be X
- X is savings
  - Savings are inadequate
    - Amount saved < Y
- X is stocks
  - Savings are adequate
    - Amount saved > Y
  - Income is adequate
    - Income > W
    - Income is steady
- X is mixture
  - Savings are adequate
    - Amount saved > Y
  - Income is inadequate
    - Income < W, or income is not steady
Now we need a box in which the value of Y is calculated:
Y is Z times 3000
and we need a box in which the value of W is calculated:
W is 9000 plus 2500 times Z
- Z is the number of dependants, so we need a box in which this value is obtained:
Client has Z dependants
- We can now add these last three boxes into the bottom layers of the graph in the same way as we’ve added all the others:
Advise user: investment should be X
- X is savings
  - savings inadequate
    - amount saved < Y
- X is stocks
  - savings adequate
    - amount saved > Y
  - income adequate
    - income > W
    - income is steady
- X is mixture
  - savings adequate
  - income inadequate
    - income < W
    - income is not steady

Calculation and evidence boxes feeding the comparisons above (shared between branches):
- Y is Z × 3000
- W is 9000 + 2500 × Z
- client has Z dependants (supplies Z)
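The finished graph can be read as a decision procedure, evaluating the leaf evidence bottom-up. As a sanity check, here is a minimal Python sketch of the same logic (the function and argument names are mine, not part of the lecture):

```python
def advise(amount_saved, income, income_is_steady, dependants):
    """Evaluate the and-or graph bottom-up for one client."""
    y = dependants * 3000            # threshold for adequate savings
    w = 9000 + 2500 * dependants     # threshold for adequate income
    savings_adequate = amount_saved > y
    income_adequate = income > w and income_is_steady
    income_inadequate = income < w and not income_is_steady
    if amount_saved < y:
        return "savings"
    if savings_adequate and income_adequate:
        return "stocks"
    if savings_adequate and income_inadequate:
        return "mixture"
    return None  # the graph draws no conclusion for the remaining cases
```

Note that, exactly as in the graph, “income inadequate” is not simply the negation of “income adequate”, so some inputs (e.g. high but unsteady income) yield no advice at all.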
Problem decomposition into an and-or graph
- Pieces of evidence describing the current state of affairs appear in the bottom layer of the graph.
- In some cases, these are statements that may or may not be true, depending on the features of the case currently under consideration.
Problem decomposition into an and-or graph
- In the next layer up, there are simple operations such as calculations and comparisons.
- The lines indicate which pieces of evidence act as inputs to these operations.
Problem decomposition into an and-or graph
- In the upper layers, there are conclusions that can be drawn, if the pieces of evidence (and the results of the simple operations) feeding into them (from below) are true.
- Above them, there are further conclusions that can be drawn if the conclusions feeding into them are true.
Problem decomposition into an and-or graph
- At the top, we have the final conclusion of the reasoning process.
- The variables W, Y and Z allow this graph to represent both numerical and logical reasoning. The variable X allows it to deliver more than one possible conclusion.
Problem decomposition into an and-or graph
- Students often seem to mix up this sort of chart with a flow-chart. It isn’t the same thing at all.
- A flow-chart shows the sequence of operations that a program must go through (probably with some branching) to execute its task.
- The and-or graph shows the pieces of evidence that must be present if a certain conclusion is to be reached.
Problem decomposition into an and-or graph
- Why draw and-or chart?
- Because it makes the point that a decision-making process like this can be broken down into a sequence of simple decisions, in a very systematic way.
- Because it’s a first step towards turning a human being’s reasoning into a collection of production rules.
Problem decomposition into an and-or graph
- The and-or chart can be considered as a **backward-chaining production system**, reading from the top to the bottom. Or as a **forward-chaining production system**, reading from the bottom to the top.
Problem decomposition into an and-or graph
- Every node is the conclusion of a production rule, except for the leaf nodes at the bottom, which are requests for information from the user.
- When several links enter a node from below, they represent the conditions for that production rule. They may be joined by "and" connectives or by "or" connectives.
- Alternatively, if several links (or groups of links) enter a node from below, and they are "or" links (or groups of links separated by "or"), this can represent several different production rules which all have the same conclusion.
Production rule: if savings adequate and income adequate then X is stocks
Production rule: if income < $W$ and income is not steady then income is inadequate.
Production rule: if amount saved < Y then savings inadequate
- if $X$ is savings
- if $X$ is stocks
- if $X$ is mixture
- if savings inadequate
- if amount saved < $Y$
- if $Y$ is $Z \times 3000$
- if income is steady
- if income is not steady
- if income > $W$
- if income < $W$
- if $W$ is $9000 + 2500 \times Z$
- if client has $Z$ dependents
Production rule:
if X is savings
or X is stocks
or X is mixture
then advise user investment should be X
If X is savings:
- savings inadequate
  - amount saved < Y
    - Y is Z × 3000
If X is stocks:
- savings adequate
  - amount saved > Y
- income adequate
  - income > W
  - income is steady
If X is mixture:
- savings adequate
- income inadequate
  - income < W
    - W is 9000 + 2500 × Z
  - income is not steady
Client has Z dependants (supplies Z)
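Taken together, the rules above form a small forward-chaining production system: fire any rule whose conditions are all satisfied, add its conclusion, and repeat. A sketch in Python (the rule encoding and the grounding of the variables are mine):

```python
def forward_chain(evidence, rules):
    """Fire rules repeatedly until no new conclusion can be added."""
    derived = set(evidence)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in derived and all(c in derived for c in conditions):
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    (("amount saved < Y",), "savings inadequate"),
    (("amount saved > Y",), "savings adequate"),
    (("income > W", "income is steady"), "income adequate"),
    (("income < W", "income is not steady"), "income inadequate"),
    (("savings inadequate",), "X is savings"),
    (("savings adequate", "income adequate"), "X is stocks"),
    (("savings adequate", "income inadequate"), "X is mixture"),
]
```

Given the evidence {“amount saved > Y”, “income > W”, “income is steady”}, the system derives “income adequate” and then “X is stocks”, reading the graph from the bottom up.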
Problem decomposition into an and-or graph
- The underlying principle is that state spaces can always be converted into production systems, and vice-versa. Searching a large production system is essentially the same problem as searching a large state-space.
Lustre Networking at Cray
Chris Horn
hornc@cray.com
Agenda
● Lustre Networking at Cray
● LNet Basics
● Flat vs. Fine-Grained Routing
● Cost Effectiveness - Bandwidth Matching
● Connection Reliability – Dealing with ARP Flux
● Serviceability – Generating and Emplacing Configuration
● Recent LNet Work in the Lustre Community
● Support for new Mellanox Hardware
● Multiple Fabric Support
● Summary
● Q&A
LNet Basics
● LNet is Lustre Networking layer
● Network type agnostic
● Lustre Network Drivers (LNDs) provide interface to specific network drivers
● gnilnd (Aries/Gemini)
● o2iblnd (InfiniBand/OPA)
● socklnd (Ethernet)
● LNet routers bridge clients on Cray’s high speed network with external Lustre servers
● Gemini/Aries ↔ InfiniBand
● Two types of routing: Flat and Fine-Grained
Flat LNet
- Simple configuration
- Any router can talk to any other peer
Flat LNet
- Performance can be optimal at small scale
Flat LNet
- Performance suffers at large scale from need to traverse inter-switch links
Fine-Grained Routing
- Define groups of peers
- Best performance at scale by avoiding ISLs
- Complex configuration
- # Groups is total # of servers divided by # servers in each group
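The group count on this slide is simple division. A one-line check (helper name is mine), assuming the servers split evenly into groups:

```python
def fgr_group_count(total_servers, servers_per_group):
    # Fine-grained routing: number of groups = total servers / servers per group
    assert total_servers % servers_per_group == 0, "servers must split evenly into groups"
    return total_servers // servers_per_group
```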
## Cost Effectiveness and Bandwidth Matching
<table>
<thead>
<tr>
<th></th>
<th>1*</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sonexion-1600</td>
<td>3.00</td>
<td>6.00</td>
<td>9.00</td>
<td>12.00</td>
<td>15.00</td>
<td>18.00</td>
</tr>
<tr>
<td>Sonexion-2000</td>
<td>3.75</td>
<td>7.50</td>
<td>11.25</td>
<td>15.00</td>
<td>18.75</td>
<td>22.50</td>
</tr>
<tr>
<td>Single HCA</td>
<td>5.50</td>
<td>11.00</td>
<td>16.50</td>
<td>22.00</td>
<td>27.50</td>
<td>33.00</td>
</tr>
<tr>
<td>Dual HCA</td>
<td>4.20</td>
<td>8.40</td>
<td>12.60</td>
<td>16.80</td>
<td>21.00</td>
<td>25.20</td>
</tr>
</tbody>
</table>
- **Need to provide sufficient IB bandwidth in cost-effective manner**
- No network bottlenecks
- Minimize excess bandwidth
* Average throughput of 1 Server or IB link; 2 Servers or IB links; etc.
### Bandwidth Matching
<table>
<thead>
<tr>
<th></th>
<th>1*</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sonexion-1600</td>
<td>3.00</td>
<td>6.00</td>
<td>9.00</td>
<td>12.00</td>
<td>15.00</td>
<td>18.00</td>
</tr>
<tr>
<td>Sonexion-2000</td>
<td>3.75</td>
<td>7.50</td>
<td>11.25</td>
<td>15.00</td>
<td>18.75</td>
<td>22.50</td>
</tr>
<tr>
<td>Single HCA</td>
<td>5.50</td>
<td>11.00</td>
<td>16.50</td>
<td>22.00</td>
<td>27.50</td>
<td>33.00</td>
</tr>
<tr>
<td>Dual HCA</td>
<td>4.20</td>
<td>8.40</td>
<td>12.60</td>
<td>16.80</td>
<td>21.00</td>
<td>25.20</td>
</tr>
</tbody>
</table>
- **Single HCA** == Bandwidth of one IB port on XC40 LNet router node with one IB HCA
- **Dual HCA** == Bandwidth of one IB port on XC40 LNet router node with two IB HCAs
* Average throughput of 1 Server or IB link; 2 Servers or IB links; etc.
### Bandwidth Matching
<table>
<thead>
<tr>
<th></th>
<th>1*</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sonexion-1600</td>
<td>3.00</td>
<td>6.00</td>
<td>9.00</td>
<td>12.00</td>
<td>15.00</td>
<td>18.00</td>
</tr>
<tr>
<td>Sonexion-2000</td>
<td>3.75</td>
<td>7.50</td>
<td>11.25</td>
<td>15.00</td>
<td>18.75</td>
<td>22.50</td>
</tr>
<tr>
<td>Single HCA</td>
<td>5.50</td>
<td>11.00</td>
<td>16.50</td>
<td>22.00</td>
<td><strong>27.50</strong></td>
<td>33.00</td>
</tr>
<tr>
<td>Dual HCA</td>
<td>4.20</td>
<td>8.40</td>
<td>12.60</td>
<td>16.80</td>
<td>21.00</td>
<td>25.20</td>
</tr>
</tbody>
</table>
- 6 Sonexion 2000 OSSes (3 SSUs) ~ 22.5 GB/s
- 5 IB links (from single-HCA routers) ~ 27.5 GB/s
- Servers using ~ 82% of available network bandwidth
## Bandwidth Matching
<table>
<thead>
<tr>
<th></th>
<th>1*</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sonexion-1600</td>
<td>3.00</td>
<td>6.00</td>
<td>9.00</td>
<td>12.00</td>
<td>15.00</td>
<td>18.00</td>
</tr>
<tr>
<td>Sonexion-2000</td>
<td>3.75</td>
<td>7.50</td>
<td>11.25</td>
<td>15.00</td>
<td>18.75</td>
<td>22.50</td>
</tr>
<tr>
<td>Single HCA</td>
<td>5.50</td>
<td>11.00</td>
<td>16.50</td>
<td>22.00</td>
<td>27.50</td>
<td>33.00</td>
</tr>
<tr>
<td>Dual HCA</td>
<td>4.20</td>
<td>8.40</td>
<td>12.60</td>
<td>16.80</td>
<td>21.00</td>
<td>25.20</td>
</tr>
</tbody>
</table>
- 6 Sonexion 2000 OSSes (3 SSUs) ~ 22.5 GB/s
- 6 IB Links (from dual HCA routers) ~ 25.2 GB/s
- Servers using ~ 90% of available network bandwidth
- Ideal ratio n:n
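The utilization figures on the last two slides follow directly from the per-unit numbers in the table. A quick check (constants taken from the table, helper name is mine):

```python
SONEXION_2000_OSS = 3.75   # GB/s per Sonexion 2000 OSS (table row)
SINGLE_HCA_LINK = 5.50     # GB/s per IB link, single-HCA router
DUAL_HCA_LINK = 4.20       # GB/s per IB link, dual-HCA router

def utilization(n_servers, per_server, n_links, per_link):
    """Fraction of router-side IB bandwidth the servers can actually drive."""
    return (n_servers * per_server) / (n_links * per_link)

# 6 OSSes behind 5 single-HCA links: 22.5 / 27.5, about 82%
# 6 OSSes behind 6 dual-HCA links:   22.5 / 25.2, about 89-90%
```

The closer this ratio is to 1 without exceeding it, the less excess IB bandwidth is being purchased, which is the point of bandwidth matching.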
Connection Reliability – Dealing with ARP Flux
● **Address Resolution Protocol (ARP)**
● Maps Network layer address (e.g. IPv4) to link layer address (e.g. MAC address)
● Broadcasts ARP “who-has” request to all peers, “Who has IP w.x.y.z?”
● Peer who-has IP w.x.y.z responds with its MAC address
● **“Flux” occurs when multiple interfaces are on a single host**
● Both interfaces may respond to ARP request
● non-deterministic population of the ARP cache (a.k.a. neighbor table)
● Breaks IPoIB 😞
ARP Flux cont.
- Can workaround by issuing “lctl ping” from routers to servers
- Routers populate server’s ARP cache
- Investigated using kernel IP tunables but found it insufficient
- `net.ipv4.conf.all.arp_ignore = 1`
- `net.ipv4.conf.all.arp_announce = 2`
- Currently recommend placing interfaces on separate subnets
- More complexity
LNet Configuration
- OSS0: ib0 10.149.0.1
- OSS2: ib0 10.149.0.2
- OSS4: ib0 10.149.0.3
- OSS1: ib0 10.149.0.4, alias 10.149.1.4 (ib0:1)
- OSS3: ib0 10.149.0.5, alias 10.149.1.5 (ib0:1)
- OSS5: ib0 10.149.0.6, alias 10.149.1.6 (ib0:1)
- Router TOR 1: gni; ib0 10.149.0.7; ib2 10.149.1.7
- Router TOR 2: gni; ib0 10.149.0.8; ib2 10.149.1.8
- IB LNets: o2ib4002 and o2ib4003
Serviceability - Dealing with Complexity
- Cray LNet Configuration and Validation Tool
- Simple and descriptive input file
- Knowledge of Cray Sonexion IB switch configuration
- Generates “ip2nets” and “routes” LNet module parameters
- Typically stored in files: “ip2nets.dat” and “routes.dat”
- Validates configuration
- Validate IB connectivity
- Validate LNet group membership
- Validate LNet destinations
Add/Remove IP alias to ib0 on module load
```
/sbin/ip -o -4 a show ib0 | \
  /usr/bin/awk '/inet/ {s=$4; sub("10\.149\.0\.", "10.149.1.", s); \
    print "/sbin/ip address add dev ib0 label ib0:1", s}' | \
  /bin/sh
/sbin/modprobe --ignore-install lnet

/sbin/modprobe -r --ignore-remove lnet &&
  /sbin/ip -o -4 a show label ib0:1 | \
  /usr/bin/awk '{print "/sbin/ip address del dev ib0 label ib0:1", $4}' | \
  /bin/sh
```
Hat tip to Dave McMillen
LNet Design/Config Overview
- Use bandwidth matching to get router:server ratio
- Determine IP addressing scheme
- Use clcvt to generate ip2nets and routes configuration
- Configure interfaces
- Plug in cables
- Emplace LNet configuration
- ip2nets, routes, other module parameters
Configuration Emplacement
● Sharedroot in CLE < 6.0
● Access sharedroot from bootnode: xtopview -c lnet
● Edit modprobe.conf.local:
● options lnet ip2nets = "/path/to/ip2nets.dat"
● options lnet routes = "/path/to/routes.dat"
● Config sets in CLE >= 6.0
● Run `cfgset` command on smw:
● `cfgset update --service cray_lnet --mode interactive CONFIGSET`
● See slides at end of deck for example
● Advanced users can manipulate worksheets
Recent LNet Work in the Lustre Community
Memory Registration in o2iblnd
- Historically supported PMR and FMR APIs
- Physical Memory Region (PMR) dropped
- Fast Memory Region (FMR) deprecated
- “Fast Registration API” is the new (Linux 2.6.27) hotness
- Mellanox hardware utilizing mlx5 drivers do not support FMR
- LU-5783: Adds support for Fast Registration API
- Fallback for FMR
- Landed for upcoming Lustre 2.9 release
Mixed Fabric Concerns
- How to optimize ko2iblnd in presence of multiple HCAs?
- OPA & EDR; EDR & FDR; Aries & FDR(ib0) & EDR(ib2)
- LU-7101: per NI map_on_demand values
- FMR enhances performance of OPA
- FMR enabled by setting: 0 < map_on_demand ≤ 256
- MLX5 does not support FMR, so needs map_on_demand = 0
- Works in conjunction with LU-3322 to allow optimal settings
- Landed for upcoming Lustre 2.9 release
- LU-3322: Allow different peer_credits and map_on_demand values
- Available in just released Lustre 2.8
Summary
- Covered some LNet basics:
- Flat vs. Fine Grained Routing
- Cost/Reliability/Serviceability:
- Bandwidth Matching
- ARP Flux
- Cray LNet Configuration and Validation Tool - clcvt
- New configuration emplacement
- Bye Bye Sharedroot! Hello config sets!
- Recent changes in Lustre for new IB technology
- LU-5783, LU-3322, others
- Mixed fabric
- Dealing with different HCAs that use ko2iblnd
Legal Disclaimer
Information in this document is provided in connection with Cray Inc. products. No license, express or implied, to any intellectual property rights is granted by this document.
Cray Inc. may make changes to specifications and product descriptions at any time, without notice.
All products, dates and figures specified are preliminary based on current expectations, and are subject to change without notice.
Cray hardware and software products may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Cray uses codenames internally to identify products that are in development and not yet publicly announced for release. Customers and other third parties are not authorized by Cray Inc. to use codenames in advertising, promotion or marketing and any use of Cray Inc. internal codenames is at the sole risk of the user.
Performance tests and ratings are measured using specific systems and/or components and reflect the approximate performance of Cray Inc. products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance.
The following are trademarks of Cray Inc. and are registered in the United States and other countries: CRAY and design, SONEXION, and URIKA. The following are trademarks of Cray Inc.: APPRENTICE2, CHAPEL, CLUSTER CONNECT, CRAYPAT, CRAYPORT, ECOPHLEX, LIBSCI, NODEKARE, REVEAL, THREADSTORM. The following system family marks, and associated model number marks, are trademarks of Cray Inc.: CS, CX, XC, XE, XK, XMT, and XT. The registered trademark LINUX is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis. Other trademarks used in this document are the property of their respective owners.
Q&A
Chris Horn
hornc@cray.com
What is Multi-Rail
- Use multiple independent networks, or “rails”, to overcome bandwidth limitations or increase fault tolerance
- Allow communication between two hosts across multiple interfaces
- One or more networks
- Interfaces used concurrently
- Cray utilizes multiple interfaces in non-multi-rail configuration
Multi-Rail LNet
- **Basic capability**
- Multiplex across interfaces, as opposed to striping
- Need multiple streams to see any benefit
- **Extend peer discover to simplify configuration**
- Discover a peer’s interfaces and multi-rail capability
- **Enable run-time configuration changes**
- add/remove interfaces, etc., via lnetctl
- **Compatibility with non-multi-rail nodes**
- **Increase resiliency by using alternate paths**
- **Targeted for Lustre 2.10**
- **http://wiki.lustre.org/Multi-Rail_LNet**
smw:~ # cfgset update --service cray_lnet --mode interactive hornc-p2
Service Configuration Menu (Config Set: hornc-p2, type: cle)
cray_lnet [ status: enabled ] [ validation: valid ]
<table>
<thead>
<tr>
<th>Selected</th>
<th>#</th>
<th>Settings</th>
<th>Value/Status (level=basic)</th>
</tr>
</thead>
<tbody>
<tr>
<td>ko2iblnd</td>
<td>1)</td>
<td>peer_credits</td>
<td>63</td>
</tr>
<tr>
<td></td>
<td>2)</td>
<td>concurrent_sends</td>
<td>63</td>
</tr>
<tr>
<td>local_lnet</td>
<td>3)</td>
<td>lnet_name</td>
<td>gni4</td>
</tr>
<tr>
<td></td>
<td>4)</td>
<td>ip_wildcard</td>
<td>10.129.*.*</td>
</tr>
<tr>
<td></td>
<td>5)</td>
<td>flat_routes</td>
<td>[ 6 sub-settings unconfigured, select and enter C to add entries ]</td>
</tr>
<tr>
<td></td>
<td>6)</td>
<td>fgr_routes</td>
<td>[ 5 sub-settings unconfigured, select and enter C to add entries ]</td>
</tr>
</tbody>
</table>
CUG 2016
Copyright 2016 Cray Inc.
### Selected Settings
<table>
<thead>
<tr>
<th>Selected</th>
<th>#</th>
<th>Settings</th>
<th>Value/Status (level=basic)</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td>flat_routes</td>
<td>[ 6 sub-settings unconfigured, select and enter C to add entries ]</td>
</tr>
<tr>
<td>*</td>
<td>6)</td>
<td>fgr_routes</td>
<td>[ 5 sub-settings unconfigured, select and enter C to add entries ]</td>
</tr>
</tbody>
</table>
----
#### Select Options
- **a:** all
- **n:** none
- **c:** configured
- **u:** unconfigured
- **#:** toggle #
#### Actions on Selected (1 settings)
- **C:** configure
- **@:** show guidance
#### Other Actions
- **?:** help
- **l:** switch level
- **E:** toggle enable
- **I:** toggle inherit
- **^:** go to service list
- **r:** refresh
- **$:** view changelog
- **Q:** save & exit
- **x:** exit without save
---
**Enter “6”**
**Enter “C”**
Cray LNet Settings: FGR Routes
**fgr_routes**
Enter all external LNets which will be reached via Fine-Grained Routing (FGR). The information entered for each of these flat LNets will be used to set up ip2nets on the routers and routes to reach the external LNets through the routers on the clients.
Configured Values:
(none)
Inputs: menu commands (? for help)
- Enter “+”
**fgr_routes**
Enter all external LNets which will be reached via Fine-Grained Routing (FGR). The information entered for each of these flat LNets will be used to set up ip2nets on the routers and routes to reach the external LNets through the routers on the clients.
**dest_name -- Destination name**
Enter the name of the destination. This is not functionally important. A good convention would be to use the name of the destination. For example, if the destination is the husk2 external file system, enter 'husk2'.
Default: (none) Current: not configured yet
Value: string, blank values not allowed
level=basic, state=unset
Inputs: <string> -- OR -- menu commands (? for help)
```
cray_lnet.settings.fgr_routes.data.dest_name
[cr]=set'',<new value>,?=help,@=less] $ snx8675309
```
• Enter “snx8675309”
fgr_routes (current key: snx8675309)
Enter all external LNets which will be reached via Fine-Grained Routing (FGR). The information entered for each of these flat LNets will be used to set up ip2nets on the routers and routes to reach the external LNets through the routers on the clients.
routers -- LNet router nodes
Enter a list of router cnames which will be used to route from the source LNet to the destination LNet. If the router nodes are managed externally (e.g. you are currently configuring LNet on servers) this can be left empty.
Default: (none) Current: (none)
Value: list, blank values allowed, regex=^c(\d+)-(\d+)c([0-2])s(\d[0-5]?)n([0-3])$|^(\d{1,3})(\.\d{1,3}){3}$
level=basic, state=unset
Inputs: menu commands (? for help)
Enter “+”
<cr>=set 0 entries, +=add an entry, ?=help, @=less] $ +
Add routers (Ctrl-d to exit) $ c0-0c0s2n1
Add routers (Ctrl-d to exit) $ c0-0c0s2n2
Add routers (Ctrl-d to exit) $ c0-0c0s3n1
Add routers (Ctrl-d to exit) $ c0-0c0s3n2
Add routers (Ctrl-d to exit) $ c0-0c1s2n1
Add routers (Ctrl-d to exit) $ c0-0c1s2n2
Add routers (Ctrl-d to exit) $
fgr_routes (current key: snx8675309)
Enter all external LNets which will be reached via Fine-Grained Routing (FGR). The information entered for each of these flat LNets will be used to set up ip2nets on the routers and routes to reach the external LNets through the routers on the clients.
ip2nets_file -- FGR ip2nets file
Enter the name of the ip2nets file for this FGR config. The file must be placed in the config_set at
smw:/var/opt/cray/imps/config/sets/<config_set>/files/roles/lnet/.
This file must be generated using an external tool, such as clcvt.
Default: (none) Current: not configured yet
Value: string, blank values not allowed, regex=^[!-.0-~]+$
level=basic, state=unset
Inputs: <string> -- OR -- menu commands (? for help)
cray_lnet.settings.fgr_routes.data.snx8675309.ip2nets_file
[<cr>=set '', <new value>, ?=help, @=less] $ ip2nets.dat
fgr_routes (current key: snx8675309)
Enter all external LNets which will be reached via Fine-Grained Routing (FGR). The information entered for each of these flat LNets will be used to set up ip2nets on the routers and routes to reach the external LNets through the routers on the clients.
routes_file -- FGR routes file
Enter the name of the routes file for this FGR config. The file must be placed in the config_set at
smw:/var/opt/cray/imps/config/sets/<config_set>/files/roles/lnet/.
This file must be generated using an external tool, such as clcvt.
Default: (none) Current: not configured yet
Value: string, blank values not allowed, regex=^[!-.0-~]+$
level=basic, state=unset
Inputs: <string> -- OR -- menu commands (? for help)
cray_lnet.settings.fgr_routes.data.snx8675309.routes_file
[<cr>=set '', <new value>, ?=help, @=less] $ routes.dat
fgr_routes (current key: snx8675309)
Enter all external LNets which will be reached via Fine-Grained Routing (FGR). The information entered for each of these flat LNets will be used to set up ip2nets on the routers and routes to reach the external LNets through the routers on the clients.
ko2iblnd_peer_credits -- ko2iblnd peer_credits
The number of concurrent sends allowed to a single peer. Cray recommends setting this to 126. peer_credits must be consistent across all peers on the IB network. This means it must be the same on the routers and the Lustre servers. If there is a mismatch, the file system will be unmountable. This value is specific to the routers specified in this FGR config, and it will override the general ko2iblnd peer_credits setting specified earlier.
Default: 126 Current: not configured yet
Value: integer, blank values allowed, regex=^[1-9]\d*$
level=basic, state=unset
Inputs: <integer> -- OR -- menu commands (? for help)
fgr_routes (current key: snx8675309)
Enter all external LNets which will be reached via Fine-Grained Routing (FGR). The information entered for each of these flat LNets will be used to set up ip2nets on the routers and routes to reach the external LNets through the routers on the clients.
ko2iblnd_concurrent_sends -- ko2iblnd concurrent_sends
Determines send work-queue sizing. If this option is omitted, the default is calculated based on peer_credits and map_on_demand. Cray recommends setting this to 63. concurrent_sends must be consistent across all peers on the IB network. This means it must be the same on the routers and the Lustre servers. If there is a mismatch, the file system will be unmountable. This value is specific to the routers specified in this FGR config, and it will override the general ko2iblnd concurrent_sends setting specified earlier.
Default: 63 Current: not configured yet
Value: integer, blank values allowed, regex=^[1-9]\d*$
level=basic, state=unset
Inputs: <integer> -- OR -- menu commands (? for help)
```
cray_lnet.settings.fgr_routes.data.snx8675309.ko2iblnd_concurrent_sends
[<cr>=set '63', <new value>, ?=help, @=less] $
```
fgr_routes
Enter all external LNets which will be reached via Fine-Grained Routing (FGR). The information entered for each of these flat LNets will be used to set up ip2nets on the routers and routes to reach the external LNets through the routers on the clients.
Configured Values:
1) 'snx8675309'
a) routers:
c0-0c0s2n1
c0-0c0s2n2
c0-0c0s3n1
c0-0c0s3n2
c0-0c1s2n1
c0-0c1s2n2
b) ip2nets_file: ip2nets.dat
c) routes_file: routes.dat
d) ko2iblnd_peer_credits: 63
e) ko2iblnd_concurrent_sends: 63
Inputs: menu commands (? for help)
|
olmocr_science_pdfs
|
2024-12-06
|
2024-12-06
|
469109b2dbbef9b8a5132daf35179bc7fd04eabd
|
Software Defined Networks using Mininet
Pramod B Patil, Kanchan S. Bhagat, D K Kirange, S. D. Patil
Abstract: A software defined network (SDN) provides centralized management by separating the network's data and control planes. This feature is exploited by cloud computing environments, enterprise data centers, and service providers implementing software defined data centers. In this paper we use Mininet to demonstrate the applicability of SDN at different scales. We study the performance of two SDN controllers, RYU and POX, both implemented in Python, using Mininet and D-ITG, the Distributed Internet Traffic Generator. The study uses two network topologies, single and linear, and three performance parameters: maximum delay, average jitter, and average bitrate. Experimental results demonstrate that the linear topology with the RYU controller performs better than the single topology with the POX controller for different network sizes.
Keywords: POX, RYU, D-ITG, SDN
I. INTRODUCTION
Software defined networking (SDN) is an emerging technology and a stepping stone to the next generation of network infrastructure. SDN requires mechanisms for the centralized controller to communicate with the distributed data plane, as shown in Figure 1. The controller is thus a basic component of the SDN design, one that determines the success or failure of SDN. The network is controlled by software applications and SDN controllers rather than by traditional network management consoles and commands, which require a lot of administrative overhead and can be tedious to manage at large scale [1]. Large data centers require scalability, and SDN is able to transform today's static networks into large, scalable ones. Virtualization is also needed for automated, dynamic, and secure cloud environments, and SDN is able to provide it [2].
It is difficult to configure and install network elements without experienced IT staff. Simulators are required to model networks and the interactions among their elements, such as switches and routers, which is hard to achieve with physical equipment. Therefore, a new network model is required to support these agility requirements.
SDN is characterized by four factors namely,
A. Separation of control plane and data plane
B. A centralized view of the network with centralized controller
C. Open interfaces between the devices in the control plane and the data plane
D. The ability to program the network through external applications
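The first two factors can be sketched in a few lines of plain Python (a toy model for illustration only; real SDN controllers speak OpenFlow to the switches): one controller object holds the centralized view and programs the flow tables of many switch objects.

```python
# Toy model of SDN's control/data-plane split (illustrative only,
# not the OpenFlow API): the controller makes one decision and
# pushes it into every switch's flow table.

class Switch:
    """Data plane: forwards packets by looking up a flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}              # destination address -> out port

    def handle(self, dst):
        # Unknown destinations are punted to the controller.
        return self.flow_table.get(dst, "send-to-controller")

class Controller:
    """Control plane: centralized view, installs rules in every switch."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def install_rule(self, dst, port):
        for sw in self.switches:          # one decision, pushed network-wide
            sw.flow_table[dst] = port

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)

print(s1.handle("10.0.0.2"))              # no rule yet -> send-to-controller
ctrl.install_rule("10.0.0.2", 3)
print(s1.handle("10.0.0.2"), s2.handle("10.0.0.2"))   # 3 3
```

The point of the sketch is the separation: the switches contain no decision logic at all, only a table that the centralized controller fills in.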
II. SDN CONTROLLERS
An SDN controller (also called an SDN controller platform) is known as the "brain" of a software-defined network. It is an application acting as the centralized control point of the SDN network, and it manages flow control to the switches and routers. The controller also exposes applications and business logic via northbound APIs for the deployment of intelligent networks. As organizations deploy more SDN networks, controllers are increasingly tasked with federating between SDN controller domains using common application interfaces, including OpenFlow and the Open vSwitch Database protocol (OVSDB).
Various network tasks in an SDN controller platform are performed by a collection of "pluggable" modules. The tasks include discovering the devices in the network, determining their capabilities, gathering network statistics, and so on. More advanced capabilities can be supported through extensions, such as routing algorithms that perform analytics and orchestrate new rules throughout the network.
A. POX Controller
The POX controller communicates with SDN switches using the OpenFlow or OVSDB protocol. Developers can use POX to create an SDN controller in the Python programming language, a popular tool for developing and designing software defined networks and network applications. POX can be used out of the box with the stock components that come bundled with it.
More complex SDN controllers can be created by writing new POX components, and different network applications can be written against POX's API.
Components of POX: POX components are invoked when POX is started from the command line. They are additional Python programs that implement the network functionality of the software defined network. A number of stock components are already available with POX.
The POX controller can be started with selected stock components by entering the following command in a terminal session connected to the Mininet VM.
```bash
~$ sudo ~/pox/pox.py forwarding.l2_pairs
```
The NOXrepo website lists the following POX features:
- A "Pythonic" OpenFlow interface.
- Reusable sample components for path selection, topology discovery, etc.
- The ability to run anywhere; POX can be bundled with the install-free PyPy runtime for easy deployment.
- Specific targeting of Linux, Mac OS, and Windows.
- Topology discovery.
- The same GUI and visualization tools as NOX.
- Better performance than NOX applications written in Python.
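The forwarding components bundled with POX implement variations of MAC learning. As a rough sketch of that logic in plain Python (illustrative only; POX's real components use the OpenFlow API and install flow-table entries in the switch), a learning switch remembers which port each source MAC address was seen on and floods only while the destination is still unknown:

```python
# Toy MAC-learning logic, similar in spirit to POX's forwarding
# components (not POX's actual component API).

FLOOD = "flood"

class LearningSwitch:
    def __init__(self):
        self.mac_to_port = {}

    def packet_in(self, src_mac, dst_mac, in_port):
        # Learn: the source address is reachable via the ingress port.
        self.mac_to_port[src_mac] = in_port
        # Forward: use the learned port, or flood if the destination
        # has not been seen yet.
        return self.mac_to_port.get(dst_mac, FLOOD)

sw = LearningSwitch()
print(sw.packet_in("aa:aa", "bb:bb", 1))   # bb:bb unknown -> flood
print(sw.packet_in("bb:bb", "aa:aa", 2))   # aa:aa learned -> 1
```

Components such as forwarding.l2_pairs apply the same idea but install (source, destination) pair rules into the switch so later packets never reach the controller.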
B. RYU Controller
Ryu is a component-based software defined networking framework. It provides software components with well-defined APIs that help developers create network management and control applications. RYU supports various protocols for managing network devices, such as OpenFlow, NETCONF, and OF-config, and it fully supports OpenFlow versions 1.0, 1.2, 1.3, 1.4, and 1.5 as well as the Nicira extensions.
The RYU code is written entirely in Python and is freely available under the Apache 2.0 license, open for anyone to use. Among the protocols RYU supports, OpenFlow is one of the first and most widely deployed SDN communications standards.
III. MININET
Mininet is a network emulator that creates a network of virtual hosts, switches, controllers, and links. Mininet hosts run standard Linux network software, and its switches support OpenFlow for highly flexible custom routing and SDN. The emulated system can be driven through a CLI (command line interface) and an API (application program interface). Mininet is widely used because it is quick to start a simple network, supports custom topologies and packet forwarding, runs real programs available on Linux, runs on PCs, servers, and virtual machines, allows experiments to be shared and reproduced, is easy to use, and is open source and under active development. Mininet uses process-based virtualization to emulate entities on a single OS kernel by running real code, including standard network applications, on the real OS kernel and network stack.
Therefore, a project that works correctly in Mininet can usually move directly to practical networks composed of real hardware devices: code developed in Mininet can run in a real network without any modification. Mininet supports large scale networks containing large numbers of virtual hosts and switches [3][4][5].
A. Characteristics of MININET
- Flexibility: MININET can define new topologies and new features in software, using programming languages and common operating systems.
- Applicability: prototypes developed in MININET should be usable in real networks.
- Interactivity: the simulated network can be managed and run interactively, as if it were a real network.
- Scalability: the prototyping environment should scale to large networks with hundreds or thousands of switches on a single computer.
- Realism: prototype behavior should represent real-time behavior with a high degree of confidence, so applications and protocol stacks should be usable without code modification.
- Shareability: collaborators should be able to reuse the created prototypes, running and modifying the experiments themselves.
The following steps are performed after a successful installation:
Step 1: Run following command to get access to Mininet terminal.
```bash
sudo mn
```
(the above command starts the Mininet CLI)
A default topology will be loaded, with two hosts connected to a switch (OVSSwitch) and a default controller (ovs-controller). The terminal will display a prompt like this:
```
mininet>
```
Step 2: Use following commands inside the Mininet terminal to get information about topology:
1. For displaying nodes: nodes
2. For displaying links: net
3. To dump all nodes information: dump
4. For more CLI commands, use help.
Step 3: Information of any particular host can be obtained by the following command inside the Mininet terminal:
```bash
host-name ifconfig -a
```
Example: h1 ifconfig -a
---
**Reference:**
- DOI:10.35940/ijrte.844
- F9375038620
**Published By:**
Blue Eyes Intelligence Engineering & Sciences Publication
Step 4: To ping from one node to another node:
host-name-1 ping host-name-2
Example: h1 ping h2
(above command will ping from h1 to h2)
Step 5: To ping all hosts from every host:
pingall
(above command will ping all hosts from every host)
Step 6: Some inbuilt topologies are provided by Mininet, such as linear, single, minimal, reversed, torus, and tree.
To use them, open Mininet from your bash terminal with the following commands:
```bash
sudo mn --topo linear,5
```
(creates a linear topology of 5 nodes, each connected to a separate switch)
```bash
sudo mn --topo single,6
```
(creates a single topology of 6 nodes, all connected to a single switch)
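The structural difference between the two built-in topologies can be summarized with a small sketch (plain Python, for illustration): `single,N` attaches all N hosts to one switch, while `linear,N` chains N switches with one host each.

```python
# Element counts for Mininet's built-in "single" and "linear" topologies.
# single,N : 1 switch, N hosts, N host-switch links
# linear,N : N switches, N hosts, N host-switch links + (N-1) switch-switch links

def topo_size(kind, n):
    """Return (switches, hosts, links) for the named built-in topology."""
    if kind == "single":
        return (1, n, n)
    if kind == "linear":
        return (n, n, n + (n - 1))
    raise ValueError(f"unknown topology: {kind}")

print(topo_size("linear", 5))   # (5, 5, 9)
print(topo_size("single", 6))   # (1, 6, 6)
```

The extra switch-switch hops in the linear topology are what make the two topologies interesting to compare under load.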
IV. USING D-ITG TRAFFIC GENERATOR IN MININET
1. Login into Mininet
2. $ cd ~/pox
3. $ ./pox.py forwarding.l2_learning
mininet@mininet-vm:~/pox$ ./pox.py forwarding.l2_learning
POX 0.2.0 (c) Copyright 2011-2013 James McCauley, et al.
INFO:core:POX 0.2.0 (c) is up.
4. Now run these commands in the first PuTTY session.
5. $ cd ~
6. $ sudo mn --controller=remote,ip=127.0.0.1,port=6633
7. $ xterm h1
8. $ xterm h2
mininet> xterm h1
9. Now in the xterm window of h2, run these commands:
10. $ cd D-ITG-2.8.1-r1023/bin
11. $ ./ITGRecv
Now, to analyze the logs, run the following commands.
12. Now in the xterm of h1, run these commands.
13. $ cd D-ITG-2.8.1-r1023/bin
14. $ ./ITGSend -T UDP -a 10.0.0.2 -c 100 -C 10 -t 15000
15. $ ./ITGDec sender.log
16. Run this in the xterm of h1.
17. $ ./ITGDec sender.log
```
root@mininet-vm:/D-ITG-2.8.1-r1023/bin$ ./ITGDec sender.log
ITGdec version 2.8.1 (r1025)
Compile-time options: bursty multicast
Flow number: 1
From 10.0.0.1:154586
To 10.0.0.2:6999
Total time = 14.510472 s
Total packets = 145
Min delay = 0.000000 s
Max delay = 0.000000 s
Average delay = 0.000000 s
Average jitter = 0.000000 s
Delay standard deviation = 0.000000 s
Bytes received = 1480
Average bitrate = 7.393472 kbit/s
Average packet rate = 9.392397 pkt/s
Packets dropped = 0 (0.00 %)
Average loss/burst size = 0.000000 pkt
Error lines = 0
```
18. Similarly run this on h2.
19. $ ./ITGDec receiver.log
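When many such runs are collected, the `name = value` lines of the ITGDec summary can be scraped with a few lines of Python. This is a sketch, not part of D-ITG; the field names are taken from the output shown above:

```python
import re

def parse_itgdec(text):
    """Extract 'name = value [unit]' metrics from ITGDec summary output."""
    metrics = {}
    for line in text.splitlines():
        m = re.match(r"\s*([A-Za-z/ ]+?)\s*=\s*([\d.]+)", line)
        if m:
            metrics[m.group(1)] = float(m.group(2))
    return metrics

# Abbreviated sample in the format ITGDec prints.
sample = """Total packets = 145
Average delay = 0.000000 s
Average bitrate = 7.393472 kbit/s
Packets dropped = 0 (0.00 %)"""

stats = parse_itgdec(sample)
print(stats["Average bitrate"])     # 7.393472
print(int(stats["Total packets"]))  # 145
```

Feeding each run's log through such a parser is one way to build the delay, jitter, and bitrate tables that follow.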
V. EXPERIMENTING WITH SDN
To evaluate the system, we created a single-controller topology in Mininet, with 50, 100, 200, 300, 400, and 500 hosts and remote controllers. We use a script to flood the capacity of the switches, and after flooding the capacity we analyze the behavior of the switches with respect to the controller. To fill the capacity, we send different numbers of packets from hosts to other hosts simultaneously, in order to understand the network behavior under load. We evaluated the performance using the POX and RYU controllers.
A. Single Topology Evaluation
For the single topology, we configured the controller in Mininet with the port number and IP address of the machine. The POX controller is invoked using the command
```
./pox.py forwarding.l2_pairs openflow.of_01 --port=6633
```
on one machine to establish the connection between the switch and the controller. To evaluate the performance, we fill the capacity with different numbers of packets, flooding the switches by sending different packets simultaneously. We calculate the maximum delay, average bitrate, and average jitter by observing the flow of the packets, and we observe the switch-controller communication so that we can later compare it with the other topology. Once we start the traffic bridge between the hosts and client with the help of xterm and the D-ITG tool, we see how the controller responds and how the switch handles the traffic.
After setting up the topologies and conducting the experiments, we calculated the important parameters affecting the network traffic. We calculated the maximum delay to characterize the behavior and differentiate between single and linear topology performance. Another parameter we calculated is the average jitter: an important requirement for networking infrastructure is the ability to handle load effectively, and since the load in any network can be unpredictable and can reach its maximum at any time, the average jitter highlights how the topology reacts. We also evaluated the performance of the network using the average bitrate.
We flooded the capacity of the switches with different flows, sent multiple packets to different hosts using the D-ITG tool, observed the flows, and plotted graphs of the different parameters affecting network performance. We analyzed the output of the D-ITG tool and plotted graphs of the resulting values, evaluating the performance for both the POX and RYU controllers.
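As a reminder of what these metrics mean, jitter can be computed as the mean absolute variation between consecutive inter-arrival gaps. This is a simplified sketch; D-ITG's exact definition may differ:

```python
def average_jitter(arrival_times):
    """Mean absolute difference between consecutive inter-arrival gaps."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    if len(gaps) < 2:
        return 0.0
    variations = [abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])]
    return sum(variations) / len(variations)

# Perfectly paced packets -> (essentially) zero jitter.
print(average_jitter([0.0, 0.1, 0.2, 0.3]))    # ~0.0
# One late packet -> nonzero jitter.
print(average_jitter([0.0, 0.1, 0.25, 0.35]))  # ~0.05
```

A congested switch delays some packets more than others, widening the gap variations and hence the reported jitter.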
The graph in Figure 2 below shows the maximum delay collected for different packets with a single controller. The x-axis represents the number of hosts in each experiment, and the y-axis represents the delay calculated for different packets, in seconds.
### TABLE I. MAXIMUM DELAY FOR SINGLE TOPOLOGY
<table>
<thead>
<tr>
<th>Number of Hosts</th>
<th>POX Controller</th>
<th>RYU Controller</th>
</tr>
</thead>
<tbody>
<tr>
<td>50</td>
<td>0.0108</td>
<td>0.000525</td>
</tr>
<tr>
<td>100</td>
<td>0.003022</td>
<td>0.000815</td>
</tr>
<tr>
<td>200</td>
<td>0.004134</td>
<td>0.00115</td>
</tr>
<tr>
<td>300</td>
<td>0.00435</td>
<td>0.000708</td>
</tr>
<tr>
<td>400</td>
<td>0.004434</td>
<td>0.001055</td>
</tr>
</tbody>
</table>
The maximum delay for the RYU controller is lower than for the POX controller across the different numbers of hosts.
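This claim can be checked mechanically against the single-topology delay values from the table above:

```python
# Maximum delay (seconds) for the single topology, taken from the table:
# hosts -> (POX, RYU)
max_delay = {
    50:  (0.0108,   0.000525),
    100: (0.003022, 0.000815),
    200: (0.004134, 0.00115),
    300: (0.00435,  0.000708),
    400: (0.004434, 0.001055),
}

# RYU's maximum delay is lower than POX's for every host count tested.
assert all(ryu < pox for pox, ryu in max_delay.values())
print("RYU delay lower for all of", sorted(max_delay), "hosts")
```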
The graph in Figure 3 below shows the average jitter collected for different packets with a single controller. The x-axis represents the number of hosts in each experiment, and the y-axis represents the average jitter calculated for different packets, in seconds.
### TABLE II. AVERAGE JITTER FOR SINGLE TOPOLOGY
<table>
<thead>
<tr>
<th>Number of Hosts</th>
<th>POX Controller</th>
<th>RYU Controller</th>
</tr>
</thead>
<tbody>
<tr>
<td>50</td>
<td>0.000173</td>
<td>0.000073</td>
</tr>
<tr>
<td>100</td>
<td>0.00013</td>
<td>0.000111</td>
</tr>
<tr>
<td>200</td>
<td>0.000122</td>
<td>0.000093</td>
</tr>
<tr>
<td>300</td>
<td>0.004147</td>
<td>0.000085</td>
</tr>
<tr>
<td>400</td>
<td>0.004413</td>
<td>0.000056</td>
</tr>
</tbody>
</table>
As depicted in the graph and table above, the average jitter is lower for the RYU controller than for the POX controller in the single topology.
The graph in Figure 4 below shows the average bitrate collected for different packets with a single controller. The x-axis represents the number of hosts in each experiment, and the y-axis represents the average bitrate calculated for different packets, in kbit/s.

**Figure 4: Average Bitrate**
<table>
<thead>
<tr>
<th>Number of Hosts</th>
<th>POX Controller</th>
<th>RYU Controller</th>
</tr>
</thead>
<tbody>
<tr>
<td>50</td>
<td>7.936863</td>
<td>7.97688</td>
</tr>
<tr>
<td>100</td>
<td>7.966276</td>
<td>7.971505</td>
</tr>
<tr>
<td>200</td>
<td>7.981863</td>
<td>7.993634</td>
</tr>
<tr>
<td>300</td>
<td>8.114428</td>
<td>7.86923</td>
</tr>
<tr>
<td>400</td>
<td>8.065564</td>
<td>7.997105</td>
</tr>
<tr>
<td>500</td>
<td>7.985953</td>
<td>7.81005</td>
</tr>
</tbody>
</table>
### TABLE III. AVERAGE BITRATE
**B. Linear Topology Evaluation**
For the linear topology, we configured the controller in Mininet with the port number and IP address of the machine. After setting up the topologies and conducting the experiments, we calculated the important parameters affecting the network traffic.
The graph in Figure 5 below shows the maximum delay collected for different packets in the linear topology. The x-axis represents the number of hosts in each experiment, and the y-axis represents the delay calculated for different packets, in seconds.

**Figure 5: Maximum Delay for Linear Topology**
### TABLE IV. MAXIMUM DELAY FOR LINEAR TOPOLOGY
<table>
<thead>
<tr>
<th>Number of Hosts</th>
<th>POX Controller</th>
<th>RYU Controller</th>
</tr>
</thead>
<tbody>
<tr>
<td>50</td>
<td>0.029656</td>
<td>0.001138</td>
</tr>
<tr>
<td>100</td>
<td>0.07761</td>
<td>0.00005</td>
</tr>
<tr>
<td>200</td>
<td>0.000054</td>
<td>0.00198</td>
</tr>
</tbody>
</table>
For the linear topology, the maximum delay is lower for the RYU controller than for the POX controller at 50 and 100 hosts, although the table shows a lower POX value at 200 hosts.
The graph in Figure 6 below shows the average jitter collected for different packets in the linear topology. The x-axis represents the number of hosts in each experiment, and the y-axis represents the average jitter calculated for different packets, in seconds.

**Figure 6: Average Jitter for Linear Topology**
### TABLE V. AVERAGE JITTER FOR LINEAR TOPOLOGY
<table>
<thead>
<tr>
<th>Number of Hosts</th>
<th>POX Controller</th>
<th>RYU Controller</th>
</tr>
</thead>
<tbody>
<tr>
<td>50</td>
<td>0.000279</td>
<td>0.000092</td>
</tr>
<tr>
<td>100</td>
<td>0.000618</td>
<td>0.000073</td>
</tr>
<tr>
<td>200</td>
<td>0.001844</td>
<td>0.000071</td>
</tr>
</tbody>
</table>
The average jitter is also lower for the RYU controller than for the POX controller.
The graph in Figure 7 below shows the average bitrate collected for different packets in the linear topology. The x-axis represents the number of hosts in each experiment, and the y-axis represents the average bitrate calculated for different packets, in kbit/s.

**Figure 7: Average Bitrate for linear topology**
**C. Comparing the Performance of Single and Linear Topology**
Figure 8 shows the maximum delay performance of single and linear topology using POX controller.

**Figure 8: Maximum Delay Using POX Controller**
With the POX controller, the maximum delay is lower for the single topology.
Figure 9 shows the maximum delay performance of single and linear topology using RYU controller.

**Figure 9: Maximum Delay Using RYU Controller**
With the RYU controller, the maximum delay is lower for the single topology.
Table VII depicts the performance of the single and linear topologies in terms of average jitter for the POX controller.
<table>
<thead>
<tr>
<th>Number of Hosts</th>
<th>Single topology</th>
<th>Linear topology</th>
</tr>
</thead>
<tbody>
<tr>
<td>50</td>
<td>0.000173</td>
<td>0.000279</td>
</tr>
<tr>
<td>100</td>
<td>0.00013</td>
<td>0.000618</td>
</tr>
<tr>
<td>200</td>
<td>0.000122</td>
<td>0.001844</td>
</tr>
</tbody>
</table>
As depicted in Table VII, the average jitter is lower for the single topology than for the linear topology.
Table VIII depicts the performance of the single and linear topologies in terms of average jitter for the RYU controller.
<table>
<thead>
<tr>
<th>Number of Hosts</th>
<th>Single topology</th>
<th>Linear topology</th>
</tr>
</thead>
<tbody>
<tr>
<td>50</td>
<td>0.000073</td>
<td>0.000092</td>
</tr>
<tr>
<td>100</td>
<td>0.000111</td>
<td>0.000073</td>
</tr>
<tr>
<td>200</td>
<td>0.000093</td>
<td>0.000071</td>
</tr>
</tbody>
</table>
As depicted in Table VIII, with the RYU controller the average jitter is lower for the linear topology than for the single topology at 100 and 200 hosts.
VI. CONCLUSION AND FUTURE SCOPE
Software Defined Networks are spreading widely and becoming one of the most promising technologies. As demand increases, it is important to take the associated challenges into consideration. From the results of the experiments, we draw the following conclusions:
- The maximum delay for the RYU controller is lower than for the POX controller.
- The maximum delay for the single topology is lower than for the linear topology.
- The average jitter for the RYU controller is lower than for the POX controller.
- The average jitter for the single topology is lower than for the linear topology.
- The performance of the POX and RYU controllers for the single and linear topologies was also evaluated for different numbers of hosts.
Hence the RYU controller with the linear topology is found to perform better across different network sizes.
In the future we will evaluate the performance of other SDN controllers, using different network simulation tools and performance analysis metrics.
REFERENCES
3. Mininet: An Instant Virtual Network on your Laptop (or other PC), available online: http://mininet.org/, accessed April 2019.
Agent-Oriented Software Development: A Case Study
Paolo Giorgini1 Anna Perini2 John Mylopoulos3 Fausto Giunchiglia4 Paolo Bresciani2
1 Department of Mathematics - University of Trento - Italy - pgiorgini@science.unitn.it
2 ITC-irst - Povo (Trento) - Italy - {bresciani,perini}@itc.it
3 Department of Computer Science - University of Toronto - Canada - jm@cs.toronto.edu
4 Dipartimento di Informatica e Studi Aziendali - University of Trento - Italy - fausto@cs.unitn.it
Abstract
We are developing a methodology, called Tropos, for building agent-oriented software systems. The methodology covers five software development phases: early requirements analysis, late requirements analysis, architectural design, detailed design, and implementation. Throughout, the concepts offered by i* are used to model both the stakeholders in the system’s environment, and the system itself. These concepts include actors, who can be (social) agents (organizational, human or software), positions or roles, goals, and social dependencies for defining the obligations of actors to other actors (called dependees and dependers respectively.) Dependencies may involve a goal, to be fulfilled by the dependee on behalf of the depender, a task to be carried out by the dependee, or a resource to be delivered. The paper presents a case study to illustrate the features and the strengths of the Tropos methodology.
1. Introduction
The explosive growth of application areas such as electronic commerce, enterprise resource planning and mobile computing has profoundly and irreversibly changed our views on software and Software Engineering. Software must now be based on open architectures that continuously change and evolve to accommodate new components and meet new requirements. Software must also operate on different platforms, without recompilation, and with minimal assumptions about its operating environment and its users. As well, software must be robust and autonomous, capable of serving a naïve user with a minimum of overhead and interference. These new requirements, in turn, call for new concepts, tools and techniques for engineering and managing software.
For these reasons – and more – agent-oriented software development is gaining popularity over traditional software development techniques, including structured and object-oriented ones (see for instance [6]). After all, agent-based architectures do provide for an open, evolving architecture which can change at run-time to exploit the services of new agents, or replace under-performing ones. In addition, software agents can, in principle, cope with unforeseen circumstances because they include in their architecture goals, along with a planning capability for meeting them. Finally, agent technologies have matured to the point where protocols for communication and negotiation have been standardized [1].
We are developing a software development methodology for agent-based software systems. The methodology adopts ideas from multi-agent system technologies, mostly to define the implementation phase of our methodology. The implementation platform we use is JACK Intelligent Agents [2], a commercial agent programming platform based on the BDI (Beliefs-Desires-Intentions) agent architecture. We are also adopting ideas from Requirements Engineering, where agents and goals have been heavily used for early requirements analysis [5, 11]. In particular, we adopt Eric Yu’s i* model which offers actors (agents, roles, or positions), goals, and actor dependencies as primitive concepts for modelling an application during early requirements analysis. The key assumption which distinguishes our work from others in Requirements Engineering is that actors and goals are used as fundamental concepts throughout the software development process, not only during early requirements analysis.
Our methodology, named Tropos, is intended to support five phases of software development:
Early Requirements, concerned with the understanding of a problem by studying an existing organizational setting; the output of this phase is an organizational model which includes relevant actors and their respective dependencies;
Late requirements, where the system-to-be is described within its operational environment, along with relevant functions and qualities; this description models the system as a (small) number of actors which have a number of dependencies with actors in their environment; these dependencies define the system’s functional and non-functional requirements;
Architectural design, where the system’s global architecture is defined in terms of subsystems, interconnected through data and control flows; within our framework, subsystems are represented as actors and data/control interconnections are represented as (system) actor dependencies;
Detailed design, where each architectural component is defined in further detail in terms of inputs, outputs, control, and other relevant information; our framework adopts elements of AUML [8], to complement the features of i*;
Implementation, where the actual implementation of the system is carried out, consistently with the detailed design.
The motivations behind the Tropos project and an early glimpse of how the methodology would work for particular examples are given in [7, 3].
This paper reports on a case study which applies the Tropos framework to all phases of analysis, design and implementation for fragments of a system developed for the government of Trentino (Provincia Autonoma di Trento, or PAT). The system (which we will call throughout the eCulture System) is intended as a web-based broker of cultural information and services for the province of Trento, including information obtained from museums, exhibitions, and other cultural organizations. It is a government’s intention that the system be usable by a variety of users, including Trentinos and tourists looking for things to do, or scholars and students looking for material relevant to their studies.
Section 2 of the paper introduces the key features of i* and illustrates its use during early requirements analysis in Tropos. Sections 3, 4 and 5 cover late requirements analysis, architectural design and detailed design, respectively. Section 6 describes the implementation phase, during which the detailed design, developed in terms of i* and AUML diagrams, is mapped onto the skeleton of a multi-agent system using the JACK agent programming platform. Conclusions and directions for further research are presented in section 7.
2. Early Requirements Analysis
During early requirements analysis, the requirements engineer models and analyzes the intentions of the stakeholders. Intentions are modeled as goals which, through a goal-oriented analysis, eventually lead to the functional and non-functional requirements of the system-to-be [5]. In Tropos, early requirements are assumed to involve social actors who depend on each other for goals to be achieved, tasks to be performed, and resources to be furnished. In analogy with i*, Tropos includes actor diagrams for describing the network of social dependency relationships among actors, as well as rationale diagrams for analyzing goals through a means-ends analysis in order to discover ways of fulfilling them. These primitives have been formalized using intentional concepts from AI, such as goal, belief, ability, and commitment. The i* framework has been presented in detail in [11] and has been related to different application areas, including requirements engineering [10], business process reengineering [13], and software processes [12].
An actor diagram is a graph, where each node represents an actor, and each link between two actors indicates that one actor depends on the other for something, so that the former may attain some goal. We call the depending actor the depender and the actor who is depended upon the dependee. The object around which the dependency centers is called the dependum. By depending on another actor for a dependum, an actor is able to achieve goals that it is otherwise unable to achieve on its own, or not as easily, or not as well. At the same time, the depender becomes vulnerable: if the dependee fails to deliver the dependum, the depender would be adversely affected in its ability to achieve its goals.
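To make this vocabulary concrete, the following sketch (the types and names are our own, not part of i* or Tropos) renders the actor-diagram vocabulary as Haskell declarations, assuming the standard i* reading in which the depender relies on the dependee for the dependum:

```haskell
-- Hypothetical sketch: the actor-diagram vocabulary as Haskell data types.
data Dependum
  = Goal String
  | Softgoal String
  | Task String
  | Resource String
  deriving (Show, Eq)

newtype Actor = Actor String deriving (Show, Eq)

-- A dependency link: the depender relies on the dependee for the dependum.
data Dependency = Dependency
  { depender :: Actor
  , dependee :: Actor
  , dependum :: Dependum
  } deriving (Show, Eq)

-- An actor diagram is just its actors plus the links between them.
data ActorDiagram = ActorDiagram [Actor] [Dependency] deriving Show

-- Example from the case study: Citizen depends on PAT for "taxes well spent".
taxesWellSpent :: Dependency
taxesWellSpent =
  Dependency (Actor "Citizen") (Actor "PAT") (Softgoal "taxes well spent")

main :: IO ()
main = print taxesWellSpent
```

A full diagram would collect the actors and links, e.g. `ActorDiagram [Actor "Citizen", Actor "PAT"] [taxesWellSpent]`.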
The list of relevant actors for the eCulture project includes, among others, the following stakeholders:
- Provincia Autonoma di Trento (PAT) is the government agency funding the project; their objectives include improving public information services, increasing tourism through new information services, and encouraging internet use within the province (and keeping citizens happy, of course!).
- Museums, who are major cultural information providers for their respective collections; museums want government funds to build/improve their cultural information services, and are willing to interface their systems with the eCulture System.
- Tourist, who will want to access cultural information before or during her visit to Trentino to make her visit interesting and/or pleasant.
- (Trentino) Citizen, who wants easily accessible information, of any sort, and (of course) good government!
Figure 1 shows these actors and their respective goals. In particular, Citizen is associated with a single relevant goal: get cultural information, while Visitor has an associated softgoal enjoy visit. Softgoals are distinguished from goals because they don’t have a formal definition, and are amenable to a different (more qualitative) kind of analysis [4]. Along similar lines, PAT wants to increase internet use while Museum wants to provide cultural services. Finally, the diagram includes one softgoal dependency where Citizen depends on PAT to fulfill the taxes well spent softgoal. It should be noted that the starting point of the early requirements analysis is higher level than in most other projects, where initial goals are more concrete. This is because the eCulture project is an R&D project involving government and universities, as opposed to a business-to-business project.

Once the stakeholders have been identified along with their goals, analysis proceeds by analyzing each goal, through a rationale diagram, relative to the stakeholder who is responsible for its fulfillment. Figure 2 shows fragments of such an analysis from the perspective of Citizen and Visitor. The rationale diagrams for each actor are shown as balloons within which goals are analyzed and dependencies to other actors are established. For the actor Citizen, the goal get cultural information is decomposed into visit cultural institutions and visit cultural web systems. The latter goal is fulfilled by the task (shown as a hexagonal icon) visit eCulture System, and this task is decomposed into sub-tasks use eCulture System and access internet. At this point, Citizen can rely on other actors, namely PAT, to deliver the eCulture System and make it usable too. The analysis for Visitor is simpler: to enjoy a visit, the visitor must plan for it, and for this she needs the eCulture System too. Museum is assumed to rely on funding from PAT to fulfill its objective to offer good cultural services.

\[\text{In i*, actor diagrams are called strategic dependency models, while rationale diagrams are called strategic rationale models.}\]

\[\text{The list has been scaled down from a longer list which included the organizations responsible for the development of the system as well as the developers.}\]
The example is intended to illustrate how means-ends analysis is conducted. Throughout, the idea is that goals are decomposed into subgoals and tasks are introduced for their fulfillment. Note that a goal may have several different tasks that can fulfill it. As in other frameworks where these concepts apply, goals describe what is desired, tasks describe how goals are to be fulfilled. The result of the means-ends analysis for each goal is a set of dependencies between the actor and other actors through which the goal can be fulfilled. Once a goal dependency is established from, say, Citizen to PAT, the goal is now PAT responsibility and the goal needs to be analyzed from PAT perspective. Portions of the means-ends analysis for PAT are shown in Figure 3. The goals increase internet use and eCulture System available are both well served by the goal build eCulture System. The softgoal taxes well spent gets positive contributions, which can be thought as justifications for the selection of particular dependencies.
The final result of this phase is a set of strategic dependencies among actors, built incrementally by performing means-ends analysis on each goal, until all goals have been analyzed. The later a goal is added during this incremental process, the more specific it tends to be. For instance, in the example in Figure 3 the PAT goal build eCulture System is introduced last and, therefore, has no subgoals; it is motivated by the higher-level goals it fulfills.4
3. Late Requirements Analysis
During late requirement analysis the system-to-be (the eCulture System in our case) is described within its operating environment, along with relevant functions and qualities. The system is represented as one or more actors which have a number of dependencies with the other actors of the organization. These dependencies define all functional and non-functional requirements for the system-to-be.
Figure 4 includes the eCulture System, introduced as another actor of the actor diagram. PAT depends on the
4In rationale diagrams one can also introduce tasks and resources and connect them to the fulfillment of goals.
eCulture System to provide eCultural services, which is a goal that contributes to the main goal of PAT, namely increase internet use (see Figure 3). The eCulture System is expected to fulfill PAT softgoals such as extensible eCulture System, flexible eCulture System, usable eCulture System, and use internet technology. In order to exemplify the process, let’s concentrate here on an analysis for the goal provide eCultural services and the softgoal usable eCulture System. The goal provide eCultural services is decomposed (AND decomposition) into four subgoals: make reservations, provide info, educational services, and virtual visits. As a basic eCultural service, the eCulture System must provide information (provide info), which can be logistic info and cultural info. Logistic info concerns, for instance, timetables and visiting instructions for museums, while cultural info concerns the cultural content of museums and special cultural events. This content may include descriptions and images of historical objects, the description of an exhibition, and the history of a particular region. Virtual visits are services that allow, for instance, Citizen to pay a virtual visit to a city of the past (Rome during Caesar’s time!). Educational services includes presentation of historical and cultural material at different levels (e.g., high school or undergraduate university level) as well as on-line evaluation of the student’s grasp of this material. Make reservations allows the Citizen to make reservations for particular cultural events, such as concerts, exhibitions, and guided museum visits. The softgoal usable eCulture System has two positive (+) contributions from softgoals user friendly eCulture System and available eCulture System. The former contributes positively because a system must be user friendly to be usable, whereas the latter contributes positively because it makes the system portable, scalable, and available over time (temporal available).
Once these system goals and softgoals have been defined, new actors (including sub-actors) are introduced. Each new actor takes on the responsibility to fulfill one or more goals of the system. Figure 5 shows the decomposition of the eCulture System into sub-actors and the goal dependencies between the eCulture System and its sub-actors. The eCulture System depends on the Info Broker to provide info, on the Educational Broker to provide educational services, on the Reservation Broker to make reservations, on the Virtual Visit Broker to provide virtual visits, and on the System Manager to provide interface. Additionally, each sub-actor can itself be decomposed into sub-actors responsible for the fulfillment of one or more sub-goals. In Figure 5, System Manager, a position, is decomposed into two roles:5 System Interface Manager and User Interface Manager, each of which is responsible for an interfacing goal.
Before moving to the architectural design phase, let us describe in more detail a portion of the rationale diagram for the eCulture System. Figure 6 shows the analysis for the fulfillment of the goal get cultural information that the Citizen depends on the eCulture System for. The analysis starts from a root goal search information, which can be fulfilled by four different tasks: search by area (thematic area), search by geographical area, search by keyword, and search by time period. The decomposition into sub-tasks is almost the same for all four kinds of search. For example, to search information about a particular thematic area, the Citizen provides information using an area specification form. This information is used to classify the area, get info on area, and synthesize results. The sub-task get info on area is decomposed into find info sources, which finds which information sources are most appropriate for providing information concerning the specified area, and query sources, which queries the information sources. The sub-task find info sources depends on the museums for the description of the information that the museums can provide (info about source), and synthesize results depends on the museums for query result.

---

5 An actor can be an agent, a role, or a position (see [11] for more details).
4. Architectural Design
The architectural design phase consists of four steps: addition of new actors, actor decomposition, capability identification, and agent assignment.
In the first step new actors are added to the overall actor diagram, both to make the system interact with the external actors and to contribute positively to the fulfillment of some non-functional requirements. The final result of this step is the extended actor diagram, in which the new actors and their dependencies with the other actors are presented. Figure 7 shows the extended actor diagram with respect to the Info Broker. The User Interface Manager and the Sources Interface Manager are responsible for interfacing the system to the external actors Citizen and Museum. To facilitate actor interactions inside the system, we have introduced two more actors: the Services Broker and the Sources Broker.6 The Services Broker manages a repository of descriptions for services offered by actors within the eCulture System. Analogously, the Sources Broker manages a repository of descriptions for information sources available outside the system. The introduction of the Services Broker and Sources Broker contributes positively to the soft-goal extensible eCulture System that PAT has delegated to the eCulture System (see Figure 4). In fact, the Services Broker supports extension of the system through new services (possibly provided by new actors), whereas the Sources Broker allows the system to use new information sources. The soft-goal analysis from the system perspective can be helpful for the architecture refinement; in particular, for characterizing new actors in terms of functionalities to be inserted into the architecture. In our eCulture System, for example, we can think of adding a new actor called User Profiler which determines and maintains user preferences (profiles) and makes suggestions to the Info Broker accordingly. This actor could contribute positively to the soft-goal user friendly eCulture System.

6These actually correspond to the Directory Facilitator and the Agent Resource Broker in the FIPA recommendations [1].
The second step consists of decomposing actors into sub-actors. The aim of the decomposition is to expand each actor in detail with respect to its goals and tasks. Figure 8 shows the Info Broker decomposition with respect to the goal of searching information, and in particular the task search by area reported in Figure 6. The Info Broker is decomposed into three sub-actors: the Area Classifier, the Results Synthesizer, and the Info Searcher. The Area Classifier is responsible for the classification of the information provided by the user. It depends on the User Interface Manager for interfacing to the users, and on the Services Broker for information about the services provided by other actors. The Info Searcher depends on the Area Classifier for information about the thematic area that the user is interested in, on the Sources Broker for the description of the information sources available outside the system, and on the Sources Interface Manager for interfacing to the sources. The Results Synthesizer depends on the Info Searcher for the information concerning the query that the Info Searcher asked, and on the Museum for the query results.
The third step of the architectural design consists of identifying the capabilities needed by the actors to fulfill their goals and tasks. Capabilities can be easily identified by analyzing the extended actor diagram. In particular, each dependency relationship can give rise to one or more capabilities triggered by external events. Table 1 lists the capabilities associated with the extended actor diagram of Figure 8. They are listed with respect to the system-to-be actors, and numbered so as to eliminate duplicates.
The last step of the architectural design is agent assignment, in which a set of agent types is defined and each agent type is assigned one or more capabilities. Table 2 reports the agent assignment with respect to the capabilities listed in Table 1. These capabilities concern exclusively the task search by area assigned to the Info Broker. Of course, many other capabilities and agent types are needed if we consider all the goals and tasks associated with the complete extended actor diagram.
In general, the agent assignment is not unique and depends on the designer. The number of agents and the capabilities assigned to each of them are choices driven by the analysis of the extended actor diagram and by the way in which the designer thinks of the system in terms of agents. Some of the activities performed in architectural design can be compared to what Wooldridge et al. propose within the Gaia methodology [9]. For instance, what we do in actor diagram refinement can be compared to “role modeling” in Gaia; we, however, also consider non-functional requirements. Similarly, capability analysis can be compared to “protocols modeling”, even if in Gaia only external events are considered.
5. Detailed design
The detailed design phase aims at specifying agent capabilities and interactions. The specification of capabilities amounts to modeling the external and internal events that trigger plans, and the beliefs involved in agent reasoning. Practical approaches to this step already exist.7 In this paper we adapt a subset of the AUML diagrams proposed in [8]. In particular:
1. **Capability diagrams.** The AUML activity diagram allows one to model a capability (or a set of correlated capabilities) from the point of view of a specific actor. External events set up the starting state of a capability diagram, activity nodes model plans, transition arcs model events, and beliefs are modeled as objects. For instance, Figure 9 depicts the capability diagram of the query results capability of the User Interface Agent.
2. **Plan diagrams.** Each plan node of a capability diagram can be further specified by AUML action diagrams.
---
7 For instance, the Data-Event-Plan diagram used by JACK developers. Ralph Ronnquist, personal communication.
<table>
<thead>
<tr>
<th>Actor Name</th>
<th>N</th>
<th>Capability</th>
</tr>
</thead>
<tbody>
<tr>
<td>Area Classifier</td>
<td>1</td>
<td>get area specification form</td>
</tr>
<tr>
<td></td>
<td>2</td>
<td>classify area</td>
</tr>
<tr>
<td></td>
<td>3</td>
<td>provide area information</td>
</tr>
<tr>
<td></td>
<td>4</td>
<td>provide service description</td>
</tr>
<tr>
<td>Info Searcher</td>
<td>5</td>
<td>get area information</td>
</tr>
<tr>
<td></td>
<td>6</td>
<td>find information source</td>
</tr>
<tr>
<td></td>
<td>7</td>
<td>compose query</td>
</tr>
<tr>
<td></td>
<td>8</td>
<td>query source</td>
</tr>
<tr>
<td></td>
<td>9</td>
<td>provide query information</td>
</tr>
<tr>
<td></td>
<td></td>
<td>provide service description</td>
</tr>
<tr>
<td>Results Synthesizer</td>
<td>10</td>
<td>get query information</td>
</tr>
<tr>
<td></td>
<td>11</td>
<td>get query results</td>
</tr>
<tr>
<td></td>
<td>12</td>
<td>provide query results</td>
</tr>
<tr>
<td></td>
<td>13</td>
<td>synthesize area query results</td>
</tr>
<tr>
<td></td>
<td></td>
<td>provide service description</td>
</tr>
<tr>
<td>Sources Interface</td>
<td>14</td>
<td>wrap information source</td>
</tr>
<tr>
<td>Manager</td>
<td></td>
<td>provide service description</td>
</tr>
<tr>
<td>Sources Broker</td>
<td>15</td>
<td>get source description</td>
</tr>
<tr>
<td></td>
<td>16</td>
<td>classify source</td>
</tr>
<tr>
<td></td>
<td>17</td>
<td>store source description</td>
</tr>
<tr>
<td></td>
<td>18</td>
<td>delete source description</td>
</tr>
<tr>
<td></td>
<td>19</td>
<td>provide sources information</td>
</tr>
<tr>
<td></td>
<td></td>
<td>provide service description</td>
</tr>
<tr>
<td>Services Broker</td>
<td>20</td>
<td>get service description</td>
</tr>
<tr>
<td></td>
<td>21</td>
<td>classify service</td>
</tr>
<tr>
<td></td>
<td>22</td>
<td>store service description</td>
</tr>
<tr>
<td></td>
<td>23</td>
<td>delete service description</td>
</tr>
<tr>
<td></td>
<td>24</td>
<td>provide services information</td>
</tr>
<tr>
<td>User Interface</td>
<td>25</td>
<td>get user specification</td>
</tr>
<tr>
<td>Manager</td>
<td>26</td>
<td>provide user specification</td>
</tr>
<tr>
<td></td>
<td>27</td>
<td>get query results</td>
</tr>
<tr>
<td></td>
<td>28</td>
<td>present query results</td>
</tr>
<tr>
<td></td>
<td></td>
<td>provide service description</td>
</tr>
</tbody>
</table>
Table 1. Actors’ capabilities
<table>
<thead>
<tr>
<th>Agent</th>
<th>Capabilities</th>
</tr>
</thead>
<tbody>
<tr>
<td>Query Handler</td>
<td>1, 3, 4, 5, 7, 8, 9, 10, 11, 12</td>
</tr>
<tr>
<td>Classifier</td>
<td>2, 4</td>
</tr>
<tr>
<td>Searcher</td>
<td>6, 4</td>
</tr>
<tr>
<td>Synthesizer</td>
<td>13, 4</td>
</tr>
<tr>
<td>Wrapper</td>
<td>14, 4</td>
</tr>
<tr>
<td>Agent Resource Broker</td>
<td>15, 16, 17, 18, 19, 4</td>
</tr>
<tr>
<td>Directory Facilitator</td>
<td>20, 21, 22, 23, 24, 4</td>
</tr>
<tr>
<td>User Interface Agent</td>
<td>25, 26, 27, 28, 4</td>
</tr>
</tbody>
</table>
Table 2. Agent types and their capabilities
3. **Agent interaction diagrams.** Here AUML sequence diagrams can be exploited. In AUML sequence diagrams, agents correspond to objects, whose life-line is independent of the specific interaction to be modeled (in UML an object can be created or destroyed during the interaction); communication acts between agents correspond to asynchronous message arcs. It can be shown that the sequence diagrams modeling Agent Interaction Protocols proposed in [8] can be straightforwardly applied to our example.
6. Implementation Using JACK
The BDI platform chosen for the implementation is JACK Intelligent Agents, an agent-oriented development environment built on top of, and fully integrated with, Java. Agents in JACK are autonomous software components that have explicit goals (desires) to achieve or events to handle. Agents are programmed with a set of plans in order to make them capable of achieving their goals.
The implementation activity follows the detailed design specification described in section 5 step by step, in a natural way. In fact, the notions introduced in that section have a direct correspondence with the following JACK constructs:
- **Agent.** JACK’s agent construct is used to define the behavior of an intelligent software agent. This includes the capabilities an agent has, the types of messages and events it responds to, and the plans it uses to achieve its goals.
- **Capability.** JACK’s capability construct can include plans, events, beliefs and other capabilities. An agent can be assigned a number of capabilities. Furthermore, a given capability can be assigned to different agents. JACK’s capability construct thus provides a way of applying reuse concepts.
- **Belief.** Currently, in Tropos, this concept is used only in the implementation phase, but we are considering moving it up to earlier phases. JACK’s database construct provides a generic relational database. A database describes a set of beliefs that the agent can have.
- **Event.** Internal and external events specified in the detailed design map to JACK’s event construct. In JACK an event describes a triggering condition for agent actions.
- **Plan.** The plans contained in the capability specification resulting from the detailed design map to JACK’s plan construct. In JACK a plan is a sequence of instructions the agent follows to try to achieve goals and handle designated events.
As an example, the definition for the User Interface Agent, in JACK code, is as follows:
```java
public agent UserInterface extends Agent {
#has capability GetQueryResults;
#has capability ProvideUserSpecification;
#has capability GetUserSpecification;
#has capability PresentQueryResults;
#handles event InformQueryResults;
#handles event ResultsSet; }
```
The capability present query results, analyzed in Figure 9, is defined as follows:
```java
public capability PresentQueryResults
extends Capability {
#handles external event InformQueryResults;
#posts event ResultsSet;
#posts event EmptyResultsSet;
#private database QueryResults ();
#private database ResultsModel ();
#uses plan EvaluateQueryResults;
#uses plan PresentEmptyResults;
#uses plan PresentResults; }
```
7. Conclusions
In this paper we have reported on a case study which applies the Tropos framework to all phases of analysis, design and implementation for fragments of a system developed for the government of Trentino. Tropos is a new software development methodology for agent-based software systems, which allows us to exploit the advantages and extra flexibility of agent-oriented programming compared with other programming paradigms (for instance, object-oriented programming). The basic assumption which distinguishes our work from others in Requirements Engineering is that actors and goals are used as fundamental concepts for modelling and analysis during all the phases of software development, not just early requirements.
Of course, much remains to be done to further refine the proposed methodology. We are currently working on several open points, such as the development of formal analysis techniques for Tropos, and also the development of tools which support different phases of the methodology.
References
1. Reasoning with effects?
Pure FP supports equational reasoning (☺); does FP with monads still support it (?)
1.1. Seeing the wood through the trees
At TFP 2008, Hutton & Fulger discuss the ‘correctness’ of
\[
\text{relabel} :: \text{Tree } a \rightarrow \text{Tree Int}
\]
as an effectful (stateful) functional program.
I think they miss two opportunities for abstraction:
- from the specific effects (they expand the State monad to state-transforming functions), and
- from the pattern of computation (they use explicit induction on trees).
This is an attempt to address the first of these points. (The second is a story for another time.)
2. Monads
‘Ordinary’ monads, with the usual laws:
```haskell
class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
```
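The “usual laws” can be spelled out and checked at a concrete instance. The following sketch (ours, not from the text) tests left identity, right identity and associativity for the `Maybe` monad:

```haskell
-- A quick sanity check of the usual monad laws, instantiated at Maybe:
--   left identity:  return a >>= f  =  f a
--   right identity: m >>= return    =  m
--   associativity:  (m >>= f) >>= g =  m >>= (\x -> f x >>= g)
leftId :: Bool
leftId = (return 3 >>= f) == f 3
  where f x = Just (x + 1)

rightId :: Bool
rightId = (Just 3 >>= return) == Just 3

assoc :: Bool
assoc = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))
  where
    m = Just 2
    f x = Just (x * 10)
    g x = Just (x + 1)

main :: IO ()
main = print (leftId && rightId && assoc)
```

These checks are only at one instance, of course; the laws themselves are obligations on every instance.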
Special cases:
```haskell
skip :: Monad m => m ()
skip = return ()
(≫) :: Monad m => m a -> m b -> m b
k ≫ l = k >>= const l
```
2.1. Fallibility
Computations may fail:
```haskell
class Monad m => MonadZero m where
  mzero :: m a
```
such that
```haskell
mzero >>= k = mzero
```
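`Maybe` is the simplest instance of this idea, with `mzero = Nothing`; the left-zero law then says that a failed computation short-circuits whatever follows. A small check (our example):

```haskell
-- Maybe as a MonadZero: mzero = Nothing.
-- The law mzero >>= k = mzero says Nothing >>= k = Nothing for any k.
mzeroMaybe :: Maybe Int
mzeroMaybe = Nothing

leftZero :: Bool
leftZero = (mzeroMaybe >>= k) == mzeroMaybe
  where k x = Just (x + 1)

main :: IO ()
main = print leftZero
```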
(I'm curious as to why it's not like this in Haskell 98...) Often we just use
```haskell
mzero = ⊥
```
2.2. Guards
Define
\[
\text{guard} :: \text{MonadZero}~m \Rightarrow \text{Bool} \to m~() \\
guard~b = \text{if } b \text{ then } \text{skip} \text{ else } \text{mzero}
\]
We’ll write ‘\(b!\)’ for ‘\(\text{guard } b\)’.
Familiar properties:
\[
\begin{align*}
\text{True}! &= \text{skip} \\
\text{False}! &= \text{mzero} \\
(b_1 \land b_2)! &= b_1! \gg b_2! \\
b_1! \gg k \gg b_2! &= b_1! \gg k \iff b_1 \Rightarrow b_2
\end{align*}
\]
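As a concrete sanity check (my own instance, not part of the slides), `Maybe` is a `MonadZero` with `Nothing` as the zero, and the guard properties can be tested directly; the primed names are mine, chosen to avoid clashing with the Prelude:

```haskell
-- A minimal sketch: Maybe as a MonadZero, with guard defined as in the text.
-- The primed names (mzero', guard') are mine, to avoid Prelude clashes.
class Monad m => MonadZero m where
  mzero' :: m a

instance MonadZero Maybe where
  mzero' = Nothing  -- Nothing >>= k = Nothing, so the mzero law holds

guard' :: MonadZero m => Bool -> m ()
guard' b = if b then return () else mzero'
```

For instance, `guard' True` is `Just ()` and `guard' (b1 && b2)` coincides with `guard' b1 >> guard' b2` for all four boolean combinations.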
2.3. Assertions
For $k :: \text{MonadZero}~m \Rightarrow m~()$, write ‘$k \{ b \}$’ for
$$\text{do } \{ k; b! \} = \text{do } \{ k \} \quad (= k)$$
More generally, for $k :: \text{MonadZero}~m \Rightarrow m~a$, define ‘$k \{ b \}$’ to be:
$$\text{do } \{ a \leftarrow k; b!; \text{return } a \} = \text{do } \{ a \leftarrow k; \text{return } a \}$$
By abuse of notation, extend to assertions about multiple statements: suppose statements $s_1; \ldots; s_n$ contain generators binding variables $\nu_1, \ldots, \nu_m$; write ‘$s_1; \ldots; s_n \{ b \}$’ for
$$\text{do } \{ s_1; \ldots; s_n; b!; \text{return } (\nu_1, \ldots, \nu_m) \} = \text{do } \{ s_1; \ldots; s_n; \text{return } (\nu_1, \ldots, \nu_m) \}$$
(A similar construction is used by Erkök and Launchbury (2000).)
2.4. Queries
A special class of monadic operations, particularly amenable to manipulation.
A *query* \( q \) has no side-effects:
\[
\text{do } \{ a \leftarrow q; k \} = \text{do } \{ k \} \quad \text{-- when } k \text{ doesn’t depend on } a
\]
and is consistent:
\[
\text{do } \{ a_1 \leftarrow q; a_2 \leftarrow q; k \ a_1 \ a_2 \} = \text{do } \{ a \leftarrow q; k \ a \ a \}
\]
(They’re not just the pure operations, ie those of the form \( \text{return } a \). Consider \( \text{get :: State } s \ s \) of the state monad.)
3. A counter example
A counting monad:
```haskell
class Monad m ⇒ MonadCount m where
tick :: m ()
total :: m Int
```
where `total` is a query, and
```haskell
n ← total; tick; n′ ← total {n′ = n + 1}
```
(exploiting our abuse of notation).
3.1. Towers of Hanoi—specification
Given this program:
\[
\begin{align*}
\text{hanoi} &:: \text{MonadCount} m \Rightarrow \text{Int} \rightarrow m () \\
hanoi 0 & = \text{skip} \\
hanoi (n + 1) & = \text{do} \{ \text{hanoi} n; \text{tick}; \text{hanoi} n \}
\end{align*}
\]
we claim:
\[
t \leftarrow \text{total};\ \text{hanoi}\ n;\ u \leftarrow \text{total}\ \{\, 2^n - 1 = u - t \,\}
\]
Proof by induction on \( n \). The base case is immediate. Inductive step...
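Before the calculational proof, the claim can be checked concretely. Below is a hand-rolled counting monad (this specific `Counter` instance is mine, not from the slides; state = number of ticks so far), together with a test that `hanoi n` performs exactly \(2^n - 1\) ticks:

```haskell
-- A hand-rolled counting monad; this concrete instance is my own,
-- used only to check the 2^n - 1 claim by running hanoi.
newtype Counter a = Counter { runCounter :: Int -> (a, Int) }

instance Functor Counter where
  fmap f (Counter g) = Counter (\n -> let (a, n') = g n in (f a, n'))

instance Applicative Counter where
  pure a = Counter (\n -> (a, n))
  Counter cf <*> Counter cx =
    Counter (\n -> let (f, n') = cf n; (x, n'') = cx n' in (f x, n''))

instance Monad Counter where
  Counter c >>= k = Counter (\n -> let (a, n') = c n in runCounter (k a) n')

tick :: Counter ()
tick = Counter (\n -> ((), n + 1))

total :: Counter Int
total = Counter (\n -> (n, n))  -- a query: reads the count, no side effect

hanoi :: Int -> Counter ()
hanoi 0 = return ()
hanoi n = do { hanoi (n - 1); tick; hanoi (n - 1) }

ticksOf :: Counter () -> Int
ticksOf m = snd (runCounter m 0)
```

Running `ticksOf (hanoi n)` for small `n` confirms the count \(2^n - 1\).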
3.2. Reasoning
\[
\text{do } \{ t \leftarrow \text{total}; \ hanoi \ (n+1); \ u \leftarrow \text{total}; \ (2^{n+1} - 1 = u - t)! \}
\]
\[
= \quad [\text{definition of } \text{hanoi} \quad ]
\]
\[
\text{do } \{ t \leftarrow \text{total}; \ hanoi \ n; \ \text{tick}; \ hanoi \ n; \ u \leftarrow \text{total}; \ (2^{n+1} - 1 = u - t)! \}
\]
\[
= \quad [\text{inserting some queries} \quad ]
\]
\[
\text{do } \{ t \leftarrow \text{total}; \ hanoi \ n; \ u' \leftarrow \text{total}; \ \text{tick}; \ t' \leftarrow \text{total};
\]
\[
\quad \ hanoi \ n; \ u \leftarrow \text{total}; \ (2^{n+1} - 1 = u - t)! \}
\]
\[
= \quad [\text{inductive hypothesis; } \text{tick} \quad ]
\]
\[
\text{do } \{ t \leftarrow \text{total}; \ hanoi \ n; \ u' \leftarrow \text{total}; \ (2^n - 1 = u' - t)!; \ \text{tick}; \ t' \leftarrow \text{total};
\]
\[
\quad (t' = u' + 1)!; \ hanoi \ n; \ u \leftarrow \text{total}; \ (2^n - 1 = u - t')!; \ (2^{n+1} - 1 = u - t)! \}
\]
\[
= \quad [\text{arithmetic: } 2^{n+1} - 1 = u - t \text{ follows from other guards} \quad ]
\]
\[
\text{do } \{ t \leftarrow \text{total}; \ hanoi \ n; \ u' \leftarrow \text{total}; \ (2^n - 1 = u' - t)!; \ \text{tick}; \ t' \leftarrow \text{total};
\]
\[
\quad (t' = u' + 1)!; \ hanoi \ n; \ u \leftarrow \text{total}; \ (2^n - 1 = u - t')! \}
\]
\[
= \quad [\text{redundant guards, definition of } \text{hanoi} \quad ]
\]
\[
\text{do } \{ t \leftarrow \text{total}; \ hanoi \ (n+1); \ u \leftarrow \text{total} \} \]
4. Tree relabelling
A monad for generating fresh symbols:
```haskell
type Symbol = ...
instance Eq Symbol where ...
class Monad m ⇒ MonadGensym m where
fresh :: m Symbol
used :: m (Set Symbol)
```
such that `used` (only used in reasoning) is a query, and
\[
x ← used; n ← fresh; y ← used \{x ⊆ y ∧ n ∈ y − x\}
\]
4.1. Specification
Tree relabelling:
\[
\textbf{data} \quad \text{Tree } a = \text{Leaf } a \mid \text{Bin (Tree } a\text{) (Tree } a\text{)}
\]
\[
\text{relabel} :: \text{MonadGensym } m \Rightarrow \text{Tree } a \to m(\text{Tree Symbol})
\]
\[
\text{relabel} (\text{Leaf } a) = \text{do}\{ n \leftarrow \text{fresh}; \text{return } (\text{Leaf } n) \}
\]
\[
\text{relabel} (\text{Bin } t u) = \text{do}\{ t' \leftarrow \text{relabel } t; u' \leftarrow \text{relabel } u; \text{return } (\text{Bin } t' u') \}
\]
(in fact, an idiomatic \textit{traverse}), satisfies
\[
x \leftarrow \text{used}; t' \leftarrow \text{relabel } t; y \leftarrow \text{used}\{\text{distinct } t' \land \text{labels } t' \subseteq y - x\}
\]
where
\[
\text{distinct} :: \text{Tree Symbol} \to \text{Bool}
\]
\[
\text{labels} :: \text{Tree Symbol} \to \text{Set Symbol}
\]
(written \(d\) and \(l\) below, for short).
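The specification can also be exercised on a concrete gensym monad. The instance below is my own (it uses `Int` symbols and a next-unused counter as state, so `used` is simply `[0 .. n-1]`), but `relabel`, `labels` and `distinct` follow the text:

```haskell
-- A concrete gensym monad (mine, for testing): the state is the next
-- unused Int, and 'used' reports the symbols handed out so far.
type Symbol = Int

newtype Gensym a = Gensym { runGensym :: Int -> (a, Int) }

instance Functor Gensym where
  fmap f (Gensym g) = Gensym (\n -> let (a, n') = g n in (f a, n'))

instance Applicative Gensym where
  pure a = Gensym (\n -> (a, n))
  Gensym gf <*> Gensym gx =
    Gensym (\n -> let (f, n') = gf n; (x, n'') = gx n' in (f x, n''))

instance Monad Gensym where
  Gensym g >>= k = Gensym (\n -> let (a, n') = g n in runGensym (k a) n')

fresh :: Gensym Symbol
fresh = Gensym (\n -> (n, n + 1))

used :: Gensym [Symbol]  -- a query: symbols 0 .. n-1 have been handed out
used = Gensym (\n -> ([0 .. n - 1], n))

data Tree a = Leaf a | Bin (Tree a) (Tree a)

relabel :: Tree a -> Gensym (Tree Symbol)
relabel (Leaf _)  = do { n <- fresh; return (Leaf n) }
relabel (Bin t u) = do { t' <- relabel t; u' <- relabel u; return (Bin t' u') }

labels :: Tree Symbol -> [Symbol]
labels (Leaf n)  = [n]
labels (Bin t u) = labels t ++ labels u

distinct :: Tree Symbol -> Bool
distinct = allDifferent . labels
  where allDifferent []       = True
        allDifferent (x : xs) = x `notElem` xs && allDifferent xs
```

Relabelling any sample tree from the empty state yields distinct labels drawn from the freshly used symbols, as the specification demands.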
4.2. Reasoning: base case
\[
\begin{align*}
&\textbf{do} \{ x \leftarrow \text{used}; \nu \leftarrow \text{relabel} \ (\text{Leaf } a); y \leftarrow \text{used}; (d \ \nu \land l \ \nu \subseteq y - x)! \} \\
= &\ [[ \text{ definition of } \text{relabel} \ ]] \\
&\textbf{do} \{ x \leftarrow \text{used}; n \leftarrow \text{fresh}; \textbf{let } \nu = \text{Leaf } n; y \leftarrow \text{used}; (d \ \nu \land l \ \nu \subseteq y - x)! \} \\
= &\ [[ \text{ definition of } d, l \ ]] \\
&\textbf{do} \{ x \leftarrow \text{used}; n \leftarrow \text{fresh}; \textbf{let } \nu = \text{Leaf } n; y \leftarrow \text{used}; (\text{True} \land \{ n \} \subseteq y - x)! \} \\
= &\ [[ \text{ axiom for fresh } \ ]] \\
&\textbf{do} \{ x \leftarrow \text{used}; n \leftarrow \text{fresh}; \textbf{let } \nu = \text{Leaf } n; y \leftarrow \text{used} \} \\
= &\ [[ \text{ folding definitions } \ ]] \\
&\textbf{do} \{ x \leftarrow \text{used}; \nu \leftarrow \text{relabel} \ (\text{Leaf } a); y \leftarrow \text{used} \}
\end{align*}
\]
4.3. Reasoning: inductive step
\[
\text{do } \{x \leftarrow \text{used}; v \leftarrow \text{relabel} \ (\text{Bin } t \ u); z \leftarrow \text{used}; (d \ v \land l \ v \subseteq z - x)\!\} \\
= \quad [[ \text{definition of } \text{relabel} \ ]] \\
\text{do } \{x \leftarrow \text{used}; t' \leftarrow \text{relabel } t; u' \leftarrow \text{relabel } u; \textbf{let } v = \text{Bin } t' \ u'; z \leftarrow \text{used}; \\
\quad (d \ v \land l \ v \subseteq z - x)\!\} \\
= \quad [[ \text{definition of } d, l \ ]] \\
\text{do } \{x \leftarrow \text{used}; t' \leftarrow \text{relabel } t; u' \leftarrow \text{relabel } u; \textbf{let } v = \text{Bin } t' \ u'; z \leftarrow \text{used}; \\
\quad (d \ t' \land d \ u' \land l \ t' \cap l \ u' = \emptyset \land l \ t' \cup l \ u' \subseteq z - x)\!\} \\
= \quad [[ \text{induction} \ ]] \\
\text{do } \{x \leftarrow \text{used}; t' \leftarrow \text{relabel } t; y \leftarrow \text{used}; (d \ t' \land l \ t' \subseteq y - x)\!; \\
\quad u' \leftarrow \text{relabel } u; z \leftarrow \text{used}; (d \ u' \land l \ u' \subseteq z - y)\!; \textbf{let } v = \text{Bin } t' \ u'; \\
\quad (d \ t' \land d \ u' \land l \ t' \cap l \ u' = \emptyset \land l \ t' \cup l \ u' \subseteq z - x)\!\} \\
= \quad [[ \text{queries, redundant guards, folding definitions} \ ]] \\
\text{do } \{x \leftarrow \text{used}; v \leftarrow \text{relabel} \ (\text{Bin } t \ u); z \leftarrow \text{used}\}
\]
5. Towers of Hanoi, more directly
Hoare-style reasoning is a bit painfully long-winded: repeat the program on every line, gradually discharging guards.
Sometimes a more direct approach works. In fact,
\[ hanoi\ n = rep\ (2^n - 1)\ \text{tick} \]
where
\[ rep :: Monad\ m \Rightarrow Int \rightarrow m () \rightarrow m () \]
\[ rep\ 0 \quad ma = skip \]
\[ rep\ (n + 1)\ ma = ma \gg rep\ n\ ma \]
In particular, note that
\[ rep\ (m + n)\ ma = rep\ m\ ma \gg rep\ n\ ma \]
5.1. More direct proof
...by induction on $n$. Base case is trivial. For inductive step,
$$\begin{array}{cl}
& hanoi\ (n + 1) \\
= & \text{[[ definition of } hanoi \text{ ]]} \\
& hanoi\ n \gg tick \gg hanoi\ n \\
= & \text{[[ inductive hypothesis ]]} \\
& rep\ (2^n - 1)\ tick \gg tick \gg rep\ (2^n - 1)\ tick \\
= & \text{[[ composition ]]} \\
& rep\ ((2^n - 1) + 1 + (2^n - 1))\ tick \\
= & \text{[[ arithmetic ]]} \\
& rep\ (2^{n+1} - 1)\ tick
\end{array}$$
But I don’t see how to do tree relabelling in this more direct style...
6. Probabilistic computations
Probability distributions form a monad (Giry, Jones, Ramsey, Erwig…).
For simplicity, only finitely-supported distributions here:
```haskell
class Monad m ⇒ MonadProb m where
choice :: Rational → m a → m a → m a
```
where the rationals are constrained to the unit interval.
Following Hoare, let’s write ‘\(mx \triangleleft p \triangleright my\)’ for ‘\(\text{choice } p \ mx \ my\)’.
6.1. Laws of choice
Unit, idempotence, commutativity:
\[ mx \triangleleft 0 \triangleright my = my \]
\[ mx \triangleleft 1 \triangleright my = mx \]
\[ mx \triangleleft p \triangleright mx = mx \]
\[ mx \triangleleft p \triangleright my = my \triangleleft 1 - p \triangleright mx \]
A kind of associativity:
\[ mx \triangleleft p \triangleright (my \triangleleft q \triangleright mz) = (mx \triangleleft r \triangleright my) \triangleleft s \triangleright mz \]
\[ \iff p = r s \land (1 - s) = (1 - p)(1 - q) \]
Bind distributes over choice, in both directions:
\[ mx \mathbin{>\!\!>\!=} \lambda a \rightarrow (k_1\ a) \triangleleft p \triangleright (k_2\ a) = (mx \mathbin{>\!\!>\!=} k_1) \triangleleft p \triangleright (mx \mathbin{>\!\!>\!=} k_2) \]
\[ (mx \triangleleft p \triangleright my) \mathbin{>\!\!>\!=} k = (mx \mathbin{>\!\!>\!=} k) \triangleleft p \triangleright (my \mathbin{>\!\!>\!=} k) \]
6.2. Normal form
Finite mappings from outcomes to probabilities (ignore order, disregard weightless entries, weights sum to one, amalgamate duplicates):
```hs
newtype Distribution a = D { unD :: [(a, Rational)] }
```
All you need to interpret a distribution is \textit{choice}:
```hs
fromDist :: MonadProb m ⇒ Distribution a → m a
fromDist d = fst (foldr1 combine [ (return a, p) | (a, p) ← unD d, p > 0 ])
  where combine (mx, p) (my, q) = (mx ◁ p / (p + q) ▷ my, p + q)
```
For example,
```hs
uniform :: MonadProb m ⇒ [a] → m a
uniform xs = fromDist (D [(a, p) | a ← xs]) where p = 1 / fromIntegral (length xs)
```
6.3. Implementation
Moreover, *Distribution* itself is a fine instance of *MonadProb*:
```haskell
instance Monad Distribution where
  return a = D [(a, 1)]
  px >>= f = D [(b, p × q) | (a, p) ← unD px, (b, q) ← unD (f a)]
instance MonadProb Distribution where
  ma ◁ p ▷ mb = D (scale p (unD ma) ++ scale (1 - p) (unD mb))
    where scale r pas = [(a, r × p) | (a, p) ← pas]
```
(Kidd points out that *Distribution* = *WriterT* Rational (*ListT* Identity), using the writer monad from the monoid of rationals with multiplication.)
6.4. Monty Hall
```haskell
data Door = A | B | C deriving (Eq, Show)
doors = [A, B, C]
hide :: MonadProb m ⇒ m Door
hide = uniform doors
pick :: MonadProb m ⇒ m Door
pick = uniform doors
tease :: MonadProb m ⇒ Door → Door → m Door
tease h p = uniform (doors \\ [h, p])
switch :: MonadProb m ⇒ Door → Door → m Door
switch p t = return (head (doors \\ [p, t]))
stick :: MonadProb m ⇒ Door → Door → m Door
stick p t = return p
```
6.5. The whole story
Monty’s script:
```
play :: MonadProb m ⇒ (Door → Door → m Door) → m Bool
play strategy =
do
h ← hide -- host hides the car behind door h
p ← pick -- you pick door p
t ← tease h p -- host teases you with door t (≠ h, p)
s ← strategy p t -- you choose, based on p and t (but not h!)
return (s == h) -- you win iff your choice s equals h
```
6.6. In support of Marilyn Vos Savant
It’s a straightforward proof by equational reasoning that
\[
\text{play switch} = \text{uniform} [ \text{True, True, False} ]
\]
\[
\text{play stick} = \text{uniform} [ \text{False, False, True} ]
\]
The key is that separate uniform distributions are independent:
\[
\text{do } \{ a \leftarrow \text{uniform } x; b \leftarrow \text{uniform } y; \text{return } (a, b) \} = \text{uniform } (\text{cp } x \ y)
\]
where
\[
\text{cp} :: [a] \to [b] \to [(a, b)]
\]
\[
\text{cp } x \ y = [(a, b) | a \leftarrow x, b \leftarrow y]
\]
(Ask me over a beer...)
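The two equations can also be checked by brute force. The block below is a self-contained sketch (my own `Dist` instance, mirroring Section 6.3, with `\\` for the list difference written `doors \ [h, p]` above) that computes the exact winning probabilities:

```haskell
import Data.List ((\\))

-- Self-contained sketch (my own Distribution instance, after Section 6.3):
-- compute the exact winning probabilities of the two strategies.
newtype Dist a = D { unD :: [(a, Rational)] }

instance Functor Dist where
  fmap f (D aps) = D [ (f a, p) | (a, p) <- aps ]

instance Applicative Dist where
  pure a = D [(a, 1)]
  D fps <*> D xps = D [ (f x, p * q) | (f, p) <- fps, (x, q) <- xps ]

instance Monad Dist where
  D aps >>= k = D [ (b, p * q) | (a, p) <- aps, (b, q) <- unD (k a) ]

uniform :: [a] -> Dist a
uniform xs = D [ (a, 1 / fromIntegral (length xs)) | a <- xs ]

data Door = A | B | C deriving (Eq, Show)

doors :: [Door]
doors = [A, B, C]

play :: (Door -> Door -> Dist Door) -> Dist Bool
play strategy = do
  h <- uniform doors               -- host hides the car behind door h
  p <- uniform doors               -- you pick door p
  t <- uniform (doors \\ [h, p])   -- host teases with door t
  s <- strategy p t
  return (s == h)

switch, stick :: Door -> Door -> Dist Door
switch p t = return (head (doors \\ [p, t]))
stick p _ = return p

winProb :: Dist Bool -> Rational
winProb d = sum [ p | (True, p) <- unD d ]
```

Since the arithmetic is exact (`Rational`), `winProb (play switch)` is precisely 2/3 and `winProb (play stick)` precisely 1/3, agreeing with the uniform distributions above.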
7. Combining probability and nondeterminism
Nobody said that Monty has to play fair. He has a free choice in hiding the car, and in teasing you.
To model this, we need to combine probabilism with nondeterminism:
```
class MonadZero m ⇒ MonadPlus m where
mplus :: m a → m a → m a
```
such that \texttt{mzero} and \texttt{mplus} form a monoid, and
\[(m \texttt{‘mplus‘ } n) \gg k = (m \gg k) \texttt{‘mplus‘ } (n \gg k)\]
Happily, although monads do not compose in general, \([\texttt{Distribution } a]\) is a monad. Moreover, it is a \texttt{MonadProb} and a \texttt{MonadPlus} too.
(So is \texttt{Distribution [a]}, but I think that doesn’t help.)
(There’s a nice tale in terms of monad transformers.)
7.1. A simple example: mixing choices
A fair coin:
\[
\text{coin} :: \text{MonadProb } m \Rightarrow m \text{ Bool} \\
\text{coin} = (\text{return True}) \triangleleft \frac{1}{2} \triangleright (\text{return False})
\]
An arbitrary choice:
\[
\text{arb} :: \text{MonadPlus } m \Rightarrow m \text{ Bool} \\
\text{arb} = \text{return True} \text{ `mplus` return False}
\]
Two combinations:
\[
\text{arbcoin, coinarb} :: (\text{MonadPlus } m, \text{MonadProb } m) \Rightarrow m \text{ Bool} \\
\text{arbcoin} = \text{do}\{ a \leftarrow \text{arb}; c \leftarrow \text{coin}; \text{return} (a == c) \} \\
\text{coinarb} = \text{do}\{ c \leftarrow \text{coin}; a \leftarrow \text{arb}; \text{return} (a == c) \}
\]
What do you think they do?
7.2. ... as sets of distributions
Define
\[
\text{type} \ NondetProb \ a = [\ Distribution \ a ]
\]
Then (with a suitable \textit{Show} instance):
```
*Main> arbcoin :: NondetProb Bool
[[(True, 1/2), (False, 1/2)],
 [(False, 1/2), (True, 1/2)]]
*Main> coinarb :: NondetProb Bool
[[(True, 1/2), (False, 1/2)],
 [(True, 1/2), (True, 1/2)],
 [(False, 1/2), (False, 1/2)],
 [(False, 1/2), (True, 1/2)]]
```
7.3. ... as expectations
```haskell
class MonadProb m ⇒ MonadExpect m where
expect :: (Ord n, Fractional n) ⇒ m a → (a → n) → n
instance MonadExpect NondetProb where -- morally
expect px h = minimum (map (mean h ◦ unD) px) where
mean h aps = sum [ p × h a | (a, p) ← aps ] / sum (map snd aps)
```
Your reward is 1 if the booleans agree, and 0 otherwise:
```haskell
reward b = if b then 1 else 0
```
Then:
```haskell
*Main> expect (arbcoin :: NondetProb Bool) reward
1/2
*Main> expect (coinarb :: NondetProb Bool) reward
0
```
7.4. Back to nondeterministic Monty...
We could define instead:
\[
\begin{align*}
hide & :: \text{MonadPlus } m \Rightarrow m \text{ Door} \\
hide &= \text{arbitrary doors} \\
tease & :: \text{MonadPlus } m \Rightarrow \text{Door} \rightarrow \text{Door} \rightarrow m \text{ Door} \\
tease \ h \ p &= \text{arbitrary } (\text{doors} \setminus [h, p])
\end{align*}
\]
where
\[
\begin{align*}
arbitrary & :: \text{MonadPlus } m \Rightarrow [a] \rightarrow m\ a \\
arbitrary &= \text{foldr mplus mzero } \circ \text{ map return}
\end{align*}
\]
I believe that the calculation carries through just as before: still
\[
\begin{align*}
\text{play switch} &= \text{uniform } [\text{True, True, False}] \\
\text{play stick} &= \text{uniform } [\text{False, False, True}]
\end{align*}
\]
8. Summary
- axiomatic approach to reasoning with effects
- simple and generic
- smacks of ‘algebraic theories of effects’ (Plotkin & Power, Lawvere) (in particular, partiality and continuations do not arise from algebraic theories)
- IO is uninteresting?
- more examples wanted!
Verification and validation of IM-DeCRuD approach using DESMET for its applicability
Jamaluddin Jasmis, Shamsul Jamel Elias, Azlan Abdul Aziz, Mohd Zaki Zakaria
Faculty of Computer and Mathematical Sciences, UiTM (Melaka) Jasin Campus, Malaysia
ABSTRACT
Requirements crosscutting in software development and maintenance has gradually become an important issue in software engineering, with a growing need for traceability support to better understand requirements crosscutting processes and to comply with industrial standards. However, many recent works focusing on identification, modularization, composition and conflict dissolution of requirements crosscutting are mostly saturated at the requirements level. Hence, these works fail to specify crosscutting properties for functional and non-functional requirements at other phases, leading to insufficient support for software engineers. Recently, a new approach called Identification, Modularization, Design Composition Rules and Conflict Dissolutions (IM-DeCRuD) was proposed to provide special traceability to facilitate better understanding and reasoning about requirements crosscutting implementation, along with a tool as a proof of concept. In this paper, the tool was evaluated and the results were verified by experts from industry. Feedback on the applicability aspect was then gathered and analyzed using the DESMET qualitative method. The outcome showed that IM-DeCRuD is applicable for coping with the tedious engineering processes of handling crosscutting properties at the requirements, analysis and design phases for system development and evolution.
Keywords:
DESMET
IM-DeCRuD
Requirements crosscutting
V&V
1. INTRODUCTION
A requirement is defined as “any matter of interest in a software system”, which can be related to system functionalities or its properties [1]. Requirements can be classified into functional (system behavior or subsystems) and non-functional (system properties). A requirement rarely stands alone, as most may influence or constrain other requirements. This type of scenario is called crosscutting [2]. Crosscutting is usually described in terms of scattering and tangling. For example, a stakeholder’s functional requirement for handling a user’s on-line transaction might be described by several properties, i.e. non-functional requirements such as a user response time within acceptable limits, appropriate security features and an affordable workload. This type of scenario is called tangling. In another situation, a non-functional requirement may describe properties for several other functional requirements in order for those functional requirements to remain useful. For example, performance as a property of a system would apply to several other functional requirements, i.e. concerns with similar or different specifications. This type of scenario is called scattering.
Requirements crosscutting are related to each other within artifact as well as correlated artifacts across multiple phases [3-5]. Consequently, any changes to requirements crosscutting may yield direct or indirect impact to other artifacts. Most of recent works are focusing on identification, modularization,
composition and conflict analysis of requirements crosscutting solely at requirements level. It is due to its straightforwardness in dealing with high-level language in requirements documentations to specify requirements [6]. However, there is significant research gap to appropriately specify crosscutting properties for functional and non-functional concerns at both requirements and design phases. Due to this absence, software engineers have no appropriate guidelines to attend to crosscutting concerns across development stages [6, 7].
Several state-of-the-art approaches have been evaluated using criteria obtained from the literature. The results showed that, so far, no single approach fully satisfies all the capabilities that must be fulfilled to support requirements crosscutting [8]. We therefore proposed a new approach, called Identification, Modularization, Design Composition Rules and Conflict Dissolutions (IM-DeCRuD), that provides special traceability to facilitate better understanding and reasoning in engineering tasks concerning requirements crosscutting during software development and evolution. Moreover, it also promotes a simple but significant way to support pragmatic changes of crosscutting properties at the requirements, analysis and design phases for medium-sized software development and maintenance projects [9, 10]. A tool has been developed based on the proposed approach to store relationship dependencies, since traceability among artifacts is highly valued in Model Driven Engineering (MDE) to support the understanding and maintenance of software systems [3, 11].
In this paper, we aim to evaluate the results that were generated by applying IM-DeCRuD approach to myPolicy case study by the domain experts to ensure its applicability. The evaluation method for this research was based on the evaluation guidelines proposed in DESMET [12]. DESMET is a project that attempts to provide practical evaluation approach in software engineering.
The organization of this paper is as follows: Section 2 presents the implementation of DESMET and Section 3 discusses the overall results. Finally, Section 4 presents the conclusion.
2. RESEARCH METHOD
DESMET was chosen as the evaluation methodology as it is mainly used to evaluate software engineering methods and tools. DESMET proposes nine evaluation methods [13], including quantitative and qualitative ones. Since applicability can be measured by determining the appropriateness of the features available in the proposed approach, feature analysis was selected as the qualitative method for this research. This analysis may be formally organized as a qualitative case study or survey. These settings suggest that one person or a group of potential experts undertake the evaluation [14]. Hence, a group of four practitioners with relevant working experience was employed for this evaluation. Three steps are proposed by DESMET: identifying features, scoring features and analysis. These steps are briefly explained as follows:
a) Identifying Features
Features are of two types: simple and compound. The features that need to be assessed are concern identification, composition, conflict identification and traceability support, followed by mapping, maintenance and GUI support.
b) Scoring Features
Simple features are evaluated with YES or NO, whereas compound features are measured on an ordinal scale [13]. Each compound feature is accompanied by an assessment of its importance and conformance. The scales for measuring importance and conformance are discussed below:
a. Importance
The importance of a feature can be judged in two ways: by examining whether it is mandatory or only desirable. This leads to two evaluation criteria: the first analyzes whether a feature is mandatory, and the second assesses the degree to which a non-mandatory feature is desirable. Features are assessed on the following four scale points: Mandatory, Highly Desirable, Desirable and Nice to Have.
b. Conformance
The assessment scale for conformance has two objectives: defining the level of support required for an individual feature, and providing the evaluator with a measurement scale for scoring the feature of a particular candidate. The granularity of the scale points depends on the features being considered; it may range from 5 (full support) to 0 (no support).
c) Analysis
Feature analysis gives a clear picture of whether a tool fulfills the demands of its potential users. After the importance and conformance of the features have been recorded, the score sheets must be analyzed. According to the DESMET method, if an acceptance threshold has been defined, the analysis must be based on the difference between the acceptance threshold set by the users for each feature and the score gained for that feature. If an acceptance threshold is not obtainable, the assessment should be based on the scores of the approaches relative to one another. The latter approach has been used here, as an acceptance threshold was not obtainable in this research. Therefore, the analysis is based on accumulating the absolute scores.
The combined score for one feature set would be the sum of the conformance values of all features for a certain approach, which may be expressed as a percentage of the maximum score. For example, suppose that the combined score for three features is 11 out of 15 (the maximum score), then the converted percentage score would be 73%. Finally, the overall score can be obtained by determining the aggregate score for each feature set.
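The aggregation just described can be sketched as a small calculation (illustrative only; the function name and the per-feature maximum of 5 are my assumptions, chosen to reproduce the paper's 11-of-15 example):

```haskell
-- Illustrative sketch of the DESMET score aggregation described above:
-- sum the conformance scores for a feature set and express the total as a
-- percentage of the maximum attainable score (maxPerFeature per feature).
percentageScore :: [Int] -> Int -> Int
percentageScore conformances maxPerFeature =
  round (100 * fromIntegral (sum conformances)
             / fromIntegral (maxPerFeature * length conformances) :: Double)
```

With three features scoring 5, 4 and 2 out of a maximum of 5 each, the combined score is 11 of 15, i.e. 73%, matching the example in the text.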
2.1. Subjects and Environment
Several sessions were conducted for this evaluation, for which subjects (domain experts) were selected whose backgrounds ranged from IT managers to freelance software developers involved in many software development projects for government agencies. All the experts had more than six years of working experience in the software industry. During the sessions, the IM-DeCRuD tool was demonstrated and the results of the case study were briefed.
2.2. Questionnaires
The questionnaire was designed to accommodate the domain experts’ evaluations of the IM-DeCRuD approach. The main objective of this evaluation was to assess the applicability of the identification and modularization, design composition rule and conflict analysis for crosscutting concerns. The questionnaire consists of two sections (see Appendix C): Section A and Section B. Section A relates to the previous professional background of the subjects, and Section B gathers the scores for the features and the overall usefulness of the prototype tool. There were nine questions related to the features of IM-DeCRuD. Participants were invited to rate its usefulness with respect to identification and modularization, design composition rule, conflict analysis, maintainability, scalability and GUI. The experts were also asked to give comments on the general performance of IM-DeCRuD as an open question.
The independent variables used with the IM-DeCRuD prototype were, the subjects and the myPolicy case study, while the dependent variables were the scores of questions asked. The scheme of score evaluation was formulated based on [15].
2.3. Evaluation Procedures
Four subjects participated in two separate sessions where each session occupied approximately two hours with the above mentioned myPolicy case study. Subjects were given briefing sessions on the underlying concept of crosscutting concerns, the approach and the prototype tool before the case study being demonstrated. They were also provided with a set of questionnaires which require them to select one answer from available options.
2.4. Possible Threats and Validity
Following factors may kick in to some possible threats in this evaluation such as:
a) DESMET proposes two kinds of evaluation methods, namely quantitative and qualitative. The latter was adopted because IM-DeCRuD rests on specific principles and the user population for this kind of approach is not generic [13]. As such, this evaluation does not include any benchmarking in the form of a comparative statistical study of IM-DeCRuD against other approaches or tools applied to the same case study (myPolicy), which the researchers found hard to conduct due to the limited number of subjects, human commitment and time. Moreover, the objective of this evaluation is to ensure IM-DeCRuD’s practicability on an industrial-strength case study.
b) Unfamiliarity with the tool/approach can be an issue in the evaluation. To overcome this, a short briefing on the use of IM-DeCRuD was arranged prior to its evaluation. The sessions were conducted with all subjects under proper supervision. A total of six hours over three sessions was allocated in order to secure a good result and to sustain the subjects’ enthusiasm during the evaluation. It was foreseen that the subjects might frequently engage in questions and answers during the sessions in order to reach an acceptable understanding and awareness of this research. To reduce this problem, useful information, including documentation in both hard and soft copies, was made available for easy reference.
c) The evaluation is based on individual experience of using the method/tool, and the evaluation criteria are subjective. To mitigate this, a group of subjects was selected so that the evaluation results could be balanced across them. Moreover, before the experiment, the subjects were screened with questions about their years of experience and the types of software development projects they had been involved in. The subjects were also given an explanation of the skill levels to make sure they understood the terms used (beginner, advanced beginner, competent, proficient and expert). In addition, their familiarity and awareness of the specific principles underlying crosscutting concerns, or at least of non-functional requirements, was investigated beforehand. This was intended to ensure that all subjects had as common an understanding as possible.
3. EVALUATION RESULTS
This section discusses the qualitative results based on the tool/method evaluation. Here, the scored results are evaluated.
3.1. Features Evaluation by Users
In this section, the results from the myPolicy case study are analyzed. The analysis is based on the DESMET method as described previously, and the type of analysis is feature analysis, as suggested by [12]. DESMET prescribes three steps for feature analysis: feature identification, feature scoring and analysis. These steps are described in the following:
a) Features Identification
The features for assessment of the approach are as follows:
1. Concerns Identification Support (CI): Does the approach support identification of crosscutting concerns and other requirements components?
2. Composition Support (CS): Does the approach accommodate inter-relationship between crosscutting concerns and other requirements components?
3. Conflict Identification Support (CTI): Does the approach provide conflict identification support for conflicting crosscutting concerns?
4. Traceability Support (TS): Does the approach provide traceability for requirements components between requirements and design development phases?
5. Mapping Support (MP): Does the approach support crosscutting concerns mapping to upper level class diagram?
6. GUI: Does the approach facilitate graphical user interface?
7. Maintenance Support (MT): Does the approach support software maintenance?
8. Scalability Support (SC): Does the approach support small and big scale software projects?
b) Features Scoring
There are two types of features: simple features and compound features. Simple features are evaluated by answering “YES” or “NO”, whereas compound features are measured on an ordinal scale that expresses the degree of support the approach offers. In this research, GUI is a simple feature, whereas the remaining features are compound.
Every simple feature must be accompanied by an assessment of its level of importance. Each compound feature must be accompanied by an assessment of both its importance and its conformance to a specific feature or characteristic. The scales used to measure importance and conformance are described in the following:
Importance
A good technique is one that provides the attributes that are crucial for its customers. Importance can be evaluated by considering whether a feature is mandatory or merely desirable. This view of importance leads to two assessment standards: (i) whether the feature is obligatory, and (ii) the degree to which a non-mandatory feature is needed. The following scale points are used to evaluate a feature:
- M - Mandatory
- HD - Highly desirable
- D - Desirable
- N - Nice to have
The importance of the features in this research is shown in Table 1.
Table 1. Importance of Features
<table>
<thead>
<tr>
<th>Features</th>
<th>CI</th>
<th>CS</th>
<th>CTI</th>
<th>TS</th>
<th>MP</th>
<th>GUI</th>
<th>MT</th>
<th>SC</th>
</tr>
</thead>
<tbody>
<tr>
<td>Importance</td>
<td>HD</td>
<td>HD</td>
<td>M</td>
<td>HD</td>
<td>HD</td>
<td>HD</td>
<td>HD</td>
<td>HD</td>
</tr>
</tbody>
</table>
Moreover, the importance assessment can be treated as a weighting factor. The weights suggested by [12] are shown in Table 2.
Table 2. Features Weights
<table>
<thead>
<tr>
<th>Features</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mandatory</td>
<td>10</td>
</tr>
<tr>
<td>Highly Desirable</td>
<td>6</td>
</tr>
<tr>
<td>Desirable</td>
<td>3</td>
</tr>
<tr>
<td>Nice To Have</td>
<td>1</td>
</tr>
</tbody>
</table>
Conformance
The assessment scale for conformance defines the level of support required for a specific feature. It also supplies the assessor with a consistent measuring scale against which to score the feature of a specific candidate. Table 3 presents the scores proposed by [13].
Table 3. Assessment Scale for Features to Support Tool
<table>
<thead>
<tr>
<th>Generic scale point</th>
<th>Definition of Scale point</th>
<th>Scale Point Mapping</th>
</tr>
</thead>
<tbody>
<tr>
<td>Makes things worse</td>
<td>Causes confusion. The way the feature is implemented makes it difficult to use and/or encourages incorrect use of the feature</td>
<td>-1</td>
</tr>
<tr>
<td>No support</td>
<td>Fails to recognise it. The feature is neither supported nor referred to in the user manual</td>
<td>0</td>
</tr>
<tr>
<td>Little support</td>
<td>The feature is supported indirectly, for example by the use of other tool features in non-standard combinations</td>
<td>1</td>
</tr>
<tr>
<td>Some support</td>
<td>The feature appears explicitly in the feature list of the tools and user manual. However, some aspects of feature use are not catered for.</td>
<td>2</td>
</tr>
<tr>
<td>Strong support</td>
<td>The feature appears explicitly in the feature list of the tools and user manual. All aspects of the feature are covered but use of the feature depends on the expertise of the user.</td>
<td>3</td>
</tr>
<tr>
<td>Very strong support</td>
<td>The feature appears explicitly in the feature list of the tools and user manual. The tool provides tailored dialogue boxes to assist the user.</td>
<td>4</td>
</tr>
<tr>
<td>Full support</td>
<td>The feature appears explicitly in the feature list of the tools and user manual. All aspects of the feature are covered and the tool provides user scenarios to assist the user, such as “Wizards”.</td>
<td>5</td>
</tr>
</tbody>
</table>
The assessment table for IM-DeCRuD with respect to the above assessment scale is shown in Table 4.
Table 4. Assessment Table for IM-DeCRuD
<table>
<thead>
<tr>
<th>Features</th>
<th>CI</th>
<th>CS</th>
<th>CTI</th>
<th>TS</th>
<th>MP</th>
<th>GUI</th>
<th>MT</th>
<th>SC</th>
</tr>
</thead>
<tbody>
<tr>
<td>IM-DeCRuD</td>
<td>4</td>
<td>4</td>
<td>5</td>
<td>4</td>
<td>4</td>
<td>YES</td>
<td>4</td>
<td>4</td>
</tr>
</tbody>
</table>
The scores for all the features mentioned above are given in Table 4. The results show that all the features were rated highly satisfactory. The Conflict Identification feature (one of the central themes of this research) obtained the highest score, which indicates the acceptance of this tool.
From the perspective of overall usefulness, 50% of the subjects chose 5 (Very Useful) and the other 50% chose 4 (Useful). None of the subjects chose Moderate, Useless or Very Useless. This again shows that the overall performance of IM-DeCRuD is considered excellent.
Turning to the individual features of IM-DeCRuD: for the support of Concerns Identification and Composition, 75% of the subjects regarded it as Useful and the other 25% as Very Useful. These results are very promising. For Conflict Identification support, 75% chose Very Useful and 25% chose Useful. For Traceability support, the subjects were evenly split between Useful (50%) and Moderate (50%).
Identical results were obtained for Mapping and Maintainability support, where the subjects’ choices were Very Useful (25%), Useful (50%) and Moderate (25%). Similar results were obtained for GUI and Scalability, where the subjects scored both as Useful (75%) and Moderate (25%). Notably, not a single subject chose Useless or Very Useless for any feature of the prototype tool.
Verification and validation of IM-DeCRuD approach using DESMET for its... (Jamaluddin Jasmis)
3.2. Findings of the Analysis
In this section, the findings of the research are presented from a qualitative perspective, based on the domain experts’ evaluations of the IM-DeCRuD approach and tool. The prototype tool was exposed to the subjects by letting them use it and evaluate its practicability. Feedback and comments from users regarding its usefulness and support for software development and maintenance were taken into account. Several questions helped to determine whether the prototype tool was helpful and effective in supporting software development and maintenance; the majority agreed on its overall usefulness.
Once the importance and conformance of the features have been recorded, the score sheets must be analyzed and the best approach determined. Under the DESMET method, if an acceptance threshold has been defined, the analysis must rely on the difference between the acceptance threshold fixed by the users and the score each approach obtained for the feature. If an acceptance threshold cannot be set, the assessment should instead be based on the scores of the approaches relative to one another. As no acceptance threshold was achievable in this research, the latter approach is used.
For a simple feature, a score of “YES” or “NO” must be assigned. Following the DESMET suggestion, providing a simple feature scores five, while failing to provide it scores zero. Moreover, the importance assessment can be used as a weighting factor.
To express the analysis results, DESMET proposes both numerical and graphical profiles. Finally, the overall results for this approach are discussed. For the numerical evaluation, the result based on the DESMET method, via the average evaluation profile for IM-DeCRuD, is shown in Table 5.
Subjects found that the identification and handling of requirements components, including crosscutting concerns, using requirements boilerplates is a good element supported by the IM-DeCRuD tool. Despite some limitations in expressing users’ needs for a system to be developed or maintained, they acknowledged that promoting common formats in system documentation results in standardization and consistency, and claimed it can be seamlessly integrated into many development disciplines.
The composition feature, which inter-relates crosscutting concerns and other requirements components, was also well regarded by the subjects. With systematic relationships between requirements components, and with FURs as joinpoints connecting to other requirements components, it underpins the other features that come with the IM-DeCRuD tool.
The conflict identification feature was also highly acknowledged by the subjects. A good inter-relationship scheme between requirements components, together with the practical implementation of the Product-Oriented NFUR catalogue [16], the requirements crosscutting prioritization scheme and the Conflict Identification procedure, supports conflict identification at an early stage of software development and maintenance activities. Failing to deal with these conflicts might hinder software completion on schedule. With respect to traceability of requirements components, many of the subjects agreed that the tool is able to help them trace between the requirements and design development phases by practically implementing the proposed mapping scheme. Using the same mapping scheme, crosscutting concerns and other requirements components identified in the requirements phase were also agreed to be successfully mapped to the class diagram at the design stage, even at an upper level. This finding was somewhat surprising to most of them, as crosscutting concerns had previously been regarded as a side note to system users’ requests and never handled together with other requirements components.
The implementation of systematic requirements boilerplates for identifying and handling requirements components, together with the inter-relationship scheme addressed earlier, was also highly regarded by the subjects for supporting both simple and complex software maintenance. In addition, the inter-relationship scheme and the zooming capabilities supported by the tool give requirements components different levels of granularity, which makes it possible to support both small and large scale software projects. There were some good suggestions from the users; one of them suggested that the tool should be made open source in order to facilitate wide sharing of its usefulness and improvements.
4. CONCLUSION
In this paper, the verification and validation of the proposed approach and prototype tool are presented in terms of qualitative findings by domain experts. The results from the myPolicy case study were evaluated to determine how the proposed approach achieves practicability in dealing with requirements identification, composition, conflict identification, traceability, component mapping, software maintenance and project scalability. The subjects’ scored results were used as the variables of interest. Based on the results of the sessions, it is confirmed that the proposed approach makes significant progress in handling crosscutting concerns alongside other requirements components. IM-DeCRuD’s pragmatic composition scheme provides the platform for inspecting any requirements conflicts that arise. Moreover, this scheme also assists in mapping all requirements components, including crosscutting concerns, from the earlier phases onto the design stage, whereas common industrial practice today still does not treat crosscutting concerns on a par with other requirements components. The subjects therefore agreed that the schemes used by IM-DeCRuD provide useful features that contribute to traceability process improvements and accommodate software maintenance projects of any scale.
REFERENCES
NAG C Library Function Document
nag_dsprfs (f07phc)
1 Purpose
nag_dsprfs (f07phc) returns error bounds for the solution of a real symmetric indefinite system of linear equations with multiple right-hand sides, $AX = B$ using packed storage. It improves the solution by iterative refinement, in order to reduce the backward error as much as possible.
2 Specification
```c
void nag_dsprfs (Nag_OrderType order, Nag_UploType uplo, Integer n, Integer nrhs,
const double ap[], const double afp[], const Integer ipiv[], const double b[],
Integer pdb, double x[], Integer pdx, double ferr[], double berr[],
NagError *fail)
```
3 Description
nag_dsprfs (f07phc) returns the backward errors and estimated bounds on the forward errors for the solution of a real symmetric indefinite system of linear equations with multiple right-hand sides $AX = B$, using packed storage. The function handles each right-hand side vector (stored as a column of the matrix $B$) independently, so we describe the function of nag_dsprfs (f07phc) in terms of a single right-hand side $b$ and solution $x$.
Given a computed solution $x$, the function computes the component-wise backward error $\beta$. This is the size of the smallest relative perturbation in each element of $A$ and $b$ such that $x$ is the exact solution of a perturbed system
$$(A + \delta A)x = b + \delta b$$
such that $|\delta a_{ij}| \leq \beta |a_{ij}|$ and $|\delta b_i| \leq \beta |b_i|$. Then the function estimates a bound for the component-wise forward error in the computed solution, defined by:
$$\frac{\max_i |x_i - \hat{x}_i|}{\max_i |x_i|}$$
where $\hat{x}$ is the true solution.
For details of the method, see the f07 Chapter Introduction.
4 References
5 Parameters
1: order – Nag_OrderType
Input
On entry: the order parameter specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by order = Nag_RowMajor. See Section 2.2.1.4 of the Essential Introduction for a more detailed explanation of the use of this parameter.
Constraint: order = Nag_RowMajor or Nag_ColMajor.
2: \textbf{uplo} – Nag_UploType \hspace{1cm} \textit{Input}
\textit{On entry}: indicates whether the upper or lower triangular part of \(A\) is stored and how \(A\) is to be factorized, as follows:
- if \(\text{uplo} = \text{Nag\_Upper}\), the upper triangular part of \(A\) is stored and \(A\) is factorized as \(PUDU^TP^T\), where \(U\) is upper triangular;
- if \(\text{uplo} = \text{Nag\_Lower}\), the lower triangular part of \(A\) is stored and \(A\) is factorized as \(PLDL^TP^T\), where \(L\) is lower triangular.
\textit{Constraint}: \(\text{uplo} = \text{Nag\_Upper}\) or \(\text{Nag\_Lower}\).
3: \textbf{n} – Integer \hspace{1cm} \textit{Input}
\textit{On entry}: \(n\), the order of the matrix \(A\).
\textit{Constraint}: \(n \geq 0\).
4: \textbf{nrhs} – Integer \hspace{1cm} \textit{Input}
\textit{On entry}: \(r\), the number of right-hand sides.
\textit{Constraint}: \(\text{nrhs} \geq 0\).
5: \textbf{ap}[\text{dim}] – const double \hspace{1cm} \textit{Input}
\textit{Note}: the dimension, \(\text{dim}\), of the array \(\text{ap}\) must be at least \(\max(1, n \times (n + 1)/2)\).
\textit{On entry}: the \(n\) by \(n\) original symmetric matrix \(A\) as supplied to nag_dsptrf (f07pdc).
6: \textbf{afp}[\text{dim}] – const double \hspace{1cm} \textit{Input}
\textit{Note}: the dimension, \(\text{dim}\), of the array \(\text{afp}\) must be at least \(\max(1, n \times (n + 1)/2)\).
\textit{On entry}: details of the factorization of \(A\) stored in packed form, as returned by nag_dsptrf (f07pdc).
7: \textbf{ipiv}[\text{dim}] – const Integer \hspace{1cm} \textit{Input}
\textit{Note}: the dimension, \(\text{dim}\), of the array \(\text{ipiv}\) must be at least \(\max(1, n)\).
\textit{On entry}: details of the interchanges and the block structure of \(D\), as returned by nag_dsptrf (f07pdc).
8: \textbf{b}[\text{dim}] – const double \hspace{1cm} \textit{Input}
\textit{Note}: the dimension, \(\text{dim}\), of the array \(b\) must be at least \(\max(1, \text{pdb} \times \text{nrhs})\) when \(\text{order} = \text{Nag\_ColMajor}\) and at least \(\max(1, \text{pdb} \times n)\) when \(\text{order} = \text{Nag\_RowMajor}\).
If \(\text{order} = \text{Nag\_ColMajor}\), the \((i, j)\)th element of the matrix \(B\) is stored in \(b[(j - 1) \times \text{pdb} + i - 1]\) and if \(\text{order} = \text{Nag\_RowMajor}\), the \((i, j)\)th element of the matrix \(B\) is stored in \(b[(i - 1) \times \text{pdb} + j - 1]\).
\textit{On entry}: the \(n\) by \(r\) right-hand side matrix \(B\).
9: \textbf{pdb} – Integer \hspace{1cm} \textit{Input}
\textit{On entry}: the stride separating matrix row or column elements (depending on the value of \textit{order}) in the array \(b\).
\textit{Constraints}:
- if \(\text{order} = \text{Nag\_ColMajor}\), \(\text{pdb} \geq \max(1, n)\);
- if \(\text{order} = \text{Nag\_RowMajor}\), \(\text{pdb} \geq \max(1, \text{nrhs})\).
10: \textbf{x}[\text{dim}] – double \hspace{1cm} \textit{Input/Output}
\textit{Note}: the dimension, \(\text{dim}\), of the array \(x\) must be at least \(\max(1, \text{pdx} \times \text{nrhs})\) when \(\text{order} = \text{Nag\_ColMajor}\) and at least \(\max(1, \text{pdx} \times n)\) when \(\text{order} = \text{Nag\_RowMajor}\).
If order = Nag_ColMajor, the \((i,j)\)th element of the matrix \(X\) is stored in \(x[(j-1) \times \text{pdx} + i - 1]\) and if order = Nag_RowMajor, the \((i,j)\)th element of the matrix \(X\) is stored in \(x[(i-1) \times \text{pdx} + j - 1]\).
On entry: the \(n\) by \(r\) solution matrix \(X\), as returned by nag_dsptrs (f07pec).
On exit: the improved solution matrix \(X\).
11: \textbf{pdx} – Integer \hspace{1cm} \textit{Input}
On entry: the stride separating matrix row or column elements (depending on the value of \texttt{order}) in the array \(x\).
Constraints:
- if order = Nag_ColMajor, \(\text{pdx} \geq \max(1, n)\);
- if order = Nag_RowMajor, \(\text{pdx} \geq \max(1, nrhs)\).
12: \textbf{ferr}[ extit{dim}] – double \hspace{1cm} \textit{Output}
Note: the dimension, \textit{dim}, of the array \textit{ferr} must be at least \(\max(1, \text{nrhs})\).
On exit: \(\text{ferr}[j-1]\) contains an estimated error bound for the \(j\)th solution vector, that is, the \(j\)th column of \(X\), for \(j = 1, 2, \ldots, r\).
13: \textbf{berr}[ extit{dim}] – double \hspace{1cm} \textit{Output}
Note: the dimension, \textit{dim}, of the array \textit{berr} must be at least \(\max(1, \text{nrhs})\).
On exit: \(\text{berr}[j-1]\) contains the component-wise backward error bound \(\beta\) for the \(j\)th solution vector, that is, the \(j\)th column of \(X\), for \(j = 1, 2, \ldots, r\).
14: \textbf{fail} – NagError * \hspace{1cm} \textit{Output}
The NAG error parameter (see the Essential Introduction).
6 \hspace{1cm} \textbf{Error Indicators and Warnings}
**NE_INT**
On entry, \textit{n} = \langle value\rangle.
Constraint: \(n \geq 0\).
On entry, \textit{nrhs} = \langle value\rangle.
Constraint: \(\text{nrhs} \geq 0\).
On entry, \textit{pdb} = \langle value\rangle.
Constraint: \(\text{pdb} > 0\).
On entry, \textit{pdx} = \langle value\rangle.
Constraint: \(\text{pdx} > 0\).
**NE_INT_2**
On entry, \textit{pdb} = \langle value\rangle, \textit{n} = \langle value\rangle.
Constraint: \(\text{pdb} \geq \max(1, n)\).
On entry, \textit{pdb} = \langle value\rangle, \textit{nrhs} = \langle value\rangle.
Constraint: \(\text{pdb} \geq \max(1, \text{nrhs})\).
On entry, \textit{pdx} = \langle value\rangle, \textit{n} = \langle value\rangle.
Constraint: \(\text{pdx} \geq \max(1, n)\).
On entry, \textit{pdx} = \langle value\rangle, \textit{nrhs} = \langle value\rangle.
Constraint: \(\text{pdx} \geq \max(1, \text{nrhs})\).
NE_ALLOC_FAIL
Memory allocation failed.
NE_BAD_PARAM
On entry, parameter (value) had an illegal value.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please consult NAG for assistance.
7 Accuracy
The bounds returned in ferr are not rigorous, because they are estimated, not computed exactly; but in practice they almost always overestimate the actual error.
8 Further Comments
For each right-hand side, computation of the backward error involves a minimum of $4n^2$ floating-point operations. Each step of iterative refinement involves an additional $6n^2$ operations. At most 5 steps of iterative refinement are performed, but usually only 1 or 2 steps are required.
Estimating the forward error involves solving a number of systems of linear equations of the form $Ax = b$; the number is usually 4 or 5 and never more than 11. Each solution involves approximately $2n^2$ operations.
The complex analogues of this function are nag_zhprfs (f07pvc) for Hermitian matrices and nag_zsprfs (f07qvc) for symmetric matrices.
9 Example
To solve the system of equations $AX = B$ using iterative refinement and to compute the forward and backward error bounds, where
$$A = \begin{pmatrix}
2.07 & 3.87 & 4.20 & -1.15 \\
3.87 & -0.21 & 1.87 & 0.63 \\
4.20 & 1.87 & 1.15 & 2.06 \\
-1.15 & 0.63 & 2.06 & -1.81
\end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix}
-9.50 & 27.85 \\
-8.38 & 9.90 \\
-6.07 & 19.25 \\
-0.96 & 3.93
\end{pmatrix}.$$
Here $A$ is symmetric indefinite, stored in packed form, and must first be factorized by nag_dsptrf (f07pdc).
9.1 Program Text
/* nag_dsprfs (f07phc) Example Program.
* Copyright 2001 Numerical Algorithms Group.
* Mark 7, 2001. */
#include <stdio.h>
#include <nag.h>
#include <nag_stdlib.h>
#include <nagf07.h>
#include <nagx04.h>
int main(void)
{
/* Scalars */
Integer i, j, n, nrhs, ap_len, afp_len, pdb, pdx, ferr_len, berr_len;
Integer exit_status=0;
NagError fail;
Nag_UploType uplo_enum;
Nag_OrderType order;
/* Arrays */
Integer *ipiv=0;
char uplo[2];
double *afp=0, *ap=0, *b=0, *berr=0, *ferr=0, *x=0;
#ifdef NAG_COLUMN_MAJOR
#define A_LOWER(I,J) ap[(2*n-J)*(J-1)/2 + I - 1]
#define A_UPPER(I,J) ap[J*(J-1)/2 + I - 1]
#define B(I,J) b[(J-1)*pdb + I - 1]
#define X(I,J) x[(J-1)*pdx + I - 1]
order = Nag_ColMajor;
#else
#define A_LOWER(I,J) ap[I*(I-1)/2 + J - 1]
#define A_UPPER(I,J) ap[(2*n-I)*(I-1)/2 + J - 1]
#define B(I,J) b[(I-1)*pdb + J - 1]
#define X(I,J) x[(I-1)*pdx + J - 1]
order = Nag_RowMajor;
#endif
INIT_FAIL(fail);
Vprintf("f07phc Example Program Results\n\n");
/* Skip heading in data file */
Vscanf("%*[^\n] ");
Vscanf("%ld%ld%*[^\n] ", &n, &nrhs);
ap_len = n * (n + 1)/2;
afp_len = n * (n + 1)/2;
#ifdef NAG_COLUMN_MAJOR
pdb = n;
pdx = n;
#else
pdb = nrhs;
pdx = nrhs;
#endif
ferr_len = nrhs;
berr_len = nrhs;
/* Allocate memory */
if ( !(ipiv = NAG_ALLOC(n, Integer)) ||
!(afp = NAG_ALLOC(afp_len, double)) ||
!(ap = NAG_ALLOC(ap_len, double)) ||
!(b = NAG_ALLOC(n * nrhs, double)) ||
!(berr = NAG_ALLOC(berr_len, double)) ||
!(ferr = NAG_ALLOC(ferr_len, double)) ||
!(x = NAG_ALLOC(n * nrhs, double)) )
{
Vprintf("Allocation failure\n");
exit_status = -1;
goto END;
}
/* Read A and B from data file, and copy A to AFP and B to X */
Vscanf(" %ls %*[\n"] , uplo);
if (*(unsigned char *)uplo == 'L')
uplo_enum = Nag_Lower;
else if (*(unsigned char *)uplo == 'U')
uplo_enum = Nag_Upper;
else
{
Vprintf("Unrecognised character for Nag_UploType type\n");
exit_status = -1;
goto END;
}
if (uplo_enum == Nag_Upper)
{
for (i = 1; i <= n; ++i)
{
for (j = i; j <= n; ++j)
{
Vscanf("%lf", &A_UPPER(i,j));
}
}
Vscanf("%*[\n ]");
}
else
{
for (i = 1; i <= n; ++i)
{
for (j = 1; j <= i; ++j)
Vscanf("%lf", &A_LOWER(i,j));
}
Vscanf("%*[\n ]");
}
for (i = 1; i <= n; ++i)
{
for (j = 1; j <= nrhs; ++j)
Vscanf("%lf", &B(i,j));
}
Vscanf("%*[\n ]");
for (i = 1; i <= n * (n + 1) / 2; ++i)
afp[i - 1] = ap[i - 1];
for (i = 1; i <= n; ++i)
{
for (j = 1; j <= nrhs; ++j)
X(i,j) = B(i,j);
}
/* Factorize A in the array AFP */
f07pdc(order, uplo_enum, n, afp, ipiv, &fail);
if (fail.code != NE_NOERROR)
{
Vprintf("Error from f07pdc.
%s
", fail.message);
exit_status = 1;
goto END;
}
/* Compute solution in the array X */
f07pec(order, uplo_enum, n, nrhs, afp, ipiv, x, pdx, &fail);
if (fail.code != NE_NOERROR)
{
Vprintf("Error from f07pec.
%s
", fail.message);
exit_status = 1;
goto END;
}
/* Improve solution, and compute backward errors and */
/* estimated bounds on the forward errors */
f07phc(order, uplo_enum, n, nrhs, ap, afp, ipiv, b, pdb,
x, pdx, ferr, berr, &fail);
if (fail.code != NE_NOERROR)
{
Vprintf("Error from f07phc.
%s
", fail.message);
exit_status = 1;
goto END;
}
/* Print solution */
x04cac(order, Nag_GeneralMatrix, Nag_NonUnitDiag, n, nrhs, x, pdx,
"Solution(s)", 0, &fail);
if (fail.code != NE_NOERROR)
{
Vprintf("Error from x04cac.
%s
", fail.message);
exit_status = 1;
goto END;
}
Vprintf("\nBackward errors (machine-dependent)\n");
for (j = 1; j <= nrhs; ++j)
Vprintf("%11.1e %", berr[j-1], j%7==0 ?"\n":" ");
Vprintf("Estimated forward error bounds (machine-dependent)\n");
for (j = 1; j <= nrhs; ++j)
Vprintf("%11.1e %", ferr[j-1], j%7==0 ?"\n":" ");
Vprintf("\n");
END:
if (ipiv) NAG_FREE(ipiv);
if (afp) NAG_FREE(afp);
if (ap) NAG_FREE(ap);
if (b) NAG_FREE(b);
if (berr) NAG_FREE(berr);
if (ferr) NAG_FREE(ferr);
if (x) NAG_FREE(x);
return exit_status;
}
9.2 Program Data
f07phc Example Program Data
4 2 :Values of N and NRHS
'L' :Value of UPLO
2.07
3.87 -0.21
4.20 1.87 1.15
-1.15 0.63 2.06 -1.81 :End of matrix A
-9.50 27.85
-8.38 9.90
-6.07 19.25
-0.96 3.93 :End of matrix B
9.3 Program Results
f07phc Example Program Results
Solution(s)
<table>
<thead>
<tr>
<th></th>
<th>1</th>
<th>2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>-4.0000</td>
<td>1.0000</td>
</tr>
<tr>
<td>2</td>
<td>-1.0000</td>
<td>4.0000</td>
</tr>
<tr>
<td>3</td>
<td>2.0000</td>
<td>3.0000</td>
</tr>
<tr>
<td>4</td>
<td>5.0000</td>
<td>2.0000</td>
</tr>
</tbody>
</table>
Backward errors (machine-dependent)
4.1e-17 5.5e-17
Estimated forward error bounds (machine-dependent)
2.3e-14 3.3e-14
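The printed solution can be checked independently of the library by multiplying it back against the data of Section 9.2; a minimal self-contained sketch (the helper name is illustrative):

```c
#include <math.h>

/* Max |A*X - B| over all entries, using the full symmetric matrix,
   right-hand sides, and printed solution of the example. */
static double max_residual(void)
{
    static const double A[4][4] = {
        { 2.07,  3.87,  4.20, -1.15},
        { 3.87, -0.21,  1.87,  0.63},
        { 4.20,  1.87,  1.15,  2.06},
        {-1.15,  0.63,  2.06, -1.81}};
    static const double B[4][2] = {
        {-9.50, 27.85}, {-8.38, 9.90}, {-6.07, 19.25}, {-0.96, 3.93}};
    static const double X[4][2] = {
        {-4.0, 1.0}, {-1.0, 4.0}, {2.0, 3.0}, {5.0, 2.0}};
    double worst = 0.0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 2; ++j) {
            double r = -B[i][j];
            for (int k = 0; k < 4; ++k)
                r += A[i][k] * X[k][j];
            if (fabs(r) > worst)
                worst = fabs(r);
        }
    return worst;
}
```

The residual is zero up to rounding, consistent with the tiny backward errors reported above.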
Assistance for the domain modeling dialog
Sander Bosman Theo van der Weide
sanderb@cs.kun.nl tvdw@cs.kun.nl
Computing Science Institute, University of Nijmegen
The Netherlands
Abstract
This paper considers the domain modeling dialog between domain expert and system analyst. In this dialog, the system analyst interprets the domain knowledge provided by the expert, and creates a formal model that captures this knowledge. As the expert may not express knowledge in a very precise way, the system analyst has to find the correct interpretation out of many possible interpretations.
In order to improve the quality of the modeling dialog, we propose a mechanism that enables the system analyst to have a better idea of the intentions of the domain expert, especially where the expert expresses these intentions poorly.
1 Introduction
Domain modeling is the process of creating a domain model in some formal language, from domain knowledge available through domain experts. The resulting domain model is a precise, complete and consistent description of the relevant domain knowledge, at the right level of abstraction, denoted in a formal language such as ER (Chen, 1976) or PSM (Hofstede and Weide, 1993).
Two important roles can be distinguished in the modeling process. The domain expert is responsible for providing domain knowledge, but does not (need to) have modeling skills. The system analyst does not (need to) have domain knowledge, but has the skills to create a formal, well-abstracted model from the domain knowledge provided (Frederiks and Weide, 2004).
Several methods exist that help system analysts in structuring their task. Of the more elaborate methods, NIAM (Nijssen, 1989) and its descendants such as ORM (Halpin, 1995) are good examples. These methods are language oriented, using a domain description as input for analysis. The domain description, consisting of a set of elementary sentences, is assumed to be created by the domain expert and available when the method starts.
We consider the modeling process as a dialog between domain expert and system analyst. Here, the complete domain description is not available before the analysis starts, but grows with each statement that is expressed by the domain expert. The domain description can be seen as the minutes of the dialog. This situation is depicted in figure 1.
The methods mentioned earlier assume the domain expert can express domain knowledge in a precise, consistent and well-abstracted way. However, domain experts often cannot express domain knowledge in this strict way. Rather, the knowledge is often expressed in an imprecise and incomplete way.
This paper proposes a mechanism to improve the quality of the modeling dialog. Instead of rejecting imprecise expressions from the domain expert, we allow them as they may contain valuable domain knowledge. In addition, interpretation knowledge is built up that has the goal of reducing the possible interpretations (ultimately to a single correct interpretation).
Figure 2 shows how we consider the interpretation of domain knowledge by the system analyst. The domain description \( D \) is the domain knowledge expressed by the domain expert. This domain description is interpreted by the system analyst, resulting in the formal model \( M \). The extra information needed to interpret \( D \) and produce \( M \) is the interpretation knowledge \( I \).
The interpretation knowledge often remains implicit, in the mind of the system analyst. Making this knowledge explicit may improve both the modeling efficiency as well as the quality of the final model. The explicit interpretation knowledge can be seen as a motivation of the modeling decisions taken, based on the dialog minutes. Using this, the system analyst may become more conscious and rational about how the domain knowledge is handled (Veldhuijzen van Zanten et al., 2003).
Section 2 describes the basic interpretation knowledge needed to interpret a semi-formal domain description as is expected by NIAM, where informality does not play a role. Then, section 3 discusses the implications of allowing informal domain knowledge.
2 Interpretation of semi-formal domain descriptions
This section discusses the interpretation of semi-formal domain descriptions, providing the basis for the next section where informal domain descriptions are discussed.
2.1 Domain descriptions
The domain description is the collection of information provided by a domain expert as domain knowledge. The domain description changes during the modeling dialog: new information is added and information found to be false or irrelevant is changed or removed.
For our purposes, we assume a domain description can be seen as a set of sentences, each being the representation of an elementary statement, denoted in some language called $L_D$. Both textual and graphical descriptions can be viewed this way, but we will limit ourselves to textual descriptions in this paper.
For now we consider semi-formal domain descriptions: descriptions which are consistent and unambiguous, such that only one single interpretation can result.
Example 1 The set of elementary sentences resulting from CSDP Step 1 of the ORM method (Halpin, 1995) is such a domain description. For example:
Person with name Lee has age of 38 years.
Person with name Mary has age of 19 years.
The sentence structure of the first sentence will be shown in example 3.
2.2 Formal model
The formal model can also be seen as a description, consisting of a set of statements, denoted in a formal language $L_M$. Many modeling languages display a model as a graphical schema, such as conceptual schemas. These schemas can also be seen as a set of statements.
Example 2 Let $L_M$ be a formal language, used in the following examples, based on a subset of PSM (Hofstede and Weide, 1993). The following statements are included in $L_M$, and can be used to create a formal model with:
<table>
<thead>
<tr>
<th>Statement</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>entity(x)</td>
<td>$x$ is an entity, such as 'person with name Lee'</td>
</tr>
<tr>
<td>entity-type(x)</td>
<td>$x$ is an entity type, such as 'person'</td>
</tr>
<tr>
<td>label(x)</td>
<td>$x$ is a label, such as 'Lee'</td>
</tr>
<tr>
<td>label-type(x)</td>
<td>$x$ is a label type, such as 'name'</td>
</tr>
<tr>
<td>relation-type(x)</td>
<td>$x$ is a relation type, such as '<person> has <age>'</td>
</tr>
<tr>
<td>relation(x)</td>
<td>$x$ is a relation, e.g., '<person with name Lee> has <age of 38 years>'</td>
</tr>
<tr>
<td>instance-of(x,y)</td>
<td>$x$ is an instance of type $y$. E.g., 'person with name Lee' is instance of type 'person'.</td>
</tr>
</tbody>
</table>
In the following section we will show how, from this set of formal statements, a model is built using the sentences from a domain description.
2.3 Interpretation of domain descriptions
For semi-formal domain descriptions, the interpretation information \( I \) is agreed upon by both domain expert and system analyst, before the dialog starts. It specifies the correct interpretation for each domain statement \( s \in D \).
Let \( \text{Interp} \) be the interpretation function that produces the formal model \( M \), given a domain description \( D \) and interpretation knowledge \( I \):
\[
\text{Interp}(D, I) \rightarrow M
\]
In this situation, only two types of interpretation knowledge are needed to interpret \( D \):
1. The specification of a Parse function, needed to recognize the sentence structure of sentences in \( D \);
2. The specification of a translation Trans, translating statements from \( D \) into statements in \( M \).
Together, these functions fully specify the interpretation information:
\[
I = (\text{Parse}, \text{Trans})
\]
The interpretation function can now be specified as:
\[
\text{Interp}(D, (\text{Parse}, \text{Trans})) = \text{Trans}(\text{Parse}(D))
\]
The following sections discuss the functions Parse and Trans in more detail.
2.3.1 Recognizing sentence structure
The first step to interpret a domain description is parsing it into meaningful concepts and structures. We already assumed a domain description can be seen as a set of sentences. For our purposes, we make additional assumptions on the structure of sentences in the domain description.
Assumption 1 A sentence can be seen as a hierarchical composition of parts, called terms. Atomic terms, or labels, cannot be decomposed any further. Terms which are not atomic are called complex terms.
Example 3 The first statement of example 1 may have the following structure:
[Figure: parse tree of the sentence "person with name Lee has age of 38 years"]
Assumption 2 Terms represent concepts. Concepts, or conceptions, are abstract things that reside within the mind of the domain expert. As we assume the concepts in a mental model are related to the domain, we can say that terms represent things in the domain (Bosman and Weide, 2003).
The Parse function takes a sentence from \( L_D \), and produces a *parse expression* from the set of possible parse expressions \( \text{PExpr} \):
\[
\text{Parse} : L_D \rightarrow \text{PExpr}
\]
Parse expressions represent the hierarchical structure of a sentence. Their form is a reflection of assumption 1 (using EBNF notation):
\[
\begin{align*}
\text{PExpr} & : : \rightarrow \text{Label} \\
\text{PExpr} & : : \rightarrow \text{TermType} \ (\text{PExpr}+)
\end{align*}
\]
Example 4 This example considers Natural language Normal Form (NNF), which closely resembles the NIAM normal form. In NNF sentences, entities are uniquely described by a *standard name* (such as 'person with name Lee'), consisting of the following three elements:
- *Entity type*: the type of which the entity is an instance (e.g., "person").
- *Label type*: the type of the concrete object used to identify the entity (e.g., "name").
- *Label*: the concrete object that identifies the entity (e.g., "Lee").
A sentence is in NNF if it is unsplittable and all entities are described by their standard name.
The following context-free grammar \( G_{\text{NNF}} \) specifies the language of NNF sentences:
\[
\begin{align*}
(\text{sentence construct}) & \quad S \rightarrow S \ c \ S \\
(\text{standard name}) & \quad S \rightarrow \text{ET} \ c \ \text{LT} \ L
\end{align*}
\]
Here, \( c = \text{verb or preposition}, \text{ET} = \text{entity-type}, \text{LT} = \text{label-type}, \ L = \text{label} \). The grammar is a variation of the one described in (Collignon and Weide, 1993). Textual labels are to be surrounded by quotes to avoid ambiguity.
Example 5 The following sentence is in NNF:

*CD with title Urk 'is created by' artist with name 'The Nits'*
Parsing the sentence yields:
S(
    SN(ET('CD'), P('with'), LT('title'), L('Urk')),
    'is created by',
    SN(ET('artist'), P('with'), LT('name'), L('The Nits'))
)
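The standard-name production (ET c LT L) can be exercised mechanically. The sketch below recognizes a four-token standard name against small hard-coded lexicons; the token layout and lexicon contents are illustrative assumptions, not part of the paper:

```c
#include <stddef.h>
#include <string.h>

/* Returns 1 if the four tokens match the production ET 'with' LT L,
   checking ET and LT against tiny illustrative lexicons.
   The label t[3] is unrestricted. */
static int is_standard_name(const char *t[4])
{
    static const char *entity_types[] = {"person", "CD", "artist"};
    static const char *label_types[]  = {"name", "title", "age"};
    int et = 0, lt = 0;
    for (size_t i = 0; i < sizeof entity_types / sizeof *entity_types; ++i)
        if (strcmp(t[0], entity_types[i]) == 0) et = 1;
    for (size_t i = 0; i < sizeof label_types / sizeof *label_types; ++i)
        if (strcmp(t[2], label_types[i]) == 0) lt = 1;
    return et && strcmp(t[1], "with") == 0 && lt;
}
```

A full parser would apply this rule recursively wherever the sentence-construct rule expects a term.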
2.3.2 Translation to a formal language
The second interpretation step is to translate a parsed sentence from the domain description into the formal language. This is specified by the \textit{Trans} function:
\[
\text{Trans} : \text{PExpr} \rightarrow \wp(L_M)
\]
Translation of a sentence in \( L_D \) results in a *set* of sentences in \( L_M \), since more than one statement may be needed to describe the sentence to be translated.
Example 6 Let $L_D$ be a language conforming to the NNF structure from example 4. Let $L_M$ be the formal language from example 2. The translation function $\operatorname{Trans}$, that translates from $L_D$ to $L_M$, can be given by the following recursive specification. The two Trans rules directly correspond to the two grammar rules of example 4.
(sentence construct)
\[
\operatorname{Trans}(S(s_1, c, s_2)) =
\{ \text{relation-type}(rt), \text{relation}(s), \text{instance-of}(s, rt) \} \cup \operatorname{Trans}(t_1) \cup \operatorname{Trans}(t_2)
\]
where
\[
s = S(s_1, c, s_2)
\]
\[
t_1 = \operatorname{getType}(s_1)
\]
\[
t_2 = \operatorname{getType}(s_2)
\]
\[
rt = RT(t_1, c, t_2)
\]
(standard name)
\[
\operatorname{Trans}(SN(ET(et), P(p), LT(lt), L(l))) =
\{ \text{entity-type}(et), \text{entity}(e), \text{instance-of}(e, et), \text{label-type}(lt), \text{label}(l), \text{instance-of}(l, lt) \}
\]
where
\[
e = SN(ET(et), P(p), LT(lt), L(l))
\]
The help-function $\operatorname{getType}$ is needed by $\operatorname{Trans}$: it returns the type of a given term. This function is defined recursively by:
\[
\operatorname{getType}(SN(ET(et), P(p), LT(lt), L(l))) = et
\]
\[
\operatorname{getType}(S(s_1, c, s_2)) = RT(t_1, c, t_2)
\]
where $t_1 = \operatorname{getType}(s_1)$, $t_2 = \operatorname{getType}(s_2)$
Note that the parse expression of the form $RT(t_1, c, t_2)$ is introduced to represent a relation type with role types $t_1$ and $t_2$, and connector $c$.
The precise nature of the domain description allows a straightforward translation function. The function is also suitable for any domain, since no domain knowledge is present in the specification of the translation function.
3 Interpreting informal domain descriptions
In the previous section, it has been discussed how a negotiated domain description may be transformed into a formal model, using conventional techniques. In this section, we shift focus and take the negotiation process also into account.
A main issue encountered is the interpretation of informal domain descriptions. Intuitively, an informal domain description is vague or incomplete in some way: there is uncertainty about how to interpret it correctly. The base method from the previous section cannot handle informal knowledge, as it requires that each statement can be interpreted with certainty.
Instead of rejecting informal knowledge, as done in the previous section, we would rather like to accept every piece of available knowledge. Any piece of knowledge, even when it is informal, may help the system analyst in constructing a correct formal model.
The intention of this section is to propose a model to handle informal domain descriptions. We will not go into detail about which heuristics and strategies best support a system analyst during the dialog, but focus on a general model into which heuristics and strategies can be "plugged in".
3.1 The nature of informal descriptions
A description is informal when there is uncertainty about the correctness of the domain description and/or the interpretation of that domain description. This uncertainty may appear at various places:
- A statement $s \in D$ may be invalid, although the domain expert may believe the statement is valid at the time he expresses it.
- The interpretation $\text{Interp}(s)$ of a statement $s \in D$ may be invalid or ambiguous.
- The interpretation $\text{Interp}(s)$ of a statement $s \in D$ may be not known at all.
Uncertainty is a fundamental property of informal descriptions. As a result, a system analyst can never be sure he has the right interpretation as is intended by the domain expert. Any domain description and any interpretation of it is based on uncertain knowledge, and therefore their validity can never be established with certainty.
However, there is still a big difference between plausible and implausible interpretations. Absolute certainty about an interpretation may never be achieved, but becoming almost certain about the correctness of an interpretation can still be aimed at.
The approach for handling uncertainty we will use in the next sections is based on these ideas:
- The system analyst tries to find and work with the most plausible interpretation of the domain description.
- This interpretation is assumed to be valid, until proven otherwise. In that case, another more plausible interpretation is to be chosen.
- At any point within the dialog, the system analyst may ask the domain expert for information that can diminish uncertainty. This information may lead to improved confidence in the current interpretation, or it may lead to another, more plausible interpretation.
This approach assumes that invalid interpretations can be detected:
Assumption 3 (invalidity of interpretation is detectable)
Invalidity of the interpretation of an informal description can eventually be detected, when the size of the domain description grows ($|D| \to \infty$). Invalidity can be detected when:
- There is no interpretation of a statement: $\text{Interp}(s) = \emptyset$. In this case, either there should be an interpretation, or the statement should not be part of the domain description.
- The model $M$, the result of the interpretation, is invalid. The most common reason for a model to be invalid is inconsistency, but other reasons may also cause it to be invalid.
The following section illustrates the dialog approach in more detail.
3.2 An approach for handling uncertainty
In this section, we will discuss our basic approach for handling uncertainty. In the next sections, two types of uncertainty will illustrate the approach.
Let $\text{Dialog}_t$ be the sequence of actions that have taken place as part of the dialog, up to moment $t$. Actions include asking questions, giving answers, etc. At any point during the dialog, the domain description $D$ is the result of actions that occurred in the dialog. Let $R_d$ be the function that produces the domain description from a dialog:
$$R_d : \text{Dialog}_t \rightarrow D_t$$
We will omit the time $t$ when the current state of the dialog is meant.
The interpretation of a domain description, $\text{Interp}(D_t, I_t)$, results in a model $M_t$. For informal descriptions, this interpretation is now the *most plausible* or *most probable* interpretation at time $t$, from the system analyst’s point of view. All the choices and assumptions made to reach this interpretation are part of the interpretation knowledge $I_t$.
The following procedure illustrates how the system analyst can structure the dialog such that informal descriptions can be handled:
dialog:
Dialog $= \emptyset$
repeat
$D := R_d(\text{Dialog})$
$M := \text{Interp}(D, I)$
if invalid($M$):
find new plausible interpretation, by adjusting $I$ to $I'$
$M := \text{Interp}(D, I')$
choose:
ask domain expert for any new input, or,
ask domain expert for information that reduces uncertainty
until ready
In each iteration, a choice is made about what to do next. The first choice asks the domain expert for any input. This input may be a new statement that should be added to the domain description, or, that a statement should be removed from $D$.
ask new input:
ask domain expert for new input
answer: action $A_{t+1}$
Dialog $+= A_{t+1}$
The second choice is to ask the domain expert for information that may reduce uncertainty about the current interpretation. This may be triggered by low confidence in the current interpretation, or, it may be that more interpretations are equally plausible, and the system analyst wants more information on which one to choose.
The result can be a validation of the current interpretation, when it remains the most plausible one and confidence in it is increased. Another possible result is an interpretation shift to a new interpretation that is more plausible than the current one.
It is important for the system analyst to have a good strategy for when to reduce uncertainty. If he asks too quickly, the domain expert may be disturbed in his current line of thought because he has to explain details of how to interpret the expressed knowledge. On the other hand, waiting too long to diminish uncertainty leaves the system analyst working with an interpretation he is not confident about.
3.3 Validity of domain statements
We illustrate the approach by lifting the requirement that domain statements need to be valid. Although validity of domain statements can still initially be assumed, any domain statement may prove to be invalid.
Assessing whether domain statements are valid becomes part of the system analyst’s task. For the current interpretation, he needs to keep track of which domain statements he believes are valid and which are not.
Let $SV \subseteq D$ be the set of domain statements of which the system analyst believes they are valid. This set becomes part of the interpretation information $I$:
$$I \triangleq (\text{Parse, Trans, } SV)$$
Interpretation of a domain description is only based on the statements that are believed to be valid:
$$\text{Interp}(D, (\text{Parse, Trans, } SV)) = \text{Trans}(\text{Parse}(SV))$$
At any point within the dialog, an interpretation may become invalid caused by a wrong choice of $SV$. Following the dialog procedure illustrated in the previous section, the system analyst will need to find a new plausible interpretation that is valid. This involves finding a new $SV$.
A new interpretation can be found by first creating a set of possible interpretations, and then selecting the most plausible one from it.
Let $S$ be a set of possible plausible interpretations, where each interpretation is fully defined by the interpretation information $I$. A possible interpretation is considered to be plausible when it is consistent and maximal:
$$S = \{I | I.SV \subseteq D \wedge \text{IsValid}(\text{Interp}(D, I)) \wedge I.SV \text{ is maximal}\}$$
An interpretation is maximal when no other domain statement can be assumed to be valid, without making that interpretation invalid. This reflects the assumption that domain statements must be assumed to be valid unless they ’cause problems’.
For any possible interpretation, the statements thought to be invalid are:
$$\text{InvalidStatements}(D, I) = D - I.SV$$
The conflict set $CS$ is the minimal set that is assumed to contain conflicts:
$$CS(S) \triangleq D - \bigcap_{I \in S} I.SV$$
Example 7 Let the following domain statements be specified:
\begin{align*}
s_1 & : \text{John lives in Nijmegen.} \\
s_2 & : \text{John has a bike.} \\
s_3 & : \text{John lives in Maastricht.} \\
s_4 & : \text{A person lives in one city.}
\end{align*}
Interpretation of the first 3 sentences causes no inconsistency, therefore $SV_3 = \{s_1, s_2, s_3\}$. However, statement 4 causes inconsistency. Now, $D_4 = \{s_1, s_2, s_3, s_4\}$. The set of possible plausible interpretations $S = \{I_1, I_2, I_3\}$ where
\begin{align*}
I_1.SV & = \{s_1, s_2, s_3\} \\
I_2.SV & = \{s_1, s_2, s_4\} \\
I_3.SV & = \{s_2, s_3, s_4\}
\end{align*}
The conflict set $CS = \{s_1, s_3, s_4\}$; statement $s_2$ never causes inconsistency.
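For the small sets of example 7, the conflict set can be computed mechanically. The sketch below encodes statement sets as bitmasks (bit k stands for statement s_{k+1}); the encoding is an illustrative assumption, not part of the paper:

```c
/* Conflict set CS = D minus the intersection of all plausible SV sets:
   a statement is conflict-free only if every plausible interpretation
   assumes it valid. Sets are bitmasks over the statements of D. */
static unsigned conflict_set(unsigned d, const unsigned sv[], int nsv)
{
    unsigned common = d;
    for (int i = 0; i < nsv; ++i)
        common &= sv[i];
    return d & ~common;
}
```

With the three SV sets of example 7, only s_2 survives the intersection, so the conflict set is {s_1, s_3, s_4}, as stated above.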
Eventually, the assumptions made in $SV$ should be reflected in the domain description itself. When a current interpretation (which includes $SV$) is validated with the domain expert, the domain expert may realize the statements assumed to be invalid by the system analyst are indeed invalid. This leads to removal of these invalid statements from $D$. At the end of the modeling process, this should lead to $\text{InvalidStatements}(D, I) = \emptyset$.
3.4 Conceptual ambiguity
Handling conceptual ambiguity is another illustration of the approach.
Conceptual ambiguity arises when a term is used to refer to more than one concept (homonymy). A similar situation occurs when a concept is referred to by more than one term (synonymy). Homonyms and synonyms disrupt the assumption made in assumption 2 that there is a one-to-one relation between term and concept. With conceptual ambiguity, terms and concepts are no longer interchangeable.
Example 8 Let $G_{\text{RNNF}}$ be the grammar $G_{\text{NNF}}$ from example 4, extended with the following production rule:
\[ S \rightarrow L \]
With this extra rule, entities can be identified by either a standard name (SN) or a simple label (L). No restriction is put on simple labels, therefore labels may be ambiguous.
Example 9 Using $G_{\text{RNNF}}$, the following domain description can be parsed and interpreted:
\begin{align*}
s_1 & : \text{John lives in Nijmegen.} \\
s_2 & : \text{John has a bike.} \\
s_3 & : \text{John lives in Maastricht.} \\
s_4 & : \text{A person lives in one city.}
\end{align*}
In the previous section, the terms John in $s_1$ and $s_2$ were required to identify the same person. However, now they might refer to the same person, or they might refer to two different persons.
Interpretation of ambiguous terms is handled by adding an extra interpretation step. Let DA be a function that maps an ambiguous parse expression to an unambiguous one:
\[ DA : \text{PExpr} \rightarrow \text{PExpr} \]
The interpretation information \( I \) is augmented with the disambiguating function DA, which disambiguates terms where needed:
\[ I \triangleq (\text{Parse, Trans, DA}) \]
and
\[ \text{Interp}(D, (\text{Parse, Trans, DA})) = \text{Trans}(DA(\text{Parse}(D))) \]
Note that we can also combine this extension with the one from the previous section (adding SV). However, we do not do this here for clarity.
Example 10 An instance of the term John may be disambiguated to the concept John₂:
\[ DA(\text{John}) = \text{John}_2 \]
Which disambiguations to use in a specific dialog is often unknown or at least uncertain. As done in the previous sections, a plausible interpretation is worked with, based on a plausible disambiguation function DA.
When at some time during the dialog the current interpretation becomes invalid, it becomes apparent that the interpretation knowledge \( I \) must be wrong. Therefore, \( I \) has to be changed such that a new plausible interpretation will result. This involves changing DA. The new interpretation can be selected from the set of plausible possible interpretations:
\[ S = \{ I \mid I.DA \in \mathcal{DA} \land \text{IsValid}(\text{Interp}(D, I)) \} \]
Here, \( \mathcal{DA} \) is the set of possible plausible disambiguation functions DA.
Example 11 Interpretation of the four sentences of example 9 causes an inconsistency when DA specifies that the three instances of the term John refer to one single person John₁. Another interpretation, where the terms from \( s_1 \) and \( s_3 \) are assumed to refer to John₁ and John₂ respectively, is one of the plausible possible interpretations from \( S \).
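The interpretation shift of example 11 can also be checked mechanically. In the sketch below, DA is represented as an array assigning a concept id to each 'lives in' statement's occurrence of John, and validity is the example's one-city-per-person constraint; all names and encodings are illustrative assumptions:

```c
/* city[i] = city index asserted by the i-th 'lives in' statement,
   da[i]   = concept id DA assigns to that statement's occurrence of John.
   A disambiguation is valid iff no single concept is assigned
   two different cities. */
static int da_is_valid(const int da[], const int city[], int n)
{
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            if (da[i] == da[j] && city[i] != city[j])
                return 0;
    return 1;
}
```

Mapping both occurrences to John₁ is rejected, while splitting them over John₁ and John₂ is accepted, mirroring the interpretation shift described above.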
4 Conclusion
This paper discussed the information modeling process as a dialog between domain expert and system analyst. It was shown how the formal model, being the result of the modeling process, can be seen as the interpretation of the domain description provided by the domain expert. The interpretation knowledge specifies how this interpretation is to be performed. For conventional techniques, where certainty is required about how to interpret the domain description, it was shown what the interpretation knowledge consists of.
Then focus shifted to informal domain descriptions, where uncertainty about how to interpret a domain description is accepted. A model was introduced which may help the system analyst in handling these uncertainties. Using this model, the system analyst works with the most plausible interpretation, until this interpretation causes inconsistency. When this occurs, another plausible interpretation is to be chosen. The assumptions and choices that lead to the current interpretation are again part of the interpretation knowledge. Two types of uncertainty were discussed, and it was shown how they are handled by the approach.
References
Systems Engineering Technical Review (SETR): Navy’s Acquisition and Development Process and Software Lessons Learned
Naval Air Systems Command
Systems Engineering Department
Software Engineering & Acquisition Management Division
Distribution Statement A. Approved for public release; distribution is unlimited.
Processes, Models, and Frameworks
- The following key processes, models, and frameworks are synergistic and when used together provide for efficiency and effectiveness of systems engineering and software acquisition and development:
- **Systems Engineering Technical Review (SETR) Acquisition framework**
- Basic Systems Engineering
- IEEE/EIA 12207 *Software life cycle processes*
- Capability Maturity Model Integration (CMMI®) Best Practices framework
Processes, Models, and Frameworks cont.
- While each of the four is normally taught separately, the establishment of a software engineering environment consistent with all four provides for an infrastructure that can be measured for effectiveness and improved to provide the warfighter with better quality products faster and cheaper — BEST VALUE
- They are based on best practices throughout industry, government, and academia and are designed to work with each other
Basic Systems Engineering
Systems Engineering
- A system is an integrated composite of people, products, and processes that provide a capability to satisfy a stated need or objective.
- Systems Engineering (SE) is the effective application of scientific and engineering efforts to transform an operational need into a defined system configuration through the top-down iterative process of requirements definition, functional analysis and allocation, synthesis, optimization, design, test, and evaluation.
Systems Engineering Process
- Define customer needs and required functionality
- Plan the project
- Document the requirements
- Validate and baseline requirements
- Develop the design based on the requirements
- Validate that design meets all requirements and is baselined
- Produce the item based on the design
- Perform system validation
- Ensure all requirements met and the required functionality performs as expected
Based on the International Council on Systems Engineering (INCOSE) definition
Systems Engineering Technical Review Process
What is the SETR?
- **Systems Engineering Technical Review** was developed by NAVAIR and has been adapted for use within the entire Navy.
- An iterative program timeline that maps the technical reviews to the acquisition process described in DOD 5000 documentation:
- Aligned with DODI 5000.02
- Policy and process described in NAVAIRINST 4355.19D
- Applies to all personnel supporting all NAVAIR and Aviation Program Executive Officer (PEO) programs involved with the design, development, test and evaluation, acquisition, in-service support, and disposal of naval aviation weapon systems and equipment.
**Phases**
- **Material Solution Analysis** \(^1\) - ends with Milestone A (**to begin technology development**)
- **Technology Development** - ends with Milestone B (**to begin SD&D**)
- **Engineering & Manufacturing Development** \(^2\) - ends w/ Milestone C (**LRIP**)
- **Production & Deployment** - ends with Initial Operating Capability (**IOC**)
- **Operations & Support** (**maintenance**)
\(^1\) Was previously called Concept Refinement
\(^2\) Was previously called System Development & Demonstration
Systems Engineering Technical Review Timing
Milestones Phases
- Concept Refinement
- Technology Development
- System Development & Demonstration
- Production & Deployment
- Operations & Support
Work Efforts
- Concept Decision
- System Design
- System Demonstration
- LRIP / IOT&E / Full Rate Production & Deployment
- Sustainment Disposal
Activities
- Pre System Acquisition
- System Acquisition
- Sustainment
Instruction & References
- TRA
- SSR
- IRR
- TRA
Reviews
- ITR
- ASR
- SRR
- IBR
- SFR
- PDR
- CDR
- TRR
- FRR
- SVR / FCA
- OTRR
- PCA
- ISR
Technical Baseline
- Preferred System Concept
- System Specification CDD
- System Functional Baseline
- Allocated Baseline
- Product Baseline
- Product Baseline
Technology Readiness Assessment
- ASR - Alternative System Review
- CDR - Critical Design Review
- FCA - Functional Configuration Audit
- FRR - Flight Readiness Review
- IBR - Integrated Baseline Review
- IRR - Integration Readiness Review
- ISR - In Service Review
- ITR - Initial Technical Review
- OTRR - Operational Test Readiness Review
- PDR - Preliminary Design Review
- PRR - Production Readiness Review
- SFR - System Functional Review
- SRR - Systems Requirements Review
- SSR - Software Specification Review
- SVR - Systems Verification Review
- TRR - Test Readiness Review
Found at: https://nserc.navy.mil/Pages/SETRTimeline.aspx
Documented in: NAVAIRINST 4355.19D
Changes Based on new 5000.02
- Concept Refinement to Materiel Solution Analysis
- System Development & Demonstration back to Engineering & Manufacturing Development
- The Preliminary Design Review will be either before or after Milestone B as defined in the program’s acquisition strategy
Objective: Establish a disciplined and integrated process for requirements and acquisition decision-making within DON, endorsing or approving key Joint Capabilities Integration and Development Systems (JCIDS) and acquisition documents at Gate reviews, and facilitating decisions regarding required Navy and Marine Corps capabilities and acquisition of corresponding material solutions.
*The SETR and the Two Pass/Six Gate process are complementary*
DON Requirements/Acquisition Two-Pass/Six-Gate Process with Development of a System Design Specification
(illustrated example for program initiation at Milestone A)
DON Requirements
OSD/Joint Level
- PASS 1
- JROC
- CD
Two Pass
NAVY/USMC Level
- PASS 2
- JROC
- MSB
Annual CSB
Lead Org: Chair:
- OPNAV/HQMC
- DCNO (N8)/DC, CD/I
PEO/SYSCOM/OPNAV/HQMC Level
- SSAC
- SDD
AOA
- CONOPS
- CDD
SOS Plan
- SDS
- RFP
- IBR
# Gate Review
* DON CIO pre-certification, Investment Review Board certification, and Defense Business System (DBS) Management Committee approval prior to obligation of funding for a DBS program when cost > $1 million
** Capability Production Document (CPD) reviews will be chaired by CNO/CMC
Enclosure (1)
Major Phases of SETR
SETR Software Aspects
- Software is a key component in almost every SETR
- The NAVAIR Software Enterprise has developed a set of templates for each review
- Defines the contents of the software section for the review
- Includes a list of items to be covered along with the entrance criteria as they apply to software
Example of Items from SRR Template
- SW Development Team
- Integrated Master Schedule Highlighted with Software Milestones
- Software Entrance Criteria
- Requirements Analysis and Allocation Methodology
- System Specifications Tree
- Contract Data Requirements List (CDRL)
- Software Development Strategy
- Software Development Process
- SW Safety, Information Assurance and Security requirements
- Software Supplier Management
- Software Measurement
- Software Risk Assessment with Mitigation Strategies
- Issues and Concerns
Systems Requirements Review (SRR)
SRR
- Technical assessment establishing the system specification of the system under review to ensure a reasonable expectation of being judged operationally effective & suitable
- Ensures the Capability Development Document (CDD), DOD Directives, statutory and regulatory guidance, and applicable public law have been correctly and completely represented in the system specification, and that the system can be developed within program cost and schedule constraints
- Systems requirements are evaluated to determine whether they are fully defined and consistent with the mature system solution, and whether traceability of systems requirements to the CDD is maintained
- Assesses the performance requirements as captured in the system specification, and ensures the preferred system solution is:
- Consistent with the system specification
- Correctly captures derived and correlated requirements
- Has well understood acceptance criteria and processes
- Is achievable through available technologies resulting from the Technology Development phase
SRR-I and SRR-II
- For major programs going through both concept refinement/materiel solution analysis and technology development, there will be 2 SRRs
- SRR-I
- Technical assessment establishing the specification of the system under review, previously represented by the Performance Based Specification (PBS), to continue the requirements decomposition process prior to MS B
- SRR-II
- This review ensures the contractors participating in the Technology Development (TD) Phase understand that the requirements of the contract, including the system specification, SOW, CDD, DoD Directives, statutory and regulatory guidance, and applicable public law, have been correctly and completely represented in the system specification, and that the system can be developed within program cost and schedule constraints
SRR Software Products
- Software Development Plan (SDP)
- Schedule
- Processes
- Software Development Environment
- System Specification
- Initial Software Requirements Description (SRD)
- Modeling and Simulation Plan
- Supporting Products
- Systems Engineering Plan
- Risk Management Plan
- Measurement Plan
- Quality Assurance Plan
- Measurement Data
- Cost
- Size
- Requirements Volatility
Technical Readiness Assessment (TRA)
TRA
- A systematic metrics-based process that assesses the maturity of Critical Technology Elements (CTEs) by an independent panel of technical experts.
- Applicable to all acquisition category (ACAT) programs per Secretary of the Navy Instruction 5000.2C.
- May be combined with a SRR.
## Technology Readiness Level Definitions
<table>
<thead>
<tr>
<th>TRL Level</th>
<th>Criteria</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>1</strong></td>
<td><strong>Basic principles observed and reported:</strong> Transition from scientific research to applied research. Essential characteristics and behaviors of systems and architectures. Descriptive tools are mathematical formulations or algorithms.</td>
</tr>
<tr>
<td><strong>2</strong></td>
<td><strong>Technology concept and/or application formulated:</strong> Applied research. Theory and scientific principles are focused on specific application area to define the concept. Characteristics of the application are described. Analytical tools are developed for simulation or analysis of the application.</td>
</tr>
<tr>
<td><strong>3</strong></td>
<td><strong>Analytical and experimental critical function and/or characteristic proof-of-concept:</strong> Proof of concept validation. Active Research and Development (R&D) is initiated with analytical and laboratory studies. Demonstration of technical feasibility using breadboard or brassboard implementations that are exercised with representative data.</td>
</tr>
<tr>
<td><strong>4</strong></td>
<td><strong>Component/subsystem validation in laboratory environment:</strong> Standalone prototyping implementation and test. Integration of technology elements. Experiments with full-scale problems or data sets.</td>
</tr>
<tr>
<td><strong>5</strong></td>
<td><strong>System/subsystem/component validation in relevant environment:</strong> Thorough testing of prototyping in representative environment. Basic technology elements integrated with reasonably realistic supporting elements. Prototyping implementations conform to target environment and interfaces.</td>
</tr>
<tr>
<td>6</td>
<td>System/subsystem model or prototyping demonstration in a relevant end-to-end environment (ground or space): Prototyping implementations on full-scale realistic problems. Partially integrated with existing systems. Limited documentation available. Engineering feasibility fully demonstrated in actual system application.</td>
</tr>
<tr>
<td>7</td>
<td>System prototyping demonstration in an operational environment (ground or space): System prototyping demonstration in operational environment. System is at or near scale of the operational system, with most functions available for demonstration and test. Well integrated with collateral and ancillary systems. Limited documentation available.</td>
</tr>
<tr>
<td>8</td>
<td>Actual system completed and "mission qualified" through test and demonstration in an operational environment (ground or space): End of system development. Fully integrated with operational hardware and software systems. Most user documentation, training documentation, and maintenance documentation completed. All functionality tested in simulated and operational scenarios. Verification and Validation (V&V) completed.</td>
</tr>
<tr>
<td>9</td>
<td>Actual system "mission proven" through successful mission operations (ground or space): Fully integrated with operational hardware/software systems. Actual system has been thoroughly demonstrated and tested in its operational environment. All documentation completed. Successful operational experience. Sustaining engineering support in place.</td>
</tr>
</tbody>
</table>
Integrated Baseline Review (IBR)
IBR
- Employed by Program Managers (PMs) throughout the life of projects where Earned Value Management (EVM) is required
- The IBR establishes a mutual understanding of the project Performance Management Baseline (PMB) and provides for an agreement on a plan of action to evaluate risks inherent in the PMB and the management processes that operate during project execution
- Assessment of risk within the PMB and the degree to which the following have been established:
- Technical scope of work
- Project schedule key milestones
- Resources
- Task
- Rationale
- Management Processes
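The IBR risk assessment of the PMB rests on standard Earned Value Management arithmetic. A minimal sketch of the usual variances and indices (the function name and dollar figures are illustrative assumptions, not from NAVAIR guidance):

```python
# Hedged sketch: basic EVM indices used when assessing a Performance
# Management Baseline (PMB) at an IBR. Figures below are illustrative.

def evm_indices(bcws: float, bcwp: float, acwp: float) -> dict:
    """Compute standard EVM variances and indices.

    bcws -- Budgeted Cost of Work Scheduled (Planned Value)
    bcwp -- Budgeted Cost of Work Performed (Earned Value)
    acwp -- Actual Cost of Work Performed (Actual Cost)
    """
    return {
        "schedule_variance": bcwp - bcws,  # SV < 0: behind schedule
        "cost_variance": bcwp - acwp,      # CV < 0: over budget
        "spi": bcwp / bcws,                # Schedule Performance Index
        "cpi": bcwp / acwp,                # Cost Performance Index
    }

# Example: $100K of work planned, $80K earned, $90K actually spent
metrics = evm_indices(100_000, 80_000, 90_000)
```

SPI and CPI below 1.0 are the usual flags that the baseline carries schedule and cost risk.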
System Functional Review (SFR)
Technical assessment establishing the system functional baseline of the system under review to ensure a reasonable expectation of being judged operationally effective and suitable
*Assesses the decomposition of the system specification to system functional specifications derived from use case analysis*
Functional requirements for operations and maintenance are assigned to sub-systems, hardware, software, or support after detailed reviews of the architecture in the environment it will be employed
- The system’s lower level performance requirements are evaluated to determine whether they are fully defined and consistent with the mature system concept, and whether traceability of lower-level systems requirements to top-level system performance and the CDD is maintained
- The development of representative operational use cases for the system
- The SFR determines whether the systems functional definition is fully decomposed to its lower level, and that the Team is prepared to start preliminary design for hardware
- Risk Management, Measurement, Quality Assurance, & Configuration Management processes fully functional
SFR Products
- Updates to SRR products
- SDP
- System Specification
- Modeling and Simulation Plan
- Supporting Products
- Systems Engineering Plan
- Risk Management Plan
- Measurement Plan
- Quality Assurance Plan
- Cost and Size Estimates
- Updated (more detail) Software Requirements Description (SRD)
- Interface Control Documents
- Draft Test Plan
- Measurement Data
- Cost
- Size
- Requirements Volatility
Software Specification Review
(SSR)
SSR
- Technical assessment establishing the software requirements baseline to ensure the preliminary design and ultimately the software solution has a reasonable expectation of being judged operationally effective and suitable
- The software’s lower level performance requirements are fully defined and consistent with a mature system concept, and traceability of lower-level software requirements to top-level system performance and the CDD is maintained
- A review of the finalized Computer Software Configuration Item (CSCI) requirements and operational concept
- Software Requirements Specification (SwRS) or Software Requirements Description (SRD); Interface Requirements Specification(s) (IRS) or Software Interface Requirements Description (SIRD); Software Integration Plan; and the user’s Concept of Operation Description or User Documentation Description form a satisfactory basis for proceeding into preliminary software design
SSR Products
- Updates to SRR and SFR products
- Final Software Requirements Specification (SwRS) or Software Requirements Description (SRD)
- Requirements Verification Matrix
- CDD to specifications to SwRS or SRD
- Interface Control Documents (ICDs) and Interface Requirements Specification (IRS) or Software Interface Requirements Description (SIRD)
- Declassification, Anti-Tamper, Open Architecture, and Information Assurance requirements
- Completed Test Plan
- Software Integration Plan
- Measurement Data
- Cost
- Size
- Requirements Volatility
Preliminary Design Review (PDR)
PDR
- Technical assessment establishing the physically allocated baseline to ensure that the system has a reasonable expectation of being judged operationally effective and suitable.
- **Assesses the allocated design** captured in subsystem product specifications for each configuration item in the system and ensures that each function, in the functional baseline, has been allocated to one or more system configuration items.
- Subsystem specifications for hardware and software, along with associated Interface Control Documents (ICDs), enable detailed design or procurement of subsystems.
- A successful review is predicated on the Team’s determination that the **subsystem requirements**, **subsystem preliminary design**, **results of peer reviews**, and **plans for development and testing** form a satisfactory basis for proceeding into detailed design and test procedure development.
PDR Products
- Updates to SRR, SFR, and SSR products
- Top Level Software Design Description and/or Software Architecture Description
- Completed Test Plan
- Draft Test Procedures
- Traceability from design documentation to subsystem test requirements
- Representative mission profiles
- Measurement Data
- Size
- Defects
- Requirements Volatility
Critical Design Review
(CDR)
CDR
- Technical assessment establishing the build baseline to ensure that the system has a reasonable expectation of being judged operationally effective and suitable
- Assesses the final design as captured in product specifications for each configuration item in the system, and ensures that item has been captured in detailed documentation
- Product specifications for software enable coding of the Computer Software Configuration Item (CSCI)
- A successful review is predicated on the Team’s determination that the subsystem requirements, subsystem detail design, results of peer reviews, and plans for testing form a satisfactory basis for proceeding into implementation, demonstration, and test
CDR Products
- Updates to SRR, SFR, SSR, & PDR products
- All specifications and requirements documentation are complete/stable
- Detailed Software Design Description
- Units Test Procedures
- Traceability Verification Matrix
- CDD to specifications to requirements documentation to design to test procedures
- Traceability in both directions
- Measurement Data
- Size
- Defects
- Maturity
- Requirements Volatility
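The bidirectional traceability called for in the Traceability Verification Matrix can be checked mechanically. A minimal sketch, assuming requirements and tests are kept as simple ID sets and maps (the IDs and data layout here are illustrative assumptions):

```python
# Hedged sketch of a two-way traceability check between baselined
# requirements and the test procedures that claim to verify them.

def untraced(requirements: set, tests: dict) -> tuple:
    """Return (requirements with no test, tests tracing to no requirement).

    requirements -- set of requirement IDs in the baseline
    tests        -- mapping of test ID -> set of requirement IDs it verifies
    """
    covered = set().union(*tests.values()) if tests else set()
    orphan_reqs = requirements - covered            # forward gap: untested reqs
    orphan_tests = {t for t, reqs in tests.items()
                    if not reqs & requirements}     # backward gap: untraced tests
    return orphan_reqs, orphan_tests

reqs = {"SRS-001", "SRS-002", "SRS-003"}
tests = {"TP-10": {"SRS-001"}, "TP-11": {"SRS-002"}, "TP-99": {"SRS-999"}}
untested_reqs, untraced_tests = untraced(reqs, tests)
# SRS-003 has no test; TP-99 traces to nothing in the baseline
```

An empty result in both directions is what "traceability in both directions" amounts to at CDR.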
Integration Readiness Review (IRR)
Technical assessment establishing the configuration to be used in integration test to ensure that the system has a reasonable expectation of being judged operationally effective and suitable.
A product and process assessment to ensure that hardware and software or software components are ready to begin integrated configuration item (CI) testing:
- Assess prior component or unit level testing adequacy, test planning, test objectives, test methods and procedures, scope of tests, and determines if required test resources have been properly identified and coordinated to support planned tests.
- Verifies the traceability of planned tests to program, engineering data, analysis, and certification requirements.
Testing is based upon the Test Plan (TP):
- Begun in requirements phase and completed during design.
- Conducted after the test and/or validation procedures are complete and unit level testing is complete.
IRR Products
- Updates to SRR, SFR, SSR, PDR, & CDR products
- Approved Integration Test Plan
- Approved Integration Test Procedures
- Format for Integration Test Report
- Completed Integration Test Verification Matrix
- Measurement Data
- Quality/maturity
- Defects
Test Readiness Review (TRR)
TRR
- Technical assessment establishing the configuration used in test to ensure that the system has a reasonable expectation of being judged operationally effective and suitable
- TRR is a multi-disciplined product and process assessment to ensure that the subsystem, system, or systems of systems has stabilized in configuration and is ready to proceed into formal test
- Assesses prior unit level and system integration testing adequacy, test planning, test objectives, test methods and procedures, scope of tests, and determines if required test resources have been properly identified and coordinated to support planned tests
- Verifies the traceability of planned tests to program, engineering, analysis, and certification requirements
- Determines the completeness of test procedures and their compliance with test plans and descriptions
- Assesses the impact of known discrepancies to determine if testing is appropriate prior to implementation of the corrective fix
- The TRR process is equally applicable to all tests in all phases of an acquisition program
TRR Products
- Updates to SRR, SFR, SSR, PDR, CDR, and IRR products
- Traceability Analysis
- Approved Test Plan
- Approved Test Procedures
- Format for Test Report
- Completed Test Verification Matrix
- Software Version Document
- Measurement Data
- Quality/maturity
- Defects
Flight Readiness Review (FRR)
FRR
- Technical assessment establishing the configuration used in flight test to ensure that the system has a reasonable expectation of being judged operationally effective and suitable
- Assesses the system and test environment to ensure that the system under review can proceed into flight test with:
- NAVAIR airworthiness standards met
- Objectives clearly stated
- Flight test data requirements clearly identified
- Acceptable risk management plan defined and approved
- Ensures that:
- Proper coordination has occurred between engineering and flight test
- All applicable disciplines understand and concur with:
- The scope of effort that has been identified
- How this effort will be executed to derive the data necessary (to satisfy airworthiness and test and evaluation requirements) to ensure the weapon system evaluated is ready to proceed to flight test
FRR Products
- Updates to SRR, SFR, SSR, PDR, CDR, IRR, & TRR products
- Approved Flight Test Plan
- Approved Test Points/Flight Scenarios/Knee Cards
- Format for Flight Test Report
- Software Version Document
- Measurement Data
- Quality/maturity
- Defects/Test Hour
Operational Test Readiness Review (OTRR)
Multi-disciplined product and process assessment to ensure that the system under review can proceed into Operational Test and Evaluation (OT&E) with a high probability the system will successfully complete operational testing.
Successful performance during Operational Evaluation (OPEVAL) generally indicates the system being tested is effective and suitable for Fleet introduction.
- The decision to enter production may be based on this successful determination.
Of critical importance to this review is the understanding of available system performance to meet the Capability Production Document (CPD).
Operational requirements defined in the CPD must match the Requirements tested to in the Test and Evaluation Master Plan (TEMP).
OTRR Products
- Updates to SRR, SFR, SSR, PDR, CDR, IRR, TRR, & FRR products
- Approved Test & Evaluation Master Plan (TEMP)
- Approved Test Points/Flight Scenarios
- Format for Operational Test Report & Quick-look Report
- Measurement Data
- Quality/maturity
- Defects/Test Hour
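One way to read the "Defects/Test Hour" measurement listed above: a falling defect-discovery rate across test periods suggests a maturing product. A hedged sketch (the period data and the monotone-decrease criterion are illustrative assumptions, not an OTRR entrance rule):

```python
# Hedged sketch: defect-discovery rate per test hour across reporting
# periods, used as a simple maturity indicator.

def defect_rates(periods):
    """periods: list of (defects_found, test_hours) tuples."""
    return [defects / hours for defects, hours in periods]

def maturing(rates):
    """Simple readiness signal: the discovery rate is strictly falling."""
    return all(a > b for a, b in zip(rates, rates[1:]))

rates = defect_rates([(30, 100), (18, 120), (6, 150)])
# rates fall from 0.30 to 0.15 to 0.04, so the software is stabilizing
```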
NAVAIR-Specific Lessons Learned
Requirements Lessons Learned
- Some developers tend to resist documenting requirements in a requirements traceability tool
- Inability to trace requirements back to customer’s/sponsor’s requirements
- Requirements creep – adding requirements not needed to meet user’s/customer’s desires
- Lack of concurrence among the **stakeholders** of the requirements (collaboration)
- Key contributor to requirements instability, which leads to cost and schedule problems
- Requirements too loose/broadly written, making the requirements decomposition more difficult
Requirements Lessons Learned cont.
- Tendency to begin preliminary design before requirements done or maturity of the requirements verified
- Can result in a lot of rework
- If the requirements are not fully defined, then your estimates for both cost and schedule will be inaccurate
- Resistance to having a requirements change control board early in the requirements phase
- Lack of requirements stability measure (metric)
- Requirements phase too little time
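One simple stability measure that addresses the missing-metric lesson above is requirements volatility: churn relative to the baselined count. A sketch (this formula is a common industry convention, not a NAVAIR-prescribed metric):

```python
# Hedged sketch of a requirements volatility (stability) metric:
# the fraction of the baseline churned in a reporting period.

def requirements_volatility(added: int, modified: int, deleted: int,
                            baseline_total: int) -> float:
    """Churned requirements as a fraction of the baselined total."""
    return (added + modified + deleted) / baseline_total

# Example: 5 added, 12 modified, 3 deleted against a 200-requirement baseline
v = requirements_volatility(5, 12, 3, 200)  # 10% churn this period
```

Tracked period over period, a rising value is the early warning for the cost and schedule problems named above.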
Design Lessons Learned
- Tendency to combine preliminary design and detailed design into a single phase to save time
- Peer reviews tend to have more defects because designs not thought out well
- Tendency to begin coding/production before design maturity verified
- Those who use a design tool tend to do better designs, especially if requirements were managed with a tool
- See some confusion between architecture definition and design
- These developers tend to begin coding early and call it prototyping
Agile Programming
Dilbert is copyrighted by Scott Adams, Inc., and is provided for non-commercial use by United Feature Syndicate, Inc.
Coding Lessons Learned
- Size tends to go up and amount of reuse tends to go down as a function of time
- Add growth into planning
- Tend to underestimate the amount of effort required for reusing code
- Consider if new code will be cheaper
- Can be as much as 1.5 times the cost of new code
- Auto code generator use increasing
- Problems/defects come from humans
- Most effective when used in conjunction with a design tool
- Peer reviews are extremely important during this phase
- Detects problems when they are cheaper to fix
- Integrated unit testing tends to get shortened when schedules slide
- Consider sliding schedule or reducing content if schedule cannot slide
- Resource planning (labs, tools, and people) not necessarily well thought out, especially when there is a hardware-in-the-loop lab
Testing Lessons Learned
- Late development of test plans and procedures
- May not be able to get all the resources needed
- May not have the proper review or secured commitment to the testing
- Too little time for testing
- Have to rush through tests and may overlook problems
- Have to cut out some of the tests
- Have to test concurrent with development, which may cause retest when tested areas of the software are changed
- Automated testing promotes
- Better coverage during testing
- More efficient use of staff
- Repeatability of testing
- Efficient usage of test facilities during 2nd & 3rd shifts
Testing Lessons Learned cont.
- Don’t document test sessions
- Anomalies discovered in the test session may get overlooked
- Don’t have data to show what has been tested to support decisions
- Readiness for next step
- Flight clearances
- Production decisions
- Can’t take credit for the testing performed and may have to redo testing
- Wrong configuration tested
- Contention for test facilities due to poor up-front planning
- Schedule delays
- Removal of tests
[Dilbert comic: an engineer writes his application code in one hour instead of the usual weeks and expects to be rewarded; the boss instead demands that every future project be finished in one hour. "You could have warned me." "That's not how experience works."]
Dilbert is copyrighted by Scott Adams, Inc., and is provided for non-commercial use by United Feature Syndicate, Inc.
Schedule Issues
- Beginning tasks before maturity or readiness has been verified rarely if ever saves time and funding
- JSF\(^1\)
- The JSF started **system development before requisite technologies were ready**, started manufacturing test aircraft before designs were stable, and moved to production before flight tests had adequately demonstrated that the aircraft design meets performance and operational suitability requirements.
- FCS\(^2\)
- The current FCS practice is to **overlap builds more than the traditional spiral model does** with software developers reporting that evolving requirements have caused them to interpret and implement changes in requirements well into the design and code phases, compromising the amount of time allotted for testing.
1 Source: GAO-08-388, *Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks*. March 2008
Schedule Driven Demos
Take your time and make sure the product is mature before going to the next step
Schedule Lessons Learned
- Tendency for schedules to be too aggressive for the amount of work to be performed
- “When faced with unrealistic schedules, engineering teams often behave irrationally”*
- Race through requirements
- Produce a superficial design
- Rush into coding
- Software tends to be delivered later than if it were developed with a rational plan
- Lack of staffing allocation across the schedule
- Most common problem is staff brought in too late
- Overlapping of phases (to save time) and beginning next phase with an immature product
- Critical path is not evident
- Impacts risk management
- Puts you in reaction mode (chaotic) because you did nothing to mitigate
- Insufficient detail to assess risks
- Unknown linkages and interdependencies of the tasks identified
Software Suppliers Lessons
In a CrossTalk Magazine article* on software acquisition in the Army, Edgar Dalrymple, Program Manager for the Future Combat Systems Brigade Combat Team and Associate Director of Software and Distributed Systems, when asked what one change he would make in the way the government procures software, replied:
“The government, at least the Army, needs to stop buying software exclusively from the traditional defense contracting base. These companies have the overhead costs of manufacturing companies, yet software development should carry a far smaller overhead burden. Most defense contractors are still managed by manufacturing engineers or business managers.”
Subcontractor Management Lessons
- In major contracts where the prime has subcontracted out subsystems, we have seen:
- Software requirements regarding process maturity and development of deliverables not passed down to the subcontractor
- Prime subcontractor management personnel not experienced in software development
- Results in latency of requirements changes to software
- Rework
- Schedule delay
- Requirements volatility
Summary
- The Systems Engineering Technical Review process is an iterative program timeline that maps to the DODI 5000.02 acquisition timeline and is compatible with:
- Basic Systems Engineering
- CMMI
- IEEE 12207
- There are many software development lessons that NAVAIR has learned from its participation in SETR events
QDG Python Controller and PAT Framework Manual
Victor I. Barua
August 28, 2013
Distillation of four months of programming work during the summer of 2013. If questions arise, they can be directed to victor.barua@gmail.com. All of the code written during this work term can be found here:
https://github.com/vbarua/QDG-Lab-Framework
Contents
1 Device Controllers 3
1.1 Lab Jack 3
1.1.1 Controller Parameters 3
1.1.2 Software Installation 3
1.1.3 Device Details 3
1.1.4 Controller Details 3
1.2 Personal Measurement Device (PMD) 4
1.2.1 Controller Parameters 4
1.2.2 Software Installation 4
1.2.3 Device Details 5
1.2.4 Controller Details 6
1.3 Stabil Ion Controller 6
1.3.1 Controller Parameters 6
1.3.2 Software Installation 6
1.3.3 Controller Details 6
1.4 MKS SRG3 6
1.4.1 Controller Parameters 6
1.4.2 Software Installation 7
1.4.3 Controller Details 7
1.5 Point Grey 7
1.5.1 Controller Parameters 7
1.5.2 Software Installation 7
1.5.3 Device Details 7
1.5.4 Controller Details 8
1.6 PixeLink 8
1.6.1 Controller Parameters 8
1.6.2 Device Specific Function 9
2 PAT Control System - General Usage 9
2.1 Script Breakdown 9
2.2 Example Scripts 10
3 PAT Control System - Implementation Details 10
3.1 Devices 10
3.1.1 DeviceControllers 10
3.1.2 DeviceMediators 10
3.2 Settings 10
3.2.1 DefaultSettings 10
3.2.2 Settings Class 11
3.2.3 SettingsConsolidator 11
3.2.4 TemplateGenerator 11
3.3 Communication 11
3.3.1 Communication Protocol 11
3.3.2 PATServer 11
3.3.3 PATClient 11
3.4 Save Controller 11
3.5 Device Mediator Interface 11
3.5.1 start 12
3.5.2 stop 12
3.5.3 save 12
3.6 Explanation of Common Settings Options 12
4 PAT Control System - Useful Tips 12
4.1 Starting the Server 12
4.2 Handling Data Collection Failures 12
4.3 Adding a New Controller 12
4.4 Porting the Framework 13
5 Optimizer 13
5.1 Particle Swarm Optimization 13
5.2 Initialization 13
5.3 Update Formula 14
5.4 Boundaries 14
5.5 Implementation/Usage Details 14
6 GIT 15
6.1 Rationale 15
6.2 Account Details 15
1 Device Controllers
The main component of each controller module is the controller class, which encapsulates all of the attributes and functions of the device associated with it. The controllers can be used in other Python scripts by importing the controller class from the module, creating an instance of it and then calling the required commands through that instance. Every controller is capable of at least taking data and saving it to an external file. Additional capabilities like automated processing and plotting of data have also been implemented in some cases. Each controller module also has a threading class which is used in conjunction with the controller class to allow data to be taken without blocking the main thread. These controllers have been designed so that all of their data collection parameters (i.e. sample rate, collection duration, etc.) are set when they are constructed. Following this, a simple method call, usually something like takeData(), can be used to take data with the controller's internal settings.

The controller documentation in this text gives a broad overview of each controller. The modules themselves have been written with readability in mind and should, in theory, be largely self-documenting. Each controller module contains example code that will take data when called directly from the command prompt. This code is at the bottom of each module and serves as an example of how data can be taken. Manuals for the controllers can also be found in the DeviceControllers folder of the PAT Framework.
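The controller/threading split described above can be sketched as follows. This is a hypothetical, self-contained illustration: DummyController stands in for a real device controller, and the "data" it takes is fabricated rather than read from hardware.

```python
import threading

class DummyController:
    """Stand-in for a device controller: all collection parameters
    are fixed at construction time, as in the real framework."""
    def __init__(self, sampleRatePerChannel, scanDuration):
        self.sampleRatePerChannel = sampleRatePerChannel
        self.scanDuration = scanDuration
        self.data = []

    def takeData(self):
        # A real controller would read from hardware here.
        n = int(self.sampleRatePerChannel * self.scanDuration)
        self.data = [0.0] * n

class ControllerThread(threading.Thread):
    """Runs takeData() so the main thread is not blocked."""
    def __init__(self, controller):
        super().__init__()
        self.controller = controller

    def run(self):
        self.controller.takeData()

ctrl = DummyController(sampleRatePerChannel=100, scanDuration=2)
t = ControllerThread(ctrl)
t.start()
# ... the main thread is free to do other work here ...
t.join()  # wait for collection to finish before using the data
```

The real controllers follow this same shape: parameters at construction, a blocking takeData()-style call for simple use, and a thread wrapper for non-blocking use.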
1.1 Lab Jack
1.1.1 Controller Parameters
- activeChannels: Array of size 1, 2 or 4 indicating which channels to collect data from.
- sampleRatePerChannel: Number of samples taken per channel per second.
- scanDuration: Time over which to collect data (in seconds).
- trigger: Determines whether a trigger will be used.
- triggerChannel: Digital IO channel to use as trigger.
- idnum: Local ID of LabJack to utilize. -1 uses first available LabJack.
1.1.2 Software Installation
The software required to utilize the Lab Jack U12 can be found here: http://labjack.com/support/u12. This will install the necessary drivers that the device controller utilizes (specifically the ljackuw.dll library).
1.1.3 Device Details
The Lab Jack is a USB data acquisition device that can both measure and output digital and analog voltage signals. Based on the needs of the QDG lab, only its analog voltage capture capabilities have been implemented. Using the Lab Jack in single-ended mode, either 1, 2 or 4 analog channels (AI in figure 1) can be read simultaneously with an aggregate sample rate between 200-1200 samples per second. Each channel has 12 bits of resolution over a range of ±10V.
1.1.4 Controller Details
When a LabJackController is created, it connects to the first available Lab Jack and configures it using the settings passed to the constructor. Data is taken by simply calling collectData(), after which a call to save() can be used to save the data. This method of taking data will, however, prevent you from running other code whilst data is being collected. If you wish to run other code, instead of calling collectData() call start(). Then, before calling save(), call end().
The Lab Jack can also be utilized with a software trigger by using the appropriate configuration. By virtue of it being a software trigger, however, delays on the order of milliseconds can be expected. If triggering functionality is required, it would be better to use the PMD device.
1.2 Personal Measurement Device (PMD)
1.2.1 Controller Parameters
- activeChannels: Array of channels from which to record data.
- gainSettings: Array of gain settings for each of the channels. Must have same size as activeChannels.
- sampleRatePerChannel: Sample rate per channel.
- scanDuration: Duration of scan in seconds.
- vRange: Voltage range identifier code from PMDTypes file.
- trigger: Boolean to indicate whether a trigger should be used or not.
- trigType: Trigger type identifier code from PMDTypes.
- boardNum: Board number registered through InstaCal program. 0 will use first available board.
1.2.2 Software Installation
The software required to utilize the PMD-1208FS can be found here: http://www.mccdaq.com/software.aspx. Specifically, download and run the MCC DAQCD package. The device controller makes use of either cbw32.dll or cbw64.dll depending on the bitness of the OS (i.e. 32 or 64 bit). Make sure that the right version is set in the PMDController file, otherwise an error will be raised when attempting to run PMD code.
Before data can be taken, it is necessary to run the InstaCal program (it should have been installed with the previous package) with the PMD connected to the computer. This program will detect a device and create a configuration file which is needed to utilize it.
1.2.3 Device Details
The PMD-1208FS (Figure 2) is a USB data acquisition device that can both measure and output digital and analog voltage signals. Based on the needs of the QDG lab however, only its analog voltage capture capabilities have been implemented. Using the PMD in differential mode, up to 4 channels can be read at a maximum theoretical aggregate rate of 50kHz, though this rate will depend on the capabilities of the computer it is connected to.
In order to measure a voltage with the PMD, connect the signal to one of the "CHx IN HI" pins, the signal ground to an "AGND" pin, and finally the corresponding "CHx IN LOW" to the same "AGND" as before. The pin diagram can be seen in Figure 3. In differential mode, the PMD provides 12 bit resolution in its voltage measurements. The voltage range can be set to any of the range values listed in the PMDTypes file. This file holds constants for ease of access. It is recommended that the smallest range that includes the signal to be measured be utilized in order to maximize the resolution of the measurements.
The PMD can also be utilized with a hardware trigger by using the appropriate configuration. The trigger input should be connected to Channel 18 (TRIG IN in Figure 3). By default the PMD will trigger on a rising edge (though triggering on a falling edge is also available).
1.2.4 Controller Details
When a PMD Controller is created, it connects to the first available PMD and configures it using the settings passed to the constructor. Data is taken by simply calling collectData(). After this, calling processData() will extract the readings from the data buffer, and a subsequent call to save() will save the data. This method of taking data will, however, prevent you from running other code whilst data is being collected. If you wish to run other code, instead of calling collectData() and processData() call start(). Then, before calling save(), call stop().
1.3 Stabil Ion Controller
1.3.1 Controller Parameters
- port: COM port to which gauge is connected.
- duration_s: duration of data collection in seconds.
- secondsPerSample: the number of seconds to wait between samples. Not recommended to be anything less than 1.
1.3.2 Software Installation
The Granville-Phillips Series 370 Stabil-Ion vacuum gauge is controlled via the RS232 communication protocol. The pySerial package found here http://pyserial.sourceforge.net/ can be used to communicate with the gauge.
1.3.3 Controller Details
The Stabil Ion controller subclasses the Serial class (from pySerial) in order to have access to its services. To create an instance of the controller, it is necessary to know the COM port to which it is connected. A list of COM ports is available within the device list on Windows. Note that communication ports are numbered using zero-based indexing, which means that COM2 corresponds to port 1.
The majority of this controller is a wrapper around RS232 commands. It depends heavily on the read and write methods provided by the Serial class: write() is used to send message strings to the device, and read() is used to read messages returned from the device.

The RS232 protocol for this device requires that all commands be terminated by a carriage-return and line-feed. For this reason, the write() method has been overridden to append "\r\n" to all messages sent. read() has also been overridden to remove these two characters from the returned data.
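This termination handling can be illustrated with a small self-contained sketch. LoopbackSerial is a hypothetical stand-in for pySerial's Serial class (it simply echoes writes back on read, so the example needs no gauge); the real controller overrides serial.Serial's methods in the same way.

```python
class LoopbackSerial:
    """Hypothetical stand-in for serial.Serial: bytes written are
    echoed back on read, so no physical gauge is needed."""
    def __init__(self):
        self._buffer = ""

    def write(self, msg):
        self._buffer += msg

    def read(self, size):
        out, self._buffer = self._buffer[:size], self._buffer[size:]
        return out

class StabilIonLike(LoopbackSerial):
    """Appends the CR/LF terminator on write and strips it on read,
    mirroring the overridden methods described above."""
    TERM = "\r\n"

    def write(self, msg):
        super().write(msg + self.TERM)

    def read(self, size):
        # Simplification: read the payload plus its terminator.
        raw = super().read(size + len(self.TERM))
        return raw.replace(self.TERM, "")

gauge = StabilIonLike()
gauge.write("DG")      # actually sent as "DG\r\n"
reply = gauge.read(2)  # terminator stripped from the echo
```

Calling code therefore never sees the terminator characters, which is exactly the convenience the overridden methods provide.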
To collect data with the controller, simply call the collectData() method. The controller will then request a reading from the gauge every time a reading is required. Note that as the timing of readings is being controlled via software, time resolution cannot be guaranteed beyond 1s. After data is collected, it can be saved to a csv file by calling saveData().
1.4 MKS SRG3
1.4.1 Controller Parameters
- port: COM port to which gauge is connected.
- duration_s: duration of data collection in seconds.
- measurementTime_s: time over which a single measurement will be made.
- gType: the type of gas being measured. See controller file for the numeric codes.
- pUnits: the pressure units used for outputs. See controller file for the numeric codes.
- tUnits: the temperature units used for inputs. See controller file for the numeric codes.
1.4.2 Software Installation
The MKS SRG3 spinning rotary gauge is controlled via the RS232 communication protocol. The pySerial package found here [http://pyserial.sourceforge.net/](http://pyserial.sourceforge.net/) can be used to communicate with the gauge.
1.4.3 Controller Details
The MKS SRG3 controller subclasses the Serial class (from pySerial) in order to have access to its services. To create an instance of the controller, it is necessary to know the COM port to which it is connected. A list of COM ports is available within the device list on Windows. Note that communication ports are numbered using zero-based indexing, which means that COM2 corresponds to port 1.
The majority of this controller is a wrapper around RS232 commands. It depends heavily on the read and write methods provided by the Serial class: write() is used to send message strings to the device, and read() is used to read messages returned from the device.

The RS232 protocol for this device requires that all commands be terminated by a carriage-return. For this reason, the write() method has been overridden to append "\r" to all messages sent. read() has also been overridden to remove this character from the returned data.
To collect data with the controller, simply call the collectData() method. The controller will then take a reading from the gauge whenever one becomes available, which can be anywhere from 5 to 60 seconds. After data is collected, it can be saved to a csv file by calling saveData().
1.5 Point Grey
1.5.1 Controller Parameters
- `numOfImages`: Number of images which will be taken.
- `expTime_ms`: Exposure time for each image in milliseconds.
- `gain`: Camera gain level.
- `roi`: Region of Interest object. If false the full region will be utilized, otherwise the specified region of interest will be used.
- `speedBoost`: When toggled the camera will be able to operate at a higher frame rate but images will have a lower bit depth.
1.5.2 Software Installation
The controller requires various drivers to operate correctly. The required drivers can be obtained here: [http://www.ptgrey.com/support/downloads/downloads_admin/Index.aspx](http://www.ptgrey.com/support/downloads/downloads_admin/Index.aspx). Note that the site requires a login. After logging in, select the software tab and download the FlyCapture v2.4 Release executable. The installation will install all of the necessary drivers. It will also install the Point Grey FlyCap2 utility which provides a useful interface for viewing the camera status and configuration.
1.5.3 Device Details
The Point Grey controller was written and tested specifically for the Flea2 camera (model FL2G-13S2M-C). The particular camera used in the lab can operate in either Y8 or Y16 mode, which correspond to monochrome images with either 8 or 12 bits of depth per pixel. The full 16 bits of Y16 mode are not available due to limitations of the A/D converter. The Flea2 has a maximum resolution of 1280x960 and, depending on whether it is in Y8 or Y16 mode, a frame rate of either 15 or 30 fps respectively. However, by using the region of interest settings to reduce the data collection area it is possible to exceed these frame rates. This is important because the camera cannot be triggered consecutively at a rate faster than its frame rate.
1.5.4 Controller Details
When a Point Grey controller is created, it connects to the first available camera and configures it using the settings passed to the constructor. Hardware or software triggering can be enabled by calling the corresponding enableSoftwareTrigger() or enableHardwareTrigger() methods, but they are not necessary. The software trigger can be fired by using the fireSoftwareTrigger(). Before calling the start() method, it is necessary to call the setDataBuffers() method which tells the camera how many triggers to expect and/or images to take.
At this point the start() method, which makes the camera wait for triggers, can be called. After all of the triggers have been received the stop() method should be called, after which data can be saved by calling any combination of the saveRAWImages(), savePNGImages() and saveLog() methods. The last of these saves a text log of the camera settings used for the collection as well as the time at which each image was taken relative to the first image.
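The required call order (setDataBuffers() before start(), and stop() before saving) can be sketched as a tiny state machine. This is a hypothetical illustration only: all camera I/O is omitted and the method bodies are stand-ins, not the real controller.

```python
class CameraSketch:
    """Hypothetical sketch of the Point Grey call sequence."""
    def __init__(self, numOfImages):
        self.numOfImages = numOfImages
        self.buffers_set = False
        self.running = False
        self.images = []

    def setDataBuffers(self):
        # Tells the camera how many triggers/images to expect.
        self.buffers_set = True

    def start(self):
        if not self.buffers_set:
            raise RuntimeError("call setDataBuffers() before start()")
        self.running = True

    def fireSoftwareTrigger(self):
        # Each trigger produces one image while the camera is running.
        if self.running and len(self.images) < self.numOfImages:
            self.images.append("frame%d" % len(self.images))

    def stop(self):
        self.running = False

    def savePNGImages(self):
        if self.running:
            raise RuntimeError("call stop() before saving")
        return list(self.images)

cam = CameraSketch(numOfImages=2)
cam.setDataBuffers()
cam.start()
cam.fireSoftwareTrigger()
cam.fireSoftwareTrigger()
cam.stop()
frames = cam.savePNGImages()
```

Skipping a step in this order raises immediately in the sketch; with the real camera the symptom would instead be missed triggers or incomplete data.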
1.6 PixeLink
The PixeLink controller was written by Ovidiu Toader. It has been interfaced into the PAT Controller via the PixeLinkMediator; however, it requires a number of device specific commands to function properly. The documentation here specifies its usage within the PAT Framework.
1.6.1 Controller Parameters
- gain: Camera gain level.
- expTime_ms: Exposure time for each image in milliseconds.
- ROI_width: Width of the region of interest in pixels.
- ROI_height: Height of the region of interest in pixels.
- ROI_left: Offset from the left of the region of interest in pixels.
- ROI_top: Offset from the top of the region of interest in pixels.
- useROIcenter: Boolean indicating whether the region of interest should be centered using the ROI_center value.
- ROI_center: Tuple indicating the location of the center of the region of interest.
1.6.2 Device Specific Function
The PixeLink has three functions unique to it in the PATController class (more on this later). The first is triggerPixeLink() which will send a trigger pulse via the UTBus to the camera and also increment an internal trigger counter. This counter is used by the second function, which is setPixeLinkImageCount(). This function should be called before startDevices() and after all of the triggerPixeLink() calls, and sends a message to the server notifying it of how many triggers to expect. The third function is flushPixeLink(), which should be called whenever the camera fails to collect data. The notification for this arrives via the stopDevices() function in the PATController. The PixeLinkTest script gives an example of how to use the camera within the framework.
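A rough sketch of this trigger bookkeeping follows, with the UTBus pulse and the server message replaced by plain counters (the class and attribute names here are illustrative, not the framework's):

```python
class PixeLinkBookkeepingSketch:
    """Hypothetical sketch of the trigger counting described above."""
    def __init__(self):
        self.triggerCount = 0
        self.expectedImages = None

    def triggerPixeLink(self):
        # Real version also sends a trigger pulse over the UTBus.
        self.triggerCount += 1

    def setPixeLinkImageCount(self):
        # Real version sends this count to the server so it knows
        # how many images to wait for.
        self.expectedImages = self.triggerCount

    def flushPixeLink(self):
        # Called when the camera fails to collect data; discard
        # any pending trigger state.
        self.triggerCount = 0
        self.expectedImages = None

cam = PixeLinkBookkeepingSketch()
cam.triggerPixeLink()
cam.triggerPixeLink()
cam.triggerPixeLink()
cam.setPixeLinkImageCount()  # must come after all trigger calls
```

This is why setPixeLinkImageCount() must be called after all of the triggerPixeLink() calls: the count it reports is whatever the counter holds at that moment.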
2 PAT Control System - General Usage
The PAT control system draws heavily from the Python framework developed for the MOL experiment. The main component of the system is the PATController class which extends the Recipe class from the UTBus. This means that everything that can be done with the Recipe can also be done with the PATController. The power of the PATController comes from the fact that it consolidates all of the functionality available to the PAT experiment in one class. Using the PATController requires that an instance of the PATServer be running on the machine collecting data.
2.1 Script Breakdown
What follows is a breakdown of a script for controlling the PAT system. First the PATController is imported along with the defaultSettings of the system and the overwriteSettings() function. The next import is an updatePackage (more details on this later) from the updatedSettings file.
```python
from PATController import PATController, defaultSettings, overwriteSettings
from updatedSettings import updatePackage
```
Using the overwriteSettings() function, the updatePackage can be used to overwrite settings in the defaultSettings. These updated settings are then used to construct a PATController object (named PATCtrl in this example).
```python
updatedSettings = overwriteSettings(defaultSettings, updatePackage)
PATCtrl = PATController('Example', updatedSettings)
```
Next the start() method is called which allows UTBus commands to be recorded.
```python
PATCtrl.start() # Starts recording UTBus Commands
```
Following this, a series of UTBus commands defined directly in the PATController can be called.
```python
PATCtrl.set_2D_I_1(3.9)
PATCtrl.close_all_shutters()
PATCtrl.set_3D_coils_I(1.2)
PATCtrl.set_3DRb_pump_amplitude(0.8)
PATCtrl.set_3DRb_pump_detuning(12)
PATCtrl.wait_s(5)
PATCtrl.open_all_shutters()
```
Note that the UTBus commands won’t be executed until the end() method is called, which will take the recorded UTBus commands, compile them and send them to the UTBus. The end() method also blocks the script until the UTBus has finished. For this reason, if data is to be taken, the startDevices() function should be called before it. This can be seen below.
```python
PATCtrl.startDevices() # Start recording devices.
PATCtrl.end() # Stop recording UTBus commands and execute them.
```
Before saving data it is necessary to call the stopDevices() function which, depending on the device, will either stop data collection or wait until the device is finished collecting data.
```python
PATCtrl.stopDevices() # Stop recording devices.
PATCtrl.save() # Save data from devices.
```
Once the script is done the closeClient method should be called. This notifies the server that the script is done.
```python
PATCtrl.closeClient() # Close link between controller and server.
```
2.2 Example Scripts
Along with this document, the QDG-GATEWAY machine contains a folder of example scripts that showcase some of the capabilities of the framework.
- VariableDataCollectionParameters: This script showcases how to change data collection parameters between trials.
- RunMultiple: This script runs and processes data from multiple devices simultaneously.
- FluorescenceOptimization: Showcases the optimization capabilities of the framework.
- TestScripts: Folder of scripts to collect data individually with each device.
3 PAT Control System - Implementation Details
The PAT control system has been split into two main components: a client and a server. The client side is set up on the QDG-GATEWAY machine and interfaces with the UTBus. The server side is set up on the QDG-PATPC machine and interfaces with all of the data collection devices. Originally there was no such separation and everything ran on the QDG-GATEWAY machine; however, as data collection is a processor-intensive task and this is a shared machine, this proved problematic.
3.1 Devices
3.1.1 DeviceControllers
The DeviceControllers folder contains the individual controllers for each of the data acquisition devices available to the PAT system. These controllers are the same as those described in the Controller section and in fact can be used independently of the control system.
3.1.2 DeviceMediators
The DeviceMediators folder contains a mediator module for each of the controllers available to the PAT system. Each of the mediators must implement the methods of the DeviceMediatorInterface class, which mandates common methods like start() and save(). This means that anyone writing a device controller doesn’t have to worry about how it will plug into the framework, as long as general capabilities like starting data collection and saving data are available.
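The idea can be sketched as follows; the interface methods match those named above, while MockMediator is a hypothetical stand-in for a real device mediator:

```python
class DeviceMediatorInterface:
    """Sketch of the common interface each mediator must implement."""
    def start(self):
        raise NotImplementedError
    def stop(self):
        raise NotImplementedError
    def save(self):
        raise NotImplementedError

class MockMediator(DeviceMediatorInterface):
    """Hypothetical mediator standing in for a real device; it just
    records which lifecycle methods were called."""
    def __init__(self, name):
        self.name = name
        self.log = []
    def start(self):
        self.log.append("start")
    def stop(self):
        self.log.append("stop")
    def save(self):
        self.log.append("save")

# Because every mediator exposes the same methods, the server can
# drive any mix of devices with one loop:
mediators = [MockMediator("LabJack"), MockMediator("PointGrey")]
for m in mediators:
    m.start()
for m in mediators:
    m.stop()
    m.save()
```

This uniform interface is what lets the server keep its mediators in a plain list and iterate over them without caring which device each one wraps.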
3.2 Settings
3.2.1 DefaultSettings
The DefaultSettings folder contains settings files which represent the default settings of various components of the PAT system. In general these settings files should not be modified; rather, any controller instance that needs to utilize them should import and overwrite them.
3.2.2 Settings Class
The Settings module defines a custom dictionary class which prevents new items from being added to the dictionary after it is created. This makes overwriting settings safer as invalid names won’t be accepted.
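A minimal version of such a locked dictionary might look like this (a sketch of the idea, not the framework's exact code):

```python
class Settings(dict):
    """Dictionary that rejects keys not present at construction, so
    a typo in a settings name fails loudly instead of silently
    creating a new, ignored entry."""
    def __setitem__(self, key, value):
        if key not in self:
            raise KeyError("unknown setting: %r" % key)
        super().__setitem__(key, value)

defaults = Settings(sampleRatePerChannel=1200, scanDuration=5)
defaults["scanDuration"] = 10        # fine: key already exists
rejected = False
try:
    defaults["scanDurration"] = 10   # typo: rejected
except KeyError:
    rejected = True
```

Without this guard the misspelled key would be accepted and the intended setting would quietly keep its default value, which is exactly the failure mode the class prevents.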
3.2.3 SettingsConsolidator
The SettingsConsolidator module is used to pack up all of the individual settings files into a defaultSettings Setting dictionary which is used to construct a PATController object. The defaultSettings dictionary consists of two subcomponents, the deviceSettings dictionary which stores the default information associated with devices and the generalSettings dictionary which stores everything else. The first half of the module imports the individual settings files and then places them into either the deviceSettings or generalSettings dictionaries.
The second half of the module converts all of the regular dictionaries into Settings dictionaries and packs them into defaultSettings. It also defines the overwriteSettings function which can be used to overwrite the defaultSettings dictionary.
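The consolidation and overwrite flow can be sketched as follows. The plain dicts stand in for imported settings files, and the recursive signature of overwriteSettings is an assumption, not the module's actual interface.

```python
# Stand-ins for imported per-device settings files (hypothetical values).
cameraDefaults = {"takeData": True, "dataFolderName": "CameraData"}
scopeDefaults = {"takeData": True, "dataFolderName": "ScopeData"}

defaultSettings = {
    "deviceSettings": {"camera": cameraDefaults, "scope": scopeDefaults},
    "generalSettings": {"numberOfTrials": 1},
}

def overwriteSettings(defaults, overrides):
    """Recursively overwrite default values with user-supplied ones."""
    for key, value in overrides.items():
        if isinstance(value, dict):
            overwriteSettings(defaults[key], value)
        else:
            defaults[key] = value

# Turn off data taking for the camera only; everything else keeps defaults.
overwriteSettings(defaultSettings,
                  {"deviceSettings": {"camera": {"takeData": False}}})
```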
##### 3.2.4 TemplateGenerator
The TemplateGenerator script takes all of the settings in the DefaultSettings folder and generates a file, settingsTemplate.py, with all of the possible settings that can be modified present.
#### 3.3 Communication
##### 3.3.1 Communication Protocol
Both the PATServer and PATClient are able to send and receive messages. Only one client can be connected to the server at any point in time. Messages are sent and received in two parts: first the sender sends a 4-character string indicating the length of the message, after which the actual message is sent. This protocol ensures that messages arrive completely.
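The length-prefix framing described above can be sketched as below. The helper names are illustrative, and the sketch assumes ASCII messages short enough that the length fits in 4 characters.

```python
import socket

def send_message(sock, message):
    data = message.encode("ascii")
    sock.sendall(b"%04d" % len(data))  # 4-character length header first
    sock.sendall(data)                 # then the message body

def recv_exactly(sock, n):
    """Keep reading until exactly n bytes have arrived."""
    chunks = b""
    while len(chunks) < n:
        chunk = sock.recv(n - len(chunks))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        chunks += chunk
    return chunks

def recv_message(sock):
    length = int(recv_exactly(sock, 4))                # read the header...
    return recv_exactly(sock, length).decode("ascii")  # ...then the body

# Loopback demonstration ('S' standing in for a control character):
a, b = socket.socketpair()
send_message(a, "Sstart")
print(recv_message(b))  # → Sstart
```

Because the receiver knows the exact byte count up front, partial TCP reads cannot truncate a message.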
On the server, messages are parsed by the interpretMessage function. The first character of any message is a control character which this function uses to dictate the server's response.
##### 3.3.2 PATServer
The PAT Server is a socket server that listens for a connection by a PAT Client. When a client connects, the server will dedicate itself to that client until it disconnects, meaning that no more than one client can connect at a time. On connection the server also creates all of the device mediators needed based on the initialization settings and stores them in a list, which allows for easy iteration over common device mediator methods like start() and save().
##### 3.3.3 PATClient
The PAT Client can send messages to the PAT Server to execute the various methods of the device mediators. It can also send messages to execute functions from specific device mediators.
#### 3.4 Save Controller
The SaveController module is used by the server to generate the file paths to which experimental data will be saved. In a multi-trial run, the state of the SaveController is saved after each trial. If at any point the server is interrupted or crashes, it is possible to reload the server state and continue the experimental run.
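The per-trial checkpointing described above might be sketched as follows with pickle. The class internals and the SaveController.pkl file name are illustrative (the file name is borrowed from section 4.1), not necessarily the real module's layout.

```python
import os
import pickle
import tempfile

class SaveController:
    """Hypothetical sketch: tracks trial numbers and checkpoints itself."""
    def __init__(self, experiment_folder):
        self.experiment_folder = experiment_folder
        self.trial_number = 0

    def next_trial_folder(self):
        self.trial_number += 1
        return os.path.join(self.experiment_folder,
                            "Trial%03d" % self.trial_number)

    def checkpoint(self):
        # Persist state after each trial so a crashed run can resume.
        path = os.path.join(self.experiment_folder, "SaveController.pkl")
        with open(path, "wb") as f:
            pickle.dump(self, f)

    @staticmethod
    def restore(experiment_folder):
        path = os.path.join(experiment_folder, "SaveController.pkl")
        with open(path, "rb") as f:
            return pickle.load(f)

# Simulate one trial, checkpoint, then restore as if after a crash.
folder = tempfile.mkdtemp()
sc = SaveController(folder)
sc.next_trial_folder()
sc.checkpoint()
restored = SaveController.restore(folder)
```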
#### 3.5 Device Mediator Interface
The DeviceMediatorInterface class defines a number of methods that all device mediators must implement. The core methods are start(), stop() and save(); these must have proper implementations. All other methods can either be implemented or left as a bare `pass`.
##### 3.5.1 start
The start method is used to signal a device to either start taking data or prepare for a trigger signal. Data collection must happen in a separate thread.
##### 3.5.2 stop
The stop method can either stop a device or wait for it to finish collecting data.
##### 3.5.3 save
The save method must take a folder path. The device must then save its data to the folder given.
#### 3.6 Explanation of Common Settings Options
- takeData: Boolean that determines whether data is taken or not.
- processData: Boolean that determines whether data is processed when processData() is called.
- persistent: Boolean that determines whether a device should have a persistent state between trials. Utilizes the saveState() and restoreState() methods of the DeviceMediatorInterface.
- dataFolderName: The file name under which data taken by the device will be saved.
### 4 PAT Control System - Useful Tips
#### 4.1 Starting the Server
The server can be started by running the PATServer file, which will start a new instance of the server. However, it is also possible to restore the server to a particular save state from a multi-trial run so as to continue saving data in the same data file with the correct trial numbers. This was implemented mainly for usage with the optimizer built into the framework for cases in which the server crashed. When running a script that utilizes saveTrial(), the state of the saveController is saved in the experiment folder after each trial. If the server crashes and needs to be restored, running:
```
python .../PATServer .../ExperimentDataFolder
```
in the command prompt will restart the server in the state it was in before the crash. It is also possible to restore the state of the server to that after any particular trial by copying the SaveController.pkl file from the trial folder into the experiment folder.
#### 4.2 Handling Data Collection Failures
During data collection, the PATServer keeps track of any devices that fail to take data. When stopDevices() is called on the client side by the PATController, it returns a boolean value that indicates whether any of the data collection devices failed. This feature is useful for multi-trial runs to prevent holes in the data from unsuccessful trials. By checking the return value of this function and only saving data when it is false, it is possible to repeat a particular trial an arbitrary number of times until data is collected successfully. This status-checking behaviour can be seen in the PixeLinkTest script and the FluorescenceOptimizer script.
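The retry pattern might look like the sketch below. FakeClient is a stand-in whose stopDevices() reports a failure on the first attempt; run_trial and max_retries are illustrative names, not the framework's API.

```python
class FakeClient:
    """Stand-in client: first attempt fails, second succeeds."""
    def __init__(self):
        self.attempts = 0
    def startDevices(self):
        self.attempts += 1
    def stopDevices(self):
        # True means at least one device failed to take data.
        return self.attempts < 2
    def saveTrial(self):
        return "saved on attempt %d" % self.attempts

def run_trial(client, max_retries=5):
    """Repeat a trial until data collection succeeds, then save."""
    for _ in range(max_retries):
        client.startDevices()
        failed = client.stopDevices()
        if not failed:
            return client.saveTrial()  # only save clean data
    raise RuntimeError("trial failed %d times" % max_retries)

print(run_trial(FakeClient()))  # → saved on attempt 2
```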
#### 4.3 Adding a New Controller
Adding a new controller to the PAT system requires modifications in a number of places.
1. Add the controller module to the Server/DeviceMediators/DeviceController folder. The controller must be capable of starting a new thread and collecting data in it.
2. Create a new mediator in the Server/DeviceMediators for the controller. The mediator should extend the DeviceMediatorInterface class.
3. Add an import statement for the controller in the PATServer file.
4. Add a default settings file for the controller in the Client/Settings/DefaultSettings folder.
5. Import the defaultSettings into the SettingConsolidator file (in Client/Settings) and add an entry into the deviceSettings dictionary.
#### 4.4 Porting the Framework
Porting the framework to another one of the experimental systems requires a number of steps. First copy all of the framework files to the machine hosting the UTBus modules and to the machine connected to the data collection devices. Then:
Server Changes
1. Modify the Python environment path to include both the Client and Server folders.
2. In the PATServer file, change the PORT value to an open one on the machine and the dataPath value to the location in which data should be saved (make sure to create this folder if it doesn’t already exist).
3. Remove the device controllers and mediators for devices which are not available to the setup.
Client Changes
1. Modify the Python environment path to include the Client folder.
2. Remove unnecessary settings from the DefaultSettings folder as well as from the SettingsConsolidator file.
3. Change the HOST and PORT in the PATClientSettings file to match that of the server.
4. Change the database used by the PATController within the init() function from ‘PAT’ to the appropriate one for the experiment.
5. Modify/replace the functions in the PATController to those of the experiment module replacing it.
Afterwards, it is highly recommended that all references to PAT- be replaced by YourExperimentDeviceHere-. This includes file names and contents.
### 5 Optimizer
#### 5.1 Particle Swarm Optimization
Particle Swarm Optimization (PSO) is an iterative optimization method. Each particle in the optimization has a position and a velocity. The position represents the location of the particle in the parameter space, and the velocity describes how the particle traverses the space. Each particle also experiences an "attraction" towards the best global particle (the particle with the best fitness value so far out of every particle evaluated) and towards the best position it has held in its own past. Every iteration, each particle's fitness is evaluated, and its historical best, along with the global best, are updated. Once every particle has been evaluated, new velocities are calculated, the particles are moved to new positions based on their velocities, and a new iteration can begin.
#### 5.2 Initialization
When the optimizer first starts, it seeds the parameter space with randomized particles that are initialized as follows:
\[ s = U(s_{\text{min}}, s_{\text{max}}) \]
\[ v = U(v_{\text{min}}, v_{\text{max}}) \]
where the min and max values represent the boundaries of the search space.
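Per dimension, this initialization can be sketched as below; the helper name is illustrative.

```python
import random

def init_particle(s_min, s_max, v_min, v_max, rng=random):
    """Draw position and velocity uniformly within the given per-dimension bounds."""
    position = [rng.uniform(lo, hi) for lo, hi in zip(s_min, s_max)]
    velocity = [rng.uniform(lo, hi) for lo, hi in zip(v_min, v_max)]
    return position, velocity

# Two-dimensional example with hypothetical bounds.
s, v = init_particle([0.0, -1.0], [1.0, 1.0], [-0.1, -0.1], [0.1, 0.1])
```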
#### 5.3 Update Formula
The velocities of the particles are updated based on the following formula.
\[ \mathbf{v} \leftarrow \omega \mathbf{v} + \phi_p r_p (\mathbf{s}_p - \mathbf{s}) + \phi_g r_g (\mathbf{s}_g - \mathbf{s}) \]
\( \omega \) is a damping factor on the particle's current velocity. \( \phi_p \) and \( \phi_g \) are tunable weighting factors towards the particle's historical best and the global best respectively. \( r_p \) and \( r_g \) are random values chosen from \( U(0, 1) \). \( \mathbf{s} \) represents the particle's position, and \( \mathbf{s}_p - \mathbf{s} \) and \( \mathbf{s}_g - \mathbf{s} \) are vectors towards the particle's historical best and the global best respectively. With the velocity determined, calculating the particle's new position is simply:
\[ \mathbf{s} \leftarrow \mathbf{s} + \mathbf{v} \]
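The two formulas above transcribe directly into a per-dimension update step. This is a sketch in pure Python (the helper name is illustrative; the real module's internals may differ):

```python
import random

def update_particle(s, v, s_p, s_g, w, phi_p, phi_g, rng=random):
    """One PSO step: new velocity from the update formula, then move."""
    r_p = rng.uniform(0, 1)  # fresh random weights each update
    r_g = rng.uniform(0, 1)
    # v <- w*v + phi_p*r_p*(s_p - s) + phi_g*r_g*(s_g - s), per dimension
    v = [w * vi + phi_p * r_p * (pi - si) + phi_g * r_g * (gi - si)
         for si, vi, pi, gi in zip(s, v, s_p, s_g)]
    # s <- s + v
    s = [si + vi for si, vi in zip(s, v)]
    return s, v
```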
#### 5.4 Boundaries
As part of the optimization process, it is necessary to pass bounds for every parameter to the optimizer. These bounds are used as hard bounds on the particles' positions and also to limit each particle's maximum velocity based on the following formula:
\[ |\mathbf{v}| \leq (s_{\text{max}} - s_{\text{min}}) \alpha \]
Where \( \alpha \) is a parameter between 0 and 1 that can be tuned to limit the maximum speed.
When a particle exceeds a boundary in any of its dimensions, its position in that dimension will be readjusted as either
\[ s \leftarrow U(s_{\text{min}}, s) \quad \text{or} \quad s \leftarrow U(s, s_{\text{max}}) \]
depending on whether the lower or upper boundary was exceeded. The velocity will then be adjusted to:
\[ \mathbf{v} \leftarrow \mathbf{s} - \mathbf{s}_{\text{old}} \]
These adjustments are based on the PSO Bound Handling and PSO High-Dimensional Spaces papers found in the Manuals folder.
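For a single dimension, the adjustment rules can be sketched as below. The text does not pin down the second argument of \( U(\cdot) \); this sketch reads it as the particle's pre-update position, which is an assumption (noted in the code), and the helper name is illustrative.

```python
import random

def handle_bound(s, s_old, s_min, s_max, rng=random):
    """Resample a position that left [s_min, s_max]; return (s, v).

    Assumption: the U(...) interval runs between the violated bound and
    the pre-update position s_old.
    """
    if s < s_min:
        s = rng.uniform(s_min, s_old)   # lower boundary exceeded
    elif s > s_max:
        s = rng.uniform(s_old, s_max)   # upper boundary exceeded
    v = s - s_old                       # v <- s - s_old
    return s, v
```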
#### 5.5 Implementation/Usage Details
The PAT framework has PSO built into it via the ParticleSwarmOptimizer module, which is treated as just another device by the framework. Like the other devices, the optimizer has a settings dictionary associated with it. The following settings are unique to it:
- **paramBounds**: A tuple of 2-tuples representing the lower and upper bounds for parameters.
- **numOfParticles**: The number of particles in the swarm.
- **numOfGenerations**: The number of generations to iterate the swarm over.
- **fitnessEvalScript**: A path to the fitness evaluation script on the server. This script will be executed in the current trial folder to evaluate the current particle's fitness.
- **phiG (\( \phi_g \))**: Weighting factor towards the global best.
- **phiP (\( \phi_p \))**: Weighting factor towards a particle's historical best.
- **w (\( \omega \))**: Velocity damping factor.
- **alpha (\( \alpha \))**: Speed limiting factor.
- **minimization**: Boolean indicating whether the optimization is a minimization or maximization.
The PSO Parameter Selection paper in the Manuals folder gives a summary of good PSO parameter values based on the number of parameters being optimized. Table 1 on page 7 gives a good summary of the results.
### 6 GIT
#### 6.1 Rationale
Git is a distributed version control system that allows for tracking and distribution of source code. It is currently used to track the entire PAT framework (link can be found here: https://github.com/vbarua/QDG-Lab-Framework). While the advantages of using git are numerous (read about them here: http://git-scm.com/about), the main use for this project is future support services. If at any point in the future the framework stops working or you desire new features, an up-to-date git repository would allow an off-site developer (i.e. Victor) to quickly and easily get the current version of the framework and see exactly what changes have been made. While the command-line version of git can be daunting, the free SourceTree application (http://www.sourcetreeapp.com/) makes it very easy to use. I would also recommend reading the first three chapters of this book, http://git-scm.com/book, to get a feel for the basics of git and its terminology.
#### 6.2 Account Details
A user account has been set up on https://github.com/ to allow the lab to contribute to the project. The username is QDGLab and the password is the same as that for the QDG-GATEWAY machine.
Package ‘AMPLE’
September 29, 2023
Title Shiny Apps to Support Capacity Building on Harvest Control Rules
Version 1.0.2
Author Finlay Scott [aut, cre] (<https://orcid.org/0000-0001-9950-9023>),
Pacific Community (SPC) [cph]
Maintainer Finlay Scott <finlays@spc.int>
Description Three Shiny apps are provided that introduce Harvest Control Rules (HCR) for fisheries management.
'Introduction to HCRs' provides a simple overview to how HCRs work. Users are able to select their own HCR and step through its performance, year by year. Biological variability and estimation uncertainty are introduced.
'Measuring performance' builds on the previous app and introduces the idea of using performance indicators to measure HCR performance.
'Comparing performance' allows multiple HCRs to be created and tested, and their performance compared so that the preferred HCR can be selected.
License GPL (>= 3)
URL https://github.com/PacificCommunity/ofp-sam-ample
Encoding UTF-8
RoxygenNote 7.2.3
Suggests knitr, markdown, bookdown, testthat (>= 3.1.5)
Config/testthat/edition 3
Depends shiny (>= 1.7.3)
Imports markdown, graphics, grDevices, RColorBrewer, stats, R6 (>= 2.5.1), scales (>= 1.2.1), shinyjs (>= 2.1.0), ggplot2 (>= 3.4.0), shinyscreenshot (>= 0.2.0)
VignetteBuilder knitr
NeedsCompilation no
Repository CRAN
Date/Publication 2023-09-29 07:40:09 UTC
**R topics documented:**
- assessment
- comparing_performance
- constant
- estimation_error
- get_hcr_ip
- get_hcr_op
- intro_hcr
- measuring_performance
- MP modules
- Stochasticity module
- Stock
- Stock module
- threshold
- Index
**assessment**
**Description**
Function used by `get_hcr_ip()` to generate input data for an assessment based HCR. The input to the HCR is depletion (i.e. Biomass / K).
**Usage**
```r
assessment(stock, mp_params, yr, iters = 1:dim(stock$biomass)[1])
```
**Arguments**
- `stock` The stock object
- `mp_params` A named list of MP parameters (with `est_sigma` and `est_bias` elements)
- `yr` The timestep that the biomass is taken from.
- `iters` Numeric vector of iters. Default is all of them.
comparing_performance 'Comparing HCR Performance' app launcher
Description
Launches the Comparing Performance Shiny app. See the 'Information' tab in the app for more information. Also see the package vignette(vignette("comparing_performance", package="AMPLE")) for a tutorial.
Usage
comparing_performance(...)
Arguments
... Not used
Examples
## Not run: comparing_performance()
constant Evaluates a constant harvest control rule
Description
Evaluates a constant harvest control rule, i.e. one that ignores the stock status and just returns the constant level (catch or effort). Used by the hcr_op function.
Usage
constant(mp_params, ...)
Arguments
mp_params The HCR / management procedure parameters used to evaluate the HCR (as a list).
... Unused
estimation_error
Description
Estimation error applied to the 'true' stock status to generate an 'observed' stock status used in the HCR. The error is a combination of bias and lognormally distributed noise.
Usage
estimation_error(input, sigma, bias)
Arguments
input A vector of the 'true' stock status
sigma Observation error standard deviation
bias Observation error bias
get_hcr_ip
Get the input to the HCR
Description
Run the MP analyses function to generate the input to the HCR i.e. observed stock status. For example, estimated biomass from an assessment.
Usage
get_hcr_ip(stock, mp_params, yr, ...)
Arguments
stock The stock object
mp_params The HCR / management procedure parameters used to evaluate the HCR (as a list).
yr The time step of the true stock status used to generate the HCR IP.
... Other arguments, including iters
get_hcr_op
Evaluates the harvest control rule.
Description
Evaluates the harvest control rule in a single year (timestep).
Usage
get_hcr_op(stock, mp_params, yr, iters = 1:dim(stock$biomass)[1])
Arguments
stock The stock object
mp_params The HCR / management procedure parameters used to evaluate the HCR (as a list).
yr The timestep.
iters A numeric vector of iters.
Value
A vector of outputs from the HCR.
intro_hcr
Introduction to HCRs app launcher
Description
Launches the 'Introduction to HCRs' Shiny app. See the 'Information' tab in the app for more information. Also see the package vignette (vignette("intro_hcr", package="AMPLE")).
Usage
intro_hcr(...)
Arguments
... Not used
Examples
## Not run: intro_hcr()
measuring_performance Measuring performance app launcher
Description
Launches the 'Measuring Performance' Shiny app. See the 'Information' tab in the app for more information. Also see the package vignette (vignette("measuring_performance", package="AMPLE")) for a tutorial.
Usage
measuring_performance(...)
Arguments
... Not used
Examples
## Not run: measuring_performance()
MP modules mpParamsSetterUI
Description
The interface for the HCR options. The parameter selection inputs shown in the app are conditional on the selected type of HCR. Some of the inputs have initial values that can be set using the function arguments.
Does the setting part of the MP params module. Returns a list of MP params based on the MP inputs.
Creates the MP params list based on the MP selection from the Shiny UI. Defined outside of a reactive environment above so we can use it non-reactively (helpful for testing).
Usage
mpParamsSetterUI(
id,
mp_visible = NULL,
title = "Select the type of HCR you want to test.",
init_thresh_max_catch = 140,
init_thresh_belbow = 0.5,
init_constant_catch = 50,
init_constant_effort = 1
)
mpParamsSetterServer(id, get_stoch_params = NULL)
mp_params_switcheroo(input, est_sigma = 0, est_bias = 0)
Arguments
id The id (shiny magic)
mp_visible Which HCR types to show.
title The title.
init_thresh_max_catch Initial value of the maximum catch for the catch threshold HCR.
init_thresh_belbow Initial value of the belbow for the catch threshold HCR.
init_constant_catch Initial value of constant catch for the constant catch HCR.
init_constant_effort Initial value of constant effort for the constant effort HCR.
get_stoch_params Reactive expression that gets the parameters from the stochasticity setter. Otherwise est_sigma and est_bias are set to 0.
input List of information taken from the Shiny UI (mpParamsSetterUI)
est_sigma Standard deviation of the estimation variability (default = 0).
est_bias Estimation bias as a proportion. Can be negative (default = 0).
Value
A taglist
A list of HCR options.
Stochasticity module stochParamsSetterUI
Description
stochParamsSetterUI() is the UI part for the stochasticity options. Stochasticity is included in the projections in two areas: biological variability (e.g. recruitment) and estimation error (to represent the difference between the 'true' status of the stock and the estimated status that is used by the HCR). Estimation error includes bias and variability. The arguments to this function allow only some of these elements to be shown.
stochParamsSetterServer() does the server side stuff for the stochasticity options.
set_stoch_params() sets up default values for the stochasticity parameters. Defined as a separate function so it can be used for testing outside of a reactive environment.
Usage
```
stochParamsSetterUI(
id,
show_var = FALSE,
show_biol_sigma = TRUE,
show_est_sigma = TRUE,
show_est_bias = TRUE,
init_biol_sigma = 0,
init_est_sigma = 0,
init_est_bias = 0
)
```
```
stochParamsSetterServer(id)
```
```
set_stoch_params(input)
```
Arguments
- **id**: The id (shiny magic)
- **show_var**: Show the variability options when app opens (default is FALSE).
- **show_biol_sigma**: Show the biological productivity variability option (default is TRUE).
- **show_est_sigma**: Show the estimation variability option (default is TRUE).
- **show_est_bias**: Show the estimation bias option (default is TRUE).
- **init_biol_sigma**: Default value for biological productivity variability (ignored if not shown).
- **init_est_sigma**: Default value for estimation variability (ignored if not shown).
- **init_est_bias**: Default value for estimation bias (ignored if not shown).
- **input**: A list of stochasticity parameters.
Value
A taglist
A list of stochasticity options.
---
**Stock**
*R6 Class representing a stock*
**Description**
A stock object has life history parameters, fields and methods for a biomass dynamic model.
Details
A stock has biomass, effort, catch and hcr_ip and hcr_op fields as well as the life history parameters. The population dynamics are a simple biomass dynamic model. The Stock class is used for the Shiny apps in the AMPLE package.
Public fields
biomass Array of biomass
catch Array of catches
effort Array of fishing effort
hcr_ip Array of HCR input signals
hcr_op Array of HCR output signals
msy MSY (default = 100).
r Growth rate (default = 0.6). Set by the user in the app.
k Carrying capacity (default = NULL - set by msy and r when object is initialised).
p Shape of the production curve (default = 1).
q Catchability (default = 1).
lrp Limit reference point, expressed as depletion (default = 0.2).
trp Target reference point, expressed as depletion (default = 0.5).
b0 Virgin biomass (default = NULL - set by msy and r when object is initialised).
current_corrnoise Stores the current values of the correlated noise (by iteration).
biol_sigma Standard deviation of biological variability (default = 0).
last_historical_timestep The last historical timestep of catch and effort data.
Methods
Public methods:
- `Stock$new()`
- `Stock$reset()`
- `Stock$reactive()`
- `Stock$fill_history()`
- `Stock$fill_catch_history()`
- `Stock$fill_biomass()`
- `Stock$as_data_frame()`
- `Stock$project()`
- `Stock$relative_cpue()`
- `Stock$relative_effort()`
- `Stock$replicate_table()`
- `Stock$time_periods()`
- `Stock$performance_indicators()`
- `Stock$pi_table()`
• Stock$clone()
Method new(): Create a new stock object, with fields of the right dimension and NA values (by calling the reset() method; see the reset() method for more details).
Usage:
Stock$new(stock_params, mp_params, niters = 1)
Arguments:
stock_params A list of stock parameters with essential elements: r (growth rate, numeric), stock_history (string: "fully", "over", "under"), initial_year (integer), last_historical_timestep (integer), nyears (integer), biol_sigma (numeric).
mp_params A list of the MP parameters. Used to fill HCR ip and op.
niters The number of iters in the stock (default = 1).
Returns: A new Stock object.
Method reset(): Resets an existing stock object, by remaking all fields (possibly with different dimensions for the array fields). Fills up the catch, effort and biomass fields in the historical period based on the stock history and life history parameters in the stock_params argument. This is a reactive method which invalidates a reactive instance of this class after it is called.
Usage:
Stock$reset(stock_params, mp_params, niters)
Arguments:
stock_params A list with essential elements: r (growth rate, numeric, default = 0.6), stock_history (string: "fully", "over", "under", default = "fully"), initial_year (integer, default = 2000), last_historical_timestep (integer, default = 10), nyears (integer, default = 30), biol_sigma (numeric, default = 0).
mp_params A list of the MP parameters. Used to fill HCR ip and op.
niters The number of iters in the stock (default = 1).
Returns: A new Stock object.
Method reactive(): Method to create a reactive instance of a Stock.
Usage:
Stock$reactive()
Returns: a reactiveExpr.
Method fill_history(): Fills the historical period of the stock
Usage:
Stock$fill_history(stock_params, mp_params)
Arguments:
stock_params Named list with last_historical_timestep and stock_history elements.
mp_params A list of the MP parameters. Used to fill HCR ip and op.
Method fill_catch_history(): Fill up the historical period of catches with random values to simulate a catch history
Usage:
Stock$fill_catch_history(stock_params)
Arguments:
stock_params A list with essential elements: r (growth rate, numeric), stock_history (string: "fully", "over", "under"), initial_year (integer), last_historical_timestep (integer), nyears (integer).
stock_history Character string of the exploitation history (default = "fully", alternatives are "under" or "over").
Method fill_biomass(): Fills the biomass in the next timestep based on the current biomass and catches. The surplus production model has the general form:
\[ B_{t+1} = B_t + f(B_t) - C_t \]
where the production function \( f() \) is a Pella & Tomlinson model with shape parameter \( p \):
\[ f(B_t) = \frac{r}{p} B_t \left(1 - \left(\frac{B_t}{k}\right)^p\right) \]
Here \( p \) is fixed at 1 to give a Schaefer model. The CPUE is:
\[ cpue = \frac{C_t}{E_t} = q B_t \]
Usage:
Stock$fill_biomass(ts, iters = 1:dim(self$biomass)[1])
Arguments:
ts The biomass time step to be filled (required catch etc in ts - 1).
iters The iterations to calculate the biomass for (optional - default is all of them).
Method as_data_frame(): Produces a data.frame of some of the array-based fields, like biomass. Just used for testing purposes.
Usage:
Stock$as_data_frame()
Method project(): Projects the stock over the time steps given and updates the biomass, HCR ip / op and catches. It uses a simple biomass dynamic model where the catches or fishing effort are set every time step by the harvest control rule.
Usage:
Stock$project(timesteps, mp_params, iters = 1:dim(self$biomass)[1])
Arguments:
timesteps The timesteps to project over. A vector of length 2 (start and end).
mp_params A vector of management procedure parameters.
iters A vector of iterations to be projected. Default is all the iterations in the stock
Returns: A stock object (a reactiveValues object with bits for the stock)
Method relative_cpue(): The catch per unit effort (CPUE, or catch rate) relative to the CPUE in the last historical period.
Usage:
Stock$relative_cpue()
Returns: An array of same dims as the catch and effort fields.
Method relative_effort(): The effort relative to the effort in the last historical period.
Usage:
Stock$relative_effort()
Returns: An array of same dims as the effort field.
Method replicate_table(): Summarises the final year of each iteration. Only used for the Measuring Performance app.
Usage:
Stock$replicate_table(iters = 1, quantiles = c(0.05, 0.95))
Arguments:
iters The iterations to calculate the table values for (default is iteration 1).
quantiles Numeric vector of the quantile range. Default values are 0.05 and 0.95.
Method time_periods(): Calculates the short, medium and long term periods to calculate the performance indicators over, based on the last historic year of data and the number of years in the projection.
Usage:
Stock$time_periods()
Method performance_indicators(): Calculates the performance indicators.
Usage:
Stock$performance_indicators(
iters = 1:dim(self$biomass)[1],
quantiles = c(0.05, 0.95)
)
Arguments:
iters The iterations to calculate the table values for (default is all of them).
quantiles Numeric vector of the quantile range. Default values are 0.05 and 0.95.
Returns: A data.frame
Method pi_table(): Makes a table of the performance indicators.
Usage:
Stock$pi_table(iters = 1:dim(self$biomass)[1], quantiles = c(0.05, 0.95))
Arguments:
iters The iterations to calculate the table values for (default is all of them).
quantiles Numeric vector, length 2, of the low and high quantiles.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Stock$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
---
**Stock module**
*stockParamsSetterUI*
**Description**
stockParamsSetterUI() is the interface for the stock options (e.g. life history and exploitation status). stockParamsSetterServer() does the setting of the stock parameters in the server.
get_stock_params() sets up default values for the stock, including the year range. It’s a separate function so it can be used and tested outside of a reactive environment.
**Usage**
stockParamsSetterUI(id)
stockParamsSetterServer(id, get_stoch_params = NULL)
get_stock_params(input, biol_sigma = 0)
**Arguments**
- **id** Shiny magic
- **get_stoch_params** Reactive expression that accesses the stochasticity module server.
- **input** List of stock parameters taken from the shiny UI (stockParamsSetterUI()).
- **biol_sigma** Standard deviation of the biological variability (default = 0).
**Value**
A taglist (stockParamsSetterUI()); a list of stock options (get_stock_params()).
---
**threshold**
*Evaluates a threshold harvest control rule*
**Description**
Evaluates a threshold (i.e. hockey stick) harvest control rule. Used by the hcr_op function.
**Usage**
threshold(input, mp_params, ...)
Arguments
input A vector of the 'true' stock status
mp_params The HCR / management procedure parameters used to evaluate the HCR (as a list).
... Unused
Value
A vector of the same dimension as the input.
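The hockey-stick shape evaluated by threshold() can be sketched in Python (the real function is R; the parameter names lim, elbow, min_out and max_out are assumptions for illustration, standing in for whatever mp_params carries):

```python
# Hockey-stick HCR sketch: flat at min_out below 'lim', flat at max_out
# above 'elbow', and linear in between. Parameter names are hypothetical.
def threshold(input_status, mp_params):
    lim, elbow = mp_params["lim"], mp_params["elbow"]
    min_out, max_out = mp_params["min_out"], mp_params["max_out"]
    out = []
    for x in input_status:
        if x <= lim:
            out.append(min_out)
        elif x >= elbow:
            out.append(max_out)
        else:
            out.append(min_out + (x - lim) / (elbow - lim) * (max_out - min_out))
    return out  # same length as the input, as documented

# Output rises from min_out to max_out as stock status moves from lim to elbow.
hcr = threshold([0.1, 0.35, 0.9],
                {"lim": 0.2, "elbow": 0.5, "min_out": 0.0, "max_out": 1.0})
```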
Index
assessment
comparing_performance
constant
estimation_error
get_hcr_ip
get_hcr_op
get_stock_params (Stock module)
intro_hcr
measuring_performance
MP modules
mp_params_switcheroo (MP modules)
mpParamsSetterServer (MP modules)
mpParamsSetterUI (MP modules)
set_stoch_params (Stochasticity module)
Stochasticity module
stochParamsSetterServer (Stochasticity module)
stochParamsSetterUI (Stochasticity module)
Stock
Stock module
stockParamsSetterServer (Stock module)
stockParamsSetterUI (Stock module)
threshold
---
METHOD FOR CODING PICTURES USING HIERARCHICAL TRANSFORM UNITS
Inventors: Robert A. Cohen, Somerville, MA (US); Anthony Vetro, Arlington, MA (US); Huifang Sun, Woburn, MA (US)
Assignee: Mitsubishi Electric Research Laboratories, Inc., Cambridge, MA (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 247 days.
Appl. No.: 13/169,959
Filed: Jun. 27, 2011
Prior Publication Data
US 2012/0281928 A1 Nov. 8, 2012
Related U.S. Application Data
Provisional application No. 61/482,873, filed on May 5, 2011.
Int. Cl. G06K 9/36 (2006.01)
U.S. Cl. 382/232
Field of Classification Search
USPC 382/232-233, 236, 238-240, 244-250; 375/240.11, 240.18-240.19, 240.22; 348/395.1, 348/400.1-403.1, 408.1-413.1, 416.1, 420.1-421.1; 708/317, 400-405
See application file for complete search history.
ABSTRACT
A bitstream includes coded pictures, split-flags for generating a transform tree, and a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree.
20 Claims, 6 Drawing Sheets
[Drawing sheets omitted. FIG. 1 (PRIOR ART) shows a transform tree with levels 0-3 and nodes a-j over a block of data. FIG. 4 (Step 1; input: PU partitioning) is annotated: if the split-flag is set, first split into four nodes; if the split-flag is not set, do not split and transform; merge nodes so each PU contains fewer transforms.]
FIELD OF THE INVENTION
The invention relates generally to coding pictures, and more particularly to methods for coding pictures using hierarchical transform units in the context of encoding and decoding pictures.
BACKGROUND OF THE INVENTION
For the High Efficiency Video Coding (HEVC) standard, currently under development as the successor to H.264/MPEG-4 AVC, the application of TUs to residual blocks is represented by a tree, as described in "Video Compression Using Nested Quadtree Structures, Leaf Merging, and Improved Techniques for Motion Representation and Entropy Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1676-1687, December 2010.
Coding Layers
The hierarchical coding layers defined in the standard include video sequence, picture, slice, and treeblock layers. Higher layers contain lower layers.
Treeblock
According to the proposed standard, a picture is partitioned into slices, and each slice is partitioned into a sequence of treeblocks (TBs) ordered consecutively in a raster scan. Pictures and TBs are broadly analogous to frames and macroblocks, respectively, in previous video coding standards, such as H.264/AVC. The maximum allowed size of the TB is 64x64 pixels luma (intensity), and chroma (color) samples.
Coding Unit
A Coding Unit (CU) is the basic unit of splitting used for Intra and Inter prediction. Intra prediction operates in the spatial domain of a single picture, while Inter prediction operates in the temporal domain among the picture to be predicted and a set of previously-decoded pictures. The CU is always square, and can be 128x128 (LCU), 64x64, 32x32, 16x16 and 8x8 pixels. The CU allows recursive splitting into four equally sized blocks, starting from the TB. This process gives a content-adaptive coding tree structure comprised of CU blocks that can be as large as the TB, or as small as 8x8 pixels.
Prediction Unit (PU)
A Prediction Unit (PU) is the basic unit used for carrying the information (data) related to the prediction processes. In general, the PU is not restricted to being square in shape, in order to facilitate partitioning, which matches, for example, the boundaries of real objects in the picture. Each CU may contain one or more PUs.
Transform Unit (TU)
As shown in FIG. 1, a root node 101 of the transform tree 100 corresponds to an NxN Transform Unit (TU) applied to a block of data 110. The TU is the basic unit used for the transformation and quantization processes. In the proposed standard, the TU is always square and can take a size from 4x4 to 32x32 pixels. The TU cannot be larger than the PU and does not exceed the size of the CU. Multiple TUs can be arranged in a tree structure, henceforth referred to as a transform tree.
Each CU may contain one or more TUs, where multiple TUs can be arranged in a tree structure. The example transform tree is a quadtree with four levels 0-3. If the transform tree is split once, then four N/2xN/2 TUs are applied. Each of these TUs can subsequently be split down to a predefined limit. For Intra-coded pictures, transform trees are applied over "Prediction Units" (PUs) of Intra-prediction residual data. These PUs are currently defined as squares or rectangles of size 2Nx2N, 2NxN, Nx2N, or NxN pixels. For Intra-coded pictures, the square TU must be contained entirely within a PU, so the largest allowed TU size is typically 2Nx2N or NxN pixels. The relation between a-j TUs and a-j PUs within this transform tree structure is shown in FIG. 1.
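The split-flag-driven recursion described above can be sketched as follows (a simplified Python illustration, assuming flags are consumed in depth-first order; this is not the HEVC reference implementation):

```python
# Build a quadtree of square TUs from a flat list of split-flags consumed
# in depth-first order. Leaves are TU sizes; internal nodes are 4-element
# lists. A node is a leaf if it reaches the minimum size, runs out of
# flags, or its split-flag is not set.
def build_transform_tree(size, split_flags, min_size=4):
    if size <= min_size or not split_flags or not split_flags.pop(0):
        return size  # leaf TU of this size
    return [build_transform_tree(size // 2, split_flags, min_size)
            for _ in range(4)]

# Root 32x32 TU: split the root once, leave all four children unsplit.
tree = build_transform_tree(32, [True, False, False, False, False])
# tree == [16, 16, 16, 16]
```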
As shown in FIG. 2, a new PU structure has been proposed for the HEVC standard, as described by Cao, et al., "CE6.01 Report on Short Distance Intra Prediction Method (SDIP)," JCTVC-E278, March 2011. With the SDIP method, PUs can be strips or rectangles 201 as small as one or two pixels wide, e.g. Nx1, 2Nx1, 2NxN, or 1xN pixels. When overlaying a transform tree on an Intra-coded block that has been partitioned into such narrow PUs, the transform tree is split to a level where the size of the TU is only 2x2 or 1x1. The TU size cannot be greater than the PU size; otherwise, the transformation and prediction process is complicated. The prior-art SDIP method that utilizes these new PU structures defines rectangular TUs, for example 1xN and 2xN TUs. Due to the rectangular TU sizes, the prior art is not compatible with the transform tree structure in the current draft specification of the HEVC standard. SDIP does not use the transform tree mandated in the standard; instead, the TU size is implicitly dictated by the sizes of the PUs.
Hence, there is a need for a method of splitting and applying square and rectangular TUs on rectangular, and sometimes very narrow rectangular PUs, while still maintaining the tree structure of the TUs as defined by the proposed standard.
SUMMARY OF THE INVENTION
A bitstream includes coded pictures, and split-flags. The split flags are used for generating a transform tree. Effectively, the bit stream is a partitioning of coding units (CUs) into Prediction Units (PUs).
The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TU) associated with CUs.
The generation splits each TU only if the corresponding split-flag is set.
For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging.
Then, data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of tree splitting for transform units according to the prior art;
FIG. 2 is a diagram of a decomposition into rectangular prediction units according to the prior art;
FIG. 3A is a flow diagram of an example decoding system used by embodiments of the invention;
FIG. 3B is a flow diagram of transform tree generation used by embodiments of the invention;
FIG. 4 is a diagram of a first step of the transform tree generation according to this invention; and
FIG. 5 is a diagram of a second step of the transform tree generation according to this invention.
DETAILED DESCRIPTION OF THE INVENTION
The embodiments of our invention provide a method for coding pictures using hierarchical transform units (TUs). Coding encompasses encoding and decoding. Generally, encoding and decoding are performed in a codec (CODER-DECODER). The codec is a device or computer program capable of encoding and/or decoding a digital data stream or signal. For example, the coder encodes a bit stream or signal for compression, transmission, storage or encryption, and the decoder decodes the encoded bit stream for playback or editing.
The method applies square and rectangular TUs on rectangular, and sometimes very narrow rectangular, portions of pictures, while still maintaining a hierarchical transform tree structure of the Transform Units (TUs) as defined in the High Efficiency Video Coding (HEVC) standard. "Transform" can refer either to a forward transform or an inverse transform. In the preferred embodiment, the transform tree is a quadtree (Q-tree); however, other tree structures, such as binary trees (B-trees), octrees, and generally N-ary trees, are also possible.
Input to the method is an N×N coding unit (CU) partitioned into Prediction Units (PUs). Our invention generates a transform tree that is used to apply TUs on the PUs.
Decoding System
FIGS. 3A-3B show an example decoder and method system 300 used by embodiments of the invention, i.e., the steps of the method are performed by the decoder, which can be software, firmware or a processor connected to a memory and input/output interfaces as known in the art.
Input to the method (or decoder) is a bit stream 301 of coded pictures, e.g., an image or a sequence of images in a video. The bit stream is parsed 310 to obtain split-flags 311 for generating the transform tree, and data 312 to be processed, e.g., N×N blocks of data. The split-flags are associated with TUs of corresponding nodes of a transform tree 321. The data includes a partitioning of the coding units (CUs) into Prediction Units (PUs).
In other words, any node represents a CU at a given depth in the transform tree. In most cases, only TUs at leaf nodes are realized. However, the code can implement the TU at nodes higher in the hierarchy of the transform tree.
The split-flags are used to generate 320 a transform tree 321. Then, the data in the PUs are decoded according to the transform tree to produce decoded data 302.
The generation step 320 includes splitting 350 each TU only if the split-flag 311 is set.
For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU. For example, a 16x8 PU can be partitioned by two 8x8 TUs. These two 8x8 TUs can be merged into one 16x8 TU. In another example, a 64x64 square PU is partitioned into sixteen 8x32 TUs. Four of these TUs are merged into a 32x32 square TU, and the other TUs remain as 8x32 rectangles. The merging solves the problem in the prior art of having many very small, e.g., 1x1, TUs, see Cao, et al. The transform tree 321 is then modified 370 according to the merging.
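The merging step can be sketched as follows (a hypothetical Python illustration; the helper name and the assumption that the listed TUs exactly tile the PU are ours, and the 16x8 example is taken from the text):

```python
# Merge multiple TUs covering one PU into a single TU aligned with the
# PU's dimensions. 'tus' is a list of (width, height) tiles assumed to
# exactly tile the PU; the merged TU spans the whole PU.
def merge_tus_in_pu(pu_w, pu_h, tus):
    if len(tus) <= 1:
        return tus
    # sanity check: the TUs must tile the PU exactly
    assert sum(w * h for w, h in tus) == pu_w * pu_h
    return [(pu_w, pu_h)]  # one larger TU spanning the whole PU

merged = merge_tus_in_pu(16, 8, [(8, 8), (8, 8)])
# merged == [(16, 8)]
```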
The splitting, partitioning, merging and modifying can be repeated 385 until a size of the TU is equal to a predetermined minimum 380.
After the transform tree has been generated 320, the data 312 contained in each PU can be decoded using the TUs associated with the PU.
Various embodiments are now described.
Embodiment 1
FIG. 4 shows the partitioning of the input CU into PUs 312, the iterative splitting 350 (or not) of the TUs covering the PUs according to split-flags, and the subsequent merging.
Step 1: A root node of the transform tree corresponds to an initial N×N TU covering the N×N PU 312. The bit stream 301 received by the decoder 300, as shown in FIG. 3, contains the split-flag 311 that is associated with this node. If the split-flag is not set 401, then the corresponding TU is not split, and the process for this node is complete. If the split-flag is set 402, then the N×N TU is split into TUs 403. The number of TUs produced corresponds to the structure of the tree, e.g., four for a quadtree. It is noted that the number of TUs produced by the splitting can vary.
Then, the decoder determines whether the PU includes multiple TUs. For example, a rectangular PU may include multiple TUs, e.g., two square TUs, each of size N/2×N/2. In this case, the multiple TUs in that PU are merged 404 into an N×N/2 or N/2×N rectangular TU 405 aligned with the dimensions of the PU. A rectangular PU or TU has a longer axis corresponding to its length and a shorter axis corresponding to its width. Merging square TUs into larger rectangular TUs eliminates the problem where a long narrow rectangle is split into many small square TUs, as in the prior art, see Cao et al. Merging also reduces the number of TUs in the PUs.
Having many small TUs is usually less effective than having a few larger TUs, especially when the dimensions of these TUs are small, or when multiple TUs cover similar data.
The transform tree is then modified. The branch of the transform tree that corresponded to the first N/2×N/2 TU 406 is redefined to correspond to the merged rectangular TU, and the branch that corresponded to the second N/2×N/2 TU is eliminated.
Step 2: For each node generated in Step 1, if a size of the TU is equal to a predefined minimum, the process is done for that node. Each remaining node is further split when the associated split-flag is set, or if the TU for that node is not contained entirely within the PU.
Unlike Step 1, however, the way that the node is split depends upon the shape of the PU, as shown in FIG. 5, because the PUs can have arbitrary shapes and sizes. This splitting is performed as described in Step 2a or Step 2b below. The decision whether to look for the split-flag in the bit stream or to split when the TU covers more than one PU can be made beforehand, i.e., the system is defined such that the split-flag is signaled in the bit stream, or the split-flag is inferred based upon criteria such as minimum or maximum TU sizes, or whether a TU spans multiple PUs.
Implicit Split-Flag
Alternatively, an "implicit split-flag" can be parsed from the bit stream 301. If the implicit split-flag is not set, then the split-flag is signaled for the corresponding node. If the implicit split-flag is set, then the split-flag is not signaled for this node, and the splitting decision is made based on predefined split conditions. The predefined split conditions can include other factors, such as whether the TU spans multiple PUs, or if the TU size limitation is met. In this case, the implicit split-flag is received before the split-flag, if any.
For example, the implicit split-flag can be received before each node, before each transform tree, before each image or video frame, or before each video sequence. For Intra PUs, a TU is not allowed to span multiple PUs because the PU is predicted from a set of neighboring PUs, so those neighboring PUs are to be fully decoded, inverse transformed, and reconstructed in order to be used for predicting the current PU.
In another example, the implicit flag cannot be set, but predefined metrics or conditions are used to decide whether to split a node without requiring the presence of a split-flag.
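The split decision, with and without an implicit split-flag, might look like this (a Python sketch; the predefined conditions shown, a maximum TU size and spanning multiple PUs, are examples drawn from the text, and the 1-D size comparisons are toy stand-ins for real geometry checks):

```python
# Sketch of the implicit-split-flag logic: when the implicit flag is set,
# the split decision comes from predefined conditions; otherwise an
# explicit split-flag is read from the bitstream. The concrete checks
# below are illustrative stand-ins, not the standard's actual conditions.
def should_split(implicit_flag, bitstream_flags, tu_size, pu_sizes, max_tu=32):
    if implicit_flag:
        spans_multiple_pus = tu_size > min(pu_sizes)  # toy 1-D stand-in
        return tu_size > max_tu or spans_multiple_pus
    return bitstream_flags.pop(0)  # explicit flag signaled in the bitstream
```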
Step 2a: If the TU for this node is square, the process goes back to Step 1 treating this node as a new root node and splitting it into four square TUs, e.g., of size N/4 x N/4.
Step 2b: If the TU for this node is rectangular, e.g., N/2 x N, then the node is split into two nodes corresponding to N/4 x N TUs. Similarly, an N x N/2 TU is split into two nodes corresponding to N x N/4 TUs. The process then repeats Step 2 for each of these nodes, ensuring that rectangular TUs are split along the direction of the longer axis, so that rectangular TUs become thinner.
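Step 2b's rule, a cut running along the longer axis so that rectangles become thinner, can be sketched as (hypothetical Python; dimensions are (width, height) pairs):

```python
# Split a rectangular TU into two TUs with the cut running along the
# longer axis (Step 2b): the shorter dimension is halved, so an
# N/2 x N TU becomes two N/4 x N TUs and rectangles get thinner.
def split_rect_tu(w, h):
    if w < h:  # taller than wide: halve the width
        return [(w // 2, h), (w // 2, h)]
    return [(w, h // 2), (w, h // 2)]

split_rect_tu(8, 16)   # -> [(4, 16), (4, 16)]
split_rect_tu(16, 4)   # -> [(16, 2), (16, 2)]
```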
Embodiment 2
In this embodiment, Step 2b is modified so that nodes associated with rectangular TUs are split into multiple nodes, e.g., four nodes and four TUs. For example, an N/2 x N TU is split into four N/8 x N TUs. This partitioning into a larger number of TUs can be beneficial for cases where the data in the PU is different for different portions of the PU. Rather than requiring two levels of a binary tree to split one rectangular TU into four rectangular TUs, this embodiment requires only one quadtree level, and thus only one split-flag, to split one TU into four rectangular TUs. This embodiment can be predefined, or can be signaled as a “multiple split-flag” in the bitstream, similar to the way the implicit flag was signaled.
Embodiment 3
Here, Step 1 is modified so that nodes associated with square TUs are not merged to become very large TUs until the size of the square TU is less than a predefined threshold. For example, if the threshold is four, then a rectangular 8 x 4 PU may be covered by two 4 x 4 TUs. A 4 x 2 PU, however, may not be covered by two 2 x 2 TUs. In this embodiment, Embodiment 1 is applied, and the two nodes are merged to form a 4 x 2 TU to cover the 4 x 2 PU. This embodiment is useful for cases where square TUs are preferred due to performance or complexity considerations, and rectangular TUs are used only when the square TUs lose effectiveness due to their small dimensions.
Embodiment 4
In this embodiment, Step 2b is modified so that nodes associated with rectangular TUs can be split to form more than two square or rectangular TUs, where the split is not necessarily aligned with the longer dimension of the rectangle. For example, a 16 x 4 TU can be split into four 4 x 4 TUs or two 8 x 4 TUs. The choice of whether to split into a square or rectangular TU can be explicitly indicated by a flag in the bitstream, as was the case for the implicit flag, or it can be predefined as part of the encoding/decoding process.
This embodiment is typically used for very large rectangular TUs, e.g., 64 x 16, so that eight 16 x 16 TUs are used instead of two 64 x 8 TUs. Another example splits a 64 x 16 TU into four 32 x 8 TUs. A very long horizontal TU, for example, can produce artifacts such as ringing in the horizontal direction, so this embodiment reduces the artifacts by reducing the maximum length of a rectangular TU. This maximum length may also be included as a signal in the bitstream. Similarly, a maximum width can be specified.
Embodiment 5
In this embodiment, Step 1 is modified so that the N x N TU is directly split into rectangular TUs, rather than into square TUs of size N/2 x N/2. For example, the N x N TU can be split into four N/4 x N TUs. This embodiment differs from Embodiment 2 in that a square TU can be split directly into multiple rectangular TUs, even though the PU may be square.
This embodiment is useful for cases where features in the PU are oriented horizontally or vertically, so that horizontal or vertical rectangular TUs aligned with the direction of the features can be more effective than multiple square TUs that split the oriented data in the PU. Features can include color, edges, ridges, corners, objects, and other points of interest. As before, whether or not to do this kind of splitting can be predefined or signaled, as was the case for the implicit split-flag.
Embodiment 6
In this embodiment, Step 1 is modified so that a TU can span multiple PUs. This can occur when the PUs are Inter-predicted. For example, Inter-predicted PUs are predicted using data from previously-decoded pictures, not from data decoded from within the same CU. A transform can therefore be applied over multiple PUs within a CU.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
We claim:
1. A method for coding pictures, comprising the steps of: parsing a bitstream including coded pictures to obtain split-flags for generating a transform tree, and a partitioning of coding units (CUs) into Prediction Units (PUs); generating the transform tree according to the split-flags, wherein nodes in the transform tree are transform units (TUs) associated with the CUs, wherein generating the transform tree comprises: splitting each TU only if the split-flag is set; merging, for each PU that includes multiple TUs, the multiple TUs into a larger TU; modifying the transform tree according to the splitting and merging; and decoding data contained in each PU using the TUs associated with the PU according to the transform tree, wherein the steps are performed in a processor.
2. The method of claim 1, wherein square TUs are split into multiple rectangular TUs.
3. The method of claim 1, further comprising: repeating the splitting, merging and modifying until a size of each TU is equal to a predetermined minimum.
4. The method of claim 3, wherein the repeating continues when the TU for a particular node is not contained entirely within the associated PU.
5. The method of claim 1, wherein the bitstream includes an implicit split-flag, and if the implicit split-flag is not set, then the split-flag is signaled in the bitstream for the corresponding node in the transform tree.
6. The method of claim 3, wherein the bitstream includes an implicit-split flag, and the repeating is performed only if the implicit-split flag is set and a predefined split condition is met.
7. The method of claim 1, wherein the splitting of a rectangular TU is along a direction of a longer axis of the rectangular TU.
8. The method of claim 1, wherein the splitting produces more than two TUs.
9. The method of claim 1, wherein a maximum length or a maximum width of the TUs are reduced.
10. The method of claim 1, wherein the PUs have arbitrary shapes and sizes.
11. The method of claim 1, wherein the splitting produces rectangular TUs.
12. The method of claim 1, wherein horizontal rectangular TUs and vertical rectangular TUs are aligned with a direction of features in the PU.
13. The method of claim 1, wherein the PU contains a portion of video data.
14. The method of claim 1, wherein the PU contains residual data obtained from a prediction process.
15. The method of claim 1, wherein the transform tree is an N-ary tree.
16. The method of claim 1, wherein the splitting of rectangular TUs is along a direction of a shorter axis.
17. The method of claim 1, wherein square or rectangular TUs are merged into larger TUs.
18. The method of claim 15, wherein the value of N of the N-ary tree differs for different nodes of the transform tree.
19. The method of claim 1, wherein the TU spans multiple PUs when the PUs are Inter-predicted.
20. The method of claim 1, wherein the TUs are represented by leaf nodes of the transform tree.
---
Service Terms
Any terms defined but not used herein shall have the meanings ascribed to them in your contract with us ("Agreement"). Definitions are listed below at the end. Capitalized terms used but not defined herein have the definition provided in the Agreement.
1. General Terms
1.1. Access; Usage. You are solely (and we shall not be held) responsible for (a) maintaining the security and confidentiality of your access credentials to the Service Offerings and (b) any use of, or decisions based on, the Service Offerings associated with your account(s), whether or not authorized by you. You may only access the Service Offerings using the authorized access credentials (e.g., API access tokens, username/password, license keys, etc.).
1.2. Prohibited Uses. You may not engage in any of the following uses of the Service Offerings:
1.2.1. disseminate material that is abusive, obscene, pornographic, defamatory, harassing, grossly offensive, vulgar, threatening, or malicious;
1.2.2. aid or implement practices that violate or are intended to violate basic human rights or civil liberties (for clarity, you may not use the Service Offerings to assist in the creation of databases of identifying information for any government to abrogate any human rights, civil rights, or civil liberties of individuals on the basis of race, gender, gender expression or gender identity, sexual orientation, religion, age, national origin or based on any protected classification under applicable laws);
1.2.3. violate the copyright, trademark, patent, trade secret, or other intellectual property or proprietary rights of any person (including us);
1.2.4. operate a product or service where the use or failure of the Services could lead to death, personal injury or significant property or environmental damage;
1.2.5. interfere with, disrupt, or attempt to gain unauthorized access to any of our accounts, services, or computer networks;
1.2.6. disseminate, store, or transmit viruses, Trojan horses, or any other malicious code or program;
1.2.7. violate any applicable laws, regulations, or rules; or
1.2.8. host with, transmit to, or provide to us any information that is subject to specific government regulation, including, without limitation, Protected Health Information (as defined in the U.S. Health Insurance Portability and Accountability Act, as amended), financial information (as regulated by the U.S. Financial Services Modernization Act, as amended), consumer reports and consumer-reporting information (as regulated by the U.S. Fair Credit Reporting Act, as amended) and information subject to Export Control Laws.
In addition, you will comply with, will obtain all required authorization from applicable authorities under, and are not (and are not 50% or more owned by one or more individuals, organizations or entities that are) listed on any restricted or sanctioned party list maintained under the Export Control Laws.
1.3. Attribution and Logo.
1.3.1. Mandatory Attribution. Any use of the Services must include the following attribution, as described in our Attribution Documentation (or other prominent location agreed in writing by the parties): (a) the Mapbox logo; (b) “© Mapbox” that links to www.mapbox.com/about/maps; (c) “© OpenStreetMap” that links to www.openstreetmap.org/about; and (d) upon reasonable notice, other such attribution, similar in size and placement to the notice specified in (b) and (c), as may be required by our suppliers and licensors; provided that the requirements in clauses (b) and (c) only apply when you use Licensed Map Content.
1.3.2. Mapbox Map. When displaying a Mapbox Map, you must also include “Improve this map” that links to our map feedback tool at www.mapbox.com/map-feedback in the lower right corner of the map.
1.3.3. Form and Format. Attribution must be in a form that is prominent and can be easily viewed by End Users when using the Licensed Application. Without limiting our other rights and remedies hereunder, if we reasonably determine that you are obscuring the required attribution or otherwise not complying with the foregoing attribution requirements, you will work with us in good faith to promptly remedy the non-compliant attribution.
1.3.4. Whitelabeling. If you have licensed a whitelabeling right from us, you may omit the Mapbox logo when using the Services; however, you must provide the other attribution listed above. For clarity, whitelabeling is not available with the following services: (i) GL JS version 2.0 or later, or (ii) the Maps SDK for Mobile version 10.0.0 or later.
1.4. Reverse Engineering; Derivative Works. You may not (i) modify, create derivative works from, disassemble, decompile or otherwise reverse engineer or attempt to derive any source code or underlying structure, ideas or algorithms from the Service Offerings, except to the extent such restriction is expressly prohibited under applicable law, or (ii) modify, obscure, or delete any product identification, proprietary rights, or other notices included in or with the Services. Further, unless this prohibition is expressly prohibited under applicable law, you may not use the Services to develop, test, validate and/or improve any service or dataset that is a substitute for, or substantially similar to, the Services (including any portion thereof).
1.5. Tracing, Deriving and Extracting. Except as expressly permitted by this Agreement, you may not trace or otherwise derive or extract content, data and/or information from the Services. Notwithstanding the foregoing sentence, you may use Studio or third-party software to trace Mapbox maps solely comprised of satellite imagery and produce derivative vector datasets (i) for non-commercial purposes and (ii) for OpenStreetMap.
1.6. Print or Video Use. You may not use the Licensed Map Content in print, static digital or video media (including media distributed by internet, cable, satellite, etc.) other than (1) to promote your Licensed Applications so long as the Licensed Map Content is shown incidentally in the context of the Licensed Applications, or (2) for making and offering for sale to End Users custom depictions of the map features (excluding Japan map data), provided that for each depiction, an End User directs which map features are included in such depiction using an interface that uses Mapbox APIs, and you do not exceed 500 such depictions annually. The selection tool and sold items must both include attribution in accordance with our documentation.
1.6.1. If expressly permitted in your Order, you may print and/or screenshot Map Assets (“Permitted Printing”) from the following; provided that you comply with all attribution requirements in the Agreement (except for the requirement to include the URL links where that is not feasible):
- Mapbox Streets v 7 or earlier
- Mapbox Streets v 8 or later, excluding Japan map data
- Japan map data in Mapbox Streets v 8 or later, for educational or private uses only
Subject to the limitation in this Section, you may use Studio to make up to 100 high resolution static exports of images during the lifetime of your account.
1.7. Denial of Service. You may not knowingly use the Service Offerings in any manner that could damage, disable, overburden, or impair the Service Offerings or interfere with any other party's use and enjoyment of the Service Offerings (“Interfering Use”), and you agree to use commercially reasonable efforts to prevent and avoid any such use. (For the purpose of the Agreement, reasonable efforts or commercially reasonable efforts means efforts commensurate with similarly sized companies acting in a reasonable manner under similar circumstances.) If your use is an Interfering Use, we may suspend or limit your use of the Service Offerings for the duration of the harmful use. Following such suspension or limit, we will notify you promptly and we will work with you to resolve the issue expeditiously.
1.8. Product Specific Restrictions; Default Restrictions. Except as expressly permitted below, you may (1) only query the Services in response to human user queries and human app interactions, (2) not perform bulk or automated queries, (3) not scrape or systematically download Licensed Map Content, and (4) not cache or store results from the Services.
1.9. No Redistribution. Except as expressly provided herein to the contrary, you may not redistribute, encumber, sell, rent, lease, sublicense, or otherwise transfer any rights to (or content, data and/or information derived from) the Services or the Support Services.
1.10. End Users and Notification. You may not allow End Users or other third parties to use the Service Offerings in any way that would violate this Agreement if done by you. You agree to promptly notify us in writing if you become aware of any misappropriation or unauthorized use of the Service Offerings.
1.11. Beta Service Offerings. In the event that we choose to make available to you any Beta Service Offerings, you agree to only use them for internal evaluation or testing purposes.
2. Mapping APIs
2.1. In General. You may only access Map Assets directly through the Mapping APIs. You may cache Map Assets on end-user devices for offline use for up to thirty (30) days, but each device must populate its cache using direct requests to the Mapping APIs and content from a cache may only be consumed by the single end user of the device. On mobile devices, you may only cache up to the limits set in the Mobile SDKs, and you may not circumvent or change those limits. You may not redistribute Map Assets, including from a cache, by proxying, or by using a screenshot or other static image instead of accessing Map Assets through the Mapping APIs.
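The 30-day offline window in the clause above reduces to a timestamp comparison on the device. A minimal sketch (hypothetical helper names, not part of any Mapbox SDK) under that reading:

```javascript
// Hypothetical sketch, not Mapbox code: decides whether a Map Asset cached
// for offline use is still inside the 30-day window described in 2.1.
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

function cacheEntryValid(fetchedAtMs, nowMs = Date.now()) {
  // An entry is usable only while no more than 30 days have elapsed since
  // the device populated it via a direct request to the Mapping APIs.
  return nowMs - fetchedAtMs <= THIRTY_DAYS_MS;
}

// An entry fetched 29 days ago is still valid; one fetched 31 days ago is not.
const now = Date.now();
const fresh = cacheEntryValid(now - 29 * 24 * 60 * 60 * 1000, now); // true
const stale = cacheEntryValid(now - 31 * 24 * 60 * 60 * 1000, now); // false
```

Note that the actual Mobile SDKs enforce their own cache limits (Section 2.1's per-device limits), which this sketch does not model.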
2.2. **Satellite Imagery.** You may not use our satellite and/or aerial imagery to improve the accuracy of or otherwise enhance any imagery.
2.3. **Seats and Map Load Pricing Requirements.** If your Order includes Seats or Map Loads, you must use a Qualified Renderer. In the event that you fail to use, or interfere with the information sent by, the Qualified Renderer, we will charge you for your usage based on your usage of map tiles at the then current rates listed on our Pricing Page.
3. **China APIs.** Your Order includes access to China APIs only if it is expressly included in your Order. “China APIs” means the mapbox.cn endpoint and “Chinese Map Data” means the content, data and/or information that you receive from the China APIs. For clarity, this section does not apply to use of our global mapbox.com API endpoint, which does permit display of data of China outside of the country.
You acknowledge that specific laws apply to Chinese Map Data and acknowledge and agree that:
3.1. You must include the attribution, copyright notices, survey map number and other required legends and notices specified by us from time to time on all maps that use Chinese Map Data.
3.2. We may alter, change, remove or update Chinese Map Data or the China APIs due to Chinese laws or requests by Chinese government officials. If an update prevents you from using the China APIs within the Licensed Application, we will, as your sole and exclusive remedy, provide a pro-rata refund of any pre-paid fees specifically allocated to China APIs in your Order for the period of service following the update.
3.3. You may not distribute outside of China any Chinese Map Data of China.
3.4. You may not modify Chinese Map Data or extract Chinese Map Data from the Services. If you provide Chinese Map Data to end users in conjunction with other data, you are solely responsible for ensuring that the combination complies with Chinese laws and that you are legally permitted to provide the combination to end users.
4. **Geocoding API**
4.1. **In General.** You may not use Geocodes: (a) to develop a general database of locations, addresses, areas or boundaries (of any geographic size); (b) to develop any general purpose printed or digital map (of any geographic size); (c) to develop or test another geocoding application, service or API; (d) in connection with navigation products preinstalled or integrated into automobiles by auto manufacturers, auto electronic component manufacturers or auto system integrators; or (e) for in-flight navigation.
4.2. **Temporary Geocodes.** You may not export, store or cache Temporary Geocodes, nor may you permit any third party to do so. You may not resell or re-syndicate any Temporary Geocodes to other publishers or third parties; provided that you may display the Temporary Geocodes to your End Users in connection with your Licensed Application(s). You may use latitude and longitude information from Temporary Geocodes to position results on a map, but you may not display the latitudes or longitudes directly to End Users.
4.3. **Permanent Geocodes.** You may store Permanent Geocodes and may query the Permanent Geocoding API programmatically. You may only use Permanent Geocodes for your own internal use, and not for resale, distribution, or sublicense.
4.4. **Studio.** Points placed on a map through the places search function in the Studio dataset editor are Permanent Geocodes.
4.5. **POI Results.** You may not use any POI Results (a) except in conjunction with a Mapbox Map; (b) for lead generation, advertiser targeting or advertising analysis; (c) to create or augment user profiles or audience segments based on or derived from points of interest results (including, for clarity, calculations or analysis of footfall traffic for any point of interest); or (d) for geofencing—i.e., to give End Users real-time mobile alerts or personalized content based on the End User’s current proximity to a point of interest; provided that giving an End User content in response to such End User’s searches or map interactions does not constitute geofencing.
5. **Directions, Isochrone, Map Matching, Matrix and Optimization APIs.** You may not cache or store results from the Directions, Isochrone, Map Matching, Matrix or Optimization APIs.
6. **Boundaries.** Boundaries may only be used in conjunction with a Mapbox Map, and you may not (and may not permit any third party to) trace or otherwise derive or extract content, data and/or information from Boundaries. You may only access Boundaries via the Mapbox APIs (i.e., not as a data file).
7. **Mobile SDKs.**
7.1. **General Requirements.** You must use the Mobile SDKs as your exclusive means of accessing the Services in mobile applications. The Mobile SDKs will periodically send location and usage data to us, which we may use for the purpose of fixing bugs and errors, accounting and generating aggregated anonymized statistics. You may not interfere with or limit the data that the Mobile SDKs send to us, whether by modifying the SDK or by other means, except as otherwise required by this paragraph. For all mobile applications using the Services, you must (a) obtain end users’ affirmative express consent before accessing or collecting their location and (b) allow users to opt out of location data sharing using one of the methods described in our developer documentation.
7.2. **Minimum Version Requirement.** If your Order includes Monthly Active Users or MAUs, then, as applicable, you must use at least the minimum following SDK to take advantage of such pricing: (1) for Maps SDK for Mobile, v5.0.0 or higher for iOS and v8.0.0 or higher for Android and (2) for Navigation SDK v1.0.0 or higher (for both iOS and Android) (collectively, a “Qualified SDK”). In the event that you fail to use, or interfere with the information sent by, the Qualified SDK, we will charge you for your usage based on your usage of API requests at the then current rates listed on our Pricing Page. Furthermore, at any given time, the Mobile SDK that you use must be the version that has been released within the immediately preceding twelve (12) months (unless no such update has been released).
7.3. **Additional Maps SDK for Mobile Conditions.** Your license to use the Maps SDK for Mobile v10.0.0 or higher (for both iOS and Android) (collectively, “Maps SDK v10”) is granted solely by this Agreement. You may not modify the billing and accounting code in Maps SDKs v10. You may modify the other parts of the Maps SDK v10 code, provided that you do not change, modify, diminish, or otherwise interfere with the billing and accounting code or the data Maps SDK v10 sends to us. Your license to use and modify the Maps SDK v10 lasts so long as you have a Mapbox account and terminates automatically if your account terminates.
7.4. **Additional Navigation SDK for Mobile Conditions.** Your license to use the Navigation SDK for Mobile v2.0.0 or higher (for both iOS and Android) (collectively, “Navigation SDK v2”) is granted solely by this Agreement. You may not modify the billing and accounting code in Navigation SDKs v2. You may modify the other parts of the Navigation SDK v2 code, provided that you do not change, modify, diminish, or otherwise interfere with the billing and accounting code or the data Navigation SDK v2 sends to us. Your license to use and modify the Navigation SDK v2 lasts so long as you have a Mapbox account and terminates automatically if your account terminates.
8. Mapbox Web SDK
8.1. General Requirements. Your license to use mapbox-gl.js version 2.0 or higher ("Mapbox Web SDK") is granted solely by this Agreement. The Mapbox Web SDK will send usage data to us, which we may use for the purpose of fixing bugs and errors, accounting and generating anonymized statistics. You may not interfere with or limit the data that the Mapbox Web SDK sends to us, whether by modifying the SDK or by other means. You may not modify the billing and accounting code in the Mapbox Web SDK. You may modify the other parts of the Mapbox Web SDK code, provided that you do not change, modify, diminish, or otherwise interfere with the billing and accounting code or the data the Mapbox Web SDK sends to us. Your license to use and modify the Mapbox Web SDK lasts so long as you have a Mapbox account and terminates automatically if your account terminates.
9. Atlas Software
9.1. You shall not modify the default rate limits in the Atlas Software or download data updates more frequently than listed in your Order. Unless listed in your Order or expressly permitted below, you may not cache or store Atlas Map Content.
9.2. You may only license additional Atlas Instances if you also have an Atlas Enterprise license for the same network environment. All licenses for Atlas Software include at no additional cost one (1) Atlas Installation for the sole purpose of passive failover. In the event that this failover Atlas Installation is converted to an active state in a production environment, it will be counted as an additional production Atlas Installation.
9.3. You may store Atlas Geocoding Data for internal use during the Term and may query Atlas Search programmatically. You may not resell or re-syndicate any Atlas Geocoding Data to other publishers or third parties; provided that you may display the Atlas Geocoding Data to your End Users in connection with your Licensed Application(s).
9.4. Upon 30 days’ advance written notice by us, you must delete all Atlas Map Content and download a new version from us.
9.5. You may not: (i) access, disclose, or permit any third party to access the Atlas Map Content other than through the Atlas Software; (ii) sublicense, sell, rent, lease, transfer, assign, disclose, or distribute the Atlas Software to third parties; (iii) host the Atlas Software for the benefit of third parties other than to provide the Atlas Map Content as part of your Licensed Application; (iv) host the Atlas Software in a way that makes any of it accessible to the public; or (v) try to avoid or change any license registration processes we implement.
9.6. You will report to us the number of Atlas Installations and Instances by network environment.
10. Vision SDK.
10.1. “Vision SDK” means our proprietary software development kits that (1) display routing and map information on top of real-time imagery, (2) identify, classify and locate features in real-time imagery, and/or (3) have the additional capabilities included in our online documentation.
10.2. Unless listed in your Order, you (i) will only use the Vision SDK in iOS and Android mobile applications and (ii) will not save, download or otherwise store or cache any content, data and/or information generated by Vision SDK.
10.3. You may not use the Vision SDK to develop a general database of locations or road features for any neighborhood, city, state, country, or other such geographic region, or to develop any other general purpose digital map database.
10.4. You may not distribute the Vision SDK (i) in human readable form or (ii) on a standalone basis in any form. You may only distribute the Vision SDK in compiled object code format as part of an application (i) licensed under these Terms, (ii) that you own or control and (iii) that provides significant additional functionality to the Vision SDK. You are responsible for any use of the Vision SDK by users of your Licensed Application(s).
10.5. By using the Vision SDK, you acknowledge that the Vision SDK will send us front-facing camera imagery and derived information (the “Vision Data”). You will (i) not prevent or interfere with the Vision SDK sending the Vision Data to us and (ii) ensure that you have obtained the necessary rights for use of the Vision Data as permitted under this Agreement. You acknowledge and agree that we may, free-of-charge and without restriction, exploit and make available the Vision Data.
10.6. To the extent that you use the Android version of the Vision SDK, the following applies: Qualcomm Technologies, Inc. (“QTI”) is an intended third-party beneficiary of this Agreement and your use of the Vision SDK does not convey or otherwise provide any rights under any patents of QTI or its affiliates.
10.7. Upon termination of your limited license to use the Vision SDK, you agree to immediately destroy all copies of the Vision SDK.
11. Traffic Data and Mapbox Movement.
11.1. You may not (i) resell or re-syndicate Mapbox Traffic Data or Mapbox Movement to other publishers or third parties or (ii) attempt to re-identify any individuals or their locations therefrom.
11.2. During the Term, you may store and use Mapbox Traffic Data and Mapbox Movement with or without a Mapbox Map. Unless your use case for Traffic Data or Mapbox Movements is specifically authorized by Mapbox in an Order, then, notwithstanding anything else herein, (1) you may only use Mapbox Traffic Data and Mapbox Movements for evaluation purposes and (2) Mapbox may terminate your license to use Traffic Data or Mapbox Movements at any time and without any obligation to provide a refund.
Definitions
● “Address-Level Geocode” means a Geocoding Result of the address type, as specified in Mapbox API documentation.
● “Area-Level Geocode” means a Geocoding Result of the country, region, postcode, district, place, locality, or neighborhood type, as specified in Mapbox API documentation.
● “Atlas Basic” includes one (1) Atlas Installation with the right to use global basemaps and the Maps APIs included with the Atlas Software. It does not include the right to use any other Atlas Software functionality (including Studio), even if included in the Atlas Software.
● “Atlas Enterprise” includes (a) Atlas Installations for use in internal development environments and (b) one (1) Atlas Installation for use in a production environment, in each case with the right to use global basemaps, the Maps APIs and the Studio application included with the Atlas Software. You may also use additional Atlas Software functionality specifically listed in your Order.
● “Atlas Geocoding Data” means any geocoding data provided by Atlas Search.
● “Atlas Installation” is the installation of Atlas Software that results in installation of a single “atlas-ddb” database on a single networking environment. Atlas Installations may not be shared across networking environments.
● “Atlas Instance” or “Instance” permits you to increase the default rate limits for a single Atlas Installation by up to 100%. For example, if the default rate limit for your Atlas Installation is 100 requests per minute, then licensing one additional Atlas Instance would increase your rate limit to 200 requests per minute.
● “Atlas Map Content” means any content, data and/or information that we make available to you for use with Atlas Software.
● “Atlas Search” includes access to the Atlas Software that provides geocoding functionality and may only be used when licensed in your Order in connection with Atlas Standard and/or Atlas Enterprise.
● “Atlas Software” means the source code or object code version of our on-premise mapping APIs and applications provided by us, including any updates, along with any Atlas Map Content.
● “Atlas Standard” includes (a) Atlas Installations for use in internal development environments and (b) one (1) Atlas Installation for use in a production environment, in each case with the right to use global basemaps, the Maps APIs and the Studio application included with the Atlas Software. You may also use additional Atlas Software functionality specifically listed in your Order.
● “Attribution Documentation” means docs.mapbox.com/help/how-mapbox-works/attribution/ (or its successor page).
● “Beta Service Offerings” means any of our products or services that are in beta or not generally available.
● “China” means, for the purpose of this Agreement, the People’s Republic of China, excluding Hong Kong, Macau and Taiwan.
● “Boundaries” means our administrative level polygon vector tiles.
● “Export Control Laws” means applicable export control, re-export control and trade sanctions laws, regulations, legislative and regulatory requirements, rules and licenses, including, without limitation, trade and economic sanctions maintained by the U.S. Treasury Department’s Office of Foreign Assets Control (OFAC), the Export Administration Regulations (EAR) administered by the US Department of Commerce’s Bureau of Industry and Security, the International Traffic in Arms Regulations (ITAR) administered by the Department of State, laws and regulations targeting proliferation activities, and the restricted persons lists maintained by the U.S. Government (including but not limited to the Denied Persons List, Unverified List, Entity List, Specially Designated Nationals List, Debarred List and Non-proliferation Sanctions), the European Union or the United Kingdom.
● “Geocode” or “Geocoding Result” means the response to a query to the Geocoding API or Atlas Search. Responses to bulk geocoding requests constitute multiple Geocodes.
● “Legacy Static Image” means an image from the Mapbox Legacy Static Images API.
● “Map Assets” means map tiles, static map images, style files, glyphs, and sprites that we provide to you through the Mapping APIs. Map Assets excludes Your Uploads and Third-Party Data.
● “Map Load” means a new instantiation of a Mapbox GL map object, i.e., anytime new mapboxgl.Map() is called, subject to a 12-hour timeout. Each Map Load will not be charged for Vector Tiles and Raster Tiles in the applicable Licensed Application.
● “Map View” means the following assets retrieved through the Mapping APIs (excluding requests provided by the Mobile SDKs): (i) four (4) Raster Tiles from Mapbox Studio styles; (ii) four (4) Vector Tiles; (iii) 15 Raster Tiles from user-uploaded raster tilesets, Mapbox Editor Classic projects, or Mapbox Studio Classic styles; or (iv) one (1) static map.
● “Mapbox Movement” means Mapbox’s proprietary dataset of aggregated and anonymized movement data that we make available to you.
● “Mapbox Map” means a map made up of Map Assets (excluding Boundaries).
● “Mapbox Traffic Data” means the road network speed profiles that we make available to you.
● “Mapbox SDKs” means, unless specified otherwise herein, the software development kits maintained and made available by us as described in our documentation, located at docs.mapbox.com/
o Each Maps SDK for Mobile MAU will not be charged for Vector Tiles and Raster Tiles requests in the applicable Licensed Application.
o Each Navigation SDK MAU will not be charged for Vector Tiles, Raster Tiles, and Directions API requests in the applicable Licensed Application.
● “Mapping APIs” means the Maps service APIs described in our documentation (docs.mapbox.com/api and docs.mapbox.com/api/legacy/static-classic/) or that are included in the Atlas Software.
● “Matrix Element” or “Matrix Request” means each origin-destination pair included in a Matrix API request. For example, a request with three origins and six destinations would result in 18 Matrix Elements.
● “Mobile SDKs” means the Mapbox SDKs for mobile applications.
● “Monthly Active User” or “MAU” means a device that makes at least one call to a Mapbox API or uses a Mapbox SDK, counted on a monthly basis, per application (counted separately for each SDK), as determined by our records.
● “Permanent Geocode” means a Geocode obtained from using the Geocoding API in mapbox.places-permanent mode (“Permanent Mode”) or an Atlas Permanent Geocode.
● “POI Result” means a Geocode that is not an Address-Level Geocode or an Area-Level Geocode.
● “Precision Level” means the detail level of your tileset, as determined by the maximum zoom of your tileset:
o zoom levels 6-10 for 10-meter precision;
o zoom levels 11-13 for 1-meter precision;
o zoom levels 14-16 for 30-centimeter precision;
o zoom levels 17-22 for 1-centimeter precision.
● “Qualified Renderer” means, for a Map Load, use of Mapbox GL JS version 1.0 or higher, and, for a Seat, use of Mapbox GL JS Seats version 1.0 or higher (for information on how to download this renderer please read our documentation at docs.mapbox.com/gl-js-seats/). We may require you to update to a newer version of a Qualified Renderer upon 30 days’ advance written notice.
● “Raster Tile” means one map tile from the Raster Tiles API.
● “Seat” means an End User that can access one of your Licensed Applications in a month, as determined by our records (unless specified otherwise in your Order). Multiple End Users are not allowed to use the same Seat, even if they do not use them at the same time. Each Seat for a specific Licensed Application will not be charged for Vector Tiles and Raster Tiles that may be used in connection with the applicable Licensed Application.
● “Square Kilometers” means the area of the surface of the earth represented by your vector or raster tiles.
● “Static Tile” means one raster map tile from the Mapbox Static API.
● “Static Image” means one image file from the Mapbox Static Images API.
● “Studio” means Mapbox's design studio, described at www.mapbox.com/mapbox-studio/.
● “Temporary Geocode” means a Geocode obtained from using the Geocoding API in mapbox.places mode (“Temporary Mode”).
● “Tileset Hosting” means storing a vector or raster tileset on the Mapbox Tiling Service or Uploads API as calculated once per day according to our records.
● “Tileset Processing” means publishing or updating a vector or raster tileset with the Mapbox Tiling Service or Uploads API.
● “Vector Tile” is one map tile from the Mapbox Vector Tiles API.
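Several of the definitions above are plain arithmetic. A short sketch (hypothetical helper names, not part of any Mapbox SDK) restating that math, using the worked examples the definitions themselves give:

```javascript
// Hypothetical helpers, not Mapbox code; they only restate the arithmetic
// in the "Matrix Element", "Atlas Instance" and "Precision Level" definitions.

// "Matrix Element": one per origin-destination pair in a Matrix API request.
function countMatrixElements(originCount, destinationCount) {
  return originCount * destinationCount;
}

// "Atlas Instance": each additional Instance raises the default rate limit
// of a single Atlas Installation by up to 100% (modeled here as exactly 100%).
function atlasRateLimit(defaultLimit, extraInstances) {
  return defaultLimit * (1 + extraInstances);
}

// "Precision Level": determined by the maximum zoom of the tileset.
function precisionLevel(maxZoom) {
  if (maxZoom >= 6 && maxZoom <= 10) return "10-meter";
  if (maxZoom >= 11 && maxZoom <= 13) return "1-meter";
  if (maxZoom >= 14 && maxZoom <= 16) return "30-centimeter";
  if (maxZoom >= 17 && maxZoom <= 22) return "1-centimeter";
  return null; // zoom outside the ranges listed in the definition
}

// The examples from the definitions: 3 origins x 6 destinations = 18 Matrix
// Elements; a 100 requests/minute default plus 1 Instance = 200 requests/minute.
const matrixElements = countMatrixElements(3, 6); // 18
const boostedLimit = atlasRateLimit(100, 1);      // 200
```

These helpers are illustrative only; actual billing is determined by Mapbox's records, as the definitions state.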
Editable Authority Control
Advanced Authority Control (previously Editable Authority Control)
Introduction
Editable Authority Control provides a solution for managing the Authority Control records assigned to DSpace Item metadata. It automatically assigns a unique Authority record that can be used to locally preserve externally controlled vocabulary information when assigned to DSpace Item metadata fields.
The model is designed to provide:
- Workflow for Concepts and Terms: Additional support for designating that a Concept, Term or Thesaurus is ready for “publication and use” via statuses such as “candidate”, “accepted” and so on.
- Access Control for Metadata Attributes: Additional support to limit viewing of specific metadata attributes (for example author email addresses, phone numbers, and so on) to repository admins only is supported via a “hidden” attribute.
- Extensibility has been provided to support additional metadata fields on Authority Thesauri, Concepts, and Terms through Metadata registries and value tables similar to DSpace Item Metadata.
- Reuse of the existing DSpace XMLUI Admin UI has been employed to support management and viewing of Authority records. Admin UI interfaces are now provided to support:
- Creation and Management of new thesauri (for example: dc.types, dc.language, authors, organizations, and so on). Repository Admins will be able to easily produce new Thesauri for use in Authority Control via this user interface.
- Creation and Management of Concepts within the Thesauri, including support for creating and assigning Preferred Terms, Non-Preferred Terms and Hidden Terms in DSpace Authority Control Concepts. In addition, support for managing the hierarchical and associative relationships between Authority Control Concepts will be available. This will allow support to create and manage hierarchical vocabularies (taxonomies) as well as relate concepts across thesauri (for example, assigning organization membership to authors).
- Finally, support for assigning additional metadata to both Concepts and Terms will be provided in a manner equivalent to attaching “Notes” to Term and Concept objects (skos and skos-xl namespaces will be employed to facilitate thesaurus-specific metadata attributes). Additional attributes will be able to be defined in namespaces decided on by the Repository Administrators (Dublin Core, Foaf, MADS, MODS, and so on).
Versions
DSpace 6.x : A recent production rollout of the 6.0 version of this codebase is deployed on the TXState repository with additional UI enhancements. Added features in the DSpace 6 version of the solution include:
- Authority Concept Linking in Discovery: Links to the profile of an associated Authority Concept will be traversable and usable in filters. Authority Concepts can be reached by clicking on a glyphicon located next to the Authority Concept's name. Non-administrative users will only have access to view the public Authority Concept profile page. Administrators will be able to view all Concept data.
- Public/Private Authority Concept View (Author / Organization Profiles): Allow for administrator to identify if a Profile should be publicly viewable or not. Feature will be controlled by setting Authority Concept as Public / Private in Authority Concept Record.
- Hidden Metadata Fields: Allow for Repository Sys Admin to Configure specific Concept Metadata Fields to be publicly Visible or not. Similar to Item Provenance metadata visibility.
- Advanced Edit Forms for Authority Concepts: Similar in design to the Community and Collection Edit forms, this interface will allow for configuration of the fields that should be populated when creating a new Authority Control manually. Configuration will allow for different forms per Authority Scheme in DSpace (person, organization, type, subject, etc).
- Example Case: Person (Scheme) Specific Create and Edit Page:
- Required and Optional Fields
- Multiple Value Fields
- Fields for Relationships (IsMemberOf Organization)
- Field titles and Help instructions
- Support for Controlled Vocabularies, Value Lists
- Merging Authority Concepts: Many Concepts in the current Authority Control are duplicates due to name variants not being matched on Item deposit. This required Admin UI support to assure that duplicates can be easily resolved by merging them with other pre-existing Authority records. Support is provided in the Concept view to search for and merge other concepts in the same Scheme, including Curation task queue scheduling to assure that existing Item metadata and discovery indexes are updated.
DSpace 5.x : Code for AAC was released in concert with DSpace LoD Sesame Repository support for DSpace 5.x, located in the following fork of the DSpace codebase: https://github.com/dspace-oceanlink/DSpace/tree/oceanlink-5_x/dspace-aac It is currently deployed in the WHOAS DSpace Repository with LoD features included. The solution provides:
- Creation and management of “Authority Records” in DSpace database.
- Integrates Sesame with the DSpace RDF service to provide out-of-box Triplestore support
- Extends DSpace RDF support to include RDF representations of a number of objects in DSpace, including Communities, Collections, Items, Bitstreams, BitstreamFormats, as well as Authority Concepts, Terms and Schemes as SKOS-XL entities.
Data Model Overview
The database schema is available here to see how the two solutions differ between DSpace versions.
DSpace 6.x Data Model
Simplifies previous 5.x Schema for DSpace AAC model
- Use UUID in model
- Improved AAC Scheme, Concept and Term model to use new Hibernate support
- Leveraged existing design patterns to create DAO Services for each object type.
### 6.x AAC Data Model
---
**Sequences for creating new IDs (primary keys) for tables.**
---
```
CREATE SEQUENCE conceptrelationtype_seq;
CREATE SEQUENCE conceptrelation_seq;
```
---
**Advanced Authority Control Entities**
---
```
CREATE TABLE scheme
(
uuid UUID PRIMARY KEY DEFAULT gen_random_uuid() REFERENCES dspaceobject(uuid),
created TIMESTAMP WITH TIME ZONE,
modified TIMESTAMP WITH TIME ZONE,
name VARCHAR(256) UNIQUE,
lang VARCHAR(24)
);
```
```
CREATE TABLE term
(
uuid UUID PRIMARY KEY DEFAULT gen_random_uuid() REFERENCES dspaceobject(uuid),
created TIMESTAMP WITH TIME ZONE,
modified TIMESTAMP WITH TIME ZONE,
source VARCHAR(256),
status VARCHAR(256),
literalForm TEXT,
lang VARCHAR(24),
hidden BOOL,
discoverable BOOL
);
```
```
CREATE TABLE concept
(
uuid UUID PRIMARY KEY DEFAULT gen_random_uuid() REFERENCES dspaceobject(uuid),
created TIMESTAMP WITH TIME ZONE,
modified TIMESTAMP WITH TIME ZONE,
status VARCHAR(256),
lang VARCHAR(24),
source VARCHAR(256),
hidden BOOL,
replaced_by UUID REFERENCES concept(uuid),
preferred_term UUID REFERENCES term(uuid)
);
```
```
CREATE TABLE conceptrelationtype
(
id INT PRIMARY KEY DEFAULT NEXTVAL('conceptrelationtype_seq'),
hierarchical BOOL,
incoming_label VARCHAR(64) UNIQUE,
outgoing_label VARCHAR(64) UNIQUE
);
```
---
**Advanced Authority Control Relations**
---
```
-- Join table linking schemes to their concepts
CREATE TABLE scheme2concept
(
    scheme_id  UUID REFERENCES scheme(uuid),
    concept_id UUID REFERENCES concept(uuid),
    PRIMARY KEY (scheme_id, concept_id)
);
```
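As an illustration of how these tables fit together, the following query is a sketch of listing the accepted concepts of a scheme together with their preferred term labels. It assumes that scheme2concept carries scheme_id and concept_id columns linking the two entities; adjust the join columns to match the deployed schema.

```
-- Sketch: accepted concepts in the 'author' scheme with their preferred
-- term labels. Assumes scheme2concept links schemes to concepts via
-- scheme_id and concept_id columns.
SELECT c.uuid, t.literalForm
FROM scheme s
JOIN scheme2concept sc ON sc.scheme_id = s.uuid
JOIN concept c         ON c.uuid = sc.concept_id
JOIN term t            ON t.uuid = c.preferred_term
WHERE s.name = 'author'
  AND c.status = 'Accepted';
```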
Functional and Abstract Concepts behind Data Model Approach
Original design background for the Editable Authority Control addon is based on research completed by Pete Johnston on aligning ISO 25964 and SKOS-XL, as published in a series of eFoundations blog articles.
The EAC data model is based, in its general design, on important parts of the SKOS-XL data model.
More specifically, on the **Scheme**, **Concept** and **Term** entity types that are defined by the model. SKOS-XL extends SKOS by making Terms entities rather than literal values, allowing additional descriptive statements to be made about them.
The database model for the EAC is expressed below. In this model, MetadataTerms can be created within MetadataConcepts, which are contained within a MetadataThesaurus (Scheme). All are treated as core DSpaceObject types and, as such, also have metadata tables that can be associated with them.
**DSpace 6.X Advanced Authority Control**
(To be documented)
**DSpace 5.X Editable Authority Control**
Enable the Editable Authority Control System
Add the following configuration into xmlui.xconf to enable the authority management ui.
```xml
<aspect name="Authority" path="resource://aspects/Authority/" />
```
If not logged in as admin, users can still browse the existing authority objects by using the link.
After login, the management link "Manage Scheme" will show up in the navigation section on the left side of the page.
Configuring Choices in the Editable Authority Control System
In dspace.cfg, add new choices plugins to use a new authority scheme as the data type in item submission.
eg. use the author scheme as the "dc.contributor.author" field type
```plaintext
authority.minconfidence = ambiguous
choices.plugin.dc.contributor.author = SolrAuthorAuthority
choices.presentation.dc.contributor.author = lookup
authority.controlled.dc.contributor.author = true
authority.controlled.txstate.person.email = true
authority.controlled.txstate.person.institution = true
authority.author.indexer.field.1=dc.contributor.author
```
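The same pattern can, in principle, be applied to other metadata fields. The fragment below is a hypothetical sketch for a subject field; "SolrSubjectAuthority" is an illustrative plugin name, not one shipped with the addon, and would need a matching choices plugin implementation.

```plaintext
# Hypothetical: wire a subject field to an authority scheme the same way.
# "SolrSubjectAuthority" is an illustrative plugin name.
choices.plugin.dc.subject = SolrSubjectAuthority
choices.presentation.dc.subject = lookup
authority.controlled.dc.subject = true
```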
Database changes
Run the following script ([dspace]/etc/postgres/authority.sql) to add all the new data structure into database.
Migrate Data into Database
Original SolrAuthority data is preserved in the Authority Solr Core. A migration tool ([dspace]/bin/dspace import-authority) is provided on the command line to import the data from the authority core into the EAC tables.
This command will take the following solr record
```xml
<doc>
<date name="creation_date">2014-06-30T12:37:36.679Z</date>
<str name="email">wjgeerts@txstate.edu</str>
<str name="field">Author</str>
<str name="id">3877CFB67867B422038A1E390837AFE</str>
<date name="last_modified_date">2014-06-30T12:37:36.717Z</date>
<str name="value">Geerts, Wilhelmus J.</str>
<str name="institute">Louisiana State University</str>
</doc>
```
The importer will break each record up and record it into its appropriate scheme.
After creating all the concepts, each solr record is updated with additional data from the created Concept.
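In terms of the data model above, each imported record conceptually becomes a concept row plus a preferred term row, roughly like the following sketch. The UUIDs and column choices are illustrative placeholders, not the values the tool actually generates, and the corresponding dspaceobject rows are omitted.

```
-- Sketch only: the kind of rows the importer creates for one solr record.
-- UUIDs are illustrative placeholders; dspaceobject rows are omitted.
INSERT INTO term (uuid, literalForm, status, source, lang)
VALUES ('00000000-0000-0000-0000-000000000001',
        'Geerts, Wilhelmus J.', 'Accepted', 'dc_contributor_author', 'en');

INSERT INTO concept (uuid, status, source, preferred_term)
VALUES ('00000000-0000-0000-0000-000000000002',
        'Accepted', 'dc_contributor_author',
        '00000000-0000-0000-0000-000000000001');
```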
eg. an "Institute" Authority Control Database Record
```xml
<doc>
<date name="creation_date">2014-06-30T12:37:36.455Z</date>
<str name="field">Organization</str>
<str name="id">05bdbef2752f48818440235ea05e2399</str>
<date name="last_modified_date">2014-06-30T12:37:36.456Z</date>
<str name="value">Louisiana State University</str>
</doc>
```
eg. an "Author" Authority Control Record
```xml
<doc>
<date name="creation_date">2014-06-30T12:37:36.455Z</date>
<str name="field">Organization</str>
<str name="id">05bdbef2752f48818440235ea05e2399</str>
<date name="last_modified_date">2014-06-30T12:37:36.456Z</date>
<str name="value">Louisiana State University</str>
</doc>
```
eg. an "Author" Authority Control Record
TODO
and its associated Solr Authority Index Record.
TODO
and its associated Solr Authority Index Record.
User Interface
After enabling the authority aspect, there will be a link ("Manage Scheme") in the Administration menu representing the management page of the authority system.
The Administration menu contains:
- Access Control: People, Groups, Authorizations
- Registries: Metadata, Format
- Items, Withdrawn Items, Private Items
- Control Panel
- Statistics
- Import Metadata
- Curation Tasks
- Manage Scheme
Clicking on this link will show the scheme management page where admin users can create, edit or delete an authority object.
Scheme
Main Page
After clicking the "manage scheme" link, the scheme main page will show all the schemes currently in the database.
eg. the current database contains two schemes: institute and author.
The institute scheme contains all the institution concepts and the author scheme contains all the author concepts. An author concept can relate to an institute concept if the author is a member of that institute.
Search a scheme
You can search for a particular scheme by its identifier.
Typing the keyword in the search box and hitting search will return all matching schemes on the same page in the search result section.
Create new scheme
Clicking on the "create new scheme" link on the main management page will lead the admin user to the scheme creation page.
Create a new Scheme
New Scheme's information:
Status:
Language:
Create Scheme Cancel
After filling in the information, the system will generate a unique identifier for the new scheme. The new scheme will then appear in the main management page.
The new scheme can be used as a source for Authority Control fields in the metadata when configured properly in DSpace choice configuration.
TODO
eg. the metadata field "dc.contributor.author" is using the author scheme as a data set.
Add the new scheme in the dspace.cfg to configure it as mentioned in the previous section "Configure the choice authority system"
Delete a scheme
On the main scheme management page, checking the box in front of a scheme and clicking delete will delete the whole scheme, including all metadata, concepts and terms in the scheme.
View a scheme
Choosing one of the schemes from the search result, or typing the scheme URL (https://digital-test.library.txstate.edu/scheme/[id]) directly into the web browser, will go to the scheme view page:
This page contains all the information about the current scheme, including the name, identifier, create date, status, all the metadata fields and a list of all the concepts in this scheme.
eg.
The institute scheme contains institute concepts that are referenced in the DSpace object metadata as "txstate.person.institute".
The author scheme contains author concepts that are referenced in the DSpace object metadata as "dc.contributor.author".
In the scheme view page, if logged in as administrator, a list of operation links will show in the context menu. The current actions relating to a scheme are "edit scheme attribute", "edit scheme metadata value" and "search & add concepts".
Edit Scheme Attribute Page
Clicking on "edit scheme attribute" lets you change the status, identifier and language of a scheme, or even delete the scheme.
Edit Metadata Page
Clicking on "edit scheme metadata value" will lead the admin user to the metadata editing page for the scheme. Here you can add any of the metadata fields existing in DSpace to the current scheme.
Manage Concepts in Current Scheme Page
In order to manage all the concepts in the current scheme, go to the "search & add concepts" link in the scheme view page.
The user will be sent to the main concepts management page.
Search a concept inside a scheme
In the main concepts management page, admin user can search for a concept inside the current scheme by its identifier, preferred term or id.
When first entered, the page will show all the concepts in the current scheme, 20 results per page. Admin users can browse all the concepts by hitting the "next page" link, or type a keyword in the search box and hit search to get all the matching concepts in this scheme.
Create a New Concept and Add it to Current Scheme
On the main concept management page, admin user can add a new concept to the current scheme.
Fill in the concept value, top concept, status and language information, and hit "create concept" to generate a concept and a preferred term under the current scheme.
The concept value will be turned into the literal form of the preferred term of this concept.
The status decides whether this concept should be added to the choice list for users to choose from. If you want users to use this concept's value, select "Accepted".
The identifier for this concept will be automatically generated and stored; this value will be used as the authority of the metadata field value.
Delete a concept
On the main concept management page, admin users can delete concepts from the current scheme by checking the check box in front of each concept and clicking delete. This will delete the concept, its metadata fields and its terms.
Concept
Edit
To view a concept, admin users can either type the concept link
https://digital-test.library.txstate.edu/concept/[id]
directly into the web browser, or find the concept in the scheme and then click the link to the concept in the result list of the main concept management page. See "Search a concept inside a scheme" for how to find a concept in a scheme.
On the concept view page, admin user can see all the information about this concept including parent scheme, attribute(identifier, create date, status, source), preferred terms, alternative terms, metadata(author email), parent concept(eg. institution is the parent concept to the author concept), child concept (eg. author concept is the child concept to the institution concept).
If logged in as an admin user, you will see the management links for this concept in the navigation section on the left side of the page. The actions available on this concept are "Edit concept attribute", "Edit concept metadata value", "Add related concept" and "Search & add terms".
Edit Concept Attribute Page
Clicking the "edit concept attribute" link on the concept view page will lead admin users to the editing page for this concept, where the status, top concept and language information can be changed. Once the concept is created, its identifier is generated automatically and assigned to it; changing the identifier of a concept is not allowed.
Edit Concept Metadata Page
In the editing metadata page for the concept, admin user can add metadata to the concept.
e.g. We use metadata field for the author concept to store the email information of the author. We add the metadata field "txstate.person.email" to the author concept.
Add relationships between concepts
In the add concept relationships page, there will be a list of concepts. Only adding child relationships is allowed, so if you want to add a parent concept to the current concept, you have to go to the parent concept and search for the concept you were working on.
Type the child concept's identifier, preferred term or id in the search box, search for the concept you want to add as a child of the current concept, check the check box behind it and hit add. Then you will see the new child concept show in the concept view page.
There are three types of relationships between concept and concept: “associate”, “equal”, “broader/narrower”.
eg. The concept in institute scheme could be the parent of the concept in author scheme, and their relationship type is “Broader/Narrower”.
PLEASE NOTE: These changes are not validated in any way. You are responsible for entering the data in the correct format. If you are not sure what the format is, please do NOT make changes.
Manage Terms in Current Concept Page
In order to manage all the terms in the current concept, go to the "search & add terms" link in the concept view page.
The user will be sent to the main terms management page.
Search a term inside a concept
In the main term management page, admin user can search for a term inside the current concept by its identifier, literal form or id.
When first entered, the page will show all the terms in the current concept, 20 results per page. Admin users can browse all the terms by hitting the "next page" link, or type a keyword in the search box and hit search to get all the matching terms in this concept.
Create a New Term and Add it to Current Concept
On the main term management page, admin user can add a new term to the current concept.
Fill in the term literal form, preferred term, status, source and language information, and hit "create term" to generate a new term under the current concept.
The identifier for this term will be automatically generated and stored.
Term
View Page
To view a term, admin users can either type the term link
https://digital-test.library.txstate.edu/term/[id]
directly into the web browser, or find the term in the concept and then click the link to the term in the result list of the main term management page. See "Search a term inside a concept" for how to find a term in a concept.
On the term view page, admin user can see all the information about this term including parent concept, attribute(identifier, create date, status, literal form), metadata values, parent concept.
If logged in as an admin user, you will see the management links for this term in the navigation section on the left side of the page. The actions available on this term are "Edit term attribute" and "Edit term metadata value".
Create a new Metadata Term
New Metadata Term’s information:
Identifier:
Source:
Status:
Literal Form:
Language:
Create Metadata Term Cancel
Edit Term Attribute Page
To change a term's information, go to the edit term attribute page. Here admin users can change the term's literal form, status, source and language, or even delete the term. Note that changing the identifier of a term is not allowed once it has been generated automatically and stored. Changing the literal form of the preferred term will affect the search result for its parent concept.
Edit Term
Term Properties
Literal Form:
Henderson, Don D.
Status:
Accepted
Source:
dc_contributor_author
Identifier:
967746f4de464629a4831b35e3f515668
Language:
en
Save Delete Cancel
Swot Analysis Of Software Quality Metrics For Global Software Development: A Systematic Literature Review Protocol
Sadia Rehman¹,², Siffat Ullah Khan¹,²,³
¹Software Engineering Research Group (SERG, UOM),
²Department of Computer Science and IT,
³Department of Software Engineering,
University of Malakand, Khyber Pakhtunkhwa, Pakistan.
Abstract: CONTEXT – Global Software Development (GSD) is a modern software engineering paradigm adopted by many client organisations in developed countries to get a high quality product at low cost in low-wage countries. Production of high quality software is considered one of the key factors in the rapid growth of GSD. However, GSD projects have put new challenges to practitioners and researchers. In order to address these challenges, Software Quality Metrics (SQMs) are frequently used in organisations to fabricate high quality products.
OBJECTIVE - The objective of this SLR protocol is to identify and assess strengths and weaknesses of the existing SQMs used in GSD to assist vendor organisations in choosing appropriate SQMs for measuring software quality.
METHOD – Systematic Literature Review (SLR) will be used for the identification of the existing SQMs in GSD. An SLR is based on a structured protocol and is therefore different from an ordinary review.
EXPECTED OUTCOME – We have developed the SLR protocol and are currently in the process of its implementation. The expected outcome of this review will be the identification of different SQMs for GSD, along with their SWOT analysis, to assist vendor organisations in choosing appropriate SQMs at the right time to produce a high quality product.
Keywords - Software Quality Metrics, Global Software Development, Systematic Literature Review Protocol
I. Introduction
Global Software Development (GSD) has become a trend in the software industry over the last two decades. GSD is the fabrication of a software product in vendor countries to get a high quality product at an affordable cost for clients [1]. To survive in the global market, organisations try their best to fabricate high quality products, as the quest is now for high quality.
GSD is an increasing trend in the industry, so the future software industry may well be a global software industry. Globalization makes software development more complex than standalone software development, which is why organisations engaged in GSD face more challenges; GSD projects have put new challenges to practitioners and researchers. In order to address these challenges, Software Quality Metrics (SQMs) are frequently used in organisations to fabricate high quality products [2]. The quality of software is often expressed in its attributes, which can be further subdivided into other attributes such as efficiency, maintainability, testability, flexibility, interface facility, reusability, and transferability [3]. These internal and external attributes group together to define the quality of a software product [4].
Measurement is a process that helps organisations to evaluate their products; it serves to control and to predict [5]. In the software industry the measurement culture is not new, and measurement gives quantitative values which are easy to interpret [6]. Horst Zuse [7] noted that researchers have been active in the area of software measurement for more than thirty years; this area is known as software metrics [8]. “Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world so as to describe such entities according to clearly defined rules.” Software Quality Metrics (SQMs) are used in the software industry to enhance quality and stay away from all possible risks that organisations face in the software development process [9]. “Software quality metrics are a subset of software metrics that focus on the quality aspects of the product, process, and project. In general, software quality metrics are more closely associated with process and product metrics than with project metrics” [10]. The use of metrics is more productive when organisations choose the appropriate metrics according to their needs at the right time.
The main objective of this research is to study the existing literature thoroughly and gain an intensive understanding of SQMs for GSD through a Systematic Literature Review (SLR). The following research questions (RQs) were formulated to be answered through the SLR.
www.iosrjournals.org
RQ1. What are the different quality attributes, as identified in the literature, that affect software quality in GSD?
RQ2. What are the existing software quality metrics (SQMs) in GSD, as identified in the literature?
RQ2.1. What is SWOT analysis of existing SQMs for GSD?
RQ3. What are the different quality attributes, as identified in the real world practice, which affect software quality in GSD?
RQ4. What are the different software quality metrics, as identified in real world practice, which affect software quality in GSD?
II. Background
The software industry has used software metrics since "source lines of code" (SLOC) was developed for quantifying the output of a software project [11]. Vendor organisations use metrics to improve their quality by measuring their capabilities and efficiencies. Using software metrics in GSD plays a vital role in averting current and future risks and in improving software quality [12]. The purpose of software measurement is to quantify the attributes of quality and predict the future quality of the software [13]. The use of appropriate software metrics at the right time helps organisations achieve their required and expected outcomes. The use of SQMs helps organisations gain both short and long term advantages by introducing high quality products to the global market [14].
2.1 Published Literature on Software Quality and Software Quality Metrics
The concept of software quality is at the core of the software development life cycle in GSD. Organisations face greater challenges in meeting the expectations and requirements of clients. Quality is gauged by measuring its internal and external attributes [15], and this measurement is done with the help of Software Quality Metrics. A number of researchers have worked to address various issues in this domain, e.g.:
- “Software quality is the extent to which an industry-defined set of desirable features are incorporated into a product so as to enhance its lifetime performance”[16].
- ISO quality standards are adopted by organisations in GSD to improve their performance. The ISO/IEC 9126 quality model defines various internal and external quality factors [17].
- Kitchenham defined quality as “Quality is a complex concept. Because it means different things to different people, it is highly context-dependent. Just as there is no one automobile to satisfy everyone’s needs, so too there is no universal definition of quality. Thus, there can be no single, simple measure of software quality acceptable to everyone. To assess or improve software quality in your organisation, you must define the aspects of quality in which you are interested, and then decide how you are going to measure them”[18].
- Dr. Deepshikha Jamwal discussed different quality models (McCall’s, Boehm’s, Dromey’s, FURPS, and the ISO 9126 Quality Model) and concluded that "reliability" is the only attribute common to all of them. Criteria based on a set of questions have been defined for choosing a quality model for an organisation, which will save the organisation’s time [19].
- SQMs for GSD help to maintain control over quality; as Tom DeMarco put it, "You cannot control what you cannot measure. Measurement is the prerequisite to management control" [5]
- In order to avoid a software crisis, effective software management is required, which can be facilitated by the use of software metrics [9]
- Barbara Kitchenham conducted a survey describing advances in software metrics research. The study assessed 103 papers published between 2000 and 2005. She suggested that researchers in the software metrics domain need to refine their empirical methodology to answer empirical questions [20].
GSD requires organisations to invest more cost and effort to achieve high software quality by selecting appropriate SQMs according to their requirements at the right time. The use of software metrics becomes ineffective, and an extra load on organisations, when the metrics do not serve the goals of the organisation [21].
This research will help organisations choose appropriate SQMs for GSD at the right time to produce a high quality product. The outcome of this research will be a SWOT analysis of the SQMs currently used in the software industry. To make the work more reliable we are using a systematic review approach: a systematic literature review is a way of discovering, assessing and interpreting all available research relevant to a particular research question or topic area [22]. This paper presents the planning phase, which is the first step of the SLR.
III. Systematic Literature Review Protocol For SWOT Analysis Of Software Quality Metrics For Global Software Development
A systematic literature review is a way of discovering, assessing and interpreting all available research relevant to a particular research question or topic area [22]. This paper illustrates the SLR protocol on SQMs for GSD. Formally, an SLR has three phases:
1. Planning the review
2. Conducting the review
3. Reporting the review
The expected outcomes of this review will be the identification of different SQMs for GSD along with their SWOT analysis to assist vendor organizations to select appropriate SQMs at the right time to get high quality product.
3.1 Construction of Search Terms
The following details are used in constructing the search terms.
- **Population:** Software quality in Global Software Development (GSD), quality metrics in GSD
- **Intervention:** Quality attributes, characteristics, factors, parameters, software quality measures
- **Outcomes of relevance:** Quality Software
- **Experimental Design:** Empirical studies, theoretical studies, case studies, experts’ opinions
Examples of research questions containing the above details are:
**RQ1:**
[What are the different quality attributes/factors] that affect [Software quality] in [Global Software Development]
**RQ2:**
Software quality metrics, Global software development
3.2 Search Strategy
3.2.1 Trial Search
A trial search was conducted using the following search string on:
- IEEE Explore (http://ieeexplore.ieee.org)
- Google Scholar (scholar.google.com)
- Emerald digital library (http://www.emeraldinsight.com)
3.2.2 Trial Search String
("Software quality" OR "software quality assurance" OR "software quality management") AND (factors OR attributes OR characteristics) AND ("software quality metrics" OR "software quality measuring tools" OR "software quality assessment tool") AND ("Global software development" OR GSD OR "offshore software outsourcing" OR "offshore software development outsourcing" OR OSDO)
<table>
<thead>
<tr>
<th>S. No</th>
<th>Digital Library</th>
<th>Total result Displayed</th>
<th>No of Relevant Papers</th>
<th>Date of Search</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>IEEE Explorer</td>
<td>322</td>
<td>100</td>
<td>6-June-2012</td>
</tr>
<tr>
<td>2</td>
<td>Google Scholar</td>
<td>780</td>
<td>576</td>
<td>6-June-2012</td>
</tr>
<tr>
<td>3</td>
<td>Emerald</td>
<td>315</td>
<td>90</td>
<td>6-June-2012</td>
</tr>
</tbody>
</table>
Table 1: Trial Search Results
This search string is used as a guide for the development and validation of major search terms.
3.3 Identifying Search Terms for SQMs for GSD
The search strategy for the SLR is a plan to:
- a. Construct search terms by distinguishing population, intervention and outcome.
- b. Discover the alternative spellings and synonyms.
- c. Verify the key words in any relevant paper.
- d. Use Boolean Operators.
- e. Integrate the search string into a summarized form, if required.
The fifth step is adopted from the SLR study [23].
**Result for a)**
RQ1: Software quality, attributes, Global software development
RQ2: Software quality metrics, Global software development
**Result for b)**
RQ1:
Software quality: (“Software quality” OR “software quality management” OR “software quality assurance” OR “Application quality” OR SQA OR “total quality management” OR “software standard” OR SQM OR “software rank” OR “software ability” OR “software caliber”)
Attributes: (Characteristics OR aspects OR factors OR features OR components OR parameters OR drivers OR motivators)
GSD: (“offshore software outsourcing” OR “information systems outsourcing” OR “information technology outsourcing” OR “IS outsourcing” OR “IT outsourcing” OR “CBIS outsourcing” OR “computer-based information systems outsourcing” OR “software-contracting-out” OR “distributed software development” OR “multi-site software development” OR “global software development” OR “GSD” OR “offshore software development outsourcing” OR OSDO)
RQ2:
Software quality: (“Software quality” OR “software quality management” OR “software quality assurance” OR “Application quality” OR SQA OR “total quality management” OR “software standard” OR SQM OR “software rank” OR “software ability” OR “software caliber”)
Metrics: (Metrics OR “measuring tool” OR “assessment tool” OR “evaluation tool” OR “quantification tool” OR measure)
GSD: (“offshore software outsourcing” OR “information systems outsourcing” OR “information technology outsourcing” OR “IS outsourcing” OR “IT outsourcing” OR “CBIS outsourcing” OR “computer-based information systems outsourcing” OR “software contracting-out” OR “distributed software development” OR “multi-site software development” OR “global software development” OR “GSD” OR “offshore software development outsourcing” OR OSDO)
**Result for c)**
Software quality, software quality attributes, software quality metrics, and Global software development
**Result for d)**
RQ1:
("Software quality factors" OR "Software quality" OR "Software standard" OR "Software Quality Management" OR SQM OR "Software Quality Assurance" OR "Application quality" OR SQA OR "Software rank" OR "Software ability" OR "software caliber" OR "total quality management") AND (Characteristics OR aspects OR factors OR features OR components OR parameters OR drivers OR motivators) AND ("Global software development" OR GSD OR "Offshore software outsourcing" OR "Information systems outsourcing" OR "Information technology outsourcing" OR "IS outsourcing" OR "IT outsourcing" OR "CBIS outsourcing" OR "Computer-based information systems outsourcing" OR "Distributed Software Development" OR "Multi-site Software Development" OR OSDO)
RQ2:
("Software quality" OR "Software standard" OR "Software Quality Management" OR SQM OR "Software Quality Assurance" OR "Application quality" OR SQA OR "Software rank" OR "Software ability" OR "software caliber" OR "total quality management") AND (Metrics OR "measuring tool" OR "assessment tool" OR "quantification tool" OR "evaluation tool") AND ("Global software development" OR GSD OR "Offshore software outsourcing" OR "Information systems outsourcing" OR "Information technology outsourcing" OR "IS outsourcing" OR "IT outsourcing" OR "CBIS outsourcing" OR "Computer-based information systems outsourcing" OR "Distributed Software Development" OR "Multi-site Software Development" OR OSDO)
**Result for e)**
Since the strings successfully retrieved relevant papers, no summarization (step e) was required.
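The construction in steps a)–d) can be sketched programmatically. The snippet below is an illustration only, not part of the protocol; the term lists are abbreviated examples of the synonym groups above.

```python
# Illustrative sketch (not part of the protocol): building a Boolean search
# string from synonym groups, as in steps a)-d).
def or_group(terms):
    """Quote multi-word terms and join one synonym group with OR."""
    quoted = ['"%s"' % t if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def build_query(*groups):
    """AND together one OR-group per concept (population, intervention, outcome)."""
    return " AND ".join(or_group(g) for g in groups)

quality = ["Software quality", "software quality assurance", "SQA"]
attributes = ["factors", "attributes", "characteristics"]
gsd = ["Global software development", "GSD", "offshore software outsourcing"]

query = build_query(quality, attributes, gsd)
print(query)
```

This reproduces the shape of the strings in Result for d): one parenthesised OR-group per concept, joined by AND.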
3.4 Search Terms Breakup
We break down the above-mentioned search strings because some databases do not allow lengthy search strings. For this purpose we take the following steps:
- Fragment the search string into small substrings
- Execute a separate search for each substring
- Summarize the results to eliminate redundancy
- In the IEEE Xplore and ScienceDirect digital libraries, the search string will be entered in the large rectangular command pane instead of the small text boxes provided in the advanced search interface.
Substrings for RQ1
String 1
("Software quality factors" OR "Software quality" OR "Software rank" OR "Software ability") AND (Characteristics OR aspects) AND ("Global software development" OR GSD OR "Offshore software outsourcing" OR "IS outsourcing")
String 2
(SQM OR "Software Quality Assurance" OR "Application quality") AND (factors OR features OR parameters OR motivators) AND ("IT outsourcing" OR "CBIS outsourcing" OR "Computer-based information systems outsourcing" OR "Distributed Software Development")
String 3
("Software standard" OR SQA OR "Software Quality Management" OR "software caliber") AND (components OR drivers) AND ("Information systems outsourcing" OR "Information technology outsourcing" OR "Multi-site Software Development" OR OSDO)
Substrings for RQ2
String 1
("Software quality" OR "Software standard" OR "Software Quality Management") AND (Metrics OR "measuring tool") AND ("Global software development" OR GSD OR "Offshore software outsourcing" OR "Information systems outsourcing" OR "IT outsourcing")
String 2
(SQMs OR "Software Quality Assurance" OR "Application quality" OR "Software rank") AND ("assessment tool" OR "quantification tool") AND ("Information technology outsourcing" OR "IS outsourcing" OR "CBIS outsourcing" OR "Distributed Software Development")
String 3
(SQA OR "Software ability" OR "software caliber" OR "total quality management") AND (measure OR "evaluation tool") AND ("Computer-based information systems outsourcing" OR "Multi-site Software Development" OR GSD)
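Since the substrings are executed as separate searches, their result sets must be merged without duplicates, as the summarization step above requires. A minimal sketch, assuming each library search returns a list of records with a `title` field (the sample hits are invented):

```python
# Illustrative sketch: merging the result sets of the separate substring
# searches while eliminating duplicate papers (matched here by title).
def merge_results(result_lists):
    seen = set()
    merged = []
    for results in result_lists:
        for paper in results:
            key = paper["title"].strip().lower()
            if key not in seen:
                seen.add(key)
                merged.append(paper)
    return merged

# Invented sample hits from two substring searches:
hits_string1 = [{"title": "Metrics for GSD"}, {"title": "Quality in Outsourcing"}]
hits_string2 = [{"title": "metrics for gsd"}, {"title": "SQM Survey"}]
papers = merge_results([hits_string1, hits_string2])
print(len(papers))  # 3 unique papers
```

Matching on lowercased titles is a simplification; in practice DOIs or a fuzzier title match would be more robust.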
3.5 Perform Search for Relevant Studies
Resources to be searched
- ACM Portal (http://dl.acm.org)
- ScienceDirect (www.sciencedirect.com)
- CiteSeer Digital Library (www.citeseer.ist.psu.edu)
- Google Scholar (www.scholar.google.com)
- IEEExplore (http://ieeexplore.ieee.org)
- Emerald (http://www.emeraldinsight.com)
- SpringerLink (www.springerlink.com)
3.6 Search Constraints and Validation
We search through the literature as follows.
- Search for all relevant literature without any time (year) boundary
- Initially, a few relevant papers were found through the major search terms
- Prior to undertaking the review, these relevant papers will be used to validate the search strings.
3.7 Search Documentation
The candidate papers will be stored in a table with the following fields:
(S. No, Name of Database, Search Strategy, Search Phase, Date of Search, Years Covered, No of Publication Found, Initial Selection Decision, Final Selection Decision)
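As a sketch (not prescribed by the protocol), the table above could be kept as a CSV file with exactly these fields; the sample row below is illustrative, not actual search data.

```python
# Sketch: persisting the search documentation as a CSV file with the
# fields listed above. The sample row is invented for illustration.
import csv

FIELDS = ["S. No", "Name of Database", "Search Strategy", "Search Phase",
          "Date of Search", "Years Covered", "No of Publication Found",
          "Initial Selection Decision", "Final Selection Decision"]

rows = [{"S. No": 1, "Name of Database": "IEEE Xplore",
         "Search Strategy": "String 1", "Search Phase": "Trial",
         "Date of Search": "6-June-2012", "Years Covered": "all",
         "No of Publication Found": 322,
         "Initial Selection Decision": "included",
         "Final Selection Decision": ""}]

with open("search_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```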
3.8 Search Result Management
The records of all candidate papers from the digital libraries are kept in a directory. The records for each digital library are stored in a separate directory, where each page is saved as an .html page.
IV. Publication Selection
4.1 Inclusion Criteria
The inclusion criteria are used to determine which of the publications found by the search string(s) will be used for data extraction. We will only consider papers related to Software Quality Metrics with a focus on GSD that are written in English. The criteria are listed below.
- Studies that describe SQMs for global software development
- Studies that describe different attributes of software quality in GSD
- Studies that describe existing SQMs in GSD
4.2 Exclusion Criteria
The exclusion criteria are listed as follows:
- Studies that are not relevant to the research questions
- Studies that don’t describe attributes of software quality and software quality metrics in GSD
- Studies that don’t describe SQMs for GSD
- Studies that don’t describe software measurement and software quality in GSD
- Studies other than SQMs
4.3 Selecting Primary Sources
Initial selection of the primary sources will be performed by reviewing the title, keywords, and abstract. The inclusion/exclusion criteria will then be checked against each selected paper after reviewing its full text. The inclusion/exclusion decision for each primary source will be recorded with suitable justification.
<table>
<thead>
<tr>
<th>S. No</th>
<th>Digital library</th>
<th>Total Papers for RQ1</th>
<th>Primary Selection for RQ1</th>
<th>Total Papers for RQ2</th>
<th>Primary Selection for RQ2</th>
<th>Date of Search</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>ACM</td>
<td>441</td>
<td>47</td>
<td>400</td>
<td>43</td>
<td>5-14 June-2012</td>
</tr>
<tr>
<td>2</td>
<td>Google Scholar</td>
<td>1277</td>
<td>710</td>
<td>747</td>
<td>360</td>
<td>5-14 June-2012</td>
</tr>
<tr>
<td>3</td>
<td>Cite Seer</td>
<td>1,477(300)</td>
<td>46</td>
<td>701(500)</td>
<td>64</td>
<td>5-14 June-2012</td>
</tr>
<tr>
<td>4</td>
<td>SpringerLink</td>
<td>114</td>
<td>19</td>
<td>74</td>
<td>18</td>
<td>5-14 June-2012</td>
</tr>
<tr>
<td>5</td>
<td>Emerald</td>
<td>68</td>
<td>7</td>
<td>470</td>
<td>50</td>
<td>5-14 June-2012</td>
</tr>
<tr>
<td>6</td>
<td>IEEEExplore</td>
<td>529</td>
<td>132</td>
<td>218</td>
<td>39</td>
<td>5-14 June-2012</td>
</tr>
<tr>
<td>7</td>
<td>Science Direct</td>
<td>111</td>
<td>29</td>
<td>113</td>
<td>41</td>
<td>5-14 June-2012</td>
</tr>
<tr>
<td></td>
<td><strong>Total</strong></td>
<td><strong>3040</strong></td>
<td><strong>600</strong></td>
<td><strong>2723</strong></td>
<td><strong>615</strong></td>
<td><strong>5-14 June-2012</strong></td>
</tr>
</tbody>
</table>
Table 3: Results of Primary Study
V. Publication Quality Assessment
The measurement of quality is executed after the final selection of publications, and the quality of the publications is evaluated in parallel with data extraction. The quality checklist includes the following questions:
- Is it clear how quality attributes were identified in GSD?
- Is it clear how quality attributes were measured/evaluated in GSD?
- Is it clear how the use of SQMs for GSD is essential?
Each of the above factors will be marked as ‘YES’, ‘NO’, ‘Partial’, or ‘N.A.’
VI. Data Extraction Strategy
The following subsections describe the data extraction strategy.
6.1 Primary Study Data
The rationale of the study is to accumulate, from the finally selected publications, the data concerned with satisfying the research questions of the review. The following data will be extracted from each publication: title, authors, journal/conference title, background information, and information related to the research questions.
6.2 Data Extraction Process
The review will be undertaken by a single researcher, who will be responsible for the data extraction. A secondary reviewer will be consulted for guidance on any issue concerning the data extraction. An inter-rater reliability test will be executed by the primary reviewer after the data extraction process.
6.3 Data Storage
The summarized data for each publication will be kept as a Microsoft Word/SPSS document and will be stored electronically.
VII. Data Synthesis
Since there are two main research questions, the synthesis will be divided into two parts. The extracted data for RQ1 will be stored in a table with columns (S. No, Quality Attribute, Frequency, Percentage). For RQ2, the table will have the columns (S. No, Software Quality Metric, Frequency, Percentage).
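The frequency/percentage synthesis for RQ1 can be sketched as follows; the per-study attribute lists below are invented for illustration, not extracted data.

```python
# Sketch of the RQ1 synthesis: counting how often each quality attribute
# is reported across the selected studies and deriving percentages.
from collections import Counter

studies = [                      # invented example data, one list per study
    ["reliability", "maintainability"],
    ["reliability", "efficiency"],
    ["reliability", "testability", "efficiency"],
]

counts = Counter(attr for study in studies for attr in study)
total = len(studies)
table = [(attr, freq, 100.0 * freq / total)
         for attr, freq in counts.most_common()]
for attr, freq, pct in table:
    print(f"{attr}: {freq} ({pct:.0f}%)")
```

The same tabulation, applied to metric names instead of attribute names, produces the RQ2 table.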
VIII. Acknowledgements
We are obliged to the Software Engineering Research Group at the University of Malakand (SERG_UOM) in general, and to Muhammad Ilyas Azeem in particular, for their review and valuable comments in the validation process of the protocol.
References
Numerical Python for Scalable Architectures
Kristensen, Mads Ruben Burgdorff; Vinter, Brian
Published in:
PGAS ‘10 Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model
DOI:
10.1145/2020373.2020388
Publication date:
2010
Document version
Early version, also known as pre-print
Citation for published version (APA):
Numerical Python for Scalable Architectures
Mads Ruben Burgdorff Kristensen
Brian Vinter
eScience Centre
University of Copenhagen
Denmark
madsbk@diku.dk/vinter@diku.dk
Abstract
In this paper, we introduce DistNumPy, a library for doing numerical computation in Python that targets scalable distributed memory architectures. DistNumPy extends the NumPy module[15], which is popular for scientific programming. Replacing NumPy with DistNumPy enables the user to write sequential Python programs that seamlessly utilize distributed memory architectures. This feature is obtained by introducing a new backend for NumPy arrays, which distributes data amongst the nodes in a distributed memory multiprocessor. All operations on this new array will seek to utilize all available processors. The array itself is distributed between multiple processors in order to support larger arrays than a single node can hold in memory.
We perform three experiments of sequential Python programs running on an Ethernet based cluster of SMP-nodes with a total of 64 CPU-cores. The results show an 88% CPU utilization when running a Monte Carlo simulation, 63% CPU utilization on an N-body simulation and a more modest 50% on a Jacobi solver. The primary limitation in CPU utilization is identified as SMP limitations and not the distribution aspect. Based on the experiments we find that it is possible to obtain significant speedup from using our new array-backend without changing the original Python code.
Keywords NumPy, Productivity, Parallel language
1. Introduction
In many scientific and engineering areas, there is a need to solve numerical problems. Researchers and engineers behind these applications often prefer a high level programming language to implement new algorithms. Of particular interest are languages that support a broad range of high-level operations directly on vectors and matrices. Also of interest is the possibility to get immediate feedback when experimenting with an application. The programming language Python combined with the numerical library NumPy[15] supports all these features and has become a popular numerical framework amongst researchers.
The idea in NumPy is to provide a numerical extension to the Python language. NumPy provides not only an API to standardized numerical solvers, but a possibility to develop new numerical solvers that are both implemented and efficiently executed in Python, much like the idea behind the MATLAB[8] framework. NumPy is mostly implemented in C and introduces a flexible N-dimensional array object that supports a broad range of numerical operations. The performance of NumPy is significantly increased when using array-operations instead of scalar-operations on this new array.
Parallel execution is supported by a limited set of NumPy functions, but only in a shared memory environment. However, many scientific computations are executed on large distributed memory machines because of the computation and memory requirements of the applications. In such cases, the communication between processors has to be implemented by the programmer explicitly. The result is a significant difference between the sequential program and the parallelized program. DistNumPy eliminates this difference by introducing a distributed version of the N-dimensional array object. All operations on such distributed arrays will utilize all available processors and the array itself is distributed between multiple processors, which makes it possible to expand the size of the array to the aggregated available memory.
1.1 Motivation
Solutions to numerical problems often consist of two implementations: a prototype and a final version. The algorithm is developed and implemented in a prototype by which the correctness of the algorithm can be verified. Typically many iterations of development are required to obtain a correct prototype, thus for this purpose a high productivity language is used, most often MATLAB. However, when the correct algorithm is finished, the performance of the implementation becomes essential for doing research with the algorithm. This performance requirement presents a problem for the researcher, since highly optimized code requires a fairly low-level programming language such as C/C++ or Fortran. The final version will therefore typically be a reimplementation of the prototype, which involves both changing the programming language and parallelizing the implementation (Fig. 1a).
The overall target of DistNumPy is to meet both the need for a high productivity tool that allows researchers to move from idea to prototype in a short time, and the need for a high performance solution that eliminates a costly and risky reimplementation (Fig. 1b). It should be possible to develop and implement an algorithm on a simple notebook and then effortlessly execute the implementation on a cluster of computers while utilizing all available CPUs.
1.2 Target architectures
NumPy supports a broad range of architectures, from the widespread x86 to the specialized Blue Gene architecture. However, NumPy is incapable of utilizing distributed memory architectures like Blue Gene supercomputers or clusters of x86 machines. The target of DistNumPy is to close this gap and to fully support and utilize distributed memory architectures.
1.3 Related work
Libraries and programming languages that support parallelization on distributed memory architectures are a well known concept. The existing tools either seek to provide optimal performance in parallel applications or, like DistNumPy, seek to ease the task of writing parallel applications.
The library ScaLAPACK[2] is a parallel version of the linear algebra library LAPACK[1]. It introduces efficient parallel operations on distributed matrices and vectors. To use ScaLAPACK, an application must be programmed using MPI[7], and it is the responsibility of the programmer to ensure that the allocation of matrices and vectors complies with the distribution layout ScaLAPACK specifies.
Another library, Global Arrays[13], introduces a distributed data object (global array), which makes the data distribution transparent to the user. It also supports efficient parallel operations and provides a higher level of abstraction than ScaLAPACK. However, the programmer must still explicitly coordinate the multiple processes that are involved in the computation. The programmer must specify which region of a global array is relevant for a given process.
Both ScaLAPACK and Global Arrays may be used from within Python, and can even be used in combination with NumPy, but only locally and not with distributed operations. A more closely integrated Python project, IPython[16], supports parallelized NumPy operations. IPython introduces a distributed NumPy array much like the distributed array that is introduced in this paper. Still, the user application must use the MPI framework and the user has to differentiate between the running MPI processes.
Co-Array Fortran[14] is a small language extension of Fortran-95 for parallel processing on distributed memory machines. It introduces a Partitioned Global Address Space (PGAS) by extending Fortran arrays with a co-array dimension. Each process can access remote instances of an array by indexing into the co-array dimensions. A similar PGAS extension called Unified Parallel C (UPC)[3] extends the C language with a distributed array declaration. Both languages provide a high abstraction level, but users still program with the SPMD model in mind, writing code with the understanding that multiple instances of it will be executing cooperatively.
A higher level of abstraction is found in projects where the execution, seen from the perspective of the user, is represented as a sequential algorithm. The High Performance Fortran (HPF)[12] programming languages provide such an abstraction level. However, HPF requires the user to specify parallelizable regions in the code and which data distribution scheme the runtime should use.
The Simple Parallel R INTerface (SPRINT)[9] is a parallel framework for the programming language R. The abstraction level in SPRINT is similar to DistNumPy in the sense that the distribution and parallelization is completely transparent to the user.
2. NumPy
Python has become a popular language for high performance computing even though the performance of Python programs is much lower than that of compiled languages. The growing popularity stems from Python being used as the coordinating language while the compute-intensive tasks are implemented in a high performance language.
NumPy[15] is a library for numerical operations in Python which is implemented in the C programming language. NumPy provides the programmer with an N-dimensional array object and a whole range of supported array operations. By using the array operations, NumPy takes advantage of the performance of C while retaining the high abstraction level of Python. However, this also means that no performance improvement is obtained otherwise; e.g., using a Python loop to traverse a NumPy array does not result in any performance gain.
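The point about array operations versus Python loops can be illustrated directly: the same elementwise computation written both ways produces identical results, but only the array-operation form runs its loop in C.

```python
# The same elementwise computation as a Python loop over a NumPy array
# and as a single array operation; only the latter runs its loop in C.
import numpy as np

a = np.arange(100_000, dtype=np.float64)

# Scalar-operation style: traversing the array in Python gains nothing.
out_loop = np.empty_like(a)
for i in range(a.size):
    out_loop[i] = a[i] * 2.0 + 1.0

# Array-operation style: one vectorized expression.
out_vec = a * 2.0 + 1.0

assert np.array_equal(out_loop, out_vec)
```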
2.1 Interfaces
The primary interface in NumPy is a Python interface and it is possible to use NumPy exclusively from Python. NumPy also provides a C interface in which it is possible to access the same functionality as in the Python interface. Additionally, the C interface also allows programmers to access low level data structures like pointers to array data and thereby provides the possibility to implement arbitrary array operations efficiently in C. The two interfaces may be used interchangeably throughout a Python program.
2.2 Universal functions
An important mechanism in NumPy is a concept called Universal functions. A universal function (ufunc) is a function that operates on all elements in an array independently. That is, a ufunc is a vectorized wrapper for a function that takes a fixed number of scalar inputs and produces a fixed number of scalar outputs. Using ufunc can result in a significant performance boost compared to native Python because the computation-loop is implemented in C.
2.2.1 Function broadcasting
To make ufuncs more flexible it is possible to use arrays with different numbers of dimensions. To utilize this feature the sizes of the dimensions must either be identical or have length one. When the ufunc is applied, all dimensions with a size of one will be broadcast, in NumPy terminology. That is, the array will be duplicated along the broadcasted dimension (Fig. 2).
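A minimal plain-NumPy illustration of broadcasting (values chosen for the example): a column of shape (3, 1) and a row of shape (4,) are conceptually duplicated along their length-one dimensions so the `add` ufunc sees two (3, 4) operands.

```python
import numpy as np

col = np.array([[0], [10], [20]])   # shape (3, 1)
row = np.array([0, 1, 2, 3])        # shape (4,) -> treated as (1, 4)

# Both length-one dimensions are broadcast, yielding shape (3, 4).
result = np.add(col, row)

print(result)
# [[ 0  1  2  3]
#  [10 11 12 13]
#  [20 21 22 23]]
```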
It is possible to implement many array operations efficiently in Python by combining NumPy’s ufunc with more traditional numerical functions like matrix multiplication, factorization etc.
2.3 Basic Linear Algebra Subprograms
NumPy makes use of the numerical library Basic Linear Algebra Subprograms (BLAS)[11]. A highly optimized BLAS implementation exists for almost all HPC platforms and NumPy exploits this when possible. Operations on vector-vector, matrix-vector and matrix-matrix (BLAS level 1, 2 and 3, respectively) all use BLAS in NumPy.
3. DistNumPy
DistNumPy is a new version of NumPy that parallelizes array operations in a manner completely transparent to the user – from the perspective of the user, the difference between NumPy and DistNumPy is minimal. DistNumPy can use multiple processors through the communication library Message Passing Interface (MPI)[7]. However, we have chosen not to follow the standard MPI approach in which the same user-program is executed on all MPI-processes. This is because the standard MPI approach requires the user to differentiate between the MPI-processes, e.g. sequential areas in the user-program must be guarded with a branch based on the MPI-rank of the process. In DistNumPy MPI communication must be fully transparent and the user needs no knowledge of MPI or any parallel programming model. However, the user is required to use the array operations in DistNumPy to obtain any kind of speedup. We think this is a reasonable requirement since it is also required by NumPy.
The only difference in the API of NumPy and DistNumPy is the array creation routines. DistNumPy allows both distributed and non-distributed arrays to co-exist; thus the user must specify, as an optional parameter, whether the array should be distributed. The following illustrates the only difference between the creation of a standard array and a distributed array:
```python
# Non-Distributed
A = numpy.array([1,2,3])
# Distributed
B = numpy.array([1,2,3], dist=True)
```
3.1 Interfaces
There are two programming interfaces in NumPy – one in Python and one in C. We aim to support the complete Python interface and a great subset of the C interface. However, the part of the C interface that involves direct access to low level data structures will not be supported, since it is not feasible to return a C-pointer that represents the elements in a distributed array.
3.2 Data layout
Two-Dimensional Block Cyclic Distribution is a very popular distribution scheme and it is used in numerical libraries like ScaLAPACK[2] and LINPACK[5]. It supports matrices and vectors and has a good load balance in numerical problems that have a diagonal computation workflow, e.g. Gaussian elimination. The distribution scheme works by arranging all MPI-processes in a two dimensional grid and then distributing data-blocks in a round-robin fashion either along one or both grid dimensions (Fig. 3); the result is a well-balanced distribution.
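As an illustration of the scheme (not DistNumPy code; the function name and grid shape are invented for the example), the owner of a global element under two-dimensional block-cyclic distribution can be computed like this:

```python
# Sketch of the two-dimensional block-cyclic owner computation.
def block_cyclic_owner(i, j, block, pgrid_rows, pgrid_cols):
    """Return the (row, col) grid position of the MPI-process that
    owns global element (i, j) under block-cyclic distribution."""
    block_row = i // block          # which block-row the element falls in
    block_col = j // block
    # Blocks are dealt out round-robin along both grid dimensions.
    return (block_row % pgrid_rows, block_col % pgrid_cols)

# With a 2x2 process grid and block size 2:
assert block_cyclic_owner(0, 0, 2, 2, 2) == (0, 0)  # first block
assert block_cyclic_owner(2, 0, 2, 2, 2) == (1, 0)  # next block-row
assert block_cyclic_owner(4, 0, 2, 2, 2) == (0, 0)  # wraps around (cyclic)
```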
Inspired by NumPy, DistNumPy implements an array hierarchy where distributed arrays are represented by the following two data structures.
- **Array-base** is the base of an array and has direct access to the content of the array in main memory. An array-base is created with all related meta-data when the user allocates a new distributed array, but the user will never access the array directly through the array-base. The array-base always describes the whole array and its meta-data such as array size and data type are constant.
- **Array-view** is a view of an array-base. The view can represent the whole array-base or only a sub-part of the array-base. An array-view can even represent a non-contiguous sub-part of the array-base. An array-view contains its own meta-data that describe which part of the array-base is visible, and it can add non-existing 1-length dimensions to the array-base. The array-view is manipulated directly by the user and from the user's perspective the array-view is the array.
Array-views are not allowed to refer to each other, which means that the hierarchy is flat with only two levels: array-base below array-view. However, multiple array-views are allowed to refer to the same array-base. This hierarchy is illustrated in Figure 4.
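Plain NumPy exhibits the same two-level pattern, which may help intuition: a sliced array is a view whose `.base` attribute refers to the owning array. A small illustrative sketch:

```python
import numpy as np

# Slicing creates a view whose `.base` attribute refers to the owning
# array -- a flat two-level hierarchy analogous to DistNumPy's
# array-base / array-view design.
base = np.zeros((4, 4))
view = base[1:3, ::2]          # non-contiguous sub-part of the base

assert view.base is base       # the view refers to its array-base
view[:] = 7                    # writing through the view...
assert base[1, 0] == 7         # ...modifies the underlying base
```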
**3.5 Optimization hierarchy**
It is a significant performance challenge to support array-views that are not aligned with the distribution block size, i.e. an array-view that has a starting offset that is not aligned with the distribution block size or represents a non-contiguous sub-part of the array-base. The difficulty lies in how to handle data blocks that are located on multiple MPI-processes and are not aligned to each other. Such problems can be handled by partitioning data blocks into sub-blocks that are both aligned and located on a single MPI-process. However, in this work we will not focus on problems that involve non-aligned array-views, but instead simply handle them by communicating and computing each array element individually.
In general we introduce a hierarchy of implementations where each implementation is optimized for specific operation scenarios. When an operation is applied a lookup in the hierarchy determines the best suited implementation for that particular operation. All operations have their own hierarchy some with more levels than others, but at the bottom of the hierarchy all operations have an implementation that can handle any scenario simply by handling each array element individually.
```python
from numpy import *
x, y = (empty([S], dist=True),
        empty([S], dist=True))
x, y = (random(x), random(y))
x, y = (square(x), square(y))
z = (x + y) < 1
print add.reduce(z) * 4.0 / S #The result
```
**Figure 5.** Computing Pi using Monte Carlo simulation. S is the number of samples used. We have defined a new ufunc (ufunc_random) to make sure that we use an identical random number generator in all benchmarks. The ufunc uses `rand()/(double)RAND_MAX` from the ANSI C standard library (stdlib.h) to generate numbers.
**3.6 Parallel BLAS**
As previously mentioned NumPy supports BLAS operations on vectors and matrices. DistNumPy therefore implements a parallel version of BLAS inspired by PBLAS from the ScaLAPACK library. Since DistNumPy uses the same data-layout as ScaLAPACK, it would be straightforward to use PBLAS for all parallel BLAS operations. However, to simplify the installation and maintenance of DistNumPy we have chosen to implement our own parallel version of BLAS. We use SUMMA[6] for matrix multiplication, which enables us to use the already available BLAS library locally on the MPI-processes. SUMMA is only applicable on complete array-views and we therefore use a straightforward implementation that computes one element at a time if partial array-views are involved in the computation.
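The core of SUMMA can be sketched serially: the product C = A·B is accumulated as a sum of outer products of column-panels of A and row-panels of B, which in the parallel algorithm are broadcast along process-grid rows and columns. The sketch below is illustrative (the panel size and function name are not DistNumPy's); the loop plays every process's part in turn.

```python
import numpy as np

def summa(A, B, panel=2):
    """Serial sketch of SUMMA: accumulate C as a sum of
    panel-by-panel outer products."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for p in range(0, k, panel):
        # In parallel SUMMA this panel pair is broadcast along a
        # process-grid row/column and multiplied with local BLAS.
        C += A[:, p:p + panel] @ B[p:p + panel, :]
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.arange(16.0).reshape(4, 4)
assert np.allclose(summa(A, B), A @ B)
```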
**3.7 Universal function**
In DistNumPy, the implementation of ufunc uses three different scenarios.
1. In the simplest scenario we have a perfect match between all elements in the array-views and applying a ufunc does not require any communication between MPI-processes. The scenario is applicable when the ufunc is applied on complete array-views with identical shapes.
2. In the second scenario the array-views must represent a contiguous part of the underlying array-base. The computation is parallelized by the data distribution of the output array and data blocks from the input arrays are fetched when needed. We use non-blocking one-sided communication (MPI_Get) when fetching data blocks, which makes it possible to compute one block while fetching the next block (double buffering).
3. The final scenario does not use any simplifications and works with any kind of array-view. It also uses non-blocking one-sided communication, but only one element at a time.
**4. Examples**
To evaluate DistNumPy we have implemented three Python programs that all make use of NumPy’s vector-operations (ufunc). They are all optimized for a sequential execution on a single CPU and the only program change we make, when going from the original NumPy to our DistNumPy, is the array creation argument `dist`.
A walkthrough of a Monte Carlo simulation is presented as an example of how DistNumPy handles Python executions.
**4.1 Monte Carlo simulation**
We have implemented an efficient Monte Carlo Pi simulation using NumPy’s ufunc. The implementation is a translation of the Monte Carlo simulation included in the benchmark suite SciMark 2.0[17],
which is written in Java. It is very simple and uses two vectors with length equal to the number of samples used in the calculation. Because of the memory requirements, this drastically reduces the maximum number of samples. Combining multiple simulations would allow more samples, but we will only use one simulation. The implementation is included in its full length (Fig. 5) and the following is a walkthrough of a simulation (the bullet-numbers represent line numbers):
1: All MPI-processes interpret the import statement and initiate DistNumPy. Besides calling MPI_Init() the initialization is identical to the original NumPy, but instead of returning from the import statement, the slaves (MPI-processes with rank greater than zero) listen for a command message from the master (the MPI-process with rank zero).
2-3: The master sends two CREATE_ARRAY messages to all slaves. The two messages contain an array shape and a unique identifier (UID), which in this case identify x and y, respectively. All MPI-processes allocate memory for the arrays and store the array information.
4: The master sends two UFUNC messages to all slaves. Each message contains a UID and the function name ufunc_random. All MPI-processes apply the function to the array with the specified UID. A pointer to the function is found by calling PyObject_GetAttrString with the function name; it is thereby possible to support all ufuncs from NumPy.
5: Again the master sends two UFUNC messages to all slaves, but this time with the function name square.
6: The master sends a UFUNC message with the function name add followed by a UFUNC message with the function name less_than. The scalar 1 is also included in the message.
7: The master sends a UFUNC_REDUCE message with the function name add. The result is a scalar, which is not distributed, so the master alone computes the remainder of the computation and prints the result. When the master is done, a SHUTDOWN message is sent to the slaves and the slaves call exit(0).
4.2 Jacobi method
The Jacobi method is an iterative algorithm for determining the solution of a system of linear equations. It uses a splitting scheme to approximate the result.
Our implementation uses ufunc operations in a while-loop until it converges. Most of the implementation is included here (Fig. 6).
```python
h = zeros(shape(B), float, dist=True)
dmax = 1.0
AD = A.diagonal()
while (dmax > tol):
    hnew = h + (B - add.reduce(A * h, 1)) / AD
    tmp = absolute((h - hnew) / h)
    dmax = maximum.reduce(tmp)
    h = hnew
print h #The result
```
Figure 6. Iterative Jacobi solver for matrix A with solution vector B, both distributed arrays. The import statement and the creation of A and B are not included here. tol is the maximum tolerated value of dmax, the largest relative change of an element between two iterations.
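For reference, a self-contained plain-NumPy version of the Fig. 6 loop can be run as follows. The matrix A, right-hand side B, starting guess and tolerance are illustrative values chosen here (A diagonally dominant so the iteration converges), not taken from the paper.

```python
import numpy as np

# Illustrative diagonally dominant system; the exact solution is
# the all-ones vector (each row of B equals its row sum in A).
A = np.array([[10.0, 1.0, 2.0],
              [1.0, 12.0, 3.0],
              [2.0, 3.0, 14.0]])
B = np.array([13.0, 16.0, 19.0])
tol = 1e-8

h = np.full_like(B, 0.5)      # nonzero start avoids dividing by zero
AD = A.diagonal()
dmax = 1.0
while dmax > tol:
    hnew = h + (B - np.add.reduce(A * h, 1)) / AD
    dmax = np.maximum.reduce(np.absolute((h - hnew) / h))
    h = hnew

assert np.allclose(A @ h, B)  # the iterate solves the system
```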
4.3 Newtonian N-body simulation
A Newtonian N-body simulation is one that studies how bodies, represented by a mass, a location, and a velocity, move in space according to the laws of Newtonian physics. We use a straightforward algorithm computing all body-body interactions. The NumPy implementation is a direct translation of a MATLAB program[4]. The working loop of the two implementations takes up 19 lines in Python and 22 lines in MATLAB; thus it is too big to include here. However, the implementation is straightforward and uses universal functions and matrix multiplications.
5. Experiments
In this section, we will conduct performance benchmarks on DistNumPy and NumPy\(^1\). We will benchmark the three Python programs presented in Section 4. All benchmarks are executed on two different Linux clusters – an Intel Core 2 Quad cluster and an Intel Nehalem cluster. Both clusters consist of processors with four CPU-cores, but the number of processors per node differs: the Intel Core 2 Quad cluster has one CPU per node whereas the Intel Nehalem cluster has two CPUs per node. The interconnect is Gigabit Ethernet in both clusters (Table 1).
Our experiments consist of a speedup benchmark, which we define as an execution time comparison between a sequential execution with NumPy and a parallelized execution with DistNumPy while the input is identical. Strong-scaling is used in all benchmarks and the input size is therefore constant.
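In these terms (a sketch of the definitions used in the benchmarks, not benchmark code), speedup and CPU utilization relate as follows:

```python
# Speedup: sequential NumPy time over parallel DistNumPy time.
# CPU utilization (parallel efficiency): speedup over core count.
def speedup(t_numpy, t_distnumpy):
    return t_numpy / t_distnumpy

def utilization(t_numpy, t_distnumpy, cores):
    return speedup(t_numpy, t_distnumpy) / cores

# E.g. a run 56 times faster on 64 cores utilizes 87.5% of the CPUs.
assert abs(utilization(56.0, 1.0, 64) - 0.875) < 1e-12
```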
5.1 Monte Carlo simulation
A Distributed Monte Carlo simulation is embarrassingly parallel and requires a minimum of communication. This is also the case when using DistNumPy because ufuncs are only applied on identically shaped arrays and it is therefore the simplest ufunc scenario. Additionally, the implementation is CPU-intensive because a complex ufunc is used as random number generator.
The result of the speedup benchmark is illustrated in Figure 7. We see a close to linear speedup for the Nehalem cluster – a CPU utilization of 88% is achieved on 64 CPU-cores. The penalty of using multiple CPU-cores per node is noticeable on the Core 2 architecture – a CPU utilization of 68% is achieved on 32 CPU-cores.
5.2 Jacobi method
The dominating part of the Jacobi method, performance-wise, is the element-by-element multiplication of A and h (Fig. 6 line 5). It consists of \(O(n^2)\) operations, whereas all the other operations consist of only \(O(n)\) operations. Since scalar multiplication is a very simple operation, the dominating ufunc in the implementation is memory-intensive.
The result of the speedup benchmark is illustrated in Figure 8. We see a good speedup with 8 CPU-cores and to some degree also with 16 Nehalem CPU-cores. However, the CPU utilization when
\(^1\)NumPy version 1.3.0
**Table 1.** Hardware specifications: CPU, CPU frequency, CPUs per node, cores per CPU, memory per node, number of nodes, and network.
using more than 16 CPU-cores is very poor. The problem is memory bandwidth: since we use multiple CPU-cores per node when using more than 8 CPU-cores, the aggregated memory bandwidth of the Core 2 cluster only increases up to 8 CPU-cores. The Nehalem cluster is a bit better because it has two memory buses per node, but using more than 16 CPU-cores will not increase the aggregated memory bandwidth.
5.2.1 Profiling of the Jacobi implementation
To investigate the memory bandwidth limitation observed in the Jacobi execution we have profiled the execution by measuring the time spent on computation and communication (Fig. 9). As expected, the results show that the percentage of time spent on communication increases with the number of CPU-cores. Furthermore, a noteworthy observation is the almost identical communication overhead at eight and sixteen CPU-cores. This is because half of the communication is performed through shared memory at sixteen CPU-cores, which means that the communication, just like the computation, is bound by the limited memory bandwidth.
5.3 Newtonian N-body simulation
The result of the speedup benchmark is illustrated in Figure 10. Compared to the Jacobi method we see a similar speedup and CPU utilization. This is expected because the dominating operations are also simple ufuncs. Even though there are some matrix-multiplications, which have a great scalability, it is not enough to significantly boost the overall scalability.
5.4 Alternative programming language
DistNumPy introduces a performance overhead compared to a lower-level programming language such as C/C++ or Fortran. To investigate this overhead we have implemented the Jacobi benchmark in C. The implementation uses the same sequential algorithm as the NumPy and DistNumPy implementations.
Executions on both architectures show that DistNumPy and NumPy are roughly 50% slower than the C implementation when executing the Jacobi method on one CPU-core. In rough runtime numbers: 21 seconds for C, 31 seconds for NumPy and 32 seconds for DistNumPy.
Obviously, highly hand-optimized implementations have a clear performance advantage over DistNumPy. For instance, a highly optimized C implementation of a similar Jacobi computation [10] demonstrates extreme scalability: an execution on 16384 CPU-cores achieves a CPU utilization of 70% on a Blue Gene/P architecture.
5.5 Summary
The benchmarks clearly show that DistNumPy has both good performance and scalability when execution is not bound by the memory bandwidth, which is evident from looking at the CPU utilization when only one CPU-core per node is used. As expected, the scalability of the Monte Carlo simulation is better than that of the Jacobi and the N-body computations because of the reduced communication requirements and the more CPU-intensive ufunc operation.
The scalability of the Jacobi and the N-body computation is drastically reduced when using multiple CPU-cores per node. The problem is the complexity of the ufunc operations. As opposed to the Monte Carlo simulation, which makes use of a complex ufunc, the Jacobi and the N-body computation only use simple ufuncs e.g. add and multiplication.
As expected the performance of the C implementation is better than the DistNumPy implementation. However, by utilizing two CPU-cores it is possible to outperform the C implementation in the case of the Jacobi method. This is not a possibility in the case of the Monte Carlo simulation where the algorithm does not favor vectorization.
6. Future work
In its current state DistNumPy does not implement the NumPy interface completely. Many specialized operations like Fast Fourier transform or LU factorization are not implemented, but it is our intention to implement the complete Python interface and most of the C interface.
Figure 8. Speedup of the Jacobi solver. In graph (a) the two architectures use a minimum number of CPU-cores per node. Added in graph (b) is the result of using multiple CPU-cores on a single node (SMP).
Figure 9. Profiling of the Jacobi experiment. The two figures illustrate the relationship between communication and computation when running on the Core 2 Quad architecture (a) and the Nehalem architecture (b). The area with the check pattern represents MPI communication and the clean area represents computation. Note that these figures relate directly to the Jacobi speedup graph (Fig 8a).
The performance of NumPy programs that make use of array-views that are not aligned with the distribution block size is very poor because each array element is handled individually. This is not a problem for a whole range of NumPy programs, including the experiments presented in this paper, since they do not use non-aligned array-views. However some operations, such as stencil operations, require non-aligned array-views and an important future work is therefore to support all array views with similar efficiency.
Other important future work includes performance and scalability improvements. As shown by the benchmarks, applications that are dominated by non-complex ufuncs easily become memory bound. One solution is to merge calls to ufuncs that operate on common arrays into one joint operation and thereby make the joint operation more CPU-intensive. If it is possible to merge enough ufuncs together, the application may become CPU bound rather than memory bound.
7. Conclusions
In this work we have successfully shown that it is possible to implement a parallelized version of NumPy[15] that seamlessly utilizes distributed memory architectures. The only API difference between NumPy and our parallelized version, DistNumPy, is an extra optional parameter in the array creation routines.
Performance measurements of three Python programs, which make use of DistNumPy, show very good performance and scalability. A CPU utilization of 88% is achieved on a 64 CPU-core Nehalem cluster running a CPU-intensive Monte Carlo simulation. A more memory-intensive N-body simulation achieves a CPU utilization of 91% on 16 CPU-cores but only 63% on 64 CPU-cores. Similarly, a Jacobi solver achieves a CPU utilization of 85% on 16 CPU-cores and 50% on 64 CPU-cores.
To obtain good performance with NumPy the user is required to make use of array operations rather than Python loops. DistNumPy takes advantage of this fact and parallelizes array operations. Thus most efficient NumPy applications should be able to benefit from DistNumPy with the distribution parameter as the only change.
We conclude that it is possible to obtain significant speedup with DistNumPy. However, further work is needed if shared memory machines are to be fully utilized as nodes in a scalable architecture.
Learning is the ability to improve one’s behavior based on experience.
- The range of behaviors is expanded: the agent can do more.
- The accuracy on tasks is improved: the agent can do things better.
- The speed is improved: the agent can do things faster.
The following components are part of any learning problem:
- **task**: The behavior or task that’s being improved.
- For example: classification, acting in an environment
- **data**: The experiences that are being used to improve performance in the task.
- **measure of improvement**: How can the improvement be measured?
- For example: increasing accuracy in prediction, new skills that were not present initially, improved speed.
**Common Learning Tasks**
- **Supervised classification**: Given a set of pre-classified training examples, classify a new instance.
- **Unsupervised learning**: Find natural classes for examples.
- **Reinforcement learning**: Determine what to do based on rewards and punishments.
- **Transfer Learning**: Learning from an expert
- **Active Learning**: Learner actively seeks to learn
- **Inductive logic programming**: Build richer models in terms of logic programs.
**Representation**
- Draw a conclusion from a knowledge base: **deduction** (top-down)
- Infer a representation from data: **induction** (bottom-up)
- The richer the representation, the more useful it is for subsequent problem solving.
- The richer the representation, the more difficult it is to learn.
### Feedback
Learning tasks can be characterized by the feedback given to the learner.
- **Supervised learning**: What has to be learned is specified for each example.
- **Unsupervised learning**: No classifications are given; the learner has to discover categories and regularities in the data.
- **Reinforcement learning**: Feedback occurs only after a sequence of actions, which raises the credit assignment problem: which action was responsible for the reward or punishment?
### Measuring Success
The measure of success is not how well the agent performs on the training examples, but how well the agent performs for new examples.
Consider two agents:
- $P$ claims the negative examples seen are the only negative examples. Every other instance is positive.
- $N$ claims the positive examples seen are the only positive examples. Every other instance is negative.
Both agents correctly classify every training example, but disagree on every other example.
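The two agents can be sketched as toy classifiers (illustrative names and data): each fits the training set perfectly, yet they disagree on every unseen instance.

```python
# Training data: two seen positives and one seen negative.
train_pos = {"a", "b"}
train_neg = {"c"}

def agent_P(x):
    # P: only the seen negatives are negative; everything else positive.
    return x not in train_neg

def agent_N(x):
    # N: only the seen positives are positive; everything else negative.
    return x in train_pos

# Both agents classify every training example correctly...
assert all(agent_P(x) and agent_N(x) for x in train_pos)
assert not agent_P("c") and not agent_N("c")
# ...but they disagree on every unseen example.
assert agent_P("zzz") != agent_N("zzz")
```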
### Bias
The tendency to prefer one hypothesis over another is called a bias.
A bias is necessary to make predictions on unseen data.
Saying a hypothesis is better than $N$'s or $P$'s hypothesis isn’t something that’s obtained from the data.
To have any inductive process make predictions on unseen data, you need a bias.
What constitutes a good bias is an empirical question about which biases work best in practice.
### Learning as search
Given a representation and a bias, the problem of learning can be reduced to one of search.
Learning is search through the space of possible representations looking for the representation or representations that best fits the data, given the bias.
These search spaces are typically prohibitively large for systematic search.
A learning algorithm is made of a search space, an evaluation function, and a search method.
### Noise
Data isn’t perfect:
- some of the attributes are assigned the wrong value
- the attributes given are inadequate to predict the classification
- there are examples with missing attributes
Overfitting occurs when a distinction appears in the data, but doesn’t appear in the unseen examples. This occurs because of random correlations in the training set.
### Supervised Learning
Given:
- a set of **input features** $X_1, \ldots, X_n$
- a set of **target features** $Y_1, \ldots, Y_k$
- a set of **training examples** where the values for the input features and the target features are given for each example
- a set of **test examples**, where only the values for the input features are given to predict the values for the target features for the test examples.
**Classification** when the $Y_i$ are discrete
**Regression** when the $Y_i$ are continuous
Very important: keep training and test sets separate!
Evaluating Predictions
Suppose \( Y \) is a feature and \( e \) is an example:
- \( Y(e) \) is the value of feature \( Y \) for example \( e \).
- \( \hat{Y}(e) \) is the predicted value of feature \( Y \) for example \( e \).
- The error of the prediction is a measure of how close \( \hat{Y}(e) \) is to \( Y(e) \).
- There are many possible errors that could be measured.
Measures of error
- absolute error
\[
\sum_{e \in E} \sum_{Y \in T} |Y(e) - \hat{Y}(e)|
\]
- sum of squares error
\[
\sum_{e \in E} \sum_{Y \in T} (Y(e) - \hat{Y}(e))^2
\]
- worst-case error:
\[
\max_{e \in E} \max_{Y \in T} |Y(e) - \hat{Y}(e)|.
\]
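The three measures can be computed directly; a small sketch with made-up values for a single target feature:

```python
Y    = [1.0, 0.0, 2.0]   # actual values Y(e)
Yhat = [0.5, 0.0, 3.0]   # predicted values for Y(e)

# Sum of absolute differences over all examples.
absolute_error = sum(abs(y - p) for y, p in zip(Y, Yhat))
# Sum of squared differences.
sum_of_squares = sum((y - p) ** 2 for y, p in zip(Y, Yhat))
# Largest single absolute difference.
worst_case = max(abs(y - p) for y, p in zip(Y, Yhat))

assert absolute_error == 1.5    # 0.5 + 0 + 1
assert sum_of_squares == 1.25   # 0.25 + 0 + 1
assert worst_case == 1.0
```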
Measures of error (cont.)
When target features are \{0, 1\} and predicted features are in [0, 1]:
- likelihood of the data
\[
\prod_{e \in E} \prod_{Y \in T} \hat{Y}(e)^{Y(e)}(1 - \hat{Y}(e))^{(1 - Y(e))}
\]
- entropy or negative log likelihood
\[
- \sum_{e \in E} \sum_{Y \in T} [Y(e) \log \hat{Y}(e) + (1 - Y(e)) \log(1 - \hat{Y}(e))]
\]
- A cost-based error takes into account the costs of various errors.
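A matching Python sketch for a single Boolean target (an illustration, not the book's code; natural log is used here):

```python
# Sketch (not from the slides): likelihood and negative log likelihood
# for a single Boolean target, Y(e) in {0, 1}, prediction in (0, 1).
import math

def likelihood(y, y_hat):
    return math.prod(p if t == 1 else (1 - p) for t, p in zip(y, y_hat))

def neg_log_likelihood(y, y_hat):
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y, y_hat))

y = [1, 0, 1]
y_hat = [0.9, 0.2, 0.7]
# likelihood ≈ 0.9 * 0.8 * 0.7 = 0.504
# neg_log_likelihood = -log(0.504) ≈ 0.685
```

Maximizing the likelihood and minimizing the negative log likelihood select the same predictor, which is why the two appear together on the slide.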
Receiver Operating Curve
|  | predicted T | predicted F |
|---|---|---|
| **actual T** | true positive (TP) | false negative (FN) |
| **actual F** | false positive (FP) | true negative (TN) |
- recall = sensitivity = TP/(TP+FN)
- specificity = TN/(TN+FP)
- precision = TP/(TP+FP)
- F-measure = 2*Precision*Recall/(Precision+Recall)
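These metrics can be computed directly from label lists; the following sketch (illustration only, not from the slides) assumes Boolean actual/predicted labels:

```python
# Sketch: confusion-matrix metrics from Boolean actual/predicted labels.
def confusion_metrics(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    recall = tp / (tp + fn)          # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f = 2 * precision * recall / (precision + recall)
    return recall, specificity, precision, f

actual = [1, 1, 1, 0, 0, 0]
predicted = [1, 1, 0, 1, 0, 0]
r, s, p, f = confusion_metrics(actual, predicted)
# tp=2, fn=1, fp=1, tn=2, so all four metrics come out to 2/3 here
```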
Basic Models for Supervised Learning
Many learning algorithms can be seen as deriving from:
- decision trees
- linear classifiers
- Bayesian classifiers
Example: user discussion board behaviors
| example | author | thread | length | where read | user's action |
|---|---|---|---|---|---|
| e1 | known | new | long | home | skips |
| e2 | unknown | new | short | work | reads |
| e3 | unknown | follow up | long | work | skips |
| e4 | known | follow up | long | home | skips |
| e5 | known | new | short | home | reads |
| e6 | known | follow up | long | work | skips |
| e7 | unknown | follow up | short | work | skips |
| e8 | unknown | new | short | work | reads |
| e9 | unknown | follow up | short | home | skips |
| e10 | known | new | long | work | skips |
| e11 | unknown | follow up | short | home | skips |
| e12 | known | new | long | work | reads |
| e13 | known | follow up | short | home | reads |
| e14 | known | new | short | work | reads |
| e15 | known | new | short | home | reads |
| e16 | known | follow up | short | work | reads |
| e17 | known | new | short | home | reads |
| e18 | unknown | new | short | work | reads |
| e19 | unknown | new | long | work | ? |
| e20 | unknown | follow up | short | home | skips |
Learning Decision Trees
- Representation is a decision tree.
- Bias is towards simple decision trees.
- Search through the space of decision trees, from simple decision trees to more complex ones.
Simple, successful technique for supervised learning from discrete data. Learn a decision tree from data:
- Nodes are input attributes/features
- Branches are labeled with input feature value(s)
- Leaves are predictions for target features
- Can have many branches per node
- Branches can be labeled with multiple feature values
### Learning a decision tree
- Incrementally split the training data
- Recursively solve sub-problems
- Hard part: how to split the data?
- Criteria for a good decision tree (bias):
- small decision tree,
- good classification (low error on training data),
- good generalisation (low error on test data)
### Decision tree learning: pseudocode
```plaintext
// X is input features, Y is output features,
// E is training examples
// output is a decision tree, which is either
//   - a point estimate of Y, or
//   - of the form <X_i, T_1, ..., T_N> where X_i is an
//     input feature and T_1, ..., T_N are decision trees
procedure DecisionTreeLearner(X, Y, E)
    if stopping criteria is met then
        return pointEstimate(Y, E)
    else
        select feature X_i ∈ X
        for each value x_i of X_i do
            E_i = all examples in E where X_i = x_i
            T_i = DecisionTreeLearner(X \ {X_i}, Y, E_i)
        end for
        return <X_i, T_1, ..., T_N>
    end if
end procedure
```
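A minimal runnable Python rendering of this pseudocode (an illustration, not the book's code). It assumes examples are dicts, the point estimate is the most common class, and the feature is selected myopically by information gain; it also includes a classifier mirroring the ClassifyExample pseudocode:

```python
# A minimal runnable rendering of the pseudocode (an illustration, not
# the book's code). Assumptions: examples are dicts, the point estimate
# is the most common class, and the feature is selected myopically by
# information gain (i.e. minimum weighted entropy after the split).
import math
from collections import Counter

def entropy(examples, target):
    n = len(examples)
    counts = Counter(e[target] for e in examples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def split_info(examples, feature, target):
    by_value = {}
    for e in examples:
        by_value.setdefault(e[feature], []).append(e)
    n = len(examples)
    return sum(len(part) / n * entropy(part, target)
               for part in by_value.values())

def learn(examples, features, target):
    if not features or len({e[target] for e in examples}) == 1:
        # stopping criteria met: return a point estimate of the target
        return Counter(e[target] for e in examples).most_common(1)[0][0]
    best = min(features, key=lambda f: split_info(examples, f, target))
    branches = {}
    for value in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == value]
        branches[value] = learn(subset, features - {best}, target)
    return (best, branches)

def classify(tree, example):          # mirrors ClassifyExample
    while isinstance(tree, tuple):    # internal node <X_i, T_1..T_N>
        feature, branches = tree
        tree = branches[example[feature]]
    return tree

data = [
    {"author": "known", "length": "long", "action": "skips"},
    {"author": "known", "length": "short", "action": "reads"},
    {"author": "unknown", "length": "long", "action": "skips"},
    {"author": "unknown", "length": "short", "action": "reads"},
]
tree = learn(data, {"author", "length"}, "action")
# tree == ('length', {'long': 'skips', 'short': 'reads'})
```

An unseen feature value at classification time raises a `KeyError` here; real implementations fall back to the leaf's point estimate instead.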
### Decision tree classification: pseudocode
```plaintext
// X is input features, Y is output features,
// e is a test example, DT is a decision tree
// output is a prediction of Y for e
procedure ClassifyExample(e, X, Y, DT)
    S ← DT
    while S is an internal node of the form <X_i, T_1, ..., T_N> do
        j ← X_i(e)
        S ← T_j
    end while
    return S
end procedure
```
### Remaining issues
- Stopping criteria
- Selection of features
- Point estimate (final return value at leaf)
- Reducing number of branches (partition of domain)
Stopping Criteria
- How do we decide to stop splitting?
- The stopping criterion is related to the final return value
- depends on what we will need to do
- Possible stopping criteria:
▶ No more features
▶ Performance on training data is good enough
Feature Selection
- Ideal: choose sequence of features that result in smallest tree
- Actual: myopically split - as if only allowed one split, which feature would give best performance?
- Most even split
- Maximum information gain
- GINI index
Information Theory
- a bit is a binary digit: 0 or 1
- n bits can distinguish \(2^n\) items
- can do better by taking probabilities into account
Example:
distinguish \{ a, b, c, d \} with
\[ P(a) = 0.5, P(b) = 0.25, P(c) = P(d) = 0.125 \]
If we encode
a:00 b:01 c:10 d:11
uses on average
2 bits
but if we encode
a:0 b:10 c:110 d:111
uses on average
\[ P(a) \times 1 + P(b) \times 2 + P(c) \times 3 + P(d) \times 3 = \]
1.75 bits
- In general, need \(-\log_2 P(x)\) bits to encode x
- Each symbol requires on average
\[ -P(x) \log_2 P(x) \] bits
- To transmit an entire sequence distributed according to \(P(x)\), we need on average
\[ \sum_x -P(x) \log_2 P(x) \] bits
of information per symbol we wish to transmit
- information content or entropy of the sequence
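The 1.75-bit example can be checked directly in Python:

```python
# Checking the example: the average length of the prefix code
# a:0 b:10 c:110 d:111 equals the entropy of the distribution.
import math

p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code_len = {"a": 1, "b": 2, "c": 3, "d": 3}

avg_bits = sum(p[x] * code_len[x] for x in p)             # 1.75
entropy = -sum(px * math.log2(px) for px in p.values())   # 1.75
```

The code lengths match \(-\log_2 P(x)\) exactly here because every probability is a power of two, so the code achieves the entropy bound.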
Information gain
Given a set \( E \) of \( N \) training examples, if the number of examples with output feature \( Y = y_i \) is \( N_i \), then
\[
P(Y = y_i) = \frac{N_i}{N}
\]
(the point estimate)
Total information content for the set \( E \) is:
\[
I(E) = - \sum_{y_i \in Y} P(y_i) \log P(y_i)
\]
So, after splitting \( E \) up into \( E_1 \) and \( E_2 \) (size \( N_1, N_2 \)) based on input attribute \( X_i \), the information content
\[
I(E_{split}) = \frac{N_1}{N} I(E_1) + \frac{N_2}{N} I(E_2)
\]
and we want the \( X_i \) that maximises the information gain:
\[
I(E) - I(E_{split})
\]
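A small Python sketch (illustration only, using the notation above) of the gain computation for a binary split:

```python
# Sketch: information gain of splitting E into E1 and E2.
import math

def info(labels):
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def info_gain(labels, part1, part2):
    n = len(labels)
    i_split = (len(part1) / n) * info(part1) + (len(part2) / n) * info(part2)
    return info(labels) - i_split

E = ["reads"] * 9 + ["skips"] * 9      # I(E) = 1 bit
E1 = ["reads"] * 7 + ["skips"] * 2     # examples where X_i has one value
E2 = ["reads"] * 2 + ["skips"] * 7     # examples where X_i has the other
gain = info_gain(E, E1, E2)            # ≈ 0.236 bits
```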
Final return value
- Point estimate of \( Y \) (output features) over all examples
- Point estimate is just a prediction of target features
- mean value,
- median value,
- most likely classification,
- etc.
\( P(Y = y_i) = \frac{N_i}{N} \)
where
- \( N_i \) is the number of training samples at the leaf with \( Y = y_i \)
- \( N \) is the total number of training samples at the leaf.
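These point estimates are one-liners in Python (a sketch, not from the slides):

```python
# Sketch: possible point estimates at a leaf.
from collections import Counter
from statistics import mean, median

leaf_labels = [1, 1, 0, 1, 0]   # target values of the examples at the leaf
counts = Counter(leaf_labels)
p_estimate = {y: c / len(leaf_labels) for y, c in counts.items()}
most_likely = counts.most_common(1)[0][0]
# p_estimate == {1: 0.6, 0: 0.4}; most_likely == 1
# for regression leaves: mean(leaf_labels) == 0.6, median(leaf_labels) == 1
```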
Using a Priority Queue to Learn the DT
- The “vanilla” version we saw grows all branches for a node
- But there might be some branches that are more worthwhile to expand
- Idea: sort the leaves using a priority queue ranked by how much information can be gained with the best feature at that leaf
- always expand the leaf at the top of the queue
Example: user discussion board behaviors
| example | author | thread | length | where read | user's action |
|---|---|---|---|---|---|
| e1 | known | new | long | home | skips |
| e2 | unknown | new | short | work | reads |
| e3 | unknown | follow up | long | work | skips |
| e4 | known | follow up | long | home | skips |
| e5 | known | new | short | home | reads |
| e6 | known | follow up | long | work | skips |
| e7 | unknown | follow up | short | work | skips |
| e8 | unknown | new | short | work | reads |
| e9 | known | follow up | long | home | skips |
| e10 | known | new | long | work | skips |
| e11 | unknown | follow up | short | home | skips |
| e12 | known | new | long | work | reads |
| e13 | known | follow up | short | home | reads |
| e14 | known | new | short | work | reads |
| e15 | known | new | short | home | reads |
| e16 | known | follow up | short | work | reads |
| e17 | known | new | short | home | reads |
| e18 | unknown | new | short | work | reads |
| e19 | unknown | new | long | work | ? |
Decision tree learning: pseudocode V2
```plaintext
procedure DecisionTreeLearner(X, Y, E)
    DT ← pointEstimate(Y, E)                 // initial decision tree
    {X', ΔI} ← best feature and information gain for E
    PQ ← {(DT, E, X', ΔI)}     // priority queue of leaves ranked by ΔI
    while stopping criteria is not met do
        (S_i, E_i, X_i, ΔI_i) ← leaf at the head of PQ
        for each value x_i of X_i do
            E'_i = all examples in E_i where X_i = x_i
            {X_j, ΔI_j} = best feature and information gain for E'_i
            T_i ← pointEstimate(Y, E'_i)
            insert (T_i, E'_i, X_j, ΔI_j) into PQ according to ΔI_j
        end for
        S_i ← <X_i, T_1, ..., T_N>
    end while
    return DT
end procedure
```
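In Python the ranking by ΔI can be sketched with `heapq` (our assumption: `heapq` is a min-heap, so the gain is negated to pop the leaf with the largest information gain first):

```python
# Sketch: expansion order for leaves ranked by information gain, using
# Python's heapq (a min-heap) with negated gains.
import heapq

pq = []  # entries are (-information_gain, leaf_id)
heapq.heappush(pq, (-0.10, "leaf-a"))
heapq.heappush(pq, (-0.42, "leaf-b"))
heapq.heappush(pq, (-0.25, "leaf-c"))

order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
# order == ['leaf-b', 'leaf-c', 'leaf-a']
```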
Overfitting
Sometimes the decision tree is “too good” at classifying the training data, and will not generalise very well. This often occurs when there is not much data. Example: 3 attributes: bad weather, burnt toast, train late.
Example training data:
- A: true, true, true;
- A: false, false, false;
- A: true, false, false;
- A: true, true, true;
- A: true, false, true;
- A: false, true, false;
- A: false, false, false;
...
Some methods to avoid overfitting:
- **Regularization**: e.g. Prefer small decision trees over big ones, so add a 'complexity' penalty to the stopping criteria - stop early.
- **Pseudocounts**: add some data based on prior knowledge.
- **Cross validation**
Test set errors caused by:
- **Bias**: the error due to the algorithm finding an imperfect model.
- **Representation bias**: model is too simple.
- **Search bias**: not enough search.
- **Variance**: the error due to lack of data.
- **Noise**: the error due to the data depending on features not modeled or because the process generating the data is inherently stochastic.
- **Bias-variance trade-off**: Complicated model, not enough data (low bias, high variance). Simple model, lots of data (high bias, low variance).
Cross Validation
- Split training data into a training and a validation set.
- Use the validation set as a “pretend” test set.
- Optimise the decision maker to perform well on the validation set, not the training set.
- Can do this multiple times with different validation sets.
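A sketch of the basic split (hypothetical helper, not from the slides):

```python
# Sketch: split the training data into a training and a validation set.
import random

def train_validation_split(examples, validation_fraction=0.2, seed=0):
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * validation_fraction)
    return shuffled[n_val:], shuffled[:n_val]   # (training, validation)

data = list(range(20))
train, val = train_validation_split(data)
# len(train) == 16, len(val) == 4; different seeds give different
# validation sets, which is the basis for repeated cross validation
```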
Next:
- Supervised Learning II (Poole & Mackworth (2nd ed.) Chapter 7.3.2,7.5-7.6).
- Uncertainty (Poole & Mackworth (2nd ed.) Chapter 8).
Patterns of Co-evolution between Requirements and Source Code
Mona Rahimi and Jane Cleland-Huang
School of Computing
DePaul University, Chicago, IL, 60604, USA
m.rahimi@acm.org, jhuang@cs.depaul.edu
Abstract—Software systems are characterized by continual change which often occurs concurrently across various artifact types. For example, changes may be initiated at the requirements, design, or source code level. Understanding patterns of co-evolution across requirements and source code provides fundamental building blocks for automating software engineering activities such as trace link creation and evolution, requirements analysis and maintenance, refactoring detection, and the generation of task recommendations. However, prior work has focused almost entirely on the evolution of individual artifact types such as requirements, design, or source code. In this paper we document patterns of co-evolution that occur between requirements and source code. We illustrate the utility of the patterns for detecting missing requirements and for evolving requirements to source code trace links.
Index Terms—Source code evolution, requirements evolution, patterns, co-evolution
I. INTRODUCTION
Software systems are composed of many different types of artifacts including requirements, design, source code, test cases, and bug reports. In a healthy environment, all of these artifacts evolve in tandem as the system matures [8]. In this paper we focus on the co-evolution of requirements and source code. Requirements describe the desired functionality and behavior of the system [25] while the source code represents what the system actually does at any point in time. It is desirable for source code to accurately reflect the functionality specified in the requirements and for requirements to reflect all features implemented in the code. Gold-plating or inclusion of undocumented features should be avoided [1].
Understanding the co-evolution of requirements and code has ramifications for a number of important Software Engineering activities. For example, it helps trace links between requirements and source code to be accurately maintained, especially in safety-critical software systems [5]. Furthermore, a broad swathe of tools that assist developers by recommending specific tasks such as refactorings or updating design rationales [26] could be improved if both source code and requirements changes were taken into consideration.
Despite the potential benefits that could be realized through accurately co-evolving requirements and code, much of the prior work in this area has focused either on changes to requirements [15], [11] or on evolution of code [6]. For example, Martin Fowler lists over 70 different kinds of source code refactorings such as rename class, promote method, and hide method [24]. Similarly, Mäder et al., identified a set of change activities at the Object-Oriented design level which included refinements of associations, resolving many-to-many associations and association classes; moving, splitting, or merging an attribute, method, class, or package; splitting a class, component, or package; merging, and promoting a class to a component or an attribute to a class [17], [16]. On the requirements side, Cleland-Huang classified change events such as adding, modifying, inactivating, merging, refining, decomposing, and replacing requirements [4]. In each of these three cases, the emphasis was either on the evolution of source code, or on the evolution of requirements, but not on both.
In contrast, our focus in this paper is on the co-evolution of requirements and code. We identify five major classes of change, namely added functionality, deleted (obsolete) functionality, modified functionality, source code refactoring, and requirements modifications. The first three types of classes represent co-evolution of requirements and source code, while the last two represent independent changes within source code and requirements respectively. For each class of change we identify specific scenarios and document them in the form of 18 different patterns. These patterns provide the foundations needed to support activities such as automated trace link evolution, recommending the ‘next’ developer task, and identifying missing requirements.
We used several sources of information in order to formulate these patterns. First we included previously recognized patterns of change related to code refactorings [9], [27] and requirements change [4]. Second, we analyzed source code and requirements changes in several different open source systems. From these observations we propose the patterns described in this paper.
II. PATTERN STRUCTURE
We present the change patterns using the following structured format.
- **Name** and description of change class, e.g., Added functionality.
- **Triggers** of the change, e.g., Gold plating.
- **Impact on Requirements**: The potential and desired impact upon the requirements e.g., requirements reflect changes in code.
- **Impact on Code**: The observed refactoring that occurs in the code. For several of the change classes, multiple
patterns are identified. Each of these patterns is structured as follows:
- **Class level change** e.g. Added class.
- **Pattern description** e.g. The new class is added as a result of a new requirement being added and implemented.
- **Visual representation** of the pattern.
**Potential Usage:** A description of the ways in which identifying the change scenarios could benefit other software engineering activities.
In all sections below, \( V_i \) refers to the older version \( i \) before the change and \( V_{i+1} \) refers to the newer version \( i + 1 \) after the change. Similarly classes with subscript \( i \) refer to the classes in the older version \( i \) and classes with subscript \( i + 1 \) refer to the classes in the version after the change \( i + 1 \).
### III. Functionality Added
This first set of patterns describes changes related to the addition of new functionality. New functionality is ideally specified first in the requirements and then implemented in the code. However, in practice functionality may be implemented in the code first. Requirements may be updated after the fact, or may not be updated at all leading to missing requirements.
#### A. Triggers
- A new requirement is added and implemented in source code.
- A previously specified requirement is now implemented. Its status is updated accordingly.
- Needed functionality is added to the source code but not reflected in the requirements specification.
- Buggy code is fixed. Evidence is likely found in the bug repository and/or version control log. There is no direct impact upon requirements; however, the fixed code is expected to satisfy existing requirements.
- Unnecessary functionality is added to the source code and not reflected in the requirements specification. Gold plating has occurred.
#### B. Impact on Requirements
As a result of new functionality being added the requirements may be impacted in one of the following ways:
- A **new requirement** is created and added to the requirements specifications.
- The **status** of an existing requirement is changed.
- The requirements become **inconsistent** with the source code.
#### C. Impact on Code
The addition of new functionality can impact source code in the following three primary ways, as depicted in Figure 1:
- **Added Class:** New functionality may be introduced at the class-level granularity through the creation of a new class.
- **Added Method:** New functionality may appear at the method-level granularity in the form of a new method.
- **Modified Method:** New functionality may be introduced at a lower level of granularity through modifying the body of a method.
#### D. Potential Usage
Recognizing the addition of new functionality can be useful for supporting a number of software engineering activities:
- New trace links may need to be added between the new requirement and the code.
- If it is determined that new functionality has been added to the code without updating the requirements specification, the user could be prompted to provide the missing requirement.
- Design rationales may need to be updated to reflect design decisions behind the new functionality.
- In safety critical systems, the addition of new functionality triggers the need for a new safety analysis.
### IV. Functionality Deleted
Features may be deleted from code for several different reasons. Ideally, changes in the code are initiated by changes in requirements. However, in practice, features may also be removed from the code without requirements being updated.
#### A. Triggers
- A feature specified in the requirements is no longer needed or desired. Its status is changed to ‘obsolete’ or the requirement is removed from the specification. The related feature is deleted from the source code.
- Required functionality is accidentally or deliberately removed from the source code. Requirements are no longer (partially or completely) satisfied. This problem may occur as a side-effect of code maintenance efforts, especially if developers are not kept informed of the relationships between code and requirements.
- Gold plating is removed from the implementation. Individual requirements are unaffected, however, requirements coverage [28] increases.
---
Fig. 1: Three change patterns that might be observed in the source code when functionality is added. Thicker dashed circles represent added class/method while thinner ones represent modified method.
Fig. 2: Three change patterns that might be observed in the source code when functionality is deleted. Thicker dashed circles represent deleted class/method while thinner ones represent modified method.
B. Impact on Requirements
When the change in code is initiated by a change in requirements, the deletion of code preserves or achieves synchrony between requirements and code. When the change is initiated in the code, the requirements must be updated accordingly.
C. Impact on Code
The removal and/or obsolescence of functionality can be observed in the source code in three primary ways. These are depicted in Figure 2:
- **Deleted Class**: The deletion of one or more classes.
- **Deleted Method**: The deletion of one or more methods.
- **Modified Method**: Modifications to one or more methods.
D. Potential Usage
Recognizing when features are deleted and/or requirements are made obsolete is important for the following reasons:
- Existing trace links may need to be made obsolete or deleted.
- The status of an existing requirement may need to be changed to obsolete.
- In a safety critical environment, safety cases may need to be checked to ensure that the deleted functionality does not impact safety.
V. FUNCTIONALITY IS MODIFIED
Modifying a requirement could result in features being added, deleted or modified in the code. We do not document this as a unique set of patterns but treat it as a special combination of additions and deletions of classes and methods, and of modifications to methods.
A. Triggers
- An existing requirement is modified and source code is updated accordingly.
- Source code is modified to reflect required changes that have been communicated informally. However, the original requirement(s) is not updated.
- Features that were previously implemented in the source code, perhaps due to gold-plating, are removed. There is no impact upon requirements.
B. Impact on Requirements
Ideally, modifications are triggered by changes in requirements. However, if source code is changed first, then requirements will need to be updated to maintain consistency.
C. Impact on Code
Modifications can result in a broad range of additions, deletions, and modifications to code at the class or method level.
D. Potential Usage
Same as for adding and deleting functionality. However, the level of difficulty of recognizing a series of additions and deletions and of subsequently performing related tasks can increase when changes are a result of modifications – causing problems such as architectural degradation [10] and bad code smells [2]. Therefore, the ability to deconstruct a series of modification edits into their constituent parts, by reverse engineering source code refactorings and requirements refinements is particularly important. Specific usage scenarios include:
- Updating trace links automatically
- Keeping developers informed of underlying functional requirements and quality concerns as they modify existing source code.
- Monitoring the impact of bug fixes versus the introduction of modified features upon the quality of the design.
VI. SOURCE CODE IS REFACTORED
Martin Fowler defines refactoring as “a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior” [9]. While refactoring does not directly impact requirements, it is important to understand refactoring behavior and to differentiate it from behavior in which requirements and source code co-evolve.
A. Triggers
Refactoring may be triggered in a number of ways such as:
- Recognition that a bad smell has been introduced, for example: redundant code, excessive use of switch statements, excessive complexity.
- A bug fix that requires the developer to refactor in order to increase understanding of the code.
- As part of a code review.
- As part of a regularly scheduled activity (e.g., a developer dedicates 10% of her time to refactoring efforts).
B. Impact on Requirements
While refactoring does not directly impact requirements, it may be triggered by the degradation of code that occurs when features are added or modified. The challenge is to differentiate between pure refactorings and other change scenarios which necessitate requirements co-evolution.
C. Impact on Code
Fowler has identified over 70 different types of refactorings. Some of the major ones are visually depicted in Figure 3 and include:
- **Extracted Class**: A new class is added to source code. The new class has been extracted from an existing one.
- **Merged Classes**: A new class is added to source code. The new class represents a merging of existing ones.
- **Promoted Method**: A new class is added to source code. The new class was created as a result of promoting a method.
- **Extracted Subclass**: A new class is added to source code. The new class is an extracted subclass.
- **Extracted Superclass**: A new class is added to source code. The new class is an extracted superclass.
- **Divided Class**: A class is deleted from source code. The deleted class has been divided into new classes.
- **Merged Classes**: Classes are deleted from source code. The deleted classes have been merged into a new class.
- **Extracted Method**: A new method is added to source code. The new method has been extracted from an existing one.
- **Merged Methods**: A new method is added to source code. The new method represents a merging of old ones.
- **Divided Method**: New methods are added to source code. The new methods represent a division of an old method.
- **Divided Method**: A method is deleted from source code. The deleted method has been divided into new methods.
- **Merged Methods**: Methods are deleted from source code. The deleted methods have been merged into a new method.
D. Potential Usage
Refactoring source code, and reverse engineering refactorings, has numerous benefits [27].
- Ability to capture and replay changes in order to understand how and why a system has evolved to its current state.
- Ability to measure the impact of various types of changes upon system quality.
- Ability to differentiate between changes which are pure refactorings, bug fixes, or which represent real changes in functionality and should be reflected in changes to requirements. In the latter case, the benefits are described in the previously described patterns for add functionality, delete functionality, and modify functionality.
VII. ONLY REQUIREMENTS ARE CHANGED
It is common for requirements to be specified in advance of code implementation. For example in agile projects user stories are often written and then stored into a backlog. They are only implemented when selected for a specific iteration. Similarly, in other projects, an initial set of requirements are typically elicited and specified during a requirements phase. Requirements are then carefully prioritized and scheduled for release in a specific version or iteration. As a result, requirements may be added, deleted, and/or modified without any immediate impact upon the code.
A. Triggers
All activities are characterized by requirements-only changes, typically discovered through a requirements elicitation process [25]. Such requirements-only changes typically result in the creation of an initial software requirements specification.
B. Impact on Requirements
The following patterns of change were identified in prior work [4].
- A new requirement is added.
- A requirement is modified.
- Two or more requirements are merged.
- A new requirement is derived from an existing one.
- A requirement is replaced with another requirement.
TABLE 1: Requirement specifications for Domain Analysis application
<table>
<thead>
<tr>
<th>Req</th>
<th>Requirement Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>R1</td>
<td>A document describing concepts in the domain shall be imported.</td>
</tr>
<tr>
<td>R2</td>
<td>A set of non-domain related documents shall be imported.</td>
</tr>
<tr>
<td>R3</td>
<td>All words in the documents shall be stemmed to their root forms.</td>
</tr>
<tr>
<td>R4</td>
<td>The frequency of all nouns and noun phrases shall be computed.</td>
</tr>
<tr>
<td>R5</td>
<td>The domain specificity of each noun and noun phrase will be computed.</td>
</tr>
<tr>
<td>R6</td>
<td>A part-of-speech tagger will be used to tag all words.</td>
</tr>
<tr>
<td>R7</td>
<td>A list of domain terms shall be printed.</td>
</tr>
<tr>
<td>R8</td>
<td>Sort the output by DESCENDING Domain Specificity with subset of DESCENDING Frequency (ADDED BY DEVELOPER)</td>
</tr>
<tr>
<td>R9</td>
<td>Instead of listing all domain files individually – just point to a directory and read all files from the directory (ADDED BY DEVELOPER)</td>
</tr>
</tbody>
</table>
C. Impact on Code
There is no immediate impact on the code. However, architectural design decisions may be made early in the design process and may constrain future coding decisions [20], [22].
D. Potential Impact
Depending upon the development process and philosophy, the discovery and specification of a new requirement could potentially lead to a re-evaluation of the architectural design. Changes in unimplemented requirements could lead to changes in the release plan.
VIII. USING THE CO-EVOLUTION CHANGE PATTERNS
The patterns presented in this paper provide the foundations for automating numerous software engineering activities. However, in order to accomplish this goal we need the ability to detect when a specific change pattern has occurred.
A. Detecting Change Patterns
We provide a brief discussion of how change patterns could be detected, and leave the implementation and evaluation for future work.
- **Changes in code** can be identified using one of the many refactoring detection tools [7], [13], [23].
- **Changes in Requirements** can be identified by analyzing deltas between versions of requirements and leveraging structural information (i.e., requirements numbering schemes) and text analysis to determine how the requirements have changed.
It is outside the scope of this workshop paper to describe these techniques in more detail or to demonstrate their use for recognizing various types of change. However, all of the above techniques have been demonstrated in prior work to detect source code or requirements changes, and/or to discover the relationships between them.
Fig. 4: Changes and their impact on code-to-requirement trace links done by a Java developer including: Addition of two classes in gray; Addition of 5 methods in bold and underlined; Deletion of crossed-out method; Change of five method signatures in bold and underlined.
B. Example of Evolutionary Change
For illustrative purposes, Figure 4 shows an example of refactoring changes made by a Java developer for an application called Domain Term Extraction and also the impact of these changes on code-to-requirement trace links. The refactored Java application consisted of 237 lines of Java code, four classes, 14 methods, nine requirements, and 11 requirements-to-code trace links. The application reads a set of domain-related files (e.g., documents about electronic health care records) and a set of general documents (e.g., books about business, astronomy, romance, etc.), performs natural language part-of-speech tagging using QTag, and then outputs a list of domain-specific nouns and noun phrases. Refactoring changes illustrated in Figure 4 include: addition of gray classes; addition of bold and underlined methods; elimination of crossed-out methods; and the creation of new associations (dependencies) represented by solid arrows. In Figure 4, small black arrows represent trace links between classes and requirements. The ones marked with a check mark should be retained after the change, while those marked with an X should be made obsolete. Finally, bolded arrows without any annotation represent new links. Table I lists requirements for the Domain Analysis application. The two final requirements (R8 and R9) are added and implemented in the code by the Java developer during modification.
C. Maintaining Synchrony between Requirements and Source Code
The problem of loss of synchronization between requirements and source code occurs in both directions. In open-source systems, stakeholders often complain that they lose track of which feature requests have been implemented. In all types of system, code is sometimes updated without making corresponding changes to the requirements. This affects future impact analysis, scheduling, and coverage assessment activities [8]. By detecting non-refactoring changes in source code which lack corresponding changes in requirements, a recommender system could prompt the user to update the requirements accordingly.
The patterns therefore provide a fundamental building block for generating requirements recommendations. To clarify further, in our illustrative example of Figure 4, we could either monitor the development environment or perform an offline comparison between the old and new versions of code in order to detect changes in the code. The granularity at which the change is detected depends upon instrumentation strategies.
If the system detects that the new class TermComparator.java (class-level granularity) or the new method compare() (method-level granularity), responsible for sorting, has been added, it must determine whether the change falls under the category of “Added Functionality” or “Refactoring” using a combination of the techniques described in Section VIII-A. If a non-refactoring change is identified but no related requirement is found, the system could recommend that the developer add a relevant requirement describing the need to sort data. An example is provided in R8 (in Table I).
D. Evolving Trace Links Automatically
A second useful application is the automated evolution of trace links. We provide two examples, covering both class and method level granularity. The first example is at class-level granularity. We assume that an existing set of pre-change trace links exists between classes and requirements while in the second example changes are detected at the method level and the assumption is that an initial set of trace links is provided between methods and requirements.
Class-level granularity: In Figure 4, after the system detects that a new class Program.java has been extracted from the old class NounsExtractor.java, the system can recommend that the developer creates two new trace links between related requirements $R_1$ and $R_2$ and the new class.
Method-level granularity: In this case, after detecting that the method getFileFromDirectory() is promoted to class Program.java, the system can recommend new trace links from the related requirements $R_1$ and $R_2$ to the method getFileFromDirectory().
IX. RELATED WORK
Most of the work in the area of code evolution is related to source code refactoring [14], [19], [18]. Martin Fowler lists over 70 different kinds of refactorings such as rename class, promote method, and hide method [24]. While the primary purpose is to help developers maintain and modify code, refactoring catalogs provide a useful vocabulary for describing changes in source code. Furthermore, several tools exist for detecting after-the-fact refactorings [12], [21]. Such tools can be used to capture the intent behind code changes by differentiating between bug fixes and refactorings, capturing and replaying changes to understand how and why a system has evolved to its current state, and for evaluating the impact of various changes upon system quality [27]. Such approaches focus almost entirely on changes to the code and do not take the co-evolution of requirements into account.
Cleland-Huang et al. identified patterns of requirements change [4], and integrated them into Event-Based Traceability (EBT) [3] which allowed users to register their interest in specific requirements and receive notification messages when various change events occurred. Specifically, the change events of adding, modifying, inactivating, merging, refining, decomposing, and replacing requirements were identified.
In more closely related work, Mäder et al. developed a tool and related algorithms for the semi-automated maintenance of trace links between requirements and UML class diagrams [17], [16]. They identified a set of change activities that occur in UML class diagrams including refining unspecified associations into directed associations, or into aggregation or composition associations; resolving many-to-many associations and association classes; moving, splitting, or merging an attribute, method, class, or package; splitting a class, component, or package; merging, and promoting a class to a component or an attribute to a class. They captured these activities through a series of 21 rules with 67 variants. Many of the rules are similar to Fowler’s refactorings, but applied to UML design instead of to code. However, the work focused on evolving existing trace links to UML classes, and did not explore patterns of co-evolution between requirements and code.
To the best of our knowledge, this paper is the first attempt to document patterns of co-evolution between requirements and source code.
X. CONCLUSION AND FUTURE WORK
In this paper we have identified and explicitly modeled 18 patterns of co-evolution between requirements and source code. These patterns provide the foundations for advancing several different research areas including automating the evolution of trace links, integrating requirements knowledge into the refactoring process, maintaining a closer integration between requirements and source code, and ultimately keeping requirements and source code synchronized. Documenting common types of co-evolution changes as patterns increases accessibility to other researchers.
While the identified patterns focus upon the object-oriented domain, we believe that many of the patterns are also relevant in structured domains at the file and function level. We leave examining this notion to future work.
ACKNOWLEDGMENTS
The work in this paper was partially funded by the US National Science Foundation Grant CCF:1319680.
REFERENCES
Last Time: Synchronization
• Operating systems have a variety of multithreading issues
• Frequently have shared state manipulated by multiple threads
• Usually solve this problem using some kind of mutual-exclusion mechanism, e.g. disabling interrupts, mutexes, semaphores, etc.
• Many examples of shared state within the OS kernel
• Scheduler ready-queue, other queues (accessed concurrently on multicore systems)
• Filesystem cache (shared across all processes on the system)
• Virtual memory mapping (used by fault handlers and trap handlers)
• Frequently managed in linked lists (although other more sophisticated structures are often used)
• Frequently this state is read much more than it’s written
Example: `vm_area_struct` Lists
- Example: `vm_area_struct` list used for process memory
- Nodes contain many values describing memory regions
- Mostly used to resolve page faults and protection faults
- Also modified by trap handler, e.g. `mmap()`, `sbrk()` functions
Example Problem: Linked Lists
• How would we implement a linked list that supports concurrent access from multiple kernel control paths?
• Consider a simplified list type:
• Each element contains several important fields, and a pointer to next node in the list
```c
typedef struct list_node {
    int a;
    int b;
    struct list_node *next;
} list_node;
list_node *head;
```
• Example list contents:
Example Problem: Linked Lists (2)
- Operations on our linked list:
- Iterate over the list nodes, examining each one
- e.g. to find relevant data, or to find a node that needs to be modified
- Insert a node into the linked list
- Modify a node in the linked list
- Remove a node from the linked list
- All of these operations are straightforward to implement
- Can imagine other similar operations, variants of the above
```c
typedef struct list_node {
    int a;
    int b;
    struct list_node *next;
} list_node;
list_node *head;
```
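The operations above can be sketched, as a single-threaded baseline, roughly as follows (`list_find` and `list_insert` are illustrative names, not taken from the slides; there is no synchronization yet):

```c
#include <stdlib.h>

typedef struct list_node {
    int a;
    int b;
    struct list_node *next;
} list_node;

list_node *head;

/* Iterate over the list, returning the node whose 'a' matches value. */
list_node *list_find(int value) {
    for (list_node *p = head; p != NULL; p = p->next)
        if (p->a == value)
            return p;
    return NULL;
}

/* Insert a new node at the front of the list. */
void list_insert(int a, int b) {
    list_node *new = malloc(sizeof(list_node));
    new->a = a;
    new->b = b;
    new->next = head;
    head = new;
}
```

Modify and remove follow the same traversal pattern; all of them break once multiple threads touch the list concurrently, as the next slides show.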
Linked List and Concurrent Access
• Should be obvious that our linked list will be corrupted if manipulated concurrently by different threads
• Example:
• One thread is traversing the list, searching for the node with \( a = 12 \), so it can retrieve the current value of \( b \)
• Another thread is inserting a new node into the list
Linked List and Concurrent Access (2)
• This scenario can fail in many different ways
• Writer-thread $T_2$ must perform several operations:
```c
list_node *new = malloc(sizeof(list_node));
new->a = 51;
new->b = 24;
new->next = p->next;
p->next = new;
```
• Really have no guarantees about how the compiler will order this. Or the CPU, for that matter.
Linked List and Concurrent Access (3)
- Operations that writer-thread T<sub>2</sub> must perform:
```c
list_node *new = malloc(sizeof(list_node));
new->a = 51;
new->b = 24;
new->next = p->next;
p->next = new;
```
- These operations form a critical section in our code: must enforce exclusive access to the affected nodes during these operations
Fixing Our Linked List
How do we avoid concurrency bugs in our linked list implementation?
An easy solution: use a single lock to guard the entire list
- Any thread that needs to read or modify the list must acquire the lock before accessing head
Design this solution to work from multiple kernel control paths, e.g.
- On a single-core system, trap handler and interrupt handlers simply disable interrupts while accessing the list
- On a multi-core system, use a combination of spin-locks and disabling interrupts to protect access to the list
```c
typedef struct list_node {
    int a;
    int b;
    struct list_node *next;
} list_node;
list_node *head;
lock_t list_lock;
```
Fixing Our Linked List (2)
• How do we avoid concurrency bugs in our linked list implementation?
• An easy solution: use a single lock to guard the entire list
• Any thread that needs to read or modify the list must acquire the lock before accessing head
```c
typedef struct list_node {
    int a;
    int b;
    struct list_node *next;
} list_node;
list_node *head;
lock_t list_lock;
```
• Why must readers also acquire the lock before reading??
• Only way for the writer to ensure that readers won’t access the list concurrently, while it’s being modified 😞
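A minimal user-space sketch of this single-lock discipline, using a POSIX mutex in place of the slides’ `lock_t` (kernel code would use spin-locks and/or disable interrupts instead; `list_lookup` is an illustrative name):

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct list_node {
    int a;
    int b;
    struct list_node *next;
} list_node;

list_node *head;
pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Reader: even a pure read must hold the lock, or it may observe
 * a half-completed insertion. Returns 1 and fills *b_out on success. */
int list_lookup(int value, int *b_out) {
    int found = 0;
    pthread_mutex_lock(&list_lock);
    for (list_node *p = head; p != NULL; p = p->next) {
        if (p->a == value) {
            *b_out = p->b;
            found = 1;
            break;
        }
    }
    pthread_mutex_unlock(&list_lock);
    return found;
}

/* Writer: the entire insertion is one critical section. */
void list_insert(int a, int b) {
    list_node *new = malloc(sizeof(list_node));
    new->a = a;
    new->b = b;
    pthread_mutex_lock(&list_lock);
    new->next = head;
    head = new;
    pthread_mutex_unlock(&list_lock);
}
```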
Linked List: A Single Lock
• What’s the obvious issue with this approach?
• Readers shouldn’t ever block other readers!
• (we know the list will mostly be accessed by readers anyway…)
• It’s okay if writers hold exclusive access to the list while modifying it
• (it would be better if multiple writers could concurrently modify independent sections of the list)
• This approach has very high lock contention
• Threads spend a lot of time waiting to acquire the lock so they can access the shared resource
• No concurrent access is allowed to the shared resource
```c
typedef struct list_node {
    int a;
    int b;
    struct list_node *next;
} list_node;
list_node *head;
lock_t list_lock;
```
Linked List: Improving Concurrency
• Ideally, readers should never block other readers
• (we will accept the behavior that writers block everybody, for now)
• How can we achieve this?
• Can use a read/write lock instead of our simple lock
• Multiple readers can acquire shared access to the lock: readers can access the shared resource concurrently without any issues
• Writers can acquire exclusive access to the lock
• Two lock-request operations:
• read_lock(rwlock_t *lock) – used by readers
• write_lock(rwlock_t *lock) – used by writers
Linked List: Read/Write Lock (2)
• Using a read/write lock greatly increases concurrency and reduces lock contention
• Still a few annoying issues:
• Readers still must acquire a lock every time they access the shared resource
• All threads incur a certain amount of lock overhead when they acquire the lock (in this case, CPU cycles)
• And, it turns out this overhead can be hundreds of CPU cycles, even for efficiently implemented read/write locks!
```c
typedef struct list_node {
    int a;
    int b;
    struct list_node *next;
} list_node;
list_node *head;
rwlock_t list_lock;
```
Linked List: Read/Write Lock (3)
- Using a read/write lock greatly increases concurrency and reduces lock contention
- Still a few annoying issues:
- Also, writers still block everybody
- Can we come up with a way to manipulate this linked list that doesn’t require writers to acquire exclusive access?
### Linked List: Multiple Locks
- One approach for reducing lock contention is to decrease the **granularity** of the lock
- i.e. how much data is the lock protecting?
- **Idea**: Introduce more locks, each of which governs a smaller region of data
- For our linked list, could put a read/write lock in each node
- Threads must acquire many more locks to work with the list, which means that the locking overhead goes way up 😞
- But, writers can lock only the parts of the list they are changing, which means we can reduce lock contention/increase concurrency
```c
typedef struct list_node {
    rwlock_t node_lock;
    int a;
    int b;
    struct list_node *next;
} list_node;
list_node *head;
```
Linked List: Multiple Locks (2)
- We need one more read/write lock, to guard the head pointer
- Need to coordinate accesses and updates of head so that a thread doesn’t follow an invalid pointer!
- If a thread needs to change what head points to, it needs to protect this with a critical section
- Now we have all the locks necessary to guard the list when it’s accessed concurrently
```c
typedef struct list_node {
    rwlock_t node_lock;
    int a;
    int b;
    struct list_node *next;
} list_node;
list_node *head;
rwlock_t head_lock;
```
Linked List: Multiple Locks (3)
- With multiple locks in our structure, must beware of the potential for deadlock...
- Can easily avoid deadlock by requiring that all threads lock nodes in the same total order
- Prevent “circular wait” condition
- This is easy – it’s a singly linked list! Always lock nodes in order from head to tail.
- This makes it a bit harder on writers
- How does a writer know whether to acquire a read lock or a write lock on a given node?
- Need to acquire a read lock first, examine the node, then release and reacquire a write lock if the node must be altered.
```c
typedef struct list_node {
    rwlock_t node_lock;
    int a;
    int b;
    struct list_node *next;
} list_node;
list_node *head;
rwlock_t head_lock;
```
Linked List: Multiple Locks, Example
- $T_1$ acquires a read-lock on head so it won’t change.
- Then $T_1$ follows head to the first node, and acquires a read lock on this node so it won’t change.
- (and so forth)
- This process of holding a lock on the current item, then acquiring a lock on the next item before releasing the current item’s lock, is called **crabbing**
- As long as $T_1$ holds a read lock on the current node, and acquires read-lock on the next node before visiting it, it won’t be affected by other threads.
Linked List: Multiple Locks, Example (2)
- $T_2$ behaves in a similar manner:
- $T_2$ acquires a read-lock on head so it won’t change.
- Then $T_2$ follows head to the first node, and acquires a read lock on this node so it won’t change.
- When $T_2$ sees that the new node must go after the first node, it can acquire a write-lock on the first node
- Ensures its changes won’t become visible to other threads until lock is released
- After $T_2$ inserts the new node, it can release its locks to allow other threads to see the changes
Linked List: Holding Earlier Locks
• A critical question: **How long should each thread hold on to the locks it previously acquired in the list?**
• If a thread releases locks on nodes after it leaves them, then other threads might change those nodes
• Does the thread need to be aware of values written by other threads, that appear earlier in the list?
• What if another thread completely changes the list earlier on?
• If these scenarios are acceptable, then threads can release locks as soon as they leave a node
• (Often, it’s acceptable!)
Linked List: Holding Earlier Locks (2)
• A critical question: **How long should each thread hold on to the locks it previously acquired in the list?**
• If such scenarios are unacceptable, threads can simply hold on to all locks until they are finished with the list
• Ensures that each thread will see a completely consistent snapshot of the list until the thread is finished with its task
• Even simple changes in how locks are managed can have significant implications…
Lock-Based Mutual Exclusion
- **Lock-based approaches have a lot of problems**
- Have to make design decisions about what granularity of locking to use
- Coarse-granularity locking = lower lock overhead, but writers block everyone
- Fine-granularity locking = much higher lock overhead, but can achieve more concurrency with infrequent writers in the mix
- More locks means more potential for deadlock to occur
- Locks make us prone to other issues like priority inversion (more on this in a few lectures)
- Can’t use locks in interrupt context anyway, unless we are very careful in how they are used!
Mutual Exclusion
- What is the fundamental issue we are trying to prevent?
- Different threads seeing (or creating) inconsistent or invalid state
- Earlier example: writer-thread $T_2$ inserting a node
```c
list_node *new = malloc(sizeof(list_node));
new->a = 51;
new->b = 24;
new->next = p->next;
p->next = new;
```
- A big part of the problem is that we can’t guarantee the order or interleaving of these operations
- Locks help us sidestep this issue by guarding all the operations
Order of Operations
• What if we could impose a more intelligent ordering?
• When $T_2$ inserts a node:
• **Step 1:** Prepare the new node, but **don’t** insert it into the list yet
```c
list_node *new = malloc(sizeof(list_node));
new->a = 51;
new->b = 24;
new->next = p->next;
```
• Last three operations can occur in any order. No one cares, because they aren’t visible to anyone.
• $T_1$ can go merrily along; $T_2$ hasn’t made any visible changes yet.
Order of Operations (2)
- What if we could impose a more intelligent ordering?
- When $T_2$ inserts a node:
- Step 2: Atomically change the list to include the new node
```c
p->next = new;
```
- This is a single-word write. If the CPU can perform this atomically, then threads will either see the old version of the list, or the new version.
- Result: Reader threads will never see an invalid version of the list!
- For this to work, we must ensure these operations happen in the correct order.
Read-Copy-Update
- This mechanism is called Read-Copy-Update (RCU)
- A lock-free mechanism for providing a kind of mutual exclusion
- All changes to shared data structures are made in such a way that concurrent readers never see intermediate state
- They either see the old version of the structure, or they see the new version.
- Changes are broken into two phases:
- If necessary, a copy is made of specific parts of the data structure. Changes take place on the copy; readers cannot observe them.
- Once changes are complete, they are made visible to readers in a single atomic operation.
- In RCU, this atomic operation is always changing a pointer from one value to another value
- e.g. $T_2$ performs `p->next = new`, and the change becomes visible
Publish and Subscribe
• It’s helpful to think of changing the `p->next` pointer in terms of a publish/subscribe problem
• $T_2$ operations:
• **Step 1:** Prepare the new node
```c
list_node *new = malloc(sizeof(list_node));
new->a = 51;
new->b = 24;
new->next = p->next;
```
• **Step 2:** Atomically change the list to include the new node
```c
p->next = new;
```
• Before the new node is **published** for others to access, all initialization must be completed
• We can enforce this with a write memory barrier
• Enforce that all writes before the barrier are completed before any writes after the barrier are started
• (Also need to impose an optimization barrier for the compiler…)
"Publish and Subscribe"
Publish and Subscribe (2)
• Implement this as a macro:
```c
/* Atomically publish a value v to pointer p. */
/* smp_wmb() also includes optimization barrier. */
#define rcu_assign_pointer(p, v) ({ \
    smp_wmb();                      \
    (p) = (v);                      \
})
```
• IA32 and x86-64 ISAs both guarantee that as long as the pointer-write is properly word-aligned (or dword-aligned), it will be atomic.
(Even on multiprocessor systems!)
• \(T_2\) operations become:
```c
list_node *new = malloc(sizeof(list_node));
new->a = 51;
new->b = 24;
new->next = p->next;
/* Publish the new node! */
rcu_assign_pointer(p->next, new);
```
Publish and Subscribe (3)
- $T_1$ needs to see the “current state” of the `p->next` pointer (whatever that value might be when it reads it)
- Example: $T_1$ is looking for node with a specific value of $a$
```c
list_node *p = head;
int b = -1;
while (p != NULL) {
    if (p->a == value) {
        b = p->b;
        break;
    }
    p = p->next;
}
return b;
```
- When $T_1$ reads `head`, or `p->next`, it is subscribing to the most recently published value
Publish and Subscribe (4)
- Example: $T_1$ is looking for node with a specific value of $a$:
```c
list_node *p = head;
int b = -1;
while (p != NULL) {
    if (p->a == value) {
        b = p->b;
        break;
    }
    p = p->next;
}
return b;
```
- Must ensure that the read of `p->next` is completed before any accesses to `p->a` or `p->b` occur
- We could use a read memory barrier, but IA32 already ensures that this occurs, automatically
- (Not all CPUs ensure this… DEC ALPHA CPU, for example…)
Publish and Subscribe (5)
• Again, encapsulate this “subscribe” operation in a macro:
```c
/* Atomically subscribe to a pointer p's value. */
#define rcu_dereference(p) ({           \
    typeof(p) _value = ACCESS_ONCE(p);  \
    smp_read_barrier_depends();         \
    (_value);                           \
})
```
• On IA32, smp_read_barrier_depends() is a no-op
• On DEC ALPHA, it’s an actual read barrier
• ACCESS_ONCE(x) is a macro that ensures p is read
directly from memory, not a register
• (Usually generates no additional instructions)
• Subscribing to a pointer is very inexpensive. Nice!
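For comparison, modern C11 atomics can express the same publish/subscribe pairing portably: a release store plays the role of `rcu_assign_pointer()` and an acquire load plays the role of `rcu_dereference()`. (Acquire is slightly stronger than the dependency ordering RCU actually relies on; this is an illustrative analogue, not the kernel macros, and `publish_front`/`lookup` are hypothetical names.)

```c
#include <stdatomic.h>
#include <stdlib.h>

typedef struct node node;
struct node {
    int a;
    int b;
    _Atomic(node *) next;
};

_Atomic(node *) head;

/* Publish: the release store guarantees all initialization of n is
 * visible before the pointer itself becomes visible. Assumes a single
 * writer, as in the RCU examples. */
void publish_front(int a, int b) {
    node *n = malloc(sizeof(node));
    n->a = a;
    n->b = b;
    atomic_init(&n->next, atomic_load_explicit(&head, memory_order_relaxed));
    atomic_store_explicit(&head, n, memory_order_release);
}

/* Subscribe: each acquire load pairs with the release store above,
 * so a reader never sees a partially initialized node. */
int lookup(int value, int *b_out) {
    for (node *p = atomic_load_explicit(&head, memory_order_acquire);
         p != NULL;
         p = atomic_load_explicit(&p->next, memory_order_acquire)) {
        if (p->a == value) {
            *b_out = p->b;
            return 1;
        }
    }
    return 0;
}
```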
Publish and Subscribe (6)
• Updated version of $T_1$ code:
```c
list_node *p = rcu_dereference(head);
int b = -1;
while (p != NULL) {
    if (p->a == value) {
        b = p->b;
        break;
    }
    p = rcu_dereference(p->next);
}
return b;
```
• So far, this is an extremely inexpensive mechanism!
• Writers must sometimes perform extra copying, and use a write memory barrier.
• But, we expect writes to occur infrequently. And, writers don’t block anyone anymore. (!!!)
• Usually, readers incur zero overhead from RCU. (!!!)
Modifying a List Node
• Another example: change node with \( a = 19 \); set \( b = 15 \)
• Assume pointer to node being changed is in local variable \( p \)
• Assume pointer to previous node is in \( \text{prev} \)
• (Also, assume \( \text{rcu_dereference}() \) was used to navigate to \( p \))
• Can’t change the node in place; must make a copy of it
```c
copy = malloc(sizeof(list_node));
copy->a = p->a;
copy->b = 15;
copy->next = p->next;
rcu_assign_pointer(prev->next, copy);
```
Modifying a List Node (2)
• Since `rcu_assign_pointer()` atomically publishes the change, readers must fall into one of two categories:
• Readers that saw the old value of `prev->next`, and therefore end up at the old version of the node
• Readers that see the new value of `prev->next`, and therefore end up at the new version of the node
• All readers will see a valid version of the shared list
• And, we achieve this with much less overhead than with locking!
• (The writer has to work a bit harder…)
[Diagram: head → (a=5, b=31) → (a=19, b=2) → (a=12, b=6); prev points at the head node, p at the a=19 node, and the writer's copy (a=19, b=15) is spliced in via prev->next]
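The splice above can be exercised in a single-threaded toy program. This is only a sketch: with no concurrent readers, `rcu_assign_pointer()` is stubbed as a plain assignment (the write barrier only matters under concurrency), and `make_node()` is a helper invented here.

```c
#include <stdlib.h>

typedef struct list_node {
    int a, b;
    struct list_node *next;
} list_node;

/* Stub: in real RCU this would issue smp_wmb() before the store. */
#define rcu_assign_pointer(p, v) ((p) = (v))

static list_node *make_node(int a, int b, list_node *next) {
    list_node *n = malloc(sizeof(list_node));
    n->a = a; n->b = b; n->next = next;
    return n;
}

/* Build head -> (5,31) -> (19,2) -> (12,6), then replace the a == 19
 * node with a copy whose b == 15. Returns the list head. */
static list_node *demo_update(void) {
    list_node *tail = make_node(12, 6, NULL);
    list_node *p    = make_node(19, 2, tail);
    list_node *head = make_node(5, 31, p);
    list_node *prev = head;

    list_node *copy = malloc(sizeof(list_node));
    copy->a = p->a;          /* a stays 19 */
    copy->b = 15;            /* the update  */
    copy->next = p->next;
    rcu_assign_pointer(prev->next, copy);

    free(p);  /* safe here only because there are no readers! */
    return head;
}
```

In the real concurrent setting, the final `free(p)` is exactly the step that must wait for the grace period, as the next slides discuss.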
Modifying a List Node (3)
- Are we finished?
```c
copy = malloc(sizeof(list_node));
copy->a = p->a;
copy->b = 15;
copy->next = p->next;
rcu_assign_pointer(prev->next, copy);
```
- Thread must deallocate the old node, or else there will be a memory leak
```c
free(p);
```
- Problems?
- If a reader saw the old version of `prev->next`, they may still be using the old node!
Reclaiming Old Data
• The hardest problem in RCU is ensuring that old data is only deleted after all readers have finished with it
• How do we tell that all readers have actually finished?
• Define the concept of a read-side critical section:
• A reader enters a read-side critical section when it reads an RCU pointer (rcu_dereference())
• A reader leaves the read-side critical section when it is no longer using an RCU pointer
• We require that readers explicitly denote the start and end of read-side critical sections in their code:
• rcu_read_lock() starts a read-side critical section
• rcu_read_unlock() ends a read-side critical section
Read-Side Critical Sections
- Update T₁ to declare its read-side critical section:
```c
rcu_read_lock(); /* Enter read-side critical section */
list_node *p = rcu_dereference(head);
int b = -1;
while (p != NULL) {
if (p->a == value) {
b = p->b;
break;
}
p = rcu_dereference(p->next);
}
rcu_read_unlock(); /* Leave read-side critical section */
return b;
```
Read-Side Critical Sections (2)
- A critical constraint on read-side critical sections:
- Readers **cannot** block / sleep inside read-side critical sections!
- Should be obvious that $T_1$ follows this constraint:
```c
rcu_read_lock(); /* Start read-side critical section */
list_node *p = rcu_dereference(head);
int b = -1;
while (p != NULL) {
if (p->a == value) {
b = p->b;
break;
}
p = rcu_dereference(p->next);
}
rcu_read_unlock(); /* End read-side critical section */
return b;
```
Read-Side Critical Sections (3)
- Can use read-side critical sections to define when old data may be reclaimed
- Each reader’s interaction with shared data structure is contained entirely within its read-side critical section
- Each reader’s arrow starts with a call to `rcu_read_lock()`, and ends with `rcu_read_unlock()`
Read-Side Critical Sections (4)
• Writer publishes a change to the data structure with a call to \texttt{rcu\_assign\_pointer()}
• Divides readers into two groups – readers that might see the old version, and readers that cannot see the old version
• What readers might see the old version of the data?
• Any reader that called \texttt{rcu\_read\_lock()} before \texttt{rcu\_assign\_pointer} is called
[Timeline diagram: the writer's rcu_assign_pointer() call splits Readers 1–7 into those whose read-side critical sections began before the call (and so may see the old version) and those that began after (and so see only the new version)]
Read-Side Critical Sections (5)
• When can the writer reclaim the old version of the data?
• After all readers that called `rcu_read_lock()` before `rcu_assign_pointer()` have also called `rcu_read_unlock()`
• This is the earliest that the writer may reclaim the old data; it is also allowed to wait longer (no cost except that resources are still held)
• Time between release and reclamation is called the grace period
End of Grace Period
• Writer must somehow find out when grace period is over
• Doesn’t have to be a precise determination; it can be approximate, as long as writer can’t think it’s over before it’s actually over
• Encapsulate this in the `synchronize_rcu()` operation
• This call blocks the writer until the grace period is over
• Updating our writer’s code:
```c
copy = malloc(sizeof(list_node));
copy->a = p->a;
copy->b = 15;
copy->next = p->next;
rcu_assign_pointer(prev->next, copy);
/* Wait for readers to get out of our way... */
synchronize_rcu();
free(p);
```
End of Grace Period (2)
- Updated diagram with call to `synchronize_rcu()`
- But how does this actually work?
End of Grace Period (3)
- Recall: readers are not allowed to block or sleep when inside a read-side critical section
- What is the maximum number of readers that can be inside read-side critical sections at any given time?
- Same as the number of CPUs in the system
- If a reader is inside its read-side critical section, it must also occupy a CPU
[Timeline diagram: Readers 1–7 over time; the writer replaces the data with rcu_assign_pointer(), waits out the grace period in synchronize_rcu(), then reclaims]
End of Grace Period (4)
- Recall: readers are not allowed to block or sleep when inside a read-side critical section
- Also, require that the operating system cannot preempt a kernel thread that’s currently inside a read-side critical section
- Don’t allow OS to context-switch away from a thread in a read-side critical section
- In other words, don’t allow kernel preemption during the read-side critical section
End of Grace Period (5)
- Recall: readers are not allowed to block or sleep when inside a read-side critical section
- If a CPU executes a context-switch, then we know the kernel-thread completed any read-side critical section it might have been in…
- Therefore, `synchronize_rcu()` can simply wait until at least one context-switch has occurred on every CPU in the system
- Gives us an upper bound on the length of the grace period… Good enough! 😊
[Timeline diagram: rcu_assign_pointer() starts the grace period; synchronize_rcu() returns once a context switch has occurred on every CPU, and the writer then reclaims the old data]
Completing the RCU Implementation
• Now we know enough to complete RCU implementation
• `synchronize_rcu()` waits until at least one context-switch has occurred on each CPU
```c
void synchronize_rcu() {
int cpu;
for_each_online_cpu(cpu)
run_on(cpu);
}
```
• `run_on()` causes the kernel thread to run on a specific processor
• Can be implemented by setting kernel thread’s processor-affinity, then yielding the current CPU
• Once the kernel thread has switched to every processor, at least one context-switch has definitely occurred on every CPU (duh!)
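A user-space analog of `run_on()` can sketch this idea (an assumption-laden sketch: it uses Linux's `sched_setaffinity()`, `sched_yield()`, and `sysconf()`; the real kernel version manipulates kernel-thread affinity directly):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

/* Pin the calling thread to one CPU, then yield so that a context
 * switch can occur there. */
static int run_on(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        return -1;           /* CPU offline or not permitted */
    sched_yield();           /* allow a scheduled context switch */
    return 0;
}

/* Toy synchronize_rcu(): visit every online CPU in turn. */
static int toy_synchronize_rcu(void) {
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    for (long cpu = 0; cpu < ncpus; cpu++)
        if (run_on((int)cpu) != 0)
            return -1;
    return 0;                /* grace period is over */
}
```

Note the error paths: in a container or cpuset-restricted environment, some "online" CPUs may not be available to the thread, which the kernel implementation never has to worry about.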
Completing the RCU Implementation (2)
- On a single-processor system, `synchronize_rcu()` is a no-op (!!!)
- `synchronize_rcu()` might block; therefore it cannot be called from within a read-side critical section
- Any read-side critical section started before `synchronize_rcu()` was called, must have already ended at this point
- Therefore, since `synchronize_rcu()` is running on the CPU, the grace period is already over, and the old data may be reclaimed
Completing the RCU Implementation (3)
- `read_lock()` and `read_unlock()` are very simple:
- Since `synchronize_rcu()` uses context-switches to tell when the grace period is over, these functions don’t actually have to do any bookkeeping (!!!)
- On a multicore system, or an OS with kernel preemption:
- Must enforce constraint that readers cannot be switched away from while inside their read-side critical section
```c
void read_lock() {
preempt_disable(); /* Disable preemption */
}
void read_unlock() {
preempt_enable(); /* Reenable preemption */
}
```
- (`preempt_disable()` and `preempt_enable()` simply increment or decrement the kernel’s `preempt_count` for the kernel thread)
Completing the RCU Implementation (4)
- On a single-processor system with an OS that doesn’t allow kernel preemption:
- (Recall: this means all context-switches will be scheduled context-switches)
- In this case, `read_lock()` and `read_unlock()` don’t have to do anything!
- Already have a guarantee that nothing can cause a context-switch away from the kernel thread inside its read-side critical section
- The “implementation” also becomes a no-op:
```
#define read_lock()
#define read_unlock()
```
Results: The Good
- RCU is a very sophisticated mechanism for supporting concurrent access to shared data structures
- Conceptually straightforward to understand how to implement readers and writers
- Understanding how it works is significantly more involved…
- Doesn’t involve any locks (!!!!):
- Little to no lock overhead, no potential for deadlocks, no priority-inversion issues with priority scheduling
- Extremely lightweight
- In common scenarios, many RCU operations either reduce to a single instruction, or a no-op
- Only requires a very small number of clocks; far fewer than acquiring a lock
Entire RCU Implementation
```c
/** RCU READER SUPPORT FUNCTIONS **/

/* Enter read-side critical section */
void read_lock(void) {
    preempt_disable();
}

/* Leave read-side critical section */
void read_unlock(void) {
    preempt_enable();
}

/* Subscribe to pointer p's value */
#define rcu_dereference(p) ({           \
    typeof(p) _value = ACCESS_ONCE(p);  \
    smp_read_barrier_depends();         \
    (_value);                           \
})

/** RCU WRITER SUPPORT FUNCTIONS **/

/* Publish a value v to pointer p */
/* smp_wmb() includes an optimization barrier */
#define rcu_assign_pointer(p, v) ({     \
    smp_wmb();                          \
    (p) = (v);                          \
})

/* Wait for grace period to end */
void synchronize_rcu(void) {
    int cpu;
    for_each_online_cpu(cpu)
        run_on(cpu);
}
```
Results: The Bad and the Ugly
- RCU is only useful in very specific circumstances:
- Must have many more readers than writers
- Consistency must not be a strong requirement
- Under RCU, readers may see a mix of old and new versions of data, or even only old data that is about to be reclaimed
- If either of these conditions isn’t met, may be much better to rely on more standard lock-based approaches
- Surprisingly, many parts of Linux satisfy the above circumstances, and RCU is becoming widely utilized
RCU Implementation Notes
• There are much more advanced implementations of RCU
• RCU discussed today is known as “Classic RCU”
• Many refinements to the implementation as well, offering additional features, and improving performance and efficiency
• Our implementation is a “toy implementation,” but it still works
• (Also doesn’t support multiple writers accessing the same pointer; need to use locks to prevent this, so it gets much slower…)
• SRCU (Sleepable RCU) allows readers to sleep inside their read-side critical sections
• Preemptible RCU supports preemption of kernel threads inside read-side critical sections, as well as readers suspending within them
References
• For everything you could ever want to know about RCU:
• Paul McKenney did his PhD research on RCU, and has links to an extensive array of articles, papers and projects on the subject
• http://www2.rdrop.com/users/paulmck/RCU/
• Most helpful/accessible resources:
• What is RCU, Really? (3-part series of articles)
• http://www.rdrop.com/users/paulmck/RCU/whatisRCU.html
• What Is RCU? (PDF of lecture slides)
• User-Level Implementations of Read-Copy Update
• http://www.rdrop.com/users/paulmck/RCU/urcu-main-accepted.2011.08.30a.pdf (actual article)
References (2)
- Andrei Alexandrescu has also written a few good articles:
- Lock-Free Data Structures (big overlap with many RCU concepts)
- Lock-Free Data Structures with Hazard Pointers
Algorithms in Everything
Using MATLAB & Simulink to Build Algorithms in Everything
Simplifying your work…
…often at higher levels of abstraction.
Using MATLAB & Simulink to Build Algorithms in Everything
Inputs → Design → Outputs
The capability of a machine to match or exceed intelligent human behavior by training a machine to learn the desired behavior.
There are two ways to get a computer to do what you want
Traditional Programming: Data + Program → COMPUTER → Output
Machine Learning: Data + Output → COMPUTER → Model
Artificial Intelligence
Data → Machine Learning → Deep Learning → Model
Using MATLAB and Simulink to Build Deep Learning Models
Using Apps for Ground Truth Labeling
Image and Video Data
Using Apps for Ground Truth Labeling
Signal Data
Using Apps for Ground Truth Labeling
Audio Data
Audio Toolbox
Using Apps for Designing Deep Learning Networks
Deep Learning Toolbox
Using Transfer Learning with Pre-trained Models
- AlexNet
- VGG-16
- GoogLeNet
- Inception-v3
- DenseNet-201
- Xception
- NasNetLarge
- VGG-19
- ResNet-50
- Inception-ResNet-v2
- MobileNet-v2
- NasNetMobile
- ResNet-101
- ResNet-18
- Places365-GoogLeNet
- ShuffleNet
- SqueezeNet
[Chart: models grouped by release year, 2016–2019]
Using Models from Other Frameworks
MATLAB
Keras-Tensorflow
PyTorch
ONNX
Caffe2
MXNet
Core ML
(...)
Caffe
CNTK
Deep Learning Toolbox
Deploying Deep Learning Applications
Matlab Coder
GPU Coder
Pre-processing → Deep Learning Application → Post-processing → Coder Products
Intel MKL-DNN Library
NVIDIA TensorRT & cuDNN Libraries
ARM Compute Library
Using MATLAB and Simulink for Reinforcement Learning
Inputs (Data) → Design (Machine Learning, Deep Learning) → Outputs (Model)
Reinforcement Learning Toolbox
Using MATLAB and Simulink for Reinforcement Learning
Data → Machine Learning → Deep Learning → Model
Inputs → Design → Outputs
Reinforcement Learning Toolbox
Using MATLAB and Simulink for Reinforcement Learning
- **Inputs (Generate Data)**: Scenario Design; Simulation-based data generation
- **Design**: Machine Learning; Deep Learning
- **Outputs**: Model
- **Simulink, Reinforcement Learning Toolbox**
Using MATLAB and Simulink for Reinforcement Learning
Find out more:
Deep Learning and Reinforcement Learning Workflows in A.I.
Avi Nehemiah
Deep Learning & Autonomous Systems Track
Using MATLAB & Simulink to Build Algorithms in Everything
Inputs → Design → Outputs
MATLAB & SIMULINK®
Working with Text Data
[Screenshot: raw CSV of vehicle maintenance records, with date, time, vehicle and work-order IDs, a reason code ("DRIVER'S REPORT", "PM SERVICE", "ROADCALL", "SNOW BREAKDOWN", "NEGLIGENCE"), free-text notes, and cost columns]
Working with Text Data
```matlab
% Read a table from a text file
filename = 'example.txt';
t = readtable(filename, 'TextType', 'string');
disp(t(1:20,6:7))
```
<table>
<thead>
<tr>
<th>Reason</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>"04 DRIVER'S REPORT"</td>
<td>"PM SERVICE, CHECK TURN SIGNAL, CLUNKING NOISE WHEN DRIVING"</td>
</tr>
<tr>
<td>"08 PM SERVICE"</td>
<td>"SERVICERO8,EXT,5604"</td>
</tr>
<tr>
<td>"04 DRIVER'S REPORT"</td>
<td>"NEED 4 PLOW PINS"</td>
</tr>
<tr>
<td>"04 DRIVER'S REPORT"</td>
<td>"INSTALL SPINNER ASSY"</td>
</tr>
<tr>
<td>"13 SNOW BREAKDOWN"</td>
<td>"DONT START"</td>
</tr>
<tr>
<td>"04 DRIVER'S REPORT"</td>
<td>"DOG BONE PIN BROKEN"</td>
</tr>
<tr>
<td>"08 PM SERVICE"</td>
<td>"NEED SERVICE, CHECK BRAKES"</td>
</tr>
<tr>
<td>"04 DRIVER'S REPORT"</td>
<td>"HYD CAP CHECK ENGINE LIGHT ON"</td>
</tr>
<tr>
<td>"40 NEGLIGENCE"</td>
<td>"TARP VALVE STICKING RIGHT SIDE MIRROR BRACKET BROKEN"</td>
</tr>
<tr>
<td>"13 SNOW BREAKDOWN"</td>
<td>"HANDLES IN CAB LOOSE"</td>
</tr>
<tr>
<td>"04 DRIVER'S REPORT"</td>
<td>"NO PLOW LIGHTS"</td>
</tr>
<tr>
<td>"10 ROADCALL"</td>
<td>"WILL NOT START"</td>
</tr>
<tr>
<td>"10 ROADCALL"</td>
<td>"WILL NOT START"</td>
</tr>
<tr>
<td>"10 ROADCALL"</td>
<td>"WILL NOT START"</td>
</tr>
<tr>
<td>"10 ROADCALL"</td>
<td>"WILL NOT START"</td>
</tr>
<tr>
<td>"10 ROADCALL"</td>
<td>"WILL NOT START"</td>
</tr>
<tr>
<td>"04 DRIVER'S REPORT"</td>
<td>"CONVEYOR NOT WORKING"</td>
</tr>
<tr>
<td>"10 ROADCALL"</td>
<td>"DONT START"</td>
</tr>
<tr>
<td>"10 ROADCALL"</td>
<td>"DONT START"</td>
</tr>
<tr>
<td>"10 ROADCALL"</td>
<td>"DONT START"</td>
</tr>
</tbody>
</table>
Working with Text Data
Text Analytics Toolbox
MATLAB
Creating Your Own Data
Identifying the Useful Data
Data Acquisition
- Acquire Data
- Preprocess Data
Data Analysis
- Identify Condition Indicators
Machine Learning
- Train Model
Deployment
- Deploy & Integrate
Data Visualization
- Visualize data
- Extract Features
- Select the most useful features
Machine Learning
- Identify Condition Indicators
Select the most useful features
Identifying the Useful Data
Signal Features
- Generate statistics from signals
- Generate features from signals
Rotating Machinery Features
- Generate features from rotating machinery signals
Nonlinear Features
- Generate nonlinear features from signals
Spectral Features
- Condition variables: faultCode
- Computation mode: use full signal
- Spectral peaks
- Peak amplitude
- Peak frequency
- Peak value lower threshold
- Number of peaks
- Minimum frequency gap
- Peak excursion tolerance
- Modal coefficients
- Band power
Identifying the Useful Data
Designing Decision Logic with Stateflow
```matlab
inNormalRegion = true;
counter = 0;
for i=1:length(inData)
if(inNormalRegion)
if(inData(i)<t1)
counter = counter+1;
if(counter>=N1)
inNormalRegion = false;
end
else
counter = 0;
end
else
if(inData(i)>=t2)
counter = counter+1;
if(counter>=N2)
inNormalRegion = true;
end
else
counter = 0;
end
end
if(inNormalRegion)
outData(i) = inData(i);
else
outData(i) = 0;
end
end
```
Normal: \( y = u; \)
Abnormal: \( y = 0; \)
Transition conditions:
- \([\text{count}(u < t1) >= N1]\)
- \([\text{count}(u >= t2) >= N2]\)
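The same decision logic can be sketched in C for comparison (a direct transliteration of the MATLAB loop above; `t1`, `t2`, `N1`, `N2` keep their meanings, and like the original it does not reset the counter on a state change):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hysteresis filter: pass the signal through while in the normal
 * region, output 0 while abnormal. Enter the abnormal region after
 * n1 consecutive samples below t1; return to normal after n2
 * consecutive samples at or above t2. */
void hysteresis_filter(const double *in, double *out, size_t n,
                       double t1, double t2, int n1, int n2) {
    bool normal = true;
    int counter = 0;
    for (size_t i = 0; i < n; i++) {
        if (normal) {
            if (in[i] < t1) {
                if (++counter >= n1)
                    normal = false;
            } else {
                counter = 0;
            }
        } else {
            if (in[i] >= t2) {
                if (++counter >= n2)
                    normal = true;
            } else {
                counter = 0;
            }
        }
        out[i] = normal ? in[i] : 0.0;
    }
}
```

The two-threshold design (t1 below t2) is what prevents chattering when the signal hovers near a single threshold.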
Using Stateflow in MATLAB
```matlab
% Callbacks that handle component events
methods (Access = private)

    % Code that executes after component creation
    function startupFcn(app)
        app.lanternLogic = blink.lanternLogic('app', app);
    end

    % Button pushed function: POWERButton
    function POWERButtonPushed(app, event)
        app.lanternLogic.powerButton();
    end

    % Button pushed function: COLORButton
    function COLORButtonPushed(app, event)
        app.lanternLogic.colorButton();
    end

    % Close request function: UIFigure
    function UIFigureCloseRequest(app, event)
        delete(app.lanternLogic);
        delete(app);
    end

    % Button pushed function: BLINKButton
    function BLINKButtonPushed(app, event)
        app.lanternLogic.blinkButton();
    end
end
```
Editing at the Speed of Thought
Controlling the Execution of Model Components
Schedulable Rate-Based Model
Export Function Model
Controlling the Execution of Model Components
Simplifying Integration with External C/C++ Code
Simulink Coder
Column-Major
```c
#include "rtwdemo_rowlutcol2row_workflow_rowrow.h"
/* Block parameters (default storage) */
PrtP = {
/* Variable: Tbl_1 */
/* Referenced by: 'Root/2-D Lookup Table' */
{ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 33.0, 34.0, 35.0, 36.0, 37.0, 38.0, 39.0, 40.0, 41.0, 42.0, 43.0, 44.0, 45.0, 46.0, 47.0, 48.0, 49.0, 50.0, 51.0, 52.0, 53.0, 54.0, 55.0, 56.0, 57.0, 58.0, 59.0, 60.0 }
};
```
Simplifying Integration with External C/C++ Code
Row-Major
Viewing Generated Code Alongside the Model
Sharing Live Scripts
Estimating Sunrise and Sunset
Using the latitude ($\phi$), the sun's declination ($\delta$) and the solar time correction ($SC$) we can calculate sunrise and sunset times.
\[
\begin{align*}
sunrise &= 12 - \frac{\cos^{-1}(-\tan \phi \tan \delta)}{15^\circ} - \frac{SC}{60} \\
sunset &= 12 + \frac{\cos^{-1}(-\tan \phi \tan \delta)}{15^\circ}
\end{align*}
\]
Refer to this page for background and details on the equations used.
Sharing Live Scripts
Exploring Exoplanets
In this example we will explore some data on exoplanets - planets outside our own solar system. The data used here is a subset of data from the NASA Exoplanet Archive. We will start by using the data to answer some questions about the set of exoplanets in the archive. Then we will do some calculations to try to identify planets in the archive that might be capable of supporting life.
```matlab
exoplanets = readtable('exoplanets.xlsx');
exoplanets(zScores);
```
How Far Away Are these Planets?
There are 90 exoplanets within 50 light-years of earth and 460 exoplanets within 200 light-years.
```matlab
histogram(x20*exoplanets.st_distance), hold on
scatter(x20*exoplanets.st_distance); hold off
xlabel('Number of Planets')
ylabel('Light Years From Earth')
```
Where is the nearest exoplanet?
```matlab
idx = find(exoplanets.st_distance == min(exoplanets.st_distance));
name = char(exoplanets(idx, 'name'))
```
Sharing Live Scripts

- **P**: 1:40
- **Slider**: 350
- **Drop down**: "carbon dioxide"
**Graph**:
- Title: carbon dioxide @ 350 Kelvin
- X-axis: Viscosity Factor, Z
- Y-axis: 1.0 to 0.92
[Hide Code]
Creating Apps
Plate Browser
[Screenshot: app with file selection (Current File: microtiter_data0001.csv), a microplate plot, summary tables, and EC50 curves of % Signal vs. Log [Compound]]
<table>
<thead>
<tr>
<th>File</th>
<th>Compound Nr</th>
<th>NegControl</th>
<th>Conc1</th>
<th>Conc2</th>
<th>Conc3</th>
<th>Conc4</th>
<th>Conc5</th>
<th>Conc6</th>
<th>Conc7</th>
<th>Conc8</th>
</tr>
</thead>
<tbody>
<tr>
<td>microtiter_d...</td>
<td>1</td>
<td>-0.9741</td>
<td>0.3564</td>
<td>9.8759</td>
<td>56.8743</td>
<td>91.7323</td>
<td>96.7684</td>
<td>97.1532</td>
<td>57.1910</td>
<td>97.1940</td>
</tr>
<tr>
<td>microtiter_d...</td>
<td>2</td>
<td>-0.0143</td>
<td>-0.5044</td>
<td>-0.5944</td>
<td>-0.5944</td>
<td>-0.5944</td>
<td>-0.5944</td>
<td>-0.5944</td>
<td>0.0436</td>
<td>17.0436</td>
</tr>
<tr>
<td>microtiter_d...</td>
<td>3</td>
<td>0.0054</td>
<td>-0.4702</td>
<td>3.1906</td>
<td>52.9962</td>
<td>97.5746</td>
<td>100.5606</td>
<td>100.6096</td>
<td>100.6096</td>
<td>100.6096</td>
</tr>
<tr>
<td>microtiter_d...</td>
<td>4</td>
<td>-0.1986</td>
<td>0.2325</td>
<td>0.2355</td>
<td>0.3712</td>
<td>3.2339</td>
<td>41.1610</td>
<td>94.7343</td>
<td>100.6591</td>
<td>100.9487</td>
</tr>
<tr>
<td>microtiter_d...</td>
<td>5</td>
<td>-0.0572</td>
<td>0.7461</td>
<td>1.7104</td>
<td>26.8872</td>
<td>84.5134</td>
<td>96.2395</td>
<td>100.4717</td>
<td>100.5601</td>
<td>100.5700</td>
</tr>
</tbody>
</table>
Deploying Web Apps
MATLAB Web Apps
Transient Heat Conduction
Initial and Boundary Conditions
- Initial T (°C)
- Top T (°C)
- Bottom T (°C)
- Left T (°C)
- Right T (°C)
Geometry
- x (m)
- y (m)
Time and Convergence
- dt (s)
- Total Time (s)
- Convergence Criterion
Note: Numerical stability requires ε
Current Pe = 6.0000
Thermal Diffusivity
- Alpha (m²/s)
Mat: Copper or Water
Start | Stop
Time: 35 s
Output
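The app above exposes dt, grid geometry, thermal diffusivity, and a convergence criterion; a hedged sketch (not from the slides) of the stability check an explicit 2-D transient conduction solver typically performs, with illustrative values:

```matlab
% Explicit 2-D conduction is stable when alpha*dt*(1/dx^2 + 1/dy^2) <= 1/2
alpha = 1.11e-4;          % thermal diffusivity of copper, m^2/s (approximate)
dx = 0.01; dy = 0.01;     % grid spacing, m (illustrative)
dt = 0.1;                 % time step, s (illustrative)
Fo = alpha*dt*(1/dx^2 + 1/dy^2);
if Fo > 0.5
    warning('Time step too large: reduce dt for numerical stability')
end
```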
Using MATLAB & Simulink to Build Algorithms in Everything
Inputs → Design → Outputs
Evaluating Architectures
Inputs → Architecture → Design → Outputs
MATLAB & SIMULINK
Designing System and Software Architectures
Find out more:
Systems Engineering: Requirements to Architecture to Simulation
Gaurav Dubey
Systems Modeling, Implementation, and Verification Track
Designing **Beyond** System and Software Architectures
- Systems and Software
- SoC Hardware and Software
- AUTOSAR Software
System Composer
SoC Blockset
AUTOSAR Blockset
Using MATLAB & Simulink to Build Algorithms in Everything
Inputs → Architecture → Design → Outputs
Test & Verification → Collaboration → Scaling
Using MATLAB & Simulink to Build Algorithms in Everything
Integrating with Third-party Requirements Tools
External Requirements
- .doc
- .xls
- Database
Requirements Management Tools
Simulink Requirements
- External Requirements
- Authored Requirements
R2019a
Import
Edit
Export
ReqIF
Include Custom Code in Test & Verification
[Diagram: Simulink and Stateflow models with integrated C/C++ custom code, exercised by Simulink Design Verifier during Test & Verification.]
Include Custom Code in Test & Verification
Find out more:
Simplifying Requirements-Based Verification with Model-Based Design
Vamshi Kumbham
Systems Modeling, Implementation, and Verification Track
Using the MATLAB Unit Test Framework
```matlab
>> result.table
ans =
2×6 table
Name Passed Failed Incomplete Duration Details
'test_Predictions/Test_ModelType' true false false 0.12241 [1×1 struct]
'test_Predictions/Test_Prediction' false true true 0.11542 [1×1 struct]
```
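A table like the one above can be produced by converting the TestResult array returned by `runtests` (a sketch; `test_Predictions` is the test file named in the slide):

```matlab
% Run a test file and convert the TestResult array to a table
results = runtests('test_Predictions');
T = table(results)   % columns: Name, Passed, Failed, Incomplete, Duration, Details
```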
Using the MATLAB App Testing Framework
```matlab
testCase.press(myApp.checkbox)
testCase.choose(myApp.discreteKnob, "Medium")
testCase.drag(myApp.continuousKnob, 10, 90)
testCase.type(myApp.editfield, myTextVar)
```
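In context, these gestures run inside a `matlab.uitest.TestCase` method; a minimal sketch (`myApp` here is a hypothetical App Designer app with a `checkbox` component):

```matlab
classdef tMyApp < matlab.uitest.TestCase
    methods (Test)
        function pressCheckbox(testCase)
            app = myApp;                          % hypothetical app under test
            testCase.addTeardown(@delete, app);   % close the app after the test
            testCase.press(app.checkbox);         % programmatic UI gesture
            testCase.verifyTrue(app.checkbox.Value);
        end
    end
end
```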
Using the MATLAB Performance Testing Framework
Using Continuous Integration
Plugins Index
Discover the 1000+ community contributed Jenkins plugins to support building, deploying and automating any project.
Browse categories:
- Platforms
- User interface
- Administration
- Source code management
New Plugins:
- QRebel
- MATLAB
- MISRA Compliance Report
- Zoom
- VectorCAST Execution
- Klocwork Community
- jQuery
- Analysis Model API
MATLAB
https://plugins.jenkins.io/
Using Continuous Integration
MATLAB 10.0
Minimum Jenkins requirement: 2.7.3
ID: matlab
Installation: No usage data available
GitHub →
Last released: 2 days ago
Maintainers
MathWorks
Dependencies
- bouncycastle API v2.16.0 (implied)
- Command Agent Launcher v1.0 (implied)
- JDK Tool v1.0 (implied)
- JAXB v2.3.0 (implied)
The Jenkins plugin for MATLAB® enables you to easily run your MATLAB tests and generate test artifacts in formats such as JUnit, TAP, and Cobertura code coverage reports.
Features
- Support to run MATLAB tests present in the Jenkins workspace automatically (this also includes tests present in .prj files)
- Generate tests artifacts in JUnit, TAP & Cobertura code coverage formats.
- Support to run tests using custom MATLAB command or custom MATLAB script file.
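Outside the plugin, the same tests can be run from any CI shell step using MATLAB's -batch mode (a sketch; assumes MATLAB is on the build agent's PATH):

```shell
# Fail the CI job if any test in the workspace fails
matlab -batch "results = runtests('IncludeSubfolders', true); assert(all([results.Passed]))"
```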
Using Projects in MATLAB
<table>
<thead>
<tr>
<th>Name</th>
<th>Status</th>
<th>Git</th>
<th>Classification</th>
</tr>
</thead>
<tbody>
<tr>
<td>+Test</td>
<td>✓</td>
<td>•</td>
<td>Test</td>
</tr>
<tr>
<td>ACI</td>
<td>✓</td>
<td>•</td>
<td></td>
</tr>
<tr>
<td>Dashboard</td>
<td>✓</td>
<td>•</td>
<td></td>
</tr>
<tr>
<td>Documents</td>
<td>✓</td>
<td>•</td>
<td></td>
</tr>
<tr>
<td>Elasticsearch</td>
<td>✓</td>
<td>•</td>
<td></td>
</tr>
<tr>
<td>MachineLearning</td>
<td>✓</td>
<td>•</td>
<td></td>
</tr>
<tr>
<td>MATLAB_Kafka_Producer_Java</td>
<td>✓</td>
<td>•</td>
<td></td>
</tr>
<tr>
<td>mps_stream</td>
<td>✓</td>
<td>•</td>
<td></td>
</tr>
<tr>
<td>SimExecutable</td>
<td>✓</td>
<td>•</td>
<td></td>
</tr>
<tr>
<td>Simulation</td>
<td>✓</td>
<td>•</td>
<td></td>
</tr>
<tr>
<td>DocExample_MultiClassFaultDetectionUsing</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>genPumpData.m</td>
<td>✓</td>
<td>•</td>
<td>Design</td>
</tr>
<tr>
<td>javasetup.m</td>
<td>✓</td>
<td>•</td>
<td>Design</td>
</tr>
<tr>
<td>Main_ExampleWorkflow.m</td>
<td>✓</td>
<td>•</td>
<td>Design</td>
</tr>
<tr>
<td>MLModels.mat</td>
<td>✓</td>
<td>•</td>
<td>Design</td>
</tr>
<tr>
<td>rawdata.mat</td>
<td>✓</td>
<td>•</td>
<td>Design</td>
</tr>
<tr>
<td>README.md</td>
<td>✓</td>
<td>•</td>
<td>Design</td>
</tr>
</tbody>
</table>
Parallel Simulations in Simulink
[Diagram: from the MATLAB Desktop, batchsim (Simulink + Parallel Computing Toolbox) submits Simulation Jobs to a head node and workers; Simulation Results return to the Simulation Manager.]
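The batchsim workflow above can be sketched as follows (assumes Parallel Computing Toolbox and a hypothetical model named 'myModel'):

```matlab
% Build an array of SimulationInput objects, one per run
in(1:4) = Simulink.SimulationInput('myModel');   % 'myModel' is hypothetical
for k = 1:4
    in(k) = in(k).setVariable('gain', k);        % hypothetical tuned variable
end
% Offload the runs to a pool of workers as a batch job
job = batchsim(in, 'Pool', 3);
out = fetchOutputs(job);                         % array of Simulink.SimulationOutput
```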
Scaling Computations on Clusters and Clouds
MATLAB
Parallel Computing Toolbox
MATLAB Parallel Server
Cloud
GPU
Multi-core CPU
Using MATLAB & Simulink to Build Algorithms in Everything
Inputs → Architecture → Design → Outputs
Test & Verification → Collaboration → Scaling
Specialized Tools for Building Algorithms in Everything
Communications
Physical interconnects
Analog Mixed-Signal
5G Toolbox
SerDes Toolbox
Mixed-Signal Blockset
Developing Autonomous Systems
- Perception
- Planning
- Control
Evaluate Sensor Fusion Architectures
Simulate Path Planning Algorithms
Design Lane-following and Spacing Control Algorithms
Developing Autonomous Systems
- Lidar Processing & Tracking
- HERE HD Maps & OpenDRIVE Roads
- UAV Algorithms
Computer Vision Toolbox
Automated Driving Toolbox
Robotics System Toolbox
Developing Autonomous Systems
Lidar Processing & Tracking
HERE HD Maps & Roads
UAV Algorithms
Find out more:
Automated Driving System Design and Simulation
Dr. Amod Anandkumar
Deep Learning and Autonomous Systems Track
Using MATLAB & Simulink to Build Algorithms in Everything
Inputs → Architecture → Design → Outputs
Test & Verification → Collaboration → Scaling
MATLAB & Simulink
© 2019 The MathWorks, Inc.
Get Started
**MATLAB Onramp**
Quickly learn the essentials of MATLAB.
**Simulink Onramp**
Learn to create, edit, and troubleshoot Simulink models.
**Deep Learning Onramp**
Learn to use deep learning techniques in MATLAB for image recognition.
Securing GRC - designing effective security within GRC Access Control
Sergiy Mysyk
▪ Understanding of SAP GRC Authorization System
▪ Approach to designing security within Access Control
▪ How to restrict access beyond roles and authorization objects - customizing GRC end user interface
Introduction to SAP GRC Access Control security
Leading practice of designing GRC AC Security that segregate access within GRC but still meet your business requirements
Access control beyond roles: how to control what users can view and change by modifying user interface
Wrap up and questions
GRC AC AUTHORIZATION BASICS
GRC AC 10.0 is an ABAP-based system and uses the standard SAP NetWeaver authorization system.
Security roles are maintained through PFCG and assigned directly to user master records – SU01
End User Interface and SAP Roles
- Netweaver Business Client (NWBC) is the “front end” user interface and is accessed via internet browser
- Security roles are maintained through PFCG on the “backend” – SAP GUI
- Roles and authorizations within them control what is visible and what the user can do within each application (webdynpro) in NWBC
Introduction to SAP GRC AC Security
NWBC Work Center Access
- The Menu tab of a PFCG role and the folders within it control the Work Centers (folders) that are displayed in NWBC.
- Does not provide authorizations.
- Contents of each standard SAP delivered work center can be modified by changing the Launch Pad (next section).
Follow @ASUG365 and #ASUG2013 on Twitter
• You can also directly add individual web dynpro applications in place of a launch pad
Access Within GRC AC Work Centers, cont
- Cross Application Authorization Object CA_POWL restricts access to the entire POWL* iViews within the NWBC tabs (similar to S_TCODE)
<table>
<thead>
<tr>
<th>Authorization Object Technical Name</th>
<th>Field Name</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>CA_POWL</td>
<td>POWL_APPID</td>
<td>GRAC_SOD_FUNCTION</td>
</tr>
<tr>
<td></td>
<td></td>
<td>GRAC_SOD_RULESET</td>
</tr>
<tr>
<td></td>
<td></td>
<td>GRAC_SODRISK</td>
</tr>
</tbody>
</table>
*POWL – Personal Object Work List
Access Within GRC AC Work Centers, cont.
- GRC authorization objects can further restrict access on individual webdynpro applications.
Additional Examples of Authorization Objects
- Restricting Access to certain types of sensitive reports, such as Firefighter Log reports is normally required. Object GRAC_REP controls access to the various reporting links.
<table>
<thead>
<tr>
<th>Report Id</th>
<th>Report Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>GRAC_SPM_CLOGS</td>
<td>Consolidated Log Report</td>
</tr>
<tr>
<td>GRAC_SPM_INVALID_SU</td>
<td>Invalid Superuser Report</td>
</tr>
<tr>
<td>GRAC_SPM_FFLOG_SUM</td>
<td>Firefighter Log Summary</td>
</tr>
<tr>
<td>GRAC_SPM_RCODE_ACTVT</td>
<td>Reason Code and Activity Report</td>
</tr>
<tr>
<td>GRAC_SPM_TXN_LOG</td>
<td>Transaction Log and Session Details</td>
</tr>
<tr>
<td>GRAC_SPM_SOD_REPORT</td>
<td>SOD Conflict Report for Firefighter IDs</td>
</tr>
</tbody>
</table>
- Additional security available through other GRAC objects.
- Reports may appear under multiple Launch Pads.
Introduction to SAP GRC Access Control security
Leading practice of designing GRC AC Security that segregate access within GRC but still meet your business requirements
Access control beyond roles: how to control what users can view and change by modifying user interface
Wrap up and questions
• SAP provides standard-delivered roles that can be used as a starting point
- SAP_GRAC_ACCESS_APPROVER
- SAP_GRAC_ACCESS_REQUEST_ADMIN
- SAP_GRAC_ACCESS_REQUESTER
- SAP_GRAC_ALERTS
- SAP_GRAC_ALL
- SAP_GRAC_BASE
- SAP_GRAC_CONTROL_APPROVER
- SAP_GRAC_CONTROL_MONITOR
- SAP_GRAC_CONTROL_OWNER
- SAP_GRAC_DISPLAY_ALL
- SAP_GRAC_END_USER
- SAP_GRAC_FUNCTION_APPROVER
- SAP_GRAC_NWBC
- SAP_GRAC_REPORTS
- SAP_GRAC_RISK_ANALYSIS
- SAP_GRAC_RISK_OWNER
- SAP_GRAC_ROLE_MGMT_ADMIN
- SAP_GRAC_ROLE_MGMT_DESIGNER
- SAP_GRAC_ROLE_MGMT_DESINGER
- SAP_GRAC_ROLE_MGMT_ROLE_OWNER
- SAP_GRAC_ROLE_MGMT_USER
- SAP_GRAC_RULE_SETUP
- SAP_GRAC_SETUP
- SAP_GRAC_SPM_FFID
- SAP_GRAC_SUPER_USER_MGMT_ADMIN
- SAP_GRAC_SUPER_USER_MGMT_CNTLR
- SAP_GRAC_SUPER_USER_MGMT_OWNER
- SAP_GRAC_SUPER_USER_MGMT_USER
• The leading practice is to create a matrix of all the available web applications and future roles
• A role design workshop can be held with the business to customize the GRC roles
• Logically group functionality and consider assignment flexibility – because SAP GRC AC offers limited functionality compared to SAP ECC, the distinction between task-based and job-based roles is normally not a concern
• Consider Segregation of Duties (SOD) risks within SAP GRC Access Control tool itself
SOD Examples:
• Creating mitigating controls and approving their assignment?
• Being a Firefighter Owner/approving Firefighter access and being able to create Firefighter IDs?
- Use the design workshop output to create a list of applicable SAP GRC Roles
<table>
<thead>
<tr>
<th>Role</th>
<th>My Home</th>
<th>Access Management</th>
</tr>
</thead>
<tbody>
<tr>
<td>EAM Security Administrator</td>
<td>D U D D</td>
<td>U U</td>
</tr>
<tr>
<td>EAM Controller</td>
<td>D U D D</td>
<td>U U</td>
</tr>
<tr>
<td>EAM Firefighter Owner</td>
<td>D U D D</td>
<td>U U</td>
</tr>
<tr>
<td>EAM Firefighters</td>
<td>D D D D</td>
<td>U U</td>
</tr>
<tr>
<td>BRM Business User</td>
<td>D D D D</td>
<td>U U</td>
</tr>
<tr>
<td>BRM Security Administrator</td>
<td>D U D D</td>
<td>U U</td>
</tr>
<tr>
<td>ARA Security Administrator</td>
<td>D U D D</td>
<td>U U</td>
</tr>
<tr>
<td>ARA Business User</td>
<td>D D D D</td>
<td>U U</td>
</tr>
<tr>
<td>Mitigating Control Owner</td>
<td>D U D D</td>
<td>U U</td>
</tr>
<tr>
<td>Risk Owner</td>
<td>D U D D</td>
<td>U U</td>
</tr>
<tr>
<td>Coordinator</td>
<td>D D D D</td>
<td>U U</td>
</tr>
<tr>
<td>Mitigation Monitor</td>
<td>D D U D</td>
<td>U D D D</td>
</tr>
<tr>
<td>ARM Requestor</td>
<td>D D U U</td>
<td>U D D D</td>
</tr>
<tr>
<td>ARM Administrator</td>
<td>D U D D</td>
<td>U D D D</td>
</tr>
<tr>
<td>ARM Security Administrator</td>
<td>D U D D</td>
<td>U D D D</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Roles</th>
<th>Role Text</th>
</tr>
</thead>
<tbody>
<tr>
<td>ZS:GRAC:EAM:SECURITY_ADMIN</td>
<td>IT: GRAC EAM Administrator Role</td>
</tr>
<tr>
<td>ZS:GRAC:EAM:FF_CONTROLLER</td>
<td>IT: GRAC EAM Controller Role</td>
</tr>
<tr>
<td>ZS:GRAC:EAM:FF_OWNER</td>
<td>IT: GRAC EAM Owner Role</td>
</tr>
<tr>
<td>ZS:GRAC:EAM:FF_USER</td>
<td>IT: GRAC EAM Firefighter Role</td>
</tr>
<tr>
<td>ZS:GRAC:EAM:FF_ID</td>
<td>IT: GRAC EAM Firefighter ID Role</td>
</tr>
<tr>
<td>ZS:GRAC:BRM:BUS_USER_DSP</td>
<td>IT: GRAC BRM Display access Role</td>
</tr>
<tr>
<td>ZS:GRAC:BRM:SECURITY_ADMIN</td>
<td>IT: GRAC BRM Security Administrator Role</td>
</tr>
<tr>
<td>ZS:GRAC:BRM:ROLE_OWNER</td>
<td>IT: GRAC BRM Owner Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARA:SECURITY_ADMIN</td>
<td>IT: GRAC ARA Security Administrator Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARA:BUS_USER_DSP</td>
<td>IT: GRAC ARA Display access Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARA:AUDIT_MITCRL_APPRV</td>
<td>IT: GRAC ARA Audit Mitigating Controller Approver Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARA:MITCRL_OWNER</td>
<td>IT: GRAC ARA Mitigating Control Owner Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARA:RISKOWNER</td>
<td>IT: GRAC ARA Risk Owner Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARA:SOD_REVIEW_COORD</td>
<td>IT: GRAC ARA Segregation of Duties (SoD) Review Coordinator Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARA:MITCRL_MONITOR</td>
<td>IT: GRAC ARA Mitigating Monitor Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARM:UAR_COORD</td>
<td>IT: GRAC ARM User Access Review Coordinator Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARM:ACCESS_REQUESTOR</td>
<td>IT: GRAC ARM Access Requestor Role</td>
</tr>
<tr>
<td>ZS:GRAC:ARM:ADMIN</td>
<td>IT: GRAC ARM Administrator Role</td>
</tr>
</tbody>
</table>
• When building GRC roles, authorization objects will be added manually
• Positive and negative testing should be conducted during UAT
AGENDA
- Introduction to SAP GRC Access Control security
- Leading practice of designing GRC AC Security that segregate access within GRC but still meet your business requirements
- Access control beyond roles: how to control what users can view and change by modifying user interface
- Wrap up and questions
It may be required to customize the end user interface to further restrict access where this is not possible via authorizations alone:
- Authorization object GRAC_BGJOB with any value makes both of the following applications visible; often only viewing jobs (“Background Jobs”) is desired, not scheduling them (“Background Scheduler”)
Access may be restricted by creating a **custom** Access Management Launch Pad without the link “Background Scheduler”
Customizing GRC AC Launch Pads
- GRC administrators can add, remove, or move links on the SAP GRC v10.0 Launch pad or work center.
- SPRO -> GRC -> General Settings -> Maintain Customer Specific Menus -> Configure Launchpad for Menus
- Or transaction code LPD_CUST
- GRC uses one repository for all GRC applications and seven Launch Pads, one for each work center.
• Role – specifies who can use the launch pad. Together with the instance, it uniquely defines the launch pad.
• Instance- Specifies what purpose the launch pad is used for.
• Repository – flag indicates if launch pad is a repository (i.e. collection of launch pads)
• Embedded – Indicates if workcenter is embedded within another workcenter.
• User Interface Building Block for FPM applications – most likely selections
• Change, Display and Delete icons to perform respective activity.
Step 1 – Create or Modify Launch Pad
- Open existing Launch Pad by double clicking
- Choose Launch Pad - Save-As
- Fill in attributes
- Right click items to Hide, Disable, Rename, etc
- To add a link from existing application click the Link to a Repository Application
Step 2 - Moving Links From Repository
- Double click `GRCIAPREOS`
- Expand `GRC_AccessControl`
- Click and drag existing application to folder structure
Step 2 - Adding a URL, optional
- Right-click a folder and select New Application
- Select URL from the drop down
- Click the Change icon next to URL
- Enter URL including http://
- Click SAVE. By default it will be inactive. Right click and choose Set Visible as a Quick Link
Modifying GRC AC User Interface
Step 3 – Create Webdynpro Components – FPM UIBB
- Required if you want to create a NEW work center
- FPM – Floor Plan Manager; UIBB – User Interface Building Block
- SE80 – Locate the UIBB LPD components under Component Configuration
- Double click and then choose Start Configurator
- Select COPY
- Enter name for configuration and description
- Package should be Customer package or Local (if not transporting)
Step 4 - Associate FPM UIBB with Launch Pad
- Choose Change
- Click Goto Component Configuration
- Click Configure UIBB (the sandbox has errors, which is why the custom configuration is not shown)
Step 4 - Associate FPM – UIBB with Launch Pad, cont.
- Click Launchpad
- Set the Launchpad by choosing the combination of Role and Instance
Modifying GRC AC User Interface
- SE80 – Locate the FPM - CC components under Application Configuration
- Double click and then choose Start Configurator
- Select COPY
- Enter name for configuration and description
- Package should be Customer package or Local (if not transporting)
Modifying GRC AC User Interface
Step 6 – Change Attributes of Component Configuration
- Choose Change
- Click Goto Component Configuration
- Click Attributes under the HOME section in the right panel
- Change the configuration name to the FPM-UIBB configuration from the prior step
Step 7 - Create Web Dynpro Components – Application Configuration
SE80 – Locate the FPM_AC components under Application Configuration
Double click and then choose Start Configurator
Select COPY
Enter name for configuration and description
Package should be Customer package or Local (if not transporting)
Step 7 - Create Web Dynpro Components – Application Configuration, cont.
- Click the change button of the newly copied AC configuration
- Change the configuration to the newly created Component Configuration created in step 6
Modifying GRC AC User Interface
Step 8 – Associate PFCG Role with Launch Pad
- Transaction code PFCG
- Best to copy an existing role
- Right click item in Menu and choose Details
- Select the Application Configuration created in Step 7
Modifying GRC AC User Interface
Administratively Changing Text and Hiding Links
- Individual links and text may be changed/hidden system wide to suit your needs
- Locate Webdynpro application by right-clicking the page you wish to modify and select More Field Help
- Locate the Web Dynpro Component
- If you see Launch Pad or LPD, use the previous method
Modifying GRC AC User Interface
Administratively Changing Text and Hiding Links
- Transaction code SE80
- Use the Repository Browser or Repository Information System to locate the configuration object
- Package is typically GRAC_ACCESS_REQUEST
- Double click folder
- From the menu bar – Web Dynpro Application -> Test -> In Browser – Admin Mode
- This will launch an IE window
Modifying GRC AC User Interface
Administratively Changing Text and Hiding Links
- Notice admin mode
- Right click area you wish to change
- Select Settings for Current Configuration
Modifying GRC AC User Interface
Administratively Changing Text and Hiding Links
1. Hide links by changing to Invisible
2. Change the text that appears when hovering over a link
3. Change the text of a link
- Click SAVE and CLOSE
- Enter transport information
Modifying GRC AC User Interface
Administratively Changing Text and Hiding Links
• Some applications may require an additional parameter to be included in the URL upon entering the Admin Mode
• GRAC_OIF_REQUEST_APPROVAL application, for example, requires a dummy request number to open in Admin mode, otherwise an error is displayed
• To avoid the error add a dummy instance of an object, such as access request - &OBJECT_ID=ACCREQ/123
End User Personalization Restriction
- By default end users are allowed to personalize their interface.
- This can pose a risk, especially when shared users are used (e.g. End User Logon Page).
- The permission can be restricted in “wd_global_setting” service in SICF.
End User Personalization Restriction
- Start Transaction SICF
- Drill down to \default_host/sap/bc/webdynpro/sap/ and right click on WD_GLOBAL_SETTINGS, click Test Service
- You can globally disable the setting that allows end users to personalize their screens
AGENDA
- Introduction to SAP GRC Access Control security
- Leading practice of designing GRC AC Security that segregate access within GRC but still meet your business requirements
- Access control beyond roles: how to control what users can view and change by modifying user interface
- Wrap up and questions
SAP GRC 10.0 is an ABAP system and uses ABAP authorization objects to control access
Standard SAP Delivered Roles Should be Customized as per your Requirement
- Role Folder Structure, Authorization to access specific POWL as well as other GRC authorization objects determine what the user sees or has access to in NWBC
You can customize or create your own individual work centers due to business or security requirements
You can administratively customize end user interface within NWBC and disallow end users to do the same
Follow @ASUG365 and ASUG CEO Bridgette Chambers @BChambersASUG on Twitter to keep up to date with everything at ASUG.
Follow the ASUGNews team of Tom Wailgum: @twailgum and Courtney Bjorlin: @cbjorlin for all things SAP.
THANK YOU FOR PARTICIPATING
Please provide feedback on this session by completing a short survey via the event mobile application.
SESSION CODE: 0907
For ongoing education on this area of focus, visit www.ASUG.com
This content is for general information purposes only, and should not be used as a substitute for consultation with professional advisors.
© 2013 PricewaterhouseCoopers LLP, a Delaware limited liability partnership. All rights reserved. PwC refers to the US member firm, and may sometimes refer to the PwC network. Each member firm is a separate legal entity. Please see www.pwc.com/structure for further details.
|
olmocr_science_pdfs
|
2024-11-24
|
2024-11-24
|
341a383bb3a7726d601d30180442ace9e54ae519
|
Hello to COS 217
Introduction to Programming Systems
Fall 2019
Importance of Programming Systems
Goal 1: Programming in the Large
Learn how to compose large computer programs
Topics
- Modularity/abstraction, information hiding, resource management,
error handling, testing, debugging, performance improvement,
tool support
Goal 2: Under the Hood
Learn what happens "under the hood" of computer systems
Downward tours
C Language
Assembly Language
Machine Language
Application Program
Operating System
Hardware
Lead Instructor
- Jennifer Rexford
Lead Preceptors
- Xiaoyan Li
- Christopher Moretti
Graduate Student Preceptors
- Alberto Benmamun
- Greg Chan
- John Li
- Ethan Tseng
- Josh Zhang
Introductions
Agenda
Course overview
- Introductions
- Course goals
- Resources
- Grading
- Policies
- Schedule
Getting started with C
- History of C
- Building and running C programs
- Characteristics of C
- C details (if time)
Modularity!
Goals: Summary
Help you to become a...
Power Programmer!!!
Specific Goal: Learn C
Question: Why C instead of Java?
Answer 1: A primary language for “under the hood” programming
Answer 2: Knowing a variety of approaches helps you “program in the large”
Specific Goal: Learn Linux
Question: Why use the Linux operating system?
Answer 1: Linux is the industry standard for servers, embedded devices, education, and research
Answer 2: Linux (with GNU tools) is good for programming (which helps explain answer 1)
Agenda
Course overview
• Introductions
• Course goals
• Resources
• Grading
• Policies
• Schedule
Getting started with C
• History of C
• Building and running C programs
• Characteristics of C
• C details (if time)
Lectures
• Describe material at conceptual (high) level
• Slides available via course website
Etiquette
• Use electronic devices only for taking notes or annotating slides (but consider taking notes by hand – research shows it works better!)
• No SnapFaceNewsBookInstaGoo, please
iClicker
• Register in Blackboard (not with iClicker – they’ll charge you)
• Occasional questions in class, graded on participation (with a generous allowance for not being able to attend)
iClicker Question
Q: Do you have an iClicker with you today?
A. Yes
B. No, but I've been practicing my mental electrotelekinesis and the response is being registered anyway
C. I'm not here, but someone is iClicking for me (don't do this – it's a violation of our course policies!)
Precepts
- Describe material at the "practical" (low) level
- Support your work on assignments
- Hard copy handouts distributed during precepts
- Handouts available via course website
Etiquette
- Attend your precept – attendance will be taken
- Must miss your precept? ⇒ inform preceptors & attend another
- Use TigerHub to move to another precept
- Trouble ⇒ See Colleen Kenny (CS Bldg 210)
- But Colleen can’t move you into a full precept
Precepts begin next week!
Website
https://www.cs.princeton.edu/courses/archive/fall19/cos217/
- Home page, schedule page, assignment page, policies page
Piazza
- Instructions provided in first precept
Piazza etiquette
- Study provided material before posting question
- Lecture slides, precept handouts, required readings
- Read / search all (recent) Piazza threads before posting question
- Don’t reveal your code!
- See course policies
Books
C Programming: A Modern Approach (required)
- King
- C programming language and standard libraries
ARM 64-bit Assembly Language (required)
- Pyeatt & Ughetta
- Book or preprint will be made available later in the term
The Practice of Programming (recommended)
- Kernighan & Pike
- "Programming in the large"
Computer Systems: A Programmer's Perspective (recommended)
- Bryant & O’Hallaron
- "Under the hood"
Manuals (for reference only, available online)
- ARMv8 Instruction Set Overview
- Using as, the GNU Assembler
See also
- Linux man command
Programming Environment
Server
ArmLab Cluster
- Linux OS
- Your Program
- armlab01
- armlab02
Client
Your Computer
On-campus or off-campus
Agenda
Course overview
- Introductions
- Course goals
- Resources
Getting started with C
- History of C
- Building and running C programs
- Characteristics of C
- C details (if time)
Grading
* Final assignment counts double; penalties for lateness
** Closed book, closed notes, no electronic devices
*** Did your involvement benefit the course as a whole?
- Lecture/precept attendance and participation counts
Programming Assignments
Regular (not-quite-weekly) assignments
0. Introductory survey
1. *De-comment* program
2. String module
3. Symbol table module
4. Assembly language programs
5. Buffer overrun attack
6. Heap manager module
7. Unix shell
*(some individual, some done with a partner from your precept)*
Assignments 0 and 1 are available now
Start early!!!
Policies
Learning is a collaborative activity!
- Discussions with others that help you understand concepts from class are encouraged
But programming assignments are graded!
- Everything that gets submitted for a grade must be exclusively your own work
- Don’t look at code from someone else, the web, Github, etc. – see the course "Policies" web page
- Don’t reveal your code or design decisions to anyone except course staff – see the course "Policies" web page
Violations of course policies
- Typical course-level penalty is 0 on the assignment
- Typical University-level penalty is suspension from University for 1 academic year
### Assignment Related Policies
**Some highlights:**
- You may not reveal any of your assignment solutions (products, descriptions of products, design decisions) on Piazza.
- **Getting help:** To help you compose an assignment solution you may use only authorized sources of information, may consult with other people only via the course's Piazza account or via interactions that might legitimately appear on the course's Piazza account, and must declare your sources in your readme file for the assignment.
- **Giving help:** You may help other students with assignments only via the course's Piazza account or interactions that might legitimately appear on the course's Piazza account, and you may not share your assignment solutions with anyone, ever (including after the semester is over), in any form.
**Ask the instructor for clarifications**
- Permission to deviate from policies must be obtained in writing.
### Questions?
### Agenda
#### Course overview
- Introductions
- Course goals
- Resources
- Grading
- Policies
- Schedule
#### Getting started with C
- History of C
- Building and running C programs
- Characteristics of C
- C details (if time)
### Course Schedule
<table>
<thead>
<tr>
<th>Weeks</th>
<th>Lectures</th>
<th>Precepts</th>
</tr>
</thead>
<tbody>
<tr>
<td>1-2</td>
<td>C (conceptual) Number Systems</td>
<td>C (pragmatic) Linux/GNU</td>
</tr>
<tr>
<td>3-6</td>
<td>Programming in the Large</td>
<td>Advanced C</td>
</tr>
<tr>
<td>6</td>
<td>Midterm Exam</td>
<td></td>
</tr>
<tr>
<td>7</td>
<td>Fall break!</td>
<td></td>
</tr>
<tr>
<td>8-13</td>
<td>"Under the Hood" (conceptual)</td>
<td>"Under the Hood" (assignment how-to)</td>
</tr>
<tr>
<td></td>
<td>Reading Period</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Final Exam</td>
<td></td>
</tr>
</tbody>
</table>
### The C Programming Language
**Who?** Dennis Ritchie
**When?** ~1972
**Where?** Bell Labs
**Why?** Build the Unix OS
Java vs. C: History
<table>
<thead>
<tr>
<th>Year</th>
<th>Language</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr>
<td>1960</td>
<td>BCPL</td>
<td></td>
</tr>
<tr>
<td>1970</td>
<td>B</td>
<td></td>
</tr>
<tr>
<td>1972</td>
<td>C</td>
<td></td>
</tr>
<tr>
<td>1978</td>
<td>K&R C</td>
<td></td>
</tr>
<tr>
<td>1989</td>
<td>ANSI C89</td>
<td></td>
</tr>
<tr>
<td>1999</td>
<td>ISO C99</td>
<td></td>
</tr>
<tr>
<td>2011</td>
<td>ISO C11</td>
<td></td>
</tr>
</tbody>
</table>
C vs. Java: Design Goals
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Build the Unix OS</td>
<td>Language of the Internet</td>
</tr>
<tr>
<td>Low-level; close to HW and OS</td>
<td>High-level; insulated from hardware and OS</td>
</tr>
<tr>
<td>Good for system-level programming</td>
<td>Good for application-level programming</td>
</tr>
<tr>
<td>Support structured programming</td>
<td>Support object-oriented programming</td>
</tr>
<tr>
<td>Unsafe: don’t get in the programmer’s way</td>
<td>Safe: can’t step “outside the sandbox”</td>
</tr>
<tr>
<td>Look like C!</td>
<td></td>
</tr>
</tbody>
</table>
Agenda
Course overview
- Introductions
- Course goals
- Resources
- Grading
- Policies
- Schedule
Getting started with C
- History of C
- Building and running C programs
- Characteristics of C
- C details (if time)
Building Java Programs
```
$ javac MyProg.java
```
Java compiler (machine lang code)
[Diagram: on ArmLab hardware under Linux, javac translates MyProg.java (Java code) into MyProg.class (bytecode)]
Running Java Programs
```
$ java MyProg
```
Java interpreter / “virtual machine” (machine lang code)
[Diagram: on ArmLab hardware under Linux, the Java interpreter executes MyProg.class (bytecode)]
Building C Programs
```
$ gcc217 myprog.c -o myprog
```
C “Compiler driver” (machine lang code)
[Diagram: on ArmLab hardware under Linux, gcc217 translates myprog.c (C code) into myprog (machine lang code)]
Running C Programs
$ ./myprog
[Diagram: on ArmLab hardware under Linux, myprog (machine lang code) runs directly, reading and writing data]
Agenda
Course overview
• Introductions
• Course goals
• Resources
• Grading
• Policies
• Schedule
Getting started with C
• History of C
• Building and running C programs
• Characteristics of C
• C details (if time)
Java vs. C: Portability
<table>
<thead>
<tr>
<th>Program</th>
<th>Code Type</th>
<th>Portable?</th>
</tr>
</thead>
<tbody>
<tr>
<td>MyProg.java</td>
<td>Java source code</td>
<td>Yes</td>
</tr>
<tr>
<td>myprog.c</td>
<td>C source code</td>
<td>Mostly</td>
</tr>
<tr>
<td>MyProg.class</td>
<td>Bytecode</td>
<td>Yes</td>
</tr>
<tr>
<td>myprog</td>
<td>Machine lang code</td>
<td>No</td>
</tr>
</tbody>
</table>
Conclusion: Java programs are more portable
(In particular, last semester we moved from the x86_64-based "courselab" to the ARM64-based "armlab", and all of the programs had to be recompiled!)
Java vs. C: Safety & Efficiency
Java
• Automatic array-bounds checking,
• NULL pointer checking,
• Automatic memory management (garbage collection)
• Other safety features
C
• Manual array-bounds checking
• No run-time NULL-pointer checking
• Manual memory management
Conclusion 1: Java is often safer than C
Conclusion 2: Java is often slower than C
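To make the "manual bounds checking" point concrete, here is a minimal sketch (the function name and fallback convention are illustrative, not part of the course material): C performs no run-time index or NULL check, so the programmer writes one.

```c
#include <stddef.h>

/* C does no run-time array-bounds or NULL check; the programmer
   must add them. Returns a[i] if valid, else the fallback value. */
int safe_get(const int *a, size_t len, size_t i, int fallback)
{
    if (a == NULL || i >= len)   /* manual NULL and bounds check */
        return fallback;
    return a[i];
}
```

In Java, `a[5]` on a 3-element array throws an exception automatically; in C the same access is undefined behavior unless a check like the one above runs first.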
Java vs. C: Characteristics
<table>
<thead>
<tr>
<th></th>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>Portability</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>Efficiency</td>
<td>-</td>
<td>+</td>
</tr>
<tr>
<td>Safety</td>
<td>+</td>
<td>-</td>
</tr>
</tbody>
</table>
iClicker Question
Q: Which corresponds to the C programming language?
A.
B.
C.
Agenda
Course overview
- Introductions
- Course goals
- Resources
- Grading
- Policies
- Schedule
Getting started with C
- History of C
- Building and running C programs
- Characteristics of C
- C details (if time)
Java vs. C: Details
Remaining slides provide some details
Use for future reference
Slides covered now, as time allows...
Java vs. C: Details
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hello.java</td>
<td>hello.c</td>
</tr>
<tr>
<td><code>public class Hello</code></td>
<td><code>#include <stdio.h></code></td>
</tr>
<tr>
<td><code>{  public static void main(String[] args)</code></td>
<td><code>int main(void)</code></td>
</tr>
<tr>
<td><code>   {  System.out.println("hello, world");</code></td>
<td><code>{  printf("hello, world\n");</code></td>
</tr>
<tr>
<td><code>   }</code></td>
<td><code>   return 0;</code></td>
</tr>
<tr>
<td><code>}</code></td>
<td><code>}</code></td>
</tr>
</tbody>
</table>
Building
$ javac Hello.java
$ gcc hello.c -o hello
Running
$ java Hello
hello, world
$ ./hello
hello, world
Java vs. C: Details
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>Arrays</td>
<td>Arrays</td>
</tr>
<tr>
<td>int [] a = new int [10];</td>
<td>int a[10];</td>
</tr>
<tr>
<td>float [] b = new float [5][20];</td>
<td>float b[5][20];</td>
</tr>
</tbody>
</table>
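A small illustration of the array rows above: unlike Java's `a.length`, a C array carries no stored length, but for a true array (not a pointer) `sizeof` can recover the element count. The function name here is ours, for illustration only.

```c
#include <stddef.h>

/* A C array's size is fixed at compile time; sizeof gives the
   total byte count, so bytes / bytes-per-element = length. */
size_t length_of_ten_ints(void)
{
    int a[10];
    return sizeof(a) / sizeof(a[0]);   /* 10 */
}
```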
Array bound checking
// run-time check /* no run-time check */
Pointer type
// Object reference is an implicit pointer
int *p;
Record type
class Mine
{ int x;
float y;
};
struct Mine
{ int x;
float y;
};
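The `struct Mine` record type above can be exercised like this; since C structs have no constructors, a plain function fills the fields (`make_mine` is an illustrative name, not from the slides).

```c
/* Same record type as in the Java/C comparison above. */
struct Mine
{
    int x;
    float y;
};

/* C has no constructors; an ordinary function initializes
   the fields and returns the struct by value. */
struct Mine make_mine(int x, float y)
{
    struct Mine m;
    m.x = x;
    m.y = y;
    return m;
}
```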
Java vs. C: Details
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>String x1 = "Hello";</td>
<td>char *x1 = "Hello";</td>
</tr>
<tr>
<td>String x2 = new String("Hello");</td>
<td>#include <string.h></td>
</tr>
<tr>
<td>x1 += x2</td>
<td>strcat(x1, x2);</td>
</tr>
</tbody>
</table>
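One caution about the `strcat` row above: unlike Java's `+=`, `strcat` allocates nothing; the destination must already be a writable buffer with enough room (and must never be a string literal). A hedged sketch of a safer wrapper (`concat_into` and its room-checking convention are ours, not the slides'):

```c
#include <string.h>
#include <stddef.h>

/* Concatenate s1 and s2 into dest only if dest has room;
   strcat itself performs no such check. */
char *concat_into(char *dest, size_t destsize,
                  const char *s1, const char *s2)
{
    if (strlen(s1) + strlen(s2) + 1 > destsize)
        return NULL;              /* not enough room */
    strcpy(dest, s1);
    strcat(dest, s2);
    return dest;
}
```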
Logical ops *
&&, ||, !
Relational ops *
==, !=, <, >, <=, >=
Arithmetic ops *
+, -, *, /, %
Bitwise ops
<<, >>, >>> (Java only), &, |, ^, ~
Assignment ops
=, +=, -=, *=, /=, %=, <<=, >>=, &=, |=, ^=
* Essentially the same in the two languages
Java vs. C: Details
### If stmt
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>if (i < 0)</code></td>
<td><code>if (i < 0)</code></td>
</tr>
<tr>
<td><code>statement1;</code></td>
<td><code>statement1;</code></td>
</tr>
<tr>
<td><code>else</code></td>
<td></td>
</tr>
<tr>
<td><code>statement2;</code></td>
<td><code>statement2;</code></td>
</tr>
</tbody>
</table>
### Switch stmt
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>switch (i)</code></td>
<td><code>switch (i)</code></td>
</tr>
<tr>
<td><code>{</code></td>
<td></td>
</tr>
<tr>
<td><code>case 1:</code></td>
<td></td>
</tr>
<tr>
<td><code>break;</code></td>
<td></td>
</tr>
<tr>
<td><code>case 2:</code></td>
<td></td>
</tr>
<tr>
<td><code>break;</code></td>
<td></td>
</tr>
<tr>
<td><code>...</code></td>
<td></td>
</tr>
<tr>
<td><code>default:</code></td>
<td></td>
</tr>
<tr>
<td><code>break;</code></td>
<td></td>
</tr>
<tr>
<td><code>...</code></td>
<td></td>
</tr>
<tr>
<td><code>}</code></td>
<td><code>}</code></td>
</tr>
</tbody>
</table>
### Goto stmt
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>goto</code></td>
<td><code>goto</code></td>
</tr>
<tr>
<td><code>statement;</code></td>
<td></td>
</tr>
</tbody>
</table>
* Essentially the same in the two languages
### For stmt
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>for (int i=0; i<10; i++)</code></td>
<td><code>for (i=0; i<10; i++)</code></td>
</tr>
<tr>
<td><code>statement;</code></td>
<td></td>
</tr>
</tbody>
</table>
### While stmt
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>while (i < 0)</code></td>
<td></td>
</tr>
<tr>
<td><code>statement;</code></td>
<td></td>
</tr>
</tbody>
</table>
### Do-while stmt
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>do</code></td>
<td></td>
</tr>
<tr>
<td><code>statement;</code></td>
<td></td>
</tr>
<tr>
<td><code>while (i < 0)</code></td>
<td></td>
</tr>
</tbody>
</table>
### Continue stmt
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>continue;</code></td>
<td><code>continue;</code></td>
</tr>
</tbody>
</table>
### Break stmt
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>break;</code></td>
<td><code>break;</code></td>
</tr>
</tbody>
</table>
### Compound stmt
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>{</code></td>
<td></td>
</tr>
<tr>
<td><code>statement1;</code></td>
<td></td>
</tr>
<tr>
<td><code>statement2;</code></td>
<td></td>
</tr>
<tr>
<td><code>}</code></td>
<td><code>}</code></td>
</tr>
</tbody>
</table>
### Exceptions
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>throw, try-catch-finally</code></td>
<td><code>/* no equivalent */</code></td>
</tr>
</tbody>
</table>
### Comments
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>// another kind</code></td>
<td></td>
</tr>
<tr>
<td><code>/* comment */</code></td>
<td><code>/* comment */</code></td>
</tr>
</tbody>
</table>
### Method / function
<table>
<thead>
<tr>
<th>Java</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>f(x, y, z);</code></td>
<td><code>f(x, y, z);</code></td>
</tr>
<tr>
<td><code>someObject.f(x, y, z);</code></td>
<td></td>
</tr>
<tr>
<td><code>SomeClass.f(x, y, z);</code></td>
<td></td>
</tr>
</tbody>
</table>
* Essentially the same in the two languages
Example C Program
```c
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
const double KMETERS_PER_MILE = 1.609;
int miles;
double kMeters;
printf("miles: ");
if (scanf("%d", &miles) != 1)
{
fprintf(stderr, "Error: Expected a number.\n");
exit(EXIT_FAILURE);
}
kMeters = (double)miles * KMETERS_PER_MILE;
printf("%d miles is %f kilometers.\n", miles, kMeters);
return 0;
}
```
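The conversion at the heart of the example program can be pulled out into a pure function, which makes it easy to check without `scanf` (the function name is ours, not the slide's):

```c
/* Same arithmetic as in the example program above. */
double miles_to_km(int miles)
{
    const double KMETERS_PER_MILE = 1.609;
    return (double)miles * KMETERS_PER_MILE;
}
```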
Summary
**Course overview**
- Introductions
- Course goals
- Goal 1: Learn "programming in the large"
- Goal 2: Look "under the hood" and learn low-level programming
- Use of C and Linux supports both goals
- Resources
- Lectures, precepts, programming environment, Piazza, textbooks
- Course website: access via http://www.cs.princeton.edu
- Grading
- Policies
- Schedule
**Getting started with C**
- History of C
- Building and running C programs
- Characteristics of C
- Details of C
- Java and C are similar
- Knowing Java gives you a head start at learning C
Getting Started
Check out course website soon
- Study "Policies" page
- First assignment is available
Establish a reasonable computing environment soon
- Instructions given in first precept
The risk analysis of software projects based on Bayesian Network
1,2 Feng XU, 3,4 Guijie QI, Yanan Sun
1,2 First author: School of Management, Shandong University, Jinan, P.R. China
2 School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan, P.R. China
3,4 Corresponding author: School of Management, Shandong University, qiguijie@sdu.edu.cn
4 School of Business, Shandong University of Finance and Economics, Jinan, P.R. China
Abstract
As software technology and project-management practice improve, successful software projects are no longer exceptional. This does not mean software projects are easy to complete: in fact, their success rate remains low. Software systems exist to reduce the uncertainty of real-world systems, yet the process of building them is itself full of uncertainty, so identifying and analyzing software risk is essential. Much existing research on ERP project risk analysis applies qualitative and quantitative methods, but most of these methods ignore the dynamic nature of ERP projects, offering only prediction in advance or analysis after the fact, which cannot meet the demands of ERP project management. Bayesian networks, which support real-time analysis, are therefore chosen to analyze ERP project risks. Through a literature review and a survey of Chinese software professionals, the paper identifies 12 risk factors. These factors are then revised for an ERP project at an automobile mould factory in China, and, exploiting the easily extended topology and strong self-learning ability of Bayesian networks, the paper uses prior and posterior knowledge to analyze the project's risk during the requirements period. The application shows that Bayesian networks provide an effective, systematic method for software project risk analysis.
Keywords: Bayesian network; risk analysis; IT project; software engineering.
1. Introduction
As the demand for software systems grows, the software industry faces more and more problems, such as project delays, budget overruns, and poor quality. Since 1994 the Standish Group has published *The Standish Group CHAOS Report* (http://www.standishgroup.com), based on extensive investigation of historical IT-project data. From 1994 to 2004 the results show a very low success rate, as shown in Figure 1. The result for the third quarter of 2004, promulgated in 2005, was not encouraging either: 18% of all projects failed completely, 53% were completed with unsatisfactory time, cost, or effect, and only 29% successfully achieved their targets.
[Figure 1. Standish Group survey results on IT project success rates, 1994-2004]
The core of an ERP project is software development, which differs from other products: the process of software development is purely engineering (no manufacturing), and its main input is human resources rather than material resources. The software product itself is not a physical object, just code and technical files. Because of these characteristics, software project management is quite distinctive. Software development mixes many new techniques with validated old ones, the quality of the final product is often low, and it is hard to establish a standard technical process or a mature development process. ERP projects in particular require long timescales, large-scale organization and coordination, and long development cycles, so there are many unforeseeable uncertainties that make the plan, cost, time, and quality of the project hard to predict. Software systems exist to reduce the uncertainty of real-world systems, yet the process of building them is itself full of uncertainty. Therefore the problems of how to define, evaluate, and measure ERP project risk, and how to respond to it, must be solved so that the impact of risk is minimized or reduced to an acceptable level.
2. Recognition of the risk factors of ERP project management
Boehm (1991) proposed the ten most deadly risk factors in software project management: personnel shortfalls (talent, skills, responsibility, etc.), unrealistic project plans and budgets, developing the wrong software functions and features, misunderstanding of goals, developing the wrong user interface, a continuing stream of requirements changes, shortfalls in externally furnished components, shortcomings in project management, real-time performance shortfalls, and straining computer-science capabilities.
Marvin J. Carr (1993) used a classification scheme to identify risk factors, dividing software risk into three parts: product engineering, development environment, and program constraints. Product engineering covers requirements, design, coding and testing, integration testing, and engineering properties; the development environment covers the development process, development systems, management processes, management methods, and working conditions; program constraints cover resources, contracts, and program interfaces. Each sub-category contains a number of specific risk factors, 54 in total.
Jones (1994) proposed 60 risk factors covering the enterprise, employees, customers, software technology, and other aspects of the software development environment, and gave the frequency and loss associated with each risk factor.
Anthony Kwok Tai Hui (2004) proposed 24 direct risk factors and other risk events, including staff experience, morale, the technology and environment needed for the software project, software interfaces, and organizational structure.
Linda Wallace and Mark Keil (2004) found that software risk lies in six dimensions: the team; organization and environment; requirements; planning and control; the user; and project complexity. Each dimension includes a number of risk factors.
With reference to the literature summarized above, we surveyed practitioners in various roles (project managers, QA, testers, consultants, programmers, systems analysts, etc.), each with more than four years of software experience, drawn from six different ERP projects. On this basis, the paper proposes the following risk factors:
- Lack the key technology. The project requires a specific skill, such as requirements analysis or algorithm design, that no one on the team possesses.
- Rely on the key person. Without effective knowledge management, most members cannot reach the level the project requires in a short time, so the project over-relies on a small number of core members.
- The lack of responsibility and low morale. This refers to lack of discipline, staff instability, and low employee satisfaction.
- Lower productivity. This refers to serious overruns in project schedule and cost.
- Lack of customer support. This means the absence of clear customer requirements and low satisfaction with the software product.
- The wrong criterion, referring to a wrong or hazy definition of project success or failure.
- The lack of necessary communication, meaning there is no effective communication mechanism among the project stakeholders.
- Lower CMMI Level, meaning poor software project management capability and the lack of a software project management system.
- Poor change control ability. When changes happen during the implementation period, there is no systematic control over them.
- High project complexity. The relationships among the many activities are too complex to predict or control.
- The pressure on the project time limit. Unreasonable compression of the project schedule means the project cannot be finished on time.
- Immature development technology, meaning members are not familiar with the development platform, products, or development tools.
The risk factors are listed in Table 1. The level of each risk factor can be determined from its related events, while the occurrence of those events in turn raises the associated risk factors. These relationships form a Bayesian network structure.
Table 1. The list of risk factors
<table>
<thead>
<tr>
<th>Risk factors</th>
<th>Related events</th>
<th>The associated risk factors</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lack the key technology</td>
<td>progress delay, Wrong designing, High error rate, Wrong product</td>
<td>Lower productivity</td>
</tr>
<tr>
<td>Rely on the key person</td>
<td>progress delay</td>
<td>The lack of responsibility and low morale</td>
</tr>
<tr>
<td>The lack of responsibility and low morale</td>
<td>High error rate, Staff mobility, Heavy workload</td>
<td>Lower productivity</td>
</tr>
<tr>
<td>Lower productivity</td>
<td>Long period, Increased workload, Increased project cost</td>
<td>The pressure on the project time limit</td>
</tr>
<tr>
<td>Lack of customer support</td>
<td>Lose customer demand, High error rate, Long period</td>
<td>The lack of responsibility and low morale, Poor change control ability</td>
</tr>
<tr>
<td>The wrong criterion</td>
<td>The wrong workload estimation, The wrong staff arrange</td>
<td>The pressure on the project time limit</td>
</tr>
<tr>
<td>The lack of necessary communication</td>
<td>High error rate, Poor understanding of the objectives</td>
<td>The pressure on the project time limit, Poor change control ability, The lack of responsibility and low morale, Lack of customer support</td>
</tr>
<tr>
<td>Lower CMMI Level</td>
<td>Lack of development norms</td>
<td>The pressure on the project time limit, The wrong criterion</td>
</tr>
<tr>
<td>Poor change control ability</td>
<td>High error rate, Heavy workload and cannot be measured, Test delay, High possibility of catastrophe losses</td>
<td>Lower productivity</td>
</tr>
<tr>
<td>High project complexity</td>
<td>Increasing number of exchanges, Increasing effect of other risk factors</td>
<td>All risk factors</td>
</tr>
<tr>
<td>The pressure on the project time limit</td>
<td>High error rate, High fatigue, Lack of development norms</td>
<td>Lower productivity, The lack of responsibility and low morale, Poor change control ability</td>
</tr>
<tr>
<td>Immature development technology</td>
<td>High error rate, Poor quality assurance, Wrong processing methods</td>
<td>The pressure on the project time limit, Lower productivity</td>
</tr>
</tbody>
</table>
3. Reasons for using Bayesian networks
Risk analysis methods are usually divided into qualitative and quantitative approaches.
Qualitative risk analysis defines the sources of risk and gives an imprecise numerical estimate of each risk's probability and loss; by combining these two dimensions in various ways, risks are divided into levels that measure their size and importance. Risks at different levels can then be handled with measures appropriate to each level. Commonly used qualitative methods include the Risk Assessment Code (RAC), the Total Risk Exposure Code (TREC), and the Short-Cut Risk Assessment Method (SCRAM).
Building on qualitative analysis, quantitative methods use mathematical techniques or algorithms to produce quantitative indicators of the risk factors and their probabilities of occurrence. Commonly used quantitative methods include the Monte Carlo Simulation Approach (MCSA), the Venture Evaluation Review Technique (VERT), Fault Tree Analysis (FTA), Influence Diagrams (ID), Probabilistic Risk Assessment (PRA), Dynamic Probabilistic Risk Assessment (DPRA), Artificial Neural Networks (ANN), and Bayesian networks (BN).
Clyde Chittister (1994) proposed three questions that a software risk analysis must answer: Where is the problem? What is its probability of occurrence? What are its consequences? Qualitative methods can basically answer the first question and give conceptual answers to the third, but they cannot produce quantitative values and therefore cannot address the probability question. Qualitative methods are thus suitable as risk analysis tools for small software projects, or as supporting tools for large-scale software project risk analysis.
By the criteria above, quantitative analysis methods can be divided into:
- Tree-based methods, such as FTA, PRA, and DPRA. These answer the questions above by using event probabilities in the model to reflect consequences, but they cannot express independence between events, nor can they clearly model the factors that affect human behavior.
- Network-based methods, such as VERT, ID, ANN, and BN. These are more complex than tree-based methods but, by treating the risk factors as a system, come closer to the real relationships between them. The methods differ considerably, however. ID is grounded in decision theory and reasons rigorously, giving high reliability, but in networks with many intermediate nodes the reasoning becomes so complex that beliefs cannot be updated backwards, only propagated from the top-level node to the target node. ANN has strong learning and analysis capabilities and handles uncertain problems such as risk with accurate results, but training a neural network requires a large amount of sample data, and because its intermediate nodes are opaque its results can only be forecast, not tracked or simulated.
BN answers Clyde Chittister's three questions well. It matches the characteristics of software projects better than FTA and the other methods, and to some extent fills the gaps they leave. As a flexible method, BN can place different information and evidence in its nodes; if sufficient sample data are available as evidence, even the network topology can be changed. This matters because it makes real-time updating of the causal relationships created and captured during software development possible. BN also has powerful reasoning and posterior-learning abilities: on a sound mathematical basis, evidence can be propagated so that any node can be updated in any direction, and the method retains its learning and analysis ability even when some data are missing. BN is therefore more appropriate than the alternatives for software project risk analysis and evaluation.
4. The process of risk factor analysis based on BN
The core processes of software development can be divided into business modeling, requirements, analysis and design, implementation, test, and deployment (www.rational.com). An actual ERP project involves many other processes as well, and different ERP projects differ in the kinds, number, and origins of risk factors across these processes. An ERP project starts from the requirements of customers, which in most cases must be elicited by the software developers. Requirements analysis is the process in which developers establish the feasibility and consistency of the customers' requirements, communicating with the customers again and again. Any fault introduced in this process is gradually amplified, so the risks of the requirements process deserve particular attention.
This paper builds a Bayesian network analysis model for risk factors based on data from the requirements process of an ERP project at an automobile mould factory in China.
4.1 To build a BN analysis model of risk factors
A BN is a network of nodes connected by directed links, with a probability function attached to each node. The graph of a BN is a directed acyclic graph (DAG), i.e., it contains no directed path that starts and ends at the same node. If a node has no parents (no links pointing towards it), it holds a marginal probability table; if a node has parents (one or more links pointing towards it), it holds a conditional probability table (CPT).
Formally, a Bayesian network can be defined as follows:
Definition: A Bayesian network is a pair (G,P), where G=(V,E) is a directed acyclic graph (DAG) over a finite set of nodes (or vertices), V, interconnected by directed links (or edges), E, and P is a set of (conditional) probability distributions. The network has the following property:
Each node representing a variable A with parent nodes representing variables B1, B2,..., Bn is assigned a conditional probability table (CPT) representing P(A | B1, B2, ..., Bn).
The nodes represent random variables, and the links represent probabilistic dependences between variables. These dependences are quantified through a set of conditional probability tables (CPTs): Each variable is assigned a CPT of the variable given its parents. For variables without parents, this is an unconditional (also called a marginal) distribution.
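The definitions above can be made concrete with a small sketch. The following Python fragment is illustrative only: the node names echo Table 2, but the probability values are invented assumptions, not the paper's learned CPTs. A parentless node maps the empty tuple to its marginal probability; a child node's CPT is indexed by its parents' states.

```python
# A minimal Bayesian-network fragment as plain data structures.
# Each node lists its parents and a table mapping parent-state tuples
# to P(node = True | parents); a parentless node maps () to its marginal.
bn = {
    "Ktechnology": {"parents": [], "cpt": {(): 0.3}},   # assumed marginal
    "Kperson":     {"parents": [], "cpt": {(): 0.4}},   # assumed marginal
    "Productivity_low": {
        "parents": ["Ktechnology", "Kperson"],
        "cpt": {  # assumed P(low productivity | both parents' states)
            (True, True): 0.8,
            (True, False): 0.6,
            (False, True): 0.6,
            (False, False): 0.2,
        },
    },
}

def prob(node, value, assignment):
    """P(node = value | parent values taken from `assignment`)."""
    spec = bn[node]
    key = tuple(assignment[p] for p in spec["parents"])
    p_true = spec["cpt"][key]
    return p_true if value else 1.0 - p_true
```

For example, `prob("Productivity_low", True, {"Ktechnology": True, "Kperson": False})` looks up the CPT row for that parent combination.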
According to the definition above and the survey data, the network of risk factors for the requirements process is shown in Figure 2 (in this paper we select only a part of the topological structure of the risk factors).
Figure 2. The network of risk factor of requirements process
To finish the BN model, we must assign a CPT to each node. BNs have a self-learning ability, so algorithms can be used to obtain the CPTs; depending on the quality of the sample data, different algorithms apply, such as the gradient ascent algorithm (GAA) or the expectation-maximization (EM) algorithm. First we define the states of the nodes, shown in Table 2.
Table 2. The states of nodes in Figure 2
<table>
<thead>
<tr>
<th>Nodes</th>
<th>States of nodes</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Kperson</td>
<td>Low, Medium, High</td>
<td>The level of depending on key-persons</td>
</tr>
<tr>
<td>Ktechnology</td>
<td>Yes, No</td>
<td>Lack of key-technology</td>
</tr>
<tr>
<td>Dtechnology</td>
<td>Low, Medium, High</td>
<td>The degree of maturity of development technology</td>
</tr>
<tr>
<td>CMMI</td>
<td>1,2,3,4,5</td>
<td>The level of CMMI-SE/SW</td>
</tr>
<tr>
<td>Metrics</td>
<td>Right, Wrong</td>
<td>The metrics standard for software project</td>
</tr>
<tr>
<td>Pcomplexity</td>
<td>Low, Medium, High</td>
<td>Project complexity</td>
</tr>
<tr>
<td>Scustomer</td>
<td>Low, Medium, High</td>
<td>The level of Support of customer</td>
</tr>
<tr>
<td>Communication</td>
<td>Low, Medium, High</td>
<td>The level of Communication in project team</td>
</tr>
<tr>
<td>Productivity</td>
<td>Low, Medium, High</td>
<td>Productivity</td>
</tr>
<tr>
<td>Workload</td>
<td>Small, Medium, Large</td>
<td>The size of workload</td>
</tr>
<tr>
<td>Ccontrol</td>
<td>Low, Medium, High</td>
<td>The level of Change control</td>
</tr>
<tr>
<td>Ptime</td>
<td>Low, Medium, High</td>
<td>The pressure of Project time</td>
</tr>
<tr>
<td>DefectsRate</td>
<td>Low, Medium, High</td>
<td>Defects rate</td>
</tr>
</tbody>
</table>
Second, we use the following method to obtain initial CPT values. For a node with a single parent, the conditional probability given the parent is set according to the strength of the link: levels 1, 2, and 3 correspond to 0.6, 0.7, and 0.8, i.e., 0.5 plus 0.1 per level. Conversely, if the parent does not occur, the conditional probability is 0.5 minus the same amount. For a node with multiple parents, the contributions of all parents are summed; of course, the result should be normalized. For example, consider the network of nodes shown in figure 3.

In figure 3, C and E each have a single parent and D has two parents. By the method above we obtain the following results:
\[
P(C \mid A) = 0.6, P(C \mid \overline{A}) = 0.4; P(E \mid B) = 0.8, P(E \mid \overline{B}) = 0.2
\]
\[
P(D \mid A B) = 0.5 + 0.2 + 0.1 = 0.8, \qquad P(D \mid \overline{A}\,\overline{B}) = 0.5 - 0.2 - 0.1 = 0.2,
\]
\[
P(D \mid \overline{A} B) = 0.5 - 0.2 + 0.1 = 0.4, \qquad P(D \mid A \overline{B}) = 0.5 + 0.2 - 0.1 = 0.6
\]
When sample data are scarce, these initial values can either seed the BN's self-learning of the CPTs or be used as the CPT values directly. Although the method is not strictly derived, it reflects, to some extent, the survey results from the analysis model proposed by Anthony Kwok Tai Hui (2004).
Finally, the EM algorithm would normally be used to learn the CPTs; because we select a simple topological structure in this paper, we use the Delphi method to obtain the CPTs of the nodes instead.
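The initial-value heuristic can be sketched in code. The function below assumes a per-link weight of 0.1 times the link's strength level, which reproduces the worked example for node D (links A→D at level 2 and B→D at level 1); it is a sketch of the described rule, not of Hugin's learning algorithms.

```python
from itertools import product

def initial_cpt(levels):
    """Initial P(child occurs | parent states) for every combination of
    parent states. `levels` gives the strength level (1-3) of each
    parent link, in parent order; a present parent adds 0.1 * level to
    the 0.5 base, an absent parent subtracts it."""
    weights = [0.1 * lv for lv in levels]
    cpt = {}
    for states in product((True, False), repeat=len(weights)):
        p = 0.5 + sum(w if s else -w for w, s in zip(weights, states))
        cpt[states] = min(max(p, 0.0), 1.0)  # clamp; normalization omitted
    return cpt

# Reproduces the worked example for D with parents A (level 2), B (level 1):
# P(D|AB) ~ 0.8, P(D|-A B) ~ 0.4, P(D|A -B) ~ 0.6, P(D|-A -B) ~ 0.2
cpt_d = initial_cpt([2, 1])
```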
4.2 To predict the probability of risk factor
After building the BN analysis model, we can predict the probability of each risk, which helps us take precautions and reduce risks in the requirements process. In this paper we use Hugin Expert, a popular tool for building BNs, to carry out this process. Figure 4 illustrates the probabilities of the risk factors computed from the survey data before the ERP project starts.

According to the results in figure 4, when the ERP project is ready to begin the requirements process, the project team, and especially the project manager, must pay attention to controlling the project schedule: the probability of Ptime being at the Low level is only 22.31%, which suggests the project is likely to be delayed. The team should likewise watch the defects rate, since the probability of DefectsRate being at the Low level is only 34.28%, which suggests that defects are likely to appear.
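For a network as small as the fragment in figure 3, the kind of prediction Hugin Expert performs can be imitated by brute-force enumeration of the joint distribution. The sketch below uses the CPT for D derived in the text; the priors of 0.5 for A and B are illustrative assumptions, not values from the survey data.

```python
from itertools import product

# Illustrative assumed priors for the two parent nodes.
P_A, P_B = 0.5, 0.5
# P(D | A, B): the initial values computed in the text for figure 3.
P_D = {(True, True): 0.8, (True, False): 0.6,
       (False, True): 0.4, (False, False): 0.2}

def marginal_D(evidence=None):
    """P(D = True | evidence) by summing the joint distribution;
    `evidence` maps 'A' and/or 'B' to observed truth values."""
    evidence = evidence or {}
    num = den = 0.0
    for a, b, d in product((True, False), repeat=3):
        if evidence.get("A", a) != a or evidence.get("B", b) != b:
            continue  # inconsistent with the observed evidence
        joint = ((P_A if a else 1 - P_A)
                 * (P_B if b else 1 - P_B)
                 * (P_D[(a, b)] if d else 1 - P_D[(a, b)]))
        den += joint
        if d:
            num += joint
    return num / den
```

With these assumed priors, `marginal_D()` gives the prior belief in D, and `marginal_D({"A": True})` shows how observing the cause A raises it; this mirrors the real-time updating described in the next subsection.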
4.3 To track the change of the probability of risk factor in real time
Prior knowledge lets us predict the probabilities of risk factors and thus direct our work and plan risk responses before the project starts. Thanks to the flexibility of BN, we can also update the network in real time with data from the running project. For example, suppose that after the ERP project starts the team finds that project changes are not being controlled well; this corresponds to setting the probability of Ccontrol = Low to 1. After updating, the resulting risk-factor probabilities are shown in figure 5.

Compared with figure 4, the probability of Ptime at the Low level drops from 22.31% to 17.06%, and the probability of DefectsRate at the Low level drops from 34.28% to 25.68%; the project is now even more likely to be delayed, and the user less likely to be satisfied with the quality of the delivery. If the project team controls project changes strictly and at the same time raises productivity to the High level, the results are as presented in figure 6.
Compared with figure 5, the probability of Ptime at the Low level rises from 17.06% to 52.91%, and the probability of DefectsRate at the Low level rises from 25.68% to 41.82%: by improving productivity and change control, the project is likely to be finished in time and to provide a higher-quality delivery. Using BN we can thus see the effect of the measures we have taken, or compare the efficiency of alternative measures, which supports decision-making.
4.4 To find the causes for risks
During project implementation we face all kinds of problems and often rack our brains to find their origins. Because BN can compute posterior probabilities, it can help us find the root of a problem. For example, suppose the ERP project team is confronted with a high defects rate; this corresponds to setting the probability of DefectsRate = High to 1. The results computed with BN are presented in figure 7.
Compared with figure 4, when the defects rate is at the High level, the probable causes are poor change control or poor scope management: the probability of Ccontrol at the Low level rises from 37.92% to 50.11%, and the probability of Workload at the Large level rises from 28.88% to 37.46%. From the changes in figure 7 the ERP project team can therefore pick out the key risk factors and take measures; BN serves as a tool for finding the causes of risks that have occurred.
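This diagnostic direction can also be sketched on the toy fragment from figure 3: we observe the effect D and compute the posterior probability of each cause by Bayes' theorem. The priors here (0.3 for A, 0.5 for B) and the CPT values are illustrative assumptions, not the paper's survey data.

```python
from itertools import product

# Illustrative assumed priors for the two causes and CPT of the effect D.
P_A, P_B = 0.3, 0.5
P_D = {(True, True): 0.8, (True, False): 0.6,
       (False, True): 0.4, (False, False): 0.2}

def posterior(cause, effect_observed=True):
    """P(cause = True | D = effect_observed) by Bayes' theorem,
    summing the joint distribution over both causes."""
    num = den = 0.0
    for a, b in product((True, False), repeat=2):
        p_d = P_D[(a, b)] if effect_observed else 1 - P_D[(a, b)]
        joint = (P_A if a else 1 - P_A) * (P_B if b else 1 - P_B) * p_d
        den += joint
        if (a if cause == "A" else b):
            num += joint
    return num / den
```

Observing the effect raises the belief in both causes above their priors; this backward (posterior) reasoning is what lets a BN point at the likely roots of an observed problem.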
5. Conclusions
As an effective mathematical model for probabilistic inference, the Bayesian network has many advantages for controlling software project risks. Prior knowledge can be used to estimate the probability of each risk and to prepare countermeasures before the project starts. During implementation, real-time risk analysis is essential for estimating the impact on the project's schedule, quality, and so on, and the effect of optional measures can be evaluated to determine the best decision. Exploiting the Bayesian network's strength in posterior analysis, combined with the actual implementation situation, we can rapidly identify and analyze the risk factors behind project delays, quality decline, or cost overruns, and respond promptly. In addition, thanks to their easy extensibility and strong self-learning ability, Bayesian network topologies fit the changing character of software project risk factors. Applying Bayesian networks to analyze software project risk factors is therefore more suitable than other methods, and the results fit the real situation well; these conclusions have been borne out in practical applications.
6. References
OntoUML and UFO-A for Software Engineering
Robert PERGL
Zdeněk RYBOLA
David BUCHTELA
Ivan RYANT
Abstract: OntoUML is an extension to the well-known notation of UML by ontology-oriented conceptual modeling aspects. The OntoUML diagrams offer higher expressivity for conceptual modeling thanks to a finer categorization and definition of entity types. This paper summarizes basic principles and concepts of OntoUML in the perspective of using OntoUML for development of information systems. The advantages of higher expressivity of OntoUML are illustrated by an example. Other aspects like transformation to an implementation model and further development of OntoUML are discussed. Difficulties for wider spread of OntoUML in the professional community are also discussed.
Keywords: ontological modeling, conceptual modeling, OntoUML, software engineering, information system development.
JEL Classification: C63, C80
Introduction
The process of developing a complex business information system consists of several consecutive activities (Beck 1998). Assuming the requirements are defined and the project infrastructure is set, three crucial phases lead to the software realization, leaving aside the subsequent phases of testing, deployment, and support:
1) Analysis,
2) Design,
3) Implementation.
Figure 1. Simplified model of software development process denoting the key artifacts between the phases
Figure 1 shows the artifacts that serve as inputs and outputs between the phases of the process. The quality of the input artifacts significantly influences the success of each phase: a high-quality design cannot be created from low-quality analytical sources, and a high-quality implementation cannot be created from a low-quality design. Quality is influenced especially by these two factors:
- The quality of the modeling notation (Guizzardi, Ontological Foundations for Structural Conceptual Models 2005).
- The preservation of information between phases (Pícka and Pergl 2006).
In this paper, we deal especially with the quality of the analytical output artifacts, in particular with the conceptual model. Furthermore, we discuss the transformation of the structural conceptual model to a structural implementation model in the design phase.
**Goals**
The goal of this paper is to outline the possibilities and advantages of using the OntoUML notation for conceptual modeling of business information systems. We deal only with the structural models of the systems. Another goal of this paper is to outline the issue of transformation of the OntoUML models into implementation models, in particular for the pure object-oriented models. A side goal of the paper is to introduce the OntoUML notation that is not yet well-known by the professional public and the difficulties for the wider spread of this notation.
**Structure of the paper**
In section 2, we introduce the origin and structure of OntoUML. In section 3, we show the fundamentals and principles of OntoUML and provide the basic categorization of entity types. Higher expressivity of OntoUML is demonstrated on an example by comparing a UML and an OntoUML model. We discuss the task of transformation of an OntoUML conceptual model into an implementation model in section 4. Finally, in section 5, we outline the difficulties for wider usage of OntoUML by professional community and we provide conclusions and further plans.
**Origins of OntoUML**
OntoUML was created as an attempt to merge ontological analysis and conceptual modeling. The intention was to provide the analyst with a set of entity types with precisely defined qualities and characteristics, so that reality can be modeled as precisely as possible. OntoUML is based on cognitive-science knowledge about the specifics of our perception, on modal logic, and on the mathematical foundations of logic, sets, and relations. Syntactically it builds on the notation of UML class diagrams (Fowler 2003), extended with a set of new concepts via special stereotypes; this formally conforms to the UML metamodel (OMG n.d.), so current CASE tools, presentation tools, transformation tools, etc. can be used for OntoUML models. Unlike other UML extensions, OntoUML is built from the very foundations and constitutes a complete system independent of the original UML elements. It reuses some aspects (such as classes) but omits a number of problematic concepts (for instance aggregation and composition) and replaces them with its own ontologically correct concepts.
OntoUML was first presented in the dissertation of Dr. Giancarlo Guizzardi, defended with highest honors in 2005 at the University of Twente and later published as a monograph (Guizzardi, Ontological Foundations for Structural Conceptual Models 2005). Since then Dr. Guizzardi has been one of the most active scientists and authors in the field of conceptual modeling and serves on many conference boards in the field. He currently works at the Federal University of Espírito Santo in Vitória, Brazil, where he leads the NEMO research group.
Three foundational ontologies have been created using OntoUML, together forming UFO (the Unified Foundational Ontology):
1) UFO-A: Structural aspects – Objects, their types, parts, roles they play ...
2) UFO-B: Dynamic aspects – Events, their parts and relations, object participation in events, time-dependent behaviour ...
3) UFO-C: Social aspects – Based on UFO-A and UFO-B, dealing with agents, states, goals, actions, norms, social commitments and claims ...
UFO-A is considered complete and has been proven in a number of practical projects; no further intensive research into its extension and development is expected. UFO-B and UFO-C, on the other hand, are targets of current intensive research.
OntoUML and UFO have already been applied in practice in many complex projects, for example off-shore software development (a conceptual analysis for an international oil company) and projects in the fields of telecommunications and media content management.
---
1) A list of publications available for downloading is published at the personal webpage of Dr. Guizzardi (Guizzardi, Homepage n.d.).
Basic Principles of OntoUML
In the following we deal only with UFO-A, and the term OntoUML is used for the notation and UFO-A together. We focus on the structural aspects, i.e. the static modeling of structures.
Categories of the entity types
As mentioned above, OntoUML strictly distinguishes various categories of entities in the reality around us. Part of the entity types’ hierarchy is shown in Figure 2.
A set of strict criteria is defined to distinguish the categories:
- **Identity** – whether the entity type provides or carries an ontological identity. Types that do not provide identity inherit it from their ancestors.
- **Rigidity** – the changeability or immutability of the type. OntoUML defines these aspects in modal logic, using the operators of necessity and possibility over time and space (modal logic speaks of worlds). We distinguish rigidity, anti-rigidity, and non-rigidity (the negation of rigidity); accordingly, entity types are rigid, anti-rigid, or semi-rigid (rigid for some instances, non-rigid for others).
- **Relational dependency** – the entity type can exist only in relation to another entity type.
---
Figure 2. The hierarchy of basic entity type categories in OntoUML
The table shown in Figure 3 summarizes characteristics of the basic entity type categories.
<table>
<thead>
<tr>
<th>Category of Type</th>
<th>Supply Identity</th>
<th>Identity</th>
<th>Rigidity</th>
<th>Dependence</th>
</tr>
</thead>
<tbody>
<tr>
<td>SORTAL</td>
<td>-</td>
<td>+</td>
<td></td>
<td></td>
</tr>
<tr>
<td>« kind »</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>« subkind »</td>
<td>-</td>
<td>+</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>« role »</td>
<td>-</td>
<td>+</td>
<td>-</td>
<td>+</td>
</tr>
<tr>
<td>« phase »</td>
<td>-</td>
<td>+</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>NON-SORTAL</td>
<td>-</td>
<td>-</td>
<td></td>
<td></td>
</tr>
<tr>
<td>« category »</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>« roleMixin »</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>+</td>
</tr>
<tr>
<td>« mixin »</td>
<td>-</td>
<td>-</td>
<td>~</td>
<td>-</td>
</tr>
</tbody>
</table>
**Figure 3. Characteristics of the basic entity type categories**
Strictly distinguishing the category of each entity type has practical significance in software engineering. An example of a UML model and its OntoUML counterpart is shown in Figure 4; it describes a small part of a university information system. The UML model defines only four entity types: a general person, a student, an employee, and an insured employee. The OntoUML counterpart strictly distinguishes several categories of entity types and adds further required types. The UML model misses important information, which can lead to faulty design and implementation:
**Figure 4. An example of a UML model and its OntoUML counterpart for a university information system**
In the UML model, we distinguish students and employees, both as subtypes of a person. However, a person cannot be a student and an employee in the same time – in this case, two independent instances must be created. On the other hand, in the OntoUML model, we classify the student and employee types as roles of the person type. This means that each person can be in the role of a student or an employee at a faculty. The role category is anti-rigid. This classification enables the person to gain or lose the role or even be in both roles at the same time.
In the OntoUML model, the roles inherit the identity from the kind Person. This allows a person to gain or lose a role without losing its identity in the system. In the UML model, the student and the employee have their own identity. When a student finishes its studies and becomes an employee, it loses all its history in the system.
Phases in the OntoUML model are anti-rigid, too. This means that the employee can change its state between the insured employee and the uninsured employee without changing its identity. Unlike the roles, any person can be only in one and exactly one phase of the same phase partition. The rigid UML model does not support changing the insurance state of a person without changing the person’s identity. We could create an optional association between an employee and an insurance entity type; however, we would lose the polymorphism of employees. In the OntoUML, we can simply create an association between the insured employee and a car entity type denoting that only an insured employee can drive a car. This constraint cannot be expressed in pure UML.
A role in OntoUML is a relationally dependent type – there must be another entity type connected to the role by an association with a minimal multiplicity of 1. This rule forces the analyst to search for the truth maker of the role – what makes the kind entity take the role – and to include it in the model. UML has no such concept.
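To make the role semantics concrete, the following Java sketch is one possible way to mirror it in code. All names here (`Person`, `Student`, `Employee`, `Faculty`) are our own hypothetical illustration, not an implementation prescribed by the paper: the rigid kind keeps a single identity while the anti-rigid roles are gained and lost, and each role carries its truth maker.

```java
import java.util.Optional;

// Illustrative sketch only (hypothetical names): the rigid kind Person keeps
// one identity while the anti-rigid roles Student and Employee are gained
// and lost, possibly held at the same time.
record Faculty(String name) {}
record Student(Faculty at) {}    // a role is relationally dependent:
record Employee(Faculty at) {}   // it needs its truth maker (a Faculty)

final class Person {
    private final String id;     // identity never changes
    private Optional<Student> student = Optional.empty();
    private Optional<Employee> employee = Optional.empty();

    Person(String id) { this.id = id; }
    String id() { return id; }

    void enroll(Faculty f) { student = Optional.of(new Student(f)); }
    void hire(Faculty f)   { employee = Optional.of(new Employee(f)); }
    void graduate()        { student = Optional.empty(); }

    boolean isStudent()  { return student.isPresent(); }
    boolean isEmployee() { return employee.isPresent(); }
}
```

Unlike the UML variant discussed above, dropping the Student role here does not destroy the object: the same `Person` instance, with its history and identity, remains.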
**Characteristics of the Whole-Part relationship**
OntoUML also deals with the Whole-Part relationship in more detail than UML does. UML defines the composition and aggregation associations very vaguely and imprecisely. OntoUML provides a more precise definition of the relationship and of its multiplicities and obligations from the point of view of both the whole and the parts. Omitting these important aspects can result in faulty design and implementation because of the missing information.
---
2) There exist implementation solutions for such a situation; however, we focus on structural conceptual modeling and the way to express such characteristics at this level. We try to respect the rule that the conceptual model should be maximal and complete.

3) Such constraints can be expressed by OCL invariants attached to the diagram; however, these invariants cannot be expressed directly by the diagram elements.
**Types of the obligatory participation from the point of view of the whole entity**
The types of the obligatory participation from the perspective of the whole entity are shown in Figure 5.
- **Optional Part** – the part entity is optional for the whole entity, the whole instance can exist without any instance of the part entity.
- **Mandatory Part** – the part entity is obligatory for the whole entity. Any instance of the whole must be linked to an instance of the part entity; however, the part entity instance may change. An example of this type of obligation is a human heart: a human must have a heart to live, but the heart can be transplanted – the instance of the heart changes while the human instance does not. Another example is the engine of a car.
- **Essential Part** – the part entity is obligatory for the whole entity. Additionally, the part entity is essential – the part entity instance cannot be changed; changing it would destroy the identity of the whole entity instance. An example of this type of obligation is the human brain – the brain cannot be transplanted, and even if it could be, the human would be someone else with a new identity. Similarly, the chassis of a car cannot be changed without changing the car's identity, because the VIN is printed on it.
*Figure 5. The types of the obligatory participation of the part entity from the perspective of the Whole*
**Types of the obligatory participation from the point of view of the part entity**
The types of the obligatory participation of the whole entity from the perspective of the part entity are shown in Figure 6.
- **Optional Whole** – the whole entity is optional for the part entity. An instance of the part entity can exist without any instance of the whole entity.
- **Mandatory Whole** – the whole entity is obligatory for the part entity. An instance of the part entity must always be connected to an instance of the whole entity; however, the whole entity instance may change. An example of this type is a kidney, which can be transplanted from one person to another.
- **Inseparable Part** – an instance of the part entity must be a part of the same whole entity instance for its whole life. An example of this type is, again, the human brain, as it cannot be transplanted.
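The distinction between a mandatory and an essential part can be mirrored in code. In the following hypothetical Java sketch (class names are ours, chosen after the paper's own car example), the essential chassis is a `final` field fixed at construction, while the mandatory engine must always be present but may be replaced:

```java
// Illustrative sketch only: an essential part (chassis, carrying the VIN)
// versus a mandatory but replaceable part (engine).
final class Chassis {
    final String vin;
    Chassis(String vin) { this.vin = vin; }
}

final class Engine {
    final String serial;
    Engine(String serial) { this.serial = serial; }
}

final class Car {
    private final Chassis chassis;  // essential: fixed, carries identity (VIN)
    private Engine engine;          // mandatory: always present, may change

    Car(Chassis chassis, Engine engine) {
        if (chassis == null || engine == null)
            throw new IllegalArgumentException("both parts are mandatory");
        this.chassis = chassis;
        this.engine = engine;
    }

    void replaceEngine(Engine newEngine) {
        if (newEngine == null) throw new IllegalArgumentException("engine is mandatory");
        this.engine = newEngine;    // the car's identity (the VIN) is unaffected
    }

    String identity() { return chassis.vin; }
    Engine engine()   { return engine; }
}
```

Replacing the engine leaves the car's identity intact, whereas there is simply no operation for swapping the chassis: the type system itself enforces the essential participation.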
**Other characteristics of the participation in the Whole-Part relationship**
All the types of obligation mentioned above apply to rigid whole and part entities. When one of the types is anti-rigid, we have to distinguish whether the instance linked to the anti-rigid type instance can change when the type changes its anti-rigid state – remember, an entity can gain or lose a role and change its phase, and the question is whether, whenever it is in one of these roles or phases, it is always linked to the same instance of the other type. This characteristic is expressed by these meta-attributes of the relationship:
- **Immutable Part** – when an instance of the whole entity is in the anti-rigid state, it is always connected to the same instance of the part entity.
- **Immutable Whole** – when an instance of the part entity is in the anti-rigid state, it is always connected to the same instance of the whole entity.
OntoUML also defines the shareability of the part entity instance among the whole entity instances. This characteristic uses similar notation as aggregation and composition in UML but with a different meaning:
- The full symbol ♦, or the meta-attribute isShareable = false, stands for a non-shareable part – an instance of the part entity can be a part of only one whole entity instance at a time.
- The empty symbol ◊, or the meta-attribute isShareable = true, stands for a shareable part – an instance of the part entity can be a part of multiple whole entity instances at the same time.
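The isShareable = false case can be illustrated by a small sketch (our own hypothetical names, not from the paper) in which a part instance refuses to join a second whole while it is still owned:

```java
// Illustrative sketch only: a non-shareable part (isShareable = false)
// may belong to at most one whole at a time.
final class Wheel {
    private Object owner;    // the whole currently holding this part, if any

    void attachTo(Object whole) {
        if (owner != null && owner != whole)
            throw new IllegalStateException("non-shareable part already owned");
        owner = whole;
    }

    void detach() { owner = null; }

    boolean isAttached() { return owner != null; }
}
```

A shareable part (isShareable = true), by contrast, would simply keep a collection of owners instead of a single reference.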
It is important to mention that all these characteristics – the types of obligatory participation from the perspective of both the part and the whole entity, and the shareability of the part entity instances – are completely independent of each other, allowing any combination of them to be modeled as required by the domain reality.
**Types of the Whole-Part relationship**
OntoUML also distinguishes various types of the Whole-Part relationship and the roles the parts take in the whole:
**Quantity**
- The whole consists of parts of the same type.
- The whole is infinitely dividable into parts.
- The part represents all the material of the container.
- Examples: water in a bottle, stone of a statue, etc.
- Relation subQuantityOf: alcohol-wine, sugar-coffee, etc.
**Collective**
- The whole consists of parts of other types.
- The collective is not infinitely dividable.
- All parts take the same role in the collective.
- Relation memberOf: a tree – a forest, a student – a class, etc.
- Relation subCollectionOf: north part of a forest – a forest, cars – vehicles, etc.
**Functional Entity**
- The whole consists of parts that take various roles.
- Relation componentOf: a heart – a vascular system, a director – a company, an engine – a car, etc.
**Other parts of UFO-A**
UFO-A also deals in detail with associations, aspects (existentially dependent objects), qualities (complex measurable attributes) and their domains and also with a redefinition of specialization.
**Transformation of an OntoUML conceptual model into an implementation model**
While the conceptual model is the final result in conceptual analysis, in software engineering it is just one of the first input artefacts for the subsequent development phases. Creating the implementation and source code from the conceptual OntoUML model together with additional information and rules not captured in the model (behavioural models, state models, etc.) is possible but error-prone, because the semantic gap between the models is huge. The conceptual model also lacks some information required for the implementation, precisely because of the principle of the conceptual model – it must be independent of the implementation. The implementation, however, requires information such as the direction of associations for object navigation, query optimization and structure updates.
Therefore, we suggest creating an implementation model that captures how the conceptual model will be implemented. This model contains the structural information of the conceptual model along with additional implementation dependent information and it can be transformed into the source code.
Currently, we focus on a pure object-oriented implementation, so the implementation model consists of classes, attributes, methods, inheritance and composition. Of course, the conceptual model degenerates during the transformation; therefore, even the conceptual model must be consulted during the implementation.

Another option is the transformation to a relational model. The NEMO research group focused on this transformation; however, the results are not public because the research was done as a part of a commercial project. The relational implementation model seems to be more complicated because of the application logic (in PL/SQL, for example) required to keep the implementation valid according to the original conceptual model.
**Conclusions**
OntoUML and UFO can be used as tools for ontology-oriented conceptual modelling in many areas where an exact mental model needs to be expressed:
- in the area of information exchange,
- in the area of term and relations definition (legal regulations, etc.),
- in the area of data integration (business and knowledge engineering, business intelligence, e-government, etc.),
- in the area of service and component integration (heterogeneous environment).
In software development, information exchange is crucial for the success of the project and for satisfying the customer's requirements and needs. OntoUML and UFO-A provide higher expressivity and precision for structural conceptual models than UML.
**Important aspects of OntoUML**
In our experience and opinion, a subset of OntoUML aspects is sufficient for a software engineer. We consider these aspects as the most helpful:
- The distinguishing of various categories of entity types into Sortals and Non-Sortals.
- The distinguishing of rigidity of entity types.
- The distinguishing of the obligatory participation in the Whole-Part relations.
- The distinguishing of various types of the Whole-Part relations.
- The distinguishing of material and formal relations.
- The distinguishing of aspect categories (qualities and modes).
We believe that a software engineer will find the basic definitions and characteristics of the mentioned concepts sufficient for software development practice. Dr. Guizzardi introduces a very deep theory of mereology for the Whole-Part relations; however, we do not consider this necessary for software development.
**Difficulties of spreading OntoUML in the public community**

OntoUML is not yet well known among the community of software engineers. Several difficulties prevent its wider adoption. We find the following problems the most important:
- Lack of literature and information sources.
- Lack of public examples and case studies.
- Missing courses and training programs.
- Hardly available support and community.
- Missing CASE tools.
In the following paragraphs, the problems are described in more detail.
**Lack of literature and information sources**

There is only one comprehensive, publicly available publication about OntoUML – the dissertation thesis of Dr. Guizzardi (Guizzardi, Ontological Foundations for Structural Conceptual Models, 2005). It is a top-class research thesis; however, it is really tough and dense for the general public. Additionally, it does not describe the most recent version of OntoUML; one has to study the most recent conference papers, in English.
**Lack of public examples and case studies**

As mentioned above, OntoUML was applied in several successful projects; however, these projects were commercial and the results are kept private. Some case studies were published in (Guizzardi, Homepage n.d.), but that is too little for adequate education.
**Missing courses and training programs**

OntoUML is taught by Dr. Guizzardi's team only at the Federal University of Espírito Santo in Vitória, Brazil, and at some universities in the Netherlands. However, we started to teach OntoUML at FIT CTU in Prague last year. Also, no commercial courses for the public community of software engineers are available anywhere worldwide.
**Hardly available support and community**
The community of OntoUML users and developers is very small, consisting almost exclusively of Dr. Guizzardi's NEMO team and several other researchers in the Netherlands and Germany. The community is very important for supporting developers new to OntoUML and providing help, tips and resources for them. Currently, we are trying to build a community of OntoUML developers at FIT CTU in Prague.
**Missing CASE tools**
Currently, a CASE tool based on the Eclipse platform, created as a part of Dr. Guizzardi's dissertation thesis, is available for OntoUML modelling. The tool supports all OntoUML concepts of UFO-A and is capable of checking models against the OntoUML rules. However, the tool is no longer developed and lacks many functions important for practical usage in software development. OntoUML models can also be created in many current UML CASE tools; however, these tools provide no semantic support for OntoUML concepts.
**Future plans and perspectives**
The NEMO team currently works on a software tool for behavioural simulation of entities in a structural diagram. Such a tool could help a software engineer to verify a conceptual model and validate it with the customer on a prototype of the structures.
Finishing the rules and definitions for the transformation of an OntoUML model into various types of implementation models can provide a powerful tool for rapid development, flexible adaptability of models, and consistency between the model and its implementation. However, this transformation will probably require additional information and therefore can only be semi-automatic.
UFO-B is intensively developed by the NEMO team. When it is developed enough, modelling the dynamic aspects of information systems in OntoUML will be possible. Although many methods, techniques and notations are available for process modelling, the situation is similar to structural modelling – the notations are not satisfactory from the ontological point of view – see the analysis of the BPMN notation (Guizzardi, Can BPMN be used for making simulation models?, 2011).
**Acknowledgement**
The OntoUML research, studies and teaching are supported by the Centre for Conceptual Modelling and Ontologies at the Faculty of Information Technology, Czech Technical University in Prague.
**References**
Department of Software Engineering, FIT CTU in Prague, Thákurova 9, 160 00, Praha 6, {robert.pergl, zdenek.rybola, david.buchtela, ivan.ryant}@fit.cvut.cz
Toward Unification of Explicit and Implicit Invocation-Style Programming
Yoonsik Cheon
TR #15-98
December 2015
Keywords: application framework, control flow, explicit invocation, event-based programming, implicit invocation.
Department of Computer Science
The University of Texas at El Paso
500 West University Avenue
El Paso, Texas 79968-0518, U.S.A.
Toward Unification of Explicit and Implicit Invocation-Style Programming
Yoonsik Cheon
Department of Computer Science
The University of Texas at El Paso
El Paso, Texas, U.S.A.
ycheon@utep.edu
Abstract—Subprograms like procedures and methods can be invoked explicitly or implicitly. In implicit invocation, an event causes invocations of subprograms registered for the event. Mixing these two styles is common in programming and often unavoidable in developing software systems such as GUI applications and event-based control systems. However, mixed use of these two styles oftentimes complicates the programming logic and thus produces unclean code—code that is hard to read, understand, maintain, and reuse. We show, through a small but realistic example, that the problem is not much on the fact that two different styles are mixed but more on mixing them in an unconstrained manner. We propose a few principles or guidelines for blending them harmoniously and also describe a simple proof-of-concept framework for converting one style to the other for the unification. Our work enables one to blend the two different invocation styles harmoniously and in a properly constrained manner to produce clean code.
Keywords: application framework, control flow, explicit invocation, event-based programming, implicit invocation.
I. INTRODUCTION
The most common programming style in imperative languages including procedural and object-oriented programming languages is to call procedures or methods directly. In this style, a program explicitly specifies the flow of the control, i.e., the order in which statements such as procedure or method invocations are executed. Another popular programming style is an implicit invocation style in which the flow of the program is not explicitly stated but determined by events such as user actions, sensor outputs, and messages from other programs [1]. The idea behind implicit invocation is that instead of invoking a procedure directly, one can register an interest in an event by associating a procedure with the event. When an event occurs, the runtime system invokes all of the procedures that have been registered for the event. It is the dominating programming style in graphical user interfaces and other applications that are centered on performing certain actions in response to user inputs and other events.
It is common for programmers to use these two styles together in a single application. In fact, it is unavoidable to mix them in modern, framework-based application development. In a GUI application, for example, application code is called implicitly from within the GUI framework, rather than the application code calling framework code explicitly. Control is inverted in that it is owned by the framework, and the framework calls application code, not the other way around.

This inversion of control is one key characteristic of an object-oriented application framework, and the framework often plays the role of the main program in coordinating and sequencing application activities [2]. However, it is also not uncommon that the mixed use of the two invocation styles complicates the control flow of a program and produces code that is hard to read and understand, and thus less reusable and maintainable. This is because there exist two opposite directions of control flow in the program: from application code to the framework and vice versa. The former is stated explicitly and centralized, with a single entry point. The latter is stated implicitly and decentralized, with no single entry point; there are many such flows scattered and dispersed throughout the program, one for each event handler. Thus it is difficult to figure out the overall flow of control.

In this paper, we first illustrate the problem of mixing the two method invocation styles using a small but realistic programming example, a tic-tac-toe program. We claim that the real problem is not so much the mixing of the two styles itself but using them in an unconstrained or uncontrolled manner. One key observation we made, for example, is that local use of an invocation style should be encapsulated, in that its use and effect shouldn't be visible to or observable from outside. This is particularly true when intra- and inter-component coding styles are different. If a component is written in an event-based, implicit invocation style, for example, its event handlers shouldn't call methods outside the component, directly or indirectly. This supports separation of concerns by separating intra- and inter-component styles cleanly and modularly. Based on this and other observations, we then explore ways to blend the two different invocation styles harmoniously to produce so-called "clean code", code that is easy to read and understand [3], which is the first step toward code reuse and maintenance. We propose a few principles or guidelines for unifying the two invocation styles by converting one to the other. We also describe a simple, proof-of-concept framework for converting invocation styles for the unification. An application of our guidelines and framework to the tic-tac-toe program shows a very promising and encouraging result: use of invocation styles, especially implicit invocation, can be localized and encapsulated properly, and the key control flow of an application can be expressed clearly in the source code itself. In short, judicious use of the guidelines produces clean code.
II. TIC-TAC-TOE GAME—RUNNING EXAMPLE
We will use a tic-tac-toe game program as a running example to illustrate the problem and to describe our solution as well. Tic-tac-toe is a simple strategy game played by two players, X and O, who take turns marking the places in a 3×3 grid (see Fig. 1). The player who succeeds first in marking three places in a horizontal, vertical, or diagonal row wins the game. The game ends in a tie if all nine places are marked and no player has three marks in a row.
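The winning condition described above can be stated as a small predicate. The class and method names below are our own sketch, not the paper's actual Board class (which is introduced later):

```java
// Illustrative sketch of the tic-tac-toe winning condition: a player wins
// with three identical marks in a row, a column, or a diagonal of a 3x3 grid.
final class T3 {
    static boolean isWonBy(char[][] g, char p) {
        for (int i = 0; i < 3; i++) {
            if (g[i][0] == p && g[i][1] == p && g[i][2] == p) return true; // row i
            if (g[0][i] == p && g[1][i] == p && g[2][i] == p) return true; // column i
        }
        return (g[0][0] == p && g[1][1] == p && g[2][2] == p)              // diagonals
            || (g[0][2] == p && g[1][1] == p && g[2][0] == p);
    }
}
```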
Let's first write a Java program that allows two players to play the game through a graphical user interface using a mouse; later we will extend it to support a computer play. As can be guessed from Fig. 1, a player clicks a mouse on a place in a board to mark it. Fig. 2 shows main classes of the program along with their relationships.
The Board class is the main model class and is an abstraction of a tic-tac-toe board consisting of 3×3 places that can be marked by players. The BoardPanel class is a UI class displaying a board as a 2D grid as shown in Fig. 1.
It’s quite natural to use an event-based, implicit invocation style for our program, for players interact with it through GUI including a mouse. In fact we have to, as we need to handle a mouse click event generated by the Java Swing GUI framework [4]. Specifically we define the following mouse event handler in the BoardPanel class.
```java
public void mouseClicked(MouseEvent e) {
    if (!controller.isGameOver()) {
        Place place = locatePlace(e.getX(), e.getY());
        if (place != null && !board.isMarked(place)) {
            controller.makeMove(place);
        }
    }
}
```
When a mouse is clicked on a board panel, the corresponding place of the board is located and, if it isn’t marked yet, is marked by the current player. The actual place marking is done by the `makeMove()` method defined in the T3Dialog class.
```java
public void makeMove(Place place) {
    board.mark(place, currentPlayer());
    if (board.isWonBy(currentPlayer())) {
        endInWin();
    } else if (board.isFull()) {
        endInDraw();
    } else {
        changeTurn();
    }
}
```
Note that the `mouseClicked()` event handler is not called directly from the application code. It will be invoked implicitly by the GUI framework when a user clicks a mouse on the board panel. Control is inverted in that application code is called from within the framework, rather than the application calling framework code [5]. This inversion of control is one key characteristic of an object-oriented application framework such as the Java Swing GUI framework, and it is caused by implicit invocation.
III. THE PROBLEM
Let’s spice up the tic-tac-toe program written in the previous section by allowing one to play against a computer. For this we introduce a few different move strategies for the computer. A move strategy means figuring out what a (computer) player needs to do to win. Fig. 3 shows one possible extension to our design from the previous section, including several new classes and their relationships.
The primary change is the addition of the ComputerPlayer class as a subclass of the Player class to model a computer player, a new concept introduced in our extension of the program. As shown in the class diagram, it uses the Strategy design pattern [6] to allow a different move strategy such as Random and Smart for a computer player. The ComputerPlayer class defines a method named `nextMove()` that returns a place to be marked by a computer player; it is of course written in terms of a strategy method defined in a strategy class that calculates the next move for the associated player.
It's so far, so good for the extension, but now it's time to make an important design decision. We need to integrate the new components, a computer player and move strategies, into the main game-playing logic of taking turns and marking places. Remember that the main game logic is implemented in the `makeMove()` method of the T3Dialog class. We override this method in T3StrategyDialog, a new subclass added in our extension, as follows:
```java
public void makeMove(Place place) {
    if (isPlayerTurn()) {
        super.makeMove(place);          // make the human player's move
        if (!isGameOver()) {
            // follow up with a computer move on a background thread
            new Thread(this::makeComputerMove).start();
        }
    }
}

private void makeComputerMove() {
    Place p = ((ComputerPlayer) currentPlayer()).nextMove();
    super.makeMove(p);
}
```
The logic of making a move is extended so that every move by a human player is followed by a computer player's move if the human move is not a game-ending move. Let's examine the code in detail. Remember that the method is called by a mouse event handler when a user clicks the board using a mouse. The method first checks whether it is a human player's turn. If so, it proceeds as before by calling the overridden method; otherwise, it does nothing – i.e., the human player's move request is ignored because it is the computer's turn. After the overridden method invocation returns, if the game is not over yet, the method makes a computer player's move by calling the `makeComputerMove()` method in a new background thread, so as not to tie up the UI thread. As expected, `makeComputerMove()` calls the overridden `makeMove()` method, passing a place obtained from the computer player.
The extension is complete, and the program should run correctly by supporting a computer play. However, there is a potential issue in its detailed design and coding. It uses two different styles, explicit and implicit invocations, for the same functionality and worse, the mixed use happens in the key business logic of taking turns and marking places.
A human player's move is coded in the event-based, implicit invocation style, whereas a computer player's move is done in the explicit invocation style. There are several problems caused by this nonuniformity. As shown in Fig. 4, the nonuniformity becomes apparent in the control flow of the program; a dashed line denotes control flow originating from an implicit invocation of an event handler.
*Fig. 4. Control flow of the extended tic-tac-toe program; dashed lines denote control flow originating from implicit invocations of event handlers.*
There are two opposite directions of control flow in the main business logic of the program. This is confusing and makes it hard to figure out the overall flow of control for the key business logic, meaning the code is less readable and understandable. Differentiating the two players also produces code that performs case analysis or type casting, as is apparent in the `makeComputerMove()` method. Such code, common with abstract data types, is understood to be less extensible and reusable than object-oriented code that utilizes polymorphism [7]. Yet another problem is code scattering: the code of an identical or similar functionality, determining the next place to mark, is scattered over multiple, unrelated components, BoardPanel and ComputerPlayer. If a computer player's next move is defined in the ComputerPlayer class, don't we expect a human player's in the HumanPlayer class? In summary, the code suffers from an inappropriate mixed use of implicit and explicit invocations. The coding of the logic is complicated, resulting in code that is less readable, understandable, reusable, and maintainable. In the following section we will refactor it to produce so-called "clean code", code that is easy to read and understand [3].
IV. OUR APPROACH
Mixed use of explicit and implicit invocation styles is unavoidable in developing most modern, complicated software systems such as GUI applications and event-driven control systems. However, the problem described in the previous section is not so much about mixing the two styles itself but about mixing them in an undisciplined and unconstrained way, e.g., using two different styles for the same functionality and exposing local style use to the outside. Our approach is to constrain the mixed use of styles in such a way as to produce clean code. We propose a few guidelines for mixing the two styles in a disciplined way, and a simple framework for coding accordingly (see Section V for the framework).
- **G1: Encapsulate styles.** Local use of a style should be encapsulated in that its use and effect shouldn’t be visible to or observable from outside. This is particularly true when intra- and inter-component invocation styles are different. If a component is written in an event-based, implicit invocation style, for example, its event handlers shouldn’t call methods outside the component, directly or indirectly.
- **G2: Use the same style for the same, similar, or related functionality.** Using a different invocation style in coding the same, similar, or related functionality results in confusing and unclean code. Pick one and stick to it throughout the program.
- **G3: Avoid mixing styles at the same abstraction level.** Using both styles in a single component or at the same abstraction level complicates the logic, producing confusing and unclean code. This guideline is crucial for higher-level components or abstraction levels, such as systems and system architectures. In practice, it is hard to achieve this for lowest-level components such as classes, e.g., coding solely in implicit invocation.
As said earlier, the guidelines suggest mixing the styles in a more disciplined way, e.g., fixing the direction of the main control flow to one style (explicit or implicit), moving the differences down to lower-level components, and encapsulating them there. Our technical approach for achieving this is to convert one invocation style to the other by simulating or mimicking it. Below we explain our approach in detail by applying it to our tic-tac-toe program and refactoring it.
### A. Implicit to Explicit
Our extended tic-tac-toe program in Section III violates all three guidelines. A similar functionality—getting or calculating the next place to mark—is written in two different styles, the BoardPanel class is written in both implicit and explicit styles, and the implicit invocation in the BoardPanel class isn’t encapsulated. Let’s examine the BoardPanel class. A mouse event handler `mouseClicked()` is invoked implicitly from within the Swing framework, and it calls the `makeMove()` method of the T3Dialog class explicitly. Although the second, inter-class method invocation is done explicitly, it is initiated by an implicitly-invoked event handler, and thus the implicit invocation is propagated outside the BoardPanel class; it crosses the class boundary and is observable from outside. The implicit invocation is not encapsulated properly, as we observed from the two opposite directions of control flow in Section III.

**Fig. 5.** From implicit invocation to explicit
One way to fix the problem is to convert implicit invocation to explicit and to have a structure similar to the one shown in Fig. 5. The top-level invocation is explicit while the component-level invocation can be explicit, implicit, or both, of course encapsulated properly. For this, we let a human player provide his or her next move (see below for details) so that the controller can call the next-move method explicitly, as done for a computer player. With this done, the main controller code can be rewritten in the `play()` method as follows.
```java
public void play() {
    while (!isGameOver()) {
        Place place = currentPlayer().nextMove();
        makeMove(place);
    }
}
```
Note that the current player can be either a computer or a human player. Both players are now treated uniformly in an object-oriented fashion relying on polymorphism.
More interesting is the coding of a human player, which requires an implicit-to-explicit invocation conversion. We define a new subclass of the Player class, named HumanPlayer, and override the `nextMove()` method, which is now promoted to the Player class. The method essentially waits for a mouse click to obtain a human player’s next move.
```java
EventBroker<Place> eventBroker;

public Place nextMove() {
    return eventBroker.consume();
}
```
The EventBroker class is a framework class that we wrote for our approach; it serves as a synchronized, thread-safe buffer between a producer and a consumer (see Section V). The `nextMove()` method simply calls the `consume()` method of an event broker to retrieve the next place from the broker. If there is no place available, the `consume()` method will suspend the calling thread temporarily until a place becomes available. The BoardPanel class is a producer and produces a place when a human player clicks the mouse on it. Its mouse event handler is rewritten as follows.
```java
EventBroker<Place> eventBroker = new EventBroker<>();

public void mouseClicked(MouseEvent e) {
    eventBroker.produce(locatePlace(e.getX(), e.getY()));
}
```
As before, it first calculates the board place corresponding to the screen location on which the mouse is clicked, but then it stores the place in the event broker for a consumer. This completes our refactoring for converting implicit invocation to explicit. As planned, implicit invocation is localized and encapsulated in the BoardPanel class, and the rest of the program uses explicit invocation. All players are treated equally and uniformly in an object-oriented way, producing more extensible code; for example, a new type of player, say a network player, can be added easily with a minimal change to the existing code. Best of all, the overall, key control flow is expressed plainly in the code itself, and the code is clean.
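The resulting structure can be sketched in plain Java. The Player, nextMove(), currentPlayer(), makeMove(), and play() names follow the text; the Place type, the canned move queues, and the toy four-move game-over condition are illustrative assumptions made only to keep the sketch self-contained (a real HumanPlayer would block on an EventBroker as shown above).

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Place is assumed to be a simple value type identifying a board cell.
record Place(int row, int col) {}

// Both player kinds expose the same nextMove() method, so the controller
// needs no case analysis or type casts (guideline G2, via polymorphism).
abstract class Player {
    abstract Place nextMove();
}

class ComputerPlayer extends Player {
    private final Queue<Place> strategy;  // canned strategy, for illustration only
    ComputerPlayer(List<Place> moves) { strategy = new ArrayDeque<>(moves); }
    @Override Place nextMove() { return strategy.poll(); }
}

class ScriptedPlayer extends Player {
    // Stands in for the GUI-backed HumanPlayer; replays pre-recorded clicks.
    private final Queue<Place> clicks;
    ScriptedPlayer(List<Place> clicks) { this.clicks = new ArrayDeque<>(clicks); }
    @Override Place nextMove() { return clicks.poll(); }
}

class Controller {
    private final Player[] players;
    private final List<Place> moves = new ArrayList<>();
    private int turn = 0;
    Controller(Player a, Player b) { players = new Player[] { a, b }; }
    boolean isGameOver() { return moves.size() >= 4; }  // toy end condition
    Player currentPlayer() { return players[turn % 2]; }
    void makeMove(Place p) { moves.add(p); turn++; }
    List<Place> play() {  // the single, explicit control loop from the text
        while (!isGameOver()) {
            makeMove(currentPlayer().nextMove());
        }
        return moves;
    }
}
```

With this shape, adding a network player is a matter of one more Player subclass; the play() loop itself is untouched.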
### B. Explicit to Implicit
Another general solution is to convert explicit invocation to implicit. In our case, we can rewrite the code handling a computer player’s move to use event-based, implicit invocation. In an implicit invocation style, one writes an event handler to be invoked implicitly when an event occurs. Earlier we wrote the following mouse event handler to process a human player’s move; it is slightly rewritten to check for the turn.
```java
public void mouseClicked(MouseEvent e) {
    if (!controller.isGameOver() && controller.isPlayerTurn()) {
        Place place = locatePlace(e.getX(), e.getY());
        if (place != null && !board.isMarked(place)) {
            controller.makeMove(place);
        }
    }
}
```
How do we convert explicit invocation code to implicit? In general, a custom event needs to be defined along with an event generation and notification mechanism for it. In our case, we need to (1) define a new event to represent a computer’s next move, (2) generate an instance of the new event every time it is the computer’s turn, and (3) notify all event handlers that registered an interest in the new event. The purpose of the new event is to invert the control flow by having the makeMove() method called by an event handler. But how do we generate a new event? It should be done independently of the application code, and thus we can create a new background thread that checks for the computer’s turn to create a new event and notify it. We can write custom code doing this or, better, develop a reusable class. In fact, we wrote such a reusable, generic class named EventGenerator that generates events by calling a provided event creation method periodically (see Section V). Using the EventGenerator<T> class, we can write implicit invocation code for a computer’s move in the T3StrategyDialog class as follows.
```java
EventGenerator<Place> eventGen;
{
    eventGen = new EventGenerator<>(this::nextPlace);
    eventGen.addListener(place -> makeMove(place));
    eventGen.start();
}

private Place nextPlace() {
    if (!isGameOver() && isComputerPlace()) {
        return computerPlayer().nextMove();
    }
    return null;
}
```
An event generator named eventGen generates a new place (event) whenever it is the computer’s turn and notifies registered event handlers. It does this by calling a helper method named nextPlace() that calls the computer’s nextMove() method. The event handler registered to eventGen using the addListener() method calls the makeMove() method; this happens whenever a new place is generated by eventGen, i.e., when it is the computer player’s turn. In short, the nextMove() method is invoked implicitly through a custom event system provided by the EventGenerator<T> class.
We showed how to convert explicit invocation code to implicit invocation by rewriting the code handling a computer player’s move. The refactored code treats both players uniformly in implicit invocation, thus conforming to guideline G2 (Use the same style for the same, similar or related functionality). However, the improvement ends there; the code still violates guidelines G1 and G3.
V. SUPPORTING FRAMEWORK
In this section we describe a very simple framework, written as a proof of concept, to help convert explicit invocation to implicit and vice versa. The key components of the framework are two generic classes, EventBroker<T> and EventGenerator<T>, which were used in the previous section.
### A. EventBroker
The EventBroker<T> class provides a synchronized, thread-safe buffer between a producer and a consumer, both of which are threads. It is a generic class, allowing a custom event type, and is used for converting implicit invocation code to explicit invocation. The idea is to let an event handler, rather than invoking application code directly, store an occurred event in an event broker to be consumed by the application code (see Fig. 6). An event handler becomes a producer and stores occurred events in a buffer, and the application code originally called by the event handler becomes a consumer and reads stored events from the buffer. One key role of an event broker is to prevent propagation of the inverted control caused by the implicit invocation of an event handler. As shown in Fig. 6, implicit invocation can’t cross an event broker, and thus an event broker provides a boundary for encapsulating implicit invocation; implicit invocation is hidden inside a program module containing both an event handler and its event broker. Thus the EventBroker<T> class enforces the first guideline (G1), encapsulating invocation styles (see Section IV).

**Fig. 6.** EventBroker class
Two key operations of the EventBroker<T> class are:
- void produce(T): stores a given event in the buffer.
- T consume(): returns the next event stored in the buffer.
Both methods are thread-safe and synchronized. If there is no event stored, for example, the consume() method will suspend the calling thread temporarily until an event is stored by the produce() method. Both methods are written using the guarded suspension pattern [8]. The class can certainly be improved by considering several different design choices, including bounded versus unbounded buffer, lossy versus lossless buffer, waiting or suspension time, initial capacity of the buffer, and size of the buffer, some of which can be parameters.
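A minimal sketch of such a broker, using the guarded suspension pattern named above, might look as follows. The class and method names (EventBroker, produce, consume) come from the text; the unbounded ArrayDeque buffer and the interrupt handling are illustrative assumptions, not the paper’s actual implementation.

```java
import java.util.ArrayDeque;
import java.util.Queue;

class EventBroker<T> {
    private final Queue<T> buffer = new ArrayDeque<>();  // unbounded, for simplicity

    /** Stores a given event and wakes up any consumer waiting in consume(). */
    public synchronized void produce(T event) {
        buffer.add(event);
        notifyAll();
    }

    /** Returns the next stored event, suspending the caller until one is available. */
    public synchronized T consume() {
        boolean interrupted = false;
        while (buffer.isEmpty()) {       // guarded suspension: re-check after wake-up
            try {
                wait();
            } catch (InterruptedException e) {
                interrupted = true;      // remember the interrupt and keep waiting (sketch only)
            }
        }
        if (interrupted) Thread.currentThread().interrupt();  // restore the flag
        return buffer.remove();
    }
}
```

An event-handler thread calls produce() while the application thread blocks in consume(), so the inverted control never crosses the broker.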
### B. EventGenerator
The EventGenerator<T> class provides a custom event system, including an event generation and notification mechanism. It is generic to work with a custom event type. It generates an event by calling a provided event creation method periodically, and if an event is generated successfully, the generated event is delivered, or notified, to all event handlers that registered an interest in the event. The class uses two generic interfaces, EventSource<T> and EventListener<T>, to implement the Observer design pattern [6].
```java
public interface EventSource<T> {
T createEvent();
}
public interface EventListener<T> {
void eventOccurred(T event);
}
```
The EventSource<T> interface provides an event generator with a method that creates an event, called an event creation method. The event creation method is used by the event generator to check periodically for an occurrence of an event; the method is supposed to return null if no event occurs. The EventListener<T> interface specifies an event handler to be notified of an event generated by an event generator. Listed below are the key operations of the EventGenerator<T> class.
- EventGenerator(EventSource<T>, long): creates an event generator that uses the given event source to create a new event periodically.
- addListener(EventListener<T>): registers the given event handler for an interest in the events to be generated. The registered handler will be called when a new event is generated successfully.
- start(): starts the generator in a new background thread.
- stop(): stops the generator.
As shown in Section IV, the EventGenerator<T> class is for converting explicit invocation code to implicit invocation (see Fig. 7). To convert an explicit method call from A to B, one provides an event generator with (1) a method that creates a new event when A should call B and (2) an event handler, which in this case is B. The event generator calls the provided event creation method periodically and, upon a successful creation of an event, calls the event handler (B), thus mimicking an explicit call from A to B.

**Fig. 7.** EventGenerator class
As a proof-of-concept implementation, the class can be improved by considering different design choices and parameters, such as multiple event sources, start and end time, fixed-delay generation, and fixed-rate generation.
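The operations above can be sketched as a simple polling generator. The interface and method names come from the text; the fixed-delay polling loop, the daemon worker thread, and the interrupt-based stop() are illustrative assumptions rather than the paper’s actual implementation.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Interfaces as defined in the text.
interface EventSource<T> { T createEvent(); }
interface EventListener<T> { void eventOccurred(T event); }

// A background thread polls the event creation method at a fixed period and
// notifies registered listeners whenever a non-null event is created.
class EventGenerator<T> {
    private final EventSource<T> source;
    private final long periodMillis;
    private final List<EventListener<T>> listeners = new CopyOnWriteArrayList<>();
    private volatile boolean running;
    private Thread worker;

    EventGenerator(EventSource<T> source, long periodMillis) {
        this.source = source;
        this.periodMillis = periodMillis;
    }

    void addListener(EventListener<T> listener) { listeners.add(listener); }

    void start() {
        running = true;
        worker = new Thread(() -> {
            while (running) {
                T event = source.createEvent();  // null means "no event this time"
                if (event != null) {
                    for (EventListener<T> l : listeners) l.eventOccurred(event);
                }
                try {
                    Thread.sleep(periodMillis);
                } catch (InterruptedException e) {
                    return;                      // stop() interrupts the sleep
                }
            }
        });
        worker.setDaemon(true);                  // don't keep the JVM alive
        worker.start();
    }

    void stop() {
        running = false;
        if (worker != null) worker.interrupt();
    }
}
```

The event source plays the role of nextPlace() in Section IV: it returns an event when it is the computer’s turn and null otherwise.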
VI. CONCLUSION
The work presented in this paper was initially motivated by programming assignments done by students in a junior-level object-oriented design and programming class. The assignment, a Battleship game, requires students to use both explicit and implicit method invocations for the same functionality of determining the next place to hit for two players. For one player (human), the place is obtained through a graphical user interface using an event handler in an implicit invocation style. For the other player (computer), however, it is calculated by invoking a predefined strategy method directly. The programs of all thirty-some students had a code pattern essentially similar to the one shown in Section III that complicates the programming logic and results in code that is hard to read, understand, reuse, and maintain. A careful examination of the programs showed that the problem was mainly caused by mixing the two different invocation styles in an unconstrained manner. Based on this observation, we proposed a few principles, or guidelines, for mixing the two styles judiciously, if necessary by converting one to the other. We also described a simple, proof-of-concept framework for converting one style to the other when mixing them in a single application. We believe that our guidelines, along with the framework, enable one to blend the two different programming styles harmoniously and in a properly constrained fashion to produce clean code.
There are several contributions that our work makes, including identifying the need for mixing two different invocation styles judiciously, a set of practical guidelines for mixing them, the idea of converting one style to the other, and a supporting framework for the conversion. However, the most important contribution is the concept of localizing and encapsulating method invocation styles. It is as important as, if not more important than, data hiding and encapsulation. The idea is to support separation of concerns between intra- and inter-component invocation styles by separating them cleanly and in a modular way. It is especially crucial when intra- and inter-component coding styles are different. If a component is written in an event-based, implicit invocation style, for example, its event handlers shouldn’t call methods outside the component, directly or indirectly; the inverted control caused by the implicit invocation shouldn’t cross the boundary of the component. The notion of invocation encapsulation allows one to express the key control flow of an application plainly in the source code itself to produce so-called clean code.
REFERENCES
[6] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.
Package ‘conjurer’
March 22, 2020
Type Package
Title A Parametric Method for Generating Synthetic Data
Version 1.1.1
Date 2020-03-22
Description Builds synthetic data applicable across multiple domains. This package also provides flexibility to control data distribution to make it relevant to many industry examples.
Depends R (>= 2.10)
License MIT + file LICENSE
URL https://github.com/SidharthMacherla/conjurer
BugReports https://github.com/SidharthMacherla/conjurer/issues
Encoding UTF-8
LazyData TRUE
RoxygenNote 7.0.2.9000
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation no
Author Sidharth Macherla [aut, cre] (<https://orcid.org/0000-0002-4825-2026>)
Maintainer Sidharth Macherla <msidharthrasik@gmail.com>
Repository CRAN
Date/Publication 2020-03-22 03:00:02 UTC
R topics documented:
buildCust
buildDistr
buildName
buildNames
buildOutliers
buildPareto
buildProd
buildSpike
genFirstPairs
genMatrix
genTrans
**buildCust**

**Description**

Builds a customer identifier. This is often used as a primary key of the customer dim table in databases.

**Usage**

```r
buildCust(numOfCust)
```

**Arguments**

- `numOfCust`: A number. This specifies the number of unique customer identifiers to be built.

**Details**

A customer is identified by a unique customer identifier (ID). A customer ID is alphanumeric with the prefix "cust" followed by a numeric. This numeric ranges from 1 to the number of customers provided as the argument to the function. For example, if there are 100 customers, then the customer IDs will range from cust001 to cust100. This ensures that the customer ID is always of the same length.

**Value**

A character vector with unique customer identifiers.

**Examples**

```r
df <- buildCust(numOfCust = 1000)
df <- buildCust(numOfCust = 223)
```
**buildDistr**
**Build Data Distribution**
**Description**
Builds data distribution. For example, the function `genTrans` uses this function to build the data distributions necessary. This function uses trigonometry based functions to generate data. This is an internal function and is currently not exported in the package.
**Usage**
```
buildDistr(st, en, cycles, trend)
```
**Arguments**
- **st**: A number. This defines the starting value of the number of data points.
- **en**: A number. This defines the ending value of the number of data points.
- **cycles**: A string. This defines the cyclicality of data distribution.
- **trend**: A number. This defines the trend of data distribution i.e if the data has a positive slope or a negative slope.
**Details**
A parametric method is used to build data distribution. The data distribution function uses the formulation of
\[ \sin(a \times x) + \cos(b \times x) + c \]
Where,
1. a and b are the parameters
2. x is a variable
3. c is a constant
Firstly, parameter 'a' defines the number of outer level crests (peaks in the data distribution). Generally speaking, the number of crests is approximately twice the value of a. This means that if a is set to a value 0.5, there will be one crest and if it is set to 2, there will be 4 crests. On account of this behavior, this parameter is set based on the argument cycles of the function. For example, if the argument cycles is set to "y" i.e yearly cycle, it means that there must be one crest i.e peak in the distribution. To have one crest, the parameter must be around 0.5. A random number is then generated between 0.2 and 0.6 to get to that one crest.
Secondly, the variable 'x' is the x-axis of the data distribution. Since the function `buildDistr` is used internally to generate data at different levels, this variable could have a range of 1 to 12 or 1 to 31 depending on the arguments 'st' and 'en'. For example, if the data is generated at the month level, then arguments 'st' is set to 1 and 'en' is set to 12. Similarly, if the data is set to day level, the 'st' is set to 1 and 'en' is set to the number of days in that month i.e 28 for month 2 and 31 for month 12 etc.
Thirdly, the parameter 'b' defines the inner level crests (peaks in data distribution). This parameter helps in making the data distribution seem more realistic by adding more "ruggedness" of the distribution.
Finally, the constant 'c' is the intercept part of the formulation and primarily serves as a way to ensure that the data distribution has a positive 'y' axis component. This value is randomly generated between 2 and 5.
**Value**

A data frame with the data distribution is returned.
**buildName**

*Build Dynamic Strings*

**Description**

Builds strings that could be further used as identifiers. This is an internal function and is currently not exported in the package.

**Usage**

```r
buildName(numOfItems, prefix)
```

**Arguments**

- `numOfItems`: A number. This defines the number of elements to be output.
- `prefix`: A string. This defines the prefix for the strings. For example, the function buildCust uses this function and passes the prefix "cust", while the function buildProd passes the prefix "sku".

**Details**

This function is used by other internal functions, namely buildCust and buildProd, to produce the alphanumeric identifiers for customers and products respectively.

**Value**

A character vector with the alphanumeric strings is returned. These strings use the prefix given in the argument `prefix`.
**buildNames**

**Description**

Generates names based on a given training data or using the default data.
**Usage**
```r
buildNames(dframe, numOfNames, minLength, maxLength)
```
**Arguments**
- `dframe`: A dataframe. This argument is passed on to another function `genMatrix` for generating an alphabet frequency table. This dataframe is single column dataframe with rows that contain names. These names must only contain English alphabets (upper or lower case) from A to Z.
- `numOfNames`: A numeric. This specifies the number of names to be generated. It should be non-zero natural number.
- `minLength`: A numeric. This specifies the minimum number of alphabets in the name. It must be a non-zero natural number.
- `maxLength`: A numeric. This specifies the maximum number of alphabets in the name. It must be a non-zero natural number.
**Details**
This function generates names. There are two options to generate names. The first option is to use an existing sample of names and generate names. The second option is to use the default table of prior probabilities.
**Value**
A list of names.
**Examples**
```r
buildNames(numOfNames = 3, minLength = 5, maxLength = 7)
```
**buildOutliers**
*Build Outliers in Data Distribution*
**Description**
Builds outlier values and replaces random data points with outliers. This is an internal function and is currently not exported in the package.
**Usage**
```r
buildOutliers(distr)
```
**Arguments**
- `distr`: A numeric vector. This is the target vector which is processed for outlier generation.
**Details**
It is a common occurrence to have outliers in production data. For instance, in the retail industry, there are days such as black Friday where the sales for that day are far more than the daily average for the year. For the synthetic data generated to seem similar to production data, package conjurer uses this function to build such outlier data.
This function takes a numeric vector and then randomly selects at least 1 data point, and at most 3 percent of the data points, to be replaced with outliers. The process for generating outliers is as follows. This methodology of outlier generation is based on a popular method of identifying outliers. For more details, refer to the function 'outlier' in the R package 'GmAMisc'.
1. First, the interquartile range(IQR) of the numeric vector is computed.
2. Second, a random number between 1.5 and 3 is generated.
3. Finally, the random number above is multiplied with the IQR to compute the outlier.
These steps mentioned above are repeated at least once, and at most for 3 percent of the data points.
**Value**
A numeric vector with random values replaced with outlier values.
buildPareto
Map Factors Based on Pareto Arguments
**Description**
Maps a factor to another factor in a one to many relationship following Pareto principle. For example, 80 percent of transactions can be mapped to 20 percent of customers.
**Usage**
buildPareto(factor1, factor2, pareto)
**Arguments**
- **factor1**: A factor. This factor is mapped to factor2 as given in the details section.
- **factor2**: A factor. This factor is mapped to factor1 as given in the details section.
- **pareto**: This defines the percentage allocation and is a numeric data type. This argument takes the form of c(x,y) where x and y are numeric and their sum is 100. If we set Pareto to c(80,20), it then allocates 80 percent of factor1 to 20 percent of factor 2. This is based on a well-known concept of the Pareto principle.
**Details**
This function is used to map one factor to another based on the Pareto argument supplied. If factor1 is a factor of customer identifiers, factor2 is a factor of transactions and Pareto is set to c(80,20), then 80 percent of customer identifiers will be mapped to 20 percent of transactions and vice versa.
**Value**
A data frame with factor 1 and factor 2 as columns. Based on the Pareto arguments passed, column factor 1 is mapped to factor 2.
---
buildProd
Build Product Data
**Description**
Builds a unique product identifier and price. The price of the product is generated randomly within the minimum and the maximum range provided as input.
**Usage**
buildProd(numOfProd, minPrice, maxPrice)
**Arguments**

- **numOfProd**: A number. This defines the number of unique products.
- **minPrice**: A number. This is the minimum value of the product’s price range.
- **maxPrice**: A number. This is the maximum value of the product’s price range.
**Details**
A product ID is alphanumeric with the prefix "sku", which signifies a stock keeping unit. This prefix is followed by a numeric ranging from 1 to the number of products provided as the argument to the function. For example, if there are 10 products, then the product IDs will range from sku01 to sku10. This ensures that the product ID is always of the same length. For these product IDs, the product price will be within the range of the minPrice and maxPrice arguments.
**Value**
A character with product identifier and price.
**Examples**
```r
df <- buildProd(numOfProd = 1000, minPrice = 5, maxPrice = 100)
df <- buildProd(numOfProd = 29, minPrice = 3, maxPrice = 50)
```
---
**buildSpike**
*Build Spikes in the Data Distribution*
**Description**
Builds spikes in the data distribution. For example, in the retail industry, transactions are generally higher during the holiday season, such as December. This function is used to set the same. This is an internal function and is currently not exported in the package.

**Usage**

```r
buildSpike(distr, spike)
```

**Arguments**

- **distr**: A numeric vector. This is the input vector for which the spike value needs to be set.
- **spike**: A number. This represents the seasonality of the data. It can take any value from 1 to 12. These numbers represent the months of the year, from January to December respectively. For example, if the spike is set to 12, it means that December has the highest number of transactions.
**Value**
A numeric vector, reordered.
**genFirstPairs**
*Extracts the First Two Alphabets of the String*
**Description**
For a given string, this function extracts the first two alphabets. This function is further used by **genMatrix** function.
**Usage**
```r
genFirstPairs(s)
```
**Arguments**
- `s` A string. This is the string from which the first two alphabets are to be extracted.
**Value**
First two alphabets of the string input.
---
**genMatrix**
*Generate Frequency Distribution Matrix*
**Description**
For a given names dataframe and placement, a frequency distribution table is returned.
**Usage**
```r
genMatrix(dframe, placement)
```
**Arguments**
- `dframe` A dataframe with one column that has one name per row. These names must be English alphabets from A to Z and must not include any non-alphabet characters such as hyphen or apostrophe.
- `placement` A string argument that takes three values namely "first", "last" and "all". Currently, only "first" and "all" are used while the option "last" is a placeholder for future versions of the package **conjurer**
Details
The purpose of this function is to generate a frequency distribution table of alphabets. There are currently 2 tables that could be generated using this function. The first table is generated using the internal function \texttt{genFirstPairs}. For this, the argument \texttt{placement} is assigned the value "first". The rows of the table returned by the function represent the first alphabet of the string and the columns represent the second alphabet. The values in the table represent the number of times the combination is observed i.e the combination of the row and column alphabets. The second table is generated using the internal function \texttt{genTriples}. For this, the argument \texttt{placement} is assigned the value "all". The rows of the table returned by the function represent two consecutive alphabets of the string and the columns represent the third consecutive alphabet. The values in the table represent the number of times the combination is observed i.e the combination of the row and column alphabets.
**Value**

A table. The rows and columns of the table depend on the argument `placement`; a detailed explanation is given in the Details section above.
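To make the "first" placement concrete, here is a small language-agnostic sketch (illustrative Python, not the package's R code; the function name `first_pair_matrix` is invented for this example) of building such a first-two-letter frequency table:

```python
from collections import Counter

def first_pair_matrix(names):
    """Count how often each (first letter, second letter) pair starts a name.

    Keys play the role of (row, column) in the table described above for
    placement = "first"; values are the observed frequencies.
    Illustrative sketch only, not the package's R implementation.
    """
    return Counter((n[0].upper(), n[1].upper()) for n in names if len(n) >= 2)

m = first_pair_matrix(["Alice", "Alan", "Bob"])
print(m[("A", "L")])  # the pair "Al" occurs twice
```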
---
**genTrans**

*Build Transaction Data*

**Description**

Build transaction data.

**Usage**

```r
genTrans(cycles, trend, transactions, spike, outliers)
```

**Arguments**

- `cycles` This represents the cyclicity of the data. It can take the following values:
  1. "y". If `cycles` is set to "y", there is only one instance of a high number of transactions during the entire year. This is a very common situation for some retail clients, where the highest number of sales occur during the holiday period in December.
  2. "q". If `cycles` is set to "q", there are 4 instances of a high number of transactions. This is generally noticed in the financial services industry, where the financial statements are revised every quarter and have an impact on the equity transactions in the secondary market.
  3. "m". If `cycles` is set to "m", there are 12 instances of a high number of transactions in a year. This means that the number of transactions increases once every month and then subsides for the rest of the month.
- `trend` A number. This represents the slope of the data distribution. It can take a value of 1 or -1. If the trend is set to 1, the aggregated monthly transactions exhibit an upward trend from January to December, and vice versa if it is set to -1.
- `transactions` A number. This represents the number of transactions to be generated.
- `spike` A number. This represents the seasonality of the data. It can take any value from 1 to 12; these numbers represent the months of the year, from January to December respectively. For example, if `spike` is set to 12, December has the highest number of transactions.
- `outliers` A number. This signifies the presence of outliers. If set to 1, outliers are generated randomly; if set to 0, no outliers are generated. The presence of outliers is a very common occurrence, so setting `outliers` to 1 is recommended. However, there are instances where outliers are not needed; for example, if the objective of data generation is solely visualization, outliers may not be needed.
**Value**

A dataframe with the day number and the count of transactions on that day.

**Examples**

```r
df <- genTrans(cycles = "y", trend = 1, transactions = 10000, spike = 10, outliers = 0)
df <- genTrans(cycles = "q", trend = -1, transactions = 32000, spike = 12, outliers = 1)
```
---

**genTriples**

*Extracts Three Consecutive Alphabets of the String*

**Description**

For a given string, this function extracts three consecutive alphabets. This function is further used by the **genMatrix** function.

**Usage**

```r
genTriples(s)
```

**Arguments**

- `s` A string. This is the string from which three consecutive alphabets are to be extracted.

**Value**

A list of three-alphabet combinations of the string input.
---

**missingArgHandler**

*Handle Missing Arguments in a Function*
**Description**
Replaces the missing argument with the default value. This is an internal function and is currently not exported in the package.
**Usage**
```r
missingArgHandler(argMissed, argDefault)
```
**Arguments**
- `argMissed` This is the argument that needs to be handled.
- `argDefault` This is the default value of the argument that is missing in the function call.
**Details**
This function plays the role of error handler by setting the default values of the arguments when a function is called without specifying any arguments.
**Value**
The default value of the missing argument.
---
**nextAlphaProb**
**Generate Next Alphabet**
**Description**
Generates next alphabet based on prior probabilities.
**Usage**
```r
nextAlphaProb(alphaMatrix, currentAlpha, placement)
```
**Arguments**
- `alphaMatrix` A table. This table is generated using the `genMatrix` function.
- `currentAlpha` A string. This is the alphabet(s) for which the next alphabet is generated.
- `placement` A string. This takes one of two values, namely "first" or "all".
**Details**

The purpose of this function is to generate the next alphabet for a given alphabet(s), using prior probabilities. Although there are two types of input tables that can be passed to the function via the parameter `alphaMatrix`, the process to generate the next alphabet remains the same, as given below.

Firstly, the input table contains frequencies of the combination of the current alphabet `currentAlpha` (represented by rows) and the next alphabet (represented by columns). These frequencies are converted into percentages at the row level; that is, for each row, the sum of all the column values adds to 1.

Secondly, for the given `currentAlpha`, the corresponding row of the table is scanned for the column with the highest probability. The alphabet of the column with the maximum prior probability is selected as the next alphabet and is returned by the function.

**Value**

The next alphabet following the input alphabet(s) passed by the argument `currentAlpha`.
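The two-step lookup described in the Details section can be sketched as follows (illustrative Python, not the package's R code; `next_alpha` and the toy `freq` table are invented for this example):

```python
def next_alpha(freq, current):
    """Return the most probable next letter after `current`.

    `freq` maps (current, next) pairs to counts. The row for `current`
    is normalised to prior probabilities, then the column with the
    highest probability wins, mirroring the steps described above.
    Illustrative sketch only, not the package's R implementation.
    """
    row = {nxt: c for (cur, nxt), c in freq.items() if cur == current}
    total = sum(row.values())
    probs = {nxt: c / total for nxt, c in row.items()}  # row sums to 1
    return max(probs, key=probs.get)

freq = {("A", "L"): 2, ("A", "N"): 1, ("B", "O"): 1}
print(next_alpha(freq, "A"))  # "L", with prior probability 2/3
```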
**Index**

buildCust, buildDistr, buildName, buildNames, buildOutliers, buildPareto, buildProd, buildSpike, genFirstPairs, genMatrix, genTrans, genTriples, missingArgHandler, nextAlphaProb
GROMACS - Feature #911
implement CMake option to enable fully static binaries
04/03/2012 03:22 PM - Szilárd Páll
Status: Closed
Priority: Normal
Assignee: Szilárd Páll
Category: build system
Target version: 5.1
Difficulty: uncategorized
Description
On some systems like Cray statically linked binaries are either a must or highly recommended.
Below are the steps required to get static linking to work; the method is not foolproof and can fail in some cases (e.g. if a shared version of some external library gets detected). We should attempt to automate this as much as possible.
- set the target properties LINK_SEARCH_START_STATIC and LINK_SEARCH_END_STATIC, e.g:
set_target_properties(mdrun PROPERTIES LINK_SEARCH_START_STATIC ON)
set_target_properties(mdrun PROPERTIES LINK_SEARCH_END_STATIC ON)
- configure with -static and disable CMake RPATH:
cmake /path/to/gromacs/source -DCMAKE_PREFIX_PATH=/path/to/fftw -DGMX_PREFER_STATIC_LIBS=ON
-DCMAKE_C_FLAGS="-static" -DCMAKE_CXX_FLAGS="-static" -DCMAKE_SKIP_RPATH=YES
Related issues:
Related to GROMACS - Feature #1641: Add toolchain file for Cray systems
New 11/11/2014
Associated revisions
Revision 88fda475 - 05/13/2015 10:45 AM - Roland Schulz
Facilitate linking of static binaries
Minimal solution. The user has to manually set both
-DBUILD_SHARED_EXE=no and CFLAGS=CXXFLAGS=-static, perhaps manage
their own toolchain, and certainly make static libraries available for
all dependencies. Also does not auto-detect if compiler defaults to
static (Cray). Works better than LINK_SEARCH_END_STATIC because
otherwise dynamic flags can be added to the middle if some libraries
in default search path exist as both dynamic and shared.
Fixes #911
Related to #1641
Change-Id: If7b8192b44c33c861f126e3422df04388d2f2be5
History
#1 - 04/03/2012 03:22 PM - Szilárd Páll
- Assignee deleted (Rossen Apostolov)
#2 - 08/04/2012 08:33 PM - Roland Schulz
For GMX_PREFER_STATIC_LIBS we use
SET(CMAKE_FIND_LIBRARY_SUFFIXES .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
Any reason not to use
SET(CMAKE_FIND_LIBRARY_SUFFIXES .a)
for fully static libraries?
LINK_SEARCH_END_STATIC is only available starting with cmake 2.8.5. In what cases are LINK_SEARCH_START_STATIC/LINK_SEARCH_END_STATIC needed? For me it always worked without. I thought that giving "-static" automatically makes it work.
While the rpath is not necessary for static binaries I don't think it hurts. Again it just works for me.
#3 - 10/08/2012 03:07 PM - Szilárd Páll
Roland Schulz wrote:
For GMX_PREFER_STATIC_LIBS we use
[...]
Any reason not to use
[...]
for fully static libraries?
Hmm, I didn't want to just override the content of CMAKE_FIND_LIBRARY_SUFFIXES. Would that be always safe?
Those are needed to link system libraries statically:
http://www.cmake.org/cmake/help/v2.8.9/cmake.html#prop_tgt:LINK_SEARCH_START_STATIC
http://www.cmake.org/cmake/help/v2.8.9/cmake.html#prop_tgt:LINK_SEARCH_END_STATIC
Now, what I'm not sure about anymore is whether both are always needed, or whether it's better to set both because in some cases the former and in other cases the latter is needed.
While the rpath is not necessary for static binaries I don't think it hurts. Again it just works for me.
Not sure about the exact source of the idea of disabling RPATH, but as far as I remember it was recommended on the CMake mailing list.
#4 - 10/08/2012 11:57 PM - Roland Schulz
Szilárd Páll wrote:
Roland Schulz wrote:
For GMX_PREFER_STATIC_LIBS we use
[...]
Any reason not to use
[...]
for fully static libraries?
Hmm, I didn't want to just override the content of CMAKE_FIND_LIBRARY_SUFFIXES. Would that be always safe?
Not sure. But it is used by others successfully: http://www.mail-archive.com/cmake@cmake.org/msg21354.html
Those are needed to link system libraries statically:
http://www.cmake.org/cmake/help/v2.8.9/cmake.html#prop_tgt:LINK_SEARCH_START_STATIC
http://www.cmake.org/cmake/help/v2.8.9/cmake.html#prop_tgt:LINK_SEARCH_END_STATIC
That means it is not a proper solution if we want it to work with 2.8.0. Also there are other problems:
While the rpath is not necessary for static binaries I don't think it hurts. Again it just works for me.
Not sure about the exact source of the idea of disabling RPATH, but as far as I remember it was recommended on the CMake mailing list.
I guess it doesn't hurt since it is obviously not needed.
#5 - Szilárd Páll
Szilárd Páll wrote:
Roland Schulz wrote:
For GMX_PREFER_STATIC_LIBS we use
[...]
Any reason not to use
[...]
for fully static libraries?
Hmm, I didn't want to just override the content of CMAKE_FIND_LIBRARY_SUFFIXES. Would that be always safe?
Not sure. But it is used by others successfully: http://www.mail-archive.com/cmake@cmake.org/msg21354.html
LINK_SEARCH_END_STATIC is only available starting with cmake 2.8.5. In what cases are LINK_SEARCH_START_STATIC/LINK_SEARCH_END_STATIC needed? For me it always worked without. I thought that giving "-static" automatically makes it work.
Those are needed to link system libraries statically:
http://www.cmake.org/cmake/help/v2.8.9/cmake.html#prop_tgt:LINK_SEARCH_START_STATIC
http://www.cmake.org/cmake/help/v2.8.9/cmake.html#prop_tgt:LINK_SEARCH_END_STATIC
That means it is not a proper solution if we want it to work with 2.8.0. Also there are other problems:
Yeah, that looks familiar, it's my thread after all :) I have not seen the -lgcc_s errors for a year or so.
Anyway, my point was not to implement a rock-solid feature because that's simply not possible (the build will fail if not all external dependencies are detected as static archives), but to make a fair attempt to build a fully static mdrun (perhaps tools as well) and control it through an advanced variable.
And my question was whether this is useful or not. For me and a handful of others working on quirky Crays it would be useful to not have to patch CMake sources before compiling static binaries.
While the rpath is not necessary for static binaries I don't think it hurts. Again it just works for me.
Not sure about the exact source of the idea of disabling RPATH, but as far as I remember it was recommended on the CMake mailing list.
I guess it doesn't hurt since it is obviously not needed.
#6 - 10/09/2012 12:25 AM - Roland Schulz
Szilárd Páll wrote:

Yeah, that looks familiar, it's my thread after all :) I have not seen the -lgcc_s errors for a year or so.

Didn't see that ;-)

And my question was whether this is useful or not. For me and a handful of others working on quirky Crays it would be useful to not have to patch CMake sources before compiling static binaries.
I'm surprised you need to do anything special on Cray. All the Cray I have access to automatically add the "-static" in the "cc" compiler wrapper and it just works for me.
#7 - 10/09/2012 12:30 AM - Szilárd Páll
And my question was whether this is useful or not. For me and a handful of others working on quirky Crays it would be useful to not have to patch CMake sources before compiling static binaries.
I'm surprised you need to do anything special on Cray. All the Cray I have access to automatically add the "-static" in the "cc" compiler wrapper and it just works for me.
Newer Crays don't, in particular on an XK6 (where it would even be impossible due to CUDA not being distributed statically) and both XE6 machines I have access to. I guess I'll have to ask the question differently: do you have anything against such a feature?
#8 - 10/09/2012 12:34 AM - Roland Schulz
No. Since it is anyhow only for a small group of people it might be OK if it only works with =>2.8.5 or it doesn't always work. As long as it doesn't have any effect when it is disabled.
#9 - 11/05/2012 10:40 PM - Roland Schulz
Somehow they changed something on Hopper and now you need the LINK_SEARCH_END_STATIC there too. And it works by itself without problems. So I think it is a good idea to have an option which enables it. I don't think it should be enabled by GMX_PREFER_STATIC_LIBS (I don't think you suggested that - instead I thought originally this would be a good idea). Instead we need a separate flag for this (e.g. GMX_FULLY_STATIC). This should also add "-static". The latter isn't needed on Cray because it already gets added by their compiler wrapper, but it is useful in general, e.g. it avoids the gcc_s problem on non-Cray machines. We should print a warning on cmake <2.8.5 that the option GMX_FULLY_STATIC isn't working.
#10 - 11/08/2012 03:07 PM - Roland Schulz
I looked it up incorrectly 3 months ago for comment #2. LINK_SEARCH_END_STATIC is available in 2.8.0 - only LINK_SEARCH_START_STATIC was added in 2.8.5 (http://www.cmake.org/Wiki/CMake_Version_Compatibility_Matrix/Properties#Properties_on_Targets). But I don't think we actually need LINK_SEARCH_START_STATIC. So fixing this issue should be as simple as adding a new option which activates LINK_SEARCH_END_STATIC and adds "-static".
#11 - 11/08/2012 03:18 PM - Szilárd Páll
Roland Schulz wrote:
I looked it up incorrectly 3 months ago for comment #2. LINK_SEARCH_END_STATIC is available in 2.8.0 - only LINK_SEARCH_START_STATIC was added in 2.8.5 (http://www.cmake.org/Wiki/CMake_Version_Compatibility_Matrix/Properties#Properties_on_Targets). But I don't think we actually need LINK_SEARCH_START_STATIC. So fixing this issue should be as simple as adding a new option which activates LINK_SEARCH_END_STATIC and adds "-static".
I was just looking at that earlier today. What I don't understand is how does the -Bstatic affect the linking when at the beginning or end of the library list?
#12 - 11/08/2012 03:31 PM - Roland Schulz
What I assume (inferred from the behavior without really knowing how it works) is that the libraries added by the linker by default (e.g. libgcc,glibc,...) are added to the end, and are affected by the last Bdynamic/Bstatic in the list. Thus without the LINK_SEARCH_END_STATIC the default libs are linked dynamic and with it they are linked static. I don't think LINK_SEARCH_START_STATIC has any effect for any of the compilers we use. The only place it seems to matter is for odd custom tool-chains: http://cmake.3232098.n2.nabble.com/To-avoid-target-link-libraries-magic-td6192280.html. But I don't think it hurts either (neither with version which support it nor with those which don't). So I would simply set it and don't worry about that it isn't supported with older cmake because 99% of users probably don't need it anyhow.
#13 - 11/08/2012 03:36 PM - Szilárd Páll
I'm starting to get it now. The LINK_SEARCH_END_STATIC affects the system libs while LINK_SEARCH_START_STATIC affects the -lXXX library list. Assuming that this is the case, we should use both (if supported).
#14 - 11/08/2012 03:46 PM - Roland Schulz
Well if cmake finds static libraries with PREFER_STATIC then it already adds the Bstatic before those libraries (and if it doesn't find the static library something is anyhow wrong). That's why I think it isn't necessary to have LINK_SEARCH_START_STATIC. But as I wrote it shouldn't hurt and might be helpful in some odd cases.
#15 - 01/07/2013 03:19 PM - Mark Abraham
Are we able to get something done here for 4.6?
#16 - 01/08/2013 06:12 AM - Roland Schulz
I think we agreed on:
- add a new option GMX_STATIC_BINARY (probably better name than GMX_FULLY_STATIC)
- add -static (if supported compiler option), LINK_SEARCH_END_STATIC and LINK_SEARCH_START_STATIC (if cmake=>2.8.5)
So should be very easy. Who wants to do it?
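The plan above could be sketched in CMake roughly as follows (an illustrative sketch only, not the actual GROMACS build code; the option name follows the proposal in this comment and `mdrun` stands in for the relevant targets):

```cmake
# Sketch of the proposed option; not the actual GROMACS CMakeLists.txt.
option(GMX_STATIC_BINARY "Link fully static binaries" OFF)

if(GMX_STATIC_BINARY)
    # -static also makes the default system libraries link statically.
    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static")
    set_target_properties(mdrun PROPERTIES LINK_SEARCH_END_STATIC ON)
    if(NOT CMAKE_VERSION VERSION_LESS 2.8.5)
        # LINK_SEARCH_START_STATIC only exists from CMake 2.8.5 on.
        set_target_properties(mdrun PROPERTIES LINK_SEARCH_START_STATIC ON)
    else()
        message(WARNING "GMX_STATIC_BINARY may not fully work with CMake < 2.8.5")
    endif()
endif()
```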
#17 - 01/08/2013 02:50 PM - Szilárd Páll
Roland Schulz wrote:
I think we agreed on:
- add a new option GMX_STATIC_BINARY (probably better name than GMX_FULLY_STATIC)
- add -static (if supported compiler option), LINK_SEARCH_END_STATIC and LINK_SEARCH_START_STATIC (if cmake>=2.8.5)
Yes, that is the plan.
So should be very easy. Who wants to do it?
I wanted to do it, but have been quite busy with other stuff and forgot about it. Will do it asap.
#18 - 01/08/2013 02:50 PM - Szilárd Páll
- Status changed from New to In Progress
- Assignee set to Szilárd Páll
#19 - 01/18/2013 04:07 PM - Erik Lindahl
- Target version changed from 4.6 to future
#20 - 02/04/2013 12:09 AM - Roland Schulz
The reason why LINK_SEARCH_END_STATIC sometimes is required and sometimes not, seems to be related to whether any library in standard locations is used. In that case cmake uses "-Wl,-Bstatic -lsomelib" instead of "/usr/lib/libsomelib.a". Now that xml is removed and fftw isn't in the default location on Cray, it seems to be no problem on Cray.
#21 - 05/22/2013 05:56 AM - Mark Abraham
- Target version changed from future to 4.6.x
#22 - 11/07/2013 09:34 PM - Mark Abraham
- Target version changed from 4.6.x to future
I seem to be able to build on Cray without problems, but my libraries were standard or own-fftw.
#23 - 11/11/2013 09:03 PM - Szilárd Páll
Mark Abraham wrote:
I seem to be able to build on Cray without problems, but my libraries were standard or own-fftw.
That's because on Crays the compiler wrappers force static builds whenever they can without being asked to do so, but this of course only works if all external libraries are available in their static form. However, even on Crays, without turning off RPATH you won't be able to do make install, because cmake thinks that it built dynamically linked binaries.
Note that you may not even be using the "own" FFTW, because the Cray toolchain will link binaries against the scientific libs, which contain Cray's own FFTW (which is anyway tuned and typically faster than the standard FFTW). Hence, I guess it's a matter of linking order whether you get to use your FFTW or Cray's.
#24 - 11/11/2013 09:57 PM - Mark Abraham
So we should expect/design the CMake detection to pick up that the toolchain provides FFTW even if there's nothing specific in any of the paths? Does anybody know if this happens?
#25 - 11/11/2013 10:20 PM - Roland Schulz
Currently it isn't checked. But it could be easily added by testing with CHECK_FUNCTION_EXISTS before looking for the library. But I think it isn't strictly related to the issue of having a static binary.
#26 - 11/12/2013 02:33 PM - Szilárd Páll
Mark Abraham wrote:
So we should expect/design the CMake detection to pick up that the toolchain provides FFTW even if there's nothing specific in any of the paths? Does anybody know if this happens?
Given that it will be a rare thing that compiler toolchains silently pull in loads of libraries "thought to be good for you" (does it happen on other HPC machines besides Cray?), unless it's trivial I would consider such an issue (if filed) a low-priority one.
Note that I know this happens (at least on the CSCS XE6 Rosa and XK7 Toedi) because a while ago I wanted to benchmark standard FFTW against Cray's modified FFTW and after some struggling I gave up because I could not get the linking order tweaked appropriately.
...and indeed, this aspect is not really related to the current issue.
#27 - 11/13/2013 01:36 PM - Rossen Apostolov
Regarding the linking order - if you need to add a library right at the end of the link line, at least on some systems (e.g. SuperMUC) CMAKE_EXE_LINKER_FLAGS doesn't help. Instead one needs to use CMAKE_C_STANDARD_LIBRARIES to pass the arguments.
#28 - 06/12/2014 04:08 PM - Rossen Apostolov
Update - in 5.0.x and master one needs to use CMAKE_C_STANDARD_LIBRARIES_INIT instead.
#29 - 06/13/2014 12:12 PM - Szilárd Páll
- Status changed from In Progress to Accepted
#30 - 10/29/2014 10:21 AM - Mark Abraham
A somewhat related issue is that on Cray systems, the early tests by CMake for a functional compiler will pick up the CMake-standard Modules/Platform/Linux-xyz-lang.cmake files, which fails completely when Intel is the base compiler for the Cray wrapper, because these files hardcode the use of CMAKE_SHARED_LIBRARY_LINK_CXX_FLAGS=-rdynamic. Fix for 5.0 in progress
#31 - 10/29/2014 11:29 AM - Gerrit Code Review Bot
Gerrit received a related DRAFT patchset ‘1’ for Issue #911.
Uploader: Mark Abraham (mark.j.abraham@gmail.com)
Change-Id: Id5e9f6594bb481617e58d1ed4e79fdc692689ece
Gerrit URL: https://gerrit.gromacs.org/4188
#32 - 11/18/2014 04:26 PM - Mark Abraham
- Related to Feature #1641: Add toolchain file for Cray systems added
#33 - 11/18/2014 05:55 PM - Gerrit Code Review Bot
Gerrit received a related patchset ‘1’ for Issue #911.
Uploader: Mark Abraham (mark.j.abraham@gmail.com)
Change-Id: f47b4524e9c33f52f20bf0f54082d290ee22d9993b
Gerrit URL: https://gerrit.gromacs.org/4227
#34 - 11/26/2014 07:27 PM - Gerrit Code Review Bot
Gerrit received a related patchset ‘1’ for Issue #911.
Uploader: Mark Abraham (mark.j.abraham@gmail.com)
Change-Id: f47b4524e9c33f52f20bf0f54082d290ee22d9993b
Gerrit URL: https://gerrit.gromacs.org/4242
#35 - 12/12/2014 04:19 PM - Mark Abraham
- Description updated
#36 - 03/17/2015 05:12 AM - Gerrit Code Review Bot
Gerrit received a related patchset ‘1’ for Issue #911.
Uploader: Roland Schulz (roland@rschulz.eu)
Change-Id: If7b8192b44c33c861f126e3422df04388d2f2be5
Gerrit URL: https://gerrit.gromacs.org/4496
#37 - 06/20/2015 11:48 PM - Szilárd Páll
- Status changed from Accepted to Closed
- Target version changed from future to 5.1
**LOD**: A C++ extension for OODBMSs with orthogonal persistence to class hierarchies
E.-S. Cho^a, H.-J. Kim^b

^a ROPAS, Department of Computer Science, KAIST, 373-1 Kusong-dong Yusong-gu, Taejon 305-701, South Korea

^b Department of Computer Engineering, Seoul National University, Shilim-dong 56-1, Gwanak-gu, Seoul 151-742, South Korea

Received 2 March 1998; received in revised form 30 September 1999; accepted 15 October 1999
**Abstract**

There exist some preprocessing-based language extensions for database management where persistence is orthogonal to the class hierarchy. They allow a class hierarchy to be built from both database classes and non-database classes together. Such a property is important in that classes can be reused in implementing database classes, and vice versa. In this paper, we elaborate on the orthogonality of persistence to class hierarchies, and find that the existing method to achieve this is not satisfactory because of the side-effects of the heterogeneity of the links in a class hierarchy; some links represent subset (IsA) relationships between database classes, while the others denote inheritance for code-reuse. Finally, we propose LOD, a C++ extension for database access, which separates the different categories of links into independent hierarchies and supports orthogonal persistence to the class hierarchy, overcoming the limitations of the previous methods.

© 2000 Elsevier Science B.V. All rights reserved.

Keywords: Object-oriented database programming language; C++; Class interface; Class implementation; IsA relationships; Code-reuse
### 1. Introduction
Most object-oriented database systems (OODBMS) regard programming languages as the main interfaces [19]. Now, it is popular to integrate OODBMSs and general-purpose languages like C++, Smalltalk and Java. In this paper, we call the general-purpose languages the 'base languages'.
There are two widely used approaches to the integration of database management and programming languages. In the first approach, the (class) library for database management is provided for the base-language programmers. For example, the ODMG-93 object model defines such library interfaces with C++, Smalltalk and Java [8] with the persistent root class 'd:Object'. In the second approach [20,21], the syntax and/or the semantics of the base language are extended to support database management facilities. Their programs are usually preprocessed into low-level function calls in the pure base languages. One of the advantages of the latter over the former approach is that a class hierarchy can be constructed with database classes and non-database classes mixed, which enables reusing useful codes of non-database classes in implementing database classes, and vice versa [20]. That is, persistence is orthogonal to class-hierarchies.
In this paper, we elaborate on the orthogonality of persistence to class-hierarchies. We find that the existing method to achieve this is not satisfactory because of the side-effects of the heterogeneity of the links in a class hierarchy; some links represent subset (IsA) relationships between database classes, while the others denote inheritance for code-reuse between classes. This paper proposes LOD\textsuperscript{p}, a C++ extension to database access, supporting orthogonal persistence to the class-hierarchy without the limitations in the existing method, by separating the different categories of links.
The rest of the paper is organized as follows. The next section introduces the main ideas and language features of LOD\textsuperscript{p} with some examples. Sections 3–5 elaborate on our implementation; Section 3 focuses on the preprocessing phase, and Sections 4 and 5 briefly cover the type system and other issues, respectively. Discussions and related works are covered in Section 6. The paper is concluded in Section 7. Throughout this paper, we assume that readers are acquainted with the ODMG C++ binding [8].
2. Overview of LOD\textsuperscript{p}
2.1. Motivations
As mentioned in the previous section, there already exists a mechanism for constructing a class hierarchy with database classes and non-database classes together. For example, supposing $D_1$ and $D_2$ are database classes, a non-database class $N$ can be one of the subclasses of $D_1$ and/or one of the super classes of $D_2$.
This enables two different kinds of links between classes to coexist in a class hierarchy. One is for the subset (IsA) relationships between extents of database classes, which denote the database schema, usually viewed from more than one program sharing the database. The other represents inheritance for code reuse between classes, which is usually related to implementation details and likely to be hidden from programs other than the one the class is being implemented in. In the above example, both the relationships $D_1\rightarrow N$ and $N\rightarrow D_2$ are for reuse. On the other hand, when $D_0$ is a super class of both $D_1$ and $D_2$, the relationships $D_0\rightarrow D_1$ and $D_0\rightarrow D_2$ are subset links.
Unfortunately, this heterogeneity of the links in a class hierarchy entails the following limitations:
- The programmers have to take much care in defining the links of a class hierarchy, so as not to cause side-effects from the other kind of links. In the above example, the link $D_1\rightarrow D_2$ arises, which was never intended in the schema and exists only as a by-product of implementation reuse.
- Additional facilities for filtering the IsA hierarchy from the entire hierarchy are needed to encapsulate what depends on implementation. In the above example, we should provide the view of $\{D_0, D_1, D_2\}$ for the users of query languages and schema designers who do not care about the implementation details.
- Changes in non-database classes are discouraged since they may cause database schema evolution.
In the above example, when we decide not to reuse $N$ in the implementation of $D_2$ any more, this entails modifying the entire class hierarchy.
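This unintended-link side-effect can be reproduced in a few lines of plain C++ (a minimal sketch; the class names follow the running example and the bodies are placeholders):

```cpp
#include <cassert>
#include <type_traits>

// Schema (database) classes and a reused non-database class.
struct D0 { };          // schema root
struct D1 : D0 { };     // subset link D0 -> D1 (intended)
struct N  : D1 { };     // reuse link D1 -> N (implementation detail)
struct D2 : N  { };     // reuse link N -> D2 (implementation detail)

// Side-effect: D2 has silently become a subclass of D1, although no
// subset relationship between D1 and D2 was intended in the schema.
static_assert(std::is_base_of<D1, D2>::value,
              "reuse links leak into the IsA hierarchy");
```

Keeping the reuse links in a separate implementation hierarchy is exactly what prevents this accidental IsA edge.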
2.2. Basic characteristics of LOD\textsuperscript{p}
In this section, we introduce the main ideas of LOD\textsuperscript{p}.
2.2.1. Separate modules for a database class
There are two different aspects of a database class. The 'interface' of a class is related to the semantics of the database, and makes classes form subset (IsA) relationships. The 'implementation' of a class is the whole description for implementing the class, and is responsible for the inheritance for code reuse.
This motivates two separate modules for a database class—one for the interface and one for the implementation. The idea of such separation is not new in data sharing environments, especially in multi-language environments and/or distributed database environments, where the interface is usually referred to by more than one program, while class implementation is hidden from users other than the one implementing the class. In the previous example, for each of $D_0, D_1$, and $D_2$, an interface and implementation are given.
2.2.2. Distinct class hierarchies
A LOD program deals with two distinct class hierarchies for interfaces of database classes and for implementations of database classes and non-database classes. The interface hierarchies are based on subset relationships between extents of related schema classes, which are viewed by all of the users including query language users and schema designers. The hierarchies for the implementations of database classes and non-database classes are based on code-reuse and are independent from the interface hierarchies. In the above example, one hierarchy is $D_0^{\text{interf}} - \{D_1^{\text{interf}}, D_2^{\text{interf}}\}$, while the other is $D_0^{\text{impl}} - D_1^{\text{impl}} - N - D_2^{\text{impl}}$.
Note that implementations of database classes can become super/sub-classes of non-database classes, while interfaces cannot. This is because only the implementation aspect of a database class requires relationships with non-database classes.
Independence between interfaces and implementations frees users from considering one kind of link when defining the other kind.
Although the separate interface hierarchy may look similar to a distinct database class hierarchy rooted at the specific class 'd_Object' in the database library, it is entirely independent of the implementations of the database classes, which eliminates the need for interactions with non-database classes. Thus, although the hierarchies with the implementations of database classes and non-database classes are constructed together, users can still enjoy orthogonality of persistence to the class hierarchy.
2.2.3. More than one implementation for a class
In LOD\textsuperscript{p}, one interface and more than one implementation are allowed for a database class. It is easy to see that multiple implementations for a class are necessary for the independence between the two kinds of hierarchies.
In addition, the schema evolution cost can be reduced in some cases. We discuss this further in a later section.
2.3. Examples
2.3.1. Definition of interfaces of persistent classes
In most preprocessing-based DBPLs [1,20,21], a database class is declared with a dedicated keyword such as 'persistent':

```cpp
persistent class Deposit: Account {... void put_money(); ...};
persistent class Loan: Account {... void borrow_money(); ...};
```

Not only can these schema interfaces be defined in LOD\textsuperscript{p}, but also in ODL (Object Definition Language) [7].
2.3.2. Definition of implementations and bindings to interfaces
With both the public and private parts of a class, an implementation has the same declaration syntax as a non-database class, except for its additional keyword 'implements', which binds it to an interface. For example, the implementation 'Deposit_Impl1' in the next example implements the interface 'Deposit'.
```cpp
class Deposit_Impl1 {          // implementation
    implements Deposit;
    ...   // same as the declaration in Deposit (can be omitted)
    money amount; time issue_date;
    ...   // method implementations
};
```
Implementations have to be declared and defined only in programming language interfaces such as LOD\textsuperscript{p}. In the declaration of an implementation, the data/methods declared in the corresponding interface can be omitted or rephrased. If rephrased, they are checked to see whether the user-defined binding destroys type safety. If a binding is not destructive, we call it an 'acceptable binding' [10].
The hierarchy of interfaces and that of implementations in a program are independent of each other. The interface hierarchy is based on subset relationships and related to modeling a schema, while the implementation hierarchy is constructed from reuse relationships. For example, the implementation ‘Money_Deposit’, which implements the interface ‘Deposit’, is placed in a hierarchy that is independent of the hierarchy of ‘Deposit’ and ‘Account’. Assume that the class ‘moneyManager’ is an existing class for managing the ‘money’ type data.
```cpp
class Money_Deposit: moneyManager { implements Deposit; ... };
```
Implementations of the interface ‘Deposit’ can be reused for other interfaces. The following examples show how ‘Deposit_Impl1’ is used to implement the interface ‘Loan’. The resulting class hierarchy is shown in Fig. 1.
```cpp
class Loan_Impl1: Deposit_Impl1 { implements Loan; ... };
```
Since interfaces and implementations are not restricted to map on a one-to-one basis, a schema interface can have instances created by different implementations. For example, assuming that ‘account_data’ is a system-defined
3. Implementation details
In this section, we present our implementation, which translates LOD\textsuperscript{p} code into expressions in the ODMG C++ binding.
The C++ object structure, deeply reliant on the language implementation for efficiency, makes it hard to design an extension of the language [13]. Since this also makes it difficult to support persistence and class separation, we have decided to impose some restrictions on C++. First, LOD\textsuperscript{p} deals only with global interfaces and implementations, i.e. only those defined outside all blocks. Second, protected members/member functions are not considered. Third, no function overloading is allowed. Fourth, we exclude operator overloading. Fifth, we allow neither array types of user-defined types for members, nor class templates for interfaces/implementations.
3.1. System-defined classes
For each declaration of an interface and implementation, some definitions of C++ classes are generated. As in the ODMG C++ binding, where all database classes are defined as subclasses of `Persistent_Object`, implementations are translated into subclasses of `ImplObject`, a system-defined class derived from `Persistent_Object`. The definition of `ImplObject` is as follows:
```cpp
struct ImplObject: public Persistent_Object {
    FptrtblClass *my_class_tbl;
    ImplObject(_lod_ClassId class_id) {
        my_class_tbl = &ftbl[class_id];
    }
};
```
With a pointer to a member function table named `my_class_tbl`, we determine at run time which function to execute. `ftbl` is the list of member function tables constructed from the declarations of interfaces and implementations. A `class_id` is a class identifier assigned to each class during the preprocessing step; `ftbl` is searched with the class identifier as a key.
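Much simplified, this table-based dispatch can be sketched in standalone C++ (the names `ImplObjectModel`, `AImpl` and `A_insert`, and the two-level `ftbl` layout, are illustrative assumptions rather than the generated code):

```cpp
#include <cassert>

// A generic function-pointer slot; entries are cast back to their real
// type at the call site, mirroring the role of FptrtblClass entries.
using GenericFn = void (*)();

enum LodClassId { CLASS_A_ID = 0, NUM_CLASSES };
enum LodFtnId   { FTN_insert_ID = 0, NUM_FTNS };

static GenericFn ftbl[NUM_CLASSES][NUM_FTNS];   // one table per class

// Stand-in for ImplObject: each object carries a pointer to its table.
struct ImplObjectModel {
    GenericFn* my_class_tbl;
    explicit ImplObjectModel(LodClassId id) : my_class_tbl(ftbl[id]) {}
};

struct AImpl : ImplObjectModel {
    int last = 0;
    AImpl() : ImplObjectModel(CLASS_A_ID) {}
};

// The 'global function' the preprocessor would generate for insert().
static void A_insert(ImplObjectModel* self, int x) {
    static_cast<AImpl*>(self)->last = x;
}

// Registration code that would also be generated.
static bool registered = [] {
    ftbl[CLASS_A_ID][FTN_insert_ID] = reinterpret_cast<GenericFn>(&A_insert);
    return true;
}();

// An interface-side call: look up the slot at run time and cast back.
inline void interface_insert(ImplObjectModel* obj, int x) {
    auto f = reinterpret_cast<void (*)(ImplObjectModel*, int)>(
        obj->my_class_tbl[FTN_insert_ID]);
    f(obj, x);
}
```

Casting a function pointer back to its original type before the call keeps this pattern well-defined in C++.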
Schema interfaces are defined as subclasses of `InterObject`, another system-defined class, derived from `Ref<ImplObject>`. The definition of `InterObject` is as follows.
```cpp
struct InterObject: public Ref<ImplObject> {
    InterObject(POID t = 0): Ref<ImplObject>(t) { }
    InterObject(ImplObject *t): Ref<ImplObject>(t) { }
    InterObject(const RefAny &t): Ref<ImplObject>(t) { }
    InterObject(const Ref<ImplObject> &t): Ref<ImplObject>(t) { }
    ...
};
```
3.2. Translation
3.2.1. Interfaces
For each member originally defined in an interface, two member functions prefixed with `get_` and `set_` are generated, and every member access is translated into the `get_/set_` member function call. For each `get_/set_` member function, the preprocessor defines a new global function, and also generates in every `get_/set_` function body the code for the global function call. The member functions in the original interfaces remain after the translation. The body of such a member function is converted to the corresponding member function call of the implementation.
For example, consider a simple schema interface `SCollection` and its super class `SCollectionSuper`.
```cpp
interface SCollectionSuper { void insert(int x); }
interface SCollection: SCollectionSuper { char * name; }
```
After translation, they are redefined as subclasses of `InterObject` with some new member functions.
```cpp
class SCollectionSuper: virtual public InterObject {
public:
    void insert(int x) {
        void (*f)(const RefAny&, int);
        f = (void (*)(const RefAny&, int))
            (*operator->()->my_class_tbl)[FTN_insert_ID]->ptr;
        f(*this, x);
    }
public:
    SCollectionSuper(POID t = 0): InterObject(t) {}
    SCollectionSuper(ImplObject *t): InterObject(t) {}
    SCollectionSuper(const Ref<ImplObject> &t): InterObject(t) {}
    SCollectionSuper(const RefAny &t): InterObject(t) {}
    SCollectionSuper &operator=(const Ref<ImplObject> &t) {
        InterObject::operator=(t);
        return *this;
    }
    ...
};

class SCollection: public SCollectionSuper {
public:
    void set_name(char n[]) {
        void (*f)(const RefAny&, const char []);
        f = (void (*)(const RefAny&, const char []))
            (*operator->()->my_class_tbl)[FTN_set_name_ID]->ptr;
        f(*this, n);
    }
    const char *get_name() {
        const char *(*f)(const RefAny&);
        f = (const char *(*)(const RefAny&))
            (*operator->()->my_class_tbl)[FTN_get_name_ID]->ptr;
        return f(*this);
    }
};
```
3.2.2. Implementations
The declaration of an implementation is made to contain copies of the members and member functions of its corresponding interface, unless they are rephrased in the declaration of the implementation. For example, we could define the implementations 'BSetSuper', 'BSet' and 'BSetSub' as follows:
```cpp
class BSetSuper {
public: char name[10]; void insert(int x) {...} };
class BSet: public BSetSuper {
    implements SCollection; int number_of() {...} };
class BSetSub: public BSet {
public: void insert(int x) {...} };
```
As seen above, an interface without super classes is translated into a direct subclass of InterObject. For the body of a member function like 'insert()', the preprocessor generates code for the call of the corresponding global function through the corresponding function pointer in my_class_tbl. For each member, get_/set_ functions are generated, with their bodies filled with the corresponding global function calls.
The constants prefixed with 'FTN_' are the identifiers for member function names, and are used as indices for the member functions in each my_class_tbl. Such identifiers should be carefully selected to avoid index conflicts in the case of inheritance. For example, if we assign the integer 1 to the insert of SCollectionSuper, it should be ensured that the identifier for insert remains 1 in all subclasses of SCollectionSuper, in order for its call through the my_class_tbl entry to work properly. The situation becomes more involved in the case of multiple inheritance.
In LOD\textsuperscript{p}, during the preprocessing step, the entire interface hierarchy is scanned, and a unique 'color' (identifier) is assigned to each member function name in the hierarchy. These colors are found by a graph coloring algorithm [12]. We use the colors as indices for the member functions in each my_class_tbl. The member function identifiers for the above example are defined by our preprocessor as follows:

```cpp
enum _lod_FtnId { FTN__ERROR_ID = -1,
    FTN_insert_ID = 0, FTN_set_name_ID, FTN_get_name_ID };
```
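For single inheritance, the effect of this step can be sketched with an explicit index map (a simplification of the coloring; `assign_slots` and its data layout are illustrative assumptions):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Assign each member-function name a table index: a subclass starts from
// its super's assignment, so inherited names keep the same index, and
// every new name takes the next free slot.
using Slots = std::map<std::string, int>;

Slots assign_slots(const Slots& super, const std::vector<std::string>& own) {
    Slots s = super;                          // indices must stay stable
    int next = static_cast<int>(super.size());
    for (const auto& name : own)
        if (s.find(name) == s.end()) s[name] = next++;
    return s;
}
```

Applied to the running example, 'insert' gets index 0 in SCollectionSuper and keeps it in SCollection, where 'set_name' and 'get_name' take indices 1 and 2.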
A member function table entry occupies one word. An implementation object requires no more words than an object of Persistent_Object, and an interface requires no more words than a normal smart pointer.
The preprocessor discriminates implementations from non-database classes through syntactical differences, such as the keyword 'implements', since implementations, unlike non-database classes, should be registered in the database. Classes such as BSetSuper and BSetSub are also made subclasses of ImplObject, as long as they are linked with the implementation BSet in the inheritance hierarchy.
The constants prefixed with 'CLASS_' are the identifiers for the implementations. They are used to search ftbl in the constructor of ImplObject, in order to get the appropriate my_class_tbl. The identifiers for the example implementations defined by the LOD\textsuperscript{p} preprocessor are as follows:

```cpp
enum _lod_ClassId { CLASS_ERROR_ID = -1,
    CLASS_BSetSuper_ID, CLASS_BSet_ID, CLASS_BSetSub_ID };
```
3.2.3. Definition and registration of global functions
During the translation, some global functions are generated in order to connect corresponding interfaces and implementations. They contain the call of the corresponding member function of the implementation. For any pair of an interface and its implementation, one global function is needed for each member function, and two for each member. The preprocessor also generates the code for registering those global functions into ftbl.
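The shape of one such generated pair of global functions can be sketched as follows (the names `_lod_BSet_set_name`/`_lod_BSet_get_name`, the `BSetModel` stand-in and the `void*` convention are assumptions for illustration):

```cpp
#include <cassert>
#include <string>

// Stand-in for the implementation class bound to the interface.
struct BSetModel {
    std::string name;
};

// Two global functions generated for the member 'name' of the pair
// (SCollection, BSet): each casts the handle back and forwards the call.
static void _lod_BSet_set_name(void* self, const char* n) {
    static_cast<BSetModel*>(self)->name = n;
}
static const char* _lod_BSet_get_name(void* self) {
    return static_cast<BSetModel*>(self)->name.c_str();
}
```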
Fig. 2 depicts the LOD\textsuperscript{p} object structure of our implementation. Note that, after translation, an actual database object becomes the implementation object, while its interface has the same layout as that of the ODMG Ref handler [7], though with different functionality.
3.2.4. Expressions
As mentioned earlier, a database object is created by the 'new' expression and handled by a persistent pointer. Let us consider the following example, which involves creating and handling persistent objects:

```cpp
SCollection * sp1 = new (obase) BSet;
sp1->insert(3);
sp1->name = "MySet1";
cout << sp1->get_name() << endl;
SCollectionSuper * sp2 = new (obase) BSetSub;
sp2->insert(5);
```
The preprocessor translates this code segment into:

```cpp
SCollection sp1 = new (obase) BSet;
sp1.insert(3);
sp1.set_name("MySet1");
cout << sp1.get_name() << endl;
SCollectionSuper sp2 = new (obase) BSetSub;
sp2.insert(5);
```
Note that, to avoid multiple indirections, an interface
pointer is not translated into a smart pointer of an interface,
but instead into an interface object itself, since an interface
object is already a smart pointer.
3.2.5. Cost analysis
The memory required for member function tables is
\[
\sum_{i=1}^{I} M_i \times (\textit{number\_of\_member\_functions\_of\_interface}_i + 2 \times \textit{number\_of\_members\_of\_interface}_i) + \textit{number\_of\_implementation\_objects}
\]
words, where $I$ is the number of interfaces and $M_i$ is the number of implementations of interface $i$.
The overhead for calling a member function through an
interface object is roughly the same as the time for one
member function call and one global function call. The
overhead for accessing a member through an interface
object is roughly the same as the time for two member
function calls (for the get/set function) and one global
function call. Note that no extra time is required for object
creation.
The time needed for building up the member function tables and for registering system-defined global functions depends on the total number of members and member functions of interfaces, that is, on
\[
\sum_{i=1}^{I} M_i \times (\textit{number\_of\_member\_functions\_of\_interface}_i + 2 \times \textit{number\_of\_members\_of\_interface}_i).
\]
The overhead for assigning colors is proportional to
\[
\sum_{i=1}^{I} M_i \times \textit{number\_of\_member\_functions\_of\_interface}_i,
\]
and the space for the color table in preprocessing depends on the same quantity. However, these costs do not affect the execution time; they increase only the preprocessing time.
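As an illustrative calculation (the numbers are assumed, not from the paper): for a schema with a single interface ($I = 1$) having 2 member functions and 1 member, $M_1 = 3$ implementations, and 10 implementation objects, the member function tables occupy
\[
3 \times (2 + 2 \times 1) + 10 = 22 \text{ words},
\]
that is, 12 words for the three tables and 10 words for the per-object table pointers.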
4. Type checking
In pure C++ programs, an object of 'Set<Bicycle>' cannot be assigned to a variable of 'Set<Person>'. However, in the ODMG C++ binding [7], objects of Ref can appear at any position expecting RefAny, even when the type parameters of the template class Ref are different. For example, an object of 'Ref<Bicycle>' can be used in contexts expecting 'Ref<Person>'. Accordingly, it is the preprocessor's responsibility to type check, so as to make the smart pointer type Ref behave like a real C++ pointer type. At the same time, this means that the preprocessor is allowed to have its own type system for persistent pointer types.
Type checking for class-separation support is categorized into four cases, as follows:
1. Subtyping relationship checking between two interfaces: to see if objects of one interface can be used in contexts expecting the other interface.
2. Subtyping relationship checking between two implementations: to see if objects of one implementation can be used in contexts expecting the other implementation.
3. Acceptability checking: to see if the user-defined binding of an interface and an implementation affects type safety.
4. ‘Implementing’ relationship checking between an interface and an implementation: to see if objects of the implementation can be used in contexts expecting the interface.
Note that an implementing relationship does not simply mean an acceptable binding, because an implementation not explicitly bound to an interface may still implement the interface [10]. The acceptability checking is made at the implementation declaration; the subtyping and implementing relationships are checked at each expression.\(^3\)
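As a toy illustration of the acceptability check (case 3), one can compare the member lists by name and type; this exact-match rule is a deliberate simplification of the check in Ref. [10]:

```cpp
#include <cassert>
#include <map>
#include <string>

// A binding is accepted here only when every member declared in the
// interface appears in the implementation with an identical type.
using Members = std::map<std::string, std::string>;  // name -> type

bool acceptable_binding(const Members& interface_m, const Members& impl_m) {
    for (const auto& m : interface_m) {
        auto it = impl_m.find(m.first);
        if (it == impl_m.end() || it->second != m.second) return false;
    }
    return true;  // extra members in the implementation are allowed
}
```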
While type checking, the preprocessor builds and refers to a table which contains the description of the classes being preprocessed and the information from the schema manager. Details on the type checking in ODMG are found in Ref. [10].
5. Other issues besides preprocessing
The schema manager maintains a flag for each class to determine if the class is an interface or an implementation.
The schema class generated by an on-line importing tool is always considered an interface. For each implementation, the schema manager also keeps information on the interface it is bound to.
An extent for a schema class is collected by traversing all its implementations, as well as its subclasses in the interface hierarchy. In application language interfaces such as LOD\textsuperscript{p} and the ODMG C++ binding, however, it is hard to define extents clearly because of transient instances and uncommitted object creation. Thus, containers like 'Set' and 'Bag' usually replace extents and collect the instances explicitly.
The preprocessing steps are summarized in Fig. 3.
6. Related works and discussions
6.1. The ODMG 2.0 model
The ODMG object model [8] supports a separate implementation for a class, in a different way from ours. First of all, unlike LOD\textsuperscript{p}, the ODMG C++ binding allows only one implementation for a database class in a program. That is, to a C++ programmer, the implementation looks identical to its class, while another language is used for the schema design, in order to define the separate interface of a class. Thus, in a C++ program, all the database classes are required to be subclasses of the root class 'd_Object', and persistence is not orthogonal to the class hierarchy.
The ODMG object model also supports distinct hierarchies for 'interfaces' and 'classes', but unlike the interfaces and implementations in LOD\textsuperscript{p}, this separation is aimed at database modeling power: an 'interface' describes abstract behaviors without its own extent, while a 'class' specifies the abstract behaviors and states of its objects; both are collectively called 'types'\(^4\) and are viewed from multiple programs sharing a database. Neither the 'interfaces' nor the 'classes' are concerned with implementing database classes.
Fig. 4 depicts how the ODMG 2.0 object model supports implementations separate from classes, compared with how LOD\textsuperscript{p} does so. The dotted lines represent the binding relationships between interfaces and implementations. In the ODMG model, only one-to-one mapping is allowed, without independent hierarchies for implementations.
6.2. Schema evolution
Due to the independence of interface hierarchies and implementation hierarchies, the schema evolution cost can be reduced, as mentioned earlier. For example, let us assume that there is an interface of a persistent class 'Deposit' which has three implementations named 'Deposit_Impl1', 'Deposit_Impl2' and 'Deposit_Impl3'.
\(^3\) Actually, checking subtyping between interfaces can be put in the charge of the C++ compiler instead of the preprocessor, since the subtyping is based on C++ inheritance [10].
\(^4\) There is one more constructor called ‘structure’ [8] which describes abstract states, but we omit it here since it is not so related to this paper.
Without interface/implementation separation, such multiple implementations have to be made subclasses of ‘Deposit’ in a class hierarchy. This can be represented in a C++ like syntax as follows:
```cpp
// a schema class
persistent class Deposit { ... };
// various ways of implementing Deposit
persistent class Deposit_Impl1: virtual Deposit { ... };
persistent class Deposit_Impl2: virtual Deposit { ... };
persistent class Deposit_Impl3: virtual Deposit { ... };
```
Now, if a new class ‘SpecialDeposit’ is created as a subclass of ‘Deposit’ in the schema, it should be inherited from all three classes:
```cpp
// a subclass of Deposit
persistent class SpecialDeposit: Deposit_Impl1, Deposit_Impl2, Deposit_Impl3 { ... };
```
Otherwise, the implementation of SpecialDeposit has to be decided in the schema design phase:
```cpp
// we decide to make it a subclass of Deposit_Impl1
persistent class SpecialDeposit: Deposit_Impl1 { ... };
```
In the former case, ambiguities [13] may arise if any pair of those three subclasses happens to have common names of attributes/methods, which is not rare, and users have to override them in the class 'SpecialDeposit'. In the latter case, schema designers have to consider the implementation of the class, and, in addition, it is hard to add other implementations of SpecialDeposit to the class hierarchy later.
In our approach, such a schema can be extended in a simpler and more elegant way. Since the hierarchy for implementations can be built regardless of the interface hierarchy, 'Deposit_Impl1', 'Deposit_Impl2' and 'Deposit_Impl3' in the above example no longer have to be subclasses of 'Deposit'. Instead, they are bound to 'Deposit':

```cpp
class Deposit_Impl1 {implements Deposit; ...};
class Deposit_Impl2 {implements Deposit; ...};
class Deposit_Impl3 {implements Deposit; ...};
```
Since the changes in the implementation hierarchy do not affect the interface hierarchy and vice versa, the new interface ‘SpecialDeposit’ can inherit from ‘Deposit’ directly, without consideration of the implementations of Deposit:
```cpp
persistent class SpecialDeposit: Deposit{...};
```
Also, when a user wants to add/delete/modify the private attributes/methods in the implementation of the class 'Deposit' or 'SpecialDeposit', she/he does not have to change the whole class declaration or the application programs concerned. Instead, only that particular implementation is changed, or a new implementation of the interface is added and used from then on. Hence, our approach to reducing the schema evolution cost has advantages similar to those of schema versioning [22].
6.3. Implementation
6.3.1. Java
Like LOD\textsuperscript{p}, Java supports hierarchies of interfaces independent from those of implementations. However, the ODMG Java binding [8] does not take advantage of this separation in database access.
In Java [24], an object is represented by a pointer to a ‘handle pool entry’ which has two pointers again: a pointer to instance data and a pointer to class data.
Although this is similar to the object layout of LOD\textsuperscript{p}, Java uses plain pointers to the object. On the other hand, pointers to database objects in LOD\textsuperscript{p} are translated into 'InterObject' objects, a special kind of 'd_Ref' object in the ODMG C++ binding. This enables LOD\textsuperscript{p} programs to share the object management facilities of the ODMG C++ binding, for portability through standardization. Each object handler in LOD\textsuperscript{p} contains the information of the interface that the pointed-to object belongs to. This is another difference from Java, where the interface information is obtained indirectly, via the class information linked from the object.
6.3.2. ODMG
Currently, we have developed LOD\textsuperscript{p} on top of the previous version 'SOP' [2], which supports the ODMG model release 1.2. However, even if it were built on a system supporting the ODMG 2.0 release, the implementation of LOD\textsuperscript{p} would not change remarkably, since we cannot make use of the new features of that release; the separate implementation support in the new ODMG release is not helpful for implementing LOD\textsuperscript{p}, because it mismatches the LOD\textsuperscript{p} model.
For example, it would be possible to translate LOD\textsuperscript{p} interfaces into the separate 'interfaces' of the ODMG 2.0 model, to realize the independent interface hierarchy in an easier way. However, because 'interfaces' in the ODMG 2.0 model have no extents, this entails complications in the management of extents for LOD\textsuperscript{p} interfaces. Thus, as in the current translation, we prefer to translate LOD\textsuperscript{p} interfaces into the 'classes' of the ODMG model, for automatic and efficient management of extents. There will, however, be some renaming of the classes 'Object' and 'Ref' of the old release to 'd_Object' and 'd_Ref', respectively.
6.3.3. CORBA
CORBA [11] supports the separation of the interface and implementation of classes, and two main approaches to binding an implementation of a class to its separate interface have been developed.
In the CORBA TIE approach [15], which uses C++ macros for binding an implementation and its interface together, a hidden TIE object is automatically created for each object [15]; it holds a reference to its target object and delegates all function invocations to it. However, a simple integration of this mechanism with the ODMG C++ binding would cause multiple indirections, with both the d_Ref object and the TIE object involved in sequence. Such indirections are eliminated in our mechanism by merging the interface pointer with the smart pointer Ref, and transferring the functionality of TIE to the implementation objects themselves.
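The delegation step of the TIE approach can be sketched as follows (a minimal stand-in; real TIE classes are macro-generated and CORBA-specific, and the extra hop shown here is the indirection referred to above):

```cpp
#include <cassert>

// Target implementation object.
struct AccountImpl {
    int balance = 0;
    void put_money(int x) { balance += x; }
};

// Hidden TIE object: holds a reference to the target and delegates every
// invocation, adding one pointer hop per call.
template <class T>
struct Tie {
    T* target;
    explicit Tie(T* t) : target(t) {}
    void put_money(int x) { target->put_money(x); }
};
```

Putting a persistent handle in front of the TIE object would add a second hop per call, which is the multiple indirection a merged interface pointer avoids.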
In the CORBA BOAImpl approach [15], an implementation is translated to a C++ class derived from the interface of its class. Although this mechanism is simple and does not cause any indirections, interfaces related by inheritance require that their implementations reflect the same inheritance hierarchy, and the implementation classes have to use inheritance [15]. This side-effect increases the complexity of the class hierarchies [9].
Some CORBA-compliant systems support OODBMS-persistence of CORBA objects [16,17,23] by extending CORBA loaders [15] for persistent storage. However, they focus neither on supporting persistence orthogonal to the class hierarchy nor on integrating database classes and non-database classes in a proper way, since the purpose of those systems is only to improve CORBA loaders.
6.3.4. Interface-separation in GNU C++
The facility for separate interfaces is currently incorporated in the GNU C++ compiler [5]. In this mechanism, an interface pointer is made an object that again contains two pointers: a pointer to the implementation object and a pointer to the 'member function table object'.

However, there are two big drawbacks. First, this table is initialized with the actual function pointers of the implementation, which requires casting the member functions of the implementation to those of the table object; but this is prohibited in C++ [13]. They therefore modify the front end of the GNU C++ compiler [5], at the cost of portability. Second, a 'member function pointer table' for an interface pointer is dynamically generated at every assignment of an implementation object to the interface pointer. Note that in our implementation, since member function tables are placed in implementation objects instead of interface pointers, no management of the table is required during assignment.
In most other approaches, except for CORBA [11], member access is not supported directly [3,5]. In LOD\textsuperscript{p}, members are accessed through get_/set_ functions, as in CORBA [11]. Studies on reducing the overhead of member access through get_/set_ function calls are found in Refs. [14,18].
7. Conclusions
This paper proposes LOD*, a C++ extension for database access. LOD* allows implementing database classes by reusing non-database classes and vice versa, with persistence support orthogonal to the class hierarchy. The main idea is to give database classes two distinct hierarchies, a subset-based hierarchy and a code-reuse hierarchy, based on separate interface and implementation modules for each database class. Only the interfaces are visible to programs accessing the shared database; the implementation details of classes are hidden from all users other than those implementing the classes. When the implementation-dependent part of a class needs to be changed, users simply add a new implementation for the class without touching the existing instances, which decreases schema evolution cost.
LOD* uses the C++ pointer interface, from which the preprocessor generates ODMG C++ binding code. After translation, an actual database object is represented as an implementation object, and an interface object behaves like a smart pointer. Unlike implementations in general-purpose languages, the one introduced in this paper meets the various requirements of database applications. Since there is little reference material on such implementations, we hope this paper will be helpful to future implementers.
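The "interface object behaves like a smart pointer" idea can be hinted at with `operator->` forwarding. This is a sketch under invented names, not the ODMG C++ binding code the preprocessor actually emits.

```cpp
#include <cassert>
#include <cstring>

// Sketch: the interface object stores a pointer to the implementation
// object and forwards member access through operator->, smart-pointer style.
class PersonImpl {                 // stands in for the actual database object
public:
    const char* name() const { return "alice"; }
};

class Person {                     // stands in for the interface object
public:
    explicit Person(PersonImpl* impl) : impl_(impl) {}
    PersonImpl* operator->() const { return impl_; }  // forward member access
private:
    PersonImpl* impl_;
};
```

Client code writes `p->name()` against the interface type while the call lands on the implementation object, keeping the two hierarchies decoupled.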
Currently, we are extending the LOD* preprocessor to eliminate the restrictions described in this paper. Another interesting issue is to apply our idea to language extensions, such as the Java interface for databases, which already support independent interfaces and implementations for non-database classes.
References
[10] E.S. Cho, H.-J. Kim, Class-separation mechanism for integrating OODBMs and general-purpose OOPs, submitted for publication.
Abstract
Innovative self-optimizing systems that go far beyond current approaches to mechatronic products become possible when systems are enabled to optimize their own behavior at runtime. Such self-optimizing systems are characterized by their ability to endogenously modify their objectives in response to changing conditions and to autonomously adapt their parameters and structure, and as a result their behavior, to fulfill those objectives. This paper outlines a systematic approach for the development of self-optimizing systems. The approach helps to reduce the considerable additional development effort resulting from self-optimization by employing different forms of patterns throughout the whole development process to enable the reuse of design knowledge. First, the employed notion of patterns covers the multiple disciplines involved as well as the different phases of the development process. In addition, the patterns are used to enable a systematic transition between the different milestones of conceptual design, such as the function hierarchy, the active structure, and the construction and component structure. The approach is presented using the example of autonomously driving shuttles which self-optimize their behavior.
1. Introduction
The integration of advanced information technology offers considerable potential for innovations in the field of conventional mechanical engineering. Most modern mechanical engineering products already rely on the close interaction between mechanical engineering, electrical engineering, software engineering, and control engineering that is known as
“mechatronics”. The aim of mechatronics is to improve the behavior of technical systems by using sensors to obtain information about the environment and the system itself and processing this information to enable the system to adapt to its current situation.
Given the tremendous pace of development in information technology, we can identify further options that go far beyond mechatronics – systems with inherent intelligence. We call such systems “self-optimizing”. Self-optimization can be characterized by the presence of the three joint actions of self-optimization (cf. [FGK+04], p. 22): (1) analyze the current situation, (2) determine objectives, and (3) adapt system behavior. A self-optimizing system is thus capable of analyzing and detecting relevant modifications of the environment or the system itself, of endogenously modifying its objectives in response to changing influences on the technical system from its surroundings, the user, or the system itself, and of autonomously adapting its behavior by means of parameter or structure changes to achieve its objectives.
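The three joint actions can be pictured as a simple runtime loop. The following sketch is purely illustrative: all types, thresholds, and update rules are invented placeholders standing in for a real self-optimizing system.

```cpp
#include <cassert>

// Illustrative loop for the three joint actions of self-optimization:
// (1) analyze current situation, (2) determine objectives, (3) adapt behavior.
struct Situation { double disturbance; };   // result of step (1)
struct Objective { double target; };        // result of step (2)

Situation analyze_current_situation(double sensor_value) {
    return { sensor_value };                // detect relevant modifications
}

Objective determine_objectives(const Situation& s) {
    // endogenously modify the objective in response to the situation
    return { s.disturbance > 1.0 ? 0.5 : 1.0 };
}

double adapt_system_behavior(const Objective& o, double parameter) {
    // parameter change moving the system toward the current objective
    return parameter + 0.5 * (o.target - parameter);
}
```

Executed repeatedly, such a loop is what distinguishes a self-optimizing system from a conventionally controlled one: the objective itself is recomputed at runtime, not fixed at design time.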
The paradigm of self-optimization opens up fascinating prospects for mechanical engineering and its associated fields. The challenge is to design such self-optimizing systems effectively. Typically, self-optimizing systems are characterized by a high interconnectivity among their system elements and with other self-optimizing systems. To enable autonomous adaptation of their behavior, self-optimizing systems need a high degree of freedom concerning alternative structures and configurations. This implies a shift of decisions that are conventionally made at design time towards the deployment phase of the product itself. In the Collaborative Research Center 614, “Self-Optimizing Concepts and Structures in Mechanical Engineering” (cf. [SFB-ol]), we are developing a novel design methodology for self-optimizing systems which addresses this challenge. A very important aspect of the design of self-optimizing systems is the specification of their behavior. We make use of the general observation that the most successful reusable building blocks for the interaction and structuring of complex systems at the design level are patterns. Patterns are best characterized by a problem-specific structuring and interaction of elements (cf. [AIS+77]). Each pattern often defines a partial view on the system in the form of a set of roles which have to be realized by the different system elements. This concept has found widespread use in the development of complex systems. Design patterns [GHJ+96] and architectural patterns [BMR+96] are very successfully employed in software engineering. In mechanical engineering and electrical engineering, a slightly more general form of patterns – so-called active principles – is frequently used (cf. [PB03]).
This paper describes our approach by means of an example of autonomously driving shuttles as parts of a novel transportation system - the New Railway Technology Paderborn [NBP-ol] project. These shuttles practice self-optimization by exchanging their experience to improve performance.
We will first review relevant related work for development processes and pattern notions in Section 2. Then, the development process and its elements are outlined in Section 3. The conceptual design which rests upon the application of so-called active patterns for self-optimization follows (Section 4). In Section 5, the elaboration of the conceptual design for the software by means of design patterns and rigorously defined coordination patterns is presented. A final conclusion closes the paper.
2. Related Work
Each domain that participates in the development of self-optimizing systems – mechanical engineering, electrical engineering, and software engineering – applies its own domain-dependent development process model, e.g. [Rot00] and [PB03] in mechanical engineering, [Esc93] in digital electronics, and [Pre94] and [PB96] in software engineering. For the development of mechatronic systems, the domain-dependent process models are not sufficient because they do not consider the synergetic collaboration of the domains with each other. The key aspect is an integrative, cross-domain development process. There are several approaches to the development of mechatronic systems, among which the VDI guideline 2206 and the process model of Isermann constitute two of the most important.
VDI guideline 2206 “A design methodology for mechatronic systems” depicts the current consensus of the experts. The result of the first step - the integrative conceptual design - is a domain-spanning conception of the desired system by all domain-experts, the so-called principle solution. Based on the principle solution, the subsequent elaboration takes place in parallel. The emergence of a functioning product that fulfils all requirements is guaranteed by frequent reconciliations and a coordinated system integration phase [VDI04].
Isermann focuses on systems with intelligent control, which include, for example, error detection and adaptation. He proposes an approach to system design that assumes a basic mechanical construction. In an analysis step, potentials for optimization are identified that can realize functions more easily and cost-efficiently on the basis of digital electronics. Models for the evaluation of the behavior are composed, and an information processing system is built up which consists of controlling elements and, in part, optimizing elements [Ise99].
The Collaborative Research Center 241 “Integrated Mechatronic Systems” (IMES) has demonstrated the potential of mechatronics to increase performance on several systems ([IBH02], [GSS+00]).
None of today’s approaches is suited for the development of self-optimizing systems. The paradigm of self-optimization imposes new requirements on the design methodology: self-optimization allows for autonomous decisions at runtime. Not all imaginable behavior needs to be anticipated in advance – a great amount of functionality is realized by software which initiates and implements coordination, adaptation, and transformation processes of the systems. All of this needs to be considered from the very beginning of the product development process. In the course of the work of SFB 614, appropriate approaches for the development of self-optimizing systems are being developed. They are based on the design methodology for mechatronic systems.
An important aspect of our design methodology is the use of patterns to describe the intelligent behavior of self-optimizing systems in a generalized, domain- and application-independent way and to reuse already successfully applied design knowledge in new contexts. In this way, we regard patterns as reusable building blocks for complex system design processes. [LRS01] brings forward requirements for the discovery, characterization, and cataloguing of patterns as a core of a methodology for software adaptivity. However, only patterns for the late development phases of knowledge-based systems are investigated, such as self-monitoring, self-diagnosis, and self-recovery. There is neither a concept for applying patterns in the early phases, e.g. when designing the active structure and the principle solution, nor are the proposed patterns adequate for a cross-domain design of self-optimizing systems.
The Unified Problem-Solving Method Description Language (UPML) brings forward a concept and specification for the reuse of so-called “problem-solving methods” in the domain of artificial intelligence [GFR+04]. UPML specifies abstract patterns for problem-solving processes which describe intelligent behavior. However, the behavior is based on inference processes in the sense of rule-based systems only. The approach cannot model the wide range of intelligent behavior beyond such inference mechanisms. Furthermore, there is no concept for combining the patterns with real mechatronic systems.
Patterns which are to be used for the software of mechatronic or self-optimizing systems must exhibit special properties. These properties include honoring real-time requirements, appropriate abstraction, formal semantics, and design for verification.
Patterns for the design of adaptive and safety critical software systems are presented in [SFO03]. Those patterns do not respect the real-time requirements of mechatronics. They additionally lack a formal specification.
Design patterns in the software engineering domain have typically been specified informally [GHJ+96]. The usage of informally specified patterns for safety critical software is not appropriate, since the behavior of the resulting software systems is not foreseeable. In the last years, formal pattern specifications (such as [KFG04], [KFG+03], [SH04]) have been introduced to overcome this problem. The structure as well as the behavior of the patterns must be formally specified. Based on these formal specifications, verification techniques like model checking are applicable. Verification techniques allow for proving that the software does behave in accordance with its specifications.
In [SH04], patterns are formally specified on a programming language level. For the considered domain, patterns should be specified on a modeling level, as the abstraction which is provided by the models allows for easier verification.
The Role-Based Metamodeling Language (RBML) [KFG04] is a formal pattern specification notation which can be used to express domain-specific patterns. In this approach, a pattern specification consists of (1) a static pattern specification which describes the structure of the pattern and (2) an interaction pattern specification which describes constraints on the allowed interaction between the structural pattern elements. The RBML approach does not support the specification of behavior which conforms to real-time requirements. In addition, the employed refinement notion [KFG+03] allows arbitrary refinement and thus does not enable compositional verification techniques.
Besides the observed lack of appropriate pattern notions for the principle solution and elaboration phases, all existing approaches are restricted to either the mechanical engineering or the software engineering domain. However, the intended development of self-optimizing systems requires seamless support for the reuse of design know-how in the form of patterns, so that the transitions between the results of the “key milestones” of the development process – such as the requirements, the principle solution, and the elaboration phase – can be bridged efficiently.
3. Definition and Use of Solution Patterns
The demonstrator of our overall research project is a rail-bound transportation system consisting of autonomous shuttle vehicles for the transport of persons and goods. Several concepts of self-optimization are validated by means of this transportation system example. The shuttles shall use the existing railway system, travel in single or convoy drive mode, and behave optimally in any situation. Among other things, behaving optimally may mean achieving the best comfort for the passengers. Technically speaking, comfort depends on the movement and the acceleration of the shuttle chassis, which must be minimized by implementing appropriate compensation measures. The minimization of the acceleration is carried out by an active suspension/tilt module. Conventional control uses a so-called “skyhook” approach for the damping of the active suspension/tilt module. This approach allows the suspension/tilt module to adjust to the track profile in such a way that the shuttle chassis moves along a predefined straightened trajectory. However, once the preset trajectory is deployed in operational mode, this approach does not consider changes in the track profile, which inevitably occur because of wear of the tracks or the like. The aim of a self-optimizing solution is to provide experience-based trajectories for the shuttles in operational mode and to make this approach efficient through cooperation within a whole shuttle community.
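The skyhook idea above can be stated compactly: the damper force opposes the absolute velocity of the chassis, as if the body were damped against an imaginary fixed point in the sky. The sketch below uses an arbitrary gain value; it is an illustration of the principle, not the controller of the NBP shuttles.

```cpp
#include <cassert>

// Skyhook damping law (illustrative): the commanded damper force opposes
// the absolute chassis velocity rather than the relative suspension
// velocity. The gain is an arbitrary example value, not a tuned parameter.
double skyhook_force(double chassis_velocity, double gain) {
    return -gain * chassis_velocity;  // force in N for velocity in m/s
}
```

For example, a chassis moving upward at 2 m/s with a gain of 1000 Ns/m would receive a restoring force of -2000 N.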
The development of self-optimizing systems of a complexity such as the New Railway Technology Paderborn project can be compared with the design procedure for mechatronic systems, e.g. in the automobile or aerospace industry. Fundamental decisions are taken in the early phases of the development process. A domain-spanning conceptual design phase establishes how the system is going to be constructed and how the functionality can be achieved. After that, an elaboration of the particular modules is conducted. The requirements are the starting point of the conceptual design phase. The functionality of the product is extracted and described in a solution-independent way with the help of a function hierarchy. Solutions from the participating domains are then sought. These solutions result in a principle solution that specifies the intended product concept by a set of coherent partial models. Essentially, these models are the active structure, which describes the connectivity of the system elements; the raw construction and component structure, which designates the shape of the individual elements and their position in space; and the behavior of the system.
Starting from the principle solution, the partial solutions are concretized with applying domain-specific methodologies. In the case of software controlling the self-optimization behavior, the development is carried out with the help of active patterns for self-optimization
(APSO), which specify schemas for the system behavior. APSO are selected in the conceptual design phase and realized by well-proven software patterns later on.
Different types of solutions are applied in the design of self-optimizing systems. We have developed a classification schema for these solutions and their concretization levels. As a superordinate concept we use the term pattern or solution pattern. According to [AIS+77], a pattern describes a recurrent problem within our environment and the core of a solution for this problem. The core of a solution pattern is specified by the characteristics of its elements and their collaboration. In our context, solution patterns are applied to work out product concepts, drafts, realizations, and implementations; they lead to mechanical and software components. Figure 1 depicts the overall classification scheme.

We differentiate between solution patterns that are based upon physical effects and patterns that contain information processing. We call solution patterns that rely on physical effects “active principles”. In particular, active principles of mechanical engineering and electrical engineering are relevant for self-optimizing systems. According to the definition of Pahl/Beitz [PB03], active principles describe the relationship of physical effects and material and geometrical characteristics (active geometry, active motions, and material properties).
To a great extent, self-optimization is realized by information technology. We subsume patterns of control engineering, self-optimization, and software engineering under the general term “patterns of information processing”. Software patterns consist of a problem-solution pair which makes well-proven software engineering knowledge applicable in new problem contexts. Patterns of control engineering specify how a plant is modelled and influenced, and how quantities are measured and observed. Active patterns for self-optimization (APSO) depict schematic solutions for the self-optimization process as described in [FGK+04]. We use the following terms in the elaboration phase: system elements constitute the elements of the active structure, which is designed in the conceptual design phase. They represent parts of the system which are not yet developed in detail. After further concretization, system elements with a spatial geometry are transformed into components of the construction structure, and software-containing elements are transformed into software components of the component structure. Figure 2 depicts the phases of the design process at which solution patterns are applied and how they relate to each other.
**Figure 2: Transitions from Functions to Components**
The design methodology is based upon two basic steps - first the conceptual design, here reduced to the transition from the function hierarchy to the active structure; second the elaboration which describes the transition from the active structure to the construction and component structure. Starting point for the first step is the function hierarchy, which specifies the product functionality. The function hierarchy primarily results from the requirements. Solutions are determined for specific functions. These may be active principles, software patterns, patterns of control engineering, active patterns for self-optimization or solution
elements, if already known. By a solution element we understand a realized and well-proven solution for the fulfilment of one or more functions. In general, this means a module, a component, a group of components, or a software component that relies on one or more solution patterns. Linking all system elements by means of energy, material, and information flows leads to the active structure. The active structure describes the physical and logical interaction between all participating system elements. Already known solution elements are treated as system elements within the active structure. Furthermore, ideas about the shape of the system arise. Therefore, system elements with a spatial geometry are concretized into modules and components and positioned in space under special consideration of geometric constraints. Thus, details about the number, shape, position, alignment, and type of active surfaces and active locations can be specified. The subsequent elaboration develops the construction structure with its geometry-determining components and component groups. In parallel, information-processing system elements are concretized, assembled into software components, and depicted in the component structure. This is done on the basis of software patterns where applicable. The development of software for self-optimization using active patterns for self-optimization is detailed in the following sections.
4. Active Patterns for Self-Optimization
Active patterns for self-optimization (APSO) realize functions of self-optimization.¹ APSO constitute templates which specify generally accepted, autonomous, and intelligent behavior of self-optimizing systems with the help of principle concepts, application scenarios, structures, behavior, and methods (Figure 3). APSO cover the whole self-optimization process or only parts of it. Essential is the fact that system state changes are caused, supported, and/or deployed by autonomous, intelligent behavior. APSO are iteratively concretized throughout the whole system development process.
The principle concept characterizes the basic idea of the APSO. It is used to allow the engineer an intuitive access to the APSO.
Application-scenarios depict situations in which the APSO have already been applied successfully in the past. Those scenarios shall help the engineer to select an appropriate APSO for the task at hand.
¹ Apart from conventional functions of mechanical engineering, we investigate so-called functions of self-optimization, such as autonomous planning, cooperation, and learning, for the description of the functionality of self-optimizing systems.
The structure specifies the necessary participating system elements and their relations to each other. One or more behavior models describe the adaptation processes which an APSO shall execute.
**Figure 3: The Active Pattern for Self-Optimization „Experience-Based Exploration“**
The focus is on the modelling of autonomous, intelligent behavior which activates, supports, and/or executes the state change. The following example is based on an adaptation process consisting of three activities:
1. Query knowledge: Knowledge of other systems is used to better achieve the task at hand.
2. Explore environment: The environment of the system is explored to enrich and extend the queried knowledge such that new experience is built up.
3. Learn results: New experience gained when exploring the environment is learned and distributed among the participating system elements so that the knowledge level of the whole system continually increases over time.
Finally, it is shown how a system is transformed from a given current state to a desired target state by the use of specific methods, e.g. Case-Based Reasoning for the query and adaptation of knowledge.
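The three activities can be sketched as plain functions over a shared knowledge store. The vector-of-doubles representation below is an invented stand-in for the trajectory knowledge, not the actual Case-Based Reasoning machinery.

```cpp
#include <cassert>
#include <vector>

// Invented stand-in for the knowledge exchanged between system elements,
// e.g. parameters of previously explored trajectories.
using Knowledge = std::vector<double>;

// 1. Query knowledge: reuse what other systems already know.
Knowledge query_knowledge(const Knowledge& carrier) {
    return carrier;
}

// 2. Explore environment: enrich the queried knowledge with new experience.
Knowledge explore_environment(Knowledge k) {
    k.push_back(k.empty() ? 0.0 : k.back() + 0.1);
    return k;
}

// 3. Learn results: distribute the new experience back to the carrier.
void learn_results(Knowledge& carrier, const Knowledge& explored) {
    carrier = explored;
}
```

Each pass through query, explore, learn leaves the carrier holding strictly more experience than before, which is the intended cumulative effect of the pattern.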
In the course of the conceptual design phase, the experience-based exploration of trajectories is elaborated in the application scenario “Cooperative Learning when Driving on a Track”. The starting point is the active structure of a rail-bound transportation system. Figure 4 shows an extract of the active structure, where shuttles drive on track segments that power and direct them. The active structure also describes how track deviations affect the shuttles negatively.
According to the above mentioned task, the function hierarchy of the conventional rail-bound transportation system is extended by functions of self-optimization like “Determine Track-profile”, “Calculate Trajectory”, and “Adapt Behavior”.

**Figure 4: Extract of an Active Structure for a Rail-bound Transportation System**
The active pattern for self-optimization is converted into the active structure as follows. According to the structure aspect of the APSO, at least one knowledge carrier and one knowledge user are necessary. The upper part of Figure 5 depicts the realization of the active pattern for self-optimization at the type level. The middle part shows the instantiated active structure for this application scenario. Decisive for the solution is that shuttle Sh₂ changes its state on the basis of access to the experience of others. In this case, the internal states are based on a mental state-space model². The state change is achieved by the three activities mentioned above: 1. query knowledge, 2. explore the environment – here: exploring the track profile – and 3. learn from the results and distribute the experience among all other system elements such as shuttles and switches.
² Mental state-space models are used in the domains of epistemology and artificial intelligence to model thought processes such as planning and problem solving (cf. [MA02]).
Figure 5: Active Structure for „Cooperative Learning when Driving on a Track“
The lower part of figure 5 depicts components of the active pattern of self-optimization “Experience-based Exploration” where above mentioned activities are carried out by the method of Case-Based Reasoning. This method provides a multi-criteria search for similar problems as well as the adaptation of historic solutions to the current situation. This way the knowledge of shuttle Sh₁ in terms of successfully deployed past trajectories is used for the adaptation of the behavior of shuttle Sh₂. Starting in state S₀, this initial knowledge is the basis for the exploration of an optimum trajectory. The exploration activity may lead to one of the two subsequent states S₁ and S₂. Shuttle Sh₂ of the active structure contains links to the
applied APSO and the semi-formal specification of the behavior, as well as to the deployed method. The exploration of reference trajectories in a multi-agent system setting is detailed in [SSO+04].
5. Elaboration – From the Principle Solution to the Component Structure
In order to realize the transition from the active structure to the component structure, (1) the relevant elements of the active structure are mapped to a corresponding UML component architecture, (2) active patterns present in the active structure have to be realized by related software patterns, and (3) additional requirements might be realized by additional software patterns (e.g., to enable the exchange of software at run-time).
Here, we further consider step (2) and present how to realize the active pattern for self-optimization named “Experience-Based Exploration” employed above in the software design. First, we have to identify the corresponding general, reusable design pattern describing the main actors and sequences of events at the information processing level. Based on this, we identify in a second step the related, more detailed coordination pattern which also fulfills the domain-specific requirements. We also have to make sure that the relevant timing constraints are present and that the pattern can be subject to formal verification.
**Design Pattern**
Abstracting from the physical aspects of the system and focusing on the knowledge-related interactions, we have the Knowledge Carrier, the Knowledge Users and the Subject Matter as the principal elements (roles) of the pattern. The Knowledge Carrier has knowledge about a specific Subject Matter. The Knowledge Users ask the Knowledge Carrier about the Subject Matter and use the obtained information to guide them in their exploration. They may report their experiences back to the Knowledge Carrier. These relationships are documented by a UML Class Diagram (see Figure 6).
Nonetheless, a more detailed description of the interactions is required in order for the pattern to be useful. The exchanges between the objects, or rather the roles they are playing, are thus documented by means of one or more scenarios. Scenarios are idealized sequences of concrete messages passed between roles that serve to illustrate desirable behavior.
The most important generic scenario for this pattern is depicted by the UML Sequence Diagram in Figure 7. It describes a Knowledge Carrier tailoring an optimized response to the specific query of a Knowledge User, who then proceeds to explore the Subject Matter based on this recommendation. The resulting experiences are sent back to the Knowledge Carrier, who processes them and makes them available for future users. The diagram is annotated with comments in bold face that point out the abstract steps (Query, Exploration, Learning) and the characteristic joint actions of self-optimization, (1) analyze current situation, (2) determine objectives, and (3) adapt system behavior.
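As a hedged illustration (not an artifact of the paper), the Query/Exploration/Learning exchange of Figure 7 can be sketched in plain Python; all class and method names below are hypothetical stand-ins for the pattern roles:

```python
class KnowledgeCarrier:
    """Stores experiences about a Subject Matter and answers queries."""

    def __init__(self):
        self.experiences = []  # repository of reported explorations

    def query(self, situation):
        # Analyze the situation and tailor a recommendation from the
        # stored experiences; fall back to a default if none exist yet.
        relevant = [e for e in self.experiences if e["situation"] == situation]
        if relevant:
            return max(relevant, key=lambda e: e["score"])["action"]
        return "default"

    def learn(self, situation, action, score):
        # Learning step: incorporate the experience for future users.
        self.experiences.append(
            {"situation": situation, "action": action, "score": score})


class KnowledgeUser:
    def explore(self, carrier, situation):
        action = carrier.query(situation)        # Query
        if action == "default":
            action = "trial-" + situation        # no guidance: explore freely
        score = self.evaluate(action)            # Exploration
        carrier.learn(situation, action, score)  # report experience back
        return action

    def evaluate(self, action):
        # Stand-in for the real exploration of the Subject Matter.
        return len(action)
```

Running `explore` once seeds the carrier; later users querying the same situation then receive the recorded action instead of the default, mirroring how experiences of earlier shuttles guide later ones.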
Our application scenario “Cooperative learning when driving on a track” can be seen as a specific instantiation of this pattern. The shuttle receives a trajectory encoded as a mathematical function that allows it to adapt its active suspension to the actual profile of the current track section. The parameters of the trajectory can be adapted online in order to fine-tune it in accordance with the current objectives, e.g., weighing efficiency against perceived comfort. The trajectory is based on the experiences of the shuttles that have previously used the track section.
**Figure 6: Class Diagram Specifying the Main Actors and their Relationships**

The corresponding self-optimization process is distributed between the shuttles and the track section. The shuttle analyzes its current status concerning preferences and objectives, payload, and energy reserves (first analysis of the current situation) and communicates the results to the track section. Based on the communicated configuration and the stored experiences, the track section now computes a suitably optimized trajectory reflecting the shuttle’s preferences with respect to its objectives (determination of objectives) and transmits it to the shuttle.
The shuttle adapts the reference input for the active suspension system based on the trajectory (adaptation of the system behavior). After leaving the track section, the shuttle analyses the perceived comfort and the expended energy (second analysis of the current situation) and transmits its experiences back to the track section. The new experience is incorporated into the track section’s repository and thus used in the trajectory optimization of subsequent shuttles. The optimization process thus depends on a distributed analysis of the current situation – the assessment of the shuttle’s state by the shuttle itself (first analysis of the current situation) and the reports about the track profile by the preceding shuttles (second analysis of the current situation).
**Figure 7: An Idealized Exchange between Knowledge Carrier and Knowledge User**
**Coordination Pattern**
Based on the design pattern, we can now derive a coordination pattern [GTB+03] that precisely defines the behavior that is required of the actors. While the more informal scenario captures the underlying idea in an intuitively accessible form, it does not specify which behavior is required and which is optional or incidental, nor does it provide any timing information. It is therefore insufficient as the sole specification of the – typically safety-critical – software of a mechatronic system. Coordination patterns are specifically suited to this task, as they allow the specification of verifiable real-time requirements. The diagram in Figure 8 describes the abstract structure of the derived coordination pattern. The two roles are linked by a connector representing the communication channel. On the conceptual level, the roles are linked by the communication protocol which is specified in the behavior of the two roles.
Figure 8: The Actors are Mapped to Roles of a Coordination Pattern
Our approach is based on the current UML 2.0 specification [OMG03], which in turn is based on ROOM [SGW94] and UML/RT [SR98]. Structure is specified using component diagrams; behavior is specified by a real-time variant of UML state machines called Real-Time Statecharts [BG03].
Real-Time coordination patterns (in short coordination patterns) as in Figure 8 capture the coordination behavior between abstract entities. They are subsequently applied to components, which need to implement the required coordination behavior in a way that respects all specified constraints. Coordination patterns consist of a number of abstract entities (roles) and their coordination behavior (role behavior). The role structure is specified by component diagrams; roles are displayed as ports. Communication between roles is indicated by connectors between the participating roles.
Each role of the coordination pattern is specified by a protocol state machine, i.e. a Real-Time Statechart without side effects other than message sending. As the focus of coordination
patterns is on the exact specification of the (safe) interaction between components, they largely abstract from internal behavior that is irrelevant for the determination of communication behavior. Important steps like analysis, determination of goals or learning therefore do not explicitly figure in the specification, but are abstracted into non-deterministic behavior, bounded by appropriate time guards.
The User queries the Carrier for data which encode the experience. If it does not receive a reply before a certain deadline, the user has to handle the situation without this information. If, however, the information is provided in time, the user employs it and sends feedback based on the gathered experience (see Figure 9).

The Carrier basically waits for requests by users and the experiences they send as feedback (see Figure 10).

The behavior of the connector role is also specified by Real-Time Statecharts. The connector models the assumed properties of the communication channels like message losses, message delays, etc.
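A minimal simulation of this role behavior, with invented names and a drastically simplified channel model (the paper itself specifies the roles as Real-Time Statecharts and verifies them with Uppaal):

```python
def user_role(channel, deadline_ms):
    """Query the carrier; fall back if no reply arrives within the deadline."""
    channel.send("query")
    reply = channel.receive(timeout_ms=deadline_ms)
    if reply is None:
        return "fallback"          # proceed without the information
    channel.send("feedback")       # use the data, then report experience
    return "use:" + reply


class Connector:
    """Models the assumed channel properties: fixed delay, possible loss."""

    def __init__(self, delay_ms, lossy=False):
        self.delay_ms = delay_ms
        self.lossy = lossy

    def send(self, msg):
        pass                       # the carrier side is abstracted away

    def receive(self, timeout_ms):
        if self.lossy or self.delay_ms > timeout_ms:
            return None            # message lost or too late: deadline missed
        return "data"              # reply arrives within the deadline
```

With a 5 ms channel and a 10 ms deadline the user employs the data; with a lossy or 20 ms channel it takes the fallback branch, which is the behavior the deadline constraint is meant to guarantee.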
Safety critical constraints on the behavior are the final part of a coordination pattern specification. The constraints are written in OCL-RT [FM02] or a temporal logic (ATCTL).
In order to ensure that no unsafe behavior due to a violation of the constraints may occur, the model checker Uppaal [LPY97] is used to verify that all constraints for the specified behavior hold under the assumed channel behavior [BGH+04]. If the constraints hold, the pattern specification is valid and is stored in a pattern library for future reuse.
The software of a self-optimizing system consists of a number of components, which are connected via ports and channels. The components are developed by reusing some of the verified coordination patterns, which are stored in the pattern library. First, an appropriate coordination pattern is loaded from the pattern library. The roles of the pattern are added to the component as ports. A refinement of the role behavior is then added as parallel state to the behavior of the component. The refinement must respect certain restrictions in order that the results of the pattern verification still hold for the component. A component typically does not only refine one pattern role. Instead several different pattern roles are refined and added to the component behavior. Typically, a synchronization behavior is also added which coordinates between the refined roles.
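The composition described above can be sketched as follows; the roles, events, and the synchronization rule are hypothetical simplifications of the refined role behavior:

```python
class Role:
    """One refined pattern role as a tiny protocol state machine."""

    def __init__(self, name):
        self.name = name
        self.state = "idle"

    def step(self, event):
        # Minimal protocol: idle --query--> waiting --reply--> idle
        if self.state == "idle" and event == "query":
            self.state = "waiting"
        elif self.state == "waiting" and event == "reply":
            self.state = "idle"


class Component:
    """Runs two refined roles as parallel states; a small synchronization
    behavior defers the second role while the first awaits its reply."""

    def __init__(self):
        self.current = Role("current-section")
        self.next = Role("next-section")

    def dispatch(self, role_name, event):
        role = self.current if role_name == "current" else self.next
        if role is self.next and self.current.state == "waiting":
            return False  # synchronization rule: defer the next-section role
        role.step(event)
        return True
```

The point of the sketch is only the structure: each role keeps its verified protocol untouched, and the component adds coordination on top rather than inside the roles.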
Real-Time Statecharts are used for the specification of discrete event-based behavior. As mechatronic and self-optimizing systems contain continuous behavior, continuous controllers are added to the components [BTG04, GBS+04]. The states of the discrete behavior are annotated with controller structures. Only the controllers associated with the current state of the Real-Time Statechart are executed at runtime. This integration of continuous behavior (controllers) and discrete event-based behavior (Real-Time Statecharts) is specified using hybrid components and Hybrid Statecharts. Special fading functions are used to specify the switching between the states and the annotated controller structures.
Figure 11: Shuttle and Registry Implement the Coordination Pattern
Based on the system structure, we define the components Shuttle and Track Section and apply the coordination pattern Experience Sharing to them. The Shuttle acts as the Knowledge User,
the Track Section as the Knowledge Carrier. The pattern thus specifies the way shuttle and track section exchange reference trajectories and gathered experiences. The relevant part of the component structure is specified by the component diagram in Figure 11.
The Shuttle (Knowledge User) queries the Track Section (Knowledge Carrier) for a reference trajectory. If it does not receive a reply before a certain deadline, the shuttle assumes that no trajectory is available and switches to a robust controller that can safely operate the active suspension system without a reference trajectory, albeit in a less comfortable and efficient manner. If, however, the trajectory is provided in time, the shuttle uses it for traversing the track section and sends feedback based on the experience gathered through its sensors to the track section (see Figure 9).

**Figure 12: Composition of the Shuttle Component**
The Shuttle component is defined by combining and refining all applicable coordination patterns (see Figure 12). In order to allow the shuttle to obtain the trajectory for the upcoming track section in time while still communicating with the current section, the experience sharing pattern is executed twice in parallel. It is refined by specifying additional internal communications where before there were only non-deterministic constraints.
The Hybrid Statechart in the diagram’s upper half specifies which controller should be active, depending on the available inputs. When data from the acceleration sensor is available, the Absolute controller may be used. If additionally the Track Section has provided a trajectory, the Reference controller is activated. When the required inputs fail, the system needs to quickly switch back to the Robust controller to ensure safe behavior (see [GBS+04]).
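The switching logic can be sketched as a simple selection function; the input names are assumptions, and the fading between controller configurations is omitted:

```python
def select_controller(sensor_ok, trajectory_ok):
    """Pick the active suspension controller from the available inputs."""
    if sensor_ok and trajectory_ok:
        return "Reference"   # trajectory provided by the track section
    if sensor_ok:
        return "Absolute"    # acceleration sensor data only
    return "Robust"          # required inputs failed: safe fallback
```

In the real system this decision is taken by the Hybrid Statechart, and the transition back to the Robust controller is additionally subject to a verified switching deadline.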
During the elaboration phase the active patterns for self-optimization, which are part of the principle solution, are refined using design patterns and subsequently coordination patterns. These coordination patterns further enable the development of systems of interacting hybrid software components which can be compositionally verified to ensure safety.
6. Conclusion
Self-optimization makes innovative mechanical systems possible which go far beyond current approaches for mechatronics. The outlined approach addresses this challenge by means of a specifically tailored design methodology. Its main elements are a development process adjusted to the specific needs of self-optimizing systems, the exploration of design alternatives during the principle design, the reuse of the building blocks for self-optimization in the form of active patterns of self-optimization in the principle solution, and their subsequent refinement during the elaboration of the information processing with coordination patterns within UML component diagrams.
The additional development effort for self-optimizing systems can be drastically reduced, as the generalized pattern concept is employed throughout the whole development process to enable the reuse of design knowledge. The application of the patterns covers the multiple disciplines involved, such as mechanical engineering and software engineering, as well as the different phases, such as the conceptual design and the elaboration. Furthermore, it bridges the results of the different phases: the active patterns are used to derive an adequate active structure from the function hierarchy, and the UML design is then derived by refining the active structure and the included active patterns by means of component structures and coordination patterns.
References
ACKNOWLEDGEMENT
This contribution was developed in the course of the Collaborative Research Center “Self-Optimizing Concepts and Structures in Mechanical Engineering” (Speaker: Prof. Gausemeier) funded by the German Research Foundation (DFG) under grant number SFB614.
IBM WebSphere Application Server Version 3.5 Advanced Edition for AS/400 Manages Your Enterprise-Wide Applications
Overview
WebSphere™ Application Server V3.5 Advanced Edition manages and integrates enterprise-wide applications while leveraging open Java™-based technologies and APIs. This Web application server deploys and manages Java applications and Enterprise JavaBean components. The latest version takes advantage of Java 2 Software Development Kit (SDK) 1.2.2 performance and capabilities and improves the overall usability of your applications.
WebSphere Application Server Advanced Edition furthers its strong reputation as an integral product in the IBM e-business platform and the WebSphere software family.
Key Prerequisites
- AS/400® systems hosting applications using Enterprise JavaBeans:
- AS/400e™ Server 170 with processor feature 2385 (recommended minimum)
- AS/400e Server 270 with processor feature 2252 or AS/400e Server 820 with processor feature 2396 (recommended minimum)
- Recommended minimum 1 GB memory
- AS/400 systems hosting applications using servlets and JavaServer Pages only:
- AS/400e Server 170 with processor feature 2292 or AS/400e Server 720 with processor feature 2061 (recommended minimum)
- AS/400e Server 270 with processor feature 2248 or AS/400e Server 820 with processor feature 2395 (recommended minimum)
- Recommended minimum 512 MB memory
- OS/400® V4R4, or later
- PC running Windows NT™ V4.0 or Windows™ 2000 Version 5.0; Sun workstation running Solaris V2.6 and V7 at the latest available maintenance level; RS/6000® running AIX® V4.3.3.02, or later; HP workstation running HP-UX V11.0
- Java 2 SDK V1.2.2 on both the server and workstation
- Web browser on the client
At a Glance
WebSphere Application Server Version 3.5 Advanced Edition for AS/400 features:
- Enhanced support for Java 2 SDK V1.2.2 across operating systems
- Improved product integration with other key application offerings in IBM e-business platform, including VisualAge® for Java and WebSphere Commerce Suite
- Improved usability for administration
Planned Availability Date
- September 28, 2000, build to plan
- October 13, 2000, build to order
Versions 3.5 and 3.0.2
Note: The previous announcement of WebSphere Application Server Version 3.0.2 Advanced Edition (refer to Software Announcement 200-013, dated February 8, 2000) had a restriction on 5733-WA3. That restriction is removed and the product is available worldwide.
For ordering, contact:
Your IBM representative, an IBM Business Partner, or IBM Americas Call Centers at 800-IBM-CALL
Reference: AE001
This announcement is provided for your information only. For additional information, contact your IBM representative, call 800-IBM-4YOU, or visit the IBM home page at: http://www.ibm.com.
WebSphere Application Server V3.5 Advanced Edition adds the following features to an already function-rich environment.
Java 2 Software Development Kit (SDK) V1.2.2 Support
SDK 1.2.2 is the latest version of the Java Virtual Machine base required within the server run-time environment. WebSphere now supports SDK 1.2.2 and includes it in the product package across all supported operating platforms. SDK 1.2.2 increases performance and helps improve dynamic and static Web content.
Product Integration
WebSphere, core to the foundation services of the IBM e-business platform, is easily integrated with other leading products for a robust set of application solution offerings.
- Integration with WebSphere Commerce Suite is improved with performance and security features.
- Integration with VisualAge for Java Enterprise Edition is enhanced with support for the JDK 1.2 base across platforms when generating e-business applications such as servlets, JavaServer Pages, JavaBeans, and Enterprise JavaBeans components.
Usability
Administrative procedures improve usability while maintaining a rich and flexible set of selectable configurations and prerequisites for tailored, open, and complete solutions.
WebSphere Application Server
WebSphere Application Server V3.5 Advanced Edition for AS/400 maintains the functions of the previous version, including:
- Rich e-business application deployment environment with a comprehensive set of application services for transaction management, security, clustering, performance, availability, connectivity, and scalability
- Leading-edge implementations of the most up-to-date Java-based technologies and APIs
- Comprehensive connectivity support for interacting with relational databases, object databases, transaction processing systems, MQ-enabled applications, and hierarchical databases
- Migration path to a more scalable and transactional environment from WebSphere Standard Edition for a complete Java-based Web application platform
- Focus on medium- to high-level transactional environments used in conjunction with dynamic Web content generation and Web-initiated transactions
- Performance and scaling attributes that support bean-managed and container-managed persistence for entity beans and session beans with transaction management and monitoring
- Container management and persistent storage within a transactional environment for servlets and Enterprise JavaBean components
- Rich XML parsing and configuration environment
- Tivoli®-ready modules that can be managed by Tivoli-based tools
Related Information
Customers with WebSphere V3.0 Advanced Edition who upgrade to WebSphere V3.5 will receive Site Analyzer V3.5 with the upgrade.
Year 2000
This product is Year 2000 ready. When used in accordance with its associated documentation, it is capable of correctly processing, providing, or receiving date data within and between the twentieth and twenty-first centuries, provided that all products (for example, hardware, software, and firmware) used with the product properly exchange accurate date data with it.
The service end date for this Year 2000 ready product is December 31, 2002.
Trademarks
WebSphere and AS/400e are trademarks of International Business Machines Corporation in the United States or other countries or both.
AS/400, OS/400, AIX, RS/6000, and VisualAge are registered trademarks of International Business Machines Corporation in the United States or other countries or both.
Windows and Windows NT are trademarks of Microsoft Corporation.
Java is a trademark of Sun Microsystems, Inc.
Tivoli is a registered trademark of Tivoli Systems, Inc. in the United States or other countries or both. In Denmark, Tivoli is a trademark licensed from Kjobenhavns Sommer—Tivoli A/S.
Other company, product, and service names may be trademarks or service marks of others.
Offering Information
Product information is available through Offering Information (OITOOL) at:
http://www.ibm.com/wwoi
Publications
No publications are shipped with this program. An HTML version of the product documentation is available on the product CD-ROM. For optimal viewing of this documentation users need a Web browser that supports HTML 4 and Cascading Style Sheet (CSS). Terms and conditions for use of the machine-readable files are shipped with the files.
Technical Information
Hardware Requirements
AS/400® Hardware Requirements
- Systems hosting applications using Enterprise JavaBeans:
- AS/400e™ Server 170 with processor feature number 2385 or AS/400e Server 720 with processor feature number 2062 (recommended minimum)
- AS/400e Server 270 with processor feature number 2252 or AS/400e Server 820 with processor feature number 2396 (recommended minimum)
- Recommended minimum 1 GB memory
- Systems hosting applications using servlets and JavaServer Pages only:
- AS/400e Server 170 with processor feature number 2292 or AS/400e Server 720 with processor feature number 2061 (recommended minimum)
- AS/400e Server 270 with processor feature number 2248 or AS/400e Server 820 with processor feature number 2395 (recommended minimum)
- Recommended minimum 512 MB memory
- Systems below this recommended minimum can be used in environments supporting a small number of users and where longer server initialization times can be tolerated.
- Disk requirements:
- *BASE (client application development software only): 500 MB during installation, 250 MB after installation
- Option 1 (includes *BASE and WebSphere™ Application Server environment): 650 MB during installation, 450 MB after installation
- Communications adapter that supports TCP/IP
- AS/400 Workload Estimator for help with sizing all system configurations
http://as400service.ibm.com/estimator
Workstation Hardware Requirements
- For Windows NT™:
- Any Intel®-based PC running Windows NT V4.0 or Windows™ 2000
- Support for a communications adapter or an appropriate network interface
- Minimum 40 MB free disk space
- Minimum 96 MB memory
- CD-ROM
- For AIX®:
- RS/6000® or RS/6000 SP™ running AIX V4.3.3.02, or later (5765-C34, 5765-655)
- Support for a communications adapter or an appropriate network interface
- Minimum 40 MB free disk space
- Minimum 96 MB memory
- CD-ROM
- For HP:
- HP workstation running HP-UX 11.0, or later
- Support for a communications adapter or an appropriate network interface
- Minimum 40 MB free disk space
- Minimum 96 MB memory
- CD-ROM
Software Requirements
AS/400 Software Requirements
- OS/400® V4R4, or later (in an unrestricted state), to install and run
- Latest database enhancement PAK or group PTF SF99104
• Java Development Kit 1.2.2 provided via the AS/400 Developer Kit for Java (5769-JV1)
• OS/400 HostServers (5769-SS1 Option 12) for remote installation (installing to your AS/400 system from the CD-ROM of another workstation)
• OS/400 QShell Interpreter (5769-SS1 Option 30) for local installation from the CD-ROM
• AS/400 TCP/IP Connectivity Utilities/400 (5769-TC1) for remote installation (installing to your AS/400 system from the CD-ROM of another workstation)
DB2 Universal Database® (UDB) for AS/400 must be configured to work with WebSphere Application Server V3.5 Advanced Edition for AS/400 to run the WebSphere Application Server on AS/400.
DB2® Query Manager and SQL Development Kit for AS/400 (5769-ST1) is an optional requirement for developing client applications.
Note: If you need advanced schema mapping to database tables in Container Managed Persistent Entity Beans, you also need VisualAge® for Java Enterprise Edition 3.5.
Workstation Software Requirements
For Windows NT:
• Windows NT V4.0
• Java 2 SDK V1.2.2
• Web browser that supports HTML 4 and CSS
For Windows 2000:
• Windows 2000
• Java 2 SDK V1.2.2
• Web browser that supports HTML 4 and CSS
For Sun Solaris:
• Sun Solaris V2.6 or V7 at the latest available maintenance level
• Java 2 SDK V1.2.2.03
• Web browser that supports HTML 4 and CSS
For AIX:
• AIX V4.3.3.02, or later (5765-C34, 5765-655)
• Java 2 SDK V1.2.2.03
• Web browser that supports HTML 4 and CSS
For HP-UX:
• HP-UX 11.0, or later, at a maintenance level of April 2000, or later
• Java 2 SDK V1.2.2.03
• Web browser that supports HTML 4 and CSS
Install the WebSphere Application Server Administrative Console on your Windows NT, Sun Solaris, HP, or AIX workstation. The WebSphere Administrative Console requires 40 MB hard drive space and 64 MB RAM on your workstation.
TCP/IP must be installed and running on your workstation.
VisualAge for Java Enterprise Edition 3.5 is required if you need advanced data mapping to database tables in container-managed persistent entity beans.
Planning Information
Packaging: WebSphere Application Server V3.5 Advanced Edition for AS/400 is shipped in one package that contains the following:
• IBM International Program License Agreement (translated)
• Service and Support Information Sheet
• License information
• Proof of Entitlement (PoE)
• Ten CD-ROMs that contain the program code and the product documentation
This program, when downloaded from a Web site, contains the applicable IBM license agreement and License Information (LI), if appropriate, which will be presented for acceptance at the time of installation of the program. The license and LI will be stored in a file such as LICENSE.TXT for future reference.
Security, Auditability, and Control
WebSphere Application Server V3.5 Advanced Edition for AS/400 uses the security and auditability features of OS/400.
The customer is responsible for evaluation, selection, and implementation of security features, administrative procedures, and appropriate controls in application systems and communication facilities.
Ordering Information
WebSphere Application Server V3.5 Advanced Edition for AS/400 is a stand-alone product with one charge unit — processors. It is available in units of one. The Processor Program Package feature (CD-ROM) is required for the first processor on the first server. A one-processor entitlement is required for each processor beyond the first one.
WebSphere Application Server V3.5 Advanced Edition for AS/400 is not available at a priced upgrade but is no-charge if you have AS/400 Software Subscription.
The price of WebSphere Application Server Version 3.5 Advanced Edition for AS/400 is based on the number of processors in the system.
Ordering Information (BTP)
<table>
<thead>
<tr>
<th>Description</th>
<th>Program Number</th>
<th>Part Number</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>WebSphere Application Server V3.5 Advanced Edition for AS/400</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>128-bit encryption, Processor Program Package</td>
<td>5733-WA3</td>
<td>11K6835</td>
</tr>
<tr>
<td>128-bit encryption, 1 Processor Entitlement</td>
<td>5733-WA3</td>
<td>11K6838</td>
</tr>
<tr>
<td><strong>Electronic Delivery</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>128-bit encryption</td>
<td>5733-WA3</td>
<td>11K6840</td>
</tr>
</tbody>
</table>
Passport Advantage part numbers have been previously announced. For additional Passport Advantage information, ordering information, and charges, access the Web site at:
http://w3.Lotus.com/passportadvantage
<table>
<thead>
<tr>
<th>Description</th>
<th>Media Part Number</th>
<th>Feature Number</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>WebSphere Application Server V3.5 Advanced Edition for AS/400</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>128-bit Processor Program Package Media Pack</td>
<td>BA6J6NA</td>
<td></td>
</tr>
</tbody>
</table>
Ordering Information (BTO)
An upgrade order to V3.5 from V3.0.2 is no-charge but requires an AS/400 Software Subscription. The upgrade order is similar to an initial order except that no billing feature is included.
Upgrades
<table>
<thead>
<tr>
<th>Description</th>
<th>OTC¹ Feature</th>
<th>Feature</th>
<th>Program</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>WebSphere Application Server V3.5 Advanced Edition for AS/400</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>CD-ROM, 128-bit encryption, Processor Program Package</td>
<td>1001</td>
<td>5849</td>
<td>5733-WA3</td>
</tr>
<tr>
<td>128-bit encryption, Processor Entitlement</td>
<td>1002</td>
<td></td>
<td>5733-WA3</td>
</tr>
</tbody>
</table>

¹ One-time charge
**Terms and Conditions**
Licensing: IBM International Program License Agreement. PoE are required for all authorized use.
License Information Form Number: CT7HAIE
Limited Warranty Applies: Yes
Program Services: Available until December 31, 2002
The service availability period for WebSphere Application Server V3.0.2 Advanced Edition for AS/400 has been extended from August 31, 2001, to December 31, 2001.
Money-Back Guarantee: Two-month, money-back guarantee
Copy and Use on Home/Portable Computer: No
Volume Orders (IVO): Yes, contact your IBM representative.
Passport Advantage Applies: Yes
Passport Advantage Subscription Applies: Yes
Upgrades: Customers can acquire upgrades up to the currently authorized level of use of the qualifying programs.
Support Line: Yes
Other Support: AS/400
AIX/UNIX® Upgrade Protection Applies: No
Entitled Upgrade for Current AIX/UNIX Upgrade Protection Licensees: No
AS/400 Software Subscription Applies: Yes
Variable Charges Apply: No
Educational Allowance Available: Yes, 15% education allowance applies to qualified education institution customers.
### Charges
#### Build to Plan
<table>
<thead>
<tr>
<th>Description</th>
<th>Part Number</th>
<th>OTC</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>WebSphere Application Server V3.5 Advanced Edition for AS/400</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>128-bit encryption, Processor Program Package</td>
<td>11K6835</td>
<td>$7,500</td>
</tr>
<tr>
<td>128-bit encryption, 1 Processor Entitlement</td>
<td>11K6838</td>
<td>7,475</td>
</tr>
<tr>
<td>128-bit encryption, Upgrade from V3.0.2, Processor Program Package</td>
<td>11K6843</td>
<td>3,749</td>
</tr>
<tr>
<td>Upgrade from V3.0.2, 1 Processor Entitlement</td>
<td>11K6845</td>
<td>3,739</td>
</tr>
<tr>
<td>128-bit encryption, Electronic Delivery</td>
<td>11K6840</td>
<td>7,475</td>
</tr>
<tr>
<td>128-bit encryption, Upgrade from V3.0.2, Electronic Delivery</td>
<td>11K6842</td>
<td>3,739</td>
</tr>
</tbody>
</table>
Passport Advantage
Note: For Passport Advantage charges, contact your Lotus representative or authorized Lotus Business Partner. Additional information is also available on the Passport Advantage Web site:
Customer Financing: IBM Global Financing offers attractive financing to credit-qualified commercial and government customers and Business Partners in more than 40 countries around the world. IBM Global Financing is provided by the IBM Credit Corporation in the United States. Offerings, rates, terms, and availability may vary by country. Contact your local IBM Global Financing organization. Country organizations are listed on the Web at:
#### Order Now
Use Priority/Reference Code: AE001
Phone: 800-IBM-CALL
Fax: 800-2IBM-FAX
Internet: ibm_direct@us.ibm.com
Mail: IBM Atlanta Sales Center
Dept. AE001
P.O. Box 2690
Atlanta, GA 30301-2690
You can also contact your local IBM Business Partner or IBM representative. To identify them, call 800-IBM-4YOU.
Note: Shipments will begin after the planned availability date.
### Trademarks
AS/400, WebSphere, and SP are trademarks of International Business Machines Corporation in the United States or other countries or both.
AS/400, AIX, RS/6000, OS/400, DB2 Universal Database, DB2, and VisualAge are registered trademarks of International Business Machines Corporation in the United States or other countries or both.
Intel is a registered trademark of Intel Corporation.
Windows NT and Windows are trademarks of Microsoft Corporation.
Java is a trademark of Sun Microsystems, Inc.
UNIX is a registered trademark in the United States and other countries exclusively through X/Open Company Limited.
Lotus is a registered trademark of Lotus Development Corporation.
Other company, product, and service names may be trademarks or service marks of others.
### Year 2000 Readiness Disclosure
Abstraction and Concurrency
Two fundamental concepts to build larger software are:
- **abstraction**: an object storing certain data and providing certain functionality may be used without reference to its internals
- **composition**: several objects can be combined to a new object without interference
Both **abstraction** and **composition** are closely related, since the ability to compose depends on the ability to abstract from details.
Consider an example:
- a linked list data structure exposes a fixed set of operations to modify the list structure, such as `push()` and `forAll()`
- a set object may internally use the list object and expose a set of operations, including `push()`
The `insert()` operation uses the `forAll()` operation to check if the element already exists and uses `push()` if not.
Wrapping each linked-list operation in a mutex does not suffice to make the `set` thread-safe:
- instead, wrap the two calls inside `insert()` in a mutex
- but the other list operations can still be called concurrently → they must use the *same* mutex
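As a sketch of this fix, the set can own a single mutex that guards both the composite `insert()` and every re-exported list operation; the class and method names here are illustrative, not from the original example:

```cpp
#include <algorithm>
#include <list>
#include <mutex>

// Unsynchronized list: exposes push() and a forAll-style membership search.
class List {
public:
    void push(int v) { items_.push_back(v); }
    bool contains(int v) const {
        return std::find(items_.begin(), items_.end(), v) != items_.end();
    }
private:
    std::list<int> items_;
};

// Set built on the list. ONE mutex guards both the composite insert()
// and every other exposed operation, so no caller can interleave between
// the membership check and the push.
class Set {
public:
    bool insert(int v) {
        std::lock_guard<std::mutex> g(m_);  // held across the whole check-then-act
        if (list_.contains(v)) return false;
        list_.push(v);
        return true;
    }
    bool contains(int v) {
        std::lock_guard<std::mutex> g(m_);
        return list_.contains(v);
    }
private:
    std::mutex m_;
    List list_;
};
```

Because `contains`-then-`push` inside `insert()` holds the one mutex for the whole sequence, no other operation on the same set can interleave with it.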
Transactional Memory [2]
Idea: automatically convert atomic blocks into code that ensures atomic execution of the statements.
```plaintext
atomic {
// code
if (cond) retry;
atomic {
// more code
}
// code
}
```
Execute code as transaction:
- execute the code of an atomic block
- nested atomic blocks act like a single atomic block
- check that it runs without conflicts due to accesses from another thread
- if another thread interferes through conflicting updates:
- undo the computation done so far
- re-start the transaction
- provide a retry keyword similar to the wait of monitors
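The abort-and-restart cycle can be illustrated in miniature with a single-word "transaction" built on compare-and-swap; this is only an analogy for the commit-time conflict check, not a full TM, and the function name is made up:

```cpp
#include <atomic>

// Optimistic update: read a snapshot, compute, and publish only if no other
// thread changed the word in the meantime. A failed compare_exchange plays
// the role of a detected conflict at commit: the tentative result is
// discarded (nothing was written) and the "transaction" re-starts.
int transactional_increment(std::atomic<int>& x, int delta) {
    for (;;) {
        int snapshot = x.load();                        // start: consistent snapshot
        int result = snapshot + delta;                  // body of the atomic block
        if (x.compare_exchange_weak(snapshot, result))
            return result;                              // commit succeeded
        // conflict detected: abort (discard result) and re-start the loop
    }
}
```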
---
**Managing Conflicts**
**Definition (Conflicts)**
A conflict *occurs* when two transactions access the same piece of data; it is *detected* when the TM system observes this; and it is *resolved* when the TM system takes action (by delaying or aborting a transaction).
Design choices for transactional memory implementations:
- **optimistic vs. pessimistic concurrency control**:
- pessimistic: detection/resolution when the conflict is about to occur
- resolution here is usually delaying one transaction
- can be implemented using locks: deadlock problem
- optimistic: detection and resolution happen after a conflict occurs
- resolution here must be aborting one transaction
- need to repeat aborted transaction: livelock problem
- **eager vs. lazy version management**: how read and written data are managed during the transaction
- eager: writes modify the memory and an undo-log is necessary if the transaction aborts
- lazy: writes are stored in a redo-log and modifications are done on committing
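Lazy version management can be sketched as a redo-log keyed by address: writes are buffered, reads must consult the buffer first so a transaction sees its own writes, and memory is only touched on commit. All names in this sketch are illustrative:

```cpp
#include <map>

// Lazy versioning: writes go into a redo-log (address -> value);
// memory is modified only when the transaction commits.
struct RedoLogTx {
    std::map<int*, int> redoLog;

    void write(int* addr, int value) { redoLog[addr] = value; }

    int read(int* addr) {
        auto it = redoLog.find(addr);
        return it != redoLog.end() ? it->second : *addr;  // own write wins
    }

    void commit() {                        // apply the buffered writes
        for (auto& [addr, value] : redoLog) *addr = value;
        redoLog.clear();
    }

    void abort() { redoLog.clear(); }      // nothing to undo: memory untouched
};
```

An abort is cheap under lazy versioning: the redo-log is simply discarded, whereas eager versioning would have to replay an undo-log.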
---
**Choices for Optimistic Concurrency Control**
Design choices for TM that allow conflicts to happen:
1. **granularity of conflict detection**: may be a cache-line or an object, false conflicts possible
2. **conflict detection**:
- eager: conflicts are detected when memory locations are first accessed
- validation: check occasionally that there is no conflict yet, always validate when committing
- lazy: conflicts are detected when committing a transaction
3. **reference of conflict** (for non-eager conflict detection)
- tentative: detect conflicts before transactions commit, e.g., aborting when transaction T_A reads while T_B may write the same location
- committed: detect conflicts only against transactions that have committed
Semantics of Transactions
The goal is to use transactions to specify atomic executions. Transactions are rooted in databases where they have the ACID properties:
- **Atomicity**: a transaction completes or seems not to have run
- we call this failure atomicity to distinguish it from atomic executions
- **Consistency**: each transaction transforms a consistent state to another consistent state
- a consistent state is one in which certain invariants hold
- invariants depend on the application (e.g., queue data structure)
- **Isolation**: transactions do not interfere with each other
- not so evident with respect to non-transactional memory
- **Durability**: the effects are permanent
Transactions themselves must be **serializable**:
- the result of running concurrent transactions must be identical to one execution of them in sequence
- serializability for transactions is insufficient to perform synchronization between threads
Consistency During Transactions
**ACID states how committed transactions behave but not what may happen until a transaction commits.**
- a transaction that is run on an inconsistent state may generate an inconsistent state -- zombie transaction
- this is usually ok since it will be aborted eventually
- but transactions may cause havoc when run on inconsistent states
```
// preserved invariant: x == y

// Thread 1                    // Thread 2
atomic {                       atomic {
  int tmp1 = x;                  x = 10;
  int tmp2 = y;                  y = 10;
  assert(tmp1 - tmp2 == 0);    }
}
```
→ critical for C/C++ if, for instance, the variables are pointers
**Definition (opacity)**
A TM system provides opacity if failing transactions are serializable w.r.t. committing transactions.
- failing transactions still see a consistent view of memory
Weak- and Strong Isolation
If guarantees are only given about memory accessed inside atomic, a TM implementation provides weak isolation. Can we mix transactions with code accessing memory non-transactionally?
- no conflict detection for non-transactional accesses
- standard race problems as in unlocked shared accesses
```cpp
// Thread 1
atomic {
  x = 42;
}

// Thread 2 (non-transactional access)
int tmp = x;
```
```
- give programs with races the same semantics as if using a single global lock for all atomic blocks
- strong isolation: retain order between accesses to TM and non-TM
Definition (SLA)
The single-lock atomicity is a model in which the program executes as if all transactions acquire a single, program-wide mutual exclusion lock.
Properties of Single-Lock Atomicity
Observation:
- SLA enforces order between TM and non-TM accesses
- this guarantees strong isolation between TM and non-TM accesses
- within one transaction, accesses may be re-ordered
- the content of non-TM memory conveys information which atomic block has executed, even if the TM regions do not access the same memory
- SLA makes it possible to use atomic block for synchronization
---
**Like sequential consistency**, SLA is a statement about program equivalence
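Taken literally, SLA can be sketched as one program-wide mutex around every atomic block; this is the reference behaviour a TM implementation must be equivalent to, not a suggested implementation (names are illustrative):

```cpp
#include <mutex>

// SLA, literally: every atomic block acquires the single program-wide lock.
std::mutex sla_lock;

template <typename Body>
void atomic_block(Body body) {
    std::lock_guard<std::mutex> g(sla_lock);  // "start transaction"
    body();                                    // runs in mutual exclusion
}                                              // "commit" = release the lock
```

Note that two atomic blocks over disjoint data still serialize on `sla_lock`, which is exactly the barrier effect and lost parallelism that make SLA too strong in practice.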
Disadvantages of the SLA model
The SLA model is *simple* but often too strong:
1. SLA has a *weaker progress* guarantee than a transaction should have.
```
// Thread 1
atomic {
while (true) {};
}
// Thread 2
atomic {
int tmp = x; // x in TM
}
```
2. SLA correctness is too strong in practice.
```
// Thread 1
data = 1;
atomic { }   // empty atomic block
ready = 1;

// Thread 2
atomic {     // runs while Thread 1 is not in its atomic block
  int tmp = data;
  if (ready) {
    // use tmp
  }
}
```
- under the SLA model, atomic {} acts as barrier
- intuitively, the two transactions should be independent rather than synchronize
→ we need a weaker model that allows a more flexible implementation of *strong isolation*
Transactional Sequential Consistency
How about a more permissive view of transaction semantics?
- TM should not have the blocking behaviour of locks
- the programmer cannot rely on synchronization
**Definition (TSC)**
The *transactional sequential consistency* is a model in which the accesses within each transaction are sequentially consistent.
(Figure: two threads each execute `atomic { k = i + j; }`; the diagram shows the interleavings admitted under TSC.)
- TSC is weaker: gives *strong isolation*, but allows parallel execution ✔️
- TSC is stronger: accesses within a transaction may not be re-ordered ❌
→ actual implementations use TSC with some *race-free* re-orderings
**Translation of atomic-Blocks**
A TM system must track which shared memory locations are accessed:
- convert every read access to a shared variable `x` into `ReadTx(&x)`
- convert every write access `x = e` to a shared variable into `WriteTx(&x, e)`
Convert atomic blocks as follows:
```
atomic {
  StartTx();
  // code with ReadTx and WriteTx
  while (!CommitTx());
}
```
- translation can be done using a pre-processor
- determining a minimal set of memory accesses that need to be transactional requires a good static analysis
- idea: translate all accesses to global variables and the heap as TM
- more fine-grained control using manual translation
- an actual implementation might provide a `retry` keyword
  - when executing `retry`, the transaction aborts and re-starts
  - the transaction will again wind up at `retry` unless its *read set* changes
  - → block until a variable in the read set has changed
  - similar to condition variables in monitors
A Software TM Implementation
A software TM implementation allocates a transaction descriptor to store data specific to each atomic block, for instance:
- **undo-log** of writes if writes have to be undone if a commit fails
- **redo-log** of writes if writes are postponed until a commit
- **read-** and **write-set**: locations accessed so far
- **read-** and **write-version**: time stamp when value was accessed
Consider the TL2 STM (software transactional memory) algorithm [1]:
- provides **opacity**: zombie transactions do not see inconsistent state
- uses **lazy versioning**: writes are stored in a redo-log and done on commit
- uses validating **conflict detection**: accessing a modified address aborts
TL2 stores a global version counter and:
- a read version in each object (allocate a few bytes more in each call to malloc, or inherit from a transaction object in e.g. Java)
- a redo-log in the transaction descriptor
- a read- and a write-set in the transaction descriptor
- a read-version: the version when the transaction started
Committing a Transaction
A transaction can succeed if none of the read locations has changed:
```c
bool CommitTx(TMDesc tx) {
  foreach (e in tx.writeSet)                 // lock every written object
    if (!try_wait(e.obj.sem)) goto Fail;
  WV = FetchAndAdd(&globalClock);            // draw a new write-version
  foreach (e in tx.readSet)                  // validate the read set
    if (e.obj.version > tx.RV) goto Fail;
  foreach (e in tx.redoLog)                  // apply the buffered writes
    e.obj[e.offset] = e.value;
  foreach (e in tx.writeSet) {
    e.obj.version = WV;                      // publish the new version
    signal(e.obj.sem);                       // unlock
  }
  return true;
Fail:
  // signal all acquired semaphores
  return false;
}
```
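For illustration only, the descriptor and commit logic can be condensed into a runnable single-threaded sketch. The names mirror the pseudocode, but the per-object semaphores are omitted, so only the version-based validation is shown:

```cpp
#include <atomic>
#include <map>
#include <vector>

// Each shared object carries a value and the version at which it was written.
struct Obj { int value = 0; long version = 0; };

std::atomic<long> globalClock{0};   // global version counter

struct Tx {
    long RV;                        // read-version: clock at transaction start
    std::vector<Obj*> readSet;
    std::map<Obj*, int> redoLog;    // lazy versioning: buffered writes

    Tx() : RV(globalClock.load()) {}

    // Returns false (abort) if the object was written after we started:
    // this is the opacity check that kills zombie transactions.
    bool read(Obj* o, int& out) {
        if (o->version > RV) return false;
        readSet.push_back(o);
        auto it = redoLog.find(o);
        out = it != redoLog.end() ? it->second : o->value;
        return true;
    }

    void write(Obj* o, int v) { redoLog[o] = v; }

    bool commit() {
        long WV = globalClock.fetch_add(1) + 1;      // new write-version
        for (Obj* o : readSet)
            if (o->version > RV) return false;       // validate the read set
        for (auto& [o, v] : redoLog) { o->value = v; o->version = WV; }
        return true;
    }
};
```

A transaction whose `RV` is stale relative to a conflicting commit fails either at `read` (opacity) or at `commit` (read-set validation).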
Properties of TL2
Opacity is guaranteed by aborting a read access with an inconsistent value:
```
StartTx → ReadTx → WriteTx → ReadTx → CommitTx
```
Other observations:
- read-only transactions just need to check that read versions are consistent (no need to increment the global clock)
- writing values still requires locks
- deadlocks are still possible
- since other transactions can be aborted, one can preempt transactions that are deadlocked
- since lock accesses are generated, computing a lock order up-front might be possible
- at least two memory barriers are necessary in `ReadTx`:
  - read version and lock, fence, read value, fence, re-read version
- there might be contention on the global clock
General Challenges when using TM
Executing atomic blocks by repeatedly trying to execute them non-atomically creates new problems:
- a transaction might unnecessarily be aborted
- the granularity of what is locked might be too large
- a TM implementation might impose restrictions:
```
// Thread 1
atomic {                  // clock = 12, tx.RV = 12
  ...
  int r = ReadTx(&x, 0);
}

// Thread 2
atomic {
  WriteTx(&x, 0, 42);     // commit advances the clock to 13
}
```
- lock-based commits can cause contention; an alternative:
  - organize the cells that participate in a transaction in one object
  - compute a new object as the result of a transaction
  - atomically replace a pointer to the old object with a pointer to the new object if the old object has not changed
  - → the idea of the original STM proposal
- TM system should figure out which memory locations must be logged
- danger of live-locks: transaction B might abort A which might abort B ...
Integrating Non-TM Resources
Allowing access to other resources than memory inside an atomic block poses problems:
- storage management, condition variables, volatile variables, input/output
- semantics should be as if atomic implements SLA or TSC semantics
Usual choice is one of the following:
- **Prohibit It.** Certain constructs do not make sense. Use compiler to reject these programs.
- **Execute It.** I/O operations may only happen in some runs (e.g. file writes usually go to a buffer). Abort if I/O happens.
- **Irrevocably Execute It.** Universal way to deal with operations that cannot be undone: enforce that this transaction terminates (possibly before starting) by making all other transactions conflict.
- **Integrate It.** Re-write code to be transactional: error logging, writing data to a file, ...
Hardware Transactional Memory
Transactions of a limited size can also be implemented in hardware:
- additional hardware to track read- and write-sets
- conflict detection is **eager** using the cache:
- additional hardware makes it cheap to perform conflict detection
- if a cache-line in the read set is invalidated, the transaction aborts
- if a cache-line in the write set must be written-back, the transaction aborts
- limited by fixed hardware resources, a software backup must be provided
Two principal implementation of HTM:
1. **Explicit Transactional HTM**: each access is marked as transactional
- similar to StartTx, ReadTx, WriteTx, and CommitTx
- requires separate transaction instructions
- a transaction has to be translated differently
- mixing transactional and non-transactional accesses is problematic
2. **Implicit Transactional HTM**: only the beginning and end of a transaction are marked
- same instructions can be used, hardware interprets them as transactional
- only instructions affecting memory that can be cached can be executed transactionally
- hardware access, OS calls, page table changes, etc. all abort a transaction
- provides **strong isolation**
Example for HTM
**AMD Advanced Synchronization Facilities (ASF):**
- defines a logical **speculative region**
- **LOCK/MOV** instructions provide **explicit** data transfer between normal memory and speculative region
- aimed to implement larger atomic operations
Intel’s TSX in Broadwell/Skylake microarchitecture (since Aug 2014):
- **implicit transactional**, can use normal instructions within transactions
- tracks read/write set using a single **transaction** bit on cache lines
- provides space for a backup of the whole CPU state (registers, ...)
- use a simple counter to support nested transactions
- may abort at any time due to lack of resources
- aborting in an inner transaction means aborting all of them
Intel provides two software interfaces to TM:
1. **Restricted Transactional Memory (RTM)**
2. **Hardware Lock Elision (HLE)**
Restricted Transactional Memory (Intel)
Provides new instructions `XBEGIN`, `XEND`, `XABORT`, and `XTEST`:
- `XBEGIN` takes an instruction address where execution continues if the transaction aborts
- `XEND` commits the transaction started by the last `XBEGIN`
- `XABORT` aborts the current transaction with an error code
- `XTEST` checks if the processor is executing transactionally
The `XBEGIN` instruction is exposed as a C function (compiler intrinsic `_xbegin()`):
```c
int data[100]; // shared

void update(int idx, int value) {
  if (_xbegin() == -1) {   // -1: transaction started successfully
    data[idx] += value;
    _xend();
  } else {
    // transaction failed
  }
}
```
Considerations for the Fall-Back Path
Consider executing the following code in parallel with itself:
```c
int data[100]; // shared
void update(int idx, int value) {
if (_xbegin() == -1) {
data[idx] += value;
_xend();
} else {
data[idx] += value;
}
}
```
Protecting the Fall-Back Path
Use a lock to prevent the transaction from interrupting the fall-back path:
```c
int data[100]; // shared
int mutex = 1; // semaphore: 1 = free, 0 = held

void update(int idx, int value) {
  if (_xbegin() == -1) {
    data[idx] += value;
    _xend();
  } else {
    wait(mutex);
    data[idx] += value;
    signal(mutex);
  }
}
```
- Fall-back path may not run in parallel with others
- **⚠️** Transactional region may not run in parallel with fall-back path
Protecting the Fall-Back Path
Use a lock to prevent the transaction from interrupting the fall-back path:
```c
int data[100]; // shared
int mutex = 1; // semaphore: 1 = free, 0 = held

void update(int idx, int value) {
  if (_xbegin() == -1) {
    if (mutex == 0) _xabort(); // abort if the fall-back path holds the lock
    data[idx] += value;
    _xend();
  } else {
    wait(mutex);
    data[idx] += value;
    signal(mutex);
  }
}
```
- ✔️ fall-back path may not run in parallel with others
- ✔️ transactional region aborts if it observes the fall-back path holding the lock
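The overall shape of fast path plus locked fall-back can be exercised even without TSX hardware by emulating `_xbegin` as a function that always reports an abort; all names and helpers in this sketch are illustrative:

```cpp
#include <mutex>
#include <thread>
#include <vector>

static std::mutex fallback_mutex;

// Emulation: the "hardware transaction" never starts (as on a CPU without
// TSX), so every update takes the locked fall-back path. On real hardware
// this would wrap _xbegin() and also abort when the mutex is observed held.
inline bool tm_begin() { return false; }

void update(std::vector<int>& data, int idx, int value) {
    if (tm_begin()) {
        data[idx] += value;                             // fast path (unreachable here)
    } else {
        std::lock_guard<std::mutex> g(fallback_mutex);  // protected fall-back
        data[idx] += value;
    }
}

// Drive the pattern from several threads hammering the same slot.
void run_concurrent_updates(std::vector<int>& data, int threads, int iters) {
    std::vector<std::thread> ts;
    for (int t = 0; t < threads; ++t)
        ts.emplace_back([&] { for (int i = 0; i < iters; ++i) update(data, 0, 1); });
    for (auto& th : ts) th.join();
}
```

Even with the transaction path disabled, the fall-back lock alone keeps the updates race-free, which is exactly what the lock-elision pattern relies on when transactions abort.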
Implementing RTM using the Cache
- augment each cache line with an extra bit T
- use a nesting counter C and a backup register set
→ additional transaction logic:
- `XBEGIN`: increments C and, if C = 0, backs up the registers
- a read/write access to a cache line sets its T bit if C > 0
- applying an *invalidate* message from the *invalidate queue* to a cache line with T = 1 issues `XABORT`
- observing a *read* message for a *modified* cache line with T = 1 issues `XABORT`
- `XABORT`: transitions all cache lines with T = 1 to *invalid*, sets C = 0, and restores the CPU registers
- `XEND`: decrements C and, if C = 0, commits by clearing all T flags
Illustrating Transactions
Augment the MESI state with an extra bit T per cache line. Initially, CPU A holds the line in state E, CPU B in state I.
**Thread A**
```c
_xbegin();
int tmp = data[idx];
data[idx] = tmp + value;
_xend();
```
**Thread B**
```c
_xbegin();
int tmp = data[idx];
data[idx] = tmp + value;
_xend();
```
![Diagram showing the interaction between threads A and B]
2021 IEEE International Conference on Software Maintenance and Evolution (ICSME 2021)
Luxembourg City, Luxembourg
27 September – 1 October 2021
2021 IEEE International Conference on Software Maintenance and Evolution (ICSME)
ICSME 2021
Table of Contents
Message from the General Co-Chairs and Program Co-Chairs ....................................................... xv
Organizing Committee ......................................................................................................................... xvii
Research Track
Evaluating The Energy Consumption of Java I/O APIs ................................................................. 1
Zakaria Ournani (Orange Labs / Inria / Univ. Lille, France), Romain Rouvoy (Univ. Lille / Inria / ILIF, France), Pierre Rust (Orange Labs, France), and Joel Penhoat (Orange Labs, France)
Improving Traceability Link Recovery Using Fine-grained Requirements-to-Code Relations ........ 12
Tobias Hey (Karlsruhe Institute of Technology, Germany), Fei Chen (Karlsruhe Institute of Technology, Germany), Sebastian Weigelt (Karlsruhe Institute of Technology, Germany), and Walter F. Tichy (Karlsruhe Institute of Technology, Germany)
SMARTGIFT: Learning to Generate Practical Inputs for Testing Smart Contracts ..................... 23
Teng Zhou (Nanjing University of Aeronautics and Astronautics, China), Kui Liu (Nanjing University of Aeronautics and Astronautics, China; State Key Laboratory of Mathematical Engineering and Advanced Computing, China), Li Li (Monash University, Australia), Zhe Liu (Nanjing University of Aeronautics and Astronautics, China), Jacques Klein (University of Luxembourg, Luxembourg), and Tegawendé F. Bissyandé (University of Luxembourg, Luxembourg)
Revisiting Test Cases to Boost Generate-and-Validate Program Repair ........................................ 35
Jingtang Zhang (Nanjing University of Aeronautics and Astronautics, China), Kui Liu (Nanjing University of Aeronautics and Astronautics, China; State Key Laboratory of Mathematical Engineering and Advanced Computing, China), Dongsun Kim (Kyungpook National University, South Korea), Li Li (Monash University, Australia), Zhe Liu (Nanjing University of Aeronautics and Astronautics, China), Jacques Klein (University of Luxembourg, Luxembourg), and Tegawendé F. Bissyandé (University of Luxembourg, Luxembourg)
The Unit Test Quality of Deep Learning Libraries: A Mutation Analysis .................................................. 47
Li Jia (Shanghai Jiao Tong University, China), Hao Zhong (Shanghai
Jiao Tong University, China), and Linpeng Huang (Shanghai Jiao Tong
University, China)
Test Case Reduction: A Framework, Benchmark, and Comparative Study ............................................. 58
Patrick Kreutzer (Friedrich-Alexander University Erlangen-Nürnberg,
Germany), Toni Kunze (Friedrich-Alexander University Erlangen-Nürnberg,
Germany), and Michael Philippsen (Friedrich-Alexander University
Erlangen-Nürnberg, Germany)
You Look so Different: Finding Structural Clones and Subclones in Java Source Code .................... 70
Wolfram Amme (Friedrich Schiller University Jena, Germany), Thomas S.
Heinz (German Aerospace Center, Germany), and André Schäfer
(Friedrich Schiller University Jena, Germany)
Leveraging Intermediate Artifacts to Improve Automated Trace Link Retrieval ................................ 81
Alberto D. Rodriguez (California Polytechnic State University, USA),
Jane Cleland-Huang (University of Notre Dame, USA), and Davide Falessi
(University of Rome Tor Vergata, Italy)
Incorporating Multiple Features to Predict Bug Fixing Time with Neural Networks ......................... 93
Wei Yuan (Beihang University, China; Beijing Advanced Innovation
Center for Big Data and Brain Computing, China), Yuan Xiong (Beihang
University, China), Haitong Sun (Beihang University, China; Beijing
Advanced Innovation Center for Big Data and Brain Computing, China),
and Xudong Liu (Beihang University, China; Beijing Advanced Innovation
Center for Big Data and Brain Computing, China)
Analysis of Non-Discrimination Policies in the Sharing Economy ...................................................... 104
Miroslav Tushev (Louisiana State University), Fahimeh Ebrahimi
(Louisiana State University), and Anas Mahmoud (Louisiana State
University)
Towards Just-Enough Documentation for Agile Effort Estimation: What Information Should Be
Documented? ........................................................................................................................................ 114
Jirat Pasuksmit (The University of Melbourne, Australia), Patanamon
Thongtanunam (The University of Melbourne, Australia), and Shanika
Karunasekera (The University of Melbourne, Australia)
On the Evaluation of Commit Message Generation Models: An Experimental Study ...................... 126
Wei Tao (Fudan University), Yanlin Wang (Microsoft Research Asia),
Ensheng Shi (Xi’an Jiaotong University), Lun Du (Microsoft Research
Asia), Shi Han (Microsoft Research Asia), Hongyu Zhang (The University
of Newcastle), Dongmei Zhang (Microsoft Research Asia), and Wenqiang
Zhang (Fudan University)
Characterization and Automatic Updates of Deprecated Machine-Learning API Usages ................ 137
Stefanus A. Haryono (Singapore Management University, Singapore),
Ferdian Thung (Singapore Management University, Singapore), David Lo
(Singapore Management University, Singapore), Julia Lawall (Inria,
France), and Lingxiao Jiang (Singapore Management University,
Singapore)
A Method to Comprehend Feature Dependencies Based on Semi-Static Structures ....................... 148
Narumasa Kande (Konan University, Japan) and Naoya Nitta (Konan
University, Japan)
Dialogue Management for Interactive API Search ........................................ 274
Zachary Eberhart (University of Notre Dame, USA) and Collin McMillan
(University of Notre Dame, USA)
Ensemble Models for Neural Source Code Summarization of Subroutines .......... 286
Alexander LeClair (University of Notre Dame, USA), Aakash Bansal
(University of Notre Dame, USA), and Collin McMillan (University of
Notre Dame, USA)
Look Ahead! Revealing Complete Composite Refactorings and Their Smelliness Effects ........... 298
Ana Carla Bibiano (Pontifical Catholic University of Rio de Janeiro,
Brazil), Wesley K. G. Assunção (Pontifical Catholic University of Rio
de Janeiro, Brazil), Daniel Coutinho (Pontifical Catholic University
of Rio de Janeiro, Brazil), Kleber Santos (Federal University of
Campina Grande, Brazil), Vinicius Soares (Pontifical Catholic
University of Rio de Janeiro, Brazil), Rohit Gheyi (Federal University
of Campina Grande, Brazil), Alessandro Garcia (Pontifical Catholic
University of Rio de Janeiro, Brazil), Baldoino Fonseca (Federal
University of Alagoas, Brazil), Márcio Ribeiro (Federal University of
Alagoas, Brazil), Daniel Oliveira (Pontifical Catholic University of
Rio de Janeiro, Brazil), Caio Barbosa (Pontifical Catholic University
of Rio de Janeiro, Brazil), João Lucas Marques (Federal University of
Alagoas, Brazil), and Anderson Oliveira (Pontifical Catholic
University of Rio de Janeiro, Brazil)
Soundy Automated Parallelization of Test Execution .................................. 309
Shouvick Mondal (Indian Institute of Technology Madras, India), Denini
Silva (Federal University of Pernambuco, Brazil), and Marcelo d’Amorim
(Federal University of Pernambuco, Brazil)
Energy Efficient Guidelines for iOS Core Location Framework ......................... 320
Abdul Ali Bangash (University of Alberta, Canada), Daniil Tiganov
(University of Alberta, Canada), Karim Ali (University of Alberta,
Canada), and Abram Hindle (University of Alberta, Canada)
Design Smells in Deep Learning Programs: An Empirical Study ..................... 332
Amin Nikanjam (Polytechnique Montréal, Canada) and Foutse Khomh
(Polytechnique Montréal, Canada)
Understanding Quantum Software Engineering Challenges: An Empirical Study on Stack
Exchange Forums and GitHub Issues ....................................................... 343
Mohamed Raed El Aoun (Polytechnique Montréal, Canada), Heng Li
(Polytechnique Montréal, Canada), Foutse Khomh (Polytechnique
Montréal, Canada), and Moses Openja (Polytechnique Montréal, Canada)
Md Omar Faruk Rokon (UC Riverside), Pei Yan (UC Riverside), Risul
Islam (UC Riverside), and Michalis Faloutsos (UC Riverside)
SPICA: a Methodology for Reviewing and Analysing Fault Localisation Techniques ............. 366
Xiao-Yi Zhang (National Institute of Informatics, Japan) and Mingyue
Jiang (Zhejiang Sci-Tech University, China)
Cross-Language Code Coupling Detection: A Preliminary Study on Android Applications ..........378
Bo Shen (Peking University, China), Wei Zhang (Peking University, China), Ailun Yu (Peking University, China), Zhao Wei (Huawei Technologies Co., Ltd., China), Guangtai Liang (Huawei Technologies Co., Ltd., China), Haiyan Zhao (Peking University, China), and Zhi Jin (Peking University, China)
A First Look at Accessibility Issues in Popular GitHub Projects ........................................390
Tingting Bi (Monash University, Australia), Xin Xia (Monash University, Australia), David Lo (Singapore Management University, Singapore), and Aldeida Aleti (Monash University, Australia)
FluentCrypto: Cryptography in Easy Mode .................................................................402
Simon Kafader (University of Bern, Switzerland) and Mohammad Ghafari (University of Auckland, New Zealand)
An Evolutionary Analysis of Software-Architecture Smells ........................................413
Philipp Gnoyke (Otto-von-Guericke University Magdeburg, Germany), Sandro Schulze (University of Potsdam, Germany), and Jacob Krüger (Ruhr-University Bochum, Germany)
Assessing Generalizability of CodeBERT .............................................................425
Xin Zhou (Singapore Management University, Singapore), DongGyun Han (Singapore Management University, Singapore), and David Lo (Singapore Management University, Singapore)
Sirius: Static Program Repair with Dependence Graph-Based Systematic Edit Patterns ........437
Kunihiro Noda (Fujitsu Research, Japan), Haruki Yokoyama (Fujitsu Research, Japan), and Shinji Kikuchi (Fujitsu Research, Japan)
Task-Oriented API Usage Examples Prompting Powered By Programming Task Knowledge Graph ....448
Jiamou Sun (Australian National University, Australia), Zhenchang Xing (Australian National University, Australia), Xin Peng (Fudan University, China), Xiwei Xu (CSIRO, Australia), and Liming Zhu (CSIRO, Australia)
CAT: Change-Focused Android GUI Testing .........................................................460
Chao Peng (University of Edinburgh, United Kingdom), Ajitha Rajan (University of Edinburgh, United Kingdom), and Tianqin Cai (ByteDance Network Technology, United Kingdom)
CI/CD Pipelines Evolution and Restructuring: A Qualitative and Quantitative Study ..........471
Fiorella Zampetti (University of Sannio, Italy), Salvatore Geremia (University of Sannio, Italy), Gabriele Bavota (Università della Svizzera italiana, Switzerland), and Massimiliano Di Penta (University of Sannio, Italy)
Multimodal Representation for Neural Code Search .............................................483
Jian Gu (University of Zurich), Zimin Chen (KTH Royal Institute of Technology), and Martin Monperrus (KTH Royal Institute of Technology)
Industry Track
Migrating GUI Behavior: from GWT to Angular ......................................................... 495
Benoît Verhaeghe (Berger-Levrault, France; Univ. Lille, France), Anas
Shatnawi (Berger-Levrault, France), Abderrahmane Seriai
(Berger-Levrault, France), Nicolas Anquetil (Univ. Lille, France),
Annie Etien (Univ. Lille, France), Stéphane Ducasse (Univ. Lille,
France), and Mustapha Derras (Berger-Levrault, France)
Breaking down Monoliths with Microservices and DevOps: an Industrial Experience Report ........ 505
Danilo Pianini (Università di Bologna, Italy) and Alessandro Neri
(Maggioli S.p.A., Italy)
Report From The Trenches: A Case Study In Modernizing Software Development Practices .......... 515
Mahugnon Honoré Houekpetodji (SA-CIM, Univ. Lille, CNRS, France),
Nicolas Anquetil (Univ. Lille, CNRS, France), Stéphane Ducasse (Univ.
Lille, CNRS, France), Fatiha Djareddir (SA-CIM, Univ. Lille, CNRS,
France), and Jérôme Sudich (SA-CIM, Univ. Lille, CNRS, France)
DeepOrder: Deep Learning for Test Case Prioritization in Continuous Integration Testing ............ 525
Aizaz Sharif (Simula Research Laboratory, Norway), Dusica Marijan
(Simula Research Laboratory, Norway), and Marius Liaaen (Cisco
Systems, Norway)
Duplicate Bug Report Detection by Using Sentence Embedding and Fine-Tuning ....................... 535
Haruna Isotani (Waseda University, Japan), Hironori Washizaki (Waseda
University, Japan), Yoshiaki Fukazawa (Waseda University, Japan),
Tsutomu Nomoto (NTT CORPORATION, Japan), Saori Ōuji (NTT CORPORATION,
Japan), and Shinobu Saito (NTT CORPORATION, Japan)
Efficient Platform Migration of a Mainframe Legacy System Using Custom Transpilation ............ 545
Markus Schnappinger (Technical University of Munich, Germany) and
Jonathan Streit (itestra GmbH, Germany)
The Used, the Bloated, and the Vulnerable: Reducing the Attack Surface of an Industrial
Application ........................................................................................................... 555
Serena Elisa Ponta (SAP Security Research), Wolfram Fischer (SAP
Security Research), Henrik Plate (SAP Security Research), and Antonino
Sabetta (SAP Security Research)
eknows: Platform for Multi-Language Reverse Engineering and Documentation Generation ....... 559
Michael Moser (Software Competence Center Hagenberg, Austria) and
Josef Pichler (University of Applied Sciences Upper Austria, Austria)
Tool Demo Track
An NLP-Based Tool for Software Artifacts Analysis ....................................................... 569
Andrea Di Sorbo (University of Sannio, Italy), Corrado A. Visaggio
(University of Sannio, Italy), Massimiliano Di Penta (University of
Sannio, Italy), Gerardo Canfora (University of Sannio, Italy), and
Sebastiano Panichella (Zurich University of Applied Sciences,
Switzerland)
Sorrel: an IDE Plugin for Managing Licenses and Detecting License Incompatibilities ........................................574
Dmitry Pogrebnoy (JetBrains, Saint Petersburg State University), Ivan
Kuznetsov (Saint Petersburg Polytechnic University), Yaroslav Golubev
(JetBrains Research), Vladislav Tankov (Higher School of Economics,
JetBrains, JetBrains Research), and Timofey Bryksin (JetBrains
Research, Saint Petersburg State University)
iSCREAM: a Suite for Smart Contract REAdability Assessment .................................................................579
Gerardo Canfora (University of Sannio, Italy), Andrea Di Sorbo
(University of Sannio, Italy), Michele Fredella (University of Sannio,
Italy), Anna Vacca (University of Sannio, Italy), and Corrado A.
Visaggio (University of Sannio, Italy)
MLCatchUp: Automated Update of Deprecated Machine-Learning APIs in Python ........................................584
Stefanus A. Haryono (Singapore Management University, Singapore),
Ferdian Thung (Singapore Management University, Singapore), David Lo
(Singapore Management University, Singapore), Julia Lawall (Inria,
France), and Lingxiao Jiang (Singapore Management University,
Singapore)
FeaRS: Recommending Complete Android Method Implementations ..........................................................589
Fengcai Wen (Software Institute - USI, Switzerland), Valentina Ferrari
(Software Institute - USI, Switzerland), Emad Aghajani (Software
Institute - USI, Switzerland), Csaba Nagy (Software Institute - USI,
Switzerland), Michele Lanza (Software Institute - USI, Switzerland),
and Gabriele Bavota (Software Institute - USI, Switzerland)
Restats: A Test Coverage Tool for RESTful APIs .........................................................................................594
Davide Corradini (University of Verona, Italy), Amedeo Zampieri
(University of Verona, Italy), Michele Pasqua (University of Verona,
Italy), and Mariano Ceccato (University of Verona, Italy)
IDEAL: An Open-Source Identifier Name Appraisal Tool ...........................................................................599
Anthony Peruma (Rochester Institute of Technology), Venera Arnaoudova
(Washington State University), and Christian D. Newman (Rochester
Institute of Technology)
CodeRibbon: More Efficient Workspace Management and Navigation for Mainstream Development
Environments ................................................................................................................................................. 604
Benjamin P. Klein (University of Tennessee) and Austin Z. Henley
(University of Tennessee)
FACER-AS: An API Usage-Based Code Recommendation Tool for Android Studio ....................................609
Maha Kamal (Lahore University of Management Sciences, Pakistan), Ayman
Abaid (Lahore University of Management Sciences, Pakistan), Shamsa
Abid (Lahore University of Management Sciences, Pakistan), and Shafay
Shamail (Lahore University of Management Sciences, Pakistan)
NIER Track
Is Reputation on Stack Overflow Always a good Indicator for Users' Expertise? No! .................................614
Shaowei Wang (University of Manitoba, Canada), Daniel M. German
(University of Victoria, Canada), Tse-Hsun Chen (Concordia University,
Canada), Yuan Tian (Queen’s University, Canada), and Ahmed E. Hassan
(Queen’s University, Canada)
Links do Matter: Understanding the Drivers of Developer Interactions in Software Ecosystems .......................................................... 619
Subhajit Datta (Singapore Management University, Singapore), Amrita Bhattacharjee (Arizona State University, USA), and Subhashis Majumder (Heritage Institute of Technology, India)
The Impact of Continuous Code Quality Assessment on Defects .................................................. 624
Rolf-Helge Pfeiffer (IT University of Copenhagen, Denmark)
Stepwise Refactoring Tools ............................................................................................................. 629
Anna Maria Eilertsen (University of Bergen, Norway) and Gail C. Murphy (University of British Columbia, Canada)
Software Architecture Challenges for ML Systems ................................................................. 634
Grace A. Lewis (Carnegie Mellon Software Engineering Institute, USA), Ipek Ozkaya (Carnegie Mellon Software Engineering Institute, USA), and Xiwei Xu (CSIRO Data61, Australia)
NLP-Assisted Web Element Identification Toward Script-free Testing ............................................. 639
Hiroyuki Kirinuki (Osaka University, Japan), Shinsuke Matsumoto (Osaka University, Japan), Yoshiki Higo (Osaka University, Japan), and Shinji Kusumoto (Osaka University, Japan)
BiasHeal: On-the-Fly Black-Box Healing of Bias in Sentiment Analysis Systems ................................. 644
Zhou Yang (Singapore Management University, Singapore), Harshit Jain (Singapore Management University, Singapore), Jieke Shi (Singapore Management University, Singapore), Muhammad Hilmi Asyrofi (Singapore Management University, Singapore), and David Lo (Singapore Management University, Singapore)
Using Bandit Algorithms for Project Selection in Cross-Project Defect Prediction .............................. 649
Takuya Asano (Kindai University, Japan), Masateru Tsunoda (Kindai University, Japan), Koji Toda (Fukuoka Institute of Technology, Japan), Amjed Tahir (Massey University, New Zealand), Kwabena Ebo Bennin (Wageningen University & Research, Netherlands), Keitaro Nakasai (National Institute of Technology, Kagoshima College, Japan), Akito Monden (Okayama University, Japan), and Kenichi Matsumoto (Nara Institute of Science and Technology, Japan)
Human, bot or both? A Study on the Capabilities of Classification Models on Mixed Accounts....... 654
Nathan Cassee (Eindhoven University of Technology, Netherlands), Christos Kitsanelis (Eindhoven University of Technology, Netherlands), Eleni Constantinou (Eindhoven University of Technology, Netherlands), and Alexander Serebrenik (Eindhoven University of Technology, Netherlands)
Hurdles for Developers in Cryptography .......................................................................................... 659
Mohammadreza Hazhirpasand (University of Bern, Switzerland), Mohammadhossein Shabani (Azad University, Iran), Mohammad Ghafari (University of Auckland, New Zealand), and Oscar Nierstrasz (University of Bern, Switzerland)
Contrasting Third-Party Package Management User Experience .......................................................... 664
Syful Islam (Nara Institute of Science and Technology, Japan), Raula Gaikovina Kula (Nara Institute of Science and Technology, Japan), Christoph Treude (University of Adelaide, Australia), Bodin Chinthanet (Nara Institute of Science and Technology, Japan), Takashi Ishio (Nara Institute of Science and Technology, Japan), and Kenichi Matsumoto (Nara Institute of Science and Technology, Japan)
Clustering, Separation, and Connection: A Tale of Three Characteristics ........................................... 669
Subhajit Datta (Singapore Management University, Singapore), Aniruddha Mysore (PES University, India), Haziqshah Wira (Singapore Institute of Technology, Singapore), and Santonu Sarkar (BITS Pilani, India)
Can Differential Testing Improve Automatic Speech Recognition Systems? ........................................... 674
Muhammad Hilmi Asyrofi (Singapore Management University, Singapore), Zhou Yang (Singapore Management University, Singapore), Jieke Shi (Singapore Management University, Singapore), Chu Wei Quan (Singapore Management University, Singapore), and David Lo (Singapore Management University, Singapore)
Disambiguating Mentions of API Methods in Stack Overflow via Type Scoping ................................... 679
Kien Luong (Singapore Management University, Singapore), Ferdian Thung (Singapore Management University, Singapore), and David Lo (Singapore Management University, Singapore)
ROSE Festival
Philip Oliver (Victoria University of Wellington, New Zealand), Michael Homer (Victoria University of Wellington, New Zealand), Jens Dietrich (Victoria University of Wellington, New Zealand), and Craig Anslow (Victoria University of Wellington, New Zealand)
Doctoral Symposium
Automated Refactoring for Energy-Aware Software .............................................................................. 689
Déaglán Connolly Bree (University College Dublin, Ireland) and Mel O’Cinneide (University College Dublin, Ireland)
Logs and Models in Engineering Complex Embedded Systems ........................................................... 695
Nan Yang (Eindhoven University of Technology, The Netherlands), Pieter Cuijpers (Eindhoven University of Technology, The Netherlands), Ramon Schiffelers (ASML, Eindhoven University of Technology, The Netherlands), Johan Lukkien (Eindhoven University of Technology, The Netherlands), and Alexander Serebrenik (Eindhoven University of Technology, The Netherlands)
Sine-Cosine Algorithm for Software Fault Prediction ............................................................................. 701
Tamanna Sharma (Guru Jambheshwar University of Science and Technology, India) and Om Prakash Sangwan (Guru Jambheshwar University of Science and Technology, India)
Integration Obstacles during ERP Development
Negin Banaeianjahromi
Lappeenranta University of Technology
negin.banaeianjahromi@lut.fi
Tommi Kähkönen
Lappeenranta University of Technology
tommi.kahkonen@lut.fi
Aki Alanne
Tampere University of Technology
aki.alanne@student.tut.fi
Kari Smolander
Lappeenranta University of Technology
kari.smolander@lut.fi
Abstract
ERP (Enterprise Resource Planning) systems have increasingly been developed and integrated with other internal and external systems. This paper contributes to the field of enterprise systems integration by clarifying the concept of integration in the context of ERP systems. We investigated integration obstacles during ERP development in 5 large organizations through theme-based interviews. Besides considering integration as a purely technical challenge, our findings reveal other perspectives of integration. In total, 31 environmental, technical, managerial, and organizational integration obstacles were identified from the empirical data and further mapped to 13 ERP challenge categories derived from the literature. Our findings reveal that integration barriers are related to all 13 categories of ERP challenges. This indicates that integration should not be a separate project from ERP development. Identifying integration obstacles is necessary for practitioners to develop counteractions to enterprise integration problems.
1. Introduction
Companies must improve their business procedures and processes in order to remain competitive. They must also share their in-house information with their suppliers, distributors, and customers [27]. This information should be timely and accurate. Companies adopt Enterprise Resource Planning (ERP) systems to fulfill these objectives. ERP systems are information systems that integrate different business functions [5,11]. Companies spend significant amounts of their IT budgets on ERP installations and upgrades [10,11]. However, ERP projects are associated with considerable problems and high failure rates [19,33]. Besides technical aspects, ERP implementation imposes numerous social and organizational issues [10].
Implementing an ERP system does not guarantee the integration of the organization, as ERPs need to coexist with other enterprise applications and systems [39]. Mainstream ERP studies in the realm of integration address approaches to achieving integration [25,26] or implementing an ERP as a way to achieve integration [16,18]. However, it has been identified that integration in the context of enterprise systems is surrounded by confusion [6,17,25].
With this study, we aim to increase the understanding of the nature of integration in ERP development, which is often overlooked in research. To achieve this, we employ both the literature and empirical data from 52 interviews in 5 large enterprises. This paper addresses the following research questions:
RQ1: What issues hinder integration?
RQ2: How do issues hindering integration relate to general ERP development challenges?
In the remainder of this paper, we first review the literature relevant to this research. After that, we explain the research approach and present the results. Before concluding the paper, we discuss our contributions and lessons learned based on the findings.
2. Background
ERP systems are configurable Information System (IS) packages that aid in accomplishing business goals by facilitating real-time planning, production, and customer response [20,32]. They consist of different modules, such as sales, production, and human resources, which are interconnected to enable the exchange of business data across different organizational units [12]. ERP systems offer a central repository for enterprise data and promise reduced data redundancy, increased supply chain efficiency, increased customer access to products and services, and reduced operating costs [13,29].
These benefits are not easily accomplished. It has been estimated that 90% of ERP implementations fail to provide all the desired business benefits [28]. Several distinct characteristics make ERP projects troublesome. The implementation involves multiple organizations and stakeholders that need to interact and communicate. This makes the implementation prone to errors and misunderstandings [35]. Moreover, there is the constant dilemma of deciding whether the system should be customized or the existing ways of working should be altered [4]. Due to the role of ERP systems as a backbone for enterprise integration, they need to co-exist with other enterprise systems [38]. Interconnections with internal and external systems are a necessity and a crucial part of ERP development [15]. When replacing existing legacy systems with the ERP, the migration process usually involves implementing temporary interfaces between systems, which can be expensive and time consuming [40]. In general, ERP systems have limited capability in integrating with other systems [5].
Due to the challenging nature of ERP projects, a considerable amount of literature has focused on critical factors in these projects [2,9,14,28,30,31,36]. These studies have not widely addressed integration issues. Integration is mainly seen as something that is finished during the project phase of the system development, such as in terms of data management between legacy systems. However, instead of being an outcome or an activity occurring during a single phase, in the context of ERP systems, integration is a continuous activity conducted during the whole life cycle of the system [22].
Furthermore, the term integration is generally surrounded by a fair amount of confusion [6,17,25]. For instance, some authors in the literature tend to consider integration a project outcome or a technical feature [6]. We understand ERP system integration as a process during the ERP system life cycle, in which interfaces and interconnections between the ERP and other internal and external systems are built and managed as a collaborative effort by the different organizations and stakeholders involved in development. With this study, we want to better understand the nature of this activity by examining the issues that hinder it.
### 3. Research process
This research was designed as a qualitative, thematic study. We deemed this approach suitable for the research problem of enterprise system integration because, besides its technical nature, it includes organizational and managerial issues [3]. The main instrument in the data collection was theme-based interviews in five companies. The companies were large: their sizes ranged from 1,000 to 30,000 employees. To analyze the data, we employed qualitative inductive analysis, in which we identified new kinds of occurrences in the data and classified them with codes. This is also called “open coding” [7] in grounded theory. According to [34], “qualitative inductive analysis generates new concepts, explanations, results, and/or theories from the specific data of a qualitative study.”
#### 3.1. Data collection
To conduct this study, both literature and empirical data were employed. We carried out two rounds of theme-based interviews. In the first round, we gathered data from three organizations (Cases A, B, and C) between February 2013 and May 2014. The interviewees included stakeholders from the client organizations, the vendors, and third parties, such as a middleware vendor and offshore departments. No strict interview protocol was used; instead, the questions focused on general challenges in ERP development, with more detailed questions asked based on the answers. A total of 45 interviews, with an average duration of one hour, were conducted in the first round. In the second round, we gathered data from three organizations (Cases A, D, and E) in May and June 2014. In total, 9 experts were interviewed, with an average interview duration of 1 hour and 15 minutes. The question set covered technologies, standards, and the organizations and stakeholders dealing with integration issues. We consider the second round our main dataset in this study, since its focus was on integration issues, while the first-round data served as supportive material. Table 1 lists the case organizations and the roles of the interviewees.
<table>
<thead>
<tr>
<th>Cases</th>
<th>Size and industry</th>
<th>ERP systems</th>
<th>No. of interviews</th>
<th>Role of interviewees</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>Large & global manufacturing enterprise with 30000 employees</td>
<td>Tailored system for sales and logistics, SAP ERP for administrative processes</td>
<td>17</td>
<td>Different roles representing the client organization, the vendor and third party organizations</td>
</tr>
<tr>
<td>B</td>
<td>Large & global service provider in retail business with 1000 employees</td>
<td>Tailored ERP system for retail business processes</td>
<td>16</td>
<td></td>
</tr>
<tr>
<td>C</td>
<td>Large and global manufacturing enterprise with 20000 employees</td>
<td>Tailored ERP system for the raw material procurement</td>
<td>10</td>
<td>Different roles representing the client organization</td>
</tr>
</tbody>
</table>
Table 1. Case organizations and the roles of interviewees (the second interview round covered Cases A, D, and E)
The ERP systems in the case organizations were in different phases of their life cycle. In Case A, the tailored system had been in use and under development for 20 years. At the time of the interviews, the retirement phase of the system had begun, as the company was considering replacing it with SAP ERP. The ERP system in Case B was in the middle of the implementation phase. The system in Case C was in the post-implementation phase, currently being deployed to a new business location in another country. Case D started the implementation of a new ERP system in 2012 and had since been improving the business processes and enhancing the system. Case E was about to change its ERP system from Informix to Oracle; most of the transition had been done, but a few systems still needed to be changed over to Oracle.
#### 3.2. Data analysis
We extracted and identified integration obstacles from the transcribed interviews. Three researchers used the principles of open coding [8] to label the data and to find the integration obstacles in the primary data set. Because each researcher makes his or her own interpretations of the data, it was necessary to discuss and compare the identified obstacles. After several brainstorming sessions, a list of 31 integration obstacles was constructed, six of which had not previously been mentioned in the literature.
To gain a comprehensive view of the integration obstacles and their relationships to the general ERP development challenges studied in the literature, we used the classification of ERP development challenges by [1]. In addition, we reviewed seven literature reviews on ERP development challenges [2,9,14,28,30,31,36] and modified the original classification. This comparison produced in total 13 categories of ERP development challenges. For example, the literature category “Network and communication”, concerned with “boundary crossing activities” and issues related to “consultant and vendor companies”, was divided into “Inter-organizational environment” and “Communication and coordination”.
Inspired by Themistocleus [37] and Shaul & Tauber [36], we further classified the 13 categories into four main themes: Environmental, Technical, Managerial, and Organizational obstacles. Table 2 presents this categorization. We then mapped the integration obstacles extracted from the data to the general categories of ERP challenges found in the literature.
<table>
<thead>
<tr>
<th>Main themes</th>
<th>Categories of general ERP challenges from the literature [2,9,14,28,30,31,36]</th>
<th>Integration obstacles derived from data</th>
<th>Cases</th>
</tr>
</thead>
<tbody>
<tr><td><strong>Environmental obstacles</strong></td><td><strong>Intra-organizational environment</strong> Issues related to organizational culture as well as the organization’s experience of ERP projects</td><td>Complicated end product</td><td>A</td></tr>
<tr><td></td><td></td><td>Inexperience on integration projects</td><td>A, E</td></tr>
<tr><td></td><td></td><td>Heterogeneous operating environment</td><td>C</td></tr>
<tr><td></td><td></td><td>Different strategic interests of business units</td><td>A</td></tr>
<tr><td></td><td><strong>Inter-organizational environment</strong> Issues related to the external environment, such as conflicts between the organizations, poor management of partnerships with these organizations, and underperformance of either vendor or consultant</td><td>Sanctions in licensing</td><td>E</td></tr>
<tr><td></td><td></td><td>Competitors taking new technologies into use</td><td>A, C, E</td></tr>
<tr><td></td><td></td><td>Failing to commit customers in integration projects</td><td>A, D</td></tr>
<tr><td></td><td></td><td>Discovering a way to satisfy customers by integration</td><td>A, E</td></tr>
<tr><td><strong>Technical obstacles</strong></td><td><strong>ERP product selection & implementation strategy</strong> Issues regarding selecting and comparing different ERP products</td><td>Selecting unsuitable integration technologies</td><td>A</td></tr>
<tr><td></td><td></td><td>Troublesome management of integration product licenses</td><td>A, E</td></tr>
<tr><td></td><td><strong>ERP system characteristics</strong> Issues related to the lack of ERP system quality</td><td>Design flaws in ERP system</td><td>A</td></tr>
<tr><td></td><td></td><td>ERP system’s incompatibility</td><td>A</td></tr>
</tbody>
</table>
Table 2. Main themes, literature categories, and integration obstacles (continued)

<table>
<thead>
<tr>
<th>Main themes</th>
<th>Categories of general ERP challenges from the literature [2,9,14,28,30,31,36]</th>
<th>Integration obstacles derived from data</th>
<th>Cases</th>
</tr>
</thead>
<tbody>
<tr><td><strong>Technical obstacles</strong> (continued)</td><td><strong>IT-infrastructure & legacy systems</strong> Problems in integrating the ERP system with other systems and converting the data between the systems as well as managing the master data</td><td>Characteristics of integrative systems</td><td>A, B</td></tr>
<tr><td></td><td></td><td>Complex systems landscape</td><td>A</td></tr>
<tr><td></td><td></td><td>Troublesome migration</td><td>A, B</td></tr>
<tr><td></td><td></td><td>Transferring master data</td><td>A, B</td></tr>
<tr><td></td><td><strong>ERP software development & configuration</strong> Issues dealing with requirement specifications definition and changes, system configuration, and software development tools and methods. Also, issues related to troubleshooting and functional testing of the software</td><td>Poor evaluation of integration requirements</td><td>A</td></tr>
<tr><td></td><td></td><td>Slow development process</td><td>A, C</td></tr>
<tr><td></td><td></td><td>Inadequate testing of integration</td><td>A</td></tr>
<tr><td></td><td></td><td>Lack of knowledge</td><td>A, E</td></tr>
<tr><td><strong>Managerial obstacles</strong></td><td><strong>Business visioning & planning</strong></td><td>Cost cutting hindering integration projects</td><td>A</td></tr>
<tr><td></td><td><strong>Organizational management & leadership</strong></td><td>Top management does not understand integration</td><td>A, C, D</td></tr>
<tr><td></td><td></td><td>Lack of top management support</td><td>A, D, E</td></tr>
<tr><td></td><td><strong>Project management</strong></td><td>Troublesome management of integration projects</td><td>A</td></tr>
<tr><td></td><td><strong>Project team & human resources</strong></td><td>Lack of integration experts</td><td>D</td></tr>
<tr><td></td><td></td><td>No dedicated persons for integration</td><td>D</td></tr>
<tr><td></td><td><strong>Quality management & evaluation</strong></td><td>Not measuring integration projects</td><td>A</td></tr>
<tr><td></td><td></td><td>Lack of company-wide policies for integration</td><td>A, D</td></tr>
<tr><td><strong>Organizational obstacles</strong></td><td><strong>Change management</strong></td><td>The need for comprehensive training</td><td>A</td></tr>
<tr><td></td><td></td><td>Personnel change resistance</td><td>A, C, D</td></tr>
<tr><td></td><td><strong>Communication & coordination</strong></td><td>Lack of collaboration</td><td>A, D, E</td></tr>
</tbody>
</table>
### 4. Results
We mapped the 31 identified integration obstacles onto the categories of ERP challenges found in the literature. Table 2 shows the categorization. The next sections explain the integration obstacles derived from the data.
#### 4.1. Environmental obstacles
In Case A, because of the company's complicated product, complex structures were needed to store product information in the ERP systems. This made some of the integration projects difficult. For instance, customers faced difficulties when defining the product variables in their systems, as mappings and conversions between different ERP systems required significant effort. In Cases A and E, **inexperience on integration projects** and the low maturity level of the organization hindered integration. In Case A, some facilities did not have previous experience of ERP system deployments, which hindered the roll-outs. In Case E, on the other hand, organizational immaturity was seen as a major barrier to integration. The differing readiness for integration across organizational units was highlighted:
"We have 75% integration in our supply and distribution department but we only achieved 30% integration in after sale service department because the maturity level of this section was very low" —Case E, Head of systems analysis & design
Case C encountered difficulties due to the **heterogeneous operating environments**. The misfit between the ERP system and the new operating environment was learnt the hard way. As the ERP system was to be deployed to a new business location in another country, the drastic differences in business processes and practices forced the company to consider implementing a new instance of the system, which would then have had to be integrated with the ERP system currently in use. The deployment project to this environment was eventually cancelled. The environment was characterized as being "20 years behind" the focal country and as a "conservative, old-fashioned field".
In Case A, **different strategic interests of business units** introduced conflicts in ERP development. A development need that one unit considered important might not interest another. As a consequence, integration projects were prioritized differently, which increased the development time and caused other units to wait for the needed features.
Besides the intra-organizational environment, integration can also be hindered by external forces. For example, in Case E, political sanctions caused licensing problems: an ERP provider refused to sell the required licenses to the company. The company thus bore a financial loss, having already trained its personnel and prepared to adopt the specific ERP system. Eventually, the company was forced to change the ERP provider.
In Case A, the possibility of competitors taking new technologies into use might cause the company to reconsider its existing integration solutions. Similarly, in Case C, competitors were planning to take a new domain standard into use. This put pressure on the company: it might have to abandon the application logic developed in-house and re-develop the system interfaces to comply with the new standard:
“If we get involved in [the standardization project], it would mean that part of the ERP system would be outsourced to an external service, which would be integrated with the system” – Case C, Client organization representative
Also in Case E, the pace of environmental change was mentioned as a factor putting pressure on integration. It was mentioned that it is difficult to “attune with those changes”.
In Cases A and D, customers' commitment to integration projects was considered an obstacle. In Case A, customers facing organizational changes could stop ongoing integration initiatives. In Case D, on the other hand, small customers sometimes lacked the needed knowledge on integration, which made it more difficult to cooperate with them. In Case B, it was mentioned that as several business partners are involved in the ERP project, coordination issues sometimes emerge, as it is necessary to wait for partners to complete certain operations before the development can continue.
Cases A and E looked for tighter integration with customers. Instead of merely responding to customers' needs, the companies were discovering ways to better satisfy their customers, trying to "make it easy to buy from us" and to implement new solutions "even before they come to us and ask for it". This was considered difficult. For instance, Case A considered mobile applications for customers as a way to achieve tighter integration with them.
#### 4.2. Technical obstacles
In Case A, selecting unsuitable integration technologies caused the system architecture to be redesigned in the early phases of implementation. According to the middleware provider, not enough attention was paid to the selection of the base technologies of the system. In addition, troublesome management of integration product licenses turned out to be an obstacle in Cases A and E. Knowing the limitations of licenses and avoiding fines or lawsuits from the product providers was emphasized, "as huge costs are always involved in license management".
In Case A, certain architectural decisions meant that the facilities used different codes in the system messages sent from the facility systems to the ERP system. This later led to problems when trying to collect the same information from all the facility systems:
“Because all [facilities are using] different codes and that’s a nightmare [...] when you want to report something or when for example our sales offices who are using [the ERP system], for all the [facilities]. They actually see very different data for them, because of the different codes which we have allowed in our ERP.” – Case A, Business support manager
This was identified as one of the design flaws in the customized ERP system, one that could not be fixed during the life cycle of the system. In addition, because the system used a vendor-specific message format, integrating the ERP system with external systems was considered challenging. Because different levels of standards were used internally and externally, the ERP system's incompatibility hampered integration with external systems.
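Code mismatches of this kind are typically handled with an explicit mapping layer in the integration middleware, translating each facility's local codes into one canonical vocabulary before data is consolidated. The sketch below is a hypothetical illustration only; the facility names, codes, and canonical values are invented and do not come from the case company:

```python
# Hypothetical sketch: normalizing facility-specific codes into one
# canonical vocabulary before data is consolidated in the ERP.
# All facility names and codes below are invented for illustration.

CODE_MAP = {
    "facility_fi": {"10": "OPEN", "20": "SHIPPED", "30": "CLOSED"},
    "facility_de": {"A": "OPEN", "B": "SHIPPED", "C": "CLOSED"},
}

def normalize(facility: str, code: str) -> str:
    """Translate a facility-local code into the canonical code.

    Raises KeyError for unknown facilities or codes, so unmapped
    values surface during testing instead of silently producing
    inconsistent consolidated reports.
    """
    return CODE_MAP[facility][code]

def consolidate(messages):
    """Map a stream of (facility, code) pairs to canonical codes."""
    return [normalize(f, c) for f, c in messages]

if __name__ == "__main__":
    msgs = [("facility_fi", "20"), ("facility_de", "B")]
    print(consolidate(msgs))  # both local codes map to "SHIPPED"
```

Without such a shared mapping (or "a dictator" deciding common rules, as one interviewee later puts it), each reporting system must re-implement the translations, which is exactly the situation described above.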
In Case B, the characteristics of the systems to be integrated introduced a fundamental obstacle: the data formats of the two systems were different, and the older system could not handle specific data types. Similarly, in Case A, the ease of rollouts was said to depend on the characteristics of the facility system in question.
The complex systems landscape in which integration takes place was one aspect that made integration difficult. In Case A, the organization was dealing with a huge number of different systems. The Business-IT Negotiator of Case A stated that it is difficult to “reach the ideal world”, as the systems landscape “evolves constantly” due to organizational changes. An integration project that required exchanging messages between three ERP systems was considered “a mission impossible”. A project in which an invoice was to be sent from one office to another through several systems had been initiated four years earlier but was still ongoing at the time of the interviews. Furthermore, the increased complexity hindered information retrieval from the logistics systems:
“[When getting information from logistics systems] there are not only delays, there are total blackouts. We don't always get the information. [Then we] get the customer calls: 'Where is my order? It should be here now!'” –Case A, Director of business process development
Troublesome migration was encountered in Cases A and B. During migration, data conversions from legacy systems, master data management, and a parallel run of the systems are needed. In Case A, migration from the old system to the new one took years. In Case B, using two systems simultaneously was considered too difficult from the end-users' viewpoint, and because of this the new ERP system was not deployed to all the sales offices. Major technical problems were encountered when running the two ERP systems in parallel. The data transfer between the two systems was unreliable, due to insufficiently designed interfaces:
“The problems emerged because the interface was the problem. The data might have been accurate in the new system [...] but they did not manage to make the logic between two of their applications bullet proof. [...] the data that came to our system was somehow corrupted” –Case B, Representative of Finance
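Interface faults of this kind are often mitigated by validating records at the system boundary instead of trusting the sending application. The minimal sketch below assumes invented field names and rules purely for illustration; it is not the case company's actual interface:

```python
# Hypothetical sketch: defensive validation of records arriving over
# an interface during a parallel run, so corrupted data is rejected
# at the boundary instead of silently entering the receiving system.
# Field names and validation rules are invented for illustration.

REQUIRED_FIELDS = ("order_id", "amount", "currency")

def validate(record: dict) -> list:
    """Return a list of problems found in one incoming record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in record or record[field] in (None, ""):
            problems.append(f"missing field: {field}")
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        problems.append("negative amount")
    return problems

def partition(records):
    """Split an incoming batch into accepted and rejected records."""
    accepted, rejected = [], []
    for rec in records:
        (accepted if not validate(rec) else rejected).append(rec)
    return accepted, rejected
```

Rejected records can then be logged and reconciled manually, which makes interface problems visible early rather than surfacing later as "somehow corrupted" data, as in the quote above.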
Transferring the master data from the old system to the new one was seen as problematic in Cases A and B. In Case B, it was claimed that the parent company “did not have a capability for master data”. Moreover, different master data policies were used at the group and national levels of the company.
The slow development process also turned out to make integration more difficult. In order to cut down the development costs caused by a customized system, the vendor in Cases A and C had offshored the development to remote locations. Because of this, it took a long time for new feature requests to be realized as features in the system:
“If the development on our side is something which is then related to [our ERP], then it takes time [...] then we are really talking about six seven eight months.” –Case A, Manager of e-business and integration
The slow development process was also highlighted by a representative of Case C, who emphasized that it should be made faster.
Poor evaluation of integration requirements sometimes hindered integration projects. In Case A, it was specifically highlighted that the need for integration, and for testing it, may appear suddenly if the development is done without establishing separate projects and the requirements for integration are not comprehensively investigated. Similarly, inadequate testing of integration was mentioned as a major obstacle in integration projects. In Case A, a sudden need for testing appeared due to the lack of appropriate planning, and resources that had not initially been allocated for the project were needed:
“It is not realized that [integration] requires a lot of testing [...] the resources that are then used, are not specifically allocated for the project but instead internal resources. But then, what are their skills and motivation? How it is being documented that something has been tested?” –Case A, Business-IT Negotiator
Lack of knowledge was also mentioned as an obstacle to integration. Integration projects performed for the first time, with no previous experience of similar projects, were considered challenging in Case A. For example, having a customer using SAP involved for the first time was considered painful. Similarly, integration that required implementing a totally new business process with new messages needed more effort than projects in which existing knowledge could be utilized. In Case E, on the other hand, the lack of documentation about integration frameworks and technologies caused a major halt in the project, as the needed information had to be gathered from different places.
#### 4.3. Managerial obstacles
Another issue hindering integration in Case A was the constant development cost cutting. Because of this, fewer resources were available for developing and extending the system further, which caused some of the integration projects to be postponed. Following current trends and the changed role of the ERP system from a back-end tool to a tool for salespeople interacting with customers in the field, the company was planning to build mobile applications to enable end users and customers to access the ERP system from remote locations. This was, however, considered too expensive, and the initiative was dropped due to cost savings:
“We have been talking about [the mobile interfaces of the system] and made some pilots, but they haven’t gone further [...] they are probably the first thing to drop out when cutting down the development costs.” –Case A, vendor, Lead software developer
Cost cutting was also considered a major barrier to developing the business processes further through integration; it was said that there is “a lot of untapped potential but no willingness to invest”.
Identifying business needs and evaluating the benefits of integration were mentioned as burdensome for integration projects. According to the Enterprise Architect in Case A, the challenging phase in some of the integration projects was the evaluation of costs and business benefits. The Business-IT Negotiator stated that evaluating the size and complexity of integration projects was difficult, and that the significance of integration was “mainly underestimated”. This led to resource allocation problems in these projects. Due to the lack of internal collaboration and to organizational silos, certain cross-checking and verification (i.e., finding out which parts of the system the development would have an impact on) was sometimes omitted when developing new functionality. This also led to wasted resources.
In Cases A, C, and D, the top management sometimes lacked an understanding of integration. In Case A, the management had too high expectations of what could be achieved by integration. Similarly, in Case D, the management lacked an understanding of integration:
“The high management cannot really realize the benefits of integration. It is hard to convince them how an integration project can benefit the organization. In words they say ‘Ok, let’s do the integration project’ but when it comes to practice and reality they withdraw” –Case D, Manager of IT department and organizational engineering
Also, management was sometimes unwilling to participate in integration projects, which caused the projects to lack management support. In Case C, as the system was deployed to a new operating environment in a different nation, the local managers' attitude was not supportive and the project lacked leadership.
The lack of top management support in integration projects came up in Cases A and D. Constant changes of top management terminated on-going customer integration projects in Case A, and it took years to re-establish them. Similarly, in Case E, changes in top management “brought chaos and even terminated the existing integration projects”. The extent to which top management prioritizes integration was seen as crucial in Cases A and E.
Due to the lack of company-wide policies for integration, difficult integration scenarios were encountered in Case A. When the ERP system was at its busiest implementation stage, the policies of individual facilities had an impact on how the integration between the ERP system and the manufacturing execution system in question was done. This led to problems when querying information from the facilities, as the quality of the retrieved information varied. It was suggested that common rules should have been decided in advance to prevent this, and that there should be “a dictator” when defining these rules. Similarly, in Case D, because different enterprise systems were developed separately, the end users had to log in to each system separately. It was suggested that there should instead be a single sign-on option to avoid the manual work and redundancy.
Difficulties in integration project management were also experienced. Allocating resources for these projects and keeping them on budget and schedule was not easy. Some of the development projects were not carried out in a systematic manner as projects. Instead, “the one who has the money” could initiate development activities without negotiating with other parties. These projects encountered unexpected resourcing issues. A representative of Case A highlighted the attitude towards integration:
“The biggest challenge is to evaluate the size and complexity of the project. I state that the significance of integration is mainly underestimated [...] is it just stated that the technology and tools are clear, ‘this cannot be a big issue’” –Case A, Business-IT Negotiator
Convincing the top management and developers about the value and importance of software testing in integration projects was mentioned as a considerable challenge for project managers.
Case D faced resourcing issues due to the lack of integration experts. The lack of personnel with skills in middleware and SOA (Service-Oriented Architecture) hindered integration projects. It was stated that suppliers familiar with specific technologies, such as BizTalk or Oracle, “can't really implement anything themselves”. Similarly, selecting a supplier was said to be “risky because of their limited knowledge”. In addition, the company had no dedicated persons responsible for integration. Instead, managing the IT architecture and doing integration were considered additional work, which reduced the pace of integration:
“When you are integrating systems using middleware, we should unify some architectural basics. Sometimes you need to re-engineer the tasks. This work conflicts with our routine work. We cannot stop this ‘moving train’ to do integration.” –Case D, Manager of IT department and organizational engineering
Not measuring integration projects to evaluate whether the desired business goals are met was considered problematic. Case A used measurements to evaluate how much certain business integration solutions were used by different business units. In integration projects, however, measurements were not established. Also, if an integration project was carried out in a non-systematic way, there were no proper quality management practices in place. Measuring the performance of integration projects on the customers' side and evaluating customer satisfaction was considered “difficult if not impossible”, since there was no access to customers' resources and systems in the same fashion as to the company's own internal systems.
#### 4.4. Organizational obstacles
**Personnel resistance to change** was described as an obstacle that comes with integration in Cases A and D. The interviewees highlighted the need to carefully explain to the related personnel what the change means in practice:
“*You should assure them that changes that have come up with integration does not mean that you are going to lose your job*” – Case D, Head of IT department
Case C also faced personnel resistance to change and unwillingness to take the new system into use when deploying it to a new geographical location:
“They didn’t really want to have that system [...] or even willing to develop it to fit their needs in general.” – Case C, Client organization representative
In addition, in Case A, change resistance was identified as a major barrier that terminated attempts to simplify the complex systems landscape:
“*System-specific groups have been established there [...] they do not have the desire to make this (ERP system landscape) any simpler. And all the external players who enter this field, are excluded in one way or another.*” – Case A, Business-IT Negotiator
The **need for comprehensive training** programs was considered essential when trying to mitigate the change resistance caused by integration. The interviewee from Case A mentioned the necessity of training when deploying new systems, considering it a “major part of the ERP project”. Similarly, integration with customers created a need for training due to the changed roles of the persons dealing with customers.
**Lack of collaboration** made the coordination of integration activities more difficult in various ways. In Case D, lack of teamwork was said to be a major inhibitor of integration. In Case A, despite the business units having different strategic interests, the representative of Sales noted that the services needed from the ERP system can still be the same. Because of the lack of cooperation, duplicate development was sometimes done, which led to increased costs:
“*Better tools for sales prediction may be an essential development requirement for both of these big business areas, and still these things may not be handled together. [...] Instead of doing one joint project, we may do two in parallel.*” – Case A, representative of Sales
The lack of inter-departmental cooperation meant that certain parts of the organization could not benefit from services already developed in other parts of the ERP system. Similarly, in Case E, communication between branches in different cities was considered limited, and the use of improper tools added manual work, suggesting that communication was not carried out in the desired manner.
### 5. Discussion
The main contribution of this study is to increase understanding of integration in the context of ERP systems. The current literature on ERP challenges mainly focuses on the challenges encountered during the main ERP project and mostly highlights technical issues such as interfacing with legacy systems [2], incompatible existing systems [30], and data management and conversion [36]. Rather than considering integration a purely technical challenge, our findings reveal its other (environmental, managerial, and organizational) perspectives. The identified integration obstacles are interrelated with all 13 categories of ERP challenges derived from the literature. This shows that integration should not be viewed as a separate task that is finished during an ERP project. Instead, integration is tightly coupled with ERP development and is a continuous effort requiring attention during the entire life cycle of the system. We found some integration obstacles that have not been widely covered in the ERP literature before, such as political sanctions, management of product licenses, lack of measurements for integration projects, discovering a way to satisfy customers by integration, lack of previous experience on integration projects, and lack of company-wide policies for integration.
Integration challenges and barriers in enterprise application integration and in e-government have been studied, e.g., in [21,23,37]. Themistocleous (2004) identified 12 application integration barriers. Our findings can be considered an extension of this list. Similarly, we identified resistance to change, training, and lack of technical skills as barriers to integration. However, we did not see costs as a major barrier. Another study on critical factors of adopting EAI revealed technical, organizational and environmental dimensions that have a major impact on integration in a health care environment [21]. Its authors found that top management support did not have a high impact on EAI integration. We, however, found top management support to be a critical barrier in three of our case organizations in the manufacturing domain. As in the healthcare domain, external pressure from competitors appeared to introduce integration challenges in three of our case organizations in the manufacturing domain.
#### 5.1. Lessons learned
It is possible to derive from the findings some important considerations for practitioners to overcome the obstacles in integration:
- Integration should be regarded as a systematic and well-planned activity that involves multiple systems and stakeholders. Separate programs or projects should be established for it.
- Dedicated expertise is needed. There should be stakeholders with full-time responsibility for integration issues. Coordination and communication among the stakeholders is crucial.
- Integration projects need to be managed at different levels. Besides top management support, project and quality managers as well as change management are needed.
- Due to the complex nature of integration, it is important to maintain the architectural descriptions of the interconnected systems to facilitate the identification of integration needs and requirements.
- Corporate-level integration strategies are needed to ensure that integration is aligned with organizational goals.
#### 5.2. Limitations
This study has its limitations. As in all qualitative studies, it is impossible to make direct statistical generalizations from these five companies. We believe, however, that the classification of integration obstacles is valuable to other researchers with similar objectives and also to practitioners who wish to manage integration in their organizations. Instead of statistical generalization, we consider our generalization theoretical [24]: we formed abstract categories out of specific and concrete observations. Another limitation is that at the time of data collection each enterprise was in a different phase of its ERP development life-cycle, so their challenges and problems differed slightly from each other. For instance, Case B faced challenges regarding parallel run and migration because it was in the middle of implementation; being at the beginning of the retirement phase, Case A did not consider these among its main problems. This difference is not only a limitation but also enables a richer categorization with variation in observations.
### 6. Conclusion
With this study we increase the understanding of the concept of integration in ERP development by examining its obstacles. As a result of the analysis of empirical data, we identified 31 integration obstacles. Issues in the intra-organizational environment, such as a complicated end product and inexperience, are barriers to integration. Pressure from competitors and customer commitment in integration projects impose challenges. Technical barriers are related to integration product selection and to system development and configuration. In addition, the characteristics of the existing systems and the complexity of the IT infrastructure can further complicate integration efforts. Integration requires management in order to be realized: management at four levels (organizational, project, quality, and change) is needed to overcome the barriers to integration. We also identified the common categories of ERP challenges from the literature. Our findings suggest that integration is tightly coupled with ERP development and should not be regarded as a single project activity, but rather as a continuous effort during the system life cycle. Finally, we provided practitioners with recommendations based on the lessons learned from our findings.
Future research on integration obstacles should consider different domains and include other organizations involved in ERP development besides ERP adopters, such as vendors, consultants and business partners. In the future we aim to investigate solutions to overcome the integration obstacles in different settings.
### 7. Acknowledgements
This study was funded by Academy of Finland grant #259454.
### 8. References
A conceptual Bayesian net model for integrated software quality prediction
Łukasz Radliński
Institute of Information Technology in Management, University of Szczecin
Mickiewicza 64, 71-101 Szczecin, Poland
Abstract – Software quality can be described by a set of features, such as functionality, reliability, usability, efficiency, maintainability, portability and others. There are various models for software quality prediction developed in the past. Unfortunately, they typically focus on a single quality feature. The main goal of this study is to develop a predictive model that integrates several features of software quality, including relationships between them. This model is an expert-driven Bayesian net, which can be used in diverse analyses and simulations. The paper discusses model structure, behaviour, calibration and enhancement options as well as possible use in fields other than software engineering.
1 Introduction
Software quality has been one of the most widely studied areas of software engineering. One aspect of quality assurance is quality prediction. Several predictive models have been proposed since the 1970s. A clear trade-off can be observed between a model's analytical potential and the number of quality features it uses. Models that contain a wide range of quality features [1, 2, 3] typically have low analytical potential and serve more as frames for building calibrated predictive models. On the other hand, models with higher analytical potential typically focus on a single or very few aspects of quality, for example reliability [4, 5].
This trade-off has been the main motivation for research focused on building predictive models that both incorporate various aspects of software quality and have high analytical potential. The aim of this paper is to build such a predictive model as a Bayesian net (BN). This model may be used to deliver information to decision-makers about managing software projects to achieve specific targets for software quality.
Bayesian nets have been selected for this study for several reasons. The most important is the ability to incorporate both expert knowledge and empirical data. Typically, predictive models in software engineering are built using data-driven techniques such as multiple regression, neural networks, nearest neighbours or decision trees. For the current type of study, a dataset of past projects with sufficient volume and an appropriate level of detail is typically not available. Thus, the model has to be based mostly on expert knowledge and only partially on empirical data. Other advantages of BNs include the ability to incorporate causal relationships between variables, explicit incorporation of uncertainty through the probabilistic definition of variables, no fixed lists of independent and dependent variables, the ability to run the model with incomplete data, forward and backward inference, and graphical representation. More information on BN theory can be found in [6, 7], while recent applications in software engineering are discussed in [8, 9, 10, 11, 12, 13, 14, 15, 16].
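Forward and backward inference, two of the BN advantages listed above, can be illustrated on a toy two-node net. The numbers below are hypothetical and not taken from the paper's model.

```python
# Minimal sketch: forward and backward inference in a two-node Bayesian net
# P(quality | effort), with made-up probabilities.

# Prior over the parent node "effort" and a conditional table for "quality".
p_effort = {"low": 0.3, "high": 0.7}
p_quality_given_effort = {
    ("good", "low"): 0.4, ("poor", "low"): 0.6,
    ("good", "high"): 0.8, ("poor", "high"): 0.2,
}

def p_quality(q):
    """Forward inference: marginal P(quality) = sum_e P(quality|e) P(e)."""
    return sum(p_quality_given_effort[(q, e)] * p_effort[e] for e in p_effort)

def p_effort_given_quality(e, q):
    """Backward inference: P(effort | quality) via Bayes' rule."""
    return p_quality_given_effort[(q, e)] * p_effort[e] / p_quality(q)

print(round(p_quality("good"), 3))                       # 0.4*0.3 + 0.8*0.7 = 0.68
print(round(p_effort_given_quality("high", "good"), 3))  # 0.56/0.68 ≈ 0.824
```

Observing good quality raises the probability of high effort from the prior 0.7 to about 0.82, which is exactly the backward reasoning a BN supports.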
The rest of this paper is organized as follows: Section 2 presents the view of software quality that was the subject of the research. Section 3 discusses the background knowledge used when building the predictive model. Section 4 provides details on the structure of the proposed predictive model. Section 5 focuses on the behaviour of this model. Section 6 discusses possibilities for calibrating and extending the proposed model. Section 7 considers the use of this type of model in other areas. Section 8 summarizes this study.
2 Software Quality
Software quality is typically expressed in science and industry as a range of features rather than a single aggregated value. This study follows the ISO approach where software quality is defined as a “degree to which the software product satisfies stated and implied needs when used under specified conditions” [1]. This standard defines eleven characteristics, shown in Fig. 1 with dark backgrounds. The last three characteristics (on the left) refer to “quality in use” while others refer to internal and external metrics. Each characteristic is decomposed into the sub-characteristics, shown in Fig. 1 with white background. On the next level each sub-characteristic aggregates the values of metrics that describe the software product. The metrics are not shown here because they should be selected depending on the particular environment where such quality assessment would be used. Other quality models have been proposed in literature [17, 3], from which some concepts may be adapted when building a customized predictive model.
In our approach we follow the general taxonomy of software quality proposed by ISO. However, our approach is not limited to the ISO point of view and may be adjusted according to specific needs. For this reason our approach uses slightly different
3 Background knowledge
Our approach assumes that an industrial-scale model for integrated software quality prediction has to be calibrated for specific needs and a specific environment before it can be used in decision support. Normally such calibration should be performed with domain experts from the target environment, for example using a questionnaire survey. However, at this point such a survey has not been completed, so the current model has been built entirely from the available literature and the expert knowledge of the modellers. This is why the model is currently at the "conceptual" stage. The literature used includes quality standards [1, 18, 2, 19, 20, 21], widely accepted results on software quality [22, 23, 24, 17, 3, 25, 26, 27, 28, 29], and experience from building models for similar areas of software engineering [8, 9, 10, 11, 12, 30, 13, 14, 15, 16].
The available literature provides useful information on the relationships among quality features. Fig. 2 illustrates the relationships encoded in the proposed predictive model. There are two types of relationships: positive ("+") and negative ("−"). A positive relationship indicates that an increased level of one feature causes a probable increase in the level of another feature. A negative relationship indicates that an increased level of one feature causes a probable decrease in the level of another feature unless some compensation is provided. This compensation typically takes the form of additional effort, an increase in development process quality, or the use of better tools or technologies.
Table 1 summarizes the relationships between the effort and the quality features.
Currently there are two groups of controllable factors in the model: effort and process quality, defined separately for the three development phases. It is assumed that an increase in effort or process quality has a positive impact on the selected quality features. This impact is not deterministic though: increased effort does not guarantee better quality, but it makes better quality more probable.
It should be noted that the relationships in Fig. 2 and Table 1 may be defined differently in specific target environments.
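As a minimal sketch, the signed relationships of Fig. 2 could be encoded as a table of directed edges. The pairs below are illustrative; of them, only the negative maintainability/performance-efficiency edge is explicitly discussed later in the paper.

```python
# Hypothetical encoding of a few signed feature relationships ("+"/"−" edges
# in the style of Fig. 2); the actual edge set is environment-specific.
relations = {
    ("maintainability", "performance efficiency"): "-",   # stated in the paper
    ("functional suitability", "usability"): "+",          # illustrative
    ("flexibility", "maintainability"): "+",               # illustrative
}

def affected(feature):
    """Features whose level probably changes when `feature` increases."""
    return {dst: sign for (src, dst), sign in relations.items() if src == feature}

print(affected("maintainability"))  # {'performance efficiency': '-'}
```

A "−" result signals that raising one feature calls for compensation (extra effort, better process quality or tools) to hold the other feature steady.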
4 Model Structure
The proposed predictive model is a Bayesian net in which the variables are defined by conditional probability distributions given their parents (i.e. immediate predecessors). It is beyond the scope of this paper to discuss the structure of the whole model, which contains over 100 variables. However, for full transparency and reproducibility of the results, the full model definition is available on-line [31].
Fig. 3 illustrates a part of the model structure by showing two quality features and the relevant relationships. The whole model is a set of linked hierarchical naïve Bayesian classifiers in which each quality feature is modelled by one classifier. The quality feature is the root of the classifier, sub-features are at the second level (its children), and measures are the leaves.

To enable relatively easy model calibration and enhancement this model was built with the following assumptions:
- the links between various aspects of software quality may be defined only at the level of features;
- controllable factors are aggregated as the “effectiveness” variables, which, in turn, influence selected quality features.
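The feature → sub-feature → measure hierarchy described above might be held in a nested mapping. This is an assumed data layout, not the paper's implementation; "effectiveness" and its measure appear in Table 2, while the second sub-feature and its measure are purely illustrative.

```python
# A possible data layout for one hierarchical naive Bayes classifier:
# quality feature -> sub-features -> leaf measures.
quality_structure = {
    "usability": {
        "effectiveness": ["percentage of tasks accomplished"],  # from Table 2
        "learnability": ["time to learn a task"],               # illustrative
    },
}

def measures_of(feature):
    """Flatten all leaf measures under one quality feature."""
    return [m for measures in quality_structure[feature].values() for m in measures]

print(measures_of("usability"))
# ['percentage of tasks accomplished', 'time to learn a task']
```

Keeping the structure explicit like this makes the enhancement options of Section 6 (adding a sub-feature or a measure) a purely local change.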
Currently, all variables in the model, except measures, are expressed on a five-point ranked scale from 'very low' to 'very high'. Two important concepts, implemented in the AgenaRisk tool [32], were used to simplify the definition of probability distributions. First, the whole scale of a ranked variable is internally treated as the numeric range (0, 1) divided into five intervals: (0, 0.2) for 'very low', (0.2, 0.4) for 'low', and so on up to (0.8, 1) for 'very high'. This makes it possible to express a variable not only as a probability distribution but also through summary statistics such as the mean (used in the next section). It also opens the door for the second concept: using expressions to define probability distributions for variables. Instead of manually filling probability tables for each variable, which is time-consuming and prone to inconsistencies, it is sufficient to provide only a few parameters for expressions such as the Normal distribution (mean, variance), the TNormal distribution (mean, variance, lower bound, upper bound), or the weighted mean function wmean(weight for parameter 1, parameter 1, weight for parameter 2, parameter 2, etc.). Table 2 provides the definitions for selected variables in different layers of the model.
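The ranked-scale mapping and the wmean expression just described can be sketched directly; the interval boundaries follow the five 0.2-wide slots of (0, 1).

```python
# Sketch of the two AgenaRisk-style concepts described above: the five-point
# ranked scale backed by (0, 1) intervals, and the wmean expression.

RANKS = ["very low", "low", "medium", "high", "very high"]

def rank_interval(label):
    """(0, 1) interval internally backing one ranked state (width 0.2 each)."""
    i = RANKS.index(label)
    return (i * 0.2, (i + 1) * 0.2)

def wmean(*pairs):
    """wmean(w1, x1, w2, x2, ...): weighted mean used in variable definitions."""
    weights, values = pairs[0::2], pairs[1::2]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

print(rank_interval("low"))                      # (0.2, 0.4)
print(round(wmean(3, 0.6, 2, 0.4, 1, 0.5), 3))   # 3.1 / 6 ≈ 0.517
```

The mean of a distribution over this internal range is what the simulations in Section 5 compare against the default value of 0.5.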
Table 2. Definition of selected variables.
<table>
<thead>
<tr>
<th>Type</th>
<th>Variable</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>feature</td>
<td>usability</td>
<td>TNormal(wmean(1, 0.5, 3, wmean(3, reg_effect, 2, impl_effect, 1, test_effect), 1, funct_suit), 0.05, 0.1)</td>
</tr>
<tr>
<td>sub-feature</td>
<td>effectiveness</td>
<td>TNormal(usability, 0.01, 0.1)</td>
</tr>
<tr>
<td>measure</td>
<td>percentage of tasks accomplished</td>
<td>effectiveness = 'very high' → Normal(95, 10)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>effectiveness = 'high' → Normal(90, 40)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>effectiveness = 'medium' → Normal(75, 60)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>effectiveness = 'low' → Normal(65, 80)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>effectiveness = 'very low' → Normal(50, 100)</td>
</tr>
<tr>
<td>controllable</td>
<td>testing effort</td>
<td>TNormal(0.5, 0.05, 0.1)</td>
</tr>
<tr>
<td>controllable</td>
<td>testing effectiveness</td>
<td>TNormal(wmean(3, test_effort, 4, test_proce), 0.001, 0.1)</td>
</tr>
</tbody>
</table>
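The 'usability' row of Table 2 can be read off in a Monte Carlo sketch. Assumptions: TNormal(m, v, a, b) is taken to be a Normal(m, v) truncated to [a, b] (the table row lists only two trailing parameters, so the (0, 1) bounds of the internal ranked range are assumed here), and all parent values are hypothetical.

```python
# Illustrative Monte Carlo reading of the usability definition in Table 2.
import random

def tnormal(mean, variance, lo, hi):
    """Truncated normal via simple rejection sampling (assumed semantics)."""
    sd = variance ** 0.5
    while True:
        x = random.gauss(mean, sd)
        if lo <= x <= hi:
            return x

def wmean(*pairs):
    """wmean(w1, x1, w2, x2, ...) from the paper's expression language."""
    w, v = pairs[0::2], pairs[1::2]
    return sum(a * b for a, b in zip(w, v)) / sum(w)

random.seed(0)
# Hypothetical parent values on the internal (0, 1) scale.
req_effect, impl_effect, test_effect, funct_suit = 0.7, 0.6, 0.5, 0.6

effect = wmean(3, req_effect, 2, impl_effect, 1, test_effect)  # 3.8/6 ≈ 0.633
mean = wmean(1, 0.5, 3, effect, 1, funct_suit)                 # 3.0/5 = 0.6
samples = [tnormal(mean, 0.05, 0.0, 1.0) for _ in range(10_000)]
print(round(sum(samples) / len(samples), 2))  # near 0.6 (truncation pulls it slightly down)
```

The sampled mean sits close to the wmean aggregate of the parents, which is exactly the behaviour the expression-based definitions are meant to buy without hand-filled probability tables.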
5 Model Behaviour
To demonstrate model behaviour, four simulations were performed, each analysing the impact of one group of variables on another.

**Simulation 1** focused on the sensitivity analysis of quality features in response to the levels of the controllable factors. An observation about the state of a single controllable factor was entered into the model and the predictions for all quality features were then analysed. This procedure was repeated for each state of each controllable factor. Fig. 4 illustrates the results of one such run by showing the changes in the predicted levels of maintainability and performance efficiency caused by different levels of implementation effort. These results were compared with the background knowledge in Table 1 to validate whether the relationships had been correctly defined, i.e. whether a change in the level of a controllable factor causes the assumed direction of change in the level of a quality feature. In this case the obtained results confirm that the background knowledge was correctly incorporated into the model.
With these graphs, it is possible to analyze the strength of impact of controllable factors on quality features. The impact of implementation effort is larger on maintainability than on performance efficiency – predicted probability distributions are more ‘responsive’ to different states of implementation effort for maintainability than for performance efficiency. Such information may be used in decision support.
**Fig. 4.** Impact of implementation effort on the selected quality features.
**Simulation 2** is similar to Simulation 1 in that it also analyses the impact of controllable factors on quality features. However, this simulation involves the analysis of summary statistics (mean values) rather than full probability distributions. Here, an observation 'very high' was entered for each controllable factor (one at a time) and then the mean value of the predicted probability distribution for each quality feature was analysed. Table 3 summarizes the results for effort at various phases. All of these mean values are above the default value of 0.5. These higher values indicate an increase in the predicted level of the specific quality features. They correspond to the "+" signs in Table 1, which further confirms the correct incorporation of the relationships between the controllable factors and the quality features.
<table>
<thead>
<tr>
<th>Quality feature</th>
<th>Requirements effort</th>
<th>Implementation effort</th>
<th>Testing effort</th>
</tr>
</thead>
<tbody>
<tr>
<td>functional suitability</td>
<td>0.55</td>
<td>0.56</td>
<td></td>
</tr>
<tr>
<td>reliability</td>
<td>0.56</td>
<td>0.55</td>
<td></td>
</tr>
<tr>
<td>performance efficiency</td>
<td>0.54</td>
<td>0.53</td>
<td></td>
</tr>
<tr>
<td>operability</td>
<td>0.60</td>
<td></td>
<td></td>
</tr>
<tr>
<td>security</td>
<td></td>
<td></td>
<td>0.55</td>
</tr>
<tr>
<td>compatibility</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>maintainability</td>
<td>0.56</td>
<td>0.57</td>
<td></td>
</tr>
<tr>
<td>portability</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>usability</td>
<td>0.56</td>
<td>0.55</td>
<td>0.52</td>
</tr>
<tr>
<td>flexibility</td>
<td>0.57</td>
<td>0.56</td>
<td></td>
</tr>
<tr>
<td>safety</td>
<td></td>
<td></td>
<td>0.55</td>
</tr>
</tbody>
</table>
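One way to read Table 3 programmatically: means above the default 0.5 mark a positive factor-to-feature impact. The values below are copied from a few cells of Table 3; the dict-based reading is a sketch, not the paper's tooling.

```python
# Selected (controllable factor, quality feature) -> predicted mean values,
# copied from Table 3.
table3 = {
    ("requirements effort", "functional suitability"): 0.55,
    ("implementation effort", "maintainability"): 0.57,
    ("testing effort", "usability"): 0.52,
    ("testing effort", "security"): 0.55,
}

# Means above the default 0.5 correspond to "+" signs in Table 1.
positive = sorted(pair for pair, mean in table3.items() if mean > 0.5)
print(positive)
```

All four listed pairs clear the 0.5 default, matching the paper's observation that every filled cell of Table 3 confirms an assumed positive relationship.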
**Simulation 3** focused on the analysis of the relationships among various quality features. Like Simulation 2, it covered the analysis of the mean values of predicted probability distributions. The results are presented in Table 4.
Table 4. Predictions in simulation 3.
<table>
<thead>
<tr>
<th>Predicted</th>
<th>functional suitability</th>
<th>reliability</th>
<th>usability</th>
<th>operability</th>
<th>performance efficiency</th>
<th>maintainability</th>
<th>portability</th>
<th>usability</th>
<th>safety</th>
<th>flexibility</th>
</tr>
</thead>
<tbody>
<tr>
<td>Observed</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>functional suitability</td>
<td>0.55</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>reliability</td>
<td>0.55</td>
<td>0.55</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>security</td>
<td>0.46</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>compatibility</td>
<td>0.55</td>
<td>0.46</td>
<td>0.53</td>
<td>0.48</td>
<td></td>
<td>0.55</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>operability</td>
<td>0.55</td>
<td>0.52</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>performance efficiency</td>
<td>0.46</td>
<td>0.48</td>
<td>0.46</td>
<td></td>
<td></td>
<td>0.47</td>
<td>0.44</td>
<td>0.48</td>
<td><0.50</td>
<td>0.47</td>
</tr>
<tr>
<td>maintainability</td>
<td>0.57</td>
<td>0.55</td>
<td></td>
<td>0.55</td>
<td>0.47</td>
<td></td>
<td>0.56</td>
<td>0.57</td>
<td></td>
<td></td>
</tr>
<tr>
<td>portability</td>
<td>0.55</td>
<td>0.57</td>
<td>0.53</td>
<td>0.44</td>
<td>0.56</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>usability</td>
<td>0.58</td>
<td>0.55</td>
<td>0.53</td>
<td>0.56</td>
<td>0.48</td>
<td>0.58</td>
<td></td>
<td></td>
<td>0.54</td>
<td>0.57</td>
</tr>
<tr>
<td>safety</td>
<td>0.53</td>
<td>0.54</td>
<td>0.52</td>
<td>0.55</td>
<td><0.50</td>
<td></td>
<td>0.54</td>
<td></td>
<td></td>
<td>0.48</td>
</tr>
<tr>
<td>flexibility</td>
<td>0.57</td>
<td>0.54</td>
<td>0.47</td>
<td>0.57</td>
<td>0.48</td>
<td>0.58</td>
<td>0.56</td>
<td>0.57</td>
<td>0.48</td>
<td></td>
</tr>
</tbody>
</table>
The predicted mean values are either lower or higher than the default value 0.5. The values lower than 0.5 correspond to “−” signs in Fig. 2 while the values higher than 0.5 correspond to “+” signs in Fig. 2. Such results confirm that the model correctly incorporates the assumed relationships among quality features.
**Simulation 4** focused on demonstrating more advanced model capabilities for delivering information for decision support using what-if and trade-off analysis. Although such an analysis may involve more variables, for simplicity four variables were investigated: implementation effort, testing effort, maintainability, and performance efficiency. Some input data on a hypothetical project under consideration were entered into the model. The model provides predictions for these four variables, as shown in Fig. 5 (scenario: baseline).

Let us assume that a manager is not satisfied with the low level of maintainability. In addition to the previously entered input data, a constraint is entered into the model to analyse how to achieve a high level of maintainability (maintainability='high' → mean(maintainability)=0.7). As shown in Fig. 5, scenario: revision 1, the model predicts that this target is achievable with increased implementation effort and testing effort (although the required increase in testing effort is very small). The model also predicts that the level of performance efficiency is expected to be lower, due to the negative relationship between maintainability and performance efficiency (Fig. 2).
Let us further assume that, due to limited resources, not only is an increase of effort impossible, but effort even has to be reduced to the level 'low' for implementation and testing. In that case the level of performance efficiency is expected to decrease further (scenario: revision 2).
It is possible to perform various other simulations similar to Simulation 4, using the model for what-if, trade-off and goal-seeking analyses in decision support. Such simulations may involve more steps and more variables; they will be performed in the future to further validate the model's correctness and usefulness.
6 Calibration and Enhancement Options
The proposed model has a structure that enables relatively easy calibration. As the variables are defined using expressions, the calibration requires setting appropriate parameters in these expressions:
- the values of weights in wmean functions – higher value for weight indicates stronger impact of particular variable on the aggregated value;
- the value of the variance in TNormal expressions (second parameter) – a value closer to zero indicates a stronger relationship, while higher values indicate weaker ones. Note that since ranked variables are internally defined over the range (0, 1), a variance of 0.001 typically indicates a very strong relationship and 0.01 a medium one.
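The variance rule above can be checked numerically: a child defined as TNormal(parent, v) with a smaller v correlates more strongly with its parent. This sketch uses the same assumed truncated-normal semantics as before.

```python
# Smaller TNormal variance => child tracks its parent more tightly.
import random

def tnormal(mean, variance, lo=0.0, hi=1.0):
    """Truncated normal via rejection sampling (assumed semantics)."""
    sd = variance ** 0.5
    while True:
        x = random.gauss(mean, sd)
        if lo <= x <= hi:
            return x

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
parents = [random.uniform(0.1, 0.9) for _ in range(2000)]
strong = [tnormal(p, 0.001) for p in parents]   # "very strong" relationship
medium = [tnormal(p, 0.01) for p in parents]    # "medium" relationship
c_strong, c_medium = corr(parents, strong), corr(parents, medium)
print(round(c_strong, 2), round(c_medium, 2))   # smaller variance, tighter tracking
```

Calibration in this scheme thus reduces to tuning a single variance per link rather than re-filling a full conditional probability table.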
Apart from calibration focused on the defining parameters for the existing structure, the model may be enhanced to meet specific needs:
- by adding new sub-features to features or new measures to sub-features – such change requires only the definition of newly added variable, no change in definitions of existing variables is necessary;
- by adding new controllable factors – such a change requires changing the definition of the "effectiveness" variable for the specific phase, typically by setting new weights in the wmean function;
- by adding a new quality feature – this requires the most work because it involves defining its sub-features and measures, its relationships with other features, and the relationships between the controllable factors and the new feature.
Currently the model does not contain many causal relationships. This may reduce the analytical potential. Defining the model using more causal relationships may increase analytical potential but may also make the model more difficult in calibration. Thus, this issue needs to be investigated carefully when building a tailored model.
The model enables static analysis, i.e. for the assumed point of time. Because both the project and the development environment evolve over time, it may be useful to reflect such dynamics in the model. However, such enhancement requires significantly more time spent on modelling and makes the calibration more difficult because more parameters need to be set.
7 Possible Use in Other Fields
The proposed predictive model is focused on the software quality area. Such approach may also be used in other fields/domains because the general constraints on model structure may also apply there. Possible use outside software quality area depends on the following conditions:
- the problem under investigation is complex but can be divided into a set of sub-problems,
- there is no, or not enough, empirical data from which to generate a reliable model,
- a domain expert (or a group of experts) is able to define, calibrate and enhance the model,
- the relationships are stochastic and non-linear in nature,
- there is a need for high analytical potential.
However, even when these conditions are met, use in other fields may be difficult. This happens when there is a high number of additional deterministic relationships that have to be reflected in the model with high precision. Possible use in other fields will be investigated in detail in the future.
8 Conclusions
This paper introduced a new model for integrated software quality prediction. Formally, the model is a Bayesian net. It contains a wide range of quality aspects (features, sub-features, measures) together with the relationships among them. To make the model useful in decision support it also contains a set of controllable factors (currently effort and process quality in different development phases).
The model encodes knowledge on software quality published in the literature as well as personal expert judgment. To prepare the model for use in a target environment it is necessary to calibrate it, for example using questionnaires. The model may also be enhanced to meet specific needs. The model was partially validated for correctness and usefulness in providing information for decision support.
In the future, such a model may become the heart of an intelligent system for analysing and managing software quality. Achieving this would require a higher level of automation, for example in calibration and enhancement through automated extraction of relevant data and knowledge. In addition, the model would have to reflect more details of the development process, project or software architecture.
The stages of building customized models will be formalized in a framework supporting the proposed approach. This framework may also be used to build models with a similar general structure in fields other than software quality.
Acknowledgement: This work has been supported by the research funds from the Ministry of Science and Higher Education as a research grant no. N N111 291738 for the years 2010-2012.
References
Documenting the Mined Feature Implementations from the Object-oriented Source Code of a Collection of Software Product Variants
Ra’Fat Ahmad Al-Msie’Deen, Abdelhak-Djamel Seriai, Marianne Huchard, Christelle Urtado, Sylvain Vauttier
HAL Id: lirmm-01003860
https://hal-lirmm.ccsd.cnrs.fr/lirmm-01003860
Submitted on 10 Jun 2014
Abstract
Companies often develop a set of software variants that share some features and differ in other ones to meet specific requirements. To exploit existing software variants and build a Software Product Line (SPL), a Feature Model (FM) of this SPL must be built as a first step. To do so, it is necessary to mine optional and mandatory features in addition to associating the FM with its documentation. In our previous work, we mined a set of feature implementations as identified sets of source code elements. In this paper, we propose a complementary approach, which aims to document the mined feature implementations by giving them names and descriptions, based on the source code elements that form feature implementations and on the use-case diagrams of software variants. The novelty of our approach is that it exploits commonality and variability across software variants, at the feature implementation and use-case levels, to apply Information Retrieval methods in an efficient way. Considering commonality and variability across software variants enables us to cluster the use-cases and feature implementations into disjoint, minimal clusters based on Relational Concept Analysis (RCA), where each cluster is disjoint from the others and consists of a minimal subset of feature implementations and their corresponding use-cases. Then, we use Latent Semantic Indexing (LSI) to define a similarity measure that enables us to identify which use-cases characterize the name and description of the features.
Keywords: Software variants, Software Product Line, Feature documentation, Code comprehension, Formal Concept Analysis, Relational Concept Analysis, Use-case diagram, Latent Semantic Indexing.
1 Introduction
Software variants often evolve from an initial product, developed for and successfully used by the first customer. These product variants usually share some common features and differ regarding others. As the number of features and the number of software variants grows, it is worth re-engineering them into a Software Product Line (SPL) for systematic reuse. The first step towards re-engineering software variants into an SPL is to mine the Feature Model (FM) of these variants. To obtain such an FM, common and optional features of the software variants have to be identified.
This consists in identifying, among source code elements, groups of such elements that implement candidate features, and in associating them with their documentation (i.e., a feature name and description). In our previous work, we proposed an approach for mining features from the object-oriented source code of software variants (the REVPLINE approach). REVPLINE allows us to mine functional features as sets of Source Code Elements (SCEs) (e.g., package, class, attribute, method or method body elements).
To assist a human expert in documenting the mined feature implementations, we propose an automatic approach which associates names and descriptions with them, using the source code elements of feature implementations and the use-case diagrams of software variants. Compared with existing work that documents source code (cf. Section 6), the novelty of our approach is that we exploit commonality and variability across software variants at the feature implementation and use-case levels, to apply Information Retrieval (IR) methods in an efficient way. Considering commonality and variability across software variants enables us to cluster the use-cases and feature implementations into disjoint, minimal clusters based on Relational Concept Analysis (RCA), where each cluster is disjoint from the others and consists of a minimal subset of feature implementations and their corresponding use-cases. Then, we use Latent Semantic Indexing (LSI) to define a similarity measure that enables us to identify which use-cases characterize the name and description of each feature implementation by using Formal Concept Analysis (FCA).
The remainder of this paper is structured as follows: Section 2 briefly presents the background. Section 3 shows an overview of the feature documentation process. Section 4 presents the feature documentation process step by step. Section 5 describes the experimentation. Section 6 discusses the related work. Finally, section 7 concludes and provides perspectives for this work.
2 Background
This section provides a glimpse on FCA, RCA and LSI. It also shortly describes the example that illustrates the remaining sections of the paper.
2.1 Formal and Relational Concept Analysis
FCA [3] is a classification technique that extracts a partially ordered set of concepts (the concept lattice) from a dataset composed of objects described by attributes (the formal context). A concept is composed of two sets: an object set called the concept’s extent and an attribute set called the concept’s intent. The extent is the maximal set of objects which share the maximal set of attributes of the intent (cf. Section 4.3). We use the concepts of the AOC-poset, namely the concepts that introduce at least one object or one attribute. The interested reader can find more information about our use of FCA in [2].
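The extent/intent closure behind FCA can be made concrete with a minimal Python sketch (purely illustrative — not the eRCA implementation the authors used; the object and attribute names are borrowed loosely from the MTG example):

```python
from itertools import combinations

# Toy formal context: objects are use-cases, attributes are the
# software variants they occur in (names are illustrative).
context = {
    "view_map":       {"MTG_1", "MTG_2", "MTG_3", "MTG_4"},
    "download_map":   {"MTG_2"},
    "satellite_view": {"MTG_3"},
}

def common_attrs(ctx, objs):
    """Attributes shared by every object in objs (all attributes if objs is empty)."""
    if not objs:
        return set.union(*ctx.values())
    return set.intersection(*(ctx[o] for o in objs))

def objects_with(ctx, attrs):
    """Maximal set of objects possessing every attribute in attrs."""
    return {o for o, a in ctx.items() if attrs <= a}

def formal_concepts(ctx):
    """Enumerate concepts (extent, intent): the extent is the maximal object
    set sharing the maximal attribute set (the intent)."""
    seen, out = set(), []
    for r in range(len(ctx) + 1):
        for objs in combinations(ctx, r):
            intent = common_attrs(ctx, set(objs))
            extent = frozenset(objects_with(ctx, intent))
            if extent not in seen:
                seen.add(extent)
                out.append((extent, intent))
    return out
```

For instance, the concept whose extent is {view_map, download_map} has intent {MTG_2}: both use-cases occur in variant MTG_2, and no further use-case does.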
RCA [4] is an iterative version of FCA in which the objects are classified not only according to the attributes they share, but also according to the relations between them (cf. Section 4.1). Data are encoded into a Relational Context Family (RCF), which is a pair \((K, R)\), where \(K\) is a set of formal (object-attribute) contexts \(K_i = (O_i, A_i, I_i)\) and \(R\) is a set of relational (object-object) contexts \(r_{ij} \subseteq O_i \times O_j\), where \(O_i\) (domain of \(r_{ij}\)) and \(O_j\) (range of \(r_{ij}\)) are the object sets of the contexts \(K_i\) and \(K_j\), respectively (cf. Table 1). A RCF is used in an iterative process to generate, at each step, a set of concept lattices. Firstly, concept lattices are built, using the formal contexts only. Then, in the following steps, a scaling mechanism translates the links between objects into conventional FCA attributes and derives a collection of lattices whose concepts are linked by relations (cf. Figure 2). The interested reader can find more information about RCA in [4]. For applying FCA and RCA we used the Eclipse eRCA platform.
2.2 Latent Semantic Indexing
LSI is an advanced Information Retrieval (IR) method [1]. LSI assumes that software artifacts can be regarded as textual documents. Occurrences of terms are extracted from the documents in order to calculate similarities between them and then to classify together a set of similar documents as related to a common concept (cf. Section 4.2). The heart of LSI is the singular value decomposition technique. This technique is used to mitigate noise introduced by stop words (like “the”, “an”, “above”) and to overcome two issues arising in natural language processing: synonymy and polysemy. The effectiveness of IR methods is usually measured by metrics including recall, precision and F-measure. In our context, for a given use-case (query), recall is the percentage of correctly retrieved feature implementations (documents) with respect to the total number of relevant feature implementations, while precision is the percentage of correctly retrieved feature implementations with respect to the total number of retrieved feature implementations. F-measure defines a trade-off between precision and recall, so that it gives a high value only in cases where both recall and precision are high. All measures have values in \([0\%, 100\%]\). If recall equals 100\%, all relevant feature implementations (documents) are retrieved; however, some retrieved feature implementations might not be relevant. If precision equals 100\%, all retrieved feature implementations are relevant; nevertheless, some relevant feature implementations might not be retrieved. If F-measure equals 100\%, all relevant feature implementations are retrieved and no irrelevant ones are. The interested reader can find more information about our use of LSI in [2].
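The three metrics can be sketched directly from their definitions (a minimal sketch; the set names are hypothetical):

```python
def retrieval_metrics(retrieved, relevant):
    """Recall, precision and F-measure (as percentages) for one query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)          # correctly retrieved documents
    recall = 100.0 * hits / len(relevant) if relevant else 0.0
    precision = 100.0 * hits / len(retrieved) if retrieved else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return recall, precision, f_measure
```

Retrieving two feature implementations when only one is relevant reproduces the pattern of several Table 2 rows: recall 100%, precision 50%, F-measure ≈ 66.7% (reported as 66% in the table).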
2.3 The Mobile Tourist Guide Example
We consider in this example four software variants of a Mobile Tourist Guide (MTG) application. These applications enable users to inquire about some tourist information on mobile devices. MTG_1 supports core MTG functionalities: view map, place marker on a map, view direction, launch Google map and show street view. MTG_2 has the core MTG functionalities and a new functionality called download map from Google. MTG_3 has the core MTG functionalities and a new functionality called show satellite view. MTG_4 supports search for nearest attraction, show next attraction and retrieve data functionalities, together with the core ones.
3 The Feature Documentation Process
Our goal is to document the mined feature implementations by using the use-case diagrams of these variants. In our work, we rely on the same assumption as in the work of [5] stating that each use-case represents a feature. The feature documentation process aims at identifying which
use-cases characterize the name and description of each feature implementation. We rely on lexical similarity to identify the use-cases that characterize the name and description of feature implementations. The performance and efficiency of the IR technique depend on the size of the search space. In order to apply LSI, we take advantage of the commonality and variability between software variants to group feature implementations and the corresponding use-cases in the software family into disjoint, minimal clusters (e.g., Concept_1 of Figure 2). We call each disjoint minimal cluster a Hybrid Block (HB). After reducing the search space to a set of hybrid blocks, we rely on textual similarity to identify, from each hybrid block, which use-cases depict the name and description of each feature implementation.
For a product variant, our approach takes as inputs the set of use-cases that documents the variant and the set of mined feature implementations that are produced by REVPLINE. Each use-case is identified by its name and description. This information represents domain knowledge that is usually available from software variants documentation (i.e., requirement model). In our work, the use-case description consists of a short paragraph in a natural language. Our approach provides as its output a name and description for each feature implementation based on a use-case name and description. Each use-case is mapped into a functional feature thanks to our assumption. If two or more use-cases have a relation with the same feature implementation, we consider them all as the documentation for this feature implementation.
Figure 1 shows an overview of our feature documentation process. The first step of this process aims at identifying hybrid blocks based on RCA (cf. Section 4.1). In the second step, LSI is applied to determine the similarity between use-cases and feature implementations (cf. Section 4.2). This similarity measure is used to identify use-case clusters based on FCA. Each cluster identifies the name and description of a feature implementation (cf. Section 4.3).
4 Feature Documentation Step by Step
In this section, we describe the feature documentation process step by step. According to our approach, we identify the feature name and its description in three steps as detailed in the following.
4.1 Identifying Hybrid Blocks of Use-cases and Feature Implementations via RCA
We use the existing use-case diagrams of software variants to document the feature implementations mined from those variants. In order to apply LSI in an efficient way, we need to reduce the search space for use-cases and feature implementations. Starting from existing feature implementations and use-cases, these elements are clustered into disjoint minimal clusters (i.e., hybrid blocks) to apply LSI. The search space is reduced based on the commonality and variability of software variants. RCA is used to cluster: the use-cases and feature implementations common to all software variants; the use-cases and feature implementations that are shared by a set of software variants, but not all variants; the use-cases and feature implementations that are held by a single variant.
A RCF for feature documentation is automatically generated from the use-case diagrams and the mined feature implementations associated with the software variants. The RCF corresponding to our approach contains two formal contexts and one relational context, as illustrated in Table 1. The first formal context represents the use-case diagrams; the second represents feature implementations. In the formal context of use-case diagrams, objects are use-cases and attributes are software variants. In the formal context of feature implementations, objects are feature implementations and attributes are software variants. The relational context (i.e., appears-with) indicates which use-cases appear in the same software variants as which feature implementations.
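A fragment of such an RCF can be encoded as plain Python mappings (a sketch with invented names; the appears-with relation is read here as "occurs in exactly the same variants", which is one plausible interpretation of the paper's description):

```python
# Hypothetical formal contexts: each object is mapped to the set of
# software variants (attributes) that contain it.
usecase_ctx = {
    "view_map":     {"MTG_1", "MTG_2", "MTG_3", "MTG_4"},
    "download_map": {"MTG_2"},
}
feature_ctx = {
    "FI_core":     {"MTG_1", "MTG_2", "MTG_3", "MTG_4"},
    "FI_download": {"MTG_2"},
}

def appears_with(usecases, features):
    """Relational (object-object) context: link a use-case to a feature
    implementation when both occur in the same set of variants."""
    return {(u, f)
            for u, u_variants in usecases.items()
            for f, f_variants in features.items()
            if u_variants == f_variants}
```

With this data, view_map is related to FI_core (both occur in all four variants) but not to FI_download.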
For the RCF presented in Table 1, a close-up view of two lattices of the Concept Lattice Family (CLF) is represented in Figure 2. As an example of hybrid block we can see in Figure 2 a set of use-cases (in the extent of Concept_1 of the Use_case_Diagrams lattice) that always appear with a set of feature implementations (in the extent of Concept_6 of the Feature_Implementations lattice). As shown in Figure 2, RCA allows us to reduce the search space by exploiting
---
\(^3\)Source code: [https://code.google.com/p/rcafcfa/](https://code.google.com/p/rcafcfa/)
---
commonality and variability across software variants. In our work, we filter the CLF from bottom to top to obtain a set of hybrid blocks. Figure 2 shows an example of a hybrid block (the dashed block).
4.2 Measuring the Lexical Similarity Between Use-cases and Feature Implementations via LSI
Based on the previous step, each hybrid block consists of a set of use-cases and a set of feature implementations. We need to identify from each hybrid block which use-cases characterize the name and description of each feature implementation. To do so, we use textual similarity between use-cases and feature implementations. This similarity measure is calculated using LSI. We rely on the fact that a use-case corresponding to the feature implementation is supposed to be lexically closer to this feature implementation than to the other feature implementations. Similarity between use-cases and feature implementations in the hybrid blocks is computed in three steps as detailed below.
4.2.1 Building the LSI Corpus
In order to apply LSI, we build a corpus that represents a collection of documents and queries. In our work, each use-case name and description in the hybrid block represents a query and each feature implementation represents a document. This document contains all the segments of SCE names as a result of splitting words into terms using the Camel-case technique. Regardless of word location (first, middle or last) in the SCE name, we store all words in the document. For example, for the SCE name ManualTestWrapper all words are important: manual, test and wrapper. We apply the same technique to all feature implementations. Our approach creates a query for each use-case. This query contains the use-case name and its description. We apply the same process to all use-cases. To be processed, documents and queries must be normalized as follows: stop words, articles, punctuation marks, or numbers are removed; text is tokenized and lower-cased; text is split into terms; stemming is performed (e.g., removing word endings); terms are sorted alphabetically. We use WordNet\(^4\) to do some simple preprocessing (e.g., stemming and removal of stop words). The most important parameter of LSI is the number of term-topics (i.e., k-Topics) chosen. A term-topic is a collection of terms that co-occur often in the documents of the corpus, for example \{user, account, password, authentication\}. In our work, the number of k-Topics is equal to the number of feature implementations for each corpus.
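The camel-case splitting and normalization steps described above can be sketched as follows (a minimal sketch: the stop-word list is a tiny illustrative stand-in, and the WordNet-based stemming is omitted):

```python
import re

STOP_WORDS = {"the", "an", "a", "of", "to", "is"}   # tiny illustrative list

def camel_split(name):
    """Split a source-code element name on camel-case boundaries."""
    return re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", name)

def normalize(text):
    """Tokenize, split camel-case, lower-case, drop stop words and numbers,
    and sort the remaining terms alphabetically (stemming omitted here)."""
    terms = set()
    for token in re.findall(r"[A-Za-z]+", text):
        terms.update(w.lower() for w in camel_split(token))
    return sorted(t for t in terms if t not in STOP_WORDS)
```

Applied to the SCE name ManualTestWrapper from the text, this yields the three terms manual, test and wrapper.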
4.2.2 Building the Term-document and the Term-query Matrices for Each Hybrid Block
All hybrid blocks are considered and the same processes are applied to them. The term-document matrix is of size \(m \times n\), where \(m\) is the number of terms extracted from feature implementations and \(n\) is the number of feature implementations (i.e., documents) in a corpus. The matrix values indicate the number of occurrences of a term in a document, according to a specific weighting scheme. In our work, terms are weighted with the TF-IDF function (the most common weighting scheme)\(^5\). The term-query matrix is of size \(m \times n\), where \(m\) is the number of terms that are extracted from use-cases and \(n\) is the number of use-cases (i.e., queries) in a corpus. An entry of the term-query matrix gives the weight of the \(i^{th}\) term in the \(j^{th}\) query.
---
\(^4\)Source code: [https://code.google.com/p/fecola/](https://code.google.com/p/fecola/)
\(^5\)[http://wordnet.princeton.edu/](http://wordnet.princeton.edu/)
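The term-document weighting of Section 4.2.2 can be sketched with a plain tf·idf variant (TF-IDF has several variants; this sketch uses raw term frequency times the log inverse document frequency, and the document names are invented):

```python
import math
from collections import Counter

def tfidf_matrix(docs):
    """docs maps a document name to its list of terms; returns a sparse
    {(term, doc): weight} term-document matrix using tf * idf."""
    n = len(docs)
    df = Counter()                       # document frequency of each term
    for terms in docs.values():
        df.update(set(terms))
    weights = {}
    for doc, terms in docs.items():
        for term, tf in Counter(terms).items():
            weights[(term, doc)] = tf * math.log(n / df[term])
    return weights
```

A term occurring in every document (here "map") gets weight 0, while a term specific to one document is weighted up.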
4.2.3 Building the Cosine Similarity Matrix
Similarity between use-cases and feature implementations in each hybrid block is described by a cosine similarity matrix, whose columns (documents) represent vectors of feature implementations and whose rows (queries) represent vectors of use-cases. The textual similarity between documents and queries is measured by the cosine of the angle between their corresponding vectors [2].
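The cosine measure itself is straightforward (a minimal sketch over dense term-weight vectors):

```python
import math

def cosine(u, v):
    """Cosine of the angle between two equal-length term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

Identical directions give 1.0, orthogonal vectors give 0.0, and partial overlap falls in between.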
4.3 Identifying Feature Name via FCA
Based on the cosine similarity matrix we use FCA to identify, from each hybrid block of use-cases and feature implementations, which elements are similar. To transform the (numerical) cosine similarity matrices into (binary) formal contexts, we use a 0.7 threshold (after having tested many threshold values). This means that only pairs of use-cases and feature implementations having a calculated similarity greater than or equal to 0.70 are considered similar.
For example, for the hybrid block Concept_1 of Figure 4, the number of term-topics of LSI is equal to 5. In the formal context associated with this hybrid block, the use-case “Launch Google Map” is linked to the feature implementation “Feature Implementation_1” because their similarity equals 0.86, which is greater than the threshold. However, the use-case “View Direction” and the feature implementation “Feature Implementation_5” are not linked because their similarity equals 0.10, which is less than the threshold. The resulting AOC-poset is composed of concepts whose extent represents the use-case name and whose intent represents the feature implementation.
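The thresholding step that turns the numerical similarity matrix into a binary formal context can be sketched with the paper's own example values:

```python
# Similarity values taken from the paper's example; the function is a sketch.
similarity = {
    ("Launch Google Map", "Feature Implementation_1"): 0.86,
    ("View Direction",    "Feature Implementation_5"): 0.10,
}

def to_formal_context(sim, threshold=0.70):
    """Binarize a cosine similarity matrix: keep only the (use-case,
    feature implementation) pairs whose similarity reaches the threshold."""
    return {pair for pair, s in sim.items() if s >= threshold}
```

The 0.86 pair survives the 0.70 threshold and becomes a cross in the formal context; the 0.10 pair does not.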
For the MTG example, the AOC-poset of Figure 5 shows five non-comparable concepts (that correspond to five distinct features) mined from a single hybrid block (Concept_1 from Figure 2). The same feature documentation process is used for each hybrid block.
5 Experimentation
To validate our approach, we ran experiments on two open-source Java systems: Mobile media software variants (small systems) [6] and ArgoUML-SPL (large systems) [7]. We used 4 variants of Mobile media and 10 of ArgoUML.
The advantage of having two case studies is that they implement variability at different levels: class and method levels. In addition, these case studies are well documented and their feature implementations, use-case diagrams and FMs are available for comparison of our results and validation of our proposal. Table 2 summarizes the obtained results.
Table 2: Features documented from case studies.
<table>
<thead>
<tr>
<th>#</th>
<th>Feature Name</th>
<th>Hybrid Block</th>
<th>k-TOPICS</th>
<th>Recall</th>
<th>Precision</th>
<th>F-Measure</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Delete Album</td>
<td>HB_1</td>
<td>4</td>
<td>100%</td>
<td>100%</td>
<td>100%</td>
</tr>
<tr>
<td>2</td>
<td>Delete Photo</td>
<td>HB_1</td>
<td>4</td>
<td>100%</td>
<td>50%</td>
<td>66%</td>
</tr>
<tr>
<td>3</td>
<td>Add Album</td>
<td>HB_1</td>
<td>4</td>
<td>100%</td>
<td>100%</td>
<td>100%</td>
</tr>
<tr>
<td>4</td>
<td>Add Photo</td>
<td>HB_2</td>
<td>4</td>
<td>100%</td>
<td>50%</td>
<td>66%</td>
</tr>
<tr>
<td>5</td>
<td>Exception handling</td>
<td>HB_3</td>
<td>1</td>
<td>100%</td>
<td>100%</td>
<td>100%</td>
</tr>
<tr>
<td>6</td>
<td>Count Photo</td>
<td>HB_3</td>
<td>3</td>
<td>100%</td>
<td>50%</td>
<td>66%</td>
</tr>
<tr>
<td>7</td>
<td>View Sorted Photos</td>
<td>HB_3</td>
<td>3</td>
<td>100%</td>
<td>50%</td>
<td>66%</td>
</tr>
<tr>
<td>8</td>
<td>Edit Label</td>
<td>HB_3</td>
<td>3</td>
<td>100%</td>
<td>100%</td>
<td>100%</td>
</tr>
<tr>
<td>9</td>
<td>Set Favourites</td>
<td>HB_3</td>
<td>2</td>
<td>100%</td>
<td>50%</td>
<td>66%</td>
</tr>
<tr>
<td>10</td>
<td>View Favourites</td>
<td>HB_3</td>
<td>2</td>
<td>100%</td>
<td>50%</td>
<td>66%</td>
</tr>
</tbody>
</table>
For the two case studies presented, we observe that the recall value is 100% for all documented features. The recall values are an indicator of the efficiency of our approach. The precision values lie between 50% and 100%, which is high. F-measure values depend on precision and recall; they are high too, between 66% and 100% for the documented features. Recall is 100% in all cases and precision is either 100% or 50%; this is due to the similarity threshold (0.70) and to the search space reduction. In most cases, a hybrid block contains between 1 and 4 use-cases and feature implementations. Another reason for this good result is that a common vocabulary is used in the use-case descriptions and feature implementations, so lexical similarity was a suitable tool. In our work we cannot use a fixed number of topics for LSI because the hybrid blocks (clusters) have different sizes.
The k-TOPICS column in Table 2 gives the number of term-topics. All feature names produced by our approach (the Feature Name column of Table 2) are use-case names. For example, in the FM of Mobile media [6] there is a feature called sorting. The name proposed by our approach for this feature is view sorted photos and its description is "the device sorts the photos based on the number of times photo has been viewed".
---
\(^6\)Case studies and code: [http://www.lirmm.fr/CaseStudy](http://www.lirmm.fr/CaseStudy)
As a limitation of our approach, developers might not use the same vocabulary to name SCEs and use-cases across software variants. This means that lexical similarity may not be reliable in all cases (or should be combined with other techniques) to identify the relationship between use-cases and feature implementations. Furthermore, using FCA as a clustering technique also has limits: FCA deals with binary formal contexts. This affects the quality of the result, since a similarity value of 0.99 (resp. 0.69) is treated the same as a similarity value of 0.70 (resp. 0). Selecting the appropriate number of dimensions (K) for the LSI representation is an open research question.
6 Related Work
Most existing approaches are designed to extract labels, names, topics or identify code to use-case traceability links in a single software system. In the context of feature documentation, most existing approaches manually assign feature names (without any description) to feature implementations. Conversely our approach is designed to automatically assign a name and a description to each feature implementation in a set of software variants based on several techniques (FCA, RCA and LSI). Feature documentation is inferred from use case names and descriptions.
Ziadi et al. [8] propose an approach to identify features across software variants. In their work, they manually create feature names. In our previous work [2], we manually propose feature names for the mined feature implementations. Bragança and Machado [5] describe an approach for automating the process of transforming UML use-cases into FMs. In their work, each use-case is mapped to a feature. The identification of relationships (i.e., traceability links) between use-case diagrams and source code for a single software is the subject of the work by Grechanik et al. [9]. Kuhn et al. [10] present a lexical approach that uses the log-likelihood ratios of word frequencies to automatically provide labels for components of single software. Xue et al. [1] propose an automatic approach to identify the code-to-feature traceability link for a collection of software product variants.
7 Conclusion and Perspectives
In this paper, we proposed an approach for documenting the mined feature implementations of a set of software variants. We exploit commonality and variability between software variants at the feature implementation and use-case levels to apply IR methods in an efficient way, in order to automatically document the mined features. We have implemented our approach and evaluated its results on two case studies. The results of this evaluation showed that most of the features were documented correctly. Regarding future work, we plan to use the mined and documented features to automatically build the relations between the features of an FM (i.e., reverse engineering FMs).
Acknowledgments: This work has been supported by project CUTTER ANR-10-BLAN-0219.
References
Chapter 4
Data Transfers, Addressing, and Arithmetic
4.1 Data Transfer Instructions 79
4.1.1 Introduction 79
4.1.2 Operand Types 80
4.1.3 Direct Memory Operands 80
4.1.4 MOV Instruction 81
4.1.5 Zero/Sign Extension of Integers 82
4.1.6 LAHF and SAHF Instructions 84
4.1.7 XCHG Instruction 84
4.1.8 Direct-Offset Operands 84
4.1.9 Example Program (Moves) 85
4.1.10 Section Review 86
4.2 Addition and Subtraction 87
4.2.1 INC and DEC Instructions 87
4.2.2 ADD Instruction 87
4.2.3 SUB Instruction 88
4.2.4 NEG Instruction 88
4.2.5 Implementing Arithmetic Expressions 89
4.2.6 Flags Affected by Addition and Subtraction 89
4.2.7 Example Program (Addsub3) 92
4.2.8 Section Review 93
4.3 Data-Related Operators and Directives 94
4.3.1 OFFSET Operator 94
4.3.2 ALIGN Directive 95
4.3.3 PTR Operator 95
4.3.4 TYPE Operator 96
4.3.5 LENGTHOF Operator 97
4.3.6 SIZEOF Operator 97
4.3.7 LABEL Directive 97
4.3.8 Section Review 98
4.4 Indirect Addressing 99
4.4.1 Indirect Operands 99
4.4.2 Arrays 100
4.4.3 Indexed Operands 101
4.4.4 Pointers 102
4.4.5 Section Review 103
4.5 JMP and Loop Instructions 104
4.5.1 JMP Instruction 104
4.5.2 LOOP Instruction 105
4.5.3 Summing an Integer Array 106
4.5.4 Copying a String 106
4.5.5 Section Review 107
4.6 Chapter Summary 108
4.7 Programming Exercises 109
Chapter 4
Data Transfers, Addressing, and Arithmetic
4.1 Data Transfer Instructions 79
4.1.1 Introduction 79
• This chapter introduces a great many details, highlighting a fundamental difference between assembly language and high-level language.
4.1.2 Operand Types
• Three basic types of operands:
o Immediate: a constant integer (8, 16, or 32 bits)
▪ value is encoded within the instruction
o Register: the name of a register
▪ register name is converted to a number and encoded within the instruction
o Memory: reference to a location in memory
▪ memory address is encoded within the instruction, or a register holds the address of a memory location
<table>
<thead>
<tr>
<th>Operand</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>r8</code></td>
<td>8-bit general-purpose register: AH, AL, BH, BL, CH, CL, DH, DL</td>
</tr>
<tr>
<td><code>r16</code></td>
<td>16-bit general-purpose register: AX, BX, CX, DX, SI, DI, SP, BP</td>
</tr>
<tr>
<td><code>r32</code></td>
<td>32-bit general-purpose register: EAX, EBX, ECX, EDX, ESI, EDI, ESP, EBP</td>
</tr>
<tr>
<td><code>reg</code></td>
<td>any general-purpose register</td>
</tr>
<tr>
<td><code>sreg</code></td>
<td>16-bit segment register: CS, DS, ES, FS, GS</td>
</tr>
<tr>
<td><code>imm</code></td>
<td>8-, 16-, or 32-bit immediate value</td>
</tr>
<tr>
<td><code>imm8</code></td>
<td>8-bit immediate byte value</td>
</tr>
<tr>
<td><code>imm16</code></td>
<td>16-bit immediate word value</td>
</tr>
<tr>
<td><code>imm32</code></td>
<td>32-bit immediate doubleword value</td>
</tr>
<tr>
<td><code>r/m8</code></td>
<td>8-bit operand which can be an 8-bit general register or memory byte</td>
</tr>
<tr>
<td><code>r/m16</code></td>
<td>16-bit operand which can be a 16-bit general register or memory word</td>
</tr>
<tr>
<td><code>r/m32</code></td>
<td>32-bit operand which can be a 32-bit general register or memory doubleword</td>
</tr>
<tr>
<td><code>mem</code></td>
<td>an 8-, 16-, or 32-bit memory operand</td>
</tr>
</tbody>
</table>
4.1.3 Direct Memory Operands
- Direct Memory Operands
- A direct memory operand is a named reference to storage in memory
- The named reference (label) is automatically dereferenced by the assembler. The brackets imply a dereference operation.
```
.data
var1 BYTE 10h
.code
mov al, var1 ; AL = 10h
mov al, [var1] ; AL = 10h
```
4.1.4 MOV Instruction
- MOV Instruction
- Move (copy) from source to destination
- Syntax:
**MOV destination, source**
- *destination* operand’s contents change
- *source* operand’s contents do not change
- Both operands must be the same size
- Both operands cannot be memory operands
- CS, EIP, and IP cannot be the destination
- No immediate to segment moves
- Here is a list of the general variants of MOV, excluding segment registers:
MOV reg, reg
MOV mem, reg
MOV reg, mem
MOV mem, imm
MOV reg, imm
- Examples:
```
.data
count BYTE 100
wVal WORD 2
.code
mov bl, count
mov ax, wVal
mov count, al
mov al, wVal ; error, AL is 8 bits
mov ax, count ; error, AX is 16 bits
mov eax, count ; error, EAX is 32 bits
.data
bVal BYTE 100
bVal2 BYTE ?
.code
mov bVal2, bVal ; error, memory-to-memory move not permitted
```
4.1.5 Zero/Sign Extension of Integers
- Zero Extension: MOVZX instruction
- When copying a smaller value into a larger destination, the MOVZX instruction fills (extends) the upper half of the destination with zeros
**FIGURE 4-1 Diagram of MOVZX ax, 8Fh.**
- The destination must be a register
```
mov bl, 10001111b
movzx ax, bl ; zero-extension: AX = 0000000010001111b
```
- Sign Extension: MOVSX instruction
- The MOVSX instruction fills the upper half of the destination with a copy of the source operand's sign bit
**FIGURE 4-2 Diagram of MOVSX ax, 8Fh.**
- The destination must be a register
```
mov bl, 10001111b
movsx ax, bl ; sign-extension: AX = 1111111110001111b
```
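The bit-level difference between the two extensions can also be sketched outside assembly. The following Python helpers are our own illustration (the names `movzx16`/`movsx16` are not from the text), mirroring how an 8-bit value is copied into a 16-bit destination:

```python
# Illustrative helpers mirroring MOVZX/MOVSX for an 8-bit source
# copied into a 16-bit destination (not MASM, just the bit logic).

def movzx16(value):
    """Zero-extend an 8-bit value to 16 bits: upper half filled with zeros."""
    return value & 0xFF

def movsx16(value):
    """Sign-extend an 8-bit value to 16 bits: upper half copies bit 7."""
    byte = value & 0xFF
    return byte | 0xFF00 if byte & 0x80 else byte

print(format(movzx16(0b10001111), "016b"))  # 0000000010001111
print(format(movsx16(0b10001111), "016b"))  # 1111111110001111
```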
4.1.6 LAHF and SAHF Instructions
- LAHF (load status flags into AH) instruction copies the low byte of the EFLAGS register into AH. The following flags are copied: Sign, Zero, Auxiliary Carry, Parity, and Carry.
```asm
.data
saveflags BYTE ?
.code
lahf ; load flags into AH
mov saveflags, ah ; save them in a variable
```
- SAHF (store AH into status flags) instruction copies AH into the low byte of the EFLAGS register. The following flags are affected: Sign, Zero, Auxiliary Carry, Parity, and Carry.
```asm
mov ah, saveflags ; load saved flags into AH
sahf ; copy into Flags register
```
4.1.7 XCHG Instruction
- XCHG Instruction exchanges the values of two operands
- At least one operand must be a register
- No immediate operands are permitted.
```asm
.data
var1 WORD 1000h
var2 WORD 2000h
.code
xchg ax,bx ; exchange 16-bit regs
xchg ah,al ; exchange 8-bit regs
xchg var1,bx ; exchange mem, reg
xchg eax,ebx ; exchange 32-bit regs
xchg var1,var2 ; error: two memory operands
```
4.1.8 Direct-Offset Operands
- Direct-Offset Operands
- A constant offset is added to a data label to produce an effective address (EA)
- The address is dereferenced to get the value inside its memory location
```asm
.data
arrayB BYTE 10h,20h,30h,40h, 50h
.code
mov al,arrayB+1 ; AL = 20h
mov al,[arrayB+1] ; alternative notation
mov al,arrayB+2 ; AL = 30h
```
4.1.9 Example Program (Moves)
- The following program demonstrates most of the data transfer examples from Section 4.1:
```
TITLE Data Transfer Examples (Moves.asm)
; Chapter 4 example. Demonstration of MOV and
; XCHG with direct and direct-offset operands.
; Last update: 06/01/2006
INCLUDE Irvine32.inc
.data
val1 WORD 1000h
val2 WORD 2000h
arrayB BYTE 10h,20h,30h,40h,50h
arrayW WORD 100h,200h,300h
arrayD DWORD 10000h,20000h
.code
main PROC
; MOVZX
mov bx,0A69Bh
movzx eax,bx ; EAX = 0000A69Bh
movzx edx,bl ; EDX = 0000009Bh
movzx cx,bl ; CX = 009Bh
; MOVSX
mov bx,0A69Bh
movsx eax,bx ; EAX = FFFFA69Bh
movsx edx,bl ; EDX = FFFFFF9Bh
mov bl,7Bh
movsx cx,bl ; CX = 007Bh
; Memory-to-memory exchange:
mov ax,val1 ; AX = 1000h
xchg ax,val2 ; AX = 2000h, val2 = 1000h
mov val1,ax ; val1 = 2000h
; Direct-Offset Addressing (byte array):
mov al,arrayB ; AL = 10h
mov al,[arrayB+1] ; AL = 20h
mov al,[arrayB+2] ; AL = 30h
; Direct-Offset Addressing (word array):
mov ax,arrayW ; AX = 100h
mov ax,[arrayW+2] ; AX = 200h
; Direct-Offset Addressing (doubleword array):
mov eax,arrayD ; EAX = 10000h
mov eax,[arrayD+4] ; EAX = 20000h
mov eax,[arrayD+TYPE arrayD] ; EAX = 20000h
exit
main ENDP
END main
```
4.2 Addition and Subtraction
4.2.1 INC and DEC Instructions
- INC and DEC Instructions
- Add 1, subtract 1 from destination operand, operand may be register or memory
- INC destination
Logic: destination ← destination + 1
- DEC destination
Logic: destination ← destination – 1
- INC and DEC Examples
```
.data
myWord WORD 1000h
.code
inc myWord ; 1001h
mov bx, myWord
dec myWord ; 1000h
```
4.2.2 ADD Instruction
- ADD Instruction
- ADD destination, source
Logic: destination ← destination + source
- Examples:
```
.data
var1 DWORD 10000h
var2 DWORD 20000h
.code
mov eax, var1 ; EAX = 00010000h
add eax, var2 ; EAX = 00030000h
```
4.2.3 SUB Instruction
- SUB Instructions
- SUB destination, source
Logic: destination ← destination – source
- Examples:
```
.data
var1 DWORD 30000h
var2 DWORD 10000h
.code
mov eax, var1 ; EAX = 00030000h
sub eax, var2 ; EAX = 00020000h
```
4.2.4 NEG Instruction
- NEG (negate) Instruction
- Reverses the sign of an operand
- Operand can be a register or memory operand
- Examples:
```
.data
valB BYTE -1
valW WORD +32767
.code
mov al, valB ; AL = -1
neg al ; AL = +1
neg valW ; valW = -32767
```
- NEG Instruction and the Flags
- The processor implements NEG using the following internal operation: SUB 0, operand (the operand is subtracted from zero)
- Any nonzero operand causes the Carry flag to be set
- Examples:
```
.data
valB BYTE 1, 0
valC SBYTE -128
.code
neg valB ; CF = 1, OF = 0
neg [valB + 1] ; CF = 0, OF = 0
neg valC ; CF = 1, OF = 1
```
4.2.5 Implementing Arithmetic Expressions
- Implementing Arithmetic Expressions
- Translate mathematical expressions into assembly language
- Example:
\[ Rval = -Xval + (Yval - Zval) \]
```
.data
Rval DWORD ?
Xval DWORD 26
Yval DWORD 30
Zval DWORD 40
.code
mov eax, Xval
neg eax ; EAX = -26, EAX = -Xval
mov ebx, Yval ; EBX = 30, EBX = Yval
sub ebx, Zval ; EBX = -10, EBX = (Yval - Zval)
add eax, ebx ; EAX = -36, EAX = -Xval + (Yval - Zval)
mov Rval, eax ; Rval = -36
```
4.2.6 Flags Affected by Addition and Subtraction
- Flags Affected by Arithmetic
- The ALU has a number of status flags that reflect the outcome of arithmetic (and bitwise) operations based on the contents of the destination operand.
- The MOV instruction never affects the flags.
- Essential flags:
- **Zero flag:** Set when destination operand equals zero
```
mov cx,1
sub cx,1 ; CX = 0, ZF = 1
mov ax,0FFh
inc ax ; AX = 0, ZF = 1
inc ax ; AX = 1, ZF = 0
```
**Note:** A flag is set when it equals 1
A flag is clear when it equals 0
- **Sign flag:** Set when the destination operand is negative
Clear when the destination is positive
```
mov cx,0
sub cx,1 ; CX = -1, SF = 1
add cx,2 ; CX = 1, SF = 0
```
**Note:** The sign flag is a copy of the destination's highest bit
- **Carry flag:** Set when unsigned destination operand value is out of range
```
mov al,7Fh
add al,1 ; AL = 80h, CF = 0
mov al,0FFh
add al,1 ; AL = 00h, CF = 1, **Too big**
mov al,1
sub al,2 ; AL = 0FFh, CF = 1, **Below zero**
```
- **Auxiliary Carry:** Set when carry out of bit 3 in the destination operand
```
mov al,0Fh
add al,1 ; AL = 10h, AC = 1
```
- **Parity flag:** Set when the least significant byte of the destination has even number of 1 bits.
```
mov al,10001100b
add al,00000010b ; AL = 10001110, PF = 1
sub al,10000000b ; AL = 00001110, PF = 0
```
o **Overflow flag**: Set when *signed destination* operand value is out of range
```
mov al, 7Fh
add al, 1 ; AL = 80h, OF = 1
```
*Note*: When adding two integers, the Overflow flag is only set when:
- Two positive operands are added and their sum is negative
- Two negative operands are added and their sum is positive
- A hardware viewpoint of signed and unsigned Integers
- All CPU instructions operate *exactly the same* on signed and unsigned integers
- The CPU *cannot* distinguish between signed and unsigned integers
- The programmers are solely responsible for using the correct data type with each instruction
- A hardware viewpoint of Overflow and Carry flags
- How the **ADD** instruction modifies OF and CF:
\[
\begin{align*}
\text{OF} &= (\text{carry out of the MSB}) \text{ XOR } (\text{carry into the MSB}) \\
\text{CF} &= (\text{carry out of the MSB})
\end{align*}
\]
- How the **SUB** instruction modifies OF and CF (NEG the source and ADD it to the destination):
\[
\begin{align*}
\text{OF} &= (\text{carry out of the MSB}) \text{ XOR } (\text{carry into the MSB}) \\
\text{CF} &= \text{INVERT}\,(\text{carry out of the MSB})
\end{align*}
\]
*Notation:*
- **MSB** = Most Significant Bit (high-order bit)
- **XOR** = eXclusive-OR operation
- eXclusive-OR operation only returns a 1 when its two input bits are different
- **NEG** = Negate (same as **SUB 0**, operand)
- **Examples**:
```
mov al, -128 ; AL = 10000000b
neg al ; AL = 10000000b, CF = 1, OF = 1
mov al, 80h ; AL = 10000000b
add al, 2 ; AL = 10000010b, CF = 0, OF = 0
mov al, 1 ; AL = 00000001b
sub al, 2 ; AL = 11111111b, CF = 1, OF = 0
mov al, 7Fh
add al, 2 ; AL = 10000001b, CF = 0, OF = 1
```
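The carry-out/carry-in formulas above can be checked mechanically. This Python sketch (our illustration; `add8_flags` is not part of the chapter) applies them to an 8-bit ADD:

```python
# Apply the hardware formulas for 8-bit ADD:
#   CF = carry out of the MSB
#   OF = (carry out of the MSB) XOR (carry into the MSB)

def add8_flags(a, b):
    total = (a & 0xFF) + (b & 0xFF)
    result = total & 0xFF
    carry_out = (total >> 8) & 1                      # carry out of bit 7
    carry_in = (((a & 0x7F) + (b & 0x7F)) >> 7) & 1   # carry into bit 7
    return result, carry_out, carry_out ^ carry_in    # (result, CF, OF)

print(add8_flags(0x7F, 2))   # 7Fh + 2: signed overflow, OF = 1, CF = 0
print(add8_flags(0xFF, 1))   # FFh + 1: unsigned overflow, CF = 1, OF = 0
```

This reproduces the flag values shown in the examples above.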
4.2.7 Example Program (Addsub3)
The following program implements various arithmetic expressions using the ADD, SUB, INC, DEC, and NEG instructions, and shows how certain status flags are affected:
```
TITLE Addition and Subtraction (AddSub3.asm)
; Chapter 4 example. Demonstration of ADD, SUB, INC, DEC, and NEG instructions, and how they affect the CPU status flags.
; Last update: 06/01/2006
INCLUDE Irvine32.inc
.data
Rval SDWORD ?
Xval SDWORD 26
Yval SDWORD 30
Zval SDWORD 40
.code
main PROC
; INC and DEC
mov ax,1000h
inc ax ; 1001h
dec ax ; 1000h
; Expression: Rval = -Xval + (Yval - Zval)
mov eax,Xval
neg eax ; -26
mov ebx,Yval
sub ebx,Zval ; -10
add eax,ebx
mov Rval, eax ; -36
; Zero flag example:
mov cx,1
sub cx,1 ; ZF = 1
mov ax,0FFFFh
inc ax ; ZF = 1
; Sign flag example:
mov cx,0
sub cx,1 ; SF = 1
mov ax,7FFFh
add ax,2 ; SF = 1
; Carry flag example:
mov al,0FFh
add al,1 ; CF = 1, AL = 00
; Overflow flag example:
mov al,+127
add al,1 ; OF = 1
mov al,-128
sub al,1 ; OF = 1
exit
main ENDP
END main
```
4.3 Data-Related Operators and Directives
4.3.1 OFFSET Operator
- OFFSET operator returns the distance, in bytes, of a label from the beginning of its enclosing segment
- Protected mode: offsets are 32 bits
- Real mode: offsets are 16 bits
- OFFSET Example
- Assume that the data segment begins at 00404000h
```
.data
bVal BYTE ?
wVal WORD ?
dVal DWORD ?
dVal2 DWORD ?
.code
mov esi,OFFSET bVal ; ESI = 00404000
mov esi,OFFSET wVal ; ESI = 00404001
mov esi,OFFSET dVal ; ESI = 00404003
mov esi,OFFSET dVal2 ; ESI = 00404007
```
- Relating to C/C++
- The value returned by OFFSET is a pointer
- Compare the following code written for both C++ and assembly language
```
// C++ version:
char array[1000];
char * p = array;
; Assembly version
.data
array BYTE 1000 DUP(?)
.code
mov esi,OFFSET array
```
4.3.2 ALIGN Directive
- The ALIGN directive aligns a variable on a byte, word, doubleword, or paragraph boundary.
- ALIGN Example
- Assume that the data segment begins at 00404000h
```
.data
bVal  BYTE ?   ; 00404000
ALIGN 2
wVal  WORD ?   ; 00404002
bVal2 BYTE ?   ; 00404004
ALIGN 4
dVal  DWORD ?  ; 00404008
dVal2 DWORD ?  ; 0040400C
```
4.3.3 PTR Operator
- PTR Operator
- Overrides the default type of a label (variable)
- Provides the flexibility to access part of a variable
- Must be used in combination with one of the standard assembly data types: BYTE, SBYTE, WORD, SWORD, DWORD, SDWORD, FWORD, QWORD, or TBYTE
- Little Endian Order
- Little endian order refers to the way Intel stores integers in memory.
- Multi-byte integers are stored in reverse order, with the least significant byte stored at the lowest address.
- For example, the doubleword `myDouble DWORD 12345678h` would be stored as the bytes 78h, 56h, 34h, 12h, at byte offsets 0 through 3
```
.data
myDouble DWORD 12345678h
.code
mov ax,myDouble ; error
mov ax,WORD PTR myDouble ; AX = 5678h
mov ax,WORD PTR [myDouble+2] ; AX = 1234h
mov al,BYTE PTR myDouble ; AL = 78h
mov al,BYTE PTR [myDouble+1] ; AL = 56h
mov al,BYTE PTR [myDouble+2] ; AL = 34h
```
- PTR operator can combine elements of a smaller data type and move them into a larger operand
```
.data
myBytes BYTE 12h,34h,56h,78h
.code
mov ax,WORD PTR [myBytes] ; AX = 3412h
mov ax,WORD PTR [myBytes+2] ; AX = 7856h
mov eax,DWORD PTR myBytes ; EAX = 78563412h
```
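The little-endian layout and the word/byte views above can be reproduced with Python's `struct` module. This is an analogy we added, not MASM:

```python
import struct

# myDouble DWORD 12345678h: least significant byte at the lowest address
dword = struct.pack("<I", 0x12345678)
print([hex(b) for b in dword])   # ['0x78', '0x56', '0x34', '0x12']

# The two WORD views, like WORD PTR myDouble / WORD PTR [myDouble+2]
low, high = struct.unpack("<HH", dword)
print(hex(low), hex(high))       # 0x5678 0x1234
```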
4.3.4 TYPE Operator
- TYPE operator returns the size, in bytes, of a single element of a data declaration
- TYPE Example:
```
.data
var1 BYTE ?
var2 WORD ?
var3 DWORD ?
var4 QWORD ?
.code ; TYPE
mov eax,TYPE var1 ; 1
mov eax,TYPE var2 ; 2
mov eax,TYPE var3 ; 4
mov eax,TYPE var4 ; 8
```
4.3.5 LENGTHOF Operator
- LENGTHOF operator counts the number of elements in a single data declaration
- LENGTHOF Example:
```
.data ; LENGTHOF
byte1 BYTE 10,20,30 ; 3
array1 WORD 30 DUP(?),0,0 ; 32
array2 WORD 5 DUP(3 DUP(?)) ; 15
array3 DWORD 1,2,3,4 ; 4
digitStr BYTE "12345678",0 ; 9
.code
mov ecx,LENGTHOF array1 ; 32
```
4.3.6 SIZEOF Operator
- SIZEOF Operator returns a value that is equivalent to multiplying LENGTHOF by TYPE
```plaintext
.data
byte1 BYTE 10,20,30 ; 3
array1 WORD 30 DUP(?),0,0 ; 64
array2 WORD 5 DUP(3 DUP(?)) ; 30
array3 DWORD 1,2,3,4 ; 16
digitStr BYTE "12345678",0 ; 9
.code
mov ecx,SIZEOF array1 ; 64
```
- A data declaration spans multiple lines if each line (except the last) ends with a comma
- The LENGTHOF and SIZEOF operators include all lines belonging to the declaration
```plaintext
.data
array WORD 10,20,
30,40,
50,60
.code
mov eax,LENGTHOF array ; 6
mov ebx,SIZEOF array ; 12
```
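The relationship SIZEOF = LENGTHOF × TYPE is plain multiplication; this small Python sketch (our illustration — the tuples restate the declarations above) confirms the byte counts:

```python
# (LENGTHOF, TYPE in bytes) for each declaration shown above
decls = {
    "byte1":    (3, 1),    # BYTE 10,20,30
    "array1":   (32, 2),   # WORD 30 DUP(?),0,0
    "array2":   (15, 2),   # WORD 5 DUP(3 DUP(?))
    "array3":   (4, 4),    # DWORD 1,2,3,4
    "digitStr": (9, 1),    # BYTE "12345678",0
}

for name, (lengthof, type_size) in decls.items():
    print(name, lengthof * type_size)   # SIZEOF = LENGTHOF * TYPE
```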
4.3.7 LABEL Directive
- LABEL Directive
- Assigns an alternate label name and type to an existing storage location
- LABEL does not allocate any storage of its own
- Removes the need for the PTR operator
- LABEL Examples
```plaintext
.data
val16 LABEL WORD
val32 DWORD 12345678h
.code
mov ax,val16 ; AX = 5678h
mov dx,[val16+2] ; DX = 1234h
.data
LongValue LABEL DWORD
val1 WORD 5678h
val2 WORD 1234h
.code
mov eax,LongValue ; EAX = 12345678h
```
4.4 Indirect Addressing
4.4.1 Indirect Operands
- Indirect Operands
- An indirect operand holds the address of a variable, usually an array or string
- It can be dereferenced (just like a pointer).
```assembly
.data
val1 BYTE 10h,20h,30h
.code
mov esi,OFFSET val1 ; ESI = the address of Val1
mov al,[esi] ; AL = 10h, dereference ESI
inc esi
mov al,[esi] ; AL = 20h
inc esi
mov al,[esi] ; AL = 30h
```
- Use PTR to clarify the size attribute of a memory operand
```assembly
.data
myCount WORD 0
.code
mov esi,OFFSET myCount
inc [esi] ; error: ambiguous
inc WORD PTR [esi] ; ok
```
4.4.2 Arrays
- Array Sum Example
- Indirect operands are ideal for traversing an array
- The register in brackets must be incremented by a value that matches the array type
```assembly
.data
arrayW WORD 1000h,2000h,3000h
.code
mov esi,OFFSET arrayW ; ESI = the address of arrayW
mov ax,[esi] ; AX = 1000h
add esi,2 ; or: add esi,TYPE arrayW
add ax,[esi] ; AX = 3000h
add esi,2
add ax,[esi] ; AX = 6000h
```
4.4.3 Indexed Operands
- Indexed operands
- An indexed operand adds a constant to a register to generate an effective address. There are two notational forms:
\[ \text{label} + \text{reg} \quad \text{or} \quad \text{label}[\text{reg}] \]
- Indexed operands Example
```
.data
arrayW WORD 1000h,2000h,3000h
.code
mov esi,0
mov ax, [arrayW + esi] ; AX = 1000h
mov ax, arrayW[esi] ; alternate format
add esi,2
add ax, [arrayW + esi] ; AX = 2000h
```
- Index Scaling
- You can scale an indirect or indexed operand to the offset of an array element
- This is done by multiplying the index by the array's TYPE
```
.data
arrayB BYTE 0,1,2,3,4,5
arrayW WORD 0,1,2,3,4,5
arrayD DWORD 0,1,2,3,4,5
.code
mov esi,4 ; ESI = index of array
mov al, arrayB[esi*TYPE arrayB] ; AL = 04h
mov bx, arrayW[esi*TYPE arrayW] ; BX = 0004h
mov edx, arrayD[esi*TYPE arrayD] ; EDX = 00000004h
```
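The scaling rule is just "byte offset = index × element size". A quick Python sketch of the effective-address arithmetic (the helper name is ours, for illustration):

```python
# EA of element i = base + i * TYPE; here we compute only the byte offset.
def scaled_offset(index, elem_size):
    return index * elem_size

# Element 4 of arrayB (BYTE), arrayW (WORD), arrayD (DWORD):
for name, size in (("arrayB", 1), ("arrayW", 2), ("arrayD", 4)):
    print(name, scaled_offset(4, size))   # offsets 4, 8, 16
```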
4.4.4 Pointers
- Pointers
- Declare a pointer variable that contains the offset of another variable
```
.data
arrayW WORD 1000h,2000h,3000h
ptrW DWORD arrayW ; ptrW (pointer variable)
.code
mov esi,ptrW
mov ax, [esi] ; AX = 1000h
```
- Alternate format:
ptrW DWORD OFFSET arrayW ; ptrW = offset (address) of arrayW
4.5 JMP and Loop Instructions
4.5.1 JMP Instruction
- JMP Instruction
- JMP is an **unconditional** jump to a label that is usually within the same procedure
- Syntax: `JMP target`
- Logic: `EIP ← target`
- JMP Example
```
top:
...
jmp top
```
4.5.2 LOOP Instruction
- LOOP Instruction
- The LOOP instruction creates a counting loop
- Syntax: `LOOP target`
- Logic:
- First, `ECX ← ECX – 1`
- Next, if `ECX != 0`, jump to target
- Implementation:
- The assembler calculates the distance, in bytes, between the offset of the following instruction and the offset of the target label. It is called the relative offset.
- The relative offset is added to EIP.
- LOOP Example
- Add 1 to AX each time the loop repeats
- When the loop ends, `AX = 5` and `ECX = 0`
```
mov ax, 0
mov ecx, 5
L1:
add ax, 1
loop L1
```
• Nested Loop
o If you need to code a loop within a loop, you **must save the outer loop counter's ECX value**
o In the following example, the outer loop executes 100 times, and the inner loop 20 times
```asm
.data
count DWORD ?
.code
mov ecx,100 ; set outer loop count
L1:
mov count,ecx ; save outer loop count
mov ecx,20 ; set inner loop count
L2:
...
loop L2 ; repeat the inner loop
mov ecx,count ; restore outer loop count
loop L1 ; repeat the outer loop
```
4.5.3 Summing an Integer Array
• Summing an Integer Array
```asm
TITLE Summing an Array (SumArray.asm)
; This program sums an array of words.
; Last update: 06/01/2006
INCLUDE Irvine32.inc
.data
intarray WORD 100h,200h,300h,400h
.code
main PROC
mov edi,OFFSET intarray ; address of intarray
mov ecx,LENGTHOF intarray ; loop counter
mov ax, 0 ; zero the accumulator
L1:
add ax,[edi] ; add an integer
add edi,TYPE intarray ; point to next integer
loop L1 ; repeat until ECX = 0
exit
main ENDP
END main
```
4.5.4 Copying a String
- The following code copies a string from source to target:
```
TITLE Copying a String (CopyStr.asm)
; This program copies a string.
; Last update: 06/01/2006
INCLUDE Irvine32.inc
.data
source BYTE "This is the source string",0
target BYTE SIZEOF source DUP(0),0
.code
main PROC
mov esi,0 ; index register
mov ecx,SIZEOF source ; loop counter
L1:
mov al,source[esi] ; get a character from source
mov target[esi],al ; store it in the target
inc esi ; move to next character
loop L1 ; repeat for entire string
exit
main ENDP
END main
```
4.6 Chapter Summary
- Data Transfer
- MOV – data transfer from source to destination
- MOVSX, MOVZX, XCHG
- Operand types
- direct, direct-offset, indirect, indexed
- Arithmetic instructions
- INC, DEC, ADD, SUB, NEG
- Status Flags
- Sign, Carry, Auxiliary Carry, Zero, and Overflow flags
- Operators
- OFFSET, PTR, TYPE, LENGTHOF, SIZEOF
- Loops
- JMP and LOOP – branching instructions
# Guide Contents
## Overview
- Adafruit Feather M0 Express - Designed for CircuitPython
- Adafruit Feather M4 Express - Featuring ATSAMD51
- Adafruit FeatherWing OLED - 128x32 OLED Add-on For Feather
- Rotary Encoder + Extras
- Mini Oval Speaker - 8 Ohm 1 Watt
- Other Parts and Tools
- Half-size breadboard
- Breadboarding wire bundle
## Hardware
- Lithium Ion Polymer Battery - 3.7v 500mAh
## Code
- The List of Shapes
- The Main Code
- The Display
- The Generator
## Operation
- Speaker
## Going Further
- Amplifier
- Adafruit Mono 2.5W Class D Audio Amplifier - PAM8302
- Low Pass Filter
- More Waveforms
- Waveforms as Classes
- Adjustable Duty Cycle
- Dynamic Sample Rate
© Adafruit Industries
https://learn.adafruit.com/waveform-generator
Overview
In this guide we’ll take a few simple parts, add some not quite as simple code, and build an adjustable waveform generator (aka frequency generator). Using a rotary encoder push-switch, we’ll select one of multiple waveforms, and using the rotation along with the three buttons on the OLED FeatherWing we’ll adjust the frequency. We have great guides on the Feather M0 Express (https://adafru.it/B4b) and Feather M4 Express (https://adafru.it/CJN) as well as the OLED FeatherWing (https://adafru.it/nek) to get you up to speed on this hardware.
A small speaker is useful to listen to the signal as a functional test. There’s a nice one listed below but there are others; any 8 ohm speaker will work. The signal isn’t very strong, so stick with a small one. You can easily power this by the USB port, but a LiPo battery will let you use it on its own, untethered from a computer.
This project will work on either a Feather M0 Express or Feather M4 Express. Due to the additional performance and memory of the M4 board, capability of the code can be increased. This is pointed out in the code.
Adafruit Feather M4 Express - Featuring ATSAMD51
Adafruit FeatherWing OLED - 128x32 OLED Add-on For Feather
Rotary Encoder + Extras
Mini Oval Speaker - 8 Ohm 1 Watt
Other Parts and Tools
Half-size breadboard
Breadboarding wire bundle
Hardware
The hardware is pretty straightforward: a Feather M0 or M4 Express, an OLED FeatherWing, a speaker, and a rotary encoder. In the wiring diagram above we’ve put the Feather and wing on a feather doubler protoboard to show both, but you can use stacking headers to put the wing on top of the Feather. The Feather M4 is shown above, but the wiring is the same in either case.
Pushing the encoder will cycle through the different waveforms, while turning it will adjust the frequency. We'll use the A, B, and C buttons on the OLED wing to scale the frequency adjustment: A will change it by 1000 at a time, B by 100, and C by 10. Simply turning the encoder will adjust by 1 at a time.
A0, the true analog output pin on the Feather, will be used as the signal output.
You can power over USB or using a LiPo battery for a portable waveform generator!
Lithium Ion Polymer Battery - 3.7v 500mAh
We'll be using CircuitPython for this project. Are you new to using CircuitPython? No worries, there is a full getting started guide here (https://adafru.it/cpy-welcome).
Adafruit suggests using the Mu editor to edit your code and have an interactive REPL in CircuitPython. You can learn about Mu and its installation in this tutorial (https://adafru.it/ANO).
This project requires the latest CircuitPython 4.0 runtime and library bundle.
In `CIRCUITPY/lib` you will need to have:
- adafruit_bus_device
- adafruit_register
- adafruit_framebuf.mpy
- adafruit_ssd1306.mpy
- adafruit_debouncer.mpy
In addition to putting the project's python files on `CIRCUITPY`, you'll also need to copy the `font5x8.bin` from the project.
The basic approach to the user interface:
1. Check the inputs
2. Do something appropriate
3. Update the outputs
4. Go back to 1
More specifically:
1. Check the switches and encoder
2. If the encoder has been rotated change the frequency, possibly scaling based on OLED wing buttons
3. If the encoder was pushed, change the waveform
4. Update the display
5. Update the generated signal
6. Go back to 1
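One pass of that poll/act/update loop can be sketched as follows. The names here are illustrative only, not the project's final code (which appears later in the guide):

```python
NUMBER_OF_SHAPES = 4  # sine, square, triangle, sawtooth

def loop_step(frequency, shape, encoder_delta, encoder_pressed, scale=1):
    """One pass of the loop: read inputs, apply changes, return new state."""
    if encoder_delta:
        frequency += encoder_delta * scale   # A/B/C buttons pick scale 1000/100/10
    if encoder_pressed:
        shape = (shape + 1) % NUMBER_OF_SHAPES
    return frequency, shape

print(loop_step(440, 0, encoder_delta=2, encoder_pressed=False, scale=100))  # (640, 0)
```

The real code must also redraw the display and rebuild the output waveform after each pass.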
Since we are dealing with hardware buttons, it's a good idea to clean them up with a debouncer. We use the debouncer in the CircuitPython library bundle.
Using a SAMD21 board (in this case the Feather M0 Express) provides plenty of room for well structured CircuitPython code, but constrains how big of a sample buffer can be allocated. The code reflects this. Notes below indicate where the lower frequency limit and sample rate can be changed to take advantage of the SAMD51 chip on the Feather M4 Express.
In regards to well structured code, bundling the debouncer into a class is one example of this. This lets us create separate debouncer objects for the three wing buttons and the encoder's push switch in addition to putting all the code for the debouncer in one place.
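The idea of bundling debouncing into a class can be sketched like this. This is a generic counter-based debouncer we wrote for illustration; the project itself uses the `adafruit_debouncer` library, whose API differs:

```python
class DebouncedInput:
    """Report a state change only after `stable_reads` identical raw samples."""

    def __init__(self, read_raw, stable_reads=3):
        self._read = read_raw
        self._needed = stable_reads
        self._count = 0
        self.state = read_raw()

    def update(self):
        raw = self._read()
        if raw == self.state:
            self._count = 0          # still stable, reset the counter
        else:
            self._count += 1
            if self._count >= self._needed:
                self.state = raw     # accepted as a real transition
                self._count = 0
        return self.state
```

Each physical switch then gets its own instance, keeping all of the filtering logic in one place.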
To further separate code functionality, the display and signal generator can be placed into their own classes (which will be in separate files/modules). Everything needs to know what waveforms are supported, so that list can go into its own file/module as well.
The List of Shapes
Let's start with the simplest: the waveforms. This is imported by everything else (except the debouncer).
```python
# Signal generator wave shapes.
#
# Adafruit invests time and resources providing this open source code.
# Please support Adafruit and open source hardware by purchasing
# products from Adafruit!
#
# Written by Dave Astels for Adafruit Industries
# Copyright (c) 2018 Adafruit Industries
#
# Licensed under the MIT license.
# All text above must be included in any redistribution.

SINE = 0
SQUARE = 1
TRIANGLE = 2
SAWTOOTH = 3

NUMBER_OF_SHAPES = 4
```
The Main Code
We'll structure things nicely and separate out areas of concern. There are two functions that handle changing the frequency and shape. Both of these let us ignore the details of how the changes occur when we write the main loop. It also puts those details in a clearly defined place. Do you want to change the frequency range? `change_frequency` is the logical place to do it. The `change_frequency` function is also responsible for limiting the frequency range. The lower limit of 150 Hz prevents the sample buffer from becoming too large to allocate on a Feather M0 board. If you are running this on a Feather M4 board you can safely decrease it to 10 Hz.
```python
def change_frequency(frequency, delta):
    return min(20000, max(150, frequency + delta))

def change_shape(shape):
    return (shape + 1) % shapes.NUMBER_OF_SHAPES

def get_encoder_change(encoder, pos):
    new_position = encoder.position
    if pos is None:
        return (new_position, 0)
    else:
        return (new_position, new_position - pos)
```
Similarly, we bundle the handling of the encoder into a function. It gets the new position from the encoder and computes the difference. That's the important part, as it determines how much to change the frequency by.
Note something else: The encoder and current position are passed in as parameters, and the new position and delta are returned. That new position gets stored by the main loop and passed back in next time. Doing this completely avoids having a global variable. Not only is this a good programming practice, but it also lets the code run significantly faster. This code is very user interface limited, so performance isn't much of a concern, but in a lot of embedded code it could be crucial. The reason is that referencing a global variable requires dictionary lookups, whereas using a local variable (or parameter) is a direct reference. A local/parameter also lives on the stack and not the heap, so there isn't allocation and garbage collection impact.
As you look through the code, you'll see that there are no global variables.
Now we have the main setup and loop code. This has been placed in a function. It starts by creating the variables it needs, initializing things, then entering the main loop.
The Display and Generator are created next. We'll look into these in detail later. Then the debouncers (see above) and rotary encoder are created. Next, a handful of state variables are allocated and initialized. Finally the display is given some initial content.
```python
def make_debouncable(pin):
    switch_io = digitalio.DigitalInOut(pin)
    switch_io.direction = digitalio.Direction.INPUT
    switch_io.pull = digitalio.Pull.UP
    return switch_io

def run():
    display = Display()
    generator = Generator()
    button_a = Debouncer(make_debouncable(board.D9))
    button_b = Debouncer(make_debouncable(board.D6))
    button_c = Debouncer(make_debouncable(board.D5))
    encoder_button = Debouncer(make_debouncable(board.D12))
    encoder = rotaryio.IncrementalEncoder(board.D10, board.D11)
    current_position = None         # current encoder position
    change = 0                      # the change in encoder position
    delta = 0                       # how much to change the frequency by
    shape = shapes.SINE             # the active waveform
    frequency = 440                 # the current frequency
    display.update_shape(shape)     # initialize the display contents
    display.update_frequency(frequency)
```
Now we have the main loop. First, all the debouncers are updated. This debounces the switches, giving us a stable value to use as well as detected rising and falling edges (the switch being pushed or released). For the wing buttons, we just need button values and we can get that (logically enough) from the debouncer's `value` attribute. For the encoder switch, we want to know when it was pushed and don't care (in this case) when it's released or even what the value is. We can use the debouncer's `fell` attribute to give us that information.
After updating the debouncers, we check the encoder to see if it has been rotated. If so, we use the values of the three wing buttons to scale the change by 10, 100, or 1000 (depending on which, if any, is pressed). If none are pressed, the change isn’t scaled. Finally, if there was a change, the frequency is updated.
Next, if the encoder button was pushed since last time, the wave shape is changed.
At the end of the loop, the display and generator are updated.
Since the main code is in a function, all that's left is to execute it:
```python
run()
```
It's common to name the file of python code `code.py`. In this example `main.py` is a better choice since there are several files of python code. Naming this one `main.py` makes it clear that this is the file with the code that drives the whole system.
Be sure that there isn't a code.py file on your Feather left from another project. If so, rename it or copy it off and delete the one on the Feather.
```python
while True:
encoder_button.update()
button_a.update()
button_b.update()
button_c.update()
current_position, change = get_encoder_change(encoder, current_position)
if change != 0:
if not button_a.value:
delta = change * 1000
elif not button_b.value:
delta = change * 100
elif not button_c.value:
delta = change * 10
else:
delta = change
frequency = change_frequency(frequency, delta)
if encoder_button.fell:
shape = change_shape(shape)
display.update_shape(shape)
display.update_frequency(frequency)
generator.update(shape, frequency)
```
```python
import board
import digitalio
import rotaryio

from debouncer import Debouncer
from display import Display
from generator import Generator
import shapes

def change_frequency(frequency, delta):
    return min(20000, max(150, frequency + delta))

def change_shape(shape):
    return (shape + 1) % shapes.NUMBER_OF_SHAPES

def get_encoder_change(encoder, pos):
    new_position = encoder.position
    if pos is None:
        return (new_position, 0)
    else:
        return (new_position, new_position - pos)

def run():
    display = Display()
    generator = Generator()
    button_a = Debouncer(board.D9, digitalio.Pull.UP, 0.01)
    button_b = Debouncer(board.D6, digitalio.Pull.UP, 0.01)
    button_c = Debouncer(board.D5, digitalio.Pull.UP, 0.01)
    encoder_button = Debouncer(board.D12, digitalio.Pull.UP, 0.01)
    encoder = rotaryio.IncrementalEncoder(board.D10, board.D11)
    current_position = None         # current encoder position
    change = 0                      # the change in encoder position
    delta = 0                       # how much to change the frequency by
    shape = shapes.SINE             # the active waveform
    frequency = 440                 # the current frequency
    display.update_shape(shape)     # initialize the display contents
    display.update_frequency(frequency)

    while True:
        encoder_button.update()
        button_a.update()
        button_b.update()
        button_c.update()
        current_position, change = get_encoder_change(encoder, current_position)
        if change != 0:
            if not button_a.value:
                delta = change * 1000
            elif not button_b.value:
                delta = change * 100
            elif not button_c.value:
                delta = change * 10
            else:
                delta = change
            frequency = change_frequency(frequency, delta)
        if encoder_button.fell:
            shape = change_shape(shape)
        display.update_shape(shape)
        display.update_frequency(frequency)
        generator.update(shape, frequency)

run()
```
The Display
The display functionality is encapsulated into the `Display` class which is instantiated (just a single instance) in the `run` function above. It's then used to show the user the current waveform and frequency.
The class has several instance variables: `I2C` and `SSD1306_I2C` instances for manipulating the physical display, and places to hold the current shape and frequency.
The constructor initializes the hardware and blanks the display.
```python
def __init__(self):
self.i2c = busio.I2C(board.SCL, board.SDA)
self.oled = adafruit_ssd1306.SSD1306_I2C(128, 32, self.i2c)
self.oled.fill(0)
self.oled.show()
```
There are methods to update the shape and frequency that are used from the main loop. These both operate similarly: they check that there actually was a change, update the appropriate instance variable, and call `update` to refresh the screen.
```python
def update_shape(self, shape):
if shape != self.shape:
self.shape = shape
self.update()
def update_frequency(self, frequency):
if frequency != self.frequency:
self.frequency = frequency
self.update()
```
The `update` method and the `draw_*` methods do the work of putting pixels and text on the screen. This involves drawing the appropriate waveform and showing the selected frequency.
```python
def draw_sine(self):
    for i in range(32):
        self.oled.pixel(i, int(math.sin(i / 32 * math.pi * 2) * 16) + 16, 1)

def draw_square(self):
    for i in range(16):
        self.oled.pixel(0, 32 - i, 1)
        self.oled.pixel(i, 31, 1)
        self.oled.pixel(31, i, 1)
        self.oled.pixel(15, 16 + i, 1)
        self.oled.pixel(15, i, 1)
        self.oled.pixel(16 + i, 0, 1)

def draw_triangle(self):
    for i in range(8):
        self.oled.pixel(i, 16 + i * 2, 1)
        self.oled.pixel(8 + i, 32 - i * 2, 1)
        self.oled.pixel(16 + i, 16 - i * 2, 1)
        self.oled.pixel(24 + i, i * 2, 1)

def draw_sawtooth(self):
    for i in range(16):
        self.oled.pixel(0, 16 + i, 1)
        self.oled.pixel(31, i, 1)
    for i in range(32):
        self.oled.pixel(i, 31 - i, 1)

def update(self):
    self.oled.fill(0)
    if self.shape == shapes.SINE:
        self.draw_sine()
    elif self.shape == shapes.SQUARE:
        self.draw_square()
    elif self.shape == shapes.TRIANGLE:
        self.draw_triangle()
    elif self.shape == shapes.SAWTOOTH:
        self.draw_sawtooth()
    self.oled.text("{0}".format(self.frequency), 40, 10)
    self.oled.show()
```
```python
"""
Display code for signal generator.

Adafruit invests time and resources providing this open source code.
Please support Adafruit and open source hardware by purchasing
products from Adafruit!

Written by Dave Astels for Adafruit Industries
Copyright (c) 2018 Adafruit Industries
Licensed under the MIT license.
All text above must be included in any redistribution.
"""

import math
import board
import busio
import adafruit_ssd1306
import shapes

class Display:
    """Manage the OLED Featherwing display"""

    i2c = None
    oled = None
    shape = None
    frequency = None

    def __init__(self):
        self.i2c = busio.I2C(board.SCL, board.SDA)
        self.oled = adafruit_ssd1306.SSD1306_I2C(128, 32, self.i2c)
        self.oled.fill(0)
        self.oled.show()

    def draw_sine(self):
        for i in range(32):
            self.oled.pixel(i, int(math.sin(i / 32 * math.pi * 2) * 16) + 16, 1)

    def draw_square(self):
        for i in range(16):
            self.oled.pixel(0, 32 - i, 1)
            self.oled.pixel(i, 31, 1)
            self.oled.pixel(31, i, 1)
            self.oled.pixel(15, 16 + i, 1)
            self.oled.pixel(15, i, 1)
            self.oled.pixel(16 + i, 0, 1)

    def draw_triangle(self):
        for i in range(8):
            self.oled.pixel(i, 16 + i * 2, 1)
            self.oled.pixel(8 + i, 32 - i * 2, 1)
            self.oled.pixel(16 + i, 16 - i * 2, 1)
            self.oled.pixel(24 + i, i * 2, 1)

    def draw_sawtooth(self):
        for i in range(16):
            self.oled.pixel(0, 16 + i, 1)
            self.oled.pixel(31, i, 1)
        for i in range(32):
            self.oled.pixel(i, 31 - i, 1)

    def update(self):
        self.oled.fill(0)
        if self.shape == shapes.SINE:
            self.draw_sine()
        elif self.shape == shapes.SQUARE:
            self.draw_square()
        elif self.shape == shapes.TRIANGLE:
            self.draw_triangle()
        elif self.shape == shapes.SAWTOOTH:
            self.draw_sawtooth()
        self.oled.text("{0}".format(self.frequency), 40, 10)
        self.oled.show()

    def update_shape(self, shape):
        if shape != self.shape:
            self.shape = shape
            self.update()

    def update_frequency(self, frequency):
        if frequency != self.frequency:
            self.frequency = frequency
            self.update()
```
The Generator
The generator is pretty simple. It uses the `audioio` library to play an array of samples.
The generator is in `generator.py` and implemented in the class: `Generator`.
There are a few instance variables to store the sample array, the sample player, and the current shape and frequency.
The constructor allocates and initializes the `AudioOut` driver that will be used to play the samples.
```python
def __init__(self):
self.dac = audioio.AudioOut(board.A0)
```
If we jump down to the update method that's called from the main loop, we see that it first checks that there is a reason to update. If not, it immediately returns. This is an example of the guard clause pattern discussed in this guide (https://adafru.it/Czt).
If the frequency has changed, the sample array is reallocated: the size is the sample rate (64,000 in this case) divided by the frequency. The logic is that since we update the signal 64,000 times per second (i.e. the sample rate) we have that many samples to use over the course of a second. For a 1 Hz signal we can use all 64,000 samples for one full cycle, but for a 1000 Hz signal we only have 64. That knowledge is encapsulated in the `length` function. The actual allocation of the array is done in the `reallocate` method. The sample rate of 32,000 keeps the buffer from being too large for a Feather M0 board. If running on a Feather M4 board you can increase it to 64,000 without trouble.
```python
def length(frequency):
return int(32000 / frequency)
def reallocate(self, frequency):
self.sample = array.array("h", [0] * self.length(frequency))
```
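The buffer-size tradeoff is easy to see by running the same division in plain Python (a sketch; `buffer_length` is just a stand-in for the `length` function above):

```python
def buffer_length(sample_rate, frequency):
    # One full cycle's worth of samples at the given rate.
    return int(sample_rate / frequency)

# At the Feather M0's 32,000 samples/sec:
print(buffer_length(32000, 150))    # 213 samples at the lowest allowed frequency
print(buffer_length(32000, 440))    # 72 samples at concert A
print(buffer_length(32000, 20000))  # 1 sample at the top of the range
# A 10 Hz lower limit at an M4-style 64,000 samples/sec needs a much
# bigger buffer: 6400 16-bit samples, i.e. about 12.8 KB.
print(buffer_length(64000, 10))
```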
Back in the update method, the sample array is filled with a waveform determined by the shape parameter. Note that the sample array is always refilled. Either the frequency changed and the array is now a different size and thus the samples need to be regenerated, or the shape has changed and new samples are needed to reflect the new shape.
Finally the sample player is stopped and restarted with the new samples.
```python
def update(self, shape, frequency):
if shape == self.shape and frequency == self.frequency:
return
if frequency != self.frequency:
self.reallocate(frequency)
self.frequency = frequency
self.shape = shape
if shape == shapes.SINE:
self.make_sine()
elif shape == shapes.SQUARE:
self.make_square()
elif shape == shapes.TRIANGLE:
self.make_triangle()
elif shape == shapes.SAWTOOTH:
self.make_sawtooth()
self.dac.stop()
self.dac.play(audioio.RawSample(self.sample, channel_count=1, sample_rate=64000), loop=True)
```
All that's left are the functions that generate the samples for each waveform. Sample values range from -32767 to +32767; these are the $2^{15}-1$ values in the code. The CircuitPython bytecode compiler will optimize these constant expressions to a constant value, so they don't have any runtime performance impact.
```python
def make_sine(self):
    l = len(self.sample)
    for i in range(l):
        self.sample[i] = min(2 ** 15 - 1, int(math.sin(math.pi * 2 * i / l) * (2 ** 15)))

def make_square(self):
    l = len(self.sample)
    half_l = l // 2
    for i in range(l):
        if i < half_l:
            self.sample[i] = -1 * ((2 ** 15) - 1)
        else:
            self.sample[i] = (2 ** 15) - 1

def make_triangle(self):
    l = len(self.sample)
    half_l = l // 2
    s = 0
    for i in range(l):
        if i <= half_l:
            s = int((i / half_l) * (2 ** 16)) - (2 ** 15)
        else:
            s = int((1 - ((i - half_l) / half_l)) * (2 ** 16)) - (2 ** 15)
        self.sample[i] = min(2 ** 15 - 1, s)

def make_sawtooth(self):
    l = len(self.sample)
    for i in range(l):
        self.sample[i] = int((i / l) * (2 ** 16)) - (2 ** 15)
```
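The sample math itself has no hardware dependencies, so it can be checked in standard Python (a sketch; the rate and frequency below are just example values):

```python
import array
import math

SAMPLE_RATE = 32000   # example value matching length() above
FREQUENCY = 440       # example frequency

length = int(SAMPLE_RATE / FREQUENCY)      # samples in one full cycle
sample = array.array("h", [0] * length)    # "h" = signed 16-bit values

for i in range(length):
    # Scale sin() into the signed 16-bit range, clamped to 2**15 - 1.
    sample[i] = min(2 ** 15 - 1, int(math.sin(math.pi * 2 * i / length) * (2 ** 15)))

print(len(sample))                                      # 72 samples per cycle
print(-32768 <= min(sample) and max(sample) <= 32767)   # True: fits in "h"
```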
```python
"""
Output generator code for signal generator.

Adafruit invests time and resources providing this open source code.
Please support Adafruit and open source hardware by purchasing
products from Adafruit!

Written by Dave Astels for Adafruit Industries
Copyright (c) 2018 Adafruit Industries
Licensed under the MIT license.
All text above must be included in any redistribution.
"""

import math
import array
import board
import audioio
import shapes

def length(frequency):
    return int(32000 / frequency)

class Generator:
    sample = None
    dac = None
    shape = None
    frequency = None

    def __init__(self):
        self.dac = audioio.AudioOut(board.A0)

    def reallocate(self, frequency):
        self.sample = array.array("h", [0] * length(frequency))

    def make_sine(self):
        l = len(self.sample)
        for i in range(l):
            self.sample[i] = min(2 ** 15 - 1, int(math.sin(math.pi * 2 * i / l) * (2 ** 15)))

    def make_square(self):
        l = len(self.sample)
        half_l = l // 2
        for i in range(l):
            if i < half_l:
                self.sample[i] = -1 * ((2 ** 15) - 1)
            else:
                self.sample[i] = (2 ** 15) - 1

    def make_triangle(self):
        l = len(self.sample)
        half_l = l // 2
        s = 0
        for i in range(l):
            if i <= half_l:
                s = int((i / half_l) * (2 ** 16)) - (2 ** 15)
            else:
                s = int((1 - ((i - half_l) / half_l)) * (2 ** 16)) - (2 ** 15)
            self.sample[i] = min(2 ** 15 - 1, s)

    def make_sawtooth(self):
        l = len(self.sample)
        for i in range(l):
            self.sample[i] = int((i / l) * (2 ** 16)) - (2 ** 15)

    def update(self, shape, frequency):
        if shape == self.shape and frequency == self.frequency:
            return
        if frequency != self.frequency:
            self.reallocate(frequency)
            self.frequency = frequency
        self.shape = shape
        if shape == shapes.SINE:
            self.make_sine()
        elif shape == shapes.SQUARE:
            self.make_square()
        elif shape == shapes.TRIANGLE:
            self.make_triangle()
        elif shape == shapes.SAWTOOTH:
            self.make_sawtooth()
        self.dac.stop()
        self.dac.play(audioio.RawSample(self.sample, channel_count=1, sample_rate=64000), loop=True)
```
Using an oscilloscope, we can look at the output signal.
On the left are the four waveforms that can be generated. All are at 440 Hz. All four look relatively nice and smooth.
However, if we zoom in on the 440Hz sine wave we can see that it's not a smooth curve. Rather it's quite jaggy and noisy.
If the frequency is increased to 8440 Hz, the sine curve degenerates into something very unlike a sine wave. This is because the sample rate is fixed, so as the frequency increases there are fewer samples available for a full cycle. That means fewer points on the signal, and that means a chunkier curve.
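A quick back-of-the-envelope check shows how fast the resolution falls off:

```python
# Samples available for one cycle shrink as the frequency rises,
# because the 32,000 samples/sec rate is fixed.
SAMPLE_RATE = 32000

def samples_per_cycle(frequency):
    return int(SAMPLE_RATE / frequency)

for frequency in (440, 2000, 8440):
    print(frequency, "Hz ->", samples_per_cycle(frequency), "samples per cycle")
# 440 Hz still gets 72 points per cycle; 8440 Hz gets only 3.
```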
The easy answer to this is just to increase the sample rate. That, however, is limited by the speed of the underlying hardware. A more significant problem is the sample array. A higher sample rate means a bigger sample array at lower frequencies.
This is where the Feather M4 has an advantage over a Feather M0: it's faster and has more memory for more samples.
Speaker
You can hook the speaker to the output to hear the sounds of the various waveforms you generate. Without an amplifier, the volume is fixed and not very loud.
If you use a speaker with a JST connector on the end, use jumper wires to plug into the connector; the other end will plug into the breadboard easily.
Going Further
There are a variety of further steps you could take with this project that would make it even more capable. We'll look at a few here.
Amplifier
Adding a class D amplifier breakout would provide a stronger output signal. This breakout would do the job nicely. It's the amplifier used in the HalloWing (https://adafru.it/CmY) and similar to the one in the Circuit Playground Express (https://adafru.it/wpF).
Low Pass Filter
Adding a low pass filter on the output to filter out some of the high frequency digital artifacts would produce a cleaner output signal.
More Waveforms
While the four provided waveforms are the standard ones, it could be useful to add more. Sine based signals with different sets of harmonics would be one possibility.
The current sawtooth wave ramps up slowly then drops. The opposite is also often present in signal generators: slowly ramping down before jumping up.
Waveforms as Classes
This would be interesting as an exercise, and quite useful if you wanted to add to the selection of waveforms. Each waveform would be implemented by a separate class, possibly with a base class. Each of these classes would have methods to generate \((x, y)\) pairs that define how to draw the thumbnail of the wave, and a method to generate the sample array.
There would need to be some way of storing the instances, and selecting from among them. This might require the Feather M4 due to memory.
Adjustable Duty Cycle
Currently the square and triangle waves have a 50% duty cycle. Making that adjustable would be extremely useful. In fact, the sawtooth could then be removed as a separate waveform (possibly completely, but probably just its implementation, since it's common enough that you might want to keep it as a selection), since it can be considered a triangle wave with an extreme duty cycle (0% or 100%).
Dynamic Sample Rate
The current design uses a fixed sample rate and varies the number of samples based on frequency. A result of this is that high frequency signals have fewer samples and thus are much choppier, and have far more digital artifacts.
By keeping the number of samples fixed we can get good quality signals for all frequencies. To support that we need to vary the sample rate based on frequency.
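A sketch of that idea in plain Python (the sample count and hardware limit below are illustrative assumptions, not values from this project's code):

```python
SAMPLES_PER_CYCLE = 64     # assumed fixed buffer size
MAX_SAMPLE_RATE = 350000   # assumed hardware ceiling, in samples/sec

def sample_rate_for(frequency):
    # Every frequency gets the same 64-point waveform; the rate scales
    # with frequency until the hardware ceiling caps it.
    return min(MAX_SAMPLE_RATE, frequency * SAMPLES_PER_CYCLE)

print(sample_rate_for(440))    # 28160 samples/sec
print(sample_rate_for(8440))   # 540160 wanted, capped at 350000
```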
x86-64 Programming II
CSE 351 Winter 2020
Instructor:
Ruth Anderson
Teaching Assistants:
Jonathan Chen
Justin Johnson
Porter Jones
Josie Lee
Jeffery Tian
Callum Walker
Eddy (Tianyi) Zhou
http://xkcd.com/409/
Administrivia
- Lab 1a due tonight!
- Pay attention to Gradescope’s feedback!
- Lab 2 (x86-64) coming soon
- Learn to read x86-64 assembly and use GDB
- Submissions that fail the autograder get a **ZERO**
- No excuses – make full use of tools & Gradescope’s interface
- Midterm is in two weeks (2/10 during lecture)
- You will be provided a fresh reference sheet
- Study and use this NOW so you are comfortable with it when the exam comes around
- Form study groups and look at past exams!
Address Computation Instruction
- **leaq src, dst**
- "lea" stands for *load effective address*
- src is address expression (any of the formats we’ve seen)
- dst is a register
- Sets dst to the *address* computed by the src expression (does not go to memory! – it just does math)
- **Example:** leaq (%rdx,%rcx,4), %rax
- **Uses:**
- Computing addresses without a memory reference
- *e.g.* translation of `p = &x[i];`
- Computing arithmetic expressions of the form `x+k*i+d`
- Though `k` can only be 1, 2, 4, or 8
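The address math that `leaq` performs can be modeled in a few lines of Python (a sketch with made-up register values, not x86 semantics in full):

```python
# Model of the x86 addressing-mode math: D(B, I, S) = B + I*S + D.
def lea(base=0, index=0, scale=1, disp=0):
    assert scale in (1, 2, 4, 8), "x86 only allows these scale factors"
    return base + index * scale + disp

rdx, rcx = 0x10, 0x8                           # illustrative register values
print(hex(lea(base=rdx, index=rcx, scale=4)))  # leaq (%rdx,%rcx,4), %rax -> 0x30
print(lea(index=5, scale=8, disp=4))           # x + k*i + d style: 8*5 + 4 = 44
```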
Example: \texttt{lea} vs. \texttt{mov}
<table>
<thead>
<tr>
<th>Registers</th>
<th>Memory</th>
<th>Word Address</th>
</tr>
</thead>
<tbody>
<tr>
<td>%rax</td>
<td>0x400</td>
<td>0x120</td>
</tr>
<tr>
<td>%rbx</td>
<td>0xF</td>
<td>0x118</td>
</tr>
<tr>
<td>%rcx</td>
<td>0x8</td>
<td>0x110</td>
</tr>
<tr>
<td>%rdx</td>
<td>0x10</td>
<td>0x108</td>
</tr>
<tr>
<td>%rdi</td>
<td>0x1</td>
<td>0x100</td>
</tr>
<tr>
<td>%rsi</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
```
leaq (%rdx,%rcx,4), %rax
movq (%rdx,%rcx,4), %rbx
leaq (%rdx), %rdi
movq (%rdx), %rsi
```
lea – “It just does math”
Arithmetic Example
```c
long arith(long x, long y, long z)
{
long t1 = x + y;
long t2 = z + t1;
long t3 = x + 4;
long t4 = y * 48;
long t5 = t3 + t4;
long rval = t2 * t5;
return rval;
}
```
### Interesting Instructions
- **leaq**: “address” computation
- **salq**: shift
- **imulq**: multiplication
- Only used once!
<table>
<thead>
<tr>
<th>Register</th>
<th>Use(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>%rdi</td>
<td>1st argument (x)</td>
</tr>
<tr>
<td>%rsi</td>
<td>2nd argument (y)</td>
</tr>
<tr>
<td>%rdx</td>
<td>3rd argument (z)</td>
</tr>
</tbody>
</table>
Arithmetic Example
```c
long arith(long x, long y, long z)
{
long t1 = x + y;
long t2 = z + t1;
long t3 = x + 4;
long t4 = y * 48;
long t5 = t3 + t4;
long rval = t2 * t5;
return rval;
}
```
### Register Use(s)
<table>
<thead>
<tr>
<th>Register</th>
<th>Use(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>%rdi</td>
<td>x</td>
</tr>
<tr>
<td>%rsi</td>
<td>y</td>
</tr>
<tr>
<td>%rdx</td>
<td>z, t4</td>
</tr>
<tr>
<td>%rax</td>
<td>t1, t2, rval</td>
</tr>
<tr>
<td>%rcx</td>
<td>t5</td>
</tr>
</tbody>
</table>
**arith:**
```
leaq (%rdi,%rsi), %rax # rax/t1 = x + y
addq %rdx, %rax # rax/t2 = t1 + z
leaq (%rsi,%rsi,2), %rdx # rdx = 3 * y
salq $4, %rdx # rdx/t4 = (3*y) * 16
leaq 4(%rdi,%rdx), %rcx # rcx/t5 = x + t4 + 4
imulq %rcx, %rax # rax/rval = t5 * t2
ret
```
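To convince yourself the compiler's sequence really computes the same thing, both versions can be transcribed into Python and compared (Python ints don't wrap at 64 bits, so this only matches for inputs that don't overflow):

```python
def arith_c(x, y, z):
    # Direct transcription of the C source.
    t1 = x + y
    t2 = z + t1
    t3 = x + 4
    t4 = y * 48
    t5 = t3 + t4
    return t2 * t5

def arith_asm(x, y, z):
    # Step-by-step transcription of the compiled instructions.
    rax = x + y          # leaq (%rdi,%rsi), %rax
    rax = rax + z        # addq %rdx, %rax
    rdx = y + y * 2      # leaq (%rsi,%rsi,2), %rdx  -> 3*y
    rdx = rdx << 4       # salq $4, %rdx             -> 48*y
    rcx = x + rdx + 4    # leaq 4(%rdi,%rdx), %rcx
    return rcx * rax     # imulq %rcx, %rax

for args in ((1, 2, 3), (10, -7, 100), (0, 0, 0)):
    assert arith_c(*args) == arith_asm(*args)
print(arith_c(1, 2, 3))  # 606 from both versions
```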
Polling Question
- Which of the following x86-64 instructions correctly calculates $\%rax = 9 \times \%rdi$?
- Vote at [http://pollev.com/rea](http://pollev.com/rea)
A. `leaq (,%rdi,9), %rax`
B. `movq (,%rdi,9), %rax`
C. `leaq (%rdi,%rdi,8), %rax`
D. `movq (%rdi,%rdi,8), %rax`
E. We’re lost...
Control Flow
```c
long max(long x, long y)
{
long max;
if (x > y) {
max = x;
} else {
max = y;
}
return max;
}
```
<table>
<thead>
<tr>
<th>Register</th>
<th>Use(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>%rdi</td>
<td>1st argument (x)</td>
</tr>
<tr>
<td>%rsi</td>
<td>2nd argument (y)</td>
</tr>
<tr>
<td>%rax</td>
<td>return value</td>
</tr>
</tbody>
</table>
```
max:
???
movq %rdi, %rax
???
???
movq %rsi, %rax
???
ret
```
Control Flow
```
long max(long x, long y) {
long max;
if (x > y) {
max = x;
} else {
max = y;
}
return max;
}
```
<table>
<thead>
<tr>
<th>Register</th>
<th>Use(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>%rdi</td>
<td>1<sup>st</sup> argument (x)</td>
</tr>
<tr>
<td>%rsi</td>
<td>2<sup>nd</sup> argument (y)</td>
</tr>
<tr>
<td>%rax</td>
<td>return value</td>
</tr>
</tbody>
</table>
Conditional jump: if $x \leq y$ then jump to else
Unconditional jump: jump to done
```
max:
    cmpq %rsi, %rdi     # compare: computes x - y and sets flags
    jle  else           # conditional jump: if x <= y, jump to else
    movq %rdi, %rax     # max = x
    jmp  done           # unconditional jump: jump to done
else:
    movq %rsi, %rax     # max = y
done:
    ret
```
Conditionals and Control Flow
- **Conditional branch/jump**
- Jump to somewhere else if some condition is true, otherwise execute next instruction
- **Unconditional branch/jump**
- Always jump when you get to this instruction
- Together, they can implement most control flow constructs in high-level languages:
  - `if (condition) then {...} else {...}`
  - `while (condition) {...}`
  - `do {...} while (condition)`
  - `for (initialization; condition; iterative) {...}`
  - `switch {...}`
x86 Control Flow
- Condition codes
- Conditional and unconditional branches
- Loops
- Switches
Processor State (x86-64, partial)
- Information about currently executing program
- Temporary data (%rax, ...)
- Location of runtime stack (%rsp)
- Location of current code control point (%rip, ...)
- Status of recent tests (CF, ZF, SF, OF)
- Single bit registers:
<table>
<thead>
<tr>
<th>Registers</th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>%rax</td>
<td>%r8</td>
<td></td>
</tr>
<tr>
<td>%rbx</td>
<td>%r9</td>
<td></td>
</tr>
<tr>
<td>%rcx</td>
<td>%r10</td>
<td></td>
</tr>
<tr>
<td>%rdx</td>
<td>%r11</td>
<td></td>
</tr>
<tr>
<td>%rsi</td>
<td>%r12</td>
<td></td>
</tr>
<tr>
<td>%rdi</td>
<td>%r13</td>
<td></td>
</tr>
<tr>
<td>%rsp</td>
<td>%r14</td>
<td></td>
</tr>
<tr>
<td>%rbp</td>
<td>%r15</td>
<td></td>
</tr>
</tbody>
</table>
- Program Counter (instruction pointer)
- Current top of the Stack
- Condition Codes
- CF
- ZF
- SF
- OF
Condition Codes (Implicit Setting)
- Implicitly set by arithmetic operations
- (think of it as side effects)
- Example: `addq` `src, dst ↔ r = d+s`
- **CF=1** if carry out from MSB (unsigned overflow)
- **ZF=1** if `r==0`
- **SF=1** if `r<0` (if MSB is 1)
- **OF=1** if signed overflow
\[(s>0 && d>0 && r<0) || (s<0 && d<0 && r>=0)\]
- **Not set by lea instruction (beware!)**
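The flag definitions above can be turned into a small Python model of 64-bit `addq` (a sketch for intuition, not a full CPU model):

```python
MASK = (1 << 64) - 1  # keep values in 64 bits

def to_signed(v):
    # Reinterpret a 64-bit pattern as a signed value.
    return v - (1 << 64) if v & (1 << 63) else v

def add_flags(s, d):
    full = (s & MASK) + (d & MASK)
    r = full & MASK
    cf = full > MASK                    # carry out from the MSB
    zf = r == 0                         # result is zero
    sf = bool(r & (1 << 63))            # MSB of the result is 1
    ss, sd, sr = to_signed(s & MASK), to_signed(d & MASK), to_signed(r)
    of = (ss > 0 and sd > 0 and sr < 0) or (ss < 0 and sd < 0 and sr >= 0)
    return cf, zf, sf, of

print(add_flags(1, -1))           # (True, True, False, False): wraps to 0
print(add_flags(2 ** 63 - 1, 1))  # (False, False, True, True): signed overflow
```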
Condition Codes (Explicit Setting: Compare)
- *Explicitly* set by **Compare** instruction
- `cmpq src1, src2`
- `cmpq a, b` sets flags based on `b-a`, but doesn’t store
- **CF=1** if carry out from MSB (good for *unsigned* comparison)
- **ZF=1** if `a==b`
- **SF=1** if `(b-a)<0` (if MSB is 1)
- **OF=1** if *signed* overflow
\[
(a>0 \ \&\& \ b<0 \ \&\& \ (b-a)>0) \ \| \ \\
(a<0 \ \&\& \ b>0 \ \&\& \ (b-a)<0)
\]
Condition Codes (Explicit Setting: Test)
- *Explicitly* set by Test instruction
- `testq src2, src1`
- `testq a, b` sets flags based on `a&b`, but *doesn’t store*
- Useful to have one of the operands be a *mask*
- Can’t have carry out (CF) or overflow (OF)
- ZF=1 if `a&b==0`
- SF=1 if `a&b<0` (signed)
<table>
<thead>
<tr>
<th>CF</th>
<th>Carry Flag</th>
</tr>
</thead>
<tbody>
<tr>
<td>ZF</td>
<td>Zero Flag</td>
</tr>
<tr>
<td>SF</td>
<td>Sign Flag</td>
</tr>
<tr>
<td>OF</td>
<td>Overflow Flag</td>
</tr>
</tbody>
</table>
Using Condition Codes: Jumping
- **j\* Instructions**
- Jumps to **target** (an address) based on condition codes
<table>
<thead>
<tr>
<th>Instruction</th>
<th>Condition</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>jmp target</td>
<td>( 1 )</td>
<td>Unconditional</td>
</tr>
<tr>
<td>je target</td>
<td>ZF</td>
<td>Equal / Zero</td>
</tr>
<tr>
<td>jne target</td>
<td>(~ZF)</td>
<td>Not Equal / Not Zero</td>
</tr>
<tr>
<td>js target</td>
<td>SF</td>
<td>Negative</td>
</tr>
<tr>
<td>jns target</td>
<td>(~SF)</td>
<td>Nonnegative</td>
</tr>
<tr>
<td>jg target</td>
<td>(~(SF^OF)) & ~ZF</td>
<td>Greater (Signed)</td>
</tr>
<tr>
<td>jge target</td>
<td>(~(SF^OF))</td>
<td>Greater or Equal (Signed)</td>
</tr>
<tr>
<td>jl target</td>
<td>(SF^OF)</td>
<td>Less (Signed)</td>
</tr>
<tr>
<td>jle target</td>
<td>(SF^OF) | ZF</td>
<td>Less or Equal (Signed)</td>
</tr>
<tr>
<td>ja target</td>
<td>~CF & ~ZF</td>
<td>Above (unsigned “>”)</td>
</tr>
<tr>
<td>jb target</td>
<td>CF</td>
<td>Below (unsigned “<“)</td>
</tr>
</tbody>
</table>
## Using Condition Codes: Setting
- **set* Instructions**
- Set low-order byte of `dst` to 0 or 1 based on condition codes
- Does not alter remaining 7 bytes
<table>
<thead>
<tr>
<th>Instruction</th>
<th>Condition</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>sete dst</code></td>
<td>ZF</td>
<td>Equal / Zero</td>
</tr>
<tr>
<td><code>setne dst</code></td>
<td>~ZF</td>
<td>Not Equal / Not Zero</td>
</tr>
<tr>
<td><code>sets dst</code></td>
<td>SF</td>
<td>Negative</td>
</tr>
<tr>
<td><code>setns dst</code></td>
<td>~SF</td>
<td>Nonnegative</td>
</tr>
<tr>
<td><code>setg dst</code></td>
<td>~ (SF^OF) & ~ZF</td>
<td>Greater (Signed)</td>
</tr>
<tr>
<td><code>setge dst</code></td>
<td>~ (SF^OF)</td>
<td>Greater or Equal (Signed)</td>
</tr>
<tr>
<td><code>setl dst</code></td>
<td>(SF^OF)</td>
<td>Less (Signed)</td>
</tr>
<tr>
<td><code>setle dst</code></td>
<td>(SF^OF) | ZF</td>
<td>Less or Equal (Signed)</td>
</tr>
<tr>
<td><code>seta dst</code></td>
<td>~CF & ~ZF</td>
<td>Above (unsigned “>”)</td>
</tr>
<tr>
<td><code>setb dst</code></td>
<td>CF</td>
<td>Below (unsigned “<”)</td>
</tr>
</tbody>
</table>
## Reminder: x86-64 Integer Registers
- **Accessing the low-order byte:**
| Register | Low-order byte |
|----------|----------------|
| %rax | %al |
| %rbx | %bl |
| %rcx | %cl |
| %rdx | %dl |
| %rsi | %sil |
| %rdi | %dil |
| %rsp | %spl |
| %rbp | %bpl |
| %r8  | %r8b |
| %r9  | %r9b |
| %r10 | %r10b |
| %r11 | %r11b |
| %r12 | %r12b |
| %r13 | %r13b |
| %r14 | %r14b |
| %r15 | %r15b |
## Reading Condition Codes
- **set\* Instructions**
- Set a low-order byte to 0 or 1 based on condition codes
- Operand is byte register (e.g. `al`, `dl`) or a byte in memory
- Do not alter remaining bytes in register
- Typically use `movzbl` (zero-extended `mov`) to finish job
```c
int gt(long x, long y) {
return x > y;
}
```
<table>
<thead>
<tr>
<th>Register</th>
<th>Use(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>%rdi</td>
<td>1st argument (x)</td>
</tr>
<tr>
<td>%rsi</td>
<td>2nd argument (y)</td>
</tr>
<tr>
<td>%rax</td>
<td>return value</td>
</tr>
</tbody>
</table>
```asm
cmpq %rsi, %rdi    # compute x - y, set condition codes
setg %al           # %al = 1 if x > y (signed), else 0
movzbl %al, %eax   # zero-extend %al into %eax (also clears upper 32 bits of %rax)
ret
```
## Aside: movz and movs
```asm
movz__ src, regDest   # Move with zero extension
movs__ src, regDest   # Move with sign extension
```
- Copy from a *smaller* source value to a *larger* destination
- Source can be memory or register; destination *must* be a register
- Fill remaining bits of dest with *zero* (`movz`) or the *sign* bit (`movs`)
- `movz`*SD* / `movs`*SD*:
  - *S* = size of source (`b` = 1 byte, `w` = 2)
  - *D* = size of dest (`w` = 2 bytes, `l` = 4, `q` = 8)

**Example:** `movzbq %al, %rbx`

```
%rax: 0x?? 0x?? 0x?? 0x?? 0x?? 0x?? 0x?? 0xFF
%rbx: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0xFF
```
A second example, this time sign-extending a byte loaded from memory:

**Example:** `movsbl (%rax), %ebx`

Copy 1 byte from memory into an 8-byte register and sign-extend it:

```
%rax: 0x00 0x00 0x7F 0xFF 0xC6 0x1F 0xA4 0xE8   (address of the byte)
MEM:  ... 0x?? 0x?? 0x80 0x?? 0x?? ...           (byte at that address is 0x80)
%rbx: 0x00 0x00 0x00 0x00 0xFF 0xFF 0xFF 0x80
```

**Note:** In x86-64, *any instruction* that generates a 32-bit (long word) value for a register also sets the high-order portion of the register to 0. Good example on p. 184 in the textbook.
## Summary
- Control flow in x86 determined by status of Condition Codes
- Showed **Carry**, **Zero**, **Sign**, and **Overflow**, though **others exist**
- Set flags with arithmetic instructions (implicit) or Compare and Test (explicit)
- Set instructions read out flag values
- Jump instructions use flag values to determine next instruction to execute